HuanNguyenARL edited this page Sep 21, 2023 · 16 revisions

(Visually-Attentive) Uncertainty-Aware Navigation Using Deep Neural Networks

Welcome to the wiki for the ORACLE family of methods!

This wiki will guide you through installing and running the package, along with its documentation.

Method Overview

A family of learning-based methods for (Visually-Attentive) Uncertainty-Aware Navigation is presented in this repo, including ORACLE, A-ORACLE, and seVAE-ORACLE. An overview of the architectures of these methods is given below:

ORACLE_overview

The algorithmic architecture of Attentive ORACLE (A-ORACLE) and ORACLE: We design two deep neural networks to efficiently estimate the uncertainty-aware collision score and the information gains for multiple action sequences: the Collision Prediction Network (CPN) and the Information gain Prediction Network (IPN), respectively. Both networks assume access to a) either the depth image (CPN) or the stacked matrix of the current depth image and the detection mask (IPN), alongside b) the estimates of the robot’s linear velocities, $z$-axis angular velocity, and roll/pitch angles, and c) candidate action sequences in a Motion Primitives Library (MPL). Notably, the CPN utilizes $\mathbf{m}_1$, representing the current mean value of $\mathbf{s}_t$, and $\mathbf{m}_2, \ldots, \mathbf{m}_{N_\Sigma}$, representing the remaining sigma points of the Unscented Transform, to account for the uncertainty in the robot's partial state estimate, while an ensemble of CPNs is used to account for the epistemic uncertainty of the neural network model. The predicted uncertainty-aware collision cost $\hat{c}^{uac}$, information gain $\hat{g}$, and a unit goal vector $\mathbf{n}^g_t$ given by a high-level global planner are used to choose the optimal action sequence to be executed in a receding-horizon fashion. When the IPN is not engaged, the method reduces to the ORACLE method, which ensures safe, uncertainty-aware, map-less navigation.
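As an illustration of this selection step, the following minimal NumPy sketch (not the repository's actual code; all shapes, network stubs, the safety threshold, and the worst-case pooling rule are assumptions) evaluates an ensemble of stand-in CPNs over the sigma points and then picks the action sequence that maximizes information gain plus goal alignment among those predicted safe:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SIGMA = 5       # sigma points m_1..m_{N_Sigma} of the partial state s_t
N_ENSEMBLE = 3    # ensemble of CPNs (epistemic uncertainty)
N_ACTIONS = 8     # candidate action sequences in the MPL
STATE_DIM = 6     # assumed: linear velocities, z-axis angular rate, roll, pitch

def cpn_stub(weights, sigma_point, actions):
    """Hypothetical stand-in for one ensemble member's CPN: maps a state
    sigma point and the candidate action sequences to one collision
    probability per sequence."""
    logits = actions @ weights + sigma_point.sum()
    return 1.0 / (1.0 + np.exp(-logits))           # shape: (N_ACTIONS,)

sigma_points = rng.normal(size=(N_SIGMA, STATE_DIM))   # m_1 .. m_{N_Sigma}
actions = rng.normal(size=(N_ACTIONS, STATE_DIM))      # flattened MPL features
ensemble = [rng.normal(size=STATE_DIM) for _ in range(N_ENSEMBLE)]

# Evaluate every (ensemble member, sigma point) pair and pool conservatively
# (worst case) to obtain the uncertainty-aware collision score c_uac.
scores = np.stack([cpn_stub(w, sp, actions)
                   for w in ensemble for sp in sigma_points])
c_uac = scores.max(axis=0)                         # shape: (N_ACTIONS,)

# Combine with (assumed) information gain and goal-alignment terms, then
# select the best action sequence among those predicted sufficiently safe.
info_gain = rng.uniform(size=N_ACTIONS)            # stand-in for g-hat
goal_align = rng.uniform(size=N_ACTIONS)           # e.g. alignment with n^g_t
utility = np.where(c_uac < 0.5, info_gain + goal_align, -np.inf)
if np.isinf(utility).all():                        # no safe candidate found
    best = int(np.argmin(c_uac))
else:
    best = int(np.argmax(utility))                 # executed, then replanned
```

In the real method each CPN is a learned network fed the depth image, and the scoring/pooling details differ; the sketch only shows how sigma points and an ensemble can both be folded into a single conservative collision score before action selection.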

seVAE_overview


The algorithmic architecture of Semantically-enhanced Variational Autoencoder (seVAE)-ORACLE: We propose a modularized approach involving the seVAE and the Collision Prediction Network (CPN). The seVAE encodes the input depth image $\mathbf{x}_t$ into the latent representation $\boldsymbol{\mu}_t$, which is used by the CPN to predict the collision scores $\hat{\mathbf{c}}^{col}_{t+1:t+T+1}$ for each action sequence $\mathbf{a}_{t:t+T}$ in the motion primitives library. Notably, the seVAE is trained with both real-world and simulated depth images to compress the input data while preserving semantically-labeled thin obstacles and handling invalid pixels in the depth sensor's output. Furthermore, the method utilizes $N_{\Sigma}$ sigma points calculated based on $\mathbf{s}_t$ and $\boldsymbol{\Sigma}_t$ and an ensemble of CPNs to calculate the uncertainty-aware collision score $\hat{c}^{uac}$.
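The modular pipeline described above can be sketched as follows. This is an illustrative NumPy sketch, not the package's API: the linear stubs stand in for the learned seVAE encoder and CPN, and the latent size, horizon, image resolution, and action parameterization are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT_DIM = 64   # assumed size of the latent representation mu_t
N_ACTIONS = 8     # action sequences a_{t:t+T} in the MPL
T = 10            # assumed horizon: scores for steps t+1 .. t+T+1

def sevae_encode(depth_image):
    """Hypothetical stand-in for the seVAE encoder: compresses the depth
    image x_t into the latent mean mu_t."""
    flat = depth_image.reshape(-1)
    W = rng.normal(size=(LATENT_DIM, flat.size)) / np.sqrt(flat.size)
    return W @ flat                                # mu_t, shape (LATENT_DIM,)

def cpn_predict(mu_t, actions):
    """Hypothetical CPN head: per-step collision scores c^col for each
    candidate action sequence, computed from the latent (not the image)."""
    Wa = rng.normal(size=(actions.shape[-1], LATENT_DIM))
    logits = actions @ Wa @ mu_t                   # shape (N_ACTIONS, T + 1)
    return 1.0 / (1.0 + np.exp(-logits))

depth = rng.uniform(0.0, 10.0, size=(270, 480))    # assumed depth resolution
mu_t = sevae_encode(depth)                         # encode once per frame
actions = rng.normal(size=(N_ACTIONS, T + 1, 3))   # assumed 3-D action params
c_col = cpn_predict(mu_t, actions)                 # shape (N_ACTIONS, T + 1)
```

The design point this illustrates is that the image is encoded once, and the compact latent replaces the raw depth input to the CPN; evaluating the $N_{\Sigma}$ sigma points and the CPN ensemble then only requires repeating the cheap CPN head, not the encoder.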

Acknowledgements

This open-source release is based upon work supported by a) the Research Council of Norway project SENTIENT (Project No. 321435), and b) the Air Force Office of Scientific Research under award number FA8655-21-1-7033.

Contents

For specific instructions please visit the respective pages:

  1. Installation
  2. Demo Simulation
  3. Experimental results
  4. Parameters
  5. Interface
  6. Training navigation policy for your own drone