Home
Welcome to the wiki for the ORACLE family of methods!
This wiki guides you through installing and running the package and documents its components.
This repo presents a family of learning-based methods for (Visually-Attentive) Uncertainty-Aware Navigation, including ORACLE, A-ORACLE, and seVAE-ORACLE.
The problem considered in this work is autonomous, uncertainty-aware, and visually attentive aerial robot navigation. The method explicitly assumes no access to a map of the environment (neither offline nor online) and no information about the robot's position; it relies only on a partial state estimate of the robot, combined with real-time depth data and a 2D detection mask representing the interestingness of every region within an angle- and range-constrained sensor frustum. We assume that a global planner provides the 3D unit goal vector.
The video below gives an overview of the functionality of ORACLE and A-ORACLE:
explanation_slide_multiples-cover_v2.mp4
The algorithmic architecture of Attentive ORACLE (A-ORACLE) and ORACLE: We design two deep neural networks to efficiently estimate the uncertainty-aware collision score and the information gain for multiple action sequences, namely the Collision Prediction Network (CPN) and the Information gain Prediction Network (IPN), respectively. Both networks assume access to a) either the depth image (CPN) or the stacked matrix of the current depth image and the detection mask (IPN), alongside b) estimates of the robot's linear velocities.
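To make the evaluation flow concrete, here is a minimal sketch of how candidate action sequences could be scored and selected using a collision predictor and an information-gain predictor. The function names, signatures, and the thresholding logic are illustrative assumptions, not the repo's actual API; the real CPN and IPN are deep networks, stubbed out here with dummy scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def collision_score(depth_image, velocity, action_sequence):
    """Stand-in for the CPN (hypothetical): returns an uncertainty-aware
    collision score in [0, 1] for one candidate action sequence."""
    # A real CPN is a deep network conditioned on the depth image and
    # velocity estimates; here we return a dummy score for illustration.
    return float(rng.uniform(0.0, 1.0))

def information_gain(depth_and_mask, velocity, action_sequence):
    """Stand-in for the IPN (hypothetical): predicts the information gain
    of one candidate action sequence from the stacked depth image and
    detection mask."""
    return float(rng.uniform(0.0, 1.0))

def select_action(depth_image, detection_mask, velocity, candidates,
                  collision_threshold=0.05):
    """Score every candidate action sequence, discard unsafe ones, and
    pick the most informative survivor (threshold value is assumed)."""
    stacked = np.stack([depth_image, detection_mask])  # IPN input
    best, best_gain = None, -np.inf
    for seq in candidates:
        if collision_score(depth_image, velocity, seq) > collision_threshold:
            continue  # too risky, skip this sequence
        gain = information_gain(stacked, velocity, seq)
        if gain > best_gain:
            best, best_gain = seq, gain
    if best is None:
        # No sequence passed the safety threshold: fall back to the
        # sequence with the lowest predicted collision score.
        best = min(candidates,
                   key=lambda s: collision_score(depth_image, velocity, s))
    return best
```

In the actual method the networks evaluate many action sequences in a single batched forward pass rather than one at a time.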
While the above methods transfer well to the real system, they require a fairly expensive and heuristic pre-processing step on the raw depth images to mitigate the discrepancies between real and simulated depth images (such as a) missing information and b) loss of detail). Although the Deep Ensembles method can (passively) account for depth image noise by treating it as novel out-of-distribution input, a pipeline that can directly incorporate noisy real-world exteroceptive sensor input (in addition to simulation data) is beneficial, especially for hard-to-perceive thin obstacles. We address this problem with a modularized learning-based method built on a Semantically-enhanced Variational Autoencoder (seVAE).
The video below gives an overview of the functionality of seVAE-ORACLE:
vae-oracle-explanation-vF.mp4
The algorithmic architecture of seVAE-ORACLE: we propose a modularized approach involving the seVAE and the Collision Prediction Network (CPN). The seVAE encodes the input depth image into a compact latent representation that is then consumed by the CPN.
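The modular split can be sketched as follows: an encoder stage maps the (possibly noisy) depth image to a latent vector, and a collision predictor operates on that latent rather than on raw pixels. All names, dimensions, and scoring logic below are illustrative assumptions; the point is only the interface boundary, which lets the encoder be retrained on real noisy sensor data without touching the collision predictor.

```python
import numpy as np

def sevae_encode(depth_image, latent_dim=64):
    """Stand-in for the seVAE encoder (hypothetical): maps a depth image
    to the mean and log-variance of a compact latent distribution."""
    rng = np.random.default_rng(int(depth_image.sum()) % 2**32)
    mu = rng.standard_normal(latent_dim)       # latent mean
    log_var = rng.standard_normal(latent_dim)  # latent log-variance
    return mu, log_var

def cpn_collision_scores(latent, velocity, action_sequences):
    """Stand-in CPN (hypothetical) operating on the latent vector:
    returns one collision score per candidate action sequence."""
    rng = np.random.default_rng(0)
    return rng.uniform(0.0, 1.0, size=len(action_sequences))

# Usage: encode once, then score a batch of candidate action sequences.
depth = np.zeros((24, 32))
mu, log_var = sevae_encode(depth)
scores = cpn_collision_scores(mu, np.zeros(3), [np.zeros((5, 3))] * 8)
safest = int(np.argmin(scores))  # index of the safest candidate
```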
This open-source release is based upon work supported by a) the Research Council of Norway project SENTIENT (Project No. 321435), b) the Air Force Office of Scientific Research under award number FA8655-21-1-7033, and c) the Horizon Europe project DIGIFOREST (EC 101070405).
For specific instructions, please visit the respective pages: