GRADE-RR or how to Generate Realistic Animated Dynamic Environments for Robotics Research

Note that while we used v2021 for the paper, that version is now deprecated. I will work only on v2022+.

GRADE is a system I developed to seamlessly manage the Isaac Sim simulation software to Generate Realistic Animated Dynamic Environments for Robotics Research.


This will help you in:

  1. managing the simulation
  2. loading, placing, and animating assets
  3. loading and controlling any robot
  4. getting sensor readings from such robots and saving ground-truth data
  5. customizing your workflow
  6. post-processing the data
  7. repeating any experiment --- this includes recording new sensors, getting new data, changing the conditions, and repairing the data, all while working in realistic-looking environments in a physics-enabled simulator.

Each step of the pipeline can be easily customized, expanded or removed from your workflow.

If you want more information, check out the paper or our website.

With this framework, in conjunction with our people generator, environment exporter, and control framework (which can control any robot thanks to our custom 6DOF joint controller), we generated a dataset of indoor animated scenes.

The data generated can be post-processed with our set of tools, evaluated against popular SLAM libraries, and used to test the realism of your synthetic data.

We used this project to generate both an indoor dynamic-environment dataset and an outdoor synthetic zebra dataset. The details for those are in the corresponding GRADE and Zebra papers.


List of project-related repositories

  1. All the data we generated is or will be available here
  2. The tools to process the data, add noise to the rosbags or during the simulation, evaluate the SLAM methods, and generate training data can be found here
  3. The code to convert SMPL-based animations to USD files is here
  4. To convert any environment from Blender to USD and generate some accompanying data, use this. It has a special focus on indoor environments and Front3D. Based on BlenderProc.
  5. The parent repository which we used to autonomously explore the environments during the data generation is here
  6. The modified version of DynaSLAM working with Python3 and using detectron2 is here
  7. FUEL, our chosen autonomous exploration manager to control the drone within the environment. Link here
  8. custom_6dof_joint_controller which is the bridge between the position/velocity commands and the joint velocities expected by IsaacSim. This will allow you to control any robot within the simulation environment. Link here
  9. moveit_based_collision_checker_and_placement, our MoveIt-based placement strategy. Link here

Requirements and basic software installation

Please check the requirements on the official page.

Install Nucleus, Cache, and Isaac Sim.

From now on, we will assume that you installed Isaac Sim within an ISAAC_FOLDER. The default location is ~/.local/share/ov/pkg/isaac-version/.

Clone this repository. You can clone it wherever you prefer. For simplicity, we usually download it within the Isaac folder. However, by using global paths you should be able to run this code anywhere on your PC.

Note that Isaac Sim has its own Python installation; if you need extra packages and you run software within the Isaac Python executable, keep that in mind. To do so, you usually do something like:

cd $ISAAC_FOLDER
./python.sh -m pip install ...
# or
./python.sh python_file.py

We have some dependencies which are not installed by default. To install them, run sh req.sh $ISAAC_FOLDER. (This will simply use the main Isaac python.sh to install everything via pip.)

Folder structure

├── EDIT_USDS.md # how to convert USDs to text files and edit them
├── SAMPLES.md # how to run the samples
├── MOVEMENT.md # how can you control the camera/robot?
├── PARAMS.md # available (and expandable) parameters description
├── README.md # main readme
├── cp_local_to_diff_folder.sh # update code from/to isaac folder
├── irotate_specific # specific files and instructions used to simulate iRotate in Isaac Sim
│   └── ...
├── isaac_internals # edited isaac files
│   ├── apps
│   │   └── omni.isaac.sim.python.kit # pre-load some additional extensions and disable the bundled MoveIt (so that we can load the one from the system)
│   ├── exts 
│   │   ├── omni.isaac.shapenet # slightly modified loader
│   │   ├── omni.isaac.synthetic_recorder # custom recorder extension that allows more control
│   │   └── omni.isaac.synthetic_utils # minor edits
│   └── setup_python_env.sh # source the ros environment and show how to source multiple ones
├── kill.sh # script to kill the whole simulation
├── meshes # folder containing meshes
├── req.sh # requirements file
├── scripts # useful scripts
│   ├── bash_process.zsh # multiprocessing procedure (zsh shell)
│   ├── colorize.py # colorize your data
│   ├── get_benchbot.sh # get benchbot environments
│   └── process_paths # folder containing script to automatically process USD files (see EDIT_USD.md file)
├── simulator # main simulator folder, each main file has its own description
│   ├── configs # yaml configuration files
│   ├── utils # utils loaded and used by the main files
│   └── ... 
└── usds # usds files

Finishing the setup

Independently of where you cloned the repository, you need to run sh cp_local_to_different_folder.sh $CLONE_FOLDER $ISAAC_FOLDER

This will copy the edited files from $1 (source) to $2 (destination). You can use it in reverse (from Isaac to the repo), or with any pair of folders.


Misc

A general note: every script has been more or less commented and almost every piece of code should be self-explanatory. If you find it is not, please open an issue.

I worked on this mainly alone so the code is far from perfect, super-modular, or anything like that. But together we can make it better.

Thus, we welcome any contribution that you might have. This includes coding style, comments, additions, or better strategies that you want to propose (of course after you have published your paper).

How to run the simulation

We're working on this piece of the documentation. Please bear with us while we upgrade the documents with better instructions.

A small tutorial can be found here

Here you will learn what we already tried out, what we tested, and what we used to run the simulations.

Here you can learn about our developed codebase, where you can find useful resources, and how you can edit them, file by file.

How to postprocess the data

Please check our dedicated repository here.

How to colorize the saved data

Simply run python scripts/colorize.py --viewport_folder main_folder_with_npy_files. Check our code here: you can save images, or images and videos, and decide which kind of data you want.
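For reference, this is a minimal sketch of what colorizing one of the saved .npy files could look like, assuming the file holds a plain float depth array; it is not the repository's colorize.py, which offers more options:

# Minimal sketch (not the repository's colorize.py): turn a saved depth .npy
# file into a color-mapped PNG. The file layout (a plain HxW float array) and
# the paths are assumptions for illustration.
import numpy as np
import matplotlib.pyplot as plt

def colorize_depth(npy_path, out_path, max_depth=10.0):
    depth = np.load(npy_path)                    # HxW float array (assumed)
    depth = np.clip(depth, 0.0, max_depth) / max_depth
    plt.imsave(out_path, depth, cmap="viridis")  # apply a colormap and write the PNG

colorize_depth("main_folder_with_npy_files/depth/0001.npy", "depth_0001.png")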

How to get skeleton, vertex, and SMPL information while correcting bounding boxes

Look here. This is mainly tuned for our data. However, it can easily be adapted to your own dataset.

How to edit USD files directly

Check the tutorial here. This will help you convert USD to txt files for easy file processing.
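If you prefer to do the conversion programmatically, below is a minimal sketch using the USD Python API (available through Isaac's python.sh); the file names are placeholders, and the linked tutorial remains the reference for the full workflow:

# Sketch: convert a binary USD file to its ASCII .usda form for text editing,
# then export the edited version back. Paths are hypothetical examples.
from pxr import Usd

stage = Usd.Stage.Open("usds/my_environment.usd")
stage.Export("usds/my_environment.usda")        # .usda is plain text, editable

# after editing the .usda in any text editor:
stage = Usd.Stage.Open("usds/my_environment.usda")
stage.Export("usds/my_environment_edited.usd")  # back to the binary format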

Isaac's edited files

Edited files are inside isaac_internals. The edited ones are the files copied by the cp_local.. script. However, as per Isaac requirements, we had to include all the licenses and other files.

  • Shapenet: minor edits to the main script, since the download website seems to be down. We suggest pre-downloading the dataset, unpacking it, and setting up the environment folders as we show here to use the pre-downloaded data directly.
  • synthetic_recorder: a custom extension created to save our data and offset the number of cameras. That way we can save high-resolution images to disk while providing smaller images to ROS. We found this faster than resizing images afterwards, and it caused fewer "issues".
  • synthetic_utils: we edited numpy.py and syntheticdata.py to save more data and have more flexibility. What is still missing (our bad) is the vertical FoV of the camera, which is not directly exposed by Isaac Sim.
  • In setup_python_env.sh we had to prevent the loading of $SCRIPT_DIR/exts/omni.isaac.motion_planning/bin (you can find it commented at the very end of line 8) to be able to run the system version of move_base. That module could be necessary for some of the Isaac extensions or configurations, so please be aware of this.

How to move/control the camera/robot

You have several possibilities with and without ROS, with and without physics. Check them out here

Possible missing textures/wrong paths

When loading humans or environments (or anything else), it may be necessary for you to edit the paths of the shaders, especially when moving between Windows and Linux. To do that, you can use the change_shader_path or the correct paths scripts.

Otherwise, you can simply process the text files as explained here.
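As an illustration of the first option, here is a minimal sketch (not the repository's change_shader_path script) of how texture asset paths can be rewritten with the USD Python API; the old/new path roots and the stage path are assumptions:

# Sketch: rewrite asset paths on a USD stage, e.g. to fix Windows-style texture
# paths after moving the environment to Linux. Paths below are placeholders.
from pxr import Usd, Sdf

stage = Usd.Stage.Open("usds/my_environment.usd")
old_root, new_root = "C:/textures", "/home/user/textures"

for prim in stage.Traverse():
    for attr in prim.GetAttributes():
        if attr.GetTypeName() != Sdf.ValueTypeNames.Asset:
            continue
        value = attr.Get()
        if value and old_root in value.path:
            attr.Set(Sdf.AssetPath(value.path.replace(old_root, new_root)))

stage.GetRootLayer().Save()  # write the modified layer back to disk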

Segmentation <-> instance

Instance segmentation files also save the mappings between instances and classes. An example of how to do the mapping and process those files is here.
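As a rough sketch of the idea (the linked example is the reference; the file layout below, an instance image plus a list of (instance id, class id) pairs, is an assumption for illustration), turning instance IDs into per-pixel class labels could look like:

# Sketch: build a semantic image from an instance image and an id-to-class
# mapping. The layout of GRADE's saved files may differ; see the linked example.
import numpy as np

data = np.load("main_folder_with_npy_files/instance/0001.npy", allow_pickle=True)
instance_img, mapping = data[0], data[1]      # assumed layout

semantic_img = np.zeros_like(instance_img)
for instance_id, class_id in mapping:         # assumed (instance id, class id) pairs
    semantic_img[instance_img == instance_id] = class_id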


Known issues

  1. The ROS clock might have some delay in publishing. This implies that you need to sleep the simulation every time that component gets triggered. The other components behave consistently based on our tests. Alternatively, you can post-process the data as shown here
  2. BBOX3D are wrong for moving objects. The script here shows a way to solve this.
  3. Pose information is wrong for some moving objects. The code here will solve this.
  4. Collisions for dynamic objects are not computed most of the time due to PhysX limitations. This is addressed by the new LiDAR-RTX of the new Isaac Sim version.
  5. Rendering is not blocking, and multiple calls (especially for path tracing) are necessary. This usually disrupts the motion-vector data. A possible workaround is to do two rendering steps, save the motion-vector data, and then finish rendering to save the RGB information. See here an example of how to do that, and the sketch after this list.
  6. In v2022 it is not possible to set an independent vertical FoV for the cameras.
  7. In v2022 the internal PD control for the joints will NOT work using position setpoints. Also, the maximum velocity set is not considered.
  8. In v2022 the timeline gets updated automatically even if you do not want it to. You need to keep track of the ctime and constantly re-update it to correctly generate the data you want.
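Regarding issue 5, the sketch below illustrates the two-step rendering idea; it assumes the SimulationContext from omni.isaac.core, the number of extra render calls is arbitrary, and save_motion_vectors/save_rgb are placeholder hooks standing in for the recorder calls shown in the linked example:

# Sketch of the workaround for the non-blocking rendering (known issue 5).
from omni.isaac.core import SimulationContext

def save_motion_vectors():
    pass  # placeholder: save the motion-vector ground truth here

def save_rgb():
    pass  # placeholder: save the final RGB frame here

simulation_context = SimulationContext()

simulation_context.render()   # first render call
simulation_context.render()   # second call: motion vectors should be consistent now
save_motion_vectors()

for _ in range(30):           # keep rendering until path tracing has converged
    simulation_context.render()
save_rgb()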

Download data

The data will be available in our data repository.


CITATION

If you find this work useful, please cite our work:

  1. GRADE: currently under revision
@misc{bonetto2023grade,
  doi = {10.48550/ARXIV.2303.04466},
  url = {https://arxiv.org/abs/2303.04466},
  author = {Bonetto, Elia and Xu, Chenghao and Ahmad, Aamir},
  keywords = {Robotics (cs.RO), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {GRADE: Generating Realistic Animated Dynamic Environments for Robotics Research},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
  2. Synthetic zebras: currently under revision
@misc{bonetto2023synthetic,
      title={Synthetic Data-based Detection of Zebras in Drone Imagery}, 
      author={Elia Bonetto and Aamir Ahmad},
      year={2023},
      publisher = {arXiv},
      url = {https://arxiv.org/abs/2305.00432},
      doi = {10.48550/arXiv.2305.00432},
      primaryClass={cs.CV}
}
  3. Dynamic SLAM evaluations: published at the Active Vision for Robotics Workshop at ICRA 2023
@inproceedings{ bonetto2023dynamicSLAM, 
            title={{S}imulation of {D}ynamic {E}nvironments for {SLAM}}, 
            author={Elia Bonetto and Chenghao Xu and Aamir Ahmad}, 
            booktitle={ICRA2023 Workshop on Active Methods in Autonomous Navigation}, 
            year={2023}, 
            url={https://arxiv.org/abs/2305.04286}}
  4. Detection and segmentation of humans in indoor scenes using synthetic data: published at the Pretraining for Robotics workshop at ICRA 2023
@inproceedings{bonetto2023learning,
  title={Learning from synthetic data generated with {GRADE}},
  author={Elia Bonetto and Chenghao Xu and Aamir Ahmad},
  booktitle={ICRA2023 Workshop on Pretraining for Robotics (PT4R)},
  year={2023},
  url={https://openreview.net/forum?id=SUIOuV2y-Ce}

LICENSE

By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Accompanying software, such as, but not limited to, that from Isaac Sim, is licensed according to its own specific terms of use.

If you use data/software from other projects such as, but not limited to, TUM RGB-D, 3D-Front, 3D-Future, ... it is your responsibility to follow their licensing terms.

If you have questions regarding the license, please contact [email protected].


Thanks

I would like to thank the amazing NVIDIA support for their quick response times and precise answers, Chenghao Xu for helping in testing and refining the evaluation scripts, and Aamir Ahmad for his supervision.
