
GRADE

Generating Realistic Animated Dynamic Environments for Robotics Research

Elia Bonetto 1,2,*   Chenghao Xu 1,3   Aamir Ahmad 2,1
1Max Planck Institute for Intelligent Systems   2University of Stuttgart   3Delft University of Technology  
* Corresponding Author


Project papers: [GRADE Paper]  [Synthetic Data-based Detection of Zebras in Drone Imagery - ECMR 2023] 

Workshop papers: [Learning from synthetic data generated with GRADE - Pretraining4Robotics ICRA2023]  [Simulation of Dynamic Environments for SLAM - Active Methods in Auto. Nav. ICRA2023] 

Data: [Download Data] 

Code: [Environment Generation Code]  [SMPL animation to USD]  [GRADE framework] [Data Processing and Evaluation]

News

Our Mission


With GRADE we provide a pipeline to easily simulate robots in photorealistic dynamic environments. This greatly simplifies robotics research by enabling easy data generation and precise custom simulations. The project is based on Isaac Sim, Omniverse and the open-source USD format.

In our first work, we focused on four main components: i) indoor environments, ii) dynamic assets, iii) robot/camera setup and control, iv) simulation control, data processing and extra tools. Using our framework we generated an indoor dynamic-environment dataset and showcased various scenarios such as a heterogeneous multi-robot setup and outdoor video capture. The generated data is based solely on publicly available datasets and tools such as 3D-Front, Google Scanned Objects, ShapeNet, Cloth3D, AMASS, SMPL and BlenderProc. We used it to extensively test various SLAM libraries and to evaluate the usability of synthetic data in human detection/segmentation tasks with both Mask R-CNN and YOLOv5. You can already check the data used for training, and the sequences we used to test the SLAM frameworks, here.

After that, we used the very same framework to generate a dataset of zebras captured outdoors from aerial views, and demonstrated that we can train a detector without using any real-world images, achieving 94% mAP. We have already released the images and most of the data for you to experiment with.

Thanks to the ROS support and the Python interface, each of our modules can be easily removed from the pipeline or exchanged with your own custom implementation. Our framework is easily expandable and customizable for any use case, and we welcome any contribution you may have.

Note that the use of ROS is not mandatory. You can use the system without any working knowledge of ROS or of its components.

For further details check the papers (GRADE, Zebras) or write me an e-mail.


GRADE pipeline

More information

Environments

As the perception and processing capabilities of our robots increase, the quality of our simulated environments needs to increase as well. This enables research based on simulations that are more coherent with the real world that the robot will sense when deployed, closing the sim-to-real gap. Using Omniverse's connectors you can convert almost any environment to the USD format, which can then be imported directly into Isaac Sim. Our conversion tool, based on BlenderProc, focuses on the 3D-Front dataset, the biggest available semantically annotated indoor dataset with actual 3D meshes. However, as we show in our work, we can easily import environments from other applications such as Unreal Engine, or download environments from popular marketplaces such as Sketchfab. For example, we use the same system to convert BlenderProc environments (download them with this) and FBX files. Our script also automatically extracts other useful information such as the STL of the environment, converts it to x3d (which can then be converted to an octomap), and computes an approximate enclosing polygon. Find more information on how you can easily convert an environment here.
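Our conversion tool itself is built on BlenderProc; purely as a rough standalone illustration of this kind of step, the sketch below imports an FBX environment into Blender and exports both USD and STL. The paths, the output layout and the Blender version (3.x, where these operators are available) are assumptions, not the exact GRADE tooling.

```python
# Minimal sketch of an FBX -> USD/STL conversion step, run headless inside
# Blender 3.x, e.g.:
#   blender --background --python convert_env.py -- scene.fbx out_dir
# Paths and the output layout are illustrative, not the exact GRADE tooling.
import os
import sys

import bpy

fbx_path, out_dir = sys.argv[-2], sys.argv[-1]
os.makedirs(out_dir, exist_ok=True)

# Start from an empty scene, then import the environment mesh.
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.import_scene.fbx(filepath=fbx_path)

# Export the whole scene as USD for Isaac Sim...
bpy.ops.wm.usd_export(filepath=os.path.join(out_dir, "environment.usd"))

# ...and as STL, which can later be converted to x3d and then to an octomap.
bpy.ops.export_mesh.stl(filepath=os.path.join(out_dir, "environment.stl"))
```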

Dynamic assets

Most robotics research is carried out in static environments, mainly because animated assets are difficult to simulate, place and manage. We focused mostly on animated clothed humans by converting Cloth3D and AMASS (CMU) animated assets and placing them inside our environments. To do so, we implemented a simple-to-use tool that provides a streamlined conversion of SMPL body and clothing animations to the USD format. The tool is expandable to different body models (e.g. SMPL-X) or different data sources. The animations are then placed in the simulation with an easy-to-understand technique which can be easily exchanged with your own script. Our strategy uses the STL trace of the animation and of the environment to check for collisions, through a custom service based on MoveIt's FCL interface.
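The actual check goes through that MoveIt/FCL service; purely as a standalone illustration of the same idea, here is a sketch using trimesh's FCL-backed collision manager. The file names, the pose sampling ranges and the use of trimesh are assumptions, not the GRADE code.

```python
# Standalone sketch of the placement check: does the STL trace of an animation
# collide with the environment STL at a candidate (x, y, yaw) pose?
# Uses trimesh + python-fcl here instead of the MoveIt-based service in GRADE.
import numpy as np
import trimesh

env_mesh = trimesh.load("environment.stl")
human_trace = trimesh.load("human_animation_trace.stl")  # swept volume of one animation

manager = trimesh.collision.CollisionManager()
manager.add_object("environment", env_mesh)

def placement_is_free(x, y, yaw):
    """True if the animation trace fits at (x, y, yaw) without hitting the environment."""
    transform = trimesh.transformations.euler_matrix(0.0, 0.0, yaw)
    transform[:3, 3] = [x, y, 0.0]
    return not manager.in_collision_single(human_trace, transform=transform)

# Sample candidate poses until a collision-free one is found.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y, yaw = rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(0, 2 * np.pi)
    if placement_is_free(x, y, yaw):
        print("collision-free placement:", x, y, yaw)
        break
```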

However, dynamic assets are not only humans: they can also be objects or animals. Luckily, you can easily simulate those too. For example, here we show how you can add flying properties to objects. The zebras we showcase in the video (get them here) were converted to the USD format using Blender. Their translation and rotation offsets were manually animated using this procedure, and they were then manually placed and scaled in the savanna environment. The zebras used for the data generation, instead, are randomly scaled and placed (statically) at simulation time. Check how here and here.
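To give an idea of what such random static placement looks like, the sketch below references a converted zebra USD several times on a stage and assigns each copy a random scale and pose through the USD Python API. The stage and asset paths, the count and the sampling ranges are placeholders, not the exact GRADE script.

```python
# Sketch of random static zebra placement on a USD stage. The stage path,
# asset path, count and sampling ranges are placeholders.
import random

from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.Open("savanna.usd")

for i in range(10):
    prim = stage.DefinePrim(f"/World/zebras/zebra_{i}", "Xform")
    prim.GetReferences().AddReference("zebra.usd")  # converted zebra asset

    scale = random.uniform(0.8, 1.2)
    x, y = random.uniform(-20.0, 20.0), random.uniform(-20.0, 20.0)

    xform = UsdGeom.XformCommonAPI(prim)
    xform.SetScale(Gf.Vec3f(scale, scale, scale))
    xform.SetTranslate(Gf.Vec3d(x, y, 0.0))
    xform.SetRotate(Gf.Vec3f(0.0, 0.0, random.uniform(0.0, 360.0)))

stage.Save()
```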

Robot/camera setup and control

Any robot can be imported into the simulation. You can control the robot in various ways, with or without ROS and with or without physics. Non-physics possibilities include control through teleporting or as a flying object. Physics-enabled ones include software in the loop, joint-based waypoints, pre-developed controllers (e.g. standard vehicles) and direct joint commands. Find out more here.
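As one illustration of the non-physics options, the sketch below moves a robot prim along a circular "flying" trajectory by authoring time-sampled transforms with the USD Python API; the actual GRADE controllers live in the linked code, and the prim path, frame rate and trajectory here are placeholders.

```python
# Sketch of the non-physics "flying object" style of control: the robot prim is
# moved along a circular trajectory by authoring time-sampled transforms.
# The prim path, frame rate and trajectory are placeholders.
import math

from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.Open("world.usd")
robot = UsdGeom.XformCommonAPI(stage.GetPrimAtPath("/World/my_robot"))

fps = 30.0
stage.SetTimeCodesPerSecond(fps)

# Fly a 2 m radius circle at 1.5 m height over 10 seconds.
for frame in range(int(10 * fps)):
    angle = 2.0 * math.pi * frame / (10.0 * fps)
    time = Usd.TimeCode(frame)
    robot.SetTranslate(Gf.Vec3d(2.0 * math.cos(angle), 2.0 * math.sin(angle), 1.5), time)
    robot.SetRotate(Gf.Vec3f(0.0, 0.0, math.degrees(angle) + 90.0),
                    UsdGeom.XformCommonAPI.RotationOrderXYZ, time)

stage.Save()
```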

Since Isaac Sim does not provide fluid-dynamics simulation or frictionless perpendicular movement, we developed a custom virtual 6-DOF joint controller that works with both position and velocity setpoints. This allowed us to control both a drone and a 3-wheeled omnidirectional robot. The joint controller works for any robot you have, provided that you include your own joint definitions.
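A minimal sketch of the idea behind such a controller is shown below: a proportional loop that maps position or velocity setpoints for six virtual joints (x, y, z, roll, pitch, yaw) to clipped per-joint velocity commands. The gains, limits and joint ordering are illustrative assumptions, not the values used in GRADE.

```python
# Minimal sketch of a virtual 6-DOF joint controller: a proportional loop that
# turns a position or velocity setpoint for six virtual joints into clipped
# per-joint velocity commands. Gains and limits are illustrative assumptions.
import numpy as np

class Virtual6DOFController:
    JOINTS = ("x", "y", "z", "roll", "pitch", "yaw")

    def __init__(self, kp=2.0, max_vel=1.5):
        self.kp = kp            # proportional gain used in position mode
        self.max_vel = max_vel  # symmetric per-joint velocity limit

    def position_mode(self, setpoint, state):
        """Velocity commands that drive the six virtual joints towards `setpoint`."""
        error = np.asarray(setpoint, dtype=float) - np.asarray(state, dtype=float)
        return np.clip(self.kp * error, -self.max_vel, self.max_vel)

    def velocity_mode(self, setpoint):
        """Pass a velocity setpoint through, clipped to the joint limits."""
        return np.clip(np.asarray(setpoint, dtype=float), -self.max_vel, self.max_vel)

# Example: drive the base 1 m forward and 0.5 m up from the origin.
controller = Virtual6DOFController()
commands = controller.position_mode([1.0, 0.0, 0.5, 0.0, 0.0, 0.0], [0.0] * 6)
print(dict(zip(Virtual6DOFController.JOINTS, commands)))
```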

Robot's sensors can be created at run-time (main script, function), and dynamically changed, or pre-configured directly in the USD file.
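For the pre-configured case, a camera sensor can be defined on the robot directly through the USD Python API, as in the sketch below; the prim path and the camera parameters are placeholders and this is not the run-time helper from our main script.

```python
# Sketch of adding a camera sensor to the robot through the USD Python API.
# The prim path and camera parameters are placeholders.
from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.Open("robot.usd")

camera = UsdGeom.Camera.Define(stage, "/robot/base_link/rgb_camera")
camera.CreateFocalLengthAttr(24.0)
camera.CreateHorizontalApertureAttr(36.0)
camera.CreateClippingRangeAttr(Gf.Vec2f(0.05, 100.0))

# Offset the camera slightly above the base link.
UsdGeom.XformCommonAPI(camera.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 0.0, 0.2))

stage.Save()
```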

Data publishing can be controlled manually, so each sensor can have a custom publishing rate and, if you want, a failure rate (link).

Since the simulator is ROS-enabled, any sensor can be published manually with added noise, and any custom topic can be added or listened to. However, the use of ROS is NOT mandatory.
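If you do use ROS, a manually driven publisher with a custom rate, a drop (failure) probability and additive noise can look roughly like the sketch below; the topic name, rate and noise level are placeholders, not GRADE defaults.

```python
# Sketch of a manually driven RGB publisher with a custom rate, a drop
# (failure) probability and additive Gaussian noise. Topic name, rate and
# noise level are placeholders, not GRADE defaults.
import random

import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("noisy_rgb_publisher")
pub = rospy.Publisher("/my_robot/camera/rgb_noisy", Image, queue_size=1)
bridge = CvBridge()

def publish_rgb(frame, failure_rate=0.05, noise_std=3.0):
    """Publish one RGB frame, randomly dropping it and adding Gaussian noise."""
    if random.random() < failure_rate:
        return  # simulate a sensor failure for this frame
    noisy = frame.astype(np.float32) + np.random.normal(0.0, noise_std, frame.shape)
    pub.publish(bridge.cv2_to_imgmsg(np.clip(noisy, 0, 255).astype(np.uint8), encoding="rgb8"))

rate = rospy.Rate(30)  # chosen publishing rate, independent of the renderer
while not rospy.is_shutdown():
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a rendered frame
    publish_rgb(frame)
    rate.sleep()
```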

Simulation control, data processing and extras

Simulation control includes loading the environment, placing and controlling the robot, placing the animations, animating objects, integrating ROS, randomizing assets/materials/lights, dynamically changing simulation settings, and saving data. The pipeline is fully customizable, e.g. with new placement strategies, by using ROS or not, by running software in the loop, by adding noise to the data, etc. Check out more information about this here and here. Depending on the complexity of the environment and on the number of lights, performance can reach almost real-time processing (>15 fps) with RTX rendering settings. Without physics and ROS, it should be possible to reach real-time performance.

The generated information includes RGB, depth, semantics, 2D/3D bounding boxes, semantic instances, motion vectors (optical flow), and asset skeleton/vertex positions. We implemented a set of tools to post-process the data, extract it, add noise and fix some known issues of the simulation. Instructions to evaluate various SLAM frameworks with our data or your own can be found here. We also provide scripts and procedures to prepare the data and train both YOLO and Mask R-CNN (via `detectron2`); see the sketch below for an example of this kind of preparation step. However, you can directly download the images, masks and boxes we used in the paper from our data repository. Tools for data visualization, instance-to-semantic mapping, extracting SMPL and corrected 3D bounding boxes, and automatically converting and processing the USD as a text file are already available. Finally, we developed a way to replay any experiment so that one can generate data with newly added/modified sensors and different environment settings; check the code here.
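As an example of the data-preparation step mentioned above, the sketch below converts per-frame 2D bounding boxes into YOLO-format label files; the assumed `.npy` layout, image resolution and folder names are illustrative, not the exact format written by GRADE.

```python
# Sketch of one data-preparation step: converting per-frame 2D bounding boxes
# into YOLO-format label files. The assumed .npy layout (one row per box:
# class_id, x_min, y_min, x_max, y_max), resolution and folders are illustrative.
from pathlib import Path

import numpy as np

IMG_W, IMG_H = 640, 480  # resolution the frames were rendered at

def boxes_to_yolo(npy_file: Path, out_dir: Path) -> None:
    boxes = np.load(npy_file)  # shape (N, 5)
    lines = []
    for class_id, x_min, y_min, x_max, y_max in boxes:
        xc = (x_min + x_max) / 2.0 / IMG_W
        yc = (y_min + y_max) / 2.0 / IMG_H
        w = (x_max - x_min) / IMG_W
        h = (y_max - y_min) / IMG_H
        lines.append(f"{int(class_id)} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    (out_dir / f"{npy_file.stem}.txt").write_text("\n".join(lines))

out_dir = Path("labels")
out_dir.mkdir(exist_ok=True)
for npy_file in sorted(Path("bboxes").glob("*.npy")):
    boxes_to_yolo(npy_file, out_dir)
```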

Download

The full dataset will be made publicly available for research purposes with the final version of the manuscript. Keep checking this page or the main data repository, since we will keep publishing new data as time passes. Since we use only publicly available datasets, you should be able to reproduce our data by following our instructions. If in doubt, please reach out to me and I will provide assistance.

Known Issues

During our data generation we found some issues related to bugs in Isaac Sim itself. We address all of them in various sections of our code. The most notable ones are: the rosbag clock timing issue, fixed here; 3D bounding boxes for humans not being generated correctly, fixed here; and the simulation automatically changing the aperture settings of the camera, fixed at runtime. LiDAR and LRF misfires with dynamic objects can be fixed by updating the code to the latest release and using a rendering-based laser sensor (check our branch here and the instructions on how to add it).

Publications

If you find our work useful, please cite the following publications.

@misc{bonetto2023grade,
  doi = {10.48550/ARXIV.2303.04466},
  url = {https://arxiv.org/abs/2303.04466},
  author = {Bonetto, Elia and Xu, Chenghao and Ahmad, Aamir},
  title = {{GRADE}: {G}enerating {R}ealistic {A}nimated {D}ynamic {E}nvironments for {R}obotics {R}esearch},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

@conference{bonetto2023ECMRZebras,
  title = {Synthetic Data-based Detection of Zebras in Drone Imagery},
  author = {Bonetto, Elia and Ahmad, Aamir},
  booktitle = {IEEE European Conference on Mobile Robots},
  publisher = {IEEE},
  month = sep,
  year = {2023},
  doi = {},
  month_numeric = {9}
}

@inproceedings{bonetto2023dynamicSLAM,
  title = {{S}imulation of {D}ynamic {E}nvironments for {SLAM}},
  author = {Elia Bonetto and Chenghao Xu and Aamir Ahmad},
  booktitle = {ICRA2023 Workshop on Active Methods in Autonomous Navigation},
  year = {2023},
  url = {https://arxiv.org/abs/2305.04286}
}

@inproceedings{bonetto2023learning,
  title = {Learning from synthetic data generated with {GRADE}},
  author = {Elia Bonetto and Chenghao Xu and Aamir Ahmad},
  booktitle = {ICRA2023 Workshop on Pretraining for Robotics (PT4R)},
  year = {2023},
  url = {https://openreview.net/forum?id=SUIOuV2y-Ce}
}

Disclaimer

GRADE was developed as a tool for research in robotics. The dataset may have unintended biases (including those of a societal, gender or racial nature).

Copyright

All datasets and benchmarks on this page are copyrighted by us and published under this license.

For commercial licenses of GRADE-RR and any of its annotations, email us at licensing@tue.mpg.de

The Team

Elia Bonetto

Principal Investigator
Max Planck Institute for Intelligent Systems, Germany
Institute of Flight Mechanics and Controls, University of Stuttgart, Germany
University of Tübingen, Germany

Chenghao Xu

Delft University of Technology, Netherlands
Max Planck Institute for Intelligent Systems, Germany

Aamir Ahmad

Institute of Flight Mechanics and Controls, University of Stuttgart, Germany
Max Planck Institute for Intelligent Systems, Germany