A unified, modular, open-source 3DGS-based simulation framework for Real2Sim2Real robot learning
Our paper "DISCOVERSE: Efficient Robot Simulation in Complex High-Fidelity Environments" has been accepted by IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2025.
- Clone the repository:

```bash
# Install Git LFS (if not already installed)
# Linux:
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs

# macOS (Homebrew):
brew install git-lfs

git clone https://github.com/TATP-233/DISCOVERSE.git
cd DISCOVERSE
```

- Choose an installation method:
```bash
conda create -n discoverse python=3.10  # Python >=3.8 works
conda activate discoverse
pip install -e .

# Auto-detect and download required submodules
python scripts/setup_submodules.py

# Verify installation
python scripts/check_installation.py
```

```bash
pip install -e .  # Core functionality only
```
- Includes: MuJoCo, OpenCV, NumPy and other basic dependencies
```bash
pip install -e ".[lidar,visualization]"
```
- Includes: Taichi GPU acceleration, LiDAR simulation, visualization tools
- Function: High-performance LiDAR simulation with Taichi GPU acceleration
- Dependencies: `taichi>=1.6.0`
- Use Cases: Mobile robot SLAM, LiDAR sensor simulation, point cloud processing
```bash
pip install -e ".[act_full]"
```
- Includes: ACT algorithm, data collection tools, visualization
- Function: Imitation learning, robot skill training, policy optimization
- Dependencies: `torch`, `einops`, `h5py`, `transformers`, `wandb`
- Algorithms: additional algorithms available via `[diffusion-policy]` and `[rdt]`
```bash
pip install -e ".[gaussian-rendering]"
```
- Includes: 3D Gaussian Splatting, PyTorch
- Function: Photorealistic 3D scene rendering with real-time lighting
- Dependencies: `torch>=2.0.0`, `torchvision>=0.14.0`, `plyfile`, `PyGlm`
- Use Cases: High-fidelity visual simulation, 3D scene reconstruction, Real2Sim pipeline
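Each extra maps to a set of importable top-level packages. A minimal stdlib-only sketch for seeing which groups are present in the current environment (the module-name mapping below is an assumption based on the dependency lists above; `scripts/check_installation.py` in the repository is the authoritative check):

```python
from importlib.util import find_spec

# Assumed top-level import names for the dependency groups listed above;
# note PyGlm imports as `glm`, OpenCV as `cv2`.
EXTRAS = {
    "core": ["mujoco", "cv2", "numpy"],
    "lidar": ["taichi"],
    "act_full": ["torch", "einops", "h5py", "transformers", "wandb"],
    "gaussian-rendering": ["torch", "torchvision", "plyfile", "glm"],
}

def missing_per_extra(extras=EXTRAS):
    """Return {extra_name: [modules not importable]} for each group."""
    return {
        name: [m for m in mods if find_spec(m) is None]
        for name, mods in extras.items()
    }

if __name__ == "__main__":
    for name, missing in missing_per_extra().items():
        status = "OK" if not missing else "missing: " + ", ".join(missing)
        print(f"[{name}] {status}")
```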
| Module | Install Command | Function | Use Cases |
|---|---|---|---|
| Core | `pip install -e .` | Core simulation | Learning, basic development |
| LiDAR | `.[lidar]` | High-performance LiDAR simulation | SLAM, navigation research |
| Rendering | `.[gaussian-rendering]` | 3D Gaussian Splatting rendering | Visual simulation, Real2Sim |
| GUI | `.[xml-editor]` | Visual scene editing | Scene design, model debugging |
| ACT | `.[act]` | Imitation learning algorithm | Robot skill learning |
| Diffusion Policy | `.[diffusion-policy]` | Diffusion model policy | Complex policy learning |
| RDT | `.[rdt]` | Large model policy | General robot skills |
| Hardware Integration | `.[hardware]` | RealSense + ROS | Real robot control |
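Extras can also be combined in a single install; for example (any subset of the extras names in the table works):

```shell
# Install several optional modules at once
pip install -e ".[lidar,gaussian-rendering,act]"
```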
We provide a Docker installation method.
```bash
# Set up the NVIDIA container toolkit repository
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
    && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
    && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Update and install
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit nvidia-docker2

# Restart the Docker service
sudo systemctl restart docker
```

- Download the pre-built Docker image
Baidu Netdisk: https://pan.baidu.com/s/1mLC3Hz-m78Y6qFhurwb8VQ?pwd=xmp9
Currently updated to v1.8.6. After downloading the `.tar` file, load the image with `docker load`. Replace `discoverse_tag.tar` below with the actual name of the downloaded tar file:

```bash
docker load < discoverse_tag.tar
```
- Or build from the `Dockerfile`:

```bash
git clone https://github.com/TATP-233/DISCOVERSE.git
cd DISCOVERSE
python scripts/setup_submodules.py --module gaussian-rendering
docker build -f discoverse/docker/Dockerfile -t discoverse:latest .
```

`Dockerfile.vnc` is a configuration that adds VNC server support to `discoverse/docker/Dockerfile`, allowing remote access to the container's GUI via a VNC client. This is useful for remote development or headless environments. To use it, replace `docker build -f discoverse/docker/Dockerfile ...` with `docker build -f discoverse/docker/Dockerfile.vnc ...`.
```bash
# Run with GPU support
docker run -dit --rm --name discoverse \
    --gpus all \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    discoverse:latest
# Note: replace `latest` with the actual image tag (e.g., v1.8.6).

# Grant the container access to the X server for visualization windows
xhost +local:docker

# Enter the container terminal
docker exec -it discoverse bash

# Test run
python3 examples/active_slam/camera_view.py
```

This section covers the setup for high-fidelity 3DGS rendering. If you do not require this feature or are using Docker, you can skip this section.
Install CUDA 11.8+ from NVIDIA's official site, choosing the CUDA version that matches your graphics driver.
```bash
# Install Gaussian Splatting requirements
pip install -e ".[gaussian-rendering]"

# Build diff-gaussian-rasterization
cd submodules/diff-gaussian-rasterization/

# Apply patches: lower the near-plane clipping threshold, and skip depth
# accumulation for splats farther than 50 m
sed -i 's/(p_view.z <= 0.2f)/(p_view.z <= 0.01f)/' cuda_rasterizer/auxiliary.h
sed -i '361s/D += depths\[collected_id\[j\]\] \* alpha \* T;/if (depths[collected_id[j]] < 50.0f)\n D += depths[collected_id[j]] * alpha * T;/' cuda_rasterizer/forward.cu

# Install
cd ../..
pip install submodules/diff-gaussian-rasterization
```

PLY models are downloaded automatically from Hugging Face the first time you run a simulation that requires them. Log in to Hugging Face first:

```bash
hf auth login
```
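The second `sed` patch above changes how the rasterizer accumulates the per-pixel expected depth `D`: with the patch, splats farther than 50 m no longer contribute to the depth estimate (while still attenuating transmittance). A small illustrative Python sketch of that front-to-back loop — the real code is CUDA in `forward.cu`; this only mirrors its arithmetic:

```python
def expected_depth(splats, far_clip=50.0):
    """Front-to-back alpha compositing of depth, mirroring the patched
    `D += depth * alpha * T` loop: splats at or beyond `far_clip` are
    skipped, but transmittance T still decays for every splat.

    `splats` is a list of (depth, alpha) pairs sorted near-to-far.
    """
    D = 0.0   # accumulated expected depth
    T = 1.0   # remaining transmittance
    for depth, alpha in splats:
        if depth < far_clip:          # the patched condition
            D += depth * alpha * T
        T *= 1.0 - alpha              # decays regardless of the clip
    return D

# A nearby splat dominates; a distant one (e.g. sky) is excluded from depth
print(expected_depth([(2.0, 0.8), (100.0, 0.9)]))  # 1.6
```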
Models are organized under the `models/` directory, with Gaussian Splatting models in `models/3dgs`:
```
models/
├── meshes/          # Mesh geometries
├── textures/        # Material textures
├── 3dgs/            # Gaussian Splatting models (auto-downloaded)
│   ├── hinge/
│   ├── manipulator/
│   ├── mobile_chassis/
│   ├── objaverse/
│   ├── object/
│   ├── rm2_car/
│   ├── scene/
│   └── skyrover/
├── mjcf/            # MuJoCo scene descriptions
└── urdf/            # Robot descriptions
```
For users in China, the automatic download uses HF-Mirror for faster speeds.
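If the mirror is not picked up automatically, `huggingface_hub` honors the `HF_ENDPOINT` environment variable, so downloads can be pointed at HF-Mirror manually before launching a simulation:

```shell
# Route Hugging Face downloads through the HF-Mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com
echo "$HF_ENDPOINT"
```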
View and edit 3DGS models online using SuperSplat - simply drag and drop .ply files.
DISCOVERSE features a comprehensive Real2Sim pipeline for creating digital twins of real environments. For detailed instructions, visit our Real2Sim repository.
```bash
# Launch Airbot Play / MMK2
python discoverse/robots_env/airbot_play_base.py
python discoverse/robots_env/mmk2_base.py

# Run manipulation tasks (automated data generation)
python examples/tasks_airbot_play/place_coffeecup.py
python examples/tasks_mmk2/kiwi_pick.py

# Tactile hand (LEAP Hand)
python examples/robots/leap_hand_env.py

# Inverse kinematics
python examples/mocap_ik/mocap_ik_airbot_play.py  # optional: [--mjcf mjcf/tasks_airbot_play/stack_block.xml]
python examples/mocap_ik/mocap_ik_mmk2.py         # optional: [--mjcf mjcf/tasks_mmk2/pan_pick.xml]
```
- 'h' - Show help menu
- 'F5' - Reload MJCF scene
- 'r' - Reset simulation state
- '['/'']' - Switch camera views
- 'Esc' - Toggle free camera mode
- 'p' - Print robot state information
- 'Ctrl+g' - Toggle Gaussian rendering (requires the `gaussian-splatting` installation and `cfg.use_gaussian_renderer = True`)
- 'Ctrl+d' - Toggle depth visualization
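The toggle bindings above follow a common flag-flipping pattern. A hypothetical sketch (not the actual DISCOVERSE viewer code; the class and key names here are illustrative, only `use_gaussian_renderer` comes from the docs) of how such shortcuts can map onto config flags:

```python
class ViewerConfig:
    """Hypothetical config flags mirroring the toggles listed above."""
    def __init__(self):
        self.use_gaussian_renderer = False  # 'Ctrl+g'
        self.show_depth = False             # 'Ctrl+d'
        self.free_camera = False            # 'Esc'

def handle_key(cfg, key):
    """Flip the flag bound to `key`; ignore unbound keys."""
    bindings = {
        "Ctrl+g": "use_gaussian_renderer",
        "Ctrl+d": "show_depth",
        "Esc": "free_camera",
    }
    attr = bindings.get(key)
    if attr is not None:
        setattr(cfg, attr, not getattr(cfg, attr))
    return cfg

cfg = ViewerConfig()
handle_key(cfg, "Ctrl+g")
print(cfg.use_gaussian_renderer)  # True
```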
DISCOVERSE provides complete workflows for data collection, training, and inference:
- ACT
- Diffusion Policy
- RDT
- Custom algorithms via extensible framework
- 2025.01.13: DISCOVERSE open source release
- 2025.01.14: S2R2025 Competition launched
- 2025.01.16: Docker support added
- 2025.02.17: Diffusion Policy baseline integration
- 2025.02.19: Point cloud sensor support added
For installation and runtime issues, please refer to our comprehensive Troubleshooting Guide.
DISCOVERSE is released under the MIT License. See the license file for details.
If you find DISCOVERSE helpful in your research, please consider citing our work:
@article{jia2025discoverse,
title={DISCOVERSE: Efficient Robot Simulation in Complex High-Fidelity Environments},
author={Yufei Jia and Guangyu Wang and Yuhang Dong and Junzhe Wu and Yupei Zeng and Haonan Lin and Zifan Wang and Haizhou Ge and Weibin Gu and Chuxuan Li and Ziming Wang and Yunjie Cheng and Wei Sui and Ruqi Huang and Guyue Zhou},
journal={arXiv preprint arXiv:2507.21981},
year={2025},
url={https://arxiv.org/abs/2507.21981}
}