

Autonomous aerial robots are becoming increasingly common, driving the need for hands-on courses that equip the next-generation workforce with practical skills. A reliable testbed is essential for such courses to be effective.
We introduce VizFlyt, an open-source, perception-centric Hardware-In-The-Loop (HITL) photorealistic testing framework designed for aerial robotics education and research. VizFlyt leverages 3D Gaussian Splatting to generate real-time, photorealistic visual sensor data using pose estimates from an external localization system. This approach enables safe and realistic autonomy testing without the risk of crashes.
With a system update rate exceeding 100Hz, VizFlyt offers a robust platform for developing and evaluating autonomy algorithms. Building on our experience in aerial robotics education, we also introduce an open-source and open-hardware curriculum based on VizFlyt to support future courses. We validate VizFlyt through real-world HITL experiments across various course projects, demonstrating its effectiveness and broad applicability.
💡 Want to contribute? Whether it's new autonomy algorithms, sensor integrations, or datasets, VizFlyt thrives on community-driven innovation. Join us in advancing aerial robotics research!
This guide will walk you through setting up VizFlyt, installing dependencies, configuring the environment, and downloading necessary data.
Ensure you have the following dependencies installed before proceeding:
- ✅ Ubuntu 22.04
- ✅ NVIDIA Drivers (for GPU acceleration)
- ✅ ROS2 Humble (required for ROS-based workflows)
- ✅ Miniconda3 (for managing Python environments)
Run the following commands to set up a dedicated Conda environment for VizFlyt:
# Create a Conda environment with Python 3.10
conda create --name vizflyt -y python=3.10.14
# Activate the environment
conda activate vizflyt
# Upgrade pip
python -m pip install --upgrade pip
Install PyTorch and CUDA dependencies for GPU acceleration:
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
# Install CUDA Toolkit (Ensure compatibility with PyTorch version)
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
# Install tiny-cuda-nn (for optimized CUDA operations)
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
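Before moving on, you can optionally verify the GPU stack from Python. The snippet below is just a sanity check (not part of VizFlyt); the tinycudann import only succeeds if the previous step built its CUDA extension correctly:

```python
# Quick GPU stack sanity check (run inside the vizflyt conda environment).
import torch

print("PyTorch:", torch.__version__)        # expect 2.1.2+cu118
print("CUDA runtime:", torch.version.cuda)  # expect 11.8
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

import tinycudann  # noqa: F401  -- import succeeds only if the CUDA extension built
print("tiny-cuda-nn import OK")
```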
Clone the VizFlyt repository and install the modified Nerfstudio framework:
# Clone the repository
git clone https://github.com/pearwpi/VizFlyt.git
cd VizFlyt/nerfstudio
# Upgrade pip and setuptools before installing dependencies
pip install --upgrade pip setuptools
# Install Nerfstudio in editable mode
pip install -e .
Once your environment is set up, build the ROS2 workspace:
pip install --upgrade "numpy<2"
pip install transforms3d gdown pyquaternion
cd vizflyt_ws/
colcon build --symlink-install
This ensures all necessary dependencies are installed and the workspace is properly compiled.
To simplify your workflow, you can define aliases in your ~/.bashrc file for frequently used commands. These are optional but recommended.
📌 Note: This workflow assumes that you have cloned VizFlyt in your home directory ($HOME/VizFlyt/). If your working directory is different, update the paths accordingly in the alias definitions.
Append the following lines to your ~/.bashrc or ~/.bash_profile:
alias viz='conda activate vizflyt'
alias viz_ws='cd $HOME/VizFlyt/vizflyt_ws'
alias source_ws='source install/setup.bash'
alias source_ws2='source install/local_setup.bash'
alias build_ws='colcon build --symlink-install'
alias set_env='export PYTHON_EXECUTABLE="$HOME/miniconda3/envs/vizflyt/bin/python" && export PYTHONPATH="$HOME/miniconda3/envs/vizflyt/lib/python3.10/site-packages:$PYTHONPATH" && export PYTHONPATH=$PYTHONPATH:$HOME/VizFlyt/nerfstudio'
alias init_vizflyt='viz && viz_ws && source_ws && source_ws2 && set_env && cd src'
To make the aliases available immediately, run:
source ~/.bashrc
📌 Note: Ensure your ROS2 workspace has been built successfully before running the command below.
Now, every time you open a new terminal, simply run:
init_vizflyt
This command will:
✔️ Activate the vizflyt Conda environment
✔️ Navigate to the VizFlyt workspace
✔️ Source the required ROS2 setup files
✔️ Set up the necessary Python environment variables
To fetch required datasets and pre-trained models, run:
init_vizflyt # Ensure the environment is set up
chmod +x download_data_and_outputs.sh # Make script executable
./download_data_and_outputs.sh # Run the script to download required data
VizFlyt supports both simulated Hardware-In-The-Loop (s-HITL) and Hardware-In-The-Loop (HITL) modes for testing autonomous aerial navigation.
- If you do not have access to a Vicon motion capture system or drone hardware, you can use the simulated drone (fake_drone) and simulated Vicon (fake_vicon) for testing.
- To integrate custom autonomy algorithms, edit the StudentPerception.py and StudentPlanning.py scripts.
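The exact interfaces of these two scripts are defined in the repository; purely as an illustration of the idea, a minimal perception/planning pair might look like the sketch below (class and method names here are placeholders, not the actual VizFlyt API):

```python
import numpy as np

class StudentPerception:
    """Placeholder perception module: extracts a steering cue from RGB-D input."""

    def process(self, rgb: np.ndarray, depth: np.ndarray) -> dict:
        # Toy obstacle cue: split the depth image into left/right halves and
        # prefer the side with more free space (larger mean depth).
        h, w = depth.shape[:2]
        left = np.nanmean(depth[:, : w // 2])
        right = np.nanmean(depth[:, w // 2 :])
        return {"steer_right": bool(right > left), "min_depth": float(np.nanmin(depth))}

class StudentPlanning:
    """Placeholder planning module: converts perception output into a setpoint."""

    def plan(self, perception_out: dict, current_pose: np.ndarray) -> np.ndarray:
        x, y, z = current_pose[:3]
        dy = -0.5 if perception_out["steer_right"] else 0.5   # lateral step [m]
        dx = 0.5 if perception_out["min_depth"] > 1.0 else 0.0  # advance only if clear
        return np.array([x + dx, y + dy, z])  # next position setpoint
```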
In s-HITL mode, the robot and motion capture elements are replaced with simulated equivalents.
1. Start the Render Node
Generates hallucinated sensor data using 3D Gaussian Splatting. This may take a few seconds to initialize.
ros2 run vizflyt render_node
2. Enable Collision Detection
Detects obstacles within the Digital Twin environment. If a collision occurs, the simulation freezes and the drone will land.
ros2 run vizflyt collision_detection_node
3. Start the Fake Vicon Node
Simulates motion capture by reading pose data from the fake_drone frame and republishing it.
ros2 run vizflyt fake_vicon_node_hitl
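Internally, this amounts to looking up a transform and republishing it as a pose. A rough rclpy sketch of that pattern is shown below (the frame and topic names are assumptions, not necessarily those used by fake_vicon_node_hitl):

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from tf2_ros import Buffer, TransformListener

class FakeViconRelay(Node):
    """Looks up the fake_drone transform and republishes it as a PoseStamped."""

    def __init__(self):
        super().__init__("fake_vicon_relay")
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.pub = self.create_publisher(PoseStamped, "/vicon/pose", 10)  # assumed topic
        self.timer = self.create_timer(0.01, self.tick)  # 100 Hz

    def tick(self):
        try:
            t = self.tf_buffer.lookup_transform("world", "fake_drone", rclpy.time.Time())
        except Exception:
            return  # transform not available yet
        msg = PoseStamped()
        msg.header = t.header
        msg.pose.position.x = t.transform.translation.x
        msg.pose.position.y = t.transform.translation.y
        msg.pose.position.z = t.transform.translation.z
        msg.pose.orientation = t.transform.rotation
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(FakeViconRelay())

if __name__ == "__main__":
    main()
```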
4. Run the Quadrotor Simulator
- Subscribes to user-defined trajectories (position, velocity, acceleration, yaw).
- Runs a cascaded PID controller, which can be tuned for custom flight behavior.
ros2 run vizflyt quad_simulator_node
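The actual controller and its gains live in the simulator node and can be tuned there. Conceptually, a cascaded PID turns position error into a velocity setpoint and velocity error into an acceleration command, as in this illustrative sketch (gains and interfaces are made up for illustration):

```python
import numpy as np

class PID:
    """Minimal PID with integral and derivative state, one instance per loop."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(3)
        self.prev_error = np.zeros(3)

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Cascaded structure: the outer loop converts position error into a velocity
# setpoint, the inner loop converts velocity error into an acceleration command.
pos_loop = PID(kp=1.5, ki=0.0, kd=0.0)    # illustrative gains only
vel_loop = PID(kp=2.0, ki=0.1, kd=0.05)

def cascaded_step(pos_ref, pos, vel, dt=0.01):
    vel_ref = pos_loop.step(pos_ref - pos, dt)   # outer (position) loop
    acc_cmd = vel_loop.step(vel_ref - vel, dt)   # inner (velocity) loop
    return acc_cmd                               # fed to attitude/thrust mapping
```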
5. Run the User Code Node
- Subscribes to RGB and depth images and Vicon pose.
- Uses the user-defined perception and motion planning modules to compute real-time trajectory commands.
ros2 run vizflyt usercode_node
6. Launch RViz for Visualization
rviz2
✅ Expected Outcome:
Once all nodes are running, you should see a visualization of the simulated drone and its planned trajectory in RViz.
If using a real drone with Vicon motion capture, follow these steps:
1. Launch the Vicon Receiver Node
- Captures real-time pose updates from the drone.
ros2 launch vicon_receiver client.launch.py
2. Start the Render Node
- Generates hallucinated sensor data from the Digital Twin.
ros2 run vizflyt render_node
3. Enable Collision Detection
- Stops rendering and triggers a landing in case of collision.
ros2 run vizflyt collision_detection_node
4. Run the Ardupilot Drone Control Node
- Reads RGB and depth images and Vicon pose.
- Sends trajectory commands to the drone using pymavlink and DroneKit.
ros2 run vizflyt ardupilot_drone_control_node
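The node handles the MAVLink plumbing for you; for reference, sending a single local-position setpoint with DroneKit and pymavlink typically follows the pattern below (the connection string and setpoint values are placeholders for illustration):

```python
from dronekit import connect
from pymavlink import mavutil

# Connection string is a placeholder; adjust for your telemetry link.
vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

def send_position_target(north, east, down):
    """Send a SET_POSITION_TARGET_LOCAL_NED message (position-only setpoint)."""
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0, 0, 0,                                 # time_boot_ms, target system/component
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,     # local NED frame
        0b0000111111111000,                      # type_mask: only position fields enabled
        north, east, down,                       # position setpoint [m]
        0, 0, 0,                                 # velocity (ignored)
        0, 0, 0,                                 # acceleration (ignored)
        0, 0)                                    # yaw, yaw_rate (ignored)
    vehicle.send_mavlink(msg)

# Example: 1 m forward while holding 1.5 m altitude (NED: down is negative up).
send_position_target(1.0, 0.0, -1.5)
```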
✅ Outcome: The real drone should now follow the planned autonomous trajectory while using the Digital Twin for navigation feedback.
This guide provides step-by-step instructions for generating a high-fidelity digital twin using Nerfstudio. The workflow covers dataset preprocessing, training, visualization, and exporting an occupancy grid for collision detection.
Ensure your workspace is set up correctly before proceeding:
init_vizflyt
If your input consists of images, convert them into a format suitable for Nerfstudio:
ns-process-data images \
--data ./vizflyt_viewer/data/washburn-env6-itr0-1fps/ \
--output-dir ./vizflyt_viewer/data/washburn-env6-itr0-1fps_nf_format/
Run the training process using Splatfacto, which will generate a Gaussian Splatting-based representation of the scene:
ns-train splatfacto \
--data ./vizflyt_viewer/data/washburn-env6-itr0-1fps_nf_format/ \
--output-dir vizflyt_viewer/outputs/washburn-env6-itr0-1fps
Visualize the generated digital twin using the Nerfstudio Viewer:
ns-viewer --load-config \
./vizflyt_viewer/outputs/washburn-env6-itr0-1fps/washburn-env6-itr0-1fps_nf_format/splatfacto/2025-03-06_201843/config.yml
Generate an occupancy grid map from the trained digital twin to use for collision detection in autonomous navigation:
ns-export gaussian-splat \
--load-config ./vizflyt_viewer/outputs/washburn-env6-itr0-1fps/washburn-env6-itr0-1fps_nf_format/splatfacto/2025-03-06_201843/config.yml \
--output-dir ./vizflyt_viewer/occupancy_grid/
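The exported occupancy grid is what the collision_detection_node consumes at runtime. Conceptually, checking a pose against a voxelized occupancy grid reduces to an index lookup, as in the hedged sketch below (the file name, format, resolution, and origin handling are assumptions, not the exporter's actual output spec):

```python
import numpy as np

# Assumed representation: a boolean 3D voxel grid plus its origin and resolution.
occupancy = np.load("occupancy_grid.npy")   # shape (nx, ny, nz), True = occupied
origin = np.array([-5.0, -5.0, 0.0])        # world coordinates of voxel (0, 0, 0) [m]
resolution = 0.05                           # voxel edge length [m]

def in_collision(position: np.ndarray, margin: float = 0.2) -> bool:
    """Return True if any voxel within `margin` of `position` is occupied."""
    lo = np.floor((position - margin - origin) / resolution).astype(int)
    hi = np.ceil((position + margin - origin) / resolution).astype(int)
    lo = np.clip(lo, 0, np.array(occupancy.shape) - 1)
    hi = np.clip(hi, 1, np.array(occupancy.shape))
    return bool(occupancy[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].any())

print(in_collision(np.array([0.5, 1.2, 0.8])))
```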
If the global ns-* commands fail for any reason, you can manually execute the equivalent Python scripts with the same arguments:
- Preprocess Data:
python vizflyt_viewer/scripts/process_data.py
- Train the Model:
python vizflyt_viewer/scripts/train.py
- View the Digital Twin:
python vizflyt_viewer/scripts/run_viewer.py
- Export the Occupancy Grid:
python vizflyt_viewer/scripts/exporter.py
To fine-tune the initial pose, field of view (FOV), and render resolution, follow these steps:
init_vizflyt
ns-viewer --load-config \
./vizflyt_viewer/outputs/washburn-env6-itr0-1fps/washburn-env6-itr0-1fps_nf_format/splatfacto/2025-03-06_201843/config.yml
- Set Initial Position & Orientation
  - Use the GUI to position the vehicle where you want it to start.
- Adjust Render Resolution
  - Navigate to the Control Tab and adjust the "Max Res" slider.
- Set Field of View (FOV)
  - Navigate to the Render Tab and adjust the "Default FOV" slider.
- Save Configuration
  - Once satisfied, click "Save Render Settings" to save your settings.
For advanced usage and fine-grained control over input/output parameters, refer to the official Splatfacto documentation.
This guide provides step-by-step instructions for setting up Vicon (for external localization) and an Ardupilot-based Quadrotor, which we used as open-source hardware for our experiments in the paper.
Follow the official Ardupilot First-Time Setup Guide to configure your drone. The hardware setup we used is fully open-source and detailed in our paper under the hardware section.
If you have a Vicon motion capture system, follow the official Ardupilot Vicon Integration Guide to enable non-GPS navigation.
To receive real-time localization data from Vicon in ROS2, we used the following repository:
🔗 Vicon Receiver ROS2
Clone and install it in your ROS2 workspace to enable Vicon-based positioning.
For seamless integration with our HITL framework, use our pre-configured Ardupilot parameter file.
📥 Download & Upload Parameters:
- File: ICRA_PARAMS
- Upload via: Mission Planner or QGroundControl (GCS software)
- Instructions: Use GCS to load the parameters and apply them to your drone.
By following these steps, you will have a fully functional HITL-compatible quadrotor integrated with Vicon-based localization and Ardupilot.
- Add Hardware Documentation
- Add Docker Image
- Release code
- Add multiple sensors (stereo, LiDAR, event cameras, etc.)
- Support dynamic scenes

If you use this code or find our research useful, please consider citing:
📌 Note: The VizFlyt framework is based on research accepted for publication at ICRA 2025. The final citation details will be updated once the paper is officially published.
@inproceedings{vizflyt2025,
author = {Kushagra Srivastava* and Rutwik Kulkarni* and Manoj Velmurugan* and Nitin J. Sanket},
title = {VizFlyt: An Open-Source Perception-Centric Hardware-In-The-Loop Framework for Aerial Robotics},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2025},
note = {Accepted for publication},
url = {https://github.com/pearwpi/VizFlyt}
}