A specialized fork of PufferLib focused exclusively on drone racing and drone swarm environments. This repository provides high-performance reinforcement learning environments for training agents in racing scenarios and multi-agent swarm coordination tasks.

Prerequisites (a quick check script follows the list):
- Python 3.9+
- CUDA-compatible GPU (recommended for training)
- C++ compiler (gcc/clang on Linux/macOS, MSVC on Windows)
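To sanity-check these prerequisites before installing, a short standard-library script works; this is just a sketch (`cl` is the MSVC compiler driver on Windows):

```python
# Check the Python version and look for a C/C++ compiler and CUDA toolkit
import shutil
import sys

assert sys.version_info >= (3, 9), "Python 3.9+ is required"

compiler = shutil.which("gcc") or shutil.which("clang") or shutil.which("cl")
print("C/C++ compiler:", compiler or "NOT FOUND")

# nvcc is only present if the CUDA toolkit is installed; a CUDA-capable
# GPU is recommended for training but not required to build the package
print("nvcc:", shutil.which("nvcc") or "not found")
```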
uv is a fast Python package manager. Install it first:
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/YOUR_USERNAME/DroneRacing.git
cd DroneRacing

# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install the package in development mode
uv pip install -e .

# Install PyTorch (adjust for your CUDA version)
uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Optional: Install additional dependencies for training
uv pip install -e ".[cleanrl]"
```

Alternatively, with conda:

```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/DroneRacing.git
cd DroneRacing
# Create conda environment
conda create -n dronerace python=3.11
conda activate dronerace
# Install PyTorch (adjust for your CUDA version)
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
# Install the package
pip install -e .
# Optional: Install additional dependencies for training
pip install -e ".[cleanrl]"# Clone the repository
git clone https://github.com/YOUR_USERNAME/DroneRacing.git
cd DroneRacing
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Upgrade pip
pip install --upgrade pip
# Install PyTorch (adjust for your CUDA version)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Install the package
pip install -e .
# Optional: Install additional dependencies for training
pip install -e ".[cleanrl]"Test your installation with:
import pufferlib
from pufferlib.ocean import drone_race, drone_swarm
# Test drone racing environment
env = pufferlib.make('puffer_drone_race')
obs, info = env.reset()
print(f"Drone racing observation shape: {obs.shape}")
# Test drone swarm environment
env = pufferlib.make('puffer_drone_swarm')
obs, info = env.reset()
print(f"Drone swarm observation shape: {obs.shape}")High-speed drone racing environment with:
- 3D racing tracks with obstacles
- Physics-based drone dynamics
- Real-time rendering
- Configurable difficulty levels
The `puffer_drone_swarm` environment provides multi-agent swarm coordination with (a usage sketch follows the list):
- Scalable swarm sizes (default: 64 drones)
- Cooperative objectives
- Collision avoidance
- Formation control tasks
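The sketch below steps the swarm with random actions. It assumes the environment mirrors the reset/step interface used in the installation test above, with actions batched across all drones and possibly per-drone done flags; those details are not documented here.

```python
import numpy as np
import pufferlib
from pufferlib.ocean import drone_swarm  # importing registers the environment

# Assumption: actions are batched across all drones, and the done flags
# may be per-drone arrays, so we reduce them with np.any()
env = pufferlib.make('puffer_drone_swarm')
obs, info = env.reset()
print(f"Swarm observation shape: {obs.shape}")

for _ in range(100):
    action = env.action_space.sample()  # one random action per drone
    obs, reward, terminated, truncated, info = env.step(action)
    if np.any(terminated) or np.any(truncated):
        obs, info = env.reset()
env.close()
```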
```python
import pufferlib
from pufferlib.ocean import drone_race

# Create environment
env = pufferlib.make('puffer_drone_race', render_mode='human')

# Simple random policy
for episode in range(10):
    obs, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        env.render()

env.close()
```
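These environments are built for throughput, so a useful first check is raw steps per second with random actions. This is just a sketch reusing the single-env API shown above; real training throughput depends on vectorization and hardware.

```python
# Rough steps-per-second benchmark with random actions
import time

import pufferlib
from pufferlib.ocean import drone_race

env = pufferlib.make('puffer_drone_race')  # no rendering while timing
obs, info = env.reset()

steps = 10_000
start = time.perf_counter()
for _ in range(steps):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
elapsed = time.perf_counter() - start
print(f"{steps / elapsed:,.0f} steps/sec")
env.close()
```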
```bash
# Train PPO on drone racing
python -m pufferlib.cleanrl_ppo --env puffer_drone_race --total-timesteps 1000000

# Train PPO on drone swarm
python -m pufferlib.cleanrl_ppo --env puffer_drone_swarm --total-timesteps 1000000
```
If you need to modify the C++ environments:

```bash
# Install build dependencies
uv pip install setuptools wheel Cython "numpy<2.0" torch

# Build with debug symbols (optional)
DEBUG=1 python setup.py build_ext --inplace --force

# Or build normally
python setup.py build_ext --inplace
```
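After a rebuild, it helps to confirm the recompiled extension still imports and steps. This is a minimal smoke test reusing the same calls as the installation test above:

```python
# Smoke test after rebuilding the C++ extensions
import pufferlib
from pufferlib.ocean import drone_race  # fails here if the build is broken

env = pufferlib.make('puffer_drone_race')
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
print("rebuild OK, obs shape:", obs.shape)
env.close()
```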
Environment parameters can be configured via `.ini` files in `config/ocean/`:

- `drone_race.ini` - Drone racing settings
- `drone_swarm.ini` - Drone swarm settings
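To see which parameters are exposed without opening the files by hand, you can dump a config with Python's standard `configparser`. This sketch assumes only that the paths listed above exist in your checkout and parse as standard INI; the actual section and key names are not documented here.

```python
# Print every section and key in the drone racing config.
# Assumes you run this from the repository root.
import configparser

config = configparser.ConfigParser()
config.read('config/ocean/drone_race.ini')

for section in config.sections():
    print(f"[{section}]")
    for key, value in config[section].items():
        print(f"  {key} = {value}")
```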
All of our documentation is hosted at puffer.ai. Ping @jsuarez5341 on Discord for support -- post there before opening issues. We're always looking for new contributors, too!

