Emerge-Lab/DroneRacing


DroneRacing



A specialized fork of PufferLib focused exclusively on drone racing and drone swarm environments. This repository provides high-performance reinforcement learning environments for training AI agents in drone racing scenarios and multi-agent drone swarm coordination tasks.


πŸš€ Quick Start

Prerequisites

  • Python 3.9+
  • CUDA-compatible GPU (recommended for training)
  • C++ compiler (gcc/clang on Linux/macOS, MSVC on Windows)

Installation

Option 1: Using uv (Recommended)

uv is a fast Python package manager. Install it first:

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/YOUR_USERNAME/DroneRacing.git
cd DroneRacing

# Create virtual environment and install dependencies
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install the package in development mode
uv pip install -e .

# Install PyTorch (adjust for your CUDA version)
uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Optional: Install additional dependencies for training
uv pip install -e ".[cleanrl]"
```

Option 2: Using Conda

```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/DroneRacing.git
cd DroneRacing

# Create conda environment
conda create -n dronerace python=3.11
conda activate dronerace

# Install PyTorch (adjust for your CUDA version)
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

# Install the package
pip install -e .

# Optional: Install additional dependencies for training
pip install -e ".[cleanrl]"
```

Option 3: Using pip

```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/DroneRacing.git
cd DroneRacing

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Upgrade pip
pip install --upgrade pip

# Install PyTorch (adjust for your CUDA version)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Install the package
pip install -e .

# Optional: Install additional dependencies for training
pip install -e ".[cleanrl]"
```

Verify Installation

Test your installation with:

```python
import pufferlib
from pufferlib.ocean import drone_race, drone_swarm

# Test drone racing environment
env = pufferlib.make('puffer_drone_race')
obs, info = env.reset()
print(f"Drone racing observation shape: {obs.shape}")

# Test drone swarm environment
env = pufferlib.make('puffer_drone_swarm')
obs, info = env.reset()
print(f"Drone swarm observation shape: {obs.shape}")
```

🎯 Available Environments

Drone Racing (puffer_drone_race)

High-speed drone racing environment with:

  • 3D racing tracks with obstacles
  • Physics-based drone dynamics
  • Real-time rendering
  • Configurable difficulty levels
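To give a feel for what "physics-based drone dynamics" means in an environment like this, here is an illustrative point-mass sketch of the kind of update such a simulator integrates each step. This is not the repository's actual C dynamics; the function name and the `drag` coefficient are assumptions for illustration:

```python
import numpy as np

def physics_step(pos, vel, thrust, dt=0.01, g=9.81, drag=0.1):
    """Illustrative point-mass update (not the repo's actual dynamics):
    `thrust` is a 3D acceleration command; gravity and linear drag
    are applied, then position/velocity are integrated with Euler steps."""
    acc = thrust - np.array([0.0, 0.0, g]) - drag * vel
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```

For example, a thrust command that exactly cancels gravity keeps a stationary drone hovering in place.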

Drone Swarm (puffer_drone_swarm)

Multi-agent drone swarm coordination with:

  • Scalable swarm sizes (default: 64 drones)
  • Cooperative objectives
  • Collision avoidance
  • Formation control tasks
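To make the cooperative objectives above concrete, here is a hedged numpy sketch of how a formation-keeping reward with a collision penalty could be computed for a swarm. The function name, penalty weight, and collision radius are illustrative assumptions, not the repository's actual reward:

```python
import numpy as np

def formation_reward(positions, targets, collision_radius=0.5):
    """Illustrative swarm reward (hypothetical, not the repo's actual code):
    each drone is penalized by its distance to an assigned formation slot,
    plus a large penalty if it comes too close to any other drone."""
    # Distance of each drone to its assigned formation slot
    slot_err = np.linalg.norm(positions - targets, axis=1)
    # Pairwise distances between drones
    diff = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore self-distances
    collisions = (dists < collision_radius).any(axis=1)
    return -slot_err - 10.0 * collisions
```

Rewards of this shape encourage each agent to hold its slot while spreading the swarm out enough to avoid collisions.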

πŸ“š Usage Examples

Basic Training Loop

```python
import pufferlib
from pufferlib.ocean import drone_race

# Create environment
env = pufferlib.make('puffer_drone_race', render_mode='human')

# Simple random policy
for episode in range(10):
    obs, info = env.reset()
    done = False

    while not done:
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        env.render()
```

Training with CleanRL

```bash
# Train PPO on drone racing
python -m pufferlib.cleanrl_ppo --env puffer_drone_race --total-timesteps 1000000

# Train PPO on drone swarm
python -m pufferlib.cleanrl_ppo --env puffer_drone_swarm --total-timesteps 1000000
```

πŸ› οΈ Development

Building from Source

If you need to modify the C++ environments:

```bash
# Install build dependencies
uv pip install setuptools wheel Cython "numpy<2.0" torch

# Build with debug symbols (optional)
DEBUG=1 python setup.py build_ext --inplace --force

# Or build normally
python setup.py build_ext --inplace
```

Environment Configuration

Environment parameters can be configured via .ini files in config/ocean/:

  • drone_race.ini - Drone racing settings
  • drone_swarm.ini - Drone swarm settings
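The available keys are defined by each environment's own parser; as a purely illustrative sketch (these section names, keys, and values are assumptions, not taken from the repository's config files), a swarm config might look like:

```ini
[env]
num_drones = 64
max_steps = 1000
collision_radius = 0.5

[reward]
formation_weight = 1.0
collision_penalty = 10.0
```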

πŸ“– Documentation

All of our documentation is hosted at puffer.ai. Contact @jsuarez5341 on Discord for support, and post there before opening issues. We're always looking for new contributors, too!

Star to puff up the project!


About

A fork of PufferLib maintained to provide a drone racing environment for the RL class.
