A minimal obstacle-avoidance self-driving AI written in C++. The AI is built on neural networks trained with an evolutionary algorithm (a.k.a. a genetic algorithm).
The goal of this project is to have a minimal ANN infrastructure for simple simulations where controllers are involved.
This is far from optimal in any way; it's meant as a starting point and an educational tool. Practical applications are possible, but for better performance you're better off using LibTorch.
- Neural network-driven self-driving AI (obstacle avoidance)
- Neural Network training and inference in C++ with no external dependencies
- Real-time visualization of the simulation and training progress
- Minimal immediate-mode rendering API based on OpenGL
- CMake 3.7 or higher
- C++20 compatible compiler
- Git (Git Bash on Windows)
Required external dependencies are fetched by calling:

```
./get_externals.sh
```

Then build the project with:

```
./build.sh
```
The executable will be placed in the `_bin` directory:

```
./_bin/TinyFreeway
```
Wait a few seconds while the first batch of networks is trained; the simulation will then start to play, progressively improving over time.
- `--help` : Display help information
- `--use_swrenderer` : Use software rendering instead of hardware acceleration
- `--autoexit_delay <frames>` : Automatically exit after a specified number of frames
- `--autoexit_savesshot <fname>` : Save a screenshot before automatic exit
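For example, combining the flags above to run with software rendering and exit automatically after a fixed number of frames (the frame count and file name here are just illustrative):

```shell
# Run with software rendering, save a screenshot, and exit after 600 frames
./_bin/TinyFreeway --use_swrenderer --autoexit_delay 600 --autoexit_savesshot shot.png
```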
User interaction is limited to tweaking the GUI controls; it's safe to play around and see what happens.
- `TinyFreeway/` : Self-driving AI implementation and simulation
- `Common/` : Core utilities for SDL2, OpenGL, and ImGui integration
- `_externals/` : External dependencies
Graphic display and user interface are implemented using the following libraries:

- SDL2
- ImGui
- GLM
- fmt
The actual AI engine, the reusable part of this demo, is defined in the following sources:

- `TA_EvolutionEngine.h`
- `TA_SimpleNN.h`
- `TA_Tensor.h`
- `TA_TrainingManager.h`
- `TA_QuickThreadPool.h`
`SimpleNN` and `Tensor` are the low-level building blocks of the neural network.
`EvolutionEngine` implements the genetic algorithm that, given a population of neural networks and their fitness scores, produces a new generation of networks.
`TrainingManager` orchestrates the training process by calling the evaluation function and passing the results to the `EvolutionEngine`.
`QuickThreadPool` is a simple thread pool implementation that allows the training process to be parallelized.
The simulation logic for the synthetic environment is contained in the `Simulation` class.
In `main.cpp`, a `calcFitnessFn` function is defined for `TrainingManager`; it is responsible for running the simulation with a given neural network and returning its fitness (success score).
The rest of the code is for display and user interface.
See the `license.txt` file for details.
This project is a spin-off of Demo9 from the dpasca-sdl2-template repository.