This is a new software stack for the Donkey Car based on ROS1. The project is currently in its early stages, and I'm working on adding more features in the future. The software stack currently runs only on the NVIDIA Jetson Nano.
src/controller # The controller nodes that send control commands based on user operations or a neural network
src/actuator # The actuator nodes that subscribe to the control commands and drive the steering and motor
src/recorder # Used to collect data if you want to train an autopilot
src/donkeycar # Store launch files, model files, as well as collected data
src/inferencer # Inferencer plugins for the nn_controller_node

Dependencies:

- ros-melodic-desktop-full
- ros-melodic-cv-bridge
- ros-melodic-joy
- ros-melodic-gscam
- opencv
- tensorrt (optional)
- tensorflow (optional)
See Ubuntu install of ROS Melodic for details.
- Clone the workspace

```sh
git clone https://github.com/Trustworthy-Engineered-Autonomy-Lab/donkeycar_ros.git
```

- Install dependent ROS packages

```sh
cd donkeycar_ros
rosdep install --from-paths src
```

- Build and source the workspace

```sh
catkin_make
source devel/setup.bash
```
```sh
roslaunch donkeycar drive.launch
```

Then you can drive the car with the joystick.
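In drive mode, a controller node translates joystick input into steering and throttle commands. A minimal sketch of such a mapping is below; the axis indices, value ranges, and function name are illustrative assumptions, not the actual implementation in src/controller.

```python
# Hypothetical joystick-to-command mapping for a controller node.
# Axis indices and scaling are assumptions, not the real src/controller code.

def joy_to_command(axes, steer_axis=0, throttle_axis=1,
                   max_steer=1.0, max_throttle=1.0):
    """Map raw joystick axes (each nominally in [-1, 1]) to a
    (steering, throttle) pair, clamped to the configured limits."""
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    steering = clamp(axes[steer_axis] * max_steer, -max_steer, max_steer)
    throttle = clamp(axes[throttle_axis] * max_throttle, -max_throttle, max_throttle)
    return steering, throttle

# Example: full left on the steering stick, half forward throttle
print(joy_to_command([-1.0, 0.5]))  # (-1.0, 0.5)
```

Clamping keeps out-of-range axis readings from producing commands the actuator cannot execute.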
```sh
roslaunch donkeycar collect.launch
```

Then press the Start button on the joystick to start recording and the Back button to stop. The images will be saved under the data folder.
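To train an autopilot, each recorded frame needs to be paired with the control command that was active when it was captured. The sketch below shows one common way to do this (an image file per frame plus a CSV of labels); the file layout and naming scheme are assumptions, not the actual format used by src/recorder.

```python
# Hypothetical recorder logic: pair each camera frame with its control
# labels. The CSV layout and filename pattern are assumptions only.
import csv
import os

def record_frame(data_dir, frame_idx, steering, throttle, writer):
    """Log one labeled sample: the image filename plus its control labels.
    (The image bytes themselves would be written separately.)"""
    image_name = "frame_%06d.jpg" % frame_idx
    writer.writerow([image_name, steering, throttle])
    return os.path.join(data_dir, image_name)

with open("labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "steering", "throttle"])
    path = record_frame("data", 0, -0.25, 0.4, writer)

print(path)  # data/frame_000000.jpg (on POSIX paths)
```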
- Create a `models` folder under the donkeycar package folder.
- Download the latest model from Coming soon to the `models` folder just created.
- Run the command

```sh
roslaunch donkeycar autopilot.launch
```

Then the AI model will control the steering, and you can control the throttle with the joystick.
Check out Inferencer plugins for details.
Distributed under the MIT License. See LICENSE for more information.
Zhongzheng R. Zhang - [email protected]