
Kinova NextBestView


How to Start

Build your container:

make build-image
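For reference, this target is essentially a docker build; a rough manual equivalent (the image tag below is an assumption, not the actual value used by the Makefile) would be:

docker build -t nextbestview:latest .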

Afterwards, make sure you initialize the repo with the pre-commit hooks:

make repo-init
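If the make target does not work in your environment, a typical pre-commit setup (an assumption about what the target wraps, not its exact contents) looks like:

pip install pre-commit
pre-commit install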

You'll be forwarding the graphical display to noVNC in your browser. First, create the local Docker network that will carry these packets and connect your VNC client, local machine, and the Kinova together; you'll also use it later when remote controlling the Kinova:

make network
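If you want to inspect or recreate the network by hand, it is a standard user-defined Docker bridge network; the name below is an assumption and may differ from the one the Makefile defines:

docker network create nbv-network
docker network ls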

Next, stand up the VNC container to forward X11 to your web browser. You can view it at localhost:8080.

make vnc
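Before launching anything graphical, you can confirm the VNC forwarder is reachable with generic checks (these are not project targets):

docker ps
curl -sI http://localhost:8080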

Simulation

To start the Docker environment:

make bash

Finally, to launch the simulated ROS2 drivers for MoveIt Kortex control:

make moveit
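Once the drivers are up, you can sanity-check them from a second shell (e.g. another make bash) using standard ROS2 CLI tools; the joint_states topic name is a typical default and is assumed here, not guaranteed by this package:

ros2 node list
ros2 topic list | grep joint_states
ros2 topic echo /joint_states --once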

Target

Make sure you're on the same subnet as the Kinova and connected via Ethernet. Then run the NIC setup command:

make config-target-network

By default, the NIC name is en7. Change this to whichever NIC connects to the Kinova by passing the Make argument KINOVA_NIC. Example:

make config-target-network KINOVA_NIC=<your_nic_name>
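If you prefer to configure the interface manually, the setup amounts to putting your NIC on the arm's subnet; the commands below assume the Kinova factory-default address 192.168.1.10/24 and a hypothetical host address, so adjust them to your network:

sudo ip addr add 192.168.1.11/24 dev <your_nic_name>
sudo ip link set <your_nic_name> up
ping -c 3 192.168.1.10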

To start the Docker environment:

make bash

If you want to run the planning system with the provided actions (identify object, go to position, and next best view), run the following command. This will launch RViz2, MoveIt2, the vision node, the action nodes, and the mission interface. If you intend to use this, we provide an example XML file created with ChatGPT. After running this command, go to the section Using with Mission Planning (MP) to understand how to send the generated example plan.

make one4all
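Once the stack is up, you can check that the nodes and action servers registered using generic ROS2 CLI commands (exact node and action names depend on this package and are not listed here):

ros2 node list
ros2 action list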

Finally, to launch the ROS2 drivers for MoveIt Kortex control:

make moveit-target

Example

If you want to see the robot move in sim or on target, you can launch the prebuilt example node. It simply moves the arm to an arbitrary position:

make moveit-example

Vision

If you intend to use the vision module, open another shell in the Docker container using docker exec and run the following command:

make vision

This will bring up the vision ROS2 node, which exposes the RGBD camera on ROS2 topics that can be visualized in RViz.

NOTE: this works only with hardware connected.
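With the hardware connected, you can inspect the camera streams using standard ROS2 tools; the topic names below follow common RGBD camera conventions and are assumptions, not names guaranteed by this package:

ros2 topic list | grep -iE 'color|depth|points'
ros2 topic hz /camera/color/image_raw
ros2 run rviz2 rviz2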

Using with Mission Planning (MP)

Control with an XML-generated mission plan is currently under implementation. XML mission plans are sent via any compliant MP generation tool, such as our own GPT planner. After initializing all relevant nodes, connect the planner to TCP port 12345:

nc localhost 12345

Paste the provided example plan, then press Ctrl+C.
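Alternatively, you can send the plan file directly instead of pasting it; the filename below is a placeholder for whichever example XML you are using:

cat example_plan.xml | nc localhost 12345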

Expected behavior:

1. The object (potted plant) will be detected.
2. The arm will center the object in the camera frame.
3. The arm will capture point cloud information of the plant from different angles, and these point clouds will be merged.

About

ROS2 project that takes an XML mission plan as input and executes actions on a KINOVA KORTEX manipulator.
