Eye-to-hand calibration from RGB-D images using robot mesh as calibration target.
| Unregistered | Registered |
|---|---|
| ![]() | ![]() |
Three install options are provided:
- Pip (Requires CUDA Toolkit Installation)
- Conda (Installs CUDA Toolkit)
- Docker (Comes with CUDA Toolkit)
> [!NOTE]
> At runtime, the CUDA Toolkit is required for differentiable rendering. If you plan to use differentiable rendering, see the CUDA Toolkit Install Instructions. Alternatively, install using conda, see Conda (Installs CUDA Toolkit).
### Pip (Requires CUDA Toolkit Installation)

To pip install roboreg, simply run

```shell
pip3 install roboreg
```

### Conda (Installs CUDA Toolkit)

To install roboreg within an Anaconda environment (ideally Miniconda, or even better, Mamba), do the following:
- Create an environment

    ```shell
    conda create -n rr-0.4.6 python=3.10
    ```
- Clone this repository and install dependencies

    ```shell
    git clone git@github.com:lbr-stack/roboreg.git
    mamba env update -f roboreg/env.yaml # if Anaconda or Miniconda was used, do 'conda env update -f env.yaml'
    ```
- Install roboreg

    ```shell
    mamba activate rr-0.4.6 # can also use 'conda activate rr-0.4.6' in either case
    pip3 install roboreg/
    ```
### Docker (Comes with CUDA Toolkit)

A sample Docker container is provided for testing purposes. First:
- Install Docker, see Docker Install Instructions
- Install NVIDIA Container Toolkit, see NVIDIA Container Toolkit Install Instructions
Next:
- Clone this repository

    ```shell
    git clone git@github.com:lbr-stack/roboreg.git
    ```
- Build the Docker image

    ```shell
    cd roboreg
    docker build . \
        --tag roboreg \
        --build-arg USER_ID=$(id -u) \
        --build-arg GROUP_ID=$(id -g) \
        --build-arg USER=$USER
    ```
- Run container

    ```shell
    docker remove roboreg-container
    docker run -it \
        --gpus all \
        --network host \
        --ipc host \
        --volume /tmp/.X11-unix:/tmp/.X11-unix \
        --volume /dev/shm:/dev/shm \
        --volume /dev:/dev --privileged \
        --env DISPLAY \
        --env QT_X11_NO_MITSHM=1 \
        --name roboreg-container \
        roboreg
    ```
> [!NOTE]
> In these examples, the lbr_fri_ros2_stack is used. Make sure to follow Quick Start first. However, you can also use your own robot description files.
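The registration commands below locate the robot model through `--ros-package` and `--xacro-path`. As a rough illustration of the usual ROS 2 resolution (a sketch under assumed conventions, not necessarily roboreg's internals), the description is found and expanded like this:

```python
# Sketch: resolve a xacro file from a ROS 2 package share directory and expand
# it to URDF. Assumes a sourced ROS 2 environment with lbr_description installed;
# this mirrors what --ros-package / --xacro-path imply, not roboreg's code.
import os

import xacro
from ament_index_python.packages import get_package_share_directory

share = get_package_share_directory("lbr_description")
urdf_xml = xacro.process_file(os.path.join(share, "urdf/med7/med7.xacro")).toxml()
print(urdf_xml[:200])  # the expanded URDF as an XML string
```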
### SAM 2

This is a required step to generate robot masks.

```shell
rr-sam2 \
--path test/assets/lbr_med7/zed2i \
--pattern "left_image_*.png" \
--n-positive-samples 5 \
--n-negative-samples 5 \
    --device cuda
```

### Hydra Robust ICP

The Hydra robust ICP implements a point-to-plane ICP registration on a Lie algebra. It does not use rendering and can also be used on the CPU.
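For intuition, here is a minimal NumPy sketch of one point-to-plane Gauss-Newton step on se(3). It is an illustration under assumed conventions, not roboreg's implementation, which adds robustness and differs in detail.

```python
# Minimal sketch of one point-to-plane Gauss-Newton step on se(3).
# Conventions assumed for illustration: p are mesh points posed by forward
# kinematics, q and n are matched depth points and normals in the camera frame,
# T is the 4x4 base-to-camera extrinsics being estimated.
import numpy as np

def exp_se3(xi):
    """Exponential map from a twist xi = (omega, v) in se(3) to a 4x4 transform."""
    omega, v = xi[:3], xi[3:]
    theta = np.linalg.norm(omega)
    K = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    if theta < 1e-8:
        R, V = np.eye(3) + K, np.eye(3)
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (1.0 - A) / theta**2
        R = np.eye(3) + A * K + B * (K @ K)
        V = np.eye(3) + B * K + C * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def point_to_plane_step(T, p, q, n):
    """One Gauss-Newton update of T from (N, 3) points p, q and normals n."""
    p_cam = p @ T[:3, :3].T + T[:3, 3]          # pose source points into camera frame
    r = np.sum((p_cam - q) * n, axis=1)         # residuals n^T (T p - q)
    J = np.hstack([np.cross(p_cam, n), n])      # (N, 6) Jacobian w.r.t. (omega, v)
    xi = np.linalg.lstsq(J, -r, rcond=None)[0]  # least-squares twist update
    return exp_se3(xi) @ T                      # apply as a left perturbation
```

The actual registration is run via the CLI: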
```shell
rr-hydra \
--camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
--path test/assets/lbr_med7/zed2i \
--mask-pattern mask_sam2_left_image_*.png \
--depth-pattern depth_*.npy \
--joint-states-pattern joint_states_*.npy \
--ros-package lbr_description \
--xacro-path urdf/med7/med7.xacro \
--root-link-name lbr_link_0 \
--end-link-name lbr_link_7 \
--number-of-points 5000 \
    --output-file HT_hydra_robust.npy
```

> [!WARNING]
> On first run, nvdiffrast compiles PyTorch extensions. This might use too many resources on some systems (< 16 GB RAM).
> You can set the environment variable `export MAX_JOBS=1` before the first run to limit concurrent compilation.
> Also refer to this Issue.
### Camera Swarm

The camera swarm optimization can be used to find an initial guess for Monocular Differentiable Rendering or Stereo Differentiable Rendering.
```shell
rr-cam-swarm \
--collision-meshes \
--n-cameras 1000 \
--min-distance 0.5 \
--max-distance 3.0 \
--angle-range 3.141 \
--w 0.7 \
--c1 1.5 \
--c2 1.5 \
--max-iterations 100 \
--display-progress \
--ros-package lbr_description \
--xacro-path urdf/med7/med7.xacro \
--root-link-name lbr_link_0 \
--end-link-name lbr_link_7 \
--target-reduction 0.95 \
--scale 0.1 \
--n-samples 1 \
--camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
--path test/assets/lbr_med7/zed2i \
--image-pattern left_image_*.png \
--joint-states-pattern joint_states_*.npy \
--mask-pattern mask_sam2_left_image_*.png \
    --output-file HT_cam_swarm.npy
```

> [!WARNING]
> On first run, nvdiffrast compiles PyTorch extensions. This might use too many resources on some systems (< 16 GB RAM).
> You can set the environment variable `export MAX_JOBS=1` before the first run to limit concurrent compilation.
> Also refer to this Issue.
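The `--w`, `--c1`, and `--c2` flags are the standard particle swarm coefficients (inertia, cognitive, and social weights). A minimal sketch of that update rule follows; it is illustrative only, with a placeholder fitness, whereas roboreg scores candidate camera poses against the robot masks.

```python
# Minimal particle swarm optimization sketch matching --w, --c1, --c2.
# Illustrative only: each particle would be a candidate camera pose and the
# fitness would compare rendered and observed masks; here it is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 1000, 6           # e.g. poses parameterized by a 6-vector
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive and social weights

x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions
v = np.zeros_like(x)                            # particle velocities
p_best = x.copy()                               # per-particle best positions
p_best_f = np.full(n_particles, np.inf)         # per-particle best fitness

def fitness(x):
    # Placeholder objective; roboreg would score mask overlap instead.
    return np.sum(x**2, axis=1)

for _ in range(100):
    f = fitness(x)
    improved = f < p_best_f
    p_best[improved], p_best_f[improved] = x[improved], f[improved]
    g_best = p_best[np.argmin(p_best_f)]        # swarm-wide best position
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = x + v
```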
### Monocular Differentiable Rendering

This monocular differentiable rendering refinement requires a good initial estimate, e.g. as obtained from Hydra Robust ICP or Camera Swarm.
```shell
rr-mono-dr \
--optimizer SGD \
--lr 0.01 \
--max-iterations 100 \
--display-progress \
--ros-package lbr_description \
--xacro-path urdf/med7/med7.xacro \
--root-link-name lbr_link_0 \
--end-link-name lbr_link_7 \
--camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
--extrinsics-file test/assets/lbr_med7/zed2i/HT_hydra_robust.npy \
--path test/assets/lbr_med7/zed2i \
--image-pattern left_image_*.png \
--joint-states-pattern joint_states_*.npy \
--mask-pattern mask_sam2_left_image_*.png \
    --output-file HT_dr.npy
```

> [!WARNING]
> On first run, nvdiffrast compiles PyTorch extensions. This might use too many resources on some systems (< 16 GB RAM).
> You can set the environment variable `export MAX_JOBS=1` before the first run to limit concurrent compilation.
> Also refer to this Issue.
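Conceptually, `--optimizer SGD --lr 0.01` drive a loop like the following PyTorch sketch. It is illustrative only: the renderer here is a dummy differentiable stand-in, while roboreg renders the robot mesh with nvdiffrast and compares it against the SAM 2 masks.

```python
# Sketch of gradient-based pose refinement via a silhouette loss.
# Illustrative only: render_silhouette is a dummy stand-in for a
# differentiable renderer, and the target mask is synthetic.
import torch

xi = torch.zeros(6, requires_grad=True)          # se(3) twist around the initial guess
optimizer = torch.optim.SGD([xi], lr=0.01)

def render_silhouette(xi):
    # Dummy differentiable image depending on the pose parameters.
    return torch.sigmoid(xi.sum() + torch.zeros(64, 64))

target_mask = torch.ones(64, 64)                 # in practice: the SAM 2 mask from disk

for _ in range(100):
    optimizer.zero_grad()
    silhouette = render_silhouette(xi)
    loss = torch.nn.functional.mse_loss(silhouette, target_mask)
    loss.backward()
    optimizer.step()
```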
### Stereo Differentiable Rendering

This stereo differentiable rendering refinement requires a good initial estimate, e.g. as obtained from Hydra Robust ICP or Camera Swarm.
```shell
rr-stereo-dr \
--optimizer SGD \
--lr 0.01 \
--max-iterations 100 \
--display-progress \
--ros-package lbr_description \
--xacro-path urdf/med7/med7.xacro \
--root-link-name lbr_link_0 \
--end-link-name lbr_link_7 \
--left-camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
--right-camera-info-file test/assets/lbr_med7/zed2i/right_camera_info.yaml \
--left-extrinsics-file test/assets/lbr_med7/zed2i/HT_hydra_robust.npy \
--right-extrinsics-file test/assets/lbr_med7/zed2i/HT_right_to_left.npy \
--path test/assets/lbr_med7/zed2i \
--left-image-pattern left_image_*.png \
--right-image-pattern right_image_*.png \
--joint-states-pattern joint_states_*.npy \
--left-mask-pattern mask_sam2_left_image_*.png \
--right-mask-pattern mask_sam2_right_image_*.png \
--left-output-file HT_left_dr.npy \
    --right-output-file HT_right_dr.npy
```

> [!WARNING]
> On first run, nvdiffrast compiles PyTorch extensions. This might use too many resources on some systems (< 16 GB RAM).
> You can set the environment variable `export MAX_JOBS=1` before the first run to limit concurrent compilation.
> Also refer to this Issue.
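Judging by the `HT_` prefix, the output `.npy` files hold 4x4 homogeneous transforms (an assumption for this sketch). A quick way to load and sanity-check a result before using it downstream:

```python
# Load a calibration result and sanity-check the rotation block.
# Assumes (per the HT_ naming) a 4x4 homogeneous transform.
import numpy as np

HT = np.load("HT_left_dr.npy")
R, t = HT[:3, :3], HT[:3, 3]
assert HT.shape == (4, 4)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-6)    # R should be orthonormal
assert np.isclose(np.linalg.det(R), 1.0, atol=1e-6)  # and a proper rotation
print("translation:", t)
```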
Generate renders using the obtained extrinsics:
```shell
rr-render \
--batch-size 1 \
--num-workers 0 \
--ros-package lbr_description \
--xacro-path urdf/med7/med7.xacro \
--root-link-name lbr_link_0 \
--end-link-name lbr_link_7 \
--camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
--extrinsics-file test/assets/lbr_med7/zed2i/HT_left_dr.npy \
--images-path test/assets/lbr_med7/zed2i \
--joint-states-path test/assets/lbr_med7/zed2i \
--image-pattern left_image_*.png \
--joint-states-pattern joint_states_*.npy \
    --output-path test/assets/lbr_med7/zed2i
```

For testing on the xarm data, follow Docker (Comes with CUDA Toolkit). Inside the container, run Hydra robust ICP on the provided xarm and RealSense data:
```shell
rr-hydra \
--camera-info-file test/assets/xarm/realsense/camera_info.yaml \
--path test/assets/xarm/realsense \
--mask-pattern mask_*.png \
--depth-pattern depth_*.npy \
--joint-states-pattern joint_state_*.npy \
--ros-package xarm_description \
--xacro-path urdf/xarm_device.urdf.xacro \
--root-link-name link_base \
--end-link-name link7 \
--number-of-points 5000 \
    --output-file HT_hydra_robust.npy
```

Generate renders using the obtained extrinsics:
```shell
rr-render \
--batch-size 1 \
--num-workers 0 \
--ros-package xarm_description \
--xacro-path urdf/xarm_device.urdf.xacro \
--root-link-name link_base \
--end-link-name link7 \
--camera-info-file test/assets/xarm/realsense/camera_info.yaml \
--extrinsics-file test/assets/xarm/realsense/HT_hydra_robust.npy \
--images-path test/assets/xarm/realsense \
--joint-states-path test/assets/xarm/realsense \
--image-pattern img_*.png \
--joint-states-pattern joint_state_*.npy \
    --output-path test/assets/xarm/realsense
```

We would further like to acknowledge the following supporters:
| Logo | Notes |
|---|---|
| | This work was supported by core and project funding from the Wellcome/EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1]. |
| | This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016985 (FAROS project). |
| | Built at RViMLab. |
| | Built at CAI4CAI. |
| | Built at King's College London. |

