Roboreg

License: Apache License 2.0 · Code style: Black

Eye-to-hand calibration from RGB / RGB-D images, using the robot mesh as the calibration target.

Figure: robot mesh (purple) and point cloud (turquoise), shown before (unregistered) and after (registered) calibration.

Table of Contents

- Installation
- Command Line Interface
- Testing
- Acknowledgements

Installation

Three install options are provided:

Pip (Requires CUDA Toolkit Installation)

Note

The CUDA Toolkit is required at runtime for differentiable rendering. If you plan to use differentiable rendering, see the CUDA Toolkit Install Instructions; alternatively, install via conda, see Conda (Installs CUDA Toolkit).

To install roboreg with pip, simply run

pip3 install roboreg
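
Before using the differentiable-rendering commands, a quick sanity check (a minimal sketch, assuming PyTorch is installed as one of roboreg's dependencies) confirms that PyTorch sees a CUDA device and that nvcc is on the PATH:

    import shutil

    import torch

    # PyTorch must see a CUDA device for the differentiable rendering commands.
    print("CUDA available:", torch.cuda.is_available())
    # nvcc from the CUDA Toolkit is required when nvdiffrast builds its PyTorch extensions.
    print("nvcc on PATH:", shutil.which("nvcc") is not None)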

Conda (Installs CUDA Toolkit)

To install roboreg within an Anaconda environment (ideally Miniconda, or even better, Mamba), do the following:

  1. Create an environment

    conda create -n rr-0.4.6 python=3.10
  2. Clone this repository and install dependencies

    git clone [email protected]:lbr-stack/roboreg.git
    mamba env update -f roboreg/env.yaml # if Anaconda or Miniconda was used, do 'conda env update -f roboreg/env.yaml'
  3. Install roboreg

    mamba activate rr-0.4.6 # can also use 'conda activate rr-0.4.6' in either case
    pip3 install roboreg/
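
To verify the environment, a short check (a sketch; it assumes the console scripts are installed together with the package) confirms the install and that the command line tools are on the PATH:

    import importlib.metadata
    import shutil

    # Confirm roboreg is installed in the active environment.
    print("roboreg version:", importlib.metadata.version("roboreg"))
    # Confirm the command line entry points used below are on the PATH.
    print("rr-hydra on PATH:", shutil.which("rr-hydra") is not None)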

Docker (Comes with CUDA Toolkit)

A sample Docker container is provided for testing purposes. Running it with GPU support requires Docker and the NVIDIA Container Toolkit. Then:

  1. Clone this repository

    git clone [email protected]:lbr-stack/roboreg.git
  2. Build the Docker image

    cd roboreg
    docker build . \
        --tag roboreg \
        --build-arg USER_ID=$(id -u) \
        --build-arg GROUP_ID=$(id -g) \
        --build-arg USER=$USER
  3. Run container

    docker rm roboreg-container # remove any stale container from a previous run
    docker run -it \
        --gpus all \
        --network host \
        --ipc host \
        --volume /tmp/.X11-unix:/tmp/.X11-unix \
        --volume /dev/shm:/dev/shm \
        --volume /dev:/dev --privileged \
        --env DISPLAY \
        --env QT_X11_NO_MITSHM=1 \
        --name roboreg-container \
        roboreg

Command Line Interface

Note

In these examples, the lbr_fri_ros2_stack is used. Make sure to follow Quick Start first. However, you can also use your own robot description files.

Segment

Segmentation is a required first step: it generates the robot masks that the registration commands below consume.

rr-sam2 \
    --path test/assets/lbr_med7/zed2i \
    --pattern "left_image_*.png" \
    --n-positive-samples 5 \
    --n-negative-samples 5 \
    --device cuda
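
The later commands pick the generated masks up from the same --path via their --mask-pattern flags. A minimal sketch for spot-checking one mask (the file index is an assumption; adjust it to your data) overlays it on its source image:

    import cv2
    import numpy as np

    # Load one image and its generated SAM2 mask (index 0 is an assumption).
    image = cv2.imread("test/assets/lbr_med7/zed2i/left_image_0.png")
    mask = cv2.imread("test/assets/lbr_med7/zed2i/mask_sam2_left_image_0.png", cv2.IMREAD_GRAYSCALE)

    # Tint the masked robot pixels for a quick visual check.
    overlay = image.copy()
    overlay[mask > 0] = (0.5 * overlay[mask > 0] + 0.5 * np.array([255.0, 0.0, 255.0])).astype(np.uint8)
    cv2.imwrite("mask_overlay_left_image_0.png", overlay)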

Hydra Robust ICP

Hydra robust ICP implements a point-to-plane ICP registration parameterized on a Lie algebra. It does not use rendering and can also run on the CPU.

rr-hydra \
    --camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
    --path test/assets/lbr_med7/zed2i \
    --mask-pattern mask_sam2_left_image_*.png \
    --depth-pattern depth_*.npy \
    --joint-states-pattern joint_states_*.npy \
    --ros-package lbr_description \
    --xacro-path urdf/med7/med7.xacro \
    --root-link-name lbr_link_0 \
    --end-link-name lbr_link_7 \
    --number-of-points 5000 \
    --output-file HT_hydra_robust.npy
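
For intuition, the sketch below shows the core of a single point-to-plane ICP step with a twist (Lie algebra) parameterization. It illustrates the idea only and is not Roboreg's implementation; robust weighting, correspondence search, and iteration are omitted.

    import numpy as np

    def se3_exp(xi: np.ndarray) -> np.ndarray:
        """Exponential map from a twist xi = (omega, v) to a 4x4 homogeneous transform."""
        omega, v = xi[:3], xi[3:]
        theta = np.linalg.norm(omega)
        K = np.array([[0.0, -omega[2], omega[1]],
                      [omega[2], 0.0, -omega[0]],
                      [-omega[1], omega[0], 0.0]])
        if theta < 1e-9:
            R, V = np.eye(3), np.eye(3)
        else:
            R = np.eye(3) + np.sin(theta) / theta * K + (1.0 - np.cos(theta)) / theta**2 * K @ K
            V = np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * K + (theta - np.sin(theta)) / theta**3 * K @ K
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, V @ v
        return T

    def point_to_plane_step(src: np.ndarray, dst: np.ndarray, normals: np.ndarray) -> np.ndarray:
        """One linearized point-to-plane step: src/dst are corresponding N x 3 points,
        normals are the N x 3 target normals. Returns the incremental 4x4 transform."""
        # Jacobian rows [(p x n)^T, n^T] and residuals n . (p - q), linearized at the identity.
        J = np.hstack([np.cross(src, normals), normals])
        r = np.einsum("ij,ij->i", normals, src - dst)
        xi = np.linalg.lstsq(J, -r, rcond=None)[0]
        return se3_exp(xi)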

Camera Swarm

Warning

On first run, nvdiffrast compiles PyTorch extensions. This can exhaust resources on some systems (< 16 GB RAM). You can set the environment variable MAX_JOBS=1 (export MAX_JOBS=1) before the first run to limit concurrent compilation jobs. Also refer to this Issue.

The camera swarm optimization can be used to find an initial guess for Monocular Differentiable Rendering or Stereo Differentiable Rendering.

rr-cam-swarm \
    --collision-meshes \
    --n-cameras 1000 \
    --min-distance 0.5 \
    --max-distance 3.0 \
    --angle-range 3.141 \
    --w 0.7 \
    --c1 1.5 \
    --c2 1.5 \
    --max-iterations 100 \
    --display-progress \
    --ros-package lbr_description \
    --xacro-path urdf/med7/med7.xacro \
    --root-link-name lbr_link_0 \
    --end-link-name lbr_link_7 \
    --target-reduction 0.95 \
    --scale 0.1 \
    --n-samples 1 \
    --camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
    --path test/assets/lbr_med7/zed2i \
    --image-pattern left_image_*.png \
    --joint-states-pattern joint_states_*.npy \
    --mask-pattern mask_sam2_left_image_*.png \
    --output-file HT_cam_swarm.npy
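
The --w, --c1, and --c2 flags mirror the standard particle swarm inertia, cognitive, and social coefficients. As an illustration (not Roboreg's code), a generic PSO update with that parameterization looks as follows, assuming each particle encodes a candidate camera pose that is scored against the robot masks:

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
        """One particle swarm update. x, v: (N, D) particle positions and velocities;
        pbest: (N, D) per-particle best positions; gbest: (D,) swarm-best position."""
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # inertia + cognitive + social
        return x + v, v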

Monocular Differentiable Rendering

Warning

On first run, nvdiffrast compiles PyTorch extensions. This can exhaust resources on some systems (< 16 GB RAM). You can set the environment variable MAX_JOBS=1 (export MAX_JOBS=1) before the first run to limit concurrent compilation jobs. Also refer to this Issue.

The monocular differentiable rendering refinement requires a good initial estimate, e.g. as obtained from Hydra Robust ICP or Camera Swarm.

rr-mono-dr \
    --optimizer SGD \
    --lr 0.01 \
    --max-iterations 100 \
    --display-progress \
    --ros-package lbr_description \
    --xacro-path urdf/med7/med7.xacro \
    --root-link-name lbr_link_0 \
    --end-link-name lbr_link_7 \
    --camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
    --extrinsics-file test/assets/lbr_med7/zed2i/HT_hydra_robust.npy \
    --path test/assets/lbr_med7/zed2i \
    --image-pattern left_image_*.png \
    --joint-states-pattern joint_states_*.npy \
    --mask-pattern mask_sam2_left_image_*.png \
    --output-file HT_dr.npy
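
A differentiable-rendering refinement of this kind renders the robot under the current extrinsics and compares the rendered silhouette against the SAM2 mask, back-propagating through the renderer to update the pose. The exact objective is not reproduced here; as an illustration (not Roboreg's code), a generic soft-IoU silhouette loss that such a refinement can minimize looks like this:

    import torch

    def soft_iou_loss(rendered: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """Soft IoU loss between a rendered silhouette and a target mask, both float tensors in [0, 1]."""
        intersection = (rendered * target).sum()
        union = rendered.sum() + target.sum() - intersection
        return 1.0 - intersection / (union + eps)  # 0 when the silhouettes match exactly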

Stereo Differentiable Rendering

Warning

On first run, nvdiffrast compiles PyTorch extensions. This can exhaust resources on some systems (< 16 GB RAM). You can set the environment variable MAX_JOBS=1 (export MAX_JOBS=1) before the first run to limit concurrent compilation jobs. Also refer to this Issue.

The stereo differentiable rendering refinement requires a good initial estimate, e.g. as obtained from Hydra Robust ICP or Camera Swarm.

rr-stereo-dr \
    --optimizer SGD \
    --lr 0.01 \
    --max-iterations 100 \
    --display-progress \
    --ros-package lbr_description \
    --xacro-path urdf/med7/med7.xacro \
    --root-link-name lbr_link_0 \
    --end-link-name lbr_link_7 \
    --left-camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
    --right-camera-info-file test/assets/lbr_med7/zed2i/right_camera_info.yaml \
    --left-extrinsics-file test/assets/lbr_med7/zed2i/HT_hydra_robust.npy \
    --right-extrinsics-file test/assets/lbr_med7/zed2i/HT_right_to_left.npy \
    --path test/assets/lbr_med7/zed2i \
    --left-image-pattern left_image_*.png \
    --right-image-pattern right_image_*.png \
    --joint-states-pattern joint_states_*.npy \
    --left-mask-pattern mask_sam2_left_image_*.png \
    --right-mask-pattern mask_sam2_right_image_*.png \
    --left-output-file HT_left_dr.npy \
    --right-output-file HT_right_dr.npy

Render Results

Warning

On first run, nvdiffrast compiles PyTorch extensions. This can exhaust resources on some systems (< 16 GB RAM). You can set the environment variable MAX_JOBS=1 (export MAX_JOBS=1) before the first run to limit concurrent compilation jobs. Also refer to this Issue.

Generate renders using the obtained extrinsics:

rr-render \
    --batch-size 1 \
    --num-workers 0 \
    --ros-package lbr_description \
    --xacro-path urdf/med7/med7.xacro \
    --root-link-name lbr_link_0 \
    --end-link-name lbr_link_7 \
    --camera-info-file test/assets/lbr_med7/zed2i/left_camera_info.yaml \
    --extrinsics-file test/assets/lbr_med7/zed2i/HT_left_dr.npy \
    --images-path test/assets/lbr_med7/zed2i \
    --joint-states-path test/assets/lbr_med7/zed2i \
    --image-pattern left_image_*.png \
    --joint-states-pattern joint_states_*.npy \
    --output-path test/assets/lbr_med7/zed2i
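
The HT_*.npy files written by the registration commands can also be inspected directly. A minimal sketch, assuming each file stores a 4x4 homogeneous transform, loads one result and checks that its rotation block is orthonormal:

    import numpy as np

    HT = np.load("test/assets/lbr_med7/zed2i/HT_left_dr.npy")
    R, t = HT[:3, :3], HT[:3, 3]

    print("shape:", HT.shape)  # expected (4, 4)
    print("rotation orthonormal:", np.allclose(R @ R.T, np.eye(3), atol=1e-5))
    print("translation:", t)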

Testing

For testing on the xarm data, follow Docker (Comes with CUDA Toolkit), then run the steps below inside the container.

Hydra Robust ICP

To run Hydra robust ICP on the provided xarm and RealSense data, run

rr-hydra \
    --camera-info-file test/assets/xarm/realsense/camera_info.yaml \
    --path test/assets/xarm/realsense \
    --mask-pattern mask_*.png \
    --depth-pattern depth_*.npy \
    --joint-states-pattern joint_state_*.npy \
    --ros-package xarm_description \
    --xacro-path urdf/xarm_device.urdf.xacro \
    --root-link-name link_base \
    --end-link-name link7 \
    --number-of-points 5000 \
    --output-file HT_hydra_robust.npy

Render Results

Generate renders using the obtained extrinsics:

rr-render \
    --batch-size 1 \
    --num-workers 0 \
    --ros-package xarm_description \
    --xacro-path urdf/xarm_device.urdf.xacro \
    --root-link-name link_base \
    --end-link-name link7 \
    --camera-info-file test/assets/xarm/realsense/camera_info.yaml \
    --extrinsics-file test/assets/xarm/realsense/HT_hydra_robust.npy \
    --images-path test/assets/xarm/realsense \
    --joint-states-path test/assets/xarm/realsense \
    --image-pattern img_*.png \
    --joint-states-pattern joint_state_*.npy \
    --output-path test/assets/xarm/realsense

Acknowledgements

Organizations and Grants

We would further like to acknowledge the following supporters:

- Wellcome/EPSRC: This work was supported by core and project funding from the Wellcome/EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1].
- European Union: This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016985 (FAROS project).
- Built at RViMLab.
- Built at CAI4CAI.
- Built at King's College London.
