Real-to-Sim Parameter Learning for Deformable Packages Using High-Fidelity Simulators for Robotic Manipulation
Omey M. Manyar1,*, Hantao Ye1,*, Siddharth Mayya2, Fan Wang2, Satyandra K. Gupta1
1University of Southern California, 2Amazon Robotics
*Equal contribution, listed alphabetically
Project Page | Video | Paper
This repository is the official implementation of the paper. The code is intended for reproduction purposes only; the current implementation does not support extensions. The objective of this repository is to provide the reader with the implementation details of the simulation proposed in the paper.
To simplify the process of setting up the development environment, we use PDM for Python package and virtual environment management.
- Ubuntu >= 22.04
- Python >= 3.12
- Anaconda/Miniconda (For virtualenv creation)
To install PDM, run the following command:
```shell
curl -sSL https://pdm-project.org/install-pdm.py | python3 -
```

Clone the project repository and navigate into the project folder:
```shell
git clone https://github.com/RROS-Lab/deformable-sim.git
cd deformable-sim
```

Next, create a Python 3.12 virtual environment using PDM, selecting conda as the backend:
```shell
pdm venv create --with conda 3.12
```

To verify that the virtual environment was created successfully, use:
```shell
pdm venv list
```

You should see output like:
```
Virtualenvs created with this project:

*  in-project: /path/to/deformable-sim/.venv
```

Here, in-project is the default name of the virtual environment. If you'd like to specify a custom name for the environment, use:
```shell
pdm venv create --with conda --name my-env-name 3.12
```

First, select the created virtual environment (option 0 in the prompt below) using pdm use:
```shell
$ pdm use
Please enter the Python interpreter to use
 0. [email protected] (/path/to/deformable-sim/.venv/bin/python)
 1. [email protected] (/path/to/deformable-sim/.venv/bin/python3.12)
 2. [email protected] (/home/user/miniconda3/bin/python3.12)
 3. [email protected] (/usr/bin/python3.12)
 4. [email protected] (/usr/bin/python)
```

To activate the virtual environment and install dependencies, run:
```shell
eval $(pdm venv activate in-project)
pdm install
```

All necessary dependencies will be installed after running the commands above.
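As a quick sanity check after activating the environment, you can confirm that the interpreter meets the Python >= 3.12 requirement listed above. This is a minimal sketch, not part of the repository:

```python
import sys

# Sanity check: the activated environment should provide Python >= 3.12,
# matching the requirement listed in this README.
required = (3, 12)
ok = sys.version_info[:2] >= required
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"{'OK' if ok else 'please re-run pdm use inside the project .venv'}")
```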
We use Omniverse Kit for visualizing .usd files; use the link to download the software.
After extracting the compressed file, run the following inside the extracted folder to start the visualizer:

```shell
./omni.app.editor.base.sh
```

The main environment we use is package_parameter_optim:
```shell
$ python -m src.envs.package_parameter_optim --help
usage: package_parameter_optim.py [-h] [--data_dir DATA_DIR] [--output_dir OUTPUT_DIR] [--iterations ITERATIONS] [--train_test_ratio TRAIN_TEST_RATIO] [--init_points INIT_POINTS]
                                  [--eval EVAL] [--benchmark BENCHMARK]

options:
  -h, --help            show this help message and exit
  --data_dir DATA_DIR   Directory containing the data files. (default: data)
  --output_dir OUTPUT_DIR
                        Directory to save the output files. (default: result)
  --iterations ITERATIONS
                        Number of iterations for sampling. (default: 1000)
  --train_test_ratio TRAIN_TEST_RATIO
                        Ratio of training data to test data. (default: 0.8)
  --init_points INIT_POINTS
                        Number of initial registration points for Bayesian optimization. (default: 5)
  --eval EVAL           Evaluation mode. (default: False)
  --benchmark BENCHMARK
                        Benchmark mode. (default: False)
```

To run parameter identification based on the experiment data, run:
```shell
python -m src.envs.package_parameter_optim --iterations 200  # e.g., 200 iterations
```

After it finishes, ./result/package_parameter_sampling/best_params.csv, which stores the best parameters, will be generated.
For detailed evaluation, .usd files for each iteration will be stored in ./result/package_parameter_sampling/sampling/*, and the optimization history will be reported in ./result/package_parameter_sampling/package_parameter.csv.
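If you want to inspect the optimization history yourself, the best run is simply the row with the lowest loss. The sketch below is illustrative only: it uses an inline sample instead of the real ./result/package_parameter_sampling/package_parameter.csv, and the column names ("iteration", "loss", and the parameter columns) are assumptions — check the generated header for the actual schema:

```python
import csv
import io

# Inline stand-in for ./result/package_parameter_sampling/package_parameter.csv.
# Column names are hypothetical; inspect the real file's header.
sample = io.StringIO(
    "iteration,loss,youngs_modulus,poisson_ratio\n"
    "0,0.42,1.0e5,0.30\n"
    "1,0.35,1.3e5,0.28\n"
    "2,0.51,0.9e5,0.33\n"
)

rows = list(csv.DictReader(sample))
best = min(rows, key=lambda r: float(r["loss"]))
print(best)  # the row with the lowest loss
```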
We also provide several scripts for ease of evaluation. Run:

```shell
python ./scripts/best_parameter.py
```

to generate best.csv, which will be used for evaluation and benchmarking in the environment.
Evaluation mode runs both the optimized and perturbed parameters through all train and test trajectories. Benchmark mode tests the optimized parameters on the test trajectories with varying numbers of particles to report the loss and FPS differences.
```shell
python -m src.envs.package_parameter_optim --eval True
python -m src.envs.package_parameter_optim --benchmark True
```

After they have finished, use another script to print a statistical report in the terminal:
```shell
python ./scripts/report.py --mode evaluation
python ./scripts/report.py --mode benchmark
```

In the evaluation report, train_{idx}/test_{idx} prints the losses over the train/test trajectories under a 10*{idx}% perturbation of the optimized parameters. In the benchmark report, test_{idx} prints the losses over the test trajectories obtained by adding {idx} particles between sampled points for the package simulation.
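To make the perturbation levels concrete, the sketch below maps each {idx} to its 10*{idx}% range around a hypothetical optimized parameter value. The multiplicative form (value * (1 ± 0.1 * idx)) and the example stiffness value are assumptions for illustration only; the actual perturbation scheme may differ per parameter:

```python
# Hypothetical optimized parameter value, for illustration only.
optimized = 1.0e5

for idx in range(4):
    frac = 0.10 * idx  # train_0 -> 0%, train_1 -> 10%, train_2 -> 20%, ...
    low, high = optimized * (1 - frac), optimized * (1 + frac)
    print(f"train_{idx}: +/-{frac:.0%} perturbation -> [{low:.3g}, {high:.3g}]")
```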
TBA