Adrien Meyer, Aditya Murali, Farahdiba Zarin, Didier Mutter, Nicolas Padoy
This example guides you through downloading UltraSam and running it in inference mode on a sample dataset. The COCO-format sample dataset is in "./sample_dataset" (built from MMOTU2D samples).

Clone the repo:
git clone https://github.com/CAMMA-public/UltraSam
cd UltraSam

Create a conda environment and activate it (tested with CUDA 11.8 and gcc 12):
conda create --name UltraSam python=3.8 -y
conda activate UltraSam

Install the OpenMMLab suite and other dependencies:
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118
pip install -U openmim
mim install mmengine
mim install "mmcv==2.1.0"
mim install mmdet
mim install mmpretrain
pip install tensorboard

Download the UltraSam checkpoint:
wget -O ./UltraSam.pth "https://s3.unistra.fr/camma_public/github/ultrasam/UltraSam.pth"
export PYTHONPATH=$PYTHONPATH:.
mim test mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_box_refine.py --checkpoint UltraSam.pth --cfg-options test_dataloader.dataset.data_root="sample_dataset" test_dataloader.dataset.ann_file="sample_coco_MMOTU2D.json" test_dataloader.dataset.data_prefix.img="sample_images" test_evaluator.ann_file="sample_dataset/sample_coco_MMOTU2D.json" --work-dir ./work_dir/example --show-dir ./show_dir

This runs inference on the sample dataset, overriding the base config inline with --cfg-options. Predicted masks are visible in the show-dir. That is it!
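To double-check the demo data, you can inspect the sample annotation file. Below is a minimal sketch, assuming the file follows the standard COCO layout ("images", "annotations", "categories"); the paths match the --cfg-options above.

```python
# sanity-check the sample COCO annotations used by the quick-start command
# (assumes the standard COCO layout: "images", "annotations", "categories")
import json
from collections import Counter

with open("sample_dataset/sample_coco_MMOTU2D.json") as f:
    coco = json.load(f)

print(f"images:      {len(coco['images'])}")
print(f"annotations: {len(coco['annotations'])}")
print(f"categories:  {[c['name'] for c in coco['categories']]}")

# masks per image, to spot images without any annotation
per_image = Counter(a["image_id"] for a in coco["annotations"])
print(f"images with at least one mask: {len(per_image)}")
```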
For a full installation, you may need to install a specific version of PyTorch, depending on your hardware. Create a conda environment and activate it:
conda create --name UltraSam python=3.8 -y
conda activate UltraSam

Install the OpenMMLab suite and other dependencies:
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"
mim install mmdet
mim install mmpretrain

If you wish to process the datasets:
pip install SimpleITK
pip install scikit-image
pip install scipy

The pre-trained UltraSam model checkpoint is available at https://s3.unistra.fr/camma_public/github/ultrasam/UltraSam.pth.
To train or test, you will need a coco.json annotation file; either create a symbolic link to it or modify the config files to point to your annotation file.
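If you prefer editing a config over symlinking, here is a minimal sketch, assuming the test config exposes the same dataloader keys as the --cfg-options in the quick-start example; all dataset paths below are placeholders for your own data.

```python
# minimal sketch: point a test config at a custom COCO annotation file
# (keys mirror the --cfg-options of the quick-start example; paths are placeholders)
from mmengine.config import Config

cfg = Config.fromfile("configs/UltraSAM/UltraSAM_full/UltraSAM_box_refine.py")
cfg.test_dataloader.dataset.data_root = "path/to/your_dataset"           # placeholder
cfg.test_dataloader.dataset.ann_file = "your_annotations.coco.json"      # placeholder
cfg.test_dataloader.dataset.data_prefix.img = "images"                   # placeholder
cfg.test_evaluator.ann_file = "path/to/your_dataset/your_annotations.coco.json"

# write the modified config next to the original and pass it to `mim test`
cfg.dump("configs/UltraSAM/UltraSAM_full/UltraSAM_box_refine_custom.py")
```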
To train from scratch, you can use the code in weights to download and convert the SAM, MedSAM, and adapter weights.
Locally, inside the UltraSam repo:
export PYTHONPATH=$PYTHONPATH:.
mim train mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_point_refine.py --gpus 4 --launcher pytorch --work-dir ./work_dirs/UltraSam
mim test mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_point_refine.py --checkpoint ./work_dirs/UltraSam/iter_30000.pth
mim test mmdet configs/UltraSAM/UltraSAM_full/UltraSAM_box_refine.py --checkpoint ./work_dirs/UltraSam/iter_30000.pth
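Once training has produced a checkpoint, you can inspect it before testing. A minimal sketch, assuming the usual MMEngine checkpoint layout (a dict containing at least a "state_dict" entry):

```python
# minimal sketch: inspect a training checkpoint before passing it to `mim test`
# (assumes the usual MMEngine layout: a dict with at least a "state_dict" entry)
import torch

ckpt = torch.load("./work_dirs/UltraSam/iter_30000.pth", map_location="cpu")
print(sorted(ckpt.keys()))              # top-level entries saved by the runner
state = ckpt.get("state_dict", ckpt)    # fall back to a bare state_dict
print(f"{len(state)} parameter tensors")
for name in list(state)[:5]:            # peek at the first few parameter names and shapes
    print(name, tuple(state[name].shape))
```

The downstream classification and segmentation baselines are trained the same way, one config per dataset and backbone: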
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/resnet50.py \
--work-dir ./work_dirs/classification/BUSBRA/resnet
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/MedSAM.py \
--work-dir ./work_dirs/classification/BUSBRA/MedSam
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/SAM.py \
--work-dir ./work_dirs/classification/BUSBRA/Sam
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/UltraSam.py \
--work-dir ./work_dirs/classification/BUSBRA/UltraSam
mim train mmpretrain configs/UltraSAM/UltraSAM_full/downstream/classification/BUSBRA/ViT.py \
--work-dir ./work_dirs/classification/BUSBRA/ViT
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/resnet.py \
--work-dir ./work_dirs/segmentation/BUSBRA/resnet
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/UltraSam.py \
--work-dir ./work_dirs/segmentation/BUSBRA/UltraSam_3000
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/SAM.py \
--work-dir ./work_dirs/segmentation/BUSBRA/SAM
mim train mmdet configs/UltraSAM/UltraSAM_full/downstream/segmentation/BUSBRA/MedSAM.py \
--work-dir ./work_dirs/segmentation/BUSBRA/MedSAM

Ultrasound imaging presents a substantial domain gap compared to other medical imaging modalities; building an ultrasound-specific foundation model therefore requires a specialized large-scale dataset. To build such a dataset, we crawled a multitude of platforms for ultrasound data. We arrived at US-43d, a collection of 43 datasets covering 20 different clinical applications, containing over 280,000 annotated segmentation masks from both 2D and 3D scans.
The datasets composing US-43d and their sources are listed below.
| Dataset | Link |
|---|---|
| 105US | researchgate |
| AbdomenUS | kaggle |
| ACOUSLIC | grand-challenge |
| ASUS | onedrive |
| AUL | zenodo |
| brachial plexus | github |
| BrEaST | cancer imaging archive |
| BUID | qamebi |
| BUS_UC | mendeley |
| BUS_UCML | mendeley |
| BUS-BRA | github |
| BUS (Dataset B) | mmu |
| BUSI | HomePage |
| CAMUS | insa-lyon |
| CardiacUDC | kaggle |
| CCAUI | mendeley |
| DDTI | github |
| EchoCP | kaggle |
| EchoNet-Dynamic | github |
| EchoNet-Pediatric | github |
| FALLMUD | kalisteo |
| FASS | mendeley |
| Fast-U-Net | github |
| FH-PS-AOP | zenodo |
| GIST514-DB | github |
| HC | grand-challenge |
| kidneyUS | github |
| LUSS_phantom | Leeds |
| MicroSeg | zenodo |
| MMOTU-2D | github |
| MMOTU-3D | github |
| MUP | zenodo |
| regPro | HomePage |
| S1 | ncbi |
| Segthy | TUM |
| STMUS_NDA | mendeley |
| STU-Hospital | github |
| TG3K | github |
| Thyroid US Cineclip | stanford |
| TN3K | github |
| TNSCUI | grand-challenge |
| UPBD | HomePage |
| US nerve Segmentation | kaggle |
Once you have downloaded the datasets:
Run each converter in datasets/datasets
# run coco converters
# then preprocessing
python datasets/tools/merge_subdir_coco.py
python datasets/tools/split_coco.py
python datasets/tools/create_agnostic_coco.py path_to_datas_root --mode train
python datasets/tools/create_agnostic_coco.py path_to_datas_root --mode val
python datasets/tools/create_agnostic_coco.py path_to_datas_root --mode test
python datasets/tools/merge_agnostic_coco.py path_to_datas_root path_to_datas_root/train.agnostic.noSmall.coco.json --mode train
python datasets/tools/merge_agnostic_coco.py path_to_datas_root path_to_datas_root/val.agnostic.noSmall.coco.json --mode val
python datasets/tools/merge_agnostic_coco.py path_to_datas_root path_to_datas_root/test.agnostic.noSmall.coco.json --mode test
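After the merge step, a quick sanity check of the resulting files can help. A minimal sketch, assuming the merged files follow the standard COCO layout and keep the names used in the commands above:

```python
# minimal sketch: sanity-check the merged agnostic COCO files produced above
# (assumes the standard COCO layout: "images", "annotations", "categories")
import json

for split in ("train", "val", "test"):
    path = f"path_to_datas_root/{split}.agnostic.noSmall.coco.json"  # same naming as the merge commands
    with open(path) as f:
        coco = json.load(f)
    print(f"{split}: {len(coco['images'])} images, {len(coco['annotations'])} masks")
```

If you find our work helpful for your research, please consider citing us using the following BibTeX entry: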
@article{meyer2025ultrasam,
title={UltraSam: a foundation model for ultrasound using large open-access segmentation datasets},
author={Meyer, Adrien and Murali, Aditya and Zarin, Farahdiba and Mutter, Didier and Padoy, Nicolas},
journal={International Journal of Computer Assisted Radiology and Surgery},
pages={1--10},
year={2025},
publisher={Springer}
}