Leveraging Segment Anything Model for Source-Free Domain Adaptation via Dual Feature Guided Auto-Prompting

This repository contains the PyTorch implementation of our source-free domain adaptation (SFDA) method with the Dual Feature Guided (DFG) auto-prompting approach. (arXiv)

Introduction

Source-free domain adaptation (SFDA) for segmentation aims to adapt a model trained in the source domain to perform well in the target domain, given only the source model and unlabeled target data. Inspired by the recent success of the Segment Anything Model (SAM), which can segment images of various modalities and from different domains given human-annotated prompts such as bounding boxes or points, we explore for the first time the potential of SAM for SFDA by automatically finding an accurate bounding box prompt. We find that bounding boxes directly generated by existing SFDA approaches are defective due to the domain gap. To tackle this issue, we propose a novel Dual Feature Guided (DFG) auto-prompting approach to search for the box prompt. Specifically, the source model is first trained in a feature aggregation phase, which not only preliminarily adapts the source model to the target domain but also builds a feature distribution well suited to box prompt search. In the second phase, based on two observations about the feature distribution, we gradually expand the box prompt under the guidance of the target-model features and the SAM features, which handle class-wise clustered and class-wise dispersed target features, respectively. To remove the potentially enlarged false-positive regions caused by over-confident predictions of the target model, the refined pseudo-labels produced by SAM are further postprocessed with a connectivity analysis. Experiments on 3D and 2D datasets indicate that our approach yields superior performance compared to conventional methods.
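To make the connectivity-analysis postprocessing concrete, the sketch below shows one common form of such analysis: keeping only the largest connected foreground component of a binary pseudo-label mask. This is a minimal illustration under that assumption, not the repository's actual postprocessing code; the function name and the toy example are hypothetical.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected foreground region of a binary mask.

    Minimal sketch of connectivity-based postprocessing: small false-positive
    islands (e.g. from over-confident target-model predictions) are dropped.
    """
    labeled, num = ndimage.label(mask > 0)        # label 4-connected components
    if num == 0:                                  # empty mask: nothing to keep
        return np.zeros_like(mask)
    sizes = np.bincount(labeled.ravel())[1:]      # component sizes, background excluded
    largest = int(np.argmax(sizes)) + 1           # component ids start at 1
    return (labeled == largest).astype(mask.dtype)

# Tiny usage example: a 9-pixel blob survives, a 1-pixel island is removed.
m = np.zeros((8, 8), dtype=np.uint8)
m[1:4, 1:4] = 1
m[6, 6] = 1
assert keep_largest_component(m).sum() == 9
```

In a multi-class setting, the same filter would be applied per class before the masks are merged back into a pseudo-label map.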

Figure: Take the spleen in a target-domain image under MRI⟶CT adaptation as an example. (a) MedSAM requires an accurate bounding box prompt: neither a too-small nor a too-large box yields a decent segmentation result. (b) Segmentation results of ProtoContra and the corresponding bounding boxes produced with different output probability thresholds. Due to the domain gap and the limited knowledge available from the source model and unlabeled target data, it is hard for existing SFDA methods to generate precise box prompts even when the probability threshold is varied. (c) After feature aggregation, our dual feature guided bounding box prompt search finds an accurate box prompt for MedSAM, yielding refined pseudo-labels. (d) The search procedure of our proposed box prompt search method; the red numbers are box indices, corresponding to the horizontal axis in (e). (e) The number of pixels whose MedSAM prediction changes when the box prompt is switched from the previous box to the current one. The MedSAM prediction stays stable when the box prompt fluctuates near the ground truth; we exploit this property to find the optimal box prompt.
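The stability property described in (e) can be turned into a simple search loop. The sketch below is only illustrative of that idea, not the DFG search algorithm itself: `predict_with_box` stands in for any box-prompted MedSAM call, the candidate boxes are assumed to come from the dual-feature-guided expansion, and `stability_tol` is a hypothetical threshold.

```python
import numpy as np
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in pixel coordinates

def search_stable_box(
    boxes: List[Box],
    predict_with_box: Callable[[Box], np.ndarray],  # box-prompted MedSAM call (assumed interface)
    stability_tol: int = 50,                        # hypothetical pixel-change threshold
) -> Box:
    """Return the first candidate box whose MedSAM prediction has stabilized.

    Illustrative sketch of panel (e): as the box is gradually expanded, count
    how many pixels of the prediction change between consecutive prompts; once
    the change stays small, the box is assumed to be near the ground truth.
    """
    prev_mask = predict_with_box(boxes[0]) > 0
    for box in boxes[1:]:
        mask = predict_with_box(box) > 0
        changed = int(np.logical_xor(mask, prev_mask).sum())  # flipped pixels
        if changed < stability_tol:
            return box                      # prediction stable: accept this prompt
        prev_mask = mask
    return boxes[-1]                        # fall back to the largest candidate
```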

Our method: [framework overview figure]

Segmentation results: [qualitative results figure]

Installation

Create the environment from the environment.yml file:

conda env create -f environment.yml
conda activate dfg

Data preparation

Training

The following are the steps for the CHAOS (MRI) to BTCV (CT) adaptation.

  • Download the source domain model from here, or train it yourself: specify the data path in configs/train_source_seg.yaml and then run
python main_trainer_source.py --config_file configs/train_source_seg.yaml
  • Download the model trained after the feature aggregation phase from here, or produce it yourself: specify the source model path and the data path in configs/train_target_adapt_FA.yaml, and then run
python main_trainer_fa.py --config_file configs/train_target_adapt_FA.yaml
  • Download the MedSAM model checkpoint from here and put it under ./medsam/work_dir/MedSAM (a box-prompted inference sketch follows this list).
  • Specify the path of the model after feature aggregation, the data path, and the refined pseudo-label paths in configs/train_target_adapt_SAM.yaml, and then run
python main_trainer_sam.py --config_file configs/train_target_adapt_SAM.yaml
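For reference, box-prompted MedSAM inference can be sketched with the segment-anything API, since MedSAM is a fine-tuned ViT-B SAM. The snippet below is an assumption-laden sketch rather than the inference code used under ./medsam: the checkpoint filename, image path, device, and box coordinates are placeholders, and the repository's own preprocessing may differ.

```python
import numpy as np
from skimage import io
from segment_anything import sam_model_registry, SamPredictor

# Assumption: the downloaded MedSAM weight loads via the ViT-B SAM registry.
ckpt = "./medsam/work_dir/MedSAM/medsam_vit_b.pth"   # placeholder filename
sam = sam_model_registry["vit_b"](checkpoint=ckpt).to("cuda")
predictor = SamPredictor(sam)

image = io.imread("example_ct_slice.png")            # placeholder RGB slice
predictor.set_image(image)                           # computes the image embedding once

# Box prompt in (x0, y0, x1, y1) pixel coordinates, e.g. the box found by the
# dual-feature-guided search (placeholder values here).
box = np.array([120, 80, 220, 180])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
pseudo_label = masks[0]                              # boolean mask for the prompted organ
```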

Acknowledgement

We would like to thank the authors of the following great open-source projects: ProtoContra and MedSAM.

Citation

@ARTICLE{11079936,
  author={Huai, Zheang and Tang, Hui and Li, Yi and Chen, Zhuangzhuang and Li, Xiaomeng},
  journal={IEEE Transactions on Medical Imaging}, 
  title={Leveraging Segment Anything Model for Source-Free Domain Adaptation via Dual Feature Guided Auto-Prompting}, 
  year={2025},
  volume={},
  number={},
  pages={1-1},
  keywords={Adaptation models;Image segmentation;Foundation models;Data models;Biomedical imaging;Predictive models;Accuracy;Uncertainty;Training;Spleen;Source-free domain adaptation;Segment Anything Model;Prompt;Bounding box},
  doi={10.1109/TMI.2025.3587733}}
