A PyTorch implementation of a diffusion model for conditional cortical thickness forecasting using spherical convolutions and Brownian bridge processes.
Spherical Brownian Bridge Diffusion Models for Conditional Cortical Thickness Forecasting
Ivan Stoyanov*, Fabian Bongratz*, Christian Wachinger
*Equal contribution
- Paper: arXiv:2509.08442
- Repository: https://github.com/ai-med/SBDM
This repository contains the implementation of a Spherical Brownian Bridge Diffusion Model (SBDM) designed for longitudinal cortical surface analysis. The model learns to predict cortical morphology changes between baseline and follow-up scans using:
- Spherical Convolutions: Adapted for cortical surface processing using icosahedral meshes
- Brownian Bridge Process: A diffusion process that bridges between baseline and follow-up states
- Conditional Generation: Incorporates demographic and clinical variables (age, sex, diagnosis)
- Longitudinal Analysis: Handles multiple time points and temporal dependencies
Note: This implementation is adapted from the original BBDM (Brownian Bridge Diffusion Models) for image-to-image translation by Li et al. (2023). The original paper can be found at: https://arxiv.org/abs/2205.07680
- 🧠 Cortical Surface Processing: Specialized for brain cortical surface data
- 🔄 Longitudinal Modeling: Predicts morphology changes over time
- 🎯 Conditional Generation: Incorporates clinical and demographic variables
- 📊 Comprehensive Evaluation: Multiple loss functions and evaluation metrics
- ⚙️ Configurable: Flexible configuration system using Hydra
- 🚀 Scalable: Supports distributed training with Accelerate
- Python 3.8+
- PyTorch 1.12+
- CUDA 11.3+ (for GPU acceleration)
- Clone the repository:
git clone https://github.com/ai-med/SBDM.git
cd SBDM
- Run the installation script:
# On Linux/macOS:
./install.sh
# On Windows:
install.bat
Alternatively, for a manual installation:
- Clone the repository:
git clone <repository-url>
cd SBDM
- Create a conda environment:
conda env create -f environment.yml
conda activate sbdm
- Install the package:
pip install -e .
For a minimal installation with just the core dependencies:
pip install -r requirements-minimal.txt
Install SphericalUNet (if not available):
# Follow instructions from https://github.com/Deep-MI/SphericalUNetPackage
To start training with the default configuration:
python train.py
To override the data and model settings from the command line:
python train.py data=your_data_config model=your_model_config train.batch_size=32
To log runs with Weights & Biases:
python train.py use_wandb=true wandb.project=your_project_name
The project uses Hydra for configuration management. Key configuration files are located in the configs/ directory:
# Example: configs/data/adni_thickness.yaml
split_files:
  train: "data/ADNI/train.feather"
  val: "data/ADNI/val.feather"
  test: "data/ADNI/test.feather"
morph_scaler_path: "data/scalers/thickness_scaler.pkl"
age_scaler_path: "data/scalers/age_scaler.pkl"
template_ico: 4
morph: thickness
# Example: configs/model/bbdm_sphere_condition.yaml
architecture:
  type: ConditionalSphericalUNet
  dim: 64
  ico_order_in: 4
  dim_mults: [1, 2, 4, 8]
diffusion:
  seq_length: 2562
  num_timesteps: 1000
  objective: grad
  loss_type: l2
The model expects data in the following format:
- Feather files: Containing cortical thickness and metadata
- Required columns:
  - PTID: Subject identifier
  - IMAGEUID: Scan identifier
  - AGE: Age at scan
  - PTGENDER: Sex (Male/Female)
  - DX: Diagnosis (CN/MCI/Dementia)
  - Month: Follow-up month
- Cortical thickness: Last N columns (where N = number of vertices)
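As a hedged illustration of this layout, the snippet below loads a split file with pandas and separates the metadata columns from the per-vertex thickness values; the column ordering and the ico-4 vertex count of 2562 are assumptions taken from the configuration example above, not guarantees about the released data.

```python
import pandas as pd

# Hypothetical example: load a split file and separate metadata from the
# per-vertex thickness values (assumed to occupy the last N columns).
META_COLS = ["PTID", "IMAGEUID", "AGE", "PTGENDER", "DX", "Month"]
N_VERTICES = 2562  # ico-4 sphere, matching seq_length in the config above

df = pd.read_feather("data/ADNI/train.feather")
metadata = df[META_COLS]
thickness = df.iloc[:, -N_VERTICES:].to_numpy()  # (num_scans, N_VERTICES)
print(metadata.head())
print("Thickness shape:", thickness.shape)
```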
data/
├── ADNI/
│ ├── train.feather
│ ├── val.feather
│ └── test.feather
├── scalers/
│ ├── thickness_scaler.pkl
│ └── age_scaler.pkl
└── templates/
└── sphericalunet/
├── ico_0.ply
├── ico_1.ply
└── ...
The core architecture is a U-Net adapted for spherical data:
- Spherical Convolutions: 1-ring convolutions on icosahedral meshes
- Spherical Pooling: Hierarchical downsampling preserving mesh structure
- Cross-Attention: Attention mechanisms for conditioning
- Residual Blocks: Skip connections for stable training
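As a rough illustration (not the SphericalUNet API itself), a 1-ring spherical convolution can be thought of as gathering each vertex together with its ring neighbors on the icosahedral mesh and applying a shared linear map, analogous to a small kernel on a regular image grid. The neighbor-index tensor below is a hypothetical input.

```python
import torch
import torch.nn as nn

class OneRingConv(nn.Module):
    """Illustrative 1-ring convolution on an icosahedral mesh.

    `neighbor_idx` is a hypothetical (V, 7) index tensor holding, for each
    vertex, its own index plus its 6 (or 5, padded) ring neighbors.
    """

    def __init__(self, in_ch: int, out_ch: int, neighbor_idx: torch.Tensor):
        super().__init__()
        self.register_buffer("neighbor_idx", neighbor_idx)
        self.linear = nn.Linear(7 * in_ch, out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, V, in_ch) -> gather the 1-ring neighborhood of each vertex
        b, v, _ = x.shape
        ring = x[:, self.neighbor_idx, :]    # (batch, V, 7, in_ch)
        ring = ring.reshape(b, v, -1)        # (batch, V, 7 * in_ch)
        return self.linear(ring)             # (batch, V, out_ch)
```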
The diffusion process bridges between baseline and follow-up states:
- Forward Process: Adds noise according to a Brownian bridge schedule
- Reverse Process: Denoises to predict morphology changes
- Conditioning: Incorporates demographic and clinical variables
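For intuition, the sketch below follows the forward bridging step of the BBDM formulation by Li et al. (2023), in which the sample interpolates between the target state `x0` (follow-up) and the conditioning state `y` (baseline) with a noise variance that vanishes at both endpoints. This is illustrative only and may differ from the repository's exact implementation.

```python
import torch

def brownian_bridge_forward(x0, y, t, num_timesteps=1000):
    """Sample x_t on a Brownian bridge between x0 (target) and y (condition).

    BBDM-style schedule: m_t = t / T and
    x_t = (1 - m_t) * x0 + m_t * y + sqrt(2 * m_t * (1 - m_t)) * eps.
    """
    m_t = (t.float() / num_timesteps).view(-1, *([1] * (x0.dim() - 1)))
    var = 2.0 * m_t * (1.0 - m_t)
    eps = torch.randn_like(x0)
    x_t = (1.0 - m_t) * x0 + m_t * y + var.sqrt() * eps
    return x_t, eps
```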
The training process includes:
- Data Loading: Longitudinal cortical surface data
- Forward Diffusion: Add noise to ground truth changes
- Model Prediction: Predict the noise/objective
- Loss Computation: L1/L2 loss with optional masking
- Optimization: Adam optimizer with learning rate scheduling
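Putting these steps together, here is a hedged sketch of a single training step built on the bridge sampler sketched above. The model signature, the conditioning tensor, and the plain noise objective are assumptions; the actual objectives ('grad', 'ysubx') and masking in train.py may differ.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, x0, y, cond, num_timesteps=1000):
    """One hypothetical optimization step using the bridge sampler above."""
    # Draw a random timestep per sample and noise the target with the bridge.
    t = torch.randint(1, num_timesteps + 1, (x0.shape[0],), device=x0.device)
    x_t, eps = brownian_bridge_forward(x0, y, t, num_timesteps)
    pred = model(x_t, t, cond)         # network prediction at timestep t
    loss = F.mse_loss(pred, eps)       # L2 loss; optional vertex masking omitted
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```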
- batch_size: Training batch size (default: 64)
- learning_rate: Learning rate (default: 1e-4)
- num_steps: Number of training steps (default: 2000)
- num_timesteps: Diffusion timesteps (default: 1000)
- objective: Training objective ('grad', 'noise', 'ysubx')
- L1 Loss: Mean absolute error
- L2 Loss: Mean squared error
- Region-wise Loss: Per-cortical-region evaluation
- Longitudinal Consistency: Temporal coherence metrics
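As an example of the region-wise evaluation, a minimal sketch that averages absolute error within each cortical parcel. The parcellation tensor is a hypothetical input (e.g. per-vertex atlas labels) and is not part of this repository's documented interface.

```python
import torch

def region_wise_l1(pred, target, parcellation):
    """Mean absolute error per cortical region.

    pred, target: (batch, V) thickness tensors.
    parcellation: hypothetical (V,) tensor of integer atlas labels.
    """
    abs_err = (pred - target).abs()
    return {
        int(label): abs_err[:, parcellation == label].mean().item()
        for label in parcellation.unique()
    }
```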
python evaluate.py --checkpoint path/to/checkpoint.pt --data configs/data/test.yaml
The model outputs:
- Predicted Changes: Cortical morphology changes over time
- Uncertainty Estimates: Confidence in predictions
- Visualizations: Surface maps and statistical plots
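One simple way to obtain such uncertainty estimates from a diffusion model is to draw several reverse-process samples and summarize them per vertex; the sketch below assumes a hypothetical `sample_fn` wrapping the repository's sampling routine and is not its documented API.

```python
import torch

@torch.no_grad()
def predict_with_uncertainty(sample_fn, baseline, cond, num_samples=10):
    """Hypothetical helper: summarize repeated reverse-process samples.

    `sample_fn(baseline, cond)` is assumed to run the full reverse diffusion
    and return one predicted follow-up surface per call.
    """
    samples = torch.stack([sample_fn(baseline, cond) for _ in range(num_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, per-vertex uncertainty
```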
SBDM/
├── src/ # Source code
│ ├── data/ # Data loading and preprocessing
│ ├── models/ # Model architectures
│ └── utils/ # Utility functions
├── configs/ # Configuration files
│ ├── data/ # Data configurations
│ └── model/ # Model configurations
├── train.py # Training script
├── evaluate.py # Evaluation script
├── requirements.txt # Python dependencies
├── environment.yml # Conda environment
└── README.md # This file
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.
This work builds upon several open-source projects and datasets:
- BBDM: Original Brownian Bridge Diffusion Models implementation by Li et al. (2023)
- SphericalUNet: Spherical convolution operations from SphericalUNetPackage by Zhao et al. (2019)
- Denoising Diffusion Probabilistic Models (DDPM): For the diffusion framework
- ADNI dataset: For cortical surface data
If you use this work in your research, please cite our paper:
@article{stoyanov2025sbdm,
title={Spherical Brownian Bridge Diffusion Models for Conditional Cortical Thickness Forecasting},
author={Stoyanov, Ivan and Bongratz, Fabian and Wachinger, Christian},
journal={arXiv preprint arXiv:2509.08442},
year={2025}
}
Please also cite the foundational works that this research builds upon:
@inproceedings{li2023bbdm,
title={BBDM: Image-to-image translation with Brownian bridge diffusion models},
author={Li, Bo and Xue, Kaitao and Liu, Bin and Lai, Yu-Kun},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={1952--1961},
year={2023}
}
@article{zhao2019spherical,
title={Spherical U-Net on Cortical Surfaces: Methods and Applications},
author={Zhao, Fenqiang and Xia, Shunren and Wu, Zhengwang and Duan, Dingna and Wang, Li and Lin, Weili and Gilmore, John H and Shen, Dinggang and Li, Gang},
journal={arXiv preprint arXiv:1904.00906},
year={2019}
}
- SphericalUNet Import Error: Ensure the SphericalUNet package is properly installed
- CUDA Out of Memory: Reduce batch size or use gradient accumulation
- Data Loading Issues: Check file paths and data format
- Template Mesh Missing: Ensure template meshes are in the correct location
- Check the issues page for common problems
- Create a new issue with detailed error messages
- Include your configuration and system information
- Initial release
- Spherical UNet architecture
- Brownian bridge diffusion process
- Longitudinal cortical morphology prediction
- Comprehensive evaluation framework
