Providing a unified framework for evaluating and benchmarking glioblastoma models by assessing radiation plan coverage on recurrences observed in follow-up MRI exams.
- Easy-to-use image preprocessing and model evaluation pipeline that handles everything from DICOM conversion to registration, model prediction, radiation plan generation and evaluation.
- Dockerized versions of recent glioblastoma growth models, plus instructions for integrating novel methods
- Access to a preprocessed glioblastoma dataset comprising several hundred subjects
Prerequisites:
- Docker: Installation instructions on the official website
- NVIDIA Container Toolkit: Refer to the NVIDIA install guide and the official GitHub page
- dcm2niix: Required if you plan to process raw DICOM data.
The package can be installed from PyPI:
pip install predict-gbm
Preprocessed data can be obtained from Hugging Face.
Ready-to-use Dockerized versions of selected growth models are available on Hugging Face. Placing them in predict_gbm/models/, just like the provided test_model.tar, allows you to use them via algo_id="test_model".
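The registration step amounts to dropping the model archive into the package's model directory; the archive name then doubles as the algorithm identifier. A minimal sketch (the file name "my_model.tar" is a placeholder; the touch command merely stands in for downloading the archive):

```shell
# Stand-in for a model archive downloaded from Hugging Face (placeholder name).
touch my_model.tar
# Place it in the package's model directory so it can be selected via algo_id.
mkdir -p predict_gbm/models
mv my_model.tar predict_gbm/models/my_model.tar
# The model is then selectable with algo_id="my_model".
```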
Examples can be found in /scripts:
- single_dicom.py shows how to quickly process a single patient with DICOM files.
- single_nifti.py shows how to quickly process a single patient with NIfTI files.
- dataset_example.py shows how to use the PatientDataset to parse datasets.
- stepwise_processing.py shows how to run standalone pipeline components.
- evaluate_predict_gbm.py shows how to evaluate on the PredictGBM dataset.
This repository can be used to run inference with, or benchmark, your own tumor growth model. To this end, you need to create a Docker image of your growth model. The following sections serve as a guideline on how the image should be created.
Input and output data are passed to/from the container using mounted directories:
Input:
/mlcube_io0
┗ Patient-00000
┣ 00000-gm.nii.gz
┣ 00000-wm.nii.gz
┣ 00000-csf.nii.gz
┣ 00000-tumorseg.nii.gz
┗ 00000-pet.nii.gz
Output:
/mlcube_io1
┗ 00000.nii.gz
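A minimal inference.py skeleton matching this layout might look as follows. This is a sketch, not the package's actual implementation: the helper names are illustrative, and the model call is a placeholder you would replace with loading the NIfTI volumes (e.g. with nibabel) and saving your predicted tumor cell density.

```python
from pathlib import Path

IN_DIR = Path("/mlcube_io0")   # mounted input directory
OUT_DIR = Path("/mlcube_io1")  # mounted output directory

def collect_inputs(patient_dir: Path) -> dict:
    """Map each modality suffix to the corresponding NIfTI file for one patient."""
    pid = patient_dir.name.split("-")[-1]  # e.g. "Patient-00000" -> "00000"
    suffixes = ["gm", "wm", "csf", "tumorseg", "pet"]
    return {s: patient_dir / f"{pid}-{s}.nii.gz" for s in suffixes}

def run(in_dir: Path = IN_DIR, out_dir: Path = OUT_DIR) -> list:
    """Process every patient folder in in_dir and write one prediction per patient."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for patient_dir in sorted(p for p in in_dir.iterdir() if p.is_dir()):
        inputs = collect_inputs(patient_dir)  # paths to gm/wm/csf/tumorseg/pet volumes
        pid = patient_dir.name.split("-")[-1]
        # Placeholder: load `inputs` (e.g. with nibabel), run your growth model,
        # and save the predicted volume to out_path instead of touching an empty file.
        out_path = out_dir / f"{pid}.nii.gz"
        out_path.touch()
        written.append(out_path)
    return written
```

The container's entrypoint would simply call run(); note that the output file name is the bare patient ID, matching the tree above.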
Ensure the container adheres to the above I/O structure. An example Dockerfile could be:
# Image and environment variables
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Install python
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 python3-pip python3-dev git && \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --no-cache-dir --upgrade pip
WORKDIR /app
# Install requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy your code to workdir
COPY . .
ENTRYPOINT ["python3", "inference.py"]
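With such an image built, a typical invocation could look like the following; the image name and host paths are placeholders, and the --gpus flag assumes the NVIDIA Container Toolkit from the prerequisites is installed.

```shell
# Build the image (tag is a placeholder)
docker build -t my_growth_model .

# Run it with the expected I/O directories mounted (host paths are placeholders)
docker run --rm --gpus all \
  -v /path/to/inputs:/mlcube_io0 \
  -v /path/to/outputs:/mlcube_io1 \
  my_growth_model
```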
If you use PredictGBM in your research, please cite it to support the development!
TODO: citation will be added asap
Please open a new issue here.
Nice to have you on board! Please have a look at our CONTRIBUTING.md file.