
EchoCardMAE: Video Masked Auto-Encoders Customized for Echocardiography

Introduction

Figure: EchoCardMAE framework overview.
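As the title suggests, EchoCardMAE builds on video masked auto-encoding: most spatio-temporal patches are hidden and the model is trained to reconstruct them. A minimal sketch of the "tube" masking idea common in video MAE pretraining is shown below; the function name, mask ratio, and patch grid are illustrative only, not EchoCardMAE's actual hyperparameters.

```python
import random

def tube_mask(num_temporal_tokens, patches_per_frame, mask_ratio=0.9, seed=0):
    """Random tube mask: the same spatial patches are masked at every
    temporal position, so the model cannot copy pixels across frames.

    Returns a flat list of booleans of length
    num_temporal_tokens * patches_per_frame, where True means masked.
    """
    rng = random.Random(seed)
    num_masked = int(patches_per_frame * mask_ratio)
    masked_spatial = set(rng.sample(range(patches_per_frame), num_masked))
    mask = []
    for _ in range(num_temporal_tokens):
        mask.extend(p in masked_spatial for p in range(patches_per_frame))
    return mask

# Example: 8 temporal tokens over a 14x14 patch grid, 90% masked.
mask = tube_mask(8, 14 * 14, mask_ratio=0.9)
```

Because the mask is shared across time, each visible patch forms a "tube" through the clip, which is what makes reconstruction non-trivial for near-static video such as echocardiograms.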

Visualization

Reconstruction

EchoCardMAE reconstruction results on the EchoNet-Dynamic dataset.

Segmentation

Segmentation results on the EchoNet-Dynamic and CAMUS datasets.


Installation

```shell
# remove GIT_LFS_SKIP_SMUDGE=1 if you want to download the pretraining weights
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/m1dsolo/EchoCardMAE.git
cd EchoCardMAE
conda create -n EchoCardMAE python=3.10
conda activate EchoCardMAE
pip install -r requirements.txt
git submodule add --depth=1 https://github.com/m1dsolo/yangdl.git yangdl
cd yangdl
pip install -e .
```

Experimental environment:

  • PyTorch 2.5.1
  • Python 3.10.15
  • GPU memory 24GB

Usage

Data Preparation

  1. EchoNet-Dynamic: Download to EchoCardMAE/dataset/EchoNet-Dynamic
  2. CAMUS: Download to EchoCardMAE/dataset/CAMUS
  3. HMC-QU: Download to EchoCardMAE/dataset/hmcqu-dataset
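After downloading, the repository is assumed to have the following layout (a sketch derived from the paths above; the contents of each dataset folder follow the respective public releases):

```
EchoCardMAE/
└── dataset/
    ├── EchoNet-Dynamic/
    ├── CAMUS/
    └── hmcqu-dataset/
```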

Data preprocessing

  1. Ejection fraction (EF) prediction:

     ```shell
     python -m echonet.avi2npy
     ```

  2. Segmentation:

     ```shell
     python -m echonet.avi2edes_npy
     ```

Pre-training

You can use the pretraining weights we provide, or pretrain the model yourself:

```shell
python pretrain.py
```

Fine-tuning

  1. EF prediction:

     ```shell
     python -m echonet.train_ef
     python -m echonet.val_ef
     ```

  2. Segmentation:

     ```shell
     python -m echonet.train_seg
     ```
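For reference, the EF regression target is the ejection fraction, defined from the end-diastolic volume (EDV) and end-systolic volume (ESV) as EF = (EDV − ESV) / EDV × 100. A minimal sketch (function name is illustrative, not part of this codebase):

```python
def ejection_fraction(edv, esv):
    """Ejection fraction in percent from end-diastolic (edv) and
    end-systolic (esv) volumes."""
    if edv <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv - esv) / edv * 100.0

ef = ejection_fraction(120.0, 50.0)  # ≈ 58.33
```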

TODO

  • Upload the code for the CAMUS and HMC-QU experiments

Citation

TODO
