
xmed-lab/WSI-MLLM


WSI-MLLM

Towards Generalizable Pathology Reports via a Multimodal LLM with the Multicenter In-Context Learning

🤗 MMF dataset 🤗 WSI-MLLM models

Statement

This project is still under review (revision stage). Because the code is difficult to understand without the accompanying paper, the complete code will be open-sourced together with the preprint once the paper is accepted. This repository is mainly intended to provide the dataset and models, meaning the main data and model content is already open-sourced. If you are interested in our project, feel free to "Watch" this repo.

💡 Introduction

Pathology report generation has received increasing attention in recent years. However, existing pathology report generation methods still face two main limitations: (1) these methods utilize image-report datasets where some report contents are irrelevant to the given whole slide image (WSI), making it unreliable to train an AI model to generate such content; (2) existing methods trained on data from internal hospitals struggle to generalize to external hospitals due to significant and inevitable textual discrepancies across multiple centers. To address these challenges, we first introduce a multicenter microscopic findings (MMF) dataset, which consists of WSIs and corresponding reports of lung adenocarcinoma from multiple hospitals. The MMF dataset is designed to enable the model to generate relevant report contents and evaluate its generalization to external hospitals. Second, we propose a novel Multicenter In-Context Learning (MICL) method that effectively generalizes the model to external hospitals without the need for fine-tuning. Third, we propose a new WSI-MLLM that incorporates gigabyte-sized WSIs and their image pyramid structure into MLLMs for the first time. Experiments demonstrate that WSI-MLLM significantly outperforms existing pathology report generation methods, and MICL effectively accommodates multicenter discrepancies, achieving consistent performance across hospitals and improving BLEU-4 scores by up to 26.04%. These findings underscore the effectiveness of our WSI-MLLM and MICL in generating high-quality, generalizable pathology reports. The dataset and code will be released upon acceptance.

Eval

python eval.py --pred records/s1/pred.jsonl --gt records/s1/gt.json
python eval.py --pred records/s2/pred.jsonl --gt records/s2/gt.json
python eval.py --pred records/s3/pred.jsonl --gt records/s3/gt.json
python eval.py --pred records/s1_en/pred.jsonl --gt records/s1_en/gt.json
python eval.py --pred records/s2_en/pred.jsonl --gt records/s2_en/gt.json
python eval.py --pred records/s3_en/pred.jsonl --gt records/s3_en/gt.json

We apply 3 different data splits to ensure stable results, where s1 denotes data split 1 and the _en suffix denotes the English version.
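For reference, the core of such an evaluation can be sketched as a sentence-level BLEU-4 computation averaged over paired predictions and ground truths. Note that the file schema below (an "id" and "report" field per line in pred.jsonl, and an id-to-report mapping in gt.json) and the add-one smoothing are assumptions for illustration, not the released eval.py implementation:

```python
import json
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(reference: str, hypothesis: str) -> float:
    """Sentence-level BLEU-4 with add-one smoothing and brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_precisions = []
    for n in range(1, 5):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum((hyp_counts & ref_counts).values())  # clipped n-gram matches
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing so a single empty n-gram order does not zero the score
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    brevity = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return brevity * math.exp(sum(log_precisions) / 4)

def evaluate(pred_path: str, gt_path: str) -> float:
    """Average BLEU-4 over paired reports; the file schema is hypothetical."""
    with open(gt_path) as f:
        gt = json.load(f)  # assumed format: {case_id: reference_report}
    scores = []
    with open(pred_path) as f:
        for line in f:  # assumed format: one {"id": ..., "report": ...} per line
            rec = json.loads(line)
            scores.append(bleu4(gt[rec["id"]], rec["report"]))
    return sum(scores) / len(scores)
```

In practice the released eval.py also reports further metrics (e.g. CONCHScore, see below); this sketch only illustrates how predictions are paired with ground truths per split.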

Similarly, we released 3 models for each language on Hugging Face.

If you want to add CONCHScore, please modify the paths of CONCH in eval.py.

Paper and Citation

We will release the preprint once the paper is accepted.
