---
license: apache-2.0
pipeline_tag: image-feature-extraction
tags:
- medical
- cardiac MRI
- MRI
- CINE
- dynamic MRI
- representation learning
- unsupervised learning
- 3D
- diffusion
- diffusion autoencoder
- autoencoder
- DiffAE
- 3D DiffAE
- UK Biobank
- latent space
library_name: pytorch
---
# UKBBLatent_Cardiac_20208_DiffAE3D_L128_S2023
Biobank-scale imaging provides a unique opportunity to characterise structural and functional cardiac phenotypes and how they relate to disease outcomes. However, deriving specific phenotypes from MRI data requires time-consuming expert annotation, which limits scalability and does not exploit how information-dense such image acquisitions are. In this study, we applied a 3D diffusion autoencoder to temporally resolved cardiac MRI data from 71,021 UK Biobank participants to derive latent phenotypes representing the human heart in motion. These phenotypes were reproducible, heritable (h² = 4–18%), and significantly associated with cardiometabolic traits and outcomes, including atrial fibrillation (P = 8.5 × 10⁻²⁹) and myocardial infarction (P = 3.7 × 10⁻¹²). Using latent space manipulation techniques, we directly interpreted and visualised what specific latent phenotypes were capturing in a given MRI.
## Model Details
During this research, the original [DiffAE](https://diff-ae.github.io/) model was adapted and extended to 3D to create the 3D DiffAE model, which was trained on the CINE Cardiac Long-axis 4-chamber view MRIs from the UK Biobank dataset using 5 different seeds. This model can be used to infer latent representations from similar cardiac MRIs, or as a pretrained model to be fine-tuned on other datasets or tasks.
It can also be used to generate synthetic cardiac MRIs similar to the training set.
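The exact inference pipeline is defined in the [ImLatent repository](https://github.com/GlastonburyGroup/ImLatent); as a minimal, hedged sketch, the weights attached to this card can be fetched with `huggingface_hub` (the `repo_id` below is an assumption based on the card title, and the checkpoint layout depends on how the files are stored):

```python
# Hedged sketch: fetch this model's files from the Hugging Face Hub and list them.
# The repo_id is an assumption based on the card title; rebuilding the 3D DiffAE
# network and loading the state dict should follow the ImLatent repository
# (https://github.com/GlastonburyGroup/ImLatent).
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="soumickmj/UKBBLatent_Cardiac_20208_DiffAE3D_L128_S2023"  # assumed repo_id
)
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```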
### Model Description
- **Model type:** 3D DiffAE
- **Task:** Obtaining latent representations from 3D input volumes
- **Training dataset:** [CINE Cardiac Long-axis 4-chamber view MRIs from UK Biobank](https://biobank.ctsu.ox.ac.uk/crystal/field.cgi?id=20208)
- **Training seed:** 2023
- **Input:** 3D MRI (2D over time), intensity normalised (min-max scaling, followed by z-score normalisation with mean and standard deviation of 0.5; see the sketch after this list)
- **Output:** 128 latent factors. Can also be used for generating synthetic MRIs.
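As a minimal sketch of the input normalisation described above (not taken from the ImLatent code base), assuming a single-channel CINE stack loaded as a float tensor of shape `(T, H, W)`, i.e. 2D frames over time:

```python
# Minimal normalisation sketch, following the convention described above:
# min-max scaling to [0, 1], then z-score-style standardisation using 0.5 as both
# the mean and the standard deviation (values end up roughly in [-1, 1]).
import torch

def normalise_cine(volume: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Min-max scale to [0, 1], then shift/scale with mean 0.5 and std 0.5."""
    volume = volume.float()
    volume = (volume - volume.min()) / (volume.max() - volume.min() + eps)
    return (volume - 0.5) / 0.5

# Example: a dummy 2D+t acquisition with 50 frames of 128 x 128 pixels.
dummy = torch.rand(50, 128, 128)
print(normalise_cine(dummy).shape)  # torch.Size([50, 128, 128])
```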
### Model Sources
- **Repository:** https://github.com/GlastonburyGroup/ImLatent
- **Project page:** https://glastonburygroup.github.io/CardiacDiffAE_GWAS/
- **Preprint:** https://doi.org/10.1101/2024.11.04.24316700
## Citation
If you use this model, the code from this repository, or the provided weights in your research, please consider citing the following in your publications:
**BibTeX:**
```bibtex
@article{Ometto2024.11.04.24316700,
  author = {Ometto, Sara and Chatterjee, Soumick and Vergani, Andrea Mario and Landini, Arianna and Sharapov, Sodbo and Giacopuzzi, Edoardo and Visconti, Alessia and Bianchi, Emanuele and Santonastaso, Federica and Soda, Emanuel M and Cisternino, Francesco and Ieva, Francesca and Di Angelantonio, Emanuele and Pirastu, Nicola and Glastonbury, Craig A},
  title = {Unsupervised cardiac MRI phenotyping with 3D diffusion autoencoders reveals novel genetic insights},
  elocation-id = {2024.11.04.24316700},
  year = {2024},
  doi = {10.1101/2024.11.04.24316700},
  publisher = {Cold Spring Harbor Laboratory Press},
  url = {https://www.medrxiv.org/content/early/2024/11/05/2024.11.04.24316700},
  journal = {medRxiv}
}
```
**APA:**
Ometto, S., Chatterjee, S., Vergani, A. M., Landini, A., Sharapov, S., Giacopuzzi, E., … Glastonbury, C. A. (2024). Unsupervised cardiac MRI phenotyping with 3D diffusion autoencoders reveals novel genetic insights. medRxiv. doi:10.1101/2024.11.04.24316700