---
title: Medfusion App
emoji: 🔬
colorFrom: pink
colorTo: gray
sdk: streamlit
sdk_version: 1.15.2
app_file: streamlit/welcome.py
pinned: false
license: mit
---

Medfusion - Medical Denoising Diffusion Probabilistic Model 
=============

Paper
=======
Please see: [**Diffusion Probabilistic Models beat GANs on Medical 2D Images**]()

![](media/Medfusion.png)
*Figure: Medfusion*

![](media/animation_eye.gif) ![](media/animation_histo.gif) ![](media/animation_chest.gif)\
*Figure: Eye fundus, chest X-ray, and colon histology images generated with Medfusion (note: color quality is limited by the GIF format)*

Demo
=============
[Link]() to the Streamlit app.

Install
=============

Create a virtual environment and install the packages: \
`python -m venv venv` \
`source venv/bin/activate`\
`pip install -e .`


Get Started 
=============

1 Prepare Data
-------------

* Go to [medical_diffusion/data/datasets/dataset_simple_2d.py](medical_diffusion/data/datasets/dataset_simple_2d.py) and create a new `SimpleDataset2D` or write your own Dataset. 
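As a rough, framework-free sketch of the interface such a dataset needs (the class name `FolderDataset2D` and the returned dict key are illustrative assumptions, not the repo's actual API):

```python
from pathlib import Path

class FolderDataset2D:
    """Minimal stand-in for a SimpleDataset2D subclass: indexes all PNG
    files under a root folder and returns one item per file. The real
    class in dataset_simple_2d.py also loads and transforms the image;
    this sketch only shows the __len__/__getitem__ interface shape."""

    def __init__(self, root):
        self.paths = sorted(Path(root).glob("*.png"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # A real implementation would open the image here (e.g. with
        # PIL), apply transforms, and return a tensor.
        return {"source": self.paths[idx]}
```

The actual `SimpleDataset2D` additionally decodes and transforms each image before returning it.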


2 Train Autoencoder 
----------------
* Go to [scripts/train_latent_embedder_2d.py](scripts/train_latent_embedder_2d.py) and import your Dataset. 
* Load your dataset with, e.g., `SimpleDataModule` 
* Customize `VAE` to your needs 
* (Optional): Train a `VAEGAN` instead, or load a pre-trained `VAE` and set `start_gan_train_step=-1` to start GAN training immediately.
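For orientation when customizing the `VAE`: the two pieces every VAE shares are the reparameterization trick and the KL regularizer on the latent distribution. A minimal, framework-free sketch of both (plain Python for illustration, not the repo's implementation):

```python
import math, random

def reparameterize(mu, logvar, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), elementwise.
    Sampling this way keeps the draw differentiable w.r.t. mu/logvar."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def kl_divergence(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions:
    -0.5 * sum(1 + logvar - mu^2 - exp(logvar))."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, logvar))
```

The KL term is zero exactly when the encoder outputs a standard normal (`mu = 0`, `logvar = 0`), which is why it acts as a regularizer on the latent space.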

2.1 Evaluate Autoencoder 
----------------
* Use [scripts/evaluate_latent_embedder.py](scripts/evaluate_latent_embedder.py) to evaluate the performance of the Autoencoder. 

3 Train Diffusion 
----------------
* Go to [scripts/train_diffusion.py](scripts/train_diffusion.py) and import/load your Dataset as before.
* Load your pre-trained VAE or VAEGAN with `latent_embedder_checkpoint=...` 
* Use `cond_embedder = LabelEmbedder` for conditional training; otherwise set `cond_embedder = None` 
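The model trained here learns to reverse the standard DDPM forward (noising) process applied in latent space. A minimal sketch of that forward process in plain Python (the linear schedule endpoints are common DDPM defaults, not necessarily the values this repo uses):

```python
import math, random

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linear noise schedule, a common DDPM default."""
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def alpha_bar(betas, t):
    """Cumulative product of (1 - beta) up to and including step t."""
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def q_sample(x0, t, betas, rng=random):
    """Forward diffusion in closed form:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps, eps ~ N(0, 1)."""
    a_bar = alpha_bar(betas, t)
    return [math.sqrt(a_bar) * x + math.sqrt(1.0 - a_bar) * rng.gauss(0.0, 1.0)
            for x in x0]
```

At `t = 0` the sample is essentially the clean input; by the final step `alpha_bar` is close to zero and the sample is nearly pure noise, which is the distribution sampling starts from.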

3.1 Evaluate Diffusion 
----------------
* Go to [scripts/sample.py](scripts/sample.py) to sample a test image.
* Go to [scripts/helpers/sample_dataset.py](scripts/helpers/sample_dataset.py) to sample a more representative number of images.
* Use [scripts/evaluate_images.py](scripts/evaluate_images.py) to evaluate the performance of the generated samples (FID, Precision, Recall).

Acknowledgment 
=============
* The code builds upon https://github.com/lucidrains/denoising-diffusion-pytorch