Javier Pérez de Frutos committed
Commit 2a2378d · unverified · 2 parent(s): c292437 a84d9c8

Merge pull request #20 from jpdefrutos/master

Files changed (1):
  1. README.md +36 -13
README.md CHANGED

@@ -5,13 +5,13 @@
  <div align="center">

  <h1 align="center">DDMR: Deep Deformation Map Registration</h1>
- <h3 align="center">Train smarter, not harder: learning deep abdominal CT registration on scarce data</h3>

  # ⚠️***WARNING: Under construction***

- **DDMR** was developed by SINTEF Health Research. The corresponding manuscript describing the framework has been submitted to [PLOS ONE](https://journals.plos.org/plosone/) and the preprint is openly available on [arXiv](https://arxiv.org/abs/2211.15717).

-
  </div>

  ## 💻 Getting started
@@ -24,8 +24,27 @@ source venv/bin/activate

  2. Install requirements:
  ```
- pip install -r requirements.txt
  ```

  ## 🏋️‍♂️ Training
@@ -48,21 +67,25 @@ python EvaluationScripts/evaluation.py
  ## ✨ How to cite
  Please, consider citing our paper, if you find the work useful:
  <pre>
- @misc{perezdefrutos2022ddmr,
-     title = {Train smarter, not harder: learning deep abdominal CT registration on scarce data},
-     author = {Pérez de Frutos, Javier and Pedersen, André and Pelanis, Egidijus and Bouget, David and Survarachakan, Shanmugapriya and Langø, Thomas and Elle, Ole-Jakob and Lindseth, Frank},
-     year = {2022},
-     doi = {10.48550/ARXIV.2211.15717},
-     publisher = {arXiv},
-     copyright = {Creative Commons Attribution 4.0 International},
-     note = {preprint on arXiv at https://arxiv.org/abs/2211.15717}
  }
  </pre>

  ## ⭐ Acknowledgements
  This project is based on [VoxelMorph](https://github.com/voxelmorph/voxelmorph) library, and its related publication:
  <pre>
- @article{VoxelMorph2019,
  title={VoxelMorph: A Learning Framework for Deformable Medical Image Registration},
  author={Balakrishnan, Guha and Zhao, Amy and Sabuncu, Mert R. and Guttag, John and Dalca, Adrian V.},
  journal={IEEE Transactions on Medical Imaging},
 
  <div align="center">

  <h1 align="center">DDMR: Deep Deformation Map Registration</h1>
+ <h3 align="center">Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation</h3>

  # ⚠️***WARNING: Under construction***

+ **DDMR** was developed by SINTEF Health Research. The corresponding manuscript describing the framework has been published in [PLOS ONE](https://journals.plos.org/plosone/) and the manuscript is openly available [here](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0282110).
+

  </div>

  ## 💻 Getting started

  2. Install requirements:
  ```
+ pip install /path/to/clone/.
+ ```
+
+ ## 🤖 How to use
+ Use the following CLI command to register images
+ ```
+ ddmr --fixed path/to/fixed_image.nii.gz --moving path/to/moving_image.nii.gz --outputdir path/to/output/dir -a <anatomy> --model <model> --gpu <gpu-number> --original-resolution
+ ```
+ where:
+ * anatomy: is the type of anatomy you want to register: B (brain) or L (liver)
+ * model: is the model you want to use:
+     + BL-N (baseline with NCC)
+     + BL-NS (baseline with NCC and SSIM)
+     + SG-ND (segmentation guided with NCC and DSC)
+     + SG-NSD (segmentation guided with NCC, SSIM, and DSC)
+     + UW-NSD (uncertainty weighted with NCC, SSIM, and DSC)
+     + UW-NSDH (uncertainty weighted with NCC, SSIM, DSC, and HD).
+ * gpu: is the GPU number you want to the model to run on, if you have multiple and want to use only one GPU
+ * original-resolution: (flag) whether to upsample the registered image to the fixed image resolution (disabled if the flag is not present)
+
+ Use ```ddmr --help``` to see additional options like using precomputed segmentations to crop the images to the desired ROI, or debugging.

  ## 🏋️‍♂️ Training

  ## ✨ How to cite
  Please, consider citing our paper, if you find the work useful:
  <pre>
+ @article{perezdefrutos2022ddmr,
+     title = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
+     author = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
+     journal = {PLOS ONE},
+     publisher = {Public Library of Science},
+     year = {2023},
+     month = {02},
+     volume = {18},
+     doi = {10.1371/journal.pone.0282110},
+     url = {https://doi.org/10.1371/journal.pone.0282110},
+     pages = {1-14},
+     number = {2}
  }
  </pre>

  ## ⭐ Acknowledgements
  This project is based on [VoxelMorph](https://github.com/voxelmorph/voxelmorph) library, and its related publication:
  <pre>
+ @article{balakrishnan2019voxelmorph,
  title={VoxelMorph: A Learning Framework for Deformable Medical Image Registration},
  author={Balakrishnan, Guha and Zhao, Amy and Sabuncu, Mert R. and Guttag, John and Dalca, Adrian V.},
  journal={IEEE Transactions on Medical Imaging},
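
The "How to use" section added in this commit documents the `ddmr` CLI and its flag vocabulary. As a sketch only, the pieces can be assembled like this; every path is a placeholder, and anatomy `L` (liver) with model `UW-NSD` are example picks from the option lists in the README, not values taken from the repository:

```shell
# Hypothetical ddmr invocation assembled from the flags documented above.
# Placeholder inputs; replace with real NIfTI files before running.
FIXED=path/to/fixed_image.nii.gz
MOVING=path/to/moving_image.nii.gz
OUTDIR=path/to/output/dir

# -a L          -> register liver anatomy
# --model UW-NSD -> uncertainty-weighted model trained with NCC, SSIM, and DSC
# --gpu 0        -> pin the run to the first GPU
# --original-resolution -> upsample the result to the fixed image's resolution
CMD="ddmr --fixed $FIXED --moving $MOVING --outputdir $OUTDIR -a L --model UW-NSD --gpu 0 --original-resolution"
echo "$CMD"
```

Dropping `--original-resolution` would leave the registered image at the model's working resolution, per the flag description in the diff.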