--original-resolution
```
where:
* `anatomy`: the type of anatomy you want to register: B (brain) or L (liver)
* `model`: the model you want to use, where NCC stands for normalised cross-correlation, SSIM for structural similarity, DSC for Dice similarity coefficient, and HD for Hausdorff distance:
    + BL-N (baseline with NCC)
    + BL-NS (baseline with NCC and SSIM)
    + SG-ND (segmentation-guided with NCC and DSC)
    + SG-NSD (segmentation-guided with NCC, SSIM, and DSC)
    + UW-NSD (uncertainty-weighted with NCC, SSIM, and DSC)
    + UW-NSDH (uncertainty-weighted with NCC, SSIM, DSC, and HD)
* `gpu`: the number of the GPU you want the model to run on, useful if you have several GPUs and want to use only one
* `original-resolution`: (flag) whether to upsample the registered image to the fixed-image resolution (disabled if the flag is not present)
Use `ddmr --help` to see additional options, such as using precomputed segmentations to crop the images to the desired ROI, or debugging.
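For example, a liver registration with the uncertainty-weighted model on GPU 0 might look like the sketch below. The flag names other than `--original-resolution`, as well as the file paths, are illustrative assumptions; the full command is shown above, and `ddmr --help` lists the exact options.
```
# hypothetical invocation; verify flag names with `ddmr --help`
ddmr --fixed fixed_ct.nii.gz --moving moving_ct.nii.gz --outputdir results/ \
     --anatomy L --model UW-NSD --gpu 0 --original-resolution
```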
## 🤗 Demo
A live demo for easily testing the best-performing pretrained models was developed with Gradio and is deployed on `Hugging Face`.
To access the live demo, click on the `Hugging Face` badge above. Below is a snapshot of the current state of the demo app.
### Development
To develop the Gradio app locally, you can use either Python or Docker.
#### Python
You can run the app locally with:
```
python demo/app.py --cwd ./ --share 0
```
Then open `http://127.0.0.1:7860` in your favourite internet browser to view the demo.
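The `--share 0` argument presumably maps to Gradio's standard `share` option, in which case `--share 1` would additionally expose a temporary public URL; this is an assumption based on Gradio's usual behaviour, so check `demo/app.py` for the actual semantics.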
#### Docker
Alternatively, you can use Docker:
```
docker build -t ddmr .
docker run -it -p 7860:7860 ddmr
```
Then open `http://127.0.0.1:7860` in your favourite internet browser to view the demo.
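To let the containerised demo see a GPU, you would typically pass the device through at run time, as in the sketch below. This assumes the image ships a GPU-enabled runtime and that the NVIDIA Container Toolkit is installed on the host; neither is verified here.
```
# hypothetical GPU passthrough; requires the NVIDIA Container Toolkit
docker run -it --gpus all -p 7860:7860 ddmr
```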
## 🏋️‍♂️ Training
Use the "MultiTrain" scripts to launch the trainings, providing the neccesary parameters. Those in the COMET folder accepts a `.ini` configuration file (see `COMET/train_config_files/` for example configurations).
For instance:
```
python TrainingScripts/Train_3d.py
```
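A COMET training run driven by a configuration file might then be launched along these lines. The script name `COMET/MultiTrain_config.py`, the `--ini` flag, and the file name are all hypothetical placeholders based on the folder layout described above; check the MultiTrain scripts in the COMET folder for the actual entry point and arguments.
```
# hypothetical entry point and flag; see the COMET folder for the real ones
python COMET/MultiTrain_config.py --ini COMET/train_config_files/example.ini
```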
## 🔍 Evaluate
Use the "Evaluate_network" scripts to test the trained models. In the Brain folder, use `Evaluate_network__test_fixed.py` instead.
For instance:
```
python EvaluationScripts/evaluation.py
```
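For the brain models, the equivalent call would then be the following, assuming the script sits in the Brain folder as noted above:
```
python Brain/Evaluate_network__test_fixed.py
```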
## ✨ How to cite
Please consider citing our paper if you find the work useful:
```
@article{perezdefrutos2022ddmr,
  title = {Learning deep abdominal CT registration through adaptive loss weighting and synthetic data generation},
  author = {Pérez de Frutos, Javier AND Pedersen, André AND Pelanis, Egidijus AND Bouget, David AND Survarachakan, Shanmugapriya AND Langø, Thomas AND Elle, Ole-Jakob AND Lindseth, Frank},
  journal = {PLOS ONE},
  publisher = {Public Library of Science},
  year = {2023},
  month = {02},
  volume = {18},
  doi = {10.1371/journal.pone.0282110},
  url = {https://doi.org/10.1371/journal.pone.0282110},
  pages = {1-14},
  number = {2}
}
```
## ⭐ Acknowledgements
This project is based on the [VoxelMorph](https://github.com/voxelmorph/voxelmorph) library and its related publication:
```
@article{balakrishnan2019voxelmorph,
  title = {VoxelMorph: A Learning Framework for Deformable Medical Image Registration},
  author = {Balakrishnan, Guha and Zhao, Amy and Sabuncu, Mert R. and Guttag, John and Dalca, Adrian V.},
  journal = {IEEE Transactions on Medical Imaging},
  year = {2019},
  volume = {38},
  number = {8},
  pages = {1788-1800},
  doi = {10.1109/TMI.2019.2897538}
}
```