---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- multitask-image-reconstruction-image-segmentation
- IDSLR
- ATOMMIC
- pytorch
model-index:
- name: MTL_IDSLR_SKMTEA_poisson2d_4x
  results: []
---

## Model Overview

Image domain Deep Structured Low-Rank network (IDSLR) for 4x accelerated MRI reconstruction and segmentation on the SKM-TEA dataset.

## ATOMMIC: Training

To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you have installed the latest PyTorch version.

```bash
pip install atommic['all']
```

## How to Use this Model

The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf).

### Automatically instantiate the model

```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_IDSLR_SKMTEA_poisson2d_4x/blob/main/MTL_IDSLR_SKMTEA_poisson2d_4x.atommic
mode: test
```

### Usage

You need to download the SKMTEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/MTL/rs/SKMTEA/README.md) page for more information.

## Model Architecture
```yaml
model:
  model_name: IDSLR
  use_reconstruction_module: true
  input_channels: 64 # coils * 2
  reconstruction_module_output_channels: 64 # coils * 2
  segmentation_module_output_channels: 4
  channels: 64
  num_pools: 2
  padding_size: 11
  drop_prob: 0.0
  normalize: false
  padding: true
  norm_groups: 2
  num_iters: 5
  segmentation_loss:
    dice: 1.0
  dice_loss_include_background: true # always set to true if the background is removed
  dice_loss_to_onehot_y: false
  dice_loss_sigmoid: false
  dice_loss_softmax: false
  dice_loss_other_act: none
  dice_loss_squared_pred: false
  dice_loss_jaccard: false
  dice_loss_flatten: false
  dice_loss_reduction: mean_batch
  dice_loss_smooth_nr: 1e-5
  dice_loss_smooth_dr: 1e-5
  dice_loss_batch: true
  dice_metric_include_background: true # always set to true if the background is removed
  dice_metric_to_onehot_y: false
  dice_metric_sigmoid: false
  dice_metric_softmax: false
  dice_metric_other_act: none
  dice_metric_squared_pred: false
  dice_metric_jaccard: false
  dice_metric_flatten: false
  dice_metric_reduction: mean_batch
  dice_metric_smooth_nr: 1e-5
  dice_metric_smooth_dr: 1e-5
  dice_metric_batch: true
  segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
  segmentation_activation: sigmoid
  reconstruction_loss:
    l1: 1.0
  kspace_reconstruction_loss: false
  total_reconstruction_loss_weight: 0.5
  total_segmentation_loss_weight: 0.5
```

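The `input_channels: 64 # coils * 2` comment reflects the usual convention for feeding complex multi-coil data into a real-valued network: the real and imaginary parts of each coil image become separate channels, so 32 coils yield 64 channels. Below is a minimal sketch of that stacking (an illustration, not ATOMMIC's internal code; the coil count of 32 is inferred from the config comment):

```python
import numpy as np

def stack_complex_coils(coil_imgs: np.ndarray) -> np.ndarray:
    """Stack real and imaginary parts of each coil as separate channels.

    coil_imgs: complex array of shape (coils, H, W)
    returns:   real float32 array of shape (2 * coils, H, W)
    """
    return np.concatenate([coil_imgs.real, coil_imgs.imag], axis=0).astype(np.float32)

# 32 coils -> 64 input channels, matching `input_channels: 64  # coils * 2`
coil_imgs = np.random.randn(32, 160, 160) + 1j * np.random.randn(32, 160, 160)
x = stack_complex_coils(coil_imgs)
print(x.shape)  # (64, 160, 160)
```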
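The segmentation loss above is a soft Dice loss with the smoothing terms `dice_loss_smooth_nr` and `dice_loss_smooth_dr` set to `1e-5`. A minimal sketch of that formula follows (ATOMMIC's actual implementation supports the many additional options listed in the config; this only shows the core computation):

```python
import numpy as np

def soft_dice_loss(pred, target, smooth_nr=1e-5, smooth_dr=1e-5):
    """Soft Dice loss: 1 - (2*|p.t| + smooth_nr) / (|p| + |t| + smooth_dr)."""
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + smooth_nr) / (denom + smooth_dr)

mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0
print(round(soft_dice_loss(mask, mask), 6))  # 0.0 for a perfect prediction
```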
## Training
```yaml
optim:
  name: adam
  lr: 1e-4
  betas:
  - 0.9
  - 0.98
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 10
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```
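The `InverseSquareRootAnnealing` schedule with `warmup_ratio: 0.1` ramps the learning rate up linearly for the first 10% of steps and then decays it proportionally to the inverse square root of the step count. The sketch below shows the typical shape of such a schedule (an illustration; ATOMMIC's exact scheduler may differ in details):

```python
def inv_sqrt_lr(step, max_steps, base_lr=1e-4, warmup_ratio=0.1, min_lr=0.0):
    """Linear warmup for warmup_ratio * max_steps, then lr ~ 1/sqrt(step)."""
    warmup_steps = max(1, int(warmup_ratio * max_steps))
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear ramp to base_lr
    return max(min_lr, base_lr * (warmup_steps / (step + 1)) ** 0.5)

# lr peaks at the end of warmup, then decays smoothly toward min_lr
lrs = [inv_sqrt_lr(s, max_steps=1000) for s in range(1000)]
```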

## Performance

To compute the targets using the raw k-space and the chosen coil combination method, together with the chosen coil sensitivity maps estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf/targets) configuration files.

Evaluation can be performed using the [reconstruction](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) and [segmentation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) evaluation scripts for the reconstruction and segmentation tasks, respectively, with `--evaluation_type per_slice`.

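The reconstruction metrics reported below (MSE, NMSE, PSNR) follow standard definitions; the linked evaluation scripts are the authoritative implementation, but a quick sketch of how per-slice NMSE and PSNR are typically computed from a target/prediction pair is:

```python
import numpy as np

def nmse(target, pred):
    """Normalized MSE: ||t - p||^2 / ||t||^2."""
    return float(np.linalg.norm(target - pred) ** 2 / np.linalg.norm(target) ** 2)

def psnr(target, pred):
    """Peak SNR in dB, using the target's maximum as the peak value."""
    mse = float(np.mean((target - pred) ** 2))
    return float(20 * np.log10(target.max()) - 10 * np.log10(mse))

t = np.ones((8, 8))
p = t * 0.9  # uniform 10% error
print(round(psnr(t, p), 2))  # 20.0 dB
```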
Results
-------

Evaluation against SENSE targets
--------------------------------

| Acceleration | MSE | NMSE | PSNR | SSIM | DICE | F1 | HD95 | IOU |
|---|---|---|---|---|---|---|---|---|
| 4x | 0.001198 +/- 0.002485 | 0.02524 +/- 0.07112 | 30.38 +/- 5.67 | 0.8364 +/- 0.1061 | 0.8695 +/- 0.1342 | 0.225 +/- 0.1936 | 8.724 +/- 3.298 | 0.2124 +/- 0.1993 |

## Limitations

This model was trained on the SKM-TEA dataset for 4x accelerated MRI reconstruction and MRI segmentation with MultiTask Learning (MTL) on the axial plane.

## References

[1] [ATOMMIC](https://github.com/wdika/atommic)

[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022.