Model Overview
Image domain Deep Structured Low-Rank network (IDSLR) for 4x accelerated MRI reconstruction and segmentation on the SKM-TEA dataset.
ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install ATOMMIC. We recommend installing it after you have installed the latest PyTorch version.
pip install atommic['all']
How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found here.
Automatically instantiate the model
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_IDSLR_SKMTEA_poisson2d_4x/blob/main/MTL_IDSLR_SKMTEA_poisson2d_4x.atommic
mode: test
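If you prefer to fetch the checkpoint file locally instead of letting ATOMMIC resolve the URL above, a minimal sketch using the huggingface_hub Python client is shown below; the repository and file names are taken from the checkpoint URL, and the resulting local path can be substituted for the checkpoint value.

# Sketch: download the .atommic checkpoint locally with huggingface_hub.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="wdika/MTL_IDSLR_SKMTEA_poisson2d_4x",
    filename="MTL_IDSLR_SKMTEA_poisson2d_4x.atommic",
)
print(checkpoint_path)  # use this local path as the `checkpoint` value in the YAML above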
Usage
You need to download the SKM-TEA dataset to use this model effectively. Check the SKM-TEA page for more information.
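The SKM-TEA raw data are distributed as HDF5 files. The following sketch, which assumes a hypothetical file name, only lists the datasets stored in one file with h5py so you can verify the download; it is not part of ATOMMIC's data loading.

# Sketch: inspect the contents of one SKM-TEA HDF5 file.
import h5py

with h5py.File("MTR_001.h5", "r") as f:  # hypothetical file name
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)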
Model Architecture
model:
  model_name: IDSLR
  use_reconstruction_module: true
  input_channels: 64  # coils * 2
  reconstruction_module_output_channels: 64  # coils * 2
  segmentation_module_output_channels: 4
  channels: 64
  num_pools: 2
  padding_size: 11
  drop_prob: 0.0
  normalize: false
  padding: true
  norm_groups: 2
  num_iters: 5
  segmentation_loss:
    dice: 1.0
  dice_loss_include_background: true  # always set to true if the background is removed
  dice_loss_to_onehot_y: false
  dice_loss_sigmoid: false
  dice_loss_softmax: false
  dice_loss_other_act: none
  dice_loss_squared_pred: false
  dice_loss_jaccard: false
  dice_loss_flatten: false
  dice_loss_reduction: mean_batch
  dice_loss_smooth_nr: 1e-5
  dice_loss_smooth_dr: 1e-5
  dice_loss_batch: true
  dice_metric_include_background: true  # always set to true if the background is removed
  dice_metric_to_onehot_y: false
  dice_metric_sigmoid: false
  dice_metric_softmax: false
  dice_metric_other_act: none
  dice_metric_squared_pred: false
  dice_metric_jaccard: false
  dice_metric_flatten: false
  dice_metric_reduction: mean_batch
  dice_metric_smooth_nr: 1e-5
  dice_metric_smooth_dr: 1e-5
  dice_metric_batch: true
  segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
  segmentation_activation: sigmoid
  reconstruction_loss:
    l1: 1.0
  kspace_reconstruction_loss: false
  total_reconstruction_loss_weight: 0.5
  total_segmentation_loss_weight: 0.5
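The loss configuration above weights an L1 reconstruction loss and a Dice segmentation loss equally (0.5 each), applies a sigmoid to the segmentation output, and thresholds each of the 4 classes at 0.5. A minimal PyTorch-style sketch of that weighting is shown below; it uses a plain soft-Dice implementation as a stand-in for ATOMMIC's internal loss classes.

import torch
import torch.nn.functional as F


def soft_dice_loss(logits, target, smooth=1e-5):
    """Soft Dice loss on sigmoid probabilities (simplified stand-in for the configured Dice loss)."""
    probs = torch.sigmoid(logits)
    dims = (0, 2, 3)  # reduce over batch and spatial dims, keep one score per class
    intersection = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + smooth) / (denom + smooth)
    return 1 - dice.mean()


def total_mtl_loss(recon, recon_target, seg_logits, seg_target, w_recon=0.5, w_seg=0.5):
    """Weighted multitask objective: 0.5 * L1 (reconstruction) + 0.5 * Dice (segmentation)."""
    reconstruction_loss = F.l1_loss(recon, recon_target)
    segmentation_loss = soft_dice_loss(seg_logits, seg_target)
    return w_recon * reconstruction_loss + w_seg * segmentation_loss


def segmentation_masks(seg_logits, thresholds=(0.5, 0.5, 0.5, 0.5)):
    """Sigmoid activation followed by the per-class thresholds from the configuration."""
    probs = torch.sigmoid(seg_logits)  # (batch, classes, H, W)
    th = torch.tensor(thresholds, device=probs.device).view(1, -1, 1, 1)
    return (probs >= th).float()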
Training
optim:
  name: adam
  lr: 1e-4
  betas:
    - 0.9
    - 0.98
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 10
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
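The scheduler warms the learning rate up over the first 10% of training steps (warmup_ratio: 0.1) and then anneals it. Below is a sketch of a common inverse-square-root formulation with linear warmup; ATOMMIC's InverseSquareRootAnnealing implementation may differ in details such as the warmup shape or step offsets.

import math


def inverse_square_root_lr(step, max_steps, base_lr=1e-4, warmup_ratio=0.1, min_lr=0.0):
    """Common inverse-square-root schedule with linear warmup (illustrative, not ATOMMIC's exact code)."""
    warmup_steps = max(1, int(warmup_ratio * max_steps))
    if step < warmup_steps:
        lr = base_lr * (step + 1) / warmup_steps  # linear warmup
    else:
        lr = base_lr * math.sqrt(warmup_steps) / math.sqrt(step + 1)  # 1/sqrt decay
    return max(lr, min_lr)


# Example: learning rate at a few points of a 10,000-step run.
for s in (0, 500, 999, 2000, 9999):
    print(s, inverse_square_root_lr(s, max_steps=10_000))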
Performance
To compute the targets from the raw k-space data with the chosen coil combination method and the chosen coil sensitivity maps estimation method, you can use the targets configuration files.
Evaluation can be performed with the reconstruction and segmentation evaluation scripts for the respective tasks, using --evaluation_type per_slice.
Results
Evaluation against SENSE targets
4x: MSE = 0.001198 +/- 0.002485 NMSE = 0.02524 +/- 0.07112 PSNR = 30.38 +/- 5.67 SSIM = 0.8364 +/- 0.1061 DICE = 0.8695 +/- 0.1342 F1 = 0.225 +/- 0.1936 HD95 = 8.724 +/- 3.298 IOU = 0.2124 +/- 0.1993
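The values above are per-slice means and standard deviations. Below is a hedged sketch of how the reconstruction metrics (MSE, NMSE, PSNR, SSIM) and the Dice score can be computed for a single slice with numpy and scikit-image, as a stand-in for ATOMMIC's evaluation scripts.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def reconstruction_metrics(target, prediction):
    """MSE, NMSE, PSNR and SSIM for one magnitude slice (illustrative implementation)."""
    mse = np.mean((target - prediction) ** 2)
    nmse = np.linalg.norm(target - prediction) ** 2 / np.linalg.norm(target) ** 2
    data_range = target.max() - target.min()
    psnr = peak_signal_noise_ratio(target, prediction, data_range=data_range)
    ssim = structural_similarity(target, prediction, data_range=data_range)
    return {"MSE": mse, "NMSE": nmse, "PSNR": psnr, "SSIM": ssim}


def dice_score(target_mask, predicted_mask, smooth=1e-5):
    """Dice overlap for one binary segmentation mask."""
    intersection = np.sum(target_mask * predicted_mask)
    return (2 * intersection + smooth) / (target_mask.sum() + predicted_mask.sum() + smooth)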
Limitations
This model was trained on the SKM-TEA dataset for 4x accelerated MRI reconstruction and segmentation on the axial plane using MultiTask Learning (MTL).
References
[1] ATOMMIC
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022.