---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- BraTS2023AdultGlioma
thumbnail: null
tags:
- image-segmentation
- UNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet_BraTS2023AdultGlioma
results: []
---
## Model Overview
UNet for MRI segmentation on the BraTS2023AdultGlioma dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```shell
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/BraTS2023AdultGlioma/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet_BraTS2023AdultGlioma/blob/main/SEG_UNet_BraTS2023AdultGlioma.atommic
mode: test
```
### Usage
You need to download the BraTS 2023 Adult Glioma dataset to effectively use this model. Check the [BraTS2023AdultGlioma](https://github.com/wdika/atommic/blob/main/projects/SEG/BraTS2023AdultGlioma/README.md) page for more information.
## Model Architecture
```yaml
model:
  model_name: SEGMENTATIONUNET
  segmentation_module: UNet
  segmentation_module_input_channels: 4
  segmentation_module_output_channels: 4
  segmentation_module_channels: 32
  segmentation_module_pooling_layers: 5
  segmentation_module_dropout: 0.0
  segmentation_module_normalize: false
  segmentation_loss:
    dice: 1.0
  dice_loss_include_background: true  # always set to true if the background is removed
  dice_loss_to_onehot_y: false
  dice_loss_sigmoid: false
  dice_loss_softmax: false
  dice_loss_other_act: none
  dice_loss_squared_pred: false
  dice_loss_jaccard: false
  dice_loss_flatten: false
  dice_loss_reduction: mean_batch
  dice_loss_smooth_nr: 1e-5
  dice_loss_smooth_dr: 1e-5
  dice_loss_batch: true
  dice_metric_include_background: true  # always set to true if the background is removed
  dice_metric_to_onehot_y: false
  dice_metric_sigmoid: false
  dice_metric_softmax: false
  dice_metric_other_act: none
  dice_metric_squared_pred: false
  dice_metric_jaccard: false
  dice_metric_flatten: false
  dice_metric_reduction: mean_batch
  dice_metric_smooth_nr: 1e-5
  dice_metric_smooth_dr: 1e-5
  dice_metric_batch: true
  segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
  segmentation_activation: sigmoid
  magnitude_input: true
  log_multiple_modalities: true  # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
  normalization_type: minmax
  normalize_segmentation_output: true
  complex_data: false
```
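The `segmentation_activation: sigmoid` and `segmentation_classes_thresholds` settings above describe a simple post-processing step: each of the 4 output channels is passed through a sigmoid and then binarized at its per-class threshold (0.5 here). A minimal sketch in plain Python, assuming per-channel logits; the function names are illustrative and not part of the ATOMMIC API:

```python
import math


def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))


def logits_to_masks(logits, thresholds=(0.5, 0.5, 0.5, 0.5)):
    """Binarize per-class logits: sigmoid each value, then apply the
    corresponding class threshold. `logits` is a list of 4 channels,
    one list of raw values per segmentation class."""
    return [
        [1 if sigmoid(v) >= t else 0 for v in channel]
        for channel, t in zip(logits, thresholds)
    ]
```

Because the activation is a sigmoid (not a softmax), classes are predicted independently, so a voxel may belong to several tumor sub-regions at once, which matches the overlapping BraTS region definitions.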
## Training
```yaml
optim:
  name: adam
  lr: 1e-4
  betas:
    - 0.9
    - 0.98
  weight_decay: 0.0
  sched:
    name: InverseSquareRootAnnealing
    min_lr: 0.0
    last_epoch: -1
    warmup_ratio: 0.1

trainer:
  strategy: ddp
  accelerator: gpu
  devices: 1
  num_nodes: 1
  max_epochs: 10
  precision: 16-mixed
  enable_checkpointing: false
  logger: false
  log_every_n_steps: 50
  check_val_every_n_epoch: -1
  max_steps: -1
```
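In the common (Noam-style) formulation of an inverse-square-root annealing schedule, the learning rate warms up linearly over the first `warmup_ratio` fraction of steps and then decays proportionally to `1/sqrt(step)`. The sketch below illustrates that shape under this assumption; ATOMMIC's exact `InverseSquareRootAnnealing` implementation may differ in detail:

```python
def inverse_sqrt_lr(step, base_lr=1e-4, warmup_steps=100, min_lr=0.0):
    """Illustrative inverse-square-root annealing with linear warmup.
    `warmup_steps` would be derived from warmup_ratio * total steps."""
    if step < warmup_steps:
        # Linear warmup from ~0 to base_lr.
        return base_lr * (step + 1) / warmup_steps
    # Decay proportional to 1/sqrt(step), anchored so the curve is
    # continuous at the end of warmup, floored at min_lr.
    return max(min_lr, base_lr * (warmup_steps ** 0.5) / ((step + 1) ** 0.5))
```

With `warmup_steps=100`, the rate peaks at `base_lr` at the end of warmup and has halved by step 400.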
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.
Results
-------

Evaluation
----------

| DICE | F1 | HD95 | IOU |
| :--- | :--- | :--- | :--- |
| 0.9372 +/- 0.1175 | 0.6713 +/- 0.7867 | 3.504 +/- 2.089 | 0.5346 +/- 0.6628 |
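Per the model configuration (`dice_loss_smooth_nr`/`dice_loss_smooth_dr` of 1e-5), the Dice score is the standard overlap ratio with small smoothing terms in numerator and denominator to avoid division by zero on empty masks. An illustrative computation on binary masks, not the evaluation script itself:

```python
def dice_score(pred, target, smooth_nr=1e-5, smooth_dr=1e-5):
    """Dice = (2*|P intersect T| + smooth_nr) / (|P| + |T| + smooth_dr)
    for flat binary masks `pred` and `target` of equal length."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + smooth_nr) / (total + smooth_dr)
```

A perfect prediction scores ~1.0, and the smoothing terms keep the score finite (and near 1) when both masks are empty.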
## Limitations
This model was trained on the BraTS2023AdultGlioma dataset with stacked T1c, T1n, T2f, and T2w images, so its performance may differ from the official leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Kazerooni AF, Khalili N, Liu X, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). 2023