---
license: apache-2.0
---
# Instructions
## 1-Data preparation
This is how `inputs_data` is organized:
```
$inputs_data/
├── casename00001
│   └── ct.nii.gz
├── casename00002
│   └── ct.nii.gz
├── casename00003
│   └── ct.nii.gz
...
```
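If your scans are not yet arranged this way, a minimal sketch along these lines can build the expected layout. The source directory `/path/to/raw_scans`, the flat `<case_id>.nii.gz` naming, and the `inputs_data` location are assumptions; adapt them to how your data is actually stored.
```
# Sketch only: assumes raw scans are stored flat as /path/to/raw_scans/<case_id>.nii.gz
inputs_data=/path/to/inputs_data      # hypothetical path; use your own
mkdir -p "$inputs_data"
for scan in /path/to/raw_scans/*.nii.gz; do
    case_id=$(basename "$scan" .nii.gz)
    mkdir -p "$inputs_data/$case_id"
    cp "$scan" "$inputs_data/$case_id/ct.nii.gz"   # one ct.nii.gz per case folder
done
```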
## 2-Download
You can use either the Docker image (recommended) or the Singularity container.
#### Download the Docker image.
```
docker pull qchen99/suprem:v1
```
or
#### Download the Singularity container.
```
wget https://huggingface.co/qicq1c/SuPreM/resolve/main/suprem_final.sif
```
## 3-Inference
You can directly perform inference on your own data. Simply set `inputs_data` to your data path and `outputs_data` to the desired output location for the segmentation results.
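For example, a minimal sketch for setting these two paths before running either command below (the locations are placeholders, not part of this release):
```
# Hypothetical locations; point these at your own data and output folders.
export inputs_data=/data/my_ct_cases       # must follow the layout from step 1
export outputs_data=/data/suprem_outputs   # segmentation results are written here
mkdir -p "$outputs_data"
```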
#### Use Docker
```
sudo docker container run --gpus "device=0" -m 128G --rm -v $inputs_data:/workspace/inputs/ -v $outputs_data:/workspace/outputs/ qchen99/suprem:v1 /bin/bash -c "sh predict.sh"
```
or
#### Use Singularity
```
SINGULARITYENV_CUDA_VISIBLE_DEVICES=0 singularity run --nv -B $inputs_data:/workspace/inputs -B $outputs_data:/workspace/outputs suprem_final.sif
```
This is how `outputs_data` is organized:
```
$outputs_data/
├── casename00001
├── casename00002
├── casename00003
│   ├── combined_labels.nii.gz
│   └── segmentations
│       ├── aorta.nii.gz
│       ├── gall_bladder.nii.gz
│       ├── kidney_left.nii.gz
│       ├── kidney_right.nii.gz
│       ├── liver.nii.gz
│       ├── pancreas.nii.gz
│       ├── postcava.nii.gz
│       ├── spleen.nii.gz
│       ├── stomach.nii.gz
│       ...
...
```
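As a quick sanity check after inference (assuming `$outputs_data` is still set as above; `casename00001` is just an example case ID):
```
# List the per-organ masks written for one example case.
ls "$outputs_data"/casename00001/segmentations/
# Verify every case received a combined label map.
find "$outputs_data" -name combined_labels.nii.gz
```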