Update README.md
README.md CHANGED

language:
- en
tags:
- medical
---

This dataset is based on the BraTS2023 dataset.
It takes the 5 middle slices from each NIfTI volume of BraTS2023 after normalizing the intensities to the range (-1, 1).
All of the images are stored as `.npy` files and can be loaded with `np.load(FILEPATH).astype(np.float32)`.
We provide a training set and a test set containing 6255 and 1095 files, respectively.
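
As a quick sketch of the loading step described above (the file path below is only a placeholder), a single slice can be read and inspected like this:

```python
import numpy as np

# Placeholder path; substitute one of the provided .npy slice files.
slice_path = "train/example_slice.npy"

# Each file stores one 2D slice with intensities normalized to (-1, 1).
image = np.load(slice_path).astype(np.float32)
print(image.shape, image.min(), image.max())
```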

We highly recommend creating a separate validation set from the training set for downstream applications.
We use `PyTorch` for this, via the following snippet.

```python
import torch

# Fractional lengths are supported in recent PyTorch releases.
seed = 97
train_dataset, val_dataset = torch.utils.data.random_split(
    dataset, lengths=(0.9, 0.1), generator=torch.Generator().manual_seed(seed)
)  # dataset is the dataset instance built from the training files.
```
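
For reference, `dataset` above can be any map-style `torch.utils.data.Dataset` built over the training files. A minimal sketch (the class name and the flat directory layout are our own assumptions, not part of the dataset) could look like this:

```python
import glob

import numpy as np
import torch
from torch.utils.data import Dataset


class BraTSSliceDataset(Dataset):
    """Minimal example dataset over the provided .npy slice files."""

    def __init__(self, root):
        # Assumed layout: all slice files sit directly under `root`.
        self.files = sorted(glob.glob(f"{root}/*.npy"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Slices are already normalized to (-1, 1) as described above.
        image = np.load(self.files[idx]).astype(np.float32)
        return torch.from_numpy(image)
```

An instance of this class (e.g. `BraTSSliceDataset("train")`) can then be passed to `random_split` as `dataset`.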

This dataset is part of a paper that is currently under peer review.
It is mainly used for multi-domain medical image-to-image translation.

We hope this helps the community.