Update README.md
README.md

It consists of 390'035 images, which all have 512x512px dimensions and are in the webp format.

<figcaption>Visual example - the first 48 training tiles</figcaption>
</figure>
The advantage of such a big dataset is that when degradations are applied in a randomized manner to create a corresponding LR for paired SISR training, the distribution of degradations and their strengths should be sufficient because of the sheer quantity of training tiles. I will create some corresponding x4 LR datasets for this one and publish them as well.
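As a rough illustration, a randomized degradation pipeline for building such a paired x4 LR set could look like the sketch below. This is a minimal example, not the pipeline used for the released LR sets: the choice of OpenCV, the parameter ranges, the probabilities, and the file names are all assumptions.

```python
import random

import cv2
import numpy as np


def degrade_x4(hr: np.ndarray) -> np.ndarray:
    """Randomly blur, 4x-downscale, noise and JPEG-compress one HR tile."""
    img = hr.astype(np.float32)

    # Gaussian blur with randomized strength (sometimes skipped, so that
    # cleaner samples also occur in the distribution)
    if random.random() < 0.7:
        img = cv2.GaussianBlur(img, (0, 0), sigmaX=random.uniform(0.2, 2.0))

    # 4x downscale with a randomly chosen interpolation kernel
    h, w = img.shape[:2]
    interp = random.choice([cv2.INTER_AREA, cv2.INTER_LINEAR, cv2.INTER_CUBIC])
    img = cv2.resize(img, (w // 4, h // 4), interpolation=interp)

    # Additive Gaussian noise with randomized sigma
    if random.random() < 0.5:
        img += np.random.normal(0.0, random.uniform(1.0, 10.0), img.shape)
    lr = np.clip(img, 0, 255).astype(np.uint8)

    # JPEG compression round-trip with randomized quality
    if random.random() < 0.5:
        quality = random.randint(40, 95)
        _, buf = cv2.imencode(".jpg", lr, [cv2.IMWRITE_JPEG_QUALITY, quality])
        lr = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    return lr


# Build the LR counterpart of one 512x512 tile (-> 128x128);
# the tile name and output folder are hypothetical.
hr_tile = cv2.imread("BHI_HR/tile_000001.webp")
cv2.imwrite("BHI_LR_x4/tile_000001.webp", degrade_x4(hr_tile))
```

Since every tile draws its own degradation types and strengths, the resulting degradation distribution gets densely sampled across 390'035 pairs.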
Size on disk:
```
du BHI_HR
131148100 BHI_HR/
```
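(With GNU du's default 1 KiB block size, that figure corresponds to roughly 125 GiB; `du -sh BHI_HR` would print it in human-readable form.)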
Also, I am releasing the full dataset here. But there can of course be (community?) attempts in the future to make distilled versions of this dataset that perform better, since I might find additional metrics or filtering methods that could help reduce dataset size while achieving better training validation metric performance.

In summary, the advantage of this dataset is its large quantity of normalized (512x512) tiles:

- When applying degradations to create a corresponding LR, the distribution of degradation strengths should be sufficient, even when using multiple degradations.
- Big arch options in general can profit from the amount of learning content in this dataset.
- Since it takes a while to reach a new epoch, higher training iteration counts are advised for big arch options to profit from it (see the rough calculation below this list). The filtering method used here made sure that metrics would not worsen during training.
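For a sense of scale (assuming a batch size of 12, which is only an illustrative value): one epoch over the 390'035 tiles corresponds to about 390'035 / 12 ≈ 32'500 iterations.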
## Used Datasets