Update README.md
README.md

---
license: cc-by-4.0
---
# BHI SISR Dataset

The BHI SISR Dataset is meant for training single image super-resolution models. It is the result of tests on my BHI filtering method, which I wrote [a huggingface community blogpost about](https://huggingface.co/blog/Phips/bhi-filtering). Very briefly summarized: removing (by filtering) only the worst quality tiles from a training set has a much bigger positive effect on training metrics than keeping only the best quality tiles.
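
As a minimal sketch of that idea (not the actual filtering code; `scores` stands in for whatever per-tile quality metric is used, and the 10% cutoff is only an example value):

```python
import os
import shutil

def filter_tiles(tile_dir, out_dir, scores, drop_fraction=0.1):
    """Keep every tile except the worst-scoring fraction.

    scores: hypothetical dict mapping filename -> quality score
    (higher = better), e.g. produced by some IQA metric.
    """
    os.makedirs(out_dir, exist_ok=True)
    ranked = sorted(scores, key=scores.get)  # worst quality first
    for name in ranked[int(len(ranked) * drop_fraction):]:
        shutil.copy(os.path.join(tile_dir, name), out_dir)
```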

It consists of 390'241 images, all of them 512x512px and in the webp format.

The advantage of such a big dataset is that when degradations are applied in a randomized manner to create a corresponding LR set, the distribution of degradations and their strengths should be sufficient simply because of the quantity of training tiles. I will create some corresponding x4 LR datasets for this one and publish them as well.

If an on-the-fly degradation pipeline is used during training, though, such a high quantity of training tiles would generally not be needed, since longer training runs take care of the distribution on their own.
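
For illustration only (this is not the actual LR pipeline; the blur radius, resampling kernels and JPEG quality range are arbitrary example values), a toy randomized x4 degradation with Pillow could look like:

```python
import io
import random
from PIL import Image, ImageFilter

def make_lr(hr: Image.Image, scale: int = 4) -> Image.Image:
    """Toy degradation: random blur -> x4 downscale -> JPEG recompression."""
    img = hr.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.0, 2.0)))
    w, h = img.size
    kernel = random.choice([Image.Resampling.BILINEAR,
                            Image.Resampling.BICUBIC,
                            Image.Resampling.LANCZOS])
    img = img.resize((w // scale, h // scale), kernel)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=random.randint(40, 95))
    return Image.open(io.BytesIO(buf.getvalue()))
```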

Size on disk:
```
du BHI_HR
131199816 BHI_HR/
```

Also, I am releasing the full dataset here. There may of course be attempts in the future to make distilled versions of this dataset that perform better, since I might find additional metrics or filtering methods that help reduce the dataset size while achieving better training validation metric performance.
## Used Datasets

HFA2K -> 2'280 Tiles
ModernAnimation1080_v3 -> 4'109 Tiles
Nomos_Uni -> 2'466 Tiles
Nomosv2 -> 5'226 Tiles
inaturalist_2019 -> 131'943 Tiles

## Files

Files have been named with '{dataset_name}_{index}.png' so that if one of these source datasets ever needs to be traced or excluded, its tiles can be identified by filename.
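
That makes provenance easy to recover programmatically; a small sketch (assuming all tiles sit in one folder, with the index as the part after the last underscore):

```python
from collections import Counter
from pathlib import Path

def tiles_per_dataset(folder):
    """Count tiles per source dataset from '{dataset_name}_{index}' names."""
    counts = Counter()
    for f in Path(folder).iterdir():
        if f.is_file():
            counts[f.stem.rsplit("_", 1)[0]] += 1
    return counts

# e.g. tiles_per_dataset("BHI_HR")["inaturalist_2019"] -> 131943
```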
## Optimization

Then I used [oxipng](https://github.com/shssoichiro/oxipng) (`oxipng --strip safe --alpha *.png`) for optimization.
## WebP conversion

The files have then been converted to lossless webp, simply to save storage space locally and for faster uploading/downloading here on huggingface. This reduced the size of the dataset by around 50 GB.
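
The exact conversion tool isn't specified above; as one possible sketch, a lossless png-to-webp pass with Pillow:

```python
from pathlib import Path
from PIL import Image

# Sketch: convert every png tile to lossless webp alongside the original.
# Lossless webp keeps the pixels bit-identical; only the file size changes.
for png in Path("BHI_HR").glob("*.png"):
    with Image.open(png) as img:
        img.save(png.with_suffix(".webp"), "WEBP", lossless=True)
```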