Phips committed · verified · commit dec5e24 · 1 parent: b67dd9c

Update README.md

Files changed (1): README.md (+56 −9)

README.md CHANGED

@@ -1,16 +1,57 @@
  ---
  license: cc-by-4.0
  ---

- 512 tiling
- COCO 2017 unlabeled dataset of 123'403 images into 8'814 512x512 tiles.
- COCO 2017 train dataset of 118'287 images into 8'442 tiles.
- inaturalist 2019 train_val dataset of 269'259 images into 238'524 tiles.
- HQ50K 213'396 Tiles
- ImageNet 197'436 Tiles
- LSDIR 179'006 Tiles

- BHI Filtered Dataset consists of:
  DF2K -> 12'639 Tiles
  FFHQ -> 35'112 Tiles
  HQ50K -> 61'647 Tiles
@@ -18,4 +59,10 @@ ImageNet -> 4'505 Tiles
  LSDIR -> 116'141 Tiles
  OST -> 1'048 Tiles

- Then run oxipng -o 4 --strip safe --alpha *.png on it
---
license: cc-by-4.0
---
# BHI SISR Dataset

The purpose of the BHI SISR Dataset is to train single-image super-resolution models. It is the result of tests on my BHI filtering method, which I wrote about in [a Hugging Face community blog post](https://huggingface.co/blog/Phips/bhi-filtering).

TODO: it consists of X images, all of 512x512px dimensions and in the png format.
TODO: visual example of the dataset

## Used Datasets

The BHI SISR Dataset consists of the following datasets:

- [HQ50K](https://github.com/littleYaang/HQ-50K)
- [ImageNet](https://www.image-net.org/)
- [FFHQ](https://github.com/NVlabs/ffhq-dataset)
- [LSDIR](https://github.com/ofsoundof/LSDIR)
- [DF2K](https://www.kaggle.com/datasets/thaihoa1476050/df2k-ost)
- [OST](https://www.kaggle.com/datasets/thaihoa1476050/df2k-ost)
- [iNaturalist 2019](https://github.com/visipedia/inat_comp/tree/master/2019)
- [COCO 2017 Train](https://cocodataset.org/#download)
- [COCO 2017 Unlabeled](https://cocodataset.org/#download)
- [Nomosv2](https://github.com/neosr-project/neosr?tab=readme-ov-file#-datasets)
- [HFA2K](https://github.com/neosr-project/neosr?tab=readme-ov-file#-datasets)
- [Nomos_Uni](https://github.com/neosr-project/neosr?tab=readme-ov-file#-datasets)
- [ModernAnimation1080_v3](https://huggingface.co/datasets/Zarxrax/ModernAnimation1080_v3)

## Tiling

These datasets were then tiled to 512x512px for improved I/O speed during training; normalizing image dimensions also keeps resource usage consistent during processing.

In some cases tiling led to fewer images in the dataset, because images with dimensions below 512px were filtered out. Some examples:

- COCO 2017 unlabeled: 123'403 images -> 8'814 tiles
- COCO 2017 train: 118'287 images -> 8'442 tiles

In other cases it led to more images, because the original images were high resolution and therefore yielded multiple 512x512 tiles per single image. For example, HQ50K produced 213'396 tiles.
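The tiling step can be sketched as follows. This is a minimal illustration using Pillow, not the actual script used to build the dataset; the function name `tile_image` and the non-overlapping crop strategy are my assumptions:

```python
from pathlib import Path
from PIL import Image

TILE = 512  # target tile size in px

def tile_image(src: Path, out_dir: Path) -> int:
    """Crop non-overlapping 512x512 tiles from one image; returns the tile count.

    Images smaller than 512px in either dimension yield no tiles,
    which is how undersized images get filtered out during tiling.
    """
    img = Image.open(src)
    w, h = img.size
    if w < TILE or h < TILE:
        return 0
    count = 0
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(out_dir / f"{src.stem}_{count}.png")
            count += 1
    return count
```

A 1100x600 input, for instance, yields two tiles (two columns, one row), while a 300x300 input yields none.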

## Conversion

Images that were in the jpg format have been converted to the png format using [Mogrify](https://imagemagick.org/script/mogrify.php).
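The same conversion step can be sketched in Python with Pillow; this is an equivalent illustration, not the command the author ran, and the function name `jpg_to_png` is hypothetical:

```python
from pathlib import Path
from PIL import Image

def jpg_to_png(folder: Path) -> int:
    """Convert every .jpg in a folder to .png alongside it; returns the count."""
    converted = 0
    for jpg in sorted(folder.glob("*.jpg")):
        # Force RGB so the saved png has no surprise palette/CMYK modes.
        Image.open(jpg).convert("RGB").save(jpg.with_suffix(".png"))
        converted += 1
    return converted
```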

## BHI Filtering

I then filtered these sets with the BHI filtering method, using the following thresholds:

- Blockiness < 30
- HyperIQA >= 0.2
- IC9600 >= 0.4

This led to the following per-dataset tile quantities that passed the filtering process and made it into the BHI SISR Dataset:

- DF2K -> 12'639 Tiles
- FFHQ -> 35'112 Tiles
- HQ50K -> 61'647 Tiles
- ImageNet -> 4'505 Tiles
- LSDIR -> 116'141 Tiles
- OST -> 1'048 Tiles

## Filename Normalization

All these subsets have had their filenames normalized, meaning 0.png, 1.png, 2.png, and so forth. They were then merged into the BHI dataset folder, which was normalized again.
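A minimal sketch of such a renaming pass, assuming a flat folder of png tiles (the function name `normalize_filenames` and the two-pass temp-name scheme are my assumptions, not the author's script):

```python
from pathlib import Path

def normalize_filenames(folder: Path) -> int:
    """Rename every png in a folder to 0.png, 1.png, ... in sorted order."""
    pngs = sorted(folder.glob("*.png"))
    # Two passes via temporary names, so a rename never collides with an
    # existing target (e.g. a file already called "0.png").
    temps = []
    for i, p in enumerate(pngs):
        tmp = p.with_name(f"__tmp_{i}.png")
        p.rename(tmp)
        temps.append(tmp)
    for i, t in enumerate(temps):
        t.rename(t.with_name(f"{i}.png"))
    return len(temps)
```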

## Optimization

Finally, I ran [oxipng](https://github.com/shssoichiro/oxipng) on the merged folder for lossless png optimization: `oxipng --strip safe --alpha *.png`.