Files have been named '{dataset_name}_{index}.webp' so that if one of the used datasets should ever become problematic concerning public access, its files can still be removed from this dataset in the future.

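That naming scheme makes such a removal straightforward. A minimal sketch, assuming the extracted files sit in a local `BHI/` folder (a hypothetical path) and `some_source_dataset` stands in for the dataset that has to go:

```python
from pathlib import Path

dataset_dir = Path("BHI")          # hypothetical local extraction folder
to_remove = "some_source_dataset"  # placeholder for the problematic source dataset

# Every file is named '{dataset_name}_{index}.webp', so a simple prefix glob
# is enough to find everything that came from one source dataset.
removed = 0
for file in dataset_dir.glob(f"{to_remove}_*.webp"):
    file.unlink()
    removed += 1

print(f"Removed {removed} files originating from '{to_remove}'")
```
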
I converted to WebP for file size reduction: the dataset was originally around 200GB as PNG, after which I used oxipng (`oxipng --strip safe --alpha *.png`) for optimization. Lossless WebP is simply the best option currently available for lossless file size reduction.

(JPEG XL is not yet supported by cv2 for training, WebP2 is still experimental, and FLIF has been discontinued in favor of JPEG XL.)

<figure>
<img src="https://cdn-uploads.huggingface.co/production/uploads/634e9aa407e669188d3912f9/BgkkzkhZQBrXY0qTxR_rm.png" alt="Lossless image formats">
<figcaption>Table 1, page 3, from the paper "Comparison of Lossless Image Formats"</figcaption>
</figure>

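For illustration, here is a minimal sketch of such a conversion step, using Pillow for the lossless WebP encode and OpenCV to confirm the result still decodes for training. This is not necessarily the exact tool chain used for this dataset, and the paths are placeholders:

```python
import cv2
from PIL import Image

src = "tile.png"   # placeholder input
dst = "tile.webp"  # placeholder output

# Lossless WebP encode; method=6 spends the most encode time
# for the smallest file, similar in spirit to the oxipng pass.
Image.open(src).save(dst, lossless=True, quality=100, method=6)

# Sanity check: cv2 (built against libwebp) can decode the file,
# which is what matters for feeding the tiles into a training pipeline.
img = cv2.imread(dst, cv2.IMREAD_UNCHANGED)
assert img is not None, "cv2 could not decode the WebP file"
print(img.shape, img.dtype)
```
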
See the [file list](https://huggingface.co/datasets/Phips/BHI/resolve/main/files.txt?download=true) for all the files in the dataset.

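That list can also be used to check a download for completeness. A small sketch, assuming files.txt simply contains one filename per line (an assumption about its format) and the archives were extracted into a local `BHI/` folder:

```python
import urllib.request
from pathlib import Path

FILES_TXT = "https://huggingface.co/datasets/Phips/BHI/resolve/main/files.txt?download=true"
dataset_dir = Path("BHI")  # hypothetical extraction folder

# Assumption: files.txt lists one filename per line.
with urllib.request.urlopen(FILES_TXT) as resp:
    expected = {line.strip() for line in resp.read().decode().splitlines() if line.strip()}

present = {p.name for p in dataset_dir.glob("*.webp")}
missing = expected - present

print(f"{len(present)} files found locally, {len(missing)} listed files missing")
```
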
## Upload
I uploaded the dataset as multi-part zip archives with a maximum of 25GB per file, resulting in 6 archive files. This stays within the LFS file size limit, and I chose zip because it is such a common format; I could of course have used another format like 7z or zpaq instead.

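For reference, a sketch of how such split archives can be created and later reassembled with the standard Info-ZIP tools, driven from Python only to keep the examples in one language. The archive and folder names are placeholders, not the actual commands used here:

```python
import subprocess

# Create a multi-part zip with at most 25GB per part; '-s 25g' makes
# Info-ZIP write BHI_split.z01, BHI_split.z02, ... plus a final BHI_split.zip.
subprocess.run(["zip", "-r", "-s", "25g", "BHI_split.zip", "BHI/"], check=True)

# After download, merge the parts back into a single archive and unpack it
# (7-Zip can also extract split zip archives directly).
subprocess.run(["zip", "-s", "0", "BHI_split.zip", "--out", "BHI_full.zip"], check=True)
subprocess.run(["unzip", "BHI_full.zip"], check=True)
```
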
I actually once worked on an archiver called [ShareArchiver](https://github.com/Phhofm/ShareArchiver). My main idea there was that data shared online (like this dataset) generally gets archived once (by the uploader) but downloaded and extracted maybe a thousand times, so the resulting file size (faster download times for those thousand downloads) and extraction speed (those thousand extractions) are way more important than compression speed. In other words, we trade the very long archiving time of that one person for faster downloads and extractions for everyone. The design was that I chose only highly asymmetric compression algorithms, where compression may be very slow as long as decompression is fast, and then brute forced during compression: each single file is compressed with all of the available asymmetric algorithms, the resulting sizes are compared, and only the smallest result is added to the .share archive. Just something from the past I wanted to mention. (One could also use the max flag to include all algorithms, the symmetric ones too, to brute force the smallest archive possible (using paq8o etc.), but of course compression time would then be very long. That flag was meant more for archiving than for online sharing, for cases where storage space is way more important than either compression or decompression speed.)

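The selection logic behind that brute-force step is easy to sketch. Below, Python's built-in codecs stand in for the asymmetric compressors ShareArchiver actually uses; this only illustrates the "compress with everything, keep the smallest" idea, not the real implementation:

```python
import bz2
import lzma
import zlib
from pathlib import Path

# Stand-in codecs; the real archiver restricts itself to highly asymmetric
# algorithms (slow to compress, fast to decompress).
COMPRESSORS = {
    "zlib": lambda data: zlib.compress(data, level=9),
    "bz2": lambda data: bz2.compress(data, compresslevel=9),
    "lzma": lambda data: lzma.compress(data, preset=9),
}

def smallest_compression(path: Path) -> tuple[str, bytes]:
    """Compress one file with every codec and keep only the smallest result."""
    data = path.read_bytes()
    results = {name: compress(data) for name, compress in COMPRESSORS.items()}
    best = min(results, key=lambda name: len(results[name]))
    return best, results[best]

# Placeholder usage
name, blob = smallest_compression(Path("example.webp"))
print(f"best codec: {name}, compressed size: {len(blob)} bytes")
```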