doc: add README
README.md
ADDED
# YFCC100M subset from OpenAI
Subset of [YFCC100M](https://arxiv.org/abs/1503.01817) used by OpenAI for [CLIP](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md), filtered to contain only the images that we were able to retrieve.
| Split | train | validation |
| --- | --- | --- |
| Number of samples | 14,808,859 | 16,374 |
| Size | 1.9 TB | 2.1 GB |
Features:
* from the original dataset: `title`, `description`, `photoid`, `uid`, `unickname`, `datetaken`, `dateuploaded`, `capturedevice`, `usertags`, `machinetags`, `longitude`, `latitude`, `accuracy`, `pageurl`, `downloadurl`, `licensename`, `licenseurl`, `serverid`, `farmid`, `secret`, `secretoriginal`, `ext`, `marker`, `key`
* `img`: image content, can be loaded with `PIL.Image.open(io.BytesIO(item['img']))`
* `title_clean` and `description_clean`: derived from `title` and `description` using the `clean_text` function detailed below
```python
import re
import urllib.parse

def clean_text(text):
    # decode url-encoded characters
    text = urllib.parse.unquote_plus(text)
    # remove html tags
    text = re.sub('<[^<]+?>', '', text)
    # remove multiple spaces, "\r", "\n" and "\t"
    text = " ".join(text.split())
    return text
```
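
As a minimal usage sketch: the snippet below assumes the dataset is consumed through the Hugging Face `datasets` library and uses a hypothetical repository id (neither is specified above). It decodes a sample's `img` bytes with PIL and applies `clean_text` to the raw `title`, as described in the feature list.

```python
import io

from PIL import Image
from datasets import load_dataset  # assumption: dataset read via the Hugging Face `datasets` library

# "<user>/yfcc100m_openai_subset" is a hypothetical repository id, not the real one
dataset = load_dataset("<user>/yfcc100m_openai_subset", split="validation", streaming=True)

item = next(iter(dataset))
image = Image.open(io.BytesIO(item["img"]))  # decode the raw image bytes
caption = clean_text(item["title"])          # same cleaning used to derive `title_clean`
print(image.size, caption)
```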