Commit fc433d8 (1 parent: 02dd306), committed by Nick Padgett

Updating README and adding tutorials.
README.md CHANGED
@@ -1,3 +1,43 @@
- ---
- license: cdla-permissive-2.0
- ---
+ # Spawning-15m
+
+ ![Spawning-15m](./assets/spawning-logo.png)
+
+ # Summary
+ The **Spawning-15m** dataset is a collection of approximately 15 million CC0/public-domain image-caption pairs for training generative image models.
+
+ # About
+ Training a state-of-the-art generative image model typically requires vast amounts of images from across the internet. Training with images from across the web introduces several data quality issues: the presence of copyrighted material, low-quality images and captions, violent or NSFW content, PII, decaying dataset quality via broken links, etc. Additionally, downloading from the original image hosts places an undue burden on those hosts, impacting services for legitimate users.
+
+ The Spawning-15m dataset aims to resolve these issues by collecting only public-domain and CC0-licensed images, automatically recaptioning the image data, applying quality and safety filtering, and hosting the images on dedicated cloud storage separate from the original image hosts. These measures make Spawning-15m the largest safe and reliable public image dataset available.
+
+ Built and curated with [Source.Plus](https://source.plus).
+
+ # Overview
+ This dataset has two components. The first is the `metadata`, which contains the image URLs, captions, image dimensions, etc. The second is the `images`.
+
+ ## Metadata
+ The metadata is made available as a series of parquet files with the following schema (a short sketch for inspecting a shard follows the list):
+ - `id`: A unique identifier for the image.
+ - `url`: The URL of the image.
+ - `s3_key`: The S3 file key of the image.
+ - `caption`: A caption for the image.
+ - `md5_hash`: The MD5 hash of the image file.
+ - `mime_type`: The MIME type of the image file.
+ - `width`: The width of the image in pixels.
+ - `height`: The height of the image in pixels.
+ - `license_type`: The URL of the license.
+
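+ As a quick way to verify that layout, here is a minimal sketch that prints the schema of one metadata shard with `pyarrow` (the shard filename below is only an example; use whichever metadata file you have downloaded):
+ ```python
+ import pyarrow.parquet as pq
+
+ # Reads only the parquet footer, so this is cheap even for large shards.
+ schema = pq.read_schema("spawning-15m-metadata.001.parquet")
+ print(schema)  # id, url, s3_key, caption, md5_hash, mime_type, width, height, license_type
+ ```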
+
+ ## Images
+ The image files are all hosted in the AWS S3 bucket `spawning-15m`. The URLs of the image files are maintained in the metadata files.
+
+ # Tutorials
+
+ [Working with the Metadata](./tutorials/metadata.md)
+
+ [Downloading Images](./tutorials/images.md)
+
+ # License
+ The dataset is licensed under the [CDLA-Permissive-2.0](https://cdla.dev/permissive-2-0/).
+
+ # Reporting Issues
+ We've gone to great lengths to ensure the dataset is free of objectionable and infringing content. If you find any issues or have any concerns, please report them to us at [[email protected]](mailto:[email protected]), along with the `id` of the relevant item.
assets/spawning-logo.png ADDED

tutorials/images.md ADDED
@@ -0,0 +1,29 @@
+ # Downloading Images
+ Once you have the URLs or S3 file keys from the metadata ([follow the steps here](./metadata.md)), you can download the images through any standard means.
+
+ #### cURL
+ Download an image from a URL to a local image file with the name `image.png`:
+ ```bash
+ curl -o image.png https://spawning-15m.s3.us-west-2.amazonaws.com/image.png
+ ```
+ #### Python
+ Download an image from a URL to a local image file with the name `image.png`:
+ ```python
+ import requests
+
+ url = "https://spawning-15m.s3.us-west-2.amazonaws.com/image.png"
+ response = requests.get(url)
+ response.raise_for_status()  # stop early if the link is broken or the object is missing
+ with open("image.png", "wb") as f:
+     f.write(response.content)
+ ```
+ #### img2dataset
+ You can also use the `img2dataset` tool to quickly download images in bulk from a metadata file. The tool is available [here](https://github.com/rom1504/img2dataset). The example below downloads all of the images listed in a metadata shard to a local `images` directory.
+ ```bash
+ img2dataset --url_list spawning-15m-metadata.001.parquet --input_format parquet --url_col url --caption_col caption --output_folder images/
+ ```
+
+ #### S3 CLI
+ Download an image from the S3 bucket to a local file named `image.png`:
+ ```bash
+ aws s3 cp s3://spawning-15m/image.png image.png
+ ```
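+
+ #### boto3
+ For bulk downloads keyed by `s3_key`, a minimal `boto3` sketch is shown below. It assumes the bucket allows unsigned (anonymous) reads, consistent with the public HTTPS URLs above; the `s3_keys` list is a stand-in for values read from a metadata file.
+ ```python
+ import boto3
+ from botocore import UNSIGNED
+ from botocore.config import Config
+
+ # Anonymous client: no AWS credentials are needed for a publicly readable bucket.
+ s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
+
+ s3_keys = ["image.png"]  # e.g. df["s3_key"].tolist() from the metadata tutorial
+ for key in s3_keys:
+     s3.download_file("spawning-15m", key, key)  # bucket, key, local filename
+ ```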
tutorials/metadata.md ADDED
@@ -0,0 +1,30 @@
+ # Working with the Metadata
+ The metadata files are in parquet format and contain the following attributes:
+ - `id`: A unique identifier for the image.
+ - `url`: The URL of the image.
+ - `s3_key`: The S3 file key of the image.
+ - `caption`: A caption for the image.
+ - `md5_hash`: The MD5 hash of the image file.
+ - `mime_type`: The MIME type of the image file.
+ - `width`: The width of the image in pixels.
+ - `height`: The height of the image in pixels.
+ - `license_type`: The URL of the license.
+
+ #### Open a metadata file
+ The files are in parquet format and can be opened with a tool like `pandas` in Python.
+ ```python
+ import pandas as pd
+
+ # Load one metadata shard into a DataFrame.
+ df = pd.read_parquet('spawning-15m-metadata.001.parquet')
+ ```
+
+ #### Get URLs from metadata
+ Once you have opened a metadata file with pandas, you can get the URLs of the images with the following command:
+ ```python
+ urls = df['url']
+ ```
+
+ #### Get S3 File Keys from metadata
+ You can also get the S3 file keys, which can be used to download the images using the S3 CLI:
+ ```python
+ s3_keys = df['s3_key']
+ ```
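+
+ #### Filter rows with pandas
+ Because the schema includes image dimensions and MIME types, you can also subset the metadata before downloading anything. A minimal sketch follows; the 512-pixel threshold, the MIME types, and the output filename are illustrative choices rather than part of the dataset documentation:
+ ```python
+ import pandas as pd
+
+ df = pd.read_parquet('spawning-15m-metadata.001.parquet')
+
+ # Keep reasonably large JPEG/PNG images only (example thresholds).
+ subset = df[(df['width'] >= 512) & (df['height'] >= 512)]
+ subset = subset[subset['mime_type'].isin(['image/jpeg', 'image/png'])]
+
+ # Write the filtered subset back to parquet so it can be fed to img2dataset.
+ subset.to_parquet('spawning-15m-filtered.parquet', index=False)
+ ```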