Update README.md
README.md (CHANGED)
@@ -1,85 +1,88 @@
---
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---

# Dataset Card for dSprites
## Dataset Structure
Each sample in the dataset contains:

- **image:** A 64×64 grayscale image
- **orientation:** A float representing the rotation angle of the shape
- **shape:** A categorical label representing the shape type (square, ellipse, heart)
- **scale:** A float representing the size of the shape
- **color:** A categorical label representing the color (only 'white' in this dataset)
- **position_x:** A float representing the x-position of the shape
- **position_y:** A float representing the y-position of the shape

Total images: 737,280

Classes: 3 (square, ellipse, heart)

Splits:

- **Test:** 30% of total dataset

Image specs: PNG format, 64×64 pixels, grayscale

## Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library
```python
from datasets import load_dataset

dataset = load_dataset("../../aidatasets/images/dsprites.py", split="train", trust_remote_code=True)
# dataset = load_dataset("../../aidatasets/images/dsprites.py", split="test", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
shape = example["shape"]

image.show()  # Display the image
print(f"Shape: {shape}")
```
## Citation
@misc{matthey2017dsprites,
  title={dSprites: Disentanglement testing Sprites dataset},
  author={Matthey, Loic and Higgins, Irina and Hassabis, Demis and Lerchner, Alexander},
  year={2017}
}

---
license: zlib
---

# Dataset Card for dSprites
## Dataset Description

The **dSprites dataset** is a **synthetic 2D shapes dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**. It is widely used as a standard benchmark in the representation learning community.

The dataset was introduced in the **β-VAE paper** and consists of procedurally generated binary black-and-white images of 2D sprites, under controlled variations of **6 known factors of variation**:

- Object color (1 value: white)
- Object shape (3 values: square, ellipse, heart)
- Object scale (6 values)
- Object orientation (40 values)
- Object position X (32 values)
- Object position Y (32 values)

All possible combinations of these factors are present exactly once, generating a total of **737,280 images** at a resolution of **64×64 pixels**. The ground-truth latent factors are provided for each image, both as **discrete classes** and **continuous values**. The dataset is specifically designed for assessing the ability of models to learn **disentangled representations**, and has been used in many follow-up works after β-VAE.


## Dataset Source
- **Homepage**: [https://github.com/google-deepmind/dsprites-dataset](https://github.com/google-deepmind/dsprites-dataset)
- **License**: zlib/libpng License
- **Paper**: Irina Higgins et al. _β-VAE: Learning basic visual concepts with a constrained variational framework_. ICLR 2017.

## Dataset Structure

| Factors | Possible Classes (Indices) | Values |
|---|---|---|
| color | white=0 | 1.0 (fixed) |
| shape | square=0, ellipse=1, heart=2 | 1.0, 2.0, 3.0 (categorical) |
| scale | 0,...,5 | [0.5, 1.0] linearly spaced (6 values) |
| orientation | 0,...,39 | [0, 2π] radians (40 values) |
| posX | 0,...,31 | [0, 1] normalized position (32 values) |
| posY | 0,...,31 | [0, 1] normalized position (32 values) |

Each image corresponds to a unique combination of these 6 factors. The images are stored in **row-major order** (the fastest-changing factor is `posY`, the slowest-changing factor is `color`).
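
To make the ordering concrete, here is a minimal sketch (not part of the dataset card) of how a tuple of factor indices maps to a flat image index under the row-major layout described above; `factors_to_index` is an illustrative helper, not an API of this dataset:

```python
import numpy as np

# Factor sizes in the order (color, shape, scale, orientation, posX, posY),
# matching the table above.
factor_sizes = np.array([1, 3, 6, 40, 32, 32])

# Row-major strides: posY varies fastest, color slowest.
strides = np.concatenate(([1], np.cumprod(factor_sizes[::-1])[:-1]))[::-1]

def factors_to_index(color, shape, scale, orientation, pos_x, pos_y):
    """Map factor indices to the flat index of the corresponding image."""
    return int(np.dot([color, shape, scale, orientation, pos_x, pos_y], strides))

# Example: a heart (shape=2) at the largest scale (5), first orientation and position.
print(factors_to_index(0, 2, 5, 0, 0, 0))  # 696320
```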
### Why no train/test split?
The dSprites dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.
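
If an experiment does need held-out images, a split can still be created locally. Below is a minimal sketch using `datasets.Dataset.train_test_split`; the 70/30 ratio and the seed are arbitrary example choices, not something this dataset prescribes:

```python
from datasets import load_dataset

# Load the full dataset (only a "train" split is provided), then carve out
# a custom held-out set locally.
dataset = load_dataset("randall-lab/dsprites", split="train", trust_remote_code=True)
splits = dataset.train_test_split(test_size=0.3, seed=0)

train_set = splits["train"]
test_set = splits["test"]
print(len(train_set), len(test_set))
```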
## Example Usage
Below is a quick example of how to load this dataset via the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/dsprites", split="train", trust_remote_code=True)
# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]  # [color_idx, shape_idx, scale_idx, orientation_idx, posX_idx, posY_idx]
label_values = example["label_values"]  # corresponding continuous values

# Label Classes
color = example["color"]  # 0
shape = example["shape"]  # 0-2
scale = example["scale"]  # 0-5
orientation = example["orientation"]  # 0-39
posX = example["posX"]  # 0-31
posY = example["posY"]  # 0-31

# Label Values
color_value = example["colorValue"]  # 1.0
shape_value = example["shapeValue"]  # 1.0, 2.0, 3.0
scale_value = example["scaleValue"]  # [0.5, 1.0]
orientation_value = example["orientationValue"]  # [0, 2π]
posX_value = example["posXValue"]  # [0, 1]
posY_value = example["posYValue"]  # [0, 1]

image.show()  # Display the image
print(f"Label (factors): {label}")
print(f"Label values (factors): {label_values}")
```
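
Because every ground-truth factor is exposed as its own column, the dataset can be sliced by a factor of interest directly. A minimal sketch, continuing from the `dataset` object loaded above and using the column names from the schema shown there, that keeps only the heart-shaped sprites:

```python
# Keep only heart-shaped sprites (shape index 2 in the schema above).
hearts = dataset.filter(lambda ex: ex["shape"] == 2)
print(len(hearts))  # expected: 737280 / 3 = 245760 images
```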
If you are using Colab, you should update `datasets` to avoid errors:
```bash
pip install -U datasets
```
## Citation
```bibtex
@inproceedings{higgins2017beta,
  title={beta-vae: Learning basic visual concepts with a constrained variational framework},
  author={Higgins, Irina and Matthey, Loic and Pal, Arka and Burgess, Christopher and Glorot, Xavier and Botvinick, Matthew and Mohamed, Shakir and Lerchner, Alexander},
  booktitle={International conference on learning representations},
  year={2017}
}
```