---
license: zlib
---

# Dataset Card for dSprites

## Dataset Description

The **dSprites dataset** is a **synthetic 2D shapes dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**. It is widely used as a standard benchmark in the representation learning community.

The dataset was introduced in the **β-VAE paper** and consists of procedurally generated binary (black-and-white) images of 2D sprites, rendered under controlled variations of **6 known factors of variation**:

- Object color (1 value: white)
- Object shape (3 values: square, ellipse, heart)
- Object scale (6 values)
- Object orientation (40 values)
- Object position X (32 values)
- Object position Y (32 values)

All possible combinations of these factors are present exactly once, yielding a total of **737,280 images** at a resolution of **64×64 pixels**. The ground-truth latent factors are provided for each image, both as **discrete class indices** and as **continuous values**. The dataset is specifically designed for assessing the ability of models to learn **disentangled representations** and has been used in many follow-up works after β-VAE.
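
As a quick sanity check, that total is exactly the product of the six factor cardinalities:

```python
# Number of distinct values per factor, in the order listed above:
# color, shape, scale, orientation, posX, posY
factor_sizes = [1, 3, 6, 40, 32, 32]

total = 1
for n in factor_sizes:
    total *= n

print(total)  # 737280 -- one image per factor combination
```
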
The dataset is commonly used for **benchmarking disentanglement learning** and can be used in conjunction with other variants:

- [randall-lab/dsprites-color](https://huggingface.co/datasets/randall-lab/dsprites-color)
- [randall-lab/dsprites-noisy](https://huggingface.co/datasets/randall-lab/dsprites-noisy)
- [randall-lab/dsprites-scream](https://huggingface.co/datasets/randall-lab/dsprites-scream)
## Dataset Source

- **Homepage**: [https://github.com/google-deepmind/dsprites-dataset](https://github.com/google-deepmind/dsprites-dataset)
- **License**: zlib/libpng License
- **Paper**: Irina Higgins et al. _β-VAE: Learning basic visual concepts with a constrained variational framework_. ICLR 2017.

## Dataset Structure

| Factors | Possible Classes (Indices) | Values |
|---|---|---|
| color | white=0 | 1.0 (fixed) |
| shape | square=0, ellipse=1, heart=2 | 1.0, 2.0, 3.0 (categorical) |
| scale | 0,...,5 | [0.5, 1.0], linearly spaced (6 values) |
| orientation | 0,...,39 | [0, 2π] radians (40 values) |
| posX | 0,...,31 | [0, 1], normalized position (32 values) |
| posY | 0,...,31 | [0, 1], normalized position (32 values) |

Each image corresponds to a unique combination of these 6 factors. Images are stored in **row-major order** over the factor grid: `posY` is the fastest-changing factor and `color` the slowest.
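
Because of this layout, the flat dataset index can be recovered from the six class indices with a mixed-radix (row-major) computation. A minimal sketch, assuming this Hugging Face version preserves the original archive ordering (`factors_to_index` is an illustrative helper, not part of the dataset API):

```python
# Factor cardinalities in storage order, slowest- to fastest-changing:
# color, shape, scale, orientation, posX, posY
FACTOR_SIZES = (1, 3, 6, 40, 32, 32)

def factors_to_index(color, shape, scale, orientation, posX, posY):
    """Map the six discrete class indices to the flat row-major dataset index."""
    index = 0
    for value, size in zip((color, shape, scale, orientation, posX, posY), FACTOR_SIZES):
        index = index * size + value
    return index

# Example: a heart (shape=2) at scale index 3, orientation 0, centered position
i = factors_to_index(color=0, shape=2, scale=3, orientation=0, posX=16, posY=16)
# If the ordering is preserved, dataset[i]["label"] should equal [0, 2, 3, 0, 16, 16]
```
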
### Why no train/test split?

The dSprites dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is the complete Cartesian product of all factor combinations, models typically need access to the full dataset to explore factor-wise variations, as the sketch below illustrates.
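
For example, a common evaluation pattern holds five factors fixed and sweeps the remaining one; with the full factor grid, every frame of the sweep is guaranteed to exist. A hedged sketch, reusing the hypothetical `factors_to_index` helper from the previous section (and again assuming the original ordering is preserved):

```python
from datasets import load_dataset

dataset = load_dataset("randall-lab/dsprites", split="train", trust_remote_code=True)

# Hold five factors fixed and sweep posX across all 32 positions;
# the fixed values below are arbitrary illustrative choices.
fixed = dict(color=0, shape=1, scale=2, orientation=10, posY=8)
indices = [factors_to_index(posX=x, **fixed) for x in range(32)]
images = [dataset[i]["image"] for i in indices]  # 32 frames differing only in posX
```
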
## Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/dsprites", split="train", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]                # [color_idx, shape_idx, scale_idx, orientation_idx, posX_idx, posY_idx]
label_values = example["label_values"]  # corresponding continuous values

# Label classes (discrete indices)
color = example["color"]              # 0
shape = example["shape"]              # 0-2
scale = example["scale"]              # 0-5
orientation = example["orientation"]  # 0-39
posX = example["posX"]                # 0-31
posY = example["posY"]                # 0-31

# Label values (continuous)
color_value = example["colorValue"]              # 1.0
shape_value = example["shapeValue"]              # 1.0, 2.0 or 3.0
scale_value = example["scaleValue"]              # in [0.5, 1.0]
orientation_value = example["orientationValue"]  # in [0, 2π]
posX_value = example["posXValue"]                # in [0, 1]
posY_value = example["posYValue"]                # in [0, 1]

image.show()  # Display the image
print(f"Label (factors): {label}")
print(f"Label values (factors): {label_values}")
```

If you are using Google Colab, you should update the `datasets` library first to avoid loading errors:

```
pip install -U datasets
```
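
For model training it is often handier to work with the binary sprites as arrays rather than PIL objects. A small sketch using NumPy, reusing `example` from the snippet above (the thresholding is one reasonable normalization choice, not a dataset requirement):

```python
import numpy as np

# Convert the PIL image to a float array in {0.0, 1.0}.
# Depending on the image mode, sprite pixels may be stored as 1 or 255,
# so thresholding on > 0 covers both cases.
img = np.array(example["image"])
binary = (img > 0).astype(np.float32)
print(binary.shape)  # (64, 64)
```
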
## Citation

```bibtex
@inproceedings{higgins2017beta,
  title={beta-VAE: Learning basic visual concepts with a constrained variational framework},
  author={Higgins, Irina and Matthey, Loic and Pal, Arka and Burgess, Christopher and Glorot, Xavier and Botvinick, Matthew and Mohamed, Shakir and Lerchner, Alexander},
  booktitle={International Conference on Learning Representations},
  year={2017}
}
```