
Dataset Card for Scream dSprites

Dataset Description

The Scream dSprites dataset is a synthetic 2D shapes dataset designed for benchmarking algorithms in disentangled representation learning and unsupervised representation learning.

It is a variant of the original dSprites dataset introduced in the β-VAE paper. In this Scream variant, a random patch of Edvard Munch's painting The Scream serves as the background of each sample, and the object sprite is embedded onto the patch by inverting the pixel colors of the object region.

This allows researchers to evaluate robustness to complex background textures and assess how well models can learn disentangled representations when the background introduces structured noise.
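The compositing described above can be sketched as follows. This is an illustrative reconstruction, not the authors' generation script: the 64×64 crop size and the exact inversion rule are assumptions based on the card's description.

```python
import numpy as np

def composite_on_scream(sprite_mask: np.ndarray, scream: np.ndarray) -> np.ndarray:
    """Embed a binary 64x64 sprite mask onto a random crop of a background image.

    `sprite_mask` is a (64, 64) boolean array (True where the object is);
    `scream` is an (H, W, 3) float array in [0, 1] holding the painting.
    Illustrative sketch only -- the released data was generated by the
    dataset authors' own pipeline.
    """
    h, w, _ = scream.shape
    # Pick the top-left corner of a random 64x64 background patch.
    y = np.random.randint(0, h - 64 + 1)
    x = np.random.randint(0, w - 64 + 1)
    patch = scream[y:y + 64, x:x + 64].copy()
    # Invert the colors of the pixels covered by the object.
    patch[sprite_mask] = 1.0 - patch[sprite_mask]
    return patch
```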

The dataset consists of procedurally generated images of 2D sprites, under controlled variations of 6 known factors of variation:

  • Object color (1 value; fixed as in the original dSprites, with the object pixels inverted against the background)
  • Object shape (3 values: square, ellipse, heart)
  • Object scale (6 values)
  • Object orientation (40 values)
  • Object position X (32 values)
  • Object position Y (32 values)

All possible combinations of these factors are present exactly once, for a total of 737,280 images at a resolution of 64×64 pixels. Each image is provided along with its discrete latent classes (an index for each factor) and its continuous latent values.
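The total image count follows directly from multiplying the factor sizes listed above:

```python
# The six factors and their number of values (from the factor list above).
factor_sizes = {
    "color": 1,
    "shape": 3,
    "scale": 6,
    "orientation": 40,
    "posX": 32,
    "posY": 32,
}

total = 1
for n in factor_sizes.values():
    total *= n
print(total)  # 737280
```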

These variants allow systematic testing under different nuisance variations (color, noise, background texture, abstract factors).

Dataset Visualization

Scream background visualization

You can also explore other variants:

Dataset Source

Dataset Structure

| Factor | Possible classes (indices) | Values |
|---|---|---|
| color | white=0 (original label, fixed) | Inverted object pixels on Scream background |
| shape | square=0, ellipse=1, heart=2 | 1.0, 2.0, 3.0 (categorical) |
| scale | 0,…,5 | [0.5, 1.0], linearly spaced (6 values) |
| orientation | 0,…,39 | [0, 2π] radians (40 values) |
| posX | 0,…,31 | [0, 1] normalized position (32 values) |
| posY | 0,…,31 | [0, 1] normalized position (32 values) |
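The continuous values can be reconstructed from the indices with evenly spaced grids. The grids below are an illustrative reading of the table (endpoints assumed inclusive); the authoritative values are the `*Value` fields shipped with each sample.

```python
import numpy as np

# Per-factor value grids implied by the table above (assumed reconstruction;
# cross-check against the dataset's own `*Value` fields).
shape_values = np.array([1.0, 2.0, 3.0])              # square, ellipse, heart
scale_values = np.linspace(0.5, 1.0, 6)               # 6 scales in [0.5, 1.0]
orientation_values = np.linspace(0.0, 2 * np.pi, 40)  # 40 angles in [0, 2π]
pos_values = np.linspace(0.0, 1.0, 32)                # 32 positions in [0, 1]

# Example: decode scale index 5 and posX index 31 to continuous values.
print(scale_values[5], pos_values[31])  # 1.0 1.0
```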

Note: In this Scream variant, the color and colorValue fields remain 0 to match the original dSprites format. The object pixels are inverted on top of a random Scream background patch.

Each image corresponds to a unique combination of these 6 factors. The images are stored in a row-major order (fastest-changing factor is posY, slowest-changing factor is color).
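Given that row-major layout (color slowest, posY fastest), the flat dataset index and the six factor indices can be converted into each other with standard index arithmetic:

```python
import numpy as np

# Factor sizes in slowest-to-fastest order (color first, posY last),
# matching the row-major layout described above.
sizes = [1, 3, 6, 40, 32, 32]

def factors_to_index(factors):
    """Map a tuple of six factor indices to the flat dataset index."""
    return int(np.ravel_multi_index(factors, sizes))

def index_to_factors(index):
    """Inverse mapping: flat index back to the six factor indices."""
    return [int(i) for i in np.unravel_index(index, sizes)]

print(factors_to_index((0, 0, 0, 0, 0, 1)))  # 1 -- posY is fastest-changing
print(index_to_factors(737279))              # [0, 2, 5, 39, 31, 31] -- last image
```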

Why no train/test split?

The Scream dSprites dataset does not provide an official train/test split. It is designed for representation learning research, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.

Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library:

from datasets import load_dataset
# Load the dataset
dataset = load_dataset("randall-lab/dsprites-scream", split="train", trust_remote_code=True)
# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]         # [color_idx, shape_idx, scale_idx, orientation_idx, posX_idx, posY_idx]
label_values = example["label_values"]  # corresponding continuous values

# Label Classes
color = example["color"]         # 0
shape = example["shape"]         # 0-2
scale = example["scale"]         # 0-5
orientation = example["orientation"]   # 0-39
posX = example["posX"]           # 0-31
posY = example["posY"]           # 0-31
# Label Values
color_value = example["colorValue"]         # 0 (fixed, matching the original dSprites format)
shape_value = example["shapeValue"]         # 1.0, 2.0, 3.0
scale_value = example["scaleValue"]         # [0.5, 1]
orientation_value = example["orientationValue"]   # [0, 2π]
posX_value = example["posXValue"]           # [0, 1]
posY_value = example["posYValue"]           # [0, 1]

image.show()  # Display the image
print(f"Label (factors): {label}")
print(f"Label values (factors): {label_values}")

If you are using Google Colab, update the datasets library first to avoid errors:

pip install -U datasets

Citation

@inproceedings{locatello2019challenging,
  title={Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations},
  author={Locatello, Francesco and Bauer, Stefan and Lucic, Mario and Raetsch, Gunnar and Gelly, Sylvain and Sch{\"o}lkopf, Bernhard and Bachem, Olivier},
  booktitle={International Conference on Machine Learning},
  pages={4114--4124},
  year={2019}
}