
Dataset Card for MPI3D-real

Dataset Description

The MPI3D-real dataset is a real-world image dataset designed for benchmarking algorithms in disentangled representation learning and unsupervised representation learning. It is part of the broader MPI3D dataset suite, which also includes synthetic toy, realistic simulated and complex real-world variants.

The real version was recorded using a robotic platform that physically manipulates 3D-printed objects under controlled conditions. Images were captured using real cameras at multiple heights, with controlled lighting and background settings. This enables researchers to systematically study sim-to-real transfer and compare performance across different levels of simulation fidelity.

All images depict physical 3D objects manipulated by the robot arm, under controlled variations of 7 known factors:

  • Object color (6 values)
  • Object shape (6 values)
  • Object size (2 values)
  • Camera height (3 values)
  • Background color (3 values)
  • Robotic arm horizontal axis (40 values)
  • Robotic arm vertical axis (40 values)

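The seven factor cardinalities above multiply out to the total number of images. A quick sanity check (plain Python, no dataset download required):

```python
# Sanity check: the 7 factor cardinalities multiply to the dataset size.
factor_sizes = {
    "object_color": 6,
    "object_shape": 6,
    "object_size": 2,
    "camera_height": 3,
    "background_color": 3,
    "horizontal_axis": 40,
    "vertical_axis": 40,
}

total = 1
for n in factor_sizes.values():
    total *= n

print(total)  # 1036800
```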
The dataset contains 1,036,800 images at a resolution of 64×64 pixels (downsampled from the original resolution for benchmarking, as commonly used in the literature). All factors are identical to those used in the toy and realistic simulated versions of MPI3D, enabling direct comparisons between different domains.

Dataset Visualization

Dataset Source

Dataset Structure

| Factor | Possible Values |
|---|---|
| object_color | white=0, green=1, red=2, blue=3, brown=4, olive=5 |
| object_shape | cone=0, cube=1, cylinder=2, hexagonal=3, pyramid=4, sphere=5 |
| object_size | small=0, large=1 |
| camera_height | top=0, center=1, bottom=2 |
| background_color | purple=0, sea green=1, salmon=2 |
| horizontal_axis (DOF1) | 0, ..., 39 |
| vertical_axis (DOF2) | 0, ..., 39 |

Each image corresponds to a unique combination of these 7 factors. The images are stored in a row-major order (fastest-changing factor is vertical_axis, slowest-changing factor is object_color).
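Given this row-major ordering, a flat image index can be converted to and from the 7 factor values with simple mixed-radix arithmetic. The helpers below are an illustrative sketch (not part of the dataset loader), assuming the factor order color, shape, size, height, background, DOF1, DOF2 described above:

```python
# Row-major indexing: vertical_axis varies fastest, object_color slowest.
# Factor order: color, shape, size, height, background, dof1, dof2.
FACTOR_SIZES = [6, 6, 2, 3, 3, 40, 40]

def factors_to_index(factors):
    """Map a 7-tuple of factor values to the flat image index."""
    idx = 0
    for value, size in zip(factors, FACTOR_SIZES):
        idx = idx * size + value
    return idx

def index_to_factors(idx):
    """Invert the mapping: recover the 7 factor values from a flat index."""
    factors = []
    for size in reversed(FACTOR_SIZES):
        factors.append(idx % size)
        idx //= size
    return list(reversed(factors))

# The all-zeros combination is the first image; the round trip is lossless.
assert factors_to_index([0, 0, 0, 0, 0, 0, 0]) == 0
assert index_to_factors(factors_to_index([2, 1, 0, 1, 2, 10, 39])) == [2, 1, 0, 1, 2, 10, 39]
```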

Why no train/test split?

The MPI3D-real dataset does not provide an official train/test split. It is designed for representation learning research, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.
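Because every factor combination is present, one can sweep a single factor while holding the other six fixed, which is how disentanglement evaluations typically probe a representation. A minimal sketch of such a traversal, using the factor order and cardinalities listed above (`traverse_factor` is a hypothetical helper for illustration, not part of any loader API):

```python
# Factor order: color, shape, size, height, background, dof1, dof2.
FACTOR_SIZES = [6, 6, 2, 3, 3, 40, 40]

def traverse_factor(base_factors, factor_idx):
    """Yield every factor combination obtained by sweeping one factor
    while holding the remaining six fixed at base_factors."""
    for value in range(FACTOR_SIZES[factor_idx]):
        combo = list(base_factors)
        combo[factor_idx] = value
        yield combo

# Sweep object_color (factor 0) from a fixed base configuration:
sweep = list(traverse_factor([0, 1, 0, 2, 1, 20, 20], 0))
print(len(sweep))  # 6 combinations, one per object color
```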

Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library:

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/mpi3d-real", split="train", trust_remote_code=True)
# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]  # [object_color: 0, object_shape: 0, object_size: 0, camera_height: 0, background_color: 0, horizontal_axis: 0, vertical_axis: 0]
color = example["color"] # 0 
shape = example["shape"] # 0
size = example["size"] # 0
height = example["height"] # 0
background = example["background"] # 0 
dof1 = example["dof1"] # 0 
dof2 = example["dof2"] # 0

image.show()  # Display the image
print(f"Label (factors): {label}")

If you are using Google Colab, you should first update the datasets library to avoid errors:

pip install -U datasets

Citation

@article{gondal2019transfer,
  title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
  author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}