---
license: cc-by-4.0
---

# Dataset Card for MPI3D-toy

## Dataset Description

The **MPI3D-toy dataset** is a **synthetically rendered image dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**. It is part of the broader MPI3D dataset suite, which also includes **realistic simulated** and **real-world** variants.

The **toy version** was generated with a **simplified computer-graphics renderer** (the Quicksilver hardware renderer in Autodesk 3ds Max), based on CAD models of a physical robotic setup. This allows researchers to systematically study **sim-to-real transfer** and the effect of simulation fidelity.

All images depict **physical 3D objects** that would be manipulated by a robotic platform, under **controlled variations of 7 known factors**:

- Object color (6 values)
- Object shape (6 values)
- Object size (2 values)
- Camera height (3 values)
- Background color (3 values)
- Robotic arm horizontal axis (40 values)
- Robotic arm vertical axis (40 values)

The dataset contains **1,036,800 images** at a resolution of **64×64 pixels**, one for every combination in the full Cartesian product of these factors (6 × 6 × 2 × 3 × 3 × 40 × 40). All factors are **identical** to those used in the realistic and real-world versions of MPI3D, enabling direct comparisons across levels of simulation fidelity.

## Dataset Source

- **Homepage**: [https://github.com/rr-learning/disentanglement_dataset](https://github.com/rr-learning/disentanglement_dataset)
- **License**: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)
- **Paper**: Muhammad Waleed Gondal et al. _On the Transfer of Inductive Bias from Simulation to the Real World: A New Disentanglement Dataset_. NeurIPS 2019.
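
If you work with the raw archive distributed from the homepage (a single `.npz` file) rather than a hosted loader, it can be read directly with NumPy. A minimal sketch; the array key and shape below are assumptions about the official release, so verify them against your download:

```python
import numpy as np

# Load the raw archive downloaded from the homepage (file name may differ).
data = np.load("mpi3d_toy.npz")
images = data["images"]  # expected shape: (1036800, 64, 64, 3), dtype uint8

# Reshape into the full factor grid, slowest- to fastest-changing factor:
# (color, shape, size, height, background, dof1, dof2, H, W, C)
grid = images.reshape(6, 6, 2, 3, 3, 40, 40, 64, 64, 3)

# Example: all 40x40 arm positions for one fixed object/scene configuration.
sheet = grid[0, 0, 0, 0, 0]  # shape: (40, 40, 64, 64, 3)
```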

## Dataset Structure

| Factor | Possible Values |
|---|---|
| object_color | white=0, green=1, red=2, blue=3, brown=4, olive=5 |
| object_shape | cone=0, cube=1, cylinder=2, hexagonal=3, pyramid=4, sphere=5 |
| object_size | small=0, large=1 |
| camera_height | top=0, center=1, bottom=2 |
| background_color | purple=0, sea green=1, salmon=2 |
| horizontal_axis (DOF1) | 0, ..., 39 |
| vertical_axis (DOF2) | 0, ..., 39 |

Each image corresponds to a unique combination of these 7 factors. Images are stored in **row-major order**: `vertical_axis` is the fastest-changing factor and `object_color` the slowest.
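
This layout means the flat index of any factor combination can be computed directly. A minimal sketch (the helper name is ours, not part of any released API):

```python
import numpy as np

# Factor sizes, ordered from slowest- to fastest-changing.
FACTOR_SIZES = (6, 6, 2, 3, 3, 40, 40)

def factors_to_index(color, shape, size, height, background, dof1, dof2):
    """Map a 7-tuple of factor values to the flat image index."""
    return int(np.ravel_multi_index(
        (color, shape, size, height, background, dof1, dof2), FACTOR_SIZES))

# Example: a large red cylinder, center camera height, salmon background,
# both arm joints at position 10.
print(factors_to_index(2, 2, 1, 1, 2, 10, 10))
```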

### Why no train/test split?

The MPI3D-toy dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled, interpretable latent factors. Since the dataset is the complete Cartesian product of all factor combinations, models typically need access to the full dataset to explore factor-wise variations; if a held-out set is required, it has to be constructed by the user, for example as sketched below.
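
A minimal sketch of a user-defined split with the Hugging Face Datasets API (the repository id is a placeholder, as in the usage example below):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual namespace.
dataset = load_dataset("your-hf-namespace/mpi3d_toy", split="train", trust_remote_code=True)

# Simple random 90/10 split. For systematic-generalization studies you may
# instead want to hold out entire factor values (e.g. one object shape).
splits = dataset.train_test_split(test_size=0.1, seed=0)
train_ds, test_ds = splits["train"], splits["test"]
```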

## Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Load the dataset (replace the namespace with the actual repository id)
dataset = load_dataset("your-hf-namespace/mpi3d_toy", split="train", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]  # [object_color, object_shape, object_size, camera_height, background_color, horizontal_axis, vertical_axis]
color = example["color"]            # object_color, e.g. 0
shape = example["shape"]            # object_shape, e.g. 0
size = example["size"]              # object_size, e.g. 0
height = example["height"]          # camera_height, e.g. 0
background = example["background"]  # background_color, e.g. 0
dof1 = example["dof1"]              # horizontal_axis, e.g. 0
dof2 = example["dof2"]              # vertical_axis, e.g. 0

image.show()  # Display the image (PIL)
print(f"Label (factors): {label}")
```

If you are running in Google Colab, update the `datasets` library first to avoid loading errors:

```bash
pip install -U datasets
```
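
Because every factor is exposed as its own column, subsets can be selected by filtering on those columns; for example (column names assumed as in the snippet above):

```python
# Select all small spheres (shape=5, size=0), regardless of the other factors.
spheres = dataset.filter(lambda ex: ex["shape"] == 5 and ex["size"] == 0)
print(len(spheres))  # 6 * 3 * 3 * 40 * 40 = 86,400 images
```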

## Citation

```bibtex
@article{gondal2019transfer,
  title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
  author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}
```