Upload 3 files
- README.md +81 -0
- complex1.gif +3 -0
- mpi3d-complex.py +92 -0
README.md
ADDED
@@ -0,0 +1,81 @@
---
license: cc-by-4.0
---

# Dataset Card for MPI3D-complex

## Dataset Description

The **MPI3D-complex dataset** is a **real-world image dataset** of **complex everyday objects**, designed for benchmarking algorithms in **disentangled representation learning** and **robustness to object variability**. It is an **extension** of the broader MPI3D dataset suite, which also includes [synthetic toy](https://huggingface.co/datasets/randall-lab/mpi3d-toy), [realistic simulated](https://huggingface.co/datasets/randall-lab/mpi3d-realistic), and [real-world geometric shape](https://huggingface.co/datasets/randall-lab/mpi3d-real) variants.

The **complex version** was recorded using the same **robotic platform** as the other MPI3D datasets, but with a new set of **real-world objects** (coffee-cup, tennis-ball, croissant, beer-cup) and a reduced color set. Images were captured using **real cameras**, with realistic lighting and background conditions. This allows researchers to assess how well models trained on simpler domains generalize to more complex object appearances.

All images depict **real-world objects** under **controlled variations of 7 known factors**:

- Object color (4 values)
- Object shape (4 values)
- Object size (2 values)
- Camera height (3 values)
- Background color (3 values)
- Robotic arm horizontal axis (40 values)
- Robotic arm vertical axis (40 values)

The dataset contains **460,800 images** (4 × 4 × 2 × 3 × 3 × 40 × 40 factor combinations) at a resolution of **64×64 pixels**. The factors of variation are consistent with the other MPI3D datasets, but with **different factor sizes** for object shape and color.



## Dataset Source
- **Homepage**: [https://github.com/rr-learning/disentanglement_dataset](https://github.com/rr-learning/disentanglement_dataset)
- **License**: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)
- **Paper**: Muhammad Waleed Gondal et al. _On the Transfer of Inductive Bias from Simulation to the Real World: A New Disentanglement Dataset_. NeurIPS 2019.

## Dataset Structure
|Factors|Possible Values|
|---|---|
|object_color|yellow=0, green=1, olive=2, red=3|
|object_shape|coffee-cup=0, tennis-ball=1, croissant=2, beer-cup=3|
|object_size|small=0, large=1|
|camera_height|top=0, center=1, bottom=2|
|background_color|purple=0, sea green=1, salmon=2|
|horizontal_axis (DOF1)|0,...,39|
|vertical_axis (DOF2)|0,...,39|

Each image corresponds to a unique combination of these 7 factors. The images are stored in row-major order (fastest-changing factor is `vertical_axis`, slowest-changing factor is `object_color`).

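Because the images form the full Cartesian product of the factors in row-major order, a flat image index can be decoded into its 7 factor values with simple mixed-radix arithmetic. The snippet below is a minimal sketch of that decoding (it mirrors the logic in the bundled loader script; the helper name `index_to_factors` is only illustrative):

```python
import numpy as np

factor_sizes = [4, 4, 2, 3, 3, 40, 40]  # color, shape, size, height, background, DOF1, DOF2
# Stride of each factor = product of the sizes of all faster-changing factors to its right
factor_bases = np.cumprod([1] + factor_sizes[::-1])[::-1][1:]

def index_to_factors(index):
    """Decode a flat image index into its 7 factor indices (row-major order)."""
    return [int((index // base) % size) for base, size in zip(factor_bases, factor_sizes)]

print(index_to_factors(0))       # [0, 0, 0, 0, 0, 0, 0]
print(index_to_factors(460799))  # [3, 3, 1, 2, 2, 39, 39] (last image, all factors at maximum)
```
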
### Why no train/test split?
The MPI3D-complex dataset does not provide an official train/test split. It is designed for **representation learning research** and for testing **robustness to object complexity and appearance variation**. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.

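If your experiment still needs held-out data, you can create an ad-hoc split after loading, for example with the Datasets library's `train_test_split` method (a sketch; the 10% test fraction and the seed are arbitrary choices):

```python
from datasets import load_dataset

dataset = load_dataset("randall-lab/mpi3d-complex", split="train", trust_remote_code=True)
splits = dataset.train_test_split(test_size=0.1, seed=42)  # random 90/10 split
train_set, test_set = splits["train"], splits["test"]
```
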
## Example Usage
Below is a quick example of how to load this dataset via the Hugging Face Datasets library:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/mpi3d-complex", split="train", trust_remote_code=True)
# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]  # [object_color: 0, object_shape: 0, object_size: 0, camera_height: 0, background_color: 0, horizontal_axis: 0, vertical_axis: 0]
color = example["color"]  # 0
shape = example["shape"]  # 0
size = example["size"]  # 0
height = example["height"]  # 0
background = example["background"]  # 0
dof1 = example["dof1"]  # 0
dof2 = example["dof2"]  # 0

image.show()  # Display the image
print(f"Label (factors): {label}")
```
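Each factor is also exposed as its own column, so the dataset can be sliced factor-wise with the standard `Dataset.filter` API. A small sketch that keeps only the croissant images (shape index 2):

```python
# Keep only images whose object shape is croissant (shape index 2)
croissants = dataset.filter(lambda example: example["shape"] == 2)
print(len(croissants))  # 115200 (= 460800 / 4 shapes)
```
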
If you are using Colab, you should update the `datasets` library first to avoid errors:
```bash
pip install -U datasets
```
## Citation
```bibtex
@article{gondal2019transfer,
  title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
  author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}
```
complex1.gif
ADDED
(animated GIF preview, stored with Git LFS)
mpi3d-complex.py
ADDED
@@ -0,0 +1,92 @@
import datasets
import numpy as np
from PIL import Image

_MPI3D_URL = "https://drive.google.com/file/d/1Tp8eTdHxgUMtsZv5uAoYAbJR1BOa_OQm/view?usp=sharing"


class MPI3DComplex(datasets.GeneratorBasedBuilder):
    """MPI3D Complex dataset: 4x4x2x3x3x40x40 factor combinations, 64x64 RGB images."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=(
                "MPI3D Complex dataset: real-world images of complex everyday objects (coffee-cup, tennis-ball, "
                "croissant, beer-cup) manipulated by a robotic platform under controlled variations of 7 known factors. "
                "Images are 64x64 RGB (downsampled). "
                "Factors: object color (4), object shape (4), object size (2), camera height (3), "
                "background color (3), robotic arm DOF1 (40), robotic arm DOF2 (40). "
                "Images were captured using real cameras, introducing realistic noise and lighting conditions. "
                "The images are ordered as the Cartesian product of the factors in row-major order."
            ),
            features=datasets.Features(
                {
                    "image": datasets.Image(),  # (64, 64, 3)
                    "index": datasets.Value("int32"),  # flat index of the image
                    "label": datasets.Sequence(datasets.Value("int32")),  # 7 factor indices
                    "color": datasets.Value("int32"),  # object color index (0-3)
                    "shape": datasets.Value("int32"),  # object shape index (0-3)
                    "size": datasets.Value("int32"),  # object size index (0-1)
                    "height": datasets.Value("int32"),  # camera height index (0-2)
                    "background": datasets.Value("int32"),  # background color index (0-2)
                    "dof1": datasets.Value("int32"),  # robotic arm DOF1 index (0-39)
                    "dof2": datasets.Value("int32"),  # robotic arm DOF2 index (0-39)
                }
            ),
            supervised_keys=("image", "label"),
            homepage="https://github.com/rr-learning/disentanglement_dataset",
            license="Creative Commons Attribution 4.0 International",
            citation="""@article{gondal2019transfer,
                title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
                author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
                journal={Advances in Neural Information Processing Systems},
                volume={32},
                year={2019}
            }""",
        )

    def _split_generators(self, dl_manager):
        npz_path = dl_manager.download(_MPI3D_URL)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"npz_path": npz_path},
            ),
        ]

    def _generate_examples(self, npz_path):
        # Load the npz archive of images
        data = np.load(npz_path)
        images = data["images"]  # shape: (460800, 64, 64, 3)

        factor_sizes = np.array([4, 4, 2, 3, 3, 40, 40])  # factor sizes specific to the complex variant
        # Stride of each factor = product of the sizes of all faster-changing factors to its right
        factor_bases = np.cumprod([1] + list(factor_sizes[::-1]))[::-1][1:]

        def index_to_factors(index):
            factors = []
            for base, size in zip(factor_bases, factor_sizes):
                factor = (index // base) % size
                factors.append(int(factor))
            return factors

        # Iterate over images
        for idx in range(len(images)):
            img = images[idx]
            img_pil = Image.fromarray(img)

            factors = index_to_factors(idx)

            yield idx, {
                "image": img_pil,
                "index": idx,
                "label": factors,
                "color": factors[0],
                "shape": factors[1],
                "size": factors[2],
                "height": factors[3],
                "background": factors[4],
                "dof1": factors[5],
                "dof2": factors[6],
            }