haodoz0118 committed
Commit 01522d1 · verified · 1 Parent(s): 668ae07

Upload 3 files

Files changed (3):
  1. README.md +80 -0
  2. mpi3d-real.py +93 -0
  3. real1.gif +3 -0
README.md ADDED
---
license: cc-by-4.0
---
# Dataset Card for MPI3D-real

## Dataset Description

The **MPI3D-real dataset** is a **real-world image dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**. It is part of the broader MPI3D dataset suite, which also includes **synthetic toy** and **realistic simulated** variants.

The **real version** was recorded using a **robotic platform** that physically manipulates **3D-printed objects** under controlled conditions. Images were captured using **real cameras** at multiple heights, with controlled lighting and background settings. This enables researchers to systematically study **sim-to-real transfer** and compare performance across different levels of simulation fidelity.

All images depict **physical 3D objects** manipulated by the robot arm, under **controlled variations of 7 known factors**:

- Object color (6 values)
- Object shape (6 values)
- Object size (2 values)
- Camera height (3 values)
- Background color (3 values)
- Robotic arm horizontal axis (40 values)
- Robotic arm vertical axis (40 values)

The dataset contains **1,036,800 images** at a resolution of **64×64 pixels** (downsampled from the original resolution for benchmarking, as commonly used in the literature). All factors are **identical** to those used in the toy and realistic simulated versions of MPI3D, enabling direct comparisons between different domains.
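The image count follows directly from the Cartesian product of the factor sizes; a quick arithmetic check (plain Python, no dataset download needed):

```python
import math

# Factor sizes: color, shape, size, camera height, background, DOF1, DOF2
factor_sizes = [6, 6, 2, 3, 3, 40, 40]

total = math.prod(factor_sizes)
print(total)  # 1036800
```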
![Dataset Visualization](https://huggingface.co/datasets/randall-lab/mpi3d-real/resolve/main/real1.gif)

## Dataset Source
- **Homepage**: [https://github.com/rr-learning/disentanglement_dataset](https://github.com/rr-learning/disentanglement_dataset)
- **License**: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)
- **Paper**: Muhammad Waleed Gondal et al. _On the Transfer of Inductive Bias from Simulation to the Real World: A New Disentanglement Dataset_. NeurIPS 2019.

## Dataset Structure

|Factors|Possible Values|
|---|---|
|object_color|white=0, green=1, red=2, blue=3, brown=4, olive=5|
|object_shape|cone=0, cube=1, cylinder=2, hexagonal=3, pyramid=4, sphere=5|
|object_size|small=0, large=1|
|camera_height|top=0, center=1, bottom=2|
|background_color|purple=0, sea green=1, salmon=2|
|horizontal_axis (DOF1)|0,...,39|
|vertical_axis (DOF2)|0,...,39|

Each image corresponds to a unique combination of these 7 factors. The images are stored in **row-major order**: the fastest-changing factor is `vertical_axis` and the slowest-changing factor is `object_color`.
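Given this ordering, a flat dataset index can be decoded into its seven factor values with integer division and modulo. A minimal sketch (pure Python; the helper name `index_to_factors` is illustrative, not part of the dataset API):

```python
# Factor sizes in slowest-to-fastest order:
# color, shape, size, camera height, background, DOF1, DOF2
factor_sizes = [6, 6, 2, 3, 3, 40, 40]

# Row-major bases: how many consecutive images one step of each factor spans
bases = []
b = 1
for s in reversed(factor_sizes):
    bases.append(b)
    b *= s
bases.reverse()  # [172800, 28800, 14400, 4800, 1600, 40, 1]

def index_to_factors(index):
    """Decode a flat index into the 7 factor values."""
    return [(index // base) % size for base, size in zip(bases, factor_sizes)]

print(index_to_factors(0))        # [0, 0, 0, 0, 0, 0, 0]
print(index_to_factors(41))       # [0, 0, 0, 0, 0, 1, 1]
print(index_to_factors(1036799))  # [5, 5, 1, 2, 2, 39, 39]
```

For example, index 41 corresponds to one step of DOF1 and one step of DOF2, with every other factor at 0.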
### Why no train/test split?
The MPI3D-real dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a complete Cartesian product of all factor combinations, models typically require access to the full dataset to explore factor-wise variations.

## Example Usage
Below is a quick example of how to load this dataset via the Hugging Face Datasets library:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/mpi3d-real", split="train", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]  # [object_color: 0, object_shape: 0, object_size: 0, camera_height: 0, background_color: 0, horizontal_axis: 0, vertical_axis: 0]
color = example["color"]  # 0
shape = example["shape"]  # 0
size = example["size"]  # 0
height = example["height"]  # 0
background = example["background"]  # 0
dof1 = example["dof1"]  # 0
dof2 = example["dof2"]  # 0

image.show()  # Display the image
print(f"Label (factors): {label}")
```
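Conversely, to fetch the image for a specific factor combination, you can compute its flat index from the row-major ordering described above and pass it to `dataset[...]`. A minimal sketch (the helper name `factors_to_index` is illustrative, not part of the dataset API):

```python
# Factor sizes in slowest-to-fastest order:
# color, shape, size, camera height, background, DOF1, DOF2
factor_sizes = [6, 6, 2, 3, 3, 40, 40]

def factors_to_index(factors):
    """Encode 7 factor values into the flat row-major dataset index."""
    index = 0
    for value, size in zip(factors, factor_sizes):
        index = index * size + value
    return index

# e.g. red large cube, center camera, sea-green background, arm at (20, 10)
idx = factors_to_index([2, 1, 1, 1, 1, 20, 10])
print(idx)  # 396010
# example = dataset[idx]  # would fetch exactly that configuration
```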
If you are running in Google Colab, upgrade the `datasets` library first to avoid loading errors:
```bash
pip install -U datasets
```
## Citation
```bibtex
@article{gondal2019transfer,
  title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
  author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}
```
mpi3d-real.py ADDED
import datasets
import numpy as np
from PIL import Image

_MPI3D_URL = "https://huggingface.co/datasets/waleedgondal/mpi3d/resolve/main/mpi3d_real.npz"


class MPI3DReal(datasets.GeneratorBasedBuilder):
    """MPI3D Real dataset: 6x6x2x3x3x40x40 factor combinations, 64x64 RGB images."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=(
                "MPI3D Real dataset: real-world images of physical 3D objects captured using "
                "a robotic platform under controlled variations of 7 known factors of variation. "
                "Images are 64x64 RGB (downsampled from original resolution for benchmarking). "
                "Factors: object color (6), object shape (6), object size (2), camera height (3), "
                "background color (3), robotic arm DOF1 (40), robotic arm DOF2 (40). "
                "Images were recorded using real cameras and depict actual 3D-printed objects manipulated by a robot. "
                "The images are ordered as the Cartesian product of the factors in row-major order."
            ),
            features=datasets.Features(
                {
                    "image": datasets.Image(),  # (64, 64, 3)
                    "index": datasets.Value("int32"),  # flat index of the image
                    "label": datasets.Sequence(datasets.Value("int32")),  # 7 factor indices
                    "color": datasets.Value("int32"),  # object color index (0-5)
                    "shape": datasets.Value("int32"),  # object shape index (0-5)
                    "size": datasets.Value("int32"),  # object size index (0-1)
                    "height": datasets.Value("int32"),  # camera height index (0-2)
                    "background": datasets.Value("int32"),  # background color index (0-2)
                    "dof1": datasets.Value("int32"),  # robotic arm DOF1 index (0-39)
                    "dof2": datasets.Value("int32"),  # robotic arm DOF2 index (0-39)
                }
            ),
            supervised_keys=("image", "label"),
            homepage="https://github.com/rr-learning/disentanglement_dataset",
            license="Creative Commons Attribution 4.0 International",
            citation="""@article{gondal2019transfer,
  title={On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset},
  author={Gondal, Muhammad Waleed and Wuthrich, Manuel and Miladinovic, Djordje and Locatello, Francesco and Breidt, Martin and Volchkov, Valentin and Akpo, Joel and Bachem, Olivier and Sch{\"o}lkopf, Bernhard and Bauer, Stefan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}""",
        )

    def _split_generators(self, dl_manager):
        npz_path = dl_manager.download(_MPI3D_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"npz_path": npz_path},
            ),
        ]

    def _generate_examples(self, npz_path):
        # Load the full image array from the npz archive
        data = np.load(npz_path)
        images = data["images"]  # shape: (1036800, 64, 64, 3)

        factor_sizes = np.array([6, 6, 2, 3, 3, 40, 40])
        # Row-major bases: number of consecutive images spanned by one step of each factor
        factor_bases = np.cumprod([1] + list(factor_sizes[::-1]))[::-1][1:]

        def index_to_factors(index):
            return [int((index // base) % size) for base, size in zip(factor_bases, factor_sizes)]

        # Iterate over images, decoding each flat index into its 7 factor values
        for idx in range(len(images)):
            factors = index_to_factors(idx)
            yield idx, {
                "image": Image.fromarray(images[idx]),
                "index": idx,
                "label": factors,
                "color": factors[0],
                "shape": factors[1],
                "size": factors[2],
                "height": factors[3],
                "background": factors[4],
                "dof1": factors[5],
                "dof2": factors[6],
            }
real1.gif ADDED

Git LFS Details

  • SHA256: 02921e72d51c304322b4c1a011483a1df18ac74b365be3345e9024fb0ad5acb0
  • Pointer size: 131 Bytes
  • Size of remote file: 524 kB