README.md
---
license: apache-2.0
---

# Dataset Card for 3dshapes

## Dataset Description

The **3dshapes dataset** is a **synthetic 3D object image dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**.

It was introduced in the **FactorVAE** paper [[Kim & Mnih, ICML 2018](https://proceedings.mlr.press/v80/kim18b.html)] as one of the standard testbeds for learning interpretable and disentangled latent factors. The dataset consists of images of **procedurally generated 3D scenes** in which 6 **ground-truth independent factors of variation** are explicitly controlled:

- **Floor color** (hue)
- **Wall color** (hue)
- **Object color** (hue)
- **Object size** (scale)
- **Object shape** (categorical)
- **Object orientation** (rotation angle)

**3dshapes is generated as the full Cartesian product of all factor combinations**, making it well suited to systematic evaluation of disentanglement. The dataset contains **480,000 images** at a resolution of **64×64 pixels**, covering **every possible combination of the 6 factors exactly once**. The images are stored in **row-major order** according to the factor sweep, enabling precise control over factor-based evaluation.
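The image count follows directly from the factor sizes:

```
10 × 10 × 10 × 8 × 4 × 15 = 480,000
```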


## Dataset Source
- **Homepage**: [https://github.com/deepmind/3dshapes-dataset](https://github.com/deepmind/3dshapes-dataset)
- **License**: [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
- **Paper**: Hyunjik Kim & Andriy Mnih. _Disentangling by Factorising_. ICML 2018.

## Dataset Structure

|Factor|Possible values|
|---|---|
|floor_color (hue)|10 values linearly spaced in [0, 1]|
|wall_color (hue)|10 values linearly spaced in [0, 1]|
|object_color (hue)|10 values linearly spaced in [0, 1]|
|scale|8 values linearly spaced in [0.75, 1.25]|
|shape|4 values: 0, 1, 2, 3|
|orientation|15 values linearly spaced in [-30, 30] (degrees)|

Each image corresponds to a unique combination of these **6 factors**. The images are stored in **row-major order** (the fastest-changing factor is `orientation`, the slowest-changing factor is `floor_color`).
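Because of this layout, the flat image index and the 6 factor indices are interconvertible. Below is a minimal NumPy sketch of the conversion (the same logic the loading script uses; the helper names here are illustrative, not part of the dataset API):

```python
import numpy as np

# Factor sizes in storage order: floor, wall, object, scale, shape, orientation
FACTOR_SIZES = np.array([10, 10, 10, 8, 4, 15])
# Row-major strides: product of the sizes of all faster-changing factors
BASES = np.concatenate([np.cumprod(FACTOR_SIZES[::-1])[::-1][1:], [1]])  # [48000, 4800, 480, 60, 15, 1]

def factors_to_index(factors):
    """Flat image index for a list of 6 factor indices."""
    return int(np.dot(factors, BASES))

def index_to_factors(index):
    """Inverse: recover the 6 factor indices from a flat image index."""
    return [int((index // base) % size) for base, size in zip(BASES, FACTOR_SIZES)]

assert factors_to_index(index_to_factors(123456)) == 123456
```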

### Why no train/test split?

The 3dshapes dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a **complete Cartesian product of all factor combinations**, models typically require access to the full dataset to explore factor-wise variations.
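If your experimental setup nevertheless requires held-out data, you can create an ad-hoc split with the standard `datasets` API; the 90/10 ratio below is an arbitrary choice, not an official split:

```python
from datasets import load_dataset

dataset = load_dataset("randall-lab/shapes3d", split="train", trust_remote_code=True)

# Random 90/10 split; seed fixed for reproducibility
splits = dataset.train_test_split(test_size=0.1, seed=0)
train_set, test_set = splits["train"], splits["test"]
```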
41 |
+
|
42 |
+
## Example Usage
|
43 |
+
Below is a quick example of how to load this dataset via the Hugging Face Datasets library:
|
44 |
+
```python
|
45 |
+
from datasets import load_dataset
|
46 |
+
|
47 |
+
# Load the dataset
|
48 |
+
dataset = load_dataset("YOUR_HF_REPO/shapes3d", split="train", trust_remote_code=True)
|
49 |
+
|
50 |
+
# Access a sample from the dataset
|
51 |
+
example = dataset[0]
|
52 |
+
image = example["image"]
|
53 |
+
label = example["label"] # Value labels: [floor_hue, wall_hue, object_hue, scale, shape, orientation]
|
54 |
+
label_index = example["label_index"] # Index labels: [floor_idx, wall_idx, object_idx, scale_idx, shape_idx, orientation_idx]
|
55 |
+
|
56 |
+
# Label Value
|
57 |
+
floor_value = example["floor"] # 0-1
|
58 |
+
wall_value = example["wall"] # 0-1
|
59 |
+
object_value = example["object"] # 0-1
|
60 |
+
scale_value = example["scale"] # 0.75-1.25
|
61 |
+
shape_value = example["shape"] # 0,1,2,3
|
62 |
+
orientation_value = example["orientation"] # -30 - 30
|
63 |
+
|
64 |
+
# Label index
|
65 |
+
floor_idx = example["floor_idx"] # 0-9
|
66 |
+
wall_idx = example["wall_idx"] # 0-9
|
67 |
+
object_idx = example["object_idx"] # 0-9
|
68 |
+
scale_idx = example["scale_idx"] # 0-7
|
69 |
+
shape_idx = example["shape_idx"] # 0-3
|
70 |
+
orientation_idx = example["orientation_idx"] # 0-14
|
71 |
+
|
72 |
+
image.show() # Display the image
|
73 |
+
print(f"Label (factor values): {label}")
|
74 |
+
print(f"Label (factor indices): {label_index}")
|
75 |
+
```
|
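Because the images are stored in row-major factor order, the 15 orientation steps of an otherwise identical scene occupy consecutive indices. Continuing from the snippet above, a minimal sketch:

```python
# First scene: all other factor indices fixed at 0, orientation sweeping its 15 values
sweep = dataset.select(range(15))
print([round(ex["orientation"], 2) for ex in sweep])  # 15 values spanning -30 to 30
```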

If you are running in Colab, update `datasets` first to avoid compatibility errors:

```
pip install -U datasets
```

## Citation

```
@InProceedings{pmlr-v80-kim18b,
  title = {Disentangling by Factorising},
  author = {Kim, Hyunjik and Mnih, Andriy},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {2649--2658},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/kim18b/kim18b.pdf},
  url = {https://proceedings.mlr.press/v80/kim18b.html},
  abstract = {We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon beta-VAE by providing a better trade-off between disentanglement and reconstruction quality and being more robust to the number of training iterations. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.}
}
```
shapes3d.py
import datasets
import numpy as np
from PIL import Image

_SHAPES3D_URL = "https://huggingface.co/datasets/randall-lab/shapes3d/resolve/main/shapes3d.npz"


class Shapes3D(datasets.GeneratorBasedBuilder):
    """Shapes3D dataset: 10x10x10x8x4x15 factor combinations, 64x64 RGB images."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=(
                "Shapes3D dataset: procedurally generated images of 3D shapes with 6 independent factors of variation. "
                "Commonly used for disentangled representation learning. "
                "Factors: floor hue (10), wall hue (10), object hue (10), scale (8), shape (4), orientation (15). "
                "Images are stored as the Cartesian product of the factors in row-major order."
            ),
            features=datasets.Features(
                {
                    "image": datasets.Image(),  # (64, 64, 3)
                    "index": datasets.Value("int32"),  # flat index of the image
                    "label": datasets.Sequence(datasets.Value("float32")),  # 6 factor values (continuous)
                    "label_index": datasets.Sequence(datasets.Value("int32")),  # 6 factor indices
                    "floor": datasets.Value("float32"),  # floor hue (0-1)
                    "wall": datasets.Value("float32"),  # wall hue (0-1)
                    "object": datasets.Value("float32"),  # object hue (0-1)
                    "scale": datasets.Value("float32"),  # object scale (0.75-1.25)
                    "shape": datasets.Value("float32"),  # object shape (0-3)
                    "orientation": datasets.Value("float32"),  # orientation (-30 to 30)
                    "floor_idx": datasets.Value("int32"),
                    "wall_idx": datasets.Value("int32"),
                    "object_idx": datasets.Value("int32"),
                    "scale_idx": datasets.Value("int32"),
                    "shape_idx": datasets.Value("int32"),
                    "orientation_idx": datasets.Value("int32"),
                }
            ),
            supervised_keys=("image", "label"),
            homepage="https://github.com/google-deepmind/3dshapes-dataset/",
            license="apache-2.0",
            citation="""@InProceedings{pmlr-v80-kim18b,
              title = {Disentangling by Factorising},
              author = {Kim, Hyunjik and Mnih, Andriy},
              booktitle = {Proceedings of the 35th International Conference on Machine Learning},
              pages = {2649--2658},
              year = {2018},
              editor = {Dy, Jennifer and Krause, Andreas},
              volume = {80},
              series = {Proceedings of Machine Learning Research},
              month = {10--15 Jul},
              publisher = {PMLR},
              pdf = {http://proceedings.mlr.press/v80/kim18b/kim18b.pdf},
              url = {https://proceedings.mlr.press/v80/kim18b.html}
            }""",
        )

    def _split_generators(self, dl_manager):
        npz_path = dl_manager.download(_SHAPES3D_URL)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"npz_path": npz_path},
            ),
        ]

    def _generate_examples(self, npz_path):
        # Load the full npz archive (images and continuous factor labels)
        data = np.load(npz_path)
        images = data["images"]  # (480000, 64, 64, 3)
        labels = data["labels"]  # (480000, 6)

        # Factor sizes in storage order (from README / paper)
        factor_sizes = np.array([10, 10, 10, 8, 4, 15])
        # Row-major strides per factor: [48000, 4800, 480, 60, 15, 1]
        factor_bases = np.cumprod([1] + list(factor_sizes[::-1]))[::-1][1:]

        def index_to_factors(index):
            # Recover the 6 discrete factor indices from the flat image index
            factors = []
            for base, size in zip(factor_bases, factor_sizes):
                factors.append(int((index // base) % size))
            return factors

        # Iterate over images
        for idx in range(len(images)):
            img_pil = Image.fromarray(images[idx])

            label_value = labels[idx].tolist()
            label_index = index_to_factors(idx)

            yield idx, {
                "image": img_pil,
                "index": idx,
                "label": label_value,
                "label_index": label_index,
                "floor": label_value[0],
                "wall": label_value[1],
                "object": label_value[2],
                "scale": label_value[3],
                "shape": label_value[4],
                "orientation": label_value[5],
                "floor_idx": label_index[0],
                "wall_idx": label_index[1],
                "object_idx": label_index[2],
                "scale_idx": label_index[3],
                "shape_idx": label_index[4],
                "orientation_idx": label_index[5],
            }