# Dataset Card for Cars3D

## Dataset Description
The Cars3D dataset is a synthetic image dataset widely used for benchmarking algorithms in disentangled representation learning and unsupervised representation learning. It consists of 183 distinct car models (identity factor), each rendered under controlled variations of:
- Azimuth angle: 24 discrete values (rotating the car around its vertical axis)
- Elevation angle: 4 discrete values (camera viewpoint height)
Each image is rendered at a resolution of 128×128 pixels, stored as PNG files.
## Dataset Source
- Homepage: https://github.com/google-research/disentanglement_lib/
- License: Apache License 2.0
- Paper: Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. International Conference on Machine Learning (ICML), 2019.
## Dataset Structure

| Data | Description |
|---|---|
| Images | 17,568 PNG images |
| Label | [Car type, Elevation, Azimuth] |
| Car type | Car model index (0–182) |
| Elevation | Elevation angle index (0–3) |
| Azimuth | Azimuth angle index (0–23) |
### Why no train/test split?
The Cars3D dataset does not provide an official train/test split, and none is used in the original papers or Google's disentanglement_lib. This is because the dataset is designed for disentanglement research, where the goal is to learn structured representations of the underlying factors of variation (car type, elevation, azimuth), not to perform classification or generalization tasks.
The dataset is a full Cartesian product of all factors (183 car types × 24 azimuth angles × 4 elevation angles), so every possible factor combination is present. For disentanglement evaluation, models typically require access to the complete dataset to explore and assess factor-wise variations, rather than being trained on a subset and tested on unseen examples.
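The full Cartesian product can be enumerated directly. The sketch below confirms the 17,568 factor combinations and shows one way to decode a flat index back into factors; the row-major (car_type, elevation, azimuth) ordering here is an assumption for illustration, not necessarily the dataset's on-disk ordering.

```python
from itertools import product

# Factor sizes from the dataset description.
N_CARS, N_ELEVATION, N_AZIMUTH = 183, 4, 24

# Every possible factor combination is present in the dataset.
grid = list(product(range(N_CARS), range(N_ELEVATION), range(N_AZIMUTH)))
print(len(grid))  # 183 * 4 * 24 = 17568

def decode(idx):
    """Decode an assumed row-major flat index into (car_type, elevation, azimuth)."""
    car_type, rest = divmod(idx, N_ELEVATION * N_AZIMUTH)
    elevation, azimuth = divmod(rest, N_AZIMUTH)
    return car_type, elevation, azimuth

print(decode(0))      # (0, 0, 0)
print(decode(17567))  # (182, 3, 23), the last combination
```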
## Example Usage
Below is a quick example of how to load this dataset via the Hugging Face Datasets library.
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("randall-lab/cars3d", split="train", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]          # [0, 0, 0] -> [car_type: 0, elevation: 0, azimuth: 0]
car_type = example["car_type"]    # 0
elevation = example["elevation"]  # 0
azimuth = example["azimuth"]      # 0

image.show()  # Display the image
print(f"Label: {label}")
```
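In disentanglement experiments one often sweeps a single factor while holding the others fixed. On the loaded dataset this could be done with `dataset.filter`; the pure-Python sketch below shows the same selection logic on hypothetical stand-in records (only the field names follow the card's schema, no real images are involved).

```python
# Stand-in records mimicking the card's schema: one record per
# factor combination (hypothetical data, not the real dataset).
records = [
    {"car_type": c, "elevation": e, "azimuth": a}
    for c in range(183)
    for e in range(4)
    for a in range(24)
]

# Azimuth sweep: fix car_type and elevation, vary azimuth.
sweep = [r for r in records if r["car_type"] == 0 and r["elevation"] == 0]
print(len(sweep))  # 24 views, one per azimuth index
```

On the real dataset, the equivalent selection would be `dataset.filter(lambda ex: ex["car_type"] == 0 and ex["elevation"] == 0)`.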
If you are using Google Colab, update `datasets` first to avoid errors:

```bash
pip install -U datasets
```
## Citation

```bibtex
@inproceedings{locatello2019challenging,
  title={Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations},
  author={Locatello, Francesco and Bauer, Stefan and Lucic, Mario and Raetsch, Gunnar and Gelly, Sylvain and Sch{\"o}lkopf, Bernhard and Bachem, Olivier},
  booktitle={International Conference on Machine Learning},
  pages={4114--4124},
  year={2019}
}
```