---
license: cc-by-sa-4.0
size_categories:
- 100K<n<1M
---
# Procedural 3D Synthetic Shapes Dataset
## Overview
This dataset contains 152,508 procedurally synthesized 3D shapes, released to help reproduce the results of [Learning 3D Representations from Procedural 3D Programs](https://arxiv.org/abs/2411.17467). The shapes are created by a procedural 3D program that combines primitive shapes (e.g., cubes, spheres, and cylinders) and applies various transformations and augmentations to increase geometric diversity.
Our dataset builds on the recent work of [Xie et al. (2024)](https://desaixie.github.io/lrm-zero/), and we use the procedurally generated data in a self-supervised setting. Each 3D shape is represented by uniformly sampled surface points, making it a versatile resource for pretraining models on tasks such as masked point cloud completion, shape classification, and more.
<p align="center">
<img src="./figs/dataset-complexity.jpg" alt="Shape Complexity" style="width: 80%;">
</p>
*Figure 1. Examples of procedurally generated 3D shapes with varying geometric complexity. This dataset only provides data at complexity level (d). Please check out the GitHub repository if you want to render data at other complexity levels.*
## Key Features
- **Size:** 152,508 procedurally generated 3D shapes.
- **Representation:** Each shape is sampled with 8,192 surface points.
- **Primitives:** Shapes are composed of randomly sampled primitives (see the generation sketch after this list), including:
- Cubes
- Spheres
- Cylinders
- **Augmentations:**
- Boolean operations (e.g., difference, union)
- Wireframe conversion
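The procedural generator itself lives in the [Point-MAE-Zero](https://github.com/UVA-Computer-Vision-Lab/point-mae-zero) and [zeroverse](https://github.com/desaixie/zeroverse) repositories; the snippet below is only a minimal illustrative sketch of the recipe described above (random primitives, boolean operations, surface-point sampling). It assumes the `trimesh` library with a boolean backend (e.g., `manifold3d`), and all parameter ranges and primitive counts are placeholder choices, not the settings used to build this dataset.
```python
# Illustrative sketch only -- not the authors' generator. Assumes `trimesh`
# plus a boolean backend (e.g. manifold3d); parameter ranges are placeholders.
import numpy as np
import trimesh

def random_primitive(rng):
    """Sample one primitive (cube, sphere, or cylinder) with a random pose."""
    kind = rng.choice(["cube", "sphere", "cylinder"])
    if kind == "cube":
        mesh = trimesh.creation.box(extents=rng.uniform(0.2, 1.0, size=3))
    elif kind == "sphere":
        mesh = trimesh.creation.icosphere(radius=rng.uniform(0.1, 0.5))
    else:
        mesh = trimesh.creation.cylinder(radius=rng.uniform(0.1, 0.4),
                                         height=rng.uniform(0.2, 1.0))
    mesh.apply_transform(trimesh.transformations.random_rotation_matrix())
    mesh.apply_translation(rng.uniform(-0.5, 0.5, size=3))
    return mesh

def synthesize_shape(n_primitives=4, n_points=8192, seed=0):
    """Combine random primitives with boolean ops, then sample surface points."""
    rng = np.random.default_rng(seed)
    shape = random_primitive(rng)
    for _ in range(n_primitives - 1):
        other = random_primitive(rng)
        # Randomly union or subtract the next primitive (needs a boolean backend).
        if rng.choice(["union", "difference"]) == "union":
            shape = shape.union(other)
        else:
            shape = shape.difference(other)
    points, _ = trimesh.sample.sample_surface(shape, n_points)
    return np.asarray(points, dtype=np.float32)  # (n_points, 3)
```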
## Dataset Size and Performance
We evaluated the impact of dataset size on the **PB-T50-RS benchmark** for shape classification using Point-MAE-Zero. Our findings show that performance improves with larger dataset sizes but exhibits diminishing returns beyond a certain threshold.
<p align="center">
<img src="./figs/scaling_law.jpg" alt="Impact of Dataset Size" style="width: 80%;">
</p>
*Figure 2. The effect of dataset size on downstream shape classification performance. Note that our performance is on par with Point-MAE trained with ShapeNet at exactly the same scale.*
Additional experiments are available in [our paper](https://arxiv.org/abs/2411.17467).
## Dataset Format
The dataset is provided in a format ready for point cloud-based learning:
- **Surface Points:** Stored as `.npy` files.
- Under **data/result** there are **152,508** sub-directories, one per shape. Each directory contains **object.npy** (sampled surface points) and **object_aug.npy** (surface points after augmentation). For an example dataloader, please check out our [GitHub repository](https://github.com/UVA-Computer-Vision-Lab/point-mae-zero?tab=readme-ov-file); a minimal loading sketch follows below.
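The snippet below is only a quick orientation, not the repository's dataloader. It assumes the layout described above (`data/result/<shape_id>/object.npy` and `object_aug.npy`); the shape id `"000000"` is hypothetical, and the expected array shape `(8192, 3)` follows from the 8,192 sampled surface points per shape.
```python
# Minimal loading sketch (assumed layout: data/result/<shape_id>/{object.npy, object_aug.npy}).
import os
import numpy as np

def load_shape(root: str, shape_id: str, augmented: bool = False) -> np.ndarray:
    """Load one shape's surface points as a float32 array."""
    fname = "object_aug.npy" if augmented else "object.npy"
    points = np.load(os.path.join(root, "result", shape_id, fname))
    return points.astype(np.float32)  # expected shape: (8192, 3)

# Example usage ("000000" is a hypothetical shape id):
points = load_shape("data", "000000")
print(points.shape)
```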
## License
This dataset is licensed under the [CC BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/). You are free to share and adapt the dataset, provided appropriate credit is given and any derivative works are distributed under the same license. Please also check the license of [zeroverse](https://github.com/desaixie/zeroverse).
## Citation
If you find this dataset useful in your research, please cite our work:
```
@article{chen2024learning3drepresentationsprocedural,
  title={Learning 3D Representations from Procedural 3D Programs},
  author={Xuweiyi Chen and Zezhou Cheng},
  year={2024},
  eprint={2411.17467},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.17467},
}
```