---
dataset_info:
  features:
  - name: voxels_occupancy
    dtype:
      array4_d:
        shape:
        - 1
        - 32
        - 32
        - 32
        dtype: float32
  - name: voxels_colors
    dtype:
      array4_d:
        shape:
        - 3
        - 32
        - 32
        - 32
        dtype: float32
  - name: model_id
    dtype: string
  - name: name
    dtype: string
  - name: categories
    sequence: string
  - name: tags
    sequence: string
  - name: metadata
    struct:
    - name: is_augmented
      dtype: bool
    - name: original_file
      dtype: string
    - name: num_occupied
      dtype: int32
    - name: split
      dtype: string
  splits:
  - name: train
    num_bytes: 207426028571
    num_examples: 515177
  - name: test
    num_bytes: 10942650293
    num_examples: 27115
  download_size: 4338398625
  dataset_size: 218368678864
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# BlockGen-3D Dataset

## Overview

BlockGen-3D is a large-scale dataset of voxelized 3D models with accompanying text descriptions, specifically designed for text-to-3D generation tasks. By processing and voxelizing models from the [Objaverse dataset](https://huggingface.co/datasets/allenai/objaverse), we have created a standardized representation that is particularly suitable for training 3D diffusion models.

Our dataset provides two types of representations: shape-only models stored as binary occupancy grids, and colored models that additionally carry per-voxel RGB values (together with occupancy, these form an RGBA representation). Each model is represented on a 32×32×32 voxel grid.
The dataset contains 542,292 total samples, with:
- 515,177 training samples
- 27,115 test samples

## Data Format and Structure

Every sample in our dataset consists of a voxel grid with accompanying metadata. The voxel grid uses a consistent 32³ resolution, with values structured as follows:

For shape-only data:
- `voxels_occupancy`: A binary grid of shape [1, 32, 32, 32] where each voxel is either empty (0) or occupied (1)

For colored data:
- `voxels_colors`: RGB color information with shape [3, 32, 32, 32], normalized to [0, 1]
- `voxels_occupancy`: The occupancy mask as described above

Each sample also includes rich metadata:
- Text descriptions derived from the original Objaverse dataset
- Categorization and tagging information
- Augmentation status and original file information
- Number of occupied voxels for quick filtering or analysis
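
As a quick sanity check, the sketch below verifies that a sample matches this layout. It is illustrative only and assumes a sample fetched as in the loading example in the next section, with the arrays converted to NumPy:

```python
import numpy as np

def check_layout(sample):
    """Verify that a sample matches the documented voxel layout (sketch)."""
    occupancy = np.asarray(sample["voxels_occupancy"], dtype=np.float32)
    assert occupancy.shape == (1, 32, 32, 32)
    assert ((occupancy == 0) | (occupancy == 1)).all()  # binary occupancy

    if sample["voxels_colors"] is not None:  # colored sample
        colors = np.asarray(sample["voxels_colors"], dtype=np.float32)
        assert colors.shape == (3, 32, 32, 32)
        assert colors.min() >= 0.0 and colors.max() <= 1.0  # normalized RGB
```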

## Using the Dataset

### Basic Loading and Inspection

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("PeterAM4/blockgen-3d")

# Access splits
train_dataset = dataset["train"]
test_dataset = dataset["test"]

# Examine a sample
sample = train_dataset[0]
print(f"Sample description: {sample['name']}")
print(f"Number of occupied voxels: {sample['metadata']['num_occupied']}")
```
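
By default, the `datasets` library returns the voxel arrays as nested Python lists. If you prefer to receive NumPy arrays for the array columns, one option is to set the format explicitly (a minimal sketch; alternatively, convert per sample as `process_voxel_data` below does):

```python
# Return the two voxel columns as NumPy arrays instead of nested lists,
# while keeping the remaining columns (name, metadata, ...) unchanged
train_dataset.set_format(
    "numpy",
    columns=["voxels_occupancy", "voxels_colors"],
    output_all_columns=True,
)
```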

### Working with Voxel Data

When working with the voxel data, you'll often want to handle both shape-only and colored samples uniformly. Here's a utility function that helps standardize the processing:

```python
import torch

def process_voxel_data(sample, default_color=[0.5, 0.5, 0.5]):
    """Process voxel data with default colors for shape-only data.
    
    Args:
        sample: Dataset sample
        default_color: Default RGB values for shape-only models (default: gray)
    
    Returns:
        torch.Tensor: RGBA data with shape [4, 32, 32, 32]
    """
    # as_tensor accepts both NumPy arrays and the nested lists that
    # `datasets` returns when no format has been set
    occupancy = torch.as_tensor(sample['voxels_occupancy'], dtype=torch.float32)
    
    if sample['voxels_colors'] is not None:
        # Use provided colors for RGBA samples
        colors = torch.as_tensor(sample['voxels_colors'], dtype=torch.float32)
    else:
        # Apply default color to shape-only samples
        default_color = torch.tensor(default_color)[:, None, None, None]
        colors = default_color.repeat(1, 32, 32, 32) * occupancy
    
    # Combine into RGBA format
    rgba = torch.cat([colors, occupancy], dim=0)
    return rgba
```
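
For example, applying it to a single sample (reusing `train_dataset` from the loading example above):

```python
rgba = process_voxel_data(train_dataset[0])
print(rgba.shape)                  # torch.Size([4, 32, 32, 32])
print(int(rgba[3].sum().item()))   # occupied-voxel count; channel 3 is the occupancy mask
```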

### Batch Processing with DataLoader

For training deep learning models, you'll likely want to wrap the dataset in PyTorch's `DataLoader`. For basic text-to-3D generation, you can use the model names as prompts with a simple collate function:
```python
from torch.utils.data import DataLoader

def collate_fn(batch):
    """Simple collate function using basic text prompts."""
    return {
        'voxels': torch.stack([process_voxel_data(x) for x in batch]),
        'prompt': [x['name'] for x in batch]  # Use the name field as the prompt; richer prompts can be built from categories and tags
    }

# Create DataLoader
dataloader = DataLoader(
    train_dataset,
    batch_size=32,
    shuffle=True,
    collate_fn=collate_fn
)
```
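
A quick way to confirm the batch shapes, using the pieces defined above:

```python
# Pull one batch and check the stacked RGBA tensor and a few prompts
batch = next(iter(dataloader))
print(batch['voxels'].shape)   # torch.Size([32, 4, 32, 32, 32])
print(batch['prompt'][:2])
```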

## Visualization Examples

![Dataset Usage Example](https://huggingface.co/datasets/PeterAM4/blockgen-3d/resolve/main/comparison_wooden.gif)
![Dataset Usage Example](https://huggingface.co/datasets/PeterAM4/blockgen-3d/resolve/main/comparison_char.gif)
*Examples from the dataset showing the voxelization process*

## Dataset Creation Process

Our dataset was created through the following steps:

1. Starting with the Objaverse dataset
2. Voxelizing 3D models at 32³ resolution
3. Preserving color information where available
4. Generating data augmentations through rotations (illustrated with a sketch after this list)
5. Creating train/test splits (95%/5% split ratio)
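
The exact augmentation code is not part of this card, but as an illustration of step 4, axis-aligned 90° rotations of a voxel grid can be generated with `torch.rot90`. This is a minimal sketch under the assumption that rotations are applied around a single axis; the dataset's own pipeline may differ:

```python
import torch

def ninety_degree_rotations(voxels: torch.Tensor) -> list:
    """Return the four 90-degree rotations of a [C, 32, 32, 32] voxel grid
    around a single axis (axis choice is assumed for illustration)."""
    # Rotate the last two spatial dimensions; the channel dimension stays fixed
    return [torch.rot90(voxels, k=k, dims=(2, 3)) for k in range(4)]
```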

## Training Tips

- For shape-only models, use only the occupancy channel
- For colored models:
  - Apply colors only to occupied voxels (see the snippet below)
  - Use default colors for shape-only samples
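
In tensor terms, applying colors only to occupied voxels is a single broadcasted multiply, assuming `colors` of shape [3, 32, 32, 32] and `occupancy` of shape [1, 32, 32, 32] as documented above:

```python
# Zero out color values in empty voxels; broadcasting expands the
# single-channel occupancy mask across the three color channels
masked_colors = colors * occupancy
```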

## Citation

If you use this dataset in your research, please cite:

```
@misc{blockgen2024,
  title={BlockGen-3D: A Large-Scale Dataset for Text-to-3D Voxel Generation},
  author={Peter A. Massih},
  year={2024},
  publisher={Hugging Face}
}
```

Please also cite the original Objaverse dataset:

```
@article{objaverse2023,
  title={Objaverse: A Universe of Annotated 3D Objects},
  author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and Oscar Michel and Eli VanderBilt and Ludwig Schmidt and Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
  journal={arXiv preprint arXiv:2212.08051},
  year={2022}
}
```

## Limitations

- Fixed 32³ resolution limits fine detail representation
- Not all models have color information
- Augmentations are limited to rotations
- Voxelization may lose some geometric details