---
license: mit
---
# Datacard
This is the official fine-tuning dataset provided by VLABench, with 500 episodes per task. The current version includes 10 primitive tasks.

## Source
- Project Page: [https://vlabench.github.io/](https://vlabench.github.io/)
- Arxiv Paper: [https://arxiv.org/abs/2412.18194](https://arxiv.org/abs/2412.18194)
- Code: [https://github.com/OpenMOSS/VLABench](https://github.com/OpenMOSS/VLABench)

## Uses
Download all archive files and extract them with the following command:
```sh
cat rdt_data.tar.gz.* | tar -xzvf -
```
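If you prefer to fetch the archive parts programmatically, a `huggingface_hub` sketch like the one below works; note that the `repo_id` here is a placeholder and should be replaced with this dataset's actual Hub path:

```python
from huggingface_hub import snapshot_download

# Download every archive part into a local folder.
# NOTE: "VLABench/rdt-data" is a placeholder repo_id -- substitute the
# actual Hub path of this dataset before running.
snapshot_download(
    repo_id="VLABench/rdt-data",
    repo_type="dataset",
    local_dir="./rdt_data",
)
```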

Extraction produces a `VLABench_release` folder containing two sub-folders: `primitive` (released now) and `composite` (in preparation).
The `primitive` folder contains ten task sub-folders, each holding the dataset's HDF5 episode files:

```
VLABench_release
└── primitive
    ├── add_condiment
    │   ├── episode_0.hdf5
    │   └── ...
    ├── insert_flower
    │   ├── episode_0.hdf5
    │   └── ...
    ├── select_book
    │   └── ...
    ├── select_chemistry_tube
    │   └── ...
    ├── select_drink
    │   └── ...
    ├── select_fruit
    │   └── ...
    ├── select_mahjong
    │   └── ...
    ├── select_painting
    │   └── ...
    ├── select_poker
    │   └── ...
    └── select_toy
        └── ...
```
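After extraction, you can sanity-check the layout with a small script (a sketch assuming the `episode_<n>.hdf5` naming shown above; each task folder should hold 500 episodes):

```python
from pathlib import Path

# Count episode files under each primitive task folder.
root = Path("VLABench_release/primitive")
for task_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    n_episodes = len(list(task_dir.glob("episode_*.hdf5")))
    print(f"{task_dir.name}: {n_episodes} episodes")  # expect 500 per task
```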

An example of a single episode's data layout:
```
data
    2025-02-23 20:46:40
        instruction
         (1,) |S38
         ['Please put the striped_10 in any hole.']
        meta_info
            entities
             (6,) |S15
             ['billiards_table', 'striped_10', 'striped_14', 'striped_11', 'striped_12', 'solid_1']
            episode_config
             () |S1879
            target_entity
             (1,) |S10
             ['striped_10']
        observation
            depth
             (212, 4, 480, 480) float32
            ee_state
             (212, 8) float32
            point_cloud_colors
             (212, 11905, 3) float32
            point_cloud_points
             (212, 11905, 3) float32
            q_acceleration
             (212, 7, 1) float32
            q_state
             (212, 7, 1) float32
            q_velocity
             (212, 7, 1) float32
            rgb
             (212, 4, 480, 480, 3) uint8
            robot_mask
             (212, 4, 480, 480) float32
        trajectory
         (212, 8) float32
```
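To read an episode, a minimal `h5py` sketch follows (the timestamp group name differs per file, so the code takes the first key under `data` rather than hardcoding it; field names match the layout above):

```python
import h5py

with h5py.File("VLABench_release/primitive/add_condiment/episode_0.hdf5", "r") as f:
    # Each file has a single timestamp-named group under "data".
    episode = f["data"][next(iter(f["data"]))]

    # Instructions are stored as fixed-length byte strings (|S dtypes).
    print("Instruction:", episode["instruction"][0].decode("utf-8"))

    obs = episode["observation"]
    rgb = obs["rgb"][:]          # (T, 4, 480, 480, 3) uint8; the 4 is presumably camera views
    depth = obs["depth"][:]      # (T, 4, 480, 480) float32
    q_state = obs["q_state"][:]  # (T, 7, 1) float32 joint positions
    traj = episode["trajectory"][:]  # (T, 8) float32

    print("Timesteps:", rgb.shape[0])
```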

## Citation
If you find our work helpful, please cite us:
```
@misc{zhang2024vlabench,
      title={VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks}, 
      author={Shiduo Zhang and Zhe Xu and Peiju Liu and Xiaopeng Yu and Yuan Li and Qinghui Gao and Zhaoye Fei and Zhangyue Yin and Zuxuan Wu and Yu-Gang Jiang and Xipeng Qiu},
      year={2024},
      eprint={2412.18194},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2412.18194}, 
}
```