Dataset metadata

- Modalities: Image, Text
- Format: Parquet (auto-converted)
- Language: English
- Size: < 1K rows
- Libraries: Datasets, Dask
Dataset schema

| Field | Type | Preview summary |
| --- | --- | --- |
| `__index_level_0__` | int64 | 0–49 |
| `image` | image | 1.2k px wide |
| `sceneId` | int64 | 0–7.43k |
| `queryObjId` | int64 | 0–18 |
| `annotation` | string | 7–106 characters |
| `groundTruthObjIds` | string | 45 distinct values |
| `difficulty` | string | 3 distinct values |
| `ambiguious` | bool | 2 classes |
| `split` | string | 3 distinct values |

FreeGraspData: a dataset for free-form language-based robotic reasoning and grasping

Dataset Description

Figure: Examples of FreeGraspData at different task difficulties, each with three user-provided instructions. The star indicates the target object, and the green circles indicate the ground-truth objects to pick.

We introduce the free-form language grasping dataset (FreeGraspData), a novel dataset built upon MetaGraspNetV2 (1) to evaluate robotic grasping driven by free-form language instructions. MetaGraspNetV2 is a large-scale simulated dataset featuring challenging aspects of robot vision in the bin-picking setting, including multi-view RGB-D images and metadata, e.g., object categories, amodal segmentation masks, and occlusion graphs indicating occlusion relationships between objects from each viewpoint. To build FreeGraspData, we selected scenes containing at least four objects to ensure sufficient scene clutter.

FreeGraspData extends MetaGraspNetV2 in three aspects:

  • i) we derive, from the occlusion graphs, the ground-truth grasp sequence for reaching the target object;
  • ii) we categorize task difficulty based on the obstruction level and instance ambiguity;
  • iii) we provide free-form language instructions collected from human annotators.

Ground-truth grasp sequence

We obtain the ground-truth grasp sequence from the object occlusion graphs provided in MetaGraspNetV2. Because visual occlusion does not necessarily imply obstruction, we first prune the edges in the provided occlusion graph that are unlikely to correspond to an obstruction. Following the heuristic that less occlusion means less chance of obstruction, we remove the edges where the percentage of the occluded object's area under occlusion is below $1\%$. Starting from the node representing the target object, we then traverse the pruned graph to locate a leaf node, i.e., the ground-truth object to grasp first. The path from the leaf node to the target node forms the correct grasp sequence for the robotic grasping task.

Grasp Difficulty Categorization

We use the pruned occlusion graph to classify the grasping difficulty of target objects into three levels:

  • Easy: Unobstructed target objects (leaf nodes in the pruned occlusion graph).
  • Medium: Objects obstructed by at least one object (maximum hop distance to leaf nodes is 1).
  • Hard: Objects obstructed by a chain of other objects (maximum hop distance to leaf nodes is more than 1).

Objects are additionally labeled as Ambiguous if multiple instances of the same class exist in the scene (see the sketch after the list below).

This results in six robotic grasping difficulty categories:

  • Easy without Ambiguity
  • Medium without Ambiguity
  • Hard without Ambiguity
  • Easy with Ambiguity
  • Medium with Ambiguity
  • Hard with Ambiguity

Free-form language user instructions

For each of the six difficulty categories, we randomly select 50 objects, resulting in 300 robotic grasping scenarios. For each scenario, we show multiple users a top-down image of the bin with a visual indicator highlighting the target object; no additional context or information about the object is provided. We instruct each user to describe the indicated object in natural language as unambiguously as possible. In total, ten users with a wide age span took part in the data collection. We randomly select three user instructions for each scenario, yielding a total of 900 evaluation scenarios with diverse language instructions.

Instruction similarity analysis

This figure illustrates the similarity distribution among the three user-provided instructions described in Free-form language user instructions, measured along three axes: GPT-4o's interpretability, semantic similarity, and sentence-structure similarity. To assess GPT-4o's interpretability, we introduce a novel metric, the GPT score, which measures how consistently GPT-4o identifies the intended target. For each target, we provide GPT-4o with an image containing overlaid object IDs and ask it to identify the object specified by each of the three instructions. The GPT score quantifies the fraction of correctly identified instructions, ranging from 0 (no correct identifications) to 1 (all three correct). We evaluate semantic similarity using the embedding score, defined as the average SBERT (2) similarity across all pairs of user-provided instructions, and structural similarity using the ROUGE-L score, computed as the average ROUGE-L (3) score across all instruction pairs.

Results indicate that instructions referring to the same target vary significantly in sentence structure (low ROUGE-L score), reflecting differences in word choice and composition, while showing moderate variation in semantics (medium embedding score). Interestingly, despite these variations, the consistently high GPT scores across all task difficulty levels suggest that GPT-4o is robust in identifying the correct target in the image, regardless of differences in instruction phrasing.

(1) Gilles, Maximilian, et al. "MetaGraspNetV2: All-in-one dataset enabling fast and reliable robotic bin picking via object relationship reasoning and dexterous grasping." IEEE Transactions on Automation Science and Engineering 21.3 (2023): 2302-2320.
(2) Reimers, Nils, and Iryna Gurevych. "Sentence-BERT: Sentence embeddings using Siamese BERT-networks." Proceedings of EMNLP-IJCNLP. 2019.
(3) Lin, Chin-Yew, and Franz Josef Och. "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics." Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04). 2004.

Data Fields

- **__index_level_0__**: An integer representing the unique identifier for each example.
- **image**: The actual image sourced from the MetaGraspNetV2 dataset.
- **sceneId**: An integer identifier for the scene, taken directly from the MetaGraspNetV2 dataset. It corresponds to the specific scene in which the object appears.
- **queryObjId**: An integer identifier for the target object, from the MetaGraspNetV2 dataset.
- **annotation**: A string containing the annotation details for the target object within the scene in a free-form language description.
- **groundTruthObjIds**: A comma-separated string listing the object IDs considered ground truth for the scene (e.g., "3,5,7" when several objects are valid).
- **difficulty**: A string indicating the grasp difficulty level (Easy, Medium, or Hard), categorized from the pruned occlusion graph as described in Grasp Difficulty Categorization above.
- **ambiguious**: A boolean indicating whether the object is ambiguous. An object is considered ambiguous if multiple instances of the same class are present in the scene.
- **split**: A string denoting the split (0, 1, or 2) corresponding to the three different annotations collected for the same image. It indexes the annotation, not the annotator.

arXiv link

https://arxiv.org/abs/2503.13082

APA Citation

Jiao, R., Fasoli, A., Giuliari, F., Bortolon, M., Povoli, S., Mei, G., Wang, Y., & Poiesi, F. (2025). Free-form language-based robotic reasoning and grasping. arXiv preprint arXiv:2503.13082.

BibTeX

@article{jiao2025free,
  title={Free-form language-based robotic reasoning and grasping},
  author={Jiao, Runyu and Fasoli, Alice and Giuliari, Francesco and Bortolon, Matteo and Povoli, Sergio and Mei, Guofeng and Wang, Yiming and Poiesi, Fabio},
  journal={arXiv preprint arXiv:2503.13082},
  year={2025}
}

Acknowledgement


This project was supported by Fondazione VRT under the project Make Grasping Easy, by the PNRR ICSC National Research Centre for HPC, Big Data and Quantum Computing (CN00000013), and by FAIR - Future AI Research (PE00000013), funded by NextGenerationEU.

Partners

FBK · UNITN · IIT