tsungyi committed
Commit 6866e2d · 1 Parent(s): 74aeafc

Update README.md
Files changed (1): README.md (+29, -15)
README.md CHANGED
@@ -1,25 +1,37 @@
 ---
 configs:
-- config_name: bridgev2
-  data_files:
-  - split: benchmark
-    path: bridgev2/bridgev2_benchmark_qa_pairs.json
-- config_name: robovqa
-  data_files:
-  - split: benchmark
-    path: robovqa/robovqa_benchmark_qa_pairs.json
 - config_name: agibot
   data_files:
-  - split: benchmark
-    path: agibot/agibot_benchmark_qa_pairs.json
+  - split: understanding
+    path: agibot/agibot_understanding.json
+  - split: reasoning
+    path: agibot/agibot_reasoning.json
+- config_name: bridgev2
+  data_files:
+  - split: understanding
+    path: bridgev2/bridgev2_understanding.json
+  - split: reasoning
+    path: bridgev2/bridgev2_reasoning.json
 - config_name: holoassist
   data_files:
-  - split: benchmark
-    path: holoassist/holoassist_benchmark_qa_pairs.json
-- config_name: robofail
+  - split: understanding
+    path: holoassist/holoassist_understanding.json
+  - split: reasoning
+    path: holoassist/holoassist_reasoning.json
+- config_name: robovqa
   data_files:
-  - split: benchmark
-    path: robofail/robofail_benchmark_qa_pairs.json
+  - split: understanding
+    path: robovqa/robovqa_understanding.json
+  - split: reasoning_0
+    path: robovqa/robovqa_reasoning_0.json
+  - split: reasoning_1
+    path: robovqa/robovqa_reasoning_1.json
+  - split: reasoning_2
+    path: robovqa/robovqa_reasoning_2.json
+  - split: reasoning_3
+    path: robovqa/robovqa_reasoning_3.json
+  - split: reasoning_4
+    path: robovqa/robovqa_reasoning_4.json
 language:
 - en
 task_categories:
@@ -30,6 +42,7 @@ tags:
 license: cc-by-4.0
 ---
 
+
 ## Dataset Description:
 
 The data format is a pair of video and text annotations. We summarize the data and annotations in Table 4 (SFT), Table 5 (RL), and Table 6 (Benchmark) of the Cosmos-Reason1 paper. We release the annotations for embodied reasoning tasks for BridgeDataV2, RoboVQA, Agibot, HoloAssist, and AV, as well as the videos for the RoboVQA and AV datasets. We additionally release the annotations and videos for the RoboFail dataset for benchmarking. By releasing the dataset, NVIDIA supports the development of open embodied reasoning models and provides benchmarks to evaluate progress.
@@ -50,6 +63,7 @@ This dataset is intended to demonstrate and facilitate understanding and usage o
 
 ## Dataset Characterization
 The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, Agibot, RoboFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data.
+**The AV data is currently unavailable and will be uploaded soon!**
 
 **Data Collection Method**:
 * RoboVQA: Hybrid: Automatic/Sensors
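
For reference, each `config_name` in the updated front matter becomes a loadable configuration of the dataset, and each `split` entry points at one annotation JSON file. Below is a minimal sketch of consuming the new splits with the `datasets` library; the repo id used here is a placeholder assumption (it does not appear in this diff), and the exact record fields depend on the JSON schema.

```python
# Sketch: loading the splits defined by the updated `configs` front matter.
# ASSUMPTION: the repo id below is a placeholder; substitute the actual
# Hugging Face dataset id this README belongs to.
from datasets import load_dataset, concatenate_datasets

REPO_ID = "nvidia/Cosmos-Reason1-Benchmark"  # placeholder repo id

# Each `config_name` (agibot, bridgev2, holoassist, robovqa) is a configuration;
# each `split` maps to one of the JSON annotation files listed in the YAML.
agibot_understanding = load_dataset(REPO_ID, "agibot", split="understanding")
agibot_reasoning = load_dataset(REPO_ID, "agibot", split="reasoning")

# robovqa's reasoning annotations are sharded into reasoning_0 .. reasoning_4;
# the shards can be recombined into a single dataset after loading.
robovqa_reasoning = concatenate_datasets(
    [load_dataset(REPO_ID, "robovqa", split=f"reasoning_{i}") for i in range(5)]
)

# Each record pairs a video reference with text annotations; the exact field
# names follow the JSON files and are not specified in this diff.
print(robovqa_reasoning[0])
```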