ivm-skl committed
Commit 97db6df
1 Parent(s): aa736fc

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +215 -127
README.md CHANGED
@@ -1,139 +1,227 @@
---
task_categories:
- robotics
tags:
- LeRobot
- - RoboGrasp2024
- - Virtual
- pretty_name: RoboGrasp 2024 Hackathon Dataset
---

- # 2024 RoboGrasp Dataset
-
- This is the dataset for the RoboGrasp Hackathon 2024.
- It includes 108 simulated pick-and-place robot episodes collected in a [simulated mobile ALOHA environment](https://github.com/HumanoidTeam/gym-aloha-hackathon.git) through teleoperated demonstrations.
- In each episode, the robot's right arm picks an item from the table and places it in a box on top of the table.
-
- There are three types of items:
- - green cube (47% of episodes)
- - red sphere (30% of episodes)
- - blue cylinder (22% of episodes)
-
- You can visualize the dataset episodes [here](https://huggingface.co/spaces/lerobot/visualize_dataset?dataset=HumanoidTeam%2Frobograsp_hackathon_2024&episode=0).
-
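The percentages above are rounded to whole percents, so they do not pin down exact counts; a quick back-of-the-envelope check (a sketch, not part of the dataset tooling) gives the implied per-type episode counts:

```python
# Approximate per-type episode counts implied by the rounded percentages.
# The shares sum to 99%, so about one episode is unaccounted for by rounding.
total_episodes = 108
shares = {"green cube": 0.47, "red sphere": 0.30, "blue cylinder": 0.22}

counts = {item: round(total_episodes * share) for item, share in shares.items()}
print(counts)  # {'green cube': 51, 'red sphere': 32, 'blue cylinder': 24}
```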
- # Hackathon Task
-
- ## Clone Repository
- As a first step for the hackathon, clone the `hackathon` branch of the [hackathon repository](https://github.com/HumanoidTeam/lerobot-hackathon.git) onto your system.
- It's a fork of the original LeRobot repository containing the assets necessary for running the following tasks.
- ```bash
- git clone -b hackathon https://github.com/HumanoidTeam/lerobot-hackathon.git
- ```
-
- ## Install Dependencies
- After cloning the repository, you can install the dependencies using Poetry. You can install Poetry by running `pip install poetry`.
- ```bash
- cd lerobot-hackathon
- poetry lock
- poetry build
- poetry install --extras "gym-aloha"
- ```
- This will create a virtual environment with all the required dependencies. You can activate the environment by running `poetry shell` within the folder.
- From this point, you can configure and run your policy training using any of the models available in LeRobot (e.g. ACT, Diffusion Policy, VQ-BeT, etc.).
-
- ## Policy Configuration
- You can create a yaml file within the folder `lerobot/configs/policy/`, for example `robograsp2024_submission_model.yaml`.
- Within the yaml file you can configure the input/output data shapes, data normalization strategies, context length, and policy parameters.
- Some parts of the yaml file depend on this dataset, so we provide the parameters necessary to use it.
-
- Here are working configurations for the input and output structure.
- These go at the beginning of the yaml:
- ```yaml
- seed: 100000
- dataset_repo_id: HumanoidTeam/robograsp_hackathon_2024
-
- override_dataset_stats:
-   observation.images.left_wrist:
-     # stats from ImageNet, since we use a pretrained vision model
-     mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
-     std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
-   observation.images.right_wrist:
-     # stats from ImageNet, since we use a pretrained vision model
-     mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
-     std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
-   observation.images.top:
-     # stats from ImageNet, since we use a pretrained vision model
-     mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
-     std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
- ```
-
- These go within the `policy:` scope and describe the input/output data shapes:
-
- ```yaml
- input_shapes:
-   observation.images.left_wrist: [3, 480, 640]
-   observation.images.right_wrist: [3, 480, 640]
-   observation.images.top: [3, 480, 640]
-   observation.state: ["${env.state_dim}"]
- output_shapes:
-   action: ["${env.action_dim}"]
-
- # Normalization / Unnormalization
- input_normalization_modes:
-   observation.images.left_wrist: mean_std
-   observation.images.right_wrist: mean_std
-   observation.images.top: mean_std
-   observation.state: min_max
- output_normalization_modes:
-   action: min_max
```
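As a rough illustration of what those normalization modes mean (a generic sketch of the two schemes, not LeRobot's actual implementation): `mean_std` standardizes values using per-channel statistics, while `min_max` rescales values into a fixed range using dataset minima and maxima.

```python
def mean_std_normalize(x, mean, std):
    """Standardize a value: zero mean, unit variance under the given stats."""
    return (x - mean) / std

def min_max_normalize(x, x_min, x_max):
    """Rescale a value from [x_min, x_max] into [-1, 1]."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# A pixel value of 0.485 equals the ImageNet mean of the first channel,
# so mean_std normalization maps it to 0.
print(mean_std_normalize(0.485, 0.485, 0.229))  # 0.0

# A joint position halfway between its min and max maps to 0 under min_max.
print(min_max_normalize(0.5, 0.0, 1.0))  # 0.0
```

The exact target range LeRobot uses internally may differ; the point is only that image channels are normalized with fixed ImageNet statistics, while states and actions are scaled by per-dataset extrema.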

- The remaining configuration can be derived from the other examples provided in the original LeRobot repo.

- ## Start Policy Training
-
- You can start the policy training by running the following command, with the environment built in the previous section activated.
- To activate the environment, run:
- ```bash
- poetry shell
- ```
- To start the training, use this command:
- ```bash
- MUJOCO_GL="egl" python lerobot/scripts/train.py \
-   policy=robograsp2024_submission_model \
-   env=humanoid_hackathon_mobile_aloha \
-   env.task=AlohaHackathon-v0 \
-   dataset_repo_id=HumanoidTeam/robograsp_hackathon_2024
- ```
- Here `robograsp2024_submission_model` is the name of the yaml file with the policy configuration, and `humanoid_hackathon_mobile_aloha` is the provided yaml configuration for the MuJoCo environment used to test the trained policies.
-
- ### Resume Policy Training from a Checkpoint
- Terminated training too early? No worries! You can resume training from a previous checkpoint by running:
-
- ```bash
- MUJOCO_GL="egl" python lerobot/scripts/train.py \
-   policy=robograsp2024_submission_model \
-   env=humanoid_hackathon_mobile_aloha \
-   env.task=AlohaHackathon-v0 \
-   dataset_repo_id=HumanoidTeam/robograsp_hackathon_2024 \
-   hydra.run.dir=OUTPUT_PATH \
-   resume=true
- ```
- Here `OUTPUT_PATH` is the path to the checkpoint folder. It should look something like `outputs/train/2024-10-23/18-38-31_aloha_MODELTYPE_default`.
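If several checkpoints exist under a run directory, a tiny helper (hypothetical, not part of LeRobot) can pick the latest one to resume from:

```python
def latest_checkpoint(checkpoint_names):
    """Pick the most recent checkpoint from names like '005000' or '015000'.

    Checkpoint folder names are zero-padded step counts, so comparing them
    as integers yields the training order.
    """
    return max(checkpoint_names, key=int)

# Example using the naming convention shown in this README:
print(latest_checkpoint(["005000", "015000", "010000"]))  # 015000
```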
123
-
124
- ## Upload trained policy checkpoint
125
- After training the model you can upload it to Huggingface with:
126
- ```bash
127
- huggingface-cli upload $hf_username/$repo_name PATH_TO_CHECKPOINT
128
- ```
129
- where PATH_TO_CHECKPOINT is the folder containing the checkpoints of your training. it should look like `outputs/train/2024-10-23/23-02-55_aloha_diffusion_default/checkpoints/015000`.
-
- # Policy Evaluation
-
- You can evaluate a trained policy by running:
- ```bash
- python hackathon/evaluate_pretrained_policy_hackathon.py --device cuda --pretrained-policy-name-or-path HumanoidTeam/hackathon_sim_aloha --num-videos 5 --num-rollouts 10
- ```

- This dataset was created using [🤗 LeRobot](https://github.com/huggingface/lerobot).
 
 
 
---
+ license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
+ configs:
+ - config_name: default
+   data_files: data/*/*.parquet
---

+ This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
+
+ ## Dataset Description
+
+ - **Homepage:** [More Information Needed]
+ - **Paper:** [More Information Needed]
+ - **License:** apache-2.0
+
+ ## Dataset Structure
+
+ [meta/info.json](meta/info.json):
+ ```json
+ {
+     "codebase_version": "v2.0",
+     "robot_type": "unknown",
+     "total_episodes": 108,
+     "total_frames": 86400,
+     "total_tasks": 1,
+     "total_videos": 324,
+     "total_chunks": 1,
+     "chunks_size": 1000,
+     "fps": 50,
+     "splits": {
+         "train": "0:108"
+     },
+     "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
+     "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
+     "features": {
+         "observation.images.left_wrist": {
+             "dtype": "video",
+             "shape": [480, 640, 3],
+             "names": ["height", "width", "channel"],
+             "video_info": {
+                 "video.fps": 50.0,
+                 "video.codec": "h264",
+                 "video.pix_fmt": "yuv420p",
+                 "video.is_depth_map": false,
+                 "has_audio": false
+             }
+         },
+         "observation.images.right_wrist": {
+             "dtype": "video",
+             "shape": [480, 640, 3],
+             "names": ["height", "width", "channel"],
+             "video_info": {
+                 "video.fps": 50.0,
+                 "video.codec": "h264",
+                 "video.pix_fmt": "yuv420p",
+                 "video.is_depth_map": false,
+                 "has_audio": false
+             }
+         },
+         "observation.images.top": {
+             "dtype": "video",
+             "shape": [480, 640, 3],
+             "names": ["height", "width", "channel"],
+             "video_info": {
+                 "video.fps": 50.0,
+                 "video.codec": "h264",
+                 "video.pix_fmt": "yuv420p",
+                 "video.is_depth_map": false,
+                 "has_audio": false
+             }
+         },
+         "observation.state": {
+             "dtype": "float32",
+             "shape": [14],
+             "names": {
+                 "motors": ["motor_0", "motor_1", "motor_2", "motor_3", "motor_4", "motor_5", "motor_6", "motor_7", "motor_8", "motor_9", "motor_10", "motor_11", "motor_12", "motor_13"]
+             }
+         },
+         "observation.effort": {
+             "dtype": "float32",
+             "shape": [14],
+             "names": {
+                 "motors": ["motor_0", "motor_1", "motor_2", "motor_3", "motor_4", "motor_5", "motor_6", "motor_7", "motor_8", "motor_9", "motor_10", "motor_11", "motor_12", "motor_13"]
+             }
+         },
+         "action": {
+             "dtype": "float32",
+             "shape": [14],
+             "names": {
+                 "motors": ["motor_0", "motor_1", "motor_2", "motor_3", "motor_4", "motor_5", "motor_6", "motor_7", "motor_8", "motor_9", "motor_10", "motor_11", "motor_12", "motor_13"]
+             }
+         },
+         "episode_index": { "dtype": "int64", "shape": [1], "names": null },
+         "frame_index": { "dtype": "int64", "shape": [1], "names": null },
+         "timestamp": { "dtype": "float32", "shape": [1], "names": null },
+         "next.done": { "dtype": "bool", "shape": [1], "names": null },
+         "index": { "dtype": "int64", "shape": [1], "names": null },
+         "task_index": { "dtype": "int64", "shape": [1], "names": null }
+     }
+ }
```
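A couple of sanity checks follow directly from these numbers, and the `data_path` template shows how episode files are laid out on disk (a sketch; the values are copied from the `info.json` above):

```python
# Values copied from meta/info.json above.
total_frames = 86400
total_episodes = 108
fps = 50

frames_per_episode = total_frames // total_episodes
print(frames_per_episode)        # 800 frames per episode
print(frames_per_episode / fps)  # 16.0 seconds per episode at 50 fps

# The data_path template maps (chunk, episode) to a parquet file:
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
print(data_path.format(episode_chunk=0, episode_index=42))
# data/chunk-000/episode_000042.parquet
```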

+ ## Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ [More Information Needed]
+ ```