Update README.md
Added configuration and training instructions.

README.md
CHANGED
@@ -7,11 +7,118 @@ tags:
# 2024 RoboGrasp Dataset

This is the dataset for the RoboGrasp Hackathon 2024.
It includes 108 simulated pick-and-place robot episodes collected in a [simulated Mobile ALOHA environment](https://github.com/HumanoidTeam/gym-aloha-hackathon.git) through teleoperated demonstrations.
In each episode, the robot's right arm picks an item from the table and places it in a box on top of the table.

There are three types of items:
- green cube (47% of episodes)
- red sphere (30% of episodes)
- blue cylinder (22% of episodes)
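(The percentages are as reported with the dataset; they sum to 99%, presumably due to rounding, corresponding to roughly 51, 32, and 24 of the 108 episodes.)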

You can visualize the dataset episodes [here](https://huggingface.co/spaces/lerobot/visualize_dataset?dataset=HumanoidTeam%2Frobograsp_hackathon_2024&episode=0).
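If you want to inspect the data programmatically before training, the sketch below shows one way to load it with LeRobot's dataset class (assuming the LeRobot version pinned in the hackathon fork; the `LeRobotDataset` import path and attributes have moved around between LeRobot releases):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Downloads the dataset from the Hugging Face Hub on first use.
dataset = LeRobotDataset("HumanoidTeam/robograsp_hackathon_2024")
print(f"episodes: {dataset.num_episodes}")

# Each item is a dict of tensors keyed by feature name, matching the
# keys used in the policy configuration below.
frame = dataset[0]
print(frame["observation.state"].shape, frame["action"].shape)
```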

# Hackathon Task

## Clone Repository
As a first step for the hackathon, clone the `hackathon` branch of the [hackathon repository](https://github.com/HumanoidTeam/lerobot-hackathon.git) onto your system.
It's a fork of the original LeRobot repository containing the assets necessary for running the following tasks.
```bash
git clone -b hackathon https://github.com/HumanoidTeam/lerobot-hackathon.git
```
29 |
+
|
30 |
+
## Install Dependencies
|
31 |
+
After cloning the repository you can proceed in installing the dependencies using poetry. you can install poetry by running `pip install poetry`.
```bash
cd lerobot-hackathon
poetry lock
poetry build
poetry install --extras "gym-aloha"
```
This will create a virtual environment with all the required dependencies. You can activate the environment by running `poetry shell` within the folder.
From this point, you can configure and run your policy training using any of the models available in LeRobot (e.g. ACT, Diffusion Policy, VQ-BeT).
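As a quick sanity check of the installation, you can list what LeRobot ships with; `available_policies` and `available_envs` are exposed at the package root in LeRobot releases from this period (a sketch, so verify against the fork you installed):

```python
import lerobot

# Registries exposed at the package root: handy for confirming the install
# and for seeing which names are valid for the `policy=` and `env=` overrides.
print(lerobot.available_policies)
print(lerobot.available_envs)
```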

## Policy Configuration
You can create a YAML file within the folder `lerobot/configs/policy/`, for example `robograsp2024_submission_model.yaml`.

Within the YAML file you can configure the input/output data shapes, data normalization strategies, context length, and policy parameters.

Some parts of the YAML file depend on this dataset. Here are working configurations for the input and output structure.

These go at the beginning of the YAML:
```yaml
seed: 100000
dataset_repo_id: HumanoidTeam/robograsp_hackathon_2024

override_dataset_stats:
  observation.images.left_wrist:
    # stats from imagenet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
  observation.images.right_wrist:
    # stats from imagenet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
  observation.images.top:
    # stats from imagenet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
```

These go within the `policy:` scope and describe the input/output data shapes:
```yaml
input_shapes:
  observation.images.left_wrist: [3, 480, 640]
  observation.images.right_wrist: [3, 480, 640]
  observation.images.top: [3, 480, 640]
  observation.state: ["${env.state_dim}"]
output_shapes:
  action: ["${env.action_dim}"]

# Normalization / Unnormalization
input_normalization_modes:
  observation.images.left_wrist: mean_std
  observation.images.right_wrist: mean_std
  observation.images.top: mean_std
  observation.state: min_max
output_normalization_modes:
  action: min_max
```
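For intuition, the two normalization modes amount to the following. This is an illustrative sketch of the math rather than LeRobot's actual implementation; in the LeRobot codebase of this period, `min_max` rescales to [-1, 1] using statistics computed from the dataset, while the image `mean_std` statistics are overridden with the ImageNet values above:

```python
import torch

def mean_std_normalize(x: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    # Used for the camera images: center and scale with per-channel stats.
    return (x - mean) / std

def min_max_normalize(x: torch.Tensor, vmin: torch.Tensor, vmax: torch.Tensor) -> torch.Tensor:
    # Used for state and action: rescale to [0, 1], then shift to [-1, 1].
    return (x - vmin) / (vmax - vmin) * 2 - 1
```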

The remaining configuration can be derived from the other examples provided in the original LeRobot repository.

## Start Policy Training

You can start the policy training by running the following command, after activating the environment built in the previous section.
To activate the environment, run:
```bash
poetry shell
```
To start the training, you can use this command:
```bash
MUJOCO_GL="egl" python lerobot/scripts/train.py \
  policy=robograsp2024_submission_model \
  env=humanoid_hackathon_mobile_aloha \
  env.task=AlohaHackathon-v0 \
  dataset_repo_id=HumanoidTeam/robograsp_hackathon_2024
```
Here `robograsp2024_submission_model` is the name of the YAML file with the policy configuration, and `humanoid_hackathon_mobile_aloha` is the provided YAML configuration for the MuJoCo environment used to test the trained policies.

### Resume Policy Training from a Checkpoint
Terminated training too early? No worries! You can resume training from a previous checkpoint by running:

```bash
MUJOCO_GL="egl" python lerobot/scripts/train.py \
  policy=robograsp2024_submission_model \
  env=humanoid_hackathon_mobile_aloha \
  env.task=AlohaHackathon-v0 \
  dataset_repo_id=HumanoidTeam/robograsp_hackathon_2024 \
  hydra.run.dir=OUTPUT_PATH \
  resume=true
```
Where `OUTPUT_PATH` is the path to the checkpoint folder. It should look something like `outputs/train/2024-10-23/18-38-31_aloha_MODELTYPE_default`.
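Once training has produced a checkpoint, you can also reload the policy in Python for inspection or custom evaluation. The snippet below is a hypothetical sketch for an ACT policy: the checkpoint subpath follows the output layout LeRobot used at the time, but verify it against your own output directory and substitute the policy class you actually trained:

```python
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Assumed layout: OUTPUT_PATH/checkpoints/last/pretrained_model
# (replace the path below with your actual training output directory).
policy = ACTPolicy.from_pretrained(
    "outputs/train/2024-10-23/18-38-31_aloha_act_default/checkpoints/last/pretrained_model"
)
policy.eval()
```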

This dataset was created using [🤗 LeRobot](https://github.com/huggingface/lerobot).