---
task_categories:
- robotics
tags:
- LeRobot
- RoboGrasp2024
- Virtual
pretty_name: RoboGrasp 2024 Hackathon Dataset
---

# 2024 RoboGrasp Dataset
This is the dataset for the RoboGrasp Hackathon 2024. It includes 108 simulated pick-and-place robot episodes collected in a simulated Mobile ALOHA environment through teleoperated demonstrations. In each episode, the robot's right arm picks an item from the table and places it in a box on top of the table.
There are three types of items:
- green cube (47% of episodes)
- red sphere (30% of episodes)
- blue cylinder (22% of episodes)
You can visualize the dataset episodes here.
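You can also inspect the episodes programmatically. Below is a minimal sketch using LeRobot's `LeRobotDataset` class; the import path assumes the 2024-era LeRobot codebase and may differ in your checkout:

```python
# Minimal sketch: load and inspect the dataset with LeRobot.
# Import path assumes the 2024-era LeRobot codebase.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("HumanoidTeam/robograsp_hackathon_2024")
print(f"episodes: {dataset.num_episodes}, frames: {len(dataset)}, fps: {dataset.fps}")

# Each frame is a dict of tensors keyed by feature name,
# e.g. "observation.images.top", "observation.state", "action".
frame = dataset[0]
print({k: getattr(v, "shape", v) for k, v in frame.items()})
```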
## Hackathon Task
### Clone Repository
As a first step, clone the hackathon repository (on the `hackathon` branch) to your system. It is a fork of the original LeRobot repository and contains the assets necessary for the following tasks.
```bash
git clone -b hackathon https://github.com/HumanoidTeam/lerobot-hackathon.git
```
### Install Dependencies
After cloning the repository, you can install the dependencies using Poetry. If you don't have Poetry yet, you can install it by running `pip install poetry`.

```bash
cd lerobot-hackathon
poetry lock
poetry build
poetry install --extras "gym-aloha"
```
This will create a virtual environment with all the required dependencies. You can activate the environment by running `poetry shell` within the folder.
From this point, you can configure and run your policy training using any of the models available in LeRobot (e.g. ACT, Diffusion Policy, VQ-BeT, etc.).
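For reference, you can list the configurations that ship with the repository; the paths below follow the LeRobot layout referenced later in this document:

```bash
ls lerobot/configs/policy/  # available policy configs (e.g. act.yaml, diffusion.yaml)
ls lerobot/configs/env/     # available environment configs
```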
### Policy Configuration
You can create a YAML file within the folder `lerobot/configs/policy/`, for example `robograsp2024_submission_model.yaml`.

Within the YAML file you can configure the input/output data shapes, the data normalization strategies, the context length, and the policy parameters. Some parts of the YAML file depend on this dataset, so we provide the parameters necessary to use it.
Here is a working configuration for the input and output structure. These entries go at the top level of the YAML file:
```yaml
seed: 100000
dataset_repo_id: HumanoidTeam/robograsp_hackathon_2024

override_dataset_stats:
  observation.images.left_wrist:
    # stats from ImageNet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]] # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]] # (c,1,1)
  observation.images.right_wrist:
    # stats from ImageNet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]] # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]] # (c,1,1)
  observation.images.top:
    # stats from ImageNet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]] # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]] # (c,1,1)
```
These go within the `policy:` scope and define the input/output data shapes and normalization modes:
```yaml
policy:
  input_shapes:
    observation.images.left_wrist: [3, 480, 640]
    observation.images.right_wrist: [3, 480, 640]
    observation.images.top: [3, 480, 640]
    observation.state: ["${env.state_dim}"]
  output_shapes:
    action: ["${env.action_dim}"]

  # Normalization / Unnormalization
  input_normalization_modes:
    observation.images.left_wrist: mean_std
    observation.images.right_wrist: mean_std
    observation.images.top: mean_std
    observation.state: min_max
  output_normalization_modes:
    action: min_max
```
The remaining configuration can be derived from the other examples provided in the original LeRobot repository.
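To see how these snippets fit into the full file, here is a hypothetical skeleton of `robograsp2024_submission_model.yaml`, modeled on LeRobot's ACT example config; the `training:` values and `policy.name` below are placeholders, not tuned settings:

```yaml
# @package _global_
# Hypothetical skeleton modeled on lerobot/configs/policy/act.yaml; values are placeholders.

seed: 100000
dataset_repo_id: HumanoidTeam/robograsp_hackathon_2024

override_dataset_stats:
  # ... image stats from the first snippet above ...

training:
  offline_steps: 100000  # placeholder
  batch_size: 8          # placeholder

policy:
  name: act  # or diffusion, vqbet, ...
  # input_shapes / output_shapes and normalization modes from the second snippet above
  # model-specific hyperparameters go here
```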
### Start Policy Training
You can start the policy training by running the following command, with the environment built in the previous section activated. To activate the environment, run:

```bash
poetry shell
```
To start the training, you can use this command:
```bash
MUJOCO_GL="egl" python lerobot/scripts/train.py \
  policy=robograsp2024_submission_model \
  env=humanoid_hackathon_mobile_aloha \
  env.task=AlohaHackathon-v0 \
  dataset_repo_id=HumanoidTeam/robograsp_hackathon_2024
```
where `robograsp2024_submission_model` is the name of the YAML file with the policy configuration, and `humanoid_hackathon_mobile_aloha` is the provided YAML configuration for the MuJoCo environment used to test the trained policies.
### Resume policy training from a checkpoint
Terminated training too early? No worries! You can resume training from a previous checkpoint by running:
```bash
MUJOCO_GL="egl" python lerobot/scripts/train.py \
  policy=robograsp2024_submission_model \
  env=humanoid_hackathon_mobile_aloha \
  env.task=AlohaHackathon-v0 \
  dataset_repo_id=HumanoidTeam/robograsp_hackathon_2024 \
  hydra.run.dir=OUTPUT_PATH \
  resume=true
```
where `OUTPUT_PATH` is the path to the checkpoint folder. It should look something like `outputs/train/2024-10-23/18-38-31_aloha_MODELTYPE_default`.
### Upload trained policy checkpoint
After training the model, you can upload it to Hugging Face with:

```bash
huggingface-cli upload $hf_username/$repo_name PATH_TO_CHECKPOINT
```

where `PATH_TO_CHECKPOINT` is the folder containing the checkpoints of your training. It should look like `outputs/train/2024-10-23/23-02-55_aloha_diffusion_default/checkpoints/015000`.
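If you haven't authenticated with the Hugging Face Hub on this machine, log in first. A hypothetical invocation with placeholder names might look like:

```bash
# Log in once with an access token that has write permission
huggingface-cli login

# Hypothetical example: replace the repo id and checkpoint path with your own
huggingface-cli upload my-hf-user/robograsp2024-model \
  outputs/train/2024-10-23/23-02-55_aloha_diffusion_default/checkpoints/015000
```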
### Policy Evaluation
You can evaluate a trained policy in the simulated environment with the provided evaluation script:

```bash
python hackathon/evaluate_pretrained_policy_hackathon.py \
  --device cuda \
  --pretrained-policy-name-or-path HumanoidTeam/hackathon_sim_aloha \
  --num-videos 5 \
  --num-rollouts 10
```
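The command above evaluates the provided `HumanoidTeam/hackathon_sim_aloha` baseline; to evaluate your own policy, point the script at your uploaded checkpoint repository (the repo id below is a placeholder):

```bash
# Hypothetical invocation: replace my-hf-user/robograsp2024-model with your own repo id
python hackathon/evaluate_pretrained_policy_hackathon.py \
  --device cuda \
  --pretrained-policy-name-or-path my-hf-user/robograsp2024-model \
  --num-videos 5 \
  --num-rollouts 10
```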
This dataset was created using 🤗 LeRobot.