---
license: apache-2.0
datasets:
- lerobot/pusht
pipeline_tag: robotics
---
# Model Card for Transformer-based Diffusion Policy / PushT
Transformer-based Diffusion Policy (as per _Diffusion Policy: Visuomotor Policy Learning via Action Diffusion_) trained for the PushT environment from gym-pusht.
## How to Get Started with the Model
See the LeRobot library (particularly the evaluation script) for instructions on how to load and evaluate this model.
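For a quick start, the snippet below is a minimal sketch of loading this policy and rolling it out in gym-pusht. It follows the pattern of LeRobot's evaluation example; the exact module path, the `lerobot/diffusion_pusht` repo id, the `pixels_agent_pos` observation type, and the tensor preprocessing are assumptions and may need adjusting to your LeRobot version.

```python
# Minimal rollout sketch (assumptions: repo id "lerobot/diffusion_pusht",
# LeRobot's DiffusionPolicy.from_pretrained / select_action API, and
# gym-pusht's "gym_pusht/PushT-v0" environment id).
import gym_pusht  # noqa: F401  # registers the PushT environment
import gymnasium as gym
import torch

from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

policy = DiffusionPolicy.from_pretrained("lerobot/diffusion_pusht")
policy.to(device)
policy.eval()
policy.reset()  # clear the policy's internal observation/action queues

# The policy consumes both the rendered image and the agent position.
env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", max_episode_steps=300)
obs, info = env.reset(seed=42)

done = False
while not done:
    # Build the batched, channel-first observation dict the policy expects.
    state = torch.from_numpy(obs["agent_pos"]).float().unsqueeze(0).to(device)
    image = torch.from_numpy(obs["pixels"]).float().permute(2, 0, 1) / 255.0
    image = image.unsqueeze(0).to(device)

    with torch.inference_mode():
        action = policy.select_action({"observation.state": state, "observation.image": image})

    obs, reward, terminated, truncated, info = env.step(action.squeeze(0).cpu().numpy())
    done = terminated or truncated

env.close()
```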
## Training Details
The model was trained using LeRobot's training script on the lerobot/pusht dataset with the following command:
```bash
python lerobot/scripts/train.py \
  hydra.run.dir=outputs/train/diffusion_pusht \
  hydra.job.name=diffusion_pusht \
  policy=diffusion \
  training.save_model=true \
  env=pusht \
  env.task=PushT-v0 \
  dataset_repo_id=lerobot/pusht \
  training.save_freq=25000 \
  training.eval_freq=10000 \
  wandb.enable=true \
  device=cuda
```
The training and evaluation curves can be found at https://api.wandb.ai/links/none7/rd2trav7. Training took about 6 hours on an NVIDIA Tesla P100 GPU.
## Evaluation
The model was evaluated on the PushT environment from gym-pusht and compared to a similar model trained with the original Diffusion Policy code. There are two evaluation metrics on a per-episode basis (a sketch of how they can be aggregated follows the list):

- Maximum overlap with the target (reported as `eval/avg_max_reward` in the charts above). This ranges in [0, 1].
- Success: whether or not the maximum overlap is at least 95%.
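As a concrete illustration, here is a small sketch of how these two metrics could be computed, assuming the per-step reward returned by gym-pusht is the overlap ratio in [0, 1] and that per-episode reward traces are collected in a list of lists (the names `episode_rewards` and `summarize_eval` are hypothetical):

```python
import numpy as np

def summarize_eval(episode_rewards, success_threshold=0.95):
    # episode_rewards: one per-step reward sequence per episode, where each
    # reward is the overlap ratio with the target in [0, 1].
    max_rewards = np.array([max(rewards) for rewards in episode_rewards])
    return {
        "avg_max_reward": float(max_rewards.mean()),
        "pc_success": float((max_rewards >= success_threshold).mean() * 100.0),
    }
```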
Here are the metrics over 500 evaluation episodes.

| | Ours | Theirs |
|---|---|---|
| Average max. overlap ratio | 0.000 | 0.000 |
| Success rate for 500 episodes (%) | 0.00 | 0.00 |
The results of each of the individual rollouts may be found in `eval_info.json`.
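If you want to inspect those per-rollout results programmatically, a minimal sketch is shown below; the schema of `eval_info.json` is not documented in this card, so only the top-level keys are printed.

```python
import json

# Hypothetical: the structure of eval_info.json is not specified here, so this
# only loads the file and shows what it contains at the top level.
with open("eval_info.json") as f:
    eval_info = json.load(f)

print(list(eval_info.keys()))
```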