Amandee committed on
Commit 36ff285
1 Parent(s): be451c9

Updating model card with initial edits

Files changed (1)
README.md +53 -3
README.md CHANGED
@@ -1,3 +1,53 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - lerobot/pusht
+ pipeline_tag: robotics
+ ---
+ # Model Card for Diffusion Policy / PushT
+
+ Transformer-based Diffusion Policy (as per [Diffusion Policy: Visuomotor Policy
+ Learning via Action Diffusion](https://arxiv.org/abs/2303.04137)) trained for the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht).
+
+ ## How to Get Started with the Model
+
+ See the [LeRobot library](https://github.com/huggingface/lerobot) (particularly the [evaluation script](https://github.com/huggingface/lerobot/blob/main/lerobot/scripts/eval.py)) for instructions on how to load and evaluate this model.
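+
+ As a convenience, here is a minimal loading-and-rollout sketch adapted from LeRobot's pretrained-policy example. The module path, observation keys, `obs_type` argument, and the placeholder repository id are assumptions; check them against your version of the library before relying on this.
+
+ ```python
+ # Minimal sketch (not the official evaluation script): load the policy and roll out
+ # one episode in gym-pusht. Module path, keys, and the repo id are assumptions.
+ import gym_pusht  # noqa: F401  # registers the gym_pusht/PushT-v0 environment
+ import gymnasium as gym
+ import torch
+ from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ policy = DiffusionPolicy.from_pretrained("<this-model-repo-id>")  # hypothetical placeholder
+ policy.eval()
+ policy.to(device)
+ policy.reset()
+
+ env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", max_episode_steps=300)
+ obs, _ = env.reset(seed=42)
+ done, max_overlap = False, 0.0
+ while not done:
+     # Pack the observation as the policy expects: a float32 state vector and a CHW image
+     # in [0, 1], each with a leading batch dimension.
+     state = torch.from_numpy(obs["agent_pos"]).float().unsqueeze(0).to(device)
+     image = (torch.from_numpy(obs["pixels"]).float() / 255.0).permute(2, 0, 1).unsqueeze(0).to(device)
+     with torch.inference_mode():
+         action = policy.select_action({"observation.state": state, "observation.image": image})
+     obs, reward, terminated, truncated, _ = env.step(action.squeeze(0).cpu().numpy())
+     max_overlap = max(max_overlap, reward)  # per-step reward is the overlap ratio (see Evaluation)
+     done = terminated or truncated
+ print(f"Max. overlap with target this episode: {max_overlap:.3f}")
+ ```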
+
+ ## Training Details
+
+ The model was trained using [LeRobot's training script](https://github.com/huggingface/lerobot/blob/d747195c5733c4f68d4bfbe62632d6fc1b605712/lerobot/scripts/train.py) and the [pusht](https://huggingface.co/datasets/lerobot/pusht/tree/v1.3) dataset, with the following command:
+
+ ```bash
+ python lerobot/scripts/train.py \
+     hydra.run.dir=outputs/train/diffusion_pusht \
+     hydra.job.name=diffusion_pusht \
+     policy=diffusion \
+     training.save_model=true \
+     env=pusht \
+     env.task=PushT-v0 \
+     dataset_repo_id=lerobot/pusht \
+     training.save_freq=25000 \
+     training.eval_freq=10000 \
+     wandb.enable=true \
+     device=cuda
+ ```
+
+ The training and eval curves may be found at https://api.wandb.ai/links/none7/rd2trav7.
+
+ Training took about 6 hours on an Nvidia Tesla P100 GPU.
+
+ ## Evaluation
+
+ The model was evaluated on the `PushT` environment from [gym-pusht](https://github.com/huggingface/gym-pusht) and compared to a similar model trained with the original [Diffusion Policy code](https://github.com/real-stanford/diffusion_policy). Two metrics are computed per episode:
+
+ - Maximum overlap with the target (reported as `eval/avg_max_reward` in the W&B charts linked above). This ranges in [0, 1].
+ - Success: whether the maximum overlap reaches at least 95%.
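+
+ To make the aggregation concrete, here is a small sketch (with made-up numbers, not this model's results) of how the two metrics reduce over a batch of episodes:
+
+ ```python
+ # Hypothetical per-episode maximum overlaps in [0, 1]; real values live in eval_info.json.
+ max_overlaps = [0.97, 0.42, 1.00, 0.88, 0.96]
+
+ avg_max_overlap = sum(max_overlaps) / len(max_overlaps)  # "average max. overlap ratio"
+ success_rate = 100 * sum(o >= 0.95 for o in max_overlaps) / len(max_overlaps)  # % of episodes
+
+ print(f"Average max. overlap ratio: {avg_max_overlap:.3f}")
+ print(f"Success rate (%): {success_rate:.2f}")
+ ```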
+
+ Here are the metrics over 500 evaluation episodes.
+
+ | | Ours | Theirs |
+ |---|---|---|
+ | Average max. overlap ratio | 0.000 | 0.000 |
+ | Success rate for 500 episodes (%) | 0.00 | 0.00 |
+
+ The results of the individual rollouts may be found in [eval_info.json](eval_info.json).