Update README: Add model card
README.md
---
library_name: transformers
tags:
- robotics
- vla
- image-text-to-text
- multimodal
- pretraining
license: mit
language:
- en
pipeline_tag: image-text-to-text
---

# OpenVLA 7B (Prismatic-Compatible Version)

<b>This is the same model as the [OpenVLA 7B model](https://huggingface.co/openvla/openvla-7b), except that this checkpoint is in a format that is
compatible with the training script from the original [Prismatic VLMs project codebase](https://github.com/TRI-ML/prismatic-vlms), which the OpenVLA
team built on top of to develop the OpenVLA model. See details for the OpenVLA 7B model here: https://huggingface.co/openvla/openvla-7b</b>

This Prismatic-compatible checkpoint may be useful if you wish to <b>fully fine-tune</b> OpenVLA (all 7.5 billion parameters) via native PyTorch Fully
Sharded Data Parallel (FSDP) using the Prismatic VLMs training script. If you instead wish to do Parameter-Efficient Fine-Tuning via LoRA, you
can use the OpenVLA checkpoint linked above, which is compatible with the Hugging Face `transformers` library. We recommend fine-tuning via LoRA if
you do not have sufficient compute to fully fine-tune a 7B-parameter model (e.g., multiple A100/H100 GPUs).

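For reference, a minimal LoRA setup for the `transformers`-compatible checkpoint linked above might look like the sketch below. It assumes the `peft` library is installed and uses placeholder hyperparameters (rank, alpha, dropout), not recommended values; the official fine-tuning script is in the OpenVLA repository referenced in the next section.

```python
# Minimal sketch of a LoRA setup for the HF-compatible checkpoint
# (openvla/openvla-7b), NOT this Prismatic-format checkpoint.
# Hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import LoraConfig, get_peft_model

processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=32,                         # adapter rank (placeholder, tune for your task)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",  # attach adapters to every linear layer
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```
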
## Usage Instructions

See the [OpenVLA GitHub README](https://github.com/openvla/openvla/blob/main/README.md) for instructions on how to use this checkpoint for full fine-tuning.

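The training script expects the checkpoint files on local disk. As a rough sketch (the repository id below is an assumption; substitute the id shown at the top of this model page), you can fetch them with `huggingface_hub`:

```python
# Sketch: download this Prismatic-compatible checkpoint to a local folder so
# the Prismatic VLMs training script can be pointed at it for full fine-tuning.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="openvla/openvla-7b-prismatic",  # assumed repo id; use this model page's id
    local_dir="openvla-7b-prismatic",
)
print(f"Checkpoint files downloaded to: {local_dir}")
```
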
## Citation

**BibTeX:**

```bibtex
@article{kim24openvla,
    title={OpenVLA: An Open-Source Vision-Language-Action Model},
    author={{Moo Jin} Kim and Karl Pertsch and Siddharth Karamcheti and Ted Xiao and Ashwin Balakrishna and Suraj Nair and Rafael Rafailov and Ethan Foster and Grace Lam and Pannag Sanketi and Quan Vuong and Thomas Kollar and Benjamin Burchfiel and Russ Tedrake and Dorsa Sadigh and Sergey Levine and Percy Liang and Chelsea Finn},
    journal={arXiv preprint arXiv:2406.09246},
    year={2024}
}
```