Emrys-Hong committed · Commit 264f92b · 1 Parent(s): c5d7db4

Files changed (1): README.md (+62, -0)

---
license: apache-2.0
---

# Emma-X (7B)

## Model Overview

EMMA-X is an Embodied Multimodal Action (VLA) model designed to bridge the gap between vision-language models (VLMs) and robotic control tasks. It generalizes effectively across diverse environments, objects, and instructions, and excels at long-horizon spatial reasoning and grounded task planning thanks to a novel trajectory segmentation strategy.

**Hierarchical Embodiment Dataset:**
EMMA-X is trained on a hierarchical dataset derived from BridgeV2 that contains 60,000 robot manipulation trajectories, each augmented with visually grounded chain-of-thought reasoning. Given an image and an instruction, EMMA-X's output includes the following components:

- **Grounded Chain-of-Thought Reasoning:** Breaks the task down into smaller, manageable subtasks, ensuring accurate execution and mitigating hallucination in the reasoning.
- **Gripper Position Guidance:** An affordance point inside the image indicating where the gripper should move.
- **Look-Ahead Spatial Reasoning:** Plans upcoming actions with explicit spatial guidance, enhancing long-horizon task performance.
- **Action:** A 7-dimensional action vector that controls the robot ([WidowX-6DoF](https://www.trossenrobotics.com/widowx-250)); see the sketch after this list.
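
For intuition, here is a minimal sketch of how such a 7-dimensional action can be unpacked, assuming the common BridgeData/WidowX convention of end-effector translation deltas, rotation deltas, and a gripper command. The `WidowXAction` container and its field names are illustrative assumptions, not part of the Emma-X API.

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class WidowXAction:
    """Illustrative container for one 7-D action step (assumed BridgeData-style layout)."""
    dx: float       # end-effector translation deltas
    dy: float
    dz: float
    droll: float    # end-effector rotation deltas
    dpitch: float
    dyaw: float
    gripper: float  # gripper command (assumed: ~1.0 open, ~0.0 closed)

    @classmethod
    def from_vector(cls, action: Sequence[float]) -> "WidowXAction":
        # Emma-X predicts exactly seven values per action step.
        assert len(action) == 7, "expected a 7-dimensional action vector"
        return cls(*map(float, action))


# Unpack a made-up example vector (not a real model prediction).
example = WidowXAction.from_vector([0.01, -0.02, 0.0, 0.0, 0.0, 0.05, 1.0])
print(example.dx, example.dyaw, example.gripper)
```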

## Model Card
- **Developed by:** SUTD Declare Lab
- **Model type:** Vision-language-action (language, image => reasoning, robot actions)
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Finetuned from:** [`openvla-7B`](https://huggingface.co/openvla/openvla-7b/)
- **Pretraining Dataset:** Augmented version of [Bridge V2](https://rail-berkeley.github.io/bridgedata/); see our repository for more details.
- **Repository:** [https://github.com/declare-lab/Emma-X/](https://github.com/declare-lab/Emma-X/)
- **Paper:** [Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning]()
- **Project Page & Videos:** []()

## Getting Started
```python
# Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
# > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq

# Load Emma-X
vla = AutoModelForVision2Seq.from_pretrained(
    "declare-lab/Emma-X",
    attn_implementation="flash_attention_2",  # [Optional] Requires `flash_attn`
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# Grab a 224x224 image from the camera & format the prompt
image: Image.Image = get_from_camera(...)
prompt = "In: What action should the robot take to achieve the instruction\nINSTRUCTION: \n{<Instruction here>}\n\nOut: "

# Predict the action (a 7-dimensional vector to control the robot) and the grounded reasoning
action, grounded_reasoning = vla.generate_actions(
    image=image, prompt_text=prompt, type="act",
    do_sample=False, max_new_tokens=512,
)

print("Grounded Reasoning:", grounded_reasoning)

# Execute...
robot.act(action, ...)
```
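
In a real deployment, the single prediction above would typically run inside a closed-loop control loop. The sketch below is one way to wire that up; the concrete instruction, the step budget, and the `get_from_camera` / `robot` interfaces are placeholders carried over from the snippet above, and the assumption that the instruction text simply replaces the `{<Instruction here>}` placeholder should be checked against the repository.

```python
# Closed-loop sketch (assumptions: placeholder camera/robot interfaces, arbitrary step budget).
instruction = "put the carrot on the plate"  # hypothetical task
prompt = (
    "In: What action should the robot take to achieve the instruction"
    f"\nINSTRUCTION: \n{instruction}\n\nOut: "
)

for step in range(50):  # arbitrary step budget
    image = get_from_camera(...)  # placeholder camera interface
    action, grounded_reasoning = vla.generate_actions(
        image=image, prompt_text=prompt, type="act",
        do_sample=False, max_new_tokens=512,
    )
    print(f"[step {step}] {grounded_reasoning}")
    robot.act(action, ...)  # placeholder robot interface
```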