Image-Text-to-Text
Safetensors
openvla
custom_code
soujanyaporia committed (verified)
Commit 14bfb19 · 1 Parent(s): 264f92b

Update README.md

Files changed (1)
  1. README.md (+71, -65)
README.md CHANGED

The update rewrites the YAML front matter: the previous header declared only `license: apache-2.0`, while the new header additionally lists the dataset (`declare-lab/Emma-X-GCOT`), an evaluation metric (`accuracy`), and the base model (`openvla/openvla-7b`). The body of the README is otherwise unchanged. The updated README.md reads as follows:
---
license: apache-2.0
datasets:
- declare-lab/Emma-X-GCOT
metrics:
- accuracy
base_model:
- openvla/openvla-7b
---

# Emma-X (7B)

## Model Overview

EMMA-X is an Embodied Multimodal Action model, a vision-language-action (VLA) model designed to bridge the gap between Visual-Language Models (VLMs) and robotic control tasks. EMMA-X generalizes effectively across diverse environments, objects, and instructions while excelling at long-horizon spatial reasoning and grounded task planning, using a novel Trajectory Segmentation Strategy.

**Hierarchical Embodiment Dataset:**
EMMA-X is trained on a hierarchical dataset derived from BridgeV2, containing 60,000 robot manipulation trajectories annotated with visually grounded chain-of-thought reasoning. Trained on this dataset, EMMA-X produces outputs with the following components:

- **Grounded Chain-of-Thought Reasoning:** breaks tasks down into smaller, manageable subtasks, ensuring accurate task execution by mitigating hallucination in the reasoning.
- **Gripper Position Guidance:** an affordance point inside the image.
- **Look-Ahead Spatial Reasoning:** enables the model to plan actions with spatial guidance, enhancing long-horizon task performance.
- **Action:** the action policy, a 7-dimensional vector that controls the robot ([WidowX-6Dof](https://www.trossenrobotics.com/widowx-250)); a sketch of a commonly assumed layout follows this list.
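
This card does not spell out the ordering of the 7 action dimensions. Purely as an illustration, the sketch below assumes the end-effector-delta-plus-gripper layout commonly used for BridgeData/WidowX policies; the helper `unpack_action` is hypothetical, and the actual ordering should be verified against the Emma-X repository before driving hardware.

```python
from typing import Dict, Sequence


def unpack_action(action: Sequence[float]) -> Dict[str, float]:
    """Hypothetical helper: name the 7 action dimensions under an assumed
    (dx, dy, dz, droll, dpitch, dyaw, gripper) ordering."""
    assert len(action) == 7, "Emma-X predicts a 7-dimensional action"
    dx, dy, dz, droll, dpitch, dyaw, gripper = action
    return {
        "dx": dx, "dy": dy, "dz": dz,                    # end-effector translation deltas
        "droll": droll, "dpitch": dpitch, "dyaw": dyaw,  # end-effector rotation deltas
        "gripper": gripper,                              # gripper open/close command
    }
```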

## Model Card
- **Developed by:** SUTD Declare Lab
- **Model type:** Vision-language-action (language, image => reasoning, robot actions)
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Finetuned from:** [`openvla-7B`](https://huggingface.co/openvla/openvla-7b/)
- **Pretraining Dataset:** Augmented version of [Bridge V2](https://rail-berkeley.github.io/bridgedata/); see our repository for more information (a loading sketch for the released data follows this list).
- **Repository:** [https://github.com/declare-lab/Emma-X/](https://github.com/declare-lab/Emma-X/)
- **Paper:** [Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning]()
- **Project Page & Videos:** []()
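
The front matter of this card lists `declare-lab/Emma-X-GCOT` as the associated dataset. Assuming that repository hosts the augmented, hierarchically annotated data in a format the Hugging Face `datasets` library can read directly (splits and column names are not documented here), a minimal loading sketch looks like this:

```python
from datasets import load_dataset

# Assumption: declare-lab/Emma-X-GCOT loads with `datasets` out of the box;
# consult the dataset card for the actual configuration, splits, and fields.
gcot = load_dataset("declare-lab/Emma-X-GCOT")
print(gcot)
```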

## Getting Started
```python
# Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
# > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq

# Load Emma-X
vla = AutoModelForVision2Seq.from_pretrained(
    "declare-lab/Emma-X",
    attn_implementation="flash_attention_2",  # [Optional] Requires `flash_attn`
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# Grab the image input (a 224x224 RGB PIL image) and format the prompt
image: Image.Image = get_from_camera(...)  # placeholder for your camera interface
prompt = "In: What action should the robot take to achieve the instruction\nINSTRUCTION: \n{<Instruction here>}\n\nOut: "

# Predict the action (a 7-dimensional vector controlling the robot) and the grounded reasoning
action, grounded_reasoning = vla.generate_actions(
    image=image, prompt_text=prompt, type="act",
    do_sample=False, max_new_tokens=512
)

print("Grounded Reasoning:", grounded_reasoning)

# Execute on the robot...
robot.act(action, ...)  # placeholder for your robot controller
```
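
In the snippet above, `get_from_camera(...)` and `robot.act(...)` are placeholders for your own camera and controller interfaces. As a minimal sketch of the camera side only, assuming the frame arrives as an OpenCV-style BGR NumPy array (the function name `frame_to_model_input` is ours, not part of the Emma-X API), this converts it into the 224x224 RGB PIL image the example expects:

```python
import numpy as np
from PIL import Image


def frame_to_model_input(frame_bgr: np.ndarray) -> Image.Image:
    """Convert an OpenCV-style BGR camera frame into a 224x224 RGB PIL image."""
    frame_rgb = np.ascontiguousarray(frame_bgr[:, :, ::-1])  # BGR -> RGB channel order
    return Image.fromarray(frame_rgb).resize((224, 224))     # resize to the expected input size
```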