Image-Text-to-Text
Safetensors
openvla
custom_code
Emrys-Hong committed commit d8e3d98 (2 parents: 1b73161 3f185b0)

Merge branch 'main' of https://huggingface.co/declare-lab/Emma-X
Files changed (1)
  1. README.md +99 -65
README.md CHANGED
@@ -1,65 +1,99 @@
- ---
- license: apache-2.0
- ---
-
- # Emma-X (7B)
- Model Overview
-
- EMMA-X is an Embodied Multimodal Action (VLA) Model designed to bridge the gap between Visual-Language Models (VLMs) and robotic control tasks. EMMA-X generalizes effectively across diverse environments, objects, and instructions while excelling at long-horizon spatial reasoning and grounded task planning using a novel Trajectory Segmentation Strategy.
-
- • Hierarchical Embodiment Dataset:
- EMMA-X is trained on a dataset derived from BridgeV2, containing 60,000 robot manipulation trajectories. Trained using a hierarchical dataset with visual grounded chain-of-thought reasoning, EMMA-X's output will include the following components:
-
- Grounded Chain-of-Thought Reasoning:
- Helps break down tasks into smaller, manageable subtasks, ensuring accurate task execution by mitigating hallucination in reasoning.
-
- Gripper Position Guidance: Affordance point inside the image.
-
- Look-Ahead Spatial Reasoning:
- Enables the model to plan actions while considering spatial guidance for effective planning, enhancing long-horizon task performance.
-
- Action: Action policy in 7-dimensional vector to control the robot ([WidowX-6Dof](https://www.trossenrobotics.com/widowx-250)).
-
- ## Model Card
- - **Developed by:** SUTD Declare Lab
- - **Model type:** Vision-language-action (language, image => reasoning, robot actions)
- - **Language(s) (NLP):** en
- - **License:** Apache-2.0
- - **Finetuned from:** [`openvla-7B`](https://huggingface.co/openvla/openvla-7b/)
- - **Pretraining Dataset:** Augmented version of [Bridge V2](https://rail-berkeley.github.io/bridgedata/), for more info check our repository.
- - **Repository:** [https://github.com/declare-lab/Emma-X/](https://github.com/declare-lab/Emma-X/)
- - **Paper:** [Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning]()
- - **Project Page & Videos:** []()
-
- ## Getting Started
- ```python
- # Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
- # > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
- from transformers import AutoModelForVision2Seq, AutoProcessor
- from PIL import Image
-
- import torch
-
- # Load Emma-X
- vla = AutoModelForVision2Seq.from_pretrained(
-     "declare-lab/Emma-X",
-     attn_implementation="flash_attention_2", # [Optional] Requires `flash_attn`
-     torch_dtype=torch.bfloat16,
-     low_cpu_mem_usage=True,
-     trust_remote_code=True
- ).to("cuda:0")
-
- # Grab image input & format prompt of size 224x224
- image: Image.Image = get_from_camera(...)
- prompt = "In: What action should the robot take to achieve the instruction\nINSTRUCTION: \n{<Instruction here>}\n\nOut: "
-
- # Predict Action (action is a 7 dimensional vector to control the robot)
- action, grounded_reasoning = vla.generate_actions(
-     image=image, prompt_text=prompt, type="act", do_sample=False,
-     max_new_tokens=512, do_sample=False
- )
-
- print("Grounded Reasoning:", grounded_reasoning)
- # Execute...
- robot.act(action, ...)
- ```
+ ---
+ license: apache-2.0
+ datasets:
+ - declare-lab/Emma-X-GCOT
+ metrics:
+ - accuracy
+ base_model:
+ - openvla/openvla-7b
+ pipeline_tag: image-text-to-text
+ ---
+
+ <h1 align="center">✨
+ <br/>
+ Meet Emma-X, an Embodied Multimodal Action Model
+ <br/>
+ ✨✨✨
+ </h1>
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/declare-lab/Emma-X/main/Emma-X.png" alt="Emma-X" width="300" />
+
+ <br/>
+
+ [![arXiv](https://img.shields.io/badge/arxiv-2412.11974-b31b1b)](https://arxiv.org/abs/2412.11974) [![Emma-X](https://img.shields.io/badge/Huggingface-Emma--X-brightgreen?style=flat&logo=huggingface&color=violet)](https://huggingface.co/declare-lab/Emma-X) [![Static Badge](https://img.shields.io/badge/Demos-declare--lab-brightred?style=flat)](https://declare-lab.github.io/Emma-X/)
+
+ </div>
+
+ ## Model Overview
+
+ EMMA-X is an Embodied Multimodal Action (VLA) model designed to bridge the gap between Vision-Language Models (VLMs) and robotic control tasks. EMMA-X generalizes effectively across diverse environments, objects, and instructions while excelling at long-horizon spatial reasoning and grounded task planning using a novel Trajectory Segmentation Strategy. It relies on:
+
+ - Hierarchical Embodiment Dataset: Emma-X is trained on a hierarchical dataset derived from BridgeV2 that contains 60,000 robot manipulation trajectories augmented with visually grounded chain-of-thought reasoning. As a result, EMMA-X's output includes the following components:
+
+   - Grounded Chain-of-Thought Reasoning: Breaks tasks down into smaller, manageable subtasks, improving execution accuracy by mitigating hallucination in reasoning.
+
+   - Gripper Position Guidance: An affordance point inside the image.
+
+   - Look-Ahead Spatial Reasoning: Lets the model plan actions with spatial guidance, enhancing long-horizon task performance.
+
+ It generates:
+
+ - Action: A 7-dimensional action vector that controls the robot ([WidowX-6DoF](https://www.trossenrobotics.com/widowx-250)); a layout sketch follows below.
+
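+ As a rough guide, the sketch below shows how a 7-dimensional end-effector action of this kind is commonly laid out for BridgeData/WidowX-style control (translation deltas, rotation deltas, gripper command). The exact ordering, units, and gripper convention are assumptions here; check the Emma-X repository before driving real hardware.
+
+ ```python
+ # Illustrative only: assumed [dx, dy, dz, droll, dpitch, dyaw, gripper] layout,
+ # following the BridgeData/OpenVLA convention. Verify against the Emma-X repo.
+ import numpy as np
+
+ action = np.asarray([0.01, -0.02, 0.00, 0.0, 0.0, 0.05, 1.0])
+
+ delta_position = action[:3]   # end-effector translation deltas
+ delta_rotation = action[3:6]  # end-effector rotation deltas (roll, pitch, yaw)
+ gripper = action[6]           # gripper command (assumed ~1.0 open, ~0.0 closed)
+
+ print(delta_position, delta_rotation, gripper)
+ ```
+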
+ ## Model Card
+ - **Developed by:** SUTD Declare Lab
+ - **Model type:** Vision-language-action (language, image => reasoning, robot actions)
+ - **Language(s) (NLP):** en
+ - **License:** Apache-2.0
+ - **Finetuned from:** [`openvla-7B`](https://huggingface.co/openvla/openvla-7b/)
+ - **Pretraining Dataset:** Augmented version of [Bridge V2](https://rail-berkeley.github.io/bridgedata/); for more information, see our repository.
+ - **Repository:** [https://github.com/declare-lab/Emma-X/](https://github.com/declare-lab/Emma-X/)
+ - **Paper:** [Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning](https://arxiv.org/pdf/2412.11974)
+ - **Project Page & Videos:** [https://declare-lab.github.io/Emma-X/](https://declare-lab.github.io/Emma-X/)
+
+ ## Getting Started
+ ```python
+ # Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
+ # > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
+ from transformers import AutoModelForVision2Seq
+ from PIL import Image
+
+ import torch
+
+ # Load Emma-X
+ vla = AutoModelForVision2Seq.from_pretrained(
+     "declare-lab/Emma-X",
+     attn_implementation="flash_attention_2",  # [Optional] Requires `flash_attn`
+     torch_dtype=torch.bfloat16,
+     low_cpu_mem_usage=True,
+     trust_remote_code=True
+ ).to("cuda:0")
+
+ # Grab a 224x224 image input & format the prompt
+ image: Image.Image = get_from_camera(...)
+ prompt = "In: What action should the robot take to achieve the instruction\nINSTRUCTION: \n{<Instruction here>}\n\nOut: "
+
+ # Predict the action (a 7-dimensional vector to control the robot) and the grounded reasoning
+ action, grounded_reasoning = vla.generate_actions(
+     image=image, prompt_text=prompt, type="act",
+     do_sample=False, max_new_tokens=512
+ )
+
+ print("Grounded Reasoning:", grounded_reasoning)
+ # Execute...
+ robot.act(action, ...)
+ ```
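+
+ A single prediction like the one above extends naturally to a closed-loop rollout, since Emma-X re-plans from the latest observation at every step. The sketch below is illustrative rather than official usage: it reuses the placeholder helpers from the snippet above (`get_from_camera`, `robot.act`), fills the instruction slot of the prompt with an example task, and resizes the frame to the 224x224 input mentioned in the comment.
+
+ ```python
+ # Minimal closed-loop rollout sketch; assumes `vla`, `get_from_camera`, and
+ # `robot` from the snippet above. Purely illustrative.
+ TASK = "put the carrot on the plate"  # example instruction; replace with your own
+ PROMPT_TEMPLATE = (
+     "In: What action should the robot take to achieve the instruction"
+     "\nINSTRUCTION: \n{instruction}\n\nOut: "
+ )
+
+ for step in range(50):  # cap the episode length
+     # Observe: grab the current frame and resize it to the expected 224x224
+     image = get_from_camera(...).resize((224, 224))
+
+     # Plan + act: one grounded-reasoning pass and one 7-D action per step
+     action, grounded_reasoning = vla.generate_actions(
+         image=image,
+         prompt_text=PROMPT_TEMPLATE.format(instruction=TASK),
+         type="act",
+         do_sample=False,
+         max_new_tokens=512,
+     )
+     robot.act(action, ...)
+ ```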
+
+ ## Citation
+ ```bibtex
+ @article{sun2024emma,
+   title={Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning},
+   author={Sun, Qi and Hong, Pengfei and Pala, Tej Deep and Toh, Vernon and Tan, U-Xuan and Ghosal, Deepanway and Poria, Soujanya},
+   journal={arXiv preprint arXiv:2412.11974},
+   year={2024}
+ }
+ ```