---
license: apache-2.0
---

# Emma-X (7B) Model Overview

EMMA-X is an Embodied Multimodal Action (VLA) Model designed to bridge the gap between Visual-Language Models (VLMs) and robotic control tasks. EMMA-X generalizes effectively across diverse environments, objects, and instructions while excelling at long-horizon spatial reasoning and grounded task planning, using a novel Trajectory Segmentation Strategy.

- **Hierarchical Embodiment Dataset:** EMMA-X is trained on a dataset derived from BridgeV2 containing 60,000 robot manipulation trajectories.

Trained on this hierarchical dataset with visually grounded chain-of-thought reasoning, EMMA-X's output includes the following components:

- **Grounded Chain-of-Thought Reasoning:** Breaks tasks down into smaller, manageable subtasks, ensuring accurate task execution by mitigating hallucination in reasoning.
- **Gripper Position Guidance:** An affordance point inside the image.
- **Look-Ahead Spatial Reasoning:** Enables the model to plan actions while considering spatial guidance, enhancing long-horizon task performance.
- **Action:** An action policy expressed as a 7-dimensional vector that controls the robot ([WidowX-6DoF](https://www.trossenrobotics.com/widowx-250)).

## Model Card

- **Developed by:** SUTD Declare Lab
- **Model type:** Vision-language-action (language, image => reasoning, robot actions)
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Finetuned from:** [`openvla-7B`](https://huggingface.co/openvla/openvla-7b/)
- **Pretraining Dataset:** Augmented version of [Bridge V2](https://rail-berkeley.github.io/bridgedata/); for more information, see our repository.
- **Repository:** [https://github.com/declare-lab/Emma-X/](https://github.com/declare-lab/Emma-X/)
- **Paper:** [Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning]()
- **Project Page & Videos:** []()

## Getting Started

```python
# Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
# > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
from transformers import AutoModelForVision2Seq, AutoProcessor
from PIL import Image

import torch

# Load Emma-X
vla = AutoModelForVision2Seq.from_pretrained(
    "declare-lab/Emma-X",
    attn_implementation="flash_attention_2",  # [Optional] Requires `flash_attn`
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to("cuda:0")

# Grab 224x224 image input & format prompt
image: Image.Image = get_from_camera(...)
prompt = "In: What action should the robot take to achieve the instruction\nINSTRUCTION: \n{}\n\nOut: "

# Predict action (a 7-dimensional vector to control the robot)
action, grounded_reasoning = vla.generate_actions(
    image=image,
    prompt_text=prompt,
    type="act",
    do_sample=False,
    max_new_tokens=512
)
print("Grounded Reasoning:", grounded_reasoning)

# Execute...
robot.act(action, ...)
```
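The `{}` placeholder in the prompt template above is where the natural-language instruction goes. Below is a minimal sketch of filling in the template and reading out the predicted action, reusing the `vla` model loaded in Getting Started. The example instruction, the image file, and the assumed ordering of the 7-dimensional action (end-effector position deltas, orientation deltas, gripper command) are illustrative assumptions, not guarantees of this card; check the repository for the exact convention before driving real hardware.

```python
from PIL import Image

# Hypothetical instruction and image source; replace with your own camera feed.
instruction = "put the carrot on the plate"
image = Image.open("camera_frame.png").convert("RGB").resize((224, 224))

# Fill the instruction into the prompt template shown in Getting Started.
prompt_template = (
    "In: What action should the robot take to achieve the instruction"
    "\nINSTRUCTION: \n{}\n\nOut: "
)
prompt = prompt_template.format(instruction)

# Same call as above; returns the 7-dimensional action and the reasoning text.
action, grounded_reasoning = vla.generate_actions(
    image=image,
    prompt_text=prompt,
    type="act",
    do_sample=False,
    max_new_tokens=512,
)

# Assumption: the action follows a Bridge-style layout of
# [dx, dy, dz, droll, dpitch, dyaw, gripper]; verify against the repository.
dx, dy, dz, droll, dpitch, dyaw, gripper = action
print(grounded_reasoning)
print(f"dpos=({dx:.3f}, {dy:.3f}, {dz:.3f}), "
      f"drot=({droll:.3f}, {dpitch:.3f}, {dyaw:.3f}), gripper={gripper:.2f}")
```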