
✨ Meet Emma-X, an Embodied Multimodal Action Model ✨

Emma-X


Model Overview

Emma-X is an Embodied Multimodal Action (VLA) model designed to bridge the gap between Vision-Language Models (VLMs) and robotic control tasks. It generalizes effectively across diverse environments, objects, and instructions, while excelling at long-horizon spatial reasoning and grounded task planning through a novel Trajectory Segmentation Strategy. It relies on:

  • Hierarchical Embodiment Dataset: Emma-X is trained on a dataset derived from BridgeV2 containing 60,000 robot manipulation trajectories, augmented with hierarchical annotations and visually grounded chain-of-thought reasoning. As a result, Emma-X's output includes the following components:

  • Grounded Chain-of-Thought Reasoning: Helps break down tasks into smaller, manageable subtasks, ensuring accurate task execution by mitigating hallucination in reasoning.

  • Gripper Position Guidance: An affordance point inside the image indicating the gripper position.

  • Look-Ahead Spatial Reasoning: Enables the model to plan actions with spatial guidance, enhancing long-horizon task performance.

It generates:

  • Action: A 7-dimensional action vector used to control the robot (WidowX, 6-DoF); one plausible decoding of this vector is sketched below.
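
In BridgeV2 / OpenVLA-style setups, the 7 dimensions are typically interpreted as end-effector position and rotation deltas plus a gripper command. The helper below is a minimal, hypothetical sketch of that decoding; the name decode_action and the exact ordering are assumptions, not an official Emma-X utility, so verify them against the Emma-X repository.

import numpy as np

def decode_action(action: np.ndarray) -> dict:
    """Hypothetical helper: name the parts of a 7-D action vector.

    Assumes the [dx, dy, dz, droll, dpitch, dyaw, gripper] layout commonly
    used in BridgeV2 / OpenVLA-style policies; confirm against Emma-X itself.
    """
    assert action.shape == (7,)
    return {
        "delta_position": action[:3],   # end-effector translation deltas
        "delta_rotation": action[3:6],  # end-effector rotation deltas
        "gripper": action[6],           # gripper open/close command
    }

# Example with a dummy action vector
print(decode_action(np.zeros(7)))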

Model Card

Getting Started

# Install minimal dependencies (`torch`, `transformers`, `timm`, `tokenizers`, ...)
# > pip install -r https://raw.githubusercontent.com/openvla/openvla/main/requirements-min.txt
from transformers import AutoModelForVision2Seq, AutoProcessor
from PIL import Image

import torch

# Load Emma-X
vla = AutoModelForVision2Seq.from_pretrained(
    "declare-lab/Emma-X",
    attn_implementation="flash_attention_2",  # [Optional] Requires `flash_attn`
    torch_dtype=torch.bfloat16, 
    low_cpu_mem_usage=True, 
    trust_remote_code=True
).to("cuda:0")

# Grab the image input (224x224) & format the prompt
image: Image.Image = get_from_camera(...)  # placeholder: replace with your camera capture
prompt = "In: What action should the robot take to achieve the instruction\nINSTRUCTION: \n{<Instruction here>}\n\nOut: "

# Predict action (a 7-dimensional vector to control the robot) plus grounded reasoning
action, grounded_reasoning = vla.generate_actions(
    image=image, prompt_text=prompt, type="act",
    do_sample=False, max_new_tokens=512
)

print("Grounded Reasoning:", grounded_reasoning)
# Execute...
robot.act(action, ...)
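
The snippet above leaves two placeholders: get_from_camera(...) and the {<Instruction here>} slot in the prompt. A minimal sketch of filling both in, assuming a frame saved to disk and an example instruction (the file path and instruction text below are illustrative, not from the model card):

from PIL import Image

# Illustrative stand-in for get_from_camera(...): load a saved frame and
# resize it to the 224x224 input mentioned above.
image = Image.open("frame.png").convert("RGB").resize((224, 224))

# Fill the instruction placeholder with a concrete command (example only).
instruction = "put the carrot on the plate"
prompt = (
    "In: What action should the robot take to achieve the instruction\n"
    f"INSTRUCTION: \n{instruction}\n\nOut: "
)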

Citation

@article{sun2024emma,
  title={Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning},
  author={Sun, Qi and Hong, Pengfei and Pala, Tej Deep and Toh, Vernon and Tan, U-Xuan and Ghosal, Deepanway and Poria, Soujanya},
  journal={arXiv preprint arXiv:2412.11974},
  year={2024}
}
Model Details

  • Model size: 7.54B parameters
  • Tensor type: BF16 (Safetensors)
  • Base model: openvla/openvla-7b