
OpenFlamingo-9B (CLIP ViT-L/14, MPT-7B)

Paper | Blog post | Code | Demo

OpenFlamingo is an open source implementation of DeepMind's Flamingo models. This 9B-parameter model uses a CLIP ViT-L/14 vision encoder and MPT-7B language model.

Model Details

We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of LAION-2B and Multimodal C4.

This model has cross-attention modules inserted in every fourth decoder block. It was trained using DistributedDataParallel across 64 A100 80GB GPUs with automatic BF16 mixed precision.
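
For intuition, below is a minimal sketch of a Flamingo-style gated cross-attention block. It is an illustrative simplification, not the actual OpenFlamingo implementation (which also includes a gated feed-forward sub-block, media-aware masking, and perceiver-resampled visual tokens); the class and argument names are hypothetical.

import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Simplified sketch of the gated cross-attention inserted before a frozen decoder block."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # tanh gate initialized at zero, so the frozen language model's behavior
        # is unchanged at the start of training
        self.attn_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden, visual_tokens):
        # text_hidden: (batch, text_len, dim); visual_tokens: (batch, num_visual_tokens, dim)
        attended, _ = self.attn(self.norm(text_hidden), visual_tokens, visual_tokens)
        return text_hidden + torch.tanh(self.attn_gate) * attended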

To use these MPT weights, OpenFlamingo must be initialized using revision 68e1a8e0ebb9b30f3c45c1ef6195980f29063ae2 of the MPT-7B modeling code. We suggest using the anas-awadalla/mpt-7b copy of the model (as in the initialization snippet below) to ensure the code is loaded at that commit.
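
If you would rather pin the upstream mosaicml/mpt-7b repository yourself, one option is to download a snapshot at that commit and point lang_encoder_path and tokenizer_path at the local directory. This is only a sketch; it assumes create_model_and_transforms accepts local paths (which it forwards to from_pretrained).

from huggingface_hub import snapshot_download

# Download MPT-7B at the exact commit of the modeling code expected by OpenFlamingo.
mpt_path = snapshot_download(
    "mosaicml/mpt-7b",
    revision="68e1a8e0ebb9b30f3c45c1ef6195980f29063ae2",
)
# Then pass mpt_path as lang_encoder_path and tokenizer_path below.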

Uses

OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
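
For example, a few-shot visual question answering prompt interleaves images with question/answer pairs. The wording below is purely illustrative; see the captioning walkthrough in the Generation example for the full pipeline.

# Hypothetical few-shot VQA prompt: one in-context example, then a query image.
vqa_prompt = (
    "<image>Question: What color is the cat? Short answer: black<|endofchunk|>"
    "<image>Question: What is on the sink? Short answer:"
)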

Initialization

from open_flamingo import create_model_and_transforms

model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="anas-awadalla/mpt-7b",
    tokenizer_path="anas-awadalla/mpt-7b",
    cross_attn_every_n_layers=4
)

# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch

checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-9B-vitl-mpt7b", "checkpoint.pt")
# strict=False: the checkpoint stores only the parameters trained by OpenFlamingo
# (e.g. the perceiver resampler and gated cross-attention layers); the frozen
# vision encoder and language model weights are loaded by create_model_and_transforms.
model.load_state_dict(torch.load(checkpoint_path), strict=False)

Generation example

Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.

from PIL import Image
import requests

"""
Step 1: Load images
"""
demo_image_one = Image.open(
    requests.get(
        "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
    ).raw
)

demo_image_two = Image.open(
    requests.get(
        "http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
        stream=True
    ).raw
)

query_image = Image.open(
    requests.get(
        "http://images.cocodataset.org/test-stuff2017/000000028352.jpg", 
        stream=True
    ).raw
)


"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape 
 batch_size x num_media x num_frames x channels x height x width. 
 In this case batch_size = 1, num_media = 3, num_frames = 1,
 channels = 3, height = 224, width = 224.
"""
vision_x = [
    image_processor(demo_image_one).unsqueeze(0),
    image_processor(demo_image_two).unsqueeze(0),
    image_processor(query_image).unsqueeze(0),
]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
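
"""
Optional sanity check (assumes the default CLIP ViT-L/14 preprocessing, which
 resizes images to 224x224): vision_x should now have shape
 (batch_size=1, num_media=3, num_frames=1, channels=3, height=224, width=224).
"""
assert vision_x.shape == (1, 3, 1, 3, 224, 224)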

"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
 We also expect an <|endofchunk|> special token to indicate the end of the text 
 portion associated with an image.
"""
tokenizer.padding_side = "left"  # for generation, padding tokens should be on the left
lang_x = tokenizer(
    ["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
    return_tensors="pt",
)
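
"""
Optional check: create_model_and_transforms registers "<image>" and
 "<|endofchunk|>" as special tokens, so each should map to a single token id.
"""
print(tokenizer.convert_tokens_to_ids(["<image>", "<|endofchunk|>"]))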


"""
Step 4: Generate text
"""
generated_text = model.generate(
    vision_x=vision_x,
    lang_x=lang_x["input_ids"],
    attention_mask=lang_x["attention_mask"],
    max_new_tokens=20,
    num_beams=3,
)

print("Generated text: ", tokenizer.decode(generated_text[0]))

Bias, Risks, and Limitations

OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.

In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.

Evaluation

Benchmark                  0-shot       4-shot       8-shot       16-shot      32-shot
COCO (CIDEr)               79.5 (0.2)   89.0 (0.3)   96.3 (0.1)   98.8 (0.7)   99.5 (0.1)
VQAv2 (Accuracy)           50.3 (0.7)   50.5 (0.5)   52.8 (0.3)   52.3 (0.3)   50.5 (0.0)
Flickr-30K (CIDEr)         59.5 (1.0)   65.8 (0.6)   62.9 (1.0)   62.8 (1.0)   61.3 (0.7)
OK-VQA (Accuracy)          34.7 (0.1)   34.3 (0.1)   38.4 (0.0)   39.5 (0.1)   38.1 (0.0)
TextVQA (Accuracy)         24.2 (0.5)   28.2 (0.4)   29.1 (0.1)   27.3 (0.1)   23.8 (0.2)
VizWiz (Accuracy)          17.7 (0.7)   23.1 (0.9)   31.6 (1.5)   38.0 (1.1)   40.2 (0.7)
Hateful Memes (ROC AUC)    50.8 (4.7)   47.5 (2.2)   45.2 (2.7)   46.9 (3.8)   52.0 (2.1)