---
license: mit
datasets:
- AnyModal/flickr30k
base_model:
- meta-llama/Llama-3.2-1B
- google/vit-base-patch16-224
language:
- en
pipeline_tag: image-to-text
library_name: AnyModal
tags:
- vlm
- vision
- multimodal
- AnyModal
---
# AnyModal/Image-Captioning-Llama-3.2-1B
**AnyModal/Image-Captioning-Llama-3.2-1B** is an image captioning model built with the [AnyModal](https://github.com/ritabratamaiti/AnyModal) framework. It couples a Vision Transformer (ViT) encoder with the Llama 3.2-1B language model and was trained on the Flickr30k dataset, demonstrating how pre-trained vision and language components can be combined to generate descriptive captions for natural images.

---
## Trained On
This model was trained on the [Flickr30k Dataset](https://huggingface.co/datasets/AnyModal/flickr30k):

**From Image Descriptions to Visual Denotations: New Similarity Metrics for Semantic Inference Over Event Descriptions**
*Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, Svetlana Lazebnik*

The dataset contains 31,000 images collected from Flickr, each annotated with five descriptive sentences written by human annotators, covering a variety of real-world scenes and events.
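
To inspect the training data directly, the dataset can be loaded with the `datasets` library (not listed in the install command below). This is a minimal sketch; the split and column names are assumptions, so check the dataset card if they differ:

```python
from datasets import load_dataset  # pip install datasets

# Load the Flickr30k copy hosted on the Hugging Face Hub
flickr30k = load_dataset("AnyModal/flickr30k", split="train")

# Print the available columns and one example record
print(flickr30k.column_names)
print(flickr30k[0])
```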
---
## How to Use
### Installation
Install the necessary dependencies:
```bash
pip install torch transformers torchvision huggingface_hub tqdm matplotlib Pillow
```
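
Note that the `llm`, `vision`, and `anymodal` modules imported below are not pip packages; they ship with the [AnyModal](https://github.com/ritabratamaiti/AnyModal) repository, so clone it and run the example from the directory that contains those files (see *Project and Training Scripts* below).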
### Inference
Below is an example of generating captions for an image using this model:
```python
import llm
import anymodal
import torch
import vision
from torch.utils.data import DataLoader
import numpy as np
import os
from PIL import Image
from huggingface_hub import hf_hub_download
# Load language model and tokenizer
llm_tokenizer, llm_model = llm.get_llm(
    "meta-llama/Llama-3.2-1B",
    access_token="GET_YOUR_OWN_TOKEN_FROM_HUGGINGFACE",  # Llama 3.2 is a gated model; supply a Hugging Face token with access
    use_peft=False,
)
llm_hidden_size = llm.get_hidden_size(llm_tokenizer, llm_model)
# Load vision model components
image_processor, vision_model, vision_hidden_size = vision.get_image_encoder("google/vit-base-patch16-224", use_peft=False)
# Initialize vision tokenizer and encoder
vision_encoder = vision.VisionEncoder(vision_model)
vision_tokenizer = vision.Projector(vision_hidden_size, llm_hidden_size, num_hidden=1)
# Initialize MultiModalModel
multimodal_model = anymodal.MultiModalModel(
    input_processor=None,
    input_encoder=vision_encoder,
    input_tokenizer=vision_tokenizer,
    language_tokenizer=llm_tokenizer,
    language_model=llm_model,
    input_start_token="<|imstart|>",
    input_end_token="<|imend|>",
    prompt_text="The description of the given image is: ",
)
# Download pre-trained model weights
if not os.path.exists("image_captioning_model"):
    os.makedirs("image_captioning_model")

hf_hub_download("AnyModal/Image-Captioning-Llama-3.2-1B", filename="input_tokenizer.pt", local_dir="image_captioning_model")
multimodal_model._load_model("image_captioning_model")
# Generate caption for an image
image_path = "example_image.jpg" # Path to your image
image = Image.open(image_path).convert("RGB")
processed_image = image_processor(image, return_tensors="pt")
processed_image = {key: val.squeeze(0) for key, val in processed_image.items()} # Remove batch dimension
# Generate caption
generated_caption = multimodal_model.generate(processed_image, max_new_tokens=120)
print("Generated Caption:", generated_caption)
```
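
The same pipeline can be reused to caption several images in a loop. The snippet below is a minimal sketch, assuming the model and `image_processor` have been set up as above; `image_dir` is a hypothetical folder of images:

```python
import os
from PIL import Image

image_dir = "images"  # hypothetical folder containing the images to caption

for filename in sorted(os.listdir(image_dir)):
    if not filename.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    image = Image.open(os.path.join(image_dir, filename)).convert("RGB")
    inputs = image_processor(image, return_tensors="pt")
    inputs = {key: val.squeeze(0) for key, val in inputs.items()}  # remove batch dimension
    caption = multimodal_model.generate(inputs, max_new_tokens=120)
    print(f"{filename}: {caption}")
```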
---
## Project and Training Scripts
This model is part of the [AnyModal Image Captioning Project](https://github.com/ritabratamaiti/AnyModal/tree/main/Image%20Captioning).
- **Training Script**: [train.py](https://github.com/ritabratamaiti/AnyModal/blob/main/Image%20Captioning/train.py)
- **Inference Script**: [inference.py](https://github.com/ritabratamaiti/AnyModal/blob/main/Image%20Captioning/inference.py)

Refer to the project repository for further implementation details and customization.

---
## Project Details
- **Vision Encoder**: Pre-trained Vision Transformer (ViT) model for visual feature extraction.
- **Projector Network**: Projects visual features into a token space compatible with Llama 3.2-1B using a dense network (a sketch follows this list).
- **Language Model**: Llama 3.2-1B, a pre-trained causal language model for text generation.
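
For intuition, the projector used as the `vision_tokenizer` above is a small dense network that maps ViT features into the language model's embedding space. The sketch below is not the exact AnyModal implementation; the layer sizes and activation are assumptions, mirroring the `Projector(vision_hidden_size, llm_hidden_size, num_hidden=1)` call in the inference example:

```python
import torch.nn as nn

class ProjectorSketch(nn.Module):
    """Maps vision-encoder features to LLM-compatible token embeddings."""

    def __init__(self, vision_hidden_size: int, llm_hidden_size: int, num_hidden: int = 1):
        super().__init__()
        layers, in_dim = [], vision_hidden_size
        for _ in range(num_hidden):  # hidden dense layer(s) with a nonlinearity
            layers += [nn.Linear(in_dim, llm_hidden_size), nn.GELU()]
            in_dim = llm_hidden_size
        layers.append(nn.Linear(in_dim, llm_hidden_size))  # final projection into the LLM embedding space
        self.net = nn.Sequential(*layers)

    def forward(self, vision_features):
        # vision_features: (batch, num_patches, vision_hidden_size)
        # returns:         (batch, num_patches, llm_hidden_size)
        return self.net(vision_features)
```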
This implementation serves as a proof of concept, combining a ViT-based image encoder and a small language model. Future iterations could achieve improved performance by incorporating text-conditioned image encoders and larger-scale language models.