---
license: apache-2.0
---

# OFA-tiny

This is the **tiny** version of the OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) into a simple sequence-to-sequence learning framework.

The directory includes four files: `config.json`, which contains the model configuration; `vocab.json` and `merges.txt` for the OFA tokenizer; and `pytorch_model.bin`, which contains the model weights. There is no need to worry about a mismatch between Fairseq and transformers, as we have already addressed it.

To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install the transformers fork and download the model as shown below.

```
# clone the OFA branch that bundles the transformers fork
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
# download the OFA-tiny checkpoint and tokenizer files
git clone https://huggingface.co/OFA-Sys/OFA-tiny
```
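
As a quick sanity check, you can confirm that the fork is the installed version of transformers by importing the OFA classes used below; the import fails on a stock transformers release:

```
>>> from transformers import OFATokenizer, OFAForConditionalGeneration  # an ImportError here means the fork is not installed
```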

After that, point `ckpt_dir` to the downloaded OFA-tiny directory and prepare an image for the test example below. Also make sure that Pillow and torchvision are installed in your environment.

```
>>> from PIL import Image
>>> from torchvision import transforms
>>> from transformers import OFATokenizer, OFAForConditionalGeneration

>>> # preprocessing: convert to RGB, resize to 256x256, and rescale pixel values to [-1, 1]
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 256
>>> patch_resize_transform = transforms.Compose([
        lambda image: image.convert("RGB"),
        transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
        transforms.ToTensor(),
        transforms.Normalize(mean=mean, std=std)
    ])

>>> # ckpt_dir is the path to the cloned OFA-tiny directory
>>> model = OFAForConditionalGeneration.from_pretrained(ckpt_dir)
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)

>>> txt = " what is the description of the image?"  # note the leading space in the prompt
>>> inputs = tokenizer([txt], max_length=1024, return_tensors="pt")["input_ids"]
>>> img = Image.open(path_to_image)  # path_to_image is the path to your test image
>>> patch_img = patch_resize_transform(img).unsqueeze(0)

>>> # caption the image with beam search, then decode the output ids back to text
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=4)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
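
If a CUDA device is available, the same example can run on the GPU with standard PyTorch device placement. The snippet below is a minimal sketch that reuses `model`, `tokenizer`, `inputs`, and `patch_img` from the example above:

```
>>> import torch

>>> # move the model and the inputs to the GPU when one is available
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = model.to(device)
>>> gen = model.generate(inputs.to(device), patch_images=patch_img.to(device), num_beams=4)
>>> print(tokenizer.batch_decode(gen.cpu(), skip_special_tokens=True))
```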