---
tags:
- image-to-text
- image-captioning
language:
- th
---

# Thai Image Captioning
Encoder-decoder style image captioning model that pairs a [CLIP encoder](https://huggingface.co/openai/clip-vit-base-patch32) with a [WangchanBERTa](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) decoder, trained on the Thai-language MSCOCO and IPU24 datasets.
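
Such a pairing can be assembled with the stock `VisionEncoderDecoderModel` API in `transformers`. The sketch below is illustrative only: it shows the architecture described above, not the released training code (the actual checkpoint ships its own custom code, which is why loading it needs `trust_remote_code=True`; see Usage).

```python
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    CLIPVisionModel,
    VisionEncoderDecoderModel,
)

# Vision encoder: the image tower of CLIP ViT-B/32.
encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

# Text decoder: WangchanBERTa configured for causal generation with
# cross-attention over the encoder's hidden states.
decoder_name = "airesearch/wangchanberta-base-att-spm-uncased"
decoder_config = AutoConfig.from_pretrained(decoder_name)
decoder_config.is_decoder = True
decoder_config.add_cross_attention = True
decoder = AutoModelForCausalLM.from_pretrained(decoder_name, config=decoder_config)

# Combine the two into a single encoder-decoder captioning model.
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
```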

# Usage

Load the model with `AutoModel`. Because the checkpoint ships its model code in the repository, loading requires `trust_remote_code=True`.
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModel, AutoTokenizer

device = 'cuda'
gen_kwargs = {"max_length": 120, "num_beams": 4}

# Load the image processor, tokenizer, and model from the Hub.
model_path = 'Natthaphon/thaicapgen-clip-gpt2'
feature_extractor = AutoImageProcessor.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(device)

# Preprocess the input image and move it to the same device as the model.
image_path = 'example.jpg'  # placeholder: path to your input image
pixel_values = feature_extractor(images=[Image.open(image_path)], return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)

# Generate caption token ids with beam search and decode them to Thai text.
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
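
`preds` is a list with one generated caption per input image, so batching is just a matter of passing several images to the processor at once. A short sketch reusing the objects defined above (the file names are placeholders):

```python
# Caption a batch of images with a single generate() call.
paths = ["img1.jpg", "img2.jpg"]  # placeholder paths
images = [Image.open(p).convert("RGB") for p in paths]
pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
for path, caption in zip(paths, tokenizer.batch_decode(output_ids, skip_special_tokens=True)):
    print(path, "->", caption)
```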

# Acknowledgement

This work is partially supported by the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (PMU-B) [Grant number B04G640107].