---
language:
  - en
base_model:
  - Salesforce/blip-image-captioning-base
pipeline_tag: image-to-text
tags:
  - art
license: apache-2.0
metrics:
  - bleu
library_name: transformers
datasets:
  - phiyodr/coco2017
---

# Fine-Tuned Image Captioning Model

This is a fine-tuned version of BLIP for visual question answering on images. The model is fine-tuned on the Stanford Online Products dataset, which comprises 120k product images from online retail platforms. The dataset is enriched with answers generated by LLMs and then used to fine-tune the model.

This experimental model can be used to answer questions about product images in the retail industry. Example use cases include product metadata enrichment and validation of human-generated product descriptions.
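A minimal inference sketch using the standard `transformers` BLIP API. The checkpoint id below is the base model from the metadata; substitute this repository's own model id to load the fine-tuned weights, and `product.jpg` is a hypothetical local image path.

```python
# Hedged sketch: standard BLIP caption generation with transformers.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumption: using the base checkpoint id; replace with this repo's
# model id to run the fine-tuned version.
MODEL_ID = "Salesforce/blip-image-captioning-base"

def caption_image(image_path: str, model_id: str = MODEL_ID) -> str:
    """Generate a caption/answer string for a product image."""
    processor = BlipProcessor.from_pretrained(model_id)
    model = BlipForConditionalGeneration.from_pretrained(model_id)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

if __name__ == "__main__":
    # "product.jpg" is a placeholder path for illustration.
    print(caption_image("product.jpg"))
```

The same `processor(...)` call also accepts a `text=` prompt for conditional captioning, which is how question-style inputs are passed to BLIP.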

Examples: (place images here)

| Input Image | Model Output |
|-------------|--------------|
| image/jpeg  | chips nachos |
| image/jpeg  | a man in a suit walking across a crosswalk |
| image/png   | bush ' s best white beans |