---
language:
- en
base_model:
- Salesforce/blip-image-captioning-base
pipeline_tag: image-to-text
tags:
- art
license: apache-2.0
metrics:
- bleu
library_name: transformers
---

# My Fine-Tuned Image Captioning Model
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) trained on a custom dataset. Given an input image, it generates a natural-language caption describing its visual content.
## Model Description
This image captioning model uses the BLIP architecture to generate descriptive captions for consumer electronics, appliances, and general items. It is well suited to e-commerce platforms and accessibility applications.
## Intended Use and Limitations
- **Intended Use**: E-commerce platforms, accessibility apps, and content generation.
- **Limitations**: May produce repetitive or generic phrases for blurry, cluttered, or otherwise ambiguous images.
## Training Details
The model was fine-tuned on a custom dataset of diverse images paired with descriptive captions.
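The exact dataset and hyperparameters are not documented in this card. The snippet below is only a minimal sketch of how BLIP fine-tuning is commonly set up with the `transformers` library; the image paths, captions, batch size, learning rate, and epoch count are illustrative placeholders, not the settings actually used for this model.

```python
# Minimal fine-tuning sketch (illustrative only; dataset and hyperparameters are placeholders).
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

class CaptionDataset(Dataset):
    """Wraps a list of (image_path, caption) pairs -- a placeholder data source."""
    def __init__(self, pairs):
        self.pairs = pairs

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        path, caption = self.pairs[idx]
        image = Image.open(path).convert("RGB")
        enc = processor(images=image, text=caption,
                        padding="max_length", max_length=64,
                        truncation=True, return_tensors="pt")
        # Drop the batch dimension added by return_tensors="pt".
        return {k: v.squeeze(0) for k, v in enc.items()}

pairs = [("example.jpg", "a silver laptop on a wooden desk")]  # placeholder example
loader = DataLoader(CaptionDataset(pairs), batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        # BLIP returns a captioning loss when the caption token ids are passed as labels.
        # (In a real setup you may want to mask padding tokens out of the loss.)
        outputs = model(pixel_values=batch["pixel_values"],
                        input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"],
                        labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```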
## Evaluation and Metrics
The model was evaluated with BLEU and CIDEr, reaching a BLEU score of 25 and a CIDEr score of 100 on the test set.
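For illustration, BLEU can be computed with the Hugging Face `evaluate` library as sketched below; the candidate and reference captions are placeholders, not the actual test data, and `evaluate` reports BLEU on a 0–1 scale (multiply by 100 to compare with the score above). CIDEr is not bundled with `evaluate` and is typically computed with a COCO-caption evaluation toolkit such as `pycocoevalcap`.

```python
# Illustrative BLEU computation; the captions below are placeholders, not the real test set.
import evaluate

bleu = evaluate.load("bleu")

predictions = ["a silver laptop on a wooden desk"]           # model-generated captions
references = [["a thin silver laptop sitting on a desk"]]    # one or more references per image

results = bleu.compute(predictions=predictions, references=references)
print(f"BLEU: {results['bleu'] * 100:.1f}")  # `evaluate` returns BLEU in [0, 1]
```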
## Model Usage
The snippet below is a minimal inference sketch using the `transformers` library. The repository id `your-username/my-finetuned-blip` and the image URL are placeholders; substitute the actual id of this repository and your own image.
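```python
# Minimal inference sketch; the model id and image URL below are placeholders.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

model_id = "your-username/my-finetuned-blip"  # hypothetical id -- replace with this repository's id
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForConditionalGeneration.from_pretrained(model_id)

# Load any RGB image (placeholder URL shown here).
url = "https://example.com/product.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Preprocess the image and generate a caption.
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```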