---
base_model: microsoft/git-base
datasets:
  - Sigurdur/isl-image-captioning
language:
  - is
  - en
license: mit
metrics:
  - wer
pipeline_tag: image-to-text
tags:
  - generated_from_trainer
model-index:
  - name: isl-img2text
    results: []
widget:
  - src: examples-for-inference/a.jpg
  - src: examples-for-inference/b.jpg
  - src: examples-for-inference/c.jpg
---

# isl-img2text

Author: Sigurdur Haukur Birgisson

This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on [Sigurdur/isl-image-captioning](https://huggingface.co/datasets/Sigurdur/isl-image-captioning). It achieves the following results on the evaluation set:

  • eval_loss: 0.0983
  • eval_wer_score: 0.7295
  • eval_runtime: 20.5346
  • eval_samples_per_second: 7.792
  • eval_steps_per_second: 0.974
  • epoch: 15.0
  • step: 150
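
The WER (word error rate) reported above is the word-level edit distance between the generated caption and the reference, divided by the number of reference words. A minimal sketch of the metric, for illustration only (not the exact evaluation script used here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

Note that WER is not bounded by 1.0: if the model emits many more wrong words than the reference contains (as in the earliest epochs of the table below), the score can far exceed 1.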

It appears that the model heavily overfitted to the dataset. Another limitation I failed to consider is that the base model cannot write Icelandic characters (e.g. ð, þ, æ, ö), so it was not well suited to this task. Future work might add the capability of writing Icelandic characters to the model.

## Model description

More information needed

## Intended uses & limitations

Image captioning in Icelandic
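
A minimal inference sketch using the Transformers `pipeline` API (the image path below is illustrative; any local path or URL to an image works):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
captioner = pipeline("image-to-text", model="Sigurdur/isl-img2text")

# Caption one of the example images (path is a placeholder)
result = captioner("examples-for-inference/a.jpg")
print(result[0]["generated_text"])
```

Given the limitations above, expect captions without Icelandic-specific characters.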

## Training and evaluation data

Images and their descriptions/captions scraped from the Icelandic Wikipedia.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
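
The list above maps onto the Transformers `TrainingArguments` roughly as follows. This is a hedged sketch reconstructed from the card, not the author's actual training script; `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Configuration implied by the hyperparameter list above
args = TrainingArguments(
    output_dir="isl-img2text",        # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,    # effective train batch size: 8 * 8 = 64
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                        # "Native AMP" mixed precision
)
```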

### Metrics

| Epoch | Training Loss | Validation Loss | WER Score |
|------:|--------------:|----------------:|----------:|
| 1 | 10.096300 | 8.690205 | 102.247536 |
| 2 | 8.268200 | 7.655295 | 97.659365 |
| 3 | 7.298000 | 6.679112 | 95.714129 |
| 4 | 6.319800 | 5.673368 | 2.136911 |
| 5 | 5.317500 | 4.656871 | 22.439211 |
| 6 | 4.315600 | 3.667494 | 1.001095 |
| 7 | 3.340000 | 2.722741 | 1.063527 |
| 8 | 2.417700 | 1.852253 | 0.944140 |
| 9 | 1.593900 | 1.136962 | 0.949617 |
| 10 | 0.944900 | 0.638581 | 0.933187 |
| 11 | 0.516200 | 0.355187 | 0.955093 |
| 12 | 0.281600 | 0.215951 | 0.822563 |
| 13 | 0.167500 | 0.148763 | 0.773275 |
| 14 | 0.111700 | 0.116783 | 0.792990 |
| 15 | 0.080800 | 0.098261 | 0.729463 |

### Framework versions

  • Transformers 4.42.3
  • PyTorch 2.0.1
  • Datasets 2.20.0
  • Tokenizers 0.19.1