---
base_model: microsoft/git-base
datasets:
- Sigurdur/isl-image-captioning
language:
- is
- en
license: mit
metrics:
- wer
pipeline_tag: image-to-text
tags:
- generated_from_trainer
model-index:
- name: isl-img2text
  results: []
widget:
- src: examples-for-inference/a.jpg
- src: examples-for-inference/b.jpg
- src: examples-for-inference/c.jpg
library_name: transformers
---


# isl-img2text

Author: Sigurdur Haukur Birgisson

This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on [Sigurdur/isl-image-captioning](https://huggingface.co/Sigurdur/isl-image-captioning).
It achieves the following results on the evaluation set:
- eval_loss: 0.0983
- eval_wer_score: 0.7295
- eval_runtime: 20.5346
- eval_samples_per_second: 7.792
- eval_steps_per_second: 0.974
- epoch: 15.0
- step: 150

It appears that the model heavily overfit the dataset. Another limitation I failed to consider is that the base model cannot write Icelandic-specific characters (such as þ, ð, and æ) and was therefore not well suited to this task. Future work might add the capability of writing Icelandic characters to the model.
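
A quick way to verify the missing-characters problem is to round-trip Icelandic letters through the base model's tokenizer (a sketch; characters outside the vocabulary come back as `[UNK]` or accent-stripped, so the model cannot generate them):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/git-base")
for ch in "áðéíóúýþæö":
    # Encode and decode each character; anything that does not survive the
    # round trip is not representable in the model's vocabulary.
    ids = tokenizer.encode(ch, add_special_tokens=False)
    print(f"{ch!r} -> {tokenizer.decode(ids)!r}")
```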

repo: [https://github.com/sigurdurhaukur/isl-img-cap](https://github.com/sigurdurhaukur/isl-img-cap)

## Model description

isl-img2text is [microsoft/git-base](https://huggingface.co/microsoft/git-base) (GIT, a generative image-to-text Transformer) fine-tuned to generate image captions in Icelandic.

## Intended uses & limitations

The model is intended for image captioning in Icelandic. Given the limitations noted above (heavy overfitting and the missing Icelandic characters), its output should be treated as experimental.
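
A minimal inference sketch (assuming this checkpoint is published as `Sigurdur/isl-img2text`; swap in the actual repo id or a local path if that assumption is wrong):

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

# Repo id assumed from the model name above.
model_id = "Sigurdur/isl-img2text"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# One of the sample images bundled with this repo.
image = Image.open("examples-for-inference/a.jpg")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```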

## Training and evaluation data

Images and their descriptions/captions scraped from the Icelandic Wikipedia, published on the Hub as [Sigurdur/isl-image-captioning](https://huggingface.co/datasets/Sigurdur/isl-image-captioning).
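
Since the dataset is on the Hub, it can be loaded directly (a sketch; the split and column names depend on how the dataset was published):

```python
from datasets import load_dataset

# Dataset id taken from the metadata above.
dataset = load_dataset("Sigurdur/isl-image-captioning")
print(dataset)              # inspect splits and column names
print(dataset["train"][0])  # one image/caption pair ("train" split assumed)
```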

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
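
For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` and any logging/evaluation settings are assumptions, not recorded in the card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="isl-img2text",       # assumed; not recorded in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # 8 * 8 = 64 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```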


## Metrics

| Epoch | Training Loss | Validation Loss | WER Score   |
|-------|---------------|-----------------|-------------|
| 1     | 10.096300     | 8.690205        | 102.247536  |
| 2     | 8.268200      | 7.655295        | 97.659365   |
| 3     | 7.298000      | 6.679112        | 95.714129   |
| 4     | 6.319800      | 5.673368        | 2.136911    |
| 5     | 5.317500      | 4.656871        | 22.439211   |
| 6     | 4.315600      | 3.667494        | 1.001095    |
| 7     | 3.340000      | 2.722741        | 1.063527    |
| 8     | 2.417700      | 1.852253        | 0.944140    |
| 9     | 1.593900      | 1.136962        | 0.949617    |
| 10    | 0.944900      | 0.638581        | 0.933187    |
| 11    | 0.516200      | 0.355187        | 0.955093    |
| 12    | 0.281600      | 0.215951        | 0.822563    |
| 13    | 0.167500      | 0.148763        | 0.773275    |
| 14    | 0.111700      | 0.116783        | 0.792990    |
| 15    | 0.080800      | 0.098261        | 0.729463    |
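
The WER scores above compare generated captions against the reference captions. With the `evaluate` library the computation looks roughly like this (a sketch; the caption strings are made-up examples, not from the dataset):

```python
import evaluate

wer = evaluate.load("wer")
predictions = ["mynd af fjalli"]          # hypothetical generated caption
references = ["mynd af fjalli og vatni"]  # hypothetical reference caption
print(wer.compute(predictions=predictions, references=references))
```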


### Framework versions

- Transformers 4.42.3
- Pytorch 2.0.1
- Datasets 2.20.0
- Tokenizers 0.19.1