davelotito committed
Commit 3364de6
1 Parent(s): df59b30

End of training

README.md CHANGED
@@ -4,6 +4,7 @@ base_model: naver-clova-ix/donut-base
 tags:
 - generated_from_trainer
 metrics:
+- bleu
 - wer
 model-index:
 - name: donut-base-sroie-metrics-combined-new
@@ -17,17 +18,15 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3400
-- Bleu score: 0.0856
-- Precisions: [0.8478260869565217, 0.8017817371937639, 0.7755102040816326, 0.755223880597015]
-- Brevity penalty: 0.1078
-- Length ratio: 0.3099
-- Translation length: 506
-- Reference length: 1633
-- Cer: 0.7597
-- Wer: 0.8305
-- Cer Hugging Face: 0.7664
-- Wer Hugging Face: 0.8347
+- Loss: 0.3171
+- Bleu: 0.0705
+- Precisions: [0.8333333333333334, 0.7599067599067599, 0.7123655913978495, 0.6761904761904762]
+- Brevity Penalty: 0.0948
+- Length Ratio: 0.2980
+- Translation Length: 486
+- Reference Length: 1631
+- Cer: 0.7492
+- Wer: 0.8169

 ## Model description

@@ -47,29 +46,37 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 1
-- eval_batch_size: 1
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 2
+- total_train_batch_size: 4
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 4
+- num_epochs: 12
 - mixed_precision_training: Native AMP

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss | Bleu score | Precisions | Brevity penalty | Length ratio | Translation length | Reference length | Cer | Wer | Cer Hugging Face | Wer Hugging Face |
-|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| 0.9692 | 1.0 | 253 | 0.4901 | 0.0746 | [0.8011928429423459, 0.726457399103139, 0.6760925449871465, 0.6295180722891566] | 0.1058 | 0.3080 | 503 | 1633 | 0.7672 | 0.8440 | 0.7741 | 0.8478 |
-| 0.437 | 2.0 | 506 | 0.3906 | 0.0824 | [0.8382642998027613, 0.7755555555555556, 0.7353689567430025, 0.6964285714285714] | 0.1085 | 0.3105 | 507 | 1633 | 0.7611 | 0.8328 | 0.7675 | 0.8367 |
-| 0.2997 | 3.0 | 759 | 0.3565 | 0.0858 | [0.828125, 0.778021978021978, 0.7462311557788944, 0.718475073313783] | 0.1120 | 0.3135 | 512 | 1633 | 0.7640 | 0.8363 | 0.7703 | 0.8397 |
-| 0.2168 | 4.0 | 1012 | 0.3400 | 0.0856 | [0.8478260869565217, 0.8017817371937639, 0.7755102040816326, 0.755223880597015] | 0.1078 | 0.3099 | 506 | 1633 | 0.7597 | 0.8305 | 0.7664 | 0.8347 |
+| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
+|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| 6.4453 | 0.9960 | 126 | 2.4195 | 0.0 | [0.4214046822742475, 0.08130081300813008, 0.02040816326530612, 0.0] | 0.0116 | 0.1833 | 299 | 1631 | 0.9070 | 0.9677 |
+| 2.7428 | 2.0 | 253 | 1.0515 | 0.0210 | [0.6041666666666666, 0.3696808510638298, 0.2772585669781931, 0.20676691729323307] | 0.0623 | 0.2649 | 432 | 1631 | 0.8040 | 0.9235 |
+| 1.5566 | 2.9960 | 379 | 0.6386 | 0.0442 | [0.7029914529914529, 0.5693430656934306, 0.480225988700565, 0.4107744107744108] | 0.0833 | 0.2869 | 468 | 1631 | 0.7640 | 0.8789 |
+| 0.8362 | 4.0 | 506 | 0.4649 | 0.0646 | [0.7570281124497992, 0.6485260770975056, 0.5911458333333334, 0.5382262996941896] | 0.1028 | 0.3053 | 498 | 1631 | 0.7585 | 0.8472 |
+| 0.6682 | 4.9960 | 632 | 0.4224 | 0.0636 | [0.7540322580645161, 0.6514806378132119, 0.5916230366492147, 0.5323076923076923] | 0.1014 | 0.3041 | 496 | 1631 | 0.7607 | 0.8464 |
+| 0.5031 | 6.0 | 759 | 0.3836 | 0.0655 | [0.7857142857142857, 0.6928406466512702, 0.6382978723404256, 0.5893416927899686] | 0.0974 | 0.3004 | 490 | 1631 | 0.7561 | 0.8344 |
+| 0.446 | 6.9960 | 885 | 0.3603 | 0.0694 | [0.8179959100204499, 0.7384259259259259, 0.6853333333333333, 0.6383647798742138] | 0.0968 | 0.2998 | 489 | 1631 | 0.7526 | 0.8313 |
+| 0.3507 | 8.0 | 1012 | 0.3284 | 0.0700 | [0.8118609406952966, 0.7407407407407407, 0.696, 0.6540880503144654] | 0.0968 | 0.2998 | 489 | 1631 | 0.7534 | 0.8262 |
+| 0.2981 | 8.9960 | 1138 | 0.3234 | 0.0687 | [0.8340248962655602, 0.7623529411764706, 0.720108695652174, 0.6752411575562701] | 0.0922 | 0.2955 | 482 | 1631 | 0.7488 | 0.8198 |
+| 0.322 | 10.0 | 1265 | 0.3247 | 0.0705 | [0.8295687885010267, 0.7511627906976744, 0.710455764075067, 0.6708860759493671] | 0.0955 | 0.2986 | 487 | 1631 | 0.7518 | 0.8219 |
+| 0.2581 | 10.9960 | 1391 | 0.3154 | 0.0704 | [0.8429752066115702, 0.7681498829039812, 0.7243243243243244, 0.6869009584664537] | 0.0935 | 0.2968 | 484 | 1631 | 0.7485 | 0.8162 |
+| 0.2311 | 11.9526 | 1512 | 0.3171 | 0.0705 | [0.8333333333333334, 0.7599067599067599, 0.7123655913978495, 0.6761904761904762] | 0.0948 | 0.2980 | 486 | 1631 | 0.7492 | 0.8169 |


 ### Framework versions

 - Transformers 4.41.0.dev0
 - Pytorch 2.1.0
-- Datasets 2.19.0
+- Datasets 2.19.1
 - Tokenizers 0.19.1
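The updated card lists the training hyperparameters and the BLEU/CER/WER evaluation fields, but the training script itself is not part of this commit. As a hedged sketch only (the `output_dir` and the toy strings are assumptions, not taken from this repo), the hyperparameters map onto `Seq2SeqTrainingArguments` as below, and the reported metric fields can be produced with the `evaluate` library; `train_batch_size: 2` with `gradient_accumulation_steps: 2` is what gives the listed `total_train_batch_size: 4`.

```python
# Illustrative sketch only -- not the repo's actual training code.
import evaluate
from transformers import Seq2SeqTrainingArguments

# Hyperparameters as listed in the updated card; per-device batch size 2 with
# 2 gradient-accumulation steps yields the effective (total) batch size of 4.
training_args = Seq2SeqTrainingArguments(
    output_dir="donut-base-sroie-metrics-combined-new",  # assumed output dir
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)

bleu = evaluate.load("bleu")
cer = evaluate.load("cer")
wer = evaluate.load("wer")

def compute_text_metrics(predictions, references):
    """Return the fields the card reports: bleu, precisions, brevity_penalty,
    length_ratio, translation_length, reference_length, plus cer and wer."""
    out = bleu.compute(predictions=predictions, references=[[r] for r in references])
    out["cer"] = cer.compute(predictions=predictions, references=references)
    out["wer"] = wer.compute(predictions=predictions, references=references)
    return out

# Toy strings; a real evaluation would compare the model's decoded generations
# against the ground-truth Donut target sequences.
print(compute_text_metrics(["<s_total> 9.00 </s_total>"], ["<s_total> 9.00 </s_total>"]))
```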
runs/May07_13-59-12_ip-172-16-119-166.ec2.internal/events.out.tfevents.1715090352.ip-172-16-119-166.ec2.internal.16585.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:654e73ecab190ba546bff5b24efb3ee039f1b6d6499bfcafc4020c1e20a4f74e
-size 19202
+oid sha256:7b2b7701cf7568b9686c7c6fd2a2fc4586fc710fd3aef25ef00fe4358466363f
+size 20628
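The file above is a Git LFS pointer to the TensorBoard event log written during this run; only the pointer (hash and size) changes in the commit. If the repository is cloned with LFS, the logged scalars could be inspected with a sketch like the one below (the `tensorboard` package and the `eval/loss` tag, the usual Trainer default, are assumptions):

```python
# Sketch: read the updated TensorBoard event file from a local clone of the repo.
from glob import glob

from tensorboard.backend.event_processing import event_accumulator

# Pick up the event file inside the run directory named in the diff above.
event_file = glob("runs/May07_13-59-12_*/events.out.tfevents.*")[0]
ea = event_accumulator.EventAccumulator(event_file)
ea.Reload()

print(ea.Tags()["scalars"])            # all logged scalar tags
for event in ea.Scalars("eval/loss"):  # e.g. validation loss per evaluation step
    print(event.step, event.value)
```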
tokenizer.json CHANGED
@@ -1,21 +1,7 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
-  "padding": {
-    "strategy": {
-      "Fixed": 512
-    },
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 1,
-    "pad_type_id": 0,
-    "pad_token": "<pad>"
-  },
+  "truncation": null,
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,