---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut-base-sroie-metrics-combined-new
  results: []
---

# donut-base-sroie-metrics-combined-new

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3171
- Bleu: 0.0705
- Precisions: [0.8333333333333334, 0.7599067599067599, 0.7123655913978495, 0.6761904761904762]
- Brevity Penalty: 0.0948
- Length Ratio: 0.2980
- Translation Length: 486
- Reference Length: 1631
- Cer: 0.7492
- Wer: 0.8169

Note that the low BLEU score is driven almost entirely by the brevity penalty, not by poor n-gram precision: the generated output totals 486 tokens against a 1631-token reference, so the penalty is exp(1 - 1631/486) ≈ 0.0948, which scales down otherwise high precisions.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
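These settings map onto `Seq2SeqTrainingArguments` roughly as shown below. This is a minimal, hypothetical sketch: the `output_dir` is a placeholder, and the Adam betas and epsilon listed above are the `transformers` defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical sketch of the training configuration described above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="donut-base-sroie-metrics-combined-new",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 2 * 2 = 4
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,  # "Native AMP" mixed-precision training
)
```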
### Training results

| Training Loss | Epoch   | Step | Validation Loss | Bleu   | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer    | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 6.4453        | 0.9960  | 126  | 2.4195          | 0.0    | [0.4214046822742475, 0.08130081300813008, 0.02040816326530612, 0.0] | 0.0116 | 0.1833 | 299 | 1631 | 0.9070 | 0.9677 |
| 2.7428        | 2.0     | 253  | 1.0515          | 0.0210 | [0.6041666666666666, 0.3696808510638298, 0.2772585669781931, 0.20676691729323307] | 0.0623 | 0.2649 | 432 | 1631 | 0.8040 | 0.9235 |
| 1.5566        | 2.9960  | 379  | 0.6386          | 0.0442 | [0.7029914529914529, 0.5693430656934306, 0.480225988700565, 0.4107744107744108] | 0.0833 | 0.2869 | 468 | 1631 | 0.7640 | 0.8789 |
| 0.8362        | 4.0     | 506  | 0.4649          | 0.0646 | [0.7570281124497992, 0.6485260770975056, 0.5911458333333334, 0.5382262996941896] | 0.1028 | 0.3053 | 498 | 1631 | 0.7585 | 0.8472 |
| 0.6682        | 4.9960  | 632  | 0.4224          | 0.0636 | [0.7540322580645161, 0.6514806378132119, 0.5916230366492147, 0.5323076923076923] | 0.1014 | 0.3041 | 496 | 1631 | 0.7607 | 0.8464 |
| 0.5031        | 6.0     | 759  | 0.3836          | 0.0655 | [0.7857142857142857, 0.6928406466512702, 0.6382978723404256, 0.5893416927899686] | 0.0974 | 0.3004 | 490 | 1631 | 0.7561 | 0.8344 |
| 0.446         | 6.9960  | 885  | 0.3603          | 0.0694 | [0.8179959100204499, 0.7384259259259259, 0.6853333333333333, 0.6383647798742138] | 0.0968 | 0.2998 | 489 | 1631 | 0.7526 | 0.8313 |
| 0.3507        | 8.0     | 1012 | 0.3284          | 0.0700 | [0.8118609406952966, 0.7407407407407407, 0.696, 0.6540880503144654] | 0.0968 | 0.2998 | 489 | 1631 | 0.7534 | 0.8262 |
| 0.2981        | 8.9960  | 1138 | 0.3234          | 0.0687 | [0.8340248962655602, 0.7623529411764706, 0.720108695652174, 0.6752411575562701] | 0.0922 | 0.2955 | 482 | 1631 | 0.7488 | 0.8198 |
| 0.322         | 10.0    | 1265 | 0.3247          | 0.0705 | [0.8295687885010267, 0.7511627906976744, 0.710455764075067, 0.6708860759493671] | 0.0955 | 0.2986 | 487 | 1631 | 0.7518 | 0.8219 |
| 0.2581        | 10.9960 | 1391 | 0.3154          | 0.0704 | [0.8429752066115702, 0.7681498829039812, 0.7243243243243244, 0.6869009584664537] | 0.0935 | 0.2968 | 484 | 1631 | 0.7485 | 0.8162 |
| 0.2311        | 11.9526 | 1512 | 0.3171          | 0.0705 | [0.8333333333333334, 0.7599067599067599, 0.7123655913978495, 0.6761904761904762] | 0.0948 | 0.2980 | 486 | 1631 | 0.7492 | 0.8169 |

### Framework versions

- Transformers 4.41.0.dev0
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1
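## Inference example

The base Donut checkpoint is used through `DonutProcessor` and `VisionEncoderDecoderModel`, so a fine-tune of it can be loaded the same way. The sketch below is hypothetical: the repository id, image path, and task prompt are placeholders that would need to match the actual checkpoint and the prompt used at training time.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Placeholder repo id; replace with the Hub id or local path of this fine-tune.
checkpoint = "your-username/donut-base-sroie-metrics-combined-new"
processor = DonutProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# Encode a document image into pixel values for the vision encoder.
image = Image.open("receipt.png").convert("RGB")  # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

# Donut is conditioned on a task prompt; "<s>" is a placeholder — use the
# prompt this model was actually trained with.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
        use_cache=True,
    )

# Decode the generated tokens; token2json converts Donut's tag-style
# output into a dict when the output follows that format.
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))
```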