XxIKumaxX/working_dir

This model is a fine-tuned version of microsoft/git-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 7.3083
  • Bleu: 0.0022 (logged under the key "Wer Score", but the values are BLEU statistics: precisions = [0.0088, 0.0040, 0.0015, 0.0005], brevity_penalty = 1.0, length_ratio = 68.31, translation_length = 6489, reference_length = 95)

Model description

More information needed

Intended uses & limitations

More information needed
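In the absence of documented usage, a hedged sketch follows the base model's interface: microsoft/git-base is an image-captioning model, so this checkpoint is presumably called the same way. The image path is a placeholder, and the task itself is an assumption not confirmed by the card.

```python
# Hedged sketch: assumes this checkpoint keeps microsoft/git-base's
# image-captioning interface. "example.jpg" is a placeholder path.
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("XxIKumaxX/working_dir")
model = AutoModelForCausalLM.from_pretrained("XxIKumaxX/working_dir")

image = Image.open("example.jpg")  # placeholder image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate a caption autoregressively from the image features.
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```

Given the evaluation numbers below, captions from this checkpoint should be expected to be of low quality.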

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
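With a linear scheduler, one epoch, and ten optimizer steps (per the results below), the learning rate decays from 5e-05 toward 0 over training. A minimal sketch of that schedule, assuming zero warmup steps (none are listed) and the shape of Transformers' `get_linear_schedule_with_warmup`:

```python
# Sketch of the linear decay implied by lr_scheduler_type: linear.
# Assumption: no warmup, since the card lists no warmup steps.
def linear_lr(step, total_steps=10, base_lr=5e-05, warmup_steps=0):
    """Learning rate at a given optimizer step (0-indexed)."""
    if step < warmup_steps:
        # Linear ramp-up during warmup (unused here with warmup_steps=0).
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

schedule = [linear_lr(s) for s in range(10)]
# Starts at 5e-05, reaches half the base rate at the midpoint (step 5).
```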

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Precisions (1–4 gram)               | Length Ratio | Translation Length |
|--------------:|------:|-----:|----------------:|-------:|:------------------------------------|-------------:|-------------------:|
| 7.9926        | 0.1   | 1    | 7.8580          | 0.0000 | 0.0065 / 0.0017 / 0.0006 / 0.0000   | 68.20        | 6479               |
| 7.8988        | 0.2   | 2    | 7.7407          | 0.0000 | 0.0081 / 0.0027 / 0.0007 / 0.0000   | 73.71        | 7002               |
| 7.8036        | 0.3   | 3    | 7.6263          | 0.0000 | 0.0081 / 0.0030 / 0.0006 / 0.0000   | 74.42        | 7070               |
| 7.7237        | 0.4   | 4    | 7.5370          | 0.0000 | 0.0083 / 0.0035 / 0.0006 / 0.0000   | 74.48        | 7076               |
| 7.5959        | 0.5   | 5    | 7.4688          | 0.0017 | 0.0082 / 0.0035 / 0.0010 / 0.0003   | 74.52        | 7079               |
| 7.5450        | 0.6   | 6    | 7.4154          | 0.0017 | 0.0082 / 0.0032 / 0.0010 / 0.0003   | 71.49        | 6792               |
| 7.5008        | 0.7   | 7    | 7.3736          | 0.0027 | 0.0110 / 0.0047 / 0.0018 / 0.0006   | 53.48        | 5081               |
| 7.4952        | 0.8   | 8    | 7.3412          | 0.0027 | 0.0105 / 0.0046 / 0.0018 / 0.0006   | 57.26        | 5440               |
| 7.4316        | 0.9   | 9    | 7.3194          | 0.0023 | 0.0092 / 0.0039 / 0.0016 / 0.0005   | 65.18        | 6192               |
| 7.4141        | 1.0   | 10   | 7.3083          | 0.0022 | 0.0088 / 0.0040 / 0.0015 / 0.0005   | 68.31        | 6489               |

The brevity penalty was 1.0 and the reference length 95 at every evaluation step. The metric is logged under the key "Wer Score" but contains BLEU statistics.

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2
Model size

  • 177M params
  • Tensor type: F32 (Safetensors)