bart-base-finetuned-question-to-answer

This model is a fine-tuned version of facebook/bart-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0108
  • Bleu: 59.1046
  • Gen Len: 20.0
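
For reference, a minimal inference sketch using the 🤗 Transformers `pipeline` API. This is an illustrative usage example, not code from the card: the helper names (`load_pipeline`, `generate_answer`) are ours, and `max_length=20` is an assumption chosen to mirror the ~20-token generation length reported above.

```python
# Repo id of this model on the Hugging Face Hub.
MODEL_ID = "RohanHBTU/bart-base-finetuned-question-to-answer"


def load_pipeline(model_id: str = MODEL_ID):
    """Build a text2text-generation pipeline; downloads the checkpoint on first use."""
    from transformers import pipeline  # deferred import: requires `transformers` to be installed
    return pipeline("text2text-generation", model=model_id)


def generate_answer(question: str, qa_pipe, max_length: int = 20) -> str:
    """Run one question through the seq2seq pipeline and return the generated text.

    max_length=20 mirrors the ~20-token Gen Len reported on the evaluation set.
    """
    outputs = qa_pipe(question, max_length=max_length)
    return outputs[0]["generated_text"]
```

Typical usage would be `generate_answer("Your question here?", load_pipeline())`.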

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
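
A quick sanity check on these settings, assuming one optimizer step per batch (no gradient accumulation, which the card does not mention): the 516 steps per epoch shown in the results table imply roughly 516 × 16 = 8,256 training examples, and 50 epochs give the 25,800 total steps seen in the final row.

```python
# Reported values from the hyperparameters and results table above.
train_batch_size = 16
steps_per_epoch = 516   # step 516 completes epoch 1.0 in the results table
num_epochs = 50

# With no gradient accumulation, each optimizer step consumes one batch,
# so the training split holds about steps_per_epoch * batch_size examples.
approx_train_examples = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(approx_train_examples)  # 8256
print(total_steps)            # 25800, matching the final step in the table
```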

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.559         | 1.0   | 516   | 0.6275          | 5.9858  | 18.5    |
| 2.3757        | 2.0   | 1032  | 0.5381          | 6.9001  | 18.4    |
| 2.1938        | 3.0   | 1548  | 0.5004          | 9.2119  | 17.8    |
| 2.0192        | 4.0   | 2064  | 0.3875          | 19.4081 | 18.1    |
| 1.8823        | 5.0   | 2580  | 0.3911          | 23.1245 | 18.2    |
| 1.7773        | 6.0   | 3096  | 0.3360          | 35.602  | 19.1    |
| 1.6561        | 7.0   | 3612  | 0.3018          | 22.9533 | 17.6    |
| 1.5494        | 8.0   | 4128  | 0.2970          | 32.4812 | 19.2    |
| 1.4596        | 9.0   | 4644  | 0.2351          | 46.2092 | 19.9    |
| 1.3723        | 10.0  | 5160  | 0.2382          | 42.1352 | 19.4    |
| 1.3056        | 11.0  | 5676  | 0.2203          | 43.1825 | 19.5    |
| 1.2302        | 12.0  | 6192  | 0.2005          | 38.4359 | 19.4    |
| 1.1611        | 13.0  | 6708  | 0.1694          | 43.435  | 19.5    |
| 1.0921        | 14.0  | 7224  | 0.1600          | 46.2221 | 19.5    |
| 1.0521        | 15.0  | 7740  | 0.1365          | 43.6428 | 19.5    |
| 0.9797        | 16.0  | 8256  | 0.1229          | 47.1793 | 19.5    |
| 0.9153        | 17.0  | 8772  | 0.1048          | 53.0445 | 20.0    |
| 0.8932        | 18.0  | 9288  | 0.1171          | 53.0445 | 20.0    |
| 0.8507        | 19.0  | 9804  | 0.0954          | 48.5863 | 18.9    |
| 0.7885        | 20.0  | 10320 | 0.0794          | 53.5876 | 19.0    |
| 0.7645        | 21.0  | 10836 | 0.0769          | 52.3334 | 18.9    |
| 0.7204        | 22.0  | 11352 | 0.0701          | 48.3328 | 18.9    |
| 0.685         | 23.0  | 11868 | 0.0576          | 52.7649 | 19.2    |
| 0.6524        | 24.0  | 12384 | 0.0521          | 54.2149 | 19.4    |
| 0.6302        | 25.0  | 12900 | 0.0486          | 54.2149 | 19.4    |
| 0.5926        | 26.0  | 13416 | 0.0408          | 59.1046 | 20.0    |
| 0.5701        | 27.0  | 13932 | 0.0419          | 48.1233 | 18.8    |
| 0.5483        | 28.0  | 14448 | 0.0418          | 54.0129 | 19.3    |
| 0.5271        | 29.0  | 14964 | 0.0314          | 59.1046 | 20.0    |
| 0.501         | 30.0  | 15480 | 0.0283          | 59.1046 | 20.0    |
| 0.4821        | 31.0  | 15996 | 0.0316          | 57.5434 | 19.7    |
| 0.4474        | 32.0  | 16512 | 0.0296          | 57.5434 | 19.7    |
| 0.4328        | 33.0  | 17028 | 0.0229          | 57.2181 | 19.7    |
| 0.4171        | 34.0  | 17544 | 0.0212          | 57.5434 | 19.7    |
| 0.4051        | 35.0  | 18060 | 0.0194          | 59.1046 | 20.0    |
| 0.3924        | 36.0  | 18576 | 0.0161          | 59.1046 | 20.0    |
| 0.3783        | 37.0  | 19092 | 0.0155          | 59.1046 | 20.0    |
| 0.3695        | 38.0  | 19608 | 0.0149          | 59.1046 | 20.0    |
| 0.3626        | 39.0  | 20124 | 0.0140          | 59.1046 | 20.0    |
| 0.3492        | 40.0  | 20640 | 0.0147          | 59.1046 | 20.0    |
| 0.3446        | 41.0  | 21156 | 0.0140          | 59.1046 | 20.0    |
| 0.3377        | 42.0  | 21672 | 0.0125          | 59.1046 | 20.0    |
| 0.3265        | 43.0  | 22188 | 0.0122          | 59.1046 | 20.0    |
| 0.3213        | 44.0  | 22704 | 0.0118          | 59.1046 | 20.0    |
| 0.3154        | 45.0  | 23220 | 0.0116          | 59.1046 | 20.0    |
| 0.3146        | 46.0  | 23736 | 0.0113          | 59.1046 | 20.0    |
| 0.3077        | 47.0  | 24252 | 0.0107          | 59.1046 | 20.0    |
| 0.304         | 48.0  | 24768 | 0.0109          | 59.1046 | 20.0    |
| 0.3063        | 49.0  | 25284 | 0.0107          | 59.1046 | 20.0    |
| 0.2998        | 50.0  | 25800 | 0.0108          | 59.1046 | 20.0    |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
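
One possible way to recreate this environment is to pin the versions above with pip. Note that the card reports `torch 2.1.0+cu121`, a CUDA 12.1 build; the plain `torch==2.1.0` shown here installs the default wheel, and the exact CUDA-specific install command depends on your platform:

```shell
pip install transformers==4.35.2 datasets==2.16.1 tokenizers==0.15.0
pip install torch==2.1.0  # the card was trained with the cu121 build (2.1.0+cu121)
```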