massive_indo

This model is a fine-tuned version of xxxxxxxxx on the MASSIVE dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0967
  • F1: 0.8702
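
The averaging mode behind the F1 number is not stated in this card. As a minimal sketch, assuming a sequence-classification head and micro-averaged F1, such a score could be computed with the `evaluate` library as follows (the function name and the averaging choice are assumptions, not details from the original training script):

```python
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    """Convert Trainer logits/labels into an F1 score (micro averaging assumed)."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=predictions, references=labels, average="micro")
```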

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
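
As a rough sketch, these values map onto a Hugging Face `TrainingArguments` object roughly as shown below; the output directory and the step-based evaluation/logging schedule are assumptions (the 500-step interval is inferred from the results table), not details taken from the original training script.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the list above; output_dir and the
# evaluation/logging schedule are assumptions, not values from this card.
training_args = TrainingArguments(
    output_dir="massive_indo",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",  # the results table reports metrics every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```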

Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.747         | 1.39  | 500   | 1.0303          | 0.5703 |
| 0.5618        | 2.78  | 1000  | 0.9201          | 0.6479 |
| 0.3695        | 4.17  | 1500  | 0.8216          | 0.6990 |
| 0.3392        | 5.56  | 2000  | 0.7637          | 0.7335 |
| 0.2638        | 6.94  | 2500  | 0.8244          | 0.7678 |
| 0.1907        | 8.33  | 3000  | 0.7912          | 0.7979 |
| 0.1661        | 9.72  | 3500  | 0.8266          | 0.7835 |
| 0.1073        | 11.11 | 4000  | 0.8120          | 0.8139 |
| 0.1265        | 12.5  | 4500  | 0.8336          | 0.8344 |
| 0.0481        | 13.89 | 5000  | 0.8240          | 0.8518 |
| 0.0646        | 15.28 | 5500  | 0.9290          | 0.8333 |
| 0.0846        | 16.67 | 6000  | 0.9176          | 0.8461 |
| 0.0228        | 18.06 | 6500  | 0.9600          | 0.8529 |
| 0.0696        | 19.44 | 7000  | 0.9769          | 0.8525 |
| 0.0614        | 20.83 | 7500  | 0.9944          | 0.8545 |
| 0.0173        | 22.22 | 8000  | 1.0110          | 0.8550 |
| 0.004         | 23.61 | 8500  | 1.0140          | 0.8417 |
| 0.0032        | 25.0  | 9000  | 1.0771          | 0.8314 |
| 0.0453        | 26.39 | 9500  | 1.0173          | 0.8424 |
| 0.0471        | 27.78 | 10000 | 1.0068          | 0.8652 |
| 0.0128        | 29.17 | 10500 | 1.0595          | 0.8658 |
| 0.0027        | 30.56 | 11000 | 1.0596          | 0.8506 |
| 0.0198        | 31.94 | 11500 | 1.0468          | 0.8593 |
| 0.0027        | 33.33 | 12000 | 1.0537          | 0.8693 |
| 0.0114        | 34.72 | 12500 | 1.0512          | 0.8620 |
| 0.015         | 36.11 | 13000 | 1.0425          | 0.8813 |
| 0.005         | 37.5  | 13500 | 1.1092          | 0.8749 |
| 0.0038        | 38.89 | 14000 | 1.0829          | 0.8637 |
| 0.0096        | 40.28 | 14500 | 1.0902          | 0.8794 |
| 0.0007        | 41.67 | 15000 | 1.0994          | 0.8651 |
| 0.0109        | 43.06 | 15500 | 1.0957          | 0.8782 |
| 0.0026        | 44.44 | 16000 | 1.0997          | 0.8643 |
| 0.0061        | 45.83 | 16500 | 1.0853          | 0.8672 |
| 0.0005        | 47.22 | 17000 | 1.1082          | 0.8694 |
| 0.0005        | 48.61 | 17500 | 1.1016          | 0.8696 |
| 0.0028        | 50.0  | 18000 | 1.0967          | 0.8702 |

Framework versions

  • Transformers 4.34.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.0
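
One quick way to check that a local environment matches these versions is a small script like the sketch below; note that the Transformers entry is a dev build, and the card does not say how it was installed.

```python
# Print installed versions to compare against the ones listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card lists 4.34.0.dev0
print("PyTorch:", torch.__version__)              # card lists 2.0.1+cu118
print("Datasets:", datasets.__version__)          # card lists 2.14.5
print("Tokenizers:", tokenizers.__version__)      # card lists 0.14.0
```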