metadata
language:
  - en
tags:
  - generated_from_trainer
datasets:
  - glue
metrics:
  - accuracy
model-index:
  - name: hBERTv1_new_pretrain_48_emb_com_sst2
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE SST2
          type: glue
          config: sst2
          split: validation
          args: sst2
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7947247706422018

hBERTv1_new_pretrain_48_emb_com_sst2

This model is a fine-tuned version of gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48 on the GLUE SST2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4656
  • Accuracy: 0.7947
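
A minimal usage sketch with the transformers text-classification pipeline. The repository id below is an assumption, inferred from the model name and the base model's gokuls/ namespace; the label names returned depend on whatever id2label mapping was saved with the checkpoint.

```python
# Minimal inference sketch (repo id inferred from the card, not confirmed in it).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/hBERTv1_new_pretrain_48_emb_com_sst2",
)

# SST-2 is binary sentiment classification on single sentences.
print(classifier("a gripping, beautifully shot film"))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label names depend on the saved config
```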

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
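
Per the metadata above, the model was fine-tuned on GLUE SST-2 and the reported accuracy comes from its validation split. A purely illustrative sketch of loading that data with the datasets library:

```python
# Illustrative only: loads the GLUE SST-2 splits named in this card's metadata.
from datasets import load_dataset

sst2 = load_dataset("glue", "sst2")
train_data = sst2["train"]       # presumably used for fine-tuning
eval_data = sst2["validation"]   # split behind the reported 0.7947 accuracy

print(train_data[0])  # {'sentence': '...', 'label': 0 or 1, 'idx': ...}
```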

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 10
  • distributed_type: multi-GPU
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
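
The generated_from_trainer tag indicates the run used the Hugging Face Trainer, so the values above map onto TrainingArguments roughly as sketched below. This is a reconstruction, not the actual training script; the multi-GPU distribution is handled by the launcher rather than by these arguments.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above
# (reconstruction only; the original training script is not part of this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hBERTv1_new_pretrain_48_emb_com_sst2",
    learning_rate=4e-05,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```

Although num_epochs was set to 50, only ten epochs appear in the results below, so training appears to have stopped early (e.g. via an early-stopping callback); the card does not record that detail.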

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4711        | 1.0   | 527  | 0.4730          | 0.7787   |
| 0.2942        | 2.0   | 1054 | 0.5417          | 0.7993   |
| 0.228         | 3.0   | 1581 | 0.5140          | 0.7901   |
| 0.1897        | 4.0   | 2108 | 0.4951          | 0.7947   |
| 0.1608        | 5.0   | 2635 | 0.4656          | 0.7947   |
| 0.1422        | 6.0   | 3162 | 0.6786          | 0.7741   |
| 0.1244        | 7.0   | 3689 | 0.6886          | 0.8028   |
| 0.1112        | 8.0   | 4216 | 0.7387          | 0.7821   |
| 0.0985        | 9.0   | 4743 | 0.7169          | 0.7936   |
| 0.0883        | 10.0  | 5270 | 0.7695          | 0.7959   |

Framework versions

  • Transformers 4.30.2
  • PyTorch 1.14.0a0+410ce96
  • Datasets 2.12.0
  • Tokenizers 0.13.3