gpt2-finetuned-sst2

This model is GPT-2 fine-tuned on the GLUE SST-2 dataset. It achieves the following results on the validation set:

  • Accuracy: 0.9254

Model Details

GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. It was pretrained on raw text only, with no human labelling (which is why it can use large amounts of publicly available data), using an automatic process to generate inputs and labels from the text. More precisely, it was trained to guess the next word in a sentence. Nevertheless, after fine-tuning it achieves very good results on text classification tasks.
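
The snippet below is a minimal inference sketch, assuming the checkpoint is published under the repo name shown on this page (PavanNeerudu/gpt2-finetuned-sst2) with a standard sequence-classification head; the exact label names come from the checkpoint's config and may differ.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the checkpoint lives at this repo id with a classification head.
model_name = "PavanNeerudu/gpt2-finetuned-sst2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# GPT-2 has no padding token by default; reuse EOS in case padding is needed.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("a gorgeous, witty, seductive movie", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # label names depend on the checkpoint
```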

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 2e-5
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 123
  • optimizer epsilon: 1e-08
  • num_epochs: 4
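
The following is a hypothetical reconstruction of this configuration using the Hugging Face Trainer API; the card does not state which training script or optimizer (beyond its epsilon) was used, so every argument not listed above is an assumption.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-finetuned-sst2",  # assumption: output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=123,
    adam_epsilon=1e-8,
    num_train_epochs=4,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)
```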

Training results

| Epoch | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy |
|-------|---------------|-------------------|-----------------|---------------------|
| 1     | 0.32641       | 0.85419           | 0.26545         | 0.90137             |
| 2     | 0.15731       | 0.94151           | 0.23625         | 0.92546             |
| 3     | 0.08982       | 0.97120           | 0.33954         | 0.91514             |
