---
language:
  - en
tags:
  - generated_from_trainer
datasets:
  - glue
metrics:
  - matthews_correlation
model-index:
  - name: cola-pixel-handwritten-mean-vatrpp-256-64-4-5e-5-15000-42
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE COLA
          type: glue
          args: cola
        metrics:
          - name: Matthews Correlation
            type: matthews_correlation
            value: 0.07568068132313144
---

# cola-pixel-handwritten-mean-vatrpp-256-64-4-5e-5-15000-42

This model is a fine-tuned version of [noniewiem/pixel-handwritten](https://huggingface.co/noniewiem/pixel-handwritten) on the GLUE COLA dataset. It achieves the following results on the evaluation set (a sketch for recomputing the headline metric follows the list):

- Loss: 1.7009
- Matthews Correlation: 0.0757
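The Matthews correlation coefficient (MCC) ranges from -1 to 1, with 0 corresponding to chance-level prediction, so the score above is only slightly above chance on CoLA. The card does not include its evaluation code; as a minimal sketch, the metric can be recomputed from predictions with scikit-learn (the labels below are hypothetical placeholders, not real model outputs):

```python
# Minimal sketch: recomputing the Matthews correlation from predictions.
# y_true/y_pred are hypothetical placeholders for the CoLA validation
# labels (0 = unacceptable, 1 = acceptable) and the model's predictions.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
print(matthews_corrcoef(y_true, y_pred))  # in [-1, 1]; 0 is chance level
```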

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
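While the training data is not documented here, the metadata points at the GLUE CoLA task (Corpus of Linguistic Acceptability), a binary sentence-acceptability classification dataset. A minimal sketch for loading it with the 🤗 Datasets library, using the `glue`/`cola` identifiers from the metadata above:

```python
# Sketch: loading the GLUE CoLA dataset named in the card's metadata.
from datasets import load_dataset

cola = load_dataset("glue", "cola")  # DatasetDict with train/validation/test
print(cola["validation"][0])         # e.g. {'sentence': ..., 'label': 1, 'idx': 0}
```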

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 15000
- mixed_precision_training: Native AMP
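The original training script is not included in the card, so the following is only a sketch of how these hyperparameters would map onto `transformers.TrainingArguments`; the output directory is assumed from the model name, and the Adam betas/epsilon are left at their matching defaults:

```python
from transformers import TrainingArguments

# Sketch mapping the listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="cola-pixel-handwritten-mean-vatrpp-256-64-4-5e-5-15000-42",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 64 x 4 = 256 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=15000,
    fp16=True,                      # "Native AMP" mixed precision
)
```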

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6426        | 3.03  | 100  | 0.6255          | 0.0                  |
| 0.6176        | 6.06  | 200  | 0.6308          | 0.0                  |
| 0.6183        | 9.09  | 300  | 0.6187          | 0.0                  |
| 0.6162        | 12.12 | 400  | 0.6158          | 0.0                  |
| 0.614         | 15.15 | 500  | 0.6250          | -0.0293              |
| 0.6096        | 18.18 | 600  | 0.6185          | 0.0                  |
| 0.6055        | 21.21 | 700  | 0.6224          | 0.0175               |
| 0.6001        | 24.24 | 800  | 0.6551          | 0.1301               |
| 0.5909        | 27.27 | 900  | 0.6534          | 0.0566               |
| 0.5726        | 30.3  | 1000 | 0.6679          | 0.1029               |
| 0.5524        | 33.33 | 1100 | 0.6901          | 0.0631               |
| 0.5167        | 36.36 | 1200 | 0.7027          | 0.0948               |
| 0.4779        | 39.39 | 1300 | 0.7578          | 0.1012               |
| 0.4271        | 42.42 | 1400 | 0.8021          | 0.1108               |
| 0.3888        | 45.45 | 1500 | 0.8813          | 0.1025               |
| 0.3428        | 48.48 | 1600 | 0.9362          | 0.1437               |
| 0.2977        | 51.51 | 1700 | 1.0786          | 0.1118               |
| 0.2642        | 54.54 | 1800 | 1.0610          | 0.0901               |
| 0.2272        | 57.57 | 1900 | 1.1835          | 0.1155               |
| 0.1915        | 60.6  | 2000 | 1.2531          | 0.1224               |
| 0.1691        | 63.63 | 2100 | 1.3903          | 0.0754               |
| 0.1491        | 66.66 | 2200 | 1.4947          | 0.0674               |
| 0.1339        | 69.69 | 2300 | 1.5434          | 0.0736               |
| 0.1164        | 72.72 | 2400 | 1.5793          | 0.1165               |
| 0.1078        | 75.75 | 2500 | 1.5938          | 0.0995               |
| 0.0974        | 78.78 | 2600 | 1.7009          | 0.0757               |

### Framework versions

- Transformers 4.17.0
- Pytorch 2.3.0+cu121
- Datasets 2.0.0
- Tokenizers 0.13.3