fc-binary-prompt-model

This model is a fine-tuned version of line-corporation/line-distilbert-base-japanese. The fine-tuning dataset is not documented. It achieves the following results on the evaluation set:

  • Loss: 0.3427
  • Accuracy: 0.8672

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: tpu
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
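With a linear schedule and no warmup, the learning rate decays from its initial value to zero over the full run. A minimal sketch of that decay, assuming the 9,180 total steps shown in the results table below (306 steps/epoch × 30 epochs) and zero warmup steps:

```python
def linear_lr(step, base_lr=1e-4, total_steps=9180):
    """Linearly decay the learning rate from base_lr to 0 over total_steps."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

# Learning rate at the start, midpoint, and end of training.
print(linear_lr(0))      # 0.0001
print(linear_lr(4590))   # 5e-05
print(linear_lr(9180))   # 0.0
```

This mirrors what a linear scheduler does after any warmup phase; the actual warmup step count is not recorded in the card, so it is taken to be zero here.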

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 306 | 0.3954 | 0.8594 |
| 0.4092 | 2.0 | 612 | 0.3867 | 0.8594 |
| 0.4092 | 3.0 | 918 | 0.3787 | 0.8594 |
| 0.4011 | 4.0 | 1224 | 0.3747 | 0.8594 |
| 0.3937 | 5.0 | 1530 | 0.3699 | 0.8594 |
| 0.3937 | 6.0 | 1836 | 0.3664 | 0.8594 |
| 0.3896 | 7.0 | 2142 | 0.3700 | 0.8594 |
| 0.3896 | 8.0 | 2448 | 0.3626 | 0.8594 |
| 0.3868 | 9.0 | 2754 | 0.3671 | 0.8613 |
| 0.3813 | 10.0 | 3060 | 0.3537 | 0.8594 |
| 0.3813 | 11.0 | 3366 | 0.3633 | 0.8613 |
| 0.3844 | 12.0 | 3672 | 0.3523 | 0.8613 |
| 0.3844 | 13.0 | 3978 | 0.3523 | 0.8613 |
| 0.3799 | 14.0 | 4284 | 0.3499 | 0.8613 |
| 0.3791 | 15.0 | 4590 | 0.3530 | 0.8633 |
| 0.3791 | 16.0 | 4896 | 0.3499 | 0.8633 |
| 0.3735 | 17.0 | 5202 | 0.3465 | 0.8613 |
| 0.3767 | 18.0 | 5508 | 0.3447 | 0.8613 |
| 0.3767 | 19.0 | 5814 | 0.3457 | 0.8633 |
| 0.3733 | 20.0 | 6120 | 0.3413 | 0.8613 |
| 0.3733 | 21.0 | 6426 | 0.3448 | 0.8633 |
| 0.3721 | 22.0 | 6732 | 0.3438 | 0.8652 |
| 0.3753 | 23.0 | 7038 | 0.3440 | 0.8652 |
| 0.3753 | 24.0 | 7344 | 0.3442 | 0.8672 |
| 0.3726 | 25.0 | 7650 | 0.3459 | 0.8691 |
| 0.3726 | 26.0 | 7956 | 0.3448 | 0.8672 |
| 0.3675 | 27.0 | 8262 | 0.3416 | 0.8672 |
| 0.3686 | 28.0 | 8568 | 0.3425 | 0.8672 |
| 0.3686 | 29.0 | 8874 | 0.3429 | 0.8672 |
| 0.3726 | 30.0 | 9180 | 0.3427 | 0.8672 |
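Note that the reported final checkpoint (loss 0.3427, accuracy 0.8672) is not the strongest row in either column. A small sketch that scans the (epoch, validation loss, accuracy) triples from the table to find the best epoch by each metric:

```python
# (epoch, validation_loss, accuracy) triples copied from the table above.
results = [
    (1, 0.3954, 0.8594), (2, 0.3867, 0.8594), (3, 0.3787, 0.8594),
    (4, 0.3747, 0.8594), (5, 0.3699, 0.8594), (6, 0.3664, 0.8594),
    (7, 0.3700, 0.8594), (8, 0.3626, 0.8594), (9, 0.3671, 0.8613),
    (10, 0.3537, 0.8594), (11, 0.3633, 0.8613), (12, 0.3523, 0.8613),
    (13, 0.3523, 0.8613), (14, 0.3499, 0.8613), (15, 0.3530, 0.8633),
    (16, 0.3499, 0.8633), (17, 0.3465, 0.8613), (18, 0.3447, 0.8613),
    (19, 0.3457, 0.8633), (20, 0.3413, 0.8613), (21, 0.3448, 0.8633),
    (22, 0.3438, 0.8652), (23, 0.3440, 0.8652), (24, 0.3442, 0.8672),
    (25, 0.3459, 0.8691), (26, 0.3448, 0.8672), (27, 0.3416, 0.8672),
    (28, 0.3425, 0.8672), (29, 0.3429, 0.8672), (30, 0.3427, 0.8672),
]

best_by_loss = min(results, key=lambda r: r[1])
best_by_acc = max(results, key=lambda r: r[2])
print(best_by_loss)  # epoch 20 has the lowest validation loss (0.3413)
print(best_by_acc)   # epoch 25 has the highest accuracy (0.8691)
```

Whether one of those earlier checkpoints was kept is not recorded in the card; the headline numbers correspond to the epoch-30 checkpoint.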

Framework versions

  • Transformers 4.34.0
  • PyTorch 2.0.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.0