distilbert-base-cased_fine_tuned_food_ner

This model is a fine-tuned version of distilbert-base-cased for food named entity recognition (NER). The training dataset is not recorded in the card's auto-generated metadata (it appears as "None"). The model achieves the following results on the evaluation set:

  • Loss: 0.6129
  • Precision: 0.9080
  • Recall: 0.9328
  • F1: 0.9203
  • Accuracy: 0.9095
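
Below is a minimal sketch of running the model for inference with the transformers pipeline API. The example sentence and the `aggregation_strategy` choice are illustrative assumptions; the entity labels in the output depend on the (unspecified) tag set the model was fine-tuned with.

```python
from transformers import pipeline

# "token-classification" is the standard pipeline task for NER checkpoints.
ner = pipeline(
    "token-classification",
    model="davanstrien/distilbert-base-cased_fine_tuned_food_ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Illustrative input; the returned entity labels depend on the
# (undocumented) dataset the model was fine-tuned on.
print(ner("I had a bowl of ramen with soft-boiled eggs and scallions."))
```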

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
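
A hedged sketch of how these hyperparameters map onto the Trainer API is shown below. The dataset splits and the label list are placeholders (the actual training data is not documented), and per-epoch evaluation is inferred from the results table.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Placeholder tag set: the real food-NER labels are not documented.
label_list = ["O", "B-FOOD", "I-FOOD"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-cased", num_labels=len(label_list)
)

# Mirrors the hyperparameters listed above. Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are the TrainingArguments defaults, so no extra flags needed.
args = TrainingArguments(
    output_dir="distilbert-base-cased_fine_tuned_food_ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the log shows one eval per epoch
)

train_dataset = ...  # placeholder: tokenized, label-aligned food-NER train split
eval_dataset = ...   # placeholder: matching evaluation split

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```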

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 40   | 1.2541          | 0.7806    | 0.7299 | 0.7544 | 0.6782   |
| No log        | 2.0   | 80   | 0.7404          | 0.8301    | 0.8657 | 0.8475 | 0.8047   |
| No log        | 3.0   | 120  | 0.5886          | 0.8416    | 0.8900 | 0.8651 | 0.8507   |
| No log        | 4.0   | 160  | 0.5094          | 0.8772    | 0.9122 | 0.8944 | 0.8727   |
| No log        | 5.0   | 200  | 0.4724          | 0.8727    | 0.9159 | 0.8938 | 0.8863   |
| No log        | 6.0   | 240  | 0.4471          | 0.8975    | 0.9240 | 0.9105 | 0.8960   |
| No log        | 7.0   | 280  | 0.4446          | 0.9028    | 0.9255 | 0.9140 | 0.9006   |
| No log        | 8.0   | 320  | 0.4437          | 0.9042    | 0.9336 | 0.9187 | 0.9032   |
| No log        | 9.0   | 360  | 0.4582          | 0.9144    | 0.9299 | 0.9221 | 0.9074   |
| No log        | 10.0  | 400  | 0.4525          | 0.9080    | 0.9328 | 0.9203 | 0.9066   |
| No log        | 11.0  | 440  | 0.4650          | 0.9076    | 0.9351 | 0.9211 | 0.9032   |
| No log        | 12.0  | 480  | 0.4725          | 0.9119    | 0.9395 | 0.9255 | 0.9095   |
| 0.406         | 13.0  | 520  | 0.4862          | 0.9161    | 0.9343 | 0.9251 | 0.9095   |
| 0.406         | 14.0  | 560  | 0.4735          | 0.9214    | 0.9424 | 0.9318 | 0.9154   |
| 0.406         | 15.0  | 600  | 0.4973          | 0.9085    | 0.9380 | 0.9230 | 0.9095   |
| 0.406         | 16.0  | 640  | 0.5075          | 0.9026    | 0.9373 | 0.9196 | 0.9099   |
| 0.406         | 17.0  | 680  | 0.5057          | 0.9124    | 0.9380 | 0.9250 | 0.9121   |
| 0.406         | 18.0  | 720  | 0.5179          | 0.9098    | 0.9380 | 0.9237 | 0.9129   |
| 0.406         | 19.0  | 760  | 0.5156          | 0.9111    | 0.9380 | 0.9244 | 0.9121   |
| 0.406         | 20.0  | 800  | 0.5325          | 0.9077    | 0.9358 | 0.9215 | 0.9099   |
| 0.406         | 21.0  | 840  | 0.5350          | 0.9203    | 0.9373 | 0.9287 | 0.9137   |
| 0.406         | 22.0  | 880  | 0.5405          | 0.9077    | 0.9365 | 0.9219 | 0.9108   |
| 0.406         | 23.0  | 920  | 0.5682          | 0.9107    | 0.9336 | 0.9220 | 0.9066   |
| 0.406         | 24.0  | 960  | 0.5545          | 0.9109    | 0.9351 | 0.9228 | 0.9095   |
| 0.0303        | 25.0  | 1000 | 0.5717          | 0.9044    | 0.9351 | 0.9194 | 0.9049   |
| 0.0303        | 26.0  | 1040 | 0.5637          | 0.9101    | 0.9343 | 0.9221 | 0.9108   |
| 0.0303        | 27.0  | 1080 | 0.5736          | 0.9102    | 0.9351 | 0.9225 | 0.9104   |
| 0.0303        | 28.0  | 1120 | 0.5793          | 0.9027    | 0.9380 | 0.9200 | 0.9074   |
| 0.0303        | 29.0  | 1160 | 0.5753          | 0.9137    | 0.9380 | 0.9257 | 0.9112   |
| 0.0303        | 30.0  | 1200 | 0.5804          | 0.9111    | 0.9380 | 0.9244 | 0.9108   |
| 0.0303        | 31.0  | 1240 | 0.5877          | 0.9123    | 0.9365 | 0.9243 | 0.9099   |
| 0.0303        | 32.0  | 1280 | 0.5837          | 0.9116    | 0.9358 | 0.9235 | 0.9087   |
| 0.0303        | 33.0  | 1320 | 0.5886          | 0.9113    | 0.9402 | 0.9255 | 0.9108   |
| 0.0303        | 34.0  | 1360 | 0.5847          | 0.9145    | 0.9387 | 0.9264 | 0.9121   |
| 0.0303        | 35.0  | 1400 | 0.5981          | 0.9083    | 0.9358 | 0.9218 | 0.9082   |
| 0.0303        | 36.0  | 1440 | 0.5963          | 0.9056    | 0.9343 | 0.9197 | 0.9095   |
| 0.0303        | 37.0  | 1480 | 0.6027          | 0.9101    | 0.9343 | 0.9221 | 0.9104   |
| 0.0086        | 38.0  | 1520 | 0.6003          | 0.9102    | 0.9351 | 0.9225 | 0.9099   |
| 0.0086        | 39.0  | 1560 | 0.5958          | 0.9082    | 0.9343 | 0.9211 | 0.9095   |
| 0.0086        | 40.0  | 1600 | 0.6054          | 0.9059    | 0.9306 | 0.9181 | 0.9091   |
| 0.0086        | 41.0  | 1640 | 0.6056          | 0.9075    | 0.9343 | 0.9207 | 0.9112   |
| 0.0086        | 42.0  | 1680 | 0.6029          | 0.9080    | 0.9321 | 0.9199 | 0.9091   |
| 0.0086        | 43.0  | 1720 | 0.6027          | 0.9109    | 0.9351 | 0.9228 | 0.9104   |
| 0.0086        | 44.0  | 1760 | 0.6071          | 0.9075    | 0.9336 | 0.9203 | 0.9099   |
| 0.0086        | 45.0  | 1800 | 0.6100          | 0.9102    | 0.9351 | 0.9225 | 0.9095   |
| 0.0086        | 46.0  | 1840 | 0.6106          | 0.9102    | 0.9351 | 0.9225 | 0.9104   |
| 0.0086        | 47.0  | 1880 | 0.6132          | 0.9101    | 0.9343 | 0.9221 | 0.9091   |
| 0.0086        | 48.0  | 1920 | 0.6134          | 0.9095    | 0.9343 | 0.9217 | 0.9095   |
| 0.0086        | 49.0  | 1960 | 0.6129          | 0.9080    | 0.9328 | 0.9203 | 0.9095   |
| 0.005         | 50.0  | 2000 | 0.6129          | 0.9080    | 0.9328 | 0.9203 | 0.9095   |

Framework versions

  • Transformers 4.21.0
  • Pytorch 1.12.0+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1