# my_ner_model
This model is a fine-tuned version of distilbert-base-uncased on the wnut_17 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2675
- Accuracy: 0.9418
- F1: 0.4060
Classification report:

|               | precision | recall | f1-score | support |
|---------------|-----------|--------|----------|---------|
| corporation   | 0.27      | 0.11   | 0.15     | 66      |
| creative-work | 0.17      | 0.01   | 0.01     | 142     |
| group         | 0.43      | 0.04   | 0.07     | 165     |
| location      | 0.42      | 0.45   | 0.43     | 150     |
| person        | 0.71      | 0.59   | 0.65     | 429     |
| product       | 0.08      | 0.02   | 0.03     | 127     |
| micro avg     | 0.57      | 0.31   | 0.41     | 1079    |
| macro avg     | 0.35      | 0.20   | 0.22     | 1079    |
| weighted avg  | 0.45      | 0.31   | 0.34     | 1079    |
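
For quick experimentation, a minimal inference sketch is shown below. It assumes the checkpoint is published on the Hub as `mukazhanovn/my_ner_model` (the repository name shown on this page); `aggregation_strategy="simple"` merges subword pieces back into whole entity spans.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a token-classification (NER) pipeline.
# "simple" aggregation merges subword pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="mukazhanovn/my_ner_model",
    aggregation_strategy="simple",
)

print(ner("Empire State Building is in New York."))
# e.g. [{'entity_group': 'location', 'word': 'new york', 'score': ...}]
```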
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto a `Trainer` setup follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
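
The sketch below shows how these hyperparameters map onto a `Trainer` setup. It is a reconstruction following the usual token-classification recipe, not the exact script used for this run; the Adam betas and epsilon listed above are the `TrainingArguments` defaults.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("wnut_17")
label_list = dataset["train"].features["ner_tags"].feature.names
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize_and_align(examples):
    # Tokenize into subwords and align the word-level NER tags; special
    # tokens and continuation subwords get -100 so the loss ignores them.
    tokenized = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True
    )
    labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        previous, ids = None, []
        for word_id in tokenized.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                ids.append(-100)
            else:
                ids.append(tags[word_id])
            previous = word_id
        labels.append(ids)
    tokenized["labels"] = labels
    return tokenized

tokenized = dataset.map(tokenize_and_align, batched=True)
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(label_list)
)

args = TrainingArguments(
    output_dir="my_ner_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```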
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|---------------|-------|------|-----------------|----------|--------|
| No log        | 1.0   | 213  | 0.2799          | 0.9390   | 0.3613 |
| No log        | 2.0   | 426  | 0.2675          | 0.9418   | 0.4060 |

Classification report after epoch 1:

|               | precision | recall | f1-score | support |
|---------------|-----------|--------|----------|---------|
| corporation   | 0.00      | 0.00   | 0.00     | 66      |
| creative-work | 0.00      | 0.00   | 0.00     | 142     |
| group         | 0.00      | 0.00   | 0.00     | 165     |
| location      | 0.38      | 0.34   | 0.36     | 150     |
| person        | 0.69      | 0.53   | 0.60     | 429     |
| product       | 0.00      | 0.00   | 0.00     | 127     |
| micro avg     | 0.59      | 0.26   | 0.36     | 1079    |
| macro avg     | 0.18      | 0.15   | 0.16     | 1079    |
| weighted avg  | 0.33      | 0.26   | 0.29     | 1079    |

The classification report after epoch 2 is identical to the evaluation report above.
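
The reports above follow the layout of seqeval's `classification_report`. Below is a hedged sketch of a `compute_metrics` function that would produce these numbers; the metric code actually used for this run is not included in the card, so treat the function name and the `label_list` argument as assumptions.

```python
import numpy as np
from seqeval.metrics import accuracy_score, classification_report, f1_score

def compute_metrics(eval_pred, label_list):
    # eval_pred is a (logits, labels) pair as passed by Trainer;
    # label_list maps integer ids to tag strings ("B-person", ...).
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    true_labels, true_preds = [], []
    for pred_row, label_row in zip(predictions, labels):
        # Drop positions labelled -100 (special tokens, subword pieces).
        true_labels.append([label_list[l] for l in label_row if l != -100])
        true_preds.append(
            [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        )
    return {
        "accuracy": accuracy_score(true_labels, true_preds),
        "f1": f1_score(true_labels, true_preds),
        "classification_report": classification_report(true_labels, true_preds),
    }
```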
### Framework versions
- Transformers 4.16.2
- Pytorch 2.4.1+cpu
- Datasets 1.16.1
- Tokenizers 0.21.0