---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: test
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9526306589757035
    - name: Recall
      type: recall
      value: 0.964943342776204
    - name: F1
      type: f1
      value: 0.9587474711935965
    - name: Accuracy
      type: accuracy
      value: 0.9901367502961128
---


# XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1188
- Precision: 0.9526
- Recall: 0.9649
- F1: 0.9587
- Accuracy: 0.9901

## Model description

This checkpoint adds a token-classification head to `xlm-roberta-base` and fine-tunes it for English named-entity recognition on CoNLL-2003. As the model name records, training ran in FP16 mixed precision with a class-weighted loss; the "BinaryClass" tag suggests a reduced entity/non-entity label scheme, though the card does not record the final label mapping.

## Intended uses & limitations

The model is intended for tagging named entities in English text; a minimal inference sketch follows. While the XLM-RoBERTa base model is multilingual, fine-tuning used only English newswire from CoNLL-2003, so accuracy on other languages and domains has not been evaluated here.
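
A minimal inference sketch using the `transformers` pipeline. The repository id below is a placeholder for wherever this checkpoint is hosted, and the aggregation strategy is an assumption, not something the Trainer recorded:

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this checkpoint.
model_id = "your-username/XLM-RoBERTa-Base-Conll2003-English-NER-Finetune-FP16-BinaryClass-WeightedLoss"

# Token-classification pipeline; "simple" aggregation merges word-piece
# predictions back into whole-word entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("Barack Obama visited Berlin in 2013."))
# Each result carries the predicted entity group, score, and character offsets.
```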

## Training and evaluation data

Training used the English CoNLL-2003 named-entity recognition corpus; the headline metrics above come from the evaluation split the Trainer was given (recorded as the `test` split in the model-index metadata). A loading sketch follows.
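
A sketch of loading the dataset as referenced in the metadata. CoNLL-2003 ships as a loading script on the Hub, so recent `datasets` releases ask for explicit opt-in to run it; whether your environment needs that flag is an assumption:

```python
from datasets import load_dataset

# CoNLL-2003 is a script-based dataset; trust_remote_code opts in to
# running its loading script (may be required on datasets >= 2.16).
raw = load_dataset("conll2003", trust_remote_code=True)

print(raw)                          # train / validation / test splits
print(raw["train"][0]["tokens"])    # whitespace-tokenized words
print(raw["train"][0]["ner_tags"])  # integer label ids (BIO scheme)
```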

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
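
A hedged reconstruction of these settings as `TrainingArguments`. The evaluation and logging cadence is inferred from the results table below (one evaluation every 1441 steps, roughly a third of an epoch), and the output directory is an assumption; the Adam betas and epsilon listed above match the library defaults:

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; eval/save cadence and
# output_dir are assumptions not recorded in the card.
args = TrainingArguments(
    output_dir="xlmr-conll2003-ner",  # assumed path
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
    fp16=True,                        # "Native AMP" mixed precision
    eval_strategy="steps",            # table shows eval every 1441 steps
    eval_steps=1441,
    logging_steps=1441,
)
```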

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2739        | 0.3333 | 1441  | 0.0632          | 0.9412    | 0.9373 | 0.9392 | 0.9863   |
| 0.0329        | 0.6667 | 2882  | 0.0572          | 0.9435    | 0.9347 | 0.9391 | 0.9865   |
| 0.024         | 1.0    | 4323  | 0.0679          | 0.9433    | 0.9536 | 0.9484 | 0.9882   |
| 0.0181        | 1.3333 | 5764  | 0.0652          | 0.9458    | 0.9618 | 0.9537 | 0.9897   |
| 0.0187        | 1.6667 | 7205  | 0.0625          | 0.9531    | 0.9492 | 0.9511 | 0.9895   |
| 0.0176        | 2.0    | 8646  | 0.0685          | 0.9488    | 0.9573 | 0.9530 | 0.9896   |
| 0.0108        | 2.3333 | 10087 | 0.0931          | 0.9470    | 0.9625 | 0.9547 | 0.9897   |
| 0.0117        | 2.6667 | 11528 | 0.0808          | 0.9489    | 0.9632 | 0.9560 | 0.9900   |
| 0.0107        | 3.0    | 12969 | 0.0672          | 0.9531    | 0.9602 | 0.9566 | 0.9908   |
| 0.0076        | 3.3333 | 14410 | 0.0973          | 0.9470    | 0.9587 | 0.9528 | 0.9897   |
| 0.0085        | 3.6667 | 15851 | 0.0741          | 0.9574    | 0.9549 | 0.9561 | 0.9906   |
| 0.0092        | 4.0    | 17292 | 0.0807          | 0.9492    | 0.9621 | 0.9556 | 0.9901   |
| 0.0049        | 4.3333 | 18733 | 0.0886          | 0.9527    | 0.9623 | 0.9575 | 0.9906   |
| 0.0058        | 4.6667 | 20174 | 0.0871          | 0.9516    | 0.9639 | 0.9577 | 0.9904   |
| 0.0047        | 5.0    | 21615 | 0.0928          | 0.9541    | 0.9610 | 0.9576 | 0.9903   |
| 0.0041        | 5.3333 | 23056 | 0.1145          | 0.9491    | 0.9667 | 0.9578 | 0.9899   |
| 0.0048        | 5.6667 | 24497 | 0.0854          | 0.9554    | 0.9623 | 0.9588 | 0.9907   |
| 0.0032        | 6.0    | 25938 | 0.1107          | 0.9488    | 0.9651 | 0.9569 | 0.9899   |
| 0.003         | 6.3333 | 27379 | 0.1038          | 0.9524    | 0.9674 | 0.9599 | 0.9907   |
| 0.0032        | 6.6667 | 28820 | 0.1038          | 0.9533    | 0.9651 | 0.9592 | 0.9904   |
| 0.0034        | 7.0    | 30261 | 0.1038          | 0.9534    | 0.9667 | 0.9600 | 0.9906   |
| 0.0025        | 7.3333 | 31702 | 0.1103          | 0.9528    | 0.9619 | 0.9574 | 0.9899   |
| 0.003         | 7.6667 | 33143 | 0.1177          | 0.9506    | 0.9644 | 0.9575 | 0.9899   |
| 0.0022        | 8.0    | 34584 | 0.1151          | 0.9511    | 0.9633 | 0.9572 | 0.9900   |
| 0.0016        | 8.3333 | 36025 | 0.1141          | 0.9528    | 0.9651 | 0.9589 | 0.9904   |
| 0.0025        | 8.6667 | 37466 | 0.1090          | 0.9550    | 0.9626 | 0.9588 | 0.9905   |
| 0.0024        | 9.0    | 38907 | 0.1115          | 0.9546    | 0.9653 | 0.9599 | 0.9906   |
| 0.002         | 9.3333 | 40348 | 0.1148          | 0.9536    | 0.9639 | 0.9587 | 0.9903   |
| 0.0014        | 9.6667 | 41789 | 0.1201          | 0.9522    | 0.9655 | 0.9588 | 0.9902   |
| 0.0015        | 10.0   | 43230 | 0.1188          | 0.9526    | 0.9649 | 0.9587 | 0.9901   |


### Framework versions

- Transformers 4.41.1
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1