---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- cnec
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: CNEC2_0_extended_xlm-roberta-large
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: cnec
      type: cnec
      config: default
      split: validation
      args: default
    metrics:
    - name: Precision
      type: precision
      value: 0.8492292870905588
    - name: Recall
      type: recall
      value: 0.8749379652605459
    - name: F1
      type: f1
      value: 0.8618919579564899
    - name: Accuracy
      type: accuracy
      value: 0.973155737704918
---
# CNEC2_0_extended_xlm-roberta-large
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the cnec dataset. It achieves the following results on the evaluation set (see the usage sketch after the list):
- Loss: 0.1467
- Precision: 0.8492
- Recall: 0.8749
- F1: 0.8619
- Accuracy: 0.9732
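
As a minimal usage sketch, the fine-tuned checkpoint can be loaded for Czech named-entity recognition with the `transformers` token-classification pipeline. The model id below is a placeholder; substitute the actual hub repository name or local checkpoint path.

```python
from transformers import pipeline

# Placeholder model id; replace with the actual hub repo or local checkpoint directory.
ner = pipeline(
    "token-classification",
    model="CNEC2_0_extended_xlm-roberta-large",
    aggregation_strategy="simple",
)

# Example Czech sentence; CNEC is a Czech named-entity corpus.
print(ner("Václav Havel se narodil v Praze."))
```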
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
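
These settings correspond to a standard Hugging Face `Trainer` run (the card is generated from the trainer). A minimal sketch of matching `TrainingArguments` follows; the `output_dir` is a placeholder, and evaluation every 500 steps is inferred from the results table rather than taken from the original script.

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above.
# output_dir is a placeholder; eval_steps=500 is inferred from the results table.
training_args = TrainingArguments(
    output_dir="CNEC2_0_extended_xlm-roberta-large",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="steps",
    eval_steps=500,
)
```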
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|---------------|-------|------|-----------------|-----------|--------|--------|----------|
| 0.508         | 0.56  | 500  | 0.2177          | 0.6604    | 0.6928 | 0.6762 | 0.9423   |
| 0.2268        | 1.12  | 1000 | 0.1923          | 0.7158    | 0.7960 | 0.7538 | 0.9512   |
| 0.183         | 1.68  | 1500 | 0.1580          | 0.7825    | 0.8303 | 0.8057 | 0.9636   |
| 0.1558        | 2.24  | 2000 | 0.1548          | 0.8077    | 0.8382 | 0.8227 | 0.9676   |
| 0.1371        | 2.8   | 2500 | 0.1278          | 0.8233    | 0.8511 | 0.8370 | 0.9701   |
| 0.1225        | 3.36  | 3000 | 0.1430          | 0.8128    | 0.8531 | 0.8324 | 0.9667   |
| 0.1166        | 3.92  | 3500 | 0.1389          | 0.8307    | 0.8501 | 0.8403 | 0.9681   |
| 0.101         | 4.48  | 4000 | 0.1323          | 0.8277    | 0.8655 | 0.8462 | 0.9708   |
| 0.0928        | 5.04  | 4500 | 0.1332          | 0.8434    | 0.8660 | 0.8546 | 0.9715   |
| 0.0848        | 5.6   | 5000 | 0.1273          | 0.8382    | 0.8665 | 0.8521 | 0.9727   |
| 0.0798        | 6.16  | 5500 | 0.1281          | 0.8447    | 0.8774 | 0.8608 | 0.9716   |
| 0.0688        | 6.72  | 6000 | 0.1340          | 0.8482    | 0.8734 | 0.8606 | 0.9728   |
| 0.0638        | 7.28  | 6500 | 0.1346          | 0.8549    | 0.8744 | 0.8646 | 0.9746   |
| 0.0585        | 7.84  | 7000 | 0.1415          | 0.8442    | 0.8764 | 0.8600 | 0.9730   |
| 0.0565        | 8.4   | 7500 | 0.1487          | 0.8377    | 0.8809 | 0.8587 | 0.9730   |
| 0.0497        | 8.96  | 8000 | 0.1416          | 0.8473    | 0.8784 | 0.8626 | 0.9740   |
| 0.0484        | 9.52  | 8500 | 0.1467          | 0.8492    | 0.8749 | 0.8619 | 0.9732   |
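
The precision, recall, and F1 values above are entity-level metrics of the kind typically produced with `seqeval` in token-classification training; a hedged sketch of such a `compute_metrics` function is shown below. The `label_list` is a placeholder tag set, not the actual CNEC label inventory used in this run.

```python
import numpy as np
import evaluate

# Placeholder tag set; the real CNEC labels come from the dataset/model config.
label_list = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    """Entity-level precision/recall/F1 plus token accuracy, ignoring -100 positions."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```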
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0