---
license: mit
base_model: facebook/xlm-roberta-xl
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-xl-lora
  results: []
---
# xlm-roberta-xl-lora

This model is a fine-tuned version of [facebook/xlm-roberta-xl](https://huggingface.co/facebook/xlm-roberta-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5846
- Precision: 0.8927
- Recall: 0.9038
- F1: 0.8982
- Accuracy: 0.9154

## Model description

As the name indicates, this checkpoint was obtained by fine-tuning [facebook/xlm-roberta-xl](https://huggingface.co/facebook/xlm-roberta-xl), the 3.5B-parameter XL variant of XLM-RoBERTa, with LoRA (Low-Rank Adaptation) adapters rather than full fine-tuning. The metric set above (precision, recall, F1, accuracy) is typical of token-classification evaluation, but the exact task, label set, and LoRA configuration are not recorded in this card.
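
No usage snippet was recorded by the generator, so here is a minimal, hypothetical loading sketch. It assumes this repository stores a PEFT/LoRA adapter for token classification; the adapter repository id, `num_labels`, and the example sentence are placeholders rather than documented details.

```python
# Hypothetical usage sketch -- assumes a PEFT/LoRA adapter for token
# classification; the adapter repo id and num_labels are placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

BASE_ID = "facebook/xlm-roberta-xl"
ADAPTER_ID = "your-org/xlm-roberta-xl-lora"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForTokenClassification.from_pretrained(BASE_ID, num_labels=9)  # label count unknown
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

inputs = tokenizer("XLM-R is pretrained on one hundred languages.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits               # shape: (1, seq_len, num_labels)
predicted_ids = logits.argmax(dim=-1).squeeze(0)  # one label id per subword token
```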

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 63
- num_epochs: 50
- label_smoothing_factor: 0.2
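
Mapped onto `transformers`' `TrainingArguments`, these values correspond roughly to the sketch below. The per-device batch size of 8 across 8 GPUs yields the effective batch size of 64, and the results table in the next section implies 63 optimizer steps per epoch, so the 63 warmup steps amount to exactly one epoch of warmup. The output directory and evaluation cadence are assumptions.

```python
# Approximate reconstruction of the configuration above (a sketch, not the
# exact training script); values marked "assumed" are not in the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-roberta-xl-lora",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,     # x 8 GPUs -> total train batch size 64
    per_device_eval_batch_size=8,      # x 8 GPUs -> total eval batch size 64
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=63,                   # one epoch of warmup (63 steps/epoch)
    num_train_epochs=50,
    label_smoothing_factor=0.2,
    evaluation_strategy="steps",       # assumed: the table evaluates every 126 steps
    eval_steps=126,
)
```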

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 2.0   | 126  | 3.4068          | 0.2417    | 0.2988 | 0.2672 | 0.2522   |
| No log        | 4.0   | 252  | 2.5708          | 0.5402    | 0.6641 | 0.5958 | 0.6379   |
| No log        | 6.0   | 378  | 2.2050          | 0.6278    | 0.7262 | 0.6734 | 0.7242   |
| 2.8519        | 8.0   | 504  | 2.0050          | 0.7250    | 0.7922 | 0.7571 | 0.7955   |
| 2.8519        | 10.0  | 630  | 1.8831          | 0.8083    | 0.8427 | 0.8252 | 0.8531   |
| 2.8519        | 12.0  | 756  | 1.7923          | 0.8453    | 0.8630 | 0.8540 | 0.8756   |
| 2.8519        | 14.0  | 882  | 1.7371          | 0.8496    | 0.8693 | 0.8593 | 0.8843   |
| 1.8053        | 16.0  | 1008 | 1.7031          | 0.8529    | 0.8753 | 0.8640 | 0.8886   |
| 1.8053        | 18.0  | 1134 | 1.6692          | 0.8691    | 0.8812 | 0.8751 | 0.8969   |
| 1.8053        | 20.0  | 1260 | 1.6555          | 0.8699    | 0.8856 | 0.8777 | 0.8991   |
| 1.8053        | 22.0  | 1386 | 1.6359          | 0.8824    | 0.8903 | 0.8863 | 0.9054   |
| 1.6089        | 24.0  | 1512 | 1.6303          | 0.8756    | 0.8919 | 0.8837 | 0.9043   |
| 1.6089        | 26.0  | 1638 | 1.6169          | 0.8806    | 0.8935 | 0.8870 | 0.9063   |
| 1.6089        | 28.0  | 1764 | 1.6105          | 0.8876    | 0.8952 | 0.8914 | 0.9088   |
| 1.6089        | 30.0  | 1890 | 1.6067          | 0.8861    | 0.8981 | 0.8920 | 0.9089   |
| 1.5373        | 32.0  | 2016 | 1.5998          | 0.8870    | 0.8989 | 0.8929 | 0.9109   |
| 1.5373        | 34.0  | 2142 | 1.5967          | 0.8900    | 0.8996 | 0.8948 | 0.9121   |
| 1.5373        | 36.0  | 2268 | 1.5939          | 0.8912    | 0.9015 | 0.8964 | 0.9137   |
| 1.5373        | 38.0  | 2394 | 1.5922          | 0.8914    | 0.9014 | 0.8964 | 0.9135   |
| 1.501         | 40.0  | 2520 | 1.5894          | 0.8920    | 0.9021 | 0.8970 | 0.9142   |
| 1.501         | 42.0  | 2646 | 1.5874          | 0.8900    | 0.9029 | 0.8964 | 0.9139   |
| 1.501         | 44.0  | 2772 | 1.5865          | 0.8930    | 0.9043 | 0.8986 | 0.9155   |
| 1.501         | 46.0  | 2898 | 1.5866          | 0.8906    | 0.9036 | 0.8971 | 0.9146   |
| 1.4812        | 48.0  | 3024 | 1.5853          | 0.8907    | 0.9033 | 0.8970 | 0.9148   |
| 1.4812        | 50.0  | 3150 | 1.5846          | 0.8927    | 0.9038 | 0.8982 | 0.9154   |
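
The precision/recall/F1/accuracy quadruple matches the overall scores that `seqeval` reports for token classification, so the metrics were most likely computed by a `compute_metrics` hook along the lines of this sketch (an assumption, not confirmed code; the label list is a CoNLL-style placeholder, as the actual label set is not documented):

```python
# Sketch of a seqeval-based compute_metrics for token classification --
# consistent with the metric names above, but not the card's verified code.
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]  # placeholder labels

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Ignore special tokens, which the Trainer marks with label id -100.
    true_labels = [[label_list[l] for l in row if l != -100] for row in labels]
    true_preds = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```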

### Framework versions

- Transformers 4.31.0
- Pytorch 2.1.0
- Datasets 2.14.5
- Tokenizers 0.13.3