Enyonam committed on
Commit 70eacbd · 1 Parent(s): 78ff0ec

End of training

Files changed (1)
  1. README.md +23 -23
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7350
- - F1: 0.6663
+ - Loss: 0.7971
+ - F1: 0.6393
 
  ## Model description
 
@@ -49,30 +49,30 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 0.8512 | 0.5 | 500 | 0.7909 | 0.6405 |
- | 0.7992 | 1.0 | 1000 | 0.8753 | 0.6407 |
- | 0.7667 | 1.5 | 1500 | 0.7786 | 0.6428 |
- | 0.7583 | 2.01 | 2000 | 0.7407 | 0.6593 |
- | 0.7415 | 2.51 | 2500 | 0.7564 | 0.6555 |
- | 0.7337 | 3.01 | 3000 | 0.7536 | 0.6526 |
- | 0.7224 | 3.51 | 3500 | 0.7777 | 0.6126 |
- | 0.7067 | 4.01 | 4000 | 0.7790 | 0.6552 |
- | 0.6693 | 4.51 | 4500 | 0.7497 | 0.6665 |
- | 0.6744 | 5.02 | 5000 | 0.7350 | 0.6663 |
- | 0.6546 | 5.52 | 5500 | 0.7865 | 0.6714 |
- | 0.6725 | 6.02 | 6000 | 0.7639 | 0.6721 |
- | 0.6361 | 6.52 | 6500 | 0.7780 | 0.6917 |
- | 0.6268 | 7.02 | 7000 | 0.7905 | 0.6893 |
- | 0.619 | 7.52 | 7500 | 0.7644 | 0.6991 |
- | 0.6008 | 8.02 | 8000 | 0.7473 | 0.7086 |
- | 0.5824 | 8.53 | 8500 | 0.7601 | 0.7009 |
- | 0.5687 | 9.03 | 9000 | 0.7795 | 0.6888 |
- | 0.5466 | 9.53 | 9500 | 0.7925 | 0.7045 |
+ | 0.8536 | 0.5 | 500 | 0.8421 | 0.6256 |
+ | 0.8287 | 1.0 | 1000 | 0.8261 | 0.6295 |
+ | 0.8274 | 1.5 | 1500 | 0.8228 | 0.6296 |
+ | 0.8716 | 2.01 | 2000 | 0.9578 | 0.3234 |
+ | 0.9525 | 2.51 | 2500 | 0.9556 | 0.3222 |
+ | 0.9656 | 3.01 | 3000 | 0.9501 | 0.3222 |
+ | 0.8652 | 3.51 | 3500 | 0.8374 | 0.6319 |
+ | 0.831 | 4.01 | 4000 | 0.8575 | 0.6163 |
+ | 0.8292 | 4.51 | 4500 | 0.8372 | 0.6290 |
+ | 0.794 | 5.02 | 5000 | 0.8374 | 0.6268 |
+ | 0.8711 | 5.52 | 5500 | 0.9145 | 0.5923 |
+ | 0.823 | 6.02 | 6000 | 0.8496 | 0.6233 |
+ | 0.8323 | 6.52 | 6500 | 0.8339 | 0.6243 |
+ | 0.8009 | 7.02 | 7000 | 0.8403 | 0.6248 |
+ | 0.8038 | 7.52 | 7500 | 0.8402 | 0.6243 |
+ | 0.7961 | 8.02 | 8000 | 0.8370 | 0.6305 |
+ | 0.793 | 8.53 | 8500 | 0.8203 | 0.6353 |
+ | 0.7915 | 9.03 | 9000 | 0.8192 | 0.6306 |
+ | 0.763 | 9.53 | 9500 | 0.7971 | 0.6393 |
 
 
  ### Framework versions
 
- - Transformers 4.33.0
+ - Transformers 4.33.1
  - Pytorch 2.0.1+cu118
- - Datasets 2.14.4
+ - Datasets 2.14.5
  - Tokenizers 0.13.3
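
For context, a minimal sketch of how a fine-tuned roberta-base checkpoint like this one could be loaded with the Transformers version listed above. The repository id and the sequence-classification head are assumptions, since the card does not name the task, labels, or dataset.

```python
# Minimal sketch, not from the card: the repo id below is a placeholder, and the
# task is assumed to be sequence classification (the card only reports loss and F1).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Enyonam/roberta-base-finetuned"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example input sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```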