trainer: training complete at 2024-02-19 20:46:24.071770.

Files changed:
- README.md (+22, -21)
- meta_data/README_s42_e7.md (+90, -0)
- model.safetensors (+1, -1)
README.md
CHANGED
@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.8431688702569063
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,17 +32,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- B-claim: {'precision': 0.
-- B-majorclaim: {'precision': 0.
-- B-premise: {'precision': 0.
-- I-claim: {'precision': 0.
-- I-majorclaim: {'precision': 0.
-- I-premise: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.4438
+- B-claim: {'precision': 0.5970149253731343, 'recall': 0.5776173285198556, 'f1-score': 0.5871559633027523, 'support': 277.0}
+- B-majorclaim: {'precision': 0.6540880503144654, 'recall': 0.7375886524822695, 'f1-score': 0.6933333333333332, 'support': 141.0}
+- B-premise: {'precision': 0.7533039647577092, 'recall': 0.8003120124804992, 'f1-score': 0.7760968229954615, 'support': 641.0}
+- I-claim: {'precision': 0.6253114100647733, 'recall': 0.6153468987496935, 'f1-score': 0.6202891387618931, 'support': 4079.0}
+- I-majorclaim: {'precision': 0.7570308898109728, 'recall': 0.8045075943165115, 'f1-score': 0.7800475059382421, 'support': 2041.0}
+- I-premise: {'precision': 0.8764452113891286, 'recall': 0.8867743343518114, 'f1-score': 0.881579518333695, 'support': 11455.0}
+- O: {'precision': 0.9354231280460789, 'recall': 0.9105121293800539, 'f1-score': 0.9227995410588428, 'support': 9275.0}
+- Accuracy: 0.8432
+- Macro avg: {'precision': 0.7426596542508946, 'recall': 0.7618084214686707, 'f1-score': 0.7516145462463172, 'support': 27909.0}
+- Weighted avg: {'precision': 0.8438834099280034, 'recall': 0.8431688702569063, 'f1-score': 0.8433686534034867, 'support': 27909.0}
 
 ## Model description
 
@@ -67,18 +67,19 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 7
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | B-claim
-
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
-| No log | 5.0 | 205 | 0.
-| No log | 6.0 | 246 | 0.
+| Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:---------:|:-------:|:------------:|:---------:|:-:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.7886 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.8888888888888888, 'recall': 0.0748829953198128, 'f1-score': 0.1381294964028777, 'support': 641.0} | {'precision': 0.47000821692686934, 'recall': 0.1402304486393724, 'f1-score': 0.21601208459214502, 'support': 4079.0} | {'precision': 0.5424354243542435, 'recall': 0.0720235178833905, 'f1-score': 0.12716262975778547, 'support': 2041.0} | {'precision': 0.7775630122158652, 'recall': 0.8779572239196858, 'f1-score': 0.8247160605190865, 'support': 11455.0} | {'precision': 0.6536142336038115, 'recall': 0.9466307277628032, 'f1-score': 0.7732957548000705, 'support': 9275.0} | 0.7024 | {'precision': 0.4760728251413826, 'recall': 0.3016749876464378, 'f1-score': 0.29704514658170933, 'support': 27909.0} | {'precision': 0.6651369922726568, 'recall': 0.7024257408004586, 'f1-score': 0.6395296795513288, 'support': 27909.0} |
| No log | 2.0 | 82 | 0.5373 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.5765472312703583, 'recall': 0.828393135725429, 'f1-score': 0.679897567221511, 'support': 641.0} | {'precision': 0.5732105732105732, 'recall': 0.47315518509438587, 'f1-score': 0.5183991404781091, 'support': 4079.0} | {'precision': 0.5972927241962775, 'recall': 0.6918177364037237, 'f1-score': 0.6410896708286039, 'support': 2041.0} | {'precision': 0.858668504004823, 'recall': 0.870362287210825, 'f1-score': 0.8644758519032342, 'support': 11455.0} | {'precision': 0.8686365992742353, 'recall': 0.903288409703504, 'f1-score': 0.8856236786469344, 'support': 9275.0} | 0.7962 | {'precision': 0.4963365188508953, 'recall': 0.538145250591124, 'f1-score': 0.5127837012969133, 'support': 27909.0} | {'precision': 0.7818058448922789, 'recall': 0.7961947758787488, 'f1-score': 0.7874004427160499, 'support': 27909.0} |
| No log | 3.0 | 123 | 0.4911 | {'precision': 0.34893617021276596, 'recall': 0.296028880866426, 'f1-score': 0.3203125, 'support': 277.0} | {'precision': 0.8333333333333334, 'recall': 0.03546099290780142, 'f1-score': 0.06802721088435375, 'support': 141.0} | {'precision': 0.6662371134020618, 'recall': 0.8065522620904836, 'f1-score': 0.7297106563161608, 'support': 641.0} | {'precision': 0.6018223234624146, 'recall': 0.3238538857563128, 'f1-score': 0.421102964615875, 'support': 4079.0} | {'precision': 0.7116279069767442, 'recall': 0.7496325330720235, 'f1-score': 0.7301360057265569, 'support': 2041.0} | {'precision': 0.7889374090247453, 'recall': 0.9463116542994325, 'f1-score': 0.8604881921016073, 'support': 11455.0} | {'precision': 0.9330078346769615, 'recall': 0.8859299191374663, 'f1-score': 0.908859639420418, 'support': 9275.0} | 0.8066 | {'precision': 0.6977002987270037, 'recall': 0.5776814468757066, 'f1-score': 0.5769481670092816, 'support': 27909.0} | {'precision': 0.7968542338095115, 'recall': 0.8066215199398044, 'f1-score': 0.7904444769227739, 'support': 27909.0} |
| No log | 4.0 | 164 | 0.4471 | {'precision': 0.5464285714285714, 'recall': 0.5523465703971119, 'f1-score': 0.5493716337522441, 'support': 277.0} | {'precision': 0.6544117647058824, 'recall': 0.6312056737588653, 'f1-score': 0.6425992779783394, 'support': 141.0} | {'precision': 0.7510917030567685, 'recall': 0.8049921996879875, 'f1-score': 0.7771084337349398, 'support': 641.0} | {'precision': 0.6000949893137022, 'recall': 0.619514586908556, 'f1-score': 0.6096501809408926, 'support': 4079.0} | {'precision': 0.7037037037037037, 'recall': 0.8005879470847623, 'f1-score': 0.7490258996103598, 'support': 2041.0} | {'precision': 0.8949800652410294, 'recall': 0.8622435617634221, 'f1-score': 0.8783068783068784, 'support': 11455.0} | {'precision': 0.9216195734545848, 'recall': 0.9178436657681941, 'f1-score': 0.9197277441659464, 'support': 9275.0} | 0.8352 | {'precision': 0.7246186244148918, 'recall': 0.7412477436241284, 'f1-score': 0.7322557212128, 'support': 27909.0} | {'precision': 0.838766973613019, 'recall': 0.835178616216991, 'f1-score': 0.8365729339666597, 'support': 27909.0} |
| No log | 5.0 | 205 | 0.4553 | {'precision': 0.5725490196078431, 'recall': 0.5270758122743683, 'f1-score': 0.5488721804511277, 'support': 277.0} | {'precision': 0.608433734939759, 'recall': 0.7163120567375887, 'f1-score': 0.6579804560260587, 'support': 141.0} | {'precision': 0.7355021216407355, 'recall': 0.8112324492979719, 'f1-score': 0.7715133531157271, 'support': 641.0} | {'precision': 0.5901240035429584, 'recall': 0.6533464084334396, 'f1-score': 0.6201279813845259, 'support': 4079.0} | {'precision': 0.7180370210934137, 'recall': 0.8172464478196962, 'f1-score': 0.7644362969752522, 'support': 2041.0} | {'precision': 0.8847149103239047, 'recall': 0.8655608904408555, 'f1-score': 0.8750330950489806, 'support': 11455.0} | {'precision': 0.9455065827132226, 'recall': 0.8904582210242588, 'f1-score': 0.9171571349250416, 'support': 9275.0} | 0.8339 | {'precision': 0.7221239134088339, 'recall': 0.7544617551468827, 'f1-score': 0.7364457854181019, 'support': 27909.0} | {'precision': 0.8417519193793559, 'recall': 0.8339245404708159, 'f1-score': 0.8369775321954073, 'support': 27909.0} |
| No log | 6.0 | 246 | 0.4431 | {'precision': 0.5860805860805861, 'recall': 0.5776173285198556, 'f1-score': 0.5818181818181819, 'support': 277.0} | {'precision': 0.6503067484662577, 'recall': 0.75177304964539, 'f1-score': 0.6973684210526316, 'support': 141.0} | {'precision': 0.7481804949053857, 'recall': 0.8018720748829953, 'f1-score': 0.7740963855421686, 'support': 641.0} | {'precision': 0.634337807039757, 'recall': 0.614121108114734, 'f1-score': 0.6240657698056801, 'support': 4079.0} | {'precision': 0.7280740414279419, 'recall': 0.809407153356198, 'f1-score': 0.7665893271461718, 'support': 2041.0} | {'precision': 0.8748292349726776, 'recall': 0.8944565691837626, 'f1-score': 0.8845340354815039, 'support': 11455.0} | {'precision': 0.9417344173441734, 'recall': 0.8991913746630728, 'f1-score': 0.9199713198389498, 'support': 9275.0} | 0.8428 | {'precision': 0.7376490471766827, 'recall': 0.7640626654808583, 'f1-score': 0.7497776343836123, 'support': 27909.0} | {'precision': 0.8442738869920543, 'recall': 0.8428463936364614, 'f1-score': 0.8431306326473246, 'support': 27909.0} |
| No log | 7.0 | 287 | 0.4438 | {'precision': 0.5970149253731343, 'recall': 0.5776173285198556, 'f1-score': 0.5871559633027523, 'support': 277.0} | {'precision': 0.6540880503144654, 'recall': 0.7375886524822695, 'f1-score': 0.6933333333333332, 'support': 141.0} | {'precision': 0.7533039647577092, 'recall': 0.8003120124804992, 'f1-score': 0.7760968229954615, 'support': 641.0} | {'precision': 0.6253114100647733, 'recall': 0.6153468987496935, 'f1-score': 0.6202891387618931, 'support': 4079.0} | {'precision': 0.7570308898109728, 'recall': 0.8045075943165115, 'f1-score': 0.7800475059382421, 'support': 2041.0} | {'precision': 0.8764452113891286, 'recall': 0.8867743343518114, 'f1-score': 0.881579518333695, 'support': 11455.0} | {'precision': 0.9354231280460789, 'recall': 0.9105121293800539, 'f1-score': 0.9227995410588428, 'support': 9275.0} | 0.8432 | {'precision': 0.7426596542508946, 'recall': 0.7618084214686707, 'f1-score': 0.7516145462463172, 'support': 27909.0} | {'precision': 0.8438834099280034, 'recall': 0.8431688702569063, 'f1-score': 0.8433686534034867, 'support': 27909.0} |
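A quick sanity check on the table's Epoch/Step bookkeeping (an illustrative calculation, not part of the generated card): evaluation is logged every 41 optimizer steps, i.e. once per epoch, so the seventh epoch ends at step 287. Assuming `train_batch_size: 8` and no gradient accumulation, the step count also bounds the training-set size:

```python
import math

# One evaluation per epoch, every 41 optimizer steps:
steps_per_epoch = 41
num_epochs = 7
final_step = steps_per_epoch * num_epochs  # matches the last row of the table

# With a batch size of 8 (and no gradient accumulation, which is an
# assumption), 41 steps per epoch means ceil(num_train_examples / 8) == 41,
# i.e. between 321 and 328 training essays.
candidates = [n for n in range(300, 350) if math.ceil(n / 8) == steps_per_epoch]
print(final_step, candidates[0], candidates[-1])
```

This only cross-checks the logged step numbers; it says nothing about the data split itself.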


### Framework versions
meta_data/README_s42_e7.md
ADDED
@@ -0,0 +1,90 @@
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-full_labels
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: full_labels
      split: test
      args: full_labels
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8431688702569063
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-full_labels

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4438
- B-claim: {'precision': 0.5970149253731343, 'recall': 0.5776173285198556, 'f1-score': 0.5871559633027523, 'support': 277.0}
- B-majorclaim: {'precision': 0.6540880503144654, 'recall': 0.7375886524822695, 'f1-score': 0.6933333333333332, 'support': 141.0}
- B-premise: {'precision': 0.7533039647577092, 'recall': 0.8003120124804992, 'f1-score': 0.7760968229954615, 'support': 641.0}
- I-claim: {'precision': 0.6253114100647733, 'recall': 0.6153468987496935, 'f1-score': 0.6202891387618931, 'support': 4079.0}
- I-majorclaim: {'precision': 0.7570308898109728, 'recall': 0.8045075943165115, 'f1-score': 0.7800475059382421, 'support': 2041.0}
- I-premise: {'precision': 0.8764452113891286, 'recall': 0.8867743343518114, 'f1-score': 0.881579518333695, 'support': 11455.0}
- O: {'precision': 0.9354231280460789, 'recall': 0.9105121293800539, 'f1-score': 0.9227995410588428, 'support': 9275.0}
- Accuracy: 0.8432
- Macro avg: {'precision': 0.7426596542508946, 'recall': 0.7618084214686707, 'f1-score': 0.7516145462463172, 'support': 27909.0}
- Weighted avg: {'precision': 0.8438834099280034, 'recall': 0.8431688702569063, 'f1-score': 0.8433686534034867, 'support': 27909.0}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7

### Training results

| Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:---------:|:-------:|:------------:|:---------:|:-:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.7886 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.8888888888888888, 'recall': 0.0748829953198128, 'f1-score': 0.1381294964028777, 'support': 641.0} | {'precision': 0.47000821692686934, 'recall': 0.1402304486393724, 'f1-score': 0.21601208459214502, 'support': 4079.0} | {'precision': 0.5424354243542435, 'recall': 0.0720235178833905, 'f1-score': 0.12716262975778547, 'support': 2041.0} | {'precision': 0.7775630122158652, 'recall': 0.8779572239196858, 'f1-score': 0.8247160605190865, 'support': 11455.0} | {'precision': 0.6536142336038115, 'recall': 0.9466307277628032, 'f1-score': 0.7732957548000705, 'support': 9275.0} | 0.7024 | {'precision': 0.4760728251413826, 'recall': 0.3016749876464378, 'f1-score': 0.29704514658170933, 'support': 27909.0} | {'precision': 0.6651369922726568, 'recall': 0.7024257408004586, 'f1-score': 0.6395296795513288, 'support': 27909.0} |
| No log | 2.0 | 82 | 0.5373 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 277.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.5765472312703583, 'recall': 0.828393135725429, 'f1-score': 0.679897567221511, 'support': 641.0} | {'precision': 0.5732105732105732, 'recall': 0.47315518509438587, 'f1-score': 0.5183991404781091, 'support': 4079.0} | {'precision': 0.5972927241962775, 'recall': 0.6918177364037237, 'f1-score': 0.6410896708286039, 'support': 2041.0} | {'precision': 0.858668504004823, 'recall': 0.870362287210825, 'f1-score': 0.8644758519032342, 'support': 11455.0} | {'precision': 0.8686365992742353, 'recall': 0.903288409703504, 'f1-score': 0.8856236786469344, 'support': 9275.0} | 0.7962 | {'precision': 0.4963365188508953, 'recall': 0.538145250591124, 'f1-score': 0.5127837012969133, 'support': 27909.0} | {'precision': 0.7818058448922789, 'recall': 0.7961947758787488, 'f1-score': 0.7874004427160499, 'support': 27909.0} |
| No log | 3.0 | 123 | 0.4911 | {'precision': 0.34893617021276596, 'recall': 0.296028880866426, 'f1-score': 0.3203125, 'support': 277.0} | {'precision': 0.8333333333333334, 'recall': 0.03546099290780142, 'f1-score': 0.06802721088435375, 'support': 141.0} | {'precision': 0.6662371134020618, 'recall': 0.8065522620904836, 'f1-score': 0.7297106563161608, 'support': 641.0} | {'precision': 0.6018223234624146, 'recall': 0.3238538857563128, 'f1-score': 0.421102964615875, 'support': 4079.0} | {'precision': 0.7116279069767442, 'recall': 0.7496325330720235, 'f1-score': 0.7301360057265569, 'support': 2041.0} | {'precision': 0.7889374090247453, 'recall': 0.9463116542994325, 'f1-score': 0.8604881921016073, 'support': 11455.0} | {'precision': 0.9330078346769615, 'recall': 0.8859299191374663, 'f1-score': 0.908859639420418, 'support': 9275.0} | 0.8066 | {'precision': 0.6977002987270037, 'recall': 0.5776814468757066, 'f1-score': 0.5769481670092816, 'support': 27909.0} | {'precision': 0.7968542338095115, 'recall': 0.8066215199398044, 'f1-score': 0.7904444769227739, 'support': 27909.0} |
| No log | 4.0 | 164 | 0.4471 | {'precision': 0.5464285714285714, 'recall': 0.5523465703971119, 'f1-score': 0.5493716337522441, 'support': 277.0} | {'precision': 0.6544117647058824, 'recall': 0.6312056737588653, 'f1-score': 0.6425992779783394, 'support': 141.0} | {'precision': 0.7510917030567685, 'recall': 0.8049921996879875, 'f1-score': 0.7771084337349398, 'support': 641.0} | {'precision': 0.6000949893137022, 'recall': 0.619514586908556, 'f1-score': 0.6096501809408926, 'support': 4079.0} | {'precision': 0.7037037037037037, 'recall': 0.8005879470847623, 'f1-score': 0.7490258996103598, 'support': 2041.0} | {'precision': 0.8949800652410294, 'recall': 0.8622435617634221, 'f1-score': 0.8783068783068784, 'support': 11455.0} | {'precision': 0.9216195734545848, 'recall': 0.9178436657681941, 'f1-score': 0.9197277441659464, 'support': 9275.0} | 0.8352 | {'precision': 0.7246186244148918, 'recall': 0.7412477436241284, 'f1-score': 0.7322557212128, 'support': 27909.0} | {'precision': 0.838766973613019, 'recall': 0.835178616216991, 'f1-score': 0.8365729339666597, 'support': 27909.0} |
| No log | 5.0 | 205 | 0.4553 | {'precision': 0.5725490196078431, 'recall': 0.5270758122743683, 'f1-score': 0.5488721804511277, 'support': 277.0} | {'precision': 0.608433734939759, 'recall': 0.7163120567375887, 'f1-score': 0.6579804560260587, 'support': 141.0} | {'precision': 0.7355021216407355, 'recall': 0.8112324492979719, 'f1-score': 0.7715133531157271, 'support': 641.0} | {'precision': 0.5901240035429584, 'recall': 0.6533464084334396, 'f1-score': 0.6201279813845259, 'support': 4079.0} | {'precision': 0.7180370210934137, 'recall': 0.8172464478196962, 'f1-score': 0.7644362969752522, 'support': 2041.0} | {'precision': 0.8847149103239047, 'recall': 0.8655608904408555, 'f1-score': 0.8750330950489806, 'support': 11455.0} | {'precision': 0.9455065827132226, 'recall': 0.8904582210242588, 'f1-score': 0.9171571349250416, 'support': 9275.0} | 0.8339 | {'precision': 0.7221239134088339, 'recall': 0.7544617551468827, 'f1-score': 0.7364457854181019, 'support': 27909.0} | {'precision': 0.8417519193793559, 'recall': 0.8339245404708159, 'f1-score': 0.8369775321954073, 'support': 27909.0} |
| No log | 6.0 | 246 | 0.4431 | {'precision': 0.5860805860805861, 'recall': 0.5776173285198556, 'f1-score': 0.5818181818181819, 'support': 277.0} | {'precision': 0.6503067484662577, 'recall': 0.75177304964539, 'f1-score': 0.6973684210526316, 'support': 141.0} | {'precision': 0.7481804949053857, 'recall': 0.8018720748829953, 'f1-score': 0.7740963855421686, 'support': 641.0} | {'precision': 0.634337807039757, 'recall': 0.614121108114734, 'f1-score': 0.6240657698056801, 'support': 4079.0} | {'precision': 0.7280740414279419, 'recall': 0.809407153356198, 'f1-score': 0.7665893271461718, 'support': 2041.0} | {'precision': 0.8748292349726776, 'recall': 0.8944565691837626, 'f1-score': 0.8845340354815039, 'support': 11455.0} | {'precision': 0.9417344173441734, 'recall': 0.8991913746630728, 'f1-score': 0.9199713198389498, 'support': 9275.0} | 0.8428 | {'precision': 0.7376490471766827, 'recall': 0.7640626654808583, 'f1-score': 0.7497776343836123, 'support': 27909.0} | {'precision': 0.8442738869920543, 'recall': 0.8428463936364614, 'f1-score': 0.8431306326473246, 'support': 27909.0} |
| No log | 7.0 | 287 | 0.4438 | {'precision': 0.5970149253731343, 'recall': 0.5776173285198556, 'f1-score': 0.5871559633027523, 'support': 277.0} | {'precision': 0.6540880503144654, 'recall': 0.7375886524822695, 'f1-score': 0.6933333333333332, 'support': 141.0} | {'precision': 0.7533039647577092, 'recall': 0.8003120124804992, 'f1-score': 0.7760968229954615, 'support': 641.0} | {'precision': 0.6253114100647733, 'recall': 0.6153468987496935, 'f1-score': 0.6202891387618931, 'support': 4079.0} | {'precision': 0.7570308898109728, 'recall': 0.8045075943165115, 'f1-score': 0.7800475059382421, 'support': 2041.0} | {'precision': 0.8764452113891286, 'recall': 0.8867743343518114, 'f1-score': 0.881579518333695, 'support': 11455.0} | {'precision': 0.9354231280460789, 'recall': 0.9105121293800539, 'f1-score': 0.9227995410588428, 'support': 9275.0} | 0.8432 | {'precision': 0.7426596542508946, 'recall': 0.7618084214686707, 'f1-score': 0.7516145462463172, 'support': 27909.0} | {'precision': 0.8438834099280034, 'recall': 0.8431688702569063, 'f1-score': 0.8433686534034867, 'support': 27909.0} |
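The per-label cells in this table (and the bullets above it) follow the dictionary format of scikit-learn's `classification_report(..., output_dict=True)`. The helper below is a minimal, stand-alone re-implementation for illustration only; it is hypothetical and not the card's actual evaluation code, which was presumably run by the Trainer's metric callback:

```python
from collections import Counter

def per_label_report(y_true, y_pred):
    """Illustrative helper mirroring the per-label dictionaries shown above:
    {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for true, pred in zip(y_true, y_pred):
        if true == pred:
            tp[true] += 1
        else:
            fp[pred] += 1
            fn[true] += 1
    report = {}
    for label in sorted(set(y_true) | set(y_pred)):
        support = tp[label] + fn[label]    # number of true instances of the label
        predicted = tp[label] + fp[label]  # number of predicted instances
        precision = tp[label] / predicted if predicted else 0.0
        recall = tp[label] / support if support else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        report[label] = {"precision": precision, "recall": recall,
                         "f1-score": f1, "support": float(support)}
    report["accuracy"] = sum(tp.values()) / len(y_true)
    return report

# Tiny worked example using the card's BIO label scheme:
truth = ["O", "B-claim", "I-claim", "I-claim", "O"]
pred  = ["O", "B-claim", "I-claim", "O",       "O"]
rep = per_label_report(truth, pred)
```

Note that this is token-level scoring, which is why `support` counts tokens (e.g. 27909 in total above), not argument components.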


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
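For reference, a sketch of how the hyperparameters listed in the card above would map onto Hugging Face `TrainingArguments` keyword names. These are plausible settings, not the author's actual training script; the Adam betas and epsilon shown are the Trainer's defaults:

```python
# Hypothetical mapping of the card's hyperparameters to TrainingArguments
# keyword names (sketch only, not the original training configuration):
training_kwargs = dict(
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,        # "Adam with betas=(0.9,0.999)"
    adam_beta2=0.999,
    adam_epsilon=1e-8,     # "epsilon=1e-08"
    lr_scheduler_type="linear",
    num_train_epochs=7,
)
# e.g.: args = transformers.TrainingArguments(output_dir="longformer-full_labels",
#                                             **training_kwargs)
```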
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:869856a027d05b58aac2546bb5d494fa44c208b65675b409fae0d2aa9376c7c2
 size 592330980
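The `model.safetensors` entry is a Git LFS pointer: the weights are addressed by the SHA-256 digest in the `oid` field rather than stored in the repo. A small sketch (hypothetical helper, not part of the repository) for checking a downloaded file against that digest:

```python
import hashlib

EXPECTED_OID = "869856a027d05b58aac2546bb5d494fa44c208b65675b409fae0d2aa9376c7c2"

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks and return its hex SHA-256,
    comparable to the `oid sha256:` field of the LFS pointer."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (hypothetical local path):
# assert sha256_of_file("model.safetensors") == EXPECTED_OID
```

The pointer's `size` field (592330980 bytes here) can be verified the same way with `os.path.getsize`.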