Theoreticallyhugo committed · verified
Commit dc10fea · 1 Parent(s): 93b2b69

trainer: training complete at 2024-10-26 20:46:35.297562.

README.md CHANGED
@@ -1,10 +1,11 @@
1
  ---
 
2
  license: apache-2.0
3
  base_model: allenai/longformer-base-4096
4
  tags:
5
  - generated_from_trainer
6
  datasets:
7
- - essays_su_g
8
  metrics:
9
  - accuracy
10
  model-index:
@@ -14,15 +15,15 @@ model-index:
14
  name: Token Classification
15
  type: token-classification
16
  dataset:
17
- name: essays_su_g
18
- type: essays_su_g
19
  config: full_labels
20
  split: train[0%:20%]
21
  args: full_labels
22
  metrics:
23
  - name: Accuracy
24
  type: accuracy
25
- value: 0.8364696256903252
26
  ---
27
 
28
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,19 +31,19 @@ should probably proofread and complete it, then remove this comment. -->
30
 
31
  # longformer-full_labels
32
 
33
- This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
34
  It achieves the following results on the evaluation set:
35
- - Loss: 1.0525
36
- - B-claim: {'precision': 0.5714285714285714, 'recall': 0.5915492957746479, 'f1-score': 0.5813148788927336, 'support': 284.0}
37
- - B-majorclaim: {'precision': 0.7328767123287672, 'recall': 0.7588652482269503, 'f1-score': 0.7456445993031359, 'support': 141.0}
38
- - B-premise: {'precision': 0.7592592592592593, 'recall': 0.8107344632768362, 'f1-score': 0.7841530054644807, 'support': 708.0}
39
- - I-claim: {'precision': 0.5995872033023736, 'recall': 0.5700269806230072, 'f1-score': 0.5844335470891487, 'support': 4077.0}
40
- - I-majorclaim: {'precision': 0.7741293532338308, 'recall': 0.7687747035573123, 'f1-score': 0.7714427367377293, 'support': 2024.0}
41
- - I-premise: {'precision': 0.8661675245671502, 'recall': 0.907946370176586, 'f1-score': 0.8865650195577552, 'support': 12232.0}
42
- - O: {'precision': 0.9227995758218451, 'recall': 0.8818402918524524, 'f1-score': 0.9018551145196393, 'support': 9868.0}
43
- - Accuracy: 0.8365
44
- - Macro avg: {'precision': 0.746606885705971, 'recall': 0.7556767647839704, 'f1-score': 0.7507727002235176, 'support': 29334.0}
45
- - Weighted avg: {'precision': 0.8357427933389249, 'recall': 0.8364696256903252, 'f1-score': 0.8356690155425791, 'support': 29334.0}
46
 
47
  ## Model description
48
 
@@ -67,37 +68,22 @@ The following hyperparameters were used during training:
67
  - seed: 42
68
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
69
  - lr_scheduler_type: linear
70
- - num_epochs: 20
71
 
72
  ### Training results
73
 
74
- | Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
75
- |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
76
- | No log | 1.0 | 81 | 0.6443 | {'precision': 0.5, 'recall': 0.0035211267605633804, 'f1-score': 0.006993006993006993, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.5883777239709443, 'recall': 0.6864406779661016, 'f1-score': 0.6336375488917861, 'support': 708.0} | {'precision': 0.41618672324946954, 'recall': 0.3367672308069659, 'f1-score': 0.37228850325379614, 'support': 4077.0} | {'precision': 0.5611921369689283, 'recall': 0.43725296442687744, 'f1-score': 0.49153013051930017, 'support': 2024.0} | {'precision': 0.7746890504995582, 'recall': 0.9318181818181818, 'f1-score': 0.846019669697532, 'support': 12232.0} | {'precision': 0.9111808904340025, 'recall': 0.8233684637211187, 'f1-score': 0.8650519031141869, 'support': 9868.0} | 0.7591 | {'precision': 0.535946646446129, 'recall': 0.4598812350714013, 'f1-score': 0.4593601089242298, 'support': 29334.0} | {'precision': 0.7451676238152983, 'recall': 0.7591191109292971, 'f1-score': 0.7448054609057474, 'support': 29334.0} |
77
- | No log | 2.0 | 162 | 0.5139 | {'precision': 0.4420731707317073, 'recall': 0.5105633802816901, 'f1-score': 0.47385620915032683, 'support': 284.0} | {'precision': 0.6274509803921569, 'recall': 0.45390070921985815, 'f1-score': 0.5267489711934156, 'support': 141.0} | {'precision': 0.7261306532663316, 'recall': 0.8163841807909604, 'f1-score': 0.7686170212765958, 'support': 708.0} | {'precision': 0.5221008840353614, 'recall': 0.49251900907530044, 'f1-score': 0.5068787075602675, 'support': 4077.0} | {'precision': 0.5741007194244604, 'recall': 0.7885375494071146, 'f1-score': 0.6644462947543713, 'support': 2024.0} | {'precision': 0.8642982877260361, 'recall': 0.8830935251798561, 'f1-score': 0.8735948241002829, 'support': 12232.0} | {'precision': 0.925979519145147, 'recall': 0.8430279691933522, 'f1-score': 0.8825588796944621, 'support': 9868.0} | 0.8015 | {'precision': 0.668876316388743, 'recall': 0.6840037604497332, 'f1-score': 0.6709572725328175, 'support': 29334.0} | {'precision': 0.8089032379475054, 'recall': 0.8015272380173177, 'f1-score': 0.8031401896750007, 'support': 29334.0} |
78
- | No log | 3.0 | 243 | 0.5770 | {'precision': 0.4393939393939394, 'recall': 0.4084507042253521, 'f1-score': 0.4233576642335767, 'support': 284.0} | {'precision': 0.7181818181818181, 'recall': 0.5602836879432624, 'f1-score': 0.6294820717131473, 'support': 141.0} | {'precision': 0.6709816612729234, 'recall': 0.8785310734463276, 'f1-score': 0.7608562691131499, 'support': 708.0} | {'precision': 0.5186211141889813, 'recall': 0.41329408879077756, 'f1-score': 0.46000546000546, 'support': 4077.0} | {'precision': 0.7556615017878426, 'recall': 0.6264822134387352, 'f1-score': 0.6850351161534306, 'support': 2024.0} | {'precision': 0.7999862438957287, 'recall': 0.9508665794637018, 'f1-score': 0.8689253296477533, 'support': 12232.0} | {'precision': 0.9404692424419283, 'recall': 0.8164775030401297, 'f1-score': 0.8740981828044481, 'support': 9868.0} | 0.7997 | {'precision': 0.6918993601661659, 'recall': 0.6649122643354695, 'f1-score': 0.6716800133815666, 'support': 29334.0} | {'precision': 0.7980829724295806, 'recall': 0.7996863707643008, 'f1-score': 0.7930703491848509, 'support': 29334.0} |
79
- | No log | 4.0 | 324 | 0.5089 | {'precision': 0.4666666666666667, 'recall': 0.6654929577464789, 'f1-score': 0.5486211901306242, 'support': 284.0} | {'precision': 0.6477987421383647, 'recall': 0.7304964539007093, 'f1-score': 0.6866666666666668, 'support': 141.0} | {'precision': 0.7981366459627329, 'recall': 0.7259887005649718, 'f1-score': 0.7603550295857988, 'support': 708.0} | {'precision': 0.5201636469900643, 'recall': 0.6548933038999264, 'f1-score': 0.5798045602605862, 'support': 4077.0} | {'precision': 0.7390321121664405, 'recall': 0.8073122529644269, 'f1-score': 0.7716646989374262, 'support': 2024.0} | {'precision': 0.899807994414383, 'recall': 0.842871157619359, 'f1-score': 0.8704094554664417, 'support': 12232.0} | {'precision': 0.9231016731016731, 'recall': 0.8722132144304824, 'f1-score': 0.8969362234264276, 'support': 9868.0} | 0.8191 | {'precision': 0.7135296402057607, 'recall': 0.7570382915894793, 'f1-score': 0.7306368320677102, 'support': 29334.0} | {'precision': 0.835926930625345, 'recall': 0.8190836571896093, 'f1-score': 0.825475128990697, 'support': 29334.0} |
80
- | No log | 5.0 | 405 | 0.5750 | {'precision': 0.5029585798816568, 'recall': 0.5985915492957746, 'f1-score': 0.5466237942122186, 'support': 284.0} | {'precision': 0.728, 'recall': 0.6453900709219859, 'f1-score': 0.6842105263157895, 'support': 141.0} | {'precision': 0.7608695652173914, 'recall': 0.7909604519774012, 'f1-score': 0.775623268698061, 'support': 708.0} | {'precision': 0.5492530345471522, 'recall': 0.577140053961246, 'f1-score': 0.5628513335725392, 'support': 4077.0} | {'precision': 0.8153946510110893, 'recall': 0.6175889328063241, 'f1-score': 0.7028394714647174, 'support': 2024.0} | {'precision': 0.8660855784469097, 'recall': 0.8935578809679529, 'f1-score': 0.8796072750684049, 'support': 12232.0} | {'precision': 0.9043101670447515, 'recall': 0.8887312525334414, 'f1-score': 0.8964530307676581, 'support': 9868.0} | 0.8224 | {'precision': 0.7324102251641358, 'recall': 0.715994313209161, 'f1-score': 0.7211726714427698, 'support': 29334.0} | {'precision': 0.8246928072651426, 'recall': 0.8223904002181769, 'f1-score': 0.8223802682715222, 'support': 29334.0} |
81
- | No log | 6.0 | 486 | 0.5503 | {'precision': 0.5160349854227405, 'recall': 0.6232394366197183, 'f1-score': 0.5645933014354066, 'support': 284.0} | {'precision': 0.6923076923076923, 'recall': 0.7659574468085106, 'f1-score': 0.7272727272727273, 'support': 141.0} | {'precision': 0.7780859916782247, 'recall': 0.7923728813559322, 'f1-score': 0.7851644506648006, 'support': 708.0} | {'precision': 0.5575316048853654, 'recall': 0.6382143733137111, 'f1-score': 0.5951509606587375, 'support': 4077.0} | {'precision': 0.7698019801980198, 'recall': 0.7682806324110671, 'f1-score': 0.7690405539070228, 'support': 2024.0} | {'precision': 0.8899397388684298, 'recall': 0.8692773054283846, 'f1-score': 0.8794871794871795, 'support': 12232.0} | {'precision': 0.9260470513767275, 'recall': 0.8895419537900284, 'f1-score': 0.907427508140797, 'support': 9868.0} | 0.8323 | {'precision': 0.7328212921053143, 'recall': 0.763840575675336, 'f1-score': 0.7468766687952387, 'support': 29334.0} | {'precision': 0.8403274341189826, 'recall': 0.8322765391695643, 'f1-score': 0.835690214793681, 'support': 29334.0} |
82
- | 0.4181 | 7.0 | 567 | 0.6419 | {'precision': 0.5571428571428572, 'recall': 0.5492957746478874, 'f1-score': 0.5531914893617021, 'support': 284.0} | {'precision': 0.7152777777777778, 'recall': 0.7304964539007093, 'f1-score': 0.7228070175438596, 'support': 141.0} | {'precision': 0.7544529262086515, 'recall': 0.8375706214689266, 'f1-score': 0.7938420348058902, 'support': 708.0} | {'precision': 0.6019025655808591, 'recall': 0.5121412803532008, 'f1-score': 0.5534057778955738, 'support': 4077.0} | {'precision': 0.8124655267512411, 'recall': 0.7277667984189723, 'f1-score': 0.7677873338545738, 'support': 2024.0} | {'precision': 0.855129565085619, 'recall': 0.9226618705035972, 'f1-score': 0.8876130554463233, 'support': 12232.0} | {'precision': 0.9203649937785151, 'recall': 0.8994730441832185, 'f1-score': 0.9097990979909799, 'support': 9868.0} | 0.8378 | {'precision': 0.7452480303322171, 'recall': 0.7399151204966445, 'f1-score': 0.7412065438427005, 'support': 29334.0} | {'precision': 0.8329491032454597, 'recall': 0.8377650507943001, 'f1-score': 0.8340655773672634, 'support': 29334.0} |
83
- | 0.4181 | 8.0 | 648 | 0.6668 | {'precision': 0.5745454545454546, 'recall': 0.5563380281690141, 'f1-score': 0.5652951699463328, 'support': 284.0} | {'precision': 0.7027027027027027, 'recall': 0.7375886524822695, 'f1-score': 0.7197231833910034, 'support': 141.0} | {'precision': 0.7538071065989848, 'recall': 0.8389830508474576, 'f1-score': 0.7941176470588234, 'support': 708.0} | {'precision': 0.6235260281852172, 'recall': 0.5317635516311013, 'f1-score': 0.5740005295207837, 'support': 4077.0} | {'precision': 0.8115154807170016, 'recall': 0.7381422924901185, 'f1-score': 0.773091849935317, 'support': 2024.0} | {'precision': 0.8614178024822965, 'recall': 0.9248691955526488, 'f1-score': 0.8920165582495565, 'support': 12232.0} | {'precision': 0.9189412737799835, 'recall': 0.9006890960680989, 'f1-score': 0.9097236438075741, 'support': 9868.0} | 0.8427 | {'precision': 0.7494936927159488, 'recall': 0.7469105524629585, 'f1-score': 0.7468526545584844, 'support': 29334.0} | {'precision': 0.8381245456177384, 'recall': 0.8426740301356788, 'f1-score': 0.8392138000943469, 'support': 29334.0} |
84
- | 0.4181 | 9.0 | 729 | 0.7192 | {'precision': 0.5454545454545454, 'recall': 0.6338028169014085, 'f1-score': 0.5863192182410424, 'support': 284.0} | {'precision': 0.6928104575163399, 'recall': 0.75177304964539, 'f1-score': 0.7210884353741497, 'support': 141.0} | {'precision': 0.7757404795486601, 'recall': 0.7768361581920904, 'f1-score': 0.7762879322512349, 'support': 708.0} | {'precision': 0.5975181456333412, 'recall': 0.6259504537650233, 'f1-score': 0.6114039290848108, 'support': 4077.0} | {'precision': 0.7642474427666829, 'recall': 0.775197628458498, 'f1-score': 0.7696835908756438, 'support': 2024.0} | {'precision': 0.893157763146929, 'recall': 0.8761445389143231, 'f1-score': 0.8845693533077462, 'support': 12232.0} | {'precision': 0.9098686220592729, 'recall': 0.9053506282934739, 'f1-score': 0.9076040026413369, 'support': 9868.0} | 0.8389 | {'precision': 0.7398282080179673, 'recall': 0.7635793248814581, 'f1-score': 0.7509937802537092, 'support': 29334.0} | {'precision': 0.8416318009865815, 'recall': 0.8388900252266994, 'f1-score': 0.8401384747371046, 'support': 29334.0} |
85
- | 0.4181 | 10.0 | 810 | 0.8728 | {'precision': 0.5584905660377358, 'recall': 0.5211267605633803, 'f1-score': 0.5391621129326047, 'support': 284.0} | {'precision': 0.6948051948051948, 'recall': 0.7588652482269503, 'f1-score': 0.7254237288135594, 'support': 141.0} | {'precision': 0.7503201024327785, 'recall': 0.827683615819209, 'f1-score': 0.7871054398925452, 'support': 708.0} | {'precision': 0.5859070464767616, 'recall': 0.4792739759627177, 'f1-score': 0.5272531030760929, 'support': 4077.0} | {'precision': 0.7485322896281801, 'recall': 0.7559288537549407, 'f1-score': 0.7522123893805309, 'support': 2024.0} | {'precision': 0.8385786052009456, 'recall': 0.92797580117724, 'f1-score': 0.8810152126668737, 'support': 12232.0} | {'precision': 0.9320967566981234, 'recall': 0.8707944872314552, 'f1-score': 0.9004034159375491, 'support': 9868.0} | 0.8273 | {'precision': 0.7298186516113886, 'recall': 0.7345212489622704, 'f1-score': 0.7303679146713938, 'support': 29334.0} | {'precision': 0.8231745470223255, 'recall': 0.8273334696938706, 'f1-score': 0.8231582874630071, 'support': 29334.0} |
86
- | 0.4181 | 11.0 | 891 | 0.7904 | {'precision': 0.5487804878048781, 'recall': 0.6338028169014085, 'f1-score': 0.5882352941176471, 'support': 284.0} | {'precision': 0.6956521739130435, 'recall': 0.7943262411347518, 'f1-score': 0.7417218543046358, 'support': 141.0} | {'precision': 0.7777777777777778, 'recall': 0.7810734463276836, 'f1-score': 0.7794221282593374, 'support': 708.0} | {'precision': 0.600095785440613, 'recall': 0.6146676477802305, 'f1-score': 0.6072943172179812, 'support': 4077.0} | {'precision': 0.7808219178082192, 'recall': 0.7885375494071146, 'f1-score': 0.7846607669616519, 'support': 2024.0} | {'precision': 0.8951898734177215, 'recall': 0.8672334859385219, 'f1-score': 0.8809899510007474, 'support': 12232.0} | {'precision': 0.8980524642289348, 'recall': 0.9158897446291042, 'f1-score': 0.9068834035721453, 'support': 9868.0} | 0.8384 | {'precision': 0.742338640055884, 'recall': 0.7707901331598307, 'f1-score': 0.755601102204878, 'support': 29334.0} | {'precision': 0.8401010980182351, 'recall': 0.8383786732119725, 'f1-score': 0.8390590885154817, 'support': 29334.0} |
87
- | 0.4181 | 12.0 | 972 | 0.9021 | {'precision': 0.5766423357664233, 'recall': 0.5563380281690141, 'f1-score': 0.5663082437275986, 'support': 284.0} | {'precision': 0.7272727272727273, 'recall': 0.7375886524822695, 'f1-score': 0.7323943661971831, 'support': 141.0} | {'precision': 0.7567221510883483, 'recall': 0.8347457627118644, 'f1-score': 0.793821356615178, 'support': 708.0} | {'precision': 0.6302699423718532, 'recall': 0.5096884964434634, 'f1-score': 0.5636018443178736, 'support': 4077.0} | {'precision': 0.7813152400835073, 'recall': 0.7396245059288538, 'f1-score': 0.7598984771573604, 'support': 2024.0} | {'precision': 0.8537686174213931, 'recall': 0.9278940483976456, 'f1-score': 0.889289352033221, 'support': 12232.0} | {'precision': 0.9143213210094506, 'recall': 0.892176732873936, 'f1-score': 0.9031132994819715, 'support': 9868.0} | 0.8380 | {'precision': 0.7486160478591003, 'recall': 0.7425794610010066, 'f1-score': 0.7440609913614838, 'support': 29334.0} | {'precision': 0.8324430451309904, 'recall': 0.838003681734506, 'f1-score': 0.8335608269497821, 'support': 29334.0} |
88
- | 0.0774 | 13.0 | 1053 | 0.9174 | {'precision': 0.5379939209726444, 'recall': 0.6232394366197183, 'f1-score': 0.5774877650897227, 'support': 284.0} | {'precision': 0.7013888888888888, 'recall': 0.7163120567375887, 'f1-score': 0.7087719298245613, 'support': 141.0} | {'precision': 0.7626666666666667, 'recall': 0.807909604519774, 'f1-score': 0.784636488340192, 'support': 708.0} | {'precision': 0.5750291715285881, 'recall': 0.6043659553593328, 'f1-score': 0.5893326955273857, 'support': 4077.0} | {'precision': 0.7868589743589743, 'recall': 0.7277667984189723, 'f1-score': 0.7561601642710472, 'support': 2024.0} | {'precision': 0.8721798538290435, 'recall': 0.8975637671680837, 'f1-score': 0.8846897663174859, 'support': 12232.0} | {'precision': 0.9202434336963485, 'recall': 0.8734292663153628, 'f1-score': 0.8962254341270668, 'support': 9868.0} | 0.8313 | {'precision': 0.7366229871344505, 'recall': 0.7500838407341189, 'f1-score': 0.7424720347853516, 'support': 29334.0} | {'precision': 0.8344622887797986, 'recall': 0.8312879252744256, 'f1-score': 0.8324170375280131, 'support': 29334.0} |
89
- | 0.0774 | 14.0 | 1134 | 0.9774 | {'precision': 0.5398773006134969, 'recall': 0.6197183098591549, 'f1-score': 0.5770491803278688, 'support': 284.0} | {'precision': 0.6871165644171779, 'recall': 0.7943262411347518, 'f1-score': 0.736842105263158, 'support': 141.0} | {'precision': 0.7735334242837654, 'recall': 0.8008474576271186, 'f1-score': 0.7869535045107564, 'support': 708.0} | {'precision': 0.5810174281676872, 'recall': 0.6051017905322541, 'f1-score': 0.5928150907124834, 'support': 4077.0} | {'precision': 0.7494387067804221, 'recall': 0.8246047430830039, 'f1-score': 0.7852270054104916, 'support': 2024.0} | {'precision': 0.8794297680412371, 'recall': 0.8926586003924133, 'f1-score': 0.8859948068808828, 'support': 12232.0} | {'precision': 0.9270302504608046, 'recall': 0.8664369679773004, 'f1-score': 0.8957100204284741, 'support': 9868.0} | 0.8338 | {'precision': 0.733920491823513, 'recall': 0.7719563015151424, 'f1-score': 0.751513101933445, 'support': 29334.0} | {'precision': 0.8382307794620859, 'recall': 0.8338446853480602, 'f1-score': 0.835464012013009, 'support': 29334.0} |
90
- | 0.0774 | 15.0 | 1215 | 0.9720 | {'precision': 0.5487804878048781, 'recall': 0.6338028169014085, 'f1-score': 0.5882352941176471, 'support': 284.0} | {'precision': 0.7445255474452555, 'recall': 0.723404255319149, 'f1-score': 0.7338129496402878, 'support': 141.0} | {'precision': 0.7593582887700535, 'recall': 0.8022598870056498, 'f1-score': 0.7802197802197803, 'support': 708.0} | {'precision': 0.570828729281768, 'recall': 0.6335540838852097, 'f1-score': 0.6005580097651709, 'support': 4077.0} | {'precision': 0.7954422137818774, 'recall': 0.724308300395257, 'f1-score': 0.7582104990949057, 'support': 2024.0} | {'precision': 0.8803978651140223, 'recall': 0.8900425114453892, 'f1-score': 0.885193918204732, 'support': 12232.0} | {'precision': 0.9188239054010866, 'recall': 0.874037292257803, 'f1-score': 0.8958712022851208, 'support': 9868.0} | 0.8322 | {'precision': 0.7454510053712774, 'recall': 0.7544870210299808, 'f1-score': 0.748871664761092, 'support': 29334.0} | {'precision': 0.8376519459918353, 'recall': 0.8321742687666189, 'f1-score': 0.8343275428320325, 'support': 29334.0} |
91
- | 0.0774 | 16.0 | 1296 | 1.0037 | {'precision': 0.5662251655629139, 'recall': 0.602112676056338, 'f1-score': 0.5836177474402731, 'support': 284.0} | {'precision': 0.7094594594594594, 'recall': 0.7446808510638298, 'f1-score': 0.726643598615917, 'support': 141.0} | {'precision': 0.766042780748663, 'recall': 0.809322033898305, 'f1-score': 0.7870879120879121, 'support': 708.0} | {'precision': 0.5981858298602599, 'recall': 0.5984792739759627, 'f1-score': 0.5983325159391859, 'support': 4077.0} | {'precision': 0.7981220657276995, 'recall': 0.7559288537549407, 'f1-score': 0.7764526769855367, 'support': 2024.0} | {'precision': 0.8736565560066873, 'recall': 0.8971550032701112, 'f1-score': 0.885249868914613, 'support': 12232.0} | {'precision': 0.9181542958555173, 'recall': 0.8912646939602756, 'f1-score': 0.9045096930117758, 'support': 9868.0} | 0.8382 | {'precision': 0.7471208790316002, 'recall': 0.7569919122828231, 'f1-score': 0.751699144713602, 'support': 29334.0} | {'precision': 0.8387644471781172, 'recall': 0.8382082225403968, 'f1-score': 0.838292846606077, 'support': 29334.0} |
92
- | 0.0774 | 17.0 | 1377 | 1.0845 | {'precision': 0.5382165605095541, 'recall': 0.5950704225352113, 'f1-score': 0.5652173913043479, 'support': 284.0} | {'precision': 0.7163120567375887, 'recall': 0.7163120567375887, 'f1-score': 0.7163120567375887, 'support': 141.0} | {'precision': 0.7509627727856226, 'recall': 0.826271186440678, 'f1-score': 0.7868190988567586, 'support': 708.0} | {'precision': 0.5689655172413793, 'recall': 0.5827814569536424, 'f1-score': 0.5757906215921483, 'support': 4077.0} | {'precision': 0.7852169255490091, 'recall': 0.724308300395257, 'f1-score': 0.7535337959393473, 'support': 2024.0} | {'precision': 0.8561790861698866, 'recall': 0.9130150425114454, 'f1-score': 0.8836841272353221, 'support': 12232.0} | {'precision': 0.9375346721402419, 'recall': 0.8563032022699635, 'f1-score': 0.8950797097611355, 'support': 9868.0} | 0.8289 | {'precision': 0.7361982273047546, 'recall': 0.7448659525491124, 'f1-score': 0.7394909716323783, 'support': 29334.0} | {'precision': 0.8324422630439487, 'recall': 0.8289016158723665, 'f1-score': 0.829519030769714, 'support': 29334.0} |
93
- | 0.0774 | 18.0 | 1458 | 1.0618 | {'precision': 0.5774647887323944, 'recall': 0.5774647887323944, 'f1-score': 0.5774647887323944, 'support': 284.0} | {'precision': 0.7571428571428571, 'recall': 0.75177304964539, 'f1-score': 0.7544483985765125, 'support': 141.0} | {'precision': 0.754863813229572, 'recall': 0.8220338983050848, 'f1-score': 0.7870182555780934, 'support': 708.0} | {'precision': 0.59768299104792, 'recall': 0.5567819475104243, 'f1-score': 0.5765079365079365, 'support': 4077.0} | {'precision': 0.7977588046958378, 'recall': 0.7386363636363636, 'f1-score': 0.767060030785018, 'support': 2024.0} | {'precision': 0.8586523736600307, 'recall': 0.9167756703727927, 'f1-score': 0.8867626126838526, 'support': 12232.0} | {'precision': 0.9229297331774211, 'recall': 0.879813538710985, 'f1-score': 0.9008560311284047, 'support': 9868.0} | 0.8357 | {'precision': 0.7523564802408619, 'recall': 0.7490398938447764, 'f1-score': 0.7500168648560301, 'support': 29334.0} | {'precision': 0.8340875618543231, 'recall': 0.8356514624667621, 'f1-score': 0.8340855697185618, 'support': 29334.0} |
94
- | 0.0228 | 19.0 | 1539 | 1.0645 | {'precision': 0.5694444444444444, 'recall': 0.5774647887323944, 'f1-score': 0.5734265734265734, 'support': 284.0} | {'precision': 0.7394366197183099, 'recall': 0.7446808510638298, 'f1-score': 0.7420494699646644, 'support': 141.0} | {'precision': 0.7552083333333334, 'recall': 0.8192090395480226, 'f1-score': 0.7859078590785908, 'support': 708.0} | {'precision': 0.5936120488184887, 'recall': 0.5607064017660044, 'f1-score': 0.5766902119071644, 'support': 4077.0} | {'precision': 0.7824947589098532, 'recall': 0.7376482213438735, 'f1-score': 0.7594099694811801, 'support': 2024.0} | {'precision': 0.8594939629316312, 'recall': 0.9136690647482014, 'f1-score': 0.8857539132157718, 'support': 12232.0} | {'precision': 0.9252186899935994, 'recall': 0.8789014997973247, 'f1-score': 0.9014655441222326, 'support': 9868.0} | 0.8344 | {'precision': 0.7464155511642371, 'recall': 0.7474685524285215, 'f1-score': 0.7463862201708825, 'support': 29334.0} | {'precision': 0.8334350647066741, 'recall': 0.8344242176314175, 'f1-score': 0.8332419893084726, 'support': 29334.0} |
95
- | 0.0228 | 20.0 | 1620 | 1.0525 | {'precision': 0.5714285714285714, 'recall': 0.5915492957746479, 'f1-score': 0.5813148788927336, 'support': 284.0} | {'precision': 0.7328767123287672, 'recall': 0.7588652482269503, 'f1-score': 0.7456445993031359, 'support': 141.0} | {'precision': 0.7592592592592593, 'recall': 0.8107344632768362, 'f1-score': 0.7841530054644807, 'support': 708.0} | {'precision': 0.5995872033023736, 'recall': 0.5700269806230072, 'f1-score': 0.5844335470891487, 'support': 4077.0} | {'precision': 0.7741293532338308, 'recall': 0.7687747035573123, 'f1-score': 0.7714427367377293, 'support': 2024.0} | {'precision': 0.8661675245671502, 'recall': 0.907946370176586, 'f1-score': 0.8865650195577552, 'support': 12232.0} | {'precision': 0.9227995758218451, 'recall': 0.8818402918524524, 'f1-score': 0.9018551145196393, 'support': 9868.0} | 0.8365 | {'precision': 0.746606885705971, 'recall': 0.7556767647839704, 'f1-score': 0.7507727002235176, 'support': 29334.0} | {'precision': 0.8357427933389249, 'recall': 0.8364696256903252, 'f1-score': 0.8356690155425791, 'support': 29334.0} |
96
 
97
 
98
  ### Framework versions
99
 
100
- - Transformers 4.38.2
101
- - Pytorch 2.2.1+cu121
102
- - Datasets 2.18.0
103
- - Tokenizers 0.15.2
 
1
  ---
2
+ library_name: transformers
3
  license: apache-2.0
4
  base_model: allenai/longformer-base-4096
5
  tags:
6
  - generated_from_trainer
7
  datasets:
8
+ - stab-gurevych-essays
9
  metrics:
10
  - accuracy
11
  model-index:
 
15
  name: Token Classification
16
  type: token-classification
17
  dataset:
18
+ name: stab-gurevych-essays
19
+ type: stab-gurevych-essays
20
  config: full_labels
21
  split: train[0%:20%]
22
  args: full_labels
23
  metrics:
24
  - name: Accuracy
25
  type: accuracy
26
+ value: 0.8572502648576603
27
  ---
28
 
29
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
31
 
32
  # longformer-full_labels
33
 
34
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the stab-gurevych-essays dataset.
35
  It achieves the following results on the evaluation set:
36
+ - Loss: 0.3818
37
+ - B-claim: {'precision': 0.5588235294117647, 'recall': 0.46830985915492956, 'f1-score': 0.5095785440613027, 'support': 284.0}
38
+ - B-majorclaim: {'precision': 0.8787878787878788, 'recall': 0.20567375886524822, 'f1-score': 0.3333333333333333, 'support': 141.0}
39
+ - B-premise: {'precision': 0.7287735849056604, 'recall': 0.8728813559322034, 'f1-score': 0.794344473007712, 'support': 708.0}
40
+ - I-claim: {'precision': 0.6021926389976507, 'recall': 0.5673880964092474, 'f1-score': 0.5842725085475498, 'support': 4066.0}
41
+ - I-majorclaim: {'precision': 0.7885196374622356, 'recall': 0.7767857142857143, 'f1-score': 0.782608695652174, 'support': 2016.0}
42
+ - I-premise: {'precision': 0.8760707709550877, 'recall': 0.8973349733497334, 'f1-score': 0.8865753868589484, 'support': 12195.0}
43
+ - O: {'precision': 0.9648159446817165, 'recall': 0.9631509491422191, 'f1-score': 0.9639827279654559, 'support': 9851.0}
44
+ - Accuracy: 0.8573
45
+ - Macro avg: {'precision': 0.7711405693145706, 'recall': 0.6787892438770422, 'f1-score': 0.693527952775211, 'support': 29261.0}
46
+ - Weighted avg: {'precision': 0.8552285449410628, 'recall': 0.8572502648576603, 'f1-score': 0.8549088561404111, 'support': 29261.0}
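For orientation, a minimal usage sketch for this checkpoint as a token classifier. The Hub id below is an assumption pieced together from the committer name and the model name, not something the card confirms, and the example sentence is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Hypothetical Hub id (committer name + model name); replace with the real repo id.
model_id = "Theoreticallyhugo/longformer-full_labels"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Token-level tagging of argument components (B/I-Claim, B/I-MajorClaim, B/I-Premise, O).
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(tagger("School uniforms should be mandatory because they reduce peer pressure."))
```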
47
 
48
  ## Model description
49
 
 
68
  - seed: 42
69
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
70
  - lr_scheduler_type: linear
71
+ - num_epochs: 5
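As a sketch only, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. The output directory is hypothetical, and values not shown in this hunk (learning rate, batch sizes) are left at library defaults rather than guessed.

```python
from transformers import TrainingArguments

# Only the hyperparameters stated above are set; everything else is left at
# library defaults because it is not shown in this hunk.
training_args = TrainingArguments(
    output_dir="longformer-full_labels",  # hypothetical
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```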
72
 
73
  ### Training results
74
 
75
+ | Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
76
+ |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
77
+ | No log | 1.0 | 41 | 0.7363 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.7931034482758621, 'recall': 0.06497175141242938, 'f1-score': 0.12010443864229765, 'support': 708.0} | {'precision': 0.35688405797101447, 'recall': 0.09690113133300542, 'f1-score': 0.15241779497098645, 'support': 4066.0} | {'precision': 0.4854771784232365, 'recall': 0.3482142857142857, 'f1-score': 0.4055459272097054, 'support': 2016.0} | {'precision': 0.7254034519284691, 'recall': 0.9546535465354653, 'f1-score': 0.8243874805268375, 'support': 12195.0} | {'precision': 0.8224254998113919, 'recall': 0.8852908334179271, 'f1-score': 0.8527010510877536, 'support': 9851.0} | 0.7349 | {'precision': 0.4547562337728534, 'recall': 0.3357187926304447, 'f1-score': 0.3364509560625115, 'support': 29261.0} | {'precision': 0.6814305221181916, 'recall': 0.7349372885410614, 'f1-score': 0.6826734788782265, 'support': 29261.0} |
78
+ | No log | 2.0 | 82 | 0.4757 | {'precision': 1.0, 'recall': 0.01056338028169014, 'f1-score': 0.020905923344947737, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.6255364806866953, 'recall': 0.8234463276836158, 'f1-score': 0.7109756097560975, 'support': 708.0} | {'precision': 0.5658734764944864, 'recall': 0.4795868175110674, 'f1-score': 0.5191693290734825, 'support': 4066.0} | {'precision': 0.745417515274949, 'recall': 0.5446428571428571, 'f1-score': 0.6294067067927773, 'support': 2016.0} | {'precision': 0.8514935768456895, 'recall': 0.9022550225502255, 'f1-score': 0.8761396663614285, 'support': 12195.0} | {'precision': 0.9034811635670005, 'recall': 0.9616282610902447, 'f1-score': 0.931648308418568, 'support': 9851.0} | 0.8240 | {'precision': 0.6702574589812601, 'recall': 0.5317318094656714, 'f1-score': 0.5268922205353288, 'support': 29261.0} | {'precision': 0.8138696629123667, 'recall': 0.8239636376063703, 'f1-score': 0.8117058591419718, 'support': 29261.0} |
79
+ | No log | 3.0 | 123 | 0.4101 | {'precision': 0.49624060150375937, 'recall': 0.2323943661971831, 'f1-score': 0.31654676258992803, 'support': 284.0} | {'precision': 1.0, 'recall': 0.014184397163120567, 'f1-score': 0.027972027972027972, 'support': 141.0} | {'precision': 0.6877777777777778, 'recall': 0.8742937853107344, 'f1-score': 0.7699004975124378, 'support': 708.0} | {'precision': 0.6374125874125874, 'recall': 0.4483521888834235, 'f1-score': 0.5264221773029165, 'support': 4066.0} | {'precision': 0.7599795291709315, 'recall': 0.7366071428571429, 'f1-score': 0.7481108312342569, 'support': 2016.0} | {'precision': 0.843370836090889, 'recall': 0.9404674046740468, 'f1-score': 0.8892765759478949, 'support': 12195.0} | {'precision': 0.9602568022011617, 'recall': 0.9565526342503299, 'f1-score': 0.9584011391375101, 'support': 9851.0} | 0.8505 | {'precision': 0.7692911620224437, 'recall': 0.6004074170479973, 'f1-score': 0.6052328588138531, 'support': 29261.0} | {'precision': 0.841977868607838, 'recall': 0.8505177540070401, 'f1-score': 0.8398036418020065, 'support': 29261.0} |
80
+ | No log | 4.0 | 164 | 0.3859 | {'precision': 0.538135593220339, 'recall': 0.4471830985915493, 'f1-score': 0.48846153846153845, 'support': 284.0} | {'precision': 1.0, 'recall': 0.10638297872340426, 'f1-score': 0.19230769230769232, 'support': 141.0} | {'precision': 0.7128146453089245, 'recall': 0.8799435028248588, 'f1-score': 0.7876106194690266, 'support': 708.0} | {'precision': 0.6014307613694431, 'recall': 0.5789473684210527, 'f1-score': 0.5899749373433584, 'support': 4066.0} | {'precision': 0.7848036715961244, 'recall': 0.7633928571428571, 'f1-score': 0.7739502137289415, 'support': 2016.0} | {'precision': 0.8792672100718263, 'recall': 0.8933989339893399, 'f1-score': 0.8862767428617913, 'support': 12195.0} | {'precision': 0.9612968591691996, 'recall': 0.9631509491422191, 'f1-score': 0.9622230110034988, 'support': 9851.0} | 0.8558 | {'precision': 0.7825355343908367, 'recall': 0.6617713841193258, 'f1-score': 0.6686863935965496, 'support': 29261.0} | {'precision': 0.8550112416363399, 'recall': 0.855780732032398, 'f1-score': 0.8533403597564397, 'support': 29261.0} |
81
+ | No log | 5.0 | 205 | 0.3818 | {'precision': 0.5588235294117647, 'recall': 0.46830985915492956, 'f1-score': 0.5095785440613027, 'support': 284.0} | {'precision': 0.8787878787878788, 'recall': 0.20567375886524822, 'f1-score': 0.3333333333333333, 'support': 141.0} | {'precision': 0.7287735849056604, 'recall': 0.8728813559322034, 'f1-score': 0.794344473007712, 'support': 708.0} | {'precision': 0.6021926389976507, 'recall': 0.5673880964092474, 'f1-score': 0.5842725085475498, 'support': 4066.0} | {'precision': 0.7885196374622356, 'recall': 0.7767857142857143, 'f1-score': 0.782608695652174, 'support': 2016.0} | {'precision': 0.8760707709550877, 'recall': 0.8973349733497334, 'f1-score': 0.8865753868589484, 'support': 12195.0} | {'precision': 0.9648159446817165, 'recall': 0.9631509491422191, 'f1-score': 0.9639827279654559, 'support': 9851.0} | 0.8573 | {'precision': 0.7711405693145706, 'recall': 0.6787892438770422, 'f1-score': 0.693527952775211, 'support': 29261.0} | {'precision': 0.8552285449410628, 'recall': 0.8572502648576603, 'f1-score': 0.8549088561404111, 'support': 29261.0} |
82
 
83
 
84
  ### Framework versions
85
 
86
+ - Transformers 4.45.2
87
+ - Pytorch 2.5.0+cu124
88
+ - Datasets 2.19.1
89
+ - Tokenizers 0.20.1
meta_data/README_s42_e5.md CHANGED
@@ -1,9 +1,11 @@
1
  ---
 
 
2
  base_model: allenai/longformer-base-4096
3
  tags:
4
  - generated_from_trainer
5
  datasets:
6
- - essays_su_g
7
  metrics:
8
  - accuracy
9
  model-index:
@@ -13,15 +15,15 @@ model-index:
13
  name: Token Classification
14
  type: token-classification
15
  dataset:
16
- name: essays_su_g
17
- type: essays_su_g
18
  config: full_labels
19
- split: train[80%:100%]
20
  args: full_labels
21
  metrics:
22
  - name: Accuracy
23
  type: accuracy
24
- value: 0.8354393714471922
25
  ---
26
 
27
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,19 +31,19 @@ should probably proofread and complete it, then remove this comment. -->
29
 
30
  # longformer-full_labels
31
 
32
- This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
33
  It achieves the following results on the evaluation set:
34
- - Loss: 0.4449
35
- - B-claim: {'precision': 0.5258620689655172, 'recall': 0.45018450184501846, 'f1-score': 0.485089463220676, 'support': 271.0}
36
- - B-majorclaim: {'precision': 0.7142857142857143, 'recall': 0.07194244604316546, 'f1-score': 0.13071895424836602, 'support': 139.0}
37
- - B-premise: {'precision': 0.7081604426002767, 'recall': 0.8088467614533965, 'f1-score': 0.7551622418879057, 'support': 633.0}
38
- - I-claim: {'precision': 0.622454448017149, 'recall': 0.580604848787803, 'f1-score': 0.6008017586964955, 'support': 4001.0}
39
- - I-majorclaim: {'precision': 0.6968287526427062, 'recall': 0.8186785891703925, 'f1-score': 0.7528551850159891, 'support': 2013.0}
40
- - I-premise: {'precision': 0.8654449817595656, 'recall': 0.8998764996471419, 'f1-score': 0.8823249578341911, 'support': 11336.0}
41
- - O: {'precision': 0.9420488250057039, 'recall': 0.8950791242141773, 'f1-score': 0.9179635393508226, 'support': 9226.0}
42
- - Accuracy: 0.8354
43
- - Macro avg: {'precision': 0.725012176182376, 'recall': 0.6464589673087279, 'f1-score': 0.6464165857506352, 'support': 27619.0}
44
- - Weighted avg: {'precision': 0.8358464532914583, 'recall': 0.8354393714471922, 'f1-score': 0.8334161098638371, 'support': 27619.0}
45
 
46
  ## Model description
47
 
@@ -70,18 +72,18 @@ The following hyperparameters were used during training:
70
 
71
  ### Training results
72
 
73
- | Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
74
- |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:----------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
75
- | No log | 1.0 | 41 | 0.6799 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 271.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 139.0} | {'precision': 0.875, 'recall': 0.044233807266982623, 'f1-score': 0.08421052631578947, 'support': 633.0} | {'precision': 0.44683080146673654, 'recall': 0.2131967008247938, 'f1-score': 0.28866328257191204, 'support': 4001.0} | {'precision': 0.592, 'recall': 0.36761053154495776, 'f1-score': 0.4535703340484217, 'support': 2013.0} | {'precision': 0.7292961700421094, 'recall': 0.9625088214537756, 'f1-score': 0.8298284975472487, 'support': 11336.0} | {'precision': 0.8543361149255307, 'recall': 0.8766529373509646, 'f1-score': 0.8653506660247152, 'support': 9226.0} | 0.7466 | {'precision': 0.49963758377633954, 'recall': 0.35202897120592486, 'f1-score': 0.36023190092972673, 'support': 27619.0} | {'precision': 0.7126524282765021, 'recall': 0.7465874941163692, 'f1-score': 0.706468200590435, 'support': 27619.0} |
76
- | No log | 2.0 | 82 | 0.5045 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 271.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 139.0} | {'precision': 0.5941676792223572, 'recall': 0.7725118483412322, 'f1-score': 0.6717032967032966, 'support': 633.0} | {'precision': 0.5916276346604216, 'recall': 0.5051237190702325, 'f1-score': 0.5449642712687071, 'support': 4001.0} | {'precision': 0.65738555922605, 'recall': 0.6920019870839543, 'f1-score': 0.6742497579864472, 'support': 2013.0} | {'precision': 0.8346545866364666, 'recall': 0.910197600564573, 'f1-score': 0.8707907840324077, 'support': 11336.0} | {'precision': 0.9139132389300967, 'recall': 0.8814220680685021, 'f1-score': 0.8973736482012802, 'support': 9226.0} | 0.8093 | {'precision': 0.5131069569536274, 'recall': 0.5373224604469277, 'f1-score': 0.5227259654560198, 'support': 27619.0} | {'precision': 0.7951024792507403, 'recall': 0.8093341540244035, 'f1-score': 0.800655657521358, 'support': 27619.0} |
77
- | No log | 3.0 | 123 | 0.4710 | {'precision': 0.5217391304347826, 'recall': 0.17712177121771217, 'f1-score': 0.2644628099173554, 'support': 271.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 139.0} | {'precision': 0.6571798188874515, 'recall': 0.8025276461295419, 'f1-score': 0.7226173541963017, 'support': 633.0} | {'precision': 0.6227746053073564, 'recall': 0.4633841539615096, 'f1-score': 0.531384350816853, 'support': 4001.0} | {'precision': 0.6513105639396346, 'recall': 0.8147044212617983, 'f1-score': 0.7239020083866696, 'support': 2013.0} | {'precision': 0.8291489025738197, 'recall': 0.9264290755116443, 'f1-score': 0.8750937421881511, 'support': 11336.0} | {'precision': 0.9421622250669149, 'recall': 0.8775200520268805, 'f1-score': 0.9086929681800325, 'support': 9226.0} | 0.8200 | {'precision': 0.6034736066014228, 'recall': 0.5802410171584409, 'f1-score': 0.5751647476693377, 'support': 27619.0} | {'precision': 0.8129119859079974, 'recall': 0.8200152069227705, 'f1-score': 0.8116164134497381, 'support': 27619.0} |
78
- | No log | 4.0 | 164 | 0.4437 | {'precision': 0.4723618090452261, 'recall': 0.34686346863468637, 'f1-score': 0.4, 'support': 271.0} | {'precision': 0.8571428571428571, 'recall': 0.04316546762589928, 'f1-score': 0.08219178082191782, 'support': 139.0} | {'precision': 0.6771653543307087, 'recall': 0.8151658767772512, 'f1-score': 0.739784946236559, 'support': 633.0} | {'precision': 0.6176223776223776, 'recall': 0.5518620344913772, 'f1-score': 0.5828933474128827, 'support': 4001.0} | {'precision': 0.7292452830188679, 'recall': 0.7680079483358172, 'f1-score': 0.7481248487781272, 'support': 2013.0} | {'precision': 0.8598264678628591, 'recall': 0.9004057868736768, 'f1-score': 0.879648381953721, 'support': 11336.0} | {'precision': 0.9251513483764446, 'recall': 0.9110123563841318, 'f1-score': 0.9180274152148981, 'support': 9226.0} | 0.8321 | {'precision': 0.7340736424856201, 'recall': 0.6194975627318342, 'f1-score': 0.621524388631158, 'support': 27619.0} | {'precision': 0.8290421682205733, 'recall': 0.8321083312212607, 'f1-score': 0.8279682509392567, 'support': 27619.0} |
79
- | No log | 5.0 | 205 | 0.4449 | {'precision': 0.5258620689655172, 'recall': 0.45018450184501846, 'f1-score': 0.485089463220676, 'support': 271.0} | {'precision': 0.7142857142857143, 'recall': 0.07194244604316546, 'f1-score': 0.13071895424836602, 'support': 139.0} | {'precision': 0.7081604426002767, 'recall': 0.8088467614533965, 'f1-score': 0.7551622418879057, 'support': 633.0} | {'precision': 0.622454448017149, 'recall': 0.580604848787803, 'f1-score': 0.6008017586964955, 'support': 4001.0} | {'precision': 0.6968287526427062, 'recall': 0.8186785891703925, 'f1-score': 0.7528551850159891, 'support': 2013.0} | {'precision': 0.8654449817595656, 'recall': 0.8998764996471419, 'f1-score': 0.8823249578341911, 'support': 11336.0} | {'precision': 0.9420488250057039, 'recall': 0.8950791242141773, 'f1-score': 0.9179635393508226, 'support': 9226.0} | 0.8354 | {'precision': 0.725012176182376, 'recall': 0.6464589673087279, 'f1-score': 0.6464165857506352, 'support': 27619.0} | {'precision': 0.8358464532914583, 'recall': 0.8354393714471922, 'f1-score': 0.8334161098638371, 'support': 27619.0} |
80
 
81
 
82
  ### Framework versions
83
 
84
- - Transformers 4.37.2
85
- - Pytorch 2.2.0+cu121
86
- - Datasets 2.17.0
87
- - Tokenizers 0.15.2
 
1
  ---
2
+ library_name: transformers
3
+ license: apache-2.0
4
  base_model: allenai/longformer-base-4096
5
  tags:
6
  - generated_from_trainer
7
  datasets:
8
+ - stab-gurevych-essays
9
  metrics:
10
  - accuracy
11
  model-index:
 
15
  name: Token Classification
16
  type: token-classification
17
  dataset:
18
+ name: stab-gurevych-essays
19
+ type: stab-gurevych-essays
20
  config: full_labels
21
+ split: train[0%:20%]
22
  args: full_labels
23
  metrics:
24
  - name: Accuracy
25
  type: accuracy
26
+ value: 0.8572502648576603
27
  ---
28
 
29
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
31
 
32
  # longformer-full_labels
33
 
34
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the stab-gurevych-essays dataset.
35
  It achieves the following results on the evaluation set:
36
+ - Loss: 0.3818
37
+ - B-claim: {'precision': 0.5588235294117647, 'recall': 0.46830985915492956, 'f1-score': 0.5095785440613027, 'support': 284.0}
38
+ - B-majorclaim: {'precision': 0.8787878787878788, 'recall': 0.20567375886524822, 'f1-score': 0.3333333333333333, 'support': 141.0}
39
+ - B-premise: {'precision': 0.7287735849056604, 'recall': 0.8728813559322034, 'f1-score': 0.794344473007712, 'support': 708.0}
40
+ - I-claim: {'precision': 0.6021926389976507, 'recall': 0.5673880964092474, 'f1-score': 0.5842725085475498, 'support': 4066.0}
41
+ - I-majorclaim: {'precision': 0.7885196374622356, 'recall': 0.7767857142857143, 'f1-score': 0.782608695652174, 'support': 2016.0}
42
+ - I-premise: {'precision': 0.8760707709550877, 'recall': 0.8973349733497334, 'f1-score': 0.8865753868589484, 'support': 12195.0}
43
+ - O: {'precision': 0.9648159446817165, 'recall': 0.9631509491422191, 'f1-score': 0.9639827279654559, 'support': 9851.0}
44
+ - Accuracy: 0.8573
45
+ - Macro avg: {'precision': 0.7711405693145706, 'recall': 0.6787892438770422, 'f1-score': 0.693527952775211, 'support': 29261.0}
46
+ - Weighted avg: {'precision': 0.8552285449410628, 'recall': 0.8572502648576603, 'f1-score': 0.8549088561404111, 'support': 29261.0}
47
 
48
  ## Model description
49
 
 
72
 
73
  ### Training results
74
 
75
+ | Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
76
+ |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
77
+ | No log | 1.0 | 41 | 0.7363 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.7931034482758621, 'recall': 0.06497175141242938, 'f1-score': 0.12010443864229765, 'support': 708.0} | {'precision': 0.35688405797101447, 'recall': 0.09690113133300542, 'f1-score': 0.15241779497098645, 'support': 4066.0} | {'precision': 0.4854771784232365, 'recall': 0.3482142857142857, 'f1-score': 0.4055459272097054, 'support': 2016.0} | {'precision': 0.7254034519284691, 'recall': 0.9546535465354653, 'f1-score': 0.8243874805268375, 'support': 12195.0} | {'precision': 0.8224254998113919, 'recall': 0.8852908334179271, 'f1-score': 0.8527010510877536, 'support': 9851.0} | 0.7349 | {'precision': 0.4547562337728534, 'recall': 0.3357187926304447, 'f1-score': 0.3364509560625115, 'support': 29261.0} | {'precision': 0.6814305221181916, 'recall': 0.7349372885410614, 'f1-score': 0.6826734788782265, 'support': 29261.0} |
78
+ | No log | 2.0 | 82 | 0.4757 | {'precision': 1.0, 'recall': 0.01056338028169014, 'f1-score': 0.020905923344947737, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.6255364806866953, 'recall': 0.8234463276836158, 'f1-score': 0.7109756097560975, 'support': 708.0} | {'precision': 0.5658734764944864, 'recall': 0.4795868175110674, 'f1-score': 0.5191693290734825, 'support': 4066.0} | {'precision': 0.745417515274949, 'recall': 0.5446428571428571, 'f1-score': 0.6294067067927773, 'support': 2016.0} | {'precision': 0.8514935768456895, 'recall': 0.9022550225502255, 'f1-score': 0.8761396663614285, 'support': 12195.0} | {'precision': 0.9034811635670005, 'recall': 0.9616282610902447, 'f1-score': 0.931648308418568, 'support': 9851.0} | 0.8240 | {'precision': 0.6702574589812601, 'recall': 0.5317318094656714, 'f1-score': 0.5268922205353288, 'support': 29261.0} | {'precision': 0.8138696629123667, 'recall': 0.8239636376063703, 'f1-score': 0.8117058591419718, 'support': 29261.0} |
79
+ | No log | 3.0 | 123 | 0.4101 | {'precision': 0.49624060150375937, 'recall': 0.2323943661971831, 'f1-score': 0.31654676258992803, 'support': 284.0} | {'precision': 1.0, 'recall': 0.014184397163120567, 'f1-score': 0.027972027972027972, 'support': 141.0} | {'precision': 0.6877777777777778, 'recall': 0.8742937853107344, 'f1-score': 0.7699004975124378, 'support': 708.0} | {'precision': 0.6374125874125874, 'recall': 0.4483521888834235, 'f1-score': 0.5264221773029165, 'support': 4066.0} | {'precision': 0.7599795291709315, 'recall': 0.7366071428571429, 'f1-score': 0.7481108312342569, 'support': 2016.0} | {'precision': 0.843370836090889, 'recall': 0.9404674046740468, 'f1-score': 0.8892765759478949, 'support': 12195.0} | {'precision': 0.9602568022011617, 'recall': 0.9565526342503299, 'f1-score': 0.9584011391375101, 'support': 9851.0} | 0.8505 | {'precision': 0.7692911620224437, 'recall': 0.6004074170479973, 'f1-score': 0.6052328588138531, 'support': 29261.0} | {'precision': 0.841977868607838, 'recall': 0.8505177540070401, 'f1-score': 0.8398036418020065, 'support': 29261.0} |
80
+ | No log | 4.0 | 164 | 0.3859 | {'precision': 0.538135593220339, 'recall': 0.4471830985915493, 'f1-score': 0.48846153846153845, 'support': 284.0} | {'precision': 1.0, 'recall': 0.10638297872340426, 'f1-score': 0.19230769230769232, 'support': 141.0} | {'precision': 0.7128146453089245, 'recall': 0.8799435028248588, 'f1-score': 0.7876106194690266, 'support': 708.0} | {'precision': 0.6014307613694431, 'recall': 0.5789473684210527, 'f1-score': 0.5899749373433584, 'support': 4066.0} | {'precision': 0.7848036715961244, 'recall': 0.7633928571428571, 'f1-score': 0.7739502137289415, 'support': 2016.0} | {'precision': 0.8792672100718263, 'recall': 0.8933989339893399, 'f1-score': 0.8862767428617913, 'support': 12195.0} | {'precision': 0.9612968591691996, 'recall': 0.9631509491422191, 'f1-score': 0.9622230110034988, 'support': 9851.0} | 0.8558 | {'precision': 0.7825355343908367, 'recall': 0.6617713841193258, 'f1-score': 0.6686863935965496, 'support': 29261.0} | {'precision': 0.8550112416363399, 'recall': 0.855780732032398, 'f1-score': 0.8533403597564397, 'support': 29261.0} |
81
+ | No log | 5.0 | 205 | 0.3818 | {'precision': 0.5588235294117647, 'recall': 0.46830985915492956, 'f1-score': 0.5095785440613027, 'support': 284.0} | {'precision': 0.8787878787878788, 'recall': 0.20567375886524822, 'f1-score': 0.3333333333333333, 'support': 141.0} | {'precision': 0.7287735849056604, 'recall': 0.8728813559322034, 'f1-score': 0.794344473007712, 'support': 708.0} | {'precision': 0.6021926389976507, 'recall': 0.5673880964092474, 'f1-score': 0.5842725085475498, 'support': 4066.0} | {'precision': 0.7885196374622356, 'recall': 0.7767857142857143, 'f1-score': 0.782608695652174, 'support': 2016.0} | {'precision': 0.8760707709550877, 'recall': 0.8973349733497334, 'f1-score': 0.8865753868589484, 'support': 12195.0} | {'precision': 0.9648159446817165, 'recall': 0.9631509491422191, 'f1-score': 0.9639827279654559, 'support': 9851.0} | 0.8573 | {'precision': 0.7711405693145706, 'recall': 0.6787892438770422, 'f1-score': 0.693527952775211, 'support': 29261.0} | {'precision': 0.8552285449410628, 'recall': 0.8572502648576603, 'f1-score': 0.8549088561404111, 'support': 29261.0} |
82
 
83
 
84
  ### Framework versions
85
 
86
+ - Transformers 4.45.2
87
+ - Pytorch 2.5.0+cu124
88
+ - Datasets 2.19.1
89
+ - Tokenizers 0.20.1
meta_data/meta_s42_e5_cvi0.json CHANGED
@@ -1 +1 @@
1
- {"B-Claim": {"precision": 0.49624060150375937, "recall": 0.2323943661971831, "f1-score": 0.31654676258992803, "support": 284.0}, "B-MajorClaim": {"precision": 1.0, "recall": 0.014184397163120567, "f1-score": 0.027972027972027972, "support": 141.0}, "B-Premise": {"precision": 0.6877777777777778, "recall": 0.8742937853107344, "f1-score": 0.7699004975124378, "support": 708.0}, "I-Claim": {"precision": 0.6374125874125874, "recall": 0.4483521888834235, "f1-score": 0.5264221773029165, "support": 4066.0}, "I-MajorClaim": {"precision": 0.7599795291709315, "recall": 0.7366071428571429, "f1-score": 0.7481108312342569, "support": 2016.0}, "I-Premise": {"precision": 0.843370836090889, "recall": 0.9404674046740468, "f1-score": 0.8892765759478949, "support": 12195.0}, "O": {"precision": 0.9602568022011617, "recall": 0.9565526342503299, "f1-score": 0.9584011391375101, "support": 9851.0}, "accuracy": 0.8505177540070401, "macro avg": {"precision": 0.7692911620224437, "recall": 0.6004074170479973, "f1-score": 0.6052328588138531, "support": 29261.0}, "weighted avg": {"precision": 0.841977868607838, "recall": 0.8505177540070401, "f1-score": 0.8398036418020065, "support": 29261.0}}
 
1
+ {"B-Claim": {"precision": 0.5588235294117647, "recall": 0.46830985915492956, "f1-score": 0.5095785440613027, "support": 284.0}, "B-MajorClaim": {"precision": 0.8787878787878788, "recall": 0.20567375886524822, "f1-score": 0.3333333333333333, "support": 141.0}, "B-Premise": {"precision": 0.7287735849056604, "recall": 0.8728813559322034, "f1-score": 0.794344473007712, "support": 708.0}, "I-Claim": {"precision": 0.6021926389976507, "recall": 0.5673880964092474, "f1-score": 0.5842725085475498, "support": 4066.0}, "I-MajorClaim": {"precision": 0.7885196374622356, "recall": 0.7767857142857143, "f1-score": 0.782608695652174, "support": 2016.0}, "I-Premise": {"precision": 0.8760707709550877, "recall": 0.8973349733497334, "f1-score": 0.8865753868589484, "support": 12195.0}, "O": {"precision": 0.9648159446817165, "recall": 0.9631509491422191, "f1-score": 0.9639827279654559, "support": 9851.0}, "accuracy": 0.8572502648576603, "macro avg": {"precision": 0.7711405693145706, "recall": 0.6787892438770422, "f1-score": 0.693527952775211, "support": 29261.0}, "weighted avg": {"precision": 0.8552285449410628, "recall": 0.8572502648576603, "f1-score": 0.8549088561404111, "support": 29261.0}}
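The dictionary above follows the shape of scikit-learn's `classification_report(..., output_dict=True)`: per-label precision/recall/F1/support plus `accuracy`, `macro avg`, and `weighted avg`. A minimal sketch of producing such a report from flattened token labels, assuming that is indeed the tooling; the toy `y_true`/`y_pred` lists are placeholders, not the actual evaluation data.

```python
from sklearn.metrics import classification_report

# Toy labels only; in practice these are the flattened gold and predicted
# token tags for the evaluation split.
y_true = ["O", "B-Claim", "I-Claim", "I-Claim", "O", "B-Premise", "I-Premise"]
y_pred = ["O", "B-Claim", "I-Claim", "O", "O", "B-Premise", "I-Premise"]

report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
print(report["macro avg"])  # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
```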
model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b7fc971f72dde15b05153d3a4ad8194c9c4868a397e8399ce24ee7abffa2a087
3
  size 592330980
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d30c23be2928784a7f3e7fc8e81454da6a65a6a68935393ae19ac842976b75b
3
  size 592330980