judithrosell committed
Commit cbe5558
1 Parent(s): 9c13aa4

End of training

Files changed (1): README.md (+109, -0)

README.md ADDED

---
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
model-index:
- name: CRAFT_bioBERT_NER
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# CRAFT_bioBERT_NER

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the CRAFT (Colorado Richly Annotated Full Text) corpus.
It achieves the following results on the evaluation set:
- Loss: 0.1106
- Seqeval classification report:

| Label        | Precision | Recall | F1-score | Support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.83      | 0.76   | 0.80     | 1109    |
| CL           | 0.91      | 0.90   | 0.90     | 3871    |
| GGP          | 0.76      | 0.66   | 0.71     | 600     |
| GO           | 0.87      | 0.84   | 0.85     | 1061    |
| SO           | 0.99      | 0.99   | 0.99     | 87954   |
| Taxon        | 0.83      | 0.87   | 0.85     | 3104    |
| micro avg    | 0.98      | 0.97   | 0.97     | 97699   |
| macro avg    | 0.87      | 0.84   | 0.85     | 97699   |
| weighted avg | 0.98      | 0.97   | 0.97     | 97699   |
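
The report above uses the format produced by the seqeval library's `classification_report`. As a minimal illustration of how such a report is generated, the sketch below uses invented IOB2 label sequences, not this model's actual predictions:

```python
# Minimal illustration of producing a seqeval classification report.
# The IOB2-tagged sequences below are invented examples, not model output.
from seqeval.metrics import classification_report

y_true = [["B-CHEBI", "I-CHEBI", "O", "B-Taxon", "O"],
          ["B-GGP", "O", "B-GO", "I-GO"]]
y_pred = [["B-CHEBI", "I-CHEBI", "O", "B-Taxon", "O"],
          ["B-GGP", "O", "B-GO", "O"]]

# Prints per-label precision/recall/F1/support plus micro, macro,
# and weighted averages, as shown in the table above.
print(classification_report(y_true, y_pred))
```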

## Model description

More information needed

## Intended uses & limitations

More information needed
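
As a starting point for inference, the checkpoint can be loaded with the `transformers` token-classification pipeline. This is an untested sketch: the Hub model ID `judithrosell/CRAFT_bioBERT_NER` is inferred from the repository author and name, and `aggregation_strategy="simple"` is a generic default rather than a setting documented in this card.

```python
# Untested sketch: run biomedical NER with the fine-tuned checkpoint.
# The model ID is inferred from this repo's author/name, not confirmed here.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="judithrosell/CRAFT_bioBERT_NER",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Mutations in the Shh gene alter limb development in mice."))
```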

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
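
As a minimal sketch, these settings map onto `transformers.TrainingArguments` roughly as follows, assuming the listed batch sizes are per device; `output_dir` is a placeholder, not a path from this card:

```python
# Sketch only: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CRAFT_bioBERT_NER",  # placeholder output directory
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,   # 16 x 2 = total train batch size 32
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```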

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 347  | 0.1141          |
| 0.1705        | 2.0   | 695  | 0.1121          |
| 0.04          | 3.0   | 1041 | 0.1106          |

Per-epoch seqeval classification reports:

**Epoch 1 (step 347, validation loss 0.1141):**

| Label        | Precision | Recall | F1-score | Support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.82      | 0.65   | 0.72     | 1109    |
| CL           | 0.90      | 0.87   | 0.89     | 3871    |
| GGP          | 0.75      | 0.62   | 0.68     | 600     |
| GO           | 0.88      | 0.77   | 0.82     | 1061    |
| SO           | 0.99      | 0.99   | 0.99     | 87954   |
| Taxon        | 0.79      | 0.88   | 0.83     | 3104    |
| micro avg    | 0.97      | 0.97   | 0.97     | 97699   |
| macro avg    | 0.86      | 0.80   | 0.82     | 97699   |
| weighted avg | 0.97      | 0.97   | 0.97     | 97699   |

**Epoch 2 (step 695, validation loss 0.1121):**

| Label        | Precision | Recall | F1-score | Support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.86      | 0.73   | 0.79     | 1109    |
| CL           | 0.90      | 0.90   | 0.90     | 3871    |
| GGP          | 0.73      | 0.65   | 0.69     | 600     |
| GO           | 0.87      | 0.82   | 0.85     | 1061    |
| SO           | 0.99      | 0.99   | 0.99     | 87954   |
| Taxon        | 0.79      | 0.89   | 0.84     | 3104    |
| micro avg    | 0.97      | 0.97   | 0.97     | 97699   |
| macro avg    | 0.86      | 0.83   | 0.84     | 97699   |
| weighted avg | 0.97      | 0.97   | 0.97     | 97699   |

**Epoch 3 (step 1041, validation loss 0.1106):**

| Label        | Precision | Recall | F1-score | Support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| CHEBI        | 0.83      | 0.76   | 0.80     | 1109    |
| CL           | 0.91      | 0.90   | 0.90     | 3871    |
| GGP          | 0.76      | 0.66   | 0.71     | 600     |
| GO           | 0.87      | 0.84   | 0.85     | 1061    |
| SO           | 0.99      | 0.99   | 0.99     | 87954   |
| Taxon        | 0.83      | 0.87   | 0.85     | 3104    |
| micro avg    | 0.98      | 0.97   | 0.97     | 97699   |
| macro avg    | 0.87      | 0.84   | 0.85     | 97699   |
| weighted avg | 0.98      | 0.97   | 0.97     | 97699   |

### Framework versions

- Transformers 4.35.2
- PyTorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0