Commit 2821a7f by Lagadro (parent: 12ff74b)

Upload README.md with huggingface_hub

Files changed (1): README.md (+100, −0)
---
language:
- tr
license: mit
base_model: dbmdz/bert-base-turkish-cased
datasets:
- wikiann
- tr
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-turkish-cased-None
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wikiann
      type: wikiann
      args: default
    metrics:
    - name: precision
      type: precision
      value: 0.9026122547249308
    - name: recall
      type: recall
      value: 0.9218096877305139
    - name: f1
      type: f1
      value: 0.912109968979989
    - name: accuracy
      type: accuracy
      value: 0.9604539478979423
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: tr
      type: tr
    metrics:
    - name: precision
      type: precision
      value: 0.9026122547249308
    - name: recall
      type: recall
      value: 0.9218096877305139
    - name: f1
      type: f1
      value: 0.912109968979989
    - name: accuracy
      type: accuracy
      value: 0.9604539478979423
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-base-turkish-cased-None

This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the wikiann and tr datasets.
It achieves the following results on the evaluation set:
- precision: 0.9026
- recall: 0.9218
- f1: 0.9121
- accuracy: 0.9605

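The card ships without a usage snippet. Since wikiann is a named-entity-recognition dataset tagged with PER/ORG/LOC in BIO format, a minimal sketch is a helper that merges per-token BIO labels into entity spans, with the model itself applied via the `token-classification` pipeline (the repo id below is a guess inferred from the card name and commit author, not stated in the card):

```python
def merge_bio(tokens, labels):
    """Merge per-token BIO labels (wikiann uses PER/ORG/LOC) into
    a list of (entity_type, text) spans."""
    spans = []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            # "B-" opens a new entity span
            spans.append((lab[2:], [tok]))
        elif lab.startswith("I-") and spans and spans[-1][0] == lab[2:]:
            # "I-" extends the current span of the same type
            spans[-1][1].append(tok)
        # "O" tokens and stray I- tags open no span
    return [(etype, " ".join(toks)) for etype, toks in spans]

# Applying the fine-tuned model (hypothetical repo id -- adjust as needed):
# from transformers import pipeline
# ner = pipeline("token-classification",
#                model="Lagadro/bert-base-turkish-cased-None",
#                aggregation_strategy="simple")
# ner("Mustafa Kemal Atatürk Selanik'te doğdu.")
```

With `aggregation_strategy="simple"` the pipeline performs the same span merging internally; the helper above is useful when working from raw per-token logits instead.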
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 16
- eval_batch_size: 32
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True

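The card does not say which training loop was used (the `weight_decay_rate` naming suggests the Keras-style trainer). Assuming the PyTorch `Trainer` API, the listed values map onto `transformers.TrainingArguments` roughly as:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the card's listed hyperparameters; the output
# directory is hypothetical and the exact training loop is an assumption.
args = TrainingArguments(
    output_dir="bert-base-turkish-cased-ner",
    num_train_epochs=5,
    per_device_train_batch_size=16,   # card: train_batch_size
    per_device_eval_batch_size=32,    # card: eval_batch_size
    learning_rate=2e-5,
    weight_decay=0.01,                # card: weight_decay_rate
    warmup_steps=0,                   # card: num_warmup_steps
    fp16=True,
)
```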
### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2