Ghunghru committed
Commit 2b16ffb · verified · 1 Parent(s): 102a891

End of training

README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ license: mit
+ base_model: bert-base-german-cased
+ tags:
+ - generated_from_trainer
+ metrics:
+ - f1
+ model-index:
+ - name: Misinformation-Covid-LowLearningRatebert-base-german-cased
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Misinformation-Covid-LowLearningRatebert-base-german-cased
+
+ This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5151
+ - F1: 0.3793
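+
+ For quick experimentation, the fine-tuned checkpoint can be loaded through the standard `transformers` text-classification pipeline. The sketch below is illustrative only: the Hub repository id is an assumption and should be replaced with this model's actual path.
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed Hub repository id; substitute the real path if it differs.
+ classifier = pipeline(
+     "text-classification",
+     model="Ghunghru/Misinformation-Covid-LowLearningRatebert-base-german-cased",
+ )
+
+ # German input; labels follow config.json's id2label mapping:
+ # "No misinformation" / "Potential misinformation".
+ print(classifier("Die Impfung verändert angeblich das menschliche Erbgut."))
+ ```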
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 50
+
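+ These settings map directly onto `transformers.TrainingArguments`. A minimal sketch of an equivalent configuration (the output directory is a placeholder, and per-epoch evaluation is an assumption inferred from the results table below):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Mirrors the hyperparameters reported above; the Adam betas/epsilon
+ # shown here are also the library defaults.
+ training_args = TrainingArguments(
+     output_dir="output",          # placeholder
+     learning_rate=2e-07,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-08,
+     lr_scheduler_type="linear",
+     num_train_epochs=50,
+     evaluation_strategy="epoch",  # assumption: one eval per epoch
+ )
+ ```
+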
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | 0.6534 | 1.0 | 189 | 0.6298 | 0.1000 |
+ | 0.6467 | 2.0 | 378 | 0.6222 | 0.1379 |
+ | 0.6302 | 3.0 | 567 | 0.6121 | 0.0784 |
+ | 0.6259 | 4.0 | 756 | 0.6042 | 0.0870 |
+ | 0.6255 | 5.0 | 945 | 0.5987 | 0.0870 |
+ | 0.6091 | 6.0 | 1134 | 0.5922 | 0.0909 |
+ | 0.6237 | 7.0 | 1323 | 0.5881 | 0.1224 |
+ | 0.6019 | 8.0 | 1512 | 0.5826 | 0.1277 |
+ | 0.6038 | 9.0 | 1701 | 0.5779 | 0.2000 |
+ | 0.5996 | 10.0 | 1890 | 0.5730 | 0.1961 |
+ | 0.5858 | 11.0 | 2079 | 0.5678 | 0.2353 |
+ | 0.5794 | 12.0 | 2268 | 0.5636 | 0.2400 |
+ | 0.5806 | 13.0 | 2457 | 0.5587 | 0.2264 |
+ | 0.5586 | 14.0 | 2646 | 0.5548 | 0.2400 |
+ | 0.5682 | 15.0 | 2835 | 0.5514 | 0.2400 |
+ | 0.5631 | 16.0 | 3024 | 0.5471 | 0.2353 |
+ | 0.5603 | 17.0 | 3213 | 0.5425 | 0.2593 |
+ | 0.5437 | 18.0 | 3402 | 0.5393 | 0.2593 |
+ | 0.5439 | 19.0 | 3591 | 0.5368 | 0.2642 |
+ | 0.5470 | 20.0 | 3780 | 0.5329 | 0.2909 |
+ | 0.5408 | 21.0 | 3969 | 0.5297 | 0.3158 |
+ | 0.5327 | 22.0 | 4158 | 0.5270 | 0.3158 |
+ | 0.5194 | 23.0 | 4347 | 0.5256 | 0.3214 |
+ | 0.5206 | 24.0 | 4536 | 0.5227 | 0.3214 |
+ | 0.5160 | 25.0 | 4725 | 0.5205 | 0.3214 |
+ | 0.5103 | 26.0 | 4914 | 0.5191 | 0.3214 |
+ | 0.5037 | 27.0 | 5103 | 0.5172 | 0.3214 |
+ | 0.4974 | 28.0 | 5292 | 0.5180 | 0.3214 |
+ | 0.5116 | 29.0 | 5481 | 0.5156 | 0.3214 |
+ | 0.5006 | 30.0 | 5670 | 0.5150 | 0.3214 |
+ | 0.5090 | 31.0 | 5859 | 0.5141 | 0.3214 |
+ | 0.4832 | 32.0 | 6048 | 0.5150 | 0.3273 |
+ | 0.4877 | 33.0 | 6237 | 0.5133 | 0.3214 |
+ | 0.4900 | 34.0 | 6426 | 0.5131 | 0.3158 |
+ | 0.4827 | 35.0 | 6615 | 0.5143 | 0.3214 |
+ | 0.4986 | 36.0 | 6804 | 0.5125 | 0.3214 |
+ | 0.4794 | 37.0 | 6993 | 0.5131 | 0.3793 |
+ | 0.4809 | 38.0 | 7182 | 0.5137 | 0.3793 |
+ | 0.4929 | 39.0 | 7371 | 0.5114 | 0.3793 |
+ | 0.4650 | 40.0 | 7560 | 0.5135 | 0.3793 |
+ | 0.4867 | 41.0 | 7749 | 0.5121 | 0.3793 |
+ | 0.4685 | 42.0 | 7938 | 0.5129 | 0.3793 |
+ | 0.4643 | 43.0 | 8127 | 0.5142 | 0.3793 |
+ | 0.4804 | 44.0 | 8316 | 0.5144 | 0.3793 |
+ | 0.4779 | 45.0 | 8505 | 0.5141 | 0.3793 |
+ | 0.4701 | 46.0 | 8694 | 0.5139 | 0.3793 |
+ | 0.4619 | 47.0 | 8883 | 0.5146 | 0.3793 |
+ | 0.4558 | 48.0 | 9072 | 0.5151 | 0.3793 |
+ | 0.4824 | 49.0 | 9261 | 0.5152 | 0.3793 |
+ | 0.4758 | 50.0 | 9450 | 0.5151 | 0.3793 |
+
+
+ ### Framework versions
+
+ - Transformers 4.32.1
+ - PyTorch 2.1.2
+ - Datasets 2.12.0
+ - Tokenizers 0.13.3
config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "_name_or_path": "bert-base-german-cased",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "No misinformation",
+     "1": "Potential misinformation"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "No misinformation": 0,
+     "Potential misinformation": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.32.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30000
+ }
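
The config defines a binary sequence-classification head with human-readable labels. These mappings can be inspected programmatically; a minimal sketch, assuming the same Hub repository id as in the pipeline example above:

```python
from transformers import AutoConfig

# Assumed Hub repository id; the label mapping comes from config.json.
config = AutoConfig.from_pretrained(
    "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-german-cased"
)
print(config.id2label)    # {0: 'No misinformation', 1: 'Potential misinformation'}
print(config.num_labels)  # 2
```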
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bc622ce0dffcabc53e192ba2be013b3a4a297668b2928eac816a8ab40cc25cc
+ size 436400366
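
The three lines above are a Git LFS pointer, not the weights themselves: they record the spec version, the SHA-256 of the real ~436 MB weight file, and its size in bytes. After fetching the actual file (for example with `git lfs pull`), the checksum can be verified locally; a minimal sketch, assuming the file sits in the working directory:

```python
import hashlib

# Expected digest taken from the LFS pointer's oid field above.
EXPECTED = "3bc622ce0dffcabc53e192ba2be013b3a4a297668b2928eac816a8ab40cc25cc"

# Stream the file in 1 MB chunks to avoid loading ~436 MB at once.
h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum mismatch: incomplete LFS download?"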
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
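
The tokenizer is a cased WordPiece `BertTokenizer` (`do_lower_case: false`) capped at 512 tokens, matching the base model. A minimal sketch of loading it, again under the assumed repository id from the examples above:

```python
from transformers import AutoTokenizer

# Assumed Hub repository id.
tokenizer = AutoTokenizer.from_pretrained(
    "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-german-cased"
)

# Cased vocabulary: "Berlin" and "berlin" tokenize differently.
print(tokenizer.tokenize("Berlin ist nicht berlin."))
print(tokenizer.model_max_length)  # 512
```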
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fd92ff64dfe4e93ad36a2373e89058ed6a0900403914319e89a156f8ed3dae5
+ size 4536
vocab.txt ADDED
The diff for this file is too large to render. See raw diff