Commit 2fc1c50 by gokuls (parent: a562996): update model card README.md
Files changed: README.md (+102 lines, new file)
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_stsb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: stsb
      split: validation
      args: stsb
    metrics:
    - name: Spearmanr
      type: spearmanr
      value: 0.4507892146083376
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hBERTv1_new_pretrain_48_emb_com_stsb

This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE STS-B dataset (sentence-pair similarity regression).
It achieves the following results on the evaluation set:
- Loss: 1.9571
- Pearson: 0.4588
- Spearmanr: 0.4508
- Combined Score: 0.4548

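The combined score reported above is consistent with the arithmetic mean of the Pearson and Spearman correlations, which is the convention the standard GLUE STS-B evaluation uses: (0.4588 + 0.4508) / 2 = 0.4548. A minimal pure-Python sketch of these metrics (the helper names are illustrative, not from the training code):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation = Pearson on the ranks.
    Ties get the first matching rank here, not fractional ranks as in SciPy."""
    rank = lambda v: [sorted(v).index(a) for a in v]
    return pearson(rank(x), rank(y))

def combined_score(preds, labels):
    """Mean of Pearson and Spearman, as reported for STS-B."""
    return (pearson(preds, labels) + spearman(preds, labels)) / 2
```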
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

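The linear scheduler decays the learning rate from 4e-05 toward zero over the total number of training steps. A minimal sketch of such a schedule, under two assumptions not stated in the card: the warmup step count (the `warmup_steps` parameter here is hypothetical), and a total of 2250 steps inferred from the results table (45 steps/epoch × num_epochs = 50):

```python
def linear_lr(step: int, total_steps: int = 2250, base_lr: float = 4e-05,
              warmup_steps: int = 0) -> float:
    """Linear learning-rate schedule: optional linear warmup to base_lr,
    then linear decay to 0 at total_steps.

    total_steps=2250 is inferred (45 steps/epoch x 50 epochs) and
    warmup_steps is a hypothetical parameter, not reported in the card.
    """
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```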
### Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5817 | 1.0 | 45 | 2.6028 | 0.2027 | 0.1896 | 0.1962 |
| 2.1023 | 2.0 | 90 | 2.1596 | 0.2035 | 0.1938 | 0.1986 |
| 1.9567 | 3.0 | 135 | 2.3409 | 0.1855 | 0.1931 | 0.1893 |
| 1.7201 | 4.0 | 180 | 2.1790 | 0.2865 | 0.2934 | 0.2899 |
| 1.5153 | 5.0 | 225 | 2.1208 | 0.3381 | 0.3352 | 0.3367 |
| 1.2674 | 6.0 | 270 | 2.1224 | 0.3882 | 0.3898 | 0.3890 |
| 1.0115 | 7.0 | 315 | 2.2253 | 0.4304 | 0.4281 | 0.4293 |
| 0.7449 | 8.0 | 360 | 2.3235 | 0.4236 | 0.4323 | 0.4279 |
| 0.66 | 9.0 | 405 | 2.3617 | 0.4340 | 0.4351 | 0.4346 |
| 0.4678 | 10.0 | 450 | 2.0741 | 0.4300 | 0.4258 | 0.4279 |
| 0.4438 | 11.0 | 495 | 2.3816 | 0.4285 | 0.4294 | 0.4289 |
| 0.3192 | 12.0 | 540 | 2.1673 | 0.4580 | 0.4602 | 0.4591 |
| 0.2481 | 13.0 | 585 | 2.1544 | 0.4392 | 0.4357 | 0.4374 |
| 0.2296 | 14.0 | 630 | 2.0075 | 0.4603 | 0.4582 | 0.4593 |
| 0.1765 | 15.0 | 675 | 2.1395 | 0.4624 | 0.4617 | 0.4621 |
| 0.1533 | 16.0 | 720 | 2.2715 | 0.4512 | 0.4427 | 0.4469 |
| 0.1343 | 17.0 | 765 | 2.1726 | 0.4441 | 0.4417 | 0.4429 |
| 0.1373 | 18.0 | 810 | 2.0223 | 0.4532 | 0.4424 | 0.4478 |
| 0.1277 | 19.0 | 855 | 1.9992 | 0.4395 | 0.4299 | 0.4347 |
| 0.0968 | 20.0 | 900 | 2.1078 | 0.4620 | 0.4601 | 0.4610 |
| 0.084 | 21.0 | 945 | 2.0684 | 0.4627 | 0.4577 | 0.4602 |
| 0.0777 | 22.0 | 990 | 1.9214 | 0.4648 | 0.4600 | 0.4624 |
| 0.0572 | 23.0 | 1035 | 2.0636 | 0.4506 | 0.4422 | 0.4464 |
| 0.0615 | 24.0 | 1080 | 2.0404 | 0.4489 | 0.4388 | 0.4438 |
| 0.0516 | 25.0 | 1125 | 2.0599 | 0.4516 | 0.4435 | 0.4475 |
| 0.0501 | 26.0 | 1170 | 2.0359 | 0.4530 | 0.4489 | 0.4510 |
| 0.0515 | 27.0 | 1215 | 1.9571 | 0.4588 | 0.4508 | 0.4548 |

### Framework versions

- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
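Since the card's usage sections are still empty, here is a hedged inference sketch, assuming the checkpoint loads with the standard `transformers` Auto classes as a single-logit regression model (the model id is taken from the card header; the `similarity` helper is my own):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "gokuls/hBERTv1_new_pretrain_48_emb_com_stsb"

def similarity(sentence1: str, sentence2: str) -> float:
    """Return the predicted STS-B similarity score (roughly 0-5) for a
    sentence pair. Downloads the checkpoint from the Hugging Face Hub."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

# Hypothetical usage (requires downloading the checkpoint):
# score = similarity("A man is playing guitar.", "Someone plays a guitar.")
```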