lapp0 committed on
Commit ffddc25
1 Parent(s): f5392e4

End of training

README.md CHANGED
@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
- - eval_enwikippl: 210.2820
- - eval_frwikippl: 1274.1346
- - eval_zhwikippl: 583.2827
- - eval_loss: 1.2965
- - eval_runtime: 17.2526
- - eval_samples_per_second: 57.962
- - eval_steps_per_second: 7.245
+ - eval_enwikippl: 248.6255
+ - eval_frwikippl: 1465.2275
+ - eval_zhwikippl: 910.6450
+ - eval_loss: 1.4609
+ - eval_runtime: 17.1765
+ - eval_samples_per_second: 58.219
+ - eval_steps_per_second: 7.277
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=2.0, loss_fn=mse, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
+ - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=2.0, loss_fn=mse_sum, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
 - train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 8
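
The `distillation_objective` line above is the substantive change in this commit: the hidden-state loss switches from `mse` to `mse_sum`, while the KL loss on logits (weight 1) and the hidden-state weight (2.0) stay the same. The following is a rough, hypothetical sketch of what such an objective computes, written in plain PyTorch; it is not Distily's actual implementation, and all names are illustrative.

```python
# Hypothetical sketch of the configured objective (not Distily's code):
# KL divergence on logits (weight 1) plus MSE on hidden states (weight 2.0).
# This commit switches the hidden-state term from mean- to sum-reduced MSE.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hs, teacher_hs,
                      hs_weight=2.0, hs_reduction="sum"):
    # Logits component: KL(teacher || student) over the vocabulary.
    logits_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Hidden-state component: MSE between corresponding layers
    # ("mse" would correspond to reduction="mean", "mse_sum" to "sum").
    hs_loss = sum(
        F.mse_loss(s, t, reduction=hs_reduction)
        for s, t in zip(student_hs, teacher_hs)
    )
    # The attention component has weight 0 in this run, so it is omitted.
    return 1.0 * logits_loss + hs_weight * hs_loss
```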
@@ -56,26 +56,26 @@ The following hyperparameters were used during training:
 - num_epochs: 1.0
 
 ### Resource Usage
- Peak GPU Memory: 8.0904 GB
+ Peak GPU Memory: 8.0903 GB
 
 ### Eval-Phase Metrics
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
- | 0 | 0 | 58037.3203 | 58017.0117 | 6.0237 | 17.2607 | 57.935 | 7.242 | 56038.0625 |
- | 1000 | 0.0808 | 715.0994 | 4658.6846 | 2.0131 | 17.1734 | 58.23 | 7.279 | 16350.8623 |
- | 2000 | 0.1616 | 508.9246 | 3343.2109 | 1.8201 | 17.2004 | 58.138 | 7.267 | 3102.6990 |
- | 3000 | 0.2424 | 419.7101 | 2552.4004 | 1.7020 | 17.1441 | 58.329 | 7.291 | 1042.4126 |
- | 4000 | 0.3232 | 361.0421 | 2336.7490 | 1.6177 | 17.0616 | 58.611 | 7.326 | 911.8621 |
- | 5000 | 0.4040 | 313.2633 | 1815.2219 | 1.5316 | 17.1786 | 58.212 | 7.276 | 863.9713 |
- | 6000 | 0.4848 | 281.3860 | 1725.1301 | 1.4597 | 17.3168 | 57.747 | 7.218 | 705.6341 |
- | 7000 | 0.5657 | 253.9131 | 1485.6165 | 1.3999 | 17.1434 | 58.332 | 7.291 | 605.2624 |
- | 8000 | 0.6465 | 229.4073 | 1427.2965 | 1.3455 | 17.134 | 58.363 | 7.295 | 629.6656 |
- | 9000 | 0.7273 | 210.2820 | 1274.1346 | 1.2965 | 17.2526 | 57.962 | 7.245 | 583.2827 |
- | 10000 | 0.8081 | 194.6313 | 1199.3423 | 1.2490 | 17.1679 | 58.248 | 7.281 | 677.5621 |
- | 11000 | 0.8889 | 180.3274 | 1160.25 | 1.1980 | 17.1591 | 58.278 | 7.285 | 758.1945 |
- | 12000 | 0.9697 | 164.7045 | 1005.8066 | 1.1583 | 17.1824 | 58.199 | 7.275 | 600.1918 |
- | 12375 | 1.0 | 161.0243 | 969.7354 | 1.1403 | 17.1939 | 58.16 | 7.27 | 632.9536 |
+ | 0 | 0 | 55922.5859 | 57739.5352 | 7.7238 | 17.0878 | 58.521 | 7.315 | 57324.7461 |
+ | 1000 | 0.0808 | 920.8879 | 4922.0625 | 2.3268 | 17.0862 | 58.527 | 7.316 | 22360.4277 |
+ | 2000 | 0.1616 | 620.1771 | 3480.3069 | 2.0488 | 17.0703 | 58.581 | 7.323 | 9003.6973 |
+ | 3000 | 0.2424 | 497.6692 | 3095.0298 | 1.9112 | 17.0714 | 58.578 | 7.322 | 2670.2615 |
+ | 4000 | 0.3232 | 420.1666 | 2924.0510 | 1.7926 | 17.0681 | 58.589 | 7.324 | 1505.7640 |
+ | 5000 | 0.4040 | 363.9979 | 2463.9880 | 1.6927 | 17.0999 | 58.48 | 7.31 | 1190.3823 |
+ | 6000 | 0.4848 | 321.6444 | 2008.3508 | 1.6180 | 17.0952 | 58.496 | 7.312 | 2308.5518 |
+ | 7000 | 0.5657 | 288.7571 | 1772.2247 | 1.5521 | 17.1061 | 58.459 | 7.307 | 943.5735 |
+ | 8000 | 0.6465 | 268.0555 | 1636.7375 | 1.5025 | 17.0661 | 58.596 | 7.324 | 1002.2805 |
+ | 9000 | 0.7273 | 248.6255 | 1465.2275 | 1.4609 | 17.1765 | 58.219 | 7.277 | 910.6450 |
+ | 10000 | 0.8081 | 230.5145 | 1351.8748 | 1.4215 | 17.0631 | 58.606 | 7.326 | 754.1554 |
+ | 11000 | 0.8889 | 218.0646 | 1356.4580 | 1.3820 | 17.0844 | 58.533 | 7.317 | 892.8242 |
+ | 12000 | 0.9697 | 200.7094 | 1234.1702 | 1.3464 | 17.0571 | 58.627 | 7.328 | 822.1012 |
+ | 12375 | 1.0 | 195.8138 | 1216.7174 | 1.3332 | 17.1185 | 58.416 | 7.302 | 906.7622 |
 
 ### Framework versions
 - Distily 0.2.0
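
For reference, the `enwikippl`, `frwikippl`, and `zhwikippl` columns above are perplexities on English, French, and Chinese Wikipedia evaluation text, i.e. the exponential of the mean next-token cross-entropy; lower is better, and the **teacher eval** row gives the gpt2 teacher's values. Below is a minimal, hypothetical sketch of that computation with `transformers`; it is not the evaluation harness used here, and the model id and text handling are placeholders.

```python
# Minimal perplexity sketch (assumption: not Distily's eval harness).
# Perplexity = exp(mean next-token cross-entropy) over an evaluation corpus.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder; substitute the distilled student checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

def perplexity(texts, max_length=1024):
    nll, n_tokens = 0.0, 0
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt",
                            truncation=True, max_length=max_length)
            out = model(**enc, labels=enc["input_ids"])
            n = enc["input_ids"].numel() - 1  # number of next-token targets
            nll += out.loss.item() * n        # loss is the mean NLL per token
            n_tokens += n
    return math.exp(nll / n_tokens)
```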
 
logs/hs_loss_fn=mse_sum, hs_weight=2.0/events.out.tfevents.1723663889.5f530b1cf724 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:011251518598d8a096c7b6288ae3464baee7562f5274f5d03cab848e4a78e1b0
+ size 249
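
The added TensorBoard events file is tracked with Git LFS, so the three `+` lines above are the LFS pointer (spec version, SHA-256 object id, and size in bytes), not the event data itself. A small, hypothetical sketch of reading such a pointer from a checkout where it has not yet been replaced by the real file:

```python
# Sketch: parse a Git LFS pointer file (version / oid / size key-value lines).
# Only meaningful while the path still holds the pointer, not the real content.
def parse_lfs_pointer(path):
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return fields  # e.g. {"version": "...", "oid": "sha256:...", "size": "249"}
```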