lapp0 committed (verified)
Commit 726221e · 1 Parent(s): b01fea1

End of training
README.md CHANGED
@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
- - eval_enwikippl: 228.1461
- - eval_frwikippl: 1416.6694
- - eval_zhwikippl: 848.6490
- - eval_loss: 2.4667
- - eval_runtime: 17.2058
- - eval_samples_per_second: 58.12
- - eval_steps_per_second: 7.265
+ - eval_enwikippl: 207.4599
+ - eval_frwikippl: 1342.3768
+ - eval_zhwikippl: 657.2436
+ - eval_loss: 1.3331
+ - eval_runtime: 17.3049
+ - eval_samples_per_second: 57.787
+ - eval_steps_per_second: 7.223
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=cos, layer_mapper=None, projector=None))
+ - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=kl, layer_mapper=None, projector=None))
 - train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 8
@@ -62,20 +62,20 @@ Peak GPU Memory: 8.2195 GB
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
- | 0 | 0 | 56797.875 | 58468.6992 | 8.0273 | 17.152 | 58.302 | 7.288 | 59002.2891 |
- | 1000 | 0.0808 | 797.1624 | 5157.9775 | 3.3194 | 17.2397 | 58.006 | 7.251 | 24401.0566 |
- | 2000 | 0.1616 | 567.0632 | 3629.1594 | 3.0871 | 17.1941 | 58.16 | 7.27 | 3184.9797 |
- | 3000 | 0.2424 | 464.5085 | 3017.8862 | 2.9667 | 17.2095 | 58.108 | 7.263 | 1129.6726 |
- | 4000 | 0.3232 | 401.2574 | 2690.6233 | 2.8541 | 17.2873 | 57.846 | 7.231 | 880.7457 |
- | 5000 | 0.4040 | 348.5625 | 2427.4329 | 2.7534 | 17.2981 | 57.81 | 7.226 | 1079.5291 |
- | 6000 | 0.4848 | 304.7929 | 2054.1772 | 2.6701 | 17.2106 | 58.104 | 7.263 | 904.3437 |
- | 7000 | 0.5657 | 277.6311 | 1738.0712 | 2.5931 | 17.2745 | 57.889 | 7.236 | 861.2068 |
- | 8000 | 0.6465 | 248.1049 | 1555.2847 | 2.5229 | 17.2275 | 58.047 | 7.256 | 875.1184 |
- | 9000 | 0.7273 | 228.1461 | 1416.6694 | 2.4667 | 17.2058 | 58.12 | 7.265 | 848.6490 |
- | 10000 | 0.8081 | 208.8987 | 1238.1790 | 2.4113 | 17.26 | 57.938 | 7.242 | 711.3105 |
- | 11000 | 0.8889 | 194.2086 | 1232.7786 | 2.3591 | 17.2456 | 57.986 | 7.248 | 517.6449 |
- | 12000 | 0.9697 | 175.7651 | 1108.7455 | 2.3060 | 17.3467 | 57.648 | 7.206 | 513.5140 |
- | 12375 | 1.0 | 170.5086 | 1069.4347 | 2.2860 | 17.2133 | 58.095 | 7.262 | 531.0175 |
+ | 0 | 0 | 55429.6875 | 57698.8047 | 6.1743 | 17.396 | 57.484 | 7.186 | 56988.9141 |
+ | 1000 | 0.0808 | 682.2277 | 4513.8320 | 2.0387 | 17.3415 | 57.665 | 7.208 | 21742.0879 |
+ | 2000 | 0.1616 | 493.2831 | 3192.1013 | 1.8548 | 17.3304 | 57.702 | 7.213 | 1917.9761 |
+ | 3000 | 0.2424 | 408.6152 | 2650.7046 | 1.7483 | 17.3279 | 57.71 | 7.214 | 937.2945 |
+ | 4000 | 0.3232 | 362.1653 | 2422.9863 | 1.6582 | 17.3944 | 57.49 | 7.186 | 807.3055 |
+ | 5000 | 0.4040 | 311.0092 | 2075.7251 | 1.5707 | 17.3884 | 57.51 | 7.189 | 967.0451 |
+ | 6000 | 0.4848 | 271.9341 | 1744.2100 | 1.4998 | 17.372 | 57.564 | 7.195 | 798.9407 |
+ | 7000 | 0.5657 | 249.6316 | 1538.4886 | 1.4376 | 17.3071 | 57.78 | 7.222 | 768.1817 |
+ | 8000 | 0.6465 | 225.5740 | 1397.4233 | 1.3836 | 17.3097 | 57.771 | 7.221 | 701.6876 |
+ | 9000 | 0.7273 | 207.4599 | 1342.3768 | 1.3331 | 17.3049 | 57.787 | 7.223 | 657.2436 |
+ | 10000 | 0.8081 | 189.0748 | 1151.9358 | 1.2846 | 17.3724 | 57.563 | 7.195 | 561.3511 |
+ | 11000 | 0.8889 | 173.5948 | 1120.0602 | 1.2337 | 17.3912 | 57.5 | 7.188 | 488.3670 |
+ | 12000 | 0.9697 | 157.5976 | 1006.0906 | 1.1896 | 17.3686 | 57.575 | 7.197 | 640.5209 |
+ | 12375 | 1.0 | 156.4636 | 960.7520 | 1.1773 | 17.446 | 57.32 | 7.165 | 627.6509 |
 
 ### Framework versions
 - Distily 0.2.0
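The substantive change in this commit is the attention component of `distillation_objective`: its `loss_fn` moves from `cos` to `kl`, with the weight of 2.0 unchanged. As a minimal sketch of what the new objective string expresses, assuming Distily's `DistillationObjective` and `LossComponent` accept the keyword arguments exactly as printed in the dump above (the import path and the string-valued `loss_fn` are assumptions, not verified against the library source):

```python
# Minimal sketch reconstructing the logged objective, with the one change
# this commit makes (attention loss_fn: "cos" -> "kl") applied.
# ASSUMPTIONS: the import path and the short string names for loss_fn are
# inferred from the repr above, not confirmed from the distily source.
from distily.objectives import DistillationObjective, LossComponent

objective = DistillationObjective(
    # Logit matching via KL divergence, weight 1 (unchanged by this commit).
    logits_loss_component=LossComponent(
        label="logits", weight=1, loss_fn="kl", layer_mapper=None, projector=None
    ),
    # Hidden-state matching disabled (weight 0, no loss function).
    hs_loss_component=LossComponent(
        label="hs", weight=0, loss_fn=None, layer_mapper=None, projector=None
    ),
    # Attention matching, weight 2.0; this commit swaps loss_fn "cos" -> "kl".
    attn_loss_component=LossComponent(
        label="attn", weight=2.0, loss_fn="kl", layer_mapper=None, projector=None
    ),
)
```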
 
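For intuition on why the swap matters: a cosine loss only aligns the direction of the flattened attention maps, while a KL loss treats each attention row as a probability distribution over keys and directly penalizes misplaced probability mass. The following is a generic PyTorch illustration of the two standard formulations, not Distily's actual implementation:

```python
import torch
import torch.nn.functional as F

def cos_attn_loss(student_attn: torch.Tensor, teacher_attn: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between flattened attention maps (direction only)."""
    s = student_attn.flatten(start_dim=1)  # (batch, heads*queries*keys)
    t = teacher_attn.flatten(start_dim=1)
    return (1.0 - F.cosine_similarity(s, t, dim=-1)).mean()

def kl_attn_loss(student_attn: torch.Tensor, teacher_attn: torch.Tensor,
                 eps: float = 1e-9) -> torch.Tensor:
    """KL(teacher || student), one common convention in distillation.

    Treats each (batch, head, query) row as a distribution over keys,
    so inputs are expected to be post-softmax attention probabilities.
    """
    kl = teacher_attn * (torch.log(teacher_attn + eps) - torch.log(student_attn + eps))
    return kl.sum(dim=-1).mean()
```

Note that the `loss` columns of the two tables are not directly comparable, since the objective itself changed; the perplexity columns, which are, improve at nearly every checkpoint (e.g., step 9000 enwikippl 228.1 → 207.5, zhwikippl 848.6 → 657.2).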
logs/attn_loss_fn=kl, attn_weight=2.0/events.out.tfevents.1723668499.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d97c8b26b2565ba448a23be26007f03fdf0e087ded6b2f3236637202c54726c0
+ size 249
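The added log is stored as a Git LFS pointer rather than as the file itself: the three lines above (spec version, SHA-256 object id, byte size) are the entire pointer format, and the 249-byte TensorBoard event file lives out of band. A sketch for fetching the real file with `huggingface_hub`; the `repo_id` is a placeholder since the diff does not name the repository, while the filename is copied verbatim from the file header:

```python
# Sketch: download the actual event file rather than the LFS pointer.
# The repo_id below is a PLACEHOLDER -- substitute the actual model repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lapp0/<model-repo>",  # placeholder
    filename="logs/attn_loss_fn=kl, attn_weight=2.0/events.out.tfevents.1723668499.93d6cbb3ad53",
)
print(path)
```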