End of training
README.md CHANGED

@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
-- eval_enwikippl:
-- eval_frwikippl:
-- eval_zhwikippl:
-- eval_loss:
-- eval_runtime: 17.
-- eval_samples_per_second: 58.
-- eval_steps_per_second: 7.
+- eval_enwikippl: 228.1461
+- eval_frwikippl: 1416.6694
+- eval_zhwikippl: 848.6490
+- eval_loss: 2.4667
+- eval_runtime: 17.2058
+- eval_samples_per_second: 58.12
+- eval_steps_per_second: 7.265
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=
+- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=cos, layer_mapper=None, projector=None))
 - train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 8
@@ -56,26 +56,26 @@ The following hyperparameters were used during training:
 - num_epochs: 1.0
 
 ### Resource Usage
-Peak GPU Memory: 8.
+Peak GPU Memory: 8.2195 GB
 
 ### Eval-Phase Metrics
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
-| 0 | 0 |
-| 1000 | 0.0808 |
-| 2000 | 0.1616 |
-| 3000 | 0.2424 |
-| 4000 | 0.3232 |
-| 5000 | 0.4040 |
-| 6000 | 0.4848 |
-| 7000 | 0.5657 |
-| 8000 | 0.6465 |
-| 9000 | 0.7273 |
-| 10000 | 0.8081 |
-| 11000 | 0.8889 |
-| 12000 | 0.9697 |
-| 12375 | 1.0 |
+| 0 | 0 | 56797.875 | 58468.6992 | 8.0273 | 17.152 | 58.302 | 7.288 | 59002.2891 |
+| 1000 | 0.0808 | 797.1624 | 5157.9775 | 3.3194 | 17.2397 | 58.006 | 7.251 | 24401.0566 |
+| 2000 | 0.1616 | 567.0632 | 3629.1594 | 3.0871 | 17.1941 | 58.16 | 7.27 | 3184.9797 |
+| 3000 | 0.2424 | 464.5085 | 3017.8862 | 2.9667 | 17.2095 | 58.108 | 7.263 | 1129.6726 |
+| 4000 | 0.3232 | 401.2574 | 2690.6233 | 2.8541 | 17.2873 | 57.846 | 7.231 | 880.7457 |
+| 5000 | 0.4040 | 348.5625 | 2427.4329 | 2.7534 | 17.2981 | 57.81 | 7.226 | 1079.5291 |
+| 6000 | 0.4848 | 304.7929 | 2054.1772 | 2.6701 | 17.2106 | 58.104 | 7.263 | 904.3437 |
+| 7000 | 0.5657 | 277.6311 | 1738.0712 | 2.5931 | 17.2745 | 57.889 | 7.236 | 861.2068 |
+| 8000 | 0.6465 | 248.1049 | 1555.2847 | 2.5229 | 17.2275 | 58.047 | 7.256 | 875.1184 |
+| 9000 | 0.7273 | 228.1461 | 1416.6694 | 2.4667 | 17.2058 | 58.12 | 7.265 | 848.6490 |
+| 10000 | 0.8081 | 208.8987 | 1238.1790 | 2.4113 | 17.26 | 57.938 | 7.242 | 711.3105 |
+| 11000 | 0.8889 | 194.2086 | 1232.7786 | 2.3591 | 17.2456 | 57.986 | 7.248 | 517.6449 |
+| 12000 | 0.9697 | 175.7651 | 1108.7455 | 2.3060 | 17.3467 | 57.648 | 7.206 | 513.5140 |
+| 12375 | 1.0 | 170.5086 | 1069.4347 | 2.2860 | 17.2133 | 58.095 | 7.262 | 531.0175 |
 
 ### Framework versions
 - Distily 0.2.0
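
The `distillation_objective` string in the hunk above is dense, so here is a minimal sketch of what such an objective computes: a KL term on the logits (weight 1) plus a cosine term on the attention maps (weight 2.0), with the hidden-state term disabled (weight 0). This is an illustration, not Distily's actual implementation; `distill_loss`, the reduction choices, and the flattening are assumptions, and the inputs are assumed to be Hugging Face-style model outputs produced with `output_attentions=True`.

```python
import torch.nn.functional as F

def distill_loss(student_out, teacher_out, logits_weight=1.0, attn_weight=2.0):
    """Hypothetical helper: KL logits loss plus cosine attention loss."""
    # Logits component (weight=1, loss_fn=kl): KL divergence between the
    # teacher's and student's next-token distributions.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_logp = F.log_softmax(teacher_out.logits, dim=-1)
    kl = F.kl_div(s_logp, t_logp, log_target=True, reduction="batchmean")

    # Attention component (weight=2.0, loss_fn=cos): mean cosine distance
    # between per-layer attention maps. The hs component has weight=0, so
    # no hidden-state term appears. Assumes both models expose the same
    # number of attention layers, as they do when the student shares the
    # teacher's GPT-2 architecture.
    attn_terms = [
        1.0 - F.cosine_similarity(s.flatten(1), t.flatten(1), dim=-1).mean()
        for s, t in zip(student_out.attentions, teacher_out.attentions)
    ]
    attn = sum(attn_terms) / len(attn_terms)

    return logits_weight * kl + attn_weight * attn
```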
|
logs/attn_loss_fn=cos, attn_weight=2.0/events.out.tfevents.1723664510.93d6cbb3ad53 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86ec553d19e87355b52dca0eebb630cb3c20db9357187020606cdee30b58cec4
+size 249
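
The added file is a Git LFS pointer, not the TensorBoard event data itself; each line is a `key value` pair following the pointer spec named in its `version` field, and the real 249-byte event file must be fetched with `git lfs pull` (or downloaded from the Hub). A minimal, hypothetical parser for such a pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; for the file above this yields the
    # spec version URL, the sha256 object id, and the payload size in bytes.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"],        # e.g. "sha256:86ec55..."
        "size": int(fields["size"]), # 249 for this file
    }
```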