End of training
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
-license: mit
 base_model: gpt2
+library_name: Distily
+license: mit
 tags:
 - generated_from_trainer
 model-index:
@@ -8,14 +9,23 @@ model-index:
   results: []
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # distily_bench_gpt2_attn

-This model is
+This student model is distilled from the teacher model [gpt2](https://huggingface.co/gpt2) using the dataset (unspecified).
+
+The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
+
 It achieves the following results on the evaluation set:
-
+- eval_enwikippl: 212.2672
+- eval_frwikippl: 1352.8285
+- eval_zhwikippl: 811.8465
+- eval_loss: 1.2429
+- eval_runtime: 17.2351
+- eval_samples_per_second: 58.021
+- eval_steps_per_second: 7.253
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment.

 ## Model description

@@ -28,12 +38,15 @@ More information needed
 ## Training and evaluation data

 More information needed
+-->

 ## Training procedure

 ### Training hyperparameters

 The following hyperparameters were used during training:
+- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=mse, layer_mapper=None, projector=None))
+- train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 8
 - eval_batch_size: 8
@@ -42,28 +55,30 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: constant
 - num_epochs: 1.0

-### Training results
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+### Resource Usage
+Peak GPU Memory: 8.2202 GB
+
+### Eval-Phase Metrics
+| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
+| 0 | 0 | 58954.875 | 57690.6602 | 5.9433 | 17.2113 | 58.101 | 7.263 | 54707.1133 |
+| 1000 | 0.0808 | 710.2846 | 4396.9824 | 1.9296 | 17.117 | 58.421 | 7.303 | 18078.0879 |
+| 2000 | 0.1616 | 503.9694 | 3086.3142 | 1.7454 | 17.2903 | 57.836 | 7.229 | 2703.2686 |
+| 3000 | 0.2424 | 420.6563 | 2955.9736 | 1.6378 | 17.2224 | 58.064 | 7.258 | 1477.8748 |
+| 4000 | 0.3232 | 367.0064 | 2704.3167 | 1.5544 | 17.2208 | 58.069 | 7.259 | 851.8279 |
+| 5000 | 0.4040 | 317.6482 | 2113.5315 | 1.4722 | 17.1418 | 58.337 | 7.292 | 1214.1425 |
+| 6000 | 0.4848 | 276.7272 | 1629.8280 | 1.3995 | 17.1258 | 58.392 | 7.299 | 813.5826 |
+| 7000 | 0.5657 | 250.8947 | 1553.0933 | 1.3412 | 17.2475 | 57.979 | 7.247 | 773.1216 |
+| 8000 | 0.6465 | 228.6603 | 1347.1174 | 1.2915 | 17.2115 | 58.101 | 7.263 | 716.5538 |
+| 9000 | 0.7273 | 212.2672 | 1352.8285 | 1.2429 | 17.2351 | 58.021 | 7.253 | 811.8465 |
+| 10000 | 0.8081 | 193.2158 | 1189.5732 | 1.1981 | 17.1888 | 58.177 | 7.272 | 670.6308 |
+| 11000 | 0.8889 | 178.6132 | 1058.1842 | 1.1502 | 17.2336 | 58.026 | 7.253 | 653.9169 |
+| 12000 | 0.9697 | 165.5636 | 977.5611 | 1.1143 | 17.2114 | 58.101 | 7.263 | 509.6881 |
+| 12375 | 1.0 | 160.1887 | 948.9035 | 1.0983 | 17.1765 | 58.219 | 7.277 | 518.8907 |

 ### Framework versions
-
+- Distily 0.2.0
 - Transformers 4.44.0
 - Pytorch 2.3.0
 - Datasets 2.20.0
-- Tokenizers 0.19.1
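The `distillation_objective` line added above is the substance of this run: a KL loss on the logits (weight 1) combined with an MSE loss on the attention maps (weight 2.0), with the hidden-state component disabled (weight 0). A minimal PyTorch sketch of what such an objective computes (an illustration of the config only, not Distily's actual implementation; the function name and argument shapes here are assumed):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out,
                      logits_weight: float = 1.0, attn_weight: float = 2.0):
    """Sketch of the card's objective: KL on logits (weight 1) plus MSE on
    attention maps (weight 2.0); the hidden-state component has weight 0.

    Both arguments are causal-LM outputs produced with output_attentions=True.
    """
    # KL divergence from the teacher's to the student's next-token distribution.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Layer-wise MSE between attention probability maps; layer_mapper=None in
    # the config suggests a one-to-one pairing of student and teacher layers.
    attn_loss = torch.stack([
        F.mse_loss(s_a, t_a)
        for s_a, t_a in zip(student_out.attentions, teacher_out.attentions)
    ]).mean()

    return logits_weight * logits_loss + attn_weight * attn_loss
```

In such a setup the teacher is kept frozen and queried under `torch.no_grad()`, and, per `train_embeddings: True`, the student's embedding matrices are updated along with the rest of the network.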
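The `eval_enwikippl`, `eval_frwikippl`, and `eval_zhwikippl` figures are perplexities on English, French, and Chinese Wikipedia text (the teacher scores 30.21 / 57.27 / 18.18 in the table above). The card does not specify its evaluation script, but perplexity for a causal LM is conventionally the exponential of the mean next-token cross-entropy; a sketch, with the Hub repo id assumed for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed for illustration; substitute the checkpoint's actual Hub path.
repo = "distily/distily_bench_gpt2_attn"
model = AutoModelForCausalLM.from_pretrained(repo).eval()
tokenizer = AutoTokenizer.from_pretrained(repo)

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # next-token cross-entropy as .loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```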
logs/attn_loss_fn=mse, attn_weight=2.0/events.out.tfevents.1723660225.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17a33e40555070df61c3730cf97614fbbead92007ee047df21579527a03296a1
+size 249
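The three committed lines are a Git LFS pointer (spec version, sha256 oid, byte size), not the log itself; `git lfs pull` materializes the actual TensorBoard event file. Once fetched, it can be inspected with TensorBoard's `EventAccumulator`; a sketch, noting that which scalar tags were logged is not visible from the pointer:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# The run directory as committed; run `git lfs pull` first so the events
# file contains real bytes rather than the LFS pointer text.
run_dir = "logs/attn_loss_fn=mse, attn_weight=2.0"
acc = EventAccumulator(run_dir)
acc.Reload()  # parse all events in the directory into memory

# List the scalar series that were logged, then dump one of them.
tags = acc.Tags()["scalars"]
print(tags)
if tags:
    for event in acc.Scalars(tags[0]):
        print(event.step, event.value)
```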