agentlans committed
Commit 6a3f3f2
Parent: bca1304

Upload 13 files
README.md CHANGED
@@ -1,3 +1,113 @@
- ---
- license: mit
- ---

---
language: en
license: mit
library_name: transformers
tags:
- generated_from_trainer
- text-classification
- fill-mask
- embeddings
metrics:
- accuracy
model-index:
- name: deberta-v3-xsmall-zyda-2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Zyphra/Zyda-2 (subset)
      type: Zyphra/Zyda-2
    metrics:
    - type: accuracy
      value: 0.5387
      name: Accuracy
base_model: agentlans/deberta-finewebedu
---

# DeBERTa-v3-xsmall-zyda-2

## Model Description

This model is a fine-tuned version of [agentlans/deberta-finewebedu](https://huggingface.co/agentlans/deberta-finewebedu) on a subset of the [Zyphra/Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2) dataset. It was trained with the masked language modeling (MLM) objective to strengthen its general understanding of English.
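As a quick sanity check, the checkpoint loads as a standard DeBERTa-v2 masked language model; the values printed below come from the `config.json` included in this commit:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "agentlans/deberta-v3-xsmall-zyda-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Expect 384 hidden size, 12 layers, 6 attention heads (see config.json below).
print(model.config.hidden_size, model.config.num_hidden_layers, model.config.num_attention_heads)
```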
## Performance

The model achieves the following results on the evaluation set:
- Loss: 2.9234
- Accuracy: 0.5387
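The perplexity reported in `eval_results.json` (18.605) is simply the exponential of this evaluation loss; a one-line check:

```python
import math

eval_loss = 2.923440933227539   # from eval_results.json below
print(math.exp(eval_loss))      # ~18.605, matching the reported perplexity
```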
## Intended Uses & Limitations

This model is designed to be used and fine-tuned for the following tasks (a text-classification sketch is included under Usage Examples below):
- Text embedding
- Text classification
- Fill-in-the-blank tasks

**Limitations:**
- English language only
- May be inaccurate for specialized jargon, dialects, slang, code, and LaTeX

## Training Data

The model was trained on the first 100,000 rows of the [Zyphra/Zyda-2](https://huggingface.co/datasets/Zyphra/Zyda-2) dataset, with 5% of that data held out for validation.
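The card does not say which Zyda-2 component was used or how the rows were read, so the snippet below is only a minimal sketch of one way to prepare such a split with the `datasets` library; the default config and the `text` column are assumptions, not a record of the original preprocessing:

```python
from datasets import Dataset, load_dataset

# Stream to avoid downloading the full corpus; a specific Zyda-2 config name may be required.
stream = load_dataset("Zyphra/Zyda-2", split="train", streaming=True)
rows = [{"text": row["text"]} for _, row in zip(range(100_000), stream)]

dataset = Dataset.from_list(rows)
splits = dataset.train_test_split(test_size=0.05, seed=42)  # 5% held out for validation
train_ds, eval_ds = splits["train"], splits["test"]
print(train_ds.num_rows, eval_ds.num_rows)  # 95000, 5000
```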
## Training Procedure

### Hyperparameters

The following hyperparameters were used during training (a minimal reproduction sketch follows this list):
- Learning rate: 5e-05
- Train batch size: 8
- Eval batch size: 8
- Seed: 42
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- Learning rate scheduler: linear
- Number of epochs: 1.0
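These settings map directly onto `TrainingArguments`. The sketch below is a reconstruction, not the original training script: the 15% masking probability, the truncation-only preprocessing, and the placeholder datasets are assumptions (the original run likely chunked documents, which would explain why `train_samples` in `all_results.json` exceeds the number of source rows), while the listed Adam betas and epsilon are the `TrainingArguments` defaults.

```python
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder data so the snippet runs standalone; substitute `train_ds` / `eval_ds`
# from the Training Data sketch above.
train_ds = Dataset.from_dict({"text": ["Placeholder training sentence."] * 16})
eval_ds = Dataset.from_dict({"text": ["Placeholder validation sentence."] * 4})

model_name = "agentlans/deberta-finewebedu"  # base model listed in the front matter
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_tok = train_ds.map(tokenize, batched=True, remove_columns=train_ds.column_names)
eval_tok = eval_ds.map(tokenize, batched=True, remove_columns=eval_ds.column_names)

# 15% masking probability is the library default, not stated in the card.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="deberta-v3-xsmall-zyda-2",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_tok,
                  eval_dataset=eval_tok, data_collator=collator)
trainer.train()
```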
### Framework Versions

- Transformers: 4.44.2
- PyTorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Usage Examples

### Masked Language Modeling

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='agentlans/deberta-v3-xsmall-zyda-2')
result = unmasker("[MASK] is the capital of France.")
print(result)
```
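The pipeline returns a ranked list of candidate fills; each entry is a dict with `sequence`, `token_str`, and `score` keys, so `result[0]["token_str"]` gives the single most likely completion (a city name such as "Paris" is the expected top answer here).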
### Text Embedding

```python
from transformers import AutoTokenizer, AutoModel
import torch

model_name = "agentlans/deberta-v3-xsmall-zyda-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "Example sentence for embedding."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state.mean(dim=1)
print(embeddings)
```
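The mean-pooled vectors above can be compared with `torch.nn.functional.cosine_similarity` for simple retrieval or deduplication, but note that the model was not trained with a contrastive objective, so treat them as generic features rather than tuned sentence embeddings.

### Text Classification

The classification head below is randomly initialized, so this is only a fine-tuning sketch for the text-classification use case listed above; the label set, toy data, and training settings are illustrative and not part of this model's training:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "agentlans/deberta-v3-xsmall-zyda-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny illustrative dataset; replace with a real labeled corpus.
data = Dataset.from_dict({
    "text": ["I loved this film.", "Terrible service, never again.",
             "Absolutely wonderful experience.", "Worst purchase I have made."],
    "label": [1, 0, 1, 0],
})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(output_dir="clf-demo", num_train_epochs=1,
                         per_device_train_batch_size=2, learning_rate=5e-5)
trainer = Trainer(model=model, args=args, train_dataset=data, tokenizer=tokenizer)
trainer.train()
```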
## Ethical Considerations and Bias

As this model is trained on a subset of the Zyda-2 dataset, it may inherit biases present in that data. Users should be aware of these potential biases and evaluate the model's output critically, especially in sensitive applications.

## Additional Information

For more details about the base model, see [agentlans/deberta-finewebedu](https://huggingface.co/agentlans/deberta-finewebedu).
added_tokens.json ADDED
@@ -0,0 +1,3 @@
{
  "[MASK]": 128000
}
all_results.json ADDED
@@ -0,0 +1,16 @@
{
  "epoch": 1.0,
  "eval_accuracy": 0.5387296045953106,
  "eval_loss": 2.923440933227539,
  "eval_runtime": 126.5222,
  "eval_samples": 11620,
  "eval_samples_per_second": 91.842,
  "eval_steps_per_second": 11.484,
  "perplexity": 18.60519668247528,
  "total_flos": 1.5038202327662592e+16,
  "train_loss": 3.3210895868885175,
  "train_runtime": 6630.0799,
  "train_samples": 226928,
  "train_samples_per_second": 34.227,
  "train_steps_per_second": 4.278
}
config.json ADDED
@@ -0,0 +1,35 @@
{
  "_name_or_path": "agentlans/deberta-finewebedu",
  "architectures": [
    "DebertaV2ForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-07,
  "max_position_embeddings": 512,
  "max_relative_positions": -1,
  "model_type": "deberta-v2",
  "norm_rel_ebd": "layer_norm",
  "num_attention_heads": 6,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "pooler_dropout": 0,
  "pooler_hidden_act": "gelu",
  "pooler_hidden_size": 384,
  "pos_att_type": [
    "p2c",
    "c2p"
  ],
  "position_biased_input": false,
  "position_buckets": 256,
  "relative_attention": true,
  "share_att_key": true,
  "torch_dtype": "float32",
  "transformers_version": "4.44.2",
  "type_vocab_size": 0,
  "vocab_size": 128100
}
eval_results.json ADDED
@@ -0,0 +1,10 @@
{
  "epoch": 1.0,
  "eval_accuracy": 0.5387296045953106,
  "eval_loss": 2.923440933227539,
  "eval_runtime": 126.5222,
  "eval_samples": 11620,
  "eval_samples_per_second": 91.842,
  "eval_steps_per_second": 11.484,
  "perplexity": 18.60519668247528
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5947d8166d7e82611f205b72ba9585b7060868017f4255ccd5ad3405d5e7e9df
size 283860016
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c679fbf93643d19aab7ee10c0b99e460bdbc02fedf34b92b05af343b4af586fd
size 2464616
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "128000": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "[CLS]",
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": false,
  "eos_token": "[SEP]",
  "mask_token": "[MASK]",
  "max_length": 1024,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "sp_model_kwargs": {},
  "split_by_punct": false,
  "stride": 0,
  "tokenizer_class": "DebertaV2Tokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]",
  "vocab_type": "spm"
}
train_results.json ADDED
@@ -0,0 +1,9 @@
{
  "epoch": 1.0,
  "total_flos": 1.5038202327662592e+16,
  "train_loss": 3.3210895868885175,
  "train_runtime": 6630.0799,
  "train_samples": 226928,
  "train_samples_per_second": 34.227,
  "train_steps_per_second": 4.278
}
trainer_state.json ADDED
@@ -0,0 +1,434 @@
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 1.0,
  "eval_steps": 500,
  "global_step": 28366,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.017626736233519003,
      "grad_norm": 5.204168796539307,
      "learning_rate": 4.9118663188324054e-05,
      "loss": 3.9535,
      "step": 500
    },
    {
      "epoch": 0.035253472467038006,
      "grad_norm": 5.162827968597412,
      "learning_rate": 4.82373263766481e-05,
      "loss": 3.761,
      "step": 1000
    },
    {
      "epoch": 0.052880208700557006,
      "grad_norm": 5.309798240661621,
      "learning_rate": 4.735598956497215e-05,
      "loss": 3.7096,
      "step": 1500
    },
    {
      "epoch": 0.07050694493407601,
      "grad_norm": 5.0922369956970215,
      "learning_rate": 4.64746527532962e-05,
      "loss": 3.6577,
      "step": 2000
    },
    {
      "epoch": 0.08813368116759501,
      "grad_norm": 5.067632675170898,
      "learning_rate": 4.559331594162025e-05,
      "loss": 3.6288,
      "step": 2500
    },
    {
      "epoch": 0.10576041740111401,
      "grad_norm": 5.3605475425720215,
      "learning_rate": 4.4711979129944304e-05,
      "loss": 3.6192,
      "step": 3000
    },
    {
      "epoch": 0.12338715363463301,
      "grad_norm": 5.510789394378662,
      "learning_rate": 4.383064231826835e-05,
      "loss": 3.559,
      "step": 3500
    },
    {
      "epoch": 0.14101388986815203,
      "grad_norm": 5.7333855628967285,
      "learning_rate": 4.29493055065924e-05,
      "loss": 3.5382,
      "step": 4000
    },
    {
      "epoch": 0.158640626101671,
      "grad_norm": 5.04295539855957,
      "learning_rate": 4.206796869491645e-05,
      "loss": 3.4962,
      "step": 4500
    },
    {
      "epoch": 0.17626736233519003,
      "grad_norm": 4.932398796081543,
      "learning_rate": 4.11866318832405e-05,
      "loss": 3.5339,
      "step": 5000
    },
    {
      "epoch": 0.193894098568709,
      "grad_norm": 5.262182235717773,
      "learning_rate": 4.0305295071564555e-05,
      "loss": 3.4758,
      "step": 5500
    },
    {
      "epoch": 0.21152083480222802,
      "grad_norm": 5.248316764831543,
      "learning_rate": 3.94239582598886e-05,
      "loss": 3.4524,
      "step": 6000
    },
    {
      "epoch": 0.229147571035747,
      "grad_norm": 5.176753520965576,
      "learning_rate": 3.854262144821265e-05,
      "loss": 3.4403,
      "step": 6500
    },
    {
      "epoch": 0.24677430726926602,
      "grad_norm": 5.396851539611816,
      "learning_rate": 3.76612846365367e-05,
      "loss": 3.4066,
      "step": 7000
    },
    {
      "epoch": 0.26440104350278504,
      "grad_norm": 4.905313968658447,
      "learning_rate": 3.677994782486075e-05,
      "loss": 3.4277,
      "step": 7500
    },
    {
      "epoch": 0.28202777973630405,
      "grad_norm": 5.581764221191406,
      "learning_rate": 3.58986110131848e-05,
      "loss": 3.3977,
      "step": 8000
    },
    {
      "epoch": 0.299654515969823,
      "grad_norm": 4.564020156860352,
      "learning_rate": 3.501727420150885e-05,
      "loss": 3.3739,
      "step": 8500
    },
    {
      "epoch": 0.317281252203342,
      "grad_norm": 5.451286315917969,
      "learning_rate": 3.41359373898329e-05,
      "loss": 3.3724,
      "step": 9000
    },
    {
      "epoch": 0.33490798843686104,
      "grad_norm": 5.060819149017334,
      "learning_rate": 3.325460057815695e-05,
      "loss": 3.3393,
      "step": 9500
    },
    {
      "epoch": 0.35253472467038005,
      "grad_norm": 5.474411487579346,
      "learning_rate": 3.2373263766481e-05,
      "loss": 3.3186,
      "step": 10000
    },
    {
      "epoch": 0.370161460903899,
      "grad_norm": 5.26786994934082,
      "learning_rate": 3.149192695480505e-05,
      "loss": 3.3223,
      "step": 10500
    },
    {
      "epoch": 0.387788197137418,
      "grad_norm": 5.467500686645508,
      "learning_rate": 3.06105901431291e-05,
      "loss": 3.3054,
      "step": 11000
    },
    {
      "epoch": 0.40541493337093704,
      "grad_norm": 5.263679027557373,
      "learning_rate": 2.972925333145315e-05,
      "loss": 3.3193,
      "step": 11500
    },
    {
      "epoch": 0.42304166960445605,
      "grad_norm": 4.835860729217529,
      "learning_rate": 2.88479165197772e-05,
      "loss": 3.2871,
      "step": 12000
    },
    {
      "epoch": 0.44066840583797506,
      "grad_norm": 4.88271951675415,
      "learning_rate": 2.7966579708101248e-05,
      "loss": 3.2783,
      "step": 12500
    },
    {
      "epoch": 0.458295142071494,
      "grad_norm": 5.228416442871094,
      "learning_rate": 2.70852428964253e-05,
      "loss": 3.2845,
      "step": 13000
    },
    {
      "epoch": 0.47592187830501304,
      "grad_norm": 5.097890853881836,
      "learning_rate": 2.6203906084749348e-05,
      "loss": 3.2731,
      "step": 13500
    },
    {
      "epoch": 0.49354861453853205,
      "grad_norm": 4.9926066398620605,
      "learning_rate": 2.53225692730734e-05,
      "loss": 3.27,
      "step": 14000
    },
    {
      "epoch": 0.511175350772051,
      "grad_norm": 5.329204559326172,
      "learning_rate": 2.4441232461397447e-05,
      "loss": 3.253,
      "step": 14500
    },
    {
      "epoch": 0.5288020870055701,
      "grad_norm": 4.740358352661133,
      "learning_rate": 2.35598956497215e-05,
      "loss": 3.2511,
      "step": 15000
    },
    {
      "epoch": 0.546428823239089,
      "grad_norm": 5.418153285980225,
      "learning_rate": 2.267855883804555e-05,
      "loss": 3.2315,
      "step": 15500
    },
    {
      "epoch": 0.5640555594726081,
      "grad_norm": 4.993420600891113,
      "learning_rate": 2.1797222026369598e-05,
      "loss": 3.2453,
      "step": 16000
    },
    {
      "epoch": 0.5816822957061271,
      "grad_norm": 5.474274635314941,
      "learning_rate": 2.091588521469365e-05,
      "loss": 3.2328,
      "step": 16500
    },
    {
      "epoch": 0.599309031939646,
      "grad_norm": 4.977609157562256,
      "learning_rate": 2.0034548403017698e-05,
      "loss": 3.2181,
      "step": 17000
    },
    {
      "epoch": 0.6169357681731651,
      "grad_norm": 4.982664585113525,
      "learning_rate": 1.915321159134175e-05,
      "loss": 3.2106,
      "step": 17500
    },
    {
      "epoch": 0.634562504406684,
      "grad_norm": 5.291051387786865,
      "learning_rate": 1.8271874779665797e-05,
      "loss": 3.2134,
      "step": 18000
    },
    {
      "epoch": 0.652189240640203,
      "grad_norm": 5.687000751495361,
      "learning_rate": 1.739053796798985e-05,
      "loss": 3.1905,
      "step": 18500
    },
    {
      "epoch": 0.6698159768737221,
      "grad_norm": 5.048547267913818,
      "learning_rate": 1.6509201156313897e-05,
      "loss": 3.2165,
      "step": 19000
    },
    {
      "epoch": 0.687442713107241,
      "grad_norm": 5.21890926361084,
      "learning_rate": 1.5627864344637945e-05,
      "loss": 3.216,
      "step": 19500
    },
    {
      "epoch": 0.7050694493407601,
      "grad_norm": 4.901352405548096,
      "learning_rate": 1.4746527532961998e-05,
      "loss": 3.1903,
      "step": 20000
    },
    {
      "epoch": 0.7226961855742791,
      "grad_norm": 5.835772514343262,
      "learning_rate": 1.3865190721286048e-05,
      "loss": 3.1971,
      "step": 20500
    },
    {
      "epoch": 0.740322921807798,
      "grad_norm": 4.900722503662109,
      "learning_rate": 1.2983853909610097e-05,
      "loss": 3.1832,
      "step": 21000
    },
    {
      "epoch": 0.7579496580413171,
      "grad_norm": 4.764721870422363,
      "learning_rate": 1.2102517097934147e-05,
      "loss": 3.1808,
      "step": 21500
    },
    {
      "epoch": 0.775576394274836,
      "grad_norm": 5.3555731773376465,
      "learning_rate": 1.1221180286258197e-05,
      "loss": 3.1847,
      "step": 22000
    },
    {
      "epoch": 0.7932031305083551,
      "grad_norm": 5.72691535949707,
      "learning_rate": 1.0339843474582247e-05,
      "loss": 3.1689,
      "step": 22500
    },
    {
      "epoch": 0.8108298667418741,
      "grad_norm": 5.263107776641846,
      "learning_rate": 9.458506662906296e-06,
      "loss": 3.1666,
      "step": 23000
    },
    {
      "epoch": 0.828456602975393,
      "grad_norm": 5.273736476898193,
      "learning_rate": 8.577169851230346e-06,
      "loss": 3.1583,
      "step": 23500
    },
    {
      "epoch": 0.8460833392089121,
      "grad_norm": 5.418051719665527,
      "learning_rate": 7.695833039554396e-06,
      "loss": 3.1429,
      "step": 24000
    },
    {
      "epoch": 0.8637100754424311,
      "grad_norm": 4.837016582489014,
      "learning_rate": 6.814496227878446e-06,
      "loss": 3.1831,
      "step": 24500
    },
    {
      "epoch": 0.8813368116759501,
      "grad_norm": 5.3440680503845215,
      "learning_rate": 5.933159416202496e-06,
      "loss": 3.151,
      "step": 25000
    },
    {
      "epoch": 0.8989635479094691,
      "grad_norm": 5.674468517303467,
      "learning_rate": 5.051822604526546e-06,
      "loss": 3.142,
      "step": 25500
    },
    {
      "epoch": 0.916590284142988,
      "grad_norm": 5.245038986206055,
      "learning_rate": 4.170485792850596e-06,
      "loss": 3.1537,
      "step": 26000
    },
    {
      "epoch": 0.9342170203765071,
      "grad_norm": 5.040459632873535,
      "learning_rate": 3.289148981174646e-06,
      "loss": 3.1496,
      "step": 26500
    },
    {
      "epoch": 0.9518437566100261,
      "grad_norm": 4.918792724609375,
      "learning_rate": 2.4078121694986958e-06,
      "loss": 3.1541,
      "step": 27000
    },
    {
      "epoch": 0.9694704928435451,
      "grad_norm": 5.169427394866943,
      "learning_rate": 1.5264753578227457e-06,
      "loss": 3.1609,
      "step": 27500
    },
    {
      "epoch": 0.9870972290770641,
      "grad_norm": 5.406129837036133,
      "learning_rate": 6.451385461467955e-07,
      "loss": 3.1467,
      "step": 28000
    },
    {
      "epoch": 1.0,
      "step": 28366,
      "total_flos": 1.5038202327662592e+16,
      "train_loss": 3.3210895868885175,
      "train_runtime": 6630.0799,
      "train_samples_per_second": 34.227,
      "train_steps_per_second": 4.278
    }
  ],
  "logging_steps": 500,
  "max_steps": 28366,
  "num_input_tokens_seen": 0,
  "num_train_epochs": 1,
  "save_steps": 500,
  "stateful_callbacks": {
    "TrainerControl": {
      "args": {
        "should_epoch_stop": false,
        "should_evaluate": false,
        "should_log": false,
        "should_save": true,
        "should_training_stop": true
      },
      "attributes": {}
    }
  },
  "total_flos": 1.5038202327662592e+16,
  "train_batch_size": 8,
  "trial_name": null,
  "trial_params": null
}
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1ebc3c8cf034541f337347c16a9572f2dada04919b0087a438aadaad09a5406
size 5240