anthonymeo committed on
Commit
9715f5a
1 Parent(s): 4c81152

Upload 15 files

README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: internlm/internlm2_5-7b-chat
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
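A minimal getting-started sketch, assuming this repository is used as a PEFT LoRA adapter on top of the `base_model` declared in the front matter (the adapter repository id below is a placeholder, not a name confirmed by the card):

```python
# Sketch only: load the base InternLM2.5 chat model, then attach this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "internlm/internlm2_5-7b-chat"   # from `base_model` in the front matter
adapter_id = "path/to/this/adapter"        # placeholder: this repository or a local checkout

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # applies adapter_model.safetensors
model.eval()
```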
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+ ### Framework versions
+
+ - PEFT 0.11.1
adapter_config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "internlm/internlm2_5-7b-chat",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "w1",
+     "w2",
+     "w3",
+     "wqkv",
+     "wo"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
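For reference, a sketch of the equivalent `LoraConfig`; the values are copied from the JSON above, while the construction itself is only illustrative and not part of the upload:

```python
from peft import LoraConfig

# Mirrors adapter_config.json: rank-8 LoRA with alpha 16 and no dropout, applied to
# InternLM2's attention (wqkv, wo) and MLP (w1, w2, w3) projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["w1", "w2", "w3", "wqkv", "wo"],
)
```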
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe2b12bcdb80274d6ecd705994572909d2c7c66711463c88090188b0dc52a6f4
+ size 75539712
added_tokens.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "[UNUSED_TOKEN_141]": 92544,
+   "[UNUSED_TOKEN_142]": 92545,
+   "[UNUSED_TOKEN_143]": 92546,
+   "[UNUSED_TOKEN_144]": 92547,
+   "[UNUSED_TOKEN_145]": 92548,
+   "[UNUSED_TOKEN_146]": 92549
+ }
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f95b8de737a58ddcf7d5ffccfbada00e1b4d49515877991fe6a2725722407e2c
+ size 151263802
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7682299c566684ea51cf26f0c86b6ffaa3c0bc63cbdf84674b29a2c62ac72143
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:746eb22239c269a70a10e5e28d8fc8887c591193ad9d1615681bceee3e53f2cc
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|action_start|>",
+     "<|action_end|>",
+     "<|interpreter|>",
+     "<|plugin|>"
+   ],
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
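These entries register the chat-control markers as additional special tokens. A small sketch of the expected effect, assuming `tokenizer` has been loaded from this repository with `trust_remote_code=True`:

```python
# The additional special tokens above should stay atomic, i.e. <|im_start|> and
# <|im_end|> encode to single ids rather than being split into sub-tokens.
text = "<|im_start|>user\nHello<|im_end|>"
ids = tokenizer(text).input_ids
print(tokenizer.convert_ids_to_tokens(ids))
```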
tokenization_internlm2.py ADDED
@@ -0,0 +1,236 @@
+ # coding=utf-8
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on transformers/src/transformers/models/llama/tokenization_llama.py
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Tokenization classes for InternLM."""
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, List, Optional, Tuple
+
+ import sentencepiece as spm
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+ PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+ # Modified from transformers.model.llama.tokenization_llama.LlamaTokenizer
+ class InternLM2Tokenizer(PreTrainedTokenizer):
+     """
+     Construct an InternLM2 tokenizer. Based on byte-level Byte-Pair-Encoding.
+
+     Args:
+         vocab_file (`str`):
+             Path to the vocabulary file.
+     """
+
+     vocab_files_names = VOCAB_FILES_NAMES
+     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+     model_input_names = ["input_ids", "attention_mask"]
+     _auto_class = "AutoTokenizer"
+
+     def __init__(
+         self,
+         vocab_file,
+         unk_token="<unk>",
+         bos_token="<s>",
+         eos_token="</s>",
+         pad_token="</s>",
+         sp_model_kwargs: Optional[Dict[str, Any]] = None,
+         add_bos_token=True,
+         add_eos_token=False,
+         decode_with_prefix_space=False,
+         clean_up_tokenization_spaces=False,
+         **kwargs,
+     ):
+         self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+         self.vocab_file = vocab_file
+         self.add_bos_token = add_bos_token
+         self.add_eos_token = add_eos_token
+         self.decode_with_prefix_space = decode_with_prefix_space
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(vocab_file)
+         self._no_prefix_space_tokens = None
+         super().__init__(
+             bos_token=bos_token,
+             eos_token=eos_token,
+             unk_token=unk_token,
+             pad_token=pad_token,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+
+     @property
+     def no_prefix_space_tokens(self):
+         if self._no_prefix_space_tokens is None:
+             vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+             self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
+         return self._no_prefix_space_tokens
+
+     @property
+     def vocab_size(self):
+         """Returns vocab size"""
+         return self.sp_model.get_piece_size()
+
+     @property
+     def bos_token_id(self) -> Optional[int]:
+         return self.sp_model.bos_id()
+
+     @property
+     def eos_token_id(self) -> Optional[int]:
+         return self.sp_model.eos_id()
+
+     def get_vocab(self):
+         """Returns vocab as a dict"""
+         vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def _tokenize(self, text):
+         """Returns a tokenized string."""
+         return self.sp_model.encode(text, out_type=str)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) to an id using the vocab."""
+         return self.sp_model.piece_to_id(token)
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) to a token (str) using the vocab."""
+         token = self.sp_model.IdToPiece(index)
+         return token
+
+     def _maybe_add_prefix_space(self, tokens, decoded):
+         if tokens and tokens[0] not in self.no_prefix_space_tokens:
+             return " " + decoded
+         else:
+             return decoded
+
+     def convert_tokens_to_string(self, tokens):
+         """Converts a sequence of tokens (string) to a single string."""
+         current_sub_tokens = []
+         out_string = ""
+         prev_is_special = False
+         for token in tokens:
+             # make sure that special tokens are not decoded using sentencepiece model
+             if token in self.all_special_tokens:
+                 if not prev_is_special:
+                     out_string += " "
+                 out_string += self.sp_model.decode(current_sub_tokens) + token
+                 prev_is_special = True
+                 current_sub_tokens = []
+             else:
+                 current_sub_tokens.append(token)
+                 prev_is_special = False
+         out_string += self.sp_model.decode(current_sub_tokens)
+         out_string = self.clean_up_tokenization(out_string)
+         out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
+         return out_string[1:]
+
+     def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
+         """
+         Save the vocabulary and special tokens file to a directory.
+
+         Args:
+             save_directory (`str`):
+                 The directory in which to save the vocabulary.
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         if not os.path.isdir(save_directory):
+             logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+             return
+         out_vocab_file = os.path.join(
+             save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+         )
+
+         if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+             copyfile(self.vocab_file, out_vocab_file)
+         elif not os.path.isfile(self.vocab_file):
+             with open(out_vocab_file, "wb") as fi:
+                 content_spiece_model = self.sp_model.serialized_model_proto()
+                 fi.write(content_spiece_model)
+
+         return (out_vocab_file,)
+
+     def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+         if self.add_bos_token:
+             bos_token_ids = [self.bos_token_id]
+         else:
+             bos_token_ids = []
+
+         output = bos_token_ids + token_ids_0
+
+         if token_ids_1 is not None:
+             output = output + token_ids_1
+
+         if self.add_eos_token:
+             output = output + [self.eos_token_id]
+
+         return output
+
+     def get_special_tokens_mask(
+         self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
+     ) -> List[int]:
+         """
+         Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+         special tokens using the tokenizer `prepare_for_model` method.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+             already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+                 Whether or not the token list is already formatted with special tokens for the model.
+
+         Returns:
+             `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+         """
+         if already_has_special_tokens:
+             return super().get_special_tokens_mask(
+                 token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
+             )
+
+         if token_ids_1 is None:
+             return [1] + ([0] * len(token_ids_0)) + [1]
+         return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
+
+     def create_token_type_ids_from_sequences(
+         self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+     ) -> List[int]:
+         """
+         Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM2 does
+         not make use of token type ids, therefore a list of zeros is returned.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+
+         Returns:
+             `List[int]`: List of zeros.
+         """
+         eos = [self.eos_token_id]
+
+         if token_ids_1 is None:
+             return len(token_ids_0 + eos) * [0]
+         return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
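A short usage sketch for the SentencePiece-backed tokenizer defined above; the repository path is a placeholder, and `trust_remote_code=True` is assumed to be required so that `AutoTokenizer` picks up this custom class:

```python
from transformers import AutoTokenizer

# use_fast=False selects the slow InternLM2Tokenizer defined in this file.
tok = AutoTokenizer.from_pretrained("path/to/this/repo", trust_remote_code=True, use_fast=False)
ids = tok("Hello, InternLM2!").input_ids
print(ids[0] == tok.bos_token_id)   # True: build_inputs_with_special_tokens prepends <s>
print(tok.decode(ids, skip_special_tokens=True))
```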
tokenization_internlm2_fast.py ADDED
@@ -0,0 +1,214 @@
+ # coding=utf-8
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on transformers/src/transformers/models/llama/tokenization_llama_fast.py
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """Tokenization Fast class for InternLM."""
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, Optional, Tuple
+
+ from tokenizers import processors, decoders, Tokenizer, normalizers
+ from tokenizers.models import BPE
+
+ from transformers.tokenization_utils_fast import PreTrainedTokenizerFast
+ from transformers.utils import logging
+
+ from transformers.convert_slow_tokenizer import (
+     SLOW_TO_FAST_CONVERTERS,
+     SpmConverter,
+     SentencePieceExtractor,
+ )
+
+ from .tokenization_internlm2 import InternLM2Tokenizer
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {"vocab_file": "./tokenizer.model"}
+
+ # Modified from transformers.convert_slow_tokenizer.LlamaConverter
+ class InternLM2Converter(SpmConverter):
+     handle_byte_fallback = True
+
+     def vocab(self, proto):
+         vocab = [
+             ("<unk>", 0.0),
+             ("<s>", 0.0),
+             ("</s>", 0.0),
+         ]
+         vocab += [(piece.piece, piece.score) for piece in proto.pieces[3:]]
+         return vocab
+
+     def unk_id(self, proto):
+         unk_id = 0
+         return unk_id
+
+     def decoder(self, replacement, add_prefix_space):
+         decoders_sequence = [
+             decoders.Replace("▁", " "),
+             decoders.ByteFallback(),
+             decoders.Fuse(),
+         ]
+         if self.proto.normalizer_spec.add_dummy_prefix:
+             decoders_sequence.append(decoders.Strip(content=" ", left=1))
+         return decoders.Sequence(decoders_sequence)
+
+     def tokenizer(self, proto):
+         model_type = proto.trainer_spec.model_type
+         vocab_scores = self.vocab(proto)
+         # special tokens
+         added_tokens = self.original_tokenizer.added_tokens_decoder
+         for i in range(len(vocab_scores)):
+             piece, score = vocab_scores[i]
+             if i in added_tokens:
+                 vocab_scores[i] = (added_tokens[i].content, score)
+         if model_type == 1:
+             raise RuntimeError("InternLM2 is supposed to be a BPE model!")
+
+         elif model_type == 2:
+             _, merges = SentencePieceExtractor(self.original_tokenizer.vocab_file).extract(vocab_scores)
+             bpe_vocab = {word: i for i, (word, _score) in enumerate(vocab_scores)}
+             tokenizer = Tokenizer(
+                 BPE(bpe_vocab, merges, unk_token=proto.trainer_spec.unk_piece, fuse_unk=True, byte_fallback=True)
+             )
+             tokenizer.add_special_tokens(
+                 [added_token for index, added_token in added_tokens.items()]
+             )
+         else:
+             raise Exception(
+                 "You're trying to run a `Unigram` model but your file was trained with a different algorithm"
+             )
+
+         return tokenizer
+
+     def normalizer(self, proto):
+         normalizers_list = []
+         if proto.normalizer_spec.add_dummy_prefix:
+             normalizers_list.append(normalizers.Prepend(prepend="▁"))
+         normalizers_list.append(normalizers.Replace(pattern=" ", content="▁"))
+         return normalizers.Sequence(normalizers_list)
+
+     def pre_tokenizer(self, replacement, add_prefix_space):
+         return None
+
+ SLOW_TO_FAST_CONVERTERS["InternLM2Tokenizer"] = InternLM2Converter
+
+
+ # Modified from transformers.model.llama.tokenization_llama_fast.LlamaTokenizerFast -> InternLM2TokenizerFast
+ class InternLM2TokenizerFast(PreTrainedTokenizerFast):
+     vocab_files_names = VOCAB_FILES_NAMES
+     slow_tokenizer_class = InternLM2Tokenizer
+     padding_side = "left"
+     model_input_names = ["input_ids", "attention_mask"]
+     _auto_class = "AutoTokenizer"
+
+     def __init__(
+         self,
+         vocab_file,
+         unk_token="<unk>",
+         bos_token="<s>",
+         eos_token="</s>",
+         pad_token="</s>",
+         sp_model_kwargs: Optional[Dict[str, Any]] = None,
+         add_bos_token=True,
+         add_eos_token=False,
+         decode_with_prefix_space=False,
+         clean_up_tokenization_spaces=False,
+         **kwargs,
+     ):
+         super().__init__(
+             vocab_file=vocab_file,
+             unk_token=unk_token,
+             bos_token=bos_token,
+             eos_token=eos_token,
+             pad_token=pad_token,
+             sp_model_kwargs=sp_model_kwargs,
+             add_bos_token=add_bos_token,
+             add_eos_token=add_eos_token,
+             decode_with_prefix_space=decode_with_prefix_space,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+         self._add_bos_token = add_bos_token
+         self._add_eos_token = add_eos_token
+         self.update_post_processor()
+         self.vocab_file = vocab_file
+
+     @property
+     def can_save_slow_tokenizer(self) -> bool:
+         return os.path.isfile(self.vocab_file) if self.vocab_file else False
+
+     def update_post_processor(self):
+         """
+         Updates the underlying post processor with the current `bos_token` and `eos_token`.
+         """
+         bos = self.bos_token
+         bos_token_id = self.bos_token_id
+         if bos is None and self.add_bos_token:
+             raise ValueError("add_bos_token = True but bos_token = None")
+
+         eos = self.eos_token
+         eos_token_id = self.eos_token_id
+         if eos is None and self.add_eos_token:
+             raise ValueError("add_eos_token = True but eos_token = None")
+
+         single = f"{(bos+':0 ') if self.add_bos_token else ''}$A:0{(' '+eos+':0') if self.add_eos_token else ''}"
+         pair = f"{single}{(' '+bos+':1') if self.add_bos_token else ''} $B:1{(' '+eos+':1') if self.add_eos_token else ''}"
+
+         special_tokens = []
+         if self.add_bos_token:
+             special_tokens.append((bos, bos_token_id))
+         if self.add_eos_token:
+             special_tokens.append((eos, eos_token_id))
+         self._tokenizer.post_processor = processors.TemplateProcessing(
+             single=single, pair=pair, special_tokens=special_tokens
+         )
+
+     @property
+     def add_eos_token(self):
+         return self._add_eos_token
+
+     @property
+     def add_bos_token(self):
+         return self._add_bos_token
+
+     @add_eos_token.setter
+     def add_eos_token(self, value):
+         self._add_eos_token = value
+         self.update_post_processor()
+
+     @add_bos_token.setter
+     def add_bos_token(self, value):
+         self._add_bos_token = value
+         self.update_post_processor()
+
+     def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
+         if not self.can_save_slow_tokenizer:
+             raise ValueError(
+                 "Your fast tokenizer does not have the necessary information to save the vocabulary for a slow "
+                 "tokenizer."
+             )
+
+         if not os.path.isdir(save_directory):
+             logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+             return
+         out_vocab_file = os.path.join(
+             save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+         )
+
+         if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
+             copyfile(self.vocab_file, out_vocab_file)
+
+         return (out_vocab_file,)
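The `add_bos_token` / `add_eos_token` setters above rebuild the post-processor, so EOS handling can be toggled after loading. A sketch, again with a placeholder repository path and assuming `AutoTokenizer` resolves to this fast class:

```python
from transformers import AutoTokenizer

fast_tok = AutoTokenizer.from_pretrained("path/to/this/repo", trust_remote_code=True)
fast_tok.add_eos_token = True             # setter calls update_post_processor()
ids = fast_tok("hello").input_ids
print(ids[-1] == fast_tok.eos_token_id)   # True: </s> is now appended by TemplateProcessing
```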
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
+ size 1477754
tokenizer_config.json ADDED
@@ -0,0 +1,1640 @@
1
+ {
2
+ "add_bos_token": true,
3
+ "add_eos_token": false,
4
+ "added_tokens_decoder": {
5
+ "0": {
6
+ "content": "<unk>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "1": {
14
+ "content": "<s>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "2": {
22
+ "content": "</s>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "92352": {
30
+ "content": "E",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": false
36
+ },
37
+ "92353": {
38
+ "content": "F",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": false
44
+ },
45
+ "92354": {
46
+ "content": "G",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": false
52
+ },
53
+ "92355": {
54
+ "content": "H",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": false
60
+ },
61
+ "92356": {
62
+ "content": "I",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": false
68
+ },
69
+ "92357": {
70
+ "content": "J",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": false
76
+ },
77
+ "92358": {
78
+ "content": "K",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": false
84
+ },
85
+ "92359": {
86
+ "content": "L",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": false
92
+ },
93
+ "92360": {
94
+ "content": "M",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": false
100
+ },
101
+ "92361": {
102
+ "content": "N",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": false
108
+ },
109
+ "92362": {
110
+ "content": "R",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": false
116
+ },
117
+ "92363": {
118
+ "content": "U",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "92364": {
126
+ "content": "V",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "92365": {
134
+ "content": "W",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "92366": {
142
+ "content": "X",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "92367": {
150
+ "content": "Y",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "92368": {
158
+ "content": "Z",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "92369": {
166
+ "content": "a",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "92370": {
174
+ "content": "b",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "92371": {
182
+ "content": "c",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "92372": {
190
+ "content": "d",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "92373": {
198
+ "content": "e",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "92374": {
206
+ "content": "f",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ },
213
+ "92375": {
214
+ "content": "g",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": false,
218
+ "single_word": false,
219
+ "special": false
220
+ },
221
+ "92376": {
222
+ "content": "h",
223
+ "lstrip": false,
224
+ "normalized": false,
225
+ "rstrip": false,
226
+ "single_word": false,
227
+ "special": false
228
+ },
229
+ "92377": {
230
+ "content": "i",
231
+ "lstrip": false,
232
+ "normalized": false,
233
+ "rstrip": false,
234
+ "single_word": false,
235
+ "special": false
236
+ },
237
+ "92378": {
238
+ "content": "j",
239
+ "lstrip": false,
240
+ "normalized": false,
241
+ "rstrip": false,
242
+ "single_word": false,
243
+ "special": false
244
+ },
245
+ "92379": {
246
+ "content": "k",
247
+ "lstrip": false,
248
+ "normalized": false,
249
+ "rstrip": false,
250
+ "single_word": false,
251
+ "special": false
252
+ },
253
+ "92380": {
254
+ "content": "l",
255
+ "lstrip": false,
256
+ "normalized": false,
257
+ "rstrip": false,
258
+ "single_word": false,
259
+ "special": false
260
+ },
261
+ "92381": {
262
+ "content": "m",
263
+ "lstrip": false,
264
+ "normalized": false,
265
+ "rstrip": false,
266
+ "single_word": false,
267
+ "special": false
268
+ },
269
+ "92382": {
270
+ "content": "n",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": false,
274
+ "single_word": false,
275
+ "special": false
276
+ },
277
+ "92383": {
278
+ "content": "o",
279
+ "lstrip": false,
280
+ "normalized": false,
281
+ "rstrip": false,
282
+ "single_word": false,
283
+ "special": false
284
+ },
285
+ "92384": {
286
+ "content": "p",
287
+ "lstrip": false,
288
+ "normalized": false,
289
+ "rstrip": false,
290
+ "single_word": false,
291
+ "special": false
292
+ },
293
+ "92385": {
294
+ "content": "q",
295
+ "lstrip": false,
296
+ "normalized": false,
297
+ "rstrip": false,
298
+ "single_word": false,
299
+ "special": false
300
+ },
301
+ "92386": {
302
+ "content": "r",
303
+ "lstrip": false,
304
+ "normalized": false,
305
+ "rstrip": false,
306
+ "single_word": false,
307
+ "special": false
308
+ },
309
+ "92387": {
310
+ "content": "s",
311
+ "lstrip": false,
312
+ "normalized": false,
313
+ "rstrip": false,
314
+ "single_word": false,
315
+ "special": false
316
+ },
317
+ "92388": {
318
+ "content": "t",
319
+ "lstrip": false,
320
+ "normalized": false,
321
+ "rstrip": false,
322
+ "single_word": false,
323
+ "special": false
324
+ },
325
+ "92389": {
326
+ "content": "u",
327
+ "lstrip": false,
328
+ "normalized": false,
329
+ "rstrip": false,
330
+ "single_word": false,
331
+ "special": false
332
+ },
333
+ "92390": {
334
+ "content": "v",
335
+ "lstrip": false,
336
+ "normalized": false,
337
+ "rstrip": false,
338
+ "single_word": false,
339
+ "special": false
340
+ },
341
+ "92391": {
342
+ "content": "w",
343
+ "lstrip": false,
344
+ "normalized": false,
345
+ "rstrip": false,
346
+ "single_word": false,
347
+ "special": false
348
+ },
349
+ "92392": {
350
+ "content": "x",
351
+ "lstrip": false,
352
+ "normalized": false,
353
+ "rstrip": false,
354
+ "single_word": false,
355
+ "special": false
356
+ },
357
+ "92393": {
358
+ "content": "y",
359
+ "lstrip": false,
360
+ "normalized": false,
361
+ "rstrip": false,
362
+ "single_word": false,
363
+ "special": false
364
+ },
365
+ "92394": {
366
+ "content": "z",
367
+ "lstrip": false,
368
+ "normalized": false,
369
+ "rstrip": false,
370
+ "single_word": false,
371
+ "special": false
372
+ },
373
+ "92395": {
374
+ "content": "——",
375
+ "lstrip": false,
376
+ "normalized": false,
377
+ "rstrip": false,
378
+ "single_word": false,
379
+ "special": false
380
+ },
381
+ "92396": {
382
+ "content": "……",
383
+ "lstrip": false,
384
+ "normalized": false,
385
+ "rstrip": false,
386
+ "single_word": false,
387
+ "special": false
388
+ },
389
+ "92397": {
390
+ "content": "[UNUSED_TOKEN_0]",
391
+ "lstrip": false,
392
+ "normalized": false,
393
+ "rstrip": false,
394
+ "single_word": false,
395
+ "special": false
396
+ },
397
+ "92398": {
398
+ "content": "[UNUSED_TOKEN_1]",
399
+ "lstrip": false,
400
+ "normalized": false,
401
+ "rstrip": false,
402
+ "single_word": false,
403
+ "special": false
404
+ },
405
+ "92399": {
406
+ "content": "[UNUSED_TOKEN_2]",
407
+ "lstrip": false,
408
+ "normalized": false,
409
+ "rstrip": false,
410
+ "single_word": false,
411
+ "special": false
412
+ },
413
+ "92400": {
414
+ "content": "[UNUSED_TOKEN_3]",
415
+ "lstrip": false,
416
+ "normalized": false,
417
+ "rstrip": false,
418
+ "single_word": false,
419
+ "special": false
420
+ },
421
+ "92401": {
422
+ "content": "[UNUSED_TOKEN_4]",
423
+ "lstrip": false,
424
+ "normalized": false,
425
+ "rstrip": false,
426
+ "single_word": false,
427
+ "special": false
428
+ },
429
+ "92402": {
430
+ "content": "[UNUSED_TOKEN_5]",
431
+ "lstrip": false,
432
+ "normalized": false,
433
+ "rstrip": false,
434
+ "single_word": false,
435
+ "special": false
436
+ },
437
+ "92403": {
438
+ "content": "[UNUSED_TOKEN_6]",
439
+ "lstrip": false,
440
+ "normalized": false,
441
+ "rstrip": false,
442
+ "single_word": false,
443
+ "special": false
444
+ },
445
+ "92404": {
446
+ "content": "[UNUSED_TOKEN_7]",
447
+ "lstrip": false,
448
+ "normalized": false,
449
+ "rstrip": false,
450
+ "single_word": false,
451
+ "special": false
452
+ },
453
+ "92405": {
454
+ "content": "[UNUSED_TOKEN_8]",
455
+ "lstrip": false,
456
+ "normalized": false,
457
+ "rstrip": false,
458
+ "single_word": false,
459
+ "special": false
460
+ },
461
+ "92406": {
462
+ "content": "[UNUSED_TOKEN_9]",
463
+ "lstrip": false,
464
+ "normalized": false,
465
+ "rstrip": false,
466
+ "single_word": false,
467
+ "special": false
468
+ },
469
+ "92407": {
470
+ "content": "[UNUSED_TOKEN_10]",
471
+ "lstrip": false,
472
+ "normalized": false,
473
+ "rstrip": false,
474
+ "single_word": false,
475
+ "special": false
476
+ },
477
+ "92408": {
478
+ "content": "[UNUSED_TOKEN_11]",
479
+ "lstrip": false,
480
+ "normalized": false,
481
+ "rstrip": false,
482
+ "single_word": false,
483
+ "special": false
484
+ },
485
+ "92409": {
486
+ "content": "[UNUSED_TOKEN_12]",
487
+ "lstrip": false,
488
+ "normalized": false,
489
+ "rstrip": false,
490
+ "single_word": false,
491
+ "special": false
492
+ },
493
+ "92410": {
494
+ "content": "[UNUSED_TOKEN_13]",
495
+ "lstrip": false,
496
+ "normalized": false,
497
+ "rstrip": false,
498
+ "single_word": false,
499
+ "special": false
500
+ },
501
+ "92411": {
502
+ "content": "[UNUSED_TOKEN_14]",
503
+ "lstrip": false,
504
+ "normalized": false,
505
+ "rstrip": false,
506
+ "single_word": false,
507
+ "special": false
508
+ },
509
+ "92412": {
510
+ "content": "[UNUSED_TOKEN_15]",
511
+ "lstrip": false,
512
+ "normalized": false,
513
+ "rstrip": false,
514
+ "single_word": false,
515
+ "special": false
516
+ },
517
+ "92413": {
518
+ "content": "[UNUSED_TOKEN_16]",
519
+ "lstrip": false,
520
+ "normalized": false,
521
+ "rstrip": false,
522
+ "single_word": false,
523
+ "special": false
524
+ },
525
+ "92414": {
526
+ "content": "[UNUSED_TOKEN_17]",
527
+ "lstrip": false,
528
+ "normalized": false,
529
+ "rstrip": false,
530
+ "single_word": false,
531
+ "special": false
532
+ },
533
+ "92415": {
534
+ "content": "[UNUSED_TOKEN_18]",
535
+ "lstrip": false,
536
+ "normalized": false,
537
+ "rstrip": false,
538
+ "single_word": false,
539
+ "special": false
540
+ },
541
+ "92416": {
542
+ "content": "[UNUSED_TOKEN_19]",
543
+ "lstrip": false,
544
+ "normalized": false,
545
+ "rstrip": false,
546
+ "single_word": false,
547
+ "special": false
548
+ },
549
+ "92417": {
550
+ "content": "[UNUSED_TOKEN_20]",
551
+ "lstrip": false,
552
+ "normalized": false,
553
+ "rstrip": false,
554
+ "single_word": false,
555
+ "special": false
556
+ },
557
+ "92418": {
558
+ "content": "[UNUSED_TOKEN_21]",
559
+ "lstrip": false,
560
+ "normalized": false,
561
+ "rstrip": false,
562
+ "single_word": false,
563
+ "special": false
564
+ },
565
+ "92419": {
566
+ "content": "[UNUSED_TOKEN_22]",
567
+ "lstrip": false,
568
+ "normalized": false,
569
+ "rstrip": false,
570
+ "single_word": false,
571
+ "special": false
572
+ },
573
+ "92420": {
574
+ "content": "[UNUSED_TOKEN_23]",
575
+ "lstrip": false,
576
+ "normalized": false,
577
+ "rstrip": false,
578
+ "single_word": false,
579
+ "special": false
580
+ },
581
+ "92421": {
582
+ "content": "[UNUSED_TOKEN_24]",
583
+ "lstrip": false,
584
+ "normalized": false,
585
+ "rstrip": false,
586
+ "single_word": false,
587
+ "special": false
588
+ },
589
+ "92422": {
590
+ "content": "[UNUSED_TOKEN_25]",
591
+ "lstrip": false,
592
+ "normalized": false,
593
+ "rstrip": false,
594
+ "single_word": false,
595
+ "special": false
596
+ },
597
+ "92423": {
598
+ "content": "[UNUSED_TOKEN_26]",
599
+ "lstrip": false,
600
+ "normalized": false,
601
+ "rstrip": false,
602
+ "single_word": false,
603
+ "special": false
604
+ },
605
+ "92424": {
606
+ "content": "[UNUSED_TOKEN_27]",
607
+ "lstrip": false,
608
+ "normalized": false,
609
+ "rstrip": false,
610
+ "single_word": false,
611
+ "special": false
612
+ },
613
+ "92425": {
614
+ "content": "[UNUSED_TOKEN_28]",
615
+ "lstrip": false,
616
+ "normalized": false,
617
+ "rstrip": false,
618
+ "single_word": false,
619
+ "special": false
620
+ },
621
+ "92426": {
622
+ "content": "[UNUSED_TOKEN_29]",
623
+ "lstrip": false,
624
+ "normalized": false,
625
+ "rstrip": false,
626
+ "single_word": false,
627
+ "special": false
628
+ },
629
+ "92427": {
630
+ "content": "[UNUSED_TOKEN_30]",
631
+ "lstrip": false,
632
+ "normalized": false,
633
+ "rstrip": false,
634
+ "single_word": false,
635
+ "special": false
636
+ },
637
+ "92428": {
638
+ "content": "[UNUSED_TOKEN_31]",
639
+ "lstrip": false,
640
+ "normalized": false,
641
+ "rstrip": false,
642
+ "single_word": false,
643
+ "special": false
644
+ },
645
+ "92429": {
646
+ "content": "[UNUSED_TOKEN_32]",
647
+ "lstrip": false,
648
+ "normalized": false,
649
+ "rstrip": false,
650
+ "single_word": false,
651
+ "special": false
652
+ },
653
+ "92430": {
654
+ "content": "[UNUSED_TOKEN_33]",
655
+ "lstrip": false,
656
+ "normalized": false,
657
+ "rstrip": false,
658
+ "single_word": false,
659
+ "special": false
660
+ },
661
+ "92431": {
662
+ "content": "[UNUSED_TOKEN_34]",
663
+ "lstrip": false,
664
+ "normalized": false,
665
+ "rstrip": false,
666
+ "single_word": false,
667
+ "special": false
668
+ },
669
+ "92432": {
670
+ "content": "[UNUSED_TOKEN_35]",
671
+ "lstrip": false,
672
+ "normalized": false,
673
+ "rstrip": false,
674
+ "single_word": false,
675
+ "special": false
676
+ },
677
+ "92433": {
678
+ "content": "[UNUSED_TOKEN_36]",
679
+ "lstrip": false,
680
+ "normalized": false,
681
+ "rstrip": false,
682
+ "single_word": false,
683
+ "special": false
684
+ },
685
+ "92434": {
686
+ "content": "[UNUSED_TOKEN_37]",
687
+ "lstrip": false,
688
+ "normalized": false,
689
+ "rstrip": false,
690
+ "single_word": false,
691
+ "special": false
692
+ },
693
+ "92435": {
694
+ "content": "[UNUSED_TOKEN_38]",
695
+ "lstrip": false,
696
+ "normalized": false,
697
+ "rstrip": false,
698
+ "single_word": false,
699
+ "special": false
700
+ },
701
+ "92436": {
702
+ "content": "[UNUSED_TOKEN_39]",
703
+ "lstrip": false,
704
+ "normalized": false,
705
+ "rstrip": false,
706
+ "single_word": false,
707
+ "special": false
708
+ },
709
+ "92437": {
710
+ "content": "[UNUSED_TOKEN_40]",
711
+ "lstrip": false,
712
+ "normalized": false,
713
+ "rstrip": false,
714
+ "single_word": false,
715
+ "special": false
716
+ },
717
+ "92438": {
718
+ "content": "[UNUSED_TOKEN_41]",
719
+ "lstrip": false,
720
+ "normalized": false,
721
+ "rstrip": false,
722
+ "single_word": false,
723
+ "special": false
724
+ },
725
+ "92439": {
726
+ "content": "[UNUSED_TOKEN_42]",
727
+ "lstrip": false,
728
+ "normalized": false,
729
+ "rstrip": false,
730
+ "single_word": false,
731
+ "special": false
732
+ },
733
+ "92440": {
734
+ "content": "[UNUSED_TOKEN_43]",
735
+ "lstrip": false,
736
+ "normalized": false,
737
+ "rstrip": false,
738
+ "single_word": false,
739
+ "special": false
740
+ },
741
+ "92441": {
742
+ "content": "[UNUSED_TOKEN_44]",
743
+ "lstrip": false,
744
+ "normalized": false,
745
+ "rstrip": false,
746
+ "single_word": false,
747
+ "special": false
748
+ },
749
+ "92442": {
750
+ "content": "[UNUSED_TOKEN_45]",
751
+ "lstrip": false,
752
+ "normalized": false,
753
+ "rstrip": false,
754
+ "single_word": false,
755
+ "special": false
756
+ },
757
+ "92443": {
758
+ "content": "[UNUSED_TOKEN_46]",
759
+ "lstrip": false,
760
+ "normalized": false,
761
+ "rstrip": false,
762
+ "single_word": false,
763
+ "special": false
764
+ },
765
+ "92444": {
766
+ "content": "[UNUSED_TOKEN_47]",
767
+ "lstrip": false,
768
+ "normalized": false,
769
+ "rstrip": false,
770
+ "single_word": false,
771
+ "special": false
772
+ },
773
+ "92445": {
774
+ "content": "[UNUSED_TOKEN_48]",
775
+ "lstrip": false,
776
+ "normalized": false,
777
+ "rstrip": false,
778
+ "single_word": false,
779
+ "special": false
780
+ },
781
+ "92446": {
782
+ "content": "[UNUSED_TOKEN_49]",
783
+ "lstrip": false,
784
+ "normalized": false,
785
+ "rstrip": false,
786
+ "single_word": false,
787
+ "special": false
788
+ },
789
+ "92447": {
790
+ "content": "[UNUSED_TOKEN_50]",
791
+ "lstrip": false,
792
+ "normalized": false,
793
+ "rstrip": false,
794
+ "single_word": false,
795
+ "special": false
796
+ },
797
+ "92448": {
798
+ "content": "[UNUSED_TOKEN_51]",
799
+ "lstrip": false,
800
+ "normalized": false,
801
+ "rstrip": false,
802
+ "single_word": false,
803
+ "special": false
804
+ },
805
+ "92449": {
806
+ "content": "[UNUSED_TOKEN_52]",
807
+ "lstrip": false,
808
+ "normalized": false,
809
+ "rstrip": false,
810
+ "single_word": false,
811
+ "special": false
812
+ },
813
+ "92450": {
814
+ "content": "[UNUSED_TOKEN_53]",
815
+ "lstrip": false,
816
+ "normalized": false,
817
+ "rstrip": false,
818
+ "single_word": false,
819
+ "special": false
820
+ },
821
+ "92451": {
822
+ "content": "[UNUSED_TOKEN_54]",
823
+ "lstrip": false,
824
+ "normalized": false,
825
+ "rstrip": false,
826
+ "single_word": false,
827
+ "special": false
828
+ },
829
+ "92452": {
830
+ "content": "[UNUSED_TOKEN_55]",
831
+ "lstrip": false,
832
+ "normalized": false,
833
+ "rstrip": false,
834
+ "single_word": false,
835
+ "special": false
836
+ },
837
+ "92453": {
838
+ "content": "[UNUSED_TOKEN_56]",
839
+ "lstrip": false,
840
+ "normalized": false,
841
+ "rstrip": false,
842
+ "single_word": false,
843
+ "special": false
844
+ },
845
+ "92454": {
846
+ "content": "[UNUSED_TOKEN_57]",
847
+ "lstrip": false,
848
+ "normalized": false,
849
+ "rstrip": false,
850
+ "single_word": false,
851
+ "special": false
852
+ },
853
+ "92455": {
854
+ "content": "[UNUSED_TOKEN_58]",
855
+ "lstrip": false,
856
+ "normalized": false,
857
+ "rstrip": false,
858
+ "single_word": false,
859
+ "special": false
860
+ },
861
+ "92456": {
862
+ "content": "[UNUSED_TOKEN_59]",
863
+ "lstrip": false,
864
+ "normalized": false,
865
+ "rstrip": false,
866
+ "single_word": false,
867
+ "special": false
868
+ },
869
+ "92457": {
870
+ "content": "[UNUSED_TOKEN_60]",
871
+ "lstrip": false,
872
+ "normalized": false,
873
+ "rstrip": false,
874
+ "single_word": false,
875
+ "special": false
876
+ },
877
+ "92458": {
878
+ "content": "[UNUSED_TOKEN_61]",
879
+ "lstrip": false,
880
+ "normalized": false,
881
+ "rstrip": false,
882
+ "single_word": false,
883
+ "special": false
884
+ },
885
+ "92459": {
886
+ "content": "[UNUSED_TOKEN_62]",
887
+ "lstrip": false,
888
+ "normalized": false,
889
+ "rstrip": false,
890
+ "single_word": false,
891
+ "special": false
892
+ },
893
+ "92460": {
894
+ "content": "[UNUSED_TOKEN_63]",
895
+ "lstrip": false,
896
+ "normalized": false,
897
+ "rstrip": false,
898
+ "single_word": false,
899
+ "special": false
900
+ },
901
+ "92461": {
902
+ "content": "[UNUSED_TOKEN_64]",
903
+ "lstrip": false,
904
+ "normalized": false,
905
+ "rstrip": false,
906
+ "single_word": false,
907
+ "special": false
908
+ },
909
+ "92462": {
910
+ "content": "[UNUSED_TOKEN_65]",
911
+ "lstrip": false,
912
+ "normalized": false,
913
+ "rstrip": false,
914
+ "single_word": false,
915
+ "special": false
916
+ },
917
+ "92463": {
918
+ "content": "[UNUSED_TOKEN_66]",
919
+ "lstrip": false,
920
+ "normalized": false,
921
+ "rstrip": false,
922
+ "single_word": false,
923
+ "special": false
924
+ },
925
+ "92464": {
926
+ "content": "[UNUSED_TOKEN_67]",
927
+ "lstrip": false,
928
+ "normalized": false,
929
+ "rstrip": false,
930
+ "single_word": false,
931
+ "special": false
932
+ },
933
+ "92465": {
934
+ "content": "[UNUSED_TOKEN_68]",
935
+ "lstrip": false,
936
+ "normalized": false,
937
+ "rstrip": false,
938
+ "single_word": false,
939
+ "special": false
940
+ },
941
+ "92466": {
942
+ "content": "[UNUSED_TOKEN_69]",
943
+ "lstrip": false,
944
+ "normalized": false,
945
+ "rstrip": false,
946
+ "single_word": false,
947
+ "special": false
948
+ },
949
+ "92467": {
950
+ "content": "[UNUSED_TOKEN_70]",
951
+ "lstrip": false,
952
+ "normalized": false,
953
+ "rstrip": false,
954
+ "single_word": false,
955
+ "special": false
956
+ },
957
+ "92468": {
958
+ "content": "[UNUSED_TOKEN_71]",
959
+ "lstrip": false,
960
+ "normalized": false,
961
+ "rstrip": false,
962
+ "single_word": false,
963
+ "special": false
964
+ },
965
+ "92469": {
966
+ "content": "[UNUSED_TOKEN_72]",
967
+ "lstrip": false,
968
+ "normalized": false,
969
+ "rstrip": false,
970
+ "single_word": false,
971
+ "special": false
972
+ },
973
+ "92470": {
974
+ "content": "[UNUSED_TOKEN_73]",
975
+ "lstrip": false,
976
+ "normalized": false,
977
+ "rstrip": false,
978
+ "single_word": false,
979
+ "special": false
980
+ },
981
+ "92471": {
982
+ "content": "[UNUSED_TOKEN_74]",
983
+ "lstrip": false,
984
+ "normalized": false,
985
+ "rstrip": false,
986
+ "single_word": false,
987
+ "special": false
988
+ },
989
+ "92472": {
990
+ "content": "[UNUSED_TOKEN_75]",
991
+ "lstrip": false,
992
+ "normalized": false,
993
+ "rstrip": false,
994
+ "single_word": false,
995
+ "special": false
996
+ },
997
+ "92473": {
998
+ "content": "[UNUSED_TOKEN_76]",
999
+ "lstrip": false,
1000
+ "normalized": false,
1001
+ "rstrip": false,
1002
+ "single_word": false,
1003
+ "special": false
1004
+ },
1005
+ "92474": {
1006
+ "content": "[UNUSED_TOKEN_77]",
1007
+ "lstrip": false,
1008
+ "normalized": false,
1009
+ "rstrip": false,
1010
+ "single_word": false,
1011
+ "special": false
1012
+ },
1013
+ "92475": {
1014
+ "content": "[UNUSED_TOKEN_78]",
1015
+ "lstrip": false,
1016
+ "normalized": false,
1017
+ "rstrip": false,
1018
+ "single_word": false,
1019
+ "special": false
1020
+ },
1021
+ "92476": {
1022
+ "content": "[UNUSED_TOKEN_79]",
1023
+ "lstrip": false,
1024
+ "normalized": false,
1025
+ "rstrip": false,
1026
+ "single_word": false,
1027
+ "special": false
1028
+ },
1029
+ "92477": {
1030
+ "content": "[UNUSED_TOKEN_80]",
1031
+ "lstrip": false,
1032
+ "normalized": false,
1033
+ "rstrip": false,
1034
+ "single_word": false,
1035
+ "special": false
1036
+ },
1037
+ "92478": {
1038
+ "content": "[UNUSED_TOKEN_81]",
1039
+ "lstrip": false,
1040
+ "normalized": false,
1041
+ "rstrip": false,
1042
+ "single_word": false,
1043
+ "special": false
1044
+ },
1045
+ "92479": {
1046
+ "content": "[UNUSED_TOKEN_82]",
1047
+ "lstrip": false,
1048
+ "normalized": false,
1049
+ "rstrip": false,
1050
+ "single_word": false,
1051
+ "special": false
1052
+ },
1053
+ "92480": {
1054
+ "content": "[UNUSED_TOKEN_83]",
1055
+ "lstrip": false,
1056
+ "normalized": false,
1057
+ "rstrip": false,
1058
+ "single_word": false,
1059
+ "special": false
1060
+ },
1061
+ "92481": {
1062
+ "content": "[UNUSED_TOKEN_84]",
1063
+ "lstrip": false,
1064
+ "normalized": false,
1065
+ "rstrip": false,
1066
+ "single_word": false,
1067
+ "special": false
1068
+ },
1069
+ "92482": {
1070
+ "content": "[UNUSED_TOKEN_85]",
1071
+ "lstrip": false,
1072
+ "normalized": false,
1073
+ "rstrip": false,
1074
+ "single_word": false,
1075
+ "special": false
1076
+ },
1077
+ "92483": {
1078
+ "content": "[UNUSED_TOKEN_86]",
1079
+ "lstrip": false,
1080
+ "normalized": false,
1081
+ "rstrip": false,
1082
+ "single_word": false,
1083
+ "special": false
1084
+ },
1085
+ "92484": {
1086
+ "content": "[UNUSED_TOKEN_87]",
1087
+ "lstrip": false,
1088
+ "normalized": false,
1089
+ "rstrip": false,
1090
+ "single_word": false,
1091
+ "special": false
1092
+ },
1093
+ "92485": {
1094
+ "content": "[UNUSED_TOKEN_88]",
1095
+ "lstrip": false,
1096
+ "normalized": false,
1097
+ "rstrip": false,
1098
+ "single_word": false,
1099
+ "special": false
1100
+ },
1101
+ "92486": {
1102
+ "content": "[UNUSED_TOKEN_89]",
1103
+ "lstrip": false,
1104
+ "normalized": false,
1105
+ "rstrip": false,
1106
+ "single_word": false,
1107
+ "special": false
1108
+ },
1109
+ "92487": {
1110
+ "content": "[UNUSED_TOKEN_90]",
1111
+ "lstrip": false,
1112
+ "normalized": false,
1113
+ "rstrip": false,
1114
+ "single_word": false,
1115
+ "special": false
1116
+ },
1117
+ "92488": {
1118
+ "content": "[UNUSED_TOKEN_91]",
1119
+ "lstrip": false,
1120
+ "normalized": false,
1121
+ "rstrip": false,
1122
+ "single_word": false,
1123
+ "special": false
1124
+ },
1125
+ "92489": {
1126
+ "content": "[UNUSED_TOKEN_92]",
1127
+ "lstrip": false,
1128
+ "normalized": false,
1129
+ "rstrip": false,
1130
+ "single_word": false,
1131
+ "special": false
1132
+ },
1133
+ "92490": {
1134
+ "content": "[UNUSED_TOKEN_93]",
1135
+ "lstrip": false,
1136
+ "normalized": false,
1137
+ "rstrip": false,
1138
+ "single_word": false,
1139
+ "special": false
1140
+ },
1141
+ "92491": {
1142
+ "content": "[UNUSED_TOKEN_94]",
1143
+ "lstrip": false,
1144
+ "normalized": false,
1145
+ "rstrip": false,
1146
+ "single_word": false,
1147
+ "special": false
1148
+ },
1149
+ "92492": {
1150
+ "content": "[UNUSED_TOKEN_95]",
1151
+ "lstrip": false,
1152
+ "normalized": false,
1153
+ "rstrip": false,
1154
+ "single_word": false,
1155
+ "special": false
1156
+ },
1157
+ "92493": {
1158
+ "content": "[UNUSED_TOKEN_96]",
1159
+ "lstrip": false,
1160
+ "normalized": false,
1161
+ "rstrip": false,
1162
+ "single_word": false,
1163
+ "special": false
1164
+ },
1165
+ "92494": {
1166
+ "content": "[UNUSED_TOKEN_97]",
1167
+ "lstrip": false,
1168
+ "normalized": false,
1169
+ "rstrip": false,
1170
+ "single_word": false,
1171
+ "special": false
1172
+ },
1173
+ "92495": {
1174
+ "content": "[UNUSED_TOKEN_98]",
1175
+ "lstrip": false,
1176
+ "normalized": false,
1177
+ "rstrip": false,
1178
+ "single_word": false,
1179
+ "special": false
1180
+ },
1181
+ "92496": {
1182
+ "content": "[UNUSED_TOKEN_99]",
1183
+ "lstrip": false,
1184
+ "normalized": false,
1185
+ "rstrip": false,
1186
+ "single_word": false,
1187
+ "special": false
1188
+ },
1189
+ "92497": {
1190
+ "content": "[UNUSED_TOKEN_100]",
1191
+ "lstrip": false,
1192
+ "normalized": false,
1193
+ "rstrip": false,
1194
+ "single_word": false,
1195
+ "special": false
1196
+ },
1197
+ "92498": {
1198
+ "content": "[UNUSED_TOKEN_101]",
1199
+ "lstrip": false,
1200
+ "normalized": false,
1201
+ "rstrip": false,
1202
+ "single_word": false,
1203
+ "special": false
1204
+ },
1205
+ "92499": {
1206
+ "content": "[UNUSED_TOKEN_102]",
1207
+ "lstrip": false,
1208
+ "normalized": false,
1209
+ "rstrip": false,
1210
+ "single_word": false,
1211
+ "special": false
1212
+ },
1213
+ "92500": {
1214
+ "content": "[UNUSED_TOKEN_103]",
1215
+ "lstrip": false,
1216
+ "normalized": false,
1217
+ "rstrip": false,
1218
+ "single_word": false,
1219
+ "special": false
1220
+ },
1221
+ "92501": {
1222
+ "content": "[UNUSED_TOKEN_104]",
1223
+ "lstrip": false,
1224
+ "normalized": false,
1225
+ "rstrip": false,
1226
+ "single_word": false,
1227
+ "special": false
1228
+ },
1229
+ "92502": {
1230
+ "content": "[UNUSED_TOKEN_105]",
1231
+ "lstrip": false,
1232
+ "normalized": false,
1233
+ "rstrip": false,
1234
+ "single_word": false,
1235
+ "special": false
1236
+ },
1237
+ "92503": {
1238
+ "content": "[UNUSED_TOKEN_106]",
1239
+ "lstrip": false,
1240
+ "normalized": false,
1241
+ "rstrip": false,
1242
+ "single_word": false,
1243
+ "special": false
1244
+ },
1245
+ "92504": {
1246
+ "content": "[UNUSED_TOKEN_107]",
1247
+ "lstrip": false,
1248
+ "normalized": false,
1249
+ "rstrip": false,
1250
+ "single_word": false,
1251
+ "special": false
1252
+ },
1253
+ "92505": {
1254
+ "content": "[UNUSED_TOKEN_108]",
1255
+ "lstrip": false,
1256
+ "normalized": false,
1257
+ "rstrip": false,
1258
+ "single_word": false,
1259
+ "special": false
1260
+ },
1261
+ "92506": {
1262
+ "content": "[UNUSED_TOKEN_109]",
1263
+ "lstrip": false,
1264
+ "normalized": false,
1265
+ "rstrip": false,
1266
+ "single_word": false,
1267
+ "special": false
1268
+ },
1269
+ "92507": {
1270
+ "content": "[UNUSED_TOKEN_110]",
1271
+ "lstrip": false,
1272
+ "normalized": false,
1273
+ "rstrip": false,
1274
+ "single_word": false,
1275
+ "special": false
1276
+ },
1277
+ "92508": {
1278
+ "content": "[UNUSED_TOKEN_111]",
1279
+ "lstrip": false,
1280
+ "normalized": false,
1281
+ "rstrip": false,
1282
+ "single_word": false,
1283
+ "special": false
1284
+ },
1285
+ "92509": {
1286
+ "content": "[UNUSED_TOKEN_112]",
1287
+ "lstrip": false,
1288
+ "normalized": false,
1289
+ "rstrip": false,
1290
+ "single_word": false,
1291
+ "special": false
1292
+ },
1293
+ "92510": {
1294
+ "content": "[UNUSED_TOKEN_113]",
1295
+ "lstrip": false,
1296
+ "normalized": false,
1297
+ "rstrip": false,
1298
+ "single_word": false,
1299
+ "special": false
1300
+ },
1301
+ "92511": {
1302
+ "content": "[UNUSED_TOKEN_114]",
1303
+ "lstrip": false,
1304
+ "normalized": false,
1305
+ "rstrip": false,
1306
+ "single_word": false,
1307
+ "special": false
1308
+ },
1309
+ "92512": {
1310
+ "content": "[UNUSED_TOKEN_115]",
1311
+ "lstrip": false,
1312
+ "normalized": false,
1313
+ "rstrip": false,
1314
+ "single_word": false,
1315
+ "special": false
1316
+ },
1317
+ "92513": {
1318
+ "content": "[UNUSED_TOKEN_116]",
1319
+ "lstrip": false,
1320
+ "normalized": false,
1321
+ "rstrip": false,
1322
+ "single_word": false,
1323
+ "special": false
1324
+ },
1325
+ "92514": {
1326
+ "content": "[UNUSED_TOKEN_117]",
1327
+ "lstrip": false,
1328
+ "normalized": false,
1329
+ "rstrip": false,
1330
+ "single_word": false,
1331
+ "special": false
1332
+ },
1333
+ "92515": {
1334
+ "content": "[UNUSED_TOKEN_118]",
1335
+ "lstrip": false,
1336
+ "normalized": false,
1337
+ "rstrip": false,
1338
+ "single_word": false,
1339
+ "special": false
1340
+ },
1341
+ "92516": {
1342
+ "content": "[UNUSED_TOKEN_119]",
1343
+ "lstrip": false,
1344
+ "normalized": false,
1345
+ "rstrip": false,
1346
+ "single_word": false,
1347
+ "special": false
1348
+ },
1349
+ "92517": {
1350
+ "content": "[UNUSED_TOKEN_120]",
1351
+ "lstrip": false,
1352
+ "normalized": false,
1353
+ "rstrip": false,
1354
+ "single_word": false,
1355
+ "special": false
1356
+ },
1357
+ "92518": {
1358
+ "content": "[UNUSED_TOKEN_121]",
1359
+ "lstrip": false,
1360
+ "normalized": false,
1361
+ "rstrip": false,
1362
+ "single_word": false,
1363
+ "special": false
1364
+ },
1365
+ "92519": {
1366
+ "content": "[UNUSED_TOKEN_122]",
1367
+ "lstrip": false,
1368
+ "normalized": false,
1369
+ "rstrip": false,
1370
+ "single_word": false,
1371
+ "special": false
1372
+ },
1373
+ "92520": {
1374
+ "content": "[UNUSED_TOKEN_123]",
1375
+ "lstrip": false,
1376
+ "normalized": false,
1377
+ "rstrip": false,
1378
+ "single_word": false,
1379
+ "special": false
1380
+ },
1381
+ "92521": {
1382
+ "content": "[UNUSED_TOKEN_124]",
1383
+ "lstrip": false,
1384
+ "normalized": false,
1385
+ "rstrip": false,
1386
+ "single_word": false,
1387
+ "special": false
1388
+ },
1389
+ "92522": {
1390
+ "content": "[UNUSED_TOKEN_125]",
1391
+ "lstrip": false,
1392
+ "normalized": false,
1393
+ "rstrip": false,
1394
+ "single_word": false,
1395
+ "special": false
1396
+ },
1397
+ "92523": {
1398
+ "content": "[UNUSED_TOKEN_126]",
1399
+ "lstrip": false,
1400
+ "normalized": false,
1401
+ "rstrip": false,
1402
+ "single_word": false,
1403
+ "special": false
1404
+ },
1405
+ "92524": {
1406
+ "content": "[UNUSED_TOKEN_127]",
1407
+ "lstrip": false,
1408
+ "normalized": false,
1409
+ "rstrip": false,
1410
+ "single_word": false,
1411
+ "special": false
1412
+ },
1413
+ "92525": {
1414
+ "content": "[UNUSED_TOKEN_128]",
1415
+ "lstrip": false,
1416
+ "normalized": false,
1417
+ "rstrip": false,
1418
+ "single_word": false,
1419
+ "special": false
1420
+ },
1421
+ "92526": {
1422
+ "content": "[UNUSED_TOKEN_129]",
1423
+ "lstrip": false,
1424
+ "normalized": false,
1425
+ "rstrip": false,
1426
+ "single_word": false,
1427
+ "special": false
1428
+ },
1429
+ "92527": {
1430
+ "content": "[UNUSED_TOKEN_130]",
1431
+ "lstrip": false,
1432
+ "normalized": false,
1433
+ "rstrip": false,
1434
+ "single_word": false,
1435
+ "special": false
1436
+ },
1437
+ "92528": {
1438
+ "content": "[UNUSED_TOKEN_131]",
1439
+ "lstrip": false,
1440
+ "normalized": false,
1441
+ "rstrip": false,
1442
+ "single_word": false,
1443
+ "special": false
1444
+ },
1445
+ "92529": {
1446
+ "content": "[UNUSED_TOKEN_132]",
1447
+ "lstrip": false,
1448
+ "normalized": false,
1449
+ "rstrip": false,
1450
+ "single_word": false,
1451
+ "special": false
1452
+ },
1453
+ "92530": {
1454
+ "content": "[UNUSED_TOKEN_133]",
1455
+ "lstrip": false,
1456
+ "normalized": false,
1457
+ "rstrip": false,
1458
+ "single_word": false,
1459
+ "special": false
1460
+ },
1461
+ "92531": {
1462
+ "content": "[UNUSED_TOKEN_134]",
1463
+ "lstrip": false,
1464
+ "normalized": false,
1465
+ "rstrip": false,
1466
+ "single_word": false,
1467
+ "special": false
1468
+ },
1469
+ "92532": {
1470
+ "content": "[UNUSED_TOKEN_135]",
1471
+ "lstrip": false,
1472
+ "normalized": false,
1473
+ "rstrip": false,
1474
+ "single_word": false,
1475
+ "special": false
1476
+ },
1477
+ "92533": {
1478
+ "content": "[UNUSED_TOKEN_136]",
1479
+ "lstrip": false,
1480
+ "normalized": false,
1481
+ "rstrip": false,
1482
+ "single_word": false,
1483
+ "special": false
1484
+ },
1485
+ "92534": {
1486
+ "content": "[UNUSED_TOKEN_137]",
1487
+ "lstrip": false,
1488
+ "normalized": false,
1489
+ "rstrip": false,
1490
+ "single_word": false,
1491
+ "special": false
1492
+ },
1493
+ "92535": {
1494
+ "content": "[UNUSED_TOKEN_138]",
1495
+ "lstrip": false,
1496
+ "normalized": false,
1497
+ "rstrip": false,
1498
+ "single_word": false,
1499
+ "special": false
1500
+ },
1501
+ "92536": {
1502
+ "content": "[UNUSED_TOKEN_139]",
1503
+ "lstrip": false,
1504
+ "normalized": false,
1505
+ "rstrip": false,
1506
+ "single_word": false,
1507
+ "special": false
1508
+ },
1509
+ "92537": {
1510
+ "content": "[UNUSED_TOKEN_140]",
1511
+ "lstrip": false,
1512
+ "normalized": false,
1513
+ "rstrip": false,
1514
+ "single_word": false,
1515
+ "special": false
1516
+ },
1517
+ "92538": {
1518
+ "content": "<|plugin|>",
1519
+ "lstrip": false,
1520
+ "normalized": false,
1521
+ "rstrip": false,
1522
+ "single_word": false,
1523
+ "special": true
1524
+ },
1525
+ "92539": {
1526
+ "content": "<|interpreter|>",
1527
+ "lstrip": false,
1528
+ "normalized": false,
1529
+ "rstrip": false,
1530
+ "single_word": false,
1531
+ "special": true
1532
+ },
1533
+ "92540": {
1534
+ "content": "<|action_end|>",
1535
+ "lstrip": false,
1536
+ "normalized": false,
1537
+ "rstrip": false,
1538
+ "single_word": false,
1539
+ "special": true
1540
+ },
1541
+ "92541": {
1542
+ "content": "<|action_start|>",
1543
+ "lstrip": false,
1544
+ "normalized": false,
1545
+ "rstrip": false,
1546
+ "single_word": false,
1547
+ "special": true
1548
+ },
1549
+ "92542": {
1550
+ "content": "<|im_end|>",
1551
+ "lstrip": false,
1552
+ "normalized": false,
1553
+ "rstrip": false,
1554
+ "single_word": false,
1555
+ "special": true
1556
+ },
1557
+ "92543": {
1558
+ "content": "<|im_start|>",
1559
+ "lstrip": false,
1560
+ "normalized": false,
1561
+ "rstrip": false,
1562
+ "single_word": false,
1563
+ "special": true
1564
+ },
1565
+ "92544": {
1566
+ "content": "[UNUSED_TOKEN_141]",
1567
+ "lstrip": false,
1568
+ "normalized": false,
1569
+ "rstrip": false,
1570
+ "single_word": false,
1571
+ "special": false
1572
+ },
1573
+ "92545": {
1574
+ "content": "[UNUSED_TOKEN_142]",
1575
+ "lstrip": false,
1576
+ "normalized": false,
1577
+ "rstrip": false,
1578
+ "single_word": false,
1579
+ "special": false
1580
+ },
1581
+ "92546": {
1582
+ "content": "[UNUSED_TOKEN_143]",
1583
+ "lstrip": false,
1584
+ "normalized": false,
1585
+ "rstrip": false,
1586
+ "single_word": false,
1587
+ "special": false
1588
+ },
1589
+ "92547": {
1590
+ "content": "[UNUSED_TOKEN_144]",
1591
+ "lstrip": false,
1592
+ "normalized": false,
1593
+ "rstrip": false,
1594
+ "single_word": false,
1595
+ "special": false
1596
+ },
1597
+ "92548": {
1598
+ "content": "[UNUSED_TOKEN_145]",
1599
+ "lstrip": false,
1600
+ "normalized": false,
1601
+ "rstrip": false,
1602
+ "single_word": false,
1603
+ "special": false
1604
+ },
1605
+ "92549": {
1606
+ "content": "[UNUSED_TOKEN_146]",
1607
+ "lstrip": false,
1608
+ "normalized": false,
1609
+ "rstrip": false,
1610
+ "single_word": false,
1611
+ "special": false
1612
+ }
1613
+ },
1614
+ "additional_special_tokens": [
1615
+ "<|im_start|>",
1616
+ "<|im_end|>",
1617
+ "<|action_start|>",
1618
+ "<|action_end|>",
1619
+ "<|interpreter|>",
1620
+ "<|plugin|>"
1621
+ ],
1622
+ "auto_map": {
1623
+ "AutoTokenizer": [
1624
+ "tokenization_internlm2.InternLM2Tokenizer",
1625
+ "tokenization_internlm2_fast.InternLM2TokenizerFast"
1626
+ ]
1627
+ },
1628
+ "bos_token": "<s>",
1629
+ "chat_template": "{{ '<s>' }}{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ '<|im_start|>system\n' + system_message + '<|im_end|>\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\n' + content + '<|im_end|>\n<|im_start|>assistant\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|im_end|>\n' }}{% endif %}{% endfor %}",
1630
+ "clean_up_tokenization_spaces": false,
1631
+ "decode_with_prefix_space": false,
1632
+ "eos_token": "</s>",
1633
+ "model_max_length": 1000000000000000019884624838656,
1634
+ "pad_token": "</s>",
1635
+ "padding_side": "right",
1636
+ "sp_model_kwargs": null,
1637
+ "split_special_tokens": false,
1638
+ "tokenizer_class": "InternLM2Tokenizer",
1639
+ "unk_token": "<unk>"
1640
+ }
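
The tokenizer_config.json above registers the ChatML-style control tokens (`<|im_start|>`, `<|im_end|>`, `<|action_start|>`, `<|action_end|>`, `<|plugin|>`, `<|interpreter|>`) as special tokens and ships the chat template used for prompting. As an illustrative sketch only (not one of the uploaded files), the snippet below shows how that template might be rendered with the Hugging Face tokenizer; the base-model repo id and the example messages are assumptions, not part of this commit.

```python
# Hedged sketch: render the chat template defined in tokenizer_config.json.
# Assumes the tokenizer is loaded from the InternLM2.5 base repo; adjust the
# path if you load it from this adapter repository instead.
from transformers import AutoTokenizer

# trust_remote_code is required because auto_map points at the custom
# tokenization_internlm2(_fast) classes that ship with the base model.
tokenizer = AutoTokenizer.from_pretrained(
    "internlm/internlm2_5-7b-chat", trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# apply_chat_template uses the Jinja template above: it prepends <s>, wraps
# each turn in <|im_start|>role ... <|im_end|>, and already appends the
# "<|im_start|>assistant\n" header after the last user turn, so no extra
# generation prompt flag is needed with this particular template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```
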
trainer_state.json ADDED
@@ -0,0 +1,1733 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.5188916876574305,
5
+ "eval_steps": 100,
6
+ "global_step": 1000,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.012594458438287154,
13
+ "grad_norm": 1.2670824527740479,
14
+ "learning_rate": 4.999782569758238e-05,
15
+ "loss": 2.5898,
16
+ "num_input_tokens_seen": 17064,
17
+ "step": 5
18
+ },
19
+ {
20
+ "epoch": 0.02518891687657431,
21
+ "grad_norm": 1.8668310642242432,
22
+ "learning_rate": 4.9991303168536793e-05,
23
+ "loss": 2.69,
24
+ "num_input_tokens_seen": 31136,
25
+ "step": 10
26
+ },
27
+ {
28
+ "epoch": 0.037783375314861464,
29
+ "grad_norm": 1.4463753700256348,
30
+ "learning_rate": 4.9980433547419305e-05,
31
+ "loss": 2.2158,
32
+ "num_input_tokens_seen": 47520,
33
+ "step": 15
34
+ },
35
+ {
36
+ "epoch": 0.05037783375314862,
37
+ "grad_norm": 1.9465786218643188,
38
+ "learning_rate": 4.996521872493738e-05,
39
+ "loss": 1.8421,
40
+ "num_input_tokens_seen": 66432,
41
+ "step": 20
42
+ },
43
+ {
44
+ "epoch": 0.06297229219143577,
45
+ "grad_norm": 1.3088130950927734,
46
+ "learning_rate": 4.994566134762105e-05,
47
+ "loss": 2.0699,
48
+ "num_input_tokens_seen": 83544,
49
+ "step": 25
50
+ },
51
+ {
52
+ "epoch": 0.07556675062972293,
53
+ "grad_norm": 1.5839314460754395,
54
+ "learning_rate": 4.992176481736254e-05,
55
+ "loss": 1.4037,
56
+ "num_input_tokens_seen": 97680,
57
+ "step": 30
58
+ },
59
+ {
60
+ "epoch": 0.08816120906801007,
61
+ "grad_norm": 2.81331729888916,
62
+ "learning_rate": 4.989353329082452e-05,
63
+ "loss": 1.8734,
64
+ "num_input_tokens_seen": 115456,
65
+ "step": 35
66
+ },
67
+ {
68
+ "epoch": 0.10075566750629723,
69
+ "grad_norm": 1.999861240386963,
70
+ "learning_rate": 4.986097167871711e-05,
71
+ "loss": 1.584,
72
+ "num_input_tokens_seen": 132872,
73
+ "step": 40
74
+ },
75
+ {
76
+ "epoch": 0.11335012594458438,
77
+ "grad_norm": 1.38965904712677,
78
+ "learning_rate": 4.982408564494367e-05,
79
+ "loss": 1.0849,
80
+ "num_input_tokens_seen": 148296,
81
+ "step": 45
82
+ },
83
+ {
84
+ "epoch": 0.12594458438287154,
85
+ "grad_norm": 1.410891056060791,
86
+ "learning_rate": 4.978288160561558e-05,
87
+ "loss": 1.0605,
88
+ "num_input_tokens_seen": 167504,
89
+ "step": 50
90
+ },
91
+ {
92
+ "epoch": 0.1385390428211587,
93
+ "grad_norm": 1.8659024238586426,
94
+ "learning_rate": 4.9737366727936235e-05,
95
+ "loss": 1.596,
96
+ "num_input_tokens_seen": 182536,
97
+ "step": 55
98
+ },
99
+ {
100
+ "epoch": 0.15113350125944586,
101
+ "grad_norm": 0.825343906879425,
102
+ "learning_rate": 4.968754892895432e-05,
103
+ "loss": 1.1978,
104
+ "num_input_tokens_seen": 201568,
105
+ "step": 60
106
+ },
107
+ {
108
+ "epoch": 0.163727959697733,
109
+ "grad_norm": 1.3539998531341553,
110
+ "learning_rate": 4.963343687418669e-05,
111
+ "loss": 1.1129,
112
+ "num_input_tokens_seen": 217584,
113
+ "step": 65
114
+ },
115
+ {
116
+ "epoch": 0.17632241813602015,
117
+ "grad_norm": 1.6213319301605225,
118
+ "learning_rate": 4.9575039976111084e-05,
119
+ "loss": 1.3955,
120
+ "num_input_tokens_seen": 235280,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 0.1889168765743073,
125
+ "grad_norm": 1.7147520780563354,
126
+ "learning_rate": 4.9512368392528806e-05,
127
+ "loss": 1.029,
128
+ "num_input_tokens_seen": 253544,
129
+ "step": 75
130
+ },
131
+ {
132
+ "epoch": 0.20151133501259447,
133
+ "grad_norm": 1.0490851402282715,
134
+ "learning_rate": 4.9445433024797936e-05,
135
+ "loss": 1.1447,
136
+ "num_input_tokens_seen": 270112,
137
+ "step": 80
138
+ },
139
+ {
140
+ "epoch": 0.2141057934508816,
141
+ "grad_norm": 1.7510355710983276,
142
+ "learning_rate": 4.937424551593702e-05,
143
+ "loss": 1.0042,
144
+ "num_input_tokens_seen": 287296,
145
+ "step": 85
146
+ },
147
+ {
148
+ "epoch": 0.22670025188916876,
149
+ "grad_norm": 1.5315502882003784,
150
+ "learning_rate": 4.929881824859985e-05,
151
+ "loss": 1.3522,
152
+ "num_input_tokens_seen": 302304,
153
+ "step": 90
154
+ },
155
+ {
156
+ "epoch": 0.23929471032745592,
157
+ "grad_norm": 1.2641905546188354,
158
+ "learning_rate": 4.9219164342921634e-05,
159
+ "loss": 1.2398,
160
+ "num_input_tokens_seen": 317624,
161
+ "step": 95
162
+ },
163
+ {
164
+ "epoch": 0.2518891687657431,
165
+ "grad_norm": 1.116264820098877,
166
+ "learning_rate": 4.9135297654236724e-05,
167
+ "loss": 0.8733,
168
+ "num_input_tokens_seen": 332920,
169
+ "step": 100
170
+ },
171
+ {
172
+ "epoch": 0.2518891687657431,
173
+ "eval_accuracy": 0.7302086027292422,
174
+ "eval_loss": 1.238808035850525,
175
+ "eval_runtime": 533.8541,
176
+ "eval_samples_per_second": 0.332,
177
+ "eval_steps_per_second": 0.332,
178
+ "num_input_tokens_seen": 332920,
179
+ "step": 100
180
+ },
181
+ {
182
+ "epoch": 0.26448362720403024,
183
+ "grad_norm": 1.540872573852539,
184
+ "learning_rate": 4.904723277066864e-05,
185
+ "loss": 1.1192,
186
+ "num_input_tokens_seen": 347680,
187
+ "step": 105
188
+ },
189
+ {
190
+ "epoch": 0.2770780856423174,
191
+ "grad_norm": 1.2526932954788208,
192
+ "learning_rate": 4.8954985010592534e-05,
193
+ "loss": 1.1241,
194
+ "num_input_tokens_seen": 364952,
195
+ "step": 110
196
+ },
197
+ {
198
+ "epoch": 0.28967254408060455,
199
+ "grad_norm": 1.2870938777923584,
200
+ "learning_rate": 4.8858570419970616e-05,
201
+ "loss": 1.1762,
202
+ "num_input_tokens_seen": 383104,
203
+ "step": 115
204
+ },
205
+ {
206
+ "epoch": 0.3022670025188917,
207
+ "grad_norm": 1.6976910829544067,
208
+ "learning_rate": 4.875800576956108e-05,
209
+ "loss": 1.0041,
210
+ "num_input_tokens_seen": 401448,
211
+ "step": 120
212
+ },
213
+ {
214
+ "epoch": 0.3148614609571788,
215
+ "grad_norm": 1.2784565687179565,
216
+ "learning_rate": 4.865330855200094e-05,
217
+ "loss": 0.9044,
218
+ "num_input_tokens_seen": 419184,
219
+ "step": 125
220
+ },
221
+ {
222
+ "epoch": 0.327455919395466,
223
+ "grad_norm": 2.2473819255828857,
224
+ "learning_rate": 4.854449697876325e-05,
225
+ "loss": 1.5228,
226
+ "num_input_tokens_seen": 436976,
227
+ "step": 130
228
+ },
229
+ {
230
+ "epoch": 0.34005037783375314,
231
+ "grad_norm": 1.252365231513977,
232
+ "learning_rate": 4.843158997698936e-05,
233
+ "loss": 1.2336,
234
+ "num_input_tokens_seen": 455432,
235
+ "step": 135
236
+ },
237
+ {
238
+ "epoch": 0.3526448362720403,
239
+ "grad_norm": 1.5875591039657593,
240
+ "learning_rate": 4.831460718619661e-05,
241
+ "loss": 1.0085,
242
+ "num_input_tokens_seen": 473896,
243
+ "step": 140
244
+ },
245
+ {
246
+ "epoch": 0.36523929471032746,
247
+ "grad_norm": 1.8101682662963867,
248
+ "learning_rate": 4.819356895486219e-05,
249
+ "loss": 1.3333,
250
+ "num_input_tokens_seen": 488936,
251
+ "step": 145
252
+ },
253
+ {
254
+ "epoch": 0.3778337531486146,
255
+ "grad_norm": 1.2537903785705566,
256
+ "learning_rate": 4.806849633688363e-05,
257
+ "loss": 1.1225,
258
+ "num_input_tokens_seen": 507800,
259
+ "step": 150
260
+ },
261
+ {
262
+ "epoch": 0.3904282115869018,
263
+ "grad_norm": 1.2231327295303345,
264
+ "learning_rate": 4.7939411087916566e-05,
265
+ "loss": 1.0405,
266
+ "num_input_tokens_seen": 524704,
267
+ "step": 155
268
+ },
269
+ {
270
+ "epoch": 0.40302267002518893,
271
+ "grad_norm": 0.9885507822036743,
272
+ "learning_rate": 4.7806335661590526e-05,
273
+ "loss": 1.0268,
274
+ "num_input_tokens_seen": 542320,
275
+ "step": 160
276
+ },
277
+ {
278
+ "epoch": 0.4156171284634761,
279
+ "grad_norm": 1.273047685623169,
280
+ "learning_rate": 4.7669293205603196e-05,
281
+ "loss": 1.1071,
282
+ "num_input_tokens_seen": 560488,
283
+ "step": 165
284
+ },
285
+ {
286
+ "epoch": 0.4282115869017632,
287
+ "grad_norm": 1.8347108364105225,
288
+ "learning_rate": 4.752830755769405e-05,
289
+ "loss": 1.169,
290
+ "num_input_tokens_seen": 577680,
291
+ "step": 170
292
+ },
293
+ {
294
+ "epoch": 0.44080604534005036,
295
+ "grad_norm": 1.9854867458343506,
296
+ "learning_rate": 4.73834032414979e-05,
297
+ "loss": 0.9519,
298
+ "num_input_tokens_seen": 596208,
299
+ "step": 175
300
+ },
301
+ {
302
+ "epoch": 0.4534005037783375,
303
+ "grad_norm": 1.2936229705810547,
304
+ "learning_rate": 4.723460546227914e-05,
305
+ "loss": 1.2277,
306
+ "num_input_tokens_seen": 613120,
307
+ "step": 180
308
+ },
309
+ {
310
+ "epoch": 0.4659949622166247,
311
+ "grad_norm": 0.9850680232048035,
312
+ "learning_rate": 4.7081940102547463e-05,
313
+ "loss": 0.9588,
314
+ "num_input_tokens_seen": 630336,
315
+ "step": 185
316
+ },
317
+ {
318
+ "epoch": 0.47858942065491183,
319
+ "grad_norm": 1.643853783607483,
320
+ "learning_rate": 4.692543371755572e-05,
321
+ "loss": 1.0816,
322
+ "num_input_tokens_seen": 644488,
323
+ "step": 190
324
+ },
325
+ {
326
+ "epoch": 0.491183879093199,
327
+ "grad_norm": 1.7307960987091064,
328
+ "learning_rate": 4.6765113530680825e-05,
329
+ "loss": 0.8216,
330
+ "num_input_tokens_seen": 660432,
331
+ "step": 195
332
+ },
333
+ {
334
+ "epoch": 0.5037783375314862,
335
+ "grad_norm": 1.731389045715332,
336
+ "learning_rate": 4.660100742868836e-05,
337
+ "loss": 0.8872,
338
+ "num_input_tokens_seen": 680080,
339
+ "step": 200
340
+ },
341
+ {
342
+ "epoch": 0.5037783375314862,
343
+ "eval_accuracy": 0.7530999930068348,
344
+ "eval_loss": 1.129120111465454,
345
+ "eval_runtime": 534.7922,
346
+ "eval_samples_per_second": 0.331,
347
+ "eval_steps_per_second": 0.331,
348
+ "num_input_tokens_seen": 680080,
349
+ "step": 200
350
+ },
351
+ {
352
+ "epoch": 0.5163727959697733,
353
+ "grad_norm": 2.0154528617858887,
354
+ "learning_rate": 4.643314395688188e-05,
355
+ "loss": 1.0193,
356
+ "num_input_tokens_seen": 695688,
357
+ "step": 205
358
+ },
359
+ {
360
+ "epoch": 0.5289672544080605,
361
+ "grad_norm": 1.5947645902633667,
362
+ "learning_rate": 4.626155231413758e-05,
363
+ "loss": 1.1722,
364
+ "num_input_tokens_seen": 718192,
365
+ "step": 210
366
+ },
367
+ {
368
+ "epoch": 0.5415617128463476,
369
+ "grad_norm": 1.233261227607727,
370
+ "learning_rate": 4.608626234782536e-05,
371
+ "loss": 0.9888,
372
+ "num_input_tokens_seen": 733136,
373
+ "step": 215
374
+ },
375
+ {
376
+ "epoch": 0.5541561712846348,
377
+ "grad_norm": 2.006932497024536,
378
+ "learning_rate": 4.5907304548617024e-05,
379
+ "loss": 1.018,
380
+ "num_input_tokens_seen": 748016,
381
+ "step": 220
382
+ },
383
+ {
384
+ "epoch": 0.5667506297229219,
385
+ "grad_norm": 1.6554147005081177,
386
+ "learning_rate": 4.572471004518261e-05,
387
+ "loss": 0.8822,
388
+ "num_input_tokens_seen": 763264,
389
+ "step": 225
390
+ },
391
+ {
392
+ "epoch": 0.5793450881612091,
393
+ "grad_norm": 2.6174421310424805,
394
+ "learning_rate": 4.553851059877573e-05,
395
+ "loss": 1.1984,
396
+ "num_input_tokens_seen": 778632,
397
+ "step": 230
398
+ },
399
+ {
400
+ "epoch": 0.5919395465994962,
401
+ "grad_norm": 1.639770269393921,
402
+ "learning_rate": 4.534873859770892e-05,
403
+ "loss": 1.0492,
404
+ "num_input_tokens_seen": 799096,
405
+ "step": 235
406
+ },
407
+ {
408
+ "epoch": 0.6045340050377834,
409
+ "grad_norm": 1.863054871559143,
410
+ "learning_rate": 4.515542705171981e-05,
411
+ "loss": 0.7339,
412
+ "num_input_tokens_seen": 815240,
413
+ "step": 240
414
+ },
415
+ {
416
+ "epoch": 0.6171284634760705,
417
+ "grad_norm": 2.6653268337249756,
418
+ "learning_rate": 4.495860958622937e-05,
419
+ "loss": 1.2535,
420
+ "num_input_tokens_seen": 831320,
421
+ "step": 245
422
+ },
423
+ {
424
+ "epoch": 0.6297229219143576,
425
+ "grad_norm": 3.9443094730377197,
426
+ "learning_rate": 4.475832043649287e-05,
427
+ "loss": 1.883,
428
+ "num_input_tokens_seen": 850376,
429
+ "step": 250
430
+ },
431
+ {
432
+ "epoch": 0.6423173803526449,
433
+ "grad_norm": 3.2649178504943848,
434
+ "learning_rate": 4.455459444164492e-05,
435
+ "loss": 0.9972,
436
+ "num_input_tokens_seen": 868192,
437
+ "step": 255
438
+ },
439
+ {
440
+ "epoch": 0.654911838790932,
441
+ "grad_norm": 2.0120456218719482,
442
+ "learning_rate": 4.4347467038639364e-05,
443
+ "loss": 1.0848,
444
+ "num_input_tokens_seen": 883560,
445
+ "step": 260
446
+ },
447
+ {
448
+ "epoch": 0.6675062972292192,
449
+ "grad_norm": 1.7830870151519775,
450
+ "learning_rate": 4.4136974256085236e-05,
451
+ "loss": 0.8549,
452
+ "num_input_tokens_seen": 902952,
453
+ "step": 265
454
+ },
455
+ {
456
+ "epoch": 0.6801007556675063,
457
+ "grad_norm": 1.284114956855774,
458
+ "learning_rate": 4.392315270797985e-05,
459
+ "loss": 0.5925,
460
+ "num_input_tokens_seen": 919584,
461
+ "step": 270
462
+ },
463
+ {
464
+ "epoch": 0.6926952141057935,
465
+ "grad_norm": 1.58772873878479,
466
+ "learning_rate": 4.3706039587339894e-05,
467
+ "loss": 1.2722,
468
+ "num_input_tokens_seen": 940200,
469
+ "step": 275
470
+ },
471
+ {
472
+ "epoch": 0.7052896725440806,
473
+ "grad_norm": 1.5675506591796875,
474
+ "learning_rate": 4.3485672659732034e-05,
475
+ "loss": 0.9741,
476
+ "num_input_tokens_seen": 961256,
477
+ "step": 280
478
+ },
479
+ {
480
+ "epoch": 0.7178841309823678,
481
+ "grad_norm": 1.5801304578781128,
482
+ "learning_rate": 4.3262090256703736e-05,
483
+ "loss": 0.9787,
484
+ "num_input_tokens_seen": 978000,
485
+ "step": 285
486
+ },
487
+ {
488
+ "epoch": 0.7304785894206549,
489
+ "grad_norm": 1.2644524574279785,
490
+ "learning_rate": 4.303533126911577e-05,
491
+ "loss": 1.1364,
492
+ "num_input_tokens_seen": 997512,
493
+ "step": 290
494
+ },
495
+ {
496
+ "epoch": 0.743073047858942,
497
+ "grad_norm": 1.307681918144226,
498
+ "learning_rate": 4.280543514037731e-05,
499
+ "loss": 1.1322,
500
+ "num_input_tokens_seen": 1016824,
501
+ "step": 295
502
+ },
503
+ {
504
+ "epoch": 0.7556675062972292,
505
+ "grad_norm": 1.8267008066177368,
506
+ "learning_rate": 4.257244185958505e-05,
507
+ "loss": 1.0074,
508
+ "num_input_tokens_seen": 1036168,
509
+ "step": 300
510
+ },
511
+ {
512
+ "epoch": 0.7556675062972292,
513
+ "eval_accuracy": 0.7609178510672793,
514
+ "eval_loss": 1.095440149307251,
515
+ "eval_runtime": 537.6469,
516
+ "eval_samples_per_second": 0.329,
517
+ "eval_steps_per_second": 0.329,
518
+ "num_input_tokens_seen": 1036168,
519
+ "step": 300
520
+ },
521
+ {
522
+ "epoch": 0.7682619647355163,
523
+ "grad_norm": 1.6457531452178955,
524
+ "learning_rate": 4.233639195456729e-05,
525
+ "loss": 0.939,
526
+ "num_input_tokens_seen": 1053264,
527
+ "step": 305
528
+ },
529
+ {
530
+ "epoch": 0.7808564231738035,
531
+ "grad_norm": 1.752995252609253,
532
+ "learning_rate": 4.2097326484834346e-05,
533
+ "loss": 1.0468,
534
+ "num_input_tokens_seen": 1068696,
535
+ "step": 310
536
+ },
537
+ {
538
+ "epoch": 0.7934508816120907,
539
+ "grad_norm": 1.001474142074585,
540
+ "learning_rate": 4.1855287034436555e-05,
541
+ "loss": 0.8325,
542
+ "num_input_tokens_seen": 1085264,
543
+ "step": 315
544
+ },
545
+ {
546
+ "epoch": 0.8060453400503779,
547
+ "grad_norm": 1.4869558811187744,
548
+ "learning_rate": 4.1610315704730854e-05,
549
+ "loss": 0.8035,
550
+ "num_input_tokens_seen": 1102368,
551
+ "step": 320
552
+ },
553
+ {
554
+ "epoch": 0.818639798488665,
555
+ "grad_norm": 1.8002692461013794,
556
+ "learning_rate": 4.136245510705762e-05,
557
+ "loss": 1.0207,
558
+ "num_input_tokens_seen": 1117768,
559
+ "step": 325
560
+ },
561
+ {
562
+ "epoch": 0.8312342569269522,
563
+ "grad_norm": 1.747013807296753,
564
+ "learning_rate": 4.111174835532857e-05,
565
+ "loss": 1.2914,
566
+ "num_input_tokens_seen": 1133440,
567
+ "step": 330
568
+ },
569
+ {
570
+ "epoch": 0.8438287153652393,
571
+ "grad_norm": 1.6267316341400146,
572
+ "learning_rate": 4.085823905852745e-05,
573
+ "loss": 1.2979,
574
+ "num_input_tokens_seen": 1146480,
575
+ "step": 335
576
+ },
577
+ {
578
+ "epoch": 0.8564231738035264,
579
+ "grad_norm": 1.2453759908676147,
580
+ "learning_rate": 4.06019713131244e-05,
581
+ "loss": 0.6644,
582
+ "num_input_tokens_seen": 1162248,
583
+ "step": 340
584
+ },
585
+ {
586
+ "epoch": 0.8690176322418136,
587
+ "grad_norm": 0.9214743971824646,
588
+ "learning_rate": 4.034298969540567e-05,
589
+ "loss": 1.1669,
590
+ "num_input_tokens_seen": 1179224,
591
+ "step": 345
592
+ },
593
+ {
594
+ "epoch": 0.8816120906801007,
595
+ "grad_norm": 1.0969208478927612,
596
+ "learning_rate": 4.008133925371988e-05,
597
+ "loss": 1.2072,
598
+ "num_input_tokens_seen": 1195240,
599
+ "step": 350
600
+ },
601
+ {
602
+ "epoch": 0.8942065491183879,
603
+ "grad_norm": 1.6422204971313477,
604
+ "learning_rate": 3.981706550064208e-05,
605
+ "loss": 0.9078,
606
+ "num_input_tokens_seen": 1213056,
607
+ "step": 355
608
+ },
609
+ {
610
+ "epoch": 0.906801007556675,
611
+ "grad_norm": 1.5316386222839355,
612
+ "learning_rate": 3.955021440505706e-05,
613
+ "loss": 0.4814,
614
+ "num_input_tokens_seen": 1230744,
615
+ "step": 360
616
+ },
617
+ {
618
+ "epoch": 0.9193954659949622,
619
+ "grad_norm": 1.2854864597320557,
620
+ "learning_rate": 3.928083238416342e-05,
621
+ "loss": 0.9272,
622
+ "num_input_tokens_seen": 1246904,
623
+ "step": 365
624
+ },
625
+ {
626
+ "epoch": 0.9319899244332494,
627
+ "grad_norm": 1.3557329177856445,
628
+ "learning_rate": 3.9008966295399494e-05,
629
+ "loss": 0.9995,
630
+ "num_input_tokens_seen": 1263880,
631
+ "step": 370
632
+ },
633
+ {
634
+ "epoch": 0.9445843828715366,
635
+ "grad_norm": 1.2092177867889404,
636
+ "learning_rate": 3.873466342829281e-05,
637
+ "loss": 0.8696,
638
+ "num_input_tokens_seen": 1283816,
639
+ "step": 375
640
+ },
641
+ {
642
+ "epoch": 0.9571788413098237,
643
+ "grad_norm": 2.7501771450042725,
644
+ "learning_rate": 3.845797149623434e-05,
645
+ "loss": 1.4119,
646
+ "num_input_tokens_seen": 1300192,
647
+ "step": 380
648
+ },
649
+ {
650
+ "epoch": 0.9697732997481109,
651
+ "grad_norm": 1.222266435623169,
652
+ "learning_rate": 3.817893862817902e-05,
653
+ "loss": 0.8804,
654
+ "num_input_tokens_seen": 1317224,
655
+ "step": 385
656
+ },
657
+ {
658
+ "epoch": 0.982367758186398,
659
+ "grad_norm": 0.6030636429786682,
660
+ "learning_rate": 3.789761336027403e-05,
661
+ "loss": 0.5944,
662
+ "num_input_tokens_seen": 1335296,
663
+ "step": 390
664
+ },
665
+ {
666
+ "epoch": 0.9949622166246851,
667
+ "grad_norm": 1.5092536211013794,
668
+ "learning_rate": 3.761404462741618e-05,
669
+ "loss": 1.0303,
670
+ "num_input_tokens_seen": 1351920,
671
+ "step": 395
672
+ },
673
+ {
674
+ "epoch": 1.0075566750629723,
675
+ "grad_norm": 2.0633888244628906,
676
+ "learning_rate": 3.7328281754739974e-05,
677
+ "loss": 0.9671,
678
+ "num_input_tokens_seen": 1370864,
679
+ "step": 400
680
+ },
681
+ {
682
+ "epoch": 1.0075566750629723,
683
+ "eval_accuracy": 0.7659399606125864,
684
+ "eval_loss": 1.0698517560958862,
685
+ "eval_runtime": 535.9639,
686
+ "eval_samples_per_second": 0.33,
687
+ "eval_steps_per_second": 0.33,
688
+ "num_input_tokens_seen": 1370864,
689
+ "step": 400
690
+ },
691
+ {
692
+ "epoch": 1.0201511335012594,
693
+ "grad_norm": 1.7792761325836182,
694
+ "learning_rate": 3.704037444903782e-05,
695
+ "loss": 1.0106,
696
+ "num_input_tokens_seen": 1390136,
697
+ "step": 405
698
+ },
699
+ {
700
+ "epoch": 1.0327455919395465,
701
+ "grad_norm": 1.8169097900390625,
702
+ "learning_rate": 3.6750372790113766e-05,
703
+ "loss": 0.7452,
704
+ "num_input_tokens_seen": 1411432,
705
+ "step": 410
706
+ },
707
+ {
708
+ "epoch": 1.0453400503778338,
709
+ "grad_norm": 1.3711882829666138,
710
+ "learning_rate": 3.645832722207248e-05,
711
+ "loss": 0.9704,
712
+ "num_input_tokens_seen": 1429024,
713
+ "step": 415
714
+ },
715
+ {
716
+ "epoch": 1.057934508816121,
717
+ "grad_norm": 0.9055928587913513,
718
+ "learning_rate": 3.6164288544544725e-05,
719
+ "loss": 0.5268,
720
+ "num_input_tokens_seen": 1445848,
721
+ "step": 420
722
+ },
723
+ {
724
+ "epoch": 1.070528967254408,
725
+ "grad_norm": 1.2504442930221558,
726
+ "learning_rate": 3.586830790385109e-05,
727
+ "loss": 0.6362,
728
+ "num_input_tokens_seen": 1463232,
729
+ "step": 425
730
+ },
731
+ {
732
+ "epoch": 1.0831234256926952,
733
+ "grad_norm": 2.108982563018799,
734
+ "learning_rate": 3.55704367841054e-05,
735
+ "loss": 0.5694,
736
+ "num_input_tokens_seen": 1478584,
737
+ "step": 430
738
+ },
739
+ {
740
+ "epoch": 1.0957178841309823,
741
+ "grad_norm": 2.6890852451324463,
742
+ "learning_rate": 3.52707269982593e-05,
743
+ "loss": 0.5836,
744
+ "num_input_tokens_seen": 1495112,
745
+ "step": 435
746
+ },
747
+ {
748
+ "epoch": 1.1083123425692696,
749
+ "grad_norm": 3.2210803031921387,
750
+ "learning_rate": 3.496923067908977e-05,
751
+ "loss": 1.0356,
752
+ "num_input_tokens_seen": 1513000,
753
+ "step": 440
754
+ },
755
+ {
756
+ "epoch": 1.1209068010075567,
757
+ "grad_norm": 1.5383672714233398,
758
+ "learning_rate": 3.466600027013084e-05,
759
+ "loss": 1.0125,
760
+ "num_input_tokens_seen": 1526896,
761
+ "step": 445
762
+ },
763
+ {
764
+ "epoch": 1.1335012594458438,
765
+ "grad_norm": 2.0438127517700195,
766
+ "learning_rate": 3.436108851655143e-05,
767
+ "loss": 1.0554,
768
+ "num_input_tokens_seen": 1542448,
769
+ "step": 450
770
+ },
771
+ {
772
+ "epoch": 1.146095717884131,
773
+ "grad_norm": 2.1045315265655518,
774
+ "learning_rate": 3.4054548455980565e-05,
775
+ "loss": 0.714,
776
+ "num_input_tokens_seen": 1557656,
777
+ "step": 455
778
+ },
779
+ {
780
+ "epoch": 1.1586901763224182,
781
+ "grad_norm": 2.4777379035949707,
782
+ "learning_rate": 3.3746433409281844e-05,
783
+ "loss": 0.8676,
784
+ "num_input_tokens_seen": 1575192,
785
+ "step": 460
786
+ },
787
+ {
788
+ "epoch": 1.1712846347607053,
789
+ "grad_norm": 2.529090642929077,
790
+ "learning_rate": 3.3436796971278526e-05,
791
+ "loss": 0.6624,
792
+ "num_input_tokens_seen": 1596112,
793
+ "step": 465
794
+ },
795
+ {
796
+ "epoch": 1.1838790931989924,
797
+ "grad_norm": 2.93548846244812,
798
+ "learning_rate": 3.312569300143108e-05,
799
+ "loss": 0.795,
800
+ "num_input_tokens_seen": 1610768,
801
+ "step": 470
802
+ },
803
+ {
804
+ "epoch": 1.1964735516372795,
805
+ "grad_norm": 2.8596348762512207,
806
+ "learning_rate": 3.2813175614468604e-05,
807
+ "loss": 1.3433,
808
+ "num_input_tokens_seen": 1627672,
809
+ "step": 475
810
+ },
811
+ {
812
+ "epoch": 1.2090680100755669,
813
+ "grad_norm": 3.336879253387451,
814
+ "learning_rate": 3.24992991709759e-05,
815
+ "loss": 1.0594,
816
+ "num_input_tokens_seen": 1641912,
817
+ "step": 480
818
+ },
819
+ {
820
+ "epoch": 1.221662468513854,
821
+ "grad_norm": 2.030346155166626,
822
+ "learning_rate": 3.218411826793777e-05,
823
+ "loss": 0.972,
824
+ "num_input_tokens_seen": 1659832,
825
+ "step": 485
826
+ },
827
+ {
828
+ "epoch": 1.234256926952141,
829
+ "grad_norm": 2.6392228603363037,
830
+ "learning_rate": 3.186768772924216e-05,
831
+ "loss": 1.032,
832
+ "num_input_tokens_seen": 1679064,
833
+ "step": 490
834
+ },
835
+ {
836
+ "epoch": 1.2468513853904282,
837
+ "grad_norm": 2.443204402923584,
838
+ "learning_rate": 3.1550062596143886e-05,
839
+ "loss": 1.3751,
840
+ "num_input_tokens_seen": 1692608,
841
+ "step": 495
842
+ },
843
+ {
844
+ "epoch": 1.2594458438287153,
845
+ "grad_norm": 1.4907230138778687,
846
+ "learning_rate": 3.1231298117690554e-05,
847
+ "loss": 0.7884,
848
+ "num_input_tokens_seen": 1709712,
849
+ "step": 500
850
+ },
851
+ {
852
+ "epoch": 1.2594458438287153,
853
+ "eval_accuracy": 0.7672396326295029,
854
+ "eval_loss": 1.0675625801086426,
855
+ "eval_runtime": 535.3969,
856
+ "eval_samples_per_second": 0.331,
857
+ "eval_steps_per_second": 0.331,
858
+ "num_input_tokens_seen": 1709712,
859
+ "step": 500
860
+ },
861
+ {
862
+ "epoch": 1.2720403022670026,
863
+ "grad_norm": 0.9061765074729919,
864
+ "learning_rate": 3.091144974111224e-05,
865
+ "loss": 0.781,
866
+ "num_input_tokens_seen": 1729424,
867
+ "step": 505
868
+ },
869
+ {
870
+ "epoch": 1.2846347607052897,
871
+ "grad_norm": 2.395761251449585,
872
+ "learning_rate": 3.059057310217683e-05,
873
+ "loss": 0.8264,
874
+ "num_input_tokens_seen": 1749136,
875
+ "step": 510
876
+ },
877
+ {
878
+ "epoch": 1.2972292191435768,
879
+ "grad_norm": 2.069194793701172,
880
+ "learning_rate": 3.0268724015512463e-05,
881
+ "loss": 0.6579,
882
+ "num_input_tokens_seen": 1765216,
883
+ "step": 515
884
+ },
885
+ {
886
+ "epoch": 1.309823677581864,
887
+ "grad_norm": 1.5534199476242065,
888
+ "learning_rate": 2.994595846489892e-05,
889
+ "loss": 0.7195,
890
+ "num_input_tokens_seen": 1781320,
891
+ "step": 520
892
+ },
893
+ {
894
+ "epoch": 1.322418136020151,
895
+ "grad_norm": 1.4709012508392334,
896
+ "learning_rate": 2.9622332593529563e-05,
897
+ "loss": 0.5226,
898
+ "num_input_tokens_seen": 1797760,
899
+ "step": 525
900
+ },
901
+ {
902
+ "epoch": 1.3350125944584383,
903
+ "grad_norm": 2.4601175785064697,
904
+ "learning_rate": 2.9297902694245542e-05,
905
+ "loss": 1.2005,
906
+ "num_input_tokens_seen": 1813224,
907
+ "step": 530
908
+ },
909
+ {
910
+ "epoch": 1.3476070528967254,
911
+ "grad_norm": 2.9571943283081055,
912
+ "learning_rate": 2.8972725199744033e-05,
913
+ "loss": 0.7554,
914
+ "num_input_tokens_seen": 1830728,
915
+ "step": 535
916
+ },
917
+ {
918
+ "epoch": 1.3602015113350125,
919
+ "grad_norm": 1.610404372215271,
920
+ "learning_rate": 2.864685667276201e-05,
921
+ "loss": 1.0766,
922
+ "num_input_tokens_seen": 1848816,
923
+ "step": 540
924
+ },
925
+ {
926
+ "epoch": 1.3727959697732999,
927
+ "grad_norm": 2.7103452682495117,
928
+ "learning_rate": 2.8320353796237553e-05,
929
+ "loss": 0.8778,
930
+ "num_input_tokens_seen": 1863224,
931
+ "step": 545
932
+ },
933
+ {
934
+ "epoch": 1.385390428211587,
935
+ "grad_norm": 3.026928424835205,
936
+ "learning_rate": 2.7993273363450184e-05,
937
+ "loss": 0.6799,
938
+ "num_input_tokens_seen": 1880288,
939
+ "step": 550
940
+ },
941
+ {
942
+ "epoch": 1.397984886649874,
943
+ "grad_norm": 1.5982580184936523,
944
+ "learning_rate": 2.7665672268141956e-05,
945
+ "loss": 0.4951,
946
+ "num_input_tokens_seen": 1896552,
947
+ "step": 555
948
+ },
949
+ {
950
+ "epoch": 1.4105793450881612,
951
+ "grad_norm": 2.5049729347229004,
952
+ "learning_rate": 2.7337607494621152e-05,
953
+ "loss": 0.9428,
954
+ "num_input_tokens_seen": 1915872,
955
+ "step": 560
956
+ },
957
+ {
958
+ "epoch": 1.4231738035264483,
959
+ "grad_norm": 2.13607120513916,
960
+ "learning_rate": 2.7009136107850185e-05,
961
+ "loss": 0.8936,
962
+ "num_input_tokens_seen": 1934704,
963
+ "step": 565
964
+ },
965
+ {
966
+ "epoch": 1.4357682619647356,
967
+ "grad_norm": 3.7449216842651367,
968
+ "learning_rate": 2.668031524351949e-05,
969
+ "loss": 0.8481,
970
+ "num_input_tokens_seen": 1951816,
971
+ "step": 570
972
+ },
973
+ {
974
+ "epoch": 1.4483627204030227,
975
+ "grad_norm": 2.881800651550293,
976
+ "learning_rate": 2.6351202098109083e-05,
977
+ "loss": 1.1778,
978
+ "num_input_tokens_seen": 1970640,
979
+ "step": 575
980
+ },
981
+ {
982
+ "epoch": 1.4609571788413098,
983
+ "grad_norm": 3.845482110977173,
984
+ "learning_rate": 2.6021853918939587e-05,
985
+ "loss": 0.7675,
986
+ "num_input_tokens_seen": 1986504,
987
+ "step": 580
988
+ },
989
+ {
990
+ "epoch": 1.473551637279597,
991
+ "grad_norm": 2.6726672649383545,
992
+ "learning_rate": 2.5692327994214383e-05,
993
+ "loss": 0.8112,
994
+ "num_input_tokens_seen": 2003440,
995
+ "step": 585
996
+ },
997
+ {
998
+ "epoch": 1.486146095717884,
999
+ "grad_norm": 3.1416702270507812,
1000
+ "learning_rate": 2.536268164305465e-05,
1001
+ "loss": 1.3447,
1002
+ "num_input_tokens_seen": 2020568,
1003
+ "step": 590
1004
+ },
1005
+ {
1006
+ "epoch": 1.4987405541561714,
1007
+ "grad_norm": 1.6394110918045044,
1008
+ "learning_rate": 2.5032972205529044e-05,
1009
+ "loss": 0.9512,
1010
+ "num_input_tokens_seen": 2037096,
1011
+ "step": 595
1012
+ },
1013
+ {
1014
+ "epoch": 1.5113350125944585,
1015
+ "grad_norm": 4.283825874328613,
1016
+ "learning_rate": 2.4703257032679744e-05,
1017
+ "loss": 1.0526,
1018
+ "num_input_tokens_seen": 2053296,
1019
+ "step": 600
1020
+ },
1021
+ {
1022
+ "epoch": 1.5113350125944585,
1023
+ "eval_accuracy": 0.7689241678712095,
1024
+ "eval_loss": 1.0595225095748901,
1025
+ "eval_runtime": 536.6614,
1026
+ "eval_samples_per_second": 0.33,
1027
+ "eval_steps_per_second": 0.33,
1028
+ "num_input_tokens_seen": 2053296,
1029
+ "step": 600
1030
+ },
1031
+ {
1032
+ "epoch": 1.5239294710327456,
1033
+ "grad_norm": 2.488013505935669,
1034
+ "learning_rate": 2.437359347654655e-05,
1035
+ "loss": 0.7499,
1036
+ "num_input_tokens_seen": 2070840,
1037
+ "step": 605
1038
+ },
1039
+ {
1040
+ "epoch": 1.536523929471033,
1041
+ "grad_norm": 2.6227219104766846,
1042
+ "learning_rate": 2.4044038880190824e-05,
1043
+ "loss": 0.6816,
1044
+ "num_input_tokens_seen": 2087952,
1045
+ "step": 610
1046
+ },
1047
+ {
1048
+ "epoch": 1.5491183879093198,
1049
+ "grad_norm": 1.874306082725525,
1050
+ "learning_rate": 2.3714650567721016e-05,
1051
+ "loss": 0.7719,
1052
+ "num_input_tokens_seen": 2103824,
1053
+ "step": 615
1054
+ },
1055
+ {
1056
+ "epoch": 1.561712846347607,
1057
+ "grad_norm": 2.8610432147979736,
1058
+ "learning_rate": 2.338548583432144e-05,
1059
+ "loss": 1.1229,
1060
+ "num_input_tokens_seen": 2120240,
1061
+ "step": 620
1062
+ },
1063
+ {
1064
+ "epoch": 1.5743073047858942,
1065
+ "grad_norm": 6.437571048736572,
1066
+ "learning_rate": 2.305660193628618e-05,
1067
+ "loss": 1.1712,
1068
+ "num_input_tokens_seen": 2135416,
1069
+ "step": 625
1070
+ },
1071
+ {
1072
+ "epoch": 1.5869017632241813,
1073
+ "grad_norm": 2.076413154602051,
1074
+ "learning_rate": 2.272805608105958e-05,
1075
+ "loss": 0.6688,
1076
+ "num_input_tokens_seen": 2151904,
1077
+ "step": 630
1078
+ },
1079
+ {
1080
+ "epoch": 1.5994962216624686,
1081
+ "grad_norm": 2.047494888305664,
1082
+ "learning_rate": 2.2399905417285434e-05,
1083
+ "loss": 0.8043,
1084
+ "num_input_tokens_seen": 2168952,
1085
+ "step": 635
1086
+ },
1087
+ {
1088
+ "epoch": 1.6120906801007555,
1089
+ "grad_norm": 2.4243392944335938,
1090
+ "learning_rate": 2.2072207024866266e-05,
1091
+ "loss": 0.5582,
1092
+ "num_input_tokens_seen": 2185192,
1093
+ "step": 640
1094
+ },
1095
+ {
1096
+ "epoch": 1.6246851385390428,
1097
+ "grad_norm": 2.66733717918396,
1098
+ "learning_rate": 2.1745017905034625e-05,
1099
+ "loss": 1.1033,
1100
+ "num_input_tokens_seen": 2200856,
1101
+ "step": 645
1102
+ },
1103
+ {
1104
+ "epoch": 1.63727959697733,
1105
+ "grad_norm": 2.6124303340911865,
1106
+ "learning_rate": 2.141839497043806e-05,
1107
+ "loss": 0.8529,
1108
+ "num_input_tokens_seen": 2215080,
1109
+ "step": 650
1110
+ },
1111
+ {
1112
+ "epoch": 1.649874055415617,
1113
+ "grad_norm": 2.1553502082824707,
1114
+ "learning_rate": 2.1092395035239472e-05,
1115
+ "loss": 0.7331,
1116
+ "num_input_tokens_seen": 2229808,
1117
+ "step": 655
1118
+ },
1119
+ {
1120
+ "epoch": 1.6624685138539044,
1121
+ "grad_norm": 2.305765390396118,
1122
+ "learning_rate": 2.076707480523464e-05,
1123
+ "loss": 0.9966,
1124
+ "num_input_tokens_seen": 2247584,
1125
+ "step": 660
1126
+ },
1127
+ {
1128
+ "epoch": 1.6750629722921915,
1129
+ "grad_norm": 1.9919112920761108,
1130
+ "learning_rate": 2.0442490867988582e-05,
1131
+ "loss": 1.0719,
1132
+ "num_input_tokens_seen": 2264280,
1133
+ "step": 665
1134
+ },
1135
+ {
1136
+ "epoch": 1.6876574307304786,
1137
+ "grad_norm": 2.5872271060943604,
1138
+ "learning_rate": 2.011869968299245e-05,
1139
+ "loss": 0.8667,
1140
+ "num_input_tokens_seen": 2281600,
1141
+ "step": 670
1142
+ },
1143
+ {
1144
+ "epoch": 1.700251889168766,
1145
+ "grad_norm": 2.303976535797119,
1146
+ "learning_rate": 1.9795757571842744e-05,
1147
+ "loss": 0.559,
1148
+ "num_input_tokens_seen": 2298688,
1149
+ "step": 675
1150
+ },
1151
+ {
1152
+ "epoch": 1.7128463476070528,
1153
+ "grad_norm": 2.5704333782196045,
1154
+ "learning_rate": 1.947372070844452e-05,
1155
+ "loss": 0.899,
1156
+ "num_input_tokens_seen": 2318312,
1157
+ "step": 680
1158
+ },
1159
+ {
1160
+ "epoch": 1.7254408060453401,
1161
+ "grad_norm": 2.638066530227661,
1162
+ "learning_rate": 1.915264510924022e-05,
1163
+ "loss": 0.7052,
1164
+ "num_input_tokens_seen": 2334496,
1165
+ "step": 685
1166
+ },
1167
+ {
1168
+ "epoch": 1.7380352644836272,
1169
+ "grad_norm": 2.046243906021118,
1170
+ "learning_rate": 1.883258662346596e-05,
1171
+ "loss": 0.9922,
1172
+ "num_input_tokens_seen": 2355640,
1173
+ "step": 690
1174
+ },
1175
+ {
1176
+ "epoch": 1.7506297229219143,
1177
+ "grad_norm": 2.731766700744629,
1178
+ "learning_rate": 1.8513600923436923e-05,
1179
+ "loss": 0.9633,
1180
+ "num_input_tokens_seen": 2375368,
1181
+ "step": 695
1182
+ },
1183
+ {
1184
+ "epoch": 1.7632241813602016,
1185
+ "grad_norm": 4.190962791442871,
1186
+ "learning_rate": 1.8195743494863387e-05,
1187
+ "loss": 1.2255,
1188
+ "num_input_tokens_seen": 2392384,
1189
+ "step": 700
1190
+ },
1191
+ {
1192
+ "epoch": 1.7632241813602016,
1193
+ "eval_accuracy": 0.7701849009613134,
1194
+ "eval_loss": 1.0550936460494995,
1195
+ "eval_runtime": 535.3581,
1196
+ "eval_samples_per_second": 0.331,
1197
+ "eval_steps_per_second": 0.331,
1198
+ "num_input_tokens_seen": 2392384,
1199
+ "step": 700
1200
+ },
1201
+ {
1202
+ "epoch": 1.7758186397984885,
1203
+ "grad_norm": 6.717029094696045,
1204
+ "learning_rate": 1.787906962719939e-05,
1205
+ "loss": 0.7964,
1206
+ "num_input_tokens_seen": 2409672,
1207
+ "step": 705
1208
+ },
1209
+ {
1210
+ "epoch": 1.7884130982367759,
1211
+ "grad_norm": 2.1568119525909424,
1212
+ "learning_rate": 1.7563634404025414e-05,
1213
+ "loss": 0.7568,
1214
+ "num_input_tokens_seen": 2425144,
1215
+ "step": 710
1216
+ },
1217
+ {
1218
+ "epoch": 1.801007556675063,
1219
+ "grad_norm": 2.602142810821533,
1220
+ "learning_rate": 1.7249492693466934e-05,
1221
+ "loss": 0.739,
1222
+ "num_input_tokens_seen": 2445872,
1223
+ "step": 715
1224
+ },
1225
+ {
1226
+ "epoch": 1.81360201511335,
1227
+ "grad_norm": 2.0826470851898193,
1228
+ "learning_rate": 1.6936699138650397e-05,
1229
+ "loss": 0.7168,
1230
+ "num_input_tokens_seen": 2463232,
1231
+ "step": 720
1232
+ },
1233
+ {
1234
+ "epoch": 1.8261964735516374,
1235
+ "grad_norm": 3.3417489528656006,
1236
+ "learning_rate": 1.6625308148198413e-05,
1237
+ "loss": 0.7757,
1238
+ "num_input_tokens_seen": 2480816,
1239
+ "step": 725
1240
+ },
1241
+ {
1242
+ "epoch": 1.8387909319899243,
1243
+ "grad_norm": 2.6503310203552246,
1244
+ "learning_rate": 1.6315373886765646e-05,
1245
+ "loss": 0.6779,
1246
+ "num_input_tokens_seen": 2498488,
1247
+ "step": 730
1248
+ },
1249
+ {
1250
+ "epoch": 1.8513853904282116,
1251
+ "grad_norm": 3.9675190448760986,
1252
+ "learning_rate": 1.600695026561721e-05,
1253
+ "loss": 0.9367,
1254
+ "num_input_tokens_seen": 2516792,
1255
+ "step": 735
1256
+ },
1257
+ {
1258
+ "epoch": 1.8639798488664987,
1259
+ "grad_norm": 2.195193290710449,
1260
+ "learning_rate": 1.5700090933251115e-05,
1261
+ "loss": 0.447,
1262
+ "num_input_tokens_seen": 2533232,
1263
+ "step": 740
1264
+ },
1265
+ {
1266
+ "epoch": 1.8765743073047858,
1267
+ "grad_norm": 2.9985008239746094,
1268
+ "learning_rate": 1.5394849266066416e-05,
1269
+ "loss": 0.6294,
1270
+ "num_input_tokens_seen": 2552824,
1271
+ "step": 745
1272
+ },
1273
+ {
1274
+ "epoch": 1.8891687657430731,
1275
+ "grad_norm": 3.8788790702819824,
1276
+ "learning_rate": 1.509127835907872e-05,
1277
+ "loss": 1.3992,
1278
+ "num_input_tokens_seen": 2571512,
1279
+ "step": 750
1280
+ },
1281
+ {
1282
+ "epoch": 1.9017632241813602,
1283
+ "grad_norm": 1.619795799255371,
1284
+ "learning_rate": 1.4789431016684558e-05,
1285
+ "loss": 0.8268,
1286
+ "num_input_tokens_seen": 2588120,
1287
+ "step": 755
1288
+ },
1289
+ {
1290
+ "epoch": 1.9143576826196473,
1291
+ "grad_norm": 3.272700786590576,
1292
+ "learning_rate": 1.4489359743476461e-05,
1293
+ "loss": 0.6765,
1294
+ "num_input_tokens_seen": 2605248,
1295
+ "step": 760
1296
+ },
1297
+ {
1298
+ "epoch": 1.9269521410579347,
1299
+ "grad_norm": 2.847982168197632,
1300
+ "learning_rate": 1.4191116735110007e-05,
1301
+ "loss": 1.1278,
1302
+ "num_input_tokens_seen": 2621656,
1303
+ "step": 765
1304
+ },
1305
+ {
1306
+ "epoch": 1.9395465994962215,
1307
+ "grad_norm": 3.8507871627807617,
1308
+ "learning_rate": 1.3894753869224725e-05,
1309
+ "loss": 0.6863,
1310
+ "num_input_tokens_seen": 2639440,
1311
+ "step": 770
1312
+ },
1313
+ {
1314
+ "epoch": 1.9521410579345089,
1315
+ "grad_norm": 2.1526479721069336,
1316
+ "learning_rate": 1.3600322696420275e-05,
1317
+ "loss": 0.6884,
1318
+ "num_input_tokens_seen": 2657952,
1319
+ "step": 775
1320
+ },
1321
+ {
1322
+ "epoch": 1.964735516372796,
1323
+ "grad_norm": 2.447094202041626,
1324
+ "learning_rate": 1.330787443128953e-05,
1325
+ "loss": 0.6405,
1326
+ "num_input_tokens_seen": 2673752,
1327
+ "step": 780
1328
+ },
1329
+ {
1330
+ "epoch": 1.977329974811083,
1331
+ "grad_norm": 1.717159628868103,
1332
+ "learning_rate": 1.3017459943510084e-05,
1333
+ "loss": 0.6037,
1334
+ "num_input_tokens_seen": 2689440,
1335
+ "step": 785
1336
+ },
1337
+ {
1338
+ "epoch": 1.9899244332493704,
1339
+ "grad_norm": 3.210221529006958,
1340
+ "learning_rate": 1.2729129748995749e-05,
1341
+ "loss": 0.9172,
1342
+ "num_input_tokens_seen": 2706624,
1343
+ "step": 790
1344
+ },
1345
+ {
1346
+ "epoch": 2.0025188916876573,
1347
+ "grad_norm": 1.6862154006958008,
1348
+ "learning_rate": 1.2442934001109671e-05,
1349
+ "loss": 0.7949,
1350
+ "num_input_tokens_seen": 2723496,
1351
+ "step": 795
1352
+ },
1353
+ {
1354
+ "epoch": 2.0151133501259446,
1355
+ "grad_norm": 2.2537808418273926,
1356
+ "learning_rate": 1.2158922481940361e-05,
1357
+ "loss": 0.5226,
1358
+ "num_input_tokens_seen": 2738304,
1359
+ "step": 800
1360
+ },
1361
+ {
1362
+ "epoch": 2.0151133501259446,
1363
+ "eval_accuracy": 0.7704949500614456,
1364
+ "eval_loss": 1.0528764724731445,
1365
+ "eval_runtime": 535.8681,
1366
+ "eval_samples_per_second": 0.33,
1367
+ "eval_steps_per_second": 0.33,
1368
+ "num_input_tokens_seen": 2738304,
1369
+ "step": 800
1370
+ },
1371
+ {
1372
+ "epoch": 2.027707808564232,
1373
+ "grad_norm": 5.461361408233643,
1374
+ "learning_rate": 1.1877144593642439e-05,
1375
+ "loss": 0.9266,
1376
+ "num_input_tokens_seen": 2755120,
1377
+ "step": 805
1378
+ },
1379
+ {
1380
+ "epoch": 2.040302267002519,
1381
+ "grad_norm": 2.930683135986328,
1382
+ "learning_rate": 1.1597649349843413e-05,
1383
+ "loss": 0.9805,
1384
+ "num_input_tokens_seen": 2771080,
1385
+ "step": 810
1386
+ },
1387
+ {
1388
+ "epoch": 2.052896725440806,
1389
+ "grad_norm": 3.590545654296875,
1390
+ "learning_rate": 1.1320485367118017e-05,
1391
+ "loss": 1.1796,
1392
+ "num_input_tokens_seen": 2786752,
1393
+ "step": 815
1394
+ },
1395
+ {
1396
+ "epoch": 2.065491183879093,
1397
+ "grad_norm": 2.6750876903533936,
1398
+ "learning_rate": 1.1045700856531668e-05,
1399
+ "loss": 0.4839,
1400
+ "num_input_tokens_seen": 2804392,
1401
+ "step": 820
1402
+ },
1403
+ {
1404
+ "epoch": 2.0780856423173804,
1405
+ "grad_norm": 1.7390782833099365,
1406
+ "learning_rate": 1.0773343615254446e-05,
1407
+ "loss": 0.359,
1408
+ "num_input_tokens_seen": 2818640,
1409
+ "step": 825
1410
+ },
1411
+ {
1412
+ "epoch": 2.0906801007556677,
1413
+ "grad_norm": 3.9132091999053955,
1414
+ "learning_rate": 1.0503461018246977e-05,
1415
+ "loss": 1.0472,
1416
+ "num_input_tokens_seen": 2836256,
1417
+ "step": 830
1418
+ },
1419
+ {
1420
+ "epoch": 2.1032745591939546,
1421
+ "grad_norm": 3.056783437728882,
1422
+ "learning_rate": 1.0236100010019919e-05,
1423
+ "loss": 0.8781,
1424
+ "num_input_tokens_seen": 2855496,
1425
+ "step": 835
1426
+ },
1427
+ {
1428
+ "epoch": 2.115869017632242,
1429
+ "grad_norm": 3.0513293743133545,
1430
+ "learning_rate": 9.971307096468203e-06,
1431
+ "loss": 0.9041,
1432
+ "num_input_tokens_seen": 2872200,
1433
+ "step": 840
1434
+ },
1435
+ {
1436
+ "epoch": 2.1284634760705288,
1437
+ "grad_norm": 2.966620445251465,
1438
+ "learning_rate": 9.709128336781592e-06,
1439
+ "loss": 0.7187,
1440
+ "num_input_tokens_seen": 2888256,
1441
+ "step": 845
1442
+ },
1443
+ {
1444
+ "epoch": 2.141057934508816,
1445
+ "grad_norm": 2.2629482746124268,
1446
+ "learning_rate": 9.449609335432972e-06,
1447
+ "loss": 0.8696,
1448
+ "num_input_tokens_seen": 2905920,
1449
+ "step": 850
1450
+ },
1451
+ {
1452
+ "epoch": 2.1536523929471034,
1453
+ "grad_norm": 5.053290367126465,
1454
+ "learning_rate": 9.192795234245697e-06,
1455
+ "loss": 0.6862,
1456
+ "num_input_tokens_seen": 2921056,
1457
+ "step": 855
1458
+ },
1459
+ {
1460
+ "epoch": 2.1662468513853903,
1461
+ "grad_norm": 2.8935582637786865,
1462
+ "learning_rate": 8.938730704541473e-06,
1463
+ "loss": 0.702,
1464
+ "num_input_tokens_seen": 2942096,
1465
+ "step": 860
1466
+ },
1467
+ {
1468
+ "epoch": 2.1788413098236776,
1469
+ "grad_norm": 4.863999843597412,
1470
+ "learning_rate": 8.687459939369983e-06,
1471
+ "loss": 0.9868,
1472
+ "num_input_tokens_seen": 2959320,
1473
+ "step": 865
1474
+ },
1475
+ {
1476
+ "epoch": 2.1914357682619645,
1477
+ "grad_norm": 2.9665706157684326,
1478
+ "learning_rate": 8.439026645821802e-06,
1479
+ "loss": 0.5647,
1480
+ "num_input_tokens_seen": 2976800,
1481
+ "step": 870
1482
+ },
1483
+ {
1484
+ "epoch": 2.204030226700252,
1485
+ "grad_norm": 4.912269115447998,
1486
+ "learning_rate": 8.193474037425794e-06,
1487
+ "loss": 0.7983,
1488
+ "num_input_tokens_seen": 2994536,
1489
+ "step": 875
1490
+ },
1491
+ {
1492
+ "epoch": 2.216624685138539,
1493
+ "grad_norm": 3.118041515350342,
1494
+ "learning_rate": 7.950844826632373e-06,
1495
+ "loss": 0.9227,
1496
+ "num_input_tokens_seen": 3010504,
1497
+ "step": 880
1498
+ },
1499
+ {
1500
+ "epoch": 2.229219143576826,
1501
+ "grad_norm": 1.5938191413879395,
1502
+ "learning_rate": 7.711181217383896e-06,
1503
+ "loss": 0.9203,
1504
+ "num_input_tokens_seen": 3027240,
1505
+ "step": 885
1506
+ },
1507
+ {
1508
+ "epoch": 2.2418136020151134,
1509
+ "grad_norm": 2.1337177753448486,
1510
+ "learning_rate": 7.474524897773555e-06,
1511
+ "loss": 0.5222,
1512
+ "num_input_tokens_seen": 3044024,
1513
+ "step": 890
1514
+ },
1515
+ {
1516
+ "epoch": 2.2544080604534007,
1517
+ "grad_norm": 3.2635252475738525,
1518
+ "learning_rate": 7.240917032794003e-06,
1519
+ "loss": 0.5499,
1520
+ "num_input_tokens_seen": 3060632,
1521
+ "step": 895
1522
+ },
1523
+ {
1524
+ "epoch": 2.2670025188916876,
1525
+ "grad_norm": 2.3565027713775635,
1526
+ "learning_rate": 7.010398257176878e-06,
1527
+ "loss": 0.7812,
1528
+ "num_input_tokens_seen": 3075440,
1529
+ "step": 900
1530
+ },
1531
+ {
1532
+ "epoch": 2.2670025188916876,
1533
+ "eval_accuracy": 0.7686542323215242,
1534
+ "eval_loss": 1.077448844909668,
1535
+ "eval_runtime": 536.5671,
1536
+ "eval_samples_per_second": 0.33,
1537
+ "eval_steps_per_second": 0.33,
1538
+ "num_input_tokens_seen": 3075440,
1539
+ "step": 900
1540
+ },
1541
+ {
1542
+ "epoch": 2.279596977329975,
1543
+ "grad_norm": 2.752980947494507,
1544
+ "learning_rate": 6.78300866832467e-06,
1545
+ "loss": 0.9713,
1546
+ "num_input_tokens_seen": 3094664,
1547
+ "step": 905
1548
+ },
1549
+ {
1550
+ "epoch": 2.292191435768262,
1551
+ "grad_norm": 3.7448384761810303,
1552
+ "learning_rate": 6.558787819336002e-06,
1553
+ "loss": 0.6824,
1554
+ "num_input_tokens_seen": 3111856,
1555
+ "step": 910
1556
+ },
1557
+ {
1558
+ "epoch": 2.304785894206549,
1559
+ "grad_norm": 4.047982692718506,
1560
+ "learning_rate": 6.337774712125597e-06,
1561
+ "loss": 0.7068,
1562
+ "num_input_tokens_seen": 3128312,
1563
+ "step": 915
1564
+ },
1565
+ {
1566
+ "epoch": 2.3173803526448364,
1567
+ "grad_norm": 1.6455196142196655,
1568
+ "learning_rate": 6.120007790640123e-06,
1569
+ "loss": 0.7046,
1570
+ "num_input_tokens_seen": 3146240,
1571
+ "step": 920
1572
+ },
1573
+ {
1574
+ "epoch": 2.3299748110831233,
1575
+ "grad_norm": 3.6144297122955322,
1576
+ "learning_rate": 5.905524934171086e-06,
1577
+ "loss": 0.493,
1578
+ "num_input_tokens_seen": 3164896,
1579
+ "step": 925
1580
+ },
1581
+ {
1582
+ "epoch": 2.3425692695214106,
1583
+ "grad_norm": 2.6038029193878174,
1584
+ "learning_rate": 5.694363450765958e-06,
1585
+ "loss": 0.5957,
1586
+ "num_input_tokens_seen": 3180744,
1587
+ "step": 930
1588
+ },
1589
+ {
1590
+ "epoch": 2.355163727959698,
1591
+ "grad_norm": 2.931267023086548,
1592
+ "learning_rate": 5.486560070738647e-06,
1593
+ "loss": 0.5896,
1594
+ "num_input_tokens_seen": 3196144,
1595
+ "step": 935
1596
+ },
1597
+ {
1598
+ "epoch": 2.367758186397985,
1599
+ "grad_norm": 3.1074607372283936,
1600
+ "learning_rate": 5.282150940280481e-06,
1601
+ "loss": 0.7852,
1602
+ "num_input_tokens_seen": 3214888,
1603
+ "step": 940
1604
+ },
1605
+ {
1606
+ "epoch": 2.380352644836272,
1607
+ "grad_norm": 2.5608012676239014,
1608
+ "learning_rate": 5.081171615172781e-06,
1609
+ "loss": 0.6306,
1610
+ "num_input_tokens_seen": 3231664,
1611
+ "step": 945
1612
+ },
1613
+ {
1614
+ "epoch": 2.392947103274559,
1615
+ "grad_norm": 3.663986921310425,
1616
+ "learning_rate": 4.883657054602148e-06,
1617
+ "loss": 0.5392,
1618
+ "num_input_tokens_seen": 3249000,
1619
+ "step": 950
1620
+ },
1621
+ {
1622
+ "epoch": 2.4055415617128464,
1623
+ "grad_norm": 3.1661274433135986,
1624
+ "learning_rate": 4.689641615079499e-06,
1625
+ "loss": 0.6259,
1626
+ "num_input_tokens_seen": 3267528,
1627
+ "step": 955
1628
+ },
1629
+ {
1630
+ "epoch": 2.4181360201511337,
1631
+ "grad_norm": 3.51973819732666,
1632
+ "learning_rate": 4.499159044463983e-06,
1633
+ "loss": 1.0237,
1634
+ "num_input_tokens_seen": 3284096,
1635
+ "step": 960
1636
+ },
1637
+ {
1638
+ "epoch": 2.4307304785894206,
1639
+ "grad_norm": 3.3107500076293945,
1640
+ "learning_rate": 4.312242476092698e-06,
1641
+ "loss": 0.4255,
1642
+ "num_input_tokens_seen": 3300344,
1643
+ "step": 965
1644
+ },
1645
+ {
1646
+ "epoch": 2.443324937027708,
1647
+ "grad_norm": 2.484478712081909,
1648
+ "learning_rate": 4.1289244230173715e-06,
1649
+ "loss": 0.4904,
1650
+ "num_input_tokens_seen": 3317032,
1651
+ "step": 970
1652
+ },
1653
+ {
1654
+ "epoch": 2.455919395465995,
1655
+ "grad_norm": 2.773125410079956,
1656
+ "learning_rate": 3.9492367723488685e-06,
1657
+ "loss": 0.7385,
1658
+ "num_input_tokens_seen": 3335752,
1659
+ "step": 975
1660
+ },
1661
+ {
1662
+ "epoch": 2.468513853904282,
1663
+ "grad_norm": 4.129367351531982,
1664
+ "learning_rate": 3.773210779710662e-06,
1665
+ "loss": 0.5983,
1666
+ "num_input_tokens_seen": 3352968,
1667
+ "step": 980
1668
+ },
1669
+ {
1670
+ "epoch": 2.4811083123425695,
1671
+ "grad_norm": 2.1071560382843018,
1672
+ "learning_rate": 3.600877063802055e-06,
1673
+ "loss": 0.6822,
1674
+ "num_input_tokens_seen": 3366088,
1675
+ "step": 985
1676
+ },
1677
+ {
1678
+ "epoch": 2.4937027707808563,
1679
+ "grad_norm": 1.8715922832489014,
1680
+ "learning_rate": 3.4322656010722542e-06,
1681
+ "loss": 0.5104,
1682
+ "num_input_tokens_seen": 3382936,
1683
+ "step": 990
1684
+ },
1685
+ {
1686
+ "epoch": 2.5062972292191437,
1687
+ "grad_norm": 3.2055535316467285,
1688
+ "learning_rate": 3.267405720506156e-06,
1689
+ "loss": 0.6008,
1690
+ "num_input_tokens_seen": 3397648,
1691
+ "step": 995
1692
+ },
1693
+ {
1694
+ "epoch": 2.5188916876574305,
1695
+ "grad_norm": 3.4338555335998535,
1696
+ "learning_rate": 3.106326098522705e-06,
1697
+ "loss": 0.6864,
1698
+ "num_input_tokens_seen": 3416024,
1699
+ "step": 1000
1700
+ },
1701
+ {
1702
+ "epoch": 2.5188916876574305,
1703
+ "eval_accuracy": 0.7695559062966267,
1704
+ "eval_loss": 1.0843136310577393,
1705
+ "eval_runtime": 538.1124,
1706
+ "eval_samples_per_second": 0.329,
1707
+ "eval_steps_per_second": 0.329,
1708
+ "num_input_tokens_seen": 3416024,
1709
+ "step": 1000
1710
+ }
1711
+ ],
1712
+ "logging_steps": 5,
1713
+ "max_steps": 1191,
1714
+ "num_input_tokens_seen": 3416024,
1715
+ "num_train_epochs": 3,
1716
+ "save_steps": 100,
1717
+ "stateful_callbacks": {
1718
+ "TrainerControl": {
1719
+ "args": {
1720
+ "should_epoch_stop": false,
1721
+ "should_evaluate": false,
1722
+ "should_log": false,
1723
+ "should_save": true,
1724
+ "should_training_stop": false
1725
+ },
1726
+ "attributes": {}
1727
+ }
1728
+ },
1729
+ "total_flos": 1.5121076737651507e+17,
1730
+ "train_batch_size": 1,
1731
+ "trial_name": null,
1732
+ "trial_params": null
1733
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:28a84bb19d0747e5e32709d720a3932cbdf61dc13a8bc483f6470afbfc08189f
3
+ size 5304
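(training_args.bin is stored through Git LFS, so the three lines in its diff are the LFS pointer — spec version, object hash, and byte size — rather than the serialized training arguments themselves.)

The trainer_state.json added above follows the standard Hugging Face `Trainer` checkpoint-state layout: a `log_history` list with one entry per logging step (every 5 steps here, each carrying `loss`, `grad_norm`, `learning_rate`, and `num_input_tokens_seen`), evaluation entries with `eval_loss`/`eval_accuracy` at steps 900 and 1000, and run-level metadata (`max_steps`, `save_steps`, `total_flos`, ...). Below is a minimal sketch of inspecting the file after downloading it locally — the filename/path is an illustrative assumption, not something defined by this commit:

```python
import json

# Minimal sketch: summarize the uploaded trainer_state.json.
# Assumes the file has been downloaded to the working directory;
# the path below is an illustrative assumption, not part of the upload.
with open("trainer_state.json", "r", encoding="utf-8") as f:
    state = json.load(f)

# "log_history" holds one dict per logging step, with evaluation
# entries (eval_loss / eval_accuracy) interleaved among them.
train_logs = [e for e in state["log_history"] if "loss" in e]
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]

print(f"training log points: {len(train_logs)}")
print(f"evaluation points:   {len(eval_logs)}")
for e in eval_logs:
    print(
        f"step {e['step']:>4}: "
        f"eval_loss={e['eval_loss']:.4f}, "
        f"eval_accuracy={e.get('eval_accuracy', float('nan')):.4f}"
    )
```

Run against the state logged above, this would report the two evaluation points at steps 900 and 1000 (eval_loss ≈ 1.077 and ≈ 1.084, eval_accuracy ≈ 0.769 and ≈ 0.770).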