NIEChaojun committed (verified)
Commit e9d0ee1 · 1 Parent(s): 9c4a1c9

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ base_model: Qwen/Qwen-1_8B
+ library_name: peft
+ license: other
+ tags:
+ - llama-factory
+ - lora
+ - generated_from_trainer
+ model-index:
+ - name: train_2024-09-02-15-46-54
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # train_2024-09-02-15-46-54
+
+ This model is a fine-tuned version of [Qwen/Qwen-1_8B](https://huggingface.co/Qwen/Qwen-1_8B) on the glaive_toolcall_en dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3859
+ - Num Input Tokens Seen: 1596080
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 3.0
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
+ |:-------------:|:------:|:----:|:---------------:|:-----------------:|
+ | 0.4726        | 1.7778 | 100  | 0.3941          | 949424            |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.44.2
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
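
The card's "How to Get Started" section is left unfilled, so here is a minimal loading sketch for this LoRA adapter using the PEFT and Transformers versions listed above. The adapter repo id is an assumption inferred from the committer and run name, not something this commit confirms; substitute the actual path.

```python
# Minimal sketch of loading this adapter; the repo id below is a guess
# inferred from the run name, not confirmed anywhere in this commit.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen-1_8B"
ADAPTER = "NIEChaojun/train_2024-09-02-15-46-54"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(BASE, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attaches the LoRA weights

prompt = "Human: What is the capital of France?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```
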
adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen-1_8B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "c_attn",
+     "w2",
+     "c_proj",
+     "w1"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
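
For readers reconstructing this setup in code, the JSON above maps onto a PEFT `LoraConfig` roughly as in the sketch below (PEFT 0.12 field names; an illustration, not the exact object LLaMA-Factory instantiated):

```python
from peft import LoraConfig

# Sketch of the LoraConfig implied by adapter_config.json above.
lora_config = LoraConfig(
    r=8,                     # LoRA rank
    lora_alpha=16,           # effective update scale = lora_alpha / r = 2.0
    lora_dropout=0.0,
    bias="none",
    target_modules=["c_attn", "w2", "c_proj", "w1"],  # Qwen attention and MLP projections
    task_type="CAUSAL_LM",
)
```
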
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3944faf4720d867e867863b0e244d4f412b6f008b9359fcc98360ebb37bf510b
+ size 26867400
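
The three lines above are not the weights themselves: they are a Git LFS pointer (spec linked on the first line) naming the real ~26.9 MB safetensors blob by SHA-256. A small sketch of parsing one:

```python
# Sketch: parse a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {"oid": fields["oid"], "size_bytes": int(fields["size"])}

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:3944faf4720d867e867863b0e244d4f412b6f008b9359fcc98360ebb37bf510b\n"
    "size 26867400\n"
)
print(parse_lfs_pointer(pointer))  # {'oid': 'sha256:...', 'size_bytes': 26867400}
```
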
all_results.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "epoch": 2.986666666666667,
+   "eval_loss": 0.38587039709091187,
+   "eval_runtime": 13.1955,
+   "eval_samples_per_second": 7.578,
+   "eval_steps_per_second": 3.789,
+   "num_input_tokens_seen": 1596080,
+   "total_flos": 1.467473931042816e+16,
+   "train_loss": 0.5455739086582547,
+   "train_runtime": 1063.0707,
+   "train_samples_per_second": 2.54,
+   "train_steps_per_second": 0.158
+ }
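
A few back-of-envelope checks on these numbers (a sketch; the derived quantities are approximations, not logged values):

```python
import math

# Derived from all_results.json above.
print(1596080 / 1063.0707)            # ~1501 training tokens per second
print(math.exp(0.38587039709091187))  # eval perplexity ~1.47
print(round(13.1955 * 7.578))         # ~100 held-out eval samples (runtime x samples/s)
```
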
checkpoint-100/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: Qwen/Qwen-1_8B
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
checkpoint-100/adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen-1_8B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "c_attn",
+     "w2",
+     "c_proj",
+     "w1"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
checkpoint-100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64238ab16528a40f2c5fbcde5419a3e8d83e298bd7b8e677cbcaa1ea66ceb7c4
+ size 26867400
checkpoint-100/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a469617e9cb3688483ed2ba66fcdc7dcaa3e584dc768982c6f6b7442fcf46982
+ size 53875386
checkpoint-100/qwen.tiktoken ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42477e3febec09c1d5f853ced62e06fb0b175d9d96e488c83c57aeeb062d8c97
+ size 14244
checkpoint-100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:367336456ed85b45075ea2f90d51ce174bed627c54e3e0ef29c073a10e04faa6
+ size 1064
checkpoint-100/special_tokens_map.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>"
+ }
checkpoint-100/tokenization_qwen.py ADDED
@@ -0,0 +1,276 @@
+ # Copyright (c) Alibaba Cloud.
+ #
+ # This source code is licensed under the license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Tokenization classes for QWen."""
+
+ import base64
+ import logging
+ import os
+ import unicodedata
+ from typing import Collection, Dict, List, Set, Tuple, Union
+
+ import tiktoken
+ from transformers import PreTrainedTokenizer, AddedToken
+
+ logger = logging.getLogger(__name__)
+
+
+ VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken"}
+
+ PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
+ ENDOFTEXT = "<|endoftext|>"
+ IMSTART = "<|im_start|>"
+ IMEND = "<|im_end|>"
+ # as the default behavior is changed to allow special tokens in
+ # regular texts, the surface forms of special tokens need to be
+ # as different as possible to minimize the impact
+ EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
+ # changed to use actual index to avoid misconfiguration with vocabulary expansion
+ SPECIAL_START_ID = 151643
+ SPECIAL_TOKENS = tuple(
+     enumerate(
+         (
+             (
+                 ENDOFTEXT,
+                 IMSTART,
+                 IMEND,
+             )
+             + EXTRAS
+         ),
+         start=SPECIAL_START_ID,
+     )
+ )
+ SPECIAL_TOKENS_SET = set(t for i, t in SPECIAL_TOKENS)
+
+
+ def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
+     with open(tiktoken_bpe_file, "rb") as f:
+         contents = f.read()
+     return {
+         base64.b64decode(token): int(rank)
+         for token, rank in (line.split() for line in contents.splitlines() if line)
+     }
+
+
+ class QWenTokenizer(PreTrainedTokenizer):
+     """QWen tokenizer."""
+
+     vocab_files_names = VOCAB_FILES_NAMES
+
+     def __init__(
+         self,
+         vocab_file,
+         errors="replace",
+         extra_vocab_file=None,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         # how to handle errors in decoding UTF-8 byte sequences
+         # use ignore if you are in streaming inference
+         self.errors = errors
+
+         self.mergeable_ranks = _load_tiktoken_bpe(vocab_file)  # type: Dict[bytes, int]
+         self.special_tokens = {
+             token: index
+             for index, token in SPECIAL_TOKENS
+         }
+
+         # try load extra vocab from file
+         if extra_vocab_file is not None:
+             used_ids = set(self.mergeable_ranks.values()) | set(self.special_tokens.values())
+             extra_mergeable_ranks = _load_tiktoken_bpe(extra_vocab_file)
+             for token, index in extra_mergeable_ranks.items():
+                 if token in self.mergeable_ranks:
+                     logger.info(f"extra token {token} exists, skipping")
+                     continue
+                 if index in used_ids:
+                     logger.info(f'the index {index} for extra token {token} exists, skipping')
+                     continue
+                 self.mergeable_ranks[token] = index
+             # the index may be sparse after this, but don't worry tiktoken.Encoding will handle this
+
+         enc = tiktoken.Encoding(
+             "Qwen",
+             pat_str=PAT_STR,
+             mergeable_ranks=self.mergeable_ranks,
+             special_tokens=self.special_tokens,
+         )
+         assert (
+             len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
+         ), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
+
+         self.decoder = {
+             v: k for k, v in self.mergeable_ranks.items()
+         }  # type: dict[int, bytes|str]
+         self.decoder.update({v: k for k, v in self.special_tokens.items()})
+
+         self.tokenizer = enc  # type: tiktoken.Encoding
+
+         self.eod_id = self.tokenizer.eot_token
+         self.im_start_id = self.special_tokens[IMSTART]
+         self.im_end_id = self.special_tokens[IMEND]
+
+     def __getstate__(self):
+         # for pickle lovers
+         state = self.__dict__.copy()
+         del state["tokenizer"]
+         return state
+
+     def __setstate__(self, state):
+         # tokenizer is not python native; don't pass it; rebuild it
+         self.__dict__.update(state)
+         enc = tiktoken.Encoding(
+             "Qwen",
+             pat_str=PAT_STR,
+             mergeable_ranks=self.mergeable_ranks,
+             special_tokens=self.special_tokens,
+         )
+         self.tokenizer = enc
+
+     def __len__(self) -> int:
+         return self.tokenizer.n_vocab
+
+     def get_vocab(self) -> Dict[bytes, int]:
+         return self.mergeable_ranks
+
+     def convert_tokens_to_ids(
+         self, tokens: Union[bytes, str, List[Union[bytes, str]]]
+     ) -> List[int]:
+         ids = []
+         if isinstance(tokens, (str, bytes)):
+             if tokens in self.special_tokens:
+                 return self.special_tokens[tokens]
+             else:
+                 return self.mergeable_ranks.get(tokens)
+         for token in tokens:
+             if token in self.special_tokens:
+                 ids.append(self.special_tokens[token])
+             else:
+                 ids.append(self.mergeable_ranks.get(token))
+         return ids
+
+     def _add_tokens(
+         self,
+         new_tokens: Union[List[str], List[AddedToken]],
+         special_tokens: bool = False,
+     ) -> int:
+         if not special_tokens and new_tokens:
+             raise ValueError("Adding regular tokens is not supported")
+         for token in new_tokens:
+             surface_form = token.content if isinstance(token, AddedToken) else token
+             if surface_form not in SPECIAL_TOKENS_SET:
+                 raise ValueError("Adding unknown special tokens is not supported")
+         return 0
+
+     def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
+         """
+         Save only the vocabulary of the tokenizer (vocabulary).
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         file_path = os.path.join(save_directory, "qwen.tiktoken")
+         with open(file_path, "w", encoding="utf8") as w:
+             for k, v in self.mergeable_ranks.items():
+                 line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
+                 w.write(line)
+         return (file_path,)
+
+     def tokenize(
+         self,
+         text: str,
+         allowed_special: Union[Set, str] = "all",
+         disallowed_special: Union[Collection, str] = (),
+         **kwargs,
+     ) -> List[Union[bytes, str]]:
+         """
+         Converts a string into a sequence of tokens.
+
+         Args:
+             text (`str`):
+                 The sequence to be encoded.
+             allowed_special (`Literal["all"]` or `set`):
+                 The surface forms of the tokens to be encoded as special tokens in regular texts.
+                 Default to "all".
+             disallowed_special (`Literal["all"]` or `Collection`):
+                 The surface forms of the tokens that should not be in regular texts and trigger errors.
+                 Default to an empty tuple.
+
+             kwargs (additional keyword arguments, *optional*):
+                 Will be passed to the underlying model specific encode method.
+
+         Returns:
+             `List[bytes|str]`: The list of tokens.
+         """
+         tokens = []
+         text = unicodedata.normalize("NFC", text)
+
+         # this implementation takes a detour: text -> token id -> token surface forms
+         for t in self.tokenizer.encode(
+             text, allowed_special=allowed_special, disallowed_special=disallowed_special
+         ):
+             tokens.append(self.decoder[t])
+         return tokens
+
+     def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
+         """
+         Converts a sequence of tokens into a single string.
+         """
+         text = ""
+         temp = b""
+         for t in tokens:
+             if isinstance(t, str):
+                 if temp:
+                     text += temp.decode("utf-8", errors=self.errors)
+                     temp = b""
+                 text += t
+             elif isinstance(t, bytes):
+                 temp += t
+             else:
+                 raise TypeError("token should only be of type bytes or str")
+         if temp:
+             text += temp.decode("utf-8", errors=self.errors)
+         return text
+
+     @property
+     def vocab_size(self):
+         return self.tokenizer.n_vocab
+
+     def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
+         """Converts an id to a token, special tokens included"""
+         if index in self.decoder:
+             return self.decoder[index]
+         raise ValueError("unknown ids")
+
+     def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
+         """Converts a token to an id using the vocab, special tokens included"""
+         if token in self.special_tokens:
+             return self.special_tokens[token]
+         if token in self.mergeable_ranks:
+             return self.mergeable_ranks[token]
+         raise ValueError("unknown token")
+
+     def _tokenize(self, text: str, **kwargs):
+         """
+         Converts a string into a sequence of tokens (string), using the tokenizer. Split in words for word-based
+         vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces).
+
+         Do NOT take care of added tokens.
+         """
+         raise NotImplementedError
+
+     def _decode(
+         self,
+         token_ids: Union[int, List[int]],
+         skip_special_tokens: bool = False,
+         errors: str = None,
+         **kwargs,
+     ) -> str:
+         if isinstance(token_ids, int):
+             token_ids = [token_ids]
+         if skip_special_tokens:
+             token_ids = [i for i in token_ids if i < self.eod_id]
+         return self.tokenizer.decode(token_ids, errors=errors or self.errors)
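
A minimal usage sketch for the tokenizer defined above; `trust_remote_code=True` is what routes `AutoTokenizer` through this `tokenization_qwen.py` via the `auto_map` entry in `tokenizer_config.json`:

```python
from transformers import AutoTokenizer

# Loads QWenTokenizer (this file) together with the qwen.tiktoken vocab.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B", trust_remote_code=True)

ids = tokenizer.encode("hello world")    # text -> tiktoken BPE ids
print(ids, "->", tokenizer.decode(ids))  # round-trips to "hello world"
print(tokenizer.convert_tokens_to_ids("<|endoftext|>"))  # 151643, i.e. SPECIAL_START_ID
```
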
checkpoint-100/tokenizer_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "added_tokens_decoder": {},
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenization_qwen.QWenTokenizer",
+       null
+     ]
+   },
+   "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'Human: ' + content + '\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '<|endoftext|>' + '\n' }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 8192,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "QWenTokenizer"
+ }
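
The `chat_template` above is LLaMA-Factory's `default` template: an optional system line, then `Human:`/`Assistant:` turns, with `<|endoftext|>` closing each assistant reply. A rendering sketch (the repo id is the same hypothetical one as earlier):

```python
from transformers import AutoTokenizer

ADAPTER = "NIEChaojun/train_2024-09-02-15-46-54"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(ADAPTER, trust_remote_code=True)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi there!"},
]
# Renders: "You are a helpful assistant.\nHuman: Hi there!\nAssistant:"
print(tokenizer.apply_chat_template(messages, tokenize=False))
```
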
checkpoint-100/trainer_state.json ADDED
@@ -0,0 +1,202 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 1.7777777777777777,
+   "eval_steps": 100,
+   "global_step": 100,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.08888888888888889,
+       "grad_norm": 0.7499012351036072,
+       "learning_rate": 4.989080197352834e-05,
+       "loss": 0.9542,
+       "num_input_tokens_seen": 47168,
+       "step": 5
+     },
+     {
+       "epoch": 0.17777777777777778,
+       "grad_norm": 0.6703861355781555,
+       "learning_rate": 4.956416183083221e-05,
+       "loss": 0.7834,
+       "num_input_tokens_seen": 100640,
+       "step": 10
+     },
+     {
+       "epoch": 0.26666666666666666,
+       "grad_norm": 0.6384798884391785,
+       "learning_rate": 4.9022933048627496e-05,
+       "loss": 0.7296,
+       "num_input_tokens_seen": 150048,
+       "step": 15
+     },
+     {
+       "epoch": 0.35555555555555557,
+       "grad_norm": 0.36834755539894104,
+       "learning_rate": 4.827184371610511e-05,
+       "loss": 0.6653,
+       "num_input_tokens_seen": 196928,
+       "step": 20
+     },
+     {
+       "epoch": 0.4444444444444444,
+       "grad_norm": 0.2962658405303955,
+       "learning_rate": 4.731745523109029e-05,
+       "loss": 0.683,
+       "num_input_tokens_seen": 244592,
+       "step": 25
+     },
+     {
+       "epoch": 0.5333333333333333,
+       "grad_norm": 0.33365580439567566,
+       "learning_rate": 4.6168104980707107e-05,
+       "loss": 0.5651,
+       "num_input_tokens_seen": 293488,
+       "step": 30
+     },
+     {
+       "epoch": 0.6222222222222222,
+       "grad_norm": 0.4523426294326782,
+       "learning_rate": 4.4833833507280884e-05,
+       "loss": 0.5411,
+       "num_input_tokens_seen": 338160,
+       "step": 35
+     },
+     {
+       "epoch": 0.7111111111111111,
+       "grad_norm": 0.41214215755462646,
+       "learning_rate": 4.332629679574566e-05,
+       "loss": 0.572,
+       "num_input_tokens_seen": 382768,
+       "step": 40
+     },
+     {
+       "epoch": 0.8,
+       "grad_norm": 0.3807964026927948,
+       "learning_rate": 4.16586644488001e-05,
+       "loss": 0.6106,
+       "num_input_tokens_seen": 430928,
+       "step": 45
+     },
+     {
+       "epoch": 0.8888888888888888,
+       "grad_norm": 0.45064786076545715,
+       "learning_rate": 3.9845504639337535e-05,
+       "loss": 0.5552,
+       "num_input_tokens_seen": 477520,
+       "step": 50
+     },
+     {
+       "epoch": 0.9777777777777777,
+       "grad_norm": 0.4501620829105377,
+       "learning_rate": 3.790265684518767e-05,
+       "loss": 0.5659,
+       "num_input_tokens_seen": 526640,
+       "step": 55
+     },
+     {
+       "epoch": 1.0666666666666667,
+       "grad_norm": 0.24308599531650543,
+       "learning_rate": 3.5847093477938956e-05,
+       "loss": 0.58,
+       "num_input_tokens_seen": 577184,
+       "step": 60
+     },
+     {
+       "epoch": 1.1555555555555554,
+       "grad_norm": 0.33691608905792236,
+       "learning_rate": 3.369677161463068e-05,
+       "loss": 0.5429,
+       "num_input_tokens_seen": 625344,
+       "step": 65
+     },
+     {
+       "epoch": 1.2444444444444445,
+       "grad_norm": 0.2771952748298645,
+       "learning_rate": 3.147047612756302e-05,
+       "loss": 0.4384,
+       "num_input_tokens_seen": 670224,
+       "step": 70
+     },
+     {
+       "epoch": 1.3333333333333333,
+       "grad_norm": 0.4307265281677246,
+       "learning_rate": 2.918765558261841e-05,
+       "loss": 0.5749,
+       "num_input_tokens_seen": 714880,
+       "step": 75
+     },
+     {
+       "epoch": 1.4222222222222223,
+       "grad_norm": 0.4149227440357208,
+       "learning_rate": 2.686825233966061e-05,
+       "loss": 0.4222,
+       "num_input_tokens_seen": 752944,
+       "step": 80
+     },
+     {
+       "epoch": 1.511111111111111,
+       "grad_norm": 0.44140687584877014,
+       "learning_rate": 2.4532528339227452e-05,
+       "loss": 0.556,
+       "num_input_tokens_seen": 802064,
+       "step": 85
+     },
+     {
+       "epoch": 1.6,
+       "grad_norm": 0.376556396484375,
+       "learning_rate": 2.2200888097417307e-05,
+       "loss": 0.4963,
+       "num_input_tokens_seen": 852112,
+       "step": 90
+     },
+     {
+       "epoch": 1.6888888888888889,
+       "grad_norm": 0.3596157431602478,
+       "learning_rate": 1.9893700455257996e-05,
+       "loss": 0.5085,
+       "num_input_tokens_seen": 905184,
+       "step": 95
+     },
+     {
+       "epoch": 1.7777777777777777,
+       "grad_norm": 0.4776301085948944,
+       "learning_rate": 1.7631120639727393e-05,
+       "loss": 0.4726,
+       "num_input_tokens_seen": 949424,
+       "step": 100
+     },
+     {
+       "epoch": 1.7777777777777777,
+       "eval_loss": 0.39410004019737244,
+       "eval_runtime": 11.4337,
+       "eval_samples_per_second": 8.746,
+       "eval_steps_per_second": 4.373,
+       "num_input_tokens_seen": 949424,
+       "step": 100
+     }
+   ],
+   "logging_steps": 5,
+   "max_steps": 168,
+   "num_input_tokens_seen": 949424,
+   "num_train_epochs": 3,
+   "save_steps": 100,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 8729230173339648.0,
+   "train_batch_size": 2,
+   "trial_name": null,
+   "trial_params": null
+ }
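
Since `log_history` is plain JSON, the training curve in this state file can be pulled out directly; a short sketch:

```python
import json

# Sketch: recover (step, loss) pairs from the trainer state above.
with open("checkpoint-100/trainer_state.json") as f:
    state = json.load(f)

train = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
evals = [(e["step"], e["eval_loss"]) for e in state["log_history"] if "eval_loss" in e]
print(train[0], train[-1])  # (5, 0.9542) (100, 0.4726)
print(evals)                # [(100, 0.39410004019737244)]
```
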
checkpoint-100/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3714cd435343e43fff99200ac2402d2ab09a63941e76fb7bd73195c6e4f4472
+ size 5368
checkpoint-168/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: Qwen/Qwen-1_8B
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.12.0
checkpoint-168/adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen-1_8B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "c_attn",
+     "w2",
+     "c_proj",
+     "w1"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
checkpoint-168/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3944faf4720d867e867863b0e244d4f412b6f008b9359fcc98360ebb37bf510b
+ size 26867400
checkpoint-168/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ac14cfb7644f827b5f35128e3320eaa98f6f87f7ad323a5b94ba2a253d83bd5
+ size 53875386
checkpoint-168/qwen.tiktoken ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-168/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40cb1869983733651486e797294b9d075e35ed911b745abc673b44d5ac187b23
+ size 14244
checkpoint-168/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffd46888908b2132048cd2c5b54269699691843b9c7405080bb376310c5ce3c5
+ size 1064
checkpoint-168/special_tokens_map.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>"
+ }
checkpoint-168/tokenization_qwen.py ADDED
@@ -0,0 +1,276 @@
+ # Copyright (c) Alibaba Cloud.
+ #
+ # This source code is licensed under the license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Tokenization classes for QWen."""
+
+ import base64
+ import logging
+ import os
+ import unicodedata
+ from typing import Collection, Dict, List, Set, Tuple, Union
+
+ import tiktoken
+ from transformers import PreTrainedTokenizer, AddedToken
+
+ logger = logging.getLogger(__name__)
+
+
+ VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken"}
+
+ PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
+ ENDOFTEXT = "<|endoftext|>"
+ IMSTART = "<|im_start|>"
+ IMEND = "<|im_end|>"
+ # as the default behavior is changed to allow special tokens in
+ # regular texts, the surface forms of special tokens need to be
+ # as different as possible to minimize the impact
+ EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
+ # changed to use actual index to avoid misconfiguration with vocabulary expansion
+ SPECIAL_START_ID = 151643
+ SPECIAL_TOKENS = tuple(
+     enumerate(
+         (
+             (
+                 ENDOFTEXT,
+                 IMSTART,
+                 IMEND,
+             )
+             + EXTRAS
+         ),
+         start=SPECIAL_START_ID,
+     )
+ )
+ SPECIAL_TOKENS_SET = set(t for i, t in SPECIAL_TOKENS)
+
+
+ def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
+     with open(tiktoken_bpe_file, "rb") as f:
+         contents = f.read()
+     return {
+         base64.b64decode(token): int(rank)
+         for token, rank in (line.split() for line in contents.splitlines() if line)
+     }
+
+
+ class QWenTokenizer(PreTrainedTokenizer):
+     """QWen tokenizer."""
+
+     vocab_files_names = VOCAB_FILES_NAMES
+
+     def __init__(
+         self,
+         vocab_file,
+         errors="replace",
+         extra_vocab_file=None,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         # how to handle errors in decoding UTF-8 byte sequences
+         # use ignore if you are in streaming inference
+         self.errors = errors
+
+         self.mergeable_ranks = _load_tiktoken_bpe(vocab_file)  # type: Dict[bytes, int]
+         self.special_tokens = {
+             token: index
+             for index, token in SPECIAL_TOKENS
+         }
+
+         # try load extra vocab from file
+         if extra_vocab_file is not None:
+             used_ids = set(self.mergeable_ranks.values()) | set(self.special_tokens.values())
+             extra_mergeable_ranks = _load_tiktoken_bpe(extra_vocab_file)
+             for token, index in extra_mergeable_ranks.items():
+                 if token in self.mergeable_ranks:
+                     logger.info(f"extra token {token} exists, skipping")
+                     continue
+                 if index in used_ids:
+                     logger.info(f'the index {index} for extra token {token} exists, skipping')
+                     continue
+                 self.mergeable_ranks[token] = index
+             # the index may be sparse after this, but don't worry tiktoken.Encoding will handle this
+
+         enc = tiktoken.Encoding(
+             "Qwen",
+             pat_str=PAT_STR,
+             mergeable_ranks=self.mergeable_ranks,
+             special_tokens=self.special_tokens,
+         )
+         assert (
+             len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
+         ), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
+
+         self.decoder = {
+             v: k for k, v in self.mergeable_ranks.items()
+         }  # type: dict[int, bytes|str]
+         self.decoder.update({v: k for k, v in self.special_tokens.items()})
+
+         self.tokenizer = enc  # type: tiktoken.Encoding
+
+         self.eod_id = self.tokenizer.eot_token
+         self.im_start_id = self.special_tokens[IMSTART]
+         self.im_end_id = self.special_tokens[IMEND]
+
+     def __getstate__(self):
+         # for pickle lovers
+         state = self.__dict__.copy()
+         del state["tokenizer"]
+         return state
+
+     def __setstate__(self, state):
+         # tokenizer is not python native; don't pass it; rebuild it
+         self.__dict__.update(state)
+         enc = tiktoken.Encoding(
+             "Qwen",
+             pat_str=PAT_STR,
+             mergeable_ranks=self.mergeable_ranks,
+             special_tokens=self.special_tokens,
+         )
+         self.tokenizer = enc
+
+     def __len__(self) -> int:
+         return self.tokenizer.n_vocab
+
+     def get_vocab(self) -> Dict[bytes, int]:
+         return self.mergeable_ranks
+
+     def convert_tokens_to_ids(
+         self, tokens: Union[bytes, str, List[Union[bytes, str]]]
+     ) -> List[int]:
+         ids = []
+         if isinstance(tokens, (str, bytes)):
+             if tokens in self.special_tokens:
+                 return self.special_tokens[tokens]
+             else:
+                 return self.mergeable_ranks.get(tokens)
+         for token in tokens:
+             if token in self.special_tokens:
+                 ids.append(self.special_tokens[token])
+             else:
+                 ids.append(self.mergeable_ranks.get(token))
+         return ids
+
+     def _add_tokens(
+         self,
+         new_tokens: Union[List[str], List[AddedToken]],
+         special_tokens: bool = False,
+     ) -> int:
+         if not special_tokens and new_tokens:
+             raise ValueError("Adding regular tokens is not supported")
+         for token in new_tokens:
+             surface_form = token.content if isinstance(token, AddedToken) else token
+             if surface_form not in SPECIAL_TOKENS_SET:
+                 raise ValueError("Adding unknown special tokens is not supported")
+         return 0
+
+     def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
+         """
+         Save only the vocabulary of the tokenizer (vocabulary).
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         file_path = os.path.join(save_directory, "qwen.tiktoken")
+         with open(file_path, "w", encoding="utf8") as w:
+             for k, v in self.mergeable_ranks.items():
+                 line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
+                 w.write(line)
+         return (file_path,)
+
+     def tokenize(
+         self,
+         text: str,
+         allowed_special: Union[Set, str] = "all",
+         disallowed_special: Union[Collection, str] = (),
+         **kwargs,
+     ) -> List[Union[bytes, str]]:
+         """
+         Converts a string into a sequence of tokens.
+
+         Args:
+             text (`str`):
+                 The sequence to be encoded.
+             allowed_special (`Literal["all"]` or `set`):
+                 The surface forms of the tokens to be encoded as special tokens in regular texts.
+                 Default to "all".
+             disallowed_special (`Literal["all"]` or `Collection`):
+                 The surface forms of the tokens that should not be in regular texts and trigger errors.
+                 Default to an empty tuple.
+
+             kwargs (additional keyword arguments, *optional*):
+                 Will be passed to the underlying model specific encode method.
+
+         Returns:
+             `List[bytes|str]`: The list of tokens.
+         """
+         tokens = []
+         text = unicodedata.normalize("NFC", text)
+
+         # this implementation takes a detour: text -> token id -> token surface forms
+         for t in self.tokenizer.encode(
+             text, allowed_special=allowed_special, disallowed_special=disallowed_special
+         ):
+             tokens.append(self.decoder[t])
+         return tokens
+
+     def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
+         """
+         Converts a sequence of tokens into a single string.
+         """
+         text = ""
+         temp = b""
+         for t in tokens:
+             if isinstance(t, str):
+                 if temp:
+                     text += temp.decode("utf-8", errors=self.errors)
+                     temp = b""
+                 text += t
+             elif isinstance(t, bytes):
+                 temp += t
+             else:
+                 raise TypeError("token should only be of type bytes or str")
+         if temp:
+             text += temp.decode("utf-8", errors=self.errors)
+         return text
+
+     @property
+     def vocab_size(self):
+         return self.tokenizer.n_vocab
+
+     def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
+         """Converts an id to a token, special tokens included"""
+         if index in self.decoder:
+             return self.decoder[index]
+         raise ValueError("unknown ids")
+
+     def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
+         """Converts a token to an id using the vocab, special tokens included"""
+         if token in self.special_tokens:
+             return self.special_tokens[token]
+         if token in self.mergeable_ranks:
+             return self.mergeable_ranks[token]
+         raise ValueError("unknown token")
+
+     def _tokenize(self, text: str, **kwargs):
+         """
+         Converts a string into a sequence of tokens (string), using the tokenizer. Split in words for word-based
+         vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces).
+
+         Do NOT take care of added tokens.
+         """
+         raise NotImplementedError
+
+     def _decode(
+         self,
+         token_ids: Union[int, List[int]],
+         skip_special_tokens: bool = False,
+         errors: str = None,
+         **kwargs,
+     ) -> str:
+         if isinstance(token_ids, int):
+             token_ids = [token_ids]
+         if skip_special_tokens:
+             token_ids = [i for i in token_ids if i < self.eod_id]
+         return self.tokenizer.decode(token_ids, errors=errors or self.errors)
checkpoint-168/tokenizer_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "added_tokens_decoder": {},
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenization_qwen.QWenTokenizer",
+       null
+     ]
+   },
+   "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'Human: ' + content + '\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '<|endoftext|>' + '\n' }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 8192,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "right",
+   "split_special_tokens": false,
+   "tokenizer_class": "QWenTokenizer"
+ }
checkpoint-168/trainer_state.json ADDED
@@ -0,0 +1,306 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.986666666666667,
5
+ "eval_steps": 100,
6
+ "global_step": 168,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.08888888888888889,
13
+ "grad_norm": 0.7499012351036072,
14
+ "learning_rate": 4.989080197352834e-05,
15
+ "loss": 0.9542,
16
+ "num_input_tokens_seen": 47168,
17
+ "step": 5
18
+ },
19
+ {
20
+ "epoch": 0.17777777777777778,
21
+ "grad_norm": 0.6703861355781555,
22
+ "learning_rate": 4.956416183083221e-05,
23
+ "loss": 0.7834,
24
+ "num_input_tokens_seen": 100640,
25
+ "step": 10
26
+ },
27
+ {
28
+ "epoch": 0.26666666666666666,
29
+ "grad_norm": 0.6384798884391785,
30
+ "learning_rate": 4.9022933048627496e-05,
31
+ "loss": 0.7296,
32
+ "num_input_tokens_seen": 150048,
33
+ "step": 15
34
+ },
35
+ {
36
+ "epoch": 0.35555555555555557,
37
+ "grad_norm": 0.36834755539894104,
38
+ "learning_rate": 4.827184371610511e-05,
39
+ "loss": 0.6653,
40
+ "num_input_tokens_seen": 196928,
41
+ "step": 20
42
+ },
43
+ {
44
+ "epoch": 0.4444444444444444,
45
+ "grad_norm": 0.2962658405303955,
46
+ "learning_rate": 4.731745523109029e-05,
47
+ "loss": 0.683,
48
+ "num_input_tokens_seen": 244592,
49
+ "step": 25
50
+ },
51
+ {
52
+ "epoch": 0.5333333333333333,
53
+ "grad_norm": 0.33365580439567566,
54
+ "learning_rate": 4.6168104980707107e-05,
55
+ "loss": 0.5651,
56
+ "num_input_tokens_seen": 293488,
57
+ "step": 30
58
+ },
59
+ {
60
+ "epoch": 0.6222222222222222,
61
+ "grad_norm": 0.4523426294326782,
62
+ "learning_rate": 4.4833833507280884e-05,
63
+ "loss": 0.5411,
64
+ "num_input_tokens_seen": 338160,
65
+ "step": 35
66
+ },
67
+ {
68
+ "epoch": 0.7111111111111111,
69
+ "grad_norm": 0.41214215755462646,
70
+ "learning_rate": 4.332629679574566e-05,
71
+ "loss": 0.572,
72
+ "num_input_tokens_seen": 382768,
73
+ "step": 40
74
+ },
75
+ {
76
+ "epoch": 0.8,
77
+ "grad_norm": 0.3807964026927948,
78
+ "learning_rate": 4.16586644488001e-05,
79
+ "loss": 0.6106,
80
+ "num_input_tokens_seen": 430928,
81
+ "step": 45
82
+ },
83
+ {
84
+ "epoch": 0.8888888888888888,
85
+ "grad_norm": 0.45064786076545715,
86
+ "learning_rate": 3.9845504639337535e-05,
87
+ "loss": 0.5552,
88
+ "num_input_tokens_seen": 477520,
89
+ "step": 50
90
+ },
91
+ {
92
+ "epoch": 0.9777777777777777,
93
+ "grad_norm": 0.4501620829105377,
94
+ "learning_rate": 3.790265684518767e-05,
95
+ "loss": 0.5659,
96
+ "num_input_tokens_seen": 526640,
97
+ "step": 55
98
+ },
99
+ {
100
+ "epoch": 1.0666666666666667,
101
+ "grad_norm": 0.24308599531650543,
102
+ "learning_rate": 3.5847093477938956e-05,
103
+ "loss": 0.58,
104
+ "num_input_tokens_seen": 577184,
105
+ "step": 60
106
+ },
107
+ {
108
+ "epoch": 1.1555555555555554,
109
+ "grad_norm": 0.33691608905792236,
110
+ "learning_rate": 3.369677161463068e-05,
111
+ "loss": 0.5429,
112
+ "num_input_tokens_seen": 625344,
113
+ "step": 65
114
+ },
115
+ {
116
+ "epoch": 1.2444444444444445,
117
+ "grad_norm": 0.2771952748298645,
118
+ "learning_rate": 3.147047612756302e-05,
119
+ "loss": 0.4384,
120
+ "num_input_tokens_seen": 670224,
121
+ "step": 70
122
+ },
123
+ {
124
+ "epoch": 1.3333333333333333,
125
+ "grad_norm": 0.4307265281677246,
126
+ "learning_rate": 2.918765558261841e-05,
127
+ "loss": 0.5749,
128
+ "num_input_tokens_seen": 714880,
129
+ "step": 75
130
+ },
131
+ {
132
+ "epoch": 1.4222222222222223,
133
+ "grad_norm": 0.4149227440357208,
134
+ "learning_rate": 2.686825233966061e-05,
135
+ "loss": 0.4222,
136
+ "num_input_tokens_seen": 752944,
137
+ "step": 80
138
+ },
139
+ {
140
+ "epoch": 1.511111111111111,
141
+ "grad_norm": 0.44140687584877014,
142
+ "learning_rate": 2.4532528339227452e-05,
143
+ "loss": 0.556,
144
+ "num_input_tokens_seen": 802064,
145
+ "step": 85
146
+ },
147
+ {
148
+ "epoch": 1.6,
149
+ "grad_norm": 0.376556396484375,
150
+ "learning_rate": 2.2200888097417307e-05,
151
+ "loss": 0.4963,
152
+ "num_input_tokens_seen": 852112,
153
+ "step": 90
154
+ },
155
+ {
156
+ "epoch": 1.6888888888888889,
157
+ "grad_norm": 0.3596157431602478,
158
+ "learning_rate": 1.9893700455257996e-05,
159
+ "loss": 0.5085,
160
+ "num_input_tokens_seen": 905184,
161
+ "step": 95
162
+ },
163
+ {
164
+ "epoch": 1.7777777777777777,
165
+ "grad_norm": 0.4776301085948944,
166
+ "learning_rate": 1.7631120639727393e-05,
167
+ "loss": 0.4726,
168
+ "num_input_tokens_seen": 949424,
169
+ "step": 100
170
+ },
171
+ {
172
+ "epoch": 1.7777777777777777,
173
+ "eval_loss": 0.39410004019737244,
174
+ "eval_runtime": 11.4337,
175
+ "eval_samples_per_second": 8.746,
176
+ "eval_steps_per_second": 4.373,
177
+ "num_input_tokens_seen": 949424,
178
+ "step": 100
179
+ },
180
+ {
181
+ "epoch": 1.8666666666666667,
182
+ "grad_norm": 0.26946938037872314,
183
+ "learning_rate": 1.5432914190872757e-05,
184
+ "loss": 0.4874,
185
+ "num_input_tokens_seen": 996112,
186
+ "step": 105
187
+ },
188
+ {
189
+ "epoch": 1.9555555555555557,
190
+ "grad_norm": 0.37138107419013977,
191
+ "learning_rate": 1.331828429317345e-05,
192
+ "loss": 0.5235,
193
+ "num_input_tokens_seen": 1047520,
194
+ "step": 110
195
+ },
196
+ {
197
+ "epoch": 2.0444444444444443,
198
+ "grad_norm": 0.33912232518196106,
199
+ "learning_rate": 1.130570401955322e-05,
200
+ "loss": 0.4984,
201
+ "num_input_tokens_seen": 1096832,
202
+ "step": 115
203
+ },
204
+ {
205
+ "epoch": 2.1333333333333333,
206
+ "grad_norm": 0.4000737965106964,
207
+ "learning_rate": 9.412754953531663e-06,
208
+ "loss": 0.4777,
209
+ "num_input_tokens_seen": 1151104,
210
+ "step": 120
211
+ },
212
+ {
213
+ "epoch": 2.2222222222222223,
214
+ "grad_norm": 0.5536447167396545,
215
+ "learning_rate": 7.65597359928646e-06,
216
+ "loss": 0.4918,
217
+ "num_input_tokens_seen": 1194592,
218
+ "step": 125
219
+ },
220
+ {
221
+ "epoch": 2.311111111111111,
222
+ "grad_norm": 0.42406854033470154,
223
+ "learning_rate": 6.050706921363672e-06,
224
+ "loss": 0.4573,
225
+ "num_input_tokens_seen": 1235104,
226
+ "step": 130
227
+ },
228
+ {
229
+ "epoch": 2.4,
230
+ "grad_norm": 0.37090376019477844,
231
+ "learning_rate": 4.610978276018496e-06,
232
+ "loss": 0.4549,
233
+ "num_input_tokens_seen": 1280560,
234
+ "step": 135
235
+ },
236
+ {
237
+ "epoch": 2.488888888888889,
238
+ "grad_norm": 0.3070351779460907,
239
+ "learning_rate": 3.3493649053890326e-06,
240
+ "loss": 0.5422,
241
+ "num_input_tokens_seen": 1329792,
242
+ "step": 140
243
+ },
244
+ {
245
+ "epoch": 2.5777777777777775,
246
+ "grad_norm": 0.3824547529220581,
247
+ "learning_rate": 2.2768880646947268e-06,
248
+ "loss": 0.4968,
249
+ "num_input_tokens_seen": 1380976,
250
+ "step": 145
251
+ },
252
+ {
253
+ "epoch": 2.6666666666666665,
254
+ "grad_norm": 0.4585011899471283,
255
+ "learning_rate": 1.4029167422908107e-06,
256
+ "loss": 0.4361,
257
+ "num_input_tokens_seen": 1423712,
258
+ "step": 150
259
+ },
260
+ {
261
+ "epoch": 2.7555555555555555,
262
+ "grad_norm": 0.3712801933288574,
263
+ "learning_rate": 7.350858136652261e-07,
264
+ "loss": 0.4517,
265
+ "num_input_tokens_seen": 1470000,
266
+ "step": 155
267
+ },
268
+ {
269
+ "epoch": 2.8444444444444446,
270
+ "grad_norm": 0.4640035629272461,
271
+ "learning_rate": 2.7922934437178695e-07,
272
+ "loss": 0.4099,
273
+ "num_input_tokens_seen": 1515168,
274
+ "step": 160
275
+ },
276
+ {
277
+ "epoch": 2.9333333333333336,
278
+ "grad_norm": 0.24545951187610626,
279
+ "learning_rate": 3.9329624554584884e-08,
280
+ "loss": 0.4861,
281
+ "num_input_tokens_seen": 1566432,
282
+ "step": 165
283
+ }
284
+ ],
285
+ "logging_steps": 5,
286
+ "max_steps": 168,
287
+ "num_input_tokens_seen": 1596080,
288
+ "num_train_epochs": 3,
289
+ "save_steps": 100,
290
+ "stateful_callbacks": {
291
+ "TrainerControl": {
292
+ "args": {
293
+ "should_epoch_stop": false,
294
+ "should_evaluate": false,
295
+ "should_log": false,
296
+ "should_save": true,
297
+ "should_training_stop": true
298
+ },
299
+ "attributes": {}
300
+ }
301
+ },
302
+ "total_flos": 1.467473931042816e+16,
303
+ "train_batch_size": 2,
304
+ "trial_name": null,
305
+ "trial_params": null
306
+ }
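Each `log_history` entry above is either a training record (`loss`, `grad_norm`, `learning_rate`, written every `logging_steps: 5`) or an evaluation record (the `eval_*` fields, written every `eval_steps: 100`). A minimal sketch for splitting the two streams, assuming only the standard library and an illustrative file path:

```python
import json

# Illustrative path; any of the trainer_state.json files in this upload works.
with open("checkpoint-168/trainer_state.json", encoding="utf-8") as f:
    state = json.load(f)

train_logs = [e for e in state["log_history"] if "loss" in e]      # every logging_steps=5
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]  # every eval_steps=100

print(len(train_logs), "train entries,", len(eval_logs), "eval entries")
print("last eval_loss:", eval_logs[-1]["eval_loss"])
```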
checkpoint-168/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3714cd435343e43fff99200ac2402d2ab09a63941e76fb7bd73195c6e4f4472
+ size 5368
eval_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 2.986666666666667,
+ "eval_loss": 0.38587039709091187,
+ "eval_runtime": 13.1955,
+ "eval_samples_per_second": 7.578,
+ "eval_steps_per_second": 3.789,
+ "num_input_tokens_seen": 1596080
+ }
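Since `eval_loss` here is a mean per-token cross-entropy, the corresponding perplexity is just its exponential; a one-line check:

```python
import math

eval_loss = 0.38587039709091187  # from eval_results.json above
print(f"perplexity = exp(eval_loss) = {math.exp(eval_loss):.3f}")  # about 1.471
```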
llamaboard_config.yaml ADDED
@@ -0,0 +1,67 @@
+ top.booster: auto
+ top.checkpoint_path: []
+ top.finetuning_type: lora
+ top.model_name: Qwen-1.8B
+ top.quantization_bit: none
+ top.quantization_method: bitsandbytes
+ top.rope_scaling: none
+ top.template: default
+ top.visual_inputs: false
+ train.additional_target: ''
+ train.badam_mode: layer
+ train.badam_switch_interval: 50
+ train.badam_switch_mode: ascending
+ train.badam_update_ratio: 0.05
+ train.batch_size: 2
+ train.compute_type: fp16
+ train.create_new_adapter: false
+ train.cutoff_len: 1024
+ train.dataset:
+ - glaive_toolcall_en
+ train.dataset_dir: data
+ train.ds_offload: false
+ train.ds_stage: none
+ train.freeze_extra_modules: ''
+ train.freeze_trainable_layers: 2
+ train.freeze_trainable_modules: all
+ train.galore_rank: 16
+ train.galore_scale: 0.25
+ train.galore_target: all
+ train.galore_update_interval: 200
+ train.gradient_accumulation_steps: 8
+ train.learning_rate: 5e-5
+ train.logging_steps: 5
+ train.lora_alpha: 16
+ train.lora_dropout: 0
+ train.lora_rank: 8
+ train.lora_target: ''
+ train.loraplus_lr_ratio: 0
+ train.lr_scheduler_type: cosine
+ train.mask_history: false
+ train.max_grad_norm: '1.0'
+ train.max_samples: '100000'
+ train.neat_packing: false
+ train.neftune_alpha: 0
+ train.num_train_epochs: '3.0'
+ train.optim: adamw_torch
+ train.packing: false
+ train.ppo_score_norm: false
+ train.ppo_whiten_rewards: false
+ train.pref_beta: 0.1
+ train.pref_ftx: 0
+ train.pref_loss: sigmoid
+ train.report_to: false
+ train.resize_vocab: false
+ train.reward_model: null
+ train.save_steps: 100
+ train.shift_attn: false
+ train.train_on_prompt: false
+ train.training_stage: Supervised Fine-Tuning
+ train.use_badam: false
+ train.use_dora: false
+ train.use_galore: false
+ train.use_llama_pro: false
+ train.use_pissa: false
+ train.use_rslora: false
+ train.val_size: 0.1
+ train.warmup_steps: 0
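llamaboard_config.yaml is the saved LLaMA Board (Web UI) state for this run; note its flat, dot-separated keys. A minimal sketch for pulling out the hyperparameters that actually shaped training, assuming PyYAML is available:

```python
import yaml  # PyYAML; assumed to be installed

with open("llamaboard_config.yaml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)  # keys are literal strings such as "train.lora_rank"

effective_batch = cfg["train.batch_size"] * cfg["train.gradient_accumulation_steps"]
print("lr:", cfg["train.learning_rate"])                                    # 5e-5
print("LoRA rank/alpha:", cfg["train.lora_rank"], cfg["train.lora_alpha"])  # 8 / 16
print("effective batch size:", effective_batch)                            # 2 * 8 = 16
```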
qwen.tiktoken ADDED
The diff for this file is too large to render.
running_log.txt ADDED
@@ -0,0 +1,353 @@
+ [INFO|parser.py:355] 2024-09-02 15:49:30,713 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, compute dtype: torch.float16
+
+ [INFO|tokenization_utils_base.py:2269] 2024-09-02 15:49:32,191 >> loading file qwen.tiktoken from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\qwen.tiktoken
+
+ [INFO|tokenization_utils_base.py:2269] 2024-09-02 15:49:32,192 >> loading file added_tokens.json from cache at None
+
+ [INFO|tokenization_utils_base.py:2269] 2024-09-02 15:49:32,192 >> loading file special_tokens_map.json from cache at None
+
+ [INFO|tokenization_utils_base.py:2269] 2024-09-02 15:49:32,192 >> loading file tokenizer_config.json from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\tokenizer_config.json
+
+ [INFO|tokenization_utils_base.py:2269] 2024-09-02 15:49:32,192 >> loading file tokenizer.json from cache at None
+
+ [INFO|template.py:270] 2024-09-02 15:49:32,688 >> Add eos token: <|endoftext|>
+
+ [INFO|template.py:375] 2024-09-02 15:49:32,689 >> Add pad token: <|endoftext|>
+
+ [INFO|loader.py:52] 2024-09-02 15:49:32,690 >> Loading dataset llamafactory/glaive_toolcall_en...
+
+ [INFO|configuration_utils.py:733] 2024-09-02 15:49:40,666 >> loading configuration file config.json from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\config.json
+
+ [INFO|configuration_utils.py:733] 2024-09-02 15:49:41,313 >> loading configuration file config.json from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\config.json
+
+ [INFO|configuration_utils.py:800] 2024-09-02 15:49:41,314 >> Model config QWenConfig {
+ "_name_or_path": "Qwen/Qwen-1_8B",
+ "architectures": [
+ "QWenLMHeadModel"
+ ],
+ "attn_dropout_prob": 0.0,
+ "auto_map": {
+ "AutoConfig": "Qwen/Qwen-1_8B--configuration_qwen.QWenConfig",
+ "AutoModelForCausalLM": "Qwen/Qwen-1_8B--modeling_qwen.QWenLMHeadModel"
+ },
+ "bf16": false,
+ "emb_dropout_prob": 0.0,
+ "fp16": false,
+ "fp32": false,
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": 11008,
+ "kv_channels": 128,
+ "layer_norm_epsilon": 1e-06,
+ "max_position_embeddings": 8192,
+ "model_type": "qwen",
+ "no_bias": true,
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "onnx_safe": null,
+ "rotary_emb_base": 10000,
+ "rotary_pct": 1.0,
+ "scale_attn_weights": true,
+ "seq_length": 8192,
+ "softmax_in_fp32": false,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "QWenTokenizer",
+ "transformers_version": "4.44.2",
+ "use_cache": true,
+ "use_cache_kernel": false,
+ "use_cache_quantization": false,
+ "use_dynamic_ntk": true,
+ "use_flash_attn": "auto",
+ "use_logn_attn": true,
+ "vocab_size": 151936
+ }
+
+
+ [INFO|modeling_utils.py:3678] 2024-09-02 15:49:41,731 >> loading weights file model.safetensors from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\model.safetensors.index.json
+
+ [INFO|modeling_utils.py:1606] 2024-09-02 16:16:57,834 >> Instantiating QWenLMHeadModel model under default dtype torch.float16.
+
+ [INFO|configuration_utils.py:1038] 2024-09-02 16:16:57,838 >> Generate config GenerationConfig {}
+
+
+ [INFO|modeling_utils.py:4507] 2024-09-02 16:17:02,439 >> All model checkpoint weights were used when initializing QWenLMHeadModel.
+
+
+ [INFO|modeling_utils.py:4515] 2024-09-02 16:17:02,440 >> All the weights of QWenLMHeadModel were initialized from the model checkpoint at Qwen/Qwen-1_8B.
+ If your task is similar to the task the model of the checkpoint was trained on, you can already use QWenLMHeadModel for predictions without further training.
+
+ [INFO|modeling_utils.py:4003] 2024-09-02 16:17:12,457 >> Generation config file not found, using a generation config created from the model config.
+
+ [WARNING|checkpointing.py:70] 2024-09-02 16:17:12,489 >> You are using the old GC format, some features (e.g. BAdam) will be invalid.
+
+ [INFO|checkpointing.py:103] 2024-09-02 16:17:12,491 >> Gradient checkpointing enabled.
+
+ [INFO|attention.py:86] 2024-09-02 16:17:12,493 >> Using vanilla attention implementation.
+
+ [INFO|adapter.py:302] 2024-09-02 16:17:12,493 >> Upcasting trainable params to float32.
+
+ [INFO|adapter.py:158] 2024-09-02 16:17:12,494 >> Fine-tuning method: LoRA
+
+ [INFO|misc.py:56] 2024-09-02 16:17:12,497 >> Found linear modules: c_attn,w2,c_proj,w1
+
+ [INFO|loader.py:196] 2024-09-02 16:17:13,257 >> trainable params: 6,709,248 || all params: 1,843,537,920 || trainable%: 0.3639
+
+ [INFO|trainer.py:648] 2024-09-02 16:17:13,289 >> Using auto half precision backend
+
+ [INFO|trainer.py:2134] 2024-09-02 16:17:13,776 >> ***** Running training *****
+
+ [INFO|trainer.py:2135] 2024-09-02 16:17:13,777 >> Num examples = 900
+
+ [INFO|trainer.py:2136] 2024-09-02 16:17:13,778 >> Num Epochs = 3
+
+ [INFO|trainer.py:2137] 2024-09-02 16:17:13,779 >> Instantaneous batch size per device = 2
+
+ [INFO|trainer.py:2140] 2024-09-02 16:17:13,780 >> Total train batch size (w. parallel, distributed & accumulation) = 16
+
+ [INFO|trainer.py:2141] 2024-09-02 16:17:13,781 >> Gradient Accumulation steps = 8
+
+ [INFO|trainer.py:2142] 2024-09-02 16:17:13,782 >> Total optimization steps = 168
+
+ [INFO|trainer.py:2143] 2024-09-02 16:17:13,788 >> Number of trainable parameters = 6,709,248
+
+ [INFO|callbacks.py:319] 2024-09-02 16:17:47,749 >> {'loss': 0.9542, 'learning_rate': 4.9891e-05, 'epoch': 0.09, 'throughput': 1389.23}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:18:21,440 >> {'loss': 0.7834, 'learning_rate': 4.9564e-05, 'epoch': 0.18, 'throughput': 1487.79}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:18:54,465 >> {'loss': 0.7296, 'learning_rate': 4.9023e-05, 'epoch': 0.27, 'throughput': 1490.51}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:19:25,676 >> {'loss': 0.6653, 'learning_rate': 4.8272e-05, 'epoch': 0.36, 'throughput': 1493.24}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:19:55,399 >> {'loss': 0.6830, 'learning_rate': 4.7317e-05, 'epoch': 0.44, 'throughput': 1513.54}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:20:25,205 >> {'loss': 0.5651, 'learning_rate': 4.6168e-05, 'epoch': 0.53, 'throughput': 1533.3}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:20:56,227 >> {'loss': 0.5411, 'learning_rate': 4.4834e-05, 'epoch': 0.62, 'throughput': 1520.29}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:21:25,069 >> {'loss': 0.5720, 'learning_rate': 4.3326e-05, 'epoch': 0.71, 'throughput': 1523.32}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:21:55,779 >> {'loss': 0.6106, 'learning_rate': 4.1659e-05, 'epoch': 0.80, 'throughput': 1528.21}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:22:26,052 >> {'loss': 0.5552, 'learning_rate': 3.9846e-05, 'epoch': 0.89, 'throughput': 1529.26}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:22:58,326 >> {'loss': 0.5659, 'learning_rate': 3.7903e-05, 'epoch': 0.98, 'throughput': 1528.57}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:23:31,779 >> {'loss': 0.5800, 'learning_rate': 3.5847e-05, 'epoch': 1.07, 'throughput': 1527.01}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:24:02,611 >> {'loss': 0.5429, 'learning_rate': 3.3697e-05, 'epoch': 1.16, 'throughput': 1529.65}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:24:31,189 >> {'loss': 0.4384, 'learning_rate': 3.1470e-05, 'epoch': 1.24, 'throughput': 1532.32}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:25:02,456 >> {'loss': 0.5749, 'learning_rate': 2.9188e-05, 'epoch': 1.33, 'throughput': 1525.37}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:25:30,813 >> {'loss': 0.4222, 'learning_rate': 2.6868e-05, 'epoch': 1.42, 'throughput': 1514.93}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:26:01,816 >> {'loss': 0.5560, 'learning_rate': 2.4533e-05, 'epoch': 1.51, 'throughput': 1519.0}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:26:33,656 >> {'loss': 0.4963, 'learning_rate': 2.2201e-05, 'epoch': 1.60, 'throughput': 1522.01}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:27:08,164 >> {'loss': 0.5085, 'learning_rate': 1.9894e-05, 'epoch': 1.69, 'throughput': 1522.94}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:27:37,551 >> {'loss': 0.4726, 'learning_rate': 1.7631e-05, 'epoch': 1.78, 'throughput': 1522.11}
+
+ [INFO|trainer.py:3819] 2024-09-02 16:27:37,560 >>
+ ***** Running Evaluation *****
+
+ [INFO|trainer.py:3821] 2024-09-02 16:27:37,560 >> Num examples = 100
+
+ [INFO|trainer.py:3824] 2024-09-02 16:27:37,561 >> Batch size = 2
+
+ [INFO|trainer.py:3503] 2024-09-02 16:27:48,996 >> Saving model checkpoint to saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\checkpoint-100
+
+ [INFO|configuration_utils.py:733] 2024-09-02 16:27:51,471 >> loading configuration file config.json from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\config.json
+
+ [INFO|configuration_utils.py:800] 2024-09-02 16:27:51,473 >> Model config QWenConfig {
+ "architectures": [
+ "QWenLMHeadModel"
+ ],
+ "attn_dropout_prob": 0.0,
+ "auto_map": {
+ "AutoConfig": "Qwen/Qwen-1_8B--configuration_qwen.QWenConfig",
+ "AutoModelForCausalLM": "Qwen/Qwen-1_8B--modeling_qwen.QWenLMHeadModel"
+ },
+ "bf16": false,
+ "emb_dropout_prob": 0.0,
+ "fp16": false,
+ "fp32": false,
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": 11008,
+ "kv_channels": 128,
+ "layer_norm_epsilon": 1e-06,
+ "max_position_embeddings": 8192,
+ "model_type": "qwen",
+ "no_bias": true,
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "onnx_safe": null,
+ "rotary_emb_base": 10000,
+ "rotary_pct": 1.0,
+ "scale_attn_weights": true,
+ "seq_length": 8192,
+ "softmax_in_fp32": false,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "QWenTokenizer",
+ "transformers_version": "4.44.2",
+ "use_cache": true,
+ "use_cache_kernel": false,
+ "use_cache_quantization": false,
+ "use_dynamic_ntk": true,
+ "use_flash_attn": "auto",
+ "use_logn_attn": true,
+ "vocab_size": 151936
+ }
+
+
+ [INFO|tokenization_utils_base.py:2684] 2024-09-02 16:27:51,605 >> tokenizer config file saved in saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\checkpoint-100\tokenizer_config.json
+
+ [INFO|tokenization_utils_base.py:2693] 2024-09-02 16:27:51,607 >> Special tokens file saved in saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\checkpoint-100\special_tokens_map.json
+
+ [INFO|callbacks.py:319] 2024-09-02 16:28:22,125 >> {'loss': 0.4874, 'learning_rate': 1.5433e-05, 'epoch': 1.87, 'throughput': 1490.45}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:28:55,988 >> {'loss': 0.5235, 'learning_rate': 1.3318e-05, 'epoch': 1.96, 'throughput': 1491.79}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:29:28,437 >> {'loss': 0.4984, 'learning_rate': 1.1306e-05, 'epoch': 2.04, 'throughput': 1493.02}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:30:01,857 >> {'loss': 0.4777, 'learning_rate': 9.4128e-06, 'epoch': 2.13, 'throughput': 1498.71}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:30:30,832 >> {'loss': 0.4918, 'learning_rate': 7.6560e-06, 'epoch': 2.22, 'throughput': 1498.79}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:31:00,464 >> {'loss': 0.4573, 'learning_rate': 6.0507e-06, 'epoch': 2.31, 'throughput': 1494.08}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:31:30,565 >> {'loss': 0.4549, 'learning_rate': 4.6110e-06, 'epoch': 2.40, 'throughput': 1494.64}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:32:01,843 >> {'loss': 0.5422, 'learning_rate': 3.3494e-06, 'epoch': 2.49, 'throughput': 1497.43}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:32:34,313 >> {'loss': 0.4968, 'learning_rate': 2.2769e-06, 'epoch': 2.58, 'throughput': 1500.22}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:33:04,696 >> {'loss': 0.4361, 'learning_rate': 1.4029e-06, 'epoch': 2.67, 'throughput': 1497.23}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:33:34,901 >> {'loss': 0.4517, 'learning_rate': 7.3509e-07, 'epoch': 2.76, 'throughput': 1498.31}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:34:05,001 >> {'loss': 0.4099, 'learning_rate': 2.7923e-07, 'epoch': 2.84, 'throughput': 1498.38}
+
+ [INFO|callbacks.py:319] 2024-09-02 16:34:34,884 >> {'loss': 0.4861, 'learning_rate': 3.9330e-08, 'epoch': 2.93, 'throughput': 1504.61}
+
+ [INFO|trainer.py:3503] 2024-09-02 16:34:53,706 >> Saving model checkpoint to saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\checkpoint-168
+
+ [INFO|configuration_utils.py:733] 2024-09-02 16:34:56,148 >> loading configuration file config.json from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\config.json
+
+ [INFO|configuration_utils.py:800] 2024-09-02 16:34:56,150 >> Model config QWenConfig {
+ "architectures": [
+ "QWenLMHeadModel"
+ ],
+ "attn_dropout_prob": 0.0,
+ "auto_map": {
+ "AutoConfig": "Qwen/Qwen-1_8B--configuration_qwen.QWenConfig",
+ "AutoModelForCausalLM": "Qwen/Qwen-1_8B--modeling_qwen.QWenLMHeadModel"
+ },
+ "bf16": false,
+ "emb_dropout_prob": 0.0,
+ "fp16": false,
+ "fp32": false,
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": 11008,
+ "kv_channels": 128,
+ "layer_norm_epsilon": 1e-06,
+ "max_position_embeddings": 8192,
+ "model_type": "qwen",
+ "no_bias": true,
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "onnx_safe": null,
+ "rotary_emb_base": 10000,
+ "rotary_pct": 1.0,
+ "scale_attn_weights": true,
+ "seq_length": 8192,
+ "softmax_in_fp32": false,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "QWenTokenizer",
+ "transformers_version": "4.44.2",
+ "use_cache": true,
+ "use_cache_kernel": false,
+ "use_cache_quantization": false,
+ "use_dynamic_ntk": true,
+ "use_flash_attn": "auto",
+ "use_logn_attn": true,
+ "vocab_size": 151936
+ }
+
+
+ [INFO|tokenization_utils_base.py:2684] 2024-09-02 16:34:56,239 >> tokenizer config file saved in saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\checkpoint-168\tokenizer_config.json
+
+ [INFO|tokenization_utils_base.py:2693] 2024-09-02 16:34:56,240 >> Special tokens file saved in saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\checkpoint-168\special_tokens_map.json
+
+ [INFO|trainer.py:2394] 2024-09-02 16:34:56,859 >>
+
+ Training completed. Do not forget to share your model on huggingface.co/models =)
+
+
+
+ [INFO|trainer.py:3503] 2024-09-02 16:34:56,864 >> Saving model checkpoint to saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54
+
+ [INFO|configuration_utils.py:733] 2024-09-02 16:34:58,089 >> loading configuration file config.json from cache at C:\Users\22320\.cache\huggingface\hub\models--Qwen--Qwen-1_8B\snapshots\fa6e214ccbbc6a55235c26ef406355b6bfdf5eed\config.json
+
+ [INFO|configuration_utils.py:800] 2024-09-02 16:34:58,092 >> Model config QWenConfig {
+ "architectures": [
+ "QWenLMHeadModel"
+ ],
+ "attn_dropout_prob": 0.0,
+ "auto_map": {
+ "AutoConfig": "Qwen/Qwen-1_8B--configuration_qwen.QWenConfig",
+ "AutoModelForCausalLM": "Qwen/Qwen-1_8B--modeling_qwen.QWenLMHeadModel"
+ },
+ "bf16": false,
+ "emb_dropout_prob": 0.0,
+ "fp16": false,
+ "fp32": false,
+ "hidden_size": 2048,
+ "initializer_range": 0.02,
+ "intermediate_size": 11008,
+ "kv_channels": 128,
+ "layer_norm_epsilon": 1e-06,
+ "max_position_embeddings": 8192,
+ "model_type": "qwen",
+ "no_bias": true,
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "onnx_safe": null,
+ "rotary_emb_base": 10000,
+ "rotary_pct": 1.0,
+ "scale_attn_weights": true,
+ "seq_length": 8192,
+ "softmax_in_fp32": false,
+ "tie_word_embeddings": false,
+ "tokenizer_class": "QWenTokenizer",
+ "transformers_version": "4.44.2",
+ "use_cache": true,
+ "use_cache_kernel": false,
+ "use_cache_quantization": false,
+ "use_dynamic_ntk": true,
+ "use_flash_attn": "auto",
+ "use_logn_attn": true,
+ "vocab_size": 151936
+ }
+
+
+ [INFO|tokenization_utils_base.py:2684] 2024-09-02 16:34:58,235 >> tokenizer config file saved in saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\tokenizer_config.json
+
+ [INFO|tokenization_utils_base.py:2693] 2024-09-02 16:34:58,238 >> Special tokens file saved in saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54\special_tokens_map.json
+
+ [WARNING|ploting.py:89] 2024-09-02 16:34:59,062 >> No metric eval_accuracy to plot.
+
+ [INFO|trainer.py:3819] 2024-09-02 16:34:59,082 >>
+ ***** Running Evaluation *****
+
+ [INFO|trainer.py:3821] 2024-09-02 16:34:59,084 >> Num examples = 100
+
+ [INFO|trainer.py:3824] 2024-09-02 16:34:59,086 >> Batch size = 2
+
+ [INFO|modelcard.py:449] 2024-09-02 16:35:12,284 >> Dropping the following result as it does not have all the necessary fields:
+ {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
+
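The "trainable params: 6,709,248" line in the log can be sanity-checked by hand: LoRA adds r * (d_in + d_out) parameters per wrapped linear layer. A sketch under the assumption that Qwen-1.8B uses a fused 3x2048 QKV projection (c_attn) and an MLP width of intermediate_size // 2 = 5504 for w1/w2/c_proj, as in Qwen's reference modeling code:

```python
r, num_layers, hidden = 8, 24, 2048
ff = 11008 // 2  # assumed Qwen MLP width: intermediate_size // 2

shapes = [
    (hidden, 3 * hidden),  # c_attn (fused Q, K, V)
    (hidden, hidden),      # attention c_proj
    (hidden, ff),          # w1
    (hidden, ff),          # w2
    (ff, hidden),          # MLP c_proj
]
trainable = num_layers * sum(r * (d_in + d_out) for d_in, d_out in shapes)
print(trainable)  # 6709248 -- matches the log line exactly
```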
special_tokens_map.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|endoftext|>"
+ }
tokenization_qwen.py ADDED
@@ -0,0 +1,276 @@
+ # Copyright (c) Alibaba Cloud.
+ #
+ # This source code is licensed under the license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Tokenization classes for QWen."""
+
+ import base64
+ import logging
+ import os
+ import unicodedata
+ from typing import Collection, Dict, List, Set, Tuple, Union
+
+ import tiktoken
+ from transformers import PreTrainedTokenizer, AddedToken
+
+ logger = logging.getLogger(__name__)
+
+
+ VOCAB_FILES_NAMES = {"vocab_file": "qwen.tiktoken"}
+
+ PAT_STR = r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"""
+ ENDOFTEXT = "<|endoftext|>"
+ IMSTART = "<|im_start|>"
+ IMEND = "<|im_end|>"
+ # as the default behavior is changed to allow special tokens in
+ # regular texts, the surface forms of special tokens need to be
+ # as different as possible to minimize the impact
+ EXTRAS = tuple((f"<|extra_{i}|>" for i in range(205)))
+ # changed to use actual index to avoid misconfiguration with vocabulary expansion
+ SPECIAL_START_ID = 151643
+ SPECIAL_TOKENS = tuple(
+     enumerate(
+         (
+             (
+                 ENDOFTEXT,
+                 IMSTART,
+                 IMEND,
+             )
+             + EXTRAS
+         ),
+         start=SPECIAL_START_ID,
+     )
+ )
+ SPECIAL_TOKENS_SET = set(t for i, t in SPECIAL_TOKENS)
+
+
+ def _load_tiktoken_bpe(tiktoken_bpe_file: str) -> Dict[bytes, int]:
+     with open(tiktoken_bpe_file, "rb") as f:
+         contents = f.read()
+     return {
+         base64.b64decode(token): int(rank)
+         for token, rank in (line.split() for line in contents.splitlines() if line)
+     }
+
+
+ class QWenTokenizer(PreTrainedTokenizer):
+     """QWen tokenizer."""
+
+     vocab_files_names = VOCAB_FILES_NAMES
+
+     def __init__(
+         self,
+         vocab_file,
+         errors="replace",
+         extra_vocab_file=None,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         # how to handle errors in decoding UTF-8 byte sequences
+         # use "ignore" if you are doing streaming inference
+         self.errors = errors
+
+         self.mergeable_ranks = _load_tiktoken_bpe(vocab_file)  # type: Dict[bytes, int]
+         self.special_tokens = {
+             token: index
+             for index, token in SPECIAL_TOKENS
+         }
+
+         # try to load extra vocab from file
+         if extra_vocab_file is not None:
+             used_ids = set(self.mergeable_ranks.values()) | set(self.special_tokens.values())
+             extra_mergeable_ranks = _load_tiktoken_bpe(extra_vocab_file)
+             for token, index in extra_mergeable_ranks.items():
+                 if token in self.mergeable_ranks:
+                     logger.info(f"extra token {token} exists, skipping")
+                     continue
+                 if index in used_ids:
+                     logger.info(f"the index {index} for extra token {token} exists, skipping")
+                     continue
+                 self.mergeable_ranks[token] = index
+             # the index may be sparse after this, but don't worry, tiktoken.Encoding will handle it
+
+         enc = tiktoken.Encoding(
+             "Qwen",
+             pat_str=PAT_STR,
+             mergeable_ranks=self.mergeable_ranks,
+             special_tokens=self.special_tokens,
+         )
+         assert (
+             len(self.mergeable_ranks) + len(self.special_tokens) == enc.n_vocab
+         ), f"{len(self.mergeable_ranks) + len(self.special_tokens)} != {enc.n_vocab} in encoding"
+
+         self.decoder = {
+             v: k for k, v in self.mergeable_ranks.items()
+         }  # type: dict[int, bytes | str]
+         self.decoder.update({v: k for k, v in self.special_tokens.items()})
+
+         self.tokenizer = enc  # type: tiktoken.Encoding
+
+         self.eod_id = self.tokenizer.eot_token
+         self.im_start_id = self.special_tokens[IMSTART]
+         self.im_end_id = self.special_tokens[IMEND]
+
+     def __getstate__(self):
+         # for pickle lovers
+         state = self.__dict__.copy()
+         del state["tokenizer"]
+         return state
+
+     def __setstate__(self, state):
+         # the tiktoken encoder is not Python-native; don't pickle it, rebuild it
+         self.__dict__.update(state)
+         enc = tiktoken.Encoding(
+             "Qwen",
+             pat_str=PAT_STR,
+             mergeable_ranks=self.mergeable_ranks,
+             special_tokens=self.special_tokens,
+         )
+         self.tokenizer = enc
+
+     def __len__(self) -> int:
+         return self.tokenizer.n_vocab
+
+     def get_vocab(self) -> Dict[bytes, int]:
+         return self.mergeable_ranks
+
+     def convert_tokens_to_ids(
+         self, tokens: Union[bytes, str, List[Union[bytes, str]]]
+     ) -> Union[int, List[int]]:
+         # a single token maps to a single id; a list of tokens maps to a list of ids
+         ids = []
+         if isinstance(tokens, (str, bytes)):
+             if tokens in self.special_tokens:
+                 return self.special_tokens[tokens]
+             else:
+                 return self.mergeable_ranks.get(tokens)
+         for token in tokens:
+             if token in self.special_tokens:
+                 ids.append(self.special_tokens[token])
+             else:
+                 ids.append(self.mergeable_ranks.get(token))
+         return ids
+
+     def _add_tokens(
+         self,
+         new_tokens: Union[List[str], List[AddedToken]],
+         special_tokens: bool = False,
+     ) -> int:
+         if not special_tokens and new_tokens:
+             raise ValueError("Adding regular tokens is not supported")
+         for token in new_tokens:
+             surface_form = token.content if isinstance(token, AddedToken) else token
+             if surface_form not in SPECIAL_TOKENS_SET:
+                 raise ValueError("Adding unknown special tokens is not supported")
+         return 0
+
+     def save_vocabulary(self, save_directory: str, **kwargs) -> Tuple[str]:
+         """
+         Save only the vocabulary of the tokenizer (the BPE merge ranks).
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         file_path = os.path.join(save_directory, "qwen.tiktoken")
+         with open(file_path, "w", encoding="utf8") as w:
+             for k, v in self.mergeable_ranks.items():
+                 line = base64.b64encode(k).decode("utf8") + " " + str(v) + "\n"
+                 w.write(line)
+         return (file_path,)
+
+     def tokenize(
+         self,
+         text: str,
+         allowed_special: Union[Set, str] = "all",
+         disallowed_special: Union[Collection, str] = (),
+         **kwargs,
+     ) -> List[Union[bytes, str]]:
+         """
+         Converts a string into a sequence of tokens.
+
+         Args:
+             text (`str`):
+                 The sequence to be encoded.
+             allowed_special (`Literal["all"]` or `set`):
+                 The surface forms of the tokens to be encoded as special tokens in regular texts.
+                 Defaults to "all".
+             disallowed_special (`Literal["all"]` or `Collection`):
+                 The surface forms of the tokens that should not appear in regular texts and trigger errors.
+                 Defaults to an empty tuple.
+             kwargs (additional keyword arguments, *optional*):
+                 Will be passed to the underlying model-specific encode method.
+
+         Returns:
+             `List[bytes | str]`: The list of tokens.
+         """
+         tokens = []
+         text = unicodedata.normalize("NFC", text)
+
+         # this implementation takes a detour: text -> token id -> token surface forms
+         for t in self.tokenizer.encode(
+             text, allowed_special=allowed_special, disallowed_special=disallowed_special
+         ):
+             tokens.append(self.decoder[t])
+         return tokens
+
+     def convert_tokens_to_string(self, tokens: List[Union[bytes, str]]) -> str:
+         """
+         Converts a sequence of tokens into a single string.
+         """
+         text = ""
+         temp = b""
+         for t in tokens:
+             if isinstance(t, str):
+                 if temp:
+                     text += temp.decode("utf-8", errors=self.errors)
+                     temp = b""
+                 text += t
+             elif isinstance(t, bytes):
+                 temp += t
+             else:
+                 raise TypeError("token should only be of type bytes or str")
+         if temp:
+             text += temp.decode("utf-8", errors=self.errors)
+         return text
+
+     @property
+     def vocab_size(self):
+         return self.tokenizer.n_vocab
+
+     def _convert_id_to_token(self, index: int) -> Union[bytes, str]:
+         """Converts an id to a token; special tokens included."""
+         if index in self.decoder:
+             return self.decoder[index]
+         raise ValueError("unknown ids")
+
+     def _convert_token_to_id(self, token: Union[bytes, str]) -> int:
+         """Converts a token to an id using the vocab; special tokens included."""
+         if token in self.special_tokens:
+             return self.special_tokens[token]
+         if token in self.mergeable_ranks:
+             return self.mergeable_ranks[token]
+         raise ValueError("unknown token")
+
+     def _tokenize(self, text: str, **kwargs):
+         """
+         Converts a string into a sequence of tokens (string), using the tokenizer. Splits into words for word-based
+         vocabularies or sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece).
+
+         Does NOT take care of added tokens.
+         """
+         raise NotImplementedError
+
+     def _decode(
+         self,
+         token_ids: Union[int, List[int]],
+         skip_special_tokens: bool = False,
+         errors: str = None,
+         **kwargs,
+     ) -> str:
+         if isinstance(token_ids, int):
+             token_ids = [token_ids]
+         if skip_special_tokens:
+             token_ids = [i for i in token_ids if i < self.eod_id]
+         return self.tokenizer.decode(token_ids, errors=errors or self.errors)
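Because tokenizer_config.json (below) maps `AutoTokenizer` to this custom class, loading it requires `trust_remote_code=True`. A minimal usage sketch, assuming the repository has been downloaded to the current directory (the path is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".", trust_remote_code=True)

ids = tokenizer.encode("Hello, world!")
print(ids)
print(tokenizer.decode(ids))  # round-trips back to the original text
print(tokenizer.eod_id)       # 151643, the id of <|endoftext|>
```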
tokenizer_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "added_tokens_decoder": {},
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenization_qwen.QWenTokenizer",
+ null
+ ]
+ },
+ "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}{% if system_message is defined %}{{ system_message + '\n' }}{% endif %}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'Human: ' + content + '\nAssistant:' }}{% elif message['role'] == 'assistant' %}{{ content + '<|endoftext|>' + '\n' }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": true,
+ "eos_token": "<|endoftext|>",
+ "model_max_length": 8192,
+ "pad_token": "<|endoftext|>",
+ "padding_side": "right",
+ "split_special_tokens": false,
+ "tokenizer_class": "QWenTokenizer"
+ }
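This chat_template matches the `template: default` setting recorded in the configs: a user turn renders as "Human: ...\nAssistant:" and an assistant turn appends its reply plus <|endoftext|>. A sketch of what it produces (the messages are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
# Human: Hi
# Assistant:Hello!<|endoftext|>
```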
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 2.986666666666667,
+ "num_input_tokens_seen": 1596080,
+ "total_flos": 1.467473931042816e+16,
+ "train_loss": 0.5455739086582547,
+ "train_runtime": 1063.0707,
+ "train_samples_per_second": 2.54,
+ "train_steps_per_second": 0.158
+ }
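These numbers cross-check against the final trainer_log.jsonl entry below: 1,596,080 tokens over 1,063.07 seconds is the logged throughput of about 1501 tokens/s, and 168 steps over the same runtime gives the reported 0.158 steps/s:

```python
tokens, runtime, steps = 1596080, 1063.0707, 168
print(round(tokens / runtime, 1))  # 1501.4 tokens/s
print(round(steps / runtime, 3))   # 0.158 train_steps_per_second
```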
trainer_log.jsonl ADDED
@@ -0,0 +1,35 @@
+ {"current_steps": 5, "total_steps": 168, "loss": 0.9542, "learning_rate": 4.989080197352834e-05, "epoch": 0.08888888888888889, "percentage": 2.98, "elapsed_time": "0:00:33", "remaining_time": "0:18:26", "throughput": 1389.23, "total_tokens": 47168}
+ {"current_steps": 10, "total_steps": 168, "loss": 0.7834, "learning_rate": 4.956416183083221e-05, "epoch": 0.17777777777777778, "percentage": 5.95, "elapsed_time": "0:01:07", "remaining_time": "0:17:48", "throughput": 1487.79, "total_tokens": 100640}
+ {"current_steps": 15, "total_steps": 168, "loss": 0.7296, "learning_rate": 4.9022933048627496e-05, "epoch": 0.26666666666666666, "percentage": 8.93, "elapsed_time": "0:01:40", "remaining_time": "0:17:06", "throughput": 1490.51, "total_tokens": 150048}
+ {"current_steps": 20, "total_steps": 168, "loss": 0.6653, "learning_rate": 4.827184371610511e-05, "epoch": 0.35555555555555557, "percentage": 11.9, "elapsed_time": "0:02:11", "remaining_time": "0:16:15", "throughput": 1493.24, "total_tokens": 196928}
+ {"current_steps": 25, "total_steps": 168, "loss": 0.683, "learning_rate": 4.731745523109029e-05, "epoch": 0.4444444444444444, "percentage": 14.88, "elapsed_time": "0:02:41", "remaining_time": "0:15:24", "throughput": 1513.54, "total_tokens": 244592}
+ {"current_steps": 30, "total_steps": 168, "loss": 0.5651, "learning_rate": 4.6168104980707107e-05, "epoch": 0.5333333333333333, "percentage": 17.86, "elapsed_time": "0:03:11", "remaining_time": "0:14:40", "throughput": 1533.3, "total_tokens": 293488}
+ {"current_steps": 35, "total_steps": 168, "loss": 0.5411, "learning_rate": 4.4833833507280884e-05, "epoch": 0.6222222222222222, "percentage": 20.83, "elapsed_time": "0:03:42", "remaining_time": "0:14:05", "throughput": 1520.29, "total_tokens": 338160}
+ {"current_steps": 40, "total_steps": 168, "loss": 0.572, "learning_rate": 4.332629679574566e-05, "epoch": 0.7111111111111111, "percentage": 23.81, "elapsed_time": "0:04:11", "remaining_time": "0:13:24", "throughput": 1523.32, "total_tokens": 382768}
+ {"current_steps": 45, "total_steps": 168, "loss": 0.6106, "learning_rate": 4.16586644488001e-05, "epoch": 0.8, "percentage": 26.79, "elapsed_time": "0:04:41", "remaining_time": "0:12:50", "throughput": 1528.21, "total_tokens": 430928}
+ {"current_steps": 50, "total_steps": 168, "loss": 0.5552, "learning_rate": 3.9845504639337535e-05, "epoch": 0.8888888888888888, "percentage": 29.76, "elapsed_time": "0:05:12", "remaining_time": "0:12:16", "throughput": 1529.26, "total_tokens": 477520}
+ {"current_steps": 55, "total_steps": 168, "loss": 0.5659, "learning_rate": 3.790265684518767e-05, "epoch": 0.9777777777777777, "percentage": 32.74, "elapsed_time": "0:05:44", "remaining_time": "0:11:47", "throughput": 1528.57, "total_tokens": 526640}
+ {"current_steps": 60, "total_steps": 168, "loss": 0.58, "learning_rate": 3.5847093477938956e-05, "epoch": 1.0666666666666667, "percentage": 35.71, "elapsed_time": "0:06:17", "remaining_time": "0:11:20", "throughput": 1527.01, "total_tokens": 577184}
+ {"current_steps": 65, "total_steps": 168, "loss": 0.5429, "learning_rate": 3.369677161463068e-05, "epoch": 1.1555555555555554, "percentage": 38.69, "elapsed_time": "0:06:48", "remaining_time": "0:10:47", "throughput": 1529.65, "total_tokens": 625344}
+ {"current_steps": 70, "total_steps": 168, "loss": 0.4384, "learning_rate": 3.147047612756302e-05, "epoch": 1.2444444444444445, "percentage": 41.67, "elapsed_time": "0:07:17", "remaining_time": "0:10:12", "throughput": 1532.32, "total_tokens": 670224}
+ {"current_steps": 75, "total_steps": 168, "loss": 0.5749, "learning_rate": 2.918765558261841e-05, "epoch": 1.3333333333333333, "percentage": 44.64, "elapsed_time": "0:07:48", "remaining_time": "0:09:41", "throughput": 1525.37, "total_tokens": 714880}
+ {"current_steps": 80, "total_steps": 168, "loss": 0.4222, "learning_rate": 2.686825233966061e-05, "epoch": 1.4222222222222223, "percentage": 47.62, "elapsed_time": "0:08:17", "remaining_time": "0:09:06", "throughput": 1514.93, "total_tokens": 752944}
+ {"current_steps": 85, "total_steps": 168, "loss": 0.556, "learning_rate": 2.4532528339227452e-05, "epoch": 1.511111111111111, "percentage": 50.6, "elapsed_time": "0:08:48", "remaining_time": "0:08:35", "throughput": 1519.0, "total_tokens": 802064}
+ {"current_steps": 90, "total_steps": 168, "loss": 0.4963, "learning_rate": 2.2200888097417307e-05, "epoch": 1.6, "percentage": 53.57, "elapsed_time": "0:09:19", "remaining_time": "0:08:05", "throughput": 1522.01, "total_tokens": 852112}
+ {"current_steps": 95, "total_steps": 168, "loss": 0.5085, "learning_rate": 1.9893700455257996e-05, "epoch": 1.6888888888888889, "percentage": 56.55, "elapsed_time": "0:09:54", "remaining_time": "0:07:36", "throughput": 1522.94, "total_tokens": 905184}
+ {"current_steps": 100, "total_steps": 168, "loss": 0.4726, "learning_rate": 1.7631120639727393e-05, "epoch": 1.7777777777777777, "percentage": 59.52, "elapsed_time": "0:10:23", "remaining_time": "0:07:04", "throughput": 1522.11, "total_tokens": 949424}
+ {"current_steps": 100, "total_steps": 168, "eval_loss": 0.39410004019737244, "epoch": 1.7777777777777777, "percentage": 59.52, "elapsed_time": "0:10:35", "remaining_time": "0:07:11", "throughput": 1494.69, "total_tokens": 949424}
+ {"current_steps": 105, "total_steps": 168, "loss": 0.4874, "learning_rate": 1.5432914190872757e-05, "epoch": 1.8666666666666667, "percentage": 62.5, "elapsed_time": "0:11:08", "remaining_time": "0:06:40", "throughput": 1490.45, "total_tokens": 996112}
+ {"current_steps": 110, "total_steps": 168, "loss": 0.5235, "learning_rate": 1.331828429317345e-05, "epoch": 1.9555555555555557, "percentage": 65.48, "elapsed_time": "0:11:42", "remaining_time": "0:06:10", "throughput": 1491.79, "total_tokens": 1047520}
+ {"current_steps": 115, "total_steps": 168, "loss": 0.4984, "learning_rate": 1.130570401955322e-05, "epoch": 2.0444444444444443, "percentage": 68.45, "elapsed_time": "0:12:14", "remaining_time": "0:05:38", "throughput": 1493.02, "total_tokens": 1096832}
+ {"current_steps": 120, "total_steps": 168, "loss": 0.4777, "learning_rate": 9.412754953531663e-06, "epoch": 2.1333333333333333, "percentage": 71.43, "elapsed_time": "0:12:48", "remaining_time": "0:05:07", "throughput": 1498.71, "total_tokens": 1151104}
+ {"current_steps": 125, "total_steps": 168, "loss": 0.4918, "learning_rate": 7.65597359928646e-06, "epoch": 2.2222222222222223, "percentage": 74.4, "elapsed_time": "0:13:17", "remaining_time": "0:04:34", "throughput": 1498.79, "total_tokens": 1194592}
+ {"current_steps": 130, "total_steps": 168, "loss": 0.4573, "learning_rate": 6.050706921363672e-06, "epoch": 2.311111111111111, "percentage": 77.38, "elapsed_time": "0:13:46", "remaining_time": "0:04:01", "throughput": 1494.08, "total_tokens": 1235104}
+ {"current_steps": 135, "total_steps": 168, "loss": 0.4549, "learning_rate": 4.610978276018496e-06, "epoch": 2.4, "percentage": 80.36, "elapsed_time": "0:14:16", "remaining_time": "0:03:29", "throughput": 1494.64, "total_tokens": 1280560}
+ {"current_steps": 140, "total_steps": 168, "loss": 0.5422, "learning_rate": 3.3493649053890326e-06, "epoch": 2.488888888888889, "percentage": 83.33, "elapsed_time": "0:14:48", "remaining_time": "0:02:57", "throughput": 1497.43, "total_tokens": 1329792}
+ {"current_steps": 145, "total_steps": 168, "loss": 0.4968, "learning_rate": 2.2768880646947268e-06, "epoch": 2.5777777777777775, "percentage": 86.31, "elapsed_time": "0:15:20", "remaining_time": "0:02:26", "throughput": 1500.22, "total_tokens": 1380976}
+ {"current_steps": 150, "total_steps": 168, "loss": 0.4361, "learning_rate": 1.4029167422908107e-06, "epoch": 2.6666666666666665, "percentage": 89.29, "elapsed_time": "0:15:50", "remaining_time": "0:01:54", "throughput": 1497.23, "total_tokens": 1423712}
+ {"current_steps": 155, "total_steps": 168, "loss": 0.4517, "learning_rate": 7.350858136652261e-07, "epoch": 2.7555555555555555, "percentage": 92.26, "elapsed_time": "0:16:21", "remaining_time": "0:01:22", "throughput": 1498.31, "total_tokens": 1470000}
+ {"current_steps": 160, "total_steps": 168, "loss": 0.4099, "learning_rate": 2.7922934437178695e-07, "epoch": 2.8444444444444446, "percentage": 95.24, "elapsed_time": "0:16:51", "remaining_time": "0:00:50", "throughput": 1498.38, "total_tokens": 1515168}
+ {"current_steps": 165, "total_steps": 168, "loss": 0.4861, "learning_rate": 3.9329624554584884e-08, "epoch": 2.9333333333333336, "percentage": 98.21, "elapsed_time": "0:17:21", "remaining_time": "0:00:18", "throughput": 1504.61, "total_tokens": 1566432}
+ {"current_steps": 168, "total_steps": 168, "epoch": 2.986666666666667, "percentage": 100.0, "elapsed_time": "0:17:43", "remaining_time": "0:00:00", "throughput": 1501.4, "total_tokens": 1596080}
trainer_state.json ADDED
@@ -0,0 +1,316 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 2.986666666666667,
+ "eval_steps": 100,
+ "global_step": 168,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.08888888888888889,
+ "grad_norm": 0.7499012351036072,
+ "learning_rate": 4.989080197352834e-05,
+ "loss": 0.9542,
+ "num_input_tokens_seen": 47168,
+ "step": 5
+ },
+ {
+ "epoch": 0.17777777777777778,
+ "grad_norm": 0.6703861355781555,
+ "learning_rate": 4.956416183083221e-05,
+ "loss": 0.7834,
+ "num_input_tokens_seen": 100640,
+ "step": 10
+ },
+ {
+ "epoch": 0.26666666666666666,
+ "grad_norm": 0.6384798884391785,
+ "learning_rate": 4.9022933048627496e-05,
+ "loss": 0.7296,
+ "num_input_tokens_seen": 150048,
+ "step": 15
+ },
+ {
+ "epoch": 0.35555555555555557,
+ "grad_norm": 0.36834755539894104,
+ "learning_rate": 4.827184371610511e-05,
+ "loss": 0.6653,
+ "num_input_tokens_seen": 196928,
+ "step": 20
+ },
+ {
+ "epoch": 0.4444444444444444,
+ "grad_norm": 0.2962658405303955,
+ "learning_rate": 4.731745523109029e-05,
+ "loss": 0.683,
+ "num_input_tokens_seen": 244592,
+ "step": 25
+ },
+ {
+ "epoch": 0.5333333333333333,
+ "grad_norm": 0.33365580439567566,
+ "learning_rate": 4.6168104980707107e-05,
+ "loss": 0.5651,
+ "num_input_tokens_seen": 293488,
+ "step": 30
+ },
+ {
+ "epoch": 0.6222222222222222,
+ "grad_norm": 0.4523426294326782,
+ "learning_rate": 4.4833833507280884e-05,
+ "loss": 0.5411,
+ "num_input_tokens_seen": 338160,
+ "step": 35
+ },
+ {
+ "epoch": 0.7111111111111111,
+ "grad_norm": 0.41214215755462646,
+ "learning_rate": 4.332629679574566e-05,
+ "loss": 0.572,
+ "num_input_tokens_seen": 382768,
+ "step": 40
+ },
+ {
+ "epoch": 0.8,
+ "grad_norm": 0.3807964026927948,
+ "learning_rate": 4.16586644488001e-05,
+ "loss": 0.6106,
+ "num_input_tokens_seen": 430928,
+ "step": 45
+ },
+ {
+ "epoch": 0.8888888888888888,
+ "grad_norm": 0.45064786076545715,
+ "learning_rate": 3.9845504639337535e-05,
+ "loss": 0.5552,
+ "num_input_tokens_seen": 477520,
+ "step": 50
+ },
+ {
+ "epoch": 0.9777777777777777,
+ "grad_norm": 0.4501620829105377,
+ "learning_rate": 3.790265684518767e-05,
+ "loss": 0.5659,
+ "num_input_tokens_seen": 526640,
+ "step": 55
+ },
+ {
+ "epoch": 1.0666666666666667,
+ "grad_norm": 0.24308599531650543,
+ "learning_rate": 3.5847093477938956e-05,
+ "loss": 0.58,
+ "num_input_tokens_seen": 577184,
+ "step": 60
+ },
+ {
+ "epoch": 1.1555555555555554,
+ "grad_norm": 0.33691608905792236,
+ "learning_rate": 3.369677161463068e-05,
+ "loss": 0.5429,
+ "num_input_tokens_seen": 625344,
+ "step": 65
+ },
+ {
+ "epoch": 1.2444444444444445,
+ "grad_norm": 0.2771952748298645,
+ "learning_rate": 3.147047612756302e-05,
+ "loss": 0.4384,
+ "num_input_tokens_seen": 670224,
+ "step": 70
+ },
+ {
+ "epoch": 1.3333333333333333,
+ "grad_norm": 0.4307265281677246,
+ "learning_rate": 2.918765558261841e-05,
+ "loss": 0.5749,
+ "num_input_tokens_seen": 714880,
+ "step": 75
+ },
+ {
+ "epoch": 1.4222222222222223,
+ "grad_norm": 0.4149227440357208,
+ "learning_rate": 2.686825233966061e-05,
+ "loss": 0.4222,
+ "num_input_tokens_seen": 752944,
+ "step": 80
+ },
+ {
+ "epoch": 1.511111111111111,
+ "grad_norm": 0.44140687584877014,
+ "learning_rate": 2.4532528339227452e-05,
+ "loss": 0.556,
+ "num_input_tokens_seen": 802064,
+ "step": 85
+ },
+ {
+ "epoch": 1.6,
+ "grad_norm": 0.376556396484375,
+ "learning_rate": 2.2200888097417307e-05,
+ "loss": 0.4963,
+ "num_input_tokens_seen": 852112,
+ "step": 90
+ },
+ {
+ "epoch": 1.6888888888888889,
+ "grad_norm": 0.3596157431602478,
+ "learning_rate": 1.9893700455257996e-05,
+ "loss": 0.5085,
+ "num_input_tokens_seen": 905184,
+ "step": 95
+ },
+ {
+ "epoch": 1.7777777777777777,
+ "grad_norm": 0.4776301085948944,
+ "learning_rate": 1.7631120639727393e-05,
+ "loss": 0.4726,
+ "num_input_tokens_seen": 949424,
+ "step": 100
+ },
+ {
+ "epoch": 1.7777777777777777,
+ "eval_loss": 0.39410004019737244,
+ "eval_runtime": 11.4337,
+ "eval_samples_per_second": 8.746,
+ "eval_steps_per_second": 4.373,
+ "num_input_tokens_seen": 949424,
+ "step": 100
+ },
+ {
+ "epoch": 1.8666666666666667,
+ "grad_norm": 0.26946938037872314,
+ "learning_rate": 1.5432914190872757e-05,
+ "loss": 0.4874,
+ "num_input_tokens_seen": 996112,
+ "step": 105
+ },
+ {
+ "epoch": 1.9555555555555557,
+ "grad_norm": 0.37138107419013977,
+ "learning_rate": 1.331828429317345e-05,
+ "loss": 0.5235,
+ "num_input_tokens_seen": 1047520,
+ "step": 110
+ },
+ {
+ "epoch": 2.0444444444444443,
+ "grad_norm": 0.33912232518196106,
+ "learning_rate": 1.130570401955322e-05,
+ "loss": 0.4984,
+ "num_input_tokens_seen": 1096832,
+ "step": 115
+ },
+ {
+ "epoch": 2.1333333333333333,
+ "grad_norm": 0.4000737965106964,
+ "learning_rate": 9.412754953531663e-06,
+ "loss": 0.4777,
+ "num_input_tokens_seen": 1151104,
+ "step": 120
+ },
+ {
+ "epoch": 2.2222222222222223,
+ "grad_norm": 0.5536447167396545,
+ "learning_rate": 7.65597359928646e-06,
+ "loss": 0.4918,
+ "num_input_tokens_seen": 1194592,
+ "step": 125
+ },
+ {
+ "epoch": 2.311111111111111,
+ "grad_norm": 0.42406854033470154,
+ "learning_rate": 6.050706921363672e-06,
+ "loss": 0.4573,
+ "num_input_tokens_seen": 1235104,
+ "step": 130
+ },
+ {
+ "epoch": 2.4,
+ "grad_norm": 0.37090376019477844,
+ "learning_rate": 4.610978276018496e-06,
+ "loss": 0.4549,
+ "num_input_tokens_seen": 1280560,
+ "step": 135
+ },
+ {
+ "epoch": 2.488888888888889,
+ "grad_norm": 0.3070351779460907,
+ "learning_rate": 3.3493649053890326e-06,
+ "loss": 0.5422,
+ "num_input_tokens_seen": 1329792,
+ "step": 140
+ },
+ {
+ "epoch": 2.5777777777777775,
+ "grad_norm": 0.3824547529220581,
+ "learning_rate": 2.2768880646947268e-06,
+ "loss": 0.4968,
+ "num_input_tokens_seen": 1380976,
+ "step": 145
+ },
+ {
+ "epoch": 2.6666666666666665,
+ "grad_norm": 0.4585011899471283,
+ "learning_rate": 1.4029167422908107e-06,
+ "loss": 0.4361,
+ "num_input_tokens_seen": 1423712,
+ "step": 150
+ },
+ {
+ "epoch": 2.7555555555555555,
+ "grad_norm": 0.3712801933288574,
+ "learning_rate": 7.350858136652261e-07,
+ "loss": 0.4517,
+ "num_input_tokens_seen": 1470000,
+ "step": 155
+ },
+ {
+ "epoch": 2.8444444444444446,
+ "grad_norm": 0.4640035629272461,
+ "learning_rate": 2.7922934437178695e-07,
+ "loss": 0.4099,
+ "num_input_tokens_seen": 1515168,
+ "step": 160
+ },
+ {
+ "epoch": 2.9333333333333336,
+ "grad_norm": 0.24545951187610626,
+ "learning_rate": 3.9329624554584884e-08,
+ "loss": 0.4861,
+ "num_input_tokens_seen": 1566432,
+ "step": 165
+ },
+ {
+ "epoch": 2.986666666666667,
+ "num_input_tokens_seen": 1596080,
+ "step": 168,
+ "total_flos": 1.467473931042816e+16,
+ "train_loss": 0.5455739086582547,
+ "train_runtime": 1063.0707,
+ "train_samples_per_second": 2.54,
+ "train_steps_per_second": 0.158
+ }
+ ],
+ "logging_steps": 5,
+ "max_steps": 168,
+ "num_input_tokens_seen": 1596080,
+ "num_train_epochs": 3,
+ "save_steps": 100,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 1.467473931042816e+16,
+ "train_batch_size": 2,
+ "trial_name": null,
+ "trial_params": null
+ }
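The learning_rate values in log_history follow the cosine schedule with zero warmup, lr(step) = 0.5 * lr_max * (1 + cos(pi * step / max_steps)); a quick check against the step-100 entry:

```python
import math

lr_max, max_steps = 5e-05, 168
lr_100 = 0.5 * lr_max * (1 + math.cos(math.pi * 100 / max_steps))
print(lr_100)  # about 1.76311e-05, matching "learning_rate" at step 100 above
```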
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a3714cd435343e43fff99200ac2402d2ab09a63941e76fb7bd73195c6e4f4472
+ size 5368
training_args.yaml ADDED
@@ -0,0 +1,36 @@
+ cutoff_len: 1024
+ dataset: glaive_toolcall_en
+ dataset_dir: data
+ ddp_timeout: 180000000
+ do_train: true
+ eval_steps: 100
+ eval_strategy: steps
+ finetuning_type: lora
+ flash_attn: auto
+ fp16: true
+ gradient_accumulation_steps: 8
+ include_num_input_tokens_seen: true
+ learning_rate: 5.0e-05
+ logging_steps: 5
+ lora_alpha: 16
+ lora_dropout: 0
+ lora_rank: 8
+ lora_target: all
+ lr_scheduler_type: cosine
+ max_grad_norm: 1.0
+ max_samples: 100000
+ model_name_or_path: Qwen/Qwen-1_8B
+ num_train_epochs: 3.0
+ optim: adamw_torch
+ output_dir: saves\Qwen-1.8B\lora\train_2024-09-02-15-46-54
+ packing: false
+ per_device_eval_batch_size: 2
+ per_device_train_batch_size: 2
+ plot_loss: true
+ preprocessing_num_workers: 16
+ report_to: none
+ save_steps: 100
+ stage: sft
+ template: default
+ val_size: 0.1
+ warmup_steps: 0
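training_args.yaml is the flat argument file actually passed to the trainer, so it is the natural entry point for reproducing the run. A hedged sketch, assuming a LLaMA-Factory installation whose CLI accepts a YAML argument file (equivalent to running `llamafactory-cli train training_args.yaml` in a shell):

```python
import subprocess

# Assumption: llamafactory-cli is on PATH; adjust the YAML path as needed.
subprocess.run(["llamafactory-cli", "train", "training_args.yaml"], check=True)
```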
training_eval_loss.png ADDED
training_loss.png ADDED