zhaokun committed on
Commit 58d149a · 1 Parent(s): 1152745

Add models

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. README.md +52 -3
  2. README.zh.md +53 -3
  3. adapter_config.json +25 -0
  4. adapter_model.safetensors +3 -0
  5. all_results.json +7 -0
  6. checkpoint-100/README.md +204 -0
  7. checkpoint-100/adapter_config.json +25 -0
  8. checkpoint-100/adapter_model.safetensors +3 -0
  9. checkpoint-100/optimizer.pt +3 -0
  10. checkpoint-100/rng_state.pth +3 -0
  11. checkpoint-100/scheduler.pt +3 -0
  12. checkpoint-100/special_tokens_map.json +18 -0
  13. checkpoint-100/tokenization_chatglm.py +300 -0
  14. checkpoint-100/tokenizer.model +3 -0
  15. checkpoint-100/tokenizer_config.json +41 -0
  16. checkpoint-100/trainer_state.json +141 -0
  17. checkpoint-100/training_args.bin +3 -0
  18. checkpoint-1000/README.md +204 -0
  19. checkpoint-1000/adapter_config.json +25 -0
  20. checkpoint-1000/adapter_model.safetensors +3 -0
  21. checkpoint-1000/optimizer.pt +3 -0
  22. checkpoint-1000/rng_state.pth +3 -0
  23. checkpoint-1000/scheduler.pt +3 -0
  24. checkpoint-1000/special_tokens_map.json +18 -0
  25. checkpoint-1000/tokenization_chatglm.py +300 -0
  26. checkpoint-1000/tokenizer.model +3 -0
  27. checkpoint-1000/tokenizer_config.json +41 -0
  28. checkpoint-1000/trainer_state.json +1221 -0
  29. checkpoint-1000/training_args.bin +3 -0
  30. checkpoint-1100/README.md +204 -0
  31. checkpoint-1100/adapter_config.json +25 -0
  32. checkpoint-1100/adapter_model.safetensors +3 -0
  33. checkpoint-1100/optimizer.pt +3 -0
  34. checkpoint-1100/rng_state.pth +3 -0
  35. checkpoint-1100/scheduler.pt +3 -0
  36. checkpoint-1100/special_tokens_map.json +18 -0
  37. checkpoint-1100/tokenization_chatglm.py +300 -0
  38. checkpoint-1100/tokenizer.model +3 -0
  39. checkpoint-1100/tokenizer_config.json +41 -0
  40. checkpoint-1100/trainer_state.json +1341 -0
  41. checkpoint-1100/training_args.bin +3 -0
  42. checkpoint-200/README.md +204 -0
  43. checkpoint-200/adapter_config.json +25 -0
  44. checkpoint-200/adapter_model.safetensors +3 -0
  45. checkpoint-200/optimizer.pt +3 -0
  46. checkpoint-200/rng_state.pth +3 -0
  47. checkpoint-200/scheduler.pt +3 -0
  48. checkpoint-200/special_tokens_map.json +18 -0
  49. checkpoint-200/tokenization_chatglm.py +300 -0
  50. checkpoint-200/tokenizer.model +3 -0
README.md CHANGED
@@ -1,3 +1,14 @@
+---
+tags:
+- llama-factory
+- lora
+- generated_from_trainer
+- coolshell
+base_model: chatglm3-6b
+model-index:
+- name: coolshell-llm
+---
+
 # CoolShell LLM <!-- omit from toc -->
 
 We express our deepest gratitude to Mr. Chen Hao for his selfless sharing in the internet community, especially in the field of technology.
@@ -5,7 +16,42 @@ We express our deepest gratitude to Mr. Chen Hao for his selfless sharing in the
 > An orchid in deep forest won't stop giving out aroma despite nobody appreciating it.
 > A good man who is moral and well-behaved won't give up his principles despite poverty.
 
-This repository's model has been trained using the dataset provided by coolshell-llm. For detailed usage instructions and more information, please visit the [coolshell-llm GitHub page](https://github.com/megaease/coolshell-llm).
+
+- [Model description](#model-description)
+- [Training procedure](#training-procedure)
+  - [Training hyperparameters](#training-hyperparameters)
+  - [Framework versions](#framework-versions)
+- [Demo](#demo)
+- [Statement](#statement)
+- [Special Thanks](#special-thanks)
+
+## Model description
+
+This model is a fine-tuned version of [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b) on the [coolshell-llm](https://github.com/megaease/coolshell-llm) dataset, using the QLoRA 4-bit method. For detailed usage instructions and more information, please visit the [coolshell-llm GitHub page](https://github.com/megaease/coolshell-llm).
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.002
+- train_batch_size: 4
+- eval_batch_size: 8
+- seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine
+- num_epochs: 25.0
+
+### Framework versions
+
+- PEFT 0.7.1
+- Transformers 4.36.2
+- Pytorch 2.1.2+cu121
+- Datasets 2.15.0
+- Tokenizers 0.15.0
+- LLaMA-Factory 0.4.0
 
 ### Demo
 
@@ -28,9 +74,12 @@ User: 酷壳网有哪些内容
 User: exit
 ```
 
-
 ### Statement
 
 The CoolShell LLM model aims to perpetuate the spirit of Mr. Chen Hao. Do not use the open-source model and code, and any derivatives produced from the open-source project, for any purpose that may harm the nation and society, or for any service that has not undergone safety assessment and registration.
 
-Although every effort has been made to ensure the compliance and accuracy of the data at every stage of model training, due to the influence of probabilistic randomness, the accuracy of output content cannot be guaranteed. Furthermore, the model's output can be easily misled by user input. This project does not assume any responsibility for data security, public opinion risks, or any risks and liabilities arising from the model being misled, abused, disseminated, or improperly utilized.
+Although every effort has been made to ensure the compliance and accuracy of the data at every stage of model training, due to the influence of probabilistic randomness, the accuracy of output content cannot be guaranteed. Furthermore, the model's output can be easily misled by user input. This project does not assume any responsibility for data security, public opinion risks, or any risks and liabilities arising from the model being misled, abused, disseminated, or improperly utilized.
+
+
+### Special Thanks
+We are immensely grateful to [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for providing such a feature-rich and easy-to-use LLM fine-tuning framework. Similarly, we would like to thank Zhipu AI and the KEG Laboratory of Tsinghua University for their open-source contribution to the [ChatGLM3](https://github.com/THUDM/ChatGLM3) model. Without their exceptional work, the establishment of this repository would not have been possible.
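Since the new README stops short of inference code, here is a hedged sketch of how one might load the base model together with this LoRA adapter. The adapter path is a placeholder for this repo's id or a local checkout, and `model.chat` is the helper exposed by ChatGLM3's remote code:

```python
# Sketch: load ChatGLM3-6B and attach the LoRA adapter from this repo.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

BASE = "THUDM/chatglm3-6b"
ADAPTER = "./coolshell-llm"  # placeholder: this repo's id or a local path

tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = AutoModel.from_pretrained(BASE, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # merge-free adapter loading
model.eval()

# ChatGLM3's remote code provides a chat() convenience method.
response, history = model.chat(tokenizer, "酷壳网有哪些内容", history=[])
print(response)
```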
README.zh.md CHANGED
@@ -1,3 +1,14 @@
+---
+tags:
+- llama-factory
+- lora
+- generated_from_trainer
+- coolshell
+base_model: chatglm3-6b
+model-index:
+- name: coolshell-llm
+---
+
 # CoolShell LLM <!-- omit from toc -->
 
 We are grateful to Mr. Chen Hao for his selfless sharing with the Chinese internet community, especially in the field of technology.
@@ -5,7 +16,42 @@
 > An orchid growing in a deep valley does not withhold its fragrance just because no one is there to admire it.
 > A gentleman cultivates his character and virtue, and does not abandon his principles because of poverty.
 
-The model in this repository was trained on the dataset provided by coolshell-llm. For detailed usage instructions and more information, please visit the [coolshell-llm GitHub page](https://github.com/megaease/coolshell-llm).
+- [Model description](#model-description)
+- [Training procedure](#training-procedure)
+  - [Training hyperparameters](#training-hyperparameters)
+  - [Framework versions](#framework-versions)
+- [Demo](#demo)
+- [Statement](#statement)
+- [Special Thanks](#special-thanks)
+
+
+## Model description
+
+This model is the result of fine-tuning [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b) on the [coolshell-llm](https://github.com/megaease/coolshell-llm) dataset with the QLoRA 4-bit method. For more usage instructions, see the [coolshell-llm GitHub page](https://github.com/megaease/coolshell-llm).
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.002
+- train_batch_size: 4
+- eval_batch_size: 8
+- seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine
+- num_epochs: 25.0
+
+### Framework versions
+
+- PEFT 0.7.1
+- Transformers 4.36.2
+- Pytorch 2.1.2+cu121
+- Datasets 2.15.0
+- Tokenizers 0.15.0
+- LLaMA-Factory 0.4.0
 
 ### Demo
 
@@ -30,9 +76,13 @@ User: 酷壳网有哪些内容
 User: exit
 ```
 
-
 ### Statement
 
 The CoolShell LLM model aims to perpetuate the spirit of Mr. Chen Hao. Do not use the open-source model and code, or any derivatives of this open-source project, for any purpose that may harm the nation or society, or for any service that has not undergone safety assessment and registration.
 
-Although every effort has been made at every stage of training to ensure the compliance and accuracy of the data, the model is subject to probabilistic randomness and the accuracy of its output cannot be guaranteed. The model's output is also easily misled by user input. This project assumes no responsibility for data security or public-opinion risks caused by the open-source model and code, or for any risks and liabilities arising from the model being misled, abused, disseminated, or improperly used.
+Although every effort has been made at every stage of training to ensure the compliance and accuracy of the data, the model is subject to probabilistic randomness and the accuracy of its output cannot be guaranteed. The model's output is also easily misled by user input. This project assumes no responsibility for data security or public-opinion risks caused by the open-source model and code, or for any risks and liabilities arising from the model being misled, abused, disseminated, or improperly used.
+
+
+### Special Thanks
+
+We are immensely grateful to [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for providing such a feature-rich and easy-to-use LLM fine-tuning framework. We also thank Zhipu AI and the KEG Laboratory of Tsinghua University for open-sourcing the [ChatGLM3](https://github.com/THUDM/ChatGLM3) model. Without their exceptional work, this repository could not have been built.
adapter_config.json ADDED
@@ -0,0 +1,25 @@
+{
+  "alpha_pattern": {},
+  "auto_mapping": null,
+  "base_model_name_or_path": "/root/chatglm3-6b",
+  "bias": "none",
+  "fan_in_fan_out": false,
+  "inference_mode": true,
+  "init_lora_weights": true,
+  "layers_pattern": null,
+  "layers_to_transform": null,
+  "loftq_config": {},
+  "lora_alpha": 64.0,
+  "lora_dropout": 0.1,
+  "megatron_config": null,
+  "megatron_core": "megatron.core",
+  "modules_to_save": null,
+  "peft_type": "LORA",
+  "r": 32,
+  "rank_pattern": {},
+  "revision": null,
+  "target_modules": [
+    "query_key_value"
+  ],
+  "task_type": "CAUSAL_LM"
+}
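For readers who want to reproduce this adapter configuration in code, here is a minimal sketch mapping the JSON above onto PEFT's `LoraConfig` (PEFT 0.7.1, per the framework versions in the README); the config object itself is an illustration, not part of the commit:

```python
# Sketch: adapter_config.json expressed as a peft.LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                                # LoRA rank
    lora_alpha=64,                       # scaling numerator; alpha / r = 2
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM's fused QKV projection
    bias="none",
    task_type="CAUSAL_LM",
)
```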
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2bc2583490c7dc47bededcc0eaaa25d9aafe96d7680d7ecf5ec077c85de59604
+size 31204248
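The three lines above are a Git LFS pointer, not the weights themselves: `oid` is the SHA-256 of the real file and `size` its byte count. A small sketch (assuming the weights have already been fetched with `git lfs pull`) of verifying a downloaded file against its pointer:

```python
# Sketch: verify a file fetched via `git lfs pull` against its LFS pointer.
import hashlib
import os

path = "adapter_model.safetensors"
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == "2bc2583490c7dc47bededcc0eaaa25d9aafe96d7680d7ecf5ec077c85de59604"
assert os.path.getsize(path) == 31204248  # ~31 MB LoRA adapter
```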
all_results.json ADDED
@@ -0,0 +1,7 @@
+{
+  "epoch": 25.0,
+  "train_loss": 1.3768115234375,
+  "train_runtime": 24197.7873,
+  "train_samples_per_second": 0.724,
+  "train_steps_per_second": 0.045
+}
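A quick sanity check on these numbers, using only the values in the file: the run lasted about 6.7 hours, and the implied step count is consistent with the `max_steps` of 1100 recorded in the checkpoint trainer states below.

```python
# Back-of-the-envelope arithmetic on all_results.json.
train_runtime = 24197.7873           # seconds
print(train_runtime / 3600)          # ~6.7 hours
print(train_runtime * 0.724)         # ~17,519 samples processed in total
print(train_runtime * 0.045)         # ~1,089 steps, consistent with max_steps=1100
print(train_runtime * 0.724 / 25.0)  # ~700 samples per epoch
```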
checkpoint-100/README.md ADDED
@@ -0,0 +1,204 @@
+---
+library_name: peft
+base_model: /root/chatglm3-6b
+---
+
+# Model Card for Model ID
+
+<!-- Provide a quick summary of what the model is/does. -->
+
+
+
+## Model Details
+
+### Model Description
+
+<!-- Provide a longer summary of what this model is. -->
+
+
+
+- **Developed by:** [More Information Needed]
+- **Funded by [optional]:** [More Information Needed]
+- **Shared by [optional]:** [More Information Needed]
+- **Model type:** [More Information Needed]
+- **Language(s) (NLP):** [More Information Needed]
+- **License:** [More Information Needed]
+- **Finetuned from model [optional]:** [More Information Needed]
+
+### Model Sources [optional]
+
+<!-- Provide the basic links for the model. -->
+
+- **Repository:** [More Information Needed]
+- **Paper [optional]:** [More Information Needed]
+- **Demo [optional]:** [More Information Needed]
+
+## Uses
+
+<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+### Direct Use
+
+<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+[More Information Needed]
+
+### Downstream Use [optional]
+
+<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+[More Information Needed]
+
+### Out-of-Scope Use
+
+<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+[More Information Needed]
+
+## Bias, Risks, and Limitations
+
+<!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+[More Information Needed]
+
+### Recommendations
+
+<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+## How to Get Started with the Model
+
+Use the code below to get started with the model.
+
+[More Information Needed]
+
+## Training Details
+
+### Training Data
+
+<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+[More Information Needed]
+
+### Training Procedure
+
+<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+#### Preprocessing [optional]
+
+[More Information Needed]
+
+
+#### Training Hyperparameters
+
+- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+#### Speeds, Sizes, Times [optional]
+
+<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+[More Information Needed]
+
+## Evaluation
+
+<!-- This section describes the evaluation protocols and provides the results. -->
+
+### Testing Data, Factors & Metrics
+
+#### Testing Data
+
+<!-- This should link to a Dataset Card if possible. -->
+
+[More Information Needed]
+
+#### Factors
+
+<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+[More Information Needed]
+
+#### Metrics
+
+<!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+[More Information Needed]
+
+### Results
+
+[More Information Needed]
+
+#### Summary
+
+
+
+## Model Examination [optional]
+
+<!-- Relevant interpretability work for the model goes here -->
+
+[More Information Needed]
+
+## Environmental Impact
+
+<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+- **Hardware Type:** [More Information Needed]
+- **Hours used:** [More Information Needed]
+- **Cloud Provider:** [More Information Needed]
+- **Compute Region:** [More Information Needed]
+- **Carbon Emitted:** [More Information Needed]
+
+## Technical Specifications [optional]
+
+### Model Architecture and Objective
+
+[More Information Needed]
+
+### Compute Infrastructure
+
+[More Information Needed]
+
+#### Hardware
+
+[More Information Needed]
+
+#### Software
+
+[More Information Needed]
+
+## Citation [optional]
+
+<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+**BibTeX:**
+
+[More Information Needed]
+
+**APA:**
+
+[More Information Needed]
+
+## Glossary [optional]
+
+<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+[More Information Needed]
+
+## More Information [optional]
+
+[More Information Needed]
+
+## Model Card Authors [optional]
+
+[More Information Needed]
+
+## Model Card Contact
+
+[More Information Needed]
+
+
+### Framework versions
+
+- PEFT 0.7.1
checkpoint-100/adapter_config.json ADDED
@@ -0,0 +1,25 @@
(contents identical to the top-level adapter_config.json shown above)
checkpoint-100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f8a01b2ff9ae8d39695c90956fca3c08f1cbc215ff8ec47d39cdb42704f85f7
+size 31204248
checkpoint-100/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f89a17984e8f8325a843e199ab06bda3f078c75a4a70fd390368380879c4da9
+size 62437882
checkpoint-100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0dabbebc3b7aae0f1e2e08720110c236a4c4ad8bcc4021283756db5a9251a361
+size 14244
checkpoint-100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5a75a62743becb9bf113e0f626f02da4c2bf599473c2d2862708dd9fbc349c5
+size 1064
checkpoint-100/special_tokens_map.json ADDED
@@ -0,0 +1,18 @@
+{
+  "additional_special_tokens": [
+    {
+      "content": "<|user|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false
+    },
+    {
+      "content": "<|observation|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false
+    }
+  ]
+}
checkpoint-100/tokenization_chatglm.py ADDED
@@ -0,0 +1,300 @@
+import json
+import os
+import re
+from typing import List, Optional, Union, Dict
+from sentencepiece import SentencePieceProcessor
+from transformers import PreTrainedTokenizer
+from transformers.utils import logging, PaddingStrategy
+from transformers.tokenization_utils_base import EncodedInput, BatchEncoding
+
+
+class SPTokenizer:
+    def __init__(self, model_path: str):
+        # reload tokenizer
+        assert os.path.isfile(model_path), model_path
+        self.sp_model = SentencePieceProcessor(model_file=model_path)
+
+        # BOS / EOS token IDs
+        self.n_words: int = self.sp_model.vocab_size()
+        self.bos_id: int = self.sp_model.bos_id()
+        self.eos_id: int = self.sp_model.eos_id()
+        self.pad_id: int = self.sp_model.unk_id()
+        assert self.sp_model.vocab_size() == self.sp_model.get_piece_size()
+
+        role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"]
+        special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens
+        self.special_tokens = {}
+        self.index_special_tokens = {}
+        for token in special_tokens:
+            self.special_tokens[token] = self.n_words
+            self.index_special_tokens[self.n_words] = token
+            self.n_words += 1
+        self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens])
+
+    def tokenize(self, s: str, encode_special_tokens=False):
+        if encode_special_tokens:
+            last_index = 0
+            t = []
+            for match in re.finditer(self.role_special_token_expression, s):
+                if last_index < match.start():
+                    t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()]))
+                t.append(s[match.start():match.end()])
+                last_index = match.end()
+            if last_index < len(s):
+                t.extend(self.sp_model.EncodeAsPieces(s[last_index:]))
+            return t
+        else:
+            return self.sp_model.EncodeAsPieces(s)
+
+    def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]:
+        assert type(s) is str
+        t = self.sp_model.encode(s)
+        if bos:
+            t = [self.bos_id] + t
+        if eos:
+            t = t + [self.eos_id]
+        return t
+
+    def decode(self, t: List[int]) -> str:
+        text, buffer = "", []
+        for token in t:
+            if token in self.index_special_tokens:
+                if buffer:
+                    text += self.sp_model.decode(buffer)
+                    buffer = []
+                text += self.index_special_tokens[token]
+            else:
+                buffer.append(token)
+        if buffer:
+            text += self.sp_model.decode(buffer)
+        return text
+
+    def decode_tokens(self, tokens: List[str]) -> str:
+        text = self.sp_model.DecodePieces(tokens)
+        return text
+
+    def convert_token_to_id(self, token):
+        """ Converts a token (str) in an id using the vocab. """
+        if token in self.special_tokens:
+            return self.special_tokens[token]
+        return self.sp_model.PieceToId(token)
+
+    def convert_id_to_token(self, index):
+        """Converts an index (integer) in a token (str) using the vocab."""
+        if index in self.index_special_tokens:
+            return self.index_special_tokens[index]
+        if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size():
+            return ""
+        return self.sp_model.IdToPiece(index)
+
+
+class ChatGLMTokenizer(PreTrainedTokenizer):
+    vocab_files_names = {"vocab_file": "tokenizer.model"}
+
+    model_input_names = ["input_ids", "attention_mask", "position_ids"]
+
+    def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False,
+                 **kwargs):
+        self.name = "GLMTokenizer"
+
+        self.vocab_file = vocab_file
+        self.tokenizer = SPTokenizer(vocab_file)
+        self.special_tokens = {
+            "<bos>": self.tokenizer.bos_id,
+            "<eos>": self.tokenizer.eos_id,
+            "<pad>": self.tokenizer.pad_id
+        }
+        self.encode_special_tokens = encode_special_tokens
+        super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+                         encode_special_tokens=encode_special_tokens,
+                         **kwargs)
+
+    def get_command(self, token):
+        if token in self.special_tokens:
+            return self.special_tokens[token]
+        assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}"
+        return self.tokenizer.special_tokens[token]
+
+    @property
+    def unk_token(self) -> str:
+        return "<unk>"
+
+    @property
+    def pad_token(self) -> str:
+        return "<unk>"
+
+    @property
+    def pad_token_id(self):
+        return self.get_command("<pad>")
+
+    @property
+    def eos_token(self) -> str:
+        return "</s>"
+
+    @property
+    def eos_token_id(self):
+        return self.get_command("<eos>")
+
+    @property
+    def vocab_size(self):
+        return self.tokenizer.n_words
+
+    def get_vocab(self):
+        """ Returns vocab as a dict """
+        vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
+        vocab.update(self.added_tokens_encoder)
+        return vocab
+
+    def _tokenize(self, text, **kwargs):
+        return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens)
+
+    def _convert_token_to_id(self, token):
+        """ Converts a token (str) in an id using the vocab. """
+        return self.tokenizer.convert_token_to_id(token)
+
+    def _convert_id_to_token(self, index):
+        """Converts an index (integer) in a token (str) using the vocab."""
+        return self.tokenizer.convert_id_to_token(index)
+
+    def convert_tokens_to_string(self, tokens: List[str]) -> str:
+        return self.tokenizer.decode_tokens(tokens)
+
+    def save_vocabulary(self, save_directory, filename_prefix=None):
+        """
+        Save the vocabulary and special tokens file to a directory.
+
+        Args:
+            save_directory (`str`):
+                The directory in which to save the vocabulary.
+            filename_prefix (`str`, *optional*):
+                An optional prefix to add to the named of the saved files.
+
+        Returns:
+            `Tuple(str)`: Paths to the files saved.
+        """
+        if os.path.isdir(save_directory):
+            vocab_file = os.path.join(
+                save_directory, self.vocab_files_names["vocab_file"]
+            )
+        else:
+            vocab_file = save_directory
+
+        with open(self.vocab_file, 'rb') as fin:
+            proto_str = fin.read()
+
+        with open(vocab_file, "wb") as writer:
+            writer.write(proto_str)
+
+        return (vocab_file,)
+
+    def get_prefix_tokens(self):
+        prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")]
+        return prefix_tokens
+
+    def build_single_message(self, role, metadata, message):
+        assert role in ["system", "user", "assistant", "observation"], role
+        role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n")
+        message_tokens = self.tokenizer.encode(message)
+        tokens = role_tokens + message_tokens
+        return tokens
+
+    def build_chat_input(self, query, history=None, role="user"):
+        if history is None:
+            history = []
+        input_ids = []
+        for item in history:
+            content = item["content"]
+            if item["role"] == "system" and "tools" in item:
+                content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False)
+            input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content))
+        input_ids.extend(self.build_single_message(role, "", query))
+        input_ids.extend([self.get_command("<|assistant|>")])
+        return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True)
+
+    def build_inputs_with_special_tokens(
+            self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+    ) -> List[int]:
+        """
+        Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
+        adding special tokens. A BERT sequence has the following format:
+
+        - single sequence: `[CLS] X [SEP]`
+        - pair of sequences: `[CLS] A [SEP] B [SEP]`
+
+        Args:
+            token_ids_0 (`List[int]`):
+                List of IDs to which the special tokens will be added.
+            token_ids_1 (`List[int]`, *optional*):
+                Optional second list of IDs for sequence pairs.
+
+        Returns:
+            `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
+        """
+        prefix_tokens = self.get_prefix_tokens()
+        token_ids_0 = prefix_tokens + token_ids_0
+        if token_ids_1 is not None:
+            token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")]
+        return token_ids_0
+
+    def _pad(
+            self,
+            encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
+            max_length: Optional[int] = None,
+            padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
+            pad_to_multiple_of: Optional[int] = None,
+            return_attention_mask: Optional[bool] = None,
+    ) -> dict:
+        """
+        Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
+
+        Args:
+            encoded_inputs:
+                Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
+            max_length: maximum length of the returned list and optionally padding length (see below).
+                Will truncate by taking into account the special tokens.
+            padding_strategy: PaddingStrategy to use for padding.
+
+                - PaddingStrategy.LONGEST Pad to the longest sequence in the batch
+                - PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
+                - PaddingStrategy.DO_NOT_PAD: Do not pad
+                The tokenizer padding sides are defined in self.padding_side:
+
+                    - 'left': pads on the left of the sequences
+                    - 'right': pads on the right of the sequences
+            pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
+                This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
+                `>= 7.5` (Volta).
+            return_attention_mask:
+                (optional) Set to False to avoid returning attention mask (default: set to model specifics)
+        """
+        # Load from model defaults
+        assert self.padding_side == "left"
+
+        required_input = encoded_inputs[self.model_input_names[0]]
+        seq_length = len(required_input)
+
+        if padding_strategy == PaddingStrategy.LONGEST:
+            max_length = len(required_input)
+
+        if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
+            max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
+
+        needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length
+
+        # Initialize attention mask if not present.
+        if "attention_mask" not in encoded_inputs:
+            encoded_inputs["attention_mask"] = [1] * seq_length
+
+        if "position_ids" not in encoded_inputs:
+            encoded_inputs["position_ids"] = list(range(seq_length))
+
+        if needs_to_be_padded:
+            difference = max_length - len(required_input)
+
+            if "attention_mask" in encoded_inputs:
+                encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"]
+            if "position_ids" in encoded_inputs:
+                encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"]
+            encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
+
+        return encoded_inputs
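A short usage sketch for the tokenizer above. The `auto_map` entry in `tokenizer_config.json` (a later file in this diff) lets `AutoTokenizer` resolve `ChatGLMTokenizer` when `trust_remote_code=True`; the local path is an assumption — point it at any directory containing this file, `tokenizer.model`, and the configs:

```python
# Sketch: load the custom tokenizer and build a chat prompt with it.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./checkpoint-100", trust_remote_code=True)

# build_chat_input prepends the [gMASK]/sop prefix, wraps the query in
# <|user|> ... and appends <|assistant|> so the model continues as assistant.
inputs = tok.build_chat_input("酷壳网有哪些内容", history=[], role="user")
print(inputs["input_ids"].shape)
```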
checkpoint-100/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2
+size 1018370
checkpoint-100/tokenizer_config.json ADDED
@@ -0,0 +1,41 @@
+{
+  "added_tokens_decoder": {
+    "64795": {
+      "content": "<|user|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "64797": {
+      "content": "<|observation|>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "additional_special_tokens": [
+    "<|user|>",
+    "<|observation|>"
+  ],
+  "auto_map": {
+    "AutoTokenizer": [
+      "tokenization_chatglm.ChatGLMTokenizer",
+      null
+    ]
+  },
+  "clean_up_tokenization_spaces": false,
+  "do_lower_case": false,
+  "encode_special_tokens": false,
+  "eos_token": "</s>",
+  "model_max_length": 1000000000000000019884624838656,
+  "pad_token": "<unk>",
+  "padding_side": "right",
+  "remove_space": false,
+  "split_special_tokens": false,
+  "tokenizer_class": "ChatGLMTokenizer",
+  "unk_token": "<unk>"
+}
checkpoint-100/trainer_state.json ADDED
@@ -0,0 +1,141 @@
+{
+  "best_metric": null,
+  "best_model_checkpoint": null,
+  "epoch": 2.2727272727272725,
+  "eval_steps": 500,
+  "global_step": 100,
+  "is_hyper_param_search": false,
+  "is_local_process_zero": true,
+  "is_world_process_zero": true,
+  "log_history": [
+    {
+      "epoch": 0.11,
+      "learning_rate": 0.001999898043009433,
+      "loss": 4.5094,
+      "step": 5
+    },
+    {
+      "epoch": 0.23,
+      "learning_rate": 0.0019995921928281893,
+      "loss": 3.8047,
+      "step": 10
+    },
+    {
+      "epoch": 0.34,
+      "learning_rate": 0.001999082511823396,
+      "loss": 3.8813,
+      "step": 15
+    },
+    {
+      "epoch": 0.45,
+      "learning_rate": 0.0019983691039261358,
+      "loss": 3.7188,
+      "step": 20
+    },
+    {
+      "epoch": 0.57,
+      "learning_rate": 0.0019974521146102534,
+      "loss": 3.6695,
+      "step": 25
+    },
+    {
+      "epoch": 0.68,
+      "learning_rate": 0.001996331730862691,
+      "loss": 3.7078,
+      "step": 30
+    },
+    {
+      "epoch": 0.8,
+      "learning_rate": 0.0019950081811453595,
+      "loss": 3.6844,
+      "step": 35
+    },
+    {
+      "epoch": 0.91,
+      "learning_rate": 0.0019934817353485504,
+      "loss": 3.6961,
+      "step": 40
+    },
+    {
+      "epoch": 1.02,
+      "learning_rate": 0.0019917527047359027,
+      "loss": 3.5758,
+      "step": 45
+    },
+    {
+      "epoch": 1.14,
+      "learning_rate": 0.001989821441880933,
+      "loss": 3.4102,
+      "step": 50
+    },
+    {
+      "epoch": 1.25,
+      "learning_rate": 0.0019876883405951376,
+      "loss": 3.3984,
+      "step": 55
+    },
+    {
+      "epoch": 1.36,
+      "learning_rate": 0.001985353835847693,
+      "loss": 3.3602,
+      "step": 60
+    },
+    {
+      "epoch": 1.48,
+      "learning_rate": 0.0019828184036767556,
+      "loss": 3.4461,
+      "step": 65
+    },
+    {
+      "epoch": 1.59,
+      "learning_rate": 0.0019800825610923932,
+      "loss": 3.3461,
+      "step": 70
+    },
+    {
+      "epoch": 1.7,
+      "learning_rate": 0.0019771468659711597,
+      "loss": 3.4172,
+      "step": 75
+    },
+    {
+      "epoch": 1.82,
+      "learning_rate": 0.0019740119169423336,
+      "loss": 3.4359,
+      "step": 80
+    },
+    {
+      "epoch": 1.93,
+      "learning_rate": 0.0019706783532658523,
+      "loss": 3.5141,
+      "step": 85
+    },
+    {
+      "epoch": 2.05,
+      "learning_rate": 0.001967146854701957,
+      "loss": 3.2242,
+      "step": 90
+    },
+    {
+      "epoch": 2.16,
+      "learning_rate": 0.0019634181413725788,
+      "loss": 3.0227,
+      "step": 95
+    },
+    {
+      "epoch": 2.27,
+      "learning_rate": 0.0019594929736144974,
+      "loss": 2.8984,
+      "step": 100
+    }
+  ],
+  "logging_steps": 5,
+  "max_steps": 1100,
+  "num_input_tokens_seen": 0,
+  "num_train_epochs": 25,
+  "save_steps": 100,
+  "total_flos": 5.099717548376064e+16,
+  "train_batch_size": 4,
+  "trial_name": null,
+  "trial_params": null
+}
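The `log_history` array is easy to post-process; here is a sketch (filename assumed) that prints the loss curve recorded above:

```python
# Sketch: dump the loss curve from a checkpoint's trainer_state.json.
import json

with open("checkpoint-100/trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    print(f"step {entry['step']:4d}  epoch {entry['epoch']:5.2f}  "
          f"lr {entry['learning_rate']:.2e}  loss {entry['loss']:.4f}")
```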
checkpoint-100/training_args.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c
+size 4920
checkpoint-1000/README.md ADDED
@@ -0,0 +1,204 @@
(contents identical to checkpoint-100/README.md shown above)
checkpoint-1000/adapter_config.json ADDED
@@ -0,0 +1,25 @@
(contents identical to the top-level adapter_config.json shown above)
checkpoint-1000/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:323caf0b1e8894e4ef8b0dbe356d83adafb2f8672a02f89fb8729684fbf30c82
+size 31204248
checkpoint-1000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:914475ddbfdc97f3d9f8637d5b05f797d202f9a60e23df9d28710afb7e06205a
+size 62437882
checkpoint-1000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1073cb8b57930e10d4affaf055d83ef268bea78a4de9ff17cd6d0203574a40d
+size 14244
checkpoint-1000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bed216a1f1980adb444c4a55e2b348e6b6c8174e1a232afea7a11177b3480627
+size 1064
checkpoint-1000/special_tokens_map.json ADDED
@@ -0,0 +1,18 @@
(contents identical to checkpoint-100/special_tokens_map.json shown above)
checkpoint-1000/tokenization_chatglm.py ADDED
@@ -0,0 +1,300 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import json
2
+ import os
3
+ import re
4
+ from typing import List, Optional, Union, Dict
5
+ from sentencepiece import SentencePieceProcessor
6
+ from transformers import PreTrainedTokenizer
7
+ from transformers.utils import logging, PaddingStrategy
8
+ from transformers.tokenization_utils_base import EncodedInput, BatchEncoding
9
+
10
+
11
+ class SPTokenizer:
12
+ def __init__(self, model_path: str):
13
+ # reload tokenizer
14
+ assert os.path.isfile(model_path), model_path
15
+ self.sp_model = SentencePieceProcessor(model_file=model_path)
16
+
17
+ # BOS / EOS token IDs
18
+ self.n_words: int = self.sp_model.vocab_size()
19
+ self.bos_id: int = self.sp_model.bos_id()
20
+ self.eos_id: int = self.sp_model.eos_id()
21
+ self.pad_id: int = self.sp_model.unk_id()
22
+ assert self.sp_model.vocab_size() == self.sp_model.get_piece_size()
23
+
24
+ role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"]
25
+ special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens
26
+ self.special_tokens = {}
27
+ self.index_special_tokens = {}
28
+ for token in special_tokens:
29
+ self.special_tokens[token] = self.n_words
30
+ self.index_special_tokens[self.n_words] = token
31
+ self.n_words += 1
32
+ self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens])
33
+
34
+ def tokenize(self, s: str, encode_special_tokens=False):
35
+ if encode_special_tokens:
36
+ last_index = 0
37
+ t = []
38
+ for match in re.finditer(self.role_special_token_expression, s):
39
+ if last_index < match.start():
40
+ t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()]))
41
+ t.append(s[match.start():match.end()])
42
+ last_index = match.end()
43
+ if last_index < len(s):
44
+ t.extend(self.sp_model.EncodeAsPieces(s[last_index:]))
45
+ return t
46
+ else:
47
+ return self.sp_model.EncodeAsPieces(s)
48
+
49
+ def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]:
50
+ assert type(s) is str
51
+ t = self.sp_model.encode(s)
52
+ if bos:
53
+ t = [self.bos_id] + t
54
+ if eos:
55
+ t = t + [self.eos_id]
56
+ return t
57
+
58
+ def decode(self, t: List[int]) -> str:
59
+ text, buffer = "", []
60
+ for token in t:
61
+ if token in self.index_special_tokens:
62
+ if buffer:
63
+ text += self.sp_model.decode(buffer)
64
+ buffer = []
65
+ text += self.index_special_tokens[token]
66
+ else:
67
+ buffer.append(token)
68
+ if buffer:
69
+ text += self.sp_model.decode(buffer)
70
+ return text
71
+
72
+ def decode_tokens(self, tokens: List[str]) -> str:
73
+ text = self.sp_model.DecodePieces(tokens)
74
+ return text
75
+
76
+ def convert_token_to_id(self, token):
77
+ """ Converts a token (str) in an id using the vocab. """
78
+ if token in self.special_tokens:
79
+ return self.special_tokens[token]
80
+ return self.sp_model.PieceToId(token)
81
+
82
+ def convert_id_to_token(self, index):
83
+ """Converts an index (integer) in a token (str) using the vocab."""
84
+ if index in self.index_special_tokens:
85
+ return self.index_special_tokens[index]
86
+ if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size():
87
+ return ""
88
+ return self.sp_model.IdToPiece(index)
89
+
90
+
91
+ class ChatGLMTokenizer(PreTrainedTokenizer):
92
+ vocab_files_names = {"vocab_file": "tokenizer.model"}
93
+
94
+ model_input_names = ["input_ids", "attention_mask", "position_ids"]
95
+
96
+ def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False,
97
+ **kwargs):
98
+ self.name = "GLMTokenizer"
99
+
100
+ self.vocab_file = vocab_file
101
+ self.tokenizer = SPTokenizer(vocab_file)
102
+ self.special_tokens = {
103
+ "<bos>": self.tokenizer.bos_id,
104
+ "<eos>": self.tokenizer.eos_id,
105
+ "<pad>": self.tokenizer.pad_id
106
+ }
107
+ self.encode_special_tokens = encode_special_tokens
108
+ super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces,
109
+ encode_special_tokens=encode_special_tokens,
110
+ **kwargs)
111
+
112
+ def get_command(self, token):
113
+ if token in self.special_tokens:
114
+ return self.special_tokens[token]
115
+ assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}"
116
+ return self.tokenizer.special_tokens[token]
117
+
118
+ @property
119
+ def unk_token(self) -> str:
120
+ return "<unk>"
121
+
122
+ @property
123
+ def pad_token(self) -> str:
124
+ return "<unk>"
125
+
126
+ @property
127
+ def pad_token_id(self):
128
+ return self.get_command("<pad>")
129
+
130
+ @property
131
+ def eos_token(self) -> str:
132
+ return "</s>"
133
+
134
+ @property
135
+ def eos_token_id(self):
136
+ return self.get_command("<eos>")
137
+
138
+ @property
139
+ def vocab_size(self):
140
+ return self.tokenizer.n_words
141
+
142
+ def get_vocab(self):
143
+ """ Returns vocab as a dict """
144
+ vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
145
+ vocab.update(self.added_tokens_encoder)
146
+ return vocab
147
+
148
+ def _tokenize(self, text, **kwargs):
149
+ return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens)
150
+
151
+ def _convert_token_to_id(self, token):
152
+ """ Converts a token (str) in an id using the vocab. """
153
+ return self.tokenizer.convert_token_to_id(token)
154
+
155
+ def _convert_id_to_token(self, index):
156
+ """Converts an index (integer) in a token (str) using the vocab."""
157
+ return self.tokenizer.convert_id_to_token(index)
158
+
159
+ def convert_tokens_to_string(self, tokens: List[str]) -> str:
160
+ return self.tokenizer.decode_tokens(tokens)
161
+
162
+ def save_vocabulary(self, save_directory, filename_prefix=None):
163
+ """
164
+ Save the vocabulary and special tokens file to a directory.
165
+
166
+ Args:
167
+ save_directory (`str`):
168
+ The directory in which to save the vocabulary.
169
+ filename_prefix (`str`, *optional*):
170
+ An optional prefix to add to the named of the saved files.
171
+
172
+ Returns:
173
+ `Tuple(str)`: Paths to the files saved.
174
+ """
175
+ if os.path.isdir(save_directory):
176
+ vocab_file = os.path.join(
177
+ save_directory, self.vocab_files_names["vocab_file"]
178
+ )
179
+ else:
180
+ vocab_file = save_directory
181
+
182
+ with open(self.vocab_file, 'rb') as fin:
183
+ proto_str = fin.read()
184
+
185
+ with open(vocab_file, "wb") as writer:
186
+ writer.write(proto_str)
187
+
188
+ return (vocab_file,)
189
+
190
+ def get_prefix_tokens(self):
191
+ prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")]
192
+ return prefix_tokens
193
+
194
+ def build_single_message(self, role, metadata, message):
195
+ assert role in ["system", "user", "assistant", "observation"], role
196
+ role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n")
197
+ message_tokens = self.tokenizer.encode(message)
198
+ tokens = role_tokens + message_tokens
199
+ return tokens
200
+
201
+ def build_chat_input(self, query, history=None, role="user"):
202
+ if history is None:
203
+ history = []
204
+ input_ids = []
205
+ for item in history:
206
+ content = item["content"]
207
+ if item["role"] == "system" and "tools" in item:
208
+ content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False)
209
+ input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content))
210
+ input_ids.extend(self.build_single_message(role, "", query))
211
+ input_ids.extend([self.get_command("<|assistant|>")])
212
+ return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True)
213
+
214
+ def build_inputs_with_special_tokens(
215
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
216
+ ) -> List[int]:
217
+ """
218
+ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
219
+ adding special tokens. A BERT sequence has the following format:
220
+
221
+ - single sequence: `[CLS] X [SEP]`
222
+ - pair of sequences: `[CLS] A [SEP] B [SEP]`
223
+
224
+ Args:
225
+ token_ids_0 (`List[int]`):
226
+ List of IDs to which the special tokens will be added.
227
+ token_ids_1 (`List[int]`, *optional*):
228
+ Optional second list of IDs for sequence pairs.
229
+
230
+ Returns:
231
+ `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
232
+ """
233
+ prefix_tokens = self.get_prefix_tokens()
234
+ token_ids_0 = prefix_tokens + token_ids_0
235
+ if token_ids_1 is not None:
236
+ token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")]
237
+ return token_ids_0
238
+
239
+ def _pad(
240
+ self,
241
+ encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
242
+ max_length: Optional[int] = None,
243
+ padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
244
+ pad_to_multiple_of: Optional[int] = None,
245
+ return_attention_mask: Optional[bool] = None,
246
+ ) -> dict:
247
+ """
248
+ Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
249
+
250
+ Args:
251
+ encoded_inputs:
252
+ Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
253
+ max_length: maximum length of the returned list and optionally padding length (see below).
254
+ Will truncate by taking into account the special tokens.
255
+ padding_strategy: PaddingStrategy to use for padding.
256
+
257
+ - PaddingStrategy.LONGEST Pad to the longest sequence in the batch
258
+ - PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
259
+ - PaddingStrategy.DO_NOT_PAD: Do not pad
260
+ The tokenizer padding sides are defined in self.padding_side:
261
+
262
+ - 'left': pads on the left of the sequences
263
+ - 'right': pads on the right of the sequences
264
+ pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
265
+ This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
266
+ `>= 7.5` (Volta).
267
+ return_attention_mask:
268
+ (optional) Set to False to avoid returning attention mask (default: set to model specifics)
269
+ """
270
+ # Load from model defaults
271
+ assert self.padding_side == "left"
272
+
273
+ required_input = encoded_inputs[self.model_input_names[0]]
274
+ seq_length = len(required_input)
275
+
276
+ if padding_strategy == PaddingStrategy.LONGEST:
277
+ max_length = len(required_input)
278
+
279
+ if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
280
+ max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
281
+
282
+ needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length
283
+
284
+ # Initialize attention mask if not present.
285
+ if "attention_mask" not in encoded_inputs:
286
+ encoded_inputs["attention_mask"] = [1] * seq_length
287
+
288
+ if "position_ids" not in encoded_inputs:
289
+ encoded_inputs["position_ids"] = list(range(seq_length))
290
+
291
+ if needs_to_be_padded:
292
+ difference = max_length - len(required_input)
293
+
294
+ if "attention_mask" in encoded_inputs:
295
+ encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"]
296
+ if "position_ids" in encoded_inputs:
297
+ encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"]
298
+ encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
299
+
300
+ return encoded_inputs
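
Two details of the code above are easy to miss: `build_chat_input` always appends a trailing `<|assistant|>` token so generation starts in the assistant role, and `_pad` pads on the left, prepending zeros to both the attention mask and the position ids. A minimal standalone sketch of that left-padding behaviour (the helper name and values here are illustrative, not part of the file):

```python
# Illustrative sketch of the left-padding rule _pad implements above:
# pad ids, attention-mask zeros and position-id zeros are all prepended.
def left_pad(input_ids, max_length, pad_token_id=0):
    attention_mask = [1] * len(input_ids)
    position_ids = list(range(len(input_ids)))
    diff = max_length - len(input_ids)
    return {
        "input_ids": [pad_token_id] * diff + input_ids,
        "attention_mask": [0] * diff + attention_mask,
        "position_ids": [0] * diff + position_ids,
    }

print(left_pad([5, 6, 7], max_length=6))
# {'input_ids': [0, 0, 0, 5, 6, 7],
#  'attention_mask': [0, 0, 0, 1, 1, 1],
#  'position_ids': [0, 0, 0, 0, 1, 2]}
```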
checkpoint-1000/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2
+ size 1018370
checkpoint-1000/tokenizer_config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "added_tokens_decoder": {
+     "64795": {
+       "content": "<|user|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "64797": {
+       "content": "<|observation|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|user|>",
+     "<|observation|>"
+   ],
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenization_chatglm.ChatGLMTokenizer",
+       null
+     ]
+   },
+   "clean_up_tokenization_spaces": false,
+   "do_lower_case": false,
+   "encode_special_tokens": false,
+   "eos_token": "</s>",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<unk>",
+   "padding_side": "right",
+   "remove_space": false,
+   "split_special_tokens": false,
+   "tokenizer_class": "ChatGLMTokenizer",
+   "unk_token": "<unk>"
+ }
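
Note that `auto_map` above points `AutoTokenizer` at the bundled `tokenization_chatglm.py`, so loading the tokenizer requires `trust_remote_code=True`. A minimal loading sketch; the local checkpoint path is a placeholder, not something fixed by this repo:

```python
# Sketch: load the custom ChatGLMTokenizer shipped with this checkpoint.
# "./checkpoint-1000" is a placeholder for wherever the files live locally.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "./checkpoint-1000",
    trust_remote_code=True,  # auto_map resolves to tokenization_chatglm.ChatGLMTokenizer
)
batch = tokenizer.build_chat_input("你好", history=[], role="user")
print(batch["input_ids"])  # [gMASK] sop <|user|> ... <|assistant|> as token ids
```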
checkpoint-1000/trainer_state.json ADDED
@@ -0,0 +1,1221 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 22.727272727272727,
+   "eval_steps": 500,
+   "global_step": 1000,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {"epoch": 0.11, "learning_rate": 0.001999898043009433, "loss": 4.5094, "step": 5},
+     {"epoch": 0.23, "learning_rate": 0.0019995921928281893, "loss": 3.8047, "step": 10},
+     {"epoch": 0.34, "learning_rate": 0.001999082511823396, "loss": 3.8813, "step": 15},
+     {"epoch": 0.45, "learning_rate": 0.0019983691039261358, "loss": 3.7188, "step": 20},
+     {"epoch": 0.57, "learning_rate": 0.0019974521146102534, "loss": 3.6695, "step": 25},
+     {"epoch": 0.68, "learning_rate": 0.001996331730862691, "loss": 3.7078, "step": 30},
+     {"epoch": 0.8, "learning_rate": 0.0019950081811453595, "loss": 3.6844, "step": 35},
+     {"epoch": 0.91, "learning_rate": 0.0019934817353485504, "loss": 3.6961, "step": 40},
+     {"epoch": 1.02, "learning_rate": 0.0019917527047359027, "loss": 3.5758, "step": 45},
+     {"epoch": 1.14, "learning_rate": 0.001989821441880933, "loss": 3.4102, "step": 50},
+     {"epoch": 1.25, "learning_rate": 0.0019876883405951376, "loss": 3.3984, "step": 55},
+     {"epoch": 1.36, "learning_rate": 0.001985353835847693, "loss": 3.3602, "step": 60},
+     {"epoch": 1.48, "learning_rate": 0.0019828184036767556, "loss": 3.4461, "step": 65},
+     {"epoch": 1.59, "learning_rate": 0.0019800825610923932, "loss": 3.3461, "step": 70},
+     {"epoch": 1.7, "learning_rate": 0.0019771468659711597, "loss": 3.4172, "step": 75},
+     {"epoch": 1.82, "learning_rate": 0.0019740119169423336, "loss": 3.4359, "step": 80},
+     {"epoch": 1.93, "learning_rate": 0.0019706783532658523, "loss": 3.5141, "step": 85},
+     {"epoch": 2.05, "learning_rate": 0.001967146854701957, "loss": 3.2242, "step": 90},
+     {"epoch": 2.16, "learning_rate": 0.0019634181413725788, "loss": 3.0227, "step": 95},
+     {"epoch": 2.27, "learning_rate": 0.0019594929736144974, "loss": 2.8984, "step": 100},
+     {"epoch": 2.39, "learning_rate": 0.001955372151824297, "loss": 3.0781, "step": 105},
+     {"epoch": 2.5, "learning_rate": 0.0019510565162951536, "loss": 3.1203, "step": 110},
+     {"epoch": 2.61, "learning_rate": 0.00194654694704549, "loss": 3.1828, "step": 115},
+     {"epoch": 2.73, "learning_rate": 0.0019418443636395248, "loss": 3.0531, "step": 120},
+     {"epoch": 2.84, "learning_rate": 0.001936949724999762, "loss": 3.1523, "step": 125},
+     {"epoch": 2.95, "learning_rate": 0.0019318640292114524, "loss": 3.1156, "step": 130},
+     {"epoch": 3.07, "learning_rate": 0.0019265883133190713, "loss": 2.7844, "step": 135},
+     {"epoch": 3.18, "learning_rate": 0.0019211236531148502, "loss": 2.6711, "step": 140},
+     {"epoch": 3.3, "learning_rate": 0.0019154711629194062, "loss": 2.6609, "step": 145},
+     {"epoch": 3.41, "learning_rate": 0.0019096319953545184, "loss": 2.7531, "step": 150},
+     {"epoch": 3.52, "learning_rate": 0.0019036073411080917, "loss": 2.7977, "step": 155},
+     {"epoch": 3.64, "learning_rate": 0.0018973984286913585, "loss": 2.7914, "step": 160},
+     {"epoch": 3.75, "learning_rate": 0.0018910065241883678, "loss": 2.8188, "step": 165},
+     {"epoch": 3.86, "learning_rate": 0.0018844329309978143, "loss": 2.8945, "step": 170},
+     {"epoch": 3.98, "learning_rate": 0.0018776789895672556, "loss": 2.8883, "step": 175},
+     {"epoch": 4.09, "learning_rate": 0.0018707460771197773, "loss": 2.4617, "step": 180},
+     {"epoch": 4.2, "learning_rate": 0.001863635607373157, "loss": 2.4633, "step": 185},
+     {"epoch": 4.32, "learning_rate": 0.001856349030251589, "loss": 2.5094, "step": 190},
+     {"epoch": 4.43, "learning_rate": 0.0018488878315900226, "loss": 2.432, "step": 195},
+     {"epoch": 4.55, "learning_rate": 0.0018412535328311812, "loss": 2.5648, "step": 200},
+     {"epoch": 4.66, "learning_rate": 0.0018334476907153176, "loss": 2.4836, "step": 205},
+     {"epoch": 4.77, "learning_rate": 0.001825471896962774, "loss": 2.6617, "step": 210},
+     {"epoch": 4.89, "learning_rate": 0.0018173277779494068, "loss": 2.6734, "step": 215},
+     {"epoch": 5.0, "learning_rate": 0.0018090169943749475, "loss": 2.6742, "step": 220},
+     {"epoch": 5.11, "learning_rate": 0.0018005412409243604, "loss": 2.1379, "step": 225},
+     {"epoch": 5.23, "learning_rate": 0.0017919022459222751, "loss": 2.1508, "step": 230},
+     {"epoch": 5.34, "learning_rate": 0.0017831017709805555, "loss": 2.2582, "step": 235},
+     {"epoch": 5.45, "learning_rate": 0.0017741416106390826, "loss": 2.2367, "step": 240},
+     {"epoch": 5.57, "learning_rate": 0.0017650235919998232, "loss": 2.325, "step": 245},
+     {"epoch": 5.68, "learning_rate": 0.0017557495743542584, "loss": 2.2703, "step": 250},
+     {"epoch": 5.8, "learning_rate": 0.0017463214488042471, "loss": 2.3703, "step": 255},
+     {"epoch": 5.91, "learning_rate": 0.001736741137876405, "loss": 2.4648, "step": 260},
+     {"epoch": 6.02, "learning_rate": 0.0017270105951300739, "loss": 2.2734, "step": 265},
+     {"epoch": 6.14, "learning_rate": 0.0017171318047589637, "loss": 1.9898, "step": 270},
+     {"epoch": 6.25, "learning_rate": 0.0017071067811865474, "loss": 1.9816, "step": 275},
+     {"epoch": 6.36, "learning_rate": 0.0016969375686552938, "loss": 1.9648, "step": 280},
+     {"epoch": 6.48, "learning_rate": 0.0016866262408098134, "loss": 2.1672, "step": 285},
+     {"epoch": 6.59, "learning_rate": 0.0016761749002740195, "loss": 2.0074, "step": 290},
+     {"epoch": 6.7, "learning_rate": 0.0016655856782223683, "loss": 2.1598, "step": 295},
+     {"epoch": 6.82, "learning_rate": 0.0016548607339452852, "loss": 2.0996, "step": 300},
+     {"epoch": 6.93, "learning_rate": 0.0016440022544088554, "loss": 2.1434, "step": 305},
+     {"epoch": 7.05, "learning_rate": 0.0016330124538088703, "loss": 2.0699, "step": 310},
+     {"epoch": 7.16, "learning_rate": 0.0016218935731193223, "loss": 1.7312, "step": 315},
+     {"epoch": 7.27, "learning_rate": 0.0016106478796354383, "loss": 1.7799, "step": 320},
+     {"epoch": 7.39, "learning_rate": 0.0015992776665113468, "loss": 1.7008, "step": 325},
+     {"epoch": 7.5, "learning_rate": 0.0015877852522924731, "loss": 1.8969, "step": 330},
+     {"epoch": 7.61, "learning_rate": 0.0015761729804427528, "loss": 1.8156, "step": 335},
+     {"epoch": 7.73, "learning_rate": 0.0015644432188667695, "loss": 1.9336, "step": 340},
+     {"epoch": 7.84, "learning_rate": 0.0015525983594269026, "loss": 1.9918, "step": 345},
+     {"epoch": 7.95, "learning_rate": 0.0015406408174555976, "loss": 2.0055, "step": 350},
+     {"epoch": 8.07, "learning_rate": 0.0015285730312628418, "loss": 1.7168, "step": 355},
+     {"epoch": 8.18, "learning_rate": 0.001516397461638962, "loss": 1.5531, "step": 360},
+     {"epoch": 8.3, "learning_rate": 0.001504116591352832, "loss": 1.5922, "step": 365},
+     {"epoch": 8.41, "learning_rate": 0.001491732924645604, "loss": 1.618, "step": 370},
+     {"epoch": 8.52, "learning_rate": 0.0014792489867200569, "loss": 1.6738, "step": 375},
+     {"epoch": 8.64, "learning_rate": 0.0014666673232256737, "loss": 1.7461, "step": 380},
+     {"epoch": 8.75, "learning_rate": 0.0014539904997395467, "loss": 1.6746, "step": 385},
+     {"epoch": 8.86, "learning_rate": 0.0014412211012432212, "loss": 1.7711, "step": 390},
+     {"epoch": 8.98, "learning_rate": 0.0014283617315955814, "loss": 1.8387, "step": 395},
+     {"epoch": 9.09, "learning_rate": 0.0014154150130018866, "loss": 1.475, "step": 400},
+     {"epoch": 9.2, "learning_rate": 0.001402383585479068, "loss": 1.4523, "step": 405},
+     {"epoch": 9.32, "learning_rate": 0.0013892701063173917, "loss": 1.4812, "step": 410},
+     {"epoch": 9.43, "learning_rate": 0.0013760772495385997, "loss": 1.525, "step": 415},
+     {"epoch": 9.55, "learning_rate": 0.001362807705350641, "loss": 1.398, "step": 420},
+     {"epoch": 9.66, "learning_rate": 0.0013494641795990985, "loss": 1.4477, "step": 425},
+     {"epoch": 9.77, "learning_rate": 0.00133604939321543, "loss": 1.5801, "step": 430},
+     {"epoch": 9.89, "learning_rate": 0.0013225660816621341, "loss": 1.6422, "step": 435},
+     {"epoch": 10.0, "learning_rate": 0.0013090169943749475, "loss": 1.5535, "step": 440},
+     {"epoch": 10.11, "learning_rate": 0.0012954048942022001, "loss": 1.2324, "step": 445},
+     {"epoch": 10.23, "learning_rate": 0.0012817325568414298, "loss": 1.2613, "step": 450},
+     {"epoch": 10.34, "learning_rate": 0.001268002770273379, "loss": 1.3293, "step": 455},
+     {"epoch": 10.45, "learning_rate": 0.0012542183341934872, "loss": 1.2852, "step": 460},
+     {"epoch": 10.57, "learning_rate": 0.0012403820594409924, "loss": 1.3295, "step": 465},
+     {"epoch": 10.68, "learning_rate": 0.0012264967674257645, "loss": 1.3287, "step": 470},
+     {"epoch": 10.8, "learning_rate": 0.0012125652895529767, "loss": 1.3566, "step": 475},
+     {"epoch": 10.91, "learning_rate": 0.0011985904666457455, "loss": 1.4414, "step": 480},
+     {"epoch": 11.02, "learning_rate": 0.0011845751483658454, "loss": 1.3695, "step": 485},
+     {"epoch": 11.14, "learning_rate": 0.0011705221926326238, "loss": 1.1363, "step": 490},
+     {"epoch": 11.25, "learning_rate": 0.001156434465040231, "loss": 1.1354, "step": 495},
+     {"epoch": 11.36, "learning_rate": 0.0011423148382732854, "loss": 1.0725, "step": 500},
+     {"epoch": 11.48, "learning_rate": 0.001128166191521093, "loss": 1.1754, "step": 505},
+     {"epoch": 11.59, "learning_rate": 0.0011139914098905405, "loss": 1.1848, "step": 510},
+     {"epoch": 11.7, "learning_rate": 0.0010997933838177826, "loss": 1.2354, "step": 515},
+     {"epoch": 11.82, "learning_rate": 0.0010855750084788399, "loss": 1.1984, "step": 520},
+     {"epoch": 11.93, "learning_rate": 0.0010713391831992322, "loss": 1.2666, "step": 525},
+     {"epoch": 12.05, "learning_rate": 0.001057088810862768, "loss": 1.1408, "step": 530},
+     {"epoch": 12.16, "learning_rate": 0.0010428267973196027, "loss": 0.9385, "step": 535},
+     {"epoch": 12.27, "learning_rate": 0.0010285560507936962, "loss": 1.0158, "step": 540},
+     {"epoch": 12.39, "learning_rate": 0.0010142794812897874, "loss": 0.9936, "step": 545},
+     {"epoch": 12.5, "learning_rate": 0.001, "loss": 0.9891, "step": 550},
+     {"epoch": 12.61, "learning_rate": 0.000985720518710213, "loss": 1.0684, "step": 555},
+     {"epoch": 12.73, "learning_rate": 0.0009714439492063038, "loss": 1.076, "step": 560},
+     {"epoch": 12.84, "learning_rate": 0.0009571732026803976, "loss": 1.0609, "step": 565},
+     {"epoch": 12.95, "learning_rate": 0.000942911189137232, "loss": 1.1297, "step": 570},
+     {"epoch": 13.07, "learning_rate": 0.0009286608168007677, "loss": 0.9342, "step": 575},
+     {"epoch": 13.18, "learning_rate": 0.0009144249915211606, "loss": 0.8511, "step": 580},
+     {"epoch": 13.3, "learning_rate": 0.0009002066161822172, "loss": 0.8336, "step": 585},
+     {"epoch": 13.41, "learning_rate": 0.0008860085901094594, "loss": 0.8652, "step": 590},
+     {"epoch": 13.52, "learning_rate": 0.0008718338084789072, "loss": 0.9744, "step": 595},
+     {"epoch": 13.64, "learning_rate": 0.000857685161726715, "loss": 0.9006, "step": 600},
+     {"epoch": 13.75, "learning_rate": 0.000843565534959769, "loss": 0.9619, "step": 605},
+     {"epoch": 13.86, "learning_rate": 0.0008294778073673762, "loss": 0.9123, "step": 610},
+     {"epoch": 13.98, "learning_rate": 0.0008154248516341547, "loss": 0.9959, "step": 615},
+     {"epoch": 14.09, "learning_rate": 0.0008014095333542549, "loss": 0.7503, "step": 620},
+     {"epoch": 14.2, "learning_rate": 0.0007874347104470233, "loss": 0.7357, "step": 625},
+     {"epoch": 14.32, "learning_rate": 0.0007735032325742355, "loss": 0.7477, "step": 630},
+     {"epoch": 14.43, "learning_rate": 0.0007596179405590076, "loss": 0.8088, "step": 635},
+     {"epoch": 14.55, "learning_rate": 0.0007457816658065133, "loss": 0.7652, "step": 640},
+     {"epoch": 14.66, "learning_rate": 0.0007319972297266214, "loss": 0.7847, "step": 645},
+     {"epoch": 14.77, "learning_rate": 0.0007182674431585703, "loss": 0.7984, "step": 650},
+     {"epoch": 14.89, "learning_rate": 0.0007045951057978, "loss": 0.8732, "step": 655},
+     {"epoch": 15.0, "learning_rate": 0.0006909830056250527, "loss": 0.8258, "step": 660},
+     {"epoch": 15.11, "learning_rate": 0.0006774339183378663, "loss": 0.6311, "step": 665},
+     {"epoch": 15.23, "learning_rate": 0.0006639506067845697, "loss": 0.6543, "step": 670},
+     {"epoch": 15.34, "learning_rate": 0.0006505358204009018, "loss": 0.6421, "step": 675},
+     {"epoch": 15.45, "learning_rate": 0.0006371922946493591, "loss": 0.6937, "step": 680},
+     {"epoch": 15.57, "learning_rate": 0.0006239227504614003, "loss": 0.6887, "step": 685},
+     {"epoch": 15.68, "learning_rate": 0.0006107298936826086, "loss": 0.7097, "step": 690},
+     {"epoch": 15.8, "learning_rate": 0.0005976164145209322, "loss": 0.6778, "step": 695},
+     {"epoch": 15.91, "learning_rate": 0.0005845849869981136, "loss": 0.7124, "step": 700},
+     {"epoch": 16.02, "learning_rate": 0.000571638268404419, "loss": 0.7053, "step": 705},
+     {"epoch": 16.14, "learning_rate": 0.0005587788987567784, "loss": 0.5863, "step": 710},
+     {"epoch": 16.25, "learning_rate": 0.0005460095002604533, "loss": 0.5588, "step": 715},
+     {"epoch": 16.36, "learning_rate": 0.0005333326767743263, "loss": 0.5363, "step": 720},
+     {"epoch": 16.48, "learning_rate": 0.0005207510132799435, "loss": 0.6137, "step": 725},
+     {"epoch": 16.59, "learning_rate": 0.0005082670753543961, "loss": 0.5606, "step": 730},
+     {"epoch": 16.7, "learning_rate": 0.0004958834086471683, "loss": 0.629, "step": 735},
+     {"epoch": 16.82, "learning_rate": 0.00048360253836103817, "loss": 0.5754, "step": 740},
+     {"epoch": 16.93, "learning_rate": 0.0004714269687371581, "loss": 0.6239, "step": 745},
+     {"epoch": 17.05, "learning_rate": 0.0004593591825444028, "loss": 0.5807, "step": 750},
+     {"epoch": 17.16, "learning_rate": 0.0004474016405730973, "loss": 0.465, "step": 755},
+     {"epoch": 17.27, "learning_rate": 0.00043555678113323104, "loss": 0.4871, "step": 760},
+     {"epoch": 17.39, "learning_rate": 0.00042382701955724725, "loss": 0.4623, "step": 765},
+     {"epoch": 17.5, "learning_rate": 0.00041221474770752696, "loss": 0.5059, "step": 770},
+     {"epoch": 17.61, "learning_rate": 0.00040072233348865304, "loss": 0.5021, "step": 775},
+     {"epoch": 17.73, "learning_rate": 0.0003893521203645618, "loss": 0.5138, "step": 780},
+     {"epoch": 17.84, "learning_rate": 0.00037810642688067796, "loss": 0.5212, "step": 785},
+     {"epoch": 17.95, "learning_rate": 0.00036698754619112975, "loss": 0.5611, "step": 790},
+     {"epoch": 18.07, "learning_rate": 0.00035599774559114475, "loss": 0.4956, "step": 795},
+     {"epoch": 18.18, "learning_rate": 0.000345139266054715, "loss": 0.4243, "step": 800},
+     {"epoch": 18.3, "learning_rate": 0.0003344143217776319, "loss": 0.4391, "step": 805},
+     {"epoch": 18.41, "learning_rate": 0.00032382509972598086, "loss": 0.4627, "step": 810},
+     {"epoch": 18.52, "learning_rate": 0.0003133737591901864, "loss": 0.4208, "step": 815},
+     {"epoch": 18.64, "learning_rate": 0.0003030624313447067, "loss": 0.45, "step": 820},
+     {"epoch": 18.75, "learning_rate": 0.00029289321881345256, "loss": 0.44, "step": 825},
+     {"epoch": 18.86, "learning_rate": 0.0002828681952410366, "loss": 0.4451, "step": 830},
+     {"epoch": 18.98, "learning_rate": 0.0002729894048699265, "loss": 0.4494, "step": 835},
+     {"epoch": 19.09, "learning_rate": 0.00026325886212359495, "loss": 0.3839, "step": 840},
+     {"epoch": 19.2, "learning_rate": 0.0002536785511957531, "loss": 0.3728, "step": 845},
+     {"epoch": 19.32, "learning_rate": 0.00024425042564574185, "loss": 0.4126, "step": 850},
+     {"epoch": 19.43, "learning_rate": 0.00023497640800017682, "loss": 0.4183, "step": 855},
+     {"epoch": 19.55, "learning_rate": 0.0002258583893609175, "loss": 0.3778, "step": 860},
+     {"epoch": 19.66, "learning_rate": 0.00021689822901944456, "loss": 0.3758, "step": 865},
+     {"epoch": 19.77, "learning_rate": 0.000208097754077725, "loss": 0.4034, "step": 870},
+     {"epoch": 19.89, "learning_rate": 0.0001994587590756397, "loss": 0.4085, "step": 875},
+     {"epoch": 20.0, "learning_rate": 0.00019098300562505265, "loss": 0.3673, "step": 880},
+     {"epoch": 20.11, "learning_rate": 0.0001826722220505931, "loss": 0.363, "step": 885},
+     {"epoch": 20.23, "learning_rate": 0.000174528103037226, "loss": 0.3707, "step": 890},
+     {"epoch": 20.34, "learning_rate": 0.00016655230928468257, "loss": 0.369, "step": 895},
+     {"epoch": 20.45, "learning_rate": 0.00015874646716881869, "loss": 0.3528, "step": 900},
+     {"epoch": 20.57, "learning_rate": 0.00015111216840997744, "loss": 0.3581, "step": 905},
+     {"epoch": 20.68, "learning_rate": 0.00014365096974841107, "loss": 0.3466, "step": 910},
+     {"epoch": 20.8, "learning_rate": 0.00013636439262684297, "loss": 0.3274, "step": 915},
+     {"epoch": 20.91, "learning_rate": 0.00012925392288022297, "loss": 0.3401, "step": 920},
+     {"epoch": 21.02, "learning_rate": 0.00012232101043274435, "loss": 0.3435, "step": 925},
+     {"epoch": 21.14, "learning_rate": 0.00011556706900218572, "loss": 0.2972, "step": 930},
+     {"epoch": 21.25, "learning_rate": 0.00010899347581163222, "loss": 0.3153, "step": 935},
+     {"epoch": 21.36, "learning_rate": 0.00010260157130864178, "loss": 0.3315, "step": 940},
+     {"epoch": 21.48, "learning_rate": 9.639265889190829e-05, "loss": 0.3264, "step": 945},
+     {"epoch": 21.59, "learning_rate": 9.036800464548156e-05, "loss": 0.3427, "step": 950},
+     {"epoch": 21.7, "learning_rate": 8.4528837080594e-05, "loss": 0.3415, "step": 955},
+     {"epoch": 21.82, "learning_rate": 7.887634688515e-05, "loss": 0.323, "step": 960},
+     {"epoch": 21.93, "learning_rate": 7.341168668092857e-05, "loss": 0.2961, "step": 965},
+     {"epoch": 22.05, "learning_rate": 6.813597078854772e-05, "loss": 0.3276, "step": 970},
+     {"epoch": 22.16, "learning_rate": 6.305027500023842e-05, "loss": 0.3045, "step": 975},
+     {"epoch": 22.27, "learning_rate": 5.8155636360475384e-05, "loss": 0.3167, "step": 980},
+     {"epoch": 22.39, "learning_rate": 5.345305295450997e-05, "loss": 0.319, "step": 985},
+     {"epoch": 22.5, "learning_rate": 4.894348370484647e-05, "loss": 0.2852, "step": 990},
+     {"epoch": 22.61, "learning_rate": 4.4627848175703315e-05, "loss": 0.3034, "step": 995},
+     {"epoch": 22.73, "learning_rate": 4.050702638550274e-05, "loss": 0.2845, "step": 1000}
+   ],
+   "logging_steps": 5,
+   "max_steps": 1100,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 25,
+   "save_steps": 100,
+   "total_flos": 5.092929071525069e+17,
+   "train_batch_size": 4,
+   "trial_name": null,
+   "trial_params": null
+ }
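
Read as a curve, the log above shows the training loss falling from about 4.5 at step 5 to about 0.28 at step 1000 under a cosine-shaped learning-rate decay (2e-3 down to ~4e-5), logged every 5 steps with checkpoints every 100. A small sketch for extracting that curve from any of these trainer_state.json files (the path is a placeholder for a local copy):

```python
# Sketch: pull (step, loss) pairs out of a Trainer-produced trainer_state.json.
import json

with open("checkpoint-1000/trainer_state.json") as f:
    state = json.load(f)

curve = [(e["step"], e["loss"]) for e in state["log_history"] if "loss" in e]
print(curve[0], curve[-1])  # (5, 4.5094) (1000, 0.2845)
```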
checkpoint-1000/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c
+ size 4920
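
The binary artifacts in this commit (tokenizer.model, adapter weights, optimizer and scheduler state, training_args.bin) are stored as Git LFS pointers like the one above: a spec line, a sha256 oid, and a byte size. A short sketch for verifying a downloaded file against its pointer; the path is a placeholder:

```python
# Sketch: recompute the Git LFS oid (sha256 of the file contents) and
# compare it to the pointer shown above.
import hashlib

def lfs_oid(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c"
assert lfs_oid("checkpoint-1000/training_args.bin") == expected
```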
checkpoint-1100/README.md ADDED
@@ -0,0 +1,204 @@
+ (Auto-generated PEFT model-card template, identical to the README.md shipped with the other checkpoints: base_model /root/chatglm3-6b, every descriptive field left at "[More Information Needed]", framework version PEFT 0.7.1.)
checkpoint-1100/adapter_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/root/chatglm3-6b",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 64.0,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "query_key_value"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
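
The adapter config describes a rank-32 LoRA (lora_alpha 64, dropout 0.1) applied only to the `query_key_value` projections of the chatglm3-6b base model. A minimal sketch for attaching the adapter with the `peft` library; both paths are placeholders for local copies:

```python
# Sketch: load the base model and attach this LoRA adapter with peft.
# The config records /root/chatglm3-6b as the base model path.
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained("/root/chatglm3-6b", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "./checkpoint-1100")
model.eval()  # inference_mode: true in the config above
```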
checkpoint-1100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bc2583490c7dc47bededcc0eaaa25d9aafe96d7680d7ecf5ec077c85de59604
+ size 31204248
checkpoint-1100/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71abfda018effb690a77e01b7df48e60cb730b12599e5ad6fdc26845b844760a
+ size 62437882
checkpoint-1100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7866b8fc933c6248bae764638e49b94ebe1f35463171c6986de52c6a81632428
+ size 14244
checkpoint-1100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:834bea796770b94431ea03d70df0b96b826ab2cbdccf7ff1204aca5c40cb9ee7
+ size 1064
checkpoint-1100/special_tokens_map.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "additional_special_tokens": [
+     {
+       "content": "<|user|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|observation|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ]
+ }
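
`<|user|>` and `<|observation|>` are the two markers registered as additional special tokens here (ids 64795 and 64797 in tokenizer_config.json), which keeps them atomic rather than split into SentencePiece pieces. A quick illustrative check, with a placeholder path:

```python
# Sketch: the role markers map to single special-token ids.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./checkpoint-1100", trust_remote_code=True)
print(tok.convert_tokens_to_ids("<|user|>"))         # 64795 per the config
print(tok.convert_tokens_to_ids("<|observation|>"))  # 64797 per the config
```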
checkpoint-1100/tokenization_chatglm.py ADDED
@@ -0,0 +1,300 @@
+ (Identical copy of tokenization_chatglm.py as shipped with checkpoint-1000 above: the SPTokenizer SentencePiece wrapper plus ChatGLMTokenizer with build_chat_input, build_inputs_with_special_tokens, and left-side _pad.)
checkpoint-1100/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2
+ size 1018370
checkpoint-1100/tokenizer_config.json ADDED
@@ -0,0 +1,41 @@
+ (Identical to checkpoint-1000/tokenizer_config.json above.)
checkpoint-1100/trainer_state.json ADDED
@@ -0,0 +1,1341 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 25.0,
5
+ "eval_steps": 500,
6
+ "global_step": 1100,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
+     ... 200 log entries for steps 5-1000 omitted here: they are identical to the
+     log_history of checkpoint-1000/trainer_state.json shown earlier in this diff
+     (training loss falls from 4.5094 at step 5 to 0.2845 at step 1000) ...
+     {"epoch": 22.84, "learning_rate": 3.658185862742103e-05, "loss": 0.3136, "step": 1005},
+     {"epoch": 22.95, "learning_rate": 3.285314529804295e-05, "loss": 0.3187, "step": 1010},
+     {"epoch": 23.07, "learning_rate": 2.93216467341475e-05, "loss": 0.2907, "step": 1015},
+     {"epoch": 23.18, "learning_rate": 2.5988083057666535e-05, "loss": 0.2955, "step": 1020},
+     {"epoch": 23.3, "learning_rate": 2.2853134028840594e-05, "loss": 0.2785, "step": 1025},
+     {"epoch": 23.41, "learning_rate": 1.9917438907606554e-05, "loss": 0.3369, "step": 1030},
+     {"epoch": 23.52, "learning_rate": 1.7181596323244453e-05, "loss": 0.2837, "step": 1035},
+     {"epoch": 23.64, "learning_rate": 1.4646164152307017e-05, "loss": 0.3002, "step": 1040},
+     {"epoch": 23.75, "learning_rate": 1.231165940486234e-05, "loss": 0.3062, "step": 1045},
+     {"epoch": 23.86, "learning_rate": 1.0178558119067316e-05, "loss": 0.2859, "step": 1050},
+     {"epoch": 23.98, "learning_rate": 8.247295264097288e-06, "loss": 0.284, "step": 1055},
+     {"epoch": 24.09, "learning_rate": 6.518264651449779e-06, "loss": 0.2607, "step": 1060},
+     {"epoch": 24.2, "learning_rate": 4.991818854640395e-06, "loss": 0.3164, "step": 1065},
+     {"epoch": 24.32, "learning_rate": 3.6682691373086663e-06, "loss": 0.2597, "step": 1070},
+     {"epoch": 24.43, "learning_rate": 2.5478853897464847e-06, "loss": 0.2907, "step": 1075},
+     {"epoch": 24.55, "learning_rate": 1.630896073864352e-06, "loss": 0.3033, "step": 1080},
+     {"epoch": 24.66, "learning_rate": 9.174881766043087e-07, "loss": 0.3089, "step": 1085},
+     {"epoch": 24.77, "learning_rate": 4.078071718107701e-07, "loss": 0.2964, "step": 1090},
+     {"epoch": 24.89, "learning_rate": 1.0195699056669839e-07, "loss": 0.2995, "step": 1095},
+     {"epoch": 25.0, "learning_rate": 0.0, "loss": 0.2936, "step": 1100}
+   ],
+   "logging_steps": 5,
+   "max_steps": 1100,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 25,
+   "save_steps": 100,
+   "total_flos": 5.602696856046797e+17,
+   "train_batch_size": 4,
+   "trial_name": null,
+   "trial_params": null
+ }
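
The trainer state above is the standard `transformers.Trainer` log: a cosine learning-rate decay from 2e-3 down to 0 over 1,100 steps (25 epochs, logged every 5 steps, checkpointed every 100), with the training loss falling from 4.51 to 0.29. A minimal sketch for inspecting it offline; the path assumes a local clone of this repository:

```python
import json

# trainer_state.json is plain JSON (not an LFS pointer), so it can be read
# straight out of the checkpoint directory.
with open("checkpoint-1100/trainer_state.json") as f:
    state = json.load(f)

# Each log_history record carries epoch, learning_rate, loss and step.
for rec in state["log_history"]:
    print(f"step {rec['step']:>4}  epoch {rec['epoch']:>5}  "
          f"lr {rec['learning_rate']:.2e}  loss {rec['loss']:.4f}")
```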
checkpoint-1100/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c
+ size 4920
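
The binary files in this commit are stored as Git LFS pointers, so only a hash and a size appear in the diff. `training_args.bin` itself is the pickled `transformers.TrainingArguments` for the run; a hedged sketch for inspecting it after an LFS checkout (on PyTorch >= 2.6, `weights_only=False` is required to unpickle arbitrary objects):

```python
import torch

# Loads the pickled TrainingArguments that Trainer saves alongside each checkpoint.
args = torch.load("checkpoint-1100/training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```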
checkpoint-200/README.md ADDED
@@ -0,0 +1,204 @@
*(204-line auto-generated PEFT model-card template, identical to the checkpoint READMEs shown earlier in this diff: every field reads "[More Information Needed]", and the only concrete values are `library_name: peft`, `base_model: /root/chatglm3-6b`, and "Framework versions: PEFT 0.7.1".)*
checkpoint-200/adapter_config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "/root/chatglm3-6b",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 64.0,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "query_key_value"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
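
Per this config, the adapter is a LoRA of rank 32 (alpha 64.0, dropout 0.1) on ChatGLM3-6B's fused `query_key_value` projection. A minimal loading sketch, assuming the base weights are available (the config's `/root/chatglm3-6b` is the training machine's local path; substitute your own copy or the `THUDM/chatglm3-6b` Hub ID) and that the adapter sits in `checkpoint-200/`:

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_path = "THUDM/chatglm3-6b"  # or a local copy of the base weights
base = AutoModel.from_pretrained(base_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)

# Attach the rank-32 LoRA adapter from this checkpoint directory.
model = PeftModel.from_pretrained(base, "checkpoint-200")

# PeftModel forwards unknown attributes to the wrapped model, so ChatGLM3's
# chat() helper remains reachable through the adapter wrapper.
response, _ = model.chat(tokenizer, "Hello", history=[])
print(response)
```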
checkpoint-200/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9079a8f13b0b663beb8af4a69f38304ffb47f535efa9d4fc2f28235905d33d6
+ size 31204248
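
The ~31 MB adapter file holds only the LoRA factors, not the 6B base weights. A sketch for inspecting the stored tensors with the `safetensors` library, assuming an LFS checkout:

```python
from safetensors.torch import load_file

# Each adapted module contributes a low-rank pair:
# lora_A (r x in_features) and lora_B (out_features x r).
weights = load_file("checkpoint-200/adapter_model.safetensors")
for name, tensor in sorted(weights.items())[:4]:
    print(name, tuple(tensor.shape))
```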
checkpoint-200/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3c20e12a6fe7711738ea34dd0ceeb02446ef057730b074a3f796920de8f458e
+ size 62437882
checkpoint-200/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:754a649249169df5413cd1afec214b0e512a562b2d537b50c7822a329e86ab92
+ size 14244
checkpoint-200/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca49ceb5308a589ec72593fdfc170ba0798f7206328f597dc676a71ad4f62985
+ size 1064
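
`optimizer.pt`, `rng_state.pth` and `scheduler.pt` are what allow `trainer.train(resume_from_checkpoint="checkpoint-200")` to continue the run exactly: optimizer moments, the scheduler's position in the cosine decay, and the RNG streams. They can also be inspected directly; a sketch assuming an LFS checkout (`weights_only=False` again, since these are pickled state dicts):

```python
import torch

optim = torch.load("checkpoint-200/optimizer.pt", map_location="cpu", weights_only=False)
sched = torch.load("checkpoint-200/scheduler.pt", map_location="cpu", weights_only=False)

print(list(optim.keys()))  # optimizer state dict: ['state', 'param_groups']
print(sched)               # scheduler state (last_epoch, _step_count, ...)
```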
checkpoint-200/special_tokens_map.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "additional_special_tokens": [
+     {
+       "content": "<|user|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<|observation|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ]
+ }
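
This map registers ChatGLM3's two conversation-role markers as additional special tokens, so each is treated as one indivisible id rather than being split by SentencePiece. A quick check, assuming a local clone (`trust_remote_code` loads the custom tokenizer class shipped in this directory):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("checkpoint-200", trust_remote_code=True)
print(tok.additional_special_tokens)           # ['<|user|>', '<|observation|>']
print(tok.convert_tokens_to_ids("<|user|>"))   # a single id
```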
checkpoint-200/tokenization_chatglm.py ADDED
@@ -0,0 +1,300 @@
+ import json
+ import os
+ import re
+ from typing import List, Optional, Union, Dict
+ from sentencepiece import SentencePieceProcessor
+ from transformers import PreTrainedTokenizer
+ from transformers.utils import logging, PaddingStrategy
+ from transformers.tokenization_utils_base import EncodedInput, BatchEncoding
+
+
+ class SPTokenizer:
+     def __init__(self, model_path: str):
+         # reload tokenizer
+         assert os.path.isfile(model_path), model_path
+         self.sp_model = SentencePieceProcessor(model_file=model_path)
+
+         # BOS / EOS token IDs
+         self.n_words: int = self.sp_model.vocab_size()
+         self.bos_id: int = self.sp_model.bos_id()
+         self.eos_id: int = self.sp_model.eos_id()
+         self.pad_id: int = self.sp_model.unk_id()
+         assert self.sp_model.vocab_size() == self.sp_model.get_piece_size()
+
+         role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"]
+         special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens
+         self.special_tokens = {}
+         self.index_special_tokens = {}
+         for token in special_tokens:
+             self.special_tokens[token] = self.n_words
+             self.index_special_tokens[self.n_words] = token
+             self.n_words += 1
+         self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens])
+
+     def tokenize(self, s: str, encode_special_tokens=False):
+         if encode_special_tokens:
+             last_index = 0
+             t = []
+             for match in re.finditer(self.role_special_token_expression, s):
+                 if last_index < match.start():
+                     t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()]))
+                 t.append(s[match.start():match.end()])
+                 last_index = match.end()
+             if last_index < len(s):
+                 t.extend(self.sp_model.EncodeAsPieces(s[last_index:]))
+             return t
+         else:
+             return self.sp_model.EncodeAsPieces(s)
+
+     def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]:
+         assert type(s) is str
+         t = self.sp_model.encode(s)
+         if bos:
+             t = [self.bos_id] + t
+         if eos:
+             t = t + [self.eos_id]
+         return t
+
+     def decode(self, t: List[int]) -> str:
+         text, buffer = "", []
+         for token in t:
+             if token in self.index_special_tokens:
+                 if buffer:
+                     text += self.sp_model.decode(buffer)
+                     buffer = []
+                 text += self.index_special_tokens[token]
+             else:
+                 buffer.append(token)
+         if buffer:
+             text += self.sp_model.decode(buffer)
+         return text
+
+     def decode_tokens(self, tokens: List[str]) -> str:
+         text = self.sp_model.DecodePieces(tokens)
+         return text
+
+     def convert_token_to_id(self, token):
+         """Converts a token (str) to an id using the vocab."""
+         if token in self.special_tokens:
+             return self.special_tokens[token]
+         return self.sp_model.PieceToId(token)
+
+     def convert_id_to_token(self, index):
+         """Converts an index (integer) to a token (str) using the vocab."""
+         if index in self.index_special_tokens:
+             return self.index_special_tokens[index]
+         if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size():
+             return ""
+         return self.sp_model.IdToPiece(index)
+
+
+ class ChatGLMTokenizer(PreTrainedTokenizer):
+     vocab_files_names = {"vocab_file": "tokenizer.model"}
+
+     model_input_names = ["input_ids", "attention_mask", "position_ids"]
+
+     def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False,
+                  **kwargs):
+         self.name = "GLMTokenizer"
+
+         self.vocab_file = vocab_file
+         self.tokenizer = SPTokenizer(vocab_file)
+         self.special_tokens = {
+             "<bos>": self.tokenizer.bos_id,
+             "<eos>": self.tokenizer.eos_id,
+             "<pad>": self.tokenizer.pad_id
+         }
+         self.encode_special_tokens = encode_special_tokens
+         super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+                          encode_special_tokens=encode_special_tokens,
+                          **kwargs)
+
+     def get_command(self, token):
+         if token in self.special_tokens:
+             return self.special_tokens[token]
+         assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}"
+         return self.tokenizer.special_tokens[token]
+
+     @property
+     def unk_token(self) -> str:
+         return "<unk>"
+
+     @property
+     def pad_token(self) -> str:
+         return "<unk>"
+
+     @property
+     def pad_token_id(self):
+         return self.get_command("<pad>")
+
+     @property
+     def eos_token(self) -> str:
+         return "</s>"
+
+     @property
+     def eos_token_id(self):
+         return self.get_command("<eos>")
+
+     @property
+     def vocab_size(self):
+         return self.tokenizer.n_words
+
+     def get_vocab(self):
+         """Returns vocab as a dict."""
+         vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def _tokenize(self, text, **kwargs):
+         return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) to an id using the vocab."""
+         return self.tokenizer.convert_token_to_id(token)
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) to a token (str) using the vocab."""
+         return self.tokenizer.convert_id_to_token(index)
+
+     def convert_tokens_to_string(self, tokens: List[str]) -> str:
+         return self.tokenizer.decode_tokens(tokens)
+
+     def save_vocabulary(self, save_directory, filename_prefix=None):
+         """
+         Save the vocabulary and special tokens file to a directory.
+
+         Args:
+             save_directory (`str`):
+                 The directory in which to save the vocabulary.
+             filename_prefix (`str`, *optional*):
+                 An optional prefix to add to the names of the saved files.
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         if os.path.isdir(save_directory):
+             vocab_file = os.path.join(
+                 save_directory, self.vocab_files_names["vocab_file"]
+             )
+         else:
+             vocab_file = save_directory
+
+         with open(self.vocab_file, 'rb') as fin:
+             proto_str = fin.read()
+
+         with open(vocab_file, "wb") as writer:
+             writer.write(proto_str)
+
+         return (vocab_file,)
+
+     def get_prefix_tokens(self):
+         prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")]
+         return prefix_tokens
+
+     def build_single_message(self, role, metadata, message):
+         assert role in ["system", "user", "assistant", "observation"], role
+         role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n")
+         message_tokens = self.tokenizer.encode(message)
+         tokens = role_tokens + message_tokens
+         return tokens
+
+     def build_chat_input(self, query, history=None, role="user"):
+         if history is None:
+             history = []
+         input_ids = []
+         for item in history:
+             content = item["content"]
+             if item["role"] == "system" and "tools" in item:
+                 content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False)
+             input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content))
+         input_ids.extend(self.build_single_message(role, "", query))
+         # End with the assistant tag so generation continues in the assistant role.
+         input_ids.extend([self.get_command("<|assistant|>")])
+         return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True)
+
+     def build_inputs_with_special_tokens(
+             self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+     ) -> List[int]:
+         """
+         Build model inputs from a sequence or a pair of sequences by concatenating and
+         adding special tokens. A ChatGLM sequence has the following format:
+
+         - single sequence: `[gMASK] sop X`
+         - pair of sequences: `[gMASK] sop A B <eos>`
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs to which the special tokens will be added.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+
+         Returns:
+             `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
+         """
+         prefix_tokens = self.get_prefix_tokens()
+         token_ids_0 = prefix_tokens + token_ids_0
+         if token_ids_1 is not None:
+             token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")]
+         return token_ids_0
+
+     def _pad(
+             self,
+             encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
+             max_length: Optional[int] = None,
+             padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
+             pad_to_multiple_of: Optional[int] = None,
+             return_attention_mask: Optional[bool] = None,
+     ) -> dict:
+         """
+         Pad encoded inputs (on left/right and up to predefined length or max length in the batch).
+
+         Args:
+             encoded_inputs:
+                 Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
+             max_length: maximum length of the returned list and optionally padding length (see below).
+                 Will truncate by taking into account the special tokens.
+             padding_strategy: PaddingStrategy to use for padding.
+
+                 - PaddingStrategy.LONGEST: Pad to the longest sequence in the batch
+                 - PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
+                 - PaddingStrategy.DO_NOT_PAD: Do not pad
+                 The tokenizer padding sides are defined in self.padding_side:
+
+                     - 'left': pads on the left of the sequences
+                     - 'right': pads on the right of the sequences
+             pad_to_multiple_of: (optional) If set, pad the sequence to a multiple of the provided value.
+                 This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
+                 `>= 7.5` (Volta).
+             return_attention_mask:
+                 (optional) Set to False to avoid returning attention mask (default: set to model specifics)
+         """
+         # This tokenizer only supports left padding.
+         assert self.padding_side == "left"
+
+         required_input = encoded_inputs[self.model_input_names[0]]
+         seq_length = len(required_input)
+
+         if padding_strategy == PaddingStrategy.LONGEST:
+             max_length = len(required_input)
+
+         if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
+             max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
+
+         needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length
+
+         # Initialize attention mask if not present.
+         if "attention_mask" not in encoded_inputs:
+             encoded_inputs["attention_mask"] = [1] * seq_length
+
+         if "position_ids" not in encoded_inputs:
+             encoded_inputs["position_ids"] = list(range(seq_length))
+
+         if needs_to_be_padded:
+             difference = max_length - len(required_input)
+
+             if "attention_mask" in encoded_inputs:
+                 encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"]
+             if "position_ids" in encoded_inputs:
+                 encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"]
+             encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
+
+         return encoded_inputs
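
To see the prompt layout this class produces (the `[gMASK] sop` prefix, role-tagged segments, and a trailing `<|assistant|>` tag), a small usage sketch, again assuming a local clone:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("checkpoint-200", trust_remote_code=True)
batch = tok.build_chat_input("Hello", history=[], role="user")

ids = batch["input_ids"][0].tolist()
print(ids[:2])          # the [gMASK] and sop prefix ids
print(tok.decode(ids))  # ... <|user|> Hello <|assistant|>
```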
checkpoint-200/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2
+ size 1018370
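
`tokenizer.model` is the raw SentencePiece model that `SPTokenizer` wraps; the nine extra tokens ([MASK], [gMASK], [sMASK], sop, eop and the four role markers) are appended on top of its base vocabulary at load time. A sketch, assuming an LFS checkout:

```python
from sentencepiece import SentencePieceProcessor

sp = SentencePieceProcessor(model_file="checkpoint-200/tokenizer.model")
print(sp.vocab_size())                       # base vocab, before the 9 added tokens
print(sp.EncodeAsPieces("Hello, CoolShell"))
```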