diff --git a/README.md b/README.md index f349ccf23d853c78d00f6a3764040f5242a051c9..5b89f059a261eb30e1fbbe11ff7d3e8dd47ad2fa 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,14 @@ +--- +tags: +- llama-factory +- lora +- generated_from_trainer +- coolshell +base_model: chatglm3-6b +model-index: +- name: coolshell-llm +--- + # CoolShell LLM We express our deepest gratitude to Mr. Chen Hao for his selfless sharing in the internet community, especially in the field of technology. @@ -5,7 +16,42 @@ We express our deepest gratitude to Mr. Chen Hao for his selfless sharing in the > An orchid in deep forest won't stop giving out aroma despite nobody appreciating it. > A good man who is moral and well-behaved won't give up his principles despite poverty. -This repository's model has been trained using the dataset provided by coolshell-llm. For detailed usage instructions and more information, please visit the [coolshell-llm GitHub page](https://github.com/megaease/coolshell-llm). + +- [Model description](#model-description) +- [Training procedure](#training-procedure) + - [Training hyperparameters](#training-hyperparameters) + - [Framework versions](#framework-versions) + - [Demo](#demo) + - [Statement](#statement) + - [Special Thanks](#special-thanks) + +## Model description + +This model is a fine-tuned version of [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b), trained on the [coolshell-llm](https://github.com/megaease/coolshell-llm) dataset using the QLoRA 4-bit method. For detailed usage instructions and more information, please visit the [coolshell-llm GitHub page](https://github.com/megaease/coolshell-llm).
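For quick testing, the adapter can be attached to the base model with PEFT and Transformers. The snippet below is a minimal sketch rather than the project's official loader: it assumes the base weights are fetched from the Hugging Face Hub, that this repository's adapter files (`adapter_config.json`, `adapter_model.safetensors`) sit in the current directory, and that a CUDA-capable device is available; adjust paths, dtype, and device to your environment.

```python
import torch
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

BASE_MODEL = "THUDM/chatglm3-6b"
ADAPTER_PATH = "."  # directory containing adapter_config.json / adapter_model.safetensors

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
model = AutoModel.from_pretrained(
    BASE_MODEL, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter (r=32, alpha=64, target module: query_key_value)
# recorded in adapter_config.json on top of the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER_PATH)
model = model.eval()

# ChatGLM3's remote code exposes a chat() helper; PeftModel forwards the call.
response, history = model.chat(tokenizer, "酷壳网有哪些内容", history=[])
print(response)
```

Because the adapter is only ~31 MB, it can also be merged into the base weights with `model.merge_and_unload()` if a standalone checkpoint is preferred.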
+ +## Training procedure + +### Training hyperparameters + +The following hyperparameters were used during training: +- learning_rate: 0.002 +- train_batch_size: 4 +- eval_batch_size: 8 +- seed: 42 +- gradient_accumulation_steps: 4 +- total_train_batch_size: 16 +- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 +- lr_scheduler_type: cosine +- num_epochs: 25.0 + +### Framework versions + +- PEFT 0.7.1 +- Transformers 4.36.2 +- Pytorch 2.1.2+cu121 +- Datasets 2.15.0 +- Tokenizers 0.15.0 +- LLaMA-Factory 0.4.0 ### Demo @@ -28,9 +74,12 @@ User: 酷壳网有哪些内容 User: exit ``` - ### Statement The CoolShell LLM model aims to perpetuate the spirit of Mr. Chen Hao. Do not use the open-source model and code, and any derivatives produced from the open-source project, for any purpose that may harm the nation and society, or for any service that has not undergone safety assessment and registration. -Although every effort has been made to ensure the compliance and accuracy of the data at every stage of model training, due to the influence of probabilistic randomness, the accuracy of output content cannot be guaranteed. Furthermore, the model's output can be easily misled by user input. This project does not assume any responsibility for data security, public opinion risks, or any risks and liabilities arising from the model being misled, abused, disseminated, or improperly utilized. \ No newline at end of file +Although every effort has been made to ensure the compliance and accuracy of the data at every stage of model training, due to the influence of probabilistic randomness, the accuracy of output content cannot be guaranteed. Furthermore, the model's output can be easily misled by user input. This project does not assume any responsibility for data security, public opinion risks, or any risks and liabilities arising from the model being misled, abused, disseminated, or improperly utilized. 
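The training hyperparameters listed above correspond to a LLaMA-Factory QLoRA run along the lines of the sketch below. This is an assumed reconstruction, not the exact command used: the dataset name and output path are placeholders, and flag names should be checked against the LLaMA-Factory version you run (0.4.x used the `src/train_bash.py` entry point).

```shell
python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path THUDM/chatglm3-6b \
    --dataset coolshell \
    --template chatglm3 \
    --finetuning_type lora \
    --lora_target query_key_value \
    --lora_rank 32 \
    --lora_dropout 0.1 \
    --quantization_bit 4 \
    --learning_rate 2e-3 \
    --num_train_epochs 25.0 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --fp16 \
    --output_dir coolshell-llm
```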
+ + +### Special Thanks +We are immensely grateful to [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for providing such a feature-rich and easy-to-use LLM fine-tuning framework. Similarly, we would like to thank Zhipu AI and the KEG Laboratory of Tsinghua University for their open-source contribution to the [ChatGLM3](https://github.com/THUDM/ChatGLM3) model. Without their exceptional work, the establishment of this repository would not have been possible. \ No newline at end of file diff --git a/README.zh.md b/README.zh.md index 00deadfe7be17fd88957d95733a8234ec2358a79..560c9edf54c1f4fd38e199f88d3f58af472be804 100644 --- a/README.zh.md +++ b/README.zh.md @@ -1,3 +1,14 @@ +--- +tags: +- llama-factory +- lora +- generated_from_trainer +- coolshell +base_model: chatglm3-6b +model-index: +- name: coolshell-llm +--- + # CoolShell LLM 感恩陈皓先生对中文互联网,尤其是技术领域无私的分享。 @@ -5,7 +16,42 @@ > 芝兰生于深谷,不以无人而不芳。 > 君子修身养德,不以穷困而改志。 -本仓库的模型是基于 coolshell-llm 提供的数据集训练得来的。详细的使用方法和更多信息,请访问[coolshell-llm 的 GitHub 页面](https://github.com/megaease/coolshell-llm)。 +- [模型描述](#模型描述) +- [训练过程](#训练过程) + - [训练超参数](#训练超参数) + - [框架版本](#框架版本) + - [演示示例](#演示示例) + - [声明](#声明) + - [特别鸣谢](#特别鸣谢) + + +## 模型描述 + +本模型是基于 [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b),使用 [coolshell-llm](https://github.com/megaease/coolshell-llm) 数据集并采用 QLoRA 4-bit 方法微调得到的。更多使用方法请查看 [coolshell-llm GitHub 页面](https://github.com/megaease/coolshell-llm)。 + +## 训练过程 + +### 训练超参数 + +训练使用以下超参数: +- learning_rate: 0.002 +- train_batch_size: 4 +- eval_batch_size: 8 +- seed: 42 +- gradient_accumulation_steps: 4 +- total_train_batch_size: 16 +- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 +- lr_scheduler_type: cosine +- num_epochs: 25.0 + +### 框架版本 + +- PEFT 0.7.1 +- Transformers 4.36.2 +- Pytorch 2.1.2+cu121 +- Datasets 2.15.0 +- Tokenizers 0.15.0 +- LLaMA-Factory 0.4.0 ### 演示示例 @@ -30,9 +76,13 @@ User: 酷壳网有哪些内容 User: exit ``` - ### 声明 CoolShell LLM
模型旨在传承陈皓先生精神,勿将开源模型和代码及基于开源项目产生的衍生物用于任何可能给国家和社会带来危害的用途以及用于任何未经过安全评估和备案的服务。 -尽管模型在训练的各个阶段都尽力确保数据的合规性和准确性,但由于模型受概率随机性因素影响,无法保证输出内容的准确。同时模型的输出容易被用户的输入误导。本项目不承担开源模型和代码导致的数据安全、舆情风险或发生任何模型被误导、滥用、传播、不当利用而产生的风险和责任。 \ No newline at end of file +尽管模型在训练的各个阶段都尽力确保数据的合规性和准确性,但由于模型受概率随机性因素影响,无法保证输出内容的准确。同时模型的输出容易被用户的输入误导。本项目不承担开源模型和代码导致的数据安全、舆情风险或发生任何模型被误导、滥用、传播、不当利用而产生的风险和责任。 + + +### 特别鸣谢 + +我们非常感谢 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) 提供了如此功能丰富且易于使用的 LLM 微调框架。同样,我们也要感谢智谱 AI 和清华大学 KEG 实验室对 [ChatGLM3](https://github.com/THUDM/ChatGLM3) 模型的开源贡献。没有他们的杰出工作,本仓库的建立将无从谈起。 \ No newline at end of file diff --git a/adapter_config.json b/adapter_config.json new file mode 100644 index 0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/adapter_model.safetensors b/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..8b1d852d68a43b9671e9576f9427ded10ee0c12d --- /dev/null +++ b/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bc2583490c7dc47bededcc0eaaa25d9aafe96d7680d7ecf5ec077c85de59604 +size 31204248 diff --git a/all_results.json b/all_results.json new file mode 100644 index 0000000000000000000000000000000000000000..74b97645f2a1a9e849ff2b89db981875744c3502 --- /dev/null +++ b/all_results.json @@ 
-0,0 +1,7 @@ +{ + "epoch": 25.0, + "train_loss": 1.3768115234375, + "train_runtime": 24197.7873, + "train_samples_per_second": 0.724, + "train_steps_per_second": 0.045 +} \ No newline at end of file diff --git a/checkpoint-100/README.md b/checkpoint-100/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-100/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. 
+ +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). + +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-100/adapter_config.json b/checkpoint-100/adapter_config.json new file mode 100644 index 
0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-100/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-100/adapter_model.safetensors b/checkpoint-100/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..9fd61fd283aa45886ba4dae97bc177d3d44b697c --- /dev/null +++ b/checkpoint-100/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f8a01b2ff9ae8d39695c90956fca3c08f1cbc215ff8ec47d39cdb42704f85f7 +size 31204248 diff --git a/checkpoint-100/optimizer.pt b/checkpoint-100/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..c23a962582f26fc2b764eda6853e713397c97536 --- /dev/null +++ b/checkpoint-100/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f89a17984e8f8325a843e199ab06bda3f078c75a4a70fd390368380879c4da9 +size 62437882 diff --git a/checkpoint-100/rng_state.pth b/checkpoint-100/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..e0c52e7520af7e10e7158e5ef5b94c6d124cd1d1 --- /dev/null +++ b/checkpoint-100/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0dabbebc3b7aae0f1e2e08720110c236a4c4ad8bcc4021283756db5a9251a361 +size 14244 diff --git a/checkpoint-100/scheduler.pt 
b/checkpoint-100/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..48f4ae9080da32e9f066992f7ab50ec4e3e59308 --- /dev/null +++ b/checkpoint-100/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5a75a62743becb9bf113e0f626f02da4c2bf599473c2d2862708dd9fbc349c5 +size 1064 diff --git a/checkpoint-100/special_tokens_map.json b/checkpoint-100/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-100/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/checkpoint-100/tokenization_chatglm.py b/checkpoint-100/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-100/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = 
["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. 
""" + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "": self.tokenizer.bos_id, + "": self.tokenizer.eos_id, + "": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "" + + @property + def pad_token(self) -> str: + return "" + + @property + def pad_token_id(self): + return self.get_command("") + + @property + def eos_token(self) -> str: + return "" + + @property + def eos_token_id(self): + return self.get_command("") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + 
vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. + """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in 
item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. + + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens.
+ padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). + return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. 
+ if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-100/tokenizer.model b/checkpoint-100/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-100/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-100/tokenizer_config.json b/checkpoint-100/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/checkpoint-100/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 1000000000000000019884624838656, + 
"pad_token": "", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "" +} diff --git a/checkpoint-100/trainer_state.json b/checkpoint-100/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..dfd8b46eae5e2150377f43d8e88b0328c7c053af --- /dev/null +++ b/checkpoint-100/trainer_state.json @@ -0,0 +1,141 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 2.2727272727272725, + "eval_steps": 500, + "global_step": 100, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + 
"learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 5.099717548376064e+16, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-100/training_args.bin b/checkpoint-100/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-100/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-1000/README.md b/checkpoint-1000/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-1000/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- 
**Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. + +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
+ +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-1000/adapter_config.json b/checkpoint-1000/adapter_config.json new file mode 100644 index 0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-1000/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-1000/adapter_model.safetensors b/checkpoint-1000/adapter_model.safetensors new file mode 100644 index 
0000000000000000000000000000000000000000..f1459151d22b20b20a94adb4734c8ab8b49598fa --- /dev/null +++ b/checkpoint-1000/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:323caf0b1e8894e4ef8b0dbe356d83adafb2f8672a02f89fb8729684fbf30c82 +size 31204248 diff --git a/checkpoint-1000/optimizer.pt b/checkpoint-1000/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..cb1bd7c9fe8c1b607ce8cb00a3f71ea36572c142 --- /dev/null +++ b/checkpoint-1000/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:914475ddbfdc97f3d9f8637d5b05f797d202f9a60e23df9d28710afb7e06205a +size 62437882 diff --git a/checkpoint-1000/rng_state.pth b/checkpoint-1000/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..9ef1842deaabbd12b029eacd780378521b672e94 --- /dev/null +++ b/checkpoint-1000/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1073cb8b57930e10d4affaf055d83ef268bea78a4de9ff17cd6d0203574a40d +size 14244 diff --git a/checkpoint-1000/scheduler.pt b/checkpoint-1000/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..0cc6cd963cd9ae2369bf8384f6239404dd96be65 --- /dev/null +++ b/checkpoint-1000/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bed216a1f1980adb444c4a55e2b348e6b6c8174e1a232afea7a11177b3480627 +size 1064 diff --git a/checkpoint-1000/special_tokens_map.json b/checkpoint-1000/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-1000/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": 
false + } + ] +} diff --git a/checkpoint-1000/tokenization_chatglm.py b/checkpoint-1000/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-1000/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + 
t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) into an id using the vocab. """ + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) into a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "<bos>": self.tokenizer.bos_id, + "<eos>": self.tokenizer.eos_id, + "<pad>": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side,
clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "<unk>" + + @property + def pad_token(self) -> str: + return "<unk>" + + @property + def pad_token_id(self): + return self.get_command("<pad>") + + @property + def eos_token(self) -> str: + return "</s>" + + @property + def eos_token_id(self): + return self.get_command("<eos>") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) into an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) into a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the name of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved.
+ """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. 
+ + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. + padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta).
+ return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. + if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-1000/tokenizer.model b/checkpoint-1000/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-1000/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-1000/tokenizer_config.json b/checkpoint-1000/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ 
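The `tokenize` path in `SPTokenizer` above splits the input on the role markers with `re.finditer` so that tokens like `<|user|>` survive as single pieces instead of being broken apart by SentencePiece. A minimal stand-alone sketch of just that regex split (the plain-text spans are kept whole here as a stand-in for the `EncodeAsPieces` call the real tokenizer makes):

```python
import re

# Role markers recognized by the ChatGLM3 tokenizer in the file above.
ROLE_TOKENS = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"]
ROLE_PATTERN = "|".join(re.escape(t) for t in ROLE_TOKENS)

def split_on_role_tokens(s: str) -> list:
    """Mirror SPTokenizer.tokenize(encode_special_tokens=True): emit each role
    marker as its own piece, and keep the spans between them intact (the real
    tokenizer hands those spans to SentencePiece instead)."""
    pieces, last = [], 0
    for match in re.finditer(ROLE_PATTERN, s):
        if last < match.start():
            pieces.append(s[last:match.start()])
        pieces.append(match.group())  # the role marker itself, unsplit
        last = match.end()
    if last < len(s):
        pieces.append(s[last:])  # trailing text after the final marker
    return pieces

print(split_on_role_tokens("<|user|>你好<|assistant|>"))
# → ['<|user|>', '你好', '<|assistant|>']
```

Because the markers are `re.escape`d before joining, the `|` characters inside them are matched literally rather than treated as alternation.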
b/checkpoint-1000/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "</s>", + "model_max_length": 1000000000000000019884624838656, + "pad_token": "<unk>", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "<unk>" +} diff --git a/checkpoint-1000/trainer_state.json b/checkpoint-1000/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..50aed210e279ad94f838ad58d07469a05435ba36 --- /dev/null +++ b/checkpoint-1000/trainer_state.json @@ -0,0 +1,1221 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 22.727272727272727, + "eval_steps": 500, + "global_step": 1000, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, +
"step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + "learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + 
"epoch": 3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + "learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + 
"epoch": 5.34, + "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + "learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + 
"epoch": 7.61, + "learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + "learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 
9.89, + "learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + "learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + 
{ + "epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + "learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + "learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + }, + { + "epoch": 13.75, + "learning_rate": 0.000843565534959769, + "loss": 0.9619, + "step": 605 + }, + { + "epoch": 13.86, + "learning_rate": 0.0008294778073673762, + "loss": 0.9123, + "step": 610 + }, + { + "epoch": 13.98, + "learning_rate": 0.0008154248516341547, + "loss": 0.9959, + "step": 615 + }, + { + "epoch": 14.09, + "learning_rate": 0.0008014095333542549, + "loss": 0.7503, + "step": 620 + }, + { + "epoch": 14.2, + "learning_rate": 0.0007874347104470233, + "loss": 0.7357, + "step": 625 + }, + { + "epoch": 14.32, + "learning_rate": 0.0007735032325742355, + "loss": 0.7477, + "step": 630 + }, + { + 
"epoch": 14.43, + "learning_rate": 0.0007596179405590076, + "loss": 0.8088, + "step": 635 + }, + { + "epoch": 14.55, + "learning_rate": 0.0007457816658065133, + "loss": 0.7652, + "step": 640 + }, + { + "epoch": 14.66, + "learning_rate": 0.0007319972297266214, + "loss": 0.7847, + "step": 645 + }, + { + "epoch": 14.77, + "learning_rate": 0.0007182674431585703, + "loss": 0.7984, + "step": 650 + }, + { + "epoch": 14.89, + "learning_rate": 0.0007045951057978, + "loss": 0.8732, + "step": 655 + }, + { + "epoch": 15.0, + "learning_rate": 0.0006909830056250527, + "loss": 0.8258, + "step": 660 + }, + { + "epoch": 15.11, + "learning_rate": 0.0006774339183378663, + "loss": 0.6311, + "step": 665 + }, + { + "epoch": 15.23, + "learning_rate": 0.0006639506067845697, + "loss": 0.6543, + "step": 670 + }, + { + "epoch": 15.34, + "learning_rate": 0.0006505358204009018, + "loss": 0.6421, + "step": 675 + }, + { + "epoch": 15.45, + "learning_rate": 0.0006371922946493591, + "loss": 0.6937, + "step": 680 + }, + { + "epoch": 15.57, + "learning_rate": 0.0006239227504614003, + "loss": 0.6887, + "step": 685 + }, + { + "epoch": 15.68, + "learning_rate": 0.0006107298936826086, + "loss": 0.7097, + "step": 690 + }, + { + "epoch": 15.8, + "learning_rate": 0.0005976164145209322, + "loss": 0.6778, + "step": 695 + }, + { + "epoch": 15.91, + "learning_rate": 0.0005845849869981136, + "loss": 0.7124, + "step": 700 + }, + { + "epoch": 16.02, + "learning_rate": 0.000571638268404419, + "loss": 0.7053, + "step": 705 + }, + { + "epoch": 16.14, + "learning_rate": 0.0005587788987567784, + "loss": 0.5863, + "step": 710 + }, + { + "epoch": 16.25, + "learning_rate": 0.0005460095002604533, + "loss": 0.5588, + "step": 715 + }, + { + "epoch": 16.36, + "learning_rate": 0.0005333326767743263, + "loss": 0.5363, + "step": 720 + }, + { + "epoch": 16.48, + "learning_rate": 0.0005207510132799435, + "loss": 0.6137, + "step": 725 + }, + { + "epoch": 16.59, + "learning_rate": 0.0005082670753543961, + "loss": 0.5606, + "step": 
730 + }, + { + "epoch": 16.7, + "learning_rate": 0.0004958834086471683, + "loss": 0.629, + "step": 735 + }, + { + "epoch": 16.82, + "learning_rate": 0.00048360253836103817, + "loss": 0.5754, + "step": 740 + }, + { + "epoch": 16.93, + "learning_rate": 0.0004714269687371581, + "loss": 0.6239, + "step": 745 + }, + { + "epoch": 17.05, + "learning_rate": 0.0004593591825444028, + "loss": 0.5807, + "step": 750 + }, + { + "epoch": 17.16, + "learning_rate": 0.0004474016405730973, + "loss": 0.465, + "step": 755 + }, + { + "epoch": 17.27, + "learning_rate": 0.00043555678113323104, + "loss": 0.4871, + "step": 760 + }, + { + "epoch": 17.39, + "learning_rate": 0.00042382701955724725, + "loss": 0.4623, + "step": 765 + }, + { + "epoch": 17.5, + "learning_rate": 0.00041221474770752696, + "loss": 0.5059, + "step": 770 + }, + { + "epoch": 17.61, + "learning_rate": 0.00040072233348865304, + "loss": 0.5021, + "step": 775 + }, + { + "epoch": 17.73, + "learning_rate": 0.0003893521203645618, + "loss": 0.5138, + "step": 780 + }, + { + "epoch": 17.84, + "learning_rate": 0.00037810642688067796, + "loss": 0.5212, + "step": 785 + }, + { + "epoch": 17.95, + "learning_rate": 0.00036698754619112975, + "loss": 0.5611, + "step": 790 + }, + { + "epoch": 18.07, + "learning_rate": 0.00035599774559114475, + "loss": 0.4956, + "step": 795 + }, + { + "epoch": 18.18, + "learning_rate": 0.000345139266054715, + "loss": 0.4243, + "step": 800 + }, + { + "epoch": 18.3, + "learning_rate": 0.0003344143217776319, + "loss": 0.4391, + "step": 805 + }, + { + "epoch": 18.41, + "learning_rate": 0.00032382509972598086, + "loss": 0.4627, + "step": 810 + }, + { + "epoch": 18.52, + "learning_rate": 0.0003133737591901864, + "loss": 0.4208, + "step": 815 + }, + { + "epoch": 18.64, + "learning_rate": 0.0003030624313447067, + "loss": 0.45, + "step": 820 + }, + { + "epoch": 18.75, + "learning_rate": 0.00029289321881345256, + "loss": 0.44, + "step": 825 + }, + { + "epoch": 18.86, + "learning_rate": 0.0002828681952410366, + 
"loss": 0.4451, + "step": 830 + }, + { + "epoch": 18.98, + "learning_rate": 0.0002729894048699265, + "loss": 0.4494, + "step": 835 + }, + { + "epoch": 19.09, + "learning_rate": 0.00026325886212359495, + "loss": 0.3839, + "step": 840 + }, + { + "epoch": 19.2, + "learning_rate": 0.0002536785511957531, + "loss": 0.3728, + "step": 845 + }, + { + "epoch": 19.32, + "learning_rate": 0.00024425042564574185, + "loss": 0.4126, + "step": 850 + }, + { + "epoch": 19.43, + "learning_rate": 0.00023497640800017682, + "loss": 0.4183, + "step": 855 + }, + { + "epoch": 19.55, + "learning_rate": 0.0002258583893609175, + "loss": 0.3778, + "step": 860 + }, + { + "epoch": 19.66, + "learning_rate": 0.00021689822901944456, + "loss": 0.3758, + "step": 865 + }, + { + "epoch": 19.77, + "learning_rate": 0.000208097754077725, + "loss": 0.4034, + "step": 870 + }, + { + "epoch": 19.89, + "learning_rate": 0.0001994587590756397, + "loss": 0.4085, + "step": 875 + }, + { + "epoch": 20.0, + "learning_rate": 0.00019098300562505265, + "loss": 0.3673, + "step": 880 + }, + { + "epoch": 20.11, + "learning_rate": 0.0001826722220505931, + "loss": 0.363, + "step": 885 + }, + { + "epoch": 20.23, + "learning_rate": 0.000174528103037226, + "loss": 0.3707, + "step": 890 + }, + { + "epoch": 20.34, + "learning_rate": 0.00016655230928468257, + "loss": 0.369, + "step": 895 + }, + { + "epoch": 20.45, + "learning_rate": 0.00015874646716881869, + "loss": 0.3528, + "step": 900 + }, + { + "epoch": 20.57, + "learning_rate": 0.00015111216840997744, + "loss": 0.3581, + "step": 905 + }, + { + "epoch": 20.68, + "learning_rate": 0.00014365096974841107, + "loss": 0.3466, + "step": 910 + }, + { + "epoch": 20.8, + "learning_rate": 0.00013636439262684297, + "loss": 0.3274, + "step": 915 + }, + { + "epoch": 20.91, + "learning_rate": 0.00012925392288022297, + "loss": 0.3401, + "step": 920 + }, + { + "epoch": 21.02, + "learning_rate": 0.00012232101043274435, + "loss": 0.3435, + "step": 925 + }, + { + "epoch": 21.14, + "learning_rate": 
0.00011556706900218572, + "loss": 0.2972, + "step": 930 + }, + { + "epoch": 21.25, + "learning_rate": 0.00010899347581163222, + "loss": 0.3153, + "step": 935 + }, + { + "epoch": 21.36, + "learning_rate": 0.00010260157130864178, + "loss": 0.3315, + "step": 940 + }, + { + "epoch": 21.48, + "learning_rate": 9.639265889190829e-05, + "loss": 0.3264, + "step": 945 + }, + { + "epoch": 21.59, + "learning_rate": 9.036800464548156e-05, + "loss": 0.3427, + "step": 950 + }, + { + "epoch": 21.7, + "learning_rate": 8.4528837080594e-05, + "loss": 0.3415, + "step": 955 + }, + { + "epoch": 21.82, + "learning_rate": 7.887634688515e-05, + "loss": 0.323, + "step": 960 + }, + { + "epoch": 21.93, + "learning_rate": 7.341168668092857e-05, + "loss": 0.2961, + "step": 965 + }, + { + "epoch": 22.05, + "learning_rate": 6.813597078854772e-05, + "loss": 0.3276, + "step": 970 + }, + { + "epoch": 22.16, + "learning_rate": 6.305027500023842e-05, + "loss": 0.3045, + "step": 975 + }, + { + "epoch": 22.27, + "learning_rate": 5.8155636360475384e-05, + "loss": 0.3167, + "step": 980 + }, + { + "epoch": 22.39, + "learning_rate": 5.345305295450997e-05, + "loss": 0.319, + "step": 985 + }, + { + "epoch": 22.5, + "learning_rate": 4.894348370484647e-05, + "loss": 0.2852, + "step": 990 + }, + { + "epoch": 22.61, + "learning_rate": 4.4627848175703315e-05, + "loss": 0.3034, + "step": 995 + }, + { + "epoch": 22.73, + "learning_rate": 4.050702638550274e-05, + "loss": 0.2845, + "step": 1000 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 5.092929071525069e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-1000/training_args.bin b/checkpoint-1000/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-1000/training_args.bin @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-1100/README.md b/checkpoint-1100/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-1100/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. 
+ +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). + +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-1100/adapter_config.json b/checkpoint-1100/adapter_config.json new file mode 100644 index 
0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-1100/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-1100/adapter_model.safetensors b/checkpoint-1100/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..8b1d852d68a43b9671e9576f9427ded10ee0c12d --- /dev/null +++ b/checkpoint-1100/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bc2583490c7dc47bededcc0eaaa25d9aafe96d7680d7ecf5ec077c85de59604 +size 31204248 diff --git a/checkpoint-1100/optimizer.pt b/checkpoint-1100/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..0c15401b201f679108fd2da0aeba241cb2180799 --- /dev/null +++ b/checkpoint-1100/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71abfda018effb690a77e01b7df48e60cb730b12599e5ad6fdc26845b844760a +size 62437882 diff --git a/checkpoint-1100/rng_state.pth b/checkpoint-1100/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..f1bc286248c277727b6ed1b195d70c8943badfd8 --- /dev/null +++ b/checkpoint-1100/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7866b8fc933c6248bae764638e49b94ebe1f35463171c6986de52c6a81632428 +size 14244 diff --git a/checkpoint-1100/scheduler.pt 
b/checkpoint-1100/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..1bfb3c14ecef69c6229b4df2d31a66b2a224a72e --- /dev/null +++ b/checkpoint-1100/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:834bea796770b94431ea03d70df0b96b826ab2cbdccf7ff1204aca5c40cb9ee7 +size 1064 diff --git a/checkpoint-1100/special_tokens_map.json b/checkpoint-1100/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-1100/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/checkpoint-1100/tokenization_chatglm.py b/checkpoint-1100/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-1100/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens 
= ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. 
""" + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "": self.tokenizer.bos_id, + "": self.tokenizer.eos_id, + "": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "" + + @property + def pad_token(self) -> str: + return "" + + @property + def pad_token_id(self): + return self.get_command("") + + @property + def eos_token(self) -> str: + return "" + + @property + def eos_token_id(self): + return self.get_command("") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + 
vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. + """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in 
item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. + + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. 
+ padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). + return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. 
+ if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-1100/tokenizer.model b/checkpoint-1100/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-1100/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-1100/tokenizer_config.json b/checkpoint-1100/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/checkpoint-1100/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 
1000000000000000019884624838656, + "pad_token": "<unk>", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "<unk>" +} diff --git a/checkpoint-1100/trainer_state.json b/checkpoint-1100/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..9cec2b6c51c14a05ed819b8995e5b82ad3df8168 --- /dev/null +++ b/checkpoint-1100/trainer_state.json @@ -0,0 +1,1341 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 25.0, + "eval_steps": 500, + "global_step": 1100, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + 
"epoch": 1.59, + "learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 
3.86, + "learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, + "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + 
"learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + "learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + 
"learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + "learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + 
"learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + { + "epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + "learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + 
"learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + }, + { + "epoch": 13.75, + "learning_rate": 0.000843565534959769, + "loss": 0.9619, + "step": 605 + }, + { + "epoch": 13.86, + "learning_rate": 0.0008294778073673762, + "loss": 0.9123, + "step": 610 + }, + { + "epoch": 13.98, + "learning_rate": 0.0008154248516341547, + "loss": 0.9959, + "step": 615 + }, + { + "epoch": 14.09, + "learning_rate": 0.0008014095333542549, + "loss": 0.7503, + "step": 620 + }, + { + "epoch": 14.2, + "learning_rate": 0.0007874347104470233, + "loss": 0.7357, + "step": 625 + }, + { + "epoch": 14.32, + "learning_rate": 0.0007735032325742355, + "loss": 0.7477, + "step": 630 + }, + { + "epoch": 14.43, + "learning_rate": 0.0007596179405590076, + "loss": 0.8088, + "step": 635 + }, + { + "epoch": 14.55, + "learning_rate": 0.0007457816658065133, + "loss": 0.7652, + "step": 640 + }, + { + "epoch": 14.66, + "learning_rate": 0.0007319972297266214, + "loss": 0.7847, + "step": 645 + }, + { + "epoch": 14.77, + "learning_rate": 0.0007182674431585703, + "loss": 0.7984, + "step": 650 + }, + { + "epoch": 14.89, + "learning_rate": 0.0007045951057978, + "loss": 0.8732, + "step": 655 + }, + { + "epoch": 15.0, + "learning_rate": 0.0006909830056250527, + "loss": 0.8258, + "step": 660 + }, + { + "epoch": 15.11, + "learning_rate": 0.0006774339183378663, + "loss": 0.6311, + "step": 665 + }, + { + 
"epoch": 15.23, + "learning_rate": 0.0006639506067845697, + "loss": 0.6543, + "step": 670 + }, + { + "epoch": 15.34, + "learning_rate": 0.0006505358204009018, + "loss": 0.6421, + "step": 675 + }, + { + "epoch": 15.45, + "learning_rate": 0.0006371922946493591, + "loss": 0.6937, + "step": 680 + }, + { + "epoch": 15.57, + "learning_rate": 0.0006239227504614003, + "loss": 0.6887, + "step": 685 + }, + { + "epoch": 15.68, + "learning_rate": 0.0006107298936826086, + "loss": 0.7097, + "step": 690 + }, + { + "epoch": 15.8, + "learning_rate": 0.0005976164145209322, + "loss": 0.6778, + "step": 695 + }, + { + "epoch": 15.91, + "learning_rate": 0.0005845849869981136, + "loss": 0.7124, + "step": 700 + }, + { + "epoch": 16.02, + "learning_rate": 0.000571638268404419, + "loss": 0.7053, + "step": 705 + }, + { + "epoch": 16.14, + "learning_rate": 0.0005587788987567784, + "loss": 0.5863, + "step": 710 + }, + { + "epoch": 16.25, + "learning_rate": 0.0005460095002604533, + "loss": 0.5588, + "step": 715 + }, + { + "epoch": 16.36, + "learning_rate": 0.0005333326767743263, + "loss": 0.5363, + "step": 720 + }, + { + "epoch": 16.48, + "learning_rate": 0.0005207510132799435, + "loss": 0.6137, + "step": 725 + }, + { + "epoch": 16.59, + "learning_rate": 0.0005082670753543961, + "loss": 0.5606, + "step": 730 + }, + { + "epoch": 16.7, + "learning_rate": 0.0004958834086471683, + "loss": 0.629, + "step": 735 + }, + { + "epoch": 16.82, + "learning_rate": 0.00048360253836103817, + "loss": 0.5754, + "step": 740 + }, + { + "epoch": 16.93, + "learning_rate": 0.0004714269687371581, + "loss": 0.6239, + "step": 745 + }, + { + "epoch": 17.05, + "learning_rate": 0.0004593591825444028, + "loss": 0.5807, + "step": 750 + }, + { + "epoch": 17.16, + "learning_rate": 0.0004474016405730973, + "loss": 0.465, + "step": 755 + }, + { + "epoch": 17.27, + "learning_rate": 0.00043555678113323104, + "loss": 0.4871, + "step": 760 + }, + { + "epoch": 17.39, + "learning_rate": 0.00042382701955724725, + "loss": 0.4623, + 
"step": 765 + }, + { + "epoch": 17.5, + "learning_rate": 0.00041221474770752696, + "loss": 0.5059, + "step": 770 + }, + { + "epoch": 17.61, + "learning_rate": 0.00040072233348865304, + "loss": 0.5021, + "step": 775 + }, + { + "epoch": 17.73, + "learning_rate": 0.0003893521203645618, + "loss": 0.5138, + "step": 780 + }, + { + "epoch": 17.84, + "learning_rate": 0.00037810642688067796, + "loss": 0.5212, + "step": 785 + }, + { + "epoch": 17.95, + "learning_rate": 0.00036698754619112975, + "loss": 0.5611, + "step": 790 + }, + { + "epoch": 18.07, + "learning_rate": 0.00035599774559114475, + "loss": 0.4956, + "step": 795 + }, + { + "epoch": 18.18, + "learning_rate": 0.000345139266054715, + "loss": 0.4243, + "step": 800 + }, + { + "epoch": 18.3, + "learning_rate": 0.0003344143217776319, + "loss": 0.4391, + "step": 805 + }, + { + "epoch": 18.41, + "learning_rate": 0.00032382509972598086, + "loss": 0.4627, + "step": 810 + }, + { + "epoch": 18.52, + "learning_rate": 0.0003133737591901864, + "loss": 0.4208, + "step": 815 + }, + { + "epoch": 18.64, + "learning_rate": 0.0003030624313447067, + "loss": 0.45, + "step": 820 + }, + { + "epoch": 18.75, + "learning_rate": 0.00029289321881345256, + "loss": 0.44, + "step": 825 + }, + { + "epoch": 18.86, + "learning_rate": 0.0002828681952410366, + "loss": 0.4451, + "step": 830 + }, + { + "epoch": 18.98, + "learning_rate": 0.0002729894048699265, + "loss": 0.4494, + "step": 835 + }, + { + "epoch": 19.09, + "learning_rate": 0.00026325886212359495, + "loss": 0.3839, + "step": 840 + }, + { + "epoch": 19.2, + "learning_rate": 0.0002536785511957531, + "loss": 0.3728, + "step": 845 + }, + { + "epoch": 19.32, + "learning_rate": 0.00024425042564574185, + "loss": 0.4126, + "step": 850 + }, + { + "epoch": 19.43, + "learning_rate": 0.00023497640800017682, + "loss": 0.4183, + "step": 855 + }, + { + "epoch": 19.55, + "learning_rate": 0.0002258583893609175, + "loss": 0.3778, + "step": 860 + }, + { + "epoch": 19.66, + "learning_rate": 
0.00021689822901944456, + "loss": 0.3758, + "step": 865 + }, + { + "epoch": 19.77, + "learning_rate": 0.000208097754077725, + "loss": 0.4034, + "step": 870 + }, + { + "epoch": 19.89, + "learning_rate": 0.0001994587590756397, + "loss": 0.4085, + "step": 875 + }, + { + "epoch": 20.0, + "learning_rate": 0.00019098300562505265, + "loss": 0.3673, + "step": 880 + }, + { + "epoch": 20.11, + "learning_rate": 0.0001826722220505931, + "loss": 0.363, + "step": 885 + }, + { + "epoch": 20.23, + "learning_rate": 0.000174528103037226, + "loss": 0.3707, + "step": 890 + }, + { + "epoch": 20.34, + "learning_rate": 0.00016655230928468257, + "loss": 0.369, + "step": 895 + }, + { + "epoch": 20.45, + "learning_rate": 0.00015874646716881869, + "loss": 0.3528, + "step": 900 + }, + { + "epoch": 20.57, + "learning_rate": 0.00015111216840997744, + "loss": 0.3581, + "step": 905 + }, + { + "epoch": 20.68, + "learning_rate": 0.00014365096974841107, + "loss": 0.3466, + "step": 910 + }, + { + "epoch": 20.8, + "learning_rate": 0.00013636439262684297, + "loss": 0.3274, + "step": 915 + }, + { + "epoch": 20.91, + "learning_rate": 0.00012925392288022297, + "loss": 0.3401, + "step": 920 + }, + { + "epoch": 21.02, + "learning_rate": 0.00012232101043274435, + "loss": 0.3435, + "step": 925 + }, + { + "epoch": 21.14, + "learning_rate": 0.00011556706900218572, + "loss": 0.2972, + "step": 930 + }, + { + "epoch": 21.25, + "learning_rate": 0.00010899347581163222, + "loss": 0.3153, + "step": 935 + }, + { + "epoch": 21.36, + "learning_rate": 0.00010260157130864178, + "loss": 0.3315, + "step": 940 + }, + { + "epoch": 21.48, + "learning_rate": 9.639265889190829e-05, + "loss": 0.3264, + "step": 945 + }, + { + "epoch": 21.59, + "learning_rate": 9.036800464548156e-05, + "loss": 0.3427, + "step": 950 + }, + { + "epoch": 21.7, + "learning_rate": 8.4528837080594e-05, + "loss": 0.3415, + "step": 955 + }, + { + "epoch": 21.82, + "learning_rate": 7.887634688515e-05, + "loss": 0.323, + "step": 960 + }, + { + "epoch": 21.93, 
+ "learning_rate": 7.341168668092857e-05, + "loss": 0.2961, + "step": 965 + }, + { + "epoch": 22.05, + "learning_rate": 6.813597078854772e-05, + "loss": 0.3276, + "step": 970 + }, + { + "epoch": 22.16, + "learning_rate": 6.305027500023842e-05, + "loss": 0.3045, + "step": 975 + }, + { + "epoch": 22.27, + "learning_rate": 5.8155636360475384e-05, + "loss": 0.3167, + "step": 980 + }, + { + "epoch": 22.39, + "learning_rate": 5.345305295450997e-05, + "loss": 0.319, + "step": 985 + }, + { + "epoch": 22.5, + "learning_rate": 4.894348370484647e-05, + "loss": 0.2852, + "step": 990 + }, + { + "epoch": 22.61, + "learning_rate": 4.4627848175703315e-05, + "loss": 0.3034, + "step": 995 + }, + { + "epoch": 22.73, + "learning_rate": 4.050702638550274e-05, + "loss": 0.2845, + "step": 1000 + }, + { + "epoch": 22.84, + "learning_rate": 3.658185862742103e-05, + "loss": 0.3136, + "step": 1005 + }, + { + "epoch": 22.95, + "learning_rate": 3.285314529804295e-05, + "loss": 0.3187, + "step": 1010 + }, + { + "epoch": 23.07, + "learning_rate": 2.93216467341475e-05, + "loss": 0.2907, + "step": 1015 + }, + { + "epoch": 23.18, + "learning_rate": 2.5988083057666535e-05, + "loss": 0.2955, + "step": 1020 + }, + { + "epoch": 23.3, + "learning_rate": 2.2853134028840594e-05, + "loss": 0.2785, + "step": 1025 + }, + { + "epoch": 23.41, + "learning_rate": 1.9917438907606554e-05, + "loss": 0.3369, + "step": 1030 + }, + { + "epoch": 23.52, + "learning_rate": 1.7181596323244453e-05, + "loss": 0.2837, + "step": 1035 + }, + { + "epoch": 23.64, + "learning_rate": 1.4646164152307017e-05, + "loss": 0.3002, + "step": 1040 + }, + { + "epoch": 23.75, + "learning_rate": 1.231165940486234e-05, + "loss": 0.3062, + "step": 1045 + }, + { + "epoch": 23.86, + "learning_rate": 1.0178558119067316e-05, + "loss": 0.2859, + "step": 1050 + }, + { + "epoch": 23.98, + "learning_rate": 8.247295264097288e-06, + "loss": 0.284, + "step": 1055 + }, + { + "epoch": 24.09, + "learning_rate": 6.518264651449779e-06, + "loss": 0.2607, + 
"step": 1060 + }, + { + "epoch": 24.2, + "learning_rate": 4.991818854640395e-06, + "loss": 0.3164, + "step": 1065 + }, + { + "epoch": 24.32, + "learning_rate": 3.6682691373086663e-06, + "loss": 0.2597, + "step": 1070 + }, + { + "epoch": 24.43, + "learning_rate": 2.5478853897464847e-06, + "loss": 0.2907, + "step": 1075 + }, + { + "epoch": 24.55, + "learning_rate": 1.630896073864352e-06, + "loss": 0.3033, + "step": 1080 + }, + { + "epoch": 24.66, + "learning_rate": 9.174881766043087e-07, + "loss": 0.3089, + "step": 1085 + }, + { + "epoch": 24.77, + "learning_rate": 4.078071718107701e-07, + "loss": 0.2964, + "step": 1090 + }, + { + "epoch": 24.89, + "learning_rate": 1.0195699056669839e-07, + "loss": 0.2995, + "step": 1095 + }, + { + "epoch": 25.0, + "learning_rate": 0.0, + "loss": 0.2936, + "step": 1100 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 5.602696856046797e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-1100/training_args.bin b/checkpoint-1100/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-1100/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-200/README.md b/checkpoint-200/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-200/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- 
**Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. + +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
+ +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-200/adapter_config.json b/checkpoint-200/adapter_config.json new file mode 100644 index 0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-200/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-200/adapter_model.safetensors b/checkpoint-200/adapter_model.safetensors new file mode 100644 index 
0000000000000000000000000000000000000000..d16ae400b3b9a0c8fb7180d09fc6884dc5eb966f --- /dev/null +++ b/checkpoint-200/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9079a8f13b0b663beb8af4a69f38304ffb47f535efa9d4fc2f28235905d33d6 +size 31204248 diff --git a/checkpoint-200/optimizer.pt b/checkpoint-200/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..01aaef905d13ddbebb940c28939aa01d88bc20da --- /dev/null +++ b/checkpoint-200/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d3c20e12a6fe7711738ea34dd0ceeb02446ef057730b074a3f796920de8f458e +size 62437882 diff --git a/checkpoint-200/rng_state.pth b/checkpoint-200/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..8345c9db73e65222f60443cc197cdfc365a9ac22 --- /dev/null +++ b/checkpoint-200/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:754a649249169df5413cd1afec214b0e512a562b2d537b50c7822a329e86ab92 +size 14244 diff --git a/checkpoint-200/scheduler.pt b/checkpoint-200/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..f0a696fc98d6a9ebeb0e366716525b2c1a450364 --- /dev/null +++ b/checkpoint-200/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca49ceb5308a589ec72593fdfc170ba0798f7206328f597dc676a71ad4f62985 +size 1064 diff --git a/checkpoint-200/special_tokens_map.json b/checkpoint-200/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-200/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} 
diff --git a/checkpoint-200/tokenization_chatglm.py b/checkpoint-200/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-200/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + 
else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "<bos>": self.tokenizer.bos_id, + "<eos>": self.tokenizer.eos_id, + "<pad>": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, +
encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "<unk>" + + @property + def pad_token(self) -> str: + return "<unk>" + + @property + def pad_token_id(self): + return self.get_command("<pad>") + + @property + def eos_token(self) -> str: + return "</s>" + + @property + def eos_token_id(self): + return self.get_command("<eos>") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the name of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved.
+ """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. 
+ + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. + padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta).
+ return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. + if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-200/tokenizer.model b/checkpoint-200/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-200/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-200/tokenizer_config.json b/checkpoint-200/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ 
b/checkpoint-200/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "</s>", + "model_max_length": 1000000000000000019884624838656, + "pad_token": "<unk>", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "<unk>" +} diff --git a/checkpoint-200/trainer_state.json b/checkpoint-200/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..155f7540d2f7f88c83b8ac3895de2f0e2097f161 --- /dev/null +++ b/checkpoint-200/trainer_state.json @@ -0,0 +1,261 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 4.545454545454545, + "eval_steps": 500, + "global_step": 200, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30
+ }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + "learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 
3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + "learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 1.0268727547723776e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-200/training_args.bin b/checkpoint-200/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-200/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c 
+size 4920 diff --git a/checkpoint-300/README.md b/checkpoint-300/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-300/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. 
+ +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). + +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-300/adapter_config.json b/checkpoint-300/adapter_config.json new file mode 100644 index 
0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-300/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-300/adapter_model.safetensors b/checkpoint-300/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..56cce9da25eb0f46b158a873c9cc05206ecade2c --- /dev/null +++ b/checkpoint-300/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e220d6419e740f923cc6124bc6265c9df3f562e96a78efcde9e7588717485b0 +size 31204248 diff --git a/checkpoint-300/optimizer.pt b/checkpoint-300/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..598c5f1357484d3d8e05dcf0b04e7704b5c1a45c --- /dev/null +++ b/checkpoint-300/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d10054eefdf5b0ca7a5d696876048300d180438f1352d2a4d5c1cfc16b17fdc +size 62437882 diff --git a/checkpoint-300/rng_state.pth b/checkpoint-300/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..9ea1936e36c296d9a3e57d0d856fed7d05759cee --- /dev/null +++ b/checkpoint-300/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2e382bd86a073d9ac435189aa47a7ad5d68e1129172b8fc68b1976c4a8b24c9 +size 14244 diff --git a/checkpoint-300/scheduler.pt 
b/checkpoint-300/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..4f6dc82f0472e269724cd750d8cbe5d7d135e91c --- /dev/null +++ b/checkpoint-300/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cd5a18fa2a68db0acda1cf96d93f0349c4089662fee086e3504535e77ceb535 +size 1064 diff --git a/checkpoint-300/special_tokens_map.json b/checkpoint-300/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-300/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/checkpoint-300/tokenization_chatglm.py b/checkpoint-300/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-300/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = 
["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. 
""" + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "": self.tokenizer.bos_id, + "": self.tokenizer.eos_id, + "": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "" + + @property + def pad_token(self) -> str: + return "" + + @property + def pad_token_id(self): + return self.get_command("") + + @property + def eos_token(self) -> str: + return "" + + @property + def eos_token_id(self): + return self.get_command("") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + 
vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. + """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in 
item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. + + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens.
+ padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). + return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. 
+ if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-300/tokenizer.model b/checkpoint-300/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-300/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-300/tokenizer_config.json b/checkpoint-300/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/checkpoint-300/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 1000000000000000019884624838656, + 
"pad_token": "", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "" +} diff --git a/checkpoint-300/trainer_state.json b/checkpoint-300/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..aff5fb80a09d1172535b6d961566c68db2450d37 --- /dev/null +++ b/checkpoint-300/trainer_state.json @@ -0,0 +1,381 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 6.818181818181818, + "eval_steps": 500, + "global_step": 300, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + "learning_rate": 
0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + "learning_rate": 
0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, + "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + "learning_rate": 
0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 1.530797220667392e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-300/training_args.bin b/checkpoint-300/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-300/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-400/README.md b/checkpoint-400/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-400/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model 
[optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. + +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
+ +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-400/adapter_config.json b/checkpoint-400/adapter_config.json new file mode 100644 index 0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-400/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-400/adapter_model.safetensors b/checkpoint-400/adapter_model.safetensors new file mode 100644 index 
0000000000000000000000000000000000000000..c7965b81c943c120cb2b07506d038b0241cbc1ca --- /dev/null +++ b/checkpoint-400/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a1b000af1e72645f71b8a829536a3dd0711ea56ebf72dc454d96e1969765c38 +size 31204248 diff --git a/checkpoint-400/optimizer.pt b/checkpoint-400/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..77d11a63d4326c54ade63beb5e49fc1e31581f2a --- /dev/null +++ b/checkpoint-400/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d15e821214296476b4cc6e0d82589dc357ee7e77e8b4e89dfd884bbdcadb6a4 +size 62437882 diff --git a/checkpoint-400/rng_state.pth b/checkpoint-400/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..c20b3d543b369e9adb6095f40bcd6b1dfbada244 --- /dev/null +++ b/checkpoint-400/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31c361eb8eecde08a271d93e6c5eef525134c62bd7fbd49722fb023d9072b1ea +size 14244 diff --git a/checkpoint-400/scheduler.pt b/checkpoint-400/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..9354fdb72269fbea1f865560f515702e795aca35 --- /dev/null +++ b/checkpoint-400/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:604a73bb32bc94a03b6bddbde38878816f4e28ff342b8206bf7cbabe687c2424 +size 1064 diff --git a/checkpoint-400/special_tokens_map.json b/checkpoint-400/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-400/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} 
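Per the adapter_config.json above, training used LoRA with rank `r = 32`, `lora_alpha` 64, and dropout 0.1, applied only to each layer's fused `query_key_value` projection. As a rough sanity check — assuming ChatGLM3-6B's published shapes (28 transformer layers; `query_key_value` maps hidden size 4096 to 4608, i.e. 4096 query dims plus 512 grouped key/value dims), which are not recorded in this diff — the implied parameter count lines up with the 31,204,248-byte adapter_model.safetensors:

```python
# Back-of-envelope size check for the LoRA adapter in adapter_config.json.
# Assumed ChatGLM3-6B shapes (not stated in this diff): 28 layers,
# query_key_value projecting 4096 -> 4608 (4096 query + 512 grouped KV).
num_layers = 28
in_features, out_features = 4096, 4608
r = 32  # "r": 32 in adapter_config.json

# Each adapted weight gains lora_A (r x in_features) and lora_B (out_features x r).
params_per_layer = r * in_features + out_features * r
total_params = num_layers * params_per_layer
print(total_params)      # 7798784 trainable parameters
print(total_params * 4)  # 31195136 bytes at fp32, close to the 31,204,248-byte file
```

The small remainder (about 9 KB) would be the safetensors header, which suggests the adapter weights were stored in fp32.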
diff --git a/checkpoint-400/tokenization_chatglm.py b/checkpoint-400/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-400/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + 
else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "<bos>": self.tokenizer.bos_id, + "<eos>": self.tokenizer.eos_id, + "<pad>": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, +
encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "<unk>" + + @property + def pad_token(self) -> str: + return "<unk>" + + @property + def pad_token_id(self): + return self.get_command("<pad>") + + @property + def eos_token(self) -> str: + return "</s>" + + @property + def eos_token_id(self): + return self.get_command("<eos>") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved.
+ """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. 
+ + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. + padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta).
+ return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. + if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-400/tokenizer.model b/checkpoint-400/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-400/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-400/tokenizer_config.json b/checkpoint-400/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ 
b/checkpoint-400/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "</s>", + "model_max_length": 1000000000000000019884624838656, + "pad_token": "<unk>", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "<unk>" +} diff --git a/checkpoint-400/trainer_state.json b/checkpoint-400/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..d562bb47ed6cb4818d41e7cc4617188528eec38f --- /dev/null +++ b/checkpoint-400/trainer_state.json @@ -0,0 +1,501 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 9.090909090909092, + "eval_steps": 500, + "global_step": 400, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30
+ }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + "learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 
3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + "learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, 
+ "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + "learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + 
"learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + "learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 2.0358076130328576e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-400/training_args.bin b/checkpoint-400/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-400/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 
diff --git a/checkpoint-500/README.md b/checkpoint-500/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-500/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. 
+ +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). + +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-500/adapter_config.json b/checkpoint-500/adapter_config.json new file mode 100644 index 
0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-500/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-500/adapter_model.safetensors b/checkpoint-500/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..65b996d3f9bb37b9ab0e7419794f664a68b4cff3 --- /dev/null +++ b/checkpoint-500/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38c3f10badc4eca46e5391c0d76f664d91f2c5c96c52f05823964db29a131cc8 +size 31204248 diff --git a/checkpoint-500/optimizer.pt b/checkpoint-500/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..be94e82e99ebe5b1bd70640b320aaf46362cf277 --- /dev/null +++ b/checkpoint-500/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7175631f9ef75d430eb08d54cd0bc47f9376aec47461deaaff8946e1fce80f12 +size 62437882 diff --git a/checkpoint-500/rng_state.pth b/checkpoint-500/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..704ad9716a617526b738a271bd3896f9a0d51cb5 --- /dev/null +++ b/checkpoint-500/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8270413ee4d1e27028e4c5fdc6f6f13e233f42ffd7aa9694385f4015d85edcf0 +size 14244 diff --git a/checkpoint-500/scheduler.pt 
b/checkpoint-500/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..75e62ef52da8a1d95d04c12edb9de06ab1fe7772 --- /dev/null +++ b/checkpoint-500/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2343a969c11ec4ec216a210d648b7b960d566bef5539e79c2765af49aa68625b +size 1064 diff --git a/checkpoint-500/special_tokens_map.json b/checkpoint-500/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-500/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/checkpoint-500/tokenization_chatglm.py b/checkpoint-500/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-500/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = 
["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. 
""" + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "": self.tokenizer.bos_id, + "": self.tokenizer.eos_id, + "": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "" + + @property + def pad_token(self) -> str: + return "" + + @property + def pad_token_id(self): + return self.get_command("") + + @property + def eos_token(self) -> str: + return "" + + @property + def eos_token_id(self): + return self.get_command("") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + 
vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. + """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in 
item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. + + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. 
+ padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). + return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. 
+ if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-500/tokenizer.model b/checkpoint-500/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-500/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-500/tokenizer_config.json b/checkpoint-500/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/checkpoint-500/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 1000000000000000019884624838656, + 
"pad_token": "", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "" +} diff --git a/checkpoint-500/trainer_state.json b/checkpoint-500/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..8bc3cbe0d4b39a50542f5e3f999aa29cb17c2020 --- /dev/null +++ b/checkpoint-500/trainer_state.json @@ -0,0 +1,621 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 11.363636363636363, + "eval_steps": 500, + "global_step": 500, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + 
"learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + 
"learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, + "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + 
"learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + "learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + 
"learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + "learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + 
"learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 2.5448112270753792e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-500/training_args.bin b/checkpoint-500/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-500/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-600/README.md b/checkpoint-600/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-600/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- 
**Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. + +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
+ +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-600/adapter_config.json b/checkpoint-600/adapter_config.json new file mode 100644 index 0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-600/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-600/adapter_model.safetensors b/checkpoint-600/adapter_model.safetensors new file mode 100644 index 
0000000000000000000000000000000000000000..8693c6108b7e2b17168eba8728bc677e3462f80c --- /dev/null +++ b/checkpoint-600/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:596b8d994195af594c60807528156dc655f2338206cea1219d8f9f17699a39c3 +size 31204248 diff --git a/checkpoint-600/optimizer.pt b/checkpoint-600/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..ccdb293468110b27a67449fea8d5b7d9580d6516 --- /dev/null +++ b/checkpoint-600/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7dc673b7b24101a181f20152f8feeba5e0436d16b2fad104b913270a4bd9d6b9 +size 62437882 diff --git a/checkpoint-600/rng_state.pth b/checkpoint-600/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..7f5da71bcba3027ac48bce6222b0505d26b2e6c4 --- /dev/null +++ b/checkpoint-600/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31bcb95a31206dc96fa1e1bee6e4245055f6da2ff17b25d0d135dcb8b39f69a8 +size 14244 diff --git a/checkpoint-600/scheduler.pt b/checkpoint-600/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..ea69f62db04a8bd99c58a3f1efab6b8e610856c6 --- /dev/null +++ b/checkpoint-600/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20bfc76a2ebb40ddf377497ba1bdf8baec41588c6508711337f299764de5cf80 +size 1064 diff --git a/checkpoint-600/special_tokens_map.json b/checkpoint-600/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-600/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} 
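The `tokenization_chatglm.py` added next (a copy of the checkpoint-500 file) first splits role markers such as `<|user|>` out of the prompt with a regex, keeping each marker as a single token, and only hands the remaining text spans to SentencePiece. A minimal standalone sketch of that splitting step — the function name `split_role_tokens` is illustrative and not part of the repository:

```python
import re

# Role markers recognized by the ChatGLM3 tokenizer.
ROLE_TOKENS = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"]
ROLE_RE = "|".join(re.escape(t) for t in ROLE_TOKENS)

def split_role_tokens(s: str) -> list:
    """Return text segments and role markers in order of appearance."""
    parts, last = [], 0
    for m in re.finditer(ROLE_RE, s):
        if last < m.start():
            parts.append(s[last:m.start()])  # plain text -> SentencePiece pieces
        parts.append(m.group(0))             # role marker kept whole
        last = m.end()
    if last < len(s):
        parts.append(s[last:])
    return parts

print(split_role_tokens("<|user|>hello<|assistant|>"))
# ['<|user|>', 'hello', '<|assistant|>']
```

In the real file this happens inside `SPTokenizer.tokenize` when `encode_special_tokens=True`; each plain-text segment is then passed through `sp_model.EncodeAsPieces`.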
diff --git a/checkpoint-600/tokenization_chatglm.py b/checkpoint-600/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-600/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + 
else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "<bos>": self.tokenizer.bos_id, + "<eos>": self.tokenizer.eos_id, + "<pad>": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + 
encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "<unk>" + + @property + def pad_token(self) -> str: + return "<unk>" + + @property + def pad_token_id(self): + return self.get_command("<pad>") + + @property + def eos_token(self) -> str: + return "</s>" + + @property + def eos_token_id(self): + return self.get_command("<eos>") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. 
+ """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. 
+ + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. + padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta).
+ return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. + if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-600/tokenizer.model b/checkpoint-600/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-600/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-600/tokenizer_config.json b/checkpoint-600/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ 
b/checkpoint-600/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "</s>", + "model_max_length": 1000000000000000019884624838656, + "pad_token": "<unk>", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "<unk>" +} diff --git a/checkpoint-600/trainer_state.json b/checkpoint-600/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..a0f146d013be443c5899780301bd74a540333b1f --- /dev/null +++ b/checkpoint-600/trainer_state.json @@ -0,0 +1,741 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 13.636363636363637, + "eval_steps": 500, + "global_step": 600, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step":
30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + "learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 
3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + "learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, 
+ "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + "learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + 
"learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + "learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + 
"learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + "learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + { + 
"epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + "learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + "learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 3.0530793988521984e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-600/training_args.bin b/checkpoint-600/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-600/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-700/README.md b/checkpoint-700/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-700/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. 
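+This section of the card template is left blank upstream; the following is a hedged sketch (not from the original authors) of how a PEFT LoRA adapter such as this one is typically applied to ChatGLM3-6B. The repo id and adapter path below are illustrative assumptions — substitute your own local copies:

```python
def load_coolshell_model(base_model_path="THUDM/chatglm3-6b",
                         adapter_path="path/to/coolshell-llm-adapter"):
    """Load the ChatGLM3-6B base model and attach the LoRA adapter.

    Both default paths are illustrative placeholders, not verified repo ids.
    Heavy dependencies are imported lazily so this sketch can be read and
    imported without `transformers`/`peft` installed.
    """
    from transformers import AutoModel, AutoTokenizer  # requires `transformers`
    from peft import PeftModel  # requires `peft`

    # ChatGLM3 ships custom tokenizer/model code, hence trust_remote_code=True.
    tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
    model = AutoModel.from_pretrained(base_model_path, trust_remote_code=True,
                                      device_map="auto")
    # Attach the fine-tuned QLoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(model, adapter_path)
    return tokenizer, model.eval()
```

Assuming the weights are available locally, usage would mirror the demo in this README: `tokenizer, model = load_coolshell_model()`, then `response, history = model.chat(tokenizer, "酷壳网有哪些内容", history=[])`.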
+ +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). + +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-700/adapter_config.json b/checkpoint-700/adapter_config.json new file mode 100644 index 
0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-700/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-700/adapter_model.safetensors b/checkpoint-700/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..4767f1582b9ee8b60a97766601e351cf7cea6d6e --- /dev/null +++ b/checkpoint-700/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05656560aa7a8f94e8f2bf807a12c24e19dacd8d31a96f306a1433fe40d79ef5 +size 31204248 diff --git a/checkpoint-700/optimizer.pt b/checkpoint-700/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..f756d30f846da793d15ac9e15efb1991a6c7a539 --- /dev/null +++ b/checkpoint-700/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be7ec8ec24f42c78b7996ef8b2f525221fc280463d56dd2b8462fe452c9a8d9a +size 62437882 diff --git a/checkpoint-700/rng_state.pth b/checkpoint-700/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..0a4733dc4ba242b62110eecf221e730d1e0ed237 --- /dev/null +++ b/checkpoint-700/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d30c0642c2a797dc3cd110d33cefef65ae7eed01705207c0ce1a5f0e3e64fff +size 14244 diff --git a/checkpoint-700/scheduler.pt 
b/checkpoint-700/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..c78d3b24500c24f1c1dc29b079138a5943b138d1 --- /dev/null +++ b/checkpoint-700/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:da3efcc637a9bb9c201f18cb1b9c77a473adaa751a7764564db5488980f490fc +size 1064 diff --git a/checkpoint-700/special_tokens_map.json b/checkpoint-700/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-700/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/checkpoint-700/tokenization_chatglm.py b/checkpoint-700/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-700/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = 
["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. 
""" + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "": self.tokenizer.bos_id, + "": self.tokenizer.eos_id, + "": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "" + + @property + def pad_token(self) -> str: + return "" + + @property + def pad_token_id(self): + return self.get_command("") + + @property + def eos_token(self) -> str: + return "" + + @property + def eos_token_id(self): + return self.get_command("") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + 
vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the names of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. + """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in
item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. + + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. 
+ padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). + return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. 
+ if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-700/tokenizer.model b/checkpoint-700/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-700/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-700/tokenizer_config.json b/checkpoint-700/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/checkpoint-700/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 1000000000000000019884624838656, + 
"pad_token": "", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "" +} diff --git a/checkpoint-700/trainer_state.json b/checkpoint-700/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..e5a0a5376acc30e969b68dd118c4cdb5fefdf06d --- /dev/null +++ b/checkpoint-700/trainer_state.json @@ -0,0 +1,861 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 15.909090909090908, + "eval_steps": 500, + "global_step": 700, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + 
"learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + 
"learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, + "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + 
"learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + "learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + 
"learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + "learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + 
"learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + { + "epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + "learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + 
"learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + }, + { + "epoch": 13.75, + "learning_rate": 0.000843565534959769, + "loss": 0.9619, + "step": 605 + }, + { + "epoch": 13.86, + "learning_rate": 0.0008294778073673762, + "loss": 0.9123, + "step": 610 + }, + { + "epoch": 13.98, + "learning_rate": 0.0008154248516341547, + "loss": 0.9959, + "step": 615 + }, + { + "epoch": 14.09, + "learning_rate": 0.0008014095333542549, + "loss": 0.7503, + "step": 620 + }, + { + "epoch": 14.2, + "learning_rate": 0.0007874347104470233, + "loss": 0.7357, + "step": 625 + }, + { + "epoch": 14.32, + "learning_rate": 0.0007735032325742355, + "loss": 0.7477, + "step": 630 + }, + { + "epoch": 14.43, + "learning_rate": 0.0007596179405590076, + "loss": 0.8088, + "step": 635 + }, + { + "epoch": 14.55, + "learning_rate": 0.0007457816658065133, + "loss": 0.7652, + "step": 640 + }, + { + "epoch": 14.66, + "learning_rate": 0.0007319972297266214, + "loss": 0.7847, + "step": 645 + }, + { + "epoch": 14.77, + "learning_rate": 0.0007182674431585703, + "loss": 0.7984, + "step": 650 + }, + { + "epoch": 14.89, + "learning_rate": 0.0007045951057978, + "loss": 0.8732, + "step": 655 + }, + { + "epoch": 15.0, + "learning_rate": 0.0006909830056250527, + "loss": 0.8258, + "step": 660 + }, + { + "epoch": 15.11, + "learning_rate": 0.0006774339183378663, + "loss": 0.6311, + "step": 665 + }, + { + 
"epoch": 15.23, + "learning_rate": 0.0006639506067845697, + "loss": 0.6543, + "step": 670 + }, + { + "epoch": 15.34, + "learning_rate": 0.0006505358204009018, + "loss": 0.6421, + "step": 675 + }, + { + "epoch": 15.45, + "learning_rate": 0.0006371922946493591, + "loss": 0.6937, + "step": 680 + }, + { + "epoch": 15.57, + "learning_rate": 0.0006239227504614003, + "loss": 0.6887, + "step": 685 + }, + { + "epoch": 15.68, + "learning_rate": 0.0006107298936826086, + "loss": 0.7097, + "step": 690 + }, + { + "epoch": 15.8, + "learning_rate": 0.0005976164145209322, + "loss": 0.6778, + "step": 695 + }, + { + "epoch": 15.91, + "learning_rate": 0.0005845849869981136, + "loss": 0.7124, + "step": 700 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 3.56150844862464e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-700/training_args.bin b/checkpoint-700/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-700/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-800/README.md b/checkpoint-800/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-800/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More 
Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. + +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
+ +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-800/adapter_config.json b/checkpoint-800/adapter_config.json new file mode 100644 index 0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-800/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-800/adapter_model.safetensors b/checkpoint-800/adapter_model.safetensors new file mode 100644 index 
0000000000000000000000000000000000000000..089800c9e88069633551e1a3bf5c91e95ff64428 --- /dev/null +++ b/checkpoint-800/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54c6dd53d5326506ece69a9bd54a9eadb264d1e8d1c423195a0e633060f3be5e +size 31204248 diff --git a/checkpoint-800/optimizer.pt b/checkpoint-800/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..7d8828678b2e988b4532aebc0d1478274424e4fa --- /dev/null +++ b/checkpoint-800/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e689221fb9dfdd617b5e0d8a6f5ca10763fb1da5e6ecec7d8290ccfb4a1ee339 +size 62437882 diff --git a/checkpoint-800/rng_state.pth b/checkpoint-800/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..57e84c10055685a7d471cfe72bbb1dbcaf00992a --- /dev/null +++ b/checkpoint-800/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4f906f32e0cd50ee989e776447fc7f92a946657e5aecdb6935a8e559c806dbb +size 14244 diff --git a/checkpoint-800/scheduler.pt b/checkpoint-800/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..837e7bfac73063ef71e537cd0c9e889e0dc86f98 --- /dev/null +++ b/checkpoint-800/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d2864f4c40324dd6b958b5301dad812f50da0efe5f184e1572889a4e267b23b +size 1064 diff --git a/checkpoint-800/special_tokens_map.json b/checkpoint-800/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-800/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} 
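The learning-rate trace recorded in `checkpoint-700/trainer_state.json` above is consistent with a plain cosine decay (no warmup) from the peak rate of 0.002 over the run's 1100 `max_steps`, both of which appear in that file. A minimal sketch, assuming the standard cosine formula used by the Hugging Face `cosine` scheduler:

```python
import math

def cosine_lr(step: int, peak_lr: float = 0.002, max_steps: int = 1100) -> float:
    """Cosine decay without warmup: peak_lr at step 0, zero at max_steps."""
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * step / max_steps))

# Spot-check against values logged in checkpoint-700/trainer_state.json:
# step 110 (epoch 2.5) logs 0.0019510565162951536,
# step 550 (epoch 12.5, halfway) logs exactly 0.001.
lr_110 = cosine_lr(110)
lr_550 = cosine_lr(550)
```

At the halfway point (step 550 of 1100) the cosine term vanishes and the rate passes through exactly half the peak, which matches the logged `"learning_rate": 0.001` at epoch 12.5.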
diff --git a/checkpoint-800/tokenization_chatglm.py b/checkpoint-800/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-800/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + 
else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "<bos>": self.tokenizer.bos_id, + "<eos>": self.tokenizer.eos_id, + "<pad>": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + 
encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "<unk>" + + @property + def pad_token(self) -> str: + return "<unk>" + + @property + def pad_token_id(self): + return self.get_command("<pad>") + + @property + def eos_token(self) -> str: + return "</s>" + + @property + def eos_token_id(self): + return self.get_command("<eos>") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the name of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. 
+ """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. 
+ + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. + padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST: Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta).
+ return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. + if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-800/tokenizer.model b/checkpoint-800/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-800/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-800/tokenizer_config.json b/checkpoint-800/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ 
b/checkpoint-800/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "</s>", + "model_max_length": 1000000000000000019884624838656, + "pad_token": "<unk>", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "<unk>" +} diff --git a/checkpoint-800/trainer_state.json b/checkpoint-800/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..ad79a35be68ecf98916bc1c4cb598985aa6d081e --- /dev/null +++ b/checkpoint-800/trainer_state.json @@ -0,0 +1,981 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 18.181818181818183, + "eval_steps": 500, + "global_step": 800, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step":
30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + "learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 
3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + "learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, 
+ "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + "learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + 
"learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + "learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + 
"learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + "learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + { + 
"epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + "learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + "learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + }, + { + "epoch": 13.75, + "learning_rate": 0.000843565534959769, + "loss": 0.9619, + "step": 605 + }, + { + "epoch": 13.86, + "learning_rate": 0.0008294778073673762, + "loss": 0.9123, + "step": 610 + }, + { + "epoch": 13.98, + "learning_rate": 0.0008154248516341547, + "loss": 0.9959, + "step": 615 + }, + { + "epoch": 14.09, + "learning_rate": 0.0008014095333542549, + "loss": 0.7503, + "step": 620 + }, + { + "epoch": 14.2, + "learning_rate": 0.0007874347104470233, + "loss": 0.7357, + "step": 625 + }, + { + "epoch": 14.32, + "learning_rate": 0.0007735032325742355, + "loss": 0.7477, + "step": 630 + }, + { + 
"epoch": 14.43, + "learning_rate": 0.0007596179405590076, + "loss": 0.8088, + "step": 635 + }, + { + "epoch": 14.55, + "learning_rate": 0.0007457816658065133, + "loss": 0.7652, + "step": 640 + }, + { + "epoch": 14.66, + "learning_rate": 0.0007319972297266214, + "loss": 0.7847, + "step": 645 + }, + { + "epoch": 14.77, + "learning_rate": 0.0007182674431585703, + "loss": 0.7984, + "step": 650 + }, + { + "epoch": 14.89, + "learning_rate": 0.0007045951057978, + "loss": 0.8732, + "step": 655 + }, + { + "epoch": 15.0, + "learning_rate": 0.0006909830056250527, + "loss": 0.8258, + "step": 660 + }, + { + "epoch": 15.11, + "learning_rate": 0.0006774339183378663, + "loss": 0.6311, + "step": 665 + }, + { + "epoch": 15.23, + "learning_rate": 0.0006639506067845697, + "loss": 0.6543, + "step": 670 + }, + { + "epoch": 15.34, + "learning_rate": 0.0006505358204009018, + "loss": 0.6421, + "step": 675 + }, + { + "epoch": 15.45, + "learning_rate": 0.0006371922946493591, + "loss": 0.6937, + "step": 680 + }, + { + "epoch": 15.57, + "learning_rate": 0.0006239227504614003, + "loss": 0.6887, + "step": 685 + }, + { + "epoch": 15.68, + "learning_rate": 0.0006107298936826086, + "loss": 0.7097, + "step": 690 + }, + { + "epoch": 15.8, + "learning_rate": 0.0005976164145209322, + "loss": 0.6778, + "step": 695 + }, + { + "epoch": 15.91, + "learning_rate": 0.0005845849869981136, + "loss": 0.7124, + "step": 700 + }, + { + "epoch": 16.02, + "learning_rate": 0.000571638268404419, + "loss": 0.7053, + "step": 705 + }, + { + "epoch": 16.14, + "learning_rate": 0.0005587788987567784, + "loss": 0.5863, + "step": 710 + }, + { + "epoch": 16.25, + "learning_rate": 0.0005460095002604533, + "loss": 0.5588, + "step": 715 + }, + { + "epoch": 16.36, + "learning_rate": 0.0005333326767743263, + "loss": 0.5363, + "step": 720 + }, + { + "epoch": 16.48, + "learning_rate": 0.0005207510132799435, + "loss": 0.6137, + "step": 725 + }, + { + "epoch": 16.59, + "learning_rate": 0.0005082670753543961, + "loss": 0.5606, + "step": 
730 + }, + { + "epoch": 16.7, + "learning_rate": 0.0004958834086471683, + "loss": 0.629, + "step": 735 + }, + { + "epoch": 16.82, + "learning_rate": 0.00048360253836103817, + "loss": 0.5754, + "step": 740 + }, + { + "epoch": 16.93, + "learning_rate": 0.0004714269687371581, + "loss": 0.6239, + "step": 745 + }, + { + "epoch": 17.05, + "learning_rate": 0.0004593591825444028, + "loss": 0.5807, + "step": 750 + }, + { + "epoch": 17.16, + "learning_rate": 0.0004474016405730973, + "loss": 0.465, + "step": 755 + }, + { + "epoch": 17.27, + "learning_rate": 0.00043555678113323104, + "loss": 0.4871, + "step": 760 + }, + { + "epoch": 17.39, + "learning_rate": 0.00042382701955724725, + "loss": 0.4623, + "step": 765 + }, + { + "epoch": 17.5, + "learning_rate": 0.00041221474770752696, + "loss": 0.5059, + "step": 770 + }, + { + "epoch": 17.61, + "learning_rate": 0.00040072233348865304, + "loss": 0.5021, + "step": 775 + }, + { + "epoch": 17.73, + "learning_rate": 0.0003893521203645618, + "loss": 0.5138, + "step": 780 + }, + { + "epoch": 17.84, + "learning_rate": 0.00037810642688067796, + "loss": 0.5212, + "step": 785 + }, + { + "epoch": 17.95, + "learning_rate": 0.00036698754619112975, + "loss": 0.5611, + "step": 790 + }, + { + "epoch": 18.07, + "learning_rate": 0.00035599774559114475, + "loss": 0.4956, + "step": 795 + }, + { + "epoch": 18.18, + "learning_rate": 0.000345139266054715, + "loss": 0.4243, + "step": 800 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 4.074154800139469e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-800/training_args.bin b/checkpoint-800/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-800/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/checkpoint-900/README.md b/checkpoint-900/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a4640bc0bab946c21e07f36639d991fc5d9f684 --- /dev/null +++ b/checkpoint-900/README.md @@ -0,0 +1,204 @@ +--- +library_name: peft +base_model: /root/chatglm3-6b +--- + +# Model Card for Model ID + + + + + +## Model Details + +### Model Description + + + + + +- **Developed by:** [More Information Needed] +- **Funded by [optional]:** [More Information Needed] +- **Shared by [optional]:** [More Information Needed] +- **Model type:** [More Information Needed] +- **Language(s) (NLP):** [More Information Needed] +- **License:** [More Information Needed] +- **Finetuned from model [optional]:** [More Information Needed] + +### Model Sources [optional] + + + +- **Repository:** [More Information Needed] +- **Paper [optional]:** [More Information Needed] +- **Demo [optional]:** [More Information Needed] + +## Uses + + + +### Direct Use + + + +[More Information Needed] + +### Downstream Use [optional] + + + +[More Information Needed] + +### Out-of-Scope Use + + + +[More Information Needed] + +## Bias, Risks, and Limitations + + + +[More Information Needed] + +### Recommendations + + + +Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. + +## How to Get Started with the Model + +Use the code below to get started with the model. 
+ +[More Information Needed] + +## Training Details + +### Training Data + + + +[More Information Needed] + +### Training Procedure + + + +#### Preprocessing [optional] + +[More Information Needed] + + +#### Training Hyperparameters + +- **Training regime:** [More Information Needed] + +#### Speeds, Sizes, Times [optional] + + + +[More Information Needed] + +## Evaluation + + + +### Testing Data, Factors & Metrics + +#### Testing Data + + + +[More Information Needed] + +#### Factors + + + +[More Information Needed] + +#### Metrics + + + +[More Information Needed] + +### Results + +[More Information Needed] + +#### Summary + + + +## Model Examination [optional] + + + +[More Information Needed] + +## Environmental Impact + + + +Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). + +- **Hardware Type:** [More Information Needed] +- **Hours used:** [More Information Needed] +- **Cloud Provider:** [More Information Needed] +- **Compute Region:** [More Information Needed] +- **Carbon Emitted:** [More Information Needed] + +## Technical Specifications [optional] + +### Model Architecture and Objective + +[More Information Needed] + +### Compute Infrastructure + +[More Information Needed] + +#### Hardware + +[More Information Needed] + +#### Software + +[More Information Needed] + +## Citation [optional] + + + +**BibTeX:** + +[More Information Needed] + +**APA:** + +[More Information Needed] + +## Glossary [optional] + + + +[More Information Needed] + +## More Information [optional] + +[More Information Needed] + +## Model Card Authors [optional] + +[More Information Needed] + +## Model Card Contact + +[More Information Needed] + + +### Framework versions + +- PEFT 0.7.1 \ No newline at end of file diff --git a/checkpoint-900/adapter_config.json b/checkpoint-900/adapter_config.json new file mode 100644 index 
0000000000000000000000000000000000000000..e437b533e257864a38c04ed024f90cab5eebcd8d --- /dev/null +++ b/checkpoint-900/adapter_config.json @@ -0,0 +1,25 @@ +{ + "alpha_pattern": {}, + "auto_mapping": null, + "base_model_name_or_path": "/root/chatglm3-6b", + "bias": "none", + "fan_in_fan_out": false, + "inference_mode": true, + "init_lora_weights": true, + "layers_pattern": null, + "layers_to_transform": null, + "loftq_config": {}, + "lora_alpha": 64.0, + "lora_dropout": 0.1, + "megatron_config": null, + "megatron_core": "megatron.core", + "modules_to_save": null, + "peft_type": "LORA", + "r": 32, + "rank_pattern": {}, + "revision": null, + "target_modules": [ + "query_key_value" + ], + "task_type": "CAUSAL_LM" +} \ No newline at end of file diff --git a/checkpoint-900/adapter_model.safetensors b/checkpoint-900/adapter_model.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..33e5a787630d1ae5a1bb574f3af127e2d85d5dbe --- /dev/null +++ b/checkpoint-900/adapter_model.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d76fc7df89c1372ba69b5ea09d4556926ab8898e1ec1309a212a9c093f148066 +size 31204248 diff --git a/checkpoint-900/optimizer.pt b/checkpoint-900/optimizer.pt new file mode 100644 index 0000000000000000000000000000000000000000..d5323b5d71980670a680724e495ebdc170e0383e --- /dev/null +++ b/checkpoint-900/optimizer.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19221d4f40b031762d83f752f44f9806a0eff4370bb1d088a3e90d5130de401d +size 62437882 diff --git a/checkpoint-900/rng_state.pth b/checkpoint-900/rng_state.pth new file mode 100644 index 0000000000000000000000000000000000000000..f0cfdc7b516bfceed6ea16757f9be14b76258fca --- /dev/null +++ b/checkpoint-900/rng_state.pth @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b52d148e0bdcfae756cc5d1bed2f078908b8ca30fa4898562ed9c81aba81cf6c +size 14244 diff --git a/checkpoint-900/scheduler.pt 
b/checkpoint-900/scheduler.pt new file mode 100644 index 0000000000000000000000000000000000000000..4a8fe22ce3f23364bc2f1add52716d45d01ec762 --- /dev/null +++ b/checkpoint-900/scheduler.pt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91ad790c8ce464cf3a5f4d7efae5aed7c0aca618d8bf7bd220d6b628d8fbd816 +size 1064 diff --git a/checkpoint-900/special_tokens_map.json b/checkpoint-900/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/checkpoint-900/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/checkpoint-900/tokenization_chatglm.py b/checkpoint-900/tokenization_chatglm.py new file mode 100644 index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/checkpoint-900/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = 
["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. 
""" + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "": self.tokenizer.bos_id, + "": self.tokenizer.eos_id, + "": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "" + + @property + def pad_token(self) -> str: + return "" + + @property + def pad_token_id(self): + return self.get_command("") + + @property + def eos_token(self) -> str: + return "" + + @property + def eos_token_id(self): + return self.get_command("") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + 
vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the named of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. + """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in 
item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. + + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. 
+ padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). + return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. 
+ if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/checkpoint-900/tokenizer.model b/checkpoint-900/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/checkpoint-900/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/checkpoint-900/tokenizer_config.json b/checkpoint-900/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/checkpoint-900/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 1000000000000000019884624838656, + 
"pad_token": "", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "" +} diff --git a/checkpoint-900/trainer_state.json b/checkpoint-900/trainer_state.json new file mode 100644 index 0000000000000000000000000000000000000000..0b463bff7fe978a33b0ad47b8bf8fc159d62873d --- /dev/null +++ b/checkpoint-900/trainer_state.json @@ -0,0 +1,1101 @@ +{ + "best_metric": null, + "best_model_checkpoint": null, + "epoch": 20.454545454545453, + "eval_steps": 500, + "global_step": 900, + "is_hyper_param_search": false, + "is_local_process_zero": true, + "is_world_process_zero": true, + "log_history": [ + { + "epoch": 0.11, + "learning_rate": 0.001999898043009433, + "loss": 4.5094, + "step": 5 + }, + { + "epoch": 0.23, + "learning_rate": 0.0019995921928281893, + "loss": 3.8047, + "step": 10 + }, + { + "epoch": 0.34, + "learning_rate": 0.001999082511823396, + "loss": 3.8813, + "step": 15 + }, + { + "epoch": 0.45, + "learning_rate": 0.0019983691039261358, + "loss": 3.7188, + "step": 20 + }, + { + "epoch": 0.57, + "learning_rate": 0.0019974521146102534, + "loss": 3.6695, + "step": 25 + }, + { + "epoch": 0.68, + "learning_rate": 0.001996331730862691, + "loss": 3.7078, + "step": 30 + }, + { + "epoch": 0.8, + "learning_rate": 0.0019950081811453595, + "loss": 3.6844, + "step": 35 + }, + { + "epoch": 0.91, + "learning_rate": 0.0019934817353485504, + "loss": 3.6961, + "step": 40 + }, + { + "epoch": 1.02, + "learning_rate": 0.0019917527047359027, + "loss": 3.5758, + "step": 45 + }, + { + "epoch": 1.14, + "learning_rate": 0.001989821441880933, + "loss": 3.4102, + "step": 50 + }, + { + "epoch": 1.25, + "learning_rate": 0.0019876883405951376, + "loss": 3.3984, + "step": 55 + }, + { + "epoch": 1.36, + "learning_rate": 0.001985353835847693, + "loss": 3.3602, + "step": 60 + }, + { + "epoch": 1.48, + "learning_rate": 0.0019828184036767556, + "loss": 3.4461, + "step": 65 + }, + { + "epoch": 1.59, + 
"learning_rate": 0.0019800825610923932, + "loss": 3.3461, + "step": 70 + }, + { + "epoch": 1.7, + "learning_rate": 0.0019771468659711597, + "loss": 3.4172, + "step": 75 + }, + { + "epoch": 1.82, + "learning_rate": 0.0019740119169423336, + "loss": 3.4359, + "step": 80 + }, + { + "epoch": 1.93, + "learning_rate": 0.0019706783532658523, + "loss": 3.5141, + "step": 85 + }, + { + "epoch": 2.05, + "learning_rate": 0.001967146854701957, + "loss": 3.2242, + "step": 90 + }, + { + "epoch": 2.16, + "learning_rate": 0.0019634181413725788, + "loss": 3.0227, + "step": 95 + }, + { + "epoch": 2.27, + "learning_rate": 0.0019594929736144974, + "loss": 2.8984, + "step": 100 + }, + { + "epoch": 2.39, + "learning_rate": 0.001955372151824297, + "loss": 3.0781, + "step": 105 + }, + { + "epoch": 2.5, + "learning_rate": 0.0019510565162951536, + "loss": 3.1203, + "step": 110 + }, + { + "epoch": 2.61, + "learning_rate": 0.00194654694704549, + "loss": 3.1828, + "step": 115 + }, + { + "epoch": 2.73, + "learning_rate": 0.0019418443636395248, + "loss": 3.0531, + "step": 120 + }, + { + "epoch": 2.84, + "learning_rate": 0.001936949724999762, + "loss": 3.1523, + "step": 125 + }, + { + "epoch": 2.95, + "learning_rate": 0.0019318640292114524, + "loss": 3.1156, + "step": 130 + }, + { + "epoch": 3.07, + "learning_rate": 0.0019265883133190713, + "loss": 2.7844, + "step": 135 + }, + { + "epoch": 3.18, + "learning_rate": 0.0019211236531148502, + "loss": 2.6711, + "step": 140 + }, + { + "epoch": 3.3, + "learning_rate": 0.0019154711629194062, + "loss": 2.6609, + "step": 145 + }, + { + "epoch": 3.41, + "learning_rate": 0.0019096319953545184, + "loss": 2.7531, + "step": 150 + }, + { + "epoch": 3.52, + "learning_rate": 0.0019036073411080917, + "loss": 2.7977, + "step": 155 + }, + { + "epoch": 3.64, + "learning_rate": 0.0018973984286913585, + "loss": 2.7914, + "step": 160 + }, + { + "epoch": 3.75, + "learning_rate": 0.0018910065241883678, + "loss": 2.8188, + "step": 165 + }, + { + "epoch": 3.86, + 
"learning_rate": 0.0018844329309978143, + "loss": 2.8945, + "step": 170 + }, + { + "epoch": 3.98, + "learning_rate": 0.0018776789895672556, + "loss": 2.8883, + "step": 175 + }, + { + "epoch": 4.09, + "learning_rate": 0.0018707460771197773, + "loss": 2.4617, + "step": 180 + }, + { + "epoch": 4.2, + "learning_rate": 0.001863635607373157, + "loss": 2.4633, + "step": 185 + }, + { + "epoch": 4.32, + "learning_rate": 0.001856349030251589, + "loss": 2.5094, + "step": 190 + }, + { + "epoch": 4.43, + "learning_rate": 0.0018488878315900226, + "loss": 2.432, + "step": 195 + }, + { + "epoch": 4.55, + "learning_rate": 0.0018412535328311812, + "loss": 2.5648, + "step": 200 + }, + { + "epoch": 4.66, + "learning_rate": 0.0018334476907153176, + "loss": 2.4836, + "step": 205 + }, + { + "epoch": 4.77, + "learning_rate": 0.001825471896962774, + "loss": 2.6617, + "step": 210 + }, + { + "epoch": 4.89, + "learning_rate": 0.0018173277779494068, + "loss": 2.6734, + "step": 215 + }, + { + "epoch": 5.0, + "learning_rate": 0.0018090169943749475, + "loss": 2.6742, + "step": 220 + }, + { + "epoch": 5.11, + "learning_rate": 0.0018005412409243604, + "loss": 2.1379, + "step": 225 + }, + { + "epoch": 5.23, + "learning_rate": 0.0017919022459222751, + "loss": 2.1508, + "step": 230 + }, + { + "epoch": 5.34, + "learning_rate": 0.0017831017709805555, + "loss": 2.2582, + "step": 235 + }, + { + "epoch": 5.45, + "learning_rate": 0.0017741416106390826, + "loss": 2.2367, + "step": 240 + }, + { + "epoch": 5.57, + "learning_rate": 0.0017650235919998232, + "loss": 2.325, + "step": 245 + }, + { + "epoch": 5.68, + "learning_rate": 0.0017557495743542584, + "loss": 2.2703, + "step": 250 + }, + { + "epoch": 5.8, + "learning_rate": 0.0017463214488042471, + "loss": 2.3703, + "step": 255 + }, + { + "epoch": 5.91, + "learning_rate": 0.001736741137876405, + "loss": 2.4648, + "step": 260 + }, + { + "epoch": 6.02, + "learning_rate": 0.0017270105951300739, + "loss": 2.2734, + "step": 265 + }, + { + "epoch": 6.14, + 
"learning_rate": 0.0017171318047589637, + "loss": 1.9898, + "step": 270 + }, + { + "epoch": 6.25, + "learning_rate": 0.0017071067811865474, + "loss": 1.9816, + "step": 275 + }, + { + "epoch": 6.36, + "learning_rate": 0.0016969375686552938, + "loss": 1.9648, + "step": 280 + }, + { + "epoch": 6.48, + "learning_rate": 0.0016866262408098134, + "loss": 2.1672, + "step": 285 + }, + { + "epoch": 6.59, + "learning_rate": 0.0016761749002740195, + "loss": 2.0074, + "step": 290 + }, + { + "epoch": 6.7, + "learning_rate": 0.0016655856782223683, + "loss": 2.1598, + "step": 295 + }, + { + "epoch": 6.82, + "learning_rate": 0.0016548607339452852, + "loss": 2.0996, + "step": 300 + }, + { + "epoch": 6.93, + "learning_rate": 0.0016440022544088554, + "loss": 2.1434, + "step": 305 + }, + { + "epoch": 7.05, + "learning_rate": 0.0016330124538088703, + "loss": 2.0699, + "step": 310 + }, + { + "epoch": 7.16, + "learning_rate": 0.0016218935731193223, + "loss": 1.7312, + "step": 315 + }, + { + "epoch": 7.27, + "learning_rate": 0.0016106478796354383, + "loss": 1.7799, + "step": 320 + }, + { + "epoch": 7.39, + "learning_rate": 0.0015992776665113468, + "loss": 1.7008, + "step": 325 + }, + { + "epoch": 7.5, + "learning_rate": 0.0015877852522924731, + "loss": 1.8969, + "step": 330 + }, + { + "epoch": 7.61, + "learning_rate": 0.0015761729804427528, + "loss": 1.8156, + "step": 335 + }, + { + "epoch": 7.73, + "learning_rate": 0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + 
"learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + "learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + 
"learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + { + "epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + "learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + 
"learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + }, + { + "epoch": 13.75, + "learning_rate": 0.000843565534959769, + "loss": 0.9619, + "step": 605 + }, + { + "epoch": 13.86, + "learning_rate": 0.0008294778073673762, + "loss": 0.9123, + "step": 610 + }, + { + "epoch": 13.98, + "learning_rate": 0.0008154248516341547, + "loss": 0.9959, + "step": 615 + }, + { + "epoch": 14.09, + "learning_rate": 0.0008014095333542549, + "loss": 0.7503, + "step": 620 + }, + { + "epoch": 14.2, + "learning_rate": 0.0007874347104470233, + "loss": 0.7357, + "step": 625 + }, + { + "epoch": 14.32, + "learning_rate": 0.0007735032325742355, + "loss": 0.7477, + "step": 630 + }, + { + "epoch": 14.43, + "learning_rate": 0.0007596179405590076, + "loss": 0.8088, + "step": 635 + }, + { + "epoch": 14.55, + "learning_rate": 0.0007457816658065133, + "loss": 0.7652, + "step": 640 + }, + { + "epoch": 14.66, + "learning_rate": 0.0007319972297266214, + "loss": 0.7847, + "step": 645 + }, + { + "epoch": 14.77, + "learning_rate": 0.0007182674431585703, + "loss": 0.7984, + "step": 650 + }, + { + "epoch": 14.89, + "learning_rate": 0.0007045951057978, + "loss": 0.8732, + "step": 655 + }, + { + "epoch": 15.0, + "learning_rate": 0.0006909830056250527, + "loss": 0.8258, + "step": 660 + }, + { + "epoch": 15.11, + "learning_rate": 0.0006774339183378663, + "loss": 0.6311, + "step": 665 + }, + { + 
"epoch": 15.23, + "learning_rate": 0.0006639506067845697, + "loss": 0.6543, + "step": 670 + }, + { + "epoch": 15.34, + "learning_rate": 0.0006505358204009018, + "loss": 0.6421, + "step": 675 + }, + { + "epoch": 15.45, + "learning_rate": 0.0006371922946493591, + "loss": 0.6937, + "step": 680 + }, + { + "epoch": 15.57, + "learning_rate": 0.0006239227504614003, + "loss": 0.6887, + "step": 685 + }, + { + "epoch": 15.68, + "learning_rate": 0.0006107298936826086, + "loss": 0.7097, + "step": 690 + }, + { + "epoch": 15.8, + "learning_rate": 0.0005976164145209322, + "loss": 0.6778, + "step": 695 + }, + { + "epoch": 15.91, + "learning_rate": 0.0005845849869981136, + "loss": 0.7124, + "step": 700 + }, + { + "epoch": 16.02, + "learning_rate": 0.000571638268404419, + "loss": 0.7053, + "step": 705 + }, + { + "epoch": 16.14, + "learning_rate": 0.0005587788987567784, + "loss": 0.5863, + "step": 710 + }, + { + "epoch": 16.25, + "learning_rate": 0.0005460095002604533, + "loss": 0.5588, + "step": 715 + }, + { + "epoch": 16.36, + "learning_rate": 0.0005333326767743263, + "loss": 0.5363, + "step": 720 + }, + { + "epoch": 16.48, + "learning_rate": 0.0005207510132799435, + "loss": 0.6137, + "step": 725 + }, + { + "epoch": 16.59, + "learning_rate": 0.0005082670753543961, + "loss": 0.5606, + "step": 730 + }, + { + "epoch": 16.7, + "learning_rate": 0.0004958834086471683, + "loss": 0.629, + "step": 735 + }, + { + "epoch": 16.82, + "learning_rate": 0.00048360253836103817, + "loss": 0.5754, + "step": 740 + }, + { + "epoch": 16.93, + "learning_rate": 0.0004714269687371581, + "loss": 0.6239, + "step": 745 + }, + { + "epoch": 17.05, + "learning_rate": 0.0004593591825444028, + "loss": 0.5807, + "step": 750 + }, + { + "epoch": 17.16, + "learning_rate": 0.0004474016405730973, + "loss": 0.465, + "step": 755 + }, + { + "epoch": 17.27, + "learning_rate": 0.00043555678113323104, + "loss": 0.4871, + "step": 760 + }, + { + "epoch": 17.39, + "learning_rate": 0.00042382701955724725, + "loss": 0.4623, + 
"step": 765 + }, + { + "epoch": 17.5, + "learning_rate": 0.00041221474770752696, + "loss": 0.5059, + "step": 770 + }, + { + "epoch": 17.61, + "learning_rate": 0.00040072233348865304, + "loss": 0.5021, + "step": 775 + }, + { + "epoch": 17.73, + "learning_rate": 0.0003893521203645618, + "loss": 0.5138, + "step": 780 + }, + { + "epoch": 17.84, + "learning_rate": 0.00037810642688067796, + "loss": 0.5212, + "step": 785 + }, + { + "epoch": 17.95, + "learning_rate": 0.00036698754619112975, + "loss": 0.5611, + "step": 790 + }, + { + "epoch": 18.07, + "learning_rate": 0.00035599774559114475, + "loss": 0.4956, + "step": 795 + }, + { + "epoch": 18.18, + "learning_rate": 0.000345139266054715, + "loss": 0.4243, + "step": 800 + }, + { + "epoch": 18.3, + "learning_rate": 0.0003344143217776319, + "loss": 0.4391, + "step": 805 + }, + { + "epoch": 18.41, + "learning_rate": 0.00032382509972598086, + "loss": 0.4627, + "step": 810 + }, + { + "epoch": 18.52, + "learning_rate": 0.0003133737591901864, + "loss": 0.4208, + "step": 815 + }, + { + "epoch": 18.64, + "learning_rate": 0.0003030624313447067, + "loss": 0.45, + "step": 820 + }, + { + "epoch": 18.75, + "learning_rate": 0.00029289321881345256, + "loss": 0.44, + "step": 825 + }, + { + "epoch": 18.86, + "learning_rate": 0.0002828681952410366, + "loss": 0.4451, + "step": 830 + }, + { + "epoch": 18.98, + "learning_rate": 0.0002729894048699265, + "loss": 0.4494, + "step": 835 + }, + { + "epoch": 19.09, + "learning_rate": 0.00026325886212359495, + "loss": 0.3839, + "step": 840 + }, + { + "epoch": 19.2, + "learning_rate": 0.0002536785511957531, + "loss": 0.3728, + "step": 845 + }, + { + "epoch": 19.32, + "learning_rate": 0.00024425042564574185, + "loss": 0.4126, + "step": 850 + }, + { + "epoch": 19.43, + "learning_rate": 0.00023497640800017682, + "loss": 0.4183, + "step": 855 + }, + { + "epoch": 19.55, + "learning_rate": 0.0002258583893609175, + "loss": 0.3778, + "step": 860 + }, + { + "epoch": 19.66, + "learning_rate": 
0.00021689822901944456, + "loss": 0.3758, + "step": 865 + }, + { + "epoch": 19.77, + "learning_rate": 0.000208097754077725, + "loss": 0.4034, + "step": 870 + }, + { + "epoch": 19.89, + "learning_rate": 0.0001994587590756397, + "loss": 0.4085, + "step": 875 + }, + { + "epoch": 20.0, + "learning_rate": 0.00019098300562505265, + "loss": 0.3673, + "step": 880 + }, + { + "epoch": 20.11, + "learning_rate": 0.0001826722220505931, + "loss": 0.363, + "step": 885 + }, + { + "epoch": 20.23, + "learning_rate": 0.000174528103037226, + "loss": 0.3707, + "step": 890 + }, + { + "epoch": 20.34, + "learning_rate": 0.00016655230928468257, + "loss": 0.369, + "step": 895 + }, + { + "epoch": 20.45, + "learning_rate": 0.00015874646716881869, + "loss": 0.3528, + "step": 900 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 4.587283785641165e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/checkpoint-900/training_args.bin b/checkpoint-900/training_args.bin new file mode 100644 index 0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/checkpoint-900/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920 diff --git a/special_tokens_map.json b/special_tokens_map.json new file mode 100644 index 0000000000000000000000000000000000000000..dd02cd16ef3e1cfed3ce0f8cd09b983412317a48 --- /dev/null +++ b/special_tokens_map.json @@ -0,0 +1,18 @@ +{ + "additional_special_tokens": [ + { + "content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } + ] +} diff --git a/tokenization_chatglm.py b/tokenization_chatglm.py new file mode 100644 
index 0000000000000000000000000000000000000000..862e8f9a75bc874741cababc3b352cbbfe3611ad --- /dev/null +++ b/tokenization_chatglm.py @@ -0,0 +1,300 @@ +import json +import os +import re +from typing import List, Optional, Union, Dict +from sentencepiece import SentencePieceProcessor +from transformers import PreTrainedTokenizer +from transformers.utils import logging, PaddingStrategy +from transformers.tokenization_utils_base import EncodedInput, BatchEncoding + + +class SPTokenizer: + def __init__(self, model_path: str): + # reload tokenizer + assert os.path.isfile(model_path), model_path + self.sp_model = SentencePieceProcessor(model_file=model_path) + + # BOS / EOS token IDs + self.n_words: int = self.sp_model.vocab_size() + self.bos_id: int = self.sp_model.bos_id() + self.eos_id: int = self.sp_model.eos_id() + self.pad_id: int = self.sp_model.unk_id() + assert self.sp_model.vocab_size() == self.sp_model.get_piece_size() + + role_special_tokens = ["<|system|>", "<|user|>", "<|assistant|>", "<|observation|>"] + special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"] + role_special_tokens + self.special_tokens = {} + self.index_special_tokens = {} + for token in special_tokens: + self.special_tokens[token] = self.n_words + self.index_special_tokens[self.n_words] = token + self.n_words += 1 + self.role_special_token_expression = "|".join([re.escape(token) for token in role_special_tokens]) + + def tokenize(self, s: str, encode_special_tokens=False): + if encode_special_tokens: + last_index = 0 + t = [] + for match in re.finditer(self.role_special_token_expression, s): + if last_index < match.start(): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:match.start()])) + t.append(s[match.start():match.end()]) + last_index = match.end() + if last_index < len(s): + t.extend(self.sp_model.EncodeAsPieces(s[last_index:])) + return t + else: + return self.sp_model.EncodeAsPieces(s) + + def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]: + 
assert type(s) is str + t = self.sp_model.encode(s) + if bos: + t = [self.bos_id] + t + if eos: + t = t + [self.eos_id] + return t + + def decode(self, t: List[int]) -> str: + text, buffer = "", [] + for token in t: + if token in self.index_special_tokens: + if buffer: + text += self.sp_model.decode(buffer) + buffer = [] + text += self.index_special_tokens[token] + else: + buffer.append(token) + if buffer: + text += self.sp_model.decode(buffer) + return text + + def decode_tokens(self, tokens: List[str]) -> str: + text = self.sp_model.DecodePieces(tokens) + return text + + def convert_token_to_id(self, token): + """ Converts a token (str) in an id using the vocab. """ + if token in self.special_tokens: + return self.special_tokens[token] + return self.sp_model.PieceToId(token) + + def convert_id_to_token(self, index): + """Converts an index (integer) in a token (str) using the vocab.""" + if index in self.index_special_tokens: + return self.index_special_tokens[index] + if index in [self.eos_id, self.bos_id, self.pad_id] or index < 0 or index > self.sp_model.vocab_size(): + return "" + return self.sp_model.IdToPiece(index) + + +class ChatGLMTokenizer(PreTrainedTokenizer): + vocab_files_names = {"vocab_file": "tokenizer.model"} + + model_input_names = ["input_ids", "attention_mask", "position_ids"] + + def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, encode_special_tokens=False, + **kwargs): + self.name = "GLMTokenizer" + + self.vocab_file = vocab_file + self.tokenizer = SPTokenizer(vocab_file) + self.special_tokens = { + "<bos>": self.tokenizer.bos_id, + "<eos>": self.tokenizer.eos_id, + "<pad>": self.tokenizer.pad_id + } + self.encode_special_tokens = encode_special_tokens + super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, + encode_special_tokens=encode_special_tokens, + **kwargs) + + def get_command(self, token): + if token in self.special_tokens: + return self.special_tokens[token] + 
assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}" + return self.tokenizer.special_tokens[token] + + @property + def unk_token(self) -> str: + return "<unk>" + + @property + def pad_token(self) -> str: + return "<unk>" + + @property + def pad_token_id(self): + return self.get_command("<pad>") + + @property + def eos_token(self) -> str: + return "</s>" + + @property + def eos_token_id(self): + return self.get_command("<eos>") + + @property + def vocab_size(self): + return self.tokenizer.n_words + + def get_vocab(self): + """ Returns vocab as a dict """ + vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)} + vocab.update(self.added_tokens_encoder) + return vocab + + def _tokenize(self, text, **kwargs): + return self.tokenizer.tokenize(text, encode_special_tokens=self.encode_special_tokens) + + def _convert_token_to_id(self, token): + """ Converts a token (str) to an id using the vocab. """ + return self.tokenizer.convert_token_to_id(token) + + def _convert_id_to_token(self, index): + """Converts an index (integer) to a token (str) using the vocab.""" + return self.tokenizer.convert_id_to_token(index) + + def convert_tokens_to_string(self, tokens: List[str]) -> str: + return self.tokenizer.decode_tokens(tokens) + + def save_vocabulary(self, save_directory, filename_prefix=None): + """ + Save the vocabulary and special tokens file to a directory. + + Args: + save_directory (`str`): + The directory in which to save the vocabulary. + filename_prefix (`str`, *optional*): + An optional prefix to add to the name of the saved files. + + Returns: + `Tuple(str)`: Paths to the files saved. 
+ """ + if os.path.isdir(save_directory): + vocab_file = os.path.join( + save_directory, self.vocab_files_names["vocab_file"] + ) + else: + vocab_file = save_directory + + with open(self.vocab_file, 'rb') as fin: + proto_str = fin.read() + + with open(vocab_file, "wb") as writer: + writer.write(proto_str) + + return (vocab_file,) + + def get_prefix_tokens(self): + prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")] + return prefix_tokens + + def build_single_message(self, role, metadata, message): + assert role in ["system", "user", "assistant", "observation"], role + role_tokens = [self.get_command(f"<|{role}|>")] + self.tokenizer.encode(f"{metadata}\n") + message_tokens = self.tokenizer.encode(message) + tokens = role_tokens + message_tokens + return tokens + + def build_chat_input(self, query, history=None, role="user"): + if history is None: + history = [] + input_ids = [] + for item in history: + content = item["content"] + if item["role"] == "system" and "tools" in item: + content = content + "\n" + json.dumps(item["tools"], indent=4, ensure_ascii=False) + input_ids.extend(self.build_single_message(item["role"], item.get("metadata", ""), content)) + input_ids.extend(self.build_single_message(role, "", query)) + input_ids.extend([self.get_command("<|assistant|>")]) + return self.batch_encode_plus([input_ids], return_tensors="pt", is_split_into_words=True) + + def build_inputs_with_special_tokens( + self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None + ) -> List[int]: + """ + Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and + adding special tokens. A BERT sequence has the following format: + + - single sequence: `[CLS] X [SEP]` + - pair of sequences: `[CLS] A [SEP] B [SEP]` + + Args: + token_ids_0 (`List[int]`): + List of IDs to which the special tokens will be added. + token_ids_1 (`List[int]`, *optional*): + Optional second list of IDs for sequence pairs. 
+ + Returns: + `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. + """ + prefix_tokens = self.get_prefix_tokens() + token_ids_0 = prefix_tokens + token_ids_0 + if token_ids_1 is not None: + token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")] + return token_ids_0 + + def _pad( + self, + encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding], + max_length: Optional[int] = None, + padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, + pad_to_multiple_of: Optional[int] = None, + return_attention_mask: Optional[bool] = None, + ) -> dict: + """ + Pad encoded inputs (on left/right and up to predefined length or max length in the batch) + + Args: + encoded_inputs: + Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). + max_length: maximum length of the returned list and optionally padding length (see below). + Will truncate by taking into account the special tokens. + padding_strategy: PaddingStrategy to use for padding. + + - PaddingStrategy.LONGEST: Pad to the longest sequence in the batch + - PaddingStrategy.MAX_LENGTH: Pad to the max length (default) + - PaddingStrategy.DO_NOT_PAD: Do not pad + The tokenizer padding sides are defined in self.padding_side: + + - 'left': pads on the left of the sequences + - 'right': pads on the right of the sequences + pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value. + This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability + `>= 7.5` (Volta). 
+ return_attention_mask: + (optional) Set to False to avoid returning attention mask (default: set to model specifics) + """ + # Load from model defaults + assert self.padding_side == "left" + + required_input = encoded_inputs[self.model_input_names[0]] + seq_length = len(required_input) + + if padding_strategy == PaddingStrategy.LONGEST: + max_length = len(required_input) + + if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0): + max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of + + needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length + + # Initialize attention mask if not present. + if "attention_mask" not in encoded_inputs: + encoded_inputs["attention_mask"] = [1] * seq_length + + if "position_ids" not in encoded_inputs: + encoded_inputs["position_ids"] = list(range(seq_length)) + + if needs_to_be_padded: + difference = max_length - len(required_input) + + if "attention_mask" in encoded_inputs: + encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"] + if "position_ids" in encoded_inputs: + encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"] + encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input + + return encoded_inputs diff --git a/tokenizer.model b/tokenizer.model new file mode 100644 index 0000000000000000000000000000000000000000..8a8007697b7cc3d3868dcffbbebf8c1f2bd690ba --- /dev/null +++ b/tokenizer.model @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7dc4c393423b76e4373e5157ddc34803a0189ba96b21ddbb40269d31468a6f2 +size 1018370 diff --git a/tokenizer_config.json b/tokenizer_config.json new file mode 100644 index 0000000000000000000000000000000000000000..f0e543dcb5c184576e9e88e2c48b586290d71953 --- /dev/null +++ b/tokenizer_config.json @@ -0,0 +1,41 @@ +{ + "added_tokens_decoder": { + "64795": { + 
"content": "<|user|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + }, + "64797": { + "content": "<|observation|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false, + "special": true + } + }, + "additional_special_tokens": [ + "<|user|>", + "<|observation|>" + ], + "auto_map": { + "AutoTokenizer": [ + "tokenization_chatglm.ChatGLMTokenizer", + null + ] + }, + "clean_up_tokenization_spaces": false, + "do_lower_case": false, + "encode_special_tokens": false, + "eos_token": "", + "model_max_length": 1000000000000000019884624838656, + "pad_token": "", + "padding_side": "right", + "remove_space": false, + "split_special_tokens": false, + "tokenizer_class": "ChatGLMTokenizer", + "unk_token": "" +} diff --git a/train_results.json b/train_results.json new file mode 100644 index 0000000000000000000000000000000000000000..74b97645f2a1a9e849ff2b89db981875744c3502 --- /dev/null +++ b/train_results.json @@ -0,0 +1,7 @@ +{ + "epoch": 25.0, + "train_loss": 1.3768115234375, + "train_runtime": 24197.7873, + "train_samples_per_second": 0.724, + "train_steps_per_second": 0.045 +} \ No newline at end of file diff --git a/trainer_log.jsonl b/trainer_log.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..c734149ed5ba50bbdbf83a6124e2323bba686064 --- /dev/null +++ b/trainer_log.jsonl @@ -0,0 +1,221 @@ +{"current_steps": 5, "total_steps": 1100, "loss": 4.5094, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001999898043009433, "epoch": 0.11, "percentage": 0.45, "elapsed_time": "0:01:55", "remaining_time": "7:00:44"} +{"current_steps": 10, "total_steps": 1100, "loss": 3.8047, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019995921928281893, "epoch": 0.23, "percentage": 0.91, "elapsed_time": "0:03:44", "remaining_time": "6:47:10"} +{"current_steps": 15, "total_steps": 1100, "loss": 3.8813, "eval_loss": null, 
"predict_loss": null, "reward": null, "learning_rate": 0.001999082511823396, "epoch": 0.34, "percentage": 1.36, "elapsed_time": "0:05:40", "remaining_time": "6:50:33"} +{"current_steps": 20, "total_steps": 1100, "loss": 3.7188, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019983691039261358, "epoch": 0.45, "percentage": 1.82, "elapsed_time": "0:07:31", "remaining_time": "6:46:02"} +{"current_steps": 25, "total_steps": 1100, "loss": 3.6695, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019974521146102534, "epoch": 0.57, "percentage": 2.27, "elapsed_time": "0:09:25", "remaining_time": "6:45:20"} +{"current_steps": 30, "total_steps": 1100, "loss": 3.7078, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001996331730862691, "epoch": 0.68, "percentage": 2.73, "elapsed_time": "0:11:17", "remaining_time": "6:42:42"} +{"current_steps": 35, "total_steps": 1100, "loss": 3.6844, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019950081811453595, "epoch": 0.8, "percentage": 3.18, "elapsed_time": "0:13:10", "remaining_time": "6:41:04"} +{"current_steps": 40, "total_steps": 1100, "loss": 3.6961, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019934817353485504, "epoch": 0.91, "percentage": 3.64, "elapsed_time": "0:14:57", "remaining_time": "6:36:22"} +{"current_steps": 45, "total_steps": 1100, "loss": 3.5758, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019917527047359027, "epoch": 1.02, "percentage": 4.09, "elapsed_time": "0:16:32", "remaining_time": "6:27:37"} +{"current_steps": 50, "total_steps": 1100, "loss": 3.4102, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001989821441880933, "epoch": 1.14, "percentage": 4.55, "elapsed_time": "0:18:22", "remaining_time": "6:25:43"} +{"current_steps": 55, "total_steps": 1100, "loss": 3.3984, "eval_loss": null, "predict_loss": 
null, "reward": null, "learning_rate": 0.0019876883405951376, "epoch": 1.25, "percentage": 5.0, "elapsed_time": "0:20:11", "remaining_time": "6:23:38"} +{"current_steps": 60, "total_steps": 1100, "loss": 3.3602, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001985353835847693, "epoch": 1.36, "percentage": 5.45, "elapsed_time": "0:22:06", "remaining_time": "6:23:10"} +{"current_steps": 65, "total_steps": 1100, "loss": 3.4461, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019828184036767556, "epoch": 1.48, "percentage": 5.91, "elapsed_time": "0:24:01", "remaining_time": "6:22:40"} +{"current_steps": 70, "total_steps": 1100, "loss": 3.3461, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019800825610923932, "epoch": 1.59, "percentage": 6.36, "elapsed_time": "0:25:50", "remaining_time": "6:20:18"} +{"current_steps": 75, "total_steps": 1100, "loss": 3.4172, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019771468659711597, "epoch": 1.7, "percentage": 6.82, "elapsed_time": "0:27:47", "remaining_time": "6:19:45"} +{"current_steps": 80, "total_steps": 1100, "loss": 3.4359, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019740119169423336, "epoch": 1.82, "percentage": 7.27, "elapsed_time": "0:29:33", "remaining_time": "6:16:46"} +{"current_steps": 85, "total_steps": 1100, "loss": 3.5141, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019706783532658523, "epoch": 1.93, "percentage": 7.73, "elapsed_time": "0:31:24", "remaining_time": "6:15:07"} +{"current_steps": 90, "total_steps": 1100, "loss": 3.2242, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001967146854701957, "epoch": 2.05, "percentage": 8.18, "elapsed_time": "0:33:04", "remaining_time": "6:11:10"} +{"current_steps": 95, "total_steps": 1100, "loss": 3.0227, "eval_loss": null, "predict_loss": null, "reward": 
null, "learning_rate": 0.0019634181413725788, "epoch": 2.16, "percentage": 8.64, "elapsed_time": "0:34:58", "remaining_time": "6:09:57"} +{"current_steps": 100, "total_steps": 1100, "loss": 2.8984, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019594929736144974, "epoch": 2.27, "percentage": 9.09, "elapsed_time": "0:36:44", "remaining_time": "6:07:22"} +{"current_steps": 105, "total_steps": 1100, "loss": 3.0781, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001955372151824297, "epoch": 2.39, "percentage": 9.55, "elapsed_time": "0:38:32", "remaining_time": "6:05:12"} +{"current_steps": 110, "total_steps": 1100, "loss": 3.1203, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019510565162951536, "epoch": 2.5, "percentage": 10.0, "elapsed_time": "0:40:29", "remaining_time": "6:04:26"} +{"current_steps": 115, "total_steps": 1100, "loss": 3.1828, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00194654694704549, "epoch": 2.61, "percentage": 10.45, "elapsed_time": "0:42:24", "remaining_time": "6:03:14"} +{"current_steps": 120, "total_steps": 1100, "loss": 3.0531, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019418443636395248, "epoch": 2.73, "percentage": 10.91, "elapsed_time": "0:44:08", "remaining_time": "6:00:28"} +{"current_steps": 125, "total_steps": 1100, "loss": 3.1523, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001936949724999762, "epoch": 2.84, "percentage": 11.36, "elapsed_time": "0:45:59", "remaining_time": "5:58:41"} +{"current_steps": 130, "total_steps": 1100, "loss": 3.1156, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019318640292114524, "epoch": 2.95, "percentage": 11.82, "elapsed_time": "0:47:56", "remaining_time": "5:57:44"} +{"current_steps": 135, "total_steps": 1100, "loss": 2.7844, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0019265883133190713, "epoch": 3.07, "percentage": 12.27, "elapsed_time": "0:49:46", "remaining_time": "5:55:44"} +{"current_steps": 140, "total_steps": 1100, "loss": 2.6711, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019211236531148502, "epoch": 3.18, "percentage": 12.73, "elapsed_time": "0:51:42", "remaining_time": "5:54:35"} +{"current_steps": 145, "total_steps": 1100, "loss": 2.6609, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019154711629194062, "epoch": 3.3, "percentage": 13.18, "elapsed_time": "0:53:36", "remaining_time": "5:53:05"} +{"current_steps": 150, "total_steps": 1100, "loss": 2.7531, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019096319953545184, "epoch": 3.41, "percentage": 13.64, "elapsed_time": "0:55:35", "remaining_time": "5:52:06"} +{"current_steps": 155, "total_steps": 1100, "loss": 2.7977, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0019036073411080917, "epoch": 3.52, "percentage": 14.09, "elapsed_time": "0:57:22", "remaining_time": "5:49:48"} +{"current_steps": 160, "total_steps": 1100, "loss": 2.7914, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018973984286913585, "epoch": 3.64, "percentage": 14.55, "elapsed_time": "0:59:08", "remaining_time": "5:47:26"} +{"current_steps": 165, "total_steps": 1100, "loss": 2.8188, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018910065241883678, "epoch": 3.75, "percentage": 15.0, "elapsed_time": "1:00:55", "remaining_time": "5:45:14"} +{"current_steps": 170, "total_steps": 1100, "loss": 2.8945, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018844329309978143, "epoch": 3.86, "percentage": 15.45, "elapsed_time": "1:02:51", "remaining_time": "5:43:52"} +{"current_steps": 175, "total_steps": 1100, "loss": 2.8883, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0018776789895672556, "epoch": 3.98, "percentage": 15.91, "elapsed_time": "1:04:39", "remaining_time": "5:41:47"} +{"current_steps": 180, "total_steps": 1100, "loss": 2.4617, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018707460771197773, "epoch": 4.09, "percentage": 16.36, "elapsed_time": "1:06:26", "remaining_time": "5:39:37"} +{"current_steps": 185, "total_steps": 1100, "loss": 2.4633, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001863635607373157, "epoch": 4.2, "percentage": 16.82, "elapsed_time": "1:08:24", "remaining_time": "5:38:20"} +{"current_steps": 190, "total_steps": 1100, "loss": 2.5094, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001856349030251589, "epoch": 4.32, "percentage": 17.27, "elapsed_time": "1:10:17", "remaining_time": "5:36:37"} +{"current_steps": 195, "total_steps": 1100, "loss": 2.432, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018488878315900226, "epoch": 4.43, "percentage": 17.73, "elapsed_time": "1:11:56", "remaining_time": "5:33:54"} +{"current_steps": 200, "total_steps": 1100, "loss": 2.5648, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018412535328311812, "epoch": 4.55, "percentage": 18.18, "elapsed_time": "1:13:54", "remaining_time": "5:32:35"} +{"current_steps": 205, "total_steps": 1100, "loss": 2.4836, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018334476907153176, "epoch": 4.66, "percentage": 18.64, "elapsed_time": "1:15:41", "remaining_time": "5:30:29"} +{"current_steps": 210, "total_steps": 1100, "loss": 2.6617, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001825471896962774, "epoch": 4.77, "percentage": 19.09, "elapsed_time": "1:17:33", "remaining_time": "5:28:40"} +{"current_steps": 215, "total_steps": 1100, "loss": 2.6734, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0018173277779494068, "epoch": 4.89, "percentage": 19.55, "elapsed_time": "1:19:23", "remaining_time": "5:26:46"} +{"current_steps": 220, "total_steps": 1100, "loss": 2.6742, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018090169943749475, "epoch": 5.0, "percentage": 20.0, "elapsed_time": "1:21:09", "remaining_time": "5:24:39"} +{"current_steps": 225, "total_steps": 1100, "loss": 2.1379, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0018005412409243604, "epoch": 5.11, "percentage": 20.45, "elapsed_time": "1:22:59", "remaining_time": "5:22:44"} +{"current_steps": 230, "total_steps": 1100, "loss": 2.1508, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017919022459222751, "epoch": 5.23, "percentage": 20.91, "elapsed_time": "1:24:47", "remaining_time": "5:20:42"} +{"current_steps": 235, "total_steps": 1100, "loss": 2.2582, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017831017709805555, "epoch": 5.34, "percentage": 21.36, "elapsed_time": "1:26:28", "remaining_time": "5:18:18"} +{"current_steps": 240, "total_steps": 1100, "loss": 2.2367, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017741416106390826, "epoch": 5.45, "percentage": 21.82, "elapsed_time": "1:28:23", "remaining_time": "5:16:44"} +{"current_steps": 245, "total_steps": 1100, "loss": 2.325, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017650235919998232, "epoch": 5.57, "percentage": 22.27, "elapsed_time": "1:30:11", "remaining_time": "5:14:44"} +{"current_steps": 250, "total_steps": 1100, "loss": 2.2703, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017557495743542584, "epoch": 5.68, "percentage": 22.73, "elapsed_time": "1:31:55", "remaining_time": "5:12:31"} +{"current_steps": 255, "total_steps": 1100, "loss": 2.3703, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0017463214488042471, "epoch": 5.8, "percentage": 23.18, "elapsed_time": "1:33:39", "remaining_time": "5:10:21"} +{"current_steps": 260, "total_steps": 1100, "loss": 2.4648, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001736741137876405, "epoch": 5.91, "percentage": 23.64, "elapsed_time": "1:35:29", "remaining_time": "5:08:30"} +{"current_steps": 265, "total_steps": 1100, "loss": 2.2734, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017270105951300739, "epoch": 6.02, "percentage": 24.09, "elapsed_time": "1:37:14", "remaining_time": "5:06:23"} +{"current_steps": 270, "total_steps": 1100, "loss": 1.9898, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017171318047589637, "epoch": 6.14, "percentage": 24.55, "elapsed_time": "1:39:05", "remaining_time": "5:04:35"} +{"current_steps": 275, "total_steps": 1100, "loss": 1.9816, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0017071067811865474, "epoch": 6.25, "percentage": 25.0, "elapsed_time": "1:40:56", "remaining_time": "5:02:49"} +{"current_steps": 280, "total_steps": 1100, "loss": 1.9648, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016969375686552938, "epoch": 6.36, "percentage": 25.45, "elapsed_time": "1:42:50", "remaining_time": "5:01:11"} +{"current_steps": 285, "total_steps": 1100, "loss": 2.1672, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016866262408098134, "epoch": 6.48, "percentage": 25.91, "elapsed_time": "1:44:44", "remaining_time": "4:59:31"} +{"current_steps": 290, "total_steps": 1100, "loss": 2.0074, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016761749002740195, "epoch": 6.59, "percentage": 26.36, "elapsed_time": "1:46:32", "remaining_time": "4:57:35"} +{"current_steps": 295, "total_steps": 1100, "loss": 2.1598, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0016655856782223683, "epoch": 6.7, "percentage": 26.82, "elapsed_time": "1:48:23", "remaining_time": "4:55:45"} +{"current_steps": 300, "total_steps": 1100, "loss": 2.0996, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016548607339452852, "epoch": 6.82, "percentage": 27.27, "elapsed_time": "1:50:11", "remaining_time": "4:53:50"} +{"current_steps": 305, "total_steps": 1100, "loss": 2.1434, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016440022544088554, "epoch": 6.93, "percentage": 27.73, "elapsed_time": "1:51:57", "remaining_time": "4:51:49"} +{"current_steps": 310, "total_steps": 1100, "loss": 2.0699, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016330124538088703, "epoch": 7.05, "percentage": 28.18, "elapsed_time": "1:53:42", "remaining_time": "4:49:45"} +{"current_steps": 315, "total_steps": 1100, "loss": 1.7312, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016218935731193223, "epoch": 7.16, "percentage": 28.64, "elapsed_time": "1:55:40", "remaining_time": "4:48:15"} +{"current_steps": 320, "total_steps": 1100, "loss": 1.7799, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0016106478796354383, "epoch": 7.27, "percentage": 29.09, "elapsed_time": "1:57:30", "remaining_time": "4:46:24"} +{"current_steps": 325, "total_steps": 1100, "loss": 1.7008, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0015992776665113468, "epoch": 7.39, "percentage": 29.55, "elapsed_time": "1:59:18", "remaining_time": "4:44:30"} +{"current_steps": 330, "total_steps": 1100, "loss": 1.8969, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0015877852522924731, "epoch": 7.5, "percentage": 30.0, "elapsed_time": "2:01:09", "remaining_time": "4:42:41"} +{"current_steps": 335, "total_steps": 1100, "loss": 1.8156, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0015761729804427528, "epoch": 7.61, "percentage": 30.45, "elapsed_time": "2:03:02", "remaining_time": "4:40:57"} +{"current_steps": 340, "total_steps": 1100, "loss": 1.9336, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0015644432188667695, "epoch": 7.73, "percentage": 30.91, "elapsed_time": "2:04:51", "remaining_time": "4:39:05"} +{"current_steps": 345, "total_steps": 1100, "loss": 1.9918, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0015525983594269026, "epoch": 7.84, "percentage": 31.36, "elapsed_time": "2:06:44", "remaining_time": "4:37:22"} +{"current_steps": 350, "total_steps": 1100, "loss": 2.0055, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0015406408174555976, "epoch": 7.95, "percentage": 31.82, "elapsed_time": "2:08:26", "remaining_time": "4:35:13"} +{"current_steps": 355, "total_steps": 1100, "loss": 1.7168, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0015285730312628418, "epoch": 8.07, "percentage": 32.27, "elapsed_time": "2:10:23", "remaining_time": "4:33:37"} +{"current_steps": 360, "total_steps": 1100, "loss": 1.5531, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001516397461638962, "epoch": 8.18, "percentage": 32.73, "elapsed_time": "2:12:12", "remaining_time": "4:31:46"} +{"current_steps": 365, "total_steps": 1100, "loss": 1.5922, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001504116591352832, "epoch": 8.3, "percentage": 33.18, "elapsed_time": "2:14:02", "remaining_time": "4:29:55"} +{"current_steps": 370, "total_steps": 1100, "loss": 1.618, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001491732924645604, "epoch": 8.41, "percentage": 33.64, "elapsed_time": "2:15:58", "remaining_time": "4:28:17"} +{"current_steps": 375, "total_steps": 1100, "loss": 1.6738, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0014792489867200569, "epoch": 8.52, "percentage": 34.09, "elapsed_time": "2:17:44", "remaining_time": "4:26:18"} +{"current_steps": 380, "total_steps": 1100, "loss": 1.7461, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0014666673232256737, "epoch": 8.64, "percentage": 34.55, "elapsed_time": "2:19:33", "remaining_time": "4:24:24"} +{"current_steps": 385, "total_steps": 1100, "loss": 1.6746, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0014539904997395467, "epoch": 8.75, "percentage": 35.0, "elapsed_time": "2:21:12", "remaining_time": "4:22:14"} +{"current_steps": 390, "total_steps": 1100, "loss": 1.7711, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0014412211012432212, "epoch": 8.86, "percentage": 35.45, "elapsed_time": "2:23:05", "remaining_time": "4:20:29"} +{"current_steps": 395, "total_steps": 1100, "loss": 1.8387, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0014283617315955814, "epoch": 8.98, "percentage": 35.91, "elapsed_time": "2:24:55", "remaining_time": "4:18:39"} +{"current_steps": 400, "total_steps": 1100, "loss": 1.475, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0014154150130018866, "epoch": 9.09, "percentage": 36.36, "elapsed_time": "2:26:37", "remaining_time": "4:16:35"} +{"current_steps": 405, "total_steps": 1100, "loss": 1.4523, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001402383585479068, "epoch": 9.2, "percentage": 36.82, "elapsed_time": "2:28:24", "remaining_time": "4:14:40"} +{"current_steps": 410, "total_steps": 1100, "loss": 1.4812, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0013892701063173917, "epoch": 9.32, "percentage": 37.27, "elapsed_time": "2:30:10", "remaining_time": "4:12:43"} +{"current_steps": 415, "total_steps": 1100, "loss": 1.525, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.0013760772495385997, "epoch": 9.43, "percentage": 37.73, "elapsed_time": "2:31:59", "remaining_time": "4:10:52"} +{"current_steps": 420, "total_steps": 1100, "loss": 1.398, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001362807705350641, "epoch": 9.55, "percentage": 38.18, "elapsed_time": "2:33:42", "remaining_time": "4:08:50"} +{"current_steps": 425, "total_steps": 1100, "loss": 1.4477, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0013494641795990985, "epoch": 9.66, "percentage": 38.64, "elapsed_time": "2:35:30", "remaining_time": "4:06:59"} +{"current_steps": 430, "total_steps": 1100, "loss": 1.5801, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00133604939321543, "epoch": 9.77, "percentage": 39.09, "elapsed_time": "2:37:25", "remaining_time": "4:05:16"} +{"current_steps": 435, "total_steps": 1100, "loss": 1.6422, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0013225660816621341, "epoch": 9.89, "percentage": 39.55, "elapsed_time": "2:39:21", "remaining_time": "4:03:36"} +{"current_steps": 440, "total_steps": 1100, "loss": 1.5535, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0013090169943749475, "epoch": 10.0, "percentage": 40.0, "elapsed_time": "2:41:05", "remaining_time": "4:01:38"} +{"current_steps": 445, "total_steps": 1100, "loss": 1.2324, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0012954048942022001, "epoch": 10.11, "percentage": 40.45, "elapsed_time": "2:42:59", "remaining_time": "3:59:55"} +{"current_steps": 450, "total_steps": 1100, "loss": 1.2613, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0012817325568414298, "epoch": 10.23, "percentage": 40.91, "elapsed_time": "2:44:51", "remaining_time": "3:58:07"} +{"current_steps": 455, "total_steps": 1100, "loss": 1.3293, "eval_loss": null, "predict_loss": null, "reward": null, 
"learning_rate": 0.001268002770273379, "epoch": 10.34, "percentage": 41.36, "elapsed_time": "2:46:49", "remaining_time": "3:56:28"} +{"current_steps": 460, "total_steps": 1100, "loss": 1.2852, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0012542183341934872, "epoch": 10.45, "percentage": 41.82, "elapsed_time": "2:48:39", "remaining_time": "3:54:38"} +{"current_steps": 465, "total_steps": 1100, "loss": 1.3295, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0012403820594409924, "epoch": 10.57, "percentage": 42.27, "elapsed_time": "2:50:26", "remaining_time": "3:52:44"} +{"current_steps": 470, "total_steps": 1100, "loss": 1.3287, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0012264967674257645, "epoch": 10.68, "percentage": 42.73, "elapsed_time": "2:52:16", "remaining_time": "3:50:55"} +{"current_steps": 475, "total_steps": 1100, "loss": 1.3566, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0012125652895529767, "epoch": 10.8, "percentage": 43.18, "elapsed_time": "2:53:59", "remaining_time": "3:48:55"} +{"current_steps": 480, "total_steps": 1100, "loss": 1.4414, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0011985904666457455, "epoch": 10.91, "percentage": 43.64, "elapsed_time": "2:55:52", "remaining_time": "3:47:09"} +{"current_steps": 485, "total_steps": 1100, "loss": 1.3695, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0011845751483658454, "epoch": 11.02, "percentage": 44.09, "elapsed_time": "2:57:35", "remaining_time": "3:45:11"} +{"current_steps": 490, "total_steps": 1100, "loss": 1.1363, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0011705221926326238, "epoch": 11.14, "percentage": 44.55, "elapsed_time": "2:59:30", "remaining_time": "3:43:28"} +{"current_steps": 495, "total_steps": 1100, "loss": 1.1354, "eval_loss": null, "predict_loss": null, "reward": 
null, "learning_rate": 0.001156434465040231, "epoch": 11.25, "percentage": 45.0, "elapsed_time": "3:01:25", "remaining_time": "3:41:44"} +{"current_steps": 500, "total_steps": 1100, "loss": 1.0725, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0011423148382732854, "epoch": 11.36, "percentage": 45.45, "elapsed_time": "3:03:15", "remaining_time": "3:39:54"} +{"current_steps": 505, "total_steps": 1100, "loss": 1.1754, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001128166191521093, "epoch": 11.48, "percentage": 45.91, "elapsed_time": "3:05:01", "remaining_time": "3:37:59"} +{"current_steps": 510, "total_steps": 1100, "loss": 1.1848, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0011139914098905405, "epoch": 11.59, "percentage": 46.36, "elapsed_time": "3:06:53", "remaining_time": "3:36:12"} +{"current_steps": 515, "total_steps": 1100, "loss": 1.2354, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0010997933838177826, "epoch": 11.7, "percentage": 46.82, "elapsed_time": "3:08:46", "remaining_time": "3:34:26"} +{"current_steps": 520, "total_steps": 1100, "loss": 1.1984, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0010855750084788399, "epoch": 11.82, "percentage": 47.27, "elapsed_time": "3:10:33", "remaining_time": "3:32:32"} +{"current_steps": 525, "total_steps": 1100, "loss": 1.2666, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0010713391831992322, "epoch": 11.93, "percentage": 47.73, "elapsed_time": "3:12:22", "remaining_time": "3:30:41"} +{"current_steps": 530, "total_steps": 1100, "loss": 1.1408, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001057088810862768, "epoch": 12.05, "percentage": 48.18, "elapsed_time": "3:14:08", "remaining_time": "3:28:48"} +{"current_steps": 535, "total_steps": 1100, "loss": 0.9385, "eval_loss": null, "predict_loss": null, 
"reward": null, "learning_rate": 0.0010428267973196027, "epoch": 12.16, "percentage": 48.64, "elapsed_time": "3:15:51", "remaining_time": "3:26:50"} +{"current_steps": 540, "total_steps": 1100, "loss": 1.0158, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0010285560507936962, "epoch": 12.27, "percentage": 49.09, "elapsed_time": "3:17:44", "remaining_time": "3:25:04"} +{"current_steps": 545, "total_steps": 1100, "loss": 0.9936, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0010142794812897874, "epoch": 12.39, "percentage": 49.55, "elapsed_time": "3:19:31", "remaining_time": "3:23:11"} +{"current_steps": 550, "total_steps": 1100, "loss": 0.9891, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.001, "epoch": 12.5, "percentage": 50.0, "elapsed_time": "3:21:27", "remaining_time": "3:21:27"} +{"current_steps": 555, "total_steps": 1100, "loss": 1.0684, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000985720518710213, "epoch": 12.61, "percentage": 50.45, "elapsed_time": "3:23:17", "remaining_time": "3:19:38"} +{"current_steps": 560, "total_steps": 1100, "loss": 1.076, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0009714439492063038, "epoch": 12.73, "percentage": 50.91, "elapsed_time": "3:25:12", "remaining_time": "3:17:52"} +{"current_steps": 565, "total_steps": 1100, "loss": 1.0609, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0009571732026803976, "epoch": 12.84, "percentage": 51.36, "elapsed_time": "3:27:01", "remaining_time": "3:16:02"} +{"current_steps": 570, "total_steps": 1100, "loss": 1.1297, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000942911189137232, "epoch": 12.95, "percentage": 51.82, "elapsed_time": "3:28:54", "remaining_time": "3:14:14"} +{"current_steps": 575, "total_steps": 1100, "loss": 0.9342, "eval_loss": null, "predict_loss": null, "reward": 
null, "learning_rate": 0.0009286608168007677, "epoch": 13.07, "percentage": 52.27, "elapsed_time": "3:30:36", "remaining_time": "3:12:17"} +{"current_steps": 580, "total_steps": 1100, "loss": 0.8511, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0009144249915211606, "epoch": 13.18, "percentage": 52.73, "elapsed_time": "3:32:32", "remaining_time": "3:10:32"} +{"current_steps": 585, "total_steps": 1100, "loss": 0.8336, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0009002066161822172, "epoch": 13.3, "percentage": 53.18, "elapsed_time": "3:34:19", "remaining_time": "3:08:40"} +{"current_steps": 590, "total_steps": 1100, "loss": 0.8652, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0008860085901094594, "epoch": 13.41, "percentage": 53.64, "elapsed_time": "3:36:15", "remaining_time": "3:06:56"} +{"current_steps": 595, "total_steps": 1100, "loss": 0.9744, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0008718338084789072, "epoch": 13.52, "percentage": 54.09, "elapsed_time": "3:38:09", "remaining_time": "3:05:09"} +{"current_steps": 600, "total_steps": 1100, "loss": 0.9006, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000857685161726715, "epoch": 13.64, "percentage": 54.55, "elapsed_time": "3:39:49", "remaining_time": "3:03:11"} +{"current_steps": 605, "total_steps": 1100, "loss": 0.9619, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000843565534959769, "epoch": 13.75, "percentage": 55.0, "elapsed_time": "3:41:40", "remaining_time": "3:01:21"} +{"current_steps": 610, "total_steps": 1100, "loss": 0.9123, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0008294778073673762, "epoch": 13.86, "percentage": 55.45, "elapsed_time": "3:43:27", "remaining_time": "2:59:30"} +{"current_steps": 615, "total_steps": 1100, "loss": 0.9959, "eval_loss": null, "predict_loss": null, 
"reward": null, "learning_rate": 0.0008154248516341547, "epoch": 13.98, "percentage": 55.91, "elapsed_time": "3:45:17", "remaining_time": "2:57:40"} +{"current_steps": 620, "total_steps": 1100, "loss": 0.7503, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0008014095333542549, "epoch": 14.09, "percentage": 56.36, "elapsed_time": "3:47:05", "remaining_time": "2:55:48"} +{"current_steps": 625, "total_steps": 1100, "loss": 0.7357, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0007874347104470233, "epoch": 14.2, "percentage": 56.82, "elapsed_time": "3:48:56", "remaining_time": "2:53:59"} +{"current_steps": 630, "total_steps": 1100, "loss": 0.7477, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0007735032325742355, "epoch": 14.32, "percentage": 57.27, "elapsed_time": "3:50:49", "remaining_time": "2:52:12"} +{"current_steps": 635, "total_steps": 1100, "loss": 0.8088, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0007596179405590076, "epoch": 14.43, "percentage": 57.73, "elapsed_time": "3:52:43", "remaining_time": "2:50:24"} +{"current_steps": 640, "total_steps": 1100, "loss": 0.7652, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0007457816658065133, "epoch": 14.55, "percentage": 58.18, "elapsed_time": "3:54:30", "remaining_time": "2:48:33"} +{"current_steps": 645, "total_steps": 1100, "loss": 0.7847, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0007319972297266214, "epoch": 14.66, "percentage": 58.64, "elapsed_time": "3:56:19", "remaining_time": "2:46:42"} +{"current_steps": 650, "total_steps": 1100, "loss": 0.7984, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0007182674431585703, "epoch": 14.77, "percentage": 59.09, "elapsed_time": "3:58:05", "remaining_time": "2:44:49"} +{"current_steps": 655, "total_steps": 1100, "loss": 0.8732, "eval_loss": null, 
"predict_loss": null, "reward": null, "learning_rate": 0.0007045951057978, "epoch": 14.89, "percentage": 59.55, "elapsed_time": "4:00:00", "remaining_time": "2:43:03"} +{"current_steps": 660, "total_steps": 1100, "loss": 0.8258, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006909830056250527, "epoch": 15.0, "percentage": 60.0, "elapsed_time": "4:01:42", "remaining_time": "2:41:08"} +{"current_steps": 665, "total_steps": 1100, "loss": 0.6311, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006774339183378663, "epoch": 15.11, "percentage": 60.45, "elapsed_time": "4:03:36", "remaining_time": "2:39:21"} +{"current_steps": 670, "total_steps": 1100, "loss": 0.6543, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006639506067845697, "epoch": 15.23, "percentage": 60.91, "elapsed_time": "4:05:30", "remaining_time": "2:37:33"} +{"current_steps": 675, "total_steps": 1100, "loss": 0.6421, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006505358204009018, "epoch": 15.34, "percentage": 61.36, "elapsed_time": "4:07:13", "remaining_time": "2:35:39"} +{"current_steps": 680, "total_steps": 1100, "loss": 0.6937, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006371922946493591, "epoch": 15.45, "percentage": 61.82, "elapsed_time": "4:09:03", "remaining_time": "2:33:50"} +{"current_steps": 685, "total_steps": 1100, "loss": 0.6887, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006239227504614003, "epoch": 15.57, "percentage": 62.27, "elapsed_time": "4:10:59", "remaining_time": "2:32:03"} +{"current_steps": 690, "total_steps": 1100, "loss": 0.7097, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0006107298936826086, "epoch": 15.68, "percentage": 62.73, "elapsed_time": "4:12:47", "remaining_time": "2:30:12"} +{"current_steps": 695, "total_steps": 1100, "loss": 0.6778, "eval_loss": 
null, "predict_loss": null, "reward": null, "learning_rate": 0.0005976164145209322, "epoch": 15.8, "percentage": 63.18, "elapsed_time": "4:14:31", "remaining_time": "2:28:19"} +{"current_steps": 700, "total_steps": 1100, "loss": 0.7124, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0005845849869981136, "epoch": 15.91, "percentage": 63.64, "elapsed_time": "4:16:25", "remaining_time": "2:26:31"} +{"current_steps": 705, "total_steps": 1100, "loss": 0.7053, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000571638268404419, "epoch": 16.02, "percentage": 64.09, "elapsed_time": "4:18:14", "remaining_time": "2:24:41"} +{"current_steps": 710, "total_steps": 1100, "loss": 0.5863, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0005587788987567784, "epoch": 16.14, "percentage": 64.55, "elapsed_time": "4:20:11", "remaining_time": "2:22:55"} +{"current_steps": 715, "total_steps": 1100, "loss": 0.5588, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0005460095002604533, "epoch": 16.25, "percentage": 65.0, "elapsed_time": "4:22:02", "remaining_time": "2:21:05"} +{"current_steps": 720, "total_steps": 1100, "loss": 0.5363, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0005333326767743263, "epoch": 16.36, "percentage": 65.45, "elapsed_time": "4:23:49", "remaining_time": "2:19:14"} +{"current_steps": 725, "total_steps": 1100, "loss": 0.6137, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0005207510132799435, "epoch": 16.48, "percentage": 65.91, "elapsed_time": "4:25:47", "remaining_time": "2:17:28"} +{"current_steps": 730, "total_steps": 1100, "loss": 0.5606, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0005082670753543961, "epoch": 16.59, "percentage": 66.36, "elapsed_time": "4:27:33", "remaining_time": "2:15:36"} +{"current_steps": 735, "total_steps": 1100, "loss": 0.629, 
"eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0004958834086471683, "epoch": 16.7, "percentage": 66.82, "elapsed_time": "4:29:26", "remaining_time": "2:13:48"} +{"current_steps": 740, "total_steps": 1100, "loss": 0.5754, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00048360253836103817, "epoch": 16.82, "percentage": 67.27, "elapsed_time": "4:31:15", "remaining_time": "2:11:57"} +{"current_steps": 745, "total_steps": 1100, "loss": 0.6239, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0004714269687371581, "epoch": 16.93, "percentage": 67.73, "elapsed_time": "4:33:07", "remaining_time": "2:10:08"} +{"current_steps": 750, "total_steps": 1100, "loss": 0.5807, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0004593591825444028, "epoch": 17.05, "percentage": 68.18, "elapsed_time": "4:35:00", "remaining_time": "2:08:20"} +{"current_steps": 755, "total_steps": 1100, "loss": 0.465, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0004474016405730973, "epoch": 17.16, "percentage": 68.64, "elapsed_time": "4:36:46", "remaining_time": "2:06:28"} +{"current_steps": 760, "total_steps": 1100, "loss": 0.4871, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00043555678113323104, "epoch": 17.27, "percentage": 69.09, "elapsed_time": "4:38:41", "remaining_time": "2:04:40"} +{"current_steps": 765, "total_steps": 1100, "loss": 0.4623, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00042382701955724725, "epoch": 17.39, "percentage": 69.55, "elapsed_time": "4:40:30", "remaining_time": "2:02:50"} +{"current_steps": 770, "total_steps": 1100, "loss": 0.5059, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00041221474770752696, "epoch": 17.5, "percentage": 70.0, "elapsed_time": "4:42:19", "remaining_time": "2:00:59"} +{"current_steps": 775, "total_steps": 1100, "loss": 
0.5021, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00040072233348865304, "epoch": 17.61, "percentage": 70.45, "elapsed_time": "4:44:10", "remaining_time": "1:59:10"} +{"current_steps": 780, "total_steps": 1100, "loss": 0.5138, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0003893521203645618, "epoch": 17.73, "percentage": 70.91, "elapsed_time": "4:45:55", "remaining_time": "1:57:18"} +{"current_steps": 785, "total_steps": 1100, "loss": 0.5212, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00037810642688067796, "epoch": 17.84, "percentage": 71.36, "elapsed_time": "4:47:49", "remaining_time": "1:55:29"} +{"current_steps": 790, "total_steps": 1100, "loss": 0.5611, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00036698754619112975, "epoch": 17.95, "percentage": 71.82, "elapsed_time": "4:49:42", "remaining_time": "1:53:40"} +{"current_steps": 795, "total_steps": 1100, "loss": 0.4956, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00035599774559114475, "epoch": 18.07, "percentage": 72.27, "elapsed_time": "4:51:30", "remaining_time": "1:51:50"} +{"current_steps": 800, "total_steps": 1100, "loss": 0.4243, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000345139266054715, "epoch": 18.18, "percentage": 72.73, "elapsed_time": "4:53:18", "remaining_time": "1:49:59"} +{"current_steps": 805, "total_steps": 1100, "loss": 0.4391, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0003344143217776319, "epoch": 18.3, "percentage": 73.18, "elapsed_time": "4:55:16", "remaining_time": "1:48:12"} +{"current_steps": 810, "total_steps": 1100, "loss": 0.4627, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00032382509972598086, "epoch": 18.41, "percentage": 73.64, "elapsed_time": "4:57:10", "remaining_time": "1:46:23"} +{"current_steps": 815, "total_steps": 
1100, "loss": 0.4208, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0003133737591901864, "epoch": 18.52, "percentage": 74.09, "elapsed_time": "4:58:55", "remaining_time": "1:44:31"} +{"current_steps": 820, "total_steps": 1100, "loss": 0.45, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0003030624313447067, "epoch": 18.64, "percentage": 74.55, "elapsed_time": "5:00:50", "remaining_time": "1:42:43"} +{"current_steps": 825, "total_steps": 1100, "loss": 0.44, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00029289321881345256, "epoch": 18.75, "percentage": 75.0, "elapsed_time": "5:02:41", "remaining_time": "1:40:53"} +{"current_steps": 830, "total_steps": 1100, "loss": 0.4451, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0002828681952410366, "epoch": 18.86, "percentage": 75.45, "elapsed_time": "5:04:28", "remaining_time": "1:39:02"} +{"current_steps": 835, "total_steps": 1100, "loss": 0.4494, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0002729894048699265, "epoch": 18.98, "percentage": 75.91, "elapsed_time": "5:06:12", "remaining_time": "1:37:10"} +{"current_steps": 840, "total_steps": 1100, "loss": 0.3839, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00026325886212359495, "epoch": 19.09, "percentage": 76.36, "elapsed_time": "5:07:54", "remaining_time": "1:35:18"} +{"current_steps": 845, "total_steps": 1100, "loss": 0.3728, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0002536785511957531, "epoch": 19.2, "percentage": 76.82, "elapsed_time": "5:09:42", "remaining_time": "1:33:27"} +{"current_steps": 850, "total_steps": 1100, "loss": 0.4126, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00024425042564574185, "epoch": 19.32, "percentage": 77.27, "elapsed_time": "5:11:37", "remaining_time": "1:31:39"} +{"current_steps": 855, 
"total_steps": 1100, "loss": 0.4183, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00023497640800017682, "epoch": 19.43, "percentage": 77.73, "elapsed_time": "5:13:30", "remaining_time": "1:29:50"} +{"current_steps": 860, "total_steps": 1100, "loss": 0.3778, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0002258583893609175, "epoch": 19.55, "percentage": 78.18, "elapsed_time": "5:15:16", "remaining_time": "1:27:58"} +{"current_steps": 865, "total_steps": 1100, "loss": 0.3758, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00021689822901944456, "epoch": 19.66, "percentage": 78.64, "elapsed_time": "5:17:06", "remaining_time": "1:26:09"} +{"current_steps": 870, "total_steps": 1100, "loss": 0.4034, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000208097754077725, "epoch": 19.77, "percentage": 79.09, "elapsed_time": "5:18:59", "remaining_time": "1:24:19"} +{"current_steps": 875, "total_steps": 1100, "loss": 0.4085, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0001994587590756397, "epoch": 19.89, "percentage": 79.55, "elapsed_time": "5:20:53", "remaining_time": "1:22:30"} +{"current_steps": 880, "total_steps": 1100, "loss": 0.3673, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00019098300562505265, "epoch": 20.0, "percentage": 80.0, "elapsed_time": "5:22:36", "remaining_time": "1:20:39"} +{"current_steps": 885, "total_steps": 1100, "loss": 0.363, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0001826722220505931, "epoch": 20.11, "percentage": 80.45, "elapsed_time": "5:24:37", "remaining_time": "1:18:51"} +{"current_steps": 890, "total_steps": 1100, "loss": 0.3707, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.000174528103037226, "epoch": 20.23, "percentage": 80.91, "elapsed_time": "5:26:29", "remaining_time": "1:17:02"} 
+{"current_steps": 895, "total_steps": 1100, "loss": 0.369, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00016655230928468257, "epoch": 20.34, "percentage": 81.36, "elapsed_time": "5:28:24", "remaining_time": "1:15:13"} +{"current_steps": 900, "total_steps": 1100, "loss": 0.3528, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00015874646716881869, "epoch": 20.45, "percentage": 81.82, "elapsed_time": "5:30:14", "remaining_time": "1:13:23"} +{"current_steps": 905, "total_steps": 1100, "loss": 0.3581, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00015111216840997744, "epoch": 20.57, "percentage": 82.27, "elapsed_time": "5:32:02", "remaining_time": "1:11:32"} +{"current_steps": 910, "total_steps": 1100, "loss": 0.3466, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00014365096974841107, "epoch": 20.68, "percentage": 82.73, "elapsed_time": "5:33:57", "remaining_time": "1:09:43"} +{"current_steps": 915, "total_steps": 1100, "loss": 0.3274, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00013636439262684297, "epoch": 20.8, "percentage": 83.18, "elapsed_time": "5:35:42", "remaining_time": "1:07:52"} +{"current_steps": 920, "total_steps": 1100, "loss": 0.3401, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00012925392288022297, "epoch": 20.91, "percentage": 83.64, "elapsed_time": "5:37:28", "remaining_time": "1:06:01"} +{"current_steps": 925, "total_steps": 1100, "loss": 0.3435, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00012232101043274435, "epoch": 21.02, "percentage": 84.09, "elapsed_time": "5:39:16", "remaining_time": "1:04:11"} +{"current_steps": 930, "total_steps": 1100, "loss": 0.2972, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00011556706900218572, "epoch": 21.14, "percentage": 84.55, "elapsed_time": "5:41:08", 
"remaining_time": "1:02:21"} +{"current_steps": 935, "total_steps": 1100, "loss": 0.3153, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00010899347581163222, "epoch": 21.25, "percentage": 85.0, "elapsed_time": "5:42:51", "remaining_time": "1:00:30"} +{"current_steps": 940, "total_steps": 1100, "loss": 0.3315, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.00010260157130864178, "epoch": 21.36, "percentage": 85.45, "elapsed_time": "5:44:44", "remaining_time": "0:58:40"} +{"current_steps": 945, "total_steps": 1100, "loss": 0.3264, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.639265889190829e-05, "epoch": 21.48, "percentage": 85.91, "elapsed_time": "5:46:34", "remaining_time": "0:56:50"} +{"current_steps": 950, "total_steps": 1100, "loss": 0.3427, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.036800464548156e-05, "epoch": 21.59, "percentage": 86.36, "elapsed_time": "5:48:26", "remaining_time": "0:55:01"} +{"current_steps": 955, "total_steps": 1100, "loss": 0.3415, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.4528837080594e-05, "epoch": 21.7, "percentage": 86.82, "elapsed_time": "5:50:13", "remaining_time": "0:53:10"} +{"current_steps": 960, "total_steps": 1100, "loss": 0.323, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.887634688515e-05, "epoch": 21.82, "percentage": 87.27, "elapsed_time": "5:52:05", "remaining_time": "0:51:20"} +{"current_steps": 965, "total_steps": 1100, "loss": 0.2961, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 7.341168668092857e-05, "epoch": 21.93, "percentage": 87.73, "elapsed_time": "5:53:54", "remaining_time": "0:49:30"} +{"current_steps": 970, "total_steps": 1100, "loss": 0.3276, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.813597078854772e-05, "epoch": 22.05, "percentage": 88.18, "elapsed_time": 
"5:55:48", "remaining_time": "0:47:41"} +{"current_steps": 975, "total_steps": 1100, "loss": 0.3045, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.305027500023842e-05, "epoch": 22.16, "percentage": 88.64, "elapsed_time": "5:57:35", "remaining_time": "0:45:50"} +{"current_steps": 980, "total_steps": 1100, "loss": 0.3167, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.8155636360475384e-05, "epoch": 22.27, "percentage": 89.09, "elapsed_time": "5:59:29", "remaining_time": "0:44:01"} +{"current_steps": 985, "total_steps": 1100, "loss": 0.319, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 5.345305295450997e-05, "epoch": 22.39, "percentage": 89.55, "elapsed_time": "6:01:18", "remaining_time": "0:42:10"} +{"current_steps": 990, "total_steps": 1100, "loss": 0.2852, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.894348370484647e-05, "epoch": 22.5, "percentage": 90.0, "elapsed_time": "6:02:58", "remaining_time": "0:40:19"} +{"current_steps": 995, "total_steps": 1100, "loss": 0.3034, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.4627848175703315e-05, "epoch": 22.61, "percentage": 90.45, "elapsed_time": "6:04:49", "remaining_time": "0:38:29"} +{"current_steps": 1000, "total_steps": 1100, "loss": 0.2845, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.050702638550274e-05, "epoch": 22.73, "percentage": 90.91, "elapsed_time": "6:06:36", "remaining_time": "0:36:39"} +{"current_steps": 1005, "total_steps": 1100, "loss": 0.3136, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.658185862742103e-05, "epoch": 22.84, "percentage": 91.36, "elapsed_time": "6:08:32", "remaining_time": "0:34:50"} +{"current_steps": 1010, "total_steps": 1100, "loss": 0.3187, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.285314529804295e-05, "epoch": 22.95, "percentage": 91.82, 
"elapsed_time": "6:10:23", "remaining_time": "0:33:00"} +{"current_steps": 1015, "total_steps": 1100, "loss": 0.2907, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.93216467341475e-05, "epoch": 23.07, "percentage": 92.27, "elapsed_time": "6:12:02", "remaining_time": "0:31:09"} +{"current_steps": 1020, "total_steps": 1100, "loss": 0.2955, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5988083057666535e-05, "epoch": 23.18, "percentage": 92.73, "elapsed_time": "6:13:48", "remaining_time": "0:29:19"} +{"current_steps": 1025, "total_steps": 1100, "loss": 0.2785, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.2853134028840594e-05, "epoch": 23.3, "percentage": 93.18, "elapsed_time": "6:15:39", "remaining_time": "0:27:29"} +{"current_steps": 1030, "total_steps": 1100, "loss": 0.3369, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.9917438907606554e-05, "epoch": 23.41, "percentage": 93.64, "elapsed_time": "6:17:40", "remaining_time": "0:25:40"} +{"current_steps": 1035, "total_steps": 1100, "loss": 0.2837, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.7181596323244453e-05, "epoch": 23.52, "percentage": 94.09, "elapsed_time": "6:19:24", "remaining_time": "0:23:49"} +{"current_steps": 1040, "total_steps": 1100, "loss": 0.3002, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.4646164152307017e-05, "epoch": 23.64, "percentage": 94.55, "elapsed_time": "6:21:14", "remaining_time": "0:21:59"} +{"current_steps": 1045, "total_steps": 1100, "loss": 0.3062, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.231165940486234e-05, "epoch": 23.75, "percentage": 95.0, "elapsed_time": "6:23:12", "remaining_time": "0:20:10"} +{"current_steps": 1050, "total_steps": 1100, "loss": 0.2859, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0178558119067316e-05, "epoch": 23.86, 
"percentage": 95.45, "elapsed_time": "6:25:08", "remaining_time": "0:18:20"} +{"current_steps": 1055, "total_steps": 1100, "loss": 0.284, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 8.247295264097288e-06, "epoch": 23.98, "percentage": 95.91, "elapsed_time": "6:26:56", "remaining_time": "0:16:30"} +{"current_steps": 1060, "total_steps": 1100, "loss": 0.2607, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 6.518264651449779e-06, "epoch": 24.09, "percentage": 96.36, "elapsed_time": "6:28:37", "remaining_time": "0:14:39"} +{"current_steps": 1065, "total_steps": 1100, "loss": 0.3164, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.991818854640395e-06, "epoch": 24.2, "percentage": 96.82, "elapsed_time": "6:30:34", "remaining_time": "0:12:50"} +{"current_steps": 1070, "total_steps": 1100, "loss": 0.2597, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 3.6682691373086663e-06, "epoch": 24.32, "percentage": 97.27, "elapsed_time": "6:32:06", "remaining_time": "0:10:59"} +{"current_steps": 1075, "total_steps": 1100, "loss": 0.2907, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 2.5478853897464847e-06, "epoch": 24.43, "percentage": 97.73, "elapsed_time": "6:33:55", "remaining_time": "0:09:09"} +{"current_steps": 1080, "total_steps": 1100, "loss": 0.3033, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.630896073864352e-06, "epoch": 24.55, "percentage": 98.18, "elapsed_time": "6:35:56", "remaining_time": "0:07:19"} +{"current_steps": 1085, "total_steps": 1100, "loss": 0.3089, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 9.174881766043087e-07, "epoch": 24.66, "percentage": 98.64, "elapsed_time": "6:37:47", "remaining_time": "0:05:29"} +{"current_steps": 1090, "total_steps": 1100, "loss": 0.2964, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 4.078071718107701e-07, 
"epoch": 24.77, "percentage": 99.09, "elapsed_time": "6:39:35", "remaining_time": "0:03:39"}
+{"current_steps": 1095, "total_steps": 1100, "loss": 0.2995, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 1.0195699056669839e-07, "epoch": 24.89, "percentage": 99.55, "elapsed_time": "6:41:28", "remaining_time": "0:01:49"}
+{"current_steps": 1100, "total_steps": 1100, "loss": 0.2936, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": 0.0, "epoch": 25.0, "percentage": 100.0, "elapsed_time": "6:43:17", "remaining_time": "0:00:00"}
+{"current_steps": 1100, "total_steps": 1100, "loss": null, "eval_loss": null, "predict_loss": null, "reward": null, "learning_rate": null, "epoch": 25.0, "percentage": 100.0, "elapsed_time": "6:43:17", "remaining_time": "0:00:00"}
diff --git a/trainer_state.json b/trainer_state.json
new file mode 100644
index 0000000000000000000000000000000000000000..48b70e66aec4bb6fcc3f146e16120403ac06bfef
--- /dev/null
+++ b/trainer_state.json
@@ -0,0 +1,1350 @@
+{
+  "best_metric": null,
+  "best_model_checkpoint": null,
+  "epoch": 25.0,
+  "eval_steps": 500,
+  "global_step": 1100,
+  "is_hyper_param_search": false,
+  "is_local_process_zero": true,
+  "is_world_process_zero": true,
+  "log_history": [
+    {
+      "epoch": 0.11,
+      "learning_rate": 0.001999898043009433,
+      "loss": 4.5094,
+      "step": 5
+    },
+    {
+      "epoch": 0.23,
+      "learning_rate": 0.0019995921928281893,
+      "loss": 3.8047,
+      "step": 10
+    },
+    {
+      "epoch": 0.34,
+      "learning_rate": 0.001999082511823396,
+      "loss": 3.8813,
+      "step": 15
+    },
+    {
+      "epoch": 0.45,
+      "learning_rate": 0.0019983691039261358,
+      "loss": 3.7188,
+      "step": 20
+    },
+    {
+      "epoch": 0.57,
+      "learning_rate": 0.0019974521146102534,
+      "loss": 3.6695,
+      "step": 25
+    },
+    {
+      "epoch": 0.68,
+      "learning_rate": 0.001996331730862691,
+      "loss": 3.7078,
+      "step": 30
+    },
+    {
+      "epoch": 0.8,
+      "learning_rate": 0.0019950081811453595,
+      "loss": 3.6844,
+      "step": 35
+    },
+    {
+      "epoch": 0.91,
+      "learning_rate": 0.0019934817353485504,
+      "loss": 3.6961,
+      "step": 40
+    },
+    {
+      "epoch": 1.02,
+      "learning_rate": 0.0019917527047359027,
+      "loss": 3.5758,
+      "step": 45
+    },
+    {
+      "epoch": 1.14,
+      "learning_rate": 0.001989821441880933,
+      "loss": 3.4102,
+      "step": 50
+    },
+    {
+      "epoch": 1.25,
+      "learning_rate": 0.0019876883405951376,
+      "loss": 3.3984,
+      "step": 55
+    },
+    {
+      "epoch": 1.36,
+      "learning_rate": 0.001985353835847693,
+      "loss": 3.3602,
+      "step": 60
+    },
+    {
+      "epoch": 1.48,
+      "learning_rate": 0.0019828184036767556,
+      "loss": 3.4461,
+      "step": 65
+    },
+    {
+      "epoch": 1.59,
+      "learning_rate": 0.0019800825610923932,
+      "loss": 3.3461,
+      "step": 70
+    },
+    {
+      "epoch": 1.7,
+      "learning_rate": 0.0019771468659711597,
+      "loss": 3.4172,
+      "step": 75
+    },
+    {
+      "epoch": 1.82,
+      "learning_rate": 0.0019740119169423336,
+      "loss": 3.4359,
+      "step": 80
+    },
+    {
+      "epoch": 1.93,
+      "learning_rate": 0.0019706783532658523,
+      "loss": 3.5141,
+      "step": 85
+    },
+    {
+      "epoch": 2.05,
+      "learning_rate": 0.001967146854701957,
+      "loss": 3.2242,
+      "step": 90
+    },
+    {
+      "epoch": 2.16,
+      "learning_rate": 0.0019634181413725788,
+      "loss": 3.0227,
+      "step": 95
+    },
+    {
+      "epoch": 2.27,
+      "learning_rate": 0.0019594929736144974,
+      "loss": 2.8984,
+      "step": 100
+    },
+    {
+      "epoch": 2.39,
+      "learning_rate": 0.001955372151824297,
+      "loss": 3.0781,
+      "step": 105
+    },
+    {
+      "epoch": 2.5,
+      "learning_rate": 0.0019510565162951536,
+      "loss": 3.1203,
+      "step": 110
+    },
+    {
+      "epoch": 2.61,
+      "learning_rate": 0.00194654694704549,
+      "loss": 3.1828,
+      "step": 115
+    },
+    {
+      "epoch": 2.73,
+      "learning_rate": 0.0019418443636395248,
+      "loss": 3.0531,
+      "step": 120
+    },
+    {
+      "epoch": 2.84,
+      "learning_rate": 0.001936949724999762,
+      "loss": 3.1523,
+      "step": 125
+    },
+    {
+      "epoch": 2.95,
+      "learning_rate": 0.0019318640292114524,
+      "loss": 3.1156,
+      "step": 130
+    },
+    {
+      "epoch": 3.07,
+      "learning_rate": 0.0019265883133190713,
+      "loss": 2.7844,
+      "step": 135
+    },
+    {
+      "epoch": 3.18,
+      "learning_rate": 0.0019211236531148502,
+      "loss": 2.6711,
+      "step": 140
+    },
+    {
+      "epoch": 3.3,
+      "learning_rate": 0.0019154711629194062,
+      "loss": 2.6609,
+      "step": 145
+    },
+    {
+      "epoch": 3.41,
+      "learning_rate": 0.0019096319953545184,
+      "loss": 2.7531,
+      "step": 150
+    },
+    {
+      "epoch": 3.52,
+      "learning_rate": 0.0019036073411080917,
+      "loss": 2.7977,
+      "step": 155
+    },
+    {
+      "epoch": 3.64,
+      "learning_rate": 0.0018973984286913585,
+      "loss": 2.7914,
+      "step": 160
+    },
+    {
+      "epoch": 3.75,
+      "learning_rate": 0.0018910065241883678,
+      "loss": 2.8188,
+      "step": 165
+    },
+    {
+      "epoch": 3.86,
+      "learning_rate": 0.0018844329309978143,
+      "loss": 2.8945,
+      "step": 170
+    },
+    {
+      "epoch": 3.98,
+      "learning_rate": 0.0018776789895672556,
+      "loss": 2.8883,
+      "step": 175
+    },
+    {
+      "epoch": 4.09,
+      "learning_rate": 0.0018707460771197773,
+      "loss": 2.4617,
+      "step": 180
+    },
+    {
+      "epoch": 4.2,
+      "learning_rate": 0.001863635607373157,
+      "loss": 2.4633,
+      "step": 185
+    },
+    {
+      "epoch": 4.32,
+      "learning_rate": 0.001856349030251589,
+      "loss": 2.5094,
+      "step": 190
+    },
+    {
+      "epoch": 4.43,
+      "learning_rate": 0.0018488878315900226,
+      "loss": 2.432,
+      "step": 195
+    },
+    {
+      "epoch": 4.55,
+      "learning_rate": 0.0018412535328311812,
+      "loss": 2.5648,
+      "step": 200
+    },
+    {
+      "epoch": 4.66,
+      "learning_rate": 0.0018334476907153176,
+      "loss": 2.4836,
+      "step": 205
+    },
+    {
+      "epoch": 4.77,
+      "learning_rate": 0.001825471896962774,
+      "loss": 2.6617,
+      "step": 210
+    },
+    {
+      "epoch": 4.89,
+      "learning_rate": 0.0018173277779494068,
+      "loss": 2.6734,
+      "step": 215
+    },
+    {
+      "epoch": 5.0,
+      "learning_rate": 0.0018090169943749475,
+      "loss": 2.6742,
+      "step": 220
+    },
+    {
+      "epoch": 5.11,
+      "learning_rate": 0.0018005412409243604,
+      "loss": 2.1379,
+      "step": 225
+    },
+    {
+      "epoch": 5.23,
+      "learning_rate": 0.0017919022459222751,
+      "loss": 2.1508,
+      "step": 230
+    },
+    {
+      "epoch": 5.34,
+      "learning_rate": 0.0017831017709805555,
+      "loss": 2.2582,
+      "step": 235
+    },
+    {
+      "epoch": 5.45,
+      "learning_rate": 0.0017741416106390826,
+      "loss": 2.2367,
+      "step": 240
+    },
+    {
+      "epoch": 5.57,
+      "learning_rate": 0.0017650235919998232,
+      "loss": 2.325,
+      "step": 245
+    },
+    {
+      "epoch": 5.68,
+      "learning_rate": 0.0017557495743542584,
+      "loss": 2.2703,
+      "step": 250
+    },
+    {
+      "epoch": 5.8,
+      "learning_rate": 0.0017463214488042471,
+      "loss": 2.3703,
+      "step": 255
+    },
+    {
+      "epoch": 5.91,
+      "learning_rate": 0.001736741137876405,
+      "loss": 2.4648,
+      "step": 260
+    },
+    {
+      "epoch": 6.02,
+      "learning_rate": 0.0017270105951300739,
+      "loss": 2.2734,
+      "step": 265
+    },
+    {
+      "epoch": 6.14,
+      "learning_rate": 0.0017171318047589637,
+      "loss": 1.9898,
+      "step": 270
+    },
+    {
+      "epoch": 6.25,
+      "learning_rate": 0.0017071067811865474,
+      "loss": 1.9816,
+      "step": 275
+    },
+    {
+      "epoch": 6.36,
+      "learning_rate": 0.0016969375686552938,
+      "loss": 1.9648,
+      "step": 280
+    },
+    {
+      "epoch": 6.48,
+      "learning_rate": 0.0016866262408098134,
+      "loss": 2.1672,
+      "step": 285
+    },
+    {
+      "epoch": 6.59,
+      "learning_rate": 0.0016761749002740195,
+      "loss": 2.0074,
+      "step": 290
+    },
+    {
+      "epoch": 6.7,
+      "learning_rate": 0.0016655856782223683,
+      "loss": 2.1598,
+      "step": 295
+    },
+    {
+      "epoch": 6.82,
+      "learning_rate": 0.0016548607339452852,
+      "loss": 2.0996,
+      "step": 300
+    },
+    {
+      "epoch": 6.93,
+      "learning_rate": 0.0016440022544088554,
+      "loss": 2.1434,
+      "step": 305
+    },
+    {
+      "epoch": 7.05,
+      "learning_rate": 0.0016330124538088703,
+      "loss": 2.0699,
+      "step": 310
+    },
+    {
+      "epoch": 7.16,
+      "learning_rate": 0.0016218935731193223,
+      "loss": 1.7312,
+      "step": 315
+    },
+    {
+      "epoch": 7.27,
+      "learning_rate": 0.0016106478796354383,
+      "loss": 1.7799,
+      "step": 320
+    },
+    {
+      "epoch": 7.39,
+      "learning_rate": 0.0015992776665113468,
+      "loss": 1.7008,
+      "step": 325
+    },
+    {
+      "epoch": 7.5,
+      "learning_rate": 0.0015877852522924731,
+      "loss": 1.8969,
+      "step": 330
+    },
+    {
+      "epoch": 7.61,
+      "learning_rate": 0.0015761729804427528,
+      "loss": 1.8156,
+      "step": 335
+    },
+    {
+      "epoch": 7.73,
+      "learning_rate": 
0.0015644432188667695, + "loss": 1.9336, + "step": 340 + }, + { + "epoch": 7.84, + "learning_rate": 0.0015525983594269026, + "loss": 1.9918, + "step": 345 + }, + { + "epoch": 7.95, + "learning_rate": 0.0015406408174555976, + "loss": 2.0055, + "step": 350 + }, + { + "epoch": 8.07, + "learning_rate": 0.0015285730312628418, + "loss": 1.7168, + "step": 355 + }, + { + "epoch": 8.18, + "learning_rate": 0.001516397461638962, + "loss": 1.5531, + "step": 360 + }, + { + "epoch": 8.3, + "learning_rate": 0.001504116591352832, + "loss": 1.5922, + "step": 365 + }, + { + "epoch": 8.41, + "learning_rate": 0.001491732924645604, + "loss": 1.618, + "step": 370 + }, + { + "epoch": 8.52, + "learning_rate": 0.0014792489867200569, + "loss": 1.6738, + "step": 375 + }, + { + "epoch": 8.64, + "learning_rate": 0.0014666673232256737, + "loss": 1.7461, + "step": 380 + }, + { + "epoch": 8.75, + "learning_rate": 0.0014539904997395467, + "loss": 1.6746, + "step": 385 + }, + { + "epoch": 8.86, + "learning_rate": 0.0014412211012432212, + "loss": 1.7711, + "step": 390 + }, + { + "epoch": 8.98, + "learning_rate": 0.0014283617315955814, + "loss": 1.8387, + "step": 395 + }, + { + "epoch": 9.09, + "learning_rate": 0.0014154150130018866, + "loss": 1.475, + "step": 400 + }, + { + "epoch": 9.2, + "learning_rate": 0.001402383585479068, + "loss": 1.4523, + "step": 405 + }, + { + "epoch": 9.32, + "learning_rate": 0.0013892701063173917, + "loss": 1.4812, + "step": 410 + }, + { + "epoch": 9.43, + "learning_rate": 0.0013760772495385997, + "loss": 1.525, + "step": 415 + }, + { + "epoch": 9.55, + "learning_rate": 0.001362807705350641, + "loss": 1.398, + "step": 420 + }, + { + "epoch": 9.66, + "learning_rate": 0.0013494641795990985, + "loss": 1.4477, + "step": 425 + }, + { + "epoch": 9.77, + "learning_rate": 0.00133604939321543, + "loss": 1.5801, + "step": 430 + }, + { + "epoch": 9.89, + "learning_rate": 0.0013225660816621341, + "loss": 1.6422, + "step": 435 + }, + { + "epoch": 10.0, + "learning_rate": 
0.0013090169943749475, + "loss": 1.5535, + "step": 440 + }, + { + "epoch": 10.11, + "learning_rate": 0.0012954048942022001, + "loss": 1.2324, + "step": 445 + }, + { + "epoch": 10.23, + "learning_rate": 0.0012817325568414298, + "loss": 1.2613, + "step": 450 + }, + { + "epoch": 10.34, + "learning_rate": 0.001268002770273379, + "loss": 1.3293, + "step": 455 + }, + { + "epoch": 10.45, + "learning_rate": 0.0012542183341934872, + "loss": 1.2852, + "step": 460 + }, + { + "epoch": 10.57, + "learning_rate": 0.0012403820594409924, + "loss": 1.3295, + "step": 465 + }, + { + "epoch": 10.68, + "learning_rate": 0.0012264967674257645, + "loss": 1.3287, + "step": 470 + }, + { + "epoch": 10.8, + "learning_rate": 0.0012125652895529767, + "loss": 1.3566, + "step": 475 + }, + { + "epoch": 10.91, + "learning_rate": 0.0011985904666457455, + "loss": 1.4414, + "step": 480 + }, + { + "epoch": 11.02, + "learning_rate": 0.0011845751483658454, + "loss": 1.3695, + "step": 485 + }, + { + "epoch": 11.14, + "learning_rate": 0.0011705221926326238, + "loss": 1.1363, + "step": 490 + }, + { + "epoch": 11.25, + "learning_rate": 0.001156434465040231, + "loss": 1.1354, + "step": 495 + }, + { + "epoch": 11.36, + "learning_rate": 0.0011423148382732854, + "loss": 1.0725, + "step": 500 + }, + { + "epoch": 11.48, + "learning_rate": 0.001128166191521093, + "loss": 1.1754, + "step": 505 + }, + { + "epoch": 11.59, + "learning_rate": 0.0011139914098905405, + "loss": 1.1848, + "step": 510 + }, + { + "epoch": 11.7, + "learning_rate": 0.0010997933838177826, + "loss": 1.2354, + "step": 515 + }, + { + "epoch": 11.82, + "learning_rate": 0.0010855750084788399, + "loss": 1.1984, + "step": 520 + }, + { + "epoch": 11.93, + "learning_rate": 0.0010713391831992322, + "loss": 1.2666, + "step": 525 + }, + { + "epoch": 12.05, + "learning_rate": 0.001057088810862768, + "loss": 1.1408, + "step": 530 + }, + { + "epoch": 12.16, + "learning_rate": 0.0010428267973196027, + "loss": 0.9385, + "step": 535 + }, + { + "epoch": 12.27, + 
"learning_rate": 0.0010285560507936962, + "loss": 1.0158, + "step": 540 + }, + { + "epoch": 12.39, + "learning_rate": 0.0010142794812897874, + "loss": 0.9936, + "step": 545 + }, + { + "epoch": 12.5, + "learning_rate": 0.001, + "loss": 0.9891, + "step": 550 + }, + { + "epoch": 12.61, + "learning_rate": 0.000985720518710213, + "loss": 1.0684, + "step": 555 + }, + { + "epoch": 12.73, + "learning_rate": 0.0009714439492063038, + "loss": 1.076, + "step": 560 + }, + { + "epoch": 12.84, + "learning_rate": 0.0009571732026803976, + "loss": 1.0609, + "step": 565 + }, + { + "epoch": 12.95, + "learning_rate": 0.000942911189137232, + "loss": 1.1297, + "step": 570 + }, + { + "epoch": 13.07, + "learning_rate": 0.0009286608168007677, + "loss": 0.9342, + "step": 575 + }, + { + "epoch": 13.18, + "learning_rate": 0.0009144249915211606, + "loss": 0.8511, + "step": 580 + }, + { + "epoch": 13.3, + "learning_rate": 0.0009002066161822172, + "loss": 0.8336, + "step": 585 + }, + { + "epoch": 13.41, + "learning_rate": 0.0008860085901094594, + "loss": 0.8652, + "step": 590 + }, + { + "epoch": 13.52, + "learning_rate": 0.0008718338084789072, + "loss": 0.9744, + "step": 595 + }, + { + "epoch": 13.64, + "learning_rate": 0.000857685161726715, + "loss": 0.9006, + "step": 600 + }, + { + "epoch": 13.75, + "learning_rate": 0.000843565534959769, + "loss": 0.9619, + "step": 605 + }, + { + "epoch": 13.86, + "learning_rate": 0.0008294778073673762, + "loss": 0.9123, + "step": 610 + }, + { + "epoch": 13.98, + "learning_rate": 0.0008154248516341547, + "loss": 0.9959, + "step": 615 + }, + { + "epoch": 14.09, + "learning_rate": 0.0008014095333542549, + "loss": 0.7503, + "step": 620 + }, + { + "epoch": 14.2, + "learning_rate": 0.0007874347104470233, + "loss": 0.7357, + "step": 625 + }, + { + "epoch": 14.32, + "learning_rate": 0.0007735032325742355, + "loss": 0.7477, + "step": 630 + }, + { + "epoch": 14.43, + "learning_rate": 0.0007596179405590076, + "loss": 0.8088, + "step": 635 + }, + { + "epoch": 14.55, + 
"learning_rate": 0.0007457816658065133, + "loss": 0.7652, + "step": 640 + }, + { + "epoch": 14.66, + "learning_rate": 0.0007319972297266214, + "loss": 0.7847, + "step": 645 + }, + { + "epoch": 14.77, + "learning_rate": 0.0007182674431585703, + "loss": 0.7984, + "step": 650 + }, + { + "epoch": 14.89, + "learning_rate": 0.0007045951057978, + "loss": 0.8732, + "step": 655 + }, + { + "epoch": 15.0, + "learning_rate": 0.0006909830056250527, + "loss": 0.8258, + "step": 660 + }, + { + "epoch": 15.11, + "learning_rate": 0.0006774339183378663, + "loss": 0.6311, + "step": 665 + }, + { + "epoch": 15.23, + "learning_rate": 0.0006639506067845697, + "loss": 0.6543, + "step": 670 + }, + { + "epoch": 15.34, + "learning_rate": 0.0006505358204009018, + "loss": 0.6421, + "step": 675 + }, + { + "epoch": 15.45, + "learning_rate": 0.0006371922946493591, + "loss": 0.6937, + "step": 680 + }, + { + "epoch": 15.57, + "learning_rate": 0.0006239227504614003, + "loss": 0.6887, + "step": 685 + }, + { + "epoch": 15.68, + "learning_rate": 0.0006107298936826086, + "loss": 0.7097, + "step": 690 + }, + { + "epoch": 15.8, + "learning_rate": 0.0005976164145209322, + "loss": 0.6778, + "step": 695 + }, + { + "epoch": 15.91, + "learning_rate": 0.0005845849869981136, + "loss": 0.7124, + "step": 700 + }, + { + "epoch": 16.02, + "learning_rate": 0.000571638268404419, + "loss": 0.7053, + "step": 705 + }, + { + "epoch": 16.14, + "learning_rate": 0.0005587788987567784, + "loss": 0.5863, + "step": 710 + }, + { + "epoch": 16.25, + "learning_rate": 0.0005460095002604533, + "loss": 0.5588, + "step": 715 + }, + { + "epoch": 16.36, + "learning_rate": 0.0005333326767743263, + "loss": 0.5363, + "step": 720 + }, + { + "epoch": 16.48, + "learning_rate": 0.0005207510132799435, + "loss": 0.6137, + "step": 725 + }, + { + "epoch": 16.59, + "learning_rate": 0.0005082670753543961, + "loss": 0.5606, + "step": 730 + }, + { + "epoch": 16.7, + "learning_rate": 0.0004958834086471683, + "loss": 0.629, + "step": 735 + }, + { + 
"epoch": 16.82, + "learning_rate": 0.00048360253836103817, + "loss": 0.5754, + "step": 740 + }, + { + "epoch": 16.93, + "learning_rate": 0.0004714269687371581, + "loss": 0.6239, + "step": 745 + }, + { + "epoch": 17.05, + "learning_rate": 0.0004593591825444028, + "loss": 0.5807, + "step": 750 + }, + { + "epoch": 17.16, + "learning_rate": 0.0004474016405730973, + "loss": 0.465, + "step": 755 + }, + { + "epoch": 17.27, + "learning_rate": 0.00043555678113323104, + "loss": 0.4871, + "step": 760 + }, + { + "epoch": 17.39, + "learning_rate": 0.00042382701955724725, + "loss": 0.4623, + "step": 765 + }, + { + "epoch": 17.5, + "learning_rate": 0.00041221474770752696, + "loss": 0.5059, + "step": 770 + }, + { + "epoch": 17.61, + "learning_rate": 0.00040072233348865304, + "loss": 0.5021, + "step": 775 + }, + { + "epoch": 17.73, + "learning_rate": 0.0003893521203645618, + "loss": 0.5138, + "step": 780 + }, + { + "epoch": 17.84, + "learning_rate": 0.00037810642688067796, + "loss": 0.5212, + "step": 785 + }, + { + "epoch": 17.95, + "learning_rate": 0.00036698754619112975, + "loss": 0.5611, + "step": 790 + }, + { + "epoch": 18.07, + "learning_rate": 0.00035599774559114475, + "loss": 0.4956, + "step": 795 + }, + { + "epoch": 18.18, + "learning_rate": 0.000345139266054715, + "loss": 0.4243, + "step": 800 + }, + { + "epoch": 18.3, + "learning_rate": 0.0003344143217776319, + "loss": 0.4391, + "step": 805 + }, + { + "epoch": 18.41, + "learning_rate": 0.00032382509972598086, + "loss": 0.4627, + "step": 810 + }, + { + "epoch": 18.52, + "learning_rate": 0.0003133737591901864, + "loss": 0.4208, + "step": 815 + }, + { + "epoch": 18.64, + "learning_rate": 0.0003030624313447067, + "loss": 0.45, + "step": 820 + }, + { + "epoch": 18.75, + "learning_rate": 0.00029289321881345256, + "loss": 0.44, + "step": 825 + }, + { + "epoch": 18.86, + "learning_rate": 0.0002828681952410366, + "loss": 0.4451, + "step": 830 + }, + { + "epoch": 18.98, + "learning_rate": 0.0002729894048699265, + "loss": 0.4494, + 
"step": 835 + }, + { + "epoch": 19.09, + "learning_rate": 0.00026325886212359495, + "loss": 0.3839, + "step": 840 + }, + { + "epoch": 19.2, + "learning_rate": 0.0002536785511957531, + "loss": 0.3728, + "step": 845 + }, + { + "epoch": 19.32, + "learning_rate": 0.00024425042564574185, + "loss": 0.4126, + "step": 850 + }, + { + "epoch": 19.43, + "learning_rate": 0.00023497640800017682, + "loss": 0.4183, + "step": 855 + }, + { + "epoch": 19.55, + "learning_rate": 0.0002258583893609175, + "loss": 0.3778, + "step": 860 + }, + { + "epoch": 19.66, + "learning_rate": 0.00021689822901944456, + "loss": 0.3758, + "step": 865 + }, + { + "epoch": 19.77, + "learning_rate": 0.000208097754077725, + "loss": 0.4034, + "step": 870 + }, + { + "epoch": 19.89, + "learning_rate": 0.0001994587590756397, + "loss": 0.4085, + "step": 875 + }, + { + "epoch": 20.0, + "learning_rate": 0.00019098300562505265, + "loss": 0.3673, + "step": 880 + }, + { + "epoch": 20.11, + "learning_rate": 0.0001826722220505931, + "loss": 0.363, + "step": 885 + }, + { + "epoch": 20.23, + "learning_rate": 0.000174528103037226, + "loss": 0.3707, + "step": 890 + }, + { + "epoch": 20.34, + "learning_rate": 0.00016655230928468257, + "loss": 0.369, + "step": 895 + }, + { + "epoch": 20.45, + "learning_rate": 0.00015874646716881869, + "loss": 0.3528, + "step": 900 + }, + { + "epoch": 20.57, + "learning_rate": 0.00015111216840997744, + "loss": 0.3581, + "step": 905 + }, + { + "epoch": 20.68, + "learning_rate": 0.00014365096974841107, + "loss": 0.3466, + "step": 910 + }, + { + "epoch": 20.8, + "learning_rate": 0.00013636439262684297, + "loss": 0.3274, + "step": 915 + }, + { + "epoch": 20.91, + "learning_rate": 0.00012925392288022297, + "loss": 0.3401, + "step": 920 + }, + { + "epoch": 21.02, + "learning_rate": 0.00012232101043274435, + "loss": 0.3435, + "step": 925 + }, + { + "epoch": 21.14, + "learning_rate": 0.00011556706900218572, + "loss": 0.2972, + "step": 930 + }, + { + "epoch": 21.25, + "learning_rate": 
0.00010899347581163222, + "loss": 0.3153, + "step": 935 + }, + { + "epoch": 21.36, + "learning_rate": 0.00010260157130864178, + "loss": 0.3315, + "step": 940 + }, + { + "epoch": 21.48, + "learning_rate": 9.639265889190829e-05, + "loss": 0.3264, + "step": 945 + }, + { + "epoch": 21.59, + "learning_rate": 9.036800464548156e-05, + "loss": 0.3427, + "step": 950 + }, + { + "epoch": 21.7, + "learning_rate": 8.4528837080594e-05, + "loss": 0.3415, + "step": 955 + }, + { + "epoch": 21.82, + "learning_rate": 7.887634688515e-05, + "loss": 0.323, + "step": 960 + }, + { + "epoch": 21.93, + "learning_rate": 7.341168668092857e-05, + "loss": 0.2961, + "step": 965 + }, + { + "epoch": 22.05, + "learning_rate": 6.813597078854772e-05, + "loss": 0.3276, + "step": 970 + }, + { + "epoch": 22.16, + "learning_rate": 6.305027500023842e-05, + "loss": 0.3045, + "step": 975 + }, + { + "epoch": 22.27, + "learning_rate": 5.8155636360475384e-05, + "loss": 0.3167, + "step": 980 + }, + { + "epoch": 22.39, + "learning_rate": 5.345305295450997e-05, + "loss": 0.319, + "step": 985 + }, + { + "epoch": 22.5, + "learning_rate": 4.894348370484647e-05, + "loss": 0.2852, + "step": 990 + }, + { + "epoch": 22.61, + "learning_rate": 4.4627848175703315e-05, + "loss": 0.3034, + "step": 995 + }, + { + "epoch": 22.73, + "learning_rate": 4.050702638550274e-05, + "loss": 0.2845, + "step": 1000 + }, + { + "epoch": 22.84, + "learning_rate": 3.658185862742103e-05, + "loss": 0.3136, + "step": 1005 + }, + { + "epoch": 22.95, + "learning_rate": 3.285314529804295e-05, + "loss": 0.3187, + "step": 1010 + }, + { + "epoch": 23.07, + "learning_rate": 2.93216467341475e-05, + "loss": 0.2907, + "step": 1015 + }, + { + "epoch": 23.18, + "learning_rate": 2.5988083057666535e-05, + "loss": 0.2955, + "step": 1020 + }, + { + "epoch": 23.3, + "learning_rate": 2.2853134028840594e-05, + "loss": 0.2785, + "step": 1025 + }, + { + "epoch": 23.41, + "learning_rate": 1.9917438907606554e-05, + "loss": 0.3369, + "step": 1030 + }, + { + "epoch": 
23.52, + "learning_rate": 1.7181596323244453e-05, + "loss": 0.2837, + "step": 1035 + }, + { + "epoch": 23.64, + "learning_rate": 1.4646164152307017e-05, + "loss": 0.3002, + "step": 1040 + }, + { + "epoch": 23.75, + "learning_rate": 1.231165940486234e-05, + "loss": 0.3062, + "step": 1045 + }, + { + "epoch": 23.86, + "learning_rate": 1.0178558119067316e-05, + "loss": 0.2859, + "step": 1050 + }, + { + "epoch": 23.98, + "learning_rate": 8.247295264097288e-06, + "loss": 0.284, + "step": 1055 + }, + { + "epoch": 24.09, + "learning_rate": 6.518264651449779e-06, + "loss": 0.2607, + "step": 1060 + }, + { + "epoch": 24.2, + "learning_rate": 4.991818854640395e-06, + "loss": 0.3164, + "step": 1065 + }, + { + "epoch": 24.32, + "learning_rate": 3.6682691373086663e-06, + "loss": 0.2597, + "step": 1070 + }, + { + "epoch": 24.43, + "learning_rate": 2.5478853897464847e-06, + "loss": 0.2907, + "step": 1075 + }, + { + "epoch": 24.55, + "learning_rate": 1.630896073864352e-06, + "loss": 0.3033, + "step": 1080 + }, + { + "epoch": 24.66, + "learning_rate": 9.174881766043087e-07, + "loss": 0.3089, + "step": 1085 + }, + { + "epoch": 24.77, + "learning_rate": 4.078071718107701e-07, + "loss": 0.2964, + "step": 1090 + }, + { + "epoch": 24.89, + "learning_rate": 1.0195699056669839e-07, + "loss": 0.2995, + "step": 1095 + }, + { + "epoch": 25.0, + "learning_rate": 0.0, + "loss": 0.2936, + "step": 1100 + }, + { + "epoch": 25.0, + "step": 1100, + "total_flos": 5.602696856046797e+17, + "train_loss": 1.3768115234375, + "train_runtime": 24197.7873, + "train_samples_per_second": 0.724, + "train_steps_per_second": 0.045 + } + ], + "logging_steps": 5, + "max_steps": 1100, + "num_input_tokens_seen": 0, + "num_train_epochs": 25, + "save_steps": 100, + "total_flos": 5.602696856046797e+17, + "train_batch_size": 4, + "trial_name": null, + "trial_params": null +} diff --git a/training_args.bin b/training_args.bin new file mode 100644 index 
0000000000000000000000000000000000000000..ff8dbcdca96337fe706e3b8a5e49365cea791f82 --- /dev/null +++ b/training_args.bin @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fef6a3ae006ec4c51dbcf0a3e569288ca5ab1bbc97f41768934c32153b03277c +size 4920
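The `trainer_state.json` added above records the per-step loss and learning rate under `log_history`, plus a final summary record without a `loss` key. As a minimal sketch of how to pull the loss curve out of that structure (the inline sample reuses a few records from the file shown here rather than the full 1100-step history; point `state` at the real file for the whole run):

```python
import json

# Sample mirroring the trainer_state.json layout shown in the diff.
sample = """
{
  "epoch": 25.0,
  "global_step": 1100,
  "log_history": [
    {"epoch": 0.11, "learning_rate": 0.001999898043009433, "loss": 4.5094, "step": 5},
    {"epoch": 12.5, "learning_rate": 0.001, "loss": 0.9891, "step": 550},
    {"epoch": 25.0, "learning_rate": 0.0, "loss": 0.2936, "step": 1100},
    {"epoch": 25.0, "step": 1100, "train_loss": 1.3768115234375, "train_runtime": 24197.7873}
  ]
}
"""
state = json.loads(sample)

# Keep only records that carry a "loss" value; the trailing summary
# entry (train_loss / train_runtime) has no per-step loss and is skipped.
curve = [(entry["step"], entry["loss"]) for entry in state["log_history"] if "loss" in entry]

print(curve[-1])  # → (1100, 0.2936)
```

The same filter works on the full file (`json.load(open("trainer_state.json"))`), where the resulting list traces the loss falling from about 4.5 at step 5 to roughly 0.29 at step 1100.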