Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


bilingual-gpt-neox-4b-instruction-sft - bnb 8bits
- Model creator: https://huggingface.co/rinna/
- Original model: https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft/


Original model description:
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: mit
datasets:
- Anthropic/hh-rlhf
language:
- ja
- en
inference: false
base_model: rinna/bilingual-gpt-neox-4b
---

# bilingual-gpt-neox-4b-instruction-sft

![rinna-icon](./rinna.png)

---
# Update

- **2023/08/02** We uploaded the newly trained `rinna/bilingual-gpt-neox-4b-instruction-sft` under the MIT license.
    - If you have already downloaded the previous model released on 2023/07/31, please refrain from using it for commercial purposes.
    - The new model released on 2023/08/02 is built from datasets with less restrictive licenses and shows better evaluation performance, so we recommend using the new model.
    - For reference, here are the MD5 checksums of the `pytorch_model.bin` files of the previous and current models.
        - 2023/07/31 model: `edf190a323c0ae63f71476700fb0b462`
        - 2023/08/02 model: `de72aa5b66beee7b65783c96f687d186`
- **2023/07/31** We found that part of the training data of the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft` (i.e. Openchat ShareGPT4 and WizardLM) carries a non-commercial license, and thus the model does not comply with **the MIT license**. We have removed the previous version and will build a new SFT model from datasets with less restrictive licenses; it will be uploaded in a few days. We sincerely apologize for our careless mistake.

---

# Overview
This repository provides an English-Japanese bilingual GPT-NeoX model with 3.8 billion parameters.

The model is based on [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b) and has been fine-tuned to serve as an instruction-following conversational agent.

* **Model architecture**

    A 36-layer, 2816-hidden-size transformer-based language model.

* **Fine-tuning**

    The fine-tuning data is a subset of the following datasets.
    * [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation
    * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation

* **Model Series**

    | Variant | Link |
    | :-- | :-- |
    | Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 |
    | Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo |
    | Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft |
    | Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k |
    | Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b |
    | Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo |
    | Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 |
    | Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft |
    | Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b |

* **Contributors**

    [Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Our evaluation experiments suggest that the bilingual-gpt-neox-4b-instruction-sft model performs slightly better than the previous [Japanese GPT-NeoX 3.6B PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) on Japanese tasks.

- *The 4-task average accuracy is based on the results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.*
- *The 6-task average accuracy is based on the results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.*

| Model | 4-task average accuracy | 6-task average accuracy |
| :-- | :-- | :-- |
| bilingual-gpt-neox-4b-instruction-ppo | 61.01 | 61.16 |
| **bilingual-gpt-neox-4b-instruction-sft** | **61.02** | **61.69** |
| bilingual-gpt-neox-4b | 56.12 | 51.83 |
| japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 |
| japanese-gpt-neox-3.6b | 55.07 | 50.32 |

---

# I/O Format
A special format has been adopted to construct inputs.
* An input prompt is formatted as a conversation between `ユーザー` and `システム`.
* Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) the utterance text (e.g. `"世界で一番高い山は?"`).
* The input prompt should end with `"システム: "` to signal the model to generate a response.
* All utterances in the input prompt should be separated by a newline `\n`.

Following is an example of constructing an input prompt from a conversation.
~~~python
prompt = [
    {
        "speaker": "ユーザー",
        "text": "Hello, you are an assistant that helps me learn Japanese."
    },
    {
        "speaker": "システム",
        "text": "Sure, what can I do for you?"
    },
    {
        "speaker": "ユーザー",
        "text": "VRはなんですか。"
    }
]
prompt = [
    f"{uttr['speaker']}: {uttr['text']}"
    for uttr in prompt
]
prompt = "\n".join(prompt)
prompt = (
    prompt
    + "\n"
    + "システム: "
)
print(prompt)
"""
ユーザー: Hello, you are an assistant that helps me learn Japanese.
システム: Sure, what can I do for you?
ユーザー: VRはなんですか。
システム: 
"""
~~~
+
141
+ ---
142
+
143
+ # How to use the model
144
+
145
+ **Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task.
146
+
147
+ ~~~~python
148
+ import torch
149
+ from transformers import AutoTokenizer, AutoModelForCausalLM
150
+
151
+ tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft", use_fast=False)
152
+ model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft")
153
+
154
+ if torch.cuda.is_available():
155
+ model = model.to("cuda")
156
+
157
+ token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
158
+
159
+ with torch.no_grad():
160
+ output_ids = model.generate(
161
+ token_ids.to(model.device),
162
+ max_new_tokens=512,
163
+ do_sample=True,
164
+ temperature=1.0,
165
+ top_p=0.85,
166
+ pad_token_id=tokenizer.pad_token_id,
167
+ bos_token_id=tokenizer.bos_token_id,
168
+ eos_token_id=tokenizer.eos_token_id
169
+ )
170
+
171
+ output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
172
+ print(output)
173
+ """VRとはVirtual Realityの略で、仮想現実とも呼ばれます。これは、コンピューターを使用して仮想世界を作り出し、仮想世界上でコンピューターのゲームや仮想世界を体験するための技術です。この技術は、コンピューターやモバイ ルデバイスの進歩によって、2015年以降、ますます普及しています。VRは、ゲームや仮想世界、その他のアプリケー ションなどのさまざまな分野で、コンピューターと人間の相互作用の新しい方法を提供しています。</s>"""
174
+ ~~~~
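Multi-turn use follows the same pattern: append the model's reply to the conversation history and rebuild the prompt each turn. A minimal sketch with the model call abstracted as `generate_fn`, a hypothetical stand-in for the tokenize/generate/decode snippet above (any `str -> str` callable works):

```python
# A minimal multi-turn sketch. `generate_fn` is a hypothetical stand-in for
# the model call; in practice it would tokenize the prompt, call
# model.generate, and decode the continuation.
def chat_turn(history, user_text, generate_fn):
    """Append the user utterance, rebuild the prompt, and record the reply."""
    history.append({"speaker": "ユーザー", "text": user_text})
    prompt = "\n".join(f"{u['speaker']}: {u['text']}" for u in history) + "\nシステム: "
    reply = generate_fn(prompt)
    history.append({"speaker": "システム", "text": reply})
    return reply

# Example with a dummy generator standing in for the real model call:
history = []
chat_turn(history, "VRはなんですか。", lambda p: "(model reply)")
```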

---

# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 65,536.
* It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces, which avoids producing `<UNK>` tokens.
* It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured text better.
* We turned off the default behaviour of prepending a leading whitespace because it is not beneficial for processing Japanese.
    * Specifically, a single whitespace is always processed as one token, so English words do not carry a preceding whitespace marker as in many other tokenizers (e.g. `_Hello`).
    * This decision trades English processing efficiency for a unified way of treating whitespaces.
    * It also leads to a significantly lower next-token-prediction loss on English data, because whitespaces are easy to predict.
* **Don't forget to set `use_fast=False` to make the above features function correctly.**
+
189
+ ---
190
+
191
+ # How to cite
192
+ ```bibtex
193
+ @misc{rinna-bilingual-gpt-neox-4b-instruction-sft,
194
+ title = {rinna/bilingual-gpt-neox-4b-instruction-sft},
195
+ author = {Zhao, Tianyu and Sawada, Kei},
196
+ url = {https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft}
197
+ }
198
+
199
+ @inproceedings{sawada2024release,
200
+ title = {Release of Pre-Trained Models for the {J}apanese Language},
201
+ author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
202
+ booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
203
+ month = {5},
204
+ year = {2024},
205
+ pages = {13898--13905},
206
+ url = {https://aclanthology.org/2024.lrec-main.1213},
207
+ note = {\url{https://arxiv.org/abs/2404.01657}}
208
+ }
209
+ ```

---

# License
[The MIT license](https://opensource.org/licenses/MIT)
+