<div align="center">

<p align="center">
<img src="https://github.com/01-ai/Yi/raw/main/assets/img/Yi.svg?sanitize=true" width="200px">
</p>

<a href="https://github.com/01-ai/Yi/actions/workflows/ci.yml">
<img src="https://github.com/01-ai/Yi/actions/workflows/ci.yml/badge.svg">
</a>
<a href="https://huggingface.co/01-ai">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-01--ai-blue">
</a>
<a href="https://www.modelscope.cn/organization/01ai/">
<img src="https://img.shields.io/badge/ModelScope-01--ai-blue">
</a>
<a href="https://github.com/01-ai/Yi/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue">
</a>
<a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Model_Agreement-lightblue">
</a>
<a href="mailto:[email protected]">
<img src="https://img.shields.io/badge/✉️[email protected]">
</a>

</div>

## Introduction

The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/).

## News

<details open>
<summary>🔥 <b>2023/11/08</b>: Invited test of the Yi-34B chat model.</summary>

Application form:

- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)

</details>

<details>
<summary>🎯 <b>2023/11/05</b>: The base models of <code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>.</summary>

This release contains two base models with the same parameter sizes as the
previous release, except that the context window is extended to 200K.

</details>

<details>
<summary>🎯 <b>2023/11/02</b>: The base models of <code>Yi-6B</code> and <code>Yi-34B</code>.</summary>

The first public release contains two bilingual (English/Chinese) base models
with parameter sizes of 6B and 34B. Both are trained with a 4K sequence
length, which can be extended to 32K at inference time.

</details>

## Model Performance

| Model         | MMLU     | CMMLU    | C-Eval   | GAOKAO   | BBH      | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
|               | 5-shot   | 5-shot   | 5-shot   | 0-shot   | 3-shot@1 | -                      | -                     | -           |
| LLaMA2-34B    | 62.6     | -        | -        | -        | 44.1     | 69.9                   | 68.0                  | 26.0        |
| LLaMA2-70B    | 68.9     | 53.3     | -        | 49.8     | 51.2     | 71.9                   | 69.4                  | 36.8        |
| Baichuan2-13B | 59.2     | 62.0     | 58.1     | 54.3     | 48.8     | 64.3                   | 62.4                  | 23.0        |
| Qwen-14B      | 66.3     | 71.0     | 72.1     | 62.5     | 53.4     | 73.3                   | 72.5                  | **39.8**    |
| Skywork-13B   | 62.1     | 61.8     | 60.6     | 68.1     | 41.7     | 72.4                   | 61.4                  | 24.9        |
| InternLM-20B  | 62.1     | 59.0     | 58.8     | 45.5     | 52.5     | 78.3                   | -                     | 30.4        |
| Aquila-34B    | 67.8     | 71.4     | 63.1     | -        | -        | -                      | -                     | -           |
| Falcon-180B   | 70.4     | 58.0     | 57.8     | 59.0     | 54.0     | 77.3                   | 68.8                  | 34.0        |
| Yi-6B         | 63.2     | 75.5     | 72.0     | 72.2     | 42.8     | 72.3                   | 68.7                  | 19.8        |
| Yi-6B-200K    | 64.0     | 75.3     | 73.5     | 73.9     | 42.0     | 72.0                   | 69.1                  | 19.0        |
| **Yi-34B**    | **76.3** | **83.7** | 81.4     | 82.8     | **54.3** | **80.1**               | 76.4                  | 37.1        |
| Yi-34B-200K   | 76.1     | 83.6     | **81.9** | **83.4** | 52.7     | 79.7                   | **76.6**              | 36.3        |

While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon investigating this difference more deeply, we found that
models may employ different prompts, post-processing strategies, and sampling
techniques, potentially resulting in significant variations in the outcomes.
Our prompt and post-processing strategy remains consistent with the original
benchmark, and greedy decoding is employed during evaluation, without any
post-processing of the generated content. For scores that were not reported by
the original authors (including scores reported with different settings), we
attempted to obtain results with our pipeline.

To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag,
WinoGrande, ARC, OBQA, and CSQA to assess common-sense reasoning. SQuAD, QuAC,
and BoolQ were incorporated to evaluate reading comprehension. CSQA was tested
exclusively with a 7-shot setup, while all other tests were conducted with a
0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH
(4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category
"Math & Code". Due to technical constraints, we did not test Falcon-180B on
QuAC and OBQA; its score is derived by averaging the scores on the remaining
tasks. Since the scores for these two tasks are generally lower than the
average, we believe that Falcon-180B's performance was not underestimated.
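
The "average of the remaining tasks" rule used for Falcon-180B above can be
sketched as follows. This is an illustration only; the function name and the
scores are hypothetical placeholders, not numbers from our pipeline:

```python
def category_score(task_scores):
    """Average the available task scores, skipping tasks that could not
    be run (represented here as None)."""
    available = [s for s in task_scores.values() if s is not None]
    return sum(available) / len(available)

# Hypothetical reading-comprehension scores with QuAC unavailable:
reading = {"SQuAD": 70.0, "QuAC": None, "BoolQ": 80.0}
print(category_score(reading))
```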

## Usage

Feel free to [create an issue](https://github.com/01-ai/Yi/issues/new) if you
encounter any problems when using the **Yi** series models.

### 1. Prepare development environment

The best way to try the **Yi** series models is through Docker with GPUs. We
provide the following Docker image to help you get started.

- `registry.lingyiwanwu.com/ci/01-ai/yi:latest`

Note that the `latest` tag always points to the latest code in the `main`
branch. To test a stable version, please replace it with a specific
[tag](https://github.com/01-ai/Yi/tags).

If you prefer to use your local development environment, first create a
virtual environment and clone this repo. Then install the dependencies with
`pip install -r requirements.txt`. For the best performance, we recommend that
you also install the latest version (`>=2.3.3`) of
[flash-attention](https://github.com/Dao-AILab/flash-attention#installation-and-features).
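
Since flash-attention is optional, it can be handy to confirm that your
environment actually picks it up. A minimal sketch (the helper name is ours;
we assume the package imports as `flash_attn`):

```python
import importlib.util

def flash_attn_available() -> bool:
    """Report whether the optional flash-attn package can be imported."""
    return importlib.util.find_spec("flash_attn") is not None

print("flash-attn installed:", flash_attn_available())
```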

### 2. Download the model (optional)

By default, the model weights and tokenizer will be downloaded automatically
from [HuggingFace](https://huggingface.co/01-ai) in the next step. You can
also download them manually from the following places:

- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/models) (search for `Yi`)
- Mirror site (remember to extract the content with `tar`)
  - [Yi-6B.tar](https://storage.lingyiwanwu.com/yi/models/Yi-6B.tar)
  - [Yi-6B-200K.tar](https://storage.lingyiwanwu.com/yi/models/Yi-6B-200K.tar)
  - [Yi-34B.tar](https://storage.lingyiwanwu.com/yi/models/Yi-34B.tar)
  - [Yi-34B-200K.tar](https://storage.lingyiwanwu.com/yi/models/Yi-34B-200K.tar)
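
If you downloaded one of the mirror archives above, the extraction step can be
scripted. A minimal sketch using Python's standard `tarfile` module (the
function name and paths are ours, not part of this repo):

```python
import tarfile
from pathlib import Path

def extract_model(tar_path: str, dest: str) -> Path:
    """Extract a downloaded model archive (e.g. Yi-6B.tar) into dest."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest_dir)
    return dest_dir

# e.g. extract_model("Yi-6B.tar", "models/Yi-6B") after downloading from the mirror
```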

### 3. Examples

#### 3.1 Use the base model

```bash
python demo/text_generation.py
```

To reuse the models downloaded in the previous step, you can provide the extra
`--model` argument:

```bash
python demo/text_generation.py --model /path/to/model
```

Or if you'd like to get your hands dirty:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", trust_remote_code=True)
inputs = tokenizer("There's a place where time stands still. A place of breath taking wonder, but also", return_tensors="pt")
max_length = 256

outputs = model.generate(
    inputs.input_ids.cuda(),
    max_length=max_length,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    repetition_penalty=1.3,
    no_repeat_ngram_size=5,
    temperature=0.7,
    top_k=40,
    top_p=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

<details>

<summary>Output</summary>

**Prompt**: There's a place where time stands still. A place of breath taking wonder, but also

**Generation**: There's a place where time stands still. A place of breath taking wonder, but also of great danger. A place where the very air you breathe could kill you. A place where the only way to survive is to be prepared.
The place is called the Arctic.
The Arctic is a vast, frozen wilderness. It is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end.
The Arctic is also a place of great beauty. The ice and snow are a pristine white. The sky is a deep blue. The sunsets are spectacular.
But the Arctic is also a place of great danger. The ice can be treacherous. The winds can be deadly. The sun can be blinding.
The Arctic is a place where the only way to survive is to be prepared.
The Arctic is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end.
The Arctic is a place of great beauty. The ice and snow are a

</details>
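
To make the sampling arguments passed to `model.generate` concrete, here is a
toy, self-contained sketch of how `temperature`, `top_k`, and `top_p` interact.
This is an illustration only, not the `transformers` implementation; the
function name is ours:

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=40, top_p=0.8, rng=None):
    """Toy temperature + top-k + top-p (nucleus) sampling over a
    {token: logit} dict. Seeded by default for reproducibility."""
    rng = rng or random.Random(0)
    # Temperature: <1 sharpens the distribution, >1 flattens it.
    scaled = {t: l / temperature for t, l in logits.items()}
    # Top-k: keep only the k highest-scoring tokens.
    kept = sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Softmax over the kept tokens (subtract max for stability).
    m = max(l for _, l in kept)
    exps = [(t, math.exp(l - m)) for t, l in kept]
    z = sum(e for _, e in exps)
    probs = [(t, e / z) for t, e in exps]
    # Top-p: keep the smallest prefix whose cumulative mass reaches p.
    nucleus, cum = [], 0.0
    for t, p in probs:
        nucleus.append((t, p))
        cum += p
        if cum >= top_p:
            break
    # Draw from the renormalized nucleus.
    z = sum(p for _, p in nucleus)
    r, cum = rng.random(), 0.0
    for t, p in nucleus:
        cum += p / z
        if r <= cum:
            return t
    return nucleus[-1][0]
```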

For more advanced usage, please refer to the
[doc](https://github.com/01-ai/Yi/tree/main/demo).

#### 3.2 Finetune from the base model

```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```

Once finished, you can compare the finetuned model and the base model with the
following command:

```bash
bash finetune/scripts/run_eval.sh
```

For more advanced usage, such as fine-tuning based on your custom data, please
refer to the [doc](https://github.com/01-ai/Yi/tree/main/finetune).

#### 3.3 Quantization

##### GPT-Q

```bash
python quantization/gptq/quant_autogptq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

For a more detailed explanation, please read the
[doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq).

##### AWQ

```bash
python quantization/awq/quant_autoawq.py \
  --model /base_model \
  --output_dir /quantized_model \
  --trust_remote_code
```

Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/awq/eval_quantized_model.py \
  --model /quantized_model \
  --trust_remote_code
```

For a more detailed explanation, please read the
[doc](https://github.com/01-ai/Yi/tree/main/quantization/awq).
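
For intuition, here is a toy sketch of the basic idea behind 4-bit weight
quantization: scale weights into the int4 range and round. GPT-Q and AWQ are
far more sophisticated (they work per-group and calibrate scales to minimize
layer output error), so treat this only as an illustration; the function names
are ours:

```python
def quantize_4bit(weights):
    """Toy symmetric quantization of a list of floats to int4 codes (-8..7).
    The largest |w| is mapped to +/-7, so every weight rounds to within
    half a scale step of its original value."""
    scale = (max(abs(w) for w in weights) or 1.0) / 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int4 codes."""
    return [v * scale for v in q]
```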

## Disclaimer

We use data compliance checking algorithms during the training process to
ensure the compliance of the trained model to the best of our ability. Given
the complexity of the data and the diversity of language model usage
scenarios, we cannot guarantee that the model will generate correct and
reasonable output in all scenarios. Please be aware that there is still a risk
of the model producing problematic outputs. We will not be responsible for any
risks or issues resulting from misuse, misguidance, illegal usage, or related
misinformation, nor for any associated data security concerns.

## License

The source code in this repo is licensed under the [Apache 2.0
license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models
are fully open for academic research and free for commercial use with
permission granted via application. All usage must adhere to the [Model
License Agreement 2.0](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
To apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).