---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---

# Baichuan-13B-Chat

*The weight files are split into 650MB chunks for convenient and fast parallel downloads.*

A 650MB-split-weight version of [baichuan-inc/Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat).
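
The split shards can be fetched in parallel with `huggingface_hub`. A minimal sketch; the `repo_id` below is an assumption for illustration, so adjust it to this repository's actual id:

```python
# Minimal download sketch: fetch the 650MB shards concurrently.
# NOTE: the repo_id is an assumption for illustration.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="sharpbai/Baichuan-13B-Chat",
    max_workers=8,  # number of shard downloads to run in parallel
)
print(local_dir)  # local cache directory containing the sharded weights
```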

The original model card follows below.

-----------------------------------------

# Baichuan-13B-Chat

<!-- Provide a quick summary of what the model is/does. -->

## Introduction

Baichuan-13B-Chat is the aligned version in the Baichuan-13B series of models; the pre-trained model is available at [Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base).

[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model with 13 billion parameters, developed by Baichuan Intelligence as the successor to [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). It achieves the best results among models of its size on authoritative Chinese and English benchmarks. This release includes two versions: the pre-trained model ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) and the aligned model ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)). Baichuan-13B has the following features:

1. **Larger size, more data**: Baichuan-13B further expands the parameter count to 13 billion on the basis of [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) and was trained on 1.4 trillion tokens of high-quality corpora, 40% more than LLaMA-13B and the most training data of any open-source 13B model to date. It supports both Chinese and English, uses ALiBi position encoding, and has a context window of 4096 tokens.
2. **Pre-training and alignment models open-sourced together**: The pre-trained model is a "base" aimed at developers, while most end users need an aligned model with dialogue capabilities. This release therefore also includes the aligned model (Baichuan-13B-Chat), which has strong dialogue capabilities, works out of the box, and can be deployed with just a few lines of code.
3. **More efficient inference**: To reach a wider range of users, INT8 and INT4 quantized versions are also open-sourced. With almost no loss in quality, they sharply lower the hardware requirements for deployment, so the model can run on consumer GPUs such as the Nvidia 3090 (see the sketch after this list).
4. **Open-source, free, and commercially usable**: Baichuan-13B is fully open for academic research, and developers can also use it commercially for free after applying by email and obtaining official commercial permission.
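
For the INT8/INT4 path, the Baichuan-13B repository describes an in-place `quantize()` helper exposed by the custom modeling code loaded via `trust_remote_code`. A minimal sketch under that assumption:

```python
# Sketch of INT8 deployment; quantize() is provided by the repository's
# custom modeling code (trust_remote_code=True), assumed available here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat", torch_dtype=torch.float16, trust_remote_code=True
)
model = model.quantize(8).cuda()  # pass 4 instead of 8 for the INT4 variant
```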

## Usage

Below is an example of a Chinese conversation with Baichuan-13B-Chat. The expected output is "乔戈里峰。世界第二高峰———乔戈里峰西方登山者称其为k2峰,海拔高度是8611米,位于喀喇昆仑山脉的中巴边境上" (K2, the world's second highest peak, 8,611 m, on the China-Pakistan border in the Karakoram Range).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")
messages = []
messages.append({"role": "user", "content": "世界上第二高的山峰是哪座"})  # "Which is the second highest mountain in the world?"
response = model.chat(tokenizer, messages)
print(response)
```

Here is the same example in English; the expected output is "K2. The world's second highest peak - K2, also known as Mount Godwin-Austen or Chhogori, with an altitude of 8611 meters, is located on the China-Pakistan border in the Karakoram Range."
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")
messages = []
messages.append({"role": "user", "content": "Which mountain is the second highest one in the world?"})
response = model.chat(tokenizer, messages)
print(response)
```
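
The `messages` list carries the dialogue history, so multi-turn chat is a matter of appending turns. A short continuation sketch, assuming `chat()` consumes the accumulated history in this role/content format:

```python
# Continue the dialogue (sketch): keep the assistant's reply in the history,
# then append the next user turn before calling chat() again.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "And how high is the highest one?"})
print(model.chat(tokenizer, messages))
```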

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Baichuan Intelligent Technology (百川智能)
- **Email:** [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** Community License for Baichuan-13B Model ([ZH](Baichuan-13B%20%E6%A8%A1%E5%9E%8B%E5%95%86%E7%94%A8%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) | [EN](Community%20License%20for%20Baichuan-13B%20Model.pdf))

**For commercial use:** Contact us via the email above to apply for written authorization.

### Model Architecture

<!-- Provide the basic links for the model. -->

The overall model is based on Baichuan-7B. To achieve better inference performance, Baichuan-13B uses ALiBi linear biases, which require less computation than Rotary Embedding and significantly improve inference performance. Compared with the standard LLaMA-13B, the measured average inference speed (tokens/s) when generating 2000 tokens is 31.6% higher:

| Model | tokens/s |
|-------------|----------|
| LLaMA-13B | 19.4 |
| Baichuan-13B| 25.4 |

The specific parameters are as follows:

| Model Name | Hidden Size | Num Layers | Num Attention Heads | Vocab Size | Total Params | Training Data (tokens) | Position Embedding | Max Length |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2 trillion | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4 trillion | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 |
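
For intuition: ALiBi adds a fixed, head-specific linear penalty to the attention logits instead of rotating query/key vectors, so the bias can be precomputed once per sequence length. A small illustrative sketch of the bias from the ALiBi paper (not the repository's implementation):

```python
# Illustrative ALiBi bias computation (not the repository's implementation).
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Geometric per-head slopes from the ALiBi paper: 2**(-8*(i+1)/n) for head i.
    # (The paper interpolates when n_heads is not a power of two; omitted here.)
    return torch.tensor([2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Bias added to attention logits: slope * -(query-key distance).
    # Shape (n_heads, seq_len, seq_len); the causal mask is applied separately.
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).clamp(max=0)  # non-positive for past keys
    return alibi_slopes(n_heads)[:, None, None] * distance[None, :, :]

bias = alibi_bias(n_heads=40, seq_len=16)  # Baichuan-13B uses 40 heads, 4096 context
```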

## Usage Notice

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Disclaimer

We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national or social security or are illegal. We also ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. We hope that all users will adhere to these principles to ensure that technological development takes place in a regulated and legal environment.

We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited.

## Training Details

For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B).

## Evaluation Results

### [C-Eval](https://cevalbenchmark.com/index.html#home)

| Model 5-shot | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 |
| Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 |
| Chinese-LLaMA-Plus-13B | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 |
| Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 |
| LLaMA-13B | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 |
| moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 |
| vicuna-13B | 22.8 | 24.8 | 22.3 | 18.5 | 22.2 |
| **Baichuan-13B-Base** | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** |
| **Baichuan-13B-Chat** | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** |

### [MMLU](https://arxiv.org/abs/2009.03300)

| Model 5-shot | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| LLaMA-13B | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 |
| Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 |
| Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 |
| Baichuan-7B | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 |
| Chinese-LLaMA-Plus-13B | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 |
| vicuna-13B | 24.2 | 24.1 | 24.6 | 26.8 | 24.9 |
| moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 |
| **Baichuan-13B-Base** | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** |
| **Baichuan-13B-Chat** | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** |
> Note: We used the official MMLU [evaluation scheme](https://github.com/hendrycks/test).

### [CMMLU](https://github.com/haonan-li/CMMLU)

| Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average |
|-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:|
| Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 |
| Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 |
| Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 |
| Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 |
| LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 |
| moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 |
| vicuna-13B | 24.0 | 25.4 | 25.3 | 25.0 | 25.0 | 24.9 |
| **Baichuan-13B-Base** | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** |
| **Baichuan-13B-Chat** | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** |
> Note: CMMLU is a comprehensive Chinese evaluation benchmark designed to assess the knowledge and reasoning abilities of language models in a Chinese context. We used its official [evaluation scheme](https://github.com/haonan-li/CMMLU).

## WeChat Group
![WeChat](https://github.com/baichuan-inc/Baichuan-13B/blob/main/media/wechat.jpeg?raw=true)