---
base_model: SUSTech/SUS-Chat-34B
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: Southern University of Science and Technology
model_name: SUS Chat 34B
model_type: yi
pipeline_tag: text-generation
prompt_template: '### Human: {prompt}


  ### Assistant:

  '
quantized_by: TheBloke
widget:
- example_title: SUS-Chat
  output:
    text: ' Hello! How can I assist you today?'
  text: hi
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# SUS Chat 34B - AWQ
- Model creator: [Southern University of Science and Technology](https://huggingface.co/SUSTech)
- Original model: [SUS Chat 34B](https://huggingface.co/SUSTech/SUS-Chat-34B)

<!-- description start -->
## Description

This repo contains AWQ model files for [Southern University of Science and Technology's SUS Chat 34B](https://huggingface.co/SUSTech/SUS-Chat-34B).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code (see the sketch after this list)

<!-- description end -->
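
AutoAWQ can also load these files directly from Python. A minimal sketch, assuming `autoawq>=0.1.6` (the `fuse_layers` and `safetensors` arguments exist in that release; adjust for your version):

```python
# Minimal AutoAWQ loading sketch -- assumes autoawq>=0.1.6 and a CUDA GPU
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/SUS-Chat-34B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# fuse_layers fuses attention/MLP layers for faster inference
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, safetensors=True)

tokens = tokenizer("### Human: Tell me about AI\n\n### Assistant:", return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```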
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SUS-Chat-34B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SUS-Chat-34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SUS-Chat-34B-GGUF)
* [Southern University of Science and Technology's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SUSTech/SUS-Chat-34B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: SUS

```
### Human: {prompt}

### Assistant:

```

<!-- prompt-template end -->
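
If you are assembling prompts programmatically, the template reduces to a simple wrapper. An illustrative helper (the function name is ours, not part of any API):

```python
# Illustrative helper: wrap a user message in the SUS prompt template above
def make_sus_prompt(user_message: str) -> str:
    return f"### Human: {user_message}\n\n### Assistant:"

print(make_sus_prompt("hi"))
```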

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SUS-Chat-34B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.23 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to do a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SUS-Chat-34B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SUS-Chat-34B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**; the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/SUS-Chat-34B-AWQ --quantization awq --dtype auto
```
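
Once the server is running you can send it prompts over HTTP. A sketch, assuming the demo `api_server` above is listening on its default port 8000 and exposing the `/generate` endpoint:

```shell
# Query the vLLM demo API server (port 8000 is the default; adjust if you passed --port)
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "### Human: Tell me about AI\n\n### Assistant:", "max_tokens": 128, "temperature": 0.8}'
```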

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Note: not an f-string -- the {prompt} placeholder is filled in by .format() below
prompt_template = '''### Human: {prompt}

### Assistant:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/SUS-Chat-34B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/SUS-Chat-34B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
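
As a quick smoke test you can hit TGI's `/generate` endpoint directly. A sketch, assuming the container is reachable on the port 3000 set above:

```shell
# Query TGI's /generate endpoint (port 3000 matches the --port parameter above)
curl http://localhost:3000/generate \
    -H "Content-Type: application/json" \
    -d '{"inputs": "### Human: Tell me about AI\n\n### Assistant:", "parameters": {"max_new_tokens": 128}}'
```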

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''### Human: {prompt}

### Assistant:
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted prompt, not the bare user message
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
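
To confirm the install worked, a quick sanity check (AutoAWQ's import name is `awq`; its version is read from package metadata here):

```shell
python3 -c "import transformers, importlib.metadata as m; print(m.version('autoawq'), transformers.__version__)"
```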

### Transformers example code (requires Transformers 4.35.0 or later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/SUS-Chat-34B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''### Human: {prompt}

### Assistant:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Southern University of Science and Technology's SUS Chat 34B


# 🐷SUS-Chat: Instruction tuning done right

<div align="center">

<p align="center">
<img src="https://github.com/SUSTech-IDEA/SUS-Chat/raw/main/assets/sustech.svg?sanitize=true" width="200px">
<img src="https://github.com/SUSTech-IDEA/SUS-Chat/raw/main/assets/ccnl.png?sanitize=true" width="200px">
</p>

<div style="display: inline-block;">

<a rel="noopener nofollow" href="https://github.com/SUSTech-IDEA/SUS-Chat/issues">
<img src="https://img.shields.io/github/issues/SUSTech-IDEA/SUS-Chat?logo=github" style="margin: 0 0;">
</a>

</div>

<div style="display: inline-block;">

<a href="https://huggingface.co/SUSTech">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SUSTech-blue" style="margin: 0 0;">
</a>

</div>

<div style="display: inline-block;">

<a rel="noopener nofollow" href="https://www.modelscope.cn/organization/sustc/">
<img src="https://img.shields.io/badge/ModelScope-sustc-blue" style="margin: 0 0;">
</a>

</div>

<div style="display: inline-block;">

<a rel="noopener nofollow" href="https://github.com/SUSTech-IDEA/SUS-Chat/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue" style="margin: 0 0;">
</a>

</div>

<div style="display: inline-block;">

<a rel="noopener nofollow" href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Model_Agreement-lightblue" style="margin: 0 0;">
</a>

</div>

<div style="display: inline-block;">

<a rel="noopener nofollow" href="mailto:[email protected]">
<img src="https://img.shields.io/badge/✉️[email protected]" style="margin: 0 0;">
</a>

</div>

</div>

# News

- 2023-12-05: SUS-Chat is ranked 2nd on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and has surpassed all models under 70B.

- 2023-12-01: SUS-Chat-34B is now available on Hugging Face 🤗.

# Introduction

<img src="https://hackmd.io/_uploads/HJlDtzhBa.png" id="fig-sus"
alt="Figure 1: DALL·E 2023-12-01 11.03.28 - An imposing, majestic wild boar combined with elements of a futuristic transformer robot. The boar itself should be intricately blended with these tra" />

**SUS-Chat** is a 34B bilingual Chinese-English dialogue model, jointly released by the **Southern University of Science and Technology** and the **International Digital Economy Academy**. The SUS-Chat-34B model has been fine-tuned on millions of high-quality, multilingual instruction examples. While maintaining the strong language capabilities of the base model, SUS-Chat-34B improves the model's responses to human instructions through high-quality instruction fine-tuning, and excels at imitating human thought processes through chains of thought. It introduces inter-instruction attention sharing in long texts, expanding the window size from 4K to 8K and significantly enhancing the usability of multi-round dialogues.

It has surpassed all models of the same size in almost all benchmark tests and is better suited to meet the practical needs of complex multilingual tasks. Compared to larger models, SUS-Chat-34B remains highly competitive and achieved state-of-the-art performance in our comprehensive evaluations.

SUS-Chat powerfully demonstrates that, through the right instruction fine-tuning, academic institutions can achieve better performance without increasing model parameters, using open-source datasets and models. This bridges the gap between academia and industry in large language models and opens new possibilities for collaboration between the academic and industrial sectors.

# Performance

To better evaluate the performance of the SUS-Chat-34B model, we conducted assessments across multiple benchmark tests and have open-sourced the evaluation framework [TLEM](https://huggingface.co/spaces/SUSTech/tlem) to facilitate replication and comparison by other researchers.

In TLEM, we utilized various benchmark tests including MMLU, CMMLU, C-Eval, BBH, GSM-8K, and MATH, focusing on measuring the model's knowledge and reasoning capabilities. On these metrics, the SUS-Chat-34B model achieved state-of-the-art performance. Additionally, we incorporated [lm-eval](https://github.com/EleutherAI/lm-evaluation-harness) to test SUS-Chat and similar models on winogrande, hellaswag, arc, and truthful-qa, assessing the model's common-sense reasoning ability and susceptibility to hallucinations.
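
For reproduction, a hedged sketch of an lm-evaluation-harness run (the CLI shape follows recent harness releases; the exact task identifiers and flags here are assumptions, so check the harness documentation):

```shell
# Sketch of an lm-eval run -- task names and flags are assumptions, not the exact commands used
lm_eval --model hf \
    --model_args pretrained=SUSTech/SUS-Chat-34B,dtype=auto \
    --tasks winogrande,hellaswag,arc_challenge,truthfulqa \
    --batch_size 8
```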

Overall, the SUS-Chat-34B model significantly outperformed models of similar scale and achieved the most advanced comprehensive performance.

| model | mmlu-chat | cmmlu-chat | ceval-chat | gsm8k | BBH | MATH | winogrande | arc | hellaswag | truthfulqa | average |
|:------------------|----------:|-----------:|-----------:|------:|------:|------:|-----------:|------:|----------:|-----------:|--------:|
| GPT-4 | 83 | 71 | 69.9 | 91.4 | 86.7 | 45.8 | 87.5 | 94.5 | 91.4 | nan | 80.1333 |
| SUS-Chat-34B | 77.35 | 78.68 | 82.42 | 80.06 | 67.62 | 28.8 | 81.22 | 81.54 | 83.79 | 57.47 | 71.895 |
| Qwen-72B-Chat | 74.52 | 77.02 | 77.22 | 76.57 | 72.63 | 35.9 | 80.58 | 81.29 | 87.02 | 50.64 | 71.339 |
| DeepSeek-67B-Chat | 69.43 | 48.51 | 59.7 | 74.45 | 69.73 | 29.56 | 76.09 | 82.1 | 86.06 | 56.37 | 65.2 |
| OrionStar-34B | 68.51 | 66.88 | 65.13 | 54.36 | 62.88 | 12.8 | 77.27 | 80.19 | 84.54 | 53.24 | 62.58 |
| Yi-34B-Chat | 66.96 | 55.16 | 77.16 | 63.76 | 61.54 | 10.02 | 76.64 | 70.66 | 82.29 | 54.57 | 61.876 |

<img src="https://github.com/SUSTech-IDEA/SUS-Chat/raw/main/assets/radar.png" id="fig-bench" alt="Figure 2: Benchmark" />

# Usage

SUS-Chat-34B is a standard LLaMA-architecture model and should be seamlessly compatible with the LLaMA ecosystem. We provide the following example to demonstrate how it can be used for multi-turn dialogues.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


# Build the SUS prompt from a list of chat messages.
# Note: the match statement requires Python 3.10 or later.
def chat_template(messages):
    history = ""
    for message in messages:
        match message:
            case {"role": "user", "content": message}:
                history += f"### Human: {message}\n\n### Assistant: "
            case {"role": "assistant", "content": message}:
                history += message
    return history


model_path = "SUSTech/SUS-Chat-34B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

messages = [{"role": "user", "content": "hi"}]

input_ids = tokenizer.encode(chat_template(messages), return_tensors="pt").to("cuda")
output_ids = model.generate(input_ids)
# Decode only the newly generated tokens, not the prompt
response = tokenizer.decode(
    output_ids[0][input_ids.shape[1]:], skip_special_tokens=True
)

messages.append({"role": "assistant", "content": response})

# Second round

messages.append({"role": "user", "content": "What is the capital of China?"})

input_ids = tokenizer.encode(chat_template(messages), return_tensors="pt").to("cuda")
output_ids = model.generate(input_ids)
response = tokenizer.decode(
    output_ids[0][input_ids.shape[1]:], skip_special_tokens=True
)

messages.append({"role": "assistant", "content": response})
```

# Limitations

SUS-Chat has only undergone supervised fine-tuning and has not yet been trained on human preference learning. As a result, it may produce unreasonable responses in some situations and exacerbate existing issues in language models, including hallucinations, non-determinism, and cumulative errors. To achieve better performance for downstream tasks, we recommend adjusting the generation configuration parameters accordingly.
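
For instance, continuing the usage example above, generation parameters can be overridden per call. The specific values below are illustrative starting points, not tuned recommendations:

```python
# Illustrative generation settings -- values are starting points, not tuned recommendations
output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1,
)
```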

# Disclaimer

During the training process, we used data compliance check algorithms to ensure the compliance of the trained model as much as possible. Due to the complexity of the data and the diverse use cases of language models, we cannot guarantee that the model will produce correct and reasonable outputs in all scenarios. Please be aware that there is still a risk of the model generating problematic outputs. We will not be responsible for any risks or issues arising from misuse, misguidance, illegal use, and related misinformation, or from data security issues related to the model.

# License

This model is developed entirely for academic research and free commercial use, but it must adhere to the [license](https://github.com/SUSTech-IDEA/SUS-Chat/blob/main/MODEL_LICENSE_AGREEMENT.txt) from 01-ai.