---
license: creativeml-openrail-m
language:
- en
tags:
- LLM
- tensorRT
- chatGLM
---
## Model Card for lyraChatGLM

lyraChatGLM is currently the **fastest ChatGLM-6B** available. To the best of our knowledge, it is also the **first accelerated version of ChatGLM-6B**.

The inference speed of lyraChatGLM is **10x** faster than the original version, and we're still working to improve the performance.

Among its main features are:

- weights: original ChatGLM-6B weights released by THUDM.
- device: lyraChatGLM is built mainly on FasterTransformer kernels compiled for SM=80 GPUs (the Nvidia A100, for example); a quick compatibility check is sketched below.
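
Since the shipped kernels only run on SM=80-class hardware, it can be worth verifying the GPU before loading the model. A minimal sketch using PyTorch (the `>= (8, 0)` threshold is our reading of the SM=80 requirement above):

```python
import torch

# The FasterTransformer kernels are compiled for SM=80 (Ampere),
# so check the compute capability of the visible GPU up front.
major, minor = torch.cuda.get_device_capability(0)
assert (major, minor) >= (8, 0), f"lyraChatGLM needs SM>=80, got SM={major}{minor}"
```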

## Speed

### test environment

- device: Nvidia A100 40G

|version|speed|
|:-:|:-:|
|original|30 tokens/s|
|lyraChatGLM|310 tokens/s|
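
That is 310 vs. 30 tokens/s, a roughly 10.3x speedup, which is where the headline 10x figure comes from.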


## Model Sources

- **Repository:** [https://huggingface.co/THUDM/chatglm-6b](https://huggingface.co/THUDM/chatglm-6b)

## Uses
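
The snippet below loads the tokenizer from the original weights, builds the FasterTransformer kernel from a pre-compiled `.ftm` plan, and generates a batch of responses: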

```python
from transformers import AutoTokenizer
from faster_chat_glm import GLM6B, FasterChatGLM

# directory holding the original ChatGLM-6B weights (adjust to your setup)
chatglm6b_dir = './models'

tokenizer = AutoTokenizer.from_pretrained(chatglm6b_dir, trust_remote_code=True)

BATCH_SIZE = 8
MAX_OUT_LEN = 50

# prepare input: the prompt asks "What factors should music recommendation
# consider? Write me a proposal of at least 800 words."
input_str = ["音乐推荐应该考虑哪些因素?帮我写一篇不少于800字的方案。 ", ] * BATCH_SIZE
inputs = tokenizer(input_str, return_tensors="pt", padding=True)
input_ids = inputs.input_ids.to('cuda:0')

# FasterTransformer kernel for the chat model;
# the .ftm plan file must match BATCH_SIZE
kernel = GLM6B(plan_path=f"./models/glm6b-bs{BATCH_SIZE}.ftm",
               batch_size=BATCH_SIZE,
               num_beams=1,
               use_cache=True,
               num_heads=32,
               emb_size_per_heads=128,
               decoder_layers=28,
               vocab_size=150528,
               max_seq_len=MAX_OUT_LEN)
chat = FasterChatGLM(model_dir=chatglm6b_dir, kernel=kernel).half().cuda()

# generate
sample_output = chat.generate(inputs=input_ids, max_length=MAX_OUT_LEN)
# de-tokenize the first sequence of the batch back to text
res = tokenizer.decode(sample_output[0], skip_special_tokens=True)
print(res)
```
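
Since the kernel processes the whole batch, each row of `sample_output` holds one generated sequence. A short extension of the snippet above that decodes all of them:

```python
# decode every sequence in the batch, not just the first
for i, seq in enumerate(sample_output):
    print(f"--- sample {i} ---")
    print(tokenizer.decode(seq, skip_special_tokens=True))
```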
## Demo output

### input
音乐推荐应该考虑哪些因素?帮我写一篇不少于800字的方案。 (What factors should music recommendation consider? Write me a proposal of at least 800 words.)

### output
音乐推荐是音乐爱好者们经常面临的问题。一个好的音乐推荐应该能够根据用户的需求和喜好,推荐出符合他们口味的音乐。本文将探讨音乐 (Music recommendation is a problem music lovers often face. A good recommendation should suggest music that matches a user's tastes based on their needs and preferences. This article will explore music...; the output is cut off here because the example sets MAX_OUT_LEN to 50 tokens.)



## Environment

- hardware: Nvidia Ampere architecture (A100) or compatible
- docker image available: https://hub.docker.com/r/bigmoyan/lyra_aigc/tags
```
docker pull bigmoyan/lyra_aigc:v0.1
```
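
To run a container with GPU access (a typical `docker run` invocation; the volume-mount path is our assumption, point it at wherever your weights and `.ftm` plans live):

```
docker run --gpus all -it -v $(pwd)/models:/workspace/models bigmoyan/lyra_aigc:v0.1
```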

## Citation
``` bibtex
@Misc{lyraChatGLM2023,
  author =       {Kangjian Wu and Zhengtao Wang and Bin Wu},
  title =        {lyraChatGLM: Accelerating ChatGLM by 10x+},
  howpublished = {\url{https://huggingface.co/TMElyralab/lyraChatGLM}},
  year =         {2023}
}
```

## Report bugs
- Start a discussion to report any bugs: https://huggingface.co/TMElyralab/lyraChatGLM/discussions
- Mark the title with `[bug]` when reporting.