unsubscribe committed on
Commit e67707d · 1 Parent(s): efb33e9

Update README.md

Files changed (1)
  1. README.md +225 -1
README.md CHANGED
@@ -5,6 +5,230 @@ colorFrom: indigo
  colorTo: pink
  sdk: static
  pinned: false
+ license: apache-2.0
  ---
- Edit this `README.md` markdown file to author your organization card.

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ccdc322e592905f922a06e/VhwQtaklohkUXFWkjA-3M.png" width="450"/>

English | [简体中文](README_zh-CN.md)

</div>

<p align="center">
👋 join us on <a href="https://twitter.com/intern_lm" target="_blank">Twitter</a>, <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
</p>

______________________________________________________________________

## News 🎉

- \[2023/08\] TurboMind supports 4-bit quantization and inference.
- \[2023/07\] TurboMind supports Llama-2 70B with GQA.
- \[2023/07\] TurboMind supports Llama-2 7B/13B.
- \[2023/07\] TurboMind supports tensor-parallel inference of InternLM.

______________________________________________________________________

## Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams. It has the following core features:

- **Efficient Inference Engine (TurboMind)**: Based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), we have implemented an efficient inference engine, TurboMind, which supports inference of LLaMA and its variant models on NVIDIA GPUs.

- **Interactive Inference Mode**: By caching the attention k/v during multi-round dialogues, the engine remembers dialogue history and thus avoids repetitive processing of historical sessions.

- **Multi-GPU Model Deployment and Quantization**: We provide comprehensive model deployment and quantization support, validated on models of different scales.

- **Persistent Batch Inference**: Further optimization of model execution efficiency.

![PersistentBatchInference](https://github.com/InternLM/lmdeploy/assets/67539920/e3876167-0671-44fc-ac52-5a0f9382493e)

## Performance

**Case I**: output token throughput with fixed input and output token counts (1 input token, 2048 output tokens)

**Case II**: request throughput with real conversation data

Test setting: LLaMA-7B, NVIDIA A100 (80G)

The output token throughput of TurboMind exceeds 2000 tokens/s, which is about 5% - 15% higher than DeepSpeed overall and outperforms huggingface transformers by up to 2.3x.
The request throughput of TurboMind is 30% higher than vLLM.

![benchmark](https://github.com/InternLM/lmdeploy/assets/4560679/7775c518-608e-4e5b-be73-7645a444e774)

## Quick Start

### Installation

Install lmdeploy with pip (Python 3.8+) or [from source](./docs/en/build.md)

```shell
pip install lmdeploy
```
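
If you prefer to install into a clean environment first, here is a minimal sketch; the environment name and the use of conda are just examples, and any Python 3.8+ virtual environment works:

```shell
# Optional: create an isolated environment before installing (illustrative names).
conda create -n lmdeploy python=3.8 -y
conda activate lmdeploy
pip install lmdeploy
```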

### Deploy InternLM

#### Get InternLM model

```shell
# 1. Download InternLM model

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b /path/to/internlm-chat-7b

# If you want to clone without large files (just their pointers),
# prepend the following env var to your git clone command:
#   GIT_LFS_SKIP_SMUDGE=1

# 2. Convert InternLM model to turbomind's format, which will be saved in "./workspace" by default
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b
```
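
The serving and quantization steps below refer to paths inside the converted `./workspace` directory, such as `workspace/service_docker_up.sh` and `workspace/triton_models/weights`. A quick sanity check after conversion (the exact layout may vary between versions):

```shell
# Confirm the conversion produced the files referenced later in this README.
ls ./workspace
ls ./workspace/triton_models/weights | head
```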

#### Inference by TurboMind

```shell
python -m lmdeploy.turbomind.chat ./workspace
```

> **Note**<br />
> When inferring with FP16 precision, the InternLM-7B model requires at least 15.7 GB of GPU memory on TurboMind. <br />
> It is recommended to use NVIDIA cards such as the 3090, V100, A100, etc.
> Disabling GPU ECC can free up 10% of memory; try `sudo nvidia-smi --ecc-config=0` and reboot the system.

> **Note**<br />
> Tensor parallelism is available for inference on multiple GPUs. Add `--tp=<num_gpu>` to the `chat` command to enable runtime TP.
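
For example, a minimal sketch of runtime TP on two GPUs; the GPU count is illustrative, so adjust it to your machine:

```shell
# Runtime tensor parallelism across 2 GPUs (illustrative GPU count).
python3 -m lmdeploy.turbomind.chat ./workspace --tp=2
```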

#### Serving with gradio

```shell
python3 -m lmdeploy.serve.gradio.app ./workspace
```

![](https://github.com/InternLM/lmdeploy/assets/67539920/08d1e6f2-3767-44d5-8654-c85767cec2ab)

#### Serving with Triton Inference Server

Launch the inference server with:

```shell
bash workspace/service_docker_up.sh
```

Then you can communicate with the inference server from the command line,

```shell
python3 -m lmdeploy.serve.client {server_ip_address}:33337
```

or through the web UI,

```shell
python3 -m lmdeploy.serve.gradio.app {server_ip_address}:33337
```
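
For instance, if the server is running on the same machine, connecting locally looks like this (the address is illustrative; substitute your server's IP otherwise):

```shell
# Talk to a Triton server started on this machine (address is illustrative).
python3 -m lmdeploy.serve.client 0.0.0.0:33337
```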

For the deployment of other supported models, such as LLaMA, LLaMA-2, Vicuna and so on, you can find the guide [here](docs/en/serving.md).
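
As an illustration only, converting another checkpoint follows the same pattern as the InternLM example above. The model-name argument (`llama2` here) and the checkpoint path are assumptions, so check the serving guide for the names your version actually supports:

```shell
# Hypothetical example; the "llama2" model name and the path are assumptions.
python3 -m lmdeploy.serve.turbomind.deploy llama2 /path/to/llama-2-7b-chat
python3 -m lmdeploy.turbomind.chat ./workspace
```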

### Inference with PyTorch

For detailed instructions on inference with PyTorch models, see [here](docs/en/pytorch.md).

#### Single GPU

```shell
python3 -m lmdeploy.pytorch.chat $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```
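
For example, reusing the InternLM checkpoint downloaded in the Quick Start (the path is the same placeholder used above):

```shell
# Illustrative invocation with the placeholder path from the download step.
python3 -m lmdeploy.pytorch.chat /path/to/internlm-chat-7b \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```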

#### Tensor Parallel with DeepSpeed

```shell
deepspeed --module --num_gpus 2 lmdeploy.pytorch.chat \
    $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

You need to install DeepSpeed first to use this feature.

```shell
pip install deepspeed
```

## Quantization

### Step 1. Obtain Quantization Parameters

First, run the quantization script to obtain the quantization parameters.

> After execution, the various parameters needed for quantization will be stored in `$WORK_DIR`; they will be used in the following steps.

```shell
# --calib_dataset: calibration dataset; c4, ptb, wikitext2 and pileval are supported
# --calib_samples: number of samples in the calibration set; reduce it if GPU memory is insufficient
# --calib_seqlen:  length of a single text sample; reduce it if GPU memory is insufficient
# --work_dir:      folder storing the PyTorch-format quantization statistics and the post-quantization weights
python3 -m lmdeploy.lite.apis.calibrate \
    --model $HF_MODEL \
    --calib_dataset 'c4' \
    --calib_samples 128 \
    --calib_seqlen 2048 \
    --work_dir $WORK_DIR
```
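
For a copy-and-paste run, the two variables only need to point at a model and an output folder, for example (both values are illustrative; the checkpoint is the one downloaded in the Quick Start):

```shell
# Illustrative values for the variables used by the quantization commands.
export HF_MODEL=/path/to/internlm-chat-7b
export WORK_DIR=./internlm-quant
```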

### Step 2. Actual Model Quantization

`LMDeploy` supports INT4 quantization of weights and INT8 quantization of the KV Cache. Run the corresponding script according to your needs.

#### Weight INT4 Quantization

LMDeploy uses the AWQ algorithm for model weight quantization.

> This step requires the `$WORK_DIR` from Step 1; the quantized weights will also be stored in this folder.

```shell
# --w_bits:       bit width for weight quantization
# --w_sym:        whether to use symmetric quantization for weights
# --w_group_size: group size for the weight quantization statistics
# --work_dir:     directory holding the quantization parameters from Step 1
python3 -m lmdeploy.lite.apis.auto_awq \
    --w_bits 4 \
    --w_sym False \
    --w_group_size 128 \
    --work_dir $WORK_DIR
```
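
Since the Step 1 statistics and the INT4 weights end up in the same folder, a quick check after this step is simply the following; the exact file names vary between versions:

```shell
# The work directory should now hold the calibration statistics plus the quantized weights.
ls -lh $WORK_DIR
```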

#### KV Cache INT8 Quantization

In FP16 mode, KV Cache INT8 quantization can be enabled so that a single card can serve more users.
First execute the quantization script; the quantization parameters are stored in the `workspace/triton_models/weights` directory generated by `deploy.py`.

```shell
# --work_dir: directory holding the quantization parameters from Step 1
# --kv_sym:   whether to use symmetric or asymmetric quantization
# --num_tp:   the number of GPUs used for tensor parallelism
python3 -m lmdeploy.lite.apis.kv_qparams \
    --work_dir $WORK_DIR \
    --turbomind_dir $TURBOMIND_DIR \
    --kv_sym False \
    --num_tp 1
```
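
Given the note above that the parameters are written into the workspace produced by `deploy.py`, a plausible concrete invocation looks like this; the `$TURBOMIND_DIR` value is inferred from that note rather than spelled out in the original:

```shell
# Point $TURBOMIND_DIR at the deploy.py output described above (assumption).
export TURBOMIND_DIR=./workspace/triton_models/weights
python3 -m lmdeploy.lite.apis.kv_qparams \
    --work_dir $WORK_DIR \
    --turbomind_dir $TURBOMIND_DIR \
    --kv_sym False \
    --num_tp 1
```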

Then adjust `workspace/triton_models/weights/config.ini` (see the sketch after this list):

- Change `use_context_fmha` to 0, which turns it off.
- Set `quant_policy` to 4. This parameter defaults to 0, which means KV Cache INT8 is not enabled.
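
A minimal sketch of applying the two edits from the command line, assuming the keys appear in `config.ini` in the usual `key = value` form (edit the file by hand otherwise):

```shell
# Assumes "use_context_fmha = ..." and "quant_policy = ..." lines already exist in config.ini.
sed -i 's/^use_context_fmha.*/use_context_fmha = 0/' workspace/triton_models/weights/config.ini
sed -i 's/^quant_policy.*/quant_policy = 4/' workspace/triton_models/weights/config.ini
```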

Quantization test results are available [here](./docs/en/quantization.md).

> **Warning**<br />
> Runtime tensor parallelism for quantized models is not available. Please set `--tp` on `deploy` to enable static TP.

## Contributing

We appreciate all contributions to LMDeploy. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.

## Acknowledgement

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)
- [llm-awq](https://github.com/mit-han-lab/llm-awq)

## License

This project is released under the [Apache 2.0 license](LICENSE).