jerryzh168 committed · Commit b548de8 · verified · 1 Parent(s): ea000a3

Update README.md

Files changed (1): README.md (+7 -7)
README.md CHANGED
@@ -17,11 +17,11 @@ base_model:
  pipeline_tag: text-generation
  ---

- [Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight only quantization, using [hqq](https://mobiusml.github.io/hqq_blog/) algorithm for improved accuracy, by PyTorch team.
+ [Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight only quantization, using [hqq](https://mobiusml.github.io/hqq_blog/) algorithm for improved accuracy, by PyTorch team.

  # Quantization Recipe

- First need to install the required packages:
+ Install the required packages:
  ```
  pip install git+https://github.com/huggingface/transformers@main
  pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
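
(The install block continues in the next hunk with `pip install torch` and `pip install accelerate`.) As an optional sanity check that is not part of the README, the sketch below simply confirms the freshly installed nightly/dev packages import cleanly; the only assumption is that each package exposes `__version__`.

```
# Optional sanity check (not part of the README): confirm the installed
# nightly/dev builds import and report their versions.
import torch
import torchao
import transformers

print("torch:", torch.__version__)
print("torchao:", torchao.__version__)
print("transformers:", transformers.__version__)
```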
@@ -29,7 +29,7 @@ pip install torch
  pip install accelerate
  ```

- We used following code to get the quantized model:
+ Use the following code to get the quantized model:
  ```
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
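
The diff cuts the quantization snippet off after the imports. As a rough sketch only, and not the README's exact code, int4 weight-only quantization with hqq through the transformers/torchao integration typically looks like the following; the model id, `group_size=128`, and the `Int4WeightOnlyConfig(..., use_hqq=True)` arguments are assumptions here, so check the full README for the authoritative version.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig

# Assumed settings for illustration; the README's exact values may differ.
model_id = "microsoft/Phi-4-mini-instruct"
quant_config = Int4WeightOnlyConfig(group_size=128, use_hqq=True)
quantization_config = TorchAoConfig(quant_type=quant_config)

# Passing a TorchAoConfig quantizes the weights to int4 as the model loads.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```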
@@ -159,7 +159,9 @@ We can use the following code to get a sense of peak memory usage during inference
  | Peak Memory (GB) | 8.91 | 2.98 (67% reduction) |


- ## Benchmark Peak Memory
+ ## Peak Memory
+
+ We can use the following code to get a sense of peak memory usage during inference:

  ```
  import torch
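
This hunk also truncates the snippet right after `import torch`. A minimal sketch of how peak inference memory is usually measured with PyTorch's CUDA memory stats follows; the checkpoint name and prompt are assumptions, and the final print line mirrors the one visible in the next hunk header.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name for illustration; point this at your quantized model.
model_id = "pytorch/Phi-4-mini-instruct-int4wo-hqq"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Reset the allocator stats so we only measure the generation below.
torch.cuda.reset_peak_memory_stats()

inputs = tokenizer("What are we having for dinner?", return_tensors="pt").to(model.device)
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=128)

# Peak memory reserved by the CUDA caching allocator during generation, in GB.
mem = torch.cuda.max_memory_reserved() / 1e9
print(f"Peak Memory Usage: {mem:.02f} GB")
```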
@@ -203,9 +205,7 @@ print(f"Peak Memory Usage: {mem:.02f} GB")

  # Model Performance

- Our int4wo is only optimized for batch size 1, so we'll see slowdown in larger batch sizes, we expect this to be used in local server deployment for single or a few users
- and decode tokens per second will be more important than time to first token.
-
+ Our int4wo is only optimized for batch size 1, so expect some slowdown with larger batch sizes; we expect this to be used in local server deployments for a single user or a few users, where decode tokens per second matters more than time to first token.

  ## Results (A100 machine)
  | Benchmark (Latency) | | |
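
Since the note above is about decode throughput at batch size 1 rather than time to first token, a rough way to eyeball generated tokens per second with plain transformers generation is sketched below; the checkpoint name, prompt, and token counts are assumptions, and the README's published latency numbers presumably come from a proper benchmark harness rather than this sketch.

```
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name for illustration.
model_id = "pytorch/Phi-4-mini-instruct-int4wo-hqq"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Tell me a story.", return_tensors="pt").to(model.device)

# Warmup run so kernel compilation and caching do not pollute the timing.
model.generate(**inputs, max_new_tokens=16, do_sample=False)

torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

# Rough throughput: counts all generated tokens, so prefill time is included.
new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"~{new_tokens / elapsed:.1f} generated tokens/sec at batch size 1")
```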