Update model card
#1 by nm-research - opened

README.md CHANGED
---
tags:
- fp8
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
base_model:
- mistral-community/pixtral-12b
- mistralai/Pixtral-12B-2409
---

# pixtral-12b-FP8-dynamic

## Model Overview
- **Model Architecture:** Llava
  - **Input:** Text/Image
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similar to [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 11/1/2024
- **Version:** 1.0
- **License(s):**
- **Model Developers:** Neural Magic

Quantized version of [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b).

### Model Optimizations

This model was obtained by quantizing the weights and activations of [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b) to the FP8 data type, ready for inference with vLLM built from source.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. For a 12B-parameter model, that is roughly 24 GB of weights in BF16 versus roughly 12 GB in FP8.

Only the weights and activations of the linear operators within transformer blocks are quantized. Weights use symmetric per-channel quantization, in which a single linear scale per output channel maps them to the FP8 range; activations are quantized dynamically, with a scale computed per token at runtime.
[LLM Compressor](https://github.com/vllm-project/llm-compressor) is used for quantization.
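To make the per-channel scheme concrete, below is a minimal PyTorch sketch of symmetric per-channel FP8 weight quantization. It is illustrative only, not the LLM Compressor implementation; `quantize_fp8_per_channel` and `FP8_MAX` are invented names for this example.

```python
import torch

# FP8 E4M3 ("float8_e4m3fn") represents magnitudes up to 448.
FP8_MAX = 448.0

def quantize_fp8_per_channel(weight: torch.Tensor):
    # One scale per output channel (row), chosen so the channel's
    # largest-magnitude weight lands at the edge of the FP8 range.
    scale = (weight.abs().amax(dim=1, keepdim=True) / FP8_MAX).clamp(min=1e-12)
    qweight = (weight / scale).to(torch.float8_e4m3fn)
    # Dequantization recovers an approximation: qweight.float() * scale
    return qweight, scale

w = torch.randn(8, 16)
qw, s = quantize_fp8_per_channel(w)
print(qw.dtype, s.shape)  # torch.float8_e4m3fn torch.Size([8, 1])
```

Per-token dynamic activation quantization applies the same idea along the token dimension, with scales computed on the fly at inference time.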
46 |
+
## Deployment
|
47 |
+
|
48 |
+
### Use with vLLM
|
49 |
+
|
50 |
+
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
|
51 |
+
|
52 |
+
```python
|
53 |
+
from vllm import LLM, SamplingParams
|
54 |
+
from vllm.assets.image import ImageAsset
|
55 |
+
|
56 |
+
# Initialize the LLM
|
57 |
+
model_name = "neuralmagic/pixtral-12b-FP8-dynamic"
|
58 |
+
llm = LLM(model=model_name, max_num_seqs=1, enforce_eager=True)
|
59 |
+
|
60 |
+
# Load the image
|
61 |
+
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")
|
62 |
+
|
63 |
+
# Create the prompt
|
64 |
+
question = "If I had to write a haiku for this one, it would be: "
|
65 |
+
prompt = f"<|image|><|begin_of_text|>{question}"
|
66 |
+
|
67 |
+
# Set up sampling parameters
|
68 |
+
sampling_params = SamplingParams(temperature=0.2, max_tokens=30)
|
69 |
+
|
70 |
+
# Generate the response
|
71 |
+
inputs = {
|
72 |
+
"prompt": prompt,
|
73 |
+
"multi_modal_data": {
|
74 |
+
"image": image
|
75 |
+
},
|
76 |
+
}
|
77 |
+
outputs = llm.generate(inputs, sampling_params=sampling_params)
|
78 |
+
|
79 |
+
# Print the generated text
|
80 |
+
print(outputs[0].outputs[0].text)
|
81 |
+
```
|
82 |
+
|
83 |
+
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
|
84 |
+
|
85 |
+
```
|
86 |
+
vllm serve neuralmagic/pixtral-12b-FP8-dynamic --max-num-seqs 16
|
87 |
+
```
|
88 |
+
|
89 |
+
## Creation
|
90 |
+
|
91 |
+
This model was created by applying [LLM Compressor](https://github.com/vllm-project/llm-compressor/blob/f90013702b15bd1690e4e2fe9ed434921b6a6199/examples/quantization_w8a8_fp8/llama3.2_vision_example.py), as presented in the code snipet below.
|
92 |
+
|
93 |
+
```python
|
94 |
+
from transformers import AutoProcessor, LlavaForConditionalGeneration
|
95 |
+
|
96 |
+
from llmcompressor.modifiers.quantization import QuantizationModifier
|
97 |
+
from llmcompressor.transformers import oneshot, wrap_hf_model_class
|
98 |
+
|
99 |
+
MODEL_ID = "mistral-community/pixtral-12b"
|
100 |
+
|
101 |
+
# Load model.
|
102 |
+
model_class = wrap_hf_model_class(LlavaForConditionalGeneration)
|
103 |
+
model = model_class.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
|
104 |
+
processor = AutoProcessor.from_pretrained(MODEL_ID)
|
105 |
+
|
106 |
+
# Configure the quantization algorithm and scheme.
|
107 |
+
# In this case, we:
|
108 |
+
# * quantize the weights to fp8 with per channel via ptq
|
109 |
+
# * quantize the activations to fp8 with dynamic per token
|
110 |
+
recipe = QuantizationModifier(
|
111 |
+
targets="Linear",
|
112 |
+
scheme="FP8_DYNAMIC",
|
113 |
+
ignore=["re:.*lm_head", "re:multi_modal_projector.*", "re:vision_model.*"],
|
114 |
+
)
|
115 |
+
|
116 |
+
# Apply quantization and save to disk in compressed-tensors format.
|
117 |
+
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-Dynamic"
|
118 |
+
oneshot(model=model, recipe=recipe, output_dir=SAVE_DIR)
|
119 |
+
processor.save_pretrained(SAVE_DIR)
|
120 |
+
|
121 |
+
# Confirm generations of the quantized model look sane.
|
122 |
+
print("========== SAMPLE GENERATION ==============")
|
123 |
+
input_ids = processor(text="Hello my name is", return_tensors="pt").input_ids.to("cuda")
|
124 |
+
output = model.generate(input_ids, max_new_tokens=20)
|
125 |
+
print(processor.decode(output[0]))
|
126 |
+
print("==========================================")
|
127 |
+
```
|
128 |
+
|
129 |
+
## Evaluation
|
130 |
+
|
131 |
+
### Multimodal Benchmarks
|
132 |
+
|
133 |
+
| | pixtral-12b | pixtral-12b-FP8-dynamic |
|
134 |
+
|:-------------------:|:-------------:|:----------:|
|
135 |
+
| **MMMU** *(CoT)* | 49.44 | 51.11 |
|
136 |
+
| **Mathvista** *(CoT)* | 58.1 | 59.4 |
|
137 |
+
| **ChartQA** *(CoT)* | 82.64 | 82.68 |
|
138 |
+
| **DocVQA** *(ANLS)* | 89.36 | 89.35 |
|
139 |
+
|
140 |
+
### Text Benchmarks
|
141 |
+
|
142 |
+
| | pixtral-12b | pixtral-12b-FP8-dynamic |
|
143 |
+
|:-------------------:|:-------------:|:----------:|
|
144 |
+
| **MMLU** *(5-shot)* | 69.27 | 68.96 |
|
145 |
+
| **Math** *(0-shot)* | 43.82 | 43.27 |
|
146 |
+
| **Human Eval** *(Pass@1)* | 77.80 | 76.4 |
|
147 |
+
|
148 |
+
### Reproduction
|
149 |
+
|
150 |
+
TBD
|