prince-canuma committed
Commit 0157509 • Parent(s): 1ee1e0c
Update README.md

README.md CHANGED

---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference:
  parameters:
    temperature: 0.5
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
---

# Model Card for Mixtral-8x22B-Instruct-v0.1-4bit

This repository contains a 4-bit quantized version of Mixtral-8x22B-Instruct-v0.1. The Mixtral Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts; the earlier Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).

For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning

This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different.

## Instruction format

This format must be strictly respected, otherwise the model will generate sub-optimal outputs.

The template used to build a prompt for the Instruct model is defined as follows:

```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```

Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while `[INST]` and `[/INST]` are regular strings.

As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```

In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.

In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied, as shown in the sketch below.
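
A minimal sketch of checking the rendered format (using the same model id as the examples below; `tokenize=False` returns the formatted prompt string instead of token ids):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favorite condiment?"},
]

# Render the conversation to a plain string so the [INST] ... [/INST]
# structure can be verified by eye before any generation happens.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)  # expected to resemble: <s>[INST] What is your favorite condiment? [/INST]
```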

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
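
To see what a given loading strategy costs in memory, models loaded with transformers expose `get_memory_footprint()`; a minimal sketch, reusing the `model` object from the example above:

```python
# Rough size of the loaded weights in memory (bytes); handy for comparing
# the full-precision, half-precision, and quantized variants below.
footprint_bytes = model.get_memory_footprint()
print(f"Memory footprint: {footprint_bytes / 1e9:.1f} GB")
```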

By default, transformers loads the model in full precision. You may therefore want to further reduce the memory requirements using the optimizations offered in the HF ecosystem:

### In half-precision

Note that `float16` precision only works on GPU devices.

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
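
On Ampere-class or newer GPUs, `torch.bfloat16` is a common alternative to `float16`: it has the same memory footprint but a wider dynamic range. A minimal variant of the load above (a sketch, not something this card prescribes):

```python
import torch
from transformers import AutoModelForCausalLM

# bfloat16: same memory savings as float16, wider dynamic range
# (supported on Ampere-class and newer GPUs).
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```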

### Lower precision (8-bit & 4-bit) using `bitsandbytes`

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
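
On recent transformers versions, the `load_in_4bit` shortcut is expressed through an explicit `BitsAndBytesConfig`; a minimal sketch of the equivalent call (the compute dtype shown is an illustrative choice, not something this card prescribes):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Equivalent 4-bit load via an explicit quantization config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # illustrative default
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```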

### Load the model with Flash Attention 2

<details>
<summary> Click to expand </summary>

```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
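
Note that newer transformers releases deprecate `use_flash_attention_2=True` in favor of the `attn_implementation` argument (the `flash-attn` package must be installed, and the weights must be loaded in half precision); a minimal equivalent sketch:

```python
import torch
from transformers import AutoModelForCausalLM

# Same effect as use_flash_attention_2=True on recent transformers versions.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```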

## Limitations

The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

# The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.