---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
base_model: mistralai/Pixtral-Large-Instruct-2411
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
pipeline_tag: image-text-to-text
---
# Pixtral-Large-Instruct-2411 🧡
Transformers implementation of [Pixtral-Large-Instruct-2411](https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411).
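For quick image-text-to-text inference with Transformers, something like the sketch below should work. This is a minimal sketch assuming the standard Llava-style classes that Pixtral conversions use; the repo id, image URL, and prompt are illustrative placeholders.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "nintwentydo/Pixtral-Large-Instruct-2411"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

url = "https://example.com/image.png"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]

# Render the v7 instruct template from chat_template.json, then tokenize
# the prompt together with the image.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```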
## Tokenizer And Prompt Template
Uses a conversion of the v7m1 tokenizer with a 32k vocab size.
The chat template in `chat_template.json` follows the v7 instruct format:
```
[SYSTEM_PROMPT] [/SYSTEM_PROMPT][INST] [/INST] [INST] [/INST]
```
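You can inspect how the template renders a conversation without running the model. A small sketch, assuming the processor shipped in this repo; the message text is illustrative and the exact special-token placement comes from `chat_template.json`:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("nintwentydo/Pixtral-Large-Instruct-2411")

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    {"role": "user", "content": [{"type": "text", "text": "Hello!"}]},
]

# Prints the rendered prompt string, roughly:
# [SYSTEM_PROMPT]You are a helpful assistant.[/SYSTEM_PROMPT][INST]Hello![/INST]
print(processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```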
## Notes
- *Tool use hasn't been implemented in the template yet. I'll add it later.*
- *I've added extra stop tokens between consecutive user messages. This helps in contexts with multiple speakers, but your mileage may vary; see the sketch after this list.*
- *If you have a better implementation of the tokenizer, let me know and I'm happy to swap it out.*
- *As always, please respect the model license.*
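To see where those extra stop tokens land, render two consecutive user turns and inspect the output. A hedged sketch with illustrative messages; the exact placement depends on the template in this repo:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("nintwentydo/Pixtral-Large-Instruct-2411")

# Two consecutive user turns, as in a multi-speaker context.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Speaker A: hello."}]},
    {"role": "user", "content": [{"type": "text", "text": "Speaker B: hi there."}]},
]

# With this template, an extra stop token separates the two [INST] blocks.
print(processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```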
## Quantizations
EXL2 quants are available in a range of sizes [here](https://huggingface.co/models?author=nintwentydo&other=base_model:quantized:mistralai/Pixtral-Large-Instruct-2411). You'll need the dev branch of [ExLlamaV2](https://github.com/turboderp/exllamav2/tree/dev) for vision input.