---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
base_model: mistralai/Pixtral-Large-Instruct-2411
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
pipeline_tag: image-text-to-text
---
# Pixtral-Large-Instruct-2411 🧡
Transformers-compatible conversion of [Pixtral-Large-Instruct-2411](https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411).
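
For anyone loading this in `transformers`, here's a minimal inference sketch. It's untested and assumes this conversion follows the same `LlavaForConditionalGeneration` + `AutoProcessor` pattern as other Pixtral conversions, a recent `transformers` version with processor chat-template support, plus a placeholder repo id and image URL:

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "your-namespace/Pixtral-Large-Instruct-2411"  # placeholder: use this repo's actual id

# The processor bundles the tokenizer and image preprocessor.
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the full model is very large; quantized loading may be needed
    device_map="auto",
)

# One user turn containing an image plus a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/image.png"},  # placeholder URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```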
## Tokenizer And Prompt Template
Uses a conversion of the v7m1 tokenizer with a 32k vocab size.
The chat template in `tokenizer_config.json` follows the v7 instruct format:
```
[SYSTEM_PROMPT] [/SYSTEM_PROMPT][INST] [/INST] [INST] [/INST]
```
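
To sanity-check the template, you can render a conversation to a raw string without tokenizing (sketch; the repo id is a placeholder for this repo's):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-namespace/Pixtral-Large-Instruct-2411")  # placeholder id

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# tokenize=False returns the rendered prompt string, so the
# [SYSTEM_PROMPT]/[INST] structure can be inspected directly.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```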
## Notes
- *Tool use hasn't been implemented in the template yet.*
- *I've added extra stop tokens between consecutive user messages. This helps in contexts with multiple speakers (see the sketch below), but your mileage may vary.*
- *If you have a better implementation of the tokenizer, let me know and I'm happy to swap it out.*
- *As always, please respect the model license.*
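
To see where those extra stop tokens end up, render two back-to-back user turns (sketch; assumes the template accepts consecutive user roles as described, and a placeholder repo id):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-namespace/Pixtral-Large-Instruct-2411")  # placeholder id

# Two consecutive user turns, e.g. two speakers sharing one context.
messages = [
    {"role": "user", "content": "Speaker A: hello there."},
    {"role": "user", "content": "Speaker B: hi A, how's it going?"},
]

# Rendering without tokenizing shows the stop tokens inserted
# between the two [INST] blocks.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```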
Currently running a fresh measurement pass ahead of re-doing my exl2 quants, which I'll upload. Apologies in advance if anything is wonky; this is largely a personal learning exercise, and I made this model my fixation to freshen up my knowledge.