---
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - zh
  - ja
  - ru
  - ko
license: other
license_name: mrl
base_model: mistralai/Pixtral-Large-Instruct-2411
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
pipeline_tag: image-text-to-text
---

# Pixtral-Large-Instruct-2411 🧡

Transformers implementation of Pixtral-Large-Instruct-2411.

## Tokenizer And Prompt Template

Uses a conversion of the v7m1 tokenizer with a 32k vocab size.
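As a quick sanity check you can load the tokenizer and look at its vocab size (a minimal sketch; the repo id below is an assumption, so substitute your local path or the actual repo name):

```python
# Minimal sketch: load the converted tokenizer and check its vocab size.
# NOTE: the repo id is an assumption; replace it with a local path if needed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nintwentydo/Pixtral-Large-Instruct-2411")
print(tokenizer.vocab_size)  # expected to be around 32k
```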

The chat template in `chat_template.json` uses the v7 instruct template:

```
<s>[SYSTEM_PROMPT] <system prompt>[/SYSTEM_PROMPT][INST] <user message>[/INST] <assistant response></s>[INST] <user message>[/INST]
```
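To see the rendered prompt, something like the following should work (a hedged sketch: the repo id and the message structure are assumptions, and the exact content format expected by the shipped template may differ):

```python
# Minimal sketch: render the v7 instruct template via the processor.
# Assumptions: the repo id and the list-of-dicts content format; adjust as needed.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("nintwentydo/Pixtral-Large-Instruct-2411")

messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]},
]

# tokenize=False returns the prompt string instead of token ids.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
```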

## Notes

- Tool use hasn't been implemented in the template yet; I'll add it later.
- I've added extra stop tokens between consecutive user messages. This helps in contexts with multiple speakers, but your mileage may vary.
- If you have a better implementation of the tokenizer, let me know and I'm happy to swap it out.
- As always, please respect the model license.

## Quantizations

EXL2 quants are available in different sizes here. You'll need to use the dev branch of ExLlamaV2 for vision input.