---
inference: false
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_link: https://mistral.ai/licenses/MRL-0.1.md
license_name: mistral-research-license
base_model_relation: quantized
quantized_by: Quant-Cartel
base_model: mistralai/Mistral-Large-Instruct-2411
tags:
- iMat
- gguf
---

```
  e88 88e                                d8
 d888 888b  8888 8888   ,"Y88b  888 8e   d88
C8888 8888D 8888 8888  "8" 888  888 88b d88888
 Y888 888P  Y888 888P  ,ee 888  888 888  888
  "88 88"    "88 88"   "88 888  888 888  888
      b
      8b,

   e88'Y88                  d8             888
  d888  'Y  ,"Y88b  888,8,  d88    ,e e,   888
 C8888     "8" 888  888 "  d88888 d88 88b  888
  Y888  ,d  ,ee 888  888    888   888   ,  888
   "88,d88  "88 888  888    888    "YeeP"  888

               PROUDLY PRESENTS
```

# Mistral-Large-Instruct-2411-iMat-GGUF

Quantized from fp16.

Original model author: [mistralai](https://huggingface.co/mistralai/)

* Importance Matrix calculated using [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) (see the workflow sketch after this list)
* 105 chunks
* n_ctx=512
* Calculation uses f16 precision model weights
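
For reference, below is a minimal sketch of how an importance-matrix quantization like this is typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools, driven from Python. The file names, output paths, and quant type are placeholders, and binary names may differ between llama.cpp versions; this is not the exact command line used to build this repo.

```python
import subprocess

# Placeholder paths -- substitute your own files.
F16_GGUF = "Mistral-Large-Instruct-2411-f16.gguf"   # fp16 conversion of the base model
CALIB_TEXT = "groups_merged.txt"                     # calibration data linked above
IMATRIX_OUT = "imatrix.dat"
QUANT_TYPE = "IQ4_XS"                                # example quant type

# 1) Compute the importance matrix over the calibration text
#    (matching the card: n_ctx=512, 105 chunks, f16 weights).
subprocess.run(
    [
        "./llama-imatrix",
        "-m", F16_GGUF,
        "-f", CALIB_TEXT,
        "-o", IMATRIX_OUT,
        "-c", "512",          # context size used for calibration
        "--chunks", "105",    # number of calibration chunks to process
    ],
    check=True,
)

# 2) Quantize the f16 model, guided by the importance matrix.
subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", IMATRIX_OUT,
        F16_GGUF,
        f"Mistral-Large-Instruct-2411-{QUANT_TYPE}.gguf",
        QUANT_TYPE,
    ],
    check=True,
)
```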

The original model README is available [here](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411).
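
To run one of these quants locally, a minimal sketch with llama-cpp-python follows; the GGUF filename is a placeholder, so point it at whichever quant level you actually downloaded from this repo.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename -- use the quant file you downloaded from this repo.
llm = Llama(
    model_path="Mistral-Large-Instruct-2411-IQ4_XS.gguf",
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of GGUF quantization."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```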