Update README.md
README.md
CHANGED
@@ -30,7 +30,7 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
| -------- | ---------- | --------- | ----------- |
| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q8_1.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF/blob/main/MadWizard-SFT-v2-Mistral-7b-v0.3-Q8_1.gguf) | Q8_1 | 7.95GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. |
| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q8_0.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF/blob/main/MadWizard-SFT-v2-Mistral-7b-v0.3-Q8_0.gguf) | Q8_0 | 7.70GB | Extremely high quality, generally unneeded but max available quant. |
-| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q6_K_L.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF
+| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q6_K_L.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF/blob/main/MadWizard-SFT-v2-Mistral-7b-v0.3-Q6_K_L.gguf) | Q6_K_L | 6.26GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. |
| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q6_K.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF/blob/main/MadWizard-SFT-v2-Mistral-7b-v0.3-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q5_K_L.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF/blob/main/MadWizard-SFT-v2-Mistral-7b-v0.3-Q5_K_L.gguf) | Q5_K_L | 5.47GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
| [MadWizard-SFT-v2-Mistral-7b-v0.3-Q5_K_M.gguf](https://huggingface.co/bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF/blob/main/MadWizard-SFT-v2-Mistral-7b-v0.3-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. |
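The rows above link to each quant file individually. As a convenience, here is a minimal sketch of fetching one of the listed quants programmatically with the `huggingface_hub` client; it is not part of the README change itself. The repo id and filename are taken from the Q6_K row above, and `local_dir="."` is an arbitrary choice of destination.

```python
# Minimal sketch: download one quant file from the repo in the table above.
# Assumes the huggingface_hub package is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

# Repo id and filename come from the Q6_K row of the table;
# local_dir is an arbitrary choice for where to place the .gguf file.
path = hf_hub_download(
    repo_id="bartowski/MadWizard-SFT-v2-Mistral-7b-v0.3-GGUF",
    filename="MadWizard-SFT-v2-Mistral-7b-v0.3-Q6_K.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded file
```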