legraphista committed: Upload README.md with huggingface_hub

README.md CHANGED
@@ -59,7 +59,7 @@ Link: [here](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliter
 | [Meta-Llama-3-8B-Instruct-abliterated-v3.Q6_K.gguf](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliterated-v3-IMat-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ Static | 📦 No |
 | [Meta-Llama-3-8B-Instruct-abliterated-v3.Q4_K.gguf](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliterated-v3-IMat-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3.Q4_K.gguf) | Q4_K | 4.92GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Meta-Llama-3-8B-Instruct-abliterated-v3.Q3_K.gguf](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliterated-v3-IMat-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3.Q3_K.gguf) | Q3_K | 4.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
+| [Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K.gguf](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliterated-v3-IMat-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
 
 
 ### All Quants
@@ -82,7 +82,7 @@ Link: [here](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliter
 | Meta-Llama-3-8B-Instruct-abliterated-v3.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
 | Meta-Llama-3-8B-Instruct-abliterated-v3.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
 | Meta-Llama-3-8B-Instruct-abliterated-v3.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
-| Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
+| [Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K.gguf](https://huggingface.co/legraphista/Meta-Llama-3-8B-Instruct-abliterated-v3-IMat-GGUF/blob/main/Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
 | Meta-Llama-3-8B-Instruct-abliterated-v3.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
 | Meta-Llama-3-8B-Instruct-abliterated-v3.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
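With this commit the Q2_K quant is marked ✅ Available in both tables. A minimal sketch of fetching it with `huggingface_hub` (the library named in the commit message); the repo id and filename are taken from the tables above, while the cache location and any later use of the GGUF file are left to the reader:

```python
from huggingface_hub import hf_hub_download

# Download the newly available Q2_K quant listed in the tables above.
# repo_id and filename are copied verbatim from the README; the returned
# path points into the local Hugging Face cache.
gguf_path = hf_hub_download(
    repo_id="legraphista/Meta-Llama-3-8B-Instruct-abliterated-v3-IMat-GGUF",
    filename="Meta-Llama-3-8B-Instruct-abliterated-v3.Q2_K.gguf",
)
print(gguf_path)
```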