legraphista committed:
Upload README.md with huggingface_hub
README.md
CHANGED
```diff
@@ -100,8 +100,8 @@ Link: [here](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blo
 | [Llama-3.1-Storm-8B.IQ3_XXS.gguf](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blob/main/Llama-3.1-Storm-8B.IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3.1-Storm-8B.Q2_K.gguf](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blob/main/Llama-3.1-Storm-8B.Q2_K.gguf) | Q2_K | 3.18GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | [Llama-3.1-Storm-8B.Q2_K_S.gguf](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blob/main/Llama-3.1-Storm-8B.Q2_K_S.gguf) | Q2_K_S | 2.99GB | ✅ Available | 🟢 IMatrix | 📦 No |
-| Llama-3.1-Storm-8B.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
-| Llama-3.1-Storm-8B.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
+| [Llama-3.1-Storm-8B.IQ2_M.gguf](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blob/main/Llama-3.1-Storm-8B.IQ2_M.gguf) | IQ2_M | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No |
+| [Llama-3.1-Storm-8B.IQ2_S.gguf](https://huggingface.co/legraphista/Llama-3.1-Storm-8B-IMat-GGUF/blob/main/Llama-3.1-Storm-8B.IQ2_S.gguf) | IQ2_S | 2.76GB | ✅ Available | 🟢 IMatrix | 📦 No |
 | Llama-3.1-Storm-8B.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
 | Llama-3.1-Storm-8B.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
 | Llama-3.1-Storm-8B.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
```