Add links to other experimental quants
README.md CHANGED
@@ -42,10 +42,12 @@ All quants made using imatrix option with dataset from [here](https://gist.githu
| [Hermes-2-Theta-Llama-3-70B-Q4_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q4_K_L.gguf) | Q4_K_L | 45.27GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback on differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Hermes-2-Theta-Llama-3-70B-IQ4_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
+ | [Hermes-2-Theta-Llama-3-70B-Q3_K_XL.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q3_K_XL.gguf) | Q3_K_XL | 40.00GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback on differences. Medium-low quality. |
| [Hermes-2-Theta-Llama-3-70B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [Hermes-2-Theta-Llama-3-70B-IQ3_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Hermes-2-Theta-Llama-3-70B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [Hermes-2-Theta-Llama-3-70B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
+ | [Hermes-2-Theta-Llama-3-70B-Q2_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q2_K_L.gguf) | Q2_K_L | 29.40GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback on differences. Very low quality but surprisingly usable. |
| [Hermes-2-Theta-Llama-3-70B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [Hermes-2-Theta-Llama-3-70B-IQ2_M.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Hermes-2-Theta-Llama-3-70B-IQ2_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-70B-GGUF/blob/main/Hermes-2-Theta-Llama-3-70B-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Lower quality, uses SOTA techniques to be usable. |
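
For reference, here is a minimal sketch of fetching one of the quants listed above with the `huggingface_hub` Python library. The repo id and filename are taken from the table links; using `hf_hub_download` is just one convenient way to grab a single file, not the only option.

```python
# Minimal sketch: download one quant from the table above via huggingface_hub.
# Assumes `pip install huggingface_hub`; swap FILENAME for any file in the table.
from huggingface_hub import hf_hub_download

REPO_ID = "bartowski/Hermes-2-Theta-Llama-3-70B-GGUF"
FILENAME = "Hermes-2-Theta-Llama-3-70B-Q4_K_M.gguf"  # ~42.52GB per the table

# Downloads into the local Hugging Face cache and returns the local file path.
local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
print(f"GGUF saved to: {local_path}")
```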