Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
 Unofficial GGUF Quantizations of Grok-1. Should just run with llama.cpp once the [PR- Add grok-1 support #6204](https://github.com/ggerganov/llama.cpp/pull/6204) is merged.
 
 The splits now use [PR: llama_model_loader: support multiple split/shard GGUFs](https://github.com/ggerganov/llama.cpp/pull/6187).
 
-Therefore no merging using `gguf-split` is needed
+Therefore, no merging using `gguf-split` is needed any more.
 
 For now only Q2_K Quant, others (Q3_K, Q4_K, Q5_K & Q6_K) are prepared waiting to upload.
 
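A minimal usage sketch of what "no merging needed" means in practice, assuming an illustrative shard filename (the actual names in this repo may differ): with split-GGUF support merged, llama.cpp only needs the path to the first shard and discovers the remaining splits on its own.

```sh
# Sketch only; the shard filename below is hypothetical.
# Pass just the first split -- llama.cpp (with PR #6187) loads the rest automatically,
# so running gguf-split to merge the files first is not required.
./main -m grok-1-Q2_K-00001-of-00009.gguf -p "Hello from Grok-1" -n 64
```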