Upload README.md
README.md CHANGED
@@ -61,6 +61,18 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
<!-- prompt-template end -->


+
+<!-- README_GPTQ.md-compatible clients start -->
+## Known compatible clients / servers
+
+These GPTQs are known to work in the following inference servers/webuis:
+
+- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
+- [KoboldAI United](https://github.com/henk717/koboldai)
+- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
+- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
+<!-- README_GPTQ.md-compatible clients end -->
+
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters

@@ -298,13 +310,11 @@ print(pipe(prompt_template)[0]['generated_text'])
<!-- README_GPTQ.md-compatibility start -->
## Compatibility

-The files provided are tested to work with
-
-They also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI) and in [KoboldAI](https://github.com/KoboldAI/KoboldAI-Client).
+The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

-
+For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
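For reference, a minimal sketch of the two loading paths named in the updated Compatibility text: plain Transformers, and AutoGPTQ used directly. The repo ID `TheBloke/SomeModel-GPTQ` is a placeholder rather than a checkpoint from this card, and the example assumes the usual extras (accelerate, optimum, auto-gptq) are installed.

```python
# Sketch of the loading paths described in the Compatibility section.
# "TheBloke/SomeModel-GPTQ" is a placeholder repo ID, not a real checkpoint from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "TheBloke/SomeModel-GPTQ"

# Path 1: plain Transformers, which the card says is tested for all provided files.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=128)
print(pipe("Tell me about AI")[0]["generated_text"])

# Path 2: AutoGPTQ directly, which the card limits to non-Mistral models.
# from auto_gptq import AutoGPTQForCausalLM
# model = AutoGPTQForCausalLM.from_quantized(model_id, device_map="auto", use_safetensors=True)
```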