Update README.md
README.md
CHANGED
@@ -65,10 +65,12 @@ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument
 
 ## How to run in `text-generation-webui`
 
-Put the desired .bin file in a model directory with `ggml` (case sensitive) in its name.
-
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
+Note: at this time text-generation-webui will not support the new q5 quantisation methods.
+
+**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.
+
 # Original model info
 
 Overview of Evol-Instruct
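
For the step the diff removes ("Put the desired .bin file in a model directory with `ggml` (case sensitive) in its name"), a minimal shell sketch; the directory layout and file name below are illustrative assumptions, not taken from this README:

```sh
# Illustrative paths only: adjust to your own text-generation-webui checkout and
# to the actual GGML .bin file downloaded from the repository.
# The model directory name must contain "ggml" (case sensitive).
mkdir -p text-generation-webui/models/wizardLM-7B-ggml
cp wizardLM-7B.ggml.q4_0.bin text-generation-webui/models/wizardLM-7B-ggml/
```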