Update README.md
README.md CHANGED
@@ -38,8 +38,8 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
 
 ## Repositories available
 
-* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/
-* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/
+* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGML)
 * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-llama-13b)
 
 ## Prompt template: Orca-Vicuna