Update README.md
README.md
CHANGED
@@ -13,6 +13,8 @@ pipeline_tag: text-generation
 
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> pull request with <a href="https://github.com/ggerganov/llama.cpp/pull/7402">Smaug support</a> for quantization.
 
+This model can be run as of release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3001">b3001</a>
+
 Original model: https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct
 
 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
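
The added line notes that these GGUF files can be run as of llama.cpp release b3001. As a minimal sketch of what running one of the quants could look like, here is a hypothetical example using the llama-cpp-python bindings (which wrap llama.cpp; the underlying build must be b3001 or newer). The file name, context size, and GPU offload settings are assumptions, not repo specifics.

```python
# Hypothetical sketch: running one of these GGUF quants via llama-cpp-python.
# File name and runtime settings are assumptions, not taken from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Smaug-Llama-3-70B-Instruct-Q4_K_M.gguf",  # assumed local file name
    n_ctx=8192,        # context length to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; 0 for CPU only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

The same files can also be run directly with the llama.cpp CLI from release b3001 onward; the Python bindings are just one convenient wrapper.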
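
The last line of the diff states that all quants were made with llama.cpp's imatrix option using the linked calibration dataset. Below is a rough sketch of that kind of workflow driven from Python; the binary names, file names, flags, and quant type are assumptions based on the llama.cpp tools of that era (`imatrix` and `quantize`), not the exact commands used for this repo.

```python
# Rough sketch of an imatrix-based quantization workflow with llama.cpp's
# command-line tools. Paths, binary names, and the chosen quant type are
# assumptions; this is not the exact recipe used for this repo.
import subprocess

FP16_GGUF = "Smaug-Llama-3-70B-Instruct-f16.gguf"  # assumed full-precision conversion
CALIB_TXT = "calibration_data.txt"                 # dataset from the linked gist (assumed file name)
IMATRIX   = "smaug-imatrix.dat"

# 1) Compute the importance matrix over the calibration text.
subprocess.run(
    ["./imatrix", "-m", FP16_GGUF, "-f", CALIB_TXT, "-o", IMATRIX],
    check=True,
)

# 2) Quantize using that importance matrix (Q4_K_M chosen as an example).
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX, FP16_GGUF,
     "Smaug-Llama-3-70B-Instruct-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```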