This model is still uploading. README will be here shortly.

If you're too impatient to wait for that (of course you are), to run these files you need:
1. llama.cpp as of this commit: https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb (a build sketch follows this list)
2. To add the new command line parameter `-gqa 8`.
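For step 1, here is a minimal sketch of cloning and building at that commit; the clone URL comes from the commit link above, while the plain CPU-only `make` build is an assumption about a standard setup rather than something verified for this repo:

```
# Clone llama.cpp, check out the required commit, and do a default CPU-only build
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout e76d630df17e235e6b9ef416c45996765d2e36fb
make
```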
Example command:
```
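# Flags, as I understand main at this commit: -m = model file, -t = CPU threads, -p = prompt,
# and the new -gqa 8 sets the grouped-query attention factor that the 70B model requires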
/workspace/git/llama.cpp/main -m llama-2-70b-chat/ggml/llama-2-70b-chat.ggmlv3.q4_0.bin -gqa 8 -t 13 -p "[INST] <<SYS>>You are a helpful assistant<</SYS>>Write a story about llamas[/INST]"
```
There is no CUDA support at this time, but it should hopefully be coming soon.