dranger003
committed on
Update README.md
README.md CHANGED
@@ -4,4 +4,8 @@ license: cc-by-nc-2.0
 GGUF importance matrix (imatrix) quants for https://huggingface.co/wolfram/miquliz-120b-v2.0
 The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
 
-Using IQ2_XXS it seems to fit 100/141 layers using 2K context on a 24GB card.
+Using IQ2_XXS it seems to fit 100/141 layers using 2K context on a 24GB card.
+
+| Layers | Context | Template |
+| --- | --- | --- |
+| <pre>140</pre> | <pre>32768</pre> | <pre>[INST] {prompt} [/INST]<br>{response}</pre> |
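The imatrix training and layer-offload figures mentioned in the README can be reproduced roughly with llama.cpp's command-line tools. This is a hedged sketch, not part of the commit: the binary names (`imatrix`, `main`), flags, and file names below are assumptions based on llama.cpp builds from around this model's release and may differ in other versions.

```shell
# 1) Compute the importance matrix: 200 chunks of 512 tokens = ~100K tokens,
#    matching the README's "200 batches of 512 tokens" on wiki.train.raw.
./imatrix -m miquliz-120b-v2.0-f16.gguf -f wiki.train.raw -o imatrix.dat -c 512 --chunks 200

# 2) Run the IQ2_XXS quant, offloading 100 of 141 layers with 2K context,
#    as the README reports fitting on a 24GB card.
./main -m miquliz-120b-v2.0.IQ2_XXS.gguf -ngl 100 -c 2048 -p "[INST] Hello [/INST]"
```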
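The template column in the table follows the Mistral-style instruct format. A minimal sketch of filling it in, assuming the hypothetical helper name `format_turn` (the function and variable names are illustrative, not from the repo):

```python
# The template string as given in the README's table:
# "[INST] {prompt} [/INST]" followed by a newline and the response.
TEMPLATE = "[INST] {prompt} [/INST]\n{response}"

def format_turn(prompt: str, response: str = "") -> str:
    """Fill the template; leave `response` empty when asking the model to generate."""
    return TEMPLATE.format(prompt=prompt, response=response)

print(format_turn("Hello, who are you?"))
# → [INST] Hello, who are you? [/INST]
```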