Update README.md

README.md (changed)

Additional quants are uploading...

The NEO Class tech was created after countless investigations and over 120 lab experiments, backed by real-world testing and qualitative results.

<b>NEO Class results:</b>

Better overall function, instruction following, output quality, and stronger connections to ideas, concepts, and the world in general.

Perplexity drop of 1191 points for the NEO Class Imatrix quant of IQ4XS vs. the regular quant (lower is better).

<B>A funny thing happened on the way to the "lab" ...</B>

Although this model uses a "Llama3" template, we found that Command-R's template worked better, specifically for creative purposes.

This applies to both normal quants and NEO quants.

Here is Command-R's template:

{
  "name": "Cohere Command R",
  "inference_params": {
    "input_prefix": "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>",
    "input_suffix": "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>",
    "antiprompt": [
      "<|START_OF_TURN_TOKEN|>",
      "<|END_OF_TURN_TOKEN|>"
    ],
    "pre_prompt_prefix": "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>",
    "pre_prompt_suffix": ""
  }
}

This "interesting" issue was confirmed by multiple users.
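
To make it concrete, here is a minimal sketch of what a single system + user turn looks like once it is assembled from the fields above. The Python helper and the example prompts are purely illustrative; only the token strings come from the template itself.

```python
# Illustrative only: assemble one turn using the Command-R preset fields shown above.
# pre_prompt_prefix / pre_prompt_suffix wrap the system prompt;
# input_prefix / input_suffix wrap the user message and open the assistant turn.

def build_prompt(system_prompt: str, user_message: str) -> str:
    pre_prompt_prefix = "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>"
    pre_prompt_suffix = ""
    input_prefix = "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
    input_suffix = "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"

    return (
        pre_prompt_prefix + system_prompt + pre_prompt_suffix
        + input_prefix + user_message
        + input_suffix  # the model generates from here; the antiprompt tokens act as stop strings
    )


if __name__ == "__main__":
    print(build_prompt(
        "You are a creative writing assistant.",
        "Write the opening paragraph of a noir short story.",
    ))
```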

<B>Model Notes:</B>

Maximum context is 8k. Please see the original model maker's page for details and usage information for this model.