Text Generation
GGUF
English
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
story
writing
fiction
roleplaying
swearing
extreme swearing
rp
graphic horror
horror
nsfw
llama3
Not-For-All-Audiences
mergekit
Inference Endpoints
conversational
Update README.md
README.md CHANGED

@@ -50,17 +50,20 @@ This version will follow your instructions better, and output will be more nuanc
 
 Roughly, if you are using Q4KM of V2, you will get close to Q5KM of V1 performance.
 
+The model was also remastered from "float16" to "bfloat16" to capture more nuance from the source models.
+
 There are also links to new guides (30+ pages) below to get the most out of this model, regardless of use case(s) you want to use it for.
 
 Templates to use with this model have also been added.
 
 For Grand Horror 16.5B (V1 and/or V2), you may want to check out "generational steering" (section : "Highest Quality Settings..." below.)
 
-I updated the examples with the "v2" versions so you can see some of the contrasts.
+I updated the examples with the "v2" versions ("v1" also shown) so you can see some of the contrasts.
 
-3 new (v2 only) examples showing the full "horror" this model can output are also at the bottom of this page.
+3 new (v2 only) examples showing the full "horror" this model can output are also at the bottom of this page including "dynamic temp".
 
-If you are using V1, you should immediately download the newest quants for maximum performance
+If you are using V1, you should immediately download the newest quants for maximum performance especially if you are using it
+for role play and/or multi-turn chat.
 
 Note that performance upgrades (via LLAMACPP and requanting the model) affect ALL QUANTS.
 
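The diff notes that the model was remastered from "float16" to "bfloat16" before quantization. As a rough illustration only (not the author's actual remastering pipeline), a minimal sketch of re-saving a merged source checkpoint in bfloat16 with Hugging Face transformers might look like this; the model path and output directory are placeholders:

```python
# Illustrative sketch only: re-save a merged checkpoint in bfloat16 instead of float16.
# "path/to/merged-model" and "merged-model-bf16" are placeholders, not real repo names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "path/to/merged-model",
    torch_dtype=torch.bfloat16,  # load/cast the weights as bfloat16 rather than float16
)
tokenizer = AutoTokenizer.from_pretrained("path/to/merged-model")

# Write the bfloat16 checkpoint, which can then be converted to GGUF and quantized.
model.save_pretrained("merged-model-bf16", safe_serialization=True)
tokenizer.save_pretrained("merged-model-bf16")
```

For context, bfloat16 keeps float32's exponent range at the cost of some mantissa precision relative to float16, which is the trade-off behind the "more nuance" note.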
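The card also mentions GGUF quants (e.g. Q4KM/Q5KM) and that prompt templates have been added. Assuming the standard Llama 3 Instruct chat format (the card is tagged llama3) and the llama-cpp-python bindings, a minimal usage sketch could look like the following; the GGUF filename and prompts are placeholders:

```python
# Minimal sketch, assuming llama-cpp-python and a Llama-3-style chat template.
# The GGUF filename below is a placeholder, not an actual file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q4_K_M.gguf",  # placeholder quant file
    n_ctx=4096,                      # context window
    n_gpu_layers=-1,                 # offload all layers to GPU if available
)

# Llama-3 Instruct prompt format (assumed from the "llama3" tag on the card).
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a horror fiction writer.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Continue the scene: the lighthouse door creaks open...<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm(prompt, max_tokens=400, temperature=0.9, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```

The "dynamic temp" mentioned in the examples refers to llama.cpp's dynamic-temperature sampling; the sketch sticks to a fixed temperature because exposure of that sampler varies by binding and version.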