Text Generation
GGUF
English
creative
Uncensored
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
brainstorm 40x
swearing
rp
128k context
horror
llama 3.2
mergekit
Inference Endpoints
conversational
Update README.md
README.md CHANGED
@@ -165,6 +165,19 @@ OTHER OPTIONS:
 
 - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
 
+<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
+
+This is a "Class 2" / "Class 3" model:
+
+For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses model issues and covers methods to improve model performance for all use cases, including chat and roleplay), please see:
+
+[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+You can see all parameters used for generation, in addition to advanced parameters and samplers for getting the most out of this model, here:
+
+[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+
 <B>Model Template:</B>
 
 This is a LLAMA3 model and requires the Llama3 template, but it may work with other template(s). It has a maximum context of 128k / 131072.
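
The diff above tells you to adjust "Quadratic Sampling" ("smoothing") if your front end supports it. For readers whose tooling only exposes a raw logits hook, here is a minimal sketch of one common formulation of that transform; the function name, the example `smoothing_factor` value, and the exact formula are illustrative assumptions, not taken from this repository or its linked settings guide.

```python
import numpy as np

def quadratic_smoothing(logits: np.ndarray, smoothing_factor: float = 0.3) -> np.ndarray:
    """One common formulation of quadratic sampling ("smoothing").

    Assumption for illustration: logits are bent quadratically around the
    current maximum, so tokens close to the top token keep almost all of
    their weight while far-off tail tokens are pushed down hard. A larger
    smoothing_factor sharpens the distribution; a smaller one flattens it.
    """
    max_logit = logits.max()
    # Quadratic penalty grows with the squared distance from the best token.
    return -(smoothing_factor * (logits - max_logit) ** 2) + max_logit

# Usage: apply the transform, then the usual softmax + sampling step.
logits = np.array([4.0, 3.5, 1.0, -2.0])
smoothed = quadratic_smoothing(logits, smoothing_factor=0.3)
probs = np.exp(smoothed - smoothed.max())
probs /= probs.sum()
print(probs)
```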
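
The card also states that the model requires the Llama3 template with a 128k / 131072 context. As a concrete reference, the sketch below builds the standard Llama 3 instruct prompt layout; the helper name and the example system/user strings are hypothetical, and the constant simply restates the context figure quoted on the card.

```python
# Minimal sketch of the standard Llama 3 instruct prompt layout.
# Helper name and example messages are illustrative, not part of this repo.

MAX_CONTEXT = 131072  # 128k context, as quoted on the model card

def build_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    system="You are a vivid fiction writer.",
    user="Continue the scene: the lights in the station flickered once, then died.",
)
print(prompt)
```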