Update README.md
Special thanks to the model creators at SAO10K for making such a fantastic model:
[ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2 ]
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern":

Set the "Smoothing_factor" to 1.5 to 2.5

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"

: in text-generation-webui -> Parameters -> lower right.

: in Silly Tavern this is called "Smoothing"
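
If you drive KoboldCpp through its local HTTP API instead of the UI, the sketch below shows one way to pass the same setting per request. It assumes a recent KoboldCpp build listening on the default port 5001 that accepts a "smoothing_factor" field in its /api/v1/generate payload; older builds may not expose this field, so treat it as a sketch and adjust to your setup.

```python
# Minimal sketch (not the only way): request a generation from a locally
# running KoboldCpp instance with the smoothing factor set per request.
# Assumes a recent KoboldCpp build on the default port 5001 that accepts
# "smoothing_factor" in the /api/v1/generate payload.
import requests

payload = {
    "prompt": "Continue the scene: the door creaks open and",
    "max_context_length": 8192,  # model supports 8k context
    "max_length": 200,
    "temperature": 0.8,
    "rep_pen": 1.0,              # rep pen can stay low when smoothing is used
    "smoothing_factor": 2.0,     # recommended range above: 1.5 to 2.5
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```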

NOTE: For "text-generation-webui"

-> if you are using GGUFs, you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
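
As a rough illustration of that "llama_HF" workflow, the sketch below uses the huggingface_hub library to pull the tokenizer/config files from a source repository into the same text-generation-webui model folder that holds the GGUF. The repo ID and folder path are placeholders; substitute the actual source repo for this model from the collection above.

```python
# Sketch only: fetch the config/tokenizer files that the "llama_HF" loader
# needs and place them in the text-generation-webui model folder that also
# contains the downloaded .gguf file.
# "DavidAU/SOURCE-REPO-NAME" and the local_dir path are placeholders --
# use the real source repo from the collection linked above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="DavidAU/SOURCE-REPO-NAME",                       # placeholder
    allow_patterns=["*.json", "tokenizer.model"],             # configs + tokenizer only
    local_dir="text-generation-webui/models/My-Model-GGUF",   # placeholder folder with the .gguf
)
```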
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted (see the sketch below).
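
For readers curious what "Quadratic Sampling" actually does, here is a small self-contained sketch of the transform as it is commonly described: logits are bent quadratically around the top logit, so larger smoothing factors concentrate probability on the most likely tokens. This is an illustration of the idea, not the exact code used by any particular backend.

```python
# Rough sketch of the "Quadratic Sampling" / smoothing idea, for illustration
# only -- individual backends may implement it differently.
import numpy as np

def quadratic_smoothing(logits: np.ndarray, smoothing_factor: float) -> np.ndarray:
    """Bend logits quadratically around the top logit."""
    max_logit = logits.max()
    return -(smoothing_factor * (logits - max_logit) ** 2) + max_logit

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([4.0, 3.0, 2.0, 0.5])
print("raw        :", np.round(softmax(logits), 3))
for factor in (1.5, 2.5):
    print(f"factor={factor}:", np.round(softmax(quadratic_smoothing(logits, factor)), 3))
```

Note how the larger factors concentrate the distribution on the top token in the printed output.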
<h3> Sample Prompt and Models Compared:</h3>

Prompt tested with "temp=0" to ensure compliance, 2048 context (model supports 8192 context / 8k), and the "chat" template for LLAMA3.