---
license: cc-by-nc-4.0
language:
- en
inference: false
tags:
- roleplay
- llama3
- sillytavern
---

# #roleplay #sillytavern #llama3

My GGUF-IQ-Imatrix quants for [**Nitral-AI/Poppy_Porpoise-0.85-L3-8B**](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.85-L3-8B).

"Isn't Poppy the cutest [Porpoise](https://g.co/kgs/5C2zP3r)?"

> [!IMPORTANT]
> **Quantization process:** <br>
> For future reference, these quants were made after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) were merged. <br>
> Since the original model was already in FP16, the imatrix data was generated from the FP16-GGUF, and the conversions were made from it as well (a sketch of this flow follows below). <br> <!-- This was a bit more disk and compute intensive but hopefully avoided any losses during conversion. <br> -->
> If you notice any issues, let me know in the discussions.

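
For reference, this is roughly what an imatrix quantization flow looks like with llama.cpp's tools, driven here from Python. The file names, calibration text, and quant list are hypothetical placeholders, and the binary names vary across llama.cpp versions, so treat this as an illustration rather than the exact pipeline used for these files.

```python
import subprocess

# Hypothetical paths -- adjust to your setup.
FP16_GGUF = "Poppy_Porpoise-0.85-L3-8B-F16.gguf"
CALIB_TXT = "calibration.txt"  # text corpus used to collect importance data
IMATRIX = "imatrix.dat"

# 1. Collect the importance matrix from the FP16 GGUF.
#    (the binary is `imatrix` or `llama-imatrix`, depending on the llama.cpp version)
subprocess.run(["./imatrix", "-m", FP16_GGUF, "-f", CALIB_TXT, "-o", IMATRIX], check=True)

# 2. Produce each quant from the FP16 GGUF, guided by the importance matrix.
#    (the binary is `quantize` or `llama-quantize`, depending on the version)
for qtype in ["Q4_K_M", "Q5_K_M", "IQ4_XS"]:  # illustrative quant types
    out = f"Poppy_Porpoise-0.85-L3-8B-{qtype}-imat.gguf"
    subprocess.run(["./quantize", "--imatrix", IMATRIX, FP16_GGUF, out, qtype], check=True)
```
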
> [!NOTE]
> **General usage:** <br>
> Use the latest version of **KoboldCpp**; a scriptable alternative is sketched just below this note. <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant at context sizes up to 12288. <br>
> For **12GB VRAM** GPUs, the **Q5_K_M-imat** quant will give you a great size/quality balance. <br>
>
> **Resources:** <br>
> You can find out more about how each quant stacks up against the others and about the quant types [**here**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [**here**](https://rentry.org/llama-cpp-quants-or-fine-ill-do-it-myself-then-pt-2), respectively.
>
> **Presets:** <br>
> Some compatible SillyTavern presets can be found [**here (Poppy-0.85 Presets)**] or [**here (Virt's Roleplay Presets)**](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
<!-- > Check [**discussions such as this one**](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/5#664d6fb87c563d4d95151baa) for other recommendations and samplers. -->

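
KoboldCpp is the recommended way to run these files. If you prefer a scriptable route instead, the same GGUF loads with the llama-cpp-python bindings; a minimal sketch, assuming the hypothetical local file name below:

```python
from llama_cpp import Llama

# Hypothetical local file name -- use whichever quant you downloaded.
llm = Llama(
    model_path="Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf",
    n_ctx=12288,      # the context size suggested above for 8GB VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU
)

# Llama 3 GGUFs ship a chat template, so chat completion works directly.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself, Poppy."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
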
> [!TIP]
> **Personal-support:** <br>
> I apologize for any disruption to your experience. <br>
> Currently I'm working on moving to a better internet provider. <br>
> If you **want** to and are **able to**... <br>
> You can [**spare some change over here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
>
> **Author-support:** <br>
> You can support the author [**at their own page**](https://huggingface.co/Nitral-AI).

## **Original model information:**

# Note: Updated Presets!

"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each story to their individual preferences.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Boje781GkTdYgORTYGI6r.png)

# Recommended ST Presets (Updated for 0.85): [Porpoise Presets](https://huggingface.co/Nitral-8B-r1/tree/main/Presets)

If you want to use vision functionality:

* You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp).

To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-2.0-mmproj-model-f16).

* You can load the **mmproj** by using the corresponding section in the interface (a programmatic sketch follows the screenshot):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
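
The screenshot above shows the KoboldCpp route. As a programmatic alternative, llama-cpp-python can pair a quant with an mmproj through its LLaVA-style chat handler. A minimal sketch with hypothetical local file names, assuming that handler is compatible with this projector:

```python
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

def image_to_data_uri(path: str) -> str:
    # Images are passed as base64 data URIs inside "image_url" content parts.
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Hypothetical file names -- the mmproj comes from the repo linked above.
chat_handler = Llava15ChatHandler(clip_model_path="Llama-3-Update-2.0-mmproj-model-f16.gguf")
llm = Llama(
    model_path="Poppy_Porpoise-0.85-L3-8B-Q4_K_M-imat.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # images consume context, so leave headroom
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_to_data_uri("example.png")}},
            {"type": "text", "text": "Describe this image."},
        ],
    }],
)
print(out["choices"][0]["message"]["content"])
```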