---
tags:
- roleplay
- llama3
- sillytavern
- gguf
license: apache-2.0
---

> [!TIP]
> **Support:**
> My upload speeds have been cooked and unstable lately.
> Realistically I'd need to move to get a better provider.
> If you **want** to and are able to...
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous)
> I apologize for disrupting your experience.

**This is Llama-3 land now, cowboys!**

"A chaotic force beckons for you, will you heed her call?"

GGUF-IQ-Imatrix quants for [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B).

> [!IMPORTANT]
> **Updated!**
> These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind.
> Use **KoboldCpp version 1.64** or higher.

> [!NOTE]
> **Quant:**
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes up to 12288 (a minimal loading sketch is shown after these notes).

> [!WARNING]
> Recommended presets [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here](https://huggingface.co/Virt-io/SillyTavern-Presets).
> Use the latest version of KoboldCpp. **Use the provided presets.**
> This is all still highly experimental; modified configs were used to avoid the tokenizer issues. Let the authors know how it performs for you; feedback is more important than ever now.
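
The recommended path is KoboldCpp 1.64+ with the presets linked above. As a quick sanity check of the quant and context settings outside that stack, here is a minimal sketch using llama-cpp-python, which is not part of this card's instructions; the GGUF filename and sampler values are placeholders, so substitute the actual Q4_K_M-imat file you download from this repository.

```python
# Minimal sketch, assuming llama-cpp-python is installed with GPU support
# and a Q4_K_M-imat GGUF from this repo is present locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Chaos_RP_l3_8B-Q4_K_M-imat.gguf",  # placeholder filename
    n_ctx=12288,      # context size recommended above for 8GB VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU if they fit
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "A chaotic force beckons. Will you heed her call?"},
    ],
    max_tokens=256,
    temperature=0.9,  # placeholder; the linked presets define the intended sampler settings
)
print(out["choices"][0]["message"]["content"])
```

This snippet only verifies that the file loads and generates; for actual roleplay use, KoboldCpp with the SillyTavern presets above handles the Llama-3 prompt formatting for you.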

**Original model information:**

# Chaos RP

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/u5p9kdbXT2QQA3iMU0vF1.png)

A chaotic force beckons for you, will you heed her call?

Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort. Enjoy!