---
base_model: matchaaaaa/Chaifighter-20B-v2
model_name: Chaifighter-20b-GGUF-v2
quantized_by: brooketh
---
**The official library of GGUF format models for use in the local AI chat app, Faraday.dev.**

**Download Faraday here to get started.**

Request additional models at r/LLM_Quants.
***

# Chaifighter 20B v2

- **Creator:** [matchaaaaa](https://huggingface.co/matchaaaaa/)
- **Original:** [Chaifighter 20B v2](https://huggingface.co/matchaaaaa/Chaifighter-20B-v2)
- **Date Created:** 2024-05-19
- **Trained Context:** 4096 tokens
- **Description:** Medium-sized model geared towards long-form, verbose roleplay chats. Designed to be a very creative and rich storyteller while retaining reasoning, coherence, and context-following capabilities. May be considerably faster than comparably sized models on most hardware. Version 2 brings better long-context performance and quality fixes.

## What is a GGUF?

GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Faraday.dev. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.

***
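As a rough illustration of the quantization idea described above, here is a minimal sketch of a simple symmetric integer scheme. It is not the actual GGUF k-quant implementation (real GGUF formats such as Q4_K_M use block-wise scales and more elaborate packing); the function names and the per-tensor scale are assumptions for demonstration only.

```python
# Minimal sketch: fewer bits per weight means coarser steps between
# representable values, so the dequantized weights drift further from
# the originals as the bit width shrinks.
import numpy as np

def quantize(weights: np.ndarray, bits: int):
    """Map float weights to signed integers with the given bit width."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax  # single scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integers."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(8).astype(np.float32)
    for bits in (8, 4, 2):
        q, s = quantize(w, bits)
        err = np.abs(w - dequantize(q, s)).mean()
        print(f"{bits}-bit mean absolute error: {err:.4f}")
```

Running the sketch shows the mean reconstruction error growing as the bit width drops, which is the same precision-for-size tradeoff that lower GGUF quantizations make.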