---
license: cc-by-nc-4.0
base_model:
- PatronusAI/glider
---
Available GGUF versions for the [PatronusAI/glider](https://huggingface.co/PatronusAI/glider) model: [`BF16`, `Q8_0`, `Q5_K_M`, `Q4_K_M`]
How to load your desired quantized model:
1. Select the appropriate GGUF quantization from the available list above
2. Run the following code:
```python
from transformers import AutoModelForCausalLM
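# Replace {version_from_list} with one of: BF16, Q8_0, Q5_K_M, Q4_K_M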
model = AutoModelForCausalLM.from_pretrained("PatronusAI/glider-gguf", gguf_file="glider_{version_from_list}.gguf")
```
For example, to load the Q8_0 version, the script becomes:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("PatronusAI/glider-gguf", gguf_file="glider_Q8_0.gguf")
```
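Once loaded, the checkpoint behaves like any other `transformers` causal LM. Below is a minimal sketch of running a short generation with the Q8_0 file; the prompt and generation settings are illustrative only, and loading GGUF checkpoints with `transformers` assumes the `gguf` Python package is installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PatronusAI/glider-gguf"
gguf_file = "glider_Q8_0.gguf"

# Both the tokenizer and the model can be loaded from the same GGUF file
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=gguf_file)

# Illustrative prompt; substitute your own evaluation input
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```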
For any issues or questions, reach out to [Darshan Deshpande](https://huggingface.co/darshandeshpande) or [Rebecca Qian](https://huggingface.co/RebeccaQian1).