GGUF LoRA adapters
Adapters extracted from fine-tuned models using mergekit-extract-lora (a usage sketch follows the list):
- ggml-org/LoRA-Llama-3-Instruct-abliteration-8B-F16-GGUF
- ggml-org/LoRA-Qwen2.5-1.5B-Instruct-abliterated-F16-GGUF
- ggml-org/LoRA-Qwen2.5-3B-Instruct-abliterated-F16-GGUF
- ggml-org/LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16-GGUF
- ggml-org/LoRA-Deepthink-Reasoning-Qwen2.5-7B-Instruct-Q8_0-GGUF
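These adapters are obtained by taking the low-rank difference between a fine-tune and its base model, converting the result to GGUF, and applying it on top of the base model at inference time. A minimal sketch, assuming placeholder model paths; mergekit and llama.cpp CLI flags change between versions, so check each tool's --help before copying:

```sh
# 1) Extract a low-rank adapter from a fine-tune against its base model.
#    Placeholder paths; argument order and flag names vary by mergekit version.
mergekit-extract-lora <finetuned-model> <base-model> ./lora-out --rank=32

# 2) Convert the extracted (PEFT-format) adapter to GGUF with llama.cpp's converter.
python convert_lora_to_gguf.py ./lora-out --base <base-model> \
    --outtype f16 --outfile lora-adapter-f16.gguf

# 3) Apply the adapter on top of the base GGUF model at runtime.
llama-cli -m base-model-Q8_0.gguf --lora lora-adapter-f16.gguf -p "Hello"

# Optionally scale the adapter's effect (0.0 disables it, 1.0 is full strength).
llama-cli -m base-model-Q8_0.gguf --lora-scaled lora-adapter-f16.gguf 0.5 -p "Hello"
```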
llama.vim
Recommended models for the llama.vim and llama.vscode plugins (a server setup sketch follows the list):
- ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF
- ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF
- ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF
- ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF
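Both plugins talk to a local llama-server instance that serves fill-in-the-middle (FIM) completions. A minimal sketch, assuming a recent llama.cpp build and the plugins' default endpoint on port 8012; the tuning flags follow settings commonly suggested for these plugins and may differ across llama.cpp versions:

```sh
# Download (via the -hf shortcut) and serve a FIM-capable coder model
# for llama.vim / llama.vscode. Pick a model size that fits your VRAM.
llama-server \
    -hf ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF \
    --port 8012 -ngl 99 -fa \
    -ub 1024 -b 1024 --ctx-size 0 --cache-reuse 256
```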