---
base_model: luke-c/gpt2-lora
library_name: peft
tags:
- llama-cpp
- gguf-my-lora
---

# luke-c/gpt2-lora-Q8_0-GGUF

This LoRA adapter was converted to GGUF format from [`luke-c/gpt2-lora`](https://huggingface.co/luke-c/gpt2-lora) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.

Refer to the [original adapter repository](https://huggingface.co/luke-c/gpt2-lora) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora gpt2-lora-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora gpt2-lora-q8_0.gguf (...other args)
```

To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
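
Before running the commands above, the adapter GGUF can be fetched from this repo with `huggingface-cli`. A minimal sketch, assuming the adapter filename matches the one used in the commands above; `base_model.gguf` is a placeholder for a GGUF build of the base `gpt2` model obtained separately:

```bash
# download the converted LoRA adapter from this repo (assumed filename)
huggingface-cli download luke-c/gpt2-lora-Q8_0-GGUF gpt2-lora-q8_0.gguf --local-dir .

# then apply it on top of a base model GGUF (base_model.gguf is a placeholder)
llama-cli -m base_model.gguf --lora ./gpt2-lora-q8_0.gguf -p "Hello"
```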