---
library_name: transformers
tags:
- llama-cpp
- gguf-my-lora
base_model: salni84/fine-tuned-llama3.2
---

# salni84/fine-tuned-llama3.2-F16-GGUF

This LoRA adapter was converted to GGUF format from [`salni84/fine-tuned-llama3.2`](https://huggingface.co/salni84/fine-tuned-llama3.2) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/salni84/fine-tuned-llama3.2) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora fine-tuned-llama3.2-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora fine-tuned-llama3.2-f16.gguf (...other args)
```

For more on LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
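The commands above assume the adapter GGUF and a GGUF build of the base model are already on disk. As a minimal sketch, both can be fetched with `huggingface-cli`; the adapter repo name is taken from this card, while `<base-model-gguf-repo>` is a placeholder, since a Llama 3.2 GGUF matching the adapter's base model must be obtained separately:

```bash
# download the converted LoRA adapter (file name matches the commands above)
huggingface-cli download salni84/fine-tuned-llama3.2-F16-GGUF fine-tuned-llama3.2-f16.gguf --local-dir .

# download a GGUF build of the base model
# (<base-model-gguf-repo> is a placeholder; pick one matching the adapter's base model)
huggingface-cli download <base-model-gguf-repo> base_model.gguf --local-dir .

# llama.cpp also accepts --lora-scaled to apply the adapter at reduced strength
llama-cli -m base_model.gguf --lora-scaled fine-tuned-llama3.2-f16.gguf 0.5 (...other args)
```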