# vkimbris/Cotype-Nano-Generate-Adapter-Q8_0-GGUF

This LoRA adapter was converted to GGUF format from vkimbris/Cotype-Nano-Generate-Adapter via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.

## Use with llama.cpp
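
The commands below assume the adapter file is available locally. A minimal sketch for fetching it from this repo with huggingface-cli (the target directory is illustrative):

```bash
# download the converted adapter file referenced in the examples below
huggingface-cli download vkimbris/Cotype-Nano-Generate-Adapter-Q8_0-GGUF \
  Cotype-Nano-Generate-Adapter-q8_0.gguf --local-dir .
```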

```bash
# with cli
llama-cli -m base_model.gguf --lora Cotype-Nano-Generate-Adapter-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Cotype-Nano-Generate-Adapter-q8_0.gguf (...other args)
```
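
The adapter can also be applied at reduced strength via llama.cpp's `--lora-scaled` flag, which takes the adapter path followed by a scale factor (a scale of 1.0 is equivalent to plain `--lora`). A minimal sketch reusing the paths above, with the 0.5 scale chosen purely for illustration:

```bash
# blend the adapter at half strength instead of fully applying it
llama-cli -m base_model.gguf --lora-scaled Cotype-Nano-Generate-Adapter-q8_0.gguf 0.5 (...other args)
```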

For more details on using LoRA adapters with the llama.cpp server, refer to the llama.cpp server documentation.
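
Once llama-server is running with the adapter loaded, generation requests go over its HTTP API. A minimal sketch against the server's /completion endpoint, assuming the default listen address of 127.0.0.1:8080 (the prompt is illustrative):

```bash
# send a completion request to the running server (default port assumed)
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}'
```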

## Model details

- Format: GGUF
- Quantization: 8-bit (Q8_0)
- Model size: 18.5M params
- Architecture: qwen2
- Base model: MTSAIR/Cotype-Nano