Recaru/gemma-ko-2b-Q4_K_M-GGUF

This model was converted to GGUF format from beomi/gemma-ko-2b using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
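The conversion produces a single quantized file, gemma-ko-2b.Q4_K_M.gguf. If you prefer to download it from the Hub manually rather than letting llama.cpp fetch it, here is a minimal sketch using the huggingface_hub CLI (assuming it is installed, e.g. via pip install -U huggingface_hub):

huggingface-cli download Recaru/gemma-ko-2b-Q4_K_M-GGUF gemma-ko-2b.Q4_K_M.gguf --local-dir .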

Use with llama.cpp

Install llama.cpp through Homebrew (works on macOS and Linux).

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Recaru/gemma-ko-2b-Q4_K_M-GGUF --model gemma-ko-2b.Q4_K_M.gguf -p "The meaning of life and the universe is"
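The --hf-repo flag lets llama-cli fetch the GGUF file from the Hub automatically, using the --model value as the local filename, so no separate download step should be needed.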

Server:

llama-server --hf-repo Recaru/gemma-ko-2b-Q4_K_M-GGUF --model gemma-ko-2b.Q4_K_M.gguf -c 2048
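Once the server is running, you can send it requests over HTTP. The following is a minimal sketch, assuming the default listen address of http://localhost:8080 and the server's /completion endpoint:

curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning of life and the universe is", "n_predict": 128}'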

Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.

git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-ko-2b.Q4_K_M.gguf -n 128
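Newer llama.cpp checkouts build with CMake rather than make, and the example binaries were renamed; if ./main is missing after building, use the llama-cli binary instead (e.g. ./llama-cli -m gemma-ko-2b.Q4_K_M.gguf -n 128).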
Model details: GGUF format, 2.51B params, gemma architecture, 4-bit (Q4_K_M) quantization.