# MaziyarPanahi/Llama-3-16B-Instruct-v0.1-GGUF

## Description

MaziyarPanahi/Llama-3-16B-Instruct-v0.1-GGUF contains GGUF format model files for MaziyarPanahi/Llama-3-16B-Instruct-v0.1.
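As a minimal sketch of fetching one of the GGUF files with the `huggingface_hub` client: the quantization filename below is an assumption and should be checked against the files actually listed in this repository.

```python
# Download sketch using huggingface_hub (pip install huggingface_hub).
# The filename is an assumed example; verify the exact quantization file
# name in the repository's file list before running.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Llama-3-16B-Instruct-v0.1-GGUF",
    filename="Llama-3-16B-Instruct-v0.1.Q4_K_M.gguf",  # assumed quant filename
)
print(model_path)  # local path to the downloaded GGUF file
```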

## Load GGUF models

You MUST follow the prompt template provided by Llama-3:

```sh
./llama.cpp/main -m Llama-3-16B-Instruct-v0.1.Q2_K.gguf \
  -r '<|eot_id|>' \
  --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" \
  --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" \
  -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi! How are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n" \
  -n 1024
```
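As an alternative to the CLI, here is a sketch using the llama-cpp-python bindings; it assumes the package is installed, that the model file has already been downloaded locally, and that the library's built-in "llama-3" chat format matches the template above. The model path and quantization are assumptions.

```python
# Sketch with llama-cpp-python (pip install llama-cpp-python).
# chat_format="llama-3" applies the same <|start_header_id|>/<|eot_id|>
# template shown in the CLI example; the model path is an assumed local file.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-16B-Instruct-v0.1.Q2_K.gguf",  # assumed local GGUF file
    chat_format="llama-3",
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, smart, kind, and efficient AI assistant."},
        {"role": "user", "content": "Hi! How are you?"},
    ],
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])
```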
## Model details

- Format: GGUF
- Model size: 16.8B params
- Architecture: llama
- Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

