Description

This repository contains GGUF format model files for Meta Llama 3 Instruct.

Prompt template

<|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Same as here: https://ollama.com/library/llama3:instruct/blobs/8ab4849b038c
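As a minimal sketch, the template above can be filled in with plain string formatting. The special-token strings are copied verbatim from the template; the helper name is illustrative, and most llama.cpp-based runtimes prepend the `<|begin_of_text|>` token themselves:

```python
# Minimal sketch: rendering the Llama 3 Instruct template with str.format.
# The special-token strings are taken verbatim from the template above.
LLAMA3_TEMPLATE = (
    "<|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

def build_prompt(system_prompt: str, prompt: str) -> str:
    """Render a single-turn Llama 3 Instruct prompt."""
    return LLAMA3_TEMPLATE.format(system_prompt=system_prompt, prompt=prompt)

print(build_prompt("You are a helpful assistant.", "Why is the sky blue?"))
```

The model stops generating when it emits `<|eot_id|>`, so that token is a natural stop sequence for inference.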

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

pip install -U "huggingface_hub[cli]"

Then, you can target the specific file you need:

huggingface-cli download liashchynskyi/Meta-Llama-3-8B-Instruct-GGUF --include "meta-llama-3-8b-instruct.Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
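Once downloaded, the file can be run with any llama.cpp-compatible runtime; as one hedged example, here is a sketch using llama-cpp-python (an assumption, installable with `pip install llama-cpp-python`). The load is guarded so the snippet is harmless if the file has not been downloaded yet:

```python
# Sketch: loading the quantized GGUF with llama-cpp-python (assumed runtime).
# create_chat_completion applies the Llama 3 chat template shown above.
import os

MODEL_PATH = "./meta-llama-3-8b-instruct.Q4_K_M.gguf"

if os.path.exists(MODEL_PATH):
    from llama_cpp import Llama

    llm = Llama(model_path=MODEL_PATH, n_ctx=8192)
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Why is the sky blue?"},
        ],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
else:
    print(f"{MODEL_PATH} not found; download it first with huggingface-cli.")
```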
Model details

- Format: GGUF
- Model size: 8.03B params
- Architecture: llama
- Quantizations: 4-bit, 5-bit, 6-bit
