# Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-gguf
This model was converted to GGUF format from [`GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct`](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) using llama.cpp.
Refer to the [original model card](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) for more details on the model.

## Available Versions
- `llama3-8b-cpt-sahabatai-v1-instruct.q2_k.gguf` (q2_k)
- `llama3-8b-cpt-sahabatai-v1-instruct.q3_k_m.gguf` (q3_k_m)
- `llama3-8b-cpt-sahabatai-v1-instruct.q4_0.gguf` (q4_0)
- `llama3-8b-cpt-sahabatai-v1-instruct.q4_k_m.gguf` (q4_k_m)
- `llama3-8b-cpt-sahabatai-v1-instruct.q5_0.gguf` (q5_0)
- `llama3-8b-cpt-sahabatai-v1-instruct.q5_k_m.gguf` (q5_k_m)
- `llama3-8b-cpt-sahabatai-v1-instruct.q6_k.gguf` (q6_k)
- `llama3-8b-cpt-sahabatai-v1-instruct.q8_0.gguf` (q8_0)
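All eight files share a single naming pattern. A small helper (hypothetical, for illustration only) maps a quantization level to its filename, which can be handy when scripting downloads:

```python
# Base name and quantization suffixes used by this repository.
BASE = "llama3-8b-cpt-sahabatai-v1-instruct"
QUANTS = ["q2_k", "q3_k_m", "q4_0", "q4_k_m", "q5_0", "q5_k_m", "q6_k", "q8_0"]

def gguf_filename(quant: str) -> str:
    """Return the GGUF filename for one of the listed quantization levels."""
    if quant not in QUANTS:
        raise ValueError(f"unknown quantization level: {quant}")
    return f"{BASE}.{quant}.gguf"

print(gguf_filename("q4_k_m"))  # llama3-8b-cpt-sahabatai-v1-instruct.q4_k_m.gguf
```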

## Use with llama.cpp
Replace `FILENAME` with one of the filenames above.

### CLI:
```bash
llama-cli --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-gguf --hf-file FILENAME -p "Your prompt here"
```

### Server:
```bash
llama-server --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-gguf --hf-file FILENAME -c 2048
```
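Once `llama-server` is running (it listens on port 8080 by default), recent llama.cpp builds expose an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal client sketch using only the standard library, assuming the server is reachable on localhost; the helper names are illustrative, not part of llama.cpp:

```python
import json
import urllib.request

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str, url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """POST the prompt to a running llama-server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, `print(ask("Apa khabar?"))` sends a single-turn chat request and prints the model's reply.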
    
## Model Details
- **Original Model:** [GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct)
- **Format:** GGUF
- **Model size:** 8.03B params
- **Architecture:** llama