---
base_model: ayan4m1/Llama3.1-8B-Sonnet
base_model_relation: quantized
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- llama3.1
- sonnet
- claude
quantized_by: ayan4m1
inference: false
fine-tuning: false
library_name: transformers
---
# Llama-3.1-8B Sonnet fine-tune in quantized GGUFs
Original model: https://huggingface.co/ayan4m1/Llama3.1-8B-Sonnet
Quantized into the following GGUF formats (a download sketch follows the list):
- Q8_0
- Q6_K
- Q5_K_M
- Q4_K_M
- Q3_K_M
- Q2_K
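
A minimal download sketch using `huggingface_hub` (not part of the original card). The repo id `ayan4m1/Llama3.1-8B-Sonnet-GGUF` and the file name `Llama3.1-8B-Sonnet-Q4_K_M.gguf` are illustrative guesses; check the actual file listing of this repo before running.

```python
# Hypothetical repo id and file name -- verify against the repo's file listing.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ayan4m1/Llama3.1-8B-Sonnet-GGUF",   # assumed quantized repo id
    filename="Llama3.1-8B-Sonnet-Q4_K_M.gguf",   # assumed Q4_K_M file name
)
print(model_path)  # local cache path of the downloaded GGUF
```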
## Prompt format

```
<|begin_of_text|>{prompt}
```
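
A minimal inference sketch, assuming the `llama-cpp-python` bindings and a locally downloaded GGUF (for example the `model_path` from the download sketch above). It simply applies the prompt format shown in this section.

```python
from llama_cpp import Llama

# Point model_path at one of the quantized GGUF files listed above.
llm = Llama(model_path=model_path, n_ctx=4096)

# Prepend the begin-of-text token as specified by the prompt format.
prompt = "<|begin_of_text|>Write a sonnet about large language models."
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```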
## Credits

Thanks to Meta, mlfoundations-dev, and Gryphe for providing the data used to create this fine-tune.