Llama-3-8B-16K-GGUF
- This is a quantized version of mattshumer/Llama-3-8B-16K, created using llama.cpp
Model Description
This is an extended-context (16K) version of LLaMA 3 8B (base, not instruct). It was trained for five hours on 8x A6000 GPUs on the Yukang/LongAlpaca-16k-length dataset, with rope_theta set to 1000000.0. Trained with Axolotl.
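Raising rope_theta is the standard trick behind this kind of context extension: a larger base slows the rotation of the low-frequency RoPE components, so positions far apart stay distinguishable. A minimal sketch of the effect (the head dimension of 128 and the stock Llama 3 8B value of 500000.0 are assumptions based on the published Llama 3 config, not stated in this card):

```python
import math

def rope_inv_freq(dim: int, theta: float) -> list[float]:
    # Per-pair inverse frequencies used by rotary position embeddings:
    # inv_freq[i] = theta ** (-2i / dim)
    return [theta ** (-2 * i / dim) for i in range(dim // 2)]

# Assumed values: Llama 3 8B has a head dimension of 128 and ships with
# rope_theta = 500000.0; this fine-tune raises it to 1000000.0.
base = rope_inv_freq(128, 500000.0)
extended = rope_inv_freq(128, 1000000.0)

# With the larger theta, the slowest rotary component rotates even more
# slowly, stretching the usable positional range.
print(extended[-1] < base[-1])
```

This changes only how positions are encoded; the fine-tuning on long samples is what teaches the model to actually use the extra range.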
- Downloads last month: 94
Model tree for QuantFactory/Llama-3-8B-16K-GGUF
- Base model: mattshumer/Llama-3-8B-16K