Meta-Llama-3-8B-Instruct-4bit

Model Details

Model Description

4-bit GPTQ quantization of Meta-Llama-3-8B-Instruct, calibrated on the c4 dataset.
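
Below is a minimal sketch of loading a GPTQ checkpoint like this one with the Hugging Face transformers library (GPTQ support additionally requires the optimum and auto-gptq/gptqmodel packages). The repository id is assumed from the model name and may differ from the actual repo path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Meta-Llama-3-8B-Instruct-4bit"  # assumed repo id; replace with the actual path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires accelerate; places quantized weights on available devices
    torch_dtype="auto",  # keep FP16 scales and packed I32 weights as stored
)

# Llama-3-Instruct expects its chat template to be applied before generation.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```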

Model Sources

Safetensors
Model size: 1.99B params
Tensor types: FP16 · I32