---
base_model:
- meta-llama/Meta-Llama-3-70B
---

# meta-llama/Meta-Llama-3-70B (Quantized)

## Description

This model is a quantized version of the original model [`meta-llama/Meta-Llama-3-70B`](https://huggingface.co/meta-llama/Meta-Llama-3-70B). It was quantized with weight-only int4 quantization (`int4_weight_only`) using [torchao](https://github.com/pytorch/ao).

## Quantization Details

- **Quantization Type**: int4_weight_only
- **Group Size**: 128

## Usage

You can use this model in your applications by loading it directly from the Hugging Face Hub. Since Meta-Llama-3-70B is a causal language model, `AutoModelForCausalLM` is the appropriate auto class:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model weights and matching tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B")
```