---
base_model: TareksTesting/Scripturient-V2.0-LLaMa-70B
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
- exl3
---

## EXL3 Quants of TareksTesting/Scripturient-V2.0-LLaMa-70B

EXL3 quants of [TareksTesting/Scripturient-V2.0-LLaMa-70B](https://huggingface.co/TareksTesting/Scripturient-V2.0-LLaMa-70B), quantized using exllamav3.

### Quants

| Quant (Revision) | Bits per Weight | Head Bits |
| ---------------- | --------------- | --------- |
| [3.5_H6](https://huggingface.co/ArtusDev/TareksTesting_Scripturient-V2.0-LLaMa-70B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |

### Downloading quants with huggingface-cli
<details>
  <summary>Click to view download instructions</summary>

Install huggingface-cli:

```bash
pip install -U "huggingface_hub[cli]"
```

Download a quant by targeting the specific quant revision (branch):

```bash
huggingface-cli download ArtusDev/TareksTesting_Scripturient-V2.0-LLaMa-70B-EXL3 --revision "3.5bpw_H6" --local-dir ./
```
</details>
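For scripted downloads, the same revision can also be fetched with the `huggingface_hub` Python API. This is a minimal sketch assuming the `3.5bpw_H6` revision from the table above; the local directory name is illustrative.

```python
# Minimal sketch: download a specific quant revision (branch) with the Python API.
# Assumes huggingface_hub is installed (see the pip command above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ArtusDev/TareksTesting_Scripturient-V2.0-LLaMa-70B-EXL3",
    revision="3.5bpw_H6",               # quant revision listed in the table above
    local_dir="./Scripturient-V2.0-EXL3",  # illustrative target directory
)
```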