DEFAULT_stage:
  DEFAULT_modifiers:
    QuantizationModifier:
      ignore: [lm_head]    # leave the output head (lm_head) in its original precision
      targets: [Linear]    # quantize all Linear layers
      scheme: FP8_DYNAMIC  # FP8 weights with dynamic per-token FP8 activation scales
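
For reference, a minimal sketch of how a recipe like this is typically applied with llm-compressor's oneshot entrypoint. The base model id, output directory, and recipe filename below are placeholders rather than values taken from this repository, and the import path for oneshot can vary between llm-compressor versions. FP8_DYNAMIC computes activation scales at runtime, so no calibration dataset is needed.

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot  # older releases expose this as llmcompressor.transformers.oneshot

# Placeholder identifiers; substitute the actual base model and output path.
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"
SAVE_DIR = "Meta-Llama-3-8B-Instruct-FP8-Dynamic"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Apply the recipe shown above; FP8_DYNAMIC requires no calibration data,
# so no dataset argument is passed.
oneshot(model=model, recipe="recipe.yaml")

# Write the quantized weights in compressed-tensors format.
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)

The saved checkpoint stores its weights in the compressed-tensors format, which vLLM can load directly.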