quantization / compressed_tensors
danieldk
Add `scaled_(int|fp8)_quant` and `fp8_marlin_gemm`
5c6fb68
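The commit adds quantization entry points named `scaled_int8_quant`, `scaled_fp8_quant`, and `fp8_marlin_gemm`. As a rough illustration only (not this repo's actual kernel or signature), the sketch below shows what per-tensor scaled FP8 quantization typically computes: pick or receive a scale, divide, clamp to the FP8 range, and cast. The helper name `scaled_fp8_quant_ref` and its interface are assumptions for illustration.

```python
# Hypothetical reference sketch of per-tensor scaled FP8 quantization.
# The real `scaled_fp8_quant` kernel added in this commit may differ in
# signature, scaling granularity, and output layout.
import torch


def scaled_fp8_quant_ref(x: torch.Tensor, scale: torch.Tensor | None = None):
    """Quantize a float tensor to float8_e4m3fn with a per-tensor scale.

    If `scale` is None, a dynamic scale is derived from the tensor's
    absolute maximum so values map into the representable FP8 range.
    """
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    if scale is None:
        # Dynamic scaling: amax / fp8_max, guarding against division by zero.
        scale = x.abs().max().clamp(min=1e-12) / fp8_max
    # Scale, clamp to the FP8 representable range, then cast to FP8.
    q = (x / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return q, scale
```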