FP8-Dynamic quantization of Gemma 3 12B IT, produced with [llmcompressor](https://github.com/vllm-project/llm-compressor). Serve with vLLM:

```shell
vllm serve leon-se/gemma-3-12b-it-FP8-Dynamic --max-model-len 4096
```
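The card does not include the quantization script, but the usual llmcompressor FP8-Dynamic flow looks roughly like the sketch below. The model class, output path, and `ignore` list are assumptions (Gemma 3 is multimodal in recent `transformers` releases, so the exact model class and ignored modules may differ); this is not necessarily the exact recipe used for this checkpoint.

```python
# Hypothetical reproduction sketch; settings are assumptions, not the
# card author's confirmed recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "google/gemma-3-12b-it"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_DYNAMIC uses static per-channel FP8 weight scales and dynamic
# per-token activation scales, so no calibration dataset is needed.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head"],  # assumption: keep the output head in higher precision
)

oneshot(model=model, recipe=recipe)

model.save_pretrained("gemma-3-12b-it-FP8-Dynamic")
tokenizer.save_pretrained("gemma-3-12b-it-FP8-Dynamic")
```

Because the activation scales are computed on the fly at inference time, dynamic FP8 trades a small amount of runtime overhead for skipping calibration entirely.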
- Model size: 12.2B params
- Tensor types: BF16, F8_E4M3
