Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that would otherwise not fit into memory, and to speed up inference.

Learn how to quantize models in the Quantization guide.
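The core idea can be sketched with a minimal absolute-maximum (absmax) int8 quantization of a weight tensor. The helper functions below are illustrative only and are not part of the Diffusers API:

```python
import torch

def quantize_absmax_int8(w: torch.Tensor):
    """Illustrative helper: map float weights to int8 with one absmax scale."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Illustrative helper: recover an approximate float tensor."""
    return q.to(torch.float32) * scale

w = torch.randn(256, 256)
q, scale = quantize_absmax_int8(w)
w_hat = dequantize_int8(q, scale)

# int8 storage is 4x smaller than float32, at the cost of a bounded rounding error.
print(q.element_size(), w.element_size())  # 1 vs. 4 bytes per element
```

The backends documented below implement far more sophisticated schemes (per-channel scales, 4-bit data types, weight-only variants), but the storage-versus-precision trade-off is the same.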

PipelineQuantizationConfig

[[autodoc]] quantizers.PipelineQuantizationConfig
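As a sketch of typical usage, the snippet below quantizes selected pipeline components at load time with the bitsandbytes 4-bit backend. The model id and component names are placeholders, and running it downloads the checkpoint:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# Quantize only the listed components; everything else loads at full precision.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder model id
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```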

BitsAndBytesConfig

[[autodoc]] BitsAndBytesConfig
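A minimal 8-bit loading sketch for a single model, assuming the `bitsandbytes` library is installed; the model id is a placeholder:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# Load the transformer weights in 8-bit precision.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder model id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```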

GGUFQuantizationConfig

[[autodoc]] GGUFQuantizationConfig
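GGUF checkpoints are single files, so they are loaded with `from_single_file`. A sketch, assuming the `gguf` library is installed; the file URL is a placeholder:

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# compute_dtype is the dtype weights are dequantized to during computation.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf",  # placeholder file
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
)
```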

QuantoConfig

[[autodoc]] QuantoConfig
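A sketch of loading with the Quanto backend, assuming the `optimum-quanto` library is installed and that `weights_dtype` accepts values such as `"int8"`; the model id is a placeholder:

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# Quantize the weights to int8 with the Quanto backend.
quant_config = QuantoConfig(weights_dtype="int8")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder model id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```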

TorchAoConfig

[[autodoc]] TorchAoConfig
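A sketch using the torchao backend with int8 weight-only quantization, assuming the `torchao` library is installed; the model id is a placeholder:

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

# "int8wo" selects int8 weight-only quantization.
quant_config = TorchAoConfig("int8wo")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder model id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```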

DiffusersQuantizer

[[autodoc]] quantizers.base.DiffusersQuantizer