Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types, such as 8-bit integers (int8). This makes it possible to load larger models that wouldn't normally fit into memory, and it can speed up inference.
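To make the idea concrete, here is a minimal, library-free sketch of symmetric absmax int8 quantization — the basic scheme behind the int8 representation mentioned above. This is an illustration of the concept, not the algorithm any particular backend below uses.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric absmax quantization: map floats onto the int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 2.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Each int8 value occupies one byte instead of the four bytes of a float32, a 4x memory reduction, at the cost of a small rounding error bounded by half the scale.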
Learn how to quantize models in the Quantization guide.
PipelineQuantizationConfig
[[autodoc]] quantizers.PipelineQuantizationConfig
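A hedged usage sketch: `PipelineQuantizationConfig` lets you quantize selected pipeline components at load time. This assumes a recent diffusers version with the `bitsandbytes_4bit` backend; the checkpoint id and the component names are illustrative, not requirements.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    # Only these components are quantized; the rest load in full precision.
    components_to_quantize=["transformer", "text_encoder_2"],
)

# Example checkpoint; substitute any pipeline you use.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```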
BitsAndBytesConfig
[[autodoc]] BitsAndBytesConfig
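A usage sketch, assuming the bitsandbytes library is installed: the config is passed to a model's `from_pretrained` via `quantization_config`. The checkpoint id is an example, not part of the API.

```python
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

# 4-bit NF4 quantization with bfloat16 compute.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```

For 8-bit loading, pass `load_in_8bit=True` instead of the 4-bit options.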
GGUFQuantizationConfig
[[autodoc]] GGUFQuantizationConfig
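A usage sketch: GGUF checkpoints are pre-quantized, so the config mainly sets the dtype used for computation when loading via `from_single_file`. The checkpoint URL below is illustrative; point it at any GGUF file you have.

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# Example GGUF checkpoint; substitute your own file or URL.
ckpt = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```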
QuantoConfig
[[autodoc]] QuantoConfig
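A usage sketch, assuming the optimum-quanto library is installed; the `weights_dtype` value and checkpoint id are examples.

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# Quantize weights to int8 with the Quanto backend.
quant_config = QuantoConfig(weights_dtype="int8")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```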
TorchAoConfig
[[autodoc]] TorchAoConfig
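A usage sketch, assuming the torchao library is installed: the config takes a quantization-type string such as `"int8wo"` (8-bit weight-only). The checkpoint id is an example.

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

# 8-bit weight-only quantization via torchao.
quant_config = TorchAoConfig("int8wo")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```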
DiffusersQuantizer
[[autodoc]] quantizers.base.DiffusersQuantizer