# Caching methods
Caching methods speed up diffusion transformers by storing and reusing the intermediate outputs of specific layers, such as attention and feedforward layers, instead of recalculating them at every inference step.
## CacheMixin
[[autodoc]] CacheMixin
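A minimal sketch of the typical workflow for a model that inherits from `CacheMixin`. The pipeline, checkpoint, and config values below are illustrative assumptions, not tuned recommendations.

```python
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

# Any pipeline whose denoiser inherits from CacheMixin works; CogVideoX is only an example.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Build one of the supported cache configs (values are illustrative).
config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipe.current_timestep,
)

# CacheMixin exposes enable_cache/disable_cache on the model.
pipe.transformer.enable_cache(config)
video = pipe("A cat playing with a ball of yarn").frames[0]

# Restore the original, uncached forward pass.
pipe.transformer.disable_cache()
```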
## PyramidAttentionBroadcastConfig
[[autodoc]] PyramidAttentionBroadcastConfig
[[autodoc]] apply_pyramid_attention_broadcast
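As a rough usage sketch, Pyramid Attention Broadcast can also be applied functionally with `apply_pyramid_attention_broadcast`. The pipeline and the skip ranges below are assumptions for illustration.

```python
import torch
from diffusers import (
    CogVideoXPipeline,
    PyramidAttentionBroadcastConfig,
    apply_pyramid_attention_broadcast,
)

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,               # reuse spatial attention outputs every other step
    spatial_attention_timestep_skip_range=(100, 800),   # only skip within this timestep window
    current_timestep_callback=lambda: pipe.current_timestep,
)

# Hook the caching behavior into the denoiser's attention layers.
apply_pyramid_attention_broadcast(pipe.transformer, config)
```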
## FasterCacheConfig
[[autodoc]] FasterCacheConfig
[[autodoc]] apply_faster_cache
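A comparable sketch for FasterCache, assuming a pipeline that runs classifier-free guidance; the ranges, weight callback, and tensor format below are illustrative and should be tuned per model.

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig, apply_faster_cache

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(-1, 681),
    unconditional_batch_skip_range=5,                    # reuse the unconditional branch between steps
    unconditional_batch_timestep_skip_range=(-1, 781),
    current_timestep_callback=lambda: pipe.current_timestep,
    attention_weight_callback=lambda _: 0.3,
    tensor_format="BFCHW",
)

# Attach the FasterCache hooks to the denoiser; models inheriting CacheMixin
# can alternatively pass the same config to enable_cache().
apply_faster_cache(pipe.transformer, config)
```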