# Hybrid Inference

**Empowering local AI builders with Hybrid Inference**

> [!TIP]
> Hybrid Inference is an [experimental feature](https://huggingface.co/blog/remote_vae).
> Feedback can be provided [here](https://github.com/huggingface/diffusers/issues/new?template=remote-vae-pilot-feedback.yml).

## Why use Hybrid Inference?

Hybrid Inference offers a fast and simple way to offload local generation requirements.

- 🚀 **Reduced Requirements:** Access powerful models without expensive hardware.
- 💎 **Without Compromise:** Achieve the highest quality without sacrificing performance.
- 💰 **Cost Effective:** It's free! 🤑
- 🎯 **Diverse Use Cases:** Fully compatible with Diffusers 🧨 and the wider community.
- 🔧 **Developer-Friendly:** Simple requests, fast responses.

---

## Available Models

* **VAE Decode 🖼️:** Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
* **VAE Encode 🔢:** Efficiently encode images into latent representations for generation and training.
* **Text Encoders 📃 (coming soon):** Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.

---

## Integrations

* **[SD.Next](https://github.com/vladmandic/sdnext):** All-in-one UI with direct support for Hybrid Inference.
* **[ComfyUI-HFRemoteVae](https://github.com/kijai/ComfyUI-HFRemoteVae):** ComfyUI node for Hybrid Inference.

## Changelog

- March 10, 2025: Added VAE encode
- March 2, 2025: Initial release with VAE decoding

## Contents

The documentation is organized into three sections:

* **VAE Decode** Learn the basics of how to use VAE Decode with Hybrid Inference.
* **VAE Encode** Learn the basics of how to use VAE Encode with Hybrid Inference.
* **API Reference** Dive into task-specific settings and parameters.
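To make the VAE Decode task above concrete: the client serializes a latent tensor, POSTs it to a remote endpoint, and receives a decoded image back. The sketch below illustrates only the client-side idea of packing latents into a binary payload, using just the standard library; the header layout, field names, and payload format here are illustrative assumptions, not the service's actual wire format — in practice you would use the remote helpers documented in the VAE Decode section.

```python
import json
import struct

def pack_latents(values, shape):
    """Pack flat float32 latent values plus shape metadata into one payload.
    Layout (illustrative only): 4-byte header length, JSON header, raw floats."""
    header = json.dumps({"shape": shape, "dtype": "float32"}).encode("utf-8")
    return (
        struct.pack("<I", len(header))
        + header
        + struct.pack(f"<{len(values)}f", *values)
    )

def unpack_latents(payload):
    """Reverse of pack_latents: recover the float values and their shape."""
    (hlen,) = struct.unpack_from("<I", payload, 0)
    meta = json.loads(payload[4 : 4 + hlen].decode("utf-8"))
    count = (len(payload) - 4 - hlen) // 4
    values = list(struct.unpack_from(f"<{count}f", payload, 4 + hlen))
    return values, meta["shape"]

# A tiny 1x4x2x2 dummy latent (16 values), the kind of tensor a diffusion
# pipeline would hand off for VAE decoding.
vals = [i / 10 for i in range(16)]
payload = pack_latents(vals, [1, 4, 2, 2])
out, shape = unpack_latents(payload)
```

The roundtrip shows why offloading is cheap on the client: serializing a latent is a few kilobytes of work, while the heavy VAE forward pass runs remotely.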