---
title: README
emoji: π
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
Join the Pruna AI community!

# Simply make AI models faster, cheaper, smaller, greener!
Pruna AI makes AI models faster, cheaper, smaller, and greener with the `pruna` package.
- It supports a wide range of models, including CV, NLP, audio, and graph models for both predictive and generative AI.
- It supports various hardware, including GPU, CPU, and edge devices.
- It supports various compression algorithms, including quantization, pruning, distillation, caching, recovery, and compilation, which can be combined together.
- You can either experiment with smash/compression configurations on your own or let the smashing/compressing agent find the optimal configuration [Pro].
- You can evaluate reliable quality and efficiency metrics of your base vs. smashed/compressed models.
- You can set it up in minutes and compress your first models in a few lines of code!
## ⏩ How to get started?
You can smash your own models by installing pruna with pip:

```bash
pip install pruna
```

or directly from source.
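As a minimal sketch of what smashing looks like in code (assuming pruna's documented `smash`/`SmashConfig` API; the model checkpoint and the algorithm names below are illustrative placeholders, and which algorithms are available depends on your install and hardware):

```python
# Minimal sketch of a pruna workflow; assumes `pruna` and `diffusers` are
# installed. Algorithm names and the model checkpoint are illustrative --
# see the Pruna AI documentation for the options supported on your setup.
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load any supported base model (here: a Stable Diffusion pipeline).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Pick the compression algorithms to combine.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"      # cache/skip redundant denoising work
smash_config["compiler"] = "stable_fast"  # compile for faster inference

# Smash (compress) the model, then use it like the original pipeline.
smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a photo of an astronaut riding a horse").images[0]
```

The same pattern applies to other modalities: load the base model, select algorithms in a `SmashConfig`, and call `smash`.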
You can start with the following simple notebooks to experience the efficiency gains:
| Use Case | Free Notebooks |
|---|---|
| 3x faster Stable Diffusion models | ⏩ Smash for free |
| Making your LLMs 4x smaller | ⏩ Smash for free |
| Smash your model with a CPU only | ⏩ Smash for free |
| Transcribe 2 hours of audio in less than 2 minutes with Whisper | ⏩ Smash for free |
| 100% faster Whisper transcription | ⏩ Smash for free |
| 2x smaller Sana in action | ⏩ Smash for free |
For more details about installation, free tutorials, and Pruna Pro tutorials, check the Pruna AI documentation.