Layer-Condensed KV Cache for Efficient Inference of Large Language Models — Paper • arXiv:2405.10637 • Published May 17, 2024 • 23 upvotes
LLM Model VRAM Calculator 📈 — Space (Running) • 403 likes • Calculate VRAM requirements for running large language models
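As a rough illustration of what such a calculator estimates, the sketch below sums model weight memory, KV-cache memory, and a small overhead factor. This is a hypothetical back-of-the-envelope formula under assumed parameter names (`estimate_vram_gb`, `overhead`), not the Space's actual implementation.

```python
# Hypothetical rough VRAM estimate for serving an LLM: weights + KV cache + overhead.
# Not the formula used by the VRAM Calculator Space; a minimal sketch only.

def estimate_vram_gb(
    n_params_b: float,        # model size in billions of parameters
    bytes_per_param: float,   # 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit
    n_layers: int,
    hidden_size: int,
    n_heads: int,
    n_kv_heads: int,
    context_len: int,
    batch_size: int = 1,
    kv_bytes: int = 2,        # KV cache precision (fp16)
    overhead: float = 1.1,    # assumed ~10% for activations, buffers, fragmentation
) -> float:
    # Memory for the model weights.
    weights = n_params_b * 1e9 * bytes_per_param
    # KV cache: 2 tensors (K and V) * layers * kv_heads * head_dim * tokens * batch * bytes.
    head_dim = hidden_size // n_heads
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * batch_size * kv_bytes
    return (weights + kv_cache) * overhead / 1024**3

# Example: a Llama-2-7B-like config in fp16 at 4k context.
print(round(estimate_vram_gb(7.0, 2, 32, 4096, 32, 32, 4096), 1), "GiB")
```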
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads — Paper • arXiv:2401.10774 • Published Jan 19, 2024 • 55 upvotes