HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading Paper • 2502.12574 • Published 11 days ago
Mini-Sequence Transformer: Optimizing Intermediate Memory for Long Sequences Training Paper • 2407.15892 • Published Jul 22, 2024