arxiv:2502.08910

InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU

Published on Feb 13 · Submitted by geonp on Feb 14
#1 Paper of the day

Abstract

In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce InfiniteHiP, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
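The paper itself specifies the hierarchical pruning algorithm; as a rough intuition only, the minimal PyTorch sketch below shows one way multi-stage pruning could look: each stage groups the surviving key positions into fixed-size chunks, scores every chunk against the current query with a cheap per-chunk representative, keeps only the top-scoring chunks, and refines the survivors at a finer granularity. The chunk sizes, the mean-pooled chunk representative, and all function names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multi-stage context pruning (illustrative only, not the
# paper's algorithm). Each stage scores fixed-size chunks of the key cache
# with a cheap representative key, keeps the top-k chunks, and hands the
# surviving positions to the next, finer-grained stage.
import torch

def prune_context(query, keys, stage_chunk_sizes=(256, 64, 16), keep_chunks=32):
    """query: (d,), keys: (n, d). Returns indices of surviving key positions."""
    idx = torch.arange(keys.shape[0], device=keys.device)
    for chunk in stage_chunk_sizes:
        n_full = idx.numel() - idx.numel() % chunk   # drop the ragged tail for simplicity
        if n_full < chunk * keep_chunks:             # fewer than keep_chunks chunks: stop pruning
            break
        grouped = idx[:n_full].view(-1, chunk)       # (num_chunks, chunk)
        reps = keys[grouped].mean(dim=1)             # one representative key per chunk
        scores = reps @ query                        # cheap proxy for chunk relevance
        top = scores.topk(keep_chunks).indices       # most relevant chunks survive
        idx = grouped[top].reshape(-1)               # refined at the next, smaller chunk size
    return idx

def sparse_attention(query, keys, values, kept_idx):
    """Dense attention restricted to the pruned token set."""
    k, v = keys[kept_idx], values[kept_idx]
    w = torch.softmax(k @ query / keys.shape[-1] ** 0.5, dim=0)
    return w @ v
```

Because only a fixed number of chunks survives each stage, the final attention cost no longer grows with the full context length, which is the basic reason this kind of pruning can accelerate decoding over very long contexts.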

Community


🚀 We are thrilled to announce our new paper "InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU"!

📄 Paper: https://huggingface.co/papers/2502.08910
😺 Source code: https://github.com/DeepAuto-AI/hip-attention/
😺 SGLang Integration available now: https://github.com/DeepAuto-AI/sglang/
▶️ Try our Live Demo with DeepSeek 14B at https://chat.deepauto.ai/

🔑 Key features of our proposed method ♾️ InfiniteHiP ♾️:

♾️ 18.95x Speedup in Attention Decoding on 1M Tokens with Efficient Multi-stage Context Pruning

♾️ 7.25x Faster End-to-end Decoding Throughput on a 3M Token Context

♾️ Training-free Out-of-length Generalization Capability with Dynamic RoPE Adjustment

♾️ Efficiently Handle up to 3 Million Tokens on a Single L40s 48GB GPU with Dynamic KV Cache Offloading
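The last feature, dynamic KV cache offloading, keeps the bulk of the key-value cache in host memory and moves only the tokens actually needed for a decoding step onto the GPU. The sketch below is a simplified, hypothetical illustration of that idea in PyTorch; the class name, its methods, and the fixed-size preallocation are assumptions, not the authors' SGLang integration.

```python
# Minimal sketch of KV cache offloading (illustrative only, not the paper's
# implementation). The full cache lives in pinned host (CPU) memory; only the
# token positions selected by the pruning step are copied to the GPU.
import torch

class OffloadedKVCache:
    def __init__(self, max_tokens, num_heads, head_dim, device="cuda"):
        shape = (max_tokens, num_heads, head_dim)
        # Pinned host memory keeps GPU memory free and speeds up host-to-device copies.
        self.k = torch.empty(shape, dtype=torch.float16, pin_memory=True)
        self.v = torch.empty(shape, dtype=torch.float16, pin_memory=True)
        self.length = 0
        self.device = device

    def append(self, k_new, v_new):
        """Store newly computed keys/values on the host side."""
        n = k_new.shape[0]
        self.k[self.length:self.length + n].copy_(k_new.to("cpu"))
        self.v[self.length:self.length + n].copy_(v_new.to("cpu"))
        self.length += n

    def fetch(self, token_idx):
        """Gather only the selected token positions and move them to the GPU."""
        idx = token_idx.to("cpu")
        return self.k[idx].to(self.device), self.v[idx].to(self.device)
```

In this picture, the per-step GPU footprint scales with the number of tokens kept by pruning rather than with the multi-million-token cache itself, which is what makes single-GPU operation on a 48GB L40s plausible.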

Video Demo

Wow 🔥🔥🔥🔥, wonder how it will go on CPUs 🤔


Haha, actually that is in our scope too. Stay tuned :+1:
