arxiv:2411.10958

SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration

Published on Nov 17 · Submitted by jt-zhang on Nov 21 · #1 Paper of the day

Abstract

Although quantization for linear layers has been widely used, its application to accelerating the attention process remains limited. SageAttention utilizes 8-bit matrix multiplication, 16-bit matrix multiplication with a 16-bit accumulator, and precision-enhancing methods, implementing an accurate kernel with a 2x speedup over FlashAttention2. To further enhance the efficiency of attention computation while maintaining precision, we propose SageAttention2, which utilizes significantly faster 4-bit matrix multiplication (Matmul) alongside additional precision-enhancing techniques. First, we propose to quantize the matrices (Q, K) to INT4 at warp-level granularity and the matrices (P̃, V) to FP8. Second, we propose a method to smooth Q and V, enhancing the accuracy of attention with INT4 QK and FP8 PV. Third, we analyze the quantization accuracy across timesteps and layers, and propose an adaptive quantization method to preserve end-to-end metrics across various models. The operations per second (OPS) of SageAttention2 surpass FlashAttention2 and xformers by about 3x and 5x on an RTX 4090, respectively. Comprehensive experiments confirm that our approach incurs negligible end-to-end metric loss across diverse models, including those for large language processing, image generation, and video generation. The code is available at https://github.com/thu-ml/SageAttention.
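
To make the first two ideas concrete, here is a minimal PyTorch sketch that simulates the accuracy side of the recipe: per-block INT4 quantization of Q and K plus mean-subtraction smoothing of Q, with the exact compensation term added back to the scores. The block size, tensor layout, and function names are illustrative assumptions; the paper's kernel quantizes at per-warp granularity inside fused CUDA code and additionally computes the P̃V product in FP8, neither of which is reproduced here.

```python
import torch
import torch.nn.functional as F

def quantize_int4_per_block(x, block=64):
    # x: (batch, heads, tokens, head_dim); one scale per (batch, head, token-block).
    # Simulated INT4: values are rounded into [-7, 7] but kept as floats here.
    b, h, n, d = x.shape
    pad = (-n) % block
    if pad:
        x = F.pad(x, (0, 0, 0, pad))  # pad the token dim to a multiple of block
    xb = x.reshape(b, h, -1, block, d)
    scale = xb.abs().amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp((xb / scale).round(), -7, 7)
    return q, scale, n

def dequantize_blocks(q, scale, n):
    b, h, nblk, block, d = q.shape
    return (q * scale).reshape(b, h, nblk * block, d)[:, :, :n]

def smoothed_int4_scores(Q, K, block=64):
    # Smooth Q by removing its per-channel mean over tokens, quantize the residual
    # and K to simulated INT4, then add the exact compensation mean(Q) @ K^T back.
    q_mean = Q.mean(dim=-2, keepdim=True)
    Qq, qs, n = quantize_int4_per_block(Q - q_mean, block)
    Kq, ks, _ = quantize_int4_per_block(K, block)
    S = dequantize_blocks(Qq, qs, n) @ dequantize_blocks(Kq, ks, n).transpose(-1, -2)
    return (S + q_mean @ K.transpose(-1, -2)) / Q.shape[-1] ** 0.5

if __name__ == "__main__":
    torch.manual_seed(0)
    Q, K = torch.randn(2, 8, 100, 64), torch.randn(2, 8, 100, 64)
    ref = Q @ K.transpose(-1, -2) / 64 ** 0.5
    print("max abs error:", (ref - smoothed_int4_scores(Q, K)).abs().max().item())
```

A simulation like this only illustrates the accuracy trade-off; the reported speedups come from running the quantized Matmuls on INT4 and FP8 tensor cores inside the fused kernels in the repository above.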

Community

Paper author Paper submitter

GitHub: https://github.com/thu-ml/SageAttention

NEWS: The SageAttention2 paper has been updated (SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization). It is now available at https://arxiv.org/abs/2411.10958!

Great work! Do you have a comparison with Flash Attention 3?

Paper author

Thank you for your kind words. FlashAttention3 is tailored to the NVIDIA Hopper architecture (i.e., H100 and H800) and can only be used on those GPUs. We will compare against it later.
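
If it helps readers check whether FlashAttention3 is even an option on their hardware, here is a small illustrative snippet (not from the paper) that inspects the GPU's compute capability; Hopper GPUs such as the H100 and H800 report major version 9:

```python
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"Compute capability: {major}.{minor}")
    print("Hopper GPU (required by FlashAttention3):", major == 9)
else:
    print("No CUDA device available.")
```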

Models citing this paper: 0

No model linking this paper

Datasets citing this paper: 0

No dataset linking this paper

Spaces citing this paper: 0

No Space linking this paper

Collections including this paper: 12