- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 611
- BitNet: Scaling 1-bit Transformers for Large Language Models
  Paper • 2310.11453 • Published • 98
- Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
  Paper • 2404.02258 • Published • 105
- TransformerFAM: Feedback attention is working memory
  Paper • 2404.09173 • Published • 43