Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention • Paper • 2502.11089 • Published Feb 16
Chat With Janus-Pro-7B 🌍 • Space • A unified multimodal understanding and generation model.
rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking • Paper • 2501.04519 • Published Jan 8