The Ultra-Scale Playbook 🌌 — The ultimate guide to training LLMs on large GPU clusters
Article: Fine-tuning LLMs to 1.58bit: extreme quantization made easy (Sep 18, 2024)
Dataset: vicgalle/configurable-system-prompt-multitask (updated Apr 23, 2024)