DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models • arXiv:2410.07331 • Published Oct 9, 2024
Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training • arXiv:2405.15319 • Published May 24, 2024
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models • arXiv:2402.12851 • Published Feb 20, 2024
Neeko: Leveraging Dynamic LoRA for Efficient Multi-Character Role-Playing Agent • arXiv:2402.13717 • Published Feb 21, 2024
CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark • arXiv:2401.11944 • Published Jan 22, 2024