TinySwallow Collection: Compact Japanese models trained with "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" • 5 items • Updated 7 days ago
INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection • Paper • arXiv:2402.03744 • Published Feb 6, 2024
Foundation AI Papers Collection: A curated list of must-reads on LLM reasoning from the Temus AI team • 135 items • Updated Jun 15, 2024