Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs Paper • 2410.15438 • Published Oct 20, 2024
VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search Paper • 2503.10582 • Published Mar 13 • 23
An Analysis of Decoding Methods for LLM-based Agents for Faithful Multi-Hop Question Answering Paper • 2503.23415 • Published Mar 30 • 1
Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining Paper • 2505.11293 • Published May 16
VideoEval-Pro: Robust and Realistic Long Video Understanding Evaluation Paper • 2505.14640 • Published May 20 • 15
Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem Paper • 2506.03295 • Published Jun 3 • 17
VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation Paper • 2506.03930 • Published Jun 4 • 25
BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent Paper • 2508.06600 • Published Aug 2025 • 37
The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models Paper • 2404.05904 • Published Apr 8, 2024 • 9
DC-BERT: Decoupling Question and Document for Efficient Contextual Encoding Paper • 2002.12591 • Published Feb 28, 2020
OPEN-MOE-LLM-LEADERBOARD 🔥 Space • Display and submit models for evaluation on an LLM leaderboard • 34
Post • Hey everyone, we're thrilled to introduce our latest project: a hand-curated list of the best production-level generative-AI open-source toolkits! 🚀 Check it out here: zhiminy/awesome-production-genai-search
StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs Paper • 2505.20139 • Published May 26 • 18