date | arxiv_id | title | authors | github | abstract
---|---|---|---|---|---
2025-02-18 | 2502.11196 | How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training | Yixin Ou, Yunzhi Yao, Ningyu Zhang, Hui Jin, Jiacheng Sun, Shumin Deng, Zhenguo Li, Huajun Chen | https://github.com/zjunlp/DynamicKnowledgeCircuits | Despite exceptional capabilities in knowledge-intensive tasks, Large Language Models (LLMs) face a critical gap in understanding how they internalize new knowledge, particularly how to structurally embed acquired knowledge in their neural computations. We address this issue through the lens of knowledge circuit evolution, identifying computational subgraphs that facilitate knowledge storage and processing. Our systematic analysis of circuit evolution throughout continual pre-training reveals several key findings: (1) the acquisition of new knowledge is influenced by its relevance to pre-existing knowledge; (2) the evolution of knowledge circuits exhibits a distinct phase shift from formation to optimization; (3) the evolution of knowledge circuits follows a deep-to-shallow pattern. These insights not only advance our theoretical understanding of the mechanisms of new knowledge acquisition in LLMs, but also provide potential implications for improving continual pre-training strategies to enhance model performance. Code and data will be available at https://github.com/zjunlp/DynamicKnowledgeCircuits. |
2025-02-18 | 2502.11167 | SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors | Bohan Lyu, Siqiao Huang, Zichen Liang | https://github.com/Imbernoulli/SURGE | Large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks, such as code understanding and code generation. However, an equally important yet underexplored question is whether LLMs can serve as general-purpose surrogate code executors, to predict the output and behavior of a program without actually running it. To systematically investigate this capability, we introduce SURGE, a comprehensive benchmark covering eight key aspects: multi-language programming tasks, competition-level programming problems, repository-level code analysis, high-cost scientific computing, time-complexity-intensive algorithms, buggy code analysis, programs dependent on specific compilers or execution environments, and formal mathematical proof verification. We evaluate multiple open-source and proprietary LLMs on SURGE and conduct a scaling study to analyze the impact of model size and training data scale on surrogate execution accuracy. Additionally, we categorize model prediction errors and explore potential areas for improvement. Our findings indicate that while LLMs can predict code execution results in certain cases, they exhibit limitations in general-purpose surrogate execution. This study provides empirical insights into the feasibility of using LLMs as surrogate code executors. Code and dataset are released at https://github.com/Imbernoulli/SURGE. |
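To make the surrogate-execution setup concrete, here is a minimal sketch of how such a query might be posed to a model; the prompt wording is illustrative and not SURGE's actual template:

```python
def surrogate_execution_prompt(code: str, stdin: str = "") -> str:
    """Ask an LLM to predict a program's output without running it.
    Illustrative wording only; SURGE's real prompts may differ."""
    prompt = (
        "Predict the exact standard output of the following program "
        "without executing it.\n\nProgram:\n" + code + "\n"
    )
    if stdin:
        prompt += "Standard input:\n" + stdin + "\n"
    return prompt + "Output:"

print(surrogate_execution_prompt("for i in range(3):\n    print(i * i)"))
```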
2025-02-18 | 2502.11330 | System Message Generation for User Preferences using Open-Source Models | Minbyul Jeong, Jungho Cho, Minsoo Khang, Dawoon Jung, Teakgyu Hong | | System messages play a crucial role in interactions with large language models (LLMs), often serving as prompts to initiate conversations. Through system messages, users can assign specific roles, perform intended tasks, incorporate background information, and specify various output formats and communication styles. Despite such versatility, publicly available data often lack system messages and are subject to strict license constraints in industry. Manually labeling publicly available data with system messages that align with user instructions demands significant resources. In view of such challenges, our work introduces SysGen, a pipeline for generating system messages with better-aligned assistant responses from supervised fine-tuning datasets that lack system messages. Training on SysGen data yields substantial improvements in the alignment of model responses with system messages and user instructions, as shown across various open-source models on the Multifacet benchmark, while maintaining minimal impact on other unseen benchmarks such as Open LLM Leaderboard 2. Our qualitative analysis highlights the importance of diverse system messages to ensure better adaptability across different contexts. |
2025-02-18 | 2502.11574 | Large Language Models and Mathematical Reasoning Failures | Johan Boye, Birger Moell | | This paper investigates the mathematical reasoning capabilities of large language models (LLMs) using 50 newly constructed high-school-level word problems. Unlike prior studies that focus solely on answer correctness, we rigorously analyze both final answers and solution steps to identify reasoning failures. Evaluating eight state-of-the-art models - including Mixtral, Llama, Gemini, GPT-4o, and OpenAI's o1 variants - we find that while newer models (e.g., o3-mini, deepseek-r1) achieve higher accuracy, all models exhibit errors in spatial reasoning, strategic planning, and arithmetic, sometimes producing correct answers through flawed logic. Common failure modes include unwarranted assumptions, over-reliance on numerical patterns, and difficulty translating physical intuition into mathematical steps. Manual analysis reveals that models struggle with problems requiring multi-step deduction or real-world knowledge, despite possessing broad mathematical knowledge. Our results underscore the importance of evaluating reasoning processes, not just answers, and caution against overestimating LLMs' problem-solving proficiency. The study highlights persistent gaps in LLMs' generalization abilities, emphasizing the need for targeted improvements in structured reasoning and constraint handling. |
2025-02-18 | 2502.11578 | Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance | Birger Moell, Johan Boye | | Large Language Models (LLMs) have made significant strides in natural language generation but often face challenges in tasks requiring precise calculations and structural analysis. This paper investigates the performance of state-of-the-art LLMs on language complexity measurement tasks, through the computation of the LIX readability metric and Average Dependency Distance (ADD). Using Swedish high school and university-level essays, we evaluate the models' abilities to compute LIX scores and perform dependency parsing, comparing their results to established ground truths. Our findings reveal that while all models demonstrate some capacity for these tasks, ChatGPT-o1-mini performs most consistently, achieving the highest accuracy in both LIX computation and dependency parsing. Additionally, we observe a strong, significant correlation (r = -0.875, p = 0.026, N = 6) between the models' accuracy in computing LIX and their overall performance on the Massive Multitask Language Understanding (MMLU) benchmark. These results suggest that language complexity measurement abilities can serve as noisy zero-shot proxies for assessing the general capabilities of LLMs, providing a practical method for model evaluation without the need for extensive benchmarking datasets. |
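For reference, the LIX metric the models are asked to compute has a simple closed form: words per sentence plus the percentage of words longer than six characters. A minimal Python sketch of the standard definition follows; the paper's exact tokenization rules may differ:

```python
import re

def lix(text: str) -> float:
    """Standard LIX readability: (words / sentences)
    + 100 * (long words / words), long words having > 6 characters."""
    words = re.findall(r"[^\W\d_]+", text)                # unicode letter runs
    sentences = max(1, len(re.findall(r"[.!?]+", text)))  # sentence terminators
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100 * long_words / max(1, len(words))

print(lix("Det här är en enkel mening. Läsbarhetsindex beräknas så här."))
```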
2025-02-18 | 2502.10458 | I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models | Zhenxing Mi, Kuan-Chieh Wang, Guocheng Qian, Hanrong Ye, Runtao Liu, Sergey Tulyakov, Kfir Aberman, Dan Xu | | This paper presents ThinkDiff, a novel alignment paradigm that empowers text-to-image diffusion models with multimodal in-context understanding and reasoning capabilities by integrating the strengths of vision-language models (VLMs). Existing multimodal diffusion finetuning methods largely focus on pixel-level reconstruction rather than in-context reasoning, and are constrained by the complexity and limited availability of reasoning-based datasets. ThinkDiff addresses these challenges by leveraging vision-language training as a proxy task, aligning VLMs with the decoder of an encoder-decoder large language model (LLM) instead of a diffusion decoder. This proxy task builds on the observation that the LLM decoder shares the same input feature space with diffusion decoders that use the corresponding LLM encoder for prompt embedding. As a result, aligning VLMs with diffusion decoders can be simplified through alignment with the LLM decoder. Without complex training and datasets, ThinkDiff effectively unleashes understanding, reasoning, and composing capabilities in diffusion models. Experiments demonstrate that ThinkDiff significantly improves accuracy from 19.2% to 46.3% on the challenging CoBSAT benchmark for multimodal in-context reasoning generation, with only 5 hours of training on 4 A100 GPUs. Additionally, ThinkDiff demonstrates exceptional performance in composing multiple images and texts into logically coherent images. Project page: https://mizhenxing.github.io/ThinkDiff. |
2025-02-18 | 2502.12054 | PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning | Xinyu Zhang, Yuxuan Dong, Yanrui Wu, Jiaxing Huang, Chengyou Jia, Basura Fernando, Mike Zheng Shou, Lingling Zhang, Jun Liu | | Large language models demonstrate remarkable capabilities across various domains, especially in mathematical and logical reasoning. However, current evaluations overlook physics-based reasoning - a complex task requiring the application of physics theorems and constraints. We present PhysReason, a 1,200-problem benchmark comprising knowledge-based (25%) and reasoning-based (75%) problems, where the latter are divided into three difficulty levels (easy, medium, hard). Notably, problems require an average of 8.1 solution steps, with hard problems requiring 15.6, reflecting the complexity of physics-based reasoning. We propose the Physics Solution Auto Scoring Framework, incorporating efficient answer-level and comprehensive step-level evaluations. Top-performing models like Deepseek-R1, Gemini-2.0-Flash-Thinking, and o3-mini-high achieve less than 60% on answer-level evaluation, with performance dropping from knowledge questions (75.11%) to hard problems (31.95%). Through step-level evaluation, we identified four key bottlenecks: Physics Theorem Application, Physics Process Understanding, Calculation, and Physics Condition Analysis. These findings position PhysReason as a novel and comprehensive benchmark for evaluating physics-based reasoning capabilities in large language models. Our code and data will be published at https://dxzxy12138.github.io/PhysReason. |
2025-02-18 | 2502.09083 | Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking | Greta Warren, Irina Shklovski, Isabelle Augenstein | | The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps. |
2025-02-18 | 2502.12135 | MagicArticulate: Make Your 3D Models Articulation-Ready | Chaoyue Song, Jianfeng Zhang, Xiu Li, Fan Yang, Yiwen Chen, Zhongcong Xu, Jun Hao Liew, Xiaoyang Guo, Fayao Liu, Jiashi Feng, Guosheng Lin | | With the explosive growth of 3D content creation, there is an increasing demand for automatically converting static 3D models into articulation-ready versions that support realistic animation. Traditional approaches rely heavily on manual annotation, which is both time-consuming and labor-intensive. Moreover, the lack of large-scale benchmarks has hindered the development of learning-based solutions. In this work, we present MagicArticulate, an effective framework that automatically transforms static 3D models into articulation-ready assets. Our key contributions are threefold. First, we introduce Articulation-XL, a large-scale benchmark containing over 33k 3D models with high-quality articulation annotations, carefully curated from Objaverse-XL. Second, we propose a novel skeleton generation method that formulates the task as a sequence modeling problem, leveraging an auto-regressive transformer to naturally handle varying numbers of bones or joints within skeletons and their inherent dependencies across different 3D models. Third, we predict skinning weights using a functional diffusion process that incorporates volumetric geodesic distance priors between vertices and joints. Extensive experiments demonstrate that MagicArticulate significantly outperforms existing methods across diverse object categories, achieving high-quality articulation that enables realistic animation. Project page: https://chaoyuesong.github.io/MagicArticulate. |
2025-02-18 | 2502.11085 | Towards Data-Efficient Pretraining for Atomic Property Prediction | Yasir Ghunaim, Hasan Abed Al Kader Hammoud, Bernard Ghanem | | This paper challenges the recent paradigm in atomic property prediction that links progress to growing dataset sizes and computational resources. We show that pretraining on a carefully selected, task-relevant dataset can match or even surpass large-scale pretraining, while using as little as 1/24th of the computational cost. We introduce the Chemical Similarity Index (CSI), a novel metric for molecular graphs inspired by computer vision's Fréchet Inception Distance, which quantifies the alignment between upstream pretraining datasets and downstream tasks. By selecting the most relevant dataset with minimal CSI distance, we show that models pretrained on a smaller, focused dataset consistently outperform those pretrained on massive, mixed datasets such as JMP, even when those larger datasets include the relevant dataset. Counterintuitively, we also find that indiscriminately adding more data can degrade model performance when the additional data poorly aligns with the task at hand. Our findings highlight that quality often outperforms quantity in pretraining for atomic property prediction. |
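Since CSI is described as inspired by the Fréchet Inception Distance, the underlying computation can be sketched as the standard Fréchet distance between embedding statistics of two datasets; the molecular-graph featurizer itself is the paper's contribution and is assumed here:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between two sets of embeddings (rows = samples),
    as used by FID. How CSI featurizes molecular graphs is the paper's
    contribution; any (n, d) embedding matrices work here."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2 * covmean))
```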
2025-02-18 | 2502.11831 | Intuitive physics understanding emerges from self-supervised pretraining on natural videos | Quentin Garrido, Nicolas Ballas, Mahmoud Assran, Adrien Bardes, Laurent Najman, Michael Rabbat, Emmanuel Dupoux, Yann LeCun | | We investigate the emergence of intuitive physics understanding in general-purpose deep neural network models trained to predict masked regions in natural videos. Leveraging the violation-of-expectation framework, we find that video prediction models trained to predict outcomes in a learned representation space demonstrate an understanding of various intuitive physics properties, such as object permanence and shape consistency. In contrast, video prediction in pixel space and multimodal large language models, which reason through text, achieve performance closer to chance. Our comparisons of these architectures reveal that jointly learning an abstract representation space while predicting missing parts of sensory input, akin to predictive coding, is sufficient to acquire an understanding of intuitive physics, and that even models trained on one week of unique video achieve above chance performance. This challenges the idea that core knowledge -- a set of innate systems to help understand the world -- needs to be hardwired to develop an understanding of intuitive physics. |
2025-02-18 | 2502.08441 | Better Embeddings with Coupled Adam | Felix Stollenwerk, Tobias Stollenwerk | | Despite their remarkable capabilities, LLMs learn word representations that exhibit the undesirable yet poorly understood feature of anisotropy. In this paper, we argue that the second moment in Adam is a cause of anisotropic embeddings, and suggest a modified optimizer called Coupled Adam to mitigate the problem. Our experiments demonstrate that Coupled Adam significantly improves the quality of embeddings, while also leading to better upstream and downstream performance on large enough datasets. |
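A sketch of one plausible reading of the coupling idea, assuming the second-moment estimate is shared (averaged) across the vocabulary axis of the embedding matrix; this is our interpretation, not the paper's reference implementation:

```python
import numpy as np

def coupled_adam_embedding_step(E, G, m, v, t,
                                lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One optimizer step for an embedding matrix E (vocab x dim) under a
    'coupled' Adam variant: the second moment is averaged over the vocab
    axis so every embedding vector sees the same adaptive scaling.
    Sketch only -- the paper's exact coupling may differ."""
    m[:] = b1 * m + (1 - b1) * G
    v_new = b2 * v + (1 - b2) * G ** 2
    v[:] = v_new.mean(axis=0, keepdims=True)  # couple across the vocabulary
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    E -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return E
```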
2025-02-18 | 2502.11089 | Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention | Jingyang Yuan, Huazuo Gao, Damai Dai, Junyu Luo, Liang Zhao, Zhengyan Zhang, Zhenda Xie, Y. X. Wei, Lean Wang, Zhiping Xiao, Yuqing Wang, Chong Ruan, Ming Zhang, Wenfeng Liang, Wangding Zeng | | Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trainable Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) We achieve substantial speedups through arithmetic intensity-balanced algorithm design, with implementation optimizations for modern hardware. (2) We enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show the model pretrained with NSA maintains or exceeds Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle. |
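A toy, single-query sketch of the coarse-to-fine idea behind NSA, using mean-pooling as the compression and top-k block selection; the real method uses learned compression, a gated combination of branches, and hardware-aligned kernels:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nsa_like_attention(q, K, V, block=8, topk=2):
    """Coarse-to-fine sparse attention for one query vector q (d,),
    over keys/values K, V of shape (T, d), T a multiple of `block`.
    Illustrative only: compress keys into block summaries, pick the
    top-k blocks by coarse score, then attend over the selected raw
    tokens plus the compressed summaries."""
    T, d = K.shape
    nb = T // block
    Kc = K.reshape(nb, block, d).mean(axis=1)  # compressed keys
    Vc = V.reshape(nb, block, d).mean(axis=1)  # compressed values
    block_scores = Kc @ q / np.sqrt(d)
    keep = np.argsort(block_scores)[-topk:]    # fine-grained block selection
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    K_all = np.concatenate([K[idx], Kc])       # selected tokens + summaries
    V_all = np.concatenate([V[idx], Vc])
    w = softmax(K_all @ q / np.sqrt(d))
    return w @ V_all
```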
2025-02-18 | 2502.11157 | Dyve: Thinking Fast and Slow for Dynamic Process Verification | Jianyuan Zhong, Zeju Li, Zhijian Xu, Xiangyu Wen, Qiang Xu | | We present Dyve, a dynamic process verifier that enhances reasoning error detection in large language models by integrating fast and slow thinking, inspired by Kahneman's Systems Theory. Dyve adaptively applies immediate token-level confirmation (System 1) for straightforward steps and comprehensive analysis (System 2) for complex ones. Leveraging a novel step-wise consensus-filtered process supervision technique, combining Monte Carlo estimation with LLM-based evaluation, Dyve curates high-quality supervision signals from noisy data. Experimental results on ProcessBench and the MATH dataset confirm that Dyve significantly outperforms existing process-based verifiers and boosts performance in Best-of-N settings. |
2025-02-18 | 2502.10550 | Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning | Egor Cherepanov, Nikita Kachaev, Alexey K. Kovalev, Aleksandr I. Panov | | Memory is crucial for enabling agents to tackle complex tasks with temporal and spatial dependencies. While many reinforcement learning (RL) algorithms incorporate memory, the field lacks a universal benchmark to assess an agent's memory capabilities across diverse scenarios. This gap is particularly evident in tabletop robotic manipulation, where memory is essential for solving tasks with partial observability and ensuring robust performance, yet no standardized benchmarks exist. To address this, we introduce MIKASA (Memory-Intensive Skills Assessment Suite for Agents), a comprehensive benchmark for memory RL, with three key contributions: (1) we propose a comprehensive classification framework for memory-intensive RL tasks, (2) we collect MIKASA-Base - a unified benchmark that enables systematic evaluation of memory-enhanced agents across diverse scenarios, and (3) we develop MIKASA-Robo - a novel benchmark of 32 carefully designed memory-intensive tasks that assess memory capabilities in tabletop robotic manipulation. Our contributions establish a unified framework for advancing memory RL research, driving the development of more reliable systems for real-world applications. The code is available at https://sites.google.com/view/memorybenchrobots/. |
2025-02-18 | 2502.11177 | The Mirage of Model Editing: Revisiting Evaluation in the Wild | Wanli Yang, Fei Sun, Jiajun Tan, Xinyu Ma, Qi Cao, Dawei Yin, Huawei Shen, Xueqi Cheng | | Despite near-perfect results in artificial evaluations, the effectiveness of model editing in real-world applications remains unexplored. To bridge this gap, we propose to study model editing in question answering (QA) by establishing a rigorous evaluation practice to assess the effectiveness of editing methods in correcting LLMs' errors. It consists of QAEdit, a new benchmark derived from popular QA datasets, and a standardized evaluation framework. Our single editing experiments indicate that current editing methods perform substantially worse than previously reported (38.5% vs. ~96%). Through module analysis and controlled experiments, we demonstrate that this performance decline stems from issues in evaluation practices of prior editing research. One key issue is that the inappropriate use of teacher forcing during testing prevents error propagation by feeding ground-truth tokens (inaccessible in real-world scenarios) as input. Furthermore, we simulate real-world deployment by sequential editing, revealing that current approaches fail drastically with only 1000 edits. Our analysis provides a fundamental reexamination of both the real-world applicability of existing model editing methods and their evaluation practices, and establishes a rigorous evaluation framework with key insights to advance reliable and practical model editing research. |
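The teacher-forcing pitfall the paper identifies can be illustrated directly; `model.argmax_next` below is a hypothetical greedy-decoding API, not any library's real interface:

```python
def teacher_forced_acc(model, prompt_ids, target_ids):
    """Flawed protocol: ground-truth tokens are fed back at every step,
    so one wrong prediction cannot derail the rest -- accuracy is inflated."""
    hits = 0
    prefix = list(prompt_ids)
    for t in target_ids:
        pred = model.argmax_next(prefix)  # hypothetical greedy-decode API
        hits += int(pred == t)
        prefix.append(t)                  # ground truth, not the prediction
    return hits / len(target_ids)

def autoregressive_match(model, prompt_ids, target_ids):
    """Deployment-like protocol: the model's own outputs feed back in,
    so early errors propagate, as they would for real users."""
    prefix = list(prompt_ids)
    out = []
    for _ in target_ids:
        pred = model.argmax_next(prefix)
        out.append(pred)
        prefix.append(pred)               # prediction feeds back
    return out == list(target_ids)
```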
2025-02-18 | 2502.08820 | Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model | Emre Can Acikgoz, Jeremiah Greer, Akul Datta, Ze Yang, William Zeng, Oussama Elachqar, Emmanouil Koukoumidis, Dilek Hakkani-Tür, Gokhan Tur | | Large Language Models (LLMs) with API-calling capabilities have enabled the building of effective Language Agents (LAs), while also revolutionizing the conventional task-oriented dialogue (TOD) paradigm. However, current approaches face a critical dilemma: TOD systems are often trained on a limited set of target APIs, requiring new data to maintain their quality when interfacing with new services, while LAs are not trained to maintain user intent over multi-turn conversations. Because both robust multi-turn management and advanced function calling are crucial for effective conversational agents, we evaluate these skills on three popular benchmarks: MultiWOZ 2.4 (TOD), BFCL V3 (LA), and API-Bank (LA), and our analyses reveal that specialized approaches excel in one domain but underperform in the other. To bridge this chasm, we introduce CALM (Conversational Agentic Language Model), a unified approach that integrates both conversational and agentic capabilities. We created CALM-IT, a carefully constructed multi-task dataset that interleaves multi-turn ReAct reasoning with complex API usage. Using CALM-IT, we train three models, CALM 8B, CALM 70B, and CALM 405B, which outperform top domain-specific models, including GPT-4o, across all three benchmarks. |
2025-02-18 | 2502.11748 | ILIAS: Instance-Level Image retrieval At Scale | Giorgos Kordopatis-Zilos, Vladan Stojnić, Anna Manko, Pavel Šuma, Nikolaos-Antonios Ypsilantis, Nikos Efthymiadis, Zakaria Laskar, Jiří Matas, Ondřej Chum, Giorgos Tolias | | This work introduces ILIAS, a new test dataset for Instance-Level Image retrieval At Scale. It is designed to evaluate the ability of current and future foundation models and retrieval techniques to recognize particular objects. The key benefits over existing datasets include large scale, domain diversity, accurate ground truth, and a performance that is far from saturated. ILIAS includes query and positive images for 1,000 object instances, manually collected to capture challenging conditions and diverse domains. Large-scale retrieval is conducted against 100 million distractor images from YFCC100M. To avoid false negatives without extra annotation effort, we include only query objects confirmed to have emerged after 2014, i.e. the compilation date of YFCC100M. An extensive benchmarking is performed with the following observations: i) models fine-tuned on specific domains, such as landmarks or products, excel in that domain but fail on ILIAS; ii) learning a linear adaptation layer using multi-domain class supervision results in performance improvements, especially for vision-language models; iii) local descriptors in retrieval re-ranking are still a key ingredient, especially in the presence of severe background clutter; iv) the text-to-image performance of the vision-language foundation models is surprisingly close to the corresponding image-to-image case. Website: https://vrg.fel.cvut.cz/ilias/ |
2025-02-18 | 2502.11357 | Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents | Vardaan Pahuja, Yadong Lu, Corby Rosset, Boyu Gou, Arindam Mitra, Spencer Whitehead, Yu Su, Ahmed Awadallah | | Recent success in large multimodal models (LMMs) has sparked promising applications of agents capable of autonomously completing complex web tasks. While open-source LMM agents have made significant advances in offline evaluation benchmarks, their performance still falls substantially short of human-level capabilities in more realistic online settings. A key bottleneck is the lack of diverse and large-scale trajectory-level datasets across various domains, which are expensive to collect. In this paper, we address this challenge by developing a scalable recipe to synthesize the largest and most diverse trajectory-level dataset to date, containing over 94K successful multimodal web trajectories, spanning 49K unique URLs, 720K screenshots, and 33M web elements. In particular, we leverage extensive web exploration and refinement to obtain diverse task intents. The average cost is 28 cents per successful trajectory, making it affordable to a wide range of users in the community. Leveraging this dataset, we train Explorer, a multimodal web agent, and demonstrate strong performance on both offline and online web agent benchmarks such as Mind2Web-Live, Multimodal-Mind2Web, and MiniWob++. Additionally, our experiments highlight data scaling as a key driver for improving web agent capabilities. We hope this study makes state-of-the-art LMM-based agent research at a larger scale more accessible. |
2025-02-18 | 2502.08745 | IHEval: Evaluating Language Models on Following the Instruction Hierarchy | Zhihan Zhang, Shiyang Li, Zixuan Zhang, Xin Liu, Haoming Jiang, Xianfeng Tang, Yifan Gao, Zheng Li, Haodong Wang, Zhaoxuan Tan, Yichuan Li, Qingyu Yin, Bing Yin, Meng Jiang | | The instruction hierarchy, which establishes a priority order from system messages to user messages, conversation history, and tool outputs, is essential for ensuring consistent and safe behavior in language models (LMs). Despite its importance, this topic receives limited attention, and there is a lack of comprehensive benchmarks for evaluating models' ability to follow the instruction hierarchy. We bridge this gap by introducing IHEval, a novel benchmark comprising 3,538 examples across nine tasks, covering cases where instructions in different priorities either align or conflict. Our evaluation of popular LMs highlights their struggle to recognize instruction priorities. All evaluated models experience a sharp performance decline when facing conflicting instructions, compared to their original instruction-following performance. Moreover, the most competitive open-source model only achieves 48% accuracy in resolving such conflicts. Our results underscore the need for targeted optimization in the future development of LMs. |
2025-02-18 | 2502.09969 | Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning | Ishika Agarwal, Dilek Hakkani-Tür | https://github.com/agarwalishika/NN-CIFT | Influence functions provide crucial insights into model training, but existing methods suffer from large computational costs and limited generalization. In particular, recent works have proposed various metrics and algorithms to calculate the influence of data using language models, which do not scale well with large models and datasets. This is because of the expensive forward and backward passes required for computation, substantial memory requirements to store large models, and poor generalization of influence estimates to new data. In this paper, we explore the use of small neural networks -- which we refer to as the InfluenceNetwork -- to estimate influence values, achieving up to 99% cost reduction. Our evaluation demonstrates that influence values can be estimated with models just 0.0027% the size of full language models (we use 7B and 8B versions). We apply our algorithm of estimating influence values (called NN-CIFT: Neural Networks for effiCient Instruction Fine-Tuning) to the downstream task of subset selection for general instruction fine-tuning. In our study, we include four state-of-the-art influence functions and show no compromise in performance, despite large speedups, between NN-CIFT and the original influence functions. We provide an in-depth hyperparameter analysis of NN-CIFT. The code for our method can be found here: https://github.com/agarwalishika/NN-CIFT. |
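A minimal sketch of the InfluenceNetwork idea, assuming pairwise example embeddings as input; the paper's exact architecture, input features, and training targets may differ:

```python
import torch
import torch.nn as nn

class InfluenceNetwork(nn.Module):
    """Tiny MLP mapping (train example, test example) embedding pairs to a
    scalar influence estimate. Sketch only: fit it on a small set of
    expensive ground-truth influence values, then predict the rest cheaply."""
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, e_train: torch.Tensor, e_test: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([e_train, e_test], dim=-1)).squeeze(-1)

# Usage sketch: estimate influence of 1000 train points on one test point.
net = InfluenceNetwork(emb_dim=128)
scores = net(torch.randn(1000, 128), torch.randn(1, 128).expand(1000, 128))
```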
2025-02-18 | 2502.08826 | Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation | Mohammad Mahdi Abootorabi, Amirhosein Zobeiri, Mahdi Dehghani, Mohammadali Mohammadkhani, Bardia Mohammadi, Omid Ghahroodi, Mahdieh Soleymani Baghshah, Ehsaneddin Asgari | https://github.com/llm-lab-org/Multimodal-RAG-Survey | Large Language Models (LLMs) struggle with hallucinations and outdated knowledge due to their reliance on static training data. Retrieval-Augmented Generation (RAG) mitigates these issues by integrating external, dynamic information, enhancing factual and up-to-date grounding. Recent advances in multimodal learning have led to the development of Multimodal RAG, incorporating multiple modalities such as text, images, audio, and video to enhance the generated outputs. However, cross-modal alignment and reasoning introduce unique challenges to Multimodal RAG, distinguishing it from traditional unimodal RAG. This survey offers a structured and comprehensive analysis of Multimodal RAG systems, covering datasets, metrics, benchmarks, evaluation, methodologies, and innovations in retrieval, fusion, augmentation, and generation. We precisely review training strategies, robustness enhancements, and loss functions, while also exploring the diverse Multimodal RAG scenarios. Furthermore, we discuss open challenges and future research directions to support advancements in this evolving field. This survey lays the foundation for developing more capable and reliable AI systems that effectively leverage multimodal dynamic external knowledge bases. Resources are available at https://github.com/llm-lab-org/Multimodal-RAG-Survey. |
2025-02-18 | 2502.09509 | EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling | Theodoros Kouzelis, Ioannis Kakogeorgiou, Spyros Gidaris, Nikos Komodakis | | Latent generative models have emerged as a leading approach for high-quality image synthesis. These models rely on an autoencoder to compress images into a latent space, followed by a generative model to learn the latent distribution. We identify that existing autoencoders lack equivariance to semantic-preserving transformations like scaling and rotation, resulting in complex latent spaces that hinder generative performance. To address this, we propose EQ-VAE, a simple regularization approach that enforces equivariance in the latent space, reducing its complexity without degrading reconstruction quality. By finetuning pre-trained autoencoders with EQ-VAE, we enhance the performance of several state-of-the-art generative models, including DiT, SiT, REPA, and MaskGIT, achieving a 7× speedup on DiT-XL/2 with only five epochs of SD-VAE fine-tuning. EQ-VAE is compatible with both continuous and discrete autoencoders, thus offering a versatile enhancement for a wide range of latent generative models. Project page and code: https://eq-vae.github.io/. |
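A sketch of what an equivariance regularizer of this kind could look like, assuming scaling as the transformation and a fully convolutional autoencoder so shapes match; the paper's exact transform set and loss weighting may differ:

```python
import torch.nn.functional as F

def eq_reg_loss(encoder, decoder, x, scale=0.5):
    """Sketch of an equivariance regularizer: applying a semantic-preserving
    transform (here, downscaling) to the latent should decode to the same
    transform applied to the image. `encoder`/`decoder` are assumed to be
    fully convolutional so spatial shapes line up."""
    z = encoder(x)                                                   # [B, C, h, w]
    z_t = F.interpolate(z, scale_factor=scale, mode="bilinear")      # transform latent
    x_t = F.interpolate(x, scale_factor=scale, mode="bilinear")      # transform image
    return F.mse_loss(decoder(z_t), x_t)
```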
2025-02-18 | 2502.12154 | Diffusion Models without Classifier-free Guidance | Zhicong Tang, Jianmin Bao, Dong Chen, Baining Guo | https://github.com/tzco/Diffusion-wo-CFG | This paper presents Model-guidance (MG), a novel objective for training diffusion models that addresses and removes the need for the commonly used classifier-free guidance (CFG). Our approach goes beyond modeling the data distribution alone to also incorporate the posterior probability of conditions. The proposed technique originates from the idea of CFG and is simple yet effective, making it a plug-and-play module for existing models. Our method significantly accelerates the training process, doubles the inference speed, and achieves exceptional quality that parallels and even surpasses concurrent diffusion models with CFG. Extensive experiments demonstrate its effectiveness, efficiency, and scalability on different models and datasets. Finally, we establish state-of-the-art performance on the ImageNet 256×256 benchmark with an FID of 1.34. Our code is available at https://github.com/tzco/Diffusion-wo-CFG. |
2025-02-18 | 2502.12982 | Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs | Longxu Dou, Qian Liu, Fan Zhou, Changyu Chen, Zili Wang, Ziqi Jin, Zichen Liu, Tongyao Zhu, Cunxiao Du, Penghui Yang, Haonan Wang, Jiaheng Liu, Yongchi Zhao, Xiachong Feng, Xin Mao, Man Tsung Yeung, Kunat Pipatanakul, Fajri Koto, Min Si Thu, Hynek Kydlíček, Zeyi Liu, Qunshu Lin, Sittipong Sripaisarnmongkol, Kridtaphad Sae-Khow, Nirattisai Thongchim, Taechawat Konkaew, Narong Borijindargoon, Anh Dao, Matichon Maneegard, Phakphum Artkaew, Zheng-Xin Yong, Quan Nguyen, Wannaphong Phatthiyaphaibun, Hoang H. Tran, Mike Zhang, Shiqi Chen, Tianyu Pang, Chao Du, Xinyi Wan, Wei Lu, Min Lin | | Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages while retaining proficiency in Chinese and English. The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA languages. We also deliver a comprehensive cookbook on how to develop multilingual models in an efficient manner, covering five key aspects: data curation, pre-training, post-training, model customization, and evaluation. We hope that the Sailor2 model (Apache 2.0 license) will drive language development in the SEA region, and that the Sailor2 cookbook will inspire researchers to build more inclusive LLMs for other under-served languages. |
2025-02-18 | 2502.11336 | ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability | Ryuto Koike, Masahiro Kaneko, Ayana Niwa, Preslav Nakov, Naoaki Okazaki | | Incorrect decisions when detecting texts generated by Large Language Models (LLMs) can cause grave harm, such as undermining a student's academic dignity. LLM text detection thus needs to ensure the interpretability of the decision, which can help users judge how reliably correct its prediction is. When humans verify whether a text is human-written or LLM-generated, they intuitively investigate with which of them it shares more similar spans. However, existing interpretable detectors are not aligned with the human decision-making process and fail to offer evidence that users easily understand. To bridge this gap, we introduce ExaGPT, an interpretable detection approach grounded in the human decision-making process for verifying the origin of a text. ExaGPT identifies a text by checking whether it shares more similar spans with human-written vs. with LLM-generated texts from a datastore. This approach can provide similar span examples that contribute to the decision for each span in the text as evidence. Our human evaluation demonstrates that providing similar span examples contributes more effectively to judging the correctness of the decision than existing interpretable methods. Moreover, extensive experiments in four domains and three generators show that ExaGPT massively outperforms prior powerful detectors by up to +40.9 points of accuracy at a false positive rate of 1%. |
2025-02-19 | 2502.13143 | SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation | Zekun Qi, Wenyao Zhang, Yufei Ding, Runpei Dong, Xinqiang Yu, Jingwen Li, Lingyun Xu, Baoyu Li, Xialin He, Guofan Fan, Jiazhao Zhang, Jiawei He, Jiayuan Gu, Xin Jin, Kaisheng Ma, Zhizheng Zhang, He Wang, Li Yi | | Spatial intelligence is a critical component of embodied AI, enabling robots to understand and interact with their environments. While recent advances have enhanced the ability of VLMs to perceive object locations and positional relationships, they still lack the capability to precisely understand object orientations, a key requirement for tasks involving fine-grained manipulation. Addressing this limitation requires not only geometric reasoning but also an expressive and intuitive way to represent orientation. In this context, we propose that natural language offers a more flexible representation space than canonical frames, making it particularly suitable for instruction-following robotic systems. In this paper, we introduce the concept of semantic orientation, which defines object orientations using natural language in a reference-frame-free manner (e.g., the "plug-in" direction of a USB or the "handle" direction of a knife). To support this, we construct OrienText300K, a large-scale dataset of 3D models annotated with semantic orientations that link geometric understanding to functional semantics. By integrating semantic orientation into a VLM system, we enable robots to generate manipulation actions with both positional and orientational constraints. Extensive experiments in simulation and the real world demonstrate that our approach significantly enhances robotic manipulation capabilities, e.g., 48.7% accuracy on Open6DOR and 74.9% accuracy on SIMPLER. |
2025-02-19 | 2502.13131 | Rethinking Diverse Human Preference Learning through Principal Component Analysis | Feng Luo, Rui Yang, Hao Sun, Chunyuan Deng, Jiarui Yao, Jingyan Shen, Huan Zhang, Hanjie Chen | | Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensive and hard to scale. In this paper, we introduce Decomposed Reward Models (DRMs), a novel approach that extracts diverse human preferences from binary comparisons without requiring fine-grained annotations. Our key insight is to represent human preferences as vectors and analyze them using Principal Component Analysis (PCA). By constructing a dataset of embedding differences between preferred and rejected responses, DRMs identify orthogonal basis vectors that capture distinct aspects of preference. These decomposed rewards can be flexibly combined to align with different user needs, offering an interpretable and scalable alternative to traditional reward models. We demonstrate that DRMs effectively extract meaningful preference dimensions (e.g., helpfulness, safety, humor) and adapt to new users without additional training. Our results highlight DRMs as a powerful framework for personalized and interpretable LLM alignment. |
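The PCA step can be sketched directly: stack embedding differences between preferred and rejected responses and take the leading principal directions as candidate reward axes. The feature extractor and scoring head are assumed; details follow the paper only loosely:

```python
import numpy as np

def decomposed_reward_directions(emb_chosen, emb_rejected, k=5):
    """Sketch of the DRM idea: top-k principal components of the
    (preferred - rejected) embedding differences serve as orthogonal
    candidate reward directions."""
    D = emb_chosen - emb_rejected            # (n_pairs, d) difference vectors
    D = D - D.mean(axis=0, keepdims=True)    # center before PCA
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[:k]                            # (k, d) orthonormal basis

# Usage sketch: score a response embedding e against each decomposed reward.
basis = decomposed_reward_directions(np.random.randn(200, 64),
                                     np.random.randn(200, 64))
rewards = basis @ np.random.randn(64)        # one scalar per preference axis
```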
2025-02-19 | 2502.11079 | Phantom: Subject-consistent video generation via cross-modal alignment | Lijie Liu, Tianxiang Ma, Bingchuan Li, Zhuowei Chen, Jiawei Liu, Qian He, Xinglong Wu | | The continuous development of foundation models for video generation is evolving into various applications, with subject-consistent video generation still in the exploratory stage. We refer to this as Subject-to-Video, which extracts subject elements from reference images and generates subject-consistent video through textual instructions. We believe that the essence of subject-to-video lies in balancing the dual-modal prompts of text and image, thereby deeply and simultaneously aligning both text and visual content. To this end, we propose Phantom, a unified video generation framework for both single and multi-subject references. Building on existing text-to-video and image-to-video architectures, we redesign the joint text-image injection model and drive it to learn cross-modal alignment via text-image-video triplet data. In particular, we emphasize subject consistency in human generation, covering existing ID-preserving video generation while offering enhanced advantages. The project homepage is https://phantom-video.github.io/Phantom/. |
2025-02-19 | 2502.11433 | FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading | Guojun Xiong, Zhiyang Deng, Keyi Wang, Yupeng Cao, Haohang Li, Yangyang Yu, Xueqing Peng, Mingquan Lin, Kaleb E Smith, Xiao-Yang Liu, Jimin Huang, Sophia Ananiadou, Qianqian Xie | | Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization, in which a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves results on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements. |
2025-02-19 | 2502.13145 | Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation | Bencheng Liao, Hongyuan Tao, Qian Zhang, Tianheng Cheng, Yingyue Li, Haoran Yin, Wenyu Liu, Xinggang Wang | https://github.com/hustvl/mmMamba | Recent Multimodal Large Language Models (MLLMs) have achieved remarkable performance but face deployment challenges due to their quadratic computational complexity, growing Key-Value cache requirements, and reliance on separate vision encoders. We propose mmMamba, a framework for developing linear-complexity native multimodal state space models through progressive distillation from existing MLLMs using moderate academic computational resources. Our approach enables the direct conversion of trained decoder-only MLLMs to linear-complexity architectures without requiring pre-trained RNN-based LLMs or vision encoders. We propose a seeding strategy to carve Mamba from the trained Transformer and a three-stage distillation recipe, which can effectively transfer the knowledge from Transformer to Mamba while preserving multimodal capabilities. Our method also supports flexible hybrid architectures that combine Transformer and Mamba layers for customizable efficiency-performance trade-offs. Distilled from the Transformer-based decoder-only HoVLE, mmMamba-linear achieves competitive performance against existing linear and quadratic-complexity VLMs, while mmMamba-hybrid further improves performance significantly, approaching HoVLE's capabilities. At 103K tokens, mmMamba-linear demonstrates a 20.6× speedup and 75.8% GPU memory reduction compared to HoVLE, while mmMamba-hybrid achieves a 13.5× speedup and 60.2% memory savings. Code and models are released at https://github.com/hustvl/mmMamba. |
2025-02-19 | 2502.12513 | RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm | Tiancheng Gu, Kaicheng Yang, Chaoyi Zhang, Yin Xie, Xiang An, Ziyong Feng, Dongnan Liu, Weidong Cai, Jiankang Deng | https://github.com/deepglint/RealSyn | After pre-training on extensive image-text pairs, Contrastive Language-Image Pre-training (CLIP) demonstrates promising performance on a wide variety of benchmarks. However, a substantial volume of non-paired data, such as multimodal interleaved documents, remains underutilized for vision-language representation learning. To fully leverage these unpaired documents, we initially establish a Real-World Data Extraction pipeline to extract high-quality images and texts. Then we design a hierarchical retrieval method to efficiently associate each image with multiple semantically relevant realistic texts. To further enhance fine-grained visual information, we propose an image semantic augmented generation module for synthetic text production. Furthermore, we employ a semantic balance sampling strategy to improve dataset diversity, enabling better learning of long-tail concepts. Based on these innovations, we construct RealSyn, a dataset combining realistic and synthetic texts, available in three scales: 15M, 30M, and 100M. Extensive experiments demonstrate that RealSyn effectively advances vision-language representation learning and exhibits strong scalability. Models pre-trained on RealSyn achieve state-of-the-art performance on multiple downstream tasks. To facilitate future research, the RealSyn dataset and pre-trained model weights are released at https://github.com/deepglint/RealSyn. |
2025-02-19 | 2502.11564 | Continuous Diffusion Model for Language Modeling | Jaehyeong Jo, Sung Ju Hwang | https://github.com/harryjo97/RDLM | Diffusion models have emerged as a promising alternative to autoregressive models in modeling discrete categorical data. Yet diffusion models that directly work on discrete data space do not fully exploit the power of iterative refinement, as the signals are lost during the transition between discrete states. Existing continuous diffusion models for discrete data have limited performance compared to discrete approaches, and the unclear link between them restricts the development of diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between the discrete diffusion and continuous flow on the statistical manifold, and building on the analogy, we introduce a simple design for the diffusion process that generalizes previous discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry and a simple technique to address the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of autoregressive models. Code is available at https://github.com/harryjo97/RDLM. |
2025-02-19 | 2502.12501 | Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge | Qiyuan Zhang, Yufei Wang, Yuxin Jiang, Liangyou Li, Chuhan Wu, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma | | LLM-as-a-Judge, which generates chain-of-thought (CoT) judgments, has become a widely adopted auto-evaluation method. However, its reliability is compromised by the CoT reasoning's inability to capture comprehensive and deeper details, often leading to incomplete outcomes. Existing methods mainly rely on majority voting or criteria expansion, which is insufficient to address the limitations of CoT. We propose Crowd-based Comparative Evaluation, which introduces additional crowd responses to compare with the candidate responses, thereby exposing deeper and more comprehensive details within the candidate responses. This process effectively guides LLM-as-a-Judge to provide a more detailed CoT judgment. Extensive experiments demonstrate that our approach enhances evaluation reliability, achieving an average accuracy gain of 6.7% across five benchmarks. Moreover, our method produces higher-quality CoTs that facilitate judge distillation and exhibit superior performance in rejection sampling for supervised fine-tuning (SFT), referred to as crowd rejection sampling, thereby enabling more efficient SFT. Our analysis confirms that the CoTs generated by our method are more comprehensive and of higher quality, and that evaluation accuracy improves as inference scales. |
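A sketch of how crowd responses might be injected into a judge prompt; the wording is illustrative, not the paper's template:

```python
def build_crowd_judge_prompt(question, cand_a, cand_b, crowd_responses):
    """Sketch of crowd-based comparative evaluation: extra crowd responses
    act as reference points so the judge's CoT can surface details a bare
    pairwise comparison would miss."""
    refs = "\n".join(f"- {r}" for r in crowd_responses)
    return (
        f"Question: {question}\n\n"
        f"Reference responses from other models:\n{refs}\n\n"
        f"Response A: {cand_a}\n"
        f"Response B: {cand_b}\n\n"
        "Compare A and B against the references, note details each one "
        "covers or misses, then conclude which response is better."
    )
```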
2025-02-19 | 2502.09838 | HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation | Tianwei Lin, Wenqiao Zhang, Sijing Li, Yuqian Yuan, Binhe Yu, Haoyuan Li, Wanggui He, Hao Jiang, Mengze Li, Xiaohui Song, Siliang Tang, Jun Xiao, Hui Lin, Yueting Zhuang, Beng Chin Ooi | https://github.com/DCDmllm/HealthGPT | We present HealthGPT, a powerful Medical Large Vision-Language Model (Med-LVLM) that integrates medical visual comprehension and generation capabilities within a unified autoregressive paradigm. Our bootstrapping philosophy is to progressively adapt heterogeneous comprehension and generation knowledge to pre-trained large language models (LLMs). This is achieved through a novel heterogeneous low-rank adaptation (H-LoRA) technique, which is complemented by a tailored hierarchical visual perception approach and a three-stage learning strategy. To train HealthGPT effectively, we devise a comprehensive medical domain-specific comprehension and generation dataset called VL-Health. Experimental results demonstrate exceptional performance and scalability of HealthGPT in unified medical visual tasks. Our project can be accessed at https://github.com/DCDmllm/HealthGPT. |
2025-02-19 | 2502.12574 | HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading | Cheng Luo, Zefan Cai, Hanshi Sun, Jinqi Xiao, Bo Yuan, Wen Xiao, Junjie Hu, Jiawei Zhao, Beidi Chen, Anima Anandkumar | | Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to CPU RAM while avoiding the need to fully store the KV cache for any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise offloading strategy, maintaining only selected attention heads' KV cache on the GPU while computing attention output dynamically. Through roofline analysis, we demonstrate that HEADINFER maintains computational efficiency while significantly reducing memory footprint. We evaluate HEADINFER on the Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage from 207 GB to 17 GB, achieving a 92% reduction compared to BF16 baseline inference. Notably, HEADINFER enables 4-million-token inference with an 8B model on a single consumer GPU with 24GB memory (e.g., NVIDIA RTX 4090) without approximation methods. |
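The head-wise offloading idea can be sketched as follows; the real system overlaps transfers with compute and keeps selected heads resident on the GPU, which this toy loop omits:

```python
import torch

def headwise_attention_offloaded(q, k_cpu, v_cpu, device="cuda"):
    """Toy sketch of head-wise KV offloading. The full KV cache lives in
    CPU RAM (k_cpu/v_cpu: [heads, T, d]); each head's slice is moved to
    the GPU only while its attention output is computed, so GPU memory
    holds at most one head's KV at a time."""
    outs = []
    for h in range(q.shape[0]):                      # q: [heads, 1, d] on GPU
        k = k_cpu[h].to(device, non_blocking=True)   # fetch one head's keys
        v = v_cpu[h].to(device, non_blocking=True)   # ... and values
        attn = torch.softmax(q[h] @ k.T / k.shape[-1] ** 0.5, dim=-1)
        outs.append(attn @ v)                        # [1, d] per head
    return torch.stack(outs)                         # [heads, 1, d]
```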
2025-02-19 | 2502.10852 | Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages | Zeli Su, Ziyin Zhang, Guixian Xu, Jianing Liu, XU Han, Ting Zhang, Yushuang Dong | | While multilingual language models like XLM-R have advanced multilingualism in NLP, they still perform poorly in extremely low-resource languages. This situation is exacerbated by the fact that modern LLMs such as LLaMA and Qwen support far fewer languages than XLM-R, making text generation models non-existent for many languages in the world. To tackle this challenge, we propose a novel framework for adapting multilingual encoders to text generation in extremely low-resource languages. By reusing the weights between the encoder and the decoder, our framework allows the model to leverage the learned semantic space of the encoder, enabling efficient learning and effective generalization in low-resource languages. Applying this framework to four Chinese minority languages, we present XLM-SWCM, and demonstrate its superior performance on various downstream tasks even when compared with much larger models. |
2025-02-19 | 2502.12215 | Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities? | Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Yunhua Zhou, Xipeng Qiu | | The advent of test-time scaling in large language models (LLMs), exemplified by OpenAI's o1 series, has advanced reasoning capabilities by scaling computational resource allocation during inference. While successors like QwQ, Deepseek-R1 (R1) and LIMO replicate these advancements, whether these models truly possess test-time scaling capabilities remains underexplored. This study found that longer CoTs of these o1-like models do not consistently enhance accuracy; in fact, correct solutions are often shorter than incorrect ones for the same questions. Further investigation shows this phenomenon is closely related to models' self-revision capabilities - longer CoTs contain more self-revisions, which often lead to performance degradation. We then compare sequential and parallel scaling strategies on QwQ, R1 and LIMO, finding that parallel scaling achieves better coverage and scalability. Based on these insights, we propose Shortest Majority Vote, a method that combines parallel scaling strategies with CoT length characteristics, significantly improving models' test-time scalability compared to conventional majority voting approaches. |
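A sketch of one way to combine vote counts with CoT length as the abstract describes; the paper's exact aggregation rule may differ:

```python
from collections import defaultdict

def shortest_majority_vote(samples):
    """Sketch: favor answers that are both popular and reached via short
    chains of thought. `samples` is a list of (answer, cot_length) pairs
    from parallel sampling; ranking is by vote count, then shorter
    average CoT length as the tiebreaker."""
    votes, lengths = defaultdict(int), defaultdict(list)
    for answer, n_tokens in samples:
        votes[answer] += 1
        lengths[answer].append(n_tokens)
    return max(
        votes,
        key=lambda a: (votes[a], -sum(lengths[a]) / len(lengths[a])),
    )

print(shortest_majority_vote([("42", 300), ("42", 250), ("17", 900)]))
```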
2025-02-19T00:00:00 | 2502.12464 | SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models | [
"Seanie Lee",
"Dong Bok Lee",
"Dominik Wagner",
"Minki Kang",
"Haebin Seong",
"Tobias Bocklet",
"Juho Lee",
"Sung Ju Hwang"
] | Deploying large language models (LLMs) in real-world applications requires robust safety guard models to detect and block harmful user prompts. While large safety guard models achieve strong performance, their computational cost is substantial. To mitigate this, smaller distilled models are used, but they often underperform on "hard" examples where the larger model provides accurate predictions. We observe that many inputs can be reliably handled by the smaller model, while only a small fraction require the larger model's capacity. Motivated by this, we propose SafeRoute, a binary router that distinguishes hard examples from easy ones. Our method selectively applies the larger safety guard model to the data that the router considers hard, improving efficiency while maintaining accuracy compared to solely using the larger safety guard model. Experimental results on multiple benchmark datasets demonstrate that our adaptive model selection significantly enhances the trade-off between computational cost and safety performance, outperforming relevant baselines. |
|
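A minimal sketch of the routing pattern above, assuming `small_guard`, `large_guard`, and `router` are user-supplied callables (stand-ins, not the paper's released models): the router sends only "hard" prompts to the expensive guard.

```python
def safe_route(prompt, small_guard, large_guard, router):
    """Adaptive safety guarding: run the cheap guard by default and
    escalate to the large guard only when the router flags the input
    as hard. `small_guard`/`large_guard` return a safety label;
    `router` returns True for hard examples."""
    if router(prompt):          # hard example -> large model's capacity needed
        return large_guard(prompt)
    return small_guard(prompt)  # easy example -> cheap path

# Toy stand-ins for the three components, purely for illustration.
label = safe_route(
    "how do I make a bomb?",
    small_guard=lambda p: "unsafe" if "bomb" in p else "safe",
    large_guard=lambda p: "unsafe",
    router=lambda p: len(p.split()) > 20,  # toy "hardness" signal
)
print(label)  # 'unsafe' (handled by the small guard)
```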
2025-02-19T00:00:00 | 2502.12170 | MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections | [
"Da Xiao",
"Qingye Meng",
"Shengping Li",
"Xingyuan Yuan"
] | https://github.com/Caiyun-AI/MUDDFormer | We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective method to address the limitations of residual connections and enhance cross-layer information flow in Transformers. Unlike existing dense connection approaches with static and shared connection weights, MUDD generates connection weights dynamically depending on hidden states at each sequence position and for each decoupled input stream (the query, key, value or residual) of a Transformer block. MUDD connections can be seamlessly integrated into any Transformer architecture to create MUDDFormer. Extensive experiments show that MUDDFormer significantly outperforms Transformers across various model architectures and scales in language modeling, achieving the performance of Transformers trained with 1.8X-2.4X compute. Notably, MUDDPythia-2.8B matches Pythia-6.9B in pretraining perplexity and downstream tasks and even rivals Pythia-12B in five-shot settings, while adding only 0.23% parameters and 0.4% computation. Code in JAX and PyTorch and pre-trained models are available at https://github.com/Caiyun-AI/MUDDFormer. |
2025-02-19T00:00:00 | 2502.12900 | Soundwave: Less is More for Speech-Text Alignment in LLMs | [
"Yuhao Zhang",
"Zhiheng Liu",
"Fan Bu",
"Ruiyu Zhang",
"Benyou Wang",
"Haizhou Li"
] | https://github.com/FreedomIntelligence/Soundwave | Existing end-to-end speech large language models (LLMs) usually rely on large-scale annotated data for training, while data-efficient training has not been discussed in depth. We focus on two fundamental problems between speech and text: the representation space gap and sequence length inconsistency. We propose Soundwave, which utilizes an efficient training strategy and a novel architecture to address these issues. Results show that Soundwave outperforms the advanced Qwen2-Audio in speech translation and AIR-Bench speech tasks, using only one-fiftieth of the training data. Further analysis shows that Soundwave still retains its intelligence during conversation. The project is available at https://github.com/FreedomIntelligence/Soundwave. |
2025-02-19T00:00:00 | 2502.13130 | Magma: A Foundation Model for Multimodal AI Agents | [
"Jianwei Yang",
"Reuben Tan",
"Qianhui Wu",
"Ruijie Zheng",
"Baolin Peng",
"Yongyuan Liang",
"Yu Gu",
"Mu Cai",
"Seonghyeon Ye",
"Joel Jang",
"Yuquan Deng",
"Lars Liden",
"Jianfeng Gao"
] | We present Magma, a foundation model that serves multimodal AI agentic tasks in both the digital and physical worlds. Magma is a significant extension of vision-language (VL) models in that it not only retains the VL understanding ability (verbal intelligence) of the latter, but is also equipped with the ability to plan and act in the visual-spatial world (spatial-temporal intelligence) and complete agentic tasks ranging from UI navigation to robot manipulation. To endow it with agentic capabilities, Magma is pretrained on large amounts of heterogeneous data spanning images, videos, and robotics data, where the actionable visual objects (e.g., clickable buttons in GUI) in images are labeled by Set-of-Mark (SoM) for action grounding, and the object movements (e.g., the trace of human hands or robotic arms) in videos are labeled by Trace-of-Mark (ToM) for action planning. Extensive experiments show that SoM and ToM achieve great synergy and facilitate the acquisition of spatial-temporal intelligence for our Magma model, which is fundamental to a wide range of tasks as shown in Fig. 1. In particular, Magma creates new state-of-the-art results on UI navigation and robotic manipulation tasks, outperforming previous models that are specifically tailored to these tasks. On image and video-related multimodal tasks, Magma also compares favorably to popular large multimodal models that are trained on much larger datasets. We make our model and code public for reproducibility at https://microsoft.github.io/Magma. |
|
2025-02-19T00:00:00 | 2502.12859 | PAFT: Prompt-Agnostic Fine-Tuning | [
"Chenxing Wei",
"Yao Shu",
"Mingwen Ou",
"Ying Tiffany He",
"Fei Richard Yu"
] | While Large Language Models (LLMs) adapt well to downstream tasks after fine-tuning, this adaptability often compromises prompt robustness, as even minor prompt variations can significantly degrade performance. To address this, we propose Prompt-Agnostic Fine-Tuning (PAFT), a simple yet effective approach that dynamically adjusts prompts during fine-tuning. This encourages the model to learn underlying task principles rather than overfitting to specific prompt formulations. PAFT operates in two stages: First, a diverse set of meaningful, synthetic candidate prompts is constructed. Second, during fine-tuning, prompts are randomly sampled from this set to create dynamic training inputs. Extensive experiments across diverse datasets and LLMs demonstrate that models trained with PAFT exhibit strong robustness and generalization across a wide range of prompts, including unseen ones. This enhanced robustness improves both model performance and inference speed while maintaining training efficiency. Ablation studies further confirm the effectiveness of PAFT. |
|
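The two-stage procedure above reduces to a small amount of data-pipeline code. A hedged sketch follows, with illustrative prompt templates standing in for the paper's synthetic candidate set.

```python
import random

# Stage 1: a diverse synthetic prompt set (templates are illustrative).
CANDIDATE_PROMPTS = [
    "Question: {q}\nAnswer:",
    "Solve the following problem.\n{q}\n",
    "You are a helpful tutor. {q}",
]

def paft_batch(examples):
    """Stage 2 of PAFT as described above: each fine-tuning example is
    paired with a prompt template drawn at random, so the model cannot
    overfit any one fixed formulation."""
    batch = []
    for ex in examples:
        template = random.choice(CANDIDATE_PROMPTS)
        batch.append({"input": template.format(q=ex["question"]),
                      "target": ex["answer"]})
    return batch

print(paft_batch([{"question": "What is 2 + 2?", "answer": "4"}]))
```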
2025-02-19T00:00:00 | 2502.13142 | Pre-training Auto-regressive Robotic Models with 4D Representations | [
"Dantong Niu",
"Yuvan Sharma",
"Haoru Xue",
"Giscard Biamby",
"Junyi Zhang",
"Ziteng Ji",
"Trevor Darrell",
"Roei Herzig"
] | Foundation models pre-trained on massive unlabeled datasets have revolutionized natural language and computer vision, exhibiting remarkable generalization capabilities, thus highlighting the importance of pre-training. Yet, efforts in robotics have struggled to achieve similar success, limited by either the need for costly robotic annotations or the lack of representations that effectively model the physical world. In this paper, we introduce ARM4R, an Auto-regressive Robotic Model that leverages low-level 4D Representations learned from human video data to yield a better pre-trained robotic model. Specifically, we focus on utilizing 3D point tracking representations from videos derived by lifting 2D representations into 3D space via monocular depth estimation across time. These 4D representations maintain a shared geometric structure between the points and robot state representations up to a linear transformation, enabling efficient transfer learning from human video data to low-level robotic control. Our experiments show that ARM4R can transfer efficiently from human video data to robotics and consistently improves performance on tasks across various robot environments and configurations. |
|
2025-02-19T00:00:00 | 2502.11271 | OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning | [
"Pan Lu",
"Bowen Chen",
"Sheng Liu",
"Rahul Thapa",
"Joseph Boen",
"James Zou"
] | Solving complex reasoning tasks may involve visual understanding, domain knowledge retrieval, numerical calculation, and multi-step reasoning. Existing methods augment large language models (LLMs) with external tools but are restricted to specialized domains, limited tool types, or require additional training data. In this paper, we introduce OctoTools, a training-free, user-friendly, and easily extensible open-source agentic framework designed to tackle complex reasoning across diverse domains. OctoTools introduces standardized tool cards to encapsulate tool functionality, a planner for both high-level and low-level planning, and an executor to carry out tool usage. We validate OctoTools' generality across 16 diverse tasks (including MathVista, MMLU-Pro, MedQA, and GAIA-Text), achieving substantial average accuracy gains of 9.3% over GPT-4o. Furthermore, OctoTools outperforms AutoGen, GPT-Functions and LangChain by up to 10.6% when given the same set of tools. Through comprehensive analysis and ablations, OctoTools demonstrates advantages in task planning, effective tool usage, and multi-step problem solving. |
|
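The tool-card abstraction above can be pictured with a small sketch; the fields and executor interface below are a guess at a minimal schema, not the framework's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCard:
    """Standardized wrapper around a tool, loosely following the
    'tool card' idea described above."""
    name: str
    description: str               # shown to the planner LLM
    run: Callable[[str], str]      # the tool's implementation

def execute_plan(plan, tools):
    """Carry out a low-level plan, a list of (tool_name, argument)
    steps produced by a planner. Returns the trace of observations."""
    registry = {t.name: t for t in tools}
    return [registry[name].run(arg) for name, arg in plan]

# Demo tool only; never eval untrusted input in real code.
calculator = ToolCard("calculator", "Evaluates arithmetic expressions",
                      run=lambda expr: str(eval(expr)))
print(execute_plan([("calculator", "3 * (4 + 5)")], [calculator]))  # ['27']
```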
2025-02-19T00:00:00 | 2502.12669 | Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research | [
"Xiang Liu",
"Penglei Sun",
"Shuyan Chen",
"Longhan Zhang",
"Peijie Dong",
"Huajie You",
"Yongqi Zhang",
"Chang Yan",
"Xiaowen Chu",
"Tong-yi Zhang"
] | The rapid advancement of perovskite solar cells (PSCs) has led to an exponential growth in research publications, creating an urgent need for efficient knowledge management and reasoning systems in this domain. We present a comprehensive knowledge-enhanced system for PSCs that integrates three key components. First, we develop Perovskite-KG, a domain-specific knowledge graph constructed from 1,517 research papers, containing 23,789 entities and 22,272 relationships. Second, we create two complementary datasets: Perovskite-Chat, comprising 55,101 high-quality question-answer pairs generated through a novel multi-agent framework, and Perovskite-Reasoning, containing 2,217 carefully curated materials science problems. Third, we introduce two specialized large language models: Perovskite-Chat-LLM for domain-specific knowledge assistance and Perovskite-Reasoning-LLM for scientific reasoning tasks. Experimental results demonstrate that our system significantly outperforms existing models in both domain-specific knowledge retrieval and scientific reasoning tasks, providing researchers with effective tools for literature review, experimental design, and complex problem-solving in PSC research. |
|
2025-02-19T00:00:00 | 2502.09245 | You Do Not Fully Utilize Transformer's Representation Capacity | [
"Gleb Gerasimov",
"Yaroslav Aksenov",
"Nikita Balagansky",
"Viacheslav Sinii",
"Daniil Gavrilov"
] | In contrast to RNNs, which compress previous tokens into a single hidden state, Transformers can attend to all previous tokens directly. However, standard Transformers only use representations from the immediately preceding layer. In this paper, we show that this design choice causes representation collapse and leads to suboptimal performance. To address this issue, we introduce Layer-Integrated Memory (LIMe), a simple yet powerful approach that preserves the model's overall memory footprint while expanding its representational capacity by allowing access to hidden states from earlier layers. Through extensive experiments across various architectures and different lookup mechanisms, we demonstrate consistent performance improvements on a wide range of tasks. Moreover, our analysis of the learned representation dynamics and our exploration of depthwise circuits reveal how LIMe integrates information across layers, pointing to promising directions for future research. |
|
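A minimal PyTorch sketch of the layer-integrated lookup described above, using a single static softmax weight per earlier layer; the paper also studies richer (e.g., dynamic, per-head) lookup mechanisms.

```python
import torch
import torch.nn as nn

class LIMeMixer(nn.Module):
    """Sketch of a Layer-Integrated Memory style lookup: before layer
    `layer_idx`, mix the hidden states of all earlier layers with
    learned weights instead of using only the previous layer's output.
    This is a minimal static-weight variant."""
    def __init__(self, layer_idx: int):
        super().__init__()
        # one logit per earlier layer (including the embeddings, index 0)
        self.logits = nn.Parameter(torch.zeros(layer_idx + 1))

    def forward(self, past_states):  # list of [batch, seq, dim] tensors
        w = torch.softmax(self.logits, dim=0)      # convex mixing weights
        stacked = torch.stack(past_states, dim=0)  # [L, B, S, D]
        return (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)

mixer = LIMeMixer(layer_idx=3)
states = [torch.randn(2, 5, 16) for _ in range(4)]
print(mixer(states).shape)  # torch.Size([2, 5, 16])
```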
2025-02-19T00:00:00 | 2502.10708 | Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey | [
"Zirui Song",
"Bin Yan",
"Yuhan Liu",
"Miao Fang",
"Mingzhe Li",
"Rui Yan",
"Xiuying Chen"
] | https://github.com/abilliyb/Knowledge_Injection_Survey_Papers | Large Language Models (LLMs) have demonstrated remarkable success in various tasks such as natural language understanding, text summarization, and machine translation. However, their general-purpose nature often limits their effectiveness in domain-specific applications that require specialized knowledge, such as healthcare, chemistry, or legal analysis. To address this, researchers have explored diverse methods to enhance LLMs by integrating domain-specific knowledge. In this survey, we provide a comprehensive overview of these methods, which we categorize into four key approaches: dynamic knowledge injection, static knowledge embedding, modular adapters, and prompt optimization. Each approach offers unique mechanisms to equip LLMs with domain expertise, balancing trade-offs between flexibility, scalability, and efficiency. We discuss how these methods enable LLMs to tackle specialized tasks, compare their advantages and disadvantages, evaluate domain-specific LLMs against general LLMs, and highlight the challenges and opportunities in this emerging field. For those interested in delving deeper into this area, we also summarize the commonly used datasets and benchmarks. To keep researchers updated on the latest studies, we maintain an open-source repository at: https://github.com/abilliyb/Knowledge_Injection_Survey_Papers, dedicated to documenting research in the field of specialized LLMs. |
2025-02-19T00:00:00 | 2502.13063 | Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity | [
"Yuri Kuratov",
"Mikhail Arkhipov",
"Aydar Bulatov",
"Mikhail Burtsev"
] | A range of recent works addresses the problem of compressing a sequence of tokens into a shorter sequence of real-valued vectors to be used as inputs instead of token embeddings or key-value cache. These approaches make it possible to reduce the amount of compute in existing language models. Even when powerful models are used as encoders, the maximum attainable lossless compression ratio is typically not higher than x10. This fact is highly intriguing because, in theory, the maximum information capacity of large real-valued vectors is far beyond the presented rates even for 16-bit precision and a modest vector size. In this work, we explore the limits of compression by replacing the encoder with a per-sample optimization procedure. We show that vectors with compression ratios up to x1500 exist, which highlights a two-orders-of-magnitude gap between existing and practically attainable solutions. Furthermore, we empirically show that the compression limits are determined not by the length of the input but by the amount of uncertainty to be reduced, namely, the cross-entropy loss on this sequence without any conditioning. The obtained limits highlight the substantial gap between the theoretical capacity of input embeddings and their practical utilization, suggesting significant room for optimization in model design. |
|
2025-02-19T00:00:00 | 2502.10990 | FinMTEB: Finance Massive Text Embedding Benchmark | [
"Yixuan Tang",
"Yi Yang"
] | Embedding models play a crucial role in representing and retrieving information across various NLP applications. Recent advances in large language models (LLMs) have further enhanced the performance of embedding models. While these models are often benchmarked on general-purpose datasets, real-world applications demand domain-specific evaluation. In this work, we introduce the Finance Massive Text Embedding Benchmark (FinMTEB), a specialized counterpart to MTEB designed for the financial domain. FinMTEB comprises 64 financial domain-specific embedding datasets across 7 tasks that cover diverse textual types in both Chinese and English, such as financial news articles, corporate annual reports, ESG reports, regulatory filings, and earnings call transcripts. We also develop a finance-adapted model, FinPersona-E5, using a persona-based data synthetic method to cover diverse financial embedding tasks for training. Through extensive evaluation of 15 embedding models, including FinPersona-E5, we show three key findings: (1) performance on general-purpose benchmarks shows limited correlation with financial domain tasks; (2) domain-adapted models consistently outperform their general-purpose counterparts; and (3) surprisingly, a simple Bag-of-Words (BoW) approach outperforms sophisticated dense embeddings in financial Semantic Textual Similarity (STS) tasks, underscoring current limitations in dense embedding techniques. Our work establishes a robust evaluation framework for financial NLP applications and provides crucial insights for developing domain-specific embedding models. |
|
2025-02-19T00:00:00 | 2502.12996 | Eager Updates For Overlapped Communication and Computation in DiLoCo | [
"Satyen Kale",
"Arthur Douillard",
"Yanislav Donchev"
] | Distributed optimization methods such as DiLoCo have been shown to be effective in training very large models across multiple distributed workers, such as datacenters. These methods split updates into two parts: an inner optimization phase, where the workers independently execute multiple optimization steps on their own local data, and an outer optimization step, where the inner updates are synchronized. While such approaches require orders of magnitude less communication than standard data-parallel training, in settings where the workers are datacenters, even the limited communication requirements of these approaches can still cause significant slowdowns due to the blocking necessary at each outer optimization step. In this paper, we investigate techniques to mitigate this issue by overlapping communication with computation in a manner that allows the outer optimization step to fully overlap with the inner optimization phase. We show that a particular variant, dubbed eager updates, provides competitive performance with standard DiLoCo in settings with low bandwidth between workers. |
|
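A toy, fully synthetic sketch of how an eager outer update can overlap with inner optimization, under a simplified reading of the idea above: each worker applies its own delta immediately and folds in the delayed average (minus its own contribution) one round later. Parameters are single floats purely for illustration; this is not the paper's exact algorithm.

```python
class Worker:
    """Toy worker: parameters are one float; an inner phase just
    returns a synthetic local delta."""
    def __init__(self, lr=0.1):
        self.params, self.lr = 0.0, lr

    def run_inner(self, steps):
        return self.lr * steps  # stand-in for H local optimizer steps

def train_eager(workers, rounds=3, inner_steps=5):
    pending = None  # last round's deltas, conceptually "still in flight"
    for _ in range(rounds):
        deltas = [w.run_inner(inner_steps) for w in workers]
        if pending is not None:
            avg_prev = sum(pending) / len(pending)
            for w, d_prev in zip(workers, pending):
                w.params += avg_prev - d_prev  # delayed correction
        for w, d in zip(workers, deltas):
            w.params += d                      # eager: apply own delta now
        pending = deltas                       # a real system all-reduces
                                               # this in the background
    return [w.params for w in workers]

print(train_eager([Worker(0.1), Worker(0.2)]))
```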
2025-02-19T00:00:00 | 2502.12018 | Atom of Thoughts for Markov LLM Test-Time Scaling | [
"Fengwei Teng",
"Zhaoyang Yu",
"Quan Shi",
"Jiayi Zhang",
"Chenglin Wu",
"Yuyu Luo"
] | https://github.com/qixucen/atom | Large Language Models (LLMs) achieve superior performance through training-time scaling, and test-time scaling further enhances their capabilities by conducting effective reasoning during inference. However, as the scale of reasoning increases, existing test-time scaling methods suffer from accumulated historical information, which not only wastes computational resources but also interferes with effective reasoning. To address this issue, we observe that complex reasoning progress is often achieved by solving a sequence of independent subquestions, each being self-contained and verifiable. These subquestions are essentially atomic questions, relying primarily on their current state rather than accumulated history, similar to the memoryless transitions in a Markov process. Based on this observation, we propose Atom of Thoughts (AoT), where each state transition in the reasoning process consists of decomposing the current question into a dependency-based directed acyclic graph and contracting its subquestions, forming a new atomic question state. This iterative decomposition-contraction process continues until reaching directly solvable atomic questions, naturally realizing Markov transitions between question states. Furthermore, these atomic questions can be seamlessly integrated into existing test-time scaling methods, enabling AoT to serve as a plug-in enhancement for improving reasoning capabilities. Experiments across six benchmarks demonstrate the effectiveness of AoT both as a standalone framework and a plug-in enhancement. Notably, on HotpotQA, when applied to gpt-4o-mini, AoT achieves an 80.6% F1 score, surpassing o3-mini by 3.4% and DeepSeek-R1 by 10.6%. The code will be available at https://github.com/qixucen/atom. |
2025-02-19T00:00:00 | 2502.13092 | Text2World: Benchmarking Large Language Models for Symbolic World Model Generation | [
"Mengkang Hu",
"Tianxing Chen",
"Yude Zou",
"Yuheng Lei",
"Qiguang Chen",
"Ming Li",
"Hongyuan Zhang",
"Wenqi Shao",
"Ping Luo"
] | Recently, there has been growing interest in leveraging large language models (LLMs) to generate symbolic world models from textual descriptions. Although LLMs have been extensively explored in the context of world modeling, prior studies encountered several challenges, including evaluation randomness, dependence on indirect metrics, and a limited domain scope. To address these limitations, we introduce a novel benchmark, Text2World, based on planning domain definition language (PDDL), featuring hundreds of diverse domains and employing multi-criteria, execution-based metrics for a more robust evaluation. We benchmark current LLMs using Text2World and find that reasoning models trained with large-scale reinforcement learning outperform others. However, even the best-performing model still demonstrates limited capabilities in world modeling. Building on these insights, we examine several promising strategies to enhance the world modeling capabilities of LLMs, including test-time scaling, agent training, and more. We hope that Text2World can serve as a crucial resource, laying the groundwork for future research in leveraging LLMs as world models. The project page is available at https://text-to-world.github.io/. |
|
2025-02-19T00:00:00 | 2502.12929 | Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options | [
"Lakshmi Nair",
"Ian Trase",
"Mark Kim"
] | We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic system for autonomously solving Machine Learning tasks (AutoML). Our framework outperforms state-of-the-art baselines, achieving improvements of 38.2% - 69.2% on standard data science tasks, and 37.4% - 47.9% on therapeutic chemistry tasks. With an overall operation cost under $1 per task, our framework is well-suited for cost-sensitive applications. Beyond classification and regression, we illustrate the broader applicability of our FoO-based agentic system to tasks such as reinforcement learning and image generation. Our framework presents significant advancements compared to current state-of-the-art agentic systems for AutoML, due to the benefits of FoO in enforcing diversity in LLM solutions through compressed, explainable representations that also support long-term memory when combined with case-based reasoning. |
|
2025-02-19T00:00:00 | 2502.08869 | Harnessing Vision Models for Time Series Analysis: A Survey | [
"Jingchao Ni",
"Ziming Zhao",
"ChengAo Shen",
"Hanghang Tong",
"Dongjin Song",
"Wei Cheng",
"Dongsheng Luo",
"Haifeng Chen"
] | Time series analysis has witnessed inspiring development, from traditional autoregressive models and deep learning models to recent Transformers and Large Language Models (LLMs). Efforts in leveraging vision models for time series analysis have also been made along the way but are less visible to the community due to the predominant research on sequence modeling in this domain. However, the discrepancy between continuous time series and the discrete token space of LLMs, and the challenges in explicitly modeling the correlations of variates in multivariate time series, have shifted some research attention to the equally successful Large Vision Models (LVMs) and Vision Language Models (VLMs). To fill this gap in the existing literature, this survey discusses the advantages of vision models over LLMs in time series analysis. It provides a comprehensive and in-depth overview of the existing methods, with a detailed dual-view taxonomy that answers the key research questions, including how to encode time series as images and how to model the imaged time series for various tasks. Additionally, we address the challenges in the pre- and post-processing steps involved in this framework and outline future directions to further advance time series analysis with vision models. |
|
2025-02-19T00:00:00 | 2502.12524 | YOLOv12: Attention-Centric Real-Time Object Detectors | [
"Yunjie Tian",
"Qixiang Ye",
"David Doermann"
] | Enhancing the network architecture of the YOLO framework has been crucial for a long time, but has focused on CNN-based improvements despite the proven superiority of attention mechanisms in modeling capabilities. This is because attention-based models cannot match the speed of CNN-based models. This paper proposes an attention-centric YOLO framework, namely YOLOv12, that matches the speed of previous CNN-based ones while harnessing the performance benefits of attention mechanisms. YOLOv12 surpasses all popular real-time object detectors in accuracy with competitive speed. For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.64 ms on a T4 GPU, outperforming advanced YOLOv10-N / YOLOv11-N by 2.1%/1.2% mAP with a comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the computation and 45% of the parameters. More comparisons are shown in Figure 1. |
|
2025-02-19T00:00:00 | 2502.12130 | Scaling Autonomous Agents via Automatic Reward Modeling And Planning | [
"Zhenfang Chen",
"Delin Chen",
"Rui Sun",
"Wenjun Liu",
"Chuang Gan"
] | Large language models (LLMs) have demonstrated remarkable capabilities across a range of text-generation tasks. However, LLMs still struggle with problems requiring multi-step decision-making and environmental feedback, such as online shopping, scientific reasoning, and mathematical problem-solving. Unlike pure text data, collecting large-scale decision-making data is challenging. Moreover, many powerful LLMs are only accessible through APIs, which hinders their fine-tuning for agent tasks due to cost and complexity. To address LLM agents' limitations, we propose a framework that can automatically learn a reward model from the environment without human annotations. This model can be used to evaluate the action trajectories of LLM agents and provide heuristics for task planning. Specifically, our approach involves employing one LLM-based agent to navigate an environment randomly, generating diverse action trajectories. Subsequently, a separate LLM is leveraged to assign a task intent and synthesize a negative response alongside the correct response for each trajectory. These triplets (task intent, positive response, and negative response) are then utilized as training data to optimize a reward model capable of scoring action trajectories. The effectiveness and generalizability of our framework are demonstrated through evaluations conducted on different agent benchmarks. In conclusion, our proposed framework represents a significant advancement in enhancing LLM agents' decision-making capabilities. By automating the learning of reward models, we overcome the challenges of data scarcity and API limitations, potentially revolutionizing the application of LLMs in complex and interactive environments. This research paves the way for more sophisticated AI agents capable of tackling a wide range of real-world problems requiring multi-step decision-making. |
|
2025-02-19T00:00:00 | 2502.13962 | Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering | [
"William Jurayj",
"Jeffrey Cheng",
"Benjamin Van Durme"
] | Scaling the test-time compute of large language models has demonstrated impressive performance on reasoning benchmarks. However, existing evaluations of test-time scaling make the strong assumption that a reasoning system should always give an answer to any question provided. This overlooks concerns about whether a model is confident in its answer, and whether it is appropriate to always provide a response. To address these concerns, we extract confidence scores during reasoning for thresholding model responses. We find that increasing compute budget at inference time not only helps models answer more questions correctly, but also increases confidence in correct responses. We then extend the current paradigm of zero-risk responses during evaluation by considering settings with non-zero levels of response risk, and suggest a recipe for reporting evaluations under these settings. |
|
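The thresholding recipe above is easy to make concrete. A sketch follows, assuming the model exposes a scalar confidence in [0, 1]; the threshold value and interface are illustrative.

```python
def selective_answer(question, model, threshold=0.8):
    """Answer only when the extracted confidence clears a threshold,
    as in the selective-QA setting above. `model` is any callable
    returning (answer, confidence); abstaining returns None."""
    answer, confidence = model(question)
    return answer if confidence >= threshold else None

def risk_coverage(records, threshold):
    """Evaluate (confidence, is_correct) records at a threshold:
    coverage = fraction answered, risk = error rate among answered."""
    answered = [(c, ok) for c, ok in records if c >= threshold]
    coverage = len(answered) / len(records)
    if not answered:
        return coverage, 0.0
    risk = 1 - sum(ok for _, ok in answered) / len(answered)
    return coverage, risk

# 3 of 4 questions answered; 1 of the 3 answered is wrong.
print(risk_coverage(
    [(0.9, True), (0.95, True), (0.6, False), (0.85, False)], 0.8
))  # (0.75, 0.333...)
```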
2025-02-19T00:00:00 | 2502.12659 | The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1 | [
"Kaiwen Zhou",
"Chengzhi Liu",
"Xuandong Zhao",
"Shreedhar Jangam",
"Jayanth Srinivasa",
"Gaowen Liu",
"Dawn Song",
"Xin Eric Wang"
] | The rapid development of large reasoning models, such as OpenAI-o3 and DeepSeek-R1, has led to significant improvements in complex reasoning over non-reasoning large language models (LLMs). However, their enhanced capabilities, combined with the open-source access of models like DeepSeek-R1, raise serious safety concerns, particularly regarding their potential for misuse. In this work, we present a comprehensive safety assessment of these reasoning models, leveraging established safety benchmarks to evaluate their compliance with safety regulations. Furthermore, we investigate their susceptibility to adversarial attacks, such as jailbreaking and prompt injection, to assess their robustness in real-world applications. Through our multi-faceted analysis, we uncover four key findings: (1) There is a significant safety gap between the open-source R1 models and the o3-mini model, on both safety benchmarks and attacks, suggesting that more safety effort is needed on R1. (2) The distilled reasoning model shows poorer safety performance compared to its safety-aligned base models. (3) The stronger the model's reasoning ability, the greater the potential harm it may cause when answering unsafe questions. (4) The thinking process in R1 models poses greater safety concerns than their final answers. Our study provides insights into the security implications of reasoning models and highlights the need for further advancements in R1 models' safety to close the gap. |
|
2025-02-20T00:00:00 | 2502.12143 | Small Models Struggle to Learn from Strong Reasoners | [
"Yuetai Li",
"Xiang Yue",
"Zhangchen Xu",
"Fengqing Jiang",
"Luyao Niu",
"Bill Yuchen Lin",
"Bhaskar Ramasubramanian",
"Radha Poovendran"
] | Large language models (LLMs) excel in complex reasoning tasks, and distilling their reasoning capabilities into smaller models has shown promise. However, we uncover an interesting phenomenon, which we term the Small Model Learnability Gap: small models (≤3B parameters) do not consistently benefit from long chain-of-thought (CoT) reasoning or distillation from larger models. Instead, they perform better when fine-tuned on shorter, simpler reasoning chains that better align with their intrinsic learning capacity. To address this, we propose Mix Distillation, a simple yet effective strategy that balances reasoning complexity by combining long and short CoT examples or reasoning from both larger and smaller models. Our experiments demonstrate that Mix Distillation significantly improves small model reasoning performance compared to training on either data alone. These findings highlight the limitations of direct strong model distillation and underscore the importance of adapting reasoning complexity for effective reasoning capability transfer. |
|
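A minimal sketch of the data-mixing step described above, assuming two pre-collected pools of long-CoT and short-CoT traces that are each large enough to sample from; the 50/50 default ratio is illustrative, not the paper's tuned value.

```python
import random

def mix_distillation_data(long_cot, short_cot, mix_ratio=0.5, seed=0):
    """Build a distillation set that balances reasoning complexity:
    draw a fraction `mix_ratio` of examples from long-CoT traces
    (e.g., from a strong teacher) and the rest from short, simpler
    chains, then shuffle. Assumes both pools have enough examples."""
    rng = random.Random(seed)
    n_long = int(len(long_cot) * mix_ratio)
    n_short = len(long_cot) - n_long
    mixed = rng.sample(long_cot, n_long) + rng.sample(short_cot, n_short)
    rng.shuffle(mixed)
    return mixed

long_pool = [f"long trace {i}" for i in range(10)]
short_pool = [f"short trace {i}" for i in range(10)]
print(len(mix_distillation_data(long_pool, short_pool)))  # 10
```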
2025-02-20T00:00:00 | 2502.13922 | LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization | [
"Guanzheng Chen",
"Xin Li",
"Michael Qizhe Shieh",
"Lidong Bing"
] | Large Language Models (LLMs) have demonstrated remarkable capabilities through pretraining and alignment. However, superior short-context LLMs may underperform in long-context scenarios due to insufficient long-context alignment. This alignment process remains challenging due to the impracticality of human annotation for extended contexts and the difficulty in balancing short- and long-context performance. To address these challenges, we introduce LongPO, which enables short-context LLMs to self-evolve to excel on long-context tasks by internally transferring short-context capabilities. LongPO harnesses LLMs to learn from self-generated short-to-long preference data, comprising paired responses generated for identical instructions with long-context inputs and their compressed short-context counterparts, respectively. This preference reveals capabilities and potentials of LLMs cultivated during short-context alignment that may be diminished in under-aligned long-context scenarios. Additionally, LongPO incorporates a short-to-long KL constraint to mitigate short-context performance decline during long-context alignment. When applied to Mistral-7B-Instruct-v0.2 from 128K to 512K context lengths, LongPO fully retains short-context performance and largely outperforms naive SFT and DPO in both long- and short-context tasks. Specifically, LongPO-trained models can achieve results on long-context benchmarks comparable to, or even surpassing, those of superior LLMs (e.g., GPT-4-128K) that involve extensive long-context annotation and larger parameter scales. |
|
2025-02-20T00:00:00 | 2502.13144 | RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning | [
"Hao Gao",
"Shaoyu Chen",
"Bo Jiang",
"Bencheng Liao",
"Yiang Shi",
"Xiaoyang Guo",
"Yuechuan Pu",
"Haoran Yin",
"Xiangyu Li",
"Xinbang Zhang",
"Ying Zhang",
"Wenyu Liu",
"Qian Zhang",
"Xinggang Wang"
] | Existing end-to-end autonomous driving (AD) algorithms typically follow the Imitation Learning (IL) paradigm, which faces challenges such as causal confusion and the open-loop gap. In this work, we establish a 3DGS-based closed-loop Reinforcement Learning (RL) training paradigm. By leveraging 3DGS techniques, we construct a photorealistic digital replica of the real physical world, enabling the AD policy to extensively explore the state space and learn to handle out-of-distribution scenarios through large-scale trial and error. To enhance safety, we design specialized rewards that guide the policy to effectively respond to safety-critical events and understand real-world causal relationships. For better alignment with human driving behavior, IL is incorporated into RL training as a regularization term. We introduce a closed-loop evaluation benchmark consisting of diverse, previously unseen 3DGS environments. Compared to IL-based methods, RAD achieves stronger performance in most closed-loop metrics, especially a 3x lower collision rate. Abundant closed-loop results are presented at https://hgao-cv.github.io/RAD. |
|
2025-02-20T00:00:00 | 2502.13965 | Autellix: An Efficient Serving Engine for LLM Agents as General Programs | [
"Michael Luo",
"Xiaoxiang Shi",
"Colin Cai",
"Tianjun Zhang",
"Justin Wong",
"Yichuan Wang",
"Chi Wang",
"Yanping Huang",
"Zhifeng Chen",
"Joseph E. Gonzalez",
"Ion Stoica"
] | Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. Our analysis reveals that programs submitted to LLM serving engines experience long cumulative wait times, primarily due to head-of-line blocking at both the individual LLM request level and the program level. To address this, we introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies. Autellix intercepts LLM calls submitted by programs, enriching schedulers with program-level context. We propose two scheduling algorithms, one for single-threaded and one for distributed programs, that preempt and prioritize LLM calls based on their programs' previously completed calls. Our evaluation demonstrates that across diverse LLMs and agentic workloads, Autellix improves throughput of programs by 4-15x at the same latency compared to state-of-the-art systems, such as vLLM. |
|
2025-02-20T00:00:00 | 2502.13233 | SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering? | [
"Yucheng Shi",
"Tianze Yang",
"Canyu Chen",
"Quanzheng Li",
"Tianming Liu",
"Xiang Li",
"Ninghao Liu"
] | Large Language Models (LLMs) have shown remarkable capabilities in general domains but often struggle with tasks requiring specialized knowledge. Conventional Retrieval-Augmented Generation (RAG) techniques typically retrieve external information from static knowledge bases, which can be outdated or incomplete, missing fine-grained clinical details essential for accurate medical question answering. In this work, we propose SearchRAG, a novel framework that overcomes these limitations by leveraging real-time search engines. Our method employs synthetic query generation to convert complex medical questions into search-engine-friendly queries and utilizes uncertainty-based knowledge selection to filter and incorporate the most relevant and informative medical knowledge into the LLM's input. Experimental results demonstrate that our method significantly improves response accuracy in medical question answering tasks, particularly for complex questions requiring detailed and up-to-date knowledge. |
|
2025-02-20T00:00:00 | 2502.13347 | Craw4LLM: Efficient Web Crawling for LLM Pretraining | [
"Shi Yu",
"Zhiyuan Liu",
"Chenyan Xiong"
] | https://github.com/cxcscmu/Crawl4LLM | Web crawls are a main source of pretraining data for large language models (LLMs), but the majority of crawled web pages are discarded in pretraining due to low data quality. This paper presents Crawl4LLM, an efficient web crawling method that explores the web graph based on the preference of LLM pretraining. Specifically, it leverages the influence of a webpage in LLM pretraining as the priority score of the web crawler's scheduler, replacing the standard graph connectivity based priority. Our experiments on a web graph containing 900 million webpages from a commercial search engine's index demonstrate the efficiency of Crawl4LLM in obtaining high-quality pretraining data. With just 21% of URLs crawled, LLMs pretrained on Crawl4LLM data match the downstream performance of previous crawls, significantly reducing the crawling waste and alleviating the burdens on websites. Our code is publicly available at https://github.com/cxcscmu/Crawl4LLM. |
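The scheduler change described above, prioritizing by pretraining influence rather than graph connectivity, fits in a short sketch. Here `fetch`, `outlinks`, and `influence_score` are assumed callables (page download, link extraction, and a data-quality/influence scorer); the real system's scorer is far more involved.

```python
import heapq

def crawl(seed_urls, fetch, outlinks, influence_score, budget):
    """Greedy crawler sketch in the spirit of Crawl4LLM: a max-heap
    frontier ordered by an LLM-pretraining influence score."""
    frontier = [(-influence_score(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    seen, corpus = set(seed_urls), []
    while frontier and len(corpus) < budget:
        _, url = heapq.heappop(frontier)   # highest-score URL first
        page = fetch(url)
        corpus.append(page)
        for link in outlinks(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-influence_score(link), link))
    return corpus

# Toy four-page web, scored by a hypothetical influence table.
toy_web = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
pages = crawl(["a"], fetch=lambda u: u, outlinks=lambda p: toy_web[p],
              influence_score=lambda u: {"a": 1, "b": 3, "c": 2, "d": 5}[u],
              budget=3)
print(pages)  # ['a', 'b', 'd']: the crawler chases high-influence pages
```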
2025-02-20T00:00:00 | 2502.13923 | Qwen2.5-VL Technical Report | [
"Shuai Bai",
"Keqin Chen",
"Xuejing Liu",
"Jialin Wang",
"Wenbin Ge",
"Sibo Song",
"Kai Dang",
"Peng Wang",
"Shijie Wang",
"Jun Tang",
"Humen Zhong",
"Yuanzhi Zhu",
"Mingkun Yang",
"Zhaohai Li",
"Jianqiang Wan",
"Pengfei Wang",
"Wei Ding",
"Zheren Fu",
"Yiheng Xu",
"Jiabo Ye",
"Xi Zhang",
"Tianbao Xie",
"Zesen Cheng",
"Hang Zhang",
"Zhibo Yang",
"Haiyang Xu",
"Junyang Lin"
] | We introduce Qwen2.5-VL, the latest flagship model of Qwen vision-language series, which demonstrates significant advancements in both foundational capabilities and innovative functionalities. Qwen2.5-VL achieves a major leap forward in understanding and interacting with the world through enhanced visual recognition, precise object localization, robust document parsing, and long-video comprehension. A standout feature of Qwen2.5-VL is its ability to localize objects using bounding boxes or points accurately. It provides robust structured data extraction from invoices, forms, and tables, as well as detailed analysis of charts, diagrams, and layouts. To handle complex inputs, Qwen2.5-VL introduces dynamic resolution processing and absolute time encoding, enabling it to process images of varying sizes and videos of extended durations (up to hours) with second-level event localization. This allows the model to natively perceive spatial scales and temporal dynamics without relying on traditional normalization techniques. By training a native dynamic-resolution Vision Transformer (ViT) from scratch and incorporating Window Attention, we reduce computational overhead while maintaining native resolution. As a result, Qwen2.5-VL excels not only in static image and document understanding but also as an interactive visual agent capable of reasoning, tool usage, and task execution in real-world scenarios such as operating computers and mobile devices. Qwen2.5-VL is available in three sizes, addressing diverse use cases from edge AI to high-performance computing. The flagship Qwen2.5-VL-72B model matches state-of-the-art models like GPT-4o and Claude 3.5 Sonnet, particularly excelling in document and diagram understanding. Additionally, Qwen2.5-VL maintains robust linguistic performance, preserving the core language competencies of the Qwen2.5 LLM. |
|
2025-02-20T00:00:00 | 2502.13943 | AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence | [
"Yuliang Liu",
"Junjie Lu",
"Zhaoling Chen",
"Chaofeng Qu",
"Jason Klein Liu",
"Chonghan Liu",
"Zefan Cai",
"Yunhui Xia",
"Li Zhao",
"Jiang Bian",
"Chuheng Zhang",
"Wei Shen",
"Zhouhan Lin"
] | Current approaches for training Process Reward Models (PRMs) often involve breaking down responses into multiple reasoning steps using rule-based techniques, such as using predefined placeholder tokens or setting the reasoning step's length to a fixed size. These approaches overlook the fact that specific words do not typically mark true decision points in a text. To address this, we propose AdaptiveStep, a method that divides reasoning steps based on the model's confidence in predicting the next word. This division method provides more decision-making information at each step, enhancing downstream tasks, such as reward model learning. Moreover, our method does not require manual annotation. We demonstrate its effectiveness through experiments with AdaptiveStep-trained PRMs in mathematical reasoning and code generation tasks. Experimental results indicate that the resulting PRM achieves state-of-the-art Best-of-N performance, surpassing the greedy search strategy with token-level value-guided decoding, while also reducing construction costs by over 30% compared to existing open-source PRMs. In addition, we provide a thorough analysis and case study on the PRM's performance, transferability, and generalization capabilities. |
|
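A sketch of confidence-based step division as described above, assuming access to per-token prediction probabilities; the boundary convention (split after the low-confidence token) and the threshold value are illustrative.

```python
def adaptive_step_split(tokens, token_probs, threshold=0.7):
    """Split a reasoning trace into steps at low-confidence positions:
    a step ends wherever the model's probability for a token fell
    below a threshold, i.e., at genuine decision points rather than at
    fixed delimiters."""
    steps, current = [], []
    for tok, p in zip(tokens, token_probs):
        current.append(tok)
        if p < threshold:          # low confidence -> decision point
            steps.append(current)
            current = []
    if current:
        steps.append(current)
    return steps

toks = ["First", ",", "compute", "4", "*", "7", "=", "28", "."]
probs = [0.9, 0.95, 0.9, 0.5, 0.9, 0.9, 0.92, 0.6, 0.99]
print(adaptive_step_split(toks, probs))
# [['First', ',', 'compute', '4'], ['*', '7', '=', '28'], ['.']]
```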
2025-02-20T00:00:00 | 2502.12638 | NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation | [
"Zhiyuan Liu",
"Yanchen Luo",
"Han Huang",
"Enzhi Zhang",
"Sihang Li",
"Junfeng Fang",
"Yaorui Shi",
"Xiang Wang",
"Kenji Kawaguchi",
"Tat-Seng Chua"
] | https://github.com/acharkq/NExT-Mol | 3D molecule generation is crucial for drug discovery and material design. While prior efforts focus on 3D diffusion models for their benefits in modeling continuous 3D conformers, they overlook the advantages of 1D SELFIES-based Language Models (LMs), which can generate 100% valid molecules and leverage the billion-scale 1D molecule datasets. To combine these advantages for 3D molecule generation, we propose a foundation model -- NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation. NExT-Mol uses an extensively pretrained molecule LM for 1D molecule generation, and subsequently predicts the generated molecule's 3D conformers with a 3D diffusion model. We enhance NExT-Mol's performance by scaling up the LM's model size, refining the diffusion neural architecture, and applying 1D to 3D transfer learning. Notably, our 1D molecule LM significantly outperforms baselines in distributional similarity while ensuring validity, and our 3D diffusion model achieves leading performances in conformer prediction. Given these improvements in 1D and 3D modeling, NExT-Mol achieves a 26% relative improvement in 3D FCD for de novo 3D generation on GEOM-DRUGS, and a 13% average relative gain for conditional 3D generation on QM9-2014. Our codes and pretrained checkpoints are available at https://github.com/acharkq/NExT-Mol. |
2025-02-20T00:00:00 | 2502.13173 | Thinking Preference Optimization | [
"Wang Yang",
"Hongye Jin",
"Jingfeng Yang",
"Vipin Chaudhary",
"Xiaotian Han"
] | Supervised Fine-Tuning (SFT) has been a go-to and effective method for enhancing long chain-of-thought (CoT) reasoning in relatively small LLMs by fine-tuning them with long CoT responses from larger LLMs. To continually improve reasoning abilities, we can either collect new high-quality long CoT reasoning SFT data or repeatedly train on existing SFT datasets. However, acquiring new long CoT SFT data is costly and limited, while repeated training often results in a performance plateau or decline. To further boost the performance with the SFT data, we propose Thinking Preference Optimization (ThinkPO), a simple yet effective post-SFT method that enhances long CoT reasoning without requiring new long CoT responses. Instead, ThinkPO utilizes readily available or easily obtainable short CoT reasoning responses as rejected answers and long CoT responses as chosen answers for the same question. It then applies direct preference optimization to encourage the model to favor longer reasoning outputs. Experiments show that ThinkPO further improves the reasoning performance of SFT-ed models, e.g. it increases math reasoning accuracy of SFT-ed models by 8.6% and output length by 25.9%. Notably, ThinkPO is capable of continually boosting the performance of the publicly distilled SFT model, e.g., increasing the official DeepSeek-R1-Distill-Qwen-7B's performance on MATH500 from 87.4% to 91.2%. |
|
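The pair-construction step above is simple to sketch, assuming parallel lists of questions with long-CoT (chosen) and short-CoT (rejected) responses; the resulting records feed a standard DPO trainer.

```python
def build_thinkpo_pairs(questions, long_cot, short_cot):
    """Construct DPO preference pairs as described above: for each
    question, the readily available short-CoT response is the
    rejected answer and the long-CoT response is the chosen one,
    steering the model toward longer reasoning."""
    return [
        {"prompt": q, "chosen": long_r, "rejected": short_r}
        for q, long_r, short_r in zip(questions, long_cot, short_cot)
    ]

pairs = build_thinkpo_pairs(
    ["What is 12 * 13?"],
    ["Let's think step by step: 12 * 13 = 12 * 10 + 12 * 3 = 156."],
    ["156."],
)
print(pairs[0]["rejected"])  # the short answer the model is pushed away from
```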
2025-02-20T00:00:00 | 2502.13946 | Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region | [
"Chak Tou Leong",
"Qingyu Yin",
"Jian Wang",
"Wenjie Li"
] | The safety alignment of large language models (LLMs) remains vulnerable, as their initial behavior can be easily jailbroken by even relatively simple attacks. Since infilling a fixed template between the input instruction and initial model output is a common practice for existing LLMs, we hypothesize that this template is a key factor behind their vulnerabilities: LLMs' safety-related decision-making overly relies on the aggregated information from the template region, which largely influences these models' safety behavior. We refer to this issue as template-anchored safety alignment. In this paper, we conduct extensive experiments and verify that template-anchored safety alignment is widespread across various aligned LLMs. Our mechanistic analyses demonstrate how it leads to models' susceptibility when encountering inference-time jailbreak attacks. Furthermore, we show that detaching safety mechanisms from the template region is promising in mitigating vulnerabilities to jailbreak attacks. We encourage future research to develop more robust safety alignment techniques that reduce reliance on the template region. |
|
2025-02-20T00:00:00 | 2502.13128 | SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation | [
"Zihan Liu",
"Shuangrui Ding",
"Zhixiong Zhang",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Zang",
"Yuhang Cao",
"Dahua Lin",
"Jiaqi Wang"
] | https://github.com/LiuZH-19/SongGen | Text-to-song generation, the task of creating vocals and accompaniment from textual inputs, poses significant challenges due to domain complexity and data scarcity. Existing approaches often employ multi-stage generation procedures, resulting in cumbersome training and inference pipelines. In this paper, we propose SongGen, a fully open-source, single-stage auto-regressive transformer designed for controllable song generation. The proposed model facilitates fine-grained control over diverse musical attributes, including lyrics and textual descriptions of instrumentation, genre, mood, and timbre, while also offering an optional three-second reference clip for voice cloning. Within a unified auto-regressive framework, SongGen supports two output modes: mixed mode, which generates a mixture of vocals and accompaniment directly, and dual-track mode, which synthesizes them separately for greater flexibility in downstream applications. We explore diverse token pattern strategies for each mode, leading to notable improvements and valuable insights. Furthermore, we design an automated data preprocessing pipeline with effective quality control. To foster community engagement and future research, we will release our model weights, training code, annotated data, and preprocessing pipeline. The generated samples are showcased on our project page at https://liuzh-19.github.io/SongGen/, and the code will be available at https://github.com/LiuZH-19/SongGen. |
2025-02-20T00:00:00 | 2502.11995 | Presumed Cultural Identity: How Names Shape LLM Responses | [
"Siddhesh Pawar",
"Arnav Arora",
"Lucie-Aimée Kaffee",
"Isabelle Augenstein"
] | Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important point of information for personalisation. Names can enter chatbot conversations through direct user input (requested by chatbots), as part of task contexts such as CV reviews, or as built-in memory features that store user information for personalisation. We study biases associated with names by measuring cultural presumptions in the responses generated by LLMs when presented with common suggestion-seeking queries, which might involve making assumptions about the user. Our analyses demonstrate strong assumptions about cultural identity associated with names present in LLM generations across multiple cultures. Our work has implications for designing more nuanced personalisation systems that avoid reinforcing stereotypes while maintaining meaningful customisation. |
|
2025-02-20T00:00:00 | 2502.13685 | MoM: Linear Sequence Modeling with Mixture-of-Memories | [
"Jusen Du",
"Weigao Sun",
"Disen Lan",
"Jiaxi Hu",
"Yu Cheng"
] | Linear sequence modeling methods, such as linear attention, state space modeling, and linear RNNs, offer significant efficiency improvements by reducing the complexity of training and inference. However, these methods typically compress the entire input sequence into a single fixed-size memory state, which leads to suboptimal performance on recall-intensive downstream tasks. Drawing inspiration from neuroscience, particularly the brain's ability to maintain robust long-term memory while mitigating "memory interference", we introduce a novel architecture called Mixture-of-Memories (MoM). MoM utilizes multiple independent memory states, with a router network directing input tokens to specific memory states. This approach greatly enhances the overall memory capacity while minimizing memory interference. As a result, MoM performs exceptionally well on recall-intensive tasks, surpassing existing linear sequence modeling techniques. Despite incorporating multiple memory states, the computation of each memory state remains linear in complexity, allowing MoM to retain the linear-complexity advantage during training and constant complexity during inference. Our experimental results show that MoM significantly outperforms current linear sequence models on downstream language tasks, particularly recall-intensive tasks, and even achieves performance comparable to Transformer models. The code is released at https://github.com/OpenSparseLLMs/MoM and is also released as a part of https://github.com/OpenSparseLLMs/Linear-MoE. |
|
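A heavily simplified PyTorch sketch of the mixture-of-memories idea above: several linear-attention memory matrices with hard top-1 routing of each token's write. The paper routes to multiple memories with differentiable gating; the hard argmax here is non-differentiable and purely for illustration.

```python
import torch
import torch.nn as nn

class MixtureOfMemories(nn.Module):
    """K independent outer-product memory states; a router picks which
    memory each token writes to and reads from."""
    def __init__(self, dim, num_memories=4):
        super().__init__()
        self.router = nn.Linear(dim, num_memories)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.num_memories, self.dim = num_memories, dim

    def forward(self, x):  # x: [seq_len, dim], batch dim omitted
        mem = x.new_zeros(self.num_memories, self.dim, self.dim)
        outs = []
        for t in range(x.shape[0]):
            q, k, v = self.qkv(x[t]).chunk(3)
            m = int(self.router(x[t]).argmax())     # hard top-1 routing
            mem[m] = mem[m] + torch.outer(k, v)     # write to one memory
            outs.append(q @ mem[m])                 # read from the same one
        return torch.stack(outs)

x = torch.randn(6, 8)
print(MixtureOfMemories(8)(x).shape)  # torch.Size([6, 8])
```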
2025-02-20T00:00:00 | 2502.13581 | ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation | [
"Yupeng Hou",
"Jianmo Ni",
"Zhankui He",
"Noveen Sachdeva",
"Wang-Cheng Kang",
"Ed H. Chi",
"Julian McAuley",
"Derek Zhiyuan Cheng"
] | Generative recommendation (GR) is an emerging paradigm where user actions are tokenized into discrete token patterns and autoregressively generated as predictions. However, existing GR models tokenize each action independently, assigning the same fixed tokens to identical actions across all sequences without considering contextual relationships. This lack of context-awareness can lead to suboptimal performance, as the same action may hold different meanings depending on its surrounding context. To address this issue, we propose ActionPiece to explicitly incorporate context when tokenizing action sequences. In ActionPiece, each action is represented as a set of item features, which serve as the initial tokens. Given the action sequence corpora, we construct the vocabulary by merging feature patterns as new tokens, based on their co-occurrence frequency both within individual sets and across adjacent sets. Considering the unordered nature of feature sets, we further introduce set permutation regularization, which produces multiple segmentations of action sequences with the same semantics. Experiments on public datasets demonstrate that ActionPiece consistently outperforms existing action tokenization methods, improving NDCG@10 by 6.00% to 12.82%. |
|
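One vocabulary-construction step of the merging procedure above can be sketched as BPE-style pair counting over feature sets. The relative weighting of within-set versus cross-set co-occurrence, and cross-set merging itself, are details omitted here.

```python
from collections import Counter
from itertools import combinations

def count_cooccurrence(sequences):
    """Count candidate token pairs: pairs co-occurring inside one
    action's feature set, plus pairs spanning adjacent actions. Each
    sequence is a list of frozensets of feature tokens."""
    counts = Counter()
    for seq in sequences:
        for fs in seq:                                 # within a set
            counts.update(frozenset(p) for p in combinations(sorted(fs), 2))
        for a, b in zip(seq, seq[1:]):                 # across adjacent sets
            counts.update(frozenset((x, y)) for x in a for y in b if x != y)
    return counts

def merge_step(sequences):
    """Merge the most frequent pair into a new token wherever both
    members appear in the same feature set."""
    pair, _ = count_cooccurrence(sequences).most_common(1)[0]
    new_tok = "+".join(sorted(pair))
    merged = [[frozenset((fs - pair) | {new_tok}) if pair <= fs else fs
               for fs in seq] for seq in sequences]
    return merged, new_tok

seqs = [[frozenset({"cat:shoes", "brand:acme"}),
         frozenset({"cat:shoes", "color:red"})]]
merged, tok = merge_step(seqs)  # ties broken arbitrarily in this toy demo
print(tok, merged)
```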
2025-02-20T00:00:00 | 2502.11573 | InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning | [
"Congkai Xie",
"Shuo Cai",
"Wenjun Wang",
"Pengxiang Li",
"Zhijie Sang",
"Kejing Yang",
"Yiming Zhang",
"Zhen Li",
"Guanghao Zhu",
"Zeyu Liu",
"Yang Yu",
"Yuhang Liu",
"Su Lu",
"Baoyi He",
"Qi Zhou",
"Xiaotian Han",
"Jianbo Yuan",
"Shengyu Zhang",
"Fei Wu",
"Hongxia Yang"
] | Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have made significant advancements in reasoning capabilities. However, they still face challenges such as high computational demands and privacy concerns. This paper focuses on developing efficient Small Language Models (SLMs) and Multimodal Small Language Models (MSLMs) that retain competitive reasoning abilities. We introduce a novel training pipeline that enhances reasoning capabilities and facilitates deployment on edge devices, achieving state-of-the-art performance while minimizing development costs. InfiR aims to advance AI systems by improving reasoning, reducing adoption barriers, and addressing privacy concerns through smaller model sizes. Resources are available at https://github.com/Reallm-Labs/InfiR. |
|
2025-02-20T00:00:00 | 2502.13766 | GIMMICK -- Globally Inclusive Multimodal Multitask Cultural Knowledge Benchmarking | [
"Florian Schneider",
"Carolin Holtermann",
"Chris Biemann",
"Anne Lauscher"
] | Large Vision-Language Models (LVLMs) have recently gained attention due to their distinctive performance and broad applicability. While it has been previously shown that their efficacy in usage scenarios involving non-Western contexts falls short, existing studies are limited in scope, covering just a narrow range of cultures, focusing exclusively on a small number of cultural aspects, or evaluating a limited selection of models on a single task only. Towards globally inclusive LVLM research, we introduce GIMMICK, an extensive multimodal benchmark designed to assess a broad spectrum of cultural knowledge across 144 countries representing six global macro-regions. GIMMICK comprises six tasks built upon three new datasets that span 728 unique cultural events or facets on which we evaluated 20 LVLMs and 11 LLMs, including five proprietary and 26 open-weight models of all sizes. We systematically examine (1) regional cultural biases, (2) the influence of model size, (3) input modalities, and (4) external cues. Our analyses reveal strong biases toward Western cultures across models and tasks and highlight strong correlations between model size and performance, as well as the effectiveness of multimodal input and external geographic cues. We further find that models have more knowledge of tangible than intangible aspects (e.g., food vs. rituals) and that they excel in recognizing broad cultural origins but struggle with a more nuanced understanding. |
|
2025-02-20T00:00:00 | 2502.13573 | Noise May Contain Transferable Knowledge: Understanding Semi-supervised Heterogeneous Domain Adaptation from an Empirical Perspective | [
"Yuan Yao",
"Xiaopu Zhang",
"Yu Zhang",
"Jian Jin",
"Qiang Yang"
] | https://github.com/yyyaoyuan/SHDA | Semi-supervised heterogeneous domain adaptation (SHDA) addresses learning across domains with distinct feature representations and distributions, where source samples are labeled while most target samples are unlabeled, with only a small fraction labeled. Moreover, there is no one-to-one correspondence between source and target samples. Although various SHDA methods have been developed to tackle this problem, the nature of the knowledge transferred across heterogeneous domains remains unclear. This paper delves into this question from an empirical perspective. We conduct extensive experiments on about 330 SHDA tasks, employing two supervised learning methods and seven representative SHDA methods. Surprisingly, our observations indicate that both the category and feature information of source samples do not significantly impact the performance of the target domain. Additionally, noise drawn from simple distributions, when used as source samples, may contain transferable knowledge. Based on this insight, we perform a series of experiments to uncover the underlying principles of transferable knowledge in SHDA. Specifically, we design a unified Knowledge Transfer Framework (KTF) for SHDA. Based on the KTF, we find that the transferable knowledge in SHDA primarily stems from the transferability and discriminability of the source domain. Consequently, ensuring those properties in source samples, regardless of their origin (e.g., image, text, noise), can enhance the effectiveness of knowledge transfer in SHDA tasks. The codes and datasets are available at https://github.com/yyyaoyuan/SHDA. |
|
2025-02-20T00:00:00 | 2502.13533 | Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models | [
"Jun Zhang",
"Jue Wang",
"Huan Li",
"Lidan Shou",
"Ke Chen",
"Yang You",
"Guiming Xie",
"Xuejian Gong",
"Kunlong Zhou"
] | Large Language Models (LLMs) have significantly advanced natural language processing with exceptional task generalization capabilities. Low-Rank Adaptation (LoRA) offers a cost-effective fine-tuning solution, freezing the original model parameters and training only lightweight, low-rank adapter matrices. However, the memory footprint of LoRA is largely dominated by the original model parameters. To mitigate this, we propose LoRAM, a memory-efficient LoRA training scheme founded on the intuition that many neurons in over-parameterized LLMs have low training utility but are essential for inference. LoRAM presents a unique twist: it trains on a pruned (small) model to obtain pruned low-rank matrices, which are then recovered and utilized with the original (large) model for inference. Additionally, minimal-cost continual pre-training, performed by the model publishers in advance, aligns the knowledge discrepancy between pruned and original models. Our extensive experiments demonstrate the efficacy of LoRAM across various pruning strategies and downstream tasks. For a model with 70 billion parameters, LoRAM enables training on a GPU with only 20G HBM, replacing an A100-80G GPU for LoRA training and 15 GPUs for full fine-tuning. Specifically, QLoRAM, implemented by structured pruning combined with 4-bit quantization for LLaMA-3.1-70B (LLaMA-2-70B), reduces the parameter storage cost that dominates the memory usage in low-rank matrix training by 15.81× (16.95×), while achieving dominant performance gains over both the original LLaMA-3.1-70B (LLaMA-2-70B) and LoRA-trained LLaMA-3.1-8B (LLaMA-2-13B). |
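The recover step at the heart of LoRAM can be illustrated compactly. The sketch below assumes structured pruning with known kept row/column indices; names and shapes are placeholders, not the paper's code.

```python
import torch

def recover_lora(B_pruned, A_pruned, kept_rows, kept_cols, d_out, d_in):
    """Map low-rank factors trained on a pruned (small) model back into the
    original (large) model's dimensions; pruned positions contribute nothing.
    A sketch of LoRAM's recovery under assumed structured pruning.
    """
    r = A_pruned.shape[0]
    B = torch.zeros(d_out, r)
    A = torch.zeros(r, d_in)
    B[kept_rows] = B_pruned        # rows pruned away stay zero
    A[:, kept_cols] = A_pruned     # likewise for columns of A
    return B, A                    # inference then uses W_large + B @ A

B_small, A_small = torch.randn(3, 4), torch.randn(4, 5)
B, A = recover_lora(B_small, A_small, kept_rows=[0, 2, 6],
                    kept_cols=[1, 3, 4, 7, 9], d_out=8, d_in=12)
print((B @ A).shape)  # torch.Size([8, 12]) -- full-size update from a small model
```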
|
2025-02-20T00:00:00 | 2502.13622 | REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models | [
"DongGeon Lee",
"Hwanjo Yu"
] | Hallucinations in large language model (LLM) outputs severely limit their reliability in knowledge-intensive tasks such as question answering. To address this challenge, we introduce REFIND (Retrieval-augmented Factuality hallucINation Detection), a novel framework that detects hallucinated spans within LLM outputs by directly leveraging retrieved documents. As part of REFIND, we propose the Context Sensitivity Ratio (CSR), a novel metric that quantifies the sensitivity of LLM outputs to retrieved evidence. This innovative approach enables REFIND to efficiently and accurately detect hallucinations, setting it apart from existing methods. In the evaluation, REFIND demonstrated robustness across nine languages, including low-resource settings, and significantly outperformed baseline models, achieving superior IoU scores in identifying hallucinated spans. This work highlights the effectiveness of quantifying context sensitivity for hallucination detection, thereby paving the way for more reliable and trustworthy LLM applications across diverse languages. |
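The abstract does not spell out how CSR is computed, but one plausible reading is a per-token probability ratio with and without the retrieved documents in context. The sketch below illustrates that reading only; the paper's exact definition and decision rule may differ.

```python
import torch

def context_sensitivity_ratio(logp_with_ctx, logp_without_ctx):
    """One plausible reading of the Context Sensitivity Ratio: how much each
    output token's probability changes once the retrieved documents are
    prepended to the prompt. The paper's exact definition and the threshold
    used to flag spans may differ; treat this as an illustration only.
    """
    return (logp_with_ctx - logp_without_ctx).exp()

# Token log-probs under the same frozen LLM, with and without retrieval.
logp_ctx = torch.tensor([-0.2, -3.1, -0.4])
logp_no_ctx = torch.tensor([-2.5, -3.0, -2.6])
csr = context_sensitivity_ratio(logp_ctx, logp_no_ctx)
print(csr)        # tokens with csr near 1 barely react to the evidence
print(csr < 1.5)  # hypothetical rule: low sensitivity -> hallucination candidate
```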
|
2025-02-20T00:00:00 | 2502.13917 | TESS 2: A Large-Scale Generalist Diffusion Language Model | [
"Jaesung Tae",
"Hamish Ivison",
"Sachin Kumar",
"Arman Cohan"
] | https://github.com/hamishivi/tess-2 | We introduce TESS 2, a general instruction-following diffusion language model that outperforms contemporary instruction-tuned diffusion models, as well as matches and sometimes exceeds strong autoregressive (AR) models. We train TESS 2 by first adapting a strong AR model via continued pretraining with the usual cross-entropy used as the diffusion loss, and then performing further instruction tuning. We find that adaptation training as well as the choice of the base model is crucial for training good instruction-following diffusion models. We further propose reward guidance, a novel and modular inference-time guidance procedure to align model outputs without needing to train the underlying model. Finally, we show that TESS 2 further improves with increased inference-time compute, highlighting the utility of diffusion LMs in having fine-grained controllability over the amount of compute used at inference time. Code and models are available at https://github.com/hamishivi/tess-2. |
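Reward guidance is described as modular and training-free, which suggests a gradient-based nudge at inference time. The sketch below shows one way such guidance could look for a single denoising step; the step size alpha and the point at which the gradient is taken are assumptions, not TESS 2's exact formulation.

```python
import torch

def reward_guided_logits(logits, reward_fn, alpha=1.0):
    """Nudge a diffusion LM's predicted token simplex toward higher reward
    at inference time, without touching model weights. Hypothetical sketch.
    """
    probs = torch.softmax(logits.detach(), dim=-1).requires_grad_(True)
    reward_fn(probs).backward()            # scalar reward over the simplex
    return logits + alpha * probs.grad     # guided logits for this step

logits = torch.randn(6, 100)               # 6 positions, 100-token toy vocab
reward = lambda p: p[:, 42].sum()           # toy reward: prefer token 42
print(reward_guided_logits(logits, reward)[0, 42])
```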
|
2025-02-20T00:00:00 | 2502.12752 | High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion | [
"Xiang Zhang",
"Yang Zhang",
"Lukas Mehl",
"Markus Gross",
"Christopher Schroers"
] | Despite recent advances in Novel View Synthesis (NVS), generating high-fidelity views from single or sparse observations remains a significant challenge. Existing splatting-based approaches often produce distorted geometry due to splatting errors. While diffusion-based methods leverage rich 3D priors to achieve improved geometry, they often suffer from texture hallucination. In this paper, we introduce SplatDiff, a pixel-splatting-guided video diffusion model designed to synthesize high-fidelity novel views from a single image. Specifically, we propose an aligned synthesis strategy for precise control of target viewpoints and geometry-consistent view synthesis. To mitigate texture hallucination, we design a texture bridge module that enables high-fidelity texture generation through adaptive feature fusion. In this manner, SplatDiff leverages the strengths of splatting and diffusion to generate novel views with consistent geometry and high-fidelity details. Extensive experiments verify the state-of-the-art performance of SplatDiff in single-view NVS. Additionally, without extra training, SplatDiff shows remarkable zero-shot performance across diverse tasks, including sparse-view NVS and stereo video conversion. |
|
2025-02-20T00:00:00 | 2502.13595 | MMTEB: Massive Multilingual Text Embedding Benchmark | [
"Kenneth Enevoldsen",
"Isaac Chung",
"Imene Kerboua",
"Márton Kardos",
"Ashwin Mathur",
"David Stap",
"Jay Gala",
"Wissam Siblini",
"Dominik Krzemiński",
"Genta Indra Winata",
"Saba Sturua",
"Saiteja Utpala",
"Mathieu Ciancone",
"Marion Schaeffer",
"Gabriel Sequeira",
"Diganta Misra",
"Shreeya Dhakal",
"Jonathan Rystrøm",
"Roman Solomatin",
"Ömer Çağatan",
"Akash Kundu",
"Martin Bernstorff",
"Shitao Xiao",
"Akshita Sukhlecha",
"Bhavish Pahwa",
"Rafał Poświata",
"Kranthi Kiran GV",
"Shawon Ashraf",
"Daniel Auras",
"Björn Plüster",
"Jan Philipp Harries",
"Loïc Magne",
"Isabelle Mohr",
"Mariya Hendriksen",
"Dawei Zhu",
"Hippolyte Gisserot-Boukhlef",
"Tom Aarsen",
"Jan Kostkan",
"Konrad Wojtasik",
"Taemin Lee",
"Marek Šuppa",
"Crystina Zhang",
"Roberta Rocca",
"Mohammed Hamdy",
"Andrianos Michail",
"John Yang",
"Manuel Faysse",
"Aleksei Vatolin",
"Nandan Thakur",
"Manan Dey",
"Dipam Vasani",
"Pranjal Chitale",
"Simone Tedeschi",
"Nguyen Tai",
"Artem Snegirev",
"Michael Günther",
"Mengzhou Xia",
"Weijia Shi",
"Xing Han Lù",
"Jordan Clive",
"Gayatri Krishnakumar",
"Anna Maksimova",
"Silvan Wehrli",
"Maria Tikhonova",
"Henil Panchal",
"Aleksandr Abramov",
"Malte Ostendorff",
"Zheng Liu",
"Simon Clematide",
"Lester James Miranda",
"Alena Fenogenova",
"Guangyu Song",
"Ruqiya Bin Safi",
"Wen-Ding Li",
"Alessia Borghini",
"Federico Cassano",
"Hongjin Su",
"Jimmy Lin",
"Howard Yen",
"Lasse Hansen",
"Sara Hooker",
"Chenghao Xiao",
"Vaibhav Adlakha",
"Orion Weller",
"Siva Reddy",
"Niklas Muennighoff"
] | Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost. |
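The correlation-based downsampling lends itself to a small illustration. The greedy sketch below repeatedly adds the task least correlated, across models' scores, with the tasks already chosen; MMTEB's actual procedure additionally verifies that relative model rankings are preserved, which this toy version omits.

```python
import numpy as np

def select_tasks(scores, k):
    """Pick k tasks that are mutually decorrelated.

    scores: (n_models, n_tasks) matrix of per-task model scores.
    Sketch of the inter-task-correlation idea only.
    """
    corr = np.corrcoef(scores.T)                    # task-by-task correlation
    chosen = [int(np.argmax(scores.var(axis=0)))]   # seed: most model-separating task
    while len(chosen) < k:
        max_corr = np.abs(corr[:, chosen]).max(axis=1)
        max_corr[chosen] = np.inf                   # never re-pick a chosen task
        chosen.append(int(np.argmin(max_corr)))
    return chosen

scores = np.random.rand(12, 30)                     # 12 models, 30 tasks (toy data)
print(select_tasks(scores, k=5))
```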
|
2025-02-20T00:00:00 | 2502.12852 | MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching | [
"Fabian David Schmidt",
"Florian Schneider",
"Chris Biemann",
"Goran Glavaš"
] | Existing multilingual vision-language (VL) benchmarks often only cover a handful of languages. Consequently, evaluations of large vision-language models (LVLMs) predominantly target high-resource languages, underscoring the need for evaluation data for low-resource languages. To address this limitation, we introduce MVL-SIB, a massively multilingual vision-language benchmark that evaluates both cross-modal and text-only topical matching across 205 languages -- over 100 more than the most multilingual existing VL benchmarks encompass. We then benchmark a range of open-weight LVLMs together with GPT-4o(-mini) on MVL-SIB. Our results reveal that LVLMs struggle in cross-modal topic matching in lower-resource languages, performing no better than chance on languages like N'Koo. Our analysis further reveals that VL support in LVLMs declines disproportionately relative to textual support for lower-resource languages, as evidenced by comparison of cross-modal and text-only topical matching performance. We further observe that open-weight LVLMs do not benefit from representing a topic with more than one image, suggesting that these models are not yet fully effective at handling multi-image tasks. By correlating performance on MVL-SIB with other multilingual VL benchmarks, we highlight that MVL-SIB serves as a comprehensive probe of multilingual VL understanding in LVLMs. |
|
2025-02-20T00:00:00 | 2502.13138 | AIDE: AI-Driven Exploration in the Space of Code | [
"Zhengyao Jiang",
"Dominik Schmidt",
"Dhruv Srikanth",
"Dixing Xu",
"Ian Kaplan",
"Deniss Jacenko",
"Yuxiang Wu"
] | Machine learning, the foundation of modern artificial intelligence, has driven innovations that have fundamentally transformed the world. Yet, behind these advancements lies a complex and often tedious process requiring labor- and compute-intensive iteration and experimentation. Engineers and scientists developing machine learning models spend much of their time on trial-and-error tasks instead of conceptualizing innovative solutions or research hypotheses. To address this challenge, we introduce AI-Driven Exploration (AIDE), a machine learning engineering agent powered by large language models (LLMs). AIDE frames machine learning engineering as a code optimization problem, and formulates trial-and-error as a tree search in the space of potential solutions. By strategically reusing and refining promising solutions, AIDE effectively trades computational resources for enhanced performance, achieving state-of-the-art results on multiple machine learning engineering benchmarks, including our Kaggle evaluations, OpenAI MLE-Bench and METR's RE-Bench. |
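AIDE's framing of trial-and-error as tree search over candidate solutions can be sketched in a few lines. Here `draft`, `improve`, and `evaluate` are hypothetical callables standing in for LLM proposal and experiment execution; this is not AIDE's actual API.

```python
import random

def aide_style_search(draft, improve, evaluate, steps=20, greedy=0.8):
    """Trial-and-error as tree search over candidate solutions, in the
    spirit of AIDE: mostly refine the best node found so far, with some
    exploration of other nodes. Sketch under assumed interfaces.
    """
    solutions = [draft()]                 # root of the solution tree
    scores = [evaluate(solutions[0])]
    for _ in range(steps):
        if random.random() < greedy:      # exploit: refine the best solution
            parent = max(range(len(solutions)), key=scores.__getitem__)
        else:                             # explore: refine a random one
            parent = random.randrange(len(solutions))
        solutions.append(improve(solutions[parent]))
        scores.append(evaluate(solutions[-1]))
    best = max(range(len(solutions)), key=scores.__getitem__)
    return solutions[best], scores[best]
```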
|
2025-02-20T00:00:00 | 2502.13369 | Reducing Hallucinations in Language Model-based SPARQL Query Generation Using Post-Generation Memory Retrieval | [
"Aditya Sharma",
"Luis Lara",
"Amal Zouaq",
"Christopher J. Pal"
] | The ability to generate SPARQL queries from natural language questions is crucial for ensuring efficient and accurate retrieval of structured data from knowledge graphs (KG). While large language models (LLMs) have been widely adopted for SPARQL query generation, they are often susceptible to hallucinations and out-of-distribution errors when producing KG elements like Uniform Resource Identifiers (URIs) based on internal parametric knowledge. This often results in content that appears plausible but is factually incorrect, posing significant challenges for their use in real-world information retrieval (IR) applications. This has led to increased research aimed at detecting and mitigating such errors. In this paper, we introduce PGMR (Post-Generation Memory Retrieval), a modular framework that incorporates a non-parametric memory module to retrieve KG elements and enhance LLM-based SPARQL query generation. Our experimental results indicate that PGMR consistently delivers strong performance across diverse datasets, data distributions, and LLMs. Notably, PGMR significantly mitigates URI hallucinations, nearly eliminating the problem in several scenarios. |
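The post-generation retrieval step is straightforward to sketch: the LLM writes queries with natural-language placeholders instead of URIs, and each placeholder is then swapped for its nearest KG element from a non-parametric memory. All names below are assumptions about PGMR's interface, not its released code.

```python
def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

def pgmr_rewrite(sparql_with_placeholders, placeholders, kg_memory, embed):
    """Replace each natural-language placeholder the LLM emitted with its
    nearest KG element (URI) retrieved from a non-parametric memory, so the
    LLM never has to produce URIs from parametric knowledge. Sketch only.

    kg_memory: dict mapping URI -> embedding; embed: text -> embedding.
    """
    query = sparql_with_placeholders
    for span in placeholders:
        vec = embed(span)
        best_uri = max(kg_memory, key=lambda uri: cosine(vec, kg_memory[uri]))
        query = query.replace(span, best_uri)
    return query
```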
|
2025-02-20T00:00:00 | 2502.13908 | Judging the Judges: A Collection of LLM-Generated Relevance Judgements | [
"Hossein A. Rahmani",
"Clemencia Siro",
"Mohammad Aliannejadi",
"Nick Craswell",
"Charles L. A. Clarke",
"Guglielmo Faggioli",
"Bhaskar Mitra",
"Paul Thomas",
"Emine Yilmaz"
] | Using Large Language Models (LLMs) for relevance assessments offers promising opportunities to improve Information Retrieval (IR), Natural Language Processing (NLP), and related fields. Indeed, LLMs hold the promise of allowing IR experimenters to build evaluation collections with a fraction of the manual human labor currently required. This could help with fresh topics on which there is still limited knowledge and could mitigate the challenges of evaluating ranking systems in low-resource scenarios, where it is challenging to find human annotators. Given the fast-paced recent developments in the domain, many questions concerning LLMs as assessors are yet to be answered. Among the aspects that require further investigation, we can list the impact of various components in a relevance judgment generation pipeline, such as the prompt used or the LLM chosen. This paper benchmarks and reports on the results of a large-scale automatic relevance judgment evaluation, the LLMJudge challenge at SIGIR 2024, where different relevance assessment approaches were proposed. In detail, we release and benchmark 42 LLM-generated labels of the TREC 2023 Deep Learning track relevance judgments produced by eight international teams who participated in the challenge. Given their diverse nature, these automatically generated relevance judgments can help the community not only investigate systematic biases caused by LLMs but also explore the effectiveness of ensemble models, analyze the trade-offs between different models and human assessors, and advance methodologies for improving automated evaluation techniques. The released resource is available at the following link: https://llm4eval.github.io/LLMJudge-benchmark/ |
|
2025-02-20T00:00:00 | 2502.13791 | From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions | [
"Nathanaël Carraz Rakotonirina",
"Mohammed Hamdy",
"Jon Ander Campos",
"Lucas Weber",
"Alberto Testoni",
"Marzieh Fadaee",
"Sandro Pezzelle",
"Marco Del Tredici"
] | Large Language Models (LLMs) are increasingly used in working environments for a wide range of tasks, excelling at solving individual problems in isolation. However, are they also able to effectively collaborate over long-term interactions? To investigate this, we introduce MemoryCode, a synthetic multi-session dataset designed to test LLMs' ability to track and execute simple coding instructions amid irrelevant information, simulating a realistic setting. While all the models we tested handle isolated instructions well, even the performance of state-of-the-art models like GPT-4o deteriorates when instructions are spread across sessions. Our analysis suggests this is due to their failure to retrieve and integrate information over long instruction chains. Our results highlight a fundamental limitation of current LLMs, restricting their ability to collaborate effectively in long interactions. |
|
2025-02-20T00:00:00 | 2502.13270 | REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation | [
"Dong-Ho Lee",
"Adyasha Maharana",
"Jay Pujara",
"Xiang Ren",
"Francesco Barbieri"
] | Long-term, open-domain dialogue capabilities are essential for chatbots aiming to recall past interactions and demonstrate emotional intelligence (EI). Yet, most existing research relies on synthetic, LLM-generated data, leaving open questions about real-world conversational patterns. To address this gap, we introduce REALTALK, a 21-day corpus of authentic messaging app dialogues, providing a direct benchmark against genuine human interactions. We first conduct a dataset analysis, focusing on EI attributes and persona consistency to understand the unique challenges posed by real-world dialogues. By comparing with LLM-generated conversations, we highlight key differences, including diverse emotional expressions and variations in persona stability that synthetic dialogues often fail to capture. Building on these insights, we introduce two benchmark tasks: (1) persona simulation where a model continues a conversation on behalf of a specific user given prior dialogue context; and (2) memory probing where a model answers targeted questions requiring long-term memory of past interactions. Our findings reveal that models struggle to simulate a user solely from dialogue history, while fine-tuning on specific user chats improves persona emulation. Additionally, existing models face significant challenges in recalling and leveraging long-term context within real-world conversations. |
|
2025-02-20T00:00:00 | 2502.14296 | On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective | [
"Yue Huang",
"Chujie Gao",
"Siyuan Wu",
"Haoran Wang",
"Xiangqi Wang",
"Yujun Zhou",
"Yanbo Wang",
"Jiayi Ye",
"Jiawen Shi",
"Qihui Zhang",
"Yuan Li",
"Han Bao",
"Zhaoyi Liu",
"Tianrui Guan",
"Dongping Chen",
"Ruoxi Chen",
"Kehan Guo",
"Andy Zou",
"Bryan Hooi Kuen-Yew",
"Caiming Xiong",
"Elias Stengel-Eskin",
"Hongyang Zhang",
"Hongzhi Yin",
"Huan Zhang",
"Huaxiu Yao",
"Jaehong Yoon",
"Jieyu Zhang",
"Kai Shu",
"Kaijie Zhu",
"Ranjay Krishna",
"Swabha Swayamdipta",
"Taiwei Shi",
"Weijia Shi",
"Xiang Li",
"Yiwei Li",
"Yuexing Hao",
"Yuexing Hao",
"Zhihao Jia",
"Zhize Li",
"Xiuying Chen",
"Zhengzhong Tu",
"Xiyang Hu",
"Tianyi Zhou",
"Jieyu Zhao",
"Lichao Sun",
"Furong Huang",
"Or Cohen Sasson",
"Prasanna Sattigeri",
"Anka Reuel",
"Max Lamparth",
"Yue Zhao",
"Nouha Dziri",
"Yu Su",
"Huan Sun",
"Heng Ji",
"Chaowei Xiao",
"Mohit Bansal",
"Nitesh V. Chawla",
"Jian Pei",
"Jianfeng Gao",
"Michael Backes",
"Philip S. Yu",
"Neil Zhenqiang Gong",
"Pin-Yu Chen",
"Bo Li",
"Xiangliang Zhang"
] | Generative Foundation Models (GenFMs) have emerged as transformative tools. However, their widespread adoption raises critical concerns regarding their trustworthiness across multiple dimensions. This paper presents a comprehensive framework to address these challenges through three key contributions. First, we systematically review global AI governance laws and policies from governments and regulatory bodies, as well as industry practices and standards. Based on this analysis, we propose a set of guiding principles for GenFMs, developed through extensive multidisciplinary collaboration that integrates technical, ethical, legal, and societal perspectives. Second, we introduce TrustGen, the first dynamic benchmarking platform designed to evaluate trustworthiness across multiple dimensions and model types, including text-to-image, large language, and vision-language models. TrustGen leverages modular components--metadata curation, test case generation, and contextual variation--to enable adaptive and iterative assessments, overcoming the limitations of static evaluation methods. Using TrustGen, we reveal significant progress in trustworthiness while identifying persistent challenges. Finally, we provide an in-depth discussion of the challenges and future directions for trustworthy GenFMs, revealing the complex, evolving nature of trustworthiness, highlighting the nuanced trade-offs between utility and trustworthiness and the considerations for various downstream applications, identifying persistent challenges, and providing a strategic roadmap for future research. This work establishes a holistic framework for advancing trustworthiness in GenAI, paving the way for safer and more responsible integration of GenFMs into critical applications. To facilitate advancement in the community, we release the toolkit for dynamic evaluation. |
|
2025-02-20T00:00:00 | 2502.14127 | Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above | [
"Nishant Balepur",
"Rachel Rudinger",
"Jordan Lee Boyd-Graber"
] | Multiple choice question answering (MCQA) is popular for LLM evaluation due to its simplicity and human-like testing, but we argue for its reform. We first reveal flaws in MCQA's format, as it struggles to: 1) test generation/subjectivity; 2) match LLM use cases; and 3) fully test knowledge. We instead advocate for generative formats based on human testing-where LLMs construct and explain answers-better capturing user needs and knowledge while remaining easy to score. We then show even when MCQA is a useful format, its datasets suffer from: leakage; unanswerability; shortcuts; and saturation. In each issue, we give fixes from education, like rubrics to guide MCQ writing; scoring methods to bridle guessing; and Item Response Theory to build harder MCQs. Lastly, we discuss LLM errors in MCQA-robustness, biases, and unfaithful explanations-showing how our prior solutions better measure or address these issues. While we do not need to desert MCQA, we encourage more efforts in refining the task based on educational testing, advancing evaluations. |
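Of the proposed fixes, Item Response Theory is the most formulaic, so a small example helps. Below is the standard two-parameter-logistic (2PL) IRT model the authors allude to for building harder MCQs; it is textbook IRT, not code from the paper.

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter-logistic IRT: probability that an examinee of ability
    theta answers an item with discrimination a and difficulty b correctly.
    High-a, high-b items are the 'harder MCQs' IRT helps identify.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A hard, discriminative item separates strong from weak test-takers sharply.
print(p_correct_2pl(theta=1.0, a=2.0, b=1.5))  # ~0.27
print(p_correct_2pl(theta=2.0, a=2.0, b=1.5))  # ~0.73
```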
|
2025-02-21T00:00:00 | 2502.14739 | SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines | [
"M-A-P Team",
"Xinrun Du",
"Yifan Yao",
"Kaijing Ma",
"Bingli Wang",
"Tianyu Zheng",
"Kang Zhu",
"Minghao Liu",
"Yiming Liang",
"Xiaolong Jin",
"Zhenlin Wei",
"Chujie Zheng",
"Kaixing Deng",
"Shuyue Guo",
"Shian Jia",
"Sichao Jiang",
"Yiyan Liao",
"Rui Li",
"Qinrui Li",
"Sirun Li",
"Yizhi Li",
"Yunwen Li",
"Dehua Ma",
"Yuansheng Ni",
"Haoran Que",
"Qiyao Wang",
"Zhoufutu Wen",
"Siwei Wu",
"Tianshun Xing",
"Ming Xu",
"Zhenzhu Yang",
"Zekun Moore Wang",
"Junting Zhou",
"Yuelin Bai",
"Xingyuan Bu",
"Chenglin Cai",
"Liang Chen",
"Yifan Chen",
"Chengtuo Cheng",
"Tianhao Cheng",
"Keyi Ding",
"Siming Huang",
"Yun Huang",
"Yaoru Li",
"Yizhe Li",
"Zhaoqun Li",
"Tianhao Liang",
"Chengdong Lin",
"Hongquan Lin",
"Yinghao Ma",
"Zhongyuan Peng",
"Zifan Peng",
"Qige Qi",
"Shi Qiu",
"Xingwei Qu",
"Yizhou Tan",
"Zili Wang",
"Chenqing Wang",
"Hao Wang",
"Yiya Wang",
"Yubo Wang",
"Jiajun Xu",
"Kexin Yang",
"Ruibin Yuan",
"Yuanhao Yue",
"Tianyang Zhan",
"Chun Zhang",
"Jingyang Zhang",
"Xiyue Zhang",
"Xingjian Zhang",
"Yue Zhang",
"Yongchi Zhao",
"Xiangyu Zheng",
"Chenghua Zhong",
"Yang Gao",
"Zhoujun Li",
"Dayiheng Liu",
"Qian Liu",
"Tianyu Liu",
"Shiwen Ni",
"Junran Peng",
"Yujia Qin",
"Wenbo Su",
"Guoyin Wang",
"Shi Wang",
"Jian Yang",
"Min Yang",
"Meng Cao",
"Xiang Yue",
"Zhaoxiang Zhang",
"Wangchunshu Zhou",
"Jiaheng Liu",
"Qunshu Lin",
"Wenhao Huang",
"Ge Zhang"
] | Large language models (LLMs) have demonstrated remarkable proficiency in mainstream academic disciplines such as mathematics, physics, and computer science. However, human knowledge encompasses over 200 specialized disciplines, far exceeding the scope of existing benchmarks. The capabilities of LLMs in many of these specialized fields-particularly in light industry, agriculture, and service-oriented disciplines-remain inadequately evaluated. To address this gap, we present SuperGPQA, a comprehensive benchmark that evaluates graduate-level knowledge and reasoning capabilities across 285 disciplines. Our benchmark employs a novel Human-LLM collaborative filtering mechanism to eliminate trivial or ambiguous questions through iterative refinement based on both LLM responses and expert feedback. Our experimental results reveal significant room for improvement in the performance of current state-of-the-art LLMs across diverse knowledge domains (e.g., the reasoning-focused model DeepSeek-R1 achieved the highest accuracy of 61.82% on SuperGPQA), highlighting the considerable gap between current model capabilities and artificial general intelligence. Additionally, we present comprehensive insights from our management of a large-scale annotation process, involving over 80 expert annotators and an interactive Human-LLM collaborative system, offering valuable methodological guidance for future research initiatives of comparable scope. |
|
2025-02-21T00:00:00 | 2502.14282 | PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC | [
"Haowei Liu",
"Xi Zhang",
"Haiyang Xu",
"Yuyang Wanyan",
"Junyang Wang",
"Ming Yan",
"Ji Zhang",
"Chunfeng Yuan",
"Changsheng Xu",
"Weiming Hu",
"Fei Huang"
] | In the field of MLLM-based GUI agents, compared to smartphones, the PC scenario not only features a more complex interactive environment, but also involves more intricate intra- and inter-app workflows. To address these issues, we propose a hierarchical agent framework named PC-Agent. Specifically, from the perception perspective, we devise an Active Perception Module (APM) to overcome the inadequate abilities of current MLLMs in perceiving screenshot content. From the decision-making perspective, to handle complex user instructions and interdependent subtasks more effectively, we propose a hierarchical multi-agent collaboration architecture that decomposes decision-making processes into Instruction-Subtask-Action levels. Within this architecture, three agents (i.e., Manager, Progress and Decision) are set up for instruction decomposition, progress tracking and step-by-step decision-making respectively. Additionally, a Reflection agent is adopted to enable timely bottom-up error feedback and adjustment. We also introduce a new benchmark PC-Eval with 25 real-world complex instructions. Empirical results on PC-Eval show that our PC-Agent achieves a 32% absolute improvement in task success rate over previous state-of-the-art methods. The code will be publicly available. |
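The Instruction-Subtask-Action decomposition can be summarized as a short control loop. The callables below are hypothetical stand-ins for the four MLLM-backed agents; the real framework's interfaces will differ.

```python
def pc_agent_step(instruction, screenshot, manager, progress, decision, reflect):
    """One hypothetical step of the Manager/Progress/Decision/Reflection loop."""
    subtasks = manager(instruction)              # instruction decomposition
    state = progress(subtasks, screenshot)       # track which subtask is active
    action = decision(state, screenshot)         # one GUI action (click, type, ...)
    ok, feedback = reflect(action, screenshot)   # bottom-up error feedback
    return action if ok else decision(state, screenshot, hint=feedback)
```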
|
2025-02-21T00:00:00 | 2502.14834 | LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models | [
"Shangqing Tu",
"Yucheng Wang",
"Daniel Zhang-Li",
"Yushi Bai",
"Jifan Yu",
"Yuhao Wu",
"Lei Hou",
"Huiqin Liu",
"Zhiyuan Liu",
"Bin Xu",
"Juanzi Li"
] | https://github.com/THU-KEG/LongWriter-V | Existing Large Vision-Language Models (LVLMs) can process inputs with context lengths up to 128k visual and text tokens, yet they struggle to generate coherent outputs beyond 1,000 words. We find that the primary limitation is the absence of long output examples during supervised fine-tuning (SFT). To tackle this issue, we introduce LongWriter-V-22k, an SFT dataset comprising 22,158 examples, each with multiple input images, an instruction, and corresponding outputs ranging from 0 to 10,000 words. Moreover, to achieve long outputs that maintain high fidelity to the input images, we apply Direct Preference Optimization (DPO) to the SFT model. Given the high cost of collecting human feedback for lengthy outputs (e.g., 3,000 words), we propose IterDPO, which breaks long outputs into segments and uses iterative corrections to form preference pairs with the original outputs. Additionally, we develop MMLongBench-Write, a benchmark featuring six tasks to evaluate the long-generation capabilities of VLMs. Our 7B parameter model, trained with LongWriter-V-22k and IterDPO, achieves impressive performance on this benchmark, outperforming larger proprietary models like GPT-4o. Code and data: https://github.com/THU-KEG/LongWriter-V |
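IterDPO's segment-level pairing is the most mechanical part of the recipe and can be sketched directly. The data layout below (shared corrected prefix, per-segment chosen/rejected pairs) is an assumption about the method, not the released code.

```python
def iterdpo_pairs(corrected_segments, model_segments, prompt):
    """Build segment-level DPO preference pairs from one long output: each
    corrected segment is 'chosen' against the model's original segment, and
    later segments condition on the already-corrected prefix. The layout is
    an assumption about IterDPO, not the released implementation.
    """
    pairs, prefix = [], prompt
    for chosen, rejected in zip(corrected_segments, model_segments):
        pairs.append({"prompt": prefix, "chosen": chosen, "rejected": rejected})
        prefix += chosen     # iterative correction: carry the fixed text forward
    return pairs
```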
|
2025-02-21T00:00:00 | 2502.14786 | SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features | [
"Michael Tschannen",
"Alexey Gritsenko",
"Xiao Wang",
"Muhammad Ferjad Naeem",
"Ibrahim Alabdulmohsin",
"Nikhil Parthasarathy",
"Talfan Evans",
"Lucas Beyer",
"Ye Xia",
"Basil Mustafa",
"Olivier Hénaff",
"Jeremiah Harmsen",
"Andreas Steiner",
"Xiaohua Zhai"
] | We introduce SigLIP 2, a family of new multilingual vision-language encoders that build on the success of the original SigLIP. In this second iteration, we extend the original image-text training objective with several prior, independently developed techniques into a unified recipe -- this includes captioning-based pretraining, self-supervised losses (self-distillation, masked prediction) and online data curation. With these changes, SigLIP 2 models outperform their SigLIP counterparts at all model scales in core capabilities, including zero-shot classification, image-text retrieval, and transfer performance when extracting visual representations for Vision-Language Models (VLMs). Furthermore, the new training recipe leads to significant improvements on localization and dense prediction tasks. We also train variants which support multiple resolutions and preserve the input's native aspect ratio. Finally, we train on a more diverse data-mixture that includes de-biasing techniques, leading to much better multilingual understanding and improved fairness. To allow users to trade off inference cost with performance, we release model checkpoints at four sizes: ViT-B (86M), L (303M), So400m (400M), and g (1B). |
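SigLIP 2 keeps the original SigLIP image-text objective as one ingredient of its unified recipe, and that pairwise sigmoid loss is compact enough to reproduce; here t and b are the learnable temperature and bias from the original SigLIP formulation, while the embeddings are random stand-ins.

```python
import torch
import torch.nn.functional as F

def siglip_loss(img_emb, txt_emb, t, b):
    """Pairwise sigmoid loss: every image-text pair is an independent binary
    classification (+1 on the diagonal, -1 elsewhere), avoiding the global
    softmax normalization of CLIP-style contrastive training.
    """
    logits = t * img_emb @ txt_emb.T + b        # (N, N) pair logits
    n = logits.shape[0]
    labels = 2 * torch.eye(n) - 1               # +1 matched, -1 unmatched
    return -F.logsigmoid(labels * logits).mean()

img = F.normalize(torch.randn(8, 512), dim=-1)
txt = F.normalize(torch.randn(8, 512), dim=-1)
print(siglip_loss(img, txt, t=torch.tensor(10.0), b=torch.tensor(-10.0)))
```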
|
2025-02-21T00:00:00 | 2502.14768 | Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning | [
"Tian Xie",
"Zitian Gao",
"Qingnan Ren",
"Haoming Luo",
"Yuqian Hong",
"Bryan Dai",
"Joey Zhou",
"Kai Qiu",
"Zhirong Wu",
"Chong Luo"
] | Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in large reasoning models. To analyze reasoning dynamics, we use synthetic logic puzzles as training data due to their controllable complexity and straightforward answer verification. We make some key technical contributions that lead to effective and stable RL training: a system prompt that emphasizes the thinking and answering process, a stringent format reward function that penalizes outputs for taking shortcuts, and a straightforward training recipe that achieves stable convergence. Our 7B model develops advanced reasoning skills-such as reflection, verification, and summarization-that are absent from the logic corpus. Remarkably, after training on just 5K logic problems, it demonstrates generalization abilities to the challenging math benchmarks AIME and AMC. |
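The stringent format reward is easy to illustrate. The sketch below enforces the think-then-answer structure the abstract describes; the exact tags, penalties, and reward values are assumptions, not the paper's settings.

```python
import re

def format_reward(output: str) -> float:
    """A stringent format reward in the spirit of Logic-RL: the model must
    reason inside <think> tags and answer inside <answer> tags, with nothing
    outside them, so it cannot shortcut straight to an answer. Sketch only.
    """
    pattern = r"^<think>.+?</think>\s*<answer>.+?</answer>$"
    return 1.0 if re.fullmatch(pattern, output.strip(), re.DOTALL) else -1.0

print(format_reward("<think>step by step</think><answer>yes</answer>"))  # 1.0
print(format_reward("the answer is yes"))                                # -1.0
```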
|
2025-02-21T00:00:00 | 2502.14844 | Dynamic Concepts Personalization from Single Videos | [
"Rameen Abdal",
"Or Patashnik",
"Ivan Skorokhodov",
"Willi Menapace",
"Aliaksandr Siarohin",
"Sergey Tulyakov",
"Daniel Cohen-Or",
"Kfir Aberman"
] | Personalizing generative text-to-image models has seen remarkable progress, but extending this personalization to text-to-video models presents unique challenges. Unlike static concepts, personalizing text-to-video models has the potential to capture dynamic concepts, i.e., entities defined not only by their appearance but also by their motion. In this paper, we introduce Set-and-Sequence, a novel framework for personalizing Diffusion Transformers (DiTs)-based generative video models with dynamic concepts. Our approach imposes a spatio-temporal weight space within an architecture that does not explicitly separate spatial and temporal features. This is achieved in two key stages. First, we fine-tune Low-Rank Adaptation (LoRA) layers using an unordered set of frames from the video to learn an identity LoRA basis that represents the appearance, free from temporal interference. In the second stage, with the identity LoRAs frozen, we augment their coefficients with Motion Residuals and fine-tune them on the full video sequence, capturing motion dynamics. Our Set-and-Sequence framework results in a spatio-temporal weight space that effectively embeds dynamic concepts into the video model's output domain, enabling unprecedented editability and compositionality while setting a new benchmark for personalizing dynamic concepts. |
|
2025-02-21T00:00:00 | 2502.14846 | Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation | [
"Yue Yang",
"Ajay Patel",
"Matt Deitke",
"Tanmay Gupta",
"Luca Weihs",
"Andrew Head",
"Mark Yatskar",
"Chris Callison-Burch",
"Ranjay Krishna",
"Aniruddha Kembhavi",
"Christopher Clark"
] | Reasoning about images with rich text, such as charts and documents, is a critical application of vision-language models (VLMs). However, VLMs often struggle in these domains due to the scarcity of diverse text-rich vision-language data. To address this challenge, we present CoSyn, a framework that leverages the coding capabilities of text-only large language models (LLMs) to automatically create synthetic text-rich multimodal data. Given input text describing a target domain (e.g., "nutrition fact labels"), CoSyn prompts an LLM to generate code (Python, HTML, LaTeX, etc.) for rendering synthetic images. With the underlying code as textual representations of the synthetic images, CoSyn can generate high-quality instruction-tuning data, again relying on a text-only LLM. Using CoSyn, we constructed a dataset comprising 400K images and 2.7M rows of vision-language instruction-tuning data. Comprehensive experiments on seven benchmarks demonstrate that models trained on our synthetic data achieve state-of-the-art performance among competitive open-source models, including Llama 3.2, and surpass proprietary models such as GPT-4V and Gemini 1.5 Flash. Furthermore, CoSyn can produce synthetic pointing data, enabling VLMs to ground information within input images, showcasing its potential for developing multimodal agents capable of acting in real-world environments. |
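The CoSyn pipeline is a loop of prompt, render, and annotate, sketched below with a hypothetical `llm` callable; the real framework supports many rendering languages (Python, HTML, LaTeX, etc.) and more careful sandboxing of the generated code.

```python
# Sketch of a CoSyn-style loop under assumed prompts and interfaces: an LLM
# writes rendering code for a domain, the code is executed to produce an
# image, and the same code (as a textual stand-in for the image) seeds the
# instruction-tuning QA pairs.
import pathlib
import subprocess
import tempfile

def cosyn_example(llm, domain="nutrition fact labels"):
    code = llm(f"Write a standalone matplotlib script that renders a "
               f"synthetic image of: {domain}. Save it to out.png.")
    with tempfile.TemporaryDirectory() as d:
        script = pathlib.Path(d) / "render.py"
        script.write_text(code)
        subprocess.run(["python", str(script)], cwd=d, check=True)
        qa = llm(f"Given this rendering code:\n{code}\n"
                 f"Write instruction-tuning QA pairs about the rendered image.")
        image_bytes = (pathlib.Path(d) / "out.png").read_bytes()
    return image_bytes, qa
```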
|
2025-02-21T00:00:00 | 2502.14377 | RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers | [
"Ke Cao",
"Jing Wang",
"Ao Ma",
"Jiasong Feng",
"Zhanjie Zhang",
"Xuanhua He",
"Shanyuan Liu",
"Bo Cheng",
"Dawei Leng",
"Yuhui Yin",
"Jie Zhang"
] | The Diffusion Transformer plays a pivotal role in advancing text-to-image and text-to-video generation, owing primarily to its inherent scalability. However, existing controlled diffusion transformer methods incur significant parameter and computational overheads and suffer from inefficient resource allocation due to their failure to account for the varying relevance of control information across different transformer layers. To address this, we propose the Relevance-Guided Efficient Controllable Generation framework, RelaCtrl, enabling efficient and resource-optimized integration of control signals into the Diffusion Transformer. First, we evaluate the relevance of each layer in the Diffusion Transformer to the control information by assessing the "ControlNet Relevance Score"-i.e., the impact of skipping each control layer on both the quality of generation and the control effectiveness during inference. Based on the strength of the relevance, we then tailor the positioning, parameter scale, and modeling capacity of the control layers to reduce unnecessary parameters and redundant computations. Additionally, to further improve efficiency, we replace the self-attention and FFN in the commonly used copy block with the carefully designed Two-Dimensional Shuffle Mixer (TDSM), enabling efficient implementation of both the token mixer and channel mixer. Both qualitative and quantitative experimental results demonstrate that our approach achieves superior performance with only 15% of the parameters and computational complexity compared to PixArt-δ. More examples are available at https://relactrl.github.io/RelaCtrl/. |
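The "ControlNet Relevance Score" is defined operationally (skip a control layer, measure the impact), which suggests a simple probe like the one below; `generate` returning (quality, control) metrics per run is an assumed interface, not the paper's code.

```python
def relevance_scores(generate, n_layers):
    """Skip one control layer at a time and score its relevance by how much
    generation quality and control effectiveness drop. Sketch only: the
    paper then allocates control-layer capacity according to these scores.
    """
    base_quality, base_control = generate(skip=None)
    scores = {}
    for i in range(n_layers):
        q, c = generate(skip=i)
        scores[i] = (base_quality - q) + (base_control - c)  # bigger drop = more relevant
    return scores
```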
|
2025-02-21T00:00:00 | 2502.14669 | AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO | [
"Alan Dao",
"Dinh Bach Vu"
] | Large Language Models (LLMs) have demonstrated impressive capabilities in language processing, yet they often struggle with tasks requiring genuine visual spatial reasoning. In this paper, we introduce a novel two-stage training framework designed to equip standard LLMs with visual reasoning abilities for maze navigation. First, we leverage Supervised Fine-Tuning (SFT) on a curated dataset of tokenized maze representations to teach the model to predict step-by-step movement commands. Next, we apply Group Relative Policy Optimization (GRPO)-a technique used in DeepSeek-R1-with a carefully crafted reward function to refine the model's sequential decision-making and encourage emergent chain-of-thought behaviors. Experimental results on synthetically generated mazes show that while a baseline model fails to navigate the maze, the SFT-trained model achieves 86% accuracy, and further GRPO fine-tuning boosts accuracy to 93%. Qualitative analyses reveal that GRPO fosters more robust and self-corrective reasoning, highlighting the potential of our approach to bridge the gap between language models and visual spatial tasks. These findings offer promising implications for applications in robotics, autonomous navigation, and other domains that require integrated visual and sequential reasoning. |
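GRPO's critic-free advantage estimate is simple enough to show concretely: sample a group of completions per maze, score them, and normalize each reward within its group. This is the standard GRPO computation, not the authors' training code.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group Relative Policy Optimization's core trick: normalize each
    completion's reward by its group's mean and std, yielding a critic-free
    advantage estimate. rewards: (n_prompts, group_size).
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + 1e-6)

rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],   # two of four completions solve the maze
                        [0.0, 0.0, 0.0, 1.0]])  # one of four solves it
print(grpo_advantages(rewards))
```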
|
2025-02-21T00:00:00 | 2502.14499 | MLGym: A New Framework and Benchmark for Advancing AI Research Agents | [
"Deepak Nathani",
"Lovish Madaan",
"Nicholas Roberts",
"Nikolay Bashlykov",
"Ajay Menon",
"Vincent Moens",
"Amar Budhiraja",
"Despoina Magka",
"Vladislav Vorotilov",
"Gaurav Chaurasia",
"Dieuwke Hupkes",
"Ricardo Silveira Cabral",
"Tatiana Shavrina",
"Jakob Foerster",
"Yoram Bachrach",
"William Yang Wang",
"Roberta Raileanu"
] | We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents. MLGym-Bench consists of 13 diverse and open-ended AI research tasks from domains such as computer vision, natural language processing, reinforcement learning, and game theory. Solving these tasks requires real-world AI research skills such as generating new ideas and hypotheses, creating and processing data, implementing ML methods, training models, running experiments, analyzing the results, and iterating through this process to improve on a given task. We evaluate a number of frontier large language models (LLMs) on our benchmark, including Claude-3.5-Sonnet, Llama-3.1 405B, GPT-4o, o1-preview, and Gemini-1.5 Pro. Our MLGym framework makes it easy to add new tasks, integrate and evaluate models or agents, generate synthetic data at scale, as well as develop new learning algorithms for training agents on AI research tasks. We find that current frontier models can improve on the given baselines, usually by finding better hyperparameters, but do not generate novel hypotheses, algorithms, architectures, or substantial improvements. We open-source our framework and benchmark to facilitate future research in advancing the AI research capabilities of LLM agents. |