date (timestamp[ns], 2023-05-05 to 2025-03-28) | arxiv_id (string, length 10) | title (string, 8-177 chars) | authors (list, 1-942 names) | github (string, 0-116 chars) | abstract (string, 165-1.92k chars) |
---|---|---|---|---|---|
2025-01-15T00:00:00 | 2501.08316 | Diffusion Adversarial Post-Training for One-Step Video Generation | [
"Shanchuan Lin",
"Xin Xia",
"Yuxi Ren",
"Ceyuan Yang",
"Xuefeng Xiao",
"Lu Jiang"
] | Diffusion models are widely used for image and video generation, but their iterative generation process is slow and expensive. While existing distillation approaches have demonstrated the potential for one-step generation in the image domain, they still suffer from significant quality degradation. In this work, we propose Adversarial Post-Training (APT) against real data following diffusion pre-training for one-step video generation. To improve the training stability and quality, we introduce several improvements to the model architecture and training procedures, along with an approximated R1 regularization objective. Empirically, our experiments show that our adversarial post-trained model, Seaweed-APT, can generate 2-second, 1280x720, 24fps videos in real time using a single forward evaluation step. Additionally, our model is capable of generating 1024px images in a single step, achieving quality comparable to state-of-the-art methods. |
|
2025-01-15T00:00:00 | 2501.08332 | MangaNinja: Line Art Colorization with Precise Reference Following | [
"Zhiheng Liu",
"Ka Leong Cheng",
"Xi Chen",
"Jie Xiao",
"Hao Ouyang",
"Kai Zhu",
"Yu Liu",
"Yujun Shen",
"Qifeng Chen",
"Ping Luo"
] | Derived from diffusion models, MangaNinja specializes in the task of reference-guided line art colorization. We incorporate two thoughtful designs to ensure precise character detail transcription, including a patch shuffling module to facilitate correspondence learning between the reference color image and the target line art, and a point-driven control scheme to enable fine-grained color matching. Experiments on a self-collected benchmark demonstrate the superiority of our model over current solutions in terms of precise colorization. We further showcase the potential of the proposed interactive point control in handling challenging cases, such as cross-character colorization and multi-reference harmonization, which are beyond the reach of existing algorithms. |
|
2025-01-15T00:00:00 | 2501.08225 | FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors | [
"Yabo Zhang",
"Xinpeng Zhou",
"Yihan Zeng",
"Hang Xu",
"Hui Li",
"Wangmeng Zuo"
] | https://github.com/YBYBZhang/FramePainter | Interactive image editing allows users to modify images through visual interaction operations such as drawing, clicking, and dragging. Existing methods construct such supervision signals from videos, as they capture how objects change with various physical interactions. However, these models are usually built upon text-to-image diffusion models, and thus necessitate (i) massive training samples and (ii) an additional reference encoder to learn real-world dynamics and visual consistency. In this paper, we reformulate this task as an image-to-video generation problem, so that it inherits powerful video diffusion priors to reduce training costs and ensure temporal consistency. Specifically, we introduce FramePainter as an efficient instantiation of this formulation. Initialized with Stable Video Diffusion, it only uses a lightweight sparse control encoder to inject editing signals. Considering the limitations of temporal attention in handling large motion between two frames, we further propose matching attention to enlarge the receptive field while encouraging dense correspondence between edited and source image tokens. We highlight the effectiveness and efficiency of FramePainter across various editing signals: it dominantly outperforms previous state-of-the-art methods with far less training data, achieving highly seamless and coherent editing of images, e.g., automatically adjusting the reflection of the cup. Moreover, FramePainter also exhibits exceptional generalization in scenarios not present in real-world videos, e.g., transforming the clownfish into a shark-like shape. Our code will be available at https://github.com/YBYBZhang/FramePainter. |
2025-01-15T00:00:00 | 2501.07730 | Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens | [
"Dongwon Kim",
"Ju He",
"Qihang Yu",
"Chenglin Yang",
"Xiaohui Shen",
"Suha Kwak",
"Liang-Chieh Chen"
] | Image tokenizers form the foundation of modern text-to-image generative models but are notoriously difficult to train. Furthermore, most existing text-to-image models rely on large-scale, high-quality private datasets, making them challenging to replicate. In this work, we introduce Text-Aware Transformer-based 1-Dimensional Tokenizer (TA-TiTok), an efficient and powerful image tokenizer that can utilize either discrete or continuous 1-dimensional tokens. TA-TiTok uniquely integrates textual information during the tokenizer decoding stage (i.e., de-tokenization), accelerating convergence and enhancing performance. TA-TiTok also benefits from a simplified, yet effective, one-stage training process, eliminating the need for the complex two-stage distillation used in previous 1-dimensional tokenizers. This design allows for seamless scalability to large datasets. Building on this, we introduce a family of text-to-image Masked Generative Models (MaskGen), trained exclusively on open data while achieving comparable performance to models trained on private data. We aim to release both the efficient, strong TA-TiTok tokenizers and the open-data, open-weight MaskGen models to promote broader access and democratize the field of text-to-image masked generative models. |
|
2025-01-15T00:00:00 | 2501.08328 | PokerBench: Training Large Language Models to become Professional Poker Players | [
"Richard Zhuang",
"Akshat Gupta",
"Richard Yang",
"Aniket Rahane",
"Zhengyu Li",
"Gopala Anumanchipalli"
] | https://github.com/pokerllm/pokerbench | We introduce PokerBench - a benchmark for evaluating the poker-playing abilities of large language models (LLMs). As LLMs excel in traditional NLP tasks, their application to complex, strategic games like poker poses a new challenge. Poker, an incomplete information game, demands a multitude of skills such as mathematics, reasoning, planning, strategy, and a deep understanding of game theory and human psychology. This makes poker the ideal next frontier for large language models. PokerBench consists of a comprehensive compilation of 11,000 of the most important scenarios, split between pre-flop and post-flop play, developed in collaboration with trained poker players. We evaluate prominent models including GPT-4, ChatGPT 3.5, and various Llama and Gemma series models, finding that all state-of-the-art LLMs underperform in playing optimal poker. However, after fine-tuning, these models show marked improvements. We validate PokerBench by having models with different scores compete with each other, demonstrating that higher scores on PokerBench lead to higher win rates in actual poker games. Through gameplay between our fine-tuned model and GPT-4, we also identify limitations of simple supervised fine-tuning for learning optimal playing strategy, suggesting the need for more advanced methodologies for effectively training language models to excel in games. PokerBench thus presents a unique benchmark for a quick and reliable evaluation of the poker-playing ability of LLMs as well as a comprehensive benchmark to study the progress of LLMs in complex game-playing scenarios. The dataset and code will be made available at: https://github.com/pokerllm/pokerbench. |
2025-01-15T00:00:00 | 2501.08292 | HALoGEN: Fantastic LLM Hallucinations and Where to Find Them | [
"Abhilasha Ravichander",
"Shrusti Ghela",
"David Wadden",
"Yejin Choi"
] | Despite their impressive ability to generate high-quality and fluent text, generative large language models (LLMs) also produce hallucinations: statements that are misaligned with established world knowledge or provided input context. However, measuring hallucination can be challenging, as having humans verify model generations on-the-fly is both expensive and time-consuming. In this work, we release HALoGEN, a comprehensive hallucination benchmark consisting of: (1) 10,923 prompts for generative models spanning nine domains including programming, scientific attribution, and summarization, and (2) automatic high-precision verifiers for each use case that decompose LLM generations into atomic units, and verify each unit against a high-quality knowledge source. We use this framework to evaluate ~150,000 generations from 14 language models, finding that even the best-performing models are riddled with hallucinations (sometimes up to 86% of generated atomic facts depending on the domain). We further define a novel error classification for LLM hallucinations based on whether they likely stem from incorrect recollection of training data (Type A errors), or incorrect knowledge in training data (Type B errors), or are fabrication (Type C errors). We hope our framework provides a foundation to enable the principled study of why generative models hallucinate, and advances the development of trustworthy large language models. |
|
2025-01-15T00:00:00 | 2501.07888 | Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding | [
"Liping Yuan",
"Jiawei Wang",
"Haomiao Sun",
"Yuchen Zhang",
"Yuan Lin"
] | We introduce Tarsier2, a state-of-the-art large vision-language model (LVLM) designed for generating detailed and accurate video descriptions, while also exhibiting superior general video understanding capabilities. Tarsier2 achieves significant advancements through three key upgrades: (1) Scaling pre-training data from 11M to 40M video-text pairs, enriching both volume and diversity; (2) Performing fine-grained temporal alignment during supervised fine-tuning; (3) Using model-based sampling to automatically construct preference data and applying DPO training for optimization. Extensive experiments show that Tarsier2-7B consistently outperforms leading proprietary models, including GPT-4o and Gemini 1.5 Pro, in detailed video description tasks. On the DREAM-1K benchmark, Tarsier2-7B improves F1 by 2.8\% over GPT-4o and 5.8\% over Gemini-1.5-Pro. In human side-by-side evaluations, Tarsier2-7B shows a +8.6\% performance advantage over GPT-4o and +24.9\% over Gemini-1.5-Pro. Tarsier2-7B also sets new state-of-the-art results across 15 public benchmarks, spanning tasks such as video question-answering, video grounding, hallucination test, and embodied question-answering, demonstrating its versatility as a robust generalist vision-language model. |
|
2025-01-15T00:00:00 | 2501.08167 | Potential and Perils of Large Language Models as Judges of Unstructured Textual Data | [
"Rewina Bedemariam",
"Natalie Perez",
"Sreyoshi Bhaduri",
"Satya Kapoor",
"Alex Gil",
"Elizabeth Conjar",
"Ikkei Itoku",
"David Theil",
"Aman Chadha",
"Naumaan Nayyar"
] | Rapid advancements in large language models have unlocked remarkable capabilities when it comes to processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLMs as judge models to evaluate the thematic alignment of summaries generated by other LLMs. We utilized an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon's Titan Express, Nova Pro, and Meta's Llama serving as LLM judges. The LLM-as-judge approach was compared to human evaluations using Cohen's kappa, Spearman's rho, and Krippendorff's alpha, validating a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLMs as judges offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. This research contributes to the growing body of knowledge on AI-assisted text analysis. We discuss limitations and provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM judge models across various contexts and use cases. |
|
2025-01-15T00:00:00 | 2501.08197 | OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for LLM Training | [
"Yijiong Yu",
"Ziyun Dai",
"Zekun Wang",
"Wei Wang",
"Ran Chen",
"Ji Pei"
] | Large language models (LLMs) have demonstrated remarkable capabilities, but their success heavily relies on the quality of pretraining corpora. For Chinese LLMs, the scarcity of high-quality Chinese datasets presents a significant challenge, often limiting their performance. To address this issue, we propose the OpenCSG Chinese Corpus, a series of high-quality datasets specifically designed for LLM pretraining, post-training, and fine-tuning. This corpus includes Fineweb-edu-chinese, Fineweb-edu-chinese-v2, Cosmopedia-chinese, and Smoltalk-chinese, each with distinct characteristics: Fineweb-edu datasets focus on filtered, high-quality content derived from diverse Chinese web sources; Cosmopedia-chinese provides synthetic, textbook-style data for knowledge-intensive training; and Smoltalk-chinese emphasizes stylistic and diverse chat-format data. The OpenCSG Chinese Corpus is characterized by its high-quality text, diverse coverage across domains, and scalable, reproducible data curation processes. Additionally, we conducted extensive experimental analyses, including evaluations on smaller parameter models, which demonstrated significant performance improvements in tasks such as C-Eval, showcasing the effectiveness of the corpus for training Chinese LLMs. |
|
2025-01-15T00:00:00 | 2501.08319 | Enhancing Automated Interpretability with Output-Centric Feature Descriptions | [
"Yoav Gur-Arieh",
"Roy Mayan",
"Chen Agassy",
"Atticus Geiger",
"Mor Geva"
] | Automated interpretability pipelines generate natural language descriptions for the concepts represented by features in large language models (LLMs), such as plants or the first word in a sentence. These descriptions are derived using inputs that activate the feature, which may be a dimension or a direction in the model's representation space. However, identifying activating inputs is costly, and the mechanistic role of a feature in model behavior is determined both by how inputs cause a feature to activate and by how feature activation affects outputs. Using steering evaluations, we reveal that current pipelines provide descriptions that fail to capture the causal effect of the feature on outputs. To fix this, we propose efficient, output-centric methods for automatically generating feature descriptions. These methods use the tokens weighted higher after feature stimulation or the highest weight tokens after applying the vocabulary "unembedding" head directly to the feature. Our output-centric descriptions better capture the causal effect of a feature on model outputs than input-centric descriptions, but combining the two leads to the best performance on both input and output evaluations. Lastly, we show that output-centric descriptions can be used to find inputs that activate features previously thought to be "dead". |
|
2025-01-15T00:00:00 | 2501.08284 | AfriHate: A Multilingual Collection of Hate Speech and Abusive Language Datasets for African Languages | [
"Shamsuddeen Hassan Muhammad",
"Idris Abdulmumin",
"Abinew Ali Ayele",
"David Ifeoluwa Adelani",
"Ibrahim Said Ahmad",
"Saminu Mohammad Aliyu",
"Nelson Odhiambo Onyango",
"Lilian D. A. Wanzare",
"Samuel Rutunda",
"Lukman Jibril Aliyu",
"Esubalew Alemneh",
"Oumaima Hourrane",
"Hagos Tesfahun Gebremichael",
"Elyas Abdi Ismail",
"Meriem Beloucif",
"Ebrahim Chekol Jibril",
"Andiswa Bukula",
"Rooweither Mabuya",
"Salomey Osei",
"Abigail Oppong",
"Tadesse Destaw Belay",
"Tadesse Kebede Guge",
"Tesfa Tegegne Asfaw",
"Chiamaka Ijeoma Chukwuneke",
"Paul Röttger",
"Seid Muhie Yimam",
"Nedjma Ousidhoum"
] | https://github.com/AfriHate/AfriHate | Hate speech and abusive language are global phenomena that need socio-cultural background knowledge to be understood, identified, and moderated. However, in many regions of the Global South, there have been several documented occurrences of (1) absence of moderation and (2) censorship due to the reliance on keyword spotting out of context. Further, high-profile individuals have frequently been at the center of the moderation process, while large and targeted hate speech campaigns against minorities have been overlooked. These limitations are mainly due to the lack of high-quality data in the local languages and the failure to include local communities in the collection, annotation, and moderation processes. To address this issue, we present AfriHate: a multilingual collection of hate speech and abusive language datasets in 15 African languages. Each instance in AfriHate is annotated by native speakers familiar with the local culture. We report the challenges related to the construction of the datasets and present various classification baseline results with and without using LLMs. The datasets, individual annotations, and hate speech and offensive language lexicons are available on https://github.com/AfriHate/AfriHate |
2025-01-15T00:00:00 | 2501.06751 | Padding Tone: A Mechanistic Analysis of Padding Tokens in T2I Models | [
"Michael Toker",
"Ido Galil",
"Hadas Orgad",
"Rinon Gal",
"Yoad Tewel",
"Gal Chechik",
"Yonatan Belinkov"
] | Text-to-image (T2I) diffusion models rely on encoded prompts to guide the image generation process. Typically, these prompts are extended to a fixed length by adding padding tokens before text encoding. Despite being a default practice, the influence of padding tokens on the image generation process has not been investigated. In this work, we conduct the first in-depth analysis of the role padding tokens play in T2I models. We develop two causal techniques to analyze how information is encoded in the representation of tokens across different components of the T2I pipeline. Using these techniques, we investigate when and how padding tokens impact the image generation process. Our findings reveal three distinct scenarios: padding tokens may affect the model's output during text encoding, during the diffusion process, or be effectively ignored. Moreover, we identify key relationships between these scenarios and the model's architecture (cross or self-attention) and its training process (frozen or trained text encoder). These insights contribute to a deeper understanding of the mechanisms of padding tokens, potentially informing future model design and training practices in T2I systems. |
|
2025-01-15T00:00:00 | 2501.08326 | Omni-RGPT: Unifying Image and Video Region-level Understanding via Token Marks | [
"Miran Heo",
"Min-Hung Chen",
"De-An Huang",
"Sifei Liu",
"Subhashree Radhakrishnan",
"Seon Joo Kim",
"Yu-Chiang Frank Wang",
"Ryo Hachiuma"
] | We present Omni-RGPT, a multimodal large language model designed to facilitate region-level comprehension for both images and videos. To achieve consistent region representation across spatio-temporal dimensions, we introduce Token Mark, a set of tokens highlighting the target regions within the visual feature space. These tokens are directly embedded into spatial regions using region prompts (e.g., boxes or masks) and simultaneously incorporated into the text prompt to specify the target, establishing a direct connection between visual and text tokens. To further support robust video understanding without requiring tracklets, we introduce an auxiliary task that guides Token Mark by leveraging the consistency of the tokens, enabling stable region interpretation across the video. Additionally, we introduce a large-scale region-level video instruction dataset (RegVID-300k). Omni-RGPT achieves state-of-the-art results on image and video-based commonsense reasoning benchmarks while showing strong performance in captioning and referring expression comprehension tasks. |
|
2025-01-15T00:00:00 | 2501.08120 | In-situ graph reasoning and knowledge expansion using Graph-PReFLexOR | [
"Markus J. Buehler"
] | The pursuit of automated scientific discovery has fueled progress from symbolic logic to modern AI, forging new frontiers in reasoning and pattern recognition. Transformers function as potential systems, where every possible relationship remains latent potentiality until tasks impose constraints, akin to measurement. Yet, refining their sampling requires more than probabilistic selection: solutions must conform to specific structures or rules, ensuring consistency and the invocation of general principles. We present Graph-PReFLexOR (Graph-based Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning), a framework that combines graph reasoning with symbolic abstraction to dynamically expand domain knowledge. Inspired by reinforcement learning, Graph-PReFLexOR defines reasoning as a structured mapping, where tasks yield knowledge graphs, abstract patterns, and ultimately, final answers. Inspired by category theory, it encodes concepts as nodes and their relationships as edges, supporting hierarchical inference and adaptive learning through isomorphic representations. Demonstrations include hypothesis generation, materials design, and creative reasoning, such as discovering relationships between mythological concepts like 'thin places' with materials science. We propose a 'knowledge garden growth' strategy that integrates insights across domains, promoting interdisciplinary connections. Results with a 3-billion-parameter Graph-PReFLexOR model show superior reasoning depth and adaptability, underscoring the potential for transparent, multidisciplinary AI-driven discovery. It lays the groundwork for general autonomous reasoning solutions. |
|
2025-01-15T00:00:00 | 2501.05131 | 3DIS-FLUX: simple and efficient multi-instance generation with DiT rendering | [
"Dewei Zhou",
"Ji Xie",
"Zongxin Yang",
"Yi Yang"
] | The growing demand for controllable outputs in text-to-image generation has driven significant advancements in multi-instance generation (MIG), enabling users to define both instance layouts and attributes. Currently, the state-of-the-art methods in MIG are primarily adapter-based. However, these methods necessitate retraining a new adapter each time a more advanced model is released, resulting in significant resource consumption. A methodology named Depth-Driven Decoupled Instance Synthesis (3DIS) has been introduced, which decouples MIG into two distinct phases: 1) depth-based scene construction and 2) detail rendering with widely pre-trained depth control models. The 3DIS method requires adapter training solely during the scene construction phase, while enabling various models to perform training-free detail rendering. Initially, 3DIS focused on rendering techniques utilizing U-Net architectures such as SD1.5, SD2, and SDXL, without exploring the potential of recent DiT-based models like FLUX. In this paper, we present 3DIS-FLUX, an extension of the 3DIS framework that integrates the FLUX model for enhanced rendering capabilities. Specifically, we employ the FLUX.1-Depth-dev model for depth map controlled image generation and introduce a detail renderer that manipulates the Attention Mask in FLUX's Joint Attention mechanism based on layout information. This approach allows for the precise rendering of fine-grained attributes of each instance. Our experimental results indicate that 3DIS-FLUX, leveraging the FLUX model, outperforms the original 3DIS method, which utilized SD2 and SDXL, and surpasses current state-of-the-art adapter-based methods in terms of both performance and image quality. Project Page: https://limuloo.github.io/3DIS/. |
|
2025-01-15T00:00:00 | 2501.07556 | MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training | [
"Xingyi He",
"Hao Yu",
"Sida Peng",
"Dongli Tan",
"Zehong Shen",
"Hujun Bao",
"Xiaowei Zhou"
] | Image matching, which aims to identify corresponding pixel locations between images, is crucial in a wide range of scientific disciplines, aiding in image registration, fusion, and analysis. In recent years, deep learning-based image matching algorithms have dramatically outperformed humans in rapidly and accurately finding large amounts of correspondences. However, when dealing with images captured under different imaging modalities that result in significant appearance changes, the performance of these algorithms often deteriorates due to the scarcity of annotated cross-modal training data. This limitation hinders applications in various fields that rely on multiple image modalities to obtain complementary information. To address this challenge, we propose a large-scale pre-training framework that utilizes synthetic cross-modal training signals, incorporating diverse data from various sources, to train models to recognize and match fundamental structures across images. This capability is transferable to real-world, unseen cross-modality image matching tasks. Our key finding is that the matching model trained with our framework achieves remarkable generalizability across more than eight unseen cross-modality registration tasks using the same network weight, substantially outperforming existing methods, whether designed for generalization or tailored for specific tasks. This advancement significantly enhances the applicability of image matching technologies across various scientific disciplines and paves the way for new applications in multi-modality human and artificial intelligence analysis and beyond. |
|
2025-01-16T00:00:00 | 2501.08828 | MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents | [
"Kuicai Dong",
"Yujing Chang",
"Xin Deik Goh",
"Dexun Li",
"Ruiming Tang",
"Yong Liu"
] | Multi-modal document retrieval is designed to identify and retrieve various forms of multi-modal content, such as figures, tables, charts, and layout information from extensive documents. Despite its significance, there is a notable lack of a robust benchmark to effectively evaluate the performance of systems in multi-modal document retrieval. To address this gap, this work introduces a new benchmark, named MMDocIR, encompassing two distinct tasks: page-level and layout-level retrieval. The former focuses on localizing the most relevant pages within a long document, while the latter targets the detection of specific layouts, offering a more fine-grained granularity than whole-page analysis. A layout can refer to a variety of elements such as textual paragraphs, equations, figures, tables, or charts. The MMDocIR benchmark comprises a rich dataset featuring expertly annotated labels for 1,685 questions and bootstrapped labels for 173,843 questions, making it a pivotal resource for advancing multi-modal document retrieval for both training and evaluation. Through rigorous experiments, we reveal that (i) visual retrievers significantly outperform their text counterparts, (ii) the MMDocIR train set can effectively benefit the training process of multi-modal document retrieval and (iii) text retrievers leveraging VLM-text perform much better than those using OCR-text. These findings underscore the potential advantages of integrating visual elements for multi-modal document retrieval. |
|
2025-01-16T00:00:00 | 2501.08994 | RepVideo: Rethinking Cross-Layer Representation for Video Generation | [
"Chenyang Si",
"Weichen Fan",
"Zhengyao Lv",
"Ziqi Huang",
"Yu Qiao",
"Ziwei Liu"
] | Video generation has achieved remarkable progress with the introduction of diffusion models, which have significantly improved the quality of generated videos. However, recent research has primarily focused on scaling up model training, while offering limited insights into the direct impact of representations on the video generation process. In this paper, we initially investigate the characteristics of features in intermediate layers, finding substantial variations in attention maps across different layers. These variations lead to unstable semantic representations and contribute to cumulative differences between features, which ultimately reduce the similarity between adjacent frames and negatively affect temporal coherence. To address this, we propose RepVideo, an enhanced representation framework for text-to-video diffusion models. By accumulating features from neighboring layers to form enriched representations, this approach captures more stable semantic information. These enhanced representations are then used as inputs to the attention mechanism, thereby improving semantic expressiveness while ensuring feature consistency across adjacent frames. Extensive experiments demonstrate that our RepVideo not only significantly enhances the ability to generate accurate spatial appearances, such as capturing complex spatial relationships between multiple objects, but also improves temporal consistency in video generation. |
|
2025-01-16T00:00:00 | 2501.08983 | CityDreamer4D: Compositional Generative Model of Unbounded 4D Cities | [
"Haozhe Xie",
"Zhaoxi Chen",
"Fangzhou Hong",
"Ziwei Liu"
] | 3D scene generation has garnered growing attention in recent years and has made significant progress. Generating 4D cities is more challenging than 3D scenes due to the presence of structurally complex, visually diverse objects like buildings and vehicles, and heightened human sensitivity to distortions in urban environments. To tackle these issues, we propose CityDreamer4D, a compositional generative model specifically tailored for generating unbounded 4D cities. Our main insights are 1) 4D city generation should separate dynamic objects (e.g., vehicles) from static scenes (e.g., buildings and roads), and 2) all objects in the 4D scene should be composed of different types of neural fields for buildings, vehicles, and background stuff. Specifically, we propose Traffic Scenario Generator and Unbounded Layout Generator to produce dynamic traffic scenarios and static city layouts using a highly compact BEV representation. Objects in 4D cities are generated by combining stuff-oriented and instance-oriented neural fields for background stuff, buildings, and vehicles. To suit the distinct characteristics of background stuff and instances, the neural fields employ customized generative hash grids and periodic positional embeddings as scene parameterizations. Furthermore, we offer a comprehensive suite of datasets for city generation, including OSM, GoogleEarth, and CityTopia. The OSM dataset provides a variety of real-world city layouts, while the Google Earth and CityTopia datasets deliver large-scale, high-quality city imagery complete with 3D instance annotations. Leveraging its compositional design, CityDreamer4D supports a range of downstream applications, such as instance editing, city stylization, and urban simulation, while delivering state-of-the-art performance in generating realistic 4D cities. |
|
2025-01-16T00:00:00 | 2501.09019 | Ouroboros-Diffusion: Exploring Consistent Content Generation in Tuning-free Long Video Diffusion | [
"Jingyuan Chen",
"Fuchen Long",
"Jie An",
"Zhaofan Qiu",
"Ting Yao",
"Jiebo Luo",
"Tao Mei"
] | The first-in-first-out (FIFO) video diffusion, built on a pre-trained text-to-video model, has recently emerged as an effective approach for tuning-free long video generation. This technique maintains a queue of video frames with progressively increasing noise, continuously producing clean frames at the queue's head while Gaussian noise is enqueued at the tail. However, FIFO-Diffusion often struggles to keep long-range temporal consistency in the generated videos due to the lack of correspondence modeling across frames. In this paper, we propose Ouroboros-Diffusion, a novel video denoising framework designed to enhance structural and content (subject) consistency, enabling the generation of consistent videos of arbitrary length. Specifically, we introduce a new latent sampling technique at the queue tail to improve structural consistency, ensuring perceptually smooth transitions among frames. To enhance subject consistency, we devise a Subject-Aware Cross-Frame Attention (SACFA) mechanism, which aligns subjects across frames within short segments to achieve better visual coherence. Furthermore, we introduce self-recurrent guidance. This technique leverages information from all previous cleaner frames at the front of the queue to guide the denoising of noisier frames at the end, fostering rich and contextual global information interaction. Extensive experiments of long video generation on the VBench benchmark demonstrate the superiority of our Ouroboros-Diffusion, particularly in terms of subject consistency, motion smoothness, and temporal consistency. |
|
2025-01-16T00:00:00 | 2501.08809 | XMusic: Towards a Generalized and Controllable Symbolic Music Generation Framework | [
"Sida Tian",
"Can Zhang",
"Wei Yuan",
"Wei Tan",
"Wenjie Zhu"
] | In recent years, remarkable advancements in artificial intelligence-generated content (AIGC) have been achieved in the fields of image synthesis and text generation, generating content comparable to that produced by humans. However, the quality of AI-generated music has not yet reached this standard, primarily due to the challenge of effectively controlling musical emotions and ensuring high-quality outputs. This paper presents a generalized symbolic music generation framework, XMusic, which supports flexible prompts (i.e., images, videos, texts, tags, and humming) to generate emotionally controllable and high-quality symbolic music. XMusic consists of two core components, XProjector and XComposer. XProjector parses the prompts of various modalities into symbolic music elements (i.e., emotions, genres, rhythms and notes) within the projection space to generate matching music. XComposer contains a Generator and a Selector. The Generator generates emotionally controllable and melodious music based on our innovative symbolic music representation, whereas the Selector identifies high-quality symbolic music by constructing a multi-task learning scheme involving quality assessment, emotion recognition, and genre recognition tasks. In addition, we build XMIDI, a large-scale symbolic music dataset that contains 108,023 MIDI files annotated with precise emotion and genre labels. Objective and subjective evaluations show that XMusic significantly outperforms the current state-of-the-art methods with impressive music quality. Our XMusic has been awarded as one of the nine Highlights of Collectibles at WAIC 2023. The project homepage of XMusic is https://xmusic-project.github.io. |
|
2025-01-16T00:00:00 | 2501.09012 | Multimodal LLMs Can Reason about Aesthetics in Zero-Shot | [
"Ruixiang Jiang",
"Changwen Chen"
] | https://github.com/songrise/MLLM4Art | We present the first study on how Multimodal LLMs' (MLLMs) reasoning ability shall be elicited to evaluate the aesthetics of artworks. To facilitate this investigation, we construct MM-StyleBench, a novel high-quality dataset for benchmarking artistic stylization. We then develop a principled method for human preference modeling and perform a systematic correlation analysis between MLLMs' responses and human preference. Our experiments reveal an inherent hallucination issue of MLLMs in art evaluation, associated with response subjectivity. ArtCoT is proposed, demonstrating that art-specific task decomposition and the use of concrete language boost MLLMs' reasoning ability for aesthetics. Our findings offer valuable insights into MLLMs for art and can benefit a wide range of downstream applications, such as style transfer and artistic image generation. Code available at https://github.com/songrise/MLLM4Art. |
2025-01-16T00:00:00 | 2501.07783 | Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding | [
"Zhaokai Wang",
"Xizhou Zhu",
"Xue Yang",
"Gen Luo",
"Hao Li",
"Changyao Tian",
"Wenhan Dou",
"Junqi Ge",
"Lewei Lu",
"Yu Qiao",
"Jifeng Dai"
] | https://github.com/OpenGVLab/PIIP | Image pyramids are widely adopted in top-performing methods to obtain multi-scale features for precise visual perception and understanding. However, current image pyramids use the same large-scale model to process multiple resolutions of images, leading to significant computational cost. To address this challenge, we propose a novel network architecture, called Parameter-Inverted Image Pyramid Networks (PIIP). Specifically, PIIP uses pretrained models (ViTs or CNNs) as branches to process multi-scale images, where images of higher resolutions are processed by smaller network branches to balance computational cost and performance. To integrate information from different spatial scales, we further propose a novel cross-branch feature interaction mechanism. To validate PIIP, we apply it to various perception models and a representative multimodal large language model called LLaVA, and conduct extensive experiments on various tasks such as object detection, segmentation, image classification and multimodal understanding. PIIP achieves superior performance compared to single-branch and existing multi-resolution approaches with lower computational cost. When applied to InternViT-6B, a large-scale vision foundation model, PIIP can improve its performance by 1%-2% on detection and segmentation with only 40%-60% of the original computation, finally achieving 60.0 box AP on MS COCO and 59.7 mIoU on ADE20K. For multimodal understanding, our PIIP-LLaVA achieves 73.0% accuracy on TextVQA and 74.5% on MMBench with only 2.8M training data. Our code is released at https://github.com/OpenGVLab/PIIP. |
2025-01-16T00:00:00 | 2501.08365 | Towards Best Practices for Open Datasets for LLM Training | [
"Stefan Baack",
"Stella Biderman",
"Kasia Odrozek",
"Aviya Skowron",
"Ayah Bdeir",
"Jillian Bommarito",
"Jennifer Ding",
"Maximilian Gahntz",
"Paul Keller",
"Pierre-Carl Langlais",
"Greg Lindahl",
"Sebastian Majstorovic",
"Nik Marda",
"Guilherme Penedo",
"Maarten Van Segbroeck",
"Jennifer Wang",
"Leandro von Werra",
"Mitchell Baker",
"Julie Belião",
"Kasia Chmielinski",
"Marzieh Fadaee",
"Lisa Gutermuth",
"Hynek Kydlíček",
"Greg Leppert",
"EM Lewis-Jong",
"Solana Larsen",
"Shayne Longpre",
"Angela Oduor Lungati",
"Cullen Miller",
"Victor Miller",
"Max Ryabinin",
"Kathleen Siminyu",
"Andrew Strait",
"Mark Surman",
"Anna Tumadóttir",
"Maurice Weber",
"Rebecca Weiss",
"Lee White",
"Thomas Wolf"
] | Many AI companies are training their large language models (LLMs) on data without the permission of the copyright owners. The permissibility of doing so varies by jurisdiction: in countries like the EU and Japan, this is allowed under certain restrictions, while in the United States, the legal landscape is more ambiguous. Regardless of the legal status, concerns from creative producers have led to several high-profile copyright lawsuits, and the threat of litigation is commonly cited as a reason for the recent trend towards minimizing the information shared about training datasets by both corporate and public interest actors. This trend in limiting data information causes harm by hindering transparency, accountability, and innovation in the broader ecosystem by denying researchers, auditors, and impacted individuals access to the information needed to understand AI models. While this could be mitigated by training language models on open access and public domain data, at the time of writing, there are no such models (trained at a meaningful scale) due to the substantial technical and sociological challenges in assembling the necessary corpus. These challenges include incomplete and unreliable metadata, the cost and complexity of digitizing physical records, and the diverse set of legal and technical skills required to ensure relevance and responsibility in a quickly changing landscape. Building towards a future where AI systems can be trained on openly licensed data that is responsibly curated and governed requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness. |
|
2025-01-16T00:00:00 | 2501.08970 | Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography | [
"Ilia Shumailov",
"Daniel Ramage",
"Sarah Meiklejohn",
"Peter Kairouz",
"Florian Hartmann",
"Borja Balle",
"Eugene Bagdasarian"
] | We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them. |
|
2025-01-16T00:00:00 | 2501.04693 | Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding | [
"Joshua Jones",
"Oier Mees",
"Carmelo Sferrazza",
"Kyle Stachowicz",
"Pieter Abbeel",
"Sergey Levine"
] | Interacting with the world is a multi-sensory experience: achieving effective general-purpose interaction requires making use of all available modalities -- including vision, touch, and audio -- to fill in gaps from partial observation. For example, when vision is occluded reaching into a bag, a robot should rely on its senses of touch and sound. However, state-of-the-art generalist robot policies are typically trained on large datasets to predict robot actions solely from visual and proprioceptive observations. In this work, we propose FuSe, a novel approach that enables finetuning visuomotor generalist policies on heterogeneous sensor modalities for which large datasets are not readily available by leveraging natural language as a common cross-modal grounding. We combine a multimodal contrastive loss with a sensory-grounded language generation loss to encode high-level semantics. In the context of robot manipulation, we show that FuSe enables performing challenging tasks that require reasoning jointly over modalities such as vision, touch, and sound in a zero-shot setting, such as multimodal prompting, compositional cross-modal prompting, and descriptions of objects it interacts with. We show that the same recipe is applicable to widely different generalist policies, including both diffusion-based generalist policies and large vision-language-action (VLA) models. Extensive experiments in the real world show that FuSe is able to increase success rates by over 20% compared to all considered baselines. |
|
2025-01-16T00:00:00 | 2412.19412 | MINIMA: Modality Invariant Image Matching | [
"Xingyu Jiang",
"Jiangwei Ren",
"Zizhuo Li",
"Xin Zhou",
"Dingkang Liang",
"Xiang Bai"
] | https://github.com/LSXI7/MINIMA | Image matching for both cross-view and cross-modality plays a critical role in multimodal perception. In practice, the modality gap caused by different imaging systems/styles poses great challenges to the matching task. Existing works try to extract invariant features for specific modalities and train on limited datasets, showing poor generalization. In this paper, we present MINIMA, a unified image matching framework for multiple cross-modal cases. Without pursuing fancy modules, our MINIMA aims to enhance universal performance from the perspective of data scaling up. For such purpose, we propose a simple yet effective data engine that can freely produce a large dataset containing multiple modalities, rich scenarios, and accurate matching labels. Specifically, we scale up the modalities from cheap but rich RGB-only matching data, by means of generative models. Under this setting, the matching labels and rich diversity of the RGB dataset are well inherited by the generated multimodal data. Benefiting from this, we construct MD-syn, a new comprehensive dataset that fills the data gap for general multimodal image matching. With MD-syn, we can directly train any advanced matching pipeline on randomly selected modality pairs to obtain cross-modal ability. Extensive experiments on in-domain and zero-shot matching tasks, including 19 cross-modal cases, demonstrate that our MINIMA can significantly outperform the baselines and even surpass modality-specific methods. The dataset and code are available at https://github.com/LSXI7/MINIMA . |
2025-01-17T00:00:00 | 2501.08617 | RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation | [
"Kaiqu Liang",
"Haimin Hu",
"Ryan Liu",
"Thomas L. Griffiths",
"Jaime Fernández Fisac"
] | Generative AI systems like foundation models (FMs) must align well with human values to ensure their behavior is helpful and trustworthy. While Reinforcement Learning from Human Feedback (RLHF) has shown promise for optimizing model performance using human judgments, existing RLHF pipelines predominantly rely on immediate feedback, which can fail to accurately reflect the downstream impact of an interaction on users' utility. We demonstrate that feedback based on evaluators' foresight estimates of downstream consequences systematically induces Goodhart's Law dynamics, incentivizing misaligned behaviors like sycophancy and deception and ultimately degrading user outcomes. To alleviate this, we propose decoupling evaluation from prediction by refocusing RLHF on hindsight feedback. Our theoretical analysis reveals that conditioning evaluator feedback on downstream observations mitigates misalignment and improves expected human utility, even when these observations are simulated by the AI system itself. To leverage this insight in a practical alignment algorithm, we introduce Reinforcement Learning from Hindsight Simulation (RLHS), which first simulates plausible consequences and then elicits feedback to assess what behaviors were genuinely beneficial in hindsight. We apply RLHS to two widely-employed online and offline preference optimization methods -- Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) -- and show empirically that misalignment is significantly reduced with both methods. Through an online human user study, we show that RLHS consistently outperforms RLHF in helping users achieve their goals and earns higher satisfaction ratings, despite being trained solely with simulated hindsight feedback. These results underscore the importance of focusing on long-term consequences, even simulated ones, to mitigate misalignment in RLHF. |
|
2025-01-17T00:00:00 | 2501.09686 | Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models | [
"Fengli Xu",
"Qianyue Hao",
"Zefang Zong",
"Jingwei Wang",
"Yunke Zhang",
"Jingyi Wang",
"Xiaochong Lan",
"Jiahui Gong",
"Tianjian Ouyang",
"Fanjin Meng",
"Chenyang Shao",
"Yuwei Yan",
"Qinglong Yang",
"Yiwen Song",
"Sijian Ren",
"Xinyuan Hu",
"Yu Li",
"Jie Feng",
"Chen Gao",
"Yong Li"
] | Language has long been conceived as an essential tool for human reasoning. The breakthrough of Large Language Models (LLMs) has sparked significant research interest in leveraging these models to tackle complex reasoning tasks. Researchers have moved beyond simple autoregressive token generation by introducing the concept of "thought" -- a sequence of tokens representing intermediate steps in the reasoning process. This innovative paradigm enables LLMs to mimic complex human reasoning processes, such as tree search and reflective thinking. Recently, an emerging trend of learning to reason has applied reinforcement learning (RL) to train LLMs to master reasoning processes. This approach enables the automatic generation of high-quality reasoning trajectories through trial-and-error search algorithms, significantly expanding LLMs' reasoning capacity by providing substantially more training data. Furthermore, recent studies demonstrate that encouraging LLMs to "think" with more tokens during test-time inference can further significantly boost reasoning accuracy. Therefore, train-time and test-time scaling combine to reveal a new research frontier -- a path toward Large Reasoning Models. The introduction of OpenAI's o1 series marks a significant milestone in this research direction. In this survey, we present a comprehensive review of recent progress in LLM reasoning. We begin by introducing the foundational background of LLMs and then explore the key technical components driving the development of large reasoning models, with a focus on automated data construction, learning-to-reason techniques, and test-time scaling. We also analyze popular open-source projects aimed at building large reasoning models, and conclude with open challenges and future research directions. |
|
2025-01-17T00:00:00 | 2501.09732 | Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps | [
"Nanye Ma",
"Shangyuan Tong",
"Haolin Jia",
"Hexiang Hu",
"Yu-Chuan Su",
"Mingda Zhang",
"Xuan Yang",
"Yandong Li",
"Tommi Jaakkola",
"Xuhui Jia",
"Saining Xie"
] | Generative models have made significant impacts across various domains, largely due to their ability to scale during training by increasing data, computational resources, and model size, a phenomenon characterized by the scaling laws. Recent research has begun to explore inference-time scaling behavior in Large Language Models (LLMs), revealing how performance can further improve with additional computation during inference. Unlike LLMs, diffusion models inherently possess the flexibility to adjust inference-time computation via the number of denoising steps, although the performance gains typically flatten after a few dozen. In this work, we explore the inference-time scaling behavior of diffusion models beyond increasing denoising steps and investigate how the generation performance can further improve with increased computation. Specifically, we consider a search problem aimed at identifying better noises for the diffusion sampling process. We structure the design space along two axes: the verifiers used to provide feedback, and the algorithms used to find better noise candidates. Through extensive experiments on class-conditioned and text-conditioned image generation benchmarks, our findings reveal that increasing inference-time compute leads to substantial improvements in the quality of samples generated by diffusion models, and, given the complicated nature of images, combinations of the components in the framework can be specifically chosen to suit different application scenarios. |
|
2025-01-17T00:00:00 | 2501.09747 | FAST: Efficient Action Tokenization for Vision-Language-Action Models | [
"Karl Pertsch",
"Kyle Stachowicz",
"Brian Ichter",
"Danny Driess",
"Suraj Nair",
"Quan Vuong",
"Oier Mees",
"Chelsea Finn",
"Sergey Levine"
] | Autoregressive sequence models, such as Transformer-based vision-language action (VLA) policies, can be tremendously effective for capturing complex and generalizable robotic behaviors. However, such models require us to choose a tokenization of our continuous action signals, which determines how the discrete symbols predicted by the model map to continuous robot actions. We find that current approaches for robot action tokenization, based on simple per-dimension, per-timestep binning schemes, typically perform poorly when learning dexterous skills from high-frequency robot data. To address this challenge, we propose a new compression-based tokenization scheme for robot actions, based on the discrete cosine transform. Our tokenization approach, Frequency-space Action Sequence Tokenization (FAST), enables us to train autoregressive VLAs for highly dexterous and high-frequency tasks where standard discretization methods fail completely. Based on FAST, we release FAST+, a universal robot action tokenizer, trained on 1M real robot action trajectories. It can be used as a black-box tokenizer for a wide range of robot action sequences, with diverse action spaces and control frequencies. Finally, we show that, when combined with the pi0 VLA, our method can scale to training on 10k hours of robot data and match the performance of diffusion VLAs, while reducing training time by up to 5x. |
|
2025-01-17T00:00:00 | 2501.09755 | Learnings from Scaling Visual Tokenizers for Reconstruction and Generation | [
"Philippe Hansen-Estruch",
"David Yan",
"Ching-Yao Chung",
"Orr Zohar",
"Jialiang Wang",
"Tingbo Hou",
"Tao Xu",
"Sriram Vishwanath",
"Peter Vajda",
"Xinlei Chen"
] | Visual tokenization via auto-encoding empowers state-of-the-art image and video generative models by compressing pixels into a latent space. Although scaling Transformer-based generators has been central to recent advances, the tokenizer component itself is rarely scaled, leaving open questions about how auto-encoder design choices influence both its objective of reconstruction and downstream generative performance. Our work aims to conduct an exploration of scaling in auto-encoders to fill in this blank. To facilitate this exploration, we replace the typical convolutional backbone with an enhanced Vision Transformer architecture for Tokenization (ViTok). We train ViTok on large-scale image and video datasets far exceeding ImageNet-1K, removing data constraints on tokenizer scaling. We first study how scaling the auto-encoder bottleneck affects both reconstruction and generation -- and find that while it is highly correlated with reconstruction, its relationship with generation is more complex. We next explored the effect of separately scaling the auto-encoders' encoder and decoder on reconstruction and generation performance. Crucially, we find that scaling the encoder yields minimal gains for either reconstruction or generation, while scaling the decoder boosts reconstruction but the benefits for generation are mixed. Building on our exploration, we design ViTok as a lightweight auto-encoder that achieves competitive performance with state-of-the-art auto-encoders on ImageNet-1K and COCO reconstruction tasks (256p and 512p) while outperforming existing auto-encoders on 16-frame 128p video reconstruction for UCF-101, all with 2-5x fewer FLOPs. When integrated with Diffusion Transformers, ViTok demonstrates competitive performance on image generation for ImageNet-1K and sets new state-of-the-art benchmarks for class-conditional video generation on UCF-101. |
|
2025-01-17T00:00:00 | 2501.09756 | SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces | [
"Sumit Chaturvedi",
"Mengwei Ren",
"Yannick Hold-Geoffroy",
"Jingyuan Liu",
"Julie Dorsey",
"Zhixin Shu"
] | We introduce SynthLight, a diffusion model for portrait relighting. Our approach frames image relighting as a re-rendering problem, where pixels are transformed in response to changes in environmental lighting conditions. Using a physically-based rendering engine, we synthesize a dataset to simulate this lighting-conditioned transformation with 3D head assets under varying lighting. We propose two training and inference strategies to bridge the gap between the synthetic and real image domains: (1) multi-task training that takes advantage of real human portraits without lighting labels; (2) an inference time diffusion sampling procedure based on classifier-free guidance that leverages the input portrait to better preserve details. Our method generalizes to diverse real photographs and produces realistic illumination effects, including specular highlights and cast shadows, while preserving the subject's identity. Our quantitative experiments on Light Stage data demonstrate results comparable to state-of-the-art relighting methods. Our qualitative results on in-the-wild images showcase rich and unprecedented illumination effects. Project Page: https://vrroom.github.io/synthlight/ |
|
2025-01-17T00:00:00 | 2501.09503 | AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation | [
"Junjie He",
"Yuxiang Tuo",
"Binghui Chen",
"Chongyang Zhong",
"Yifeng Geng",
"Liefeng Bo"
] | Recently, large-scale generative models have demonstrated outstanding text-to-image generation capabilities. However, generating high-fidelity personalized images with specific subjects still presents challenges, especially in cases involving multiple subjects. In this paper, we propose AnyStory, a unified approach for personalized subject generation. AnyStory not only achieves high-fidelity personalization for single subjects, but also for multiple subjects, without sacrificing subject fidelity. Specifically, AnyStory models the subject personalization problem in an "encode-then-route" manner. In the encoding step, AnyStory utilizes a universal and powerful image encoder, i.e., ReferenceNet, in conjunction with CLIP vision encoder to achieve high-fidelity encoding of subject features. In the routing step, AnyStory utilizes a decoupled instance-aware subject router to accurately perceive and predict the potential location of the corresponding subject in the latent space, and guide the injection of subject conditions. Detailed experimental results demonstrate the excellent performance of our method in retaining subject details, aligning text descriptions, and personalizing for multiple subjects. The project page is at https://aigcdesigngroup.github.io/AnyStory/ . |
|
2025-01-17T00:00:00 | 2501.09433 | CaPa: Carve-n-Paint Synthesis for Efficient 4K Textured Mesh Generation | [
"Hwan Heo",
"Jangyeong Kim",
"Seongyeong Lee",
"Jeong A Wi",
"Junyoung Choi",
"Sangjun Ahn"
] | The synthesis of high-quality 3D assets from textual or visual inputs has become a central objective in modern generative modeling. Despite the proliferation of 3D generation algorithms, they frequently grapple with challenges such as multi-view inconsistency, slow generation times, low fidelity, and surface reconstruction problems. While some studies have addressed some of these issues, a comprehensive solution remains elusive. In this paper, we introduce CaPa, a carve-and-paint framework that generates high-fidelity 3D assets efficiently. CaPa employs a two-stage process, decoupling geometry generation from texture synthesis. Initially, a 3D latent diffusion model generates geometry guided by multi-view inputs, ensuring structural consistency across perspectives. Subsequently, leveraging a novel, model-agnostic Spatially Decoupled Attention, the framework synthesizes high-resolution textures (up to 4K) for a given geometry. Furthermore, we propose a 3D-aware occlusion inpainting algorithm that fills untextured regions, resulting in cohesive results across the entire model. This pipeline generates high-quality 3D assets in less than 30 seconds, providing ready-to-use outputs for commercial applications. Experimental results demonstrate that CaPa excels in both texture fidelity and geometric stability, establishing a new standard for practical, scalable 3D asset generation. |
|
2025-01-17T00:00:00 | 2501.09751 | OmniThink: Expanding Knowledge Boundaries in Machine Writing through Thinking | [
"Zekun Xi",
"Wenbiao Yin",
"Jizhan Fang",
"Jialong Wu",
"Runnan Fang",
"Ningyu Zhang",
"Jiang Yong",
"Pengjun Xie",
"Fei Huang",
"Huajun Chen"
] | Machine writing with large language models often relies on retrieval-augmented generation. However, these approaches remain confined within the boundaries of the model's predefined scope, limiting the generation of content with rich information. Specifically, vanilla-retrieved information tends to lack depth, utility, and suffers from redundancy, which negatively impacts the quality of generated articles, leading to shallow, repetitive, and unoriginal outputs. To address these issues, we propose OmniThink, a machine writing framework that emulates the human-like process of iterative expansion and reflection. The core idea behind OmniThink is to simulate the cognitive behavior of learners as they progressively deepen their knowledge of the topics. Experimental results demonstrate that OmniThink improves the knowledge density of generated articles without compromising metrics such as coherence and depth. Human evaluations and expert feedback further highlight the potential of OmniThink to address real-world challenges in the generation of long-form articles. |
|
2025-01-17T00:00:00 | 2501.09484 | Exploring the Inquiry-Diagnosis Relationship with Advanced Patient Simulators | [
"Zhaocheng Liu",
"Quan Tu",
"Wen Ye",
"Yu Xiao",
"Zhishou Zhang",
"Hengfu Cui",
"Yalun Zhu",
"Qiang Ju",
"Shizheng Li",
"Jian Xie"
] | https://github.com/LIO-H-ZEN/PatientSimulator | Online medical consultation (OMC) restricts doctors to gathering patient information solely through inquiries, making the already complex sequential decision-making process of diagnosis even more challenging. Recently, the rapid advancement of large language models has demonstrated a significant potential to transform OMC. However, most studies have primarily focused on improving diagnostic accuracy under conditions of relatively sufficient information, while paying limited attention to the "inquiry" phase of the consultation process. This lack of focus has left the relationship between "inquiry" and "diagnosis" insufficiently explored. In this paper, we first extract real patient interaction strategies from authentic doctor-patient conversations and use these strategies to guide the training of a patient simulator that closely mirrors real-world behavior. By inputting medical records into our patient simulator to simulate patient responses, we conduct extensive experiments to explore the relationship between "inquiry" and "diagnosis" in the consultation process. Experimental results demonstrate that inquiry and diagnosis adhere to Liebig's law: poor inquiry quality limits the effectiveness of diagnosis, regardless of diagnostic capability, and vice versa. Furthermore, the experiments reveal significant differences in the inquiry performance of various models. To investigate this phenomenon, we categorize the inquiry process into four types: (1) chief complaint inquiry; (2) specification of known symptoms; (3) inquiry about accompanying symptoms; and (4) gathering family or medical history. We analyze the distribution of inquiries across the four types for different models to explore the reasons behind their significant performance differences. We plan to open-source the weights and related code of our patient simulator at https://github.com/LIO-H-ZEN/PatientSimulator. |
2025-01-17T00:00:00 | 2501.09038 | Do generative video models learn physical principles from watching videos? | [
"Saman Motamed",
"Laura Culp",
"Kevin Swersky",
"Priyank Jaini",
"Robert Geirhos"
] | https://github.com/google-deepmind/physics-IQ-benchmark | AI video generation is undergoing a revolution, with quality and realism advancing rapidly. These advances have led to a passionate scientific debate: Do video models learn ``world models'' that discover laws of physics -- or, alternatively, are they merely sophisticated pixel predictors that achieve visual realism without understanding the physical principles of reality? We address this question by developing Physics-IQ, a comprehensive benchmark dataset that can only be solved by acquiring a deep understanding of various physical principles, like fluid dynamics, optics, solid mechanics, magnetism and thermodynamics. We find that across a range of current models (Sora, Runway, Pika, Lumiere, Stable Video Diffusion, and VideoPoet), physical understanding is severely limited, and unrelated to visual realism. At the same time, some test cases can already be successfully solved. This indicates that acquiring certain physical principles from observation alone may be possible, but significant challenges remain. While we expect rapid advances ahead, our work demonstrates that visual realism does not imply physical understanding. Our project page is at https://physics-iq.github.io; code at https://github.com/google-deepmind/physics-IQ-benchmark. |
2025-01-17T00:00:00 | 2501.09653 | The Heap: A Contamination-Free Multilingual Code Dataset for Evaluating Large Language Models | [
"Jonathan Katzy",
"Razvan Mihai Popescu",
"Arie van Deursen",
"Maliheh Izadi"
] | The recent rise in the popularity of large language models has spurred the development of extensive code datasets needed to train them. This has left limited code available for collection and use in the downstream investigation of specific behaviors, or evaluation of large language models without suffering from data contamination. To address this problem, we release The Heap, a large multilingual dataset covering 57 programming languages that has been deduplicated with respect to other open datasets of code, enabling researchers to conduct fair evaluations of large language models without significant data cleaning overhead. |
|
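The Heap's central promise is deduplication against other open code datasets. As a rough illustration only, a minimal exact-deduplication pass by content hashing might look like the sketch below; the dataset itself also applies near-deduplication, which is not shown, and `candidates`/`reference_corpus` are hypothetical inputs.

```python
import hashlib
from typing import Iterable, List

def exact_dedup(candidates: Iterable[str], reference_corpus: Iterable[str]) -> List[str]:
    """Drop candidate files whose normalized content already appears in a reference corpus."""
    def fingerprint(code: str) -> str:
        # Normalize trailing whitespace before hashing so trivially different copies collide.
        normalized = "\n".join(line.rstrip() for line in code.splitlines()).strip()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    seen = {fingerprint(code) for code in reference_corpus}
    return [code for code in candidates if fingerprint(code) not in seen]
```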
2025-01-20T00:00:00 | 2501.09891 | Evolving Deeper LLM Thinking | [
"Kuang-Huei Lee",
"Ian Fischer",
"Yueh-Hua Wu",
"Dave Marwood",
"Shumeet Baluja",
"Dale Schuurmans",
"Xinyun Chen"
] | We explore an evolutionary search strategy for scaling inference time compute in Large Language Models. The proposed approach, Mind Evolution, uses a language model to generate, recombine and refine candidate responses. The proposed approach avoids the need to formalize the underlying inference problem whenever a solution evaluator is available. Controlling for inference cost, we find that Mind Evolution significantly outperforms other inference strategies such as Best-of-N and Sequential Revision in natural language planning tasks. In the TravelPlanner and Natural Plan benchmarks, Mind Evolution solves more than 98% of the problem instances using Gemini 1.5 Pro without the use of a formal solver. |
|
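As a rough sketch of the generate-recombine-refine loop described above (not the paper's actual algorithm), the following toy evolutionary search assumes four hypothetical callables: `generate`, `recombine`, and `refine` standing in for LLM calls, and `score` for the task-specific solution evaluator.

```python
import random
from typing import Callable, List

def mind_evolution_sketch(
    generate: Callable[[], str],           # hypothetical: sample a fresh candidate from the LLM
    recombine: Callable[[str, str], str],  # hypothetical: merge two parent responses into a child
    refine: Callable[[str], str],          # hypothetical: ask the LLM to revise a candidate
    score: Callable[[str], float],         # task-specific solution evaluator (assumed available)
    population_size: int = 8,
    generations: int = 5,
) -> str:
    """Toy evolutionary search over LLM-generated responses."""
    population: List[str] = [generate() for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        parents = ranked[: population_size // 2]           # keep the fitter half
        children = []
        while len(children) < population_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(refine(recombine(a, b)))       # recombine two parents, then refine
        population = parents + children
    return max(population, key=score)
```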
2025-01-20T00:00:00 | 2501.10120 | PaSa: An LLM Agent for Comprehensive Academic Paper Search | [
"Yichen He",
"Guanhua Huang",
"Peiyuan Feng",
"Yuan Lin",
"Yuchen Zhang",
"Hang Li",
"Weinan E"
] | https://github.com/bytedance/pasa | We introduce PaSa, an advanced Paper Search agent powered by large language models. PaSa can autonomously make a series of decisions, including invoking search tools, reading papers, and selecting relevant references, to ultimately obtain comprehensive and accurate results for complex scholarly queries. We optimize PaSa using reinforcement learning with a synthetic dataset, AutoScholarQuery, which includes 35k fine-grained academic queries and corresponding papers sourced from top-tier AI conference publications. Additionally, we develop RealScholarQuery, a benchmark collecting real-world academic queries to assess PaSa performance in more realistic scenarios. Despite being trained on synthetic data, PaSa significantly outperforms existing baselines on RealScholarQuery, including Google, Google Scholar, Google with GPT-4 for paraphrased queries, chatGPT (search-enabled GPT-4o), GPT-o1, and PaSa-GPT-4o (PaSa implemented by prompting GPT-4o). Notably, PaSa-7B surpasses the best Google-based baseline, Google with GPT-4o, by 37.78% in recall@20 and 39.90% in recall@50. It also exceeds PaSa-GPT-4o by 30.36% in recall and 4.25% in precision. Model, datasets, and code are available at https://github.com/bytedance/pasa. |
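PaSa is reported in terms of recall@20 and recall@50. For reference, the standard recall@k computation is shown below; this is generic evaluation code, not taken from the PaSa repository.

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of relevant papers that appear among the top-k retrieved results."""
    if not relevant:
        return 0.0
    hits = sum(1 for paper_id in retrieved[:k] if paper_id in relevant)
    return hits / len(relevant)

# Example: 3 of 4 relevant papers are retrieved within the top 20 -> recall@20 = 0.75
print(recall_at_k(["p1", "p2", "p3", "p9"] + ["x"] * 16, {"p1", "p2", "p3", "p4"}, k=20))
```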
2025-01-20T00:00:00 | 2501.10132 | ComplexFuncBench: Exploring Multi-Step and Constrained Function Calling under Long-Context Scenario | [
"Lucen Zhong",
"Zhengxiao Du",
"Xiaohan Zhang",
"Haiyi Hu",
"Jie Tang"
] | https://github.com/THUDM/ComplexFuncBench | Enhancing large language models (LLMs) with real-time APIs can help generate more accurate and up-to-date responses. However, evaluating the function calling abilities of LLMs in real-world scenarios remains under-explored due to the complexity of data collection and evaluation. In this work, we introduce ComplexFuncBench, a benchmark for complex function calling across five real-world scenarios. Compared to existing benchmarks, ComplexFuncBench encompasses multi-step and constrained function calling, which requires long-parameter filling, parameter value reasoning, and 128k long context. Additionally, we propose an automatic framework, ComplexEval, for quantitatively evaluating complex function calling tasks. Through comprehensive experiments, we demonstrate the deficiencies of state-of-the-art LLMs in function calling and suggest future directions for optimizing these capabilities. The data and code are available at https://github.com/THUDM/ComplexFuncBench. |
2025-01-20T00:00:00 | 2501.10020 | Textoon: Generating Vivid 2D Cartoon Characters from Text Descriptions | [
"Chao He",
"Jianqiang Ren",
"Liefeng Bo"
] | The 2D cartoon style is a prominent art form in digital character creation, particularly popular among younger audiences. While advancements in digital human technology have spurred extensive research into photorealistic digital humans and 3D characters, interactive 2D cartoon characters have received comparatively less attention. Unlike 3D counterparts, which require sophisticated construction and resource-intensive rendering, Live2D, a widely-used format for 2D cartoon characters, offers a more efficient alternative that makes it possible to animate 2D characters in a manner that simulates 3D movement without the necessity of building a complete 3D model. Furthermore, Live2D employs lightweight HTML5 (H5) rendering, improving both accessibility and efficiency. In this technical report, we introduce Textoon, an innovative method for generating diverse 2D cartoon characters in the Live2D format based on text descriptions. Textoon leverages cutting-edge language and vision models to comprehend textual intentions and generate 2D appearance, and is capable of creating a wide variety of stunning and interactive 2D characters within one minute. The project homepage is https://human3daigc.github.io/Textoon_webpage/. |
|
2025-01-20T00:00:00 | 2501.10045 | HiFi-SR: A Unified Generative Transformer-Convolutional Adversarial Network for High-Fidelity Speech Super-Resolution | [
"Shengkui Zhao",
"Kun Zhou",
"Zexu Pan",
"Yukun Ma",
"Chong Zhang",
"Bin Ma"
] | https://github.com/modelscope/ClearerVoice-Studio | The application of generative adversarial networks (GANs) has recently advanced speech super-resolution (SR) based on intermediate representations like mel-spectrograms. However, existing SR methods that typically rely on independently trained and concatenated networks may lead to inconsistent representations and poor speech quality, especially in out-of-domain scenarios. In this work, we propose HiFi-SR, a unified network that leverages end-to-end adversarial training to achieve high-fidelity speech super-resolution. Our model features a unified transformer-convolutional generator designed to seamlessly handle both the prediction of latent representations and their conversion into time-domain waveforms. The transformer network serves as a powerful encoder, converting low-resolution mel-spectrograms into latent space representations, while the convolutional network upscales these representations into high-resolution waveforms. To enhance high-frequency fidelity, we incorporate a multi-band, multi-scale time-frequency discriminator, along with a multi-scale mel-reconstruction loss in the adversarial training process. HiFi-SR is versatile, capable of upscaling any input speech signal between 4 kHz and 32 kHz to a 48 kHz sampling rate. Experimental results demonstrate that HiFi-SR significantly outperforms existing speech SR methods across both objective metrics and ABX preference tests, for both in-domain and out-of-domain scenarios (https://github.com/modelscope/ClearerVoice-Studio). |
2025-01-20T00:00:00 | 2501.09978 | GaussianAvatar-Editor: Photorealistic Animatable Gaussian Head Avatar Editor | [
"Xiangyue Liu",
"Kunming Luo",
"Heng Li",
"Qi Zhang",
"Yuan Liu",
"Li Yi",
"Ping Tan"
] | We introduce GaussianAvatar-Editor, an innovative framework for text-driven editing of animatable Gaussian head avatars that can be fully controlled in expression, pose, and viewpoint. Unlike static 3D Gaussian editing, editing animatable 4D Gaussian avatars presents challenges related to motion occlusion and spatial-temporal inconsistency. To address these issues, we propose the Weighted Alpha Blending Equation (WABE). This function enhances the blending weight of visible Gaussians while suppressing the influence on non-visible Gaussians, effectively handling motion occlusion during editing. Furthermore, to improve editing quality and ensure 4D consistency, we incorporate conditional adversarial learning into the editing process. This strategy helps to refine the edited results and maintain consistency throughout the animation. By integrating these methods, our GaussianAvatar-Editor achieves photorealistic and consistent results in animatable 4D Gaussian editing. We conduct comprehensive experiments across various subjects to validate the effectiveness of our proposed techniques, which demonstrates the superiority of our approach over existing methods. More results and code are available at: [Project Link](https://xiangyueliu.github.io/GaussianAvatar-Editor/). |
|
2025-01-20T00:00:00 | 2501.10021 | X-Dyna: Expressive Dynamic Human Image Animation | [
"Di Chang",
"Hongyi Xu",
"You Xie",
"Yipeng Gao",
"Zhengfei Kuang",
"Shengqu Cai",
"Chenxu Zhang",
"Guoxian Song",
"Chao Wang",
"Yichun Shi",
"Zeyuan Chen",
"Shijie Zhou",
"Linjie Luo",
"Gordon Wetzstein",
"Mohammad Soleymani"
] | https://github.com/bytedance/X-Dyna | We introduce X-Dyna, a novel zero-shot, diffusion-based pipeline for animating a single human image using facial expressions and body movements derived from a driving video, that generates realistic, context-aware dynamics for both the subject and the surrounding environment. Building on prior approaches centered on human pose control, X-Dyna addresses key shortcomings causing the loss of dynamic details, enhancing the lifelike qualities of human video animations. At the core of our approach is the Dynamics-Adapter, a lightweight module that effectively integrates reference appearance context into the spatial attentions of the diffusion backbone while preserving the capacity of motion modules in synthesizing fluid and intricate dynamic details. Beyond body pose control, we connect a local control module with our model to capture identity-disentangled facial expressions, facilitating accurate expression transfer for enhanced realism in animated scenes. Together, these components form a unified framework capable of learning physical human motion and natural scene dynamics from a diverse blend of human and scene videos. Comprehensive qualitative and quantitative evaluations demonstrate that X-Dyna outperforms state-of-the-art methods, creating highly lifelike and expressive animations. The code is available at https://github.com/bytedance/X-Dyna. |
2025-01-20T00:00:00 | 2501.09775 | Multiple Choice Questions: Reasoning Makes Large Language Models (LLMs) More Self-Confident Even When They Are Wrong | [
"Tairan Fu",
"Javier Conde",
"Gonzalo Martínez",
"María Grandury",
"Pedro Reviriego"
] | One of the most widely used methods to evaluate LLMs are Multiple Choice Question (MCQ) tests. MCQ benchmarks enable the testing of LLM knowledge on almost any topic at scale as the results can be processed automatically. To help the LLM answer, a few examples called few shots can be included in the prompt. Moreover, the LLM can be asked to answer the question directly with the selected option or to first provide the reasoning and then the selected answer, which is known as chain of thought. In addition to checking whether the selected answer is correct, the evaluation can look at the LLM-estimated probability of its response as an indication of the confidence of the LLM in the response. In this paper, we study how the LLM confidence in its answer depends on whether the model has been asked to answer directly or to provide the reasoning before answering. The results of the evaluation of questions on a wide range of topics in seven different models show that LLMs are more confident in their answers when they provide reasoning before the answer. This occurs regardless of whether the selected answer is correct. Our hypothesis is that this behavior is due to the reasoning that modifies the probability of the selected answer, as the LLM predicts the answer based on the input question and the reasoning that supports the selection made. Therefore, LLM estimated probabilities seem to have intrinsic limitations that should be understood in order to use them in evaluation procedures. Interestingly, the same behavior has been observed in humans, for whom explaining an answer increases confidence in its correctness. |
|
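The study compares the model's estimated probability of its selected option with and without prior reasoning. A minimal sketch of that confidence computation, assuming per-option log-probabilities for the answer token are already available (how they are extracted depends on the model API and is not specified here):

```python
import math

def option_confidence(option_logprobs: dict[str, float]) -> tuple[str, float]:
    """Softmax-normalize per-option log-probabilities and return (chosen option, confidence)."""
    max_lp = max(option_logprobs.values())                        # subtract max for numerical stability
    exp = {o: math.exp(lp - max_lp) for o, lp in option_logprobs.items()}
    total = sum(exp.values())
    probs = {o: v / total for o, v in exp.items()}
    choice = max(probs, key=probs.get)
    return choice, probs[choice]

# Hypothetical log-probs for options A-D of one multiple-choice question.
print(option_confidence({"A": -0.4, "B": -2.1, "C": -3.0, "D": -3.3}))
```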
2025-01-20T00:00:00 | 2501.09825 | Bridging Language Barriers in Healthcare: A Study on Arabic LLMs | [
"Nada Saadi",
"Tathagata Raha",
"Clément Christophe",
"Marco AF Pimentel",
"Ronnie Rajan",
"Praveen K Kanithi"
] | This paper investigates the challenges of developing large language models (LLMs) proficient in both multilingual understanding and medical knowledge. We demonstrate that simply translating medical data does not guarantee strong performance on clinical tasks in the target language. Our experiments reveal that the optimal language mix in training data varies significantly across different medical tasks. We find that larger models with carefully calibrated language ratios achieve superior performance on native-language clinical tasks. Furthermore, our results suggest that relying solely on fine-tuning may not be the most effective approach for incorporating new language knowledge into LLMs. Instead, data and computationally intensive pretraining methods may still be necessary to achieve optimal performance in multilingual medical settings. These findings provide valuable guidance for building effective and inclusive medical AI systems for diverse linguistic communities. |
|
2025-01-21T00:00:00 | 2501.08325 | GameFactory: Creating New Games with Generative Interactive Videos | [
"Jiwen Yu",
"Yiran Qin",
"Xintao Wang",
"Pengfei Wan",
"Di Zhang",
"Xihui Liu"
] | Generative game engines have the potential to revolutionize game development by autonomously creating new content and reducing manual workload. However, existing video-based game generation methods fail to address the critical challenge of scene generalization, limiting their applicability to existing games with fixed styles and scenes. In this paper, we present GameFactory, a framework focused on exploring scene generalization in game video generation. To enable the creation of entirely new and diverse games, we leverage pre-trained video diffusion models trained on open-domain video data. To bridge the domain gap between open-domain priors and the small-scale game dataset, we propose a multi-phase training strategy that decouples game style learning from action control, preserving open-domain generalization while achieving action controllability. Using Minecraft as our data source, we release GF-Minecraft, a high-quality and diverse action-annotated video dataset for research. Furthermore, we extend our framework to enable autoregressive action-controllable game video generation, allowing the production of unlimited-length interactive game videos. Experimental results demonstrate that GameFactory effectively generates open-domain, diverse, and action-controllable game videos, representing a significant step forward in AI-driven game generation. Our dataset and project page are publicly available at https://vvictoryuki.github.io/gamefactory/. |
|
2025-01-21T00:00:00 | 2501.09781 | VideoWorld: Exploring Knowledge Learning from Unlabeled Videos | [
"Zhongwei Ren",
"Yunchao Wei",
"Xun Guo",
"Yao Zhao",
"Bingyi Kang",
"Jiashi Feng",
"Xiaojie Jin"
] | This work explores whether a deep generative model can learn complex knowledge solely from visual input, in contrast to the prevalent focus on text-based models like large language models (LLMs). We develop VideoWorld, an auto-regressive video generation model trained on unlabeled video data, and test its knowledge acquisition abilities in video-based Go and robotic control tasks. Our experiments reveal two key findings: (1) video-only training provides sufficient information for learning knowledge, including rules, reasoning and planning capabilities, and (2) the representation of visual change is crucial for knowledge acquisition. To improve both the efficiency and efficacy of this process, we introduce the Latent Dynamics Model (LDM) as a key component of VideoWorld. Remarkably, VideoWorld reaches a 5-dan professional level in the Video-GoBench with just a 300-million-parameter model, without relying on search algorithms or reward mechanisms typical in reinforcement learning. In robotic tasks, VideoWorld effectively learns diverse control operations and generalizes across environments, approaching the performance of oracle models in CALVIN and RLBench. This study opens new avenues for knowledge acquisition from visual data, with all code, data, and models open-sourced for further research. |
|
2025-01-21T00:00:00 | 2501.09284 | SEAL: Entangled White-box Watermarks on Low-Rank Adaptation | [
"Giyeong Oh",
"Saejin Kim",
"Woohyun Cho",
"Sangkyu Lee",
"Jiwan Chung",
"Dokyung Song",
"Youngjae Yu"
] | Recently, LoRA and its variants have become the de facto strategy for training and sharing task-specific versions of large pretrained models, thanks to their efficiency and simplicity. However, the issue of copyright protection for LoRA weights, especially through watermark-based techniques, remains underexplored. To address this gap, we propose SEAL (SEcure wAtermarking on LoRA weights), a universal white-box watermarking scheme for LoRA. SEAL embeds a secret, non-trainable matrix between trainable LoRA weights, serving as a passport to claim ownership. SEAL then entangles the passport with the LoRA weights through training, without an extra loss for entanglement, and distributes the finetuned weights after hiding the passport. When applying SEAL, we observed no performance degradation across commonsense reasoning, textual/visual instruction tuning, and text-to-image synthesis tasks. We demonstrate that SEAL is robust against a variety of known attacks: removal, obfuscation, and ambiguity attacks. |
|
2025-01-22T00:00:00 | 2501.12380 | MMVU: Measuring Expert-Level Multi-Discipline Video Understanding | [
"Yilun Zhao",
"Lujing Xie",
"Haowei Zhang",
"Guo Gan",
"Yitao Long",
"Zhiyuan Hu",
"Tongyan Hu",
"Weiyuan Chen",
"Chuhan Li",
"Junyang Song",
"Zhijian Xu",
"Chengye Wang",
"Weifeng Pan",
"Ziyao Shangguan",
"Xiangru Tang",
"Zhenwen Liang",
"Yixin Liu",
"Chen Zhao",
"Arman Cohan"
] | We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark for evaluating foundation models in video understanding. MMVU includes 3,000 expert-annotated questions spanning 27 subjects across four core disciplines: Science, Healthcare, Humanities & Social Sciences, and Engineering. Compared to prior benchmarks, MMVU features three key advancements. First, it challenges models to apply domain-specific knowledge and perform expert-level reasoning to analyze specialized-domain videos, moving beyond the basic visual perception typically assessed in current video benchmarks. Second, each example is annotated by human experts from scratch. We implement strict data quality controls to ensure the high quality of the dataset. Finally, each example is enriched with expert-annotated reasoning rationales and relevant domain knowledge, facilitating in-depth analysis. We conduct an extensive evaluation of 32 frontier multimodal foundation models on MMVU. The latest System-2-capable models, o1 and Gemini 2.0 Flash Thinking, achieve the highest performance among the tested models. However, they still fall short of matching human expertise. Through in-depth error analyses and case studies, we offer actionable insights for future advancements in expert-level, knowledge-intensive video understanding for specialized domains. |
|
2025-01-22T00:00:00 | 2501.12273 | Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement | [
"Maosong Cao",
"Taolin Zhang",
"Mo Li",
"Chuyu Zhang",
"Yunxin Liu",
"Haodong Duan",
"Songyang Zhang",
"Kai Chen"
] | The quality of Supervised Fine-Tuning (SFT) data plays a critical role in enhancing the conversational capabilities of Large Language Models (LLMs). However, as LLMs become more advanced, the availability of high-quality human-annotated SFT data has become a significant bottleneck, necessitating a greater reliance on synthetic training data. In this work, we introduce Condor, a novel two-stage synthetic data generation framework that incorporates World Knowledge Tree and Self-Reflection Refinement to produce high-quality SFT data at scale. Our experimental results demonstrate that a base model fine-tuned on only 20K Condor-generated samples achieves superior performance compared to counterparts. The additional refinement stage in Condor further enables iterative self-improvement for LLMs at various scales (up to 72B), validating the effectiveness of our approach. Furthermore, our investigation into the scaling for synthetic data in post-training reveals substantial unexplored potential for performance improvements, opening promising avenues for future research. |
|
2025-01-22T00:00:00 | 2501.12390 | GPS as a Control Signal for Image Generation | [
"Chao Feng",
"Ziyang Chen",
"Aleksander Holynski",
"Alexei A. Efros",
"Andrew Owens"
] | We show that the GPS tags contained in photo metadata provide a useful control signal for image generation. We train GPS-to-image models and use them for tasks that require a fine-grained understanding of how images vary within a city. In particular, we train a diffusion model to generate images conditioned on both GPS and text. The learned model generates images that capture the distinctive appearance of different neighborhoods, parks, and landmarks. We also extract 3D models from 2D GPS-to-image models through score distillation sampling, using GPS conditioning to constrain the appearance of the reconstruction from each viewpoint. Our evaluations suggest that our GPS-conditioned models successfully learn to generate images that vary based on location, and that GPS conditioning improves estimated 3D structure. |
|
2025-01-22T00:00:00 | 2501.11873 | Demons in the Detail: On Implementing Load Balancing Loss for Training Specialized Mixture-of-Expert Models | [
"Zihan Qiu",
"Zeyu Huang",
"Bo Zheng",
"Kaiyue Wen",
"Zekun Wang",
"Rui Men",
"Ivan Titov",
"Dayiheng Liu",
"Jingren Zhou",
"Junyang Lin"
] | This paper revisits the implementation of Load-balancing Loss (LBL) when training Mixture-of-Experts (MoEs) models. Specifically, LBL for MoEs is defined as N_E \sum_{i=1}^{N_E} f_i p_i, where N_E is the total number of experts, f_i represents the frequency of expert i being selected, and p_i denotes the average gating score of expert i. Existing MoE training frameworks usually employ the parallel training strategy so that f_i and the LBL are calculated within a micro-batch and then averaged across parallel groups. In essence, a micro-batch for training billion-scale LLMs normally contains very few sequences. So, the micro-batch LBL is almost at the sequence level, and the router is pushed to distribute tokens evenly within each sequence. Under this strict constraint, even tokens from a domain-specific sequence (e.g., code) are uniformly routed to all experts, thereby inhibiting expert specialization. In this work, we propose calculating LBL using a global-batch to loosen this constraint. Because a global-batch contains much more diverse sequences than a micro-batch, this encourages load balance at the corpus level. Specifically, we introduce an extra communication step to synchronize f_i across micro-batches and then use it to calculate the LBL. Through experiments on training MoEs-based LLMs (up to 42.8B total parameters and 400B tokens), we surprisingly find that the global-batch LBL strategy yields excellent performance gains in both pre-training perplexity and downstream tasks. Our analysis reveals that the global-batch LBL also greatly improves the domain specialization of MoE experts. |
|
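For concreteness, here is a hedged PyTorch sketch of the loss N_E \sum_i f_i p_i for one batch of router logits. In an actual MoE trainer the global-batch variant would synchronize the expert-selection counts across parallel groups (e.g., via an all-reduce) before computing f_i; that communication step is omitted here, and the normalization of f_i follows one common convention among several.

```python
import torch

def load_balancing_loss(router_logits: torch.Tensor, top_k: int = 1) -> torch.Tensor:
    """Sketch of LBL = N_E * sum_i f_i * p_i over the tokens in router_logits ([num_tokens, N_E]).

    f_i: frequency with which expert i is selected by the top-k router.
    p_i: average gating score of expert i.
    """
    num_experts = router_logits.size(-1)
    gates = torch.softmax(router_logits, dim=-1)                   # p: [num_tokens, N_E]
    top_idx = gates.topk(top_k, dim=-1).indices                    # chosen experts per token
    selected = torch.zeros_like(gates).scatter_(-1, top_idx, 1.0)  # 1 where an expert was chosen
    f = selected.sum(dim=0) / selected.sum()                       # selection frequency per expert
    p = gates.mean(dim=0)                                          # mean gate score per expert
    return num_experts * torch.sum(f * p)

# The micro-batch vs. global-batch comparison in the paper differs only in which tokens are
# pooled before computing f and p: pooling router outputs across all micro-batches (after
# synchronizing the expert counts) yields the global-batch variant.
```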
2025-01-22T00:00:00 | 2501.11223 | Reasoning Language Models: A Blueprint | [
"Maciej Besta",
"Julia Barth",
"Eric Schreiber",
"Ales Kubicek",
"Afonso Catarino",
"Robert Gerstenberger",
"Piotr Nyczyk",
"Patrick Iff",
"Yueling Li",
"Sam Houliston",
"Tomasz Sternal",
"Marcin Copik",
"Grzegorz Kwaśniewski",
"Jürgen Müller",
"Łukasz Flis",
"Hannes Eberhard",
"Hubert Niewiadomski",
"Torsten Hoefler"
] | Reasoning language models (RLMs), also known as Large Reasoning Models (LRMs), such as OpenAI's o1 and o3, DeepSeek-V3, and Alibaba's QwQ, have redefined AI's problem-solving capabilities by extending large language models (LLMs) with advanced reasoning mechanisms. Yet, their high costs, proprietary nature, and complex architectures - uniquely combining Reinforcement Learning (RL), search heuristics, and LLMs - present accessibility and scalability challenges. To address these, we propose a comprehensive blueprint that organizes RLM components into a modular framework, based on a survey and analysis of all RLM works. This blueprint incorporates diverse reasoning structures (chains, trees, graphs, and nested forms), reasoning strategies (e.g., Monte Carlo Tree Search, Beam Search), RL concepts (policy, value models and others), and supervision schemes (Output-Based and Process-Based Supervision). We also provide detailed mathematical formulations and algorithmic specifications to simplify RLM implementation. By showing how schemes like LLaMA-Berry, QwQ, Journey Learning, and Graph of Thoughts fit as special cases, we demonstrate the blueprint's versatility and unifying potential. To illustrate its utility, we introduce x1, a modular implementation for rapid RLM prototyping and experimentation. Using x1 and a literature review, we provide key insights, such as multi-phase training for policy and value models, and the importance of familiar training distributions. Finally, we outline how RLMs can integrate with a broader LLM ecosystem, including tools and databases. Our work demystifies RLM construction, democratizes advanced reasoning capabilities, and fosters innovation, aiming to mitigate the gap between "rich AI" and "poor AI" by lowering barriers to RLM development and experimentation. |
|
2025-01-22T00:00:00 | 2501.11425 | Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training | [
"Siyu Yuan",
"Zehui Chen",
"Zhiheng Xi",
"Junjie Ye",
"Zhengyin Du",
"Jiecao Chen"
] | Large Language Models (LLMs) agents are increasingly pivotal for addressing complex tasks in interactive environments. Existing work mainly focuses on enhancing performance through behavior cloning from stronger experts, yet such approaches often falter in real-world applications, mainly due to the inability to recover from errors. However, step-level critique data is difficult and expensive to collect. Automating and dynamically constructing self-critique datasets is thus crucial to empowering models with intelligent agent capabilities. In this work, we propose an iterative self-training framework, Agent-R, that enables language Agent to Reflect on the fly. Unlike traditional methods that reward or penalize actions based on correctness, Agent-R leverages MCTS to construct training data that recover correct trajectories from erroneous ones. A key challenge of agent reflection lies in the necessity for timely revision rather than waiting until the end of a rollout. To address this, we introduce a model-guided critique construction mechanism: the actor model identifies the first error step (within its current capability) in a failed trajectory. Starting from it, we splice it with the adjacent correct path, which shares the same parent node in the tree. This strategy enables the model to learn reflection based on its current policy, therefore yielding better learning efficiency. To further explore the scalability of this self-improvement paradigm, we investigate iterative refinement of both error correction capabilities and dataset construction. Our findings demonstrate that Agent-R continuously improves the model's ability to recover from errors and enables timely error correction. Experiments on three interactive environments show that Agent-R effectively equips agents to correct erroneous actions while avoiding loops, achieving superior performance compared to baseline methods (+5.59%). |
|
2025-01-22T00:00:00 | 2501.11733 | Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks | [
"Zhenhailong Wang",
"Haiyang Xu",
"Junyang Wang",
"Xi Zhang",
"Ming Yan",
"Ji Zhang",
"Fei Huang",
"Heng Ji"
] | Smartphones have become indispensable in modern life, yet navigating complex tasks on mobile devices often remains frustrating. Recent advancements in large multimodal model (LMM)-based mobile agents have demonstrated the ability to perceive and act in mobile environments. However, current approaches face significant limitations: they fall short in addressing real-world human needs, struggle with reasoning-intensive and long-horizon tasks, and lack mechanisms to learn and improve from prior experiences. To overcome these challenges, we introduce Mobile-Agent-E, a hierarchical multi-agent framework capable of self-evolution through past experience. By hierarchical, we mean an explicit separation of high-level planning and low-level action execution. The framework comprises a Manager, responsible for devising overall plans by breaking down complex tasks into subgoals, and four subordinate agents--Perceptor, Operator, Action Reflector, and Notetaker--which handle fine-grained visual perception, immediate action execution, error verification, and information aggregation, respectively. Mobile-Agent-E also features a novel self-evolution module which maintains a persistent long-term memory comprising Tips and Shortcuts. Tips are general guidance and lessons learned from prior tasks on how to effectively interact with the environment. Shortcuts are reusable, executable sequences of atomic operations tailored for specific subroutines. The inclusion of Tips and Shortcuts facilitates continuous refinement in performance and efficiency. Alongside this framework, we introduce Mobile-Eval-E, a new benchmark featuring complex mobile tasks requiring long-horizon, multi-app interactions. Empirical results show that Mobile-Agent-E achieves a 22% absolute improvement over previous state-of-the-art approaches across three foundation model backbones. Project page: https://x-plug.github.io/MobileAgent. |
|
2025-01-22T00:00:00 | 2501.12326 | UI-TARS: Pioneering Automated GUI Interaction with Native Agents | [
"Yujia Qin",
"Yining Ye",
"Junjie Fang",
"Haoming Wang",
"Shihao Liang",
"Shizuo Tian",
"Junda Zhang",
"Jiahao Li",
"Yunxin Li",
"Shijue Huang",
"Wanjun Zhong",
"Kuanye Li",
"Jiale Yang",
"Yu Miao",
"Woyu Lin",
"Longxiang Liu",
"Xu Jiang",
"Qianli Ma",
"Jingyu Li",
"Xiaojun Xiao",
"Kai Cai",
"Chuang Li",
"Yaowei Zheng",
"Chaolin Jin",
"Chen Li",
"Xiao Zhou",
"Minchao Wang",
"Haoli Chen",
"Zhaojian Li",
"Haihua Yang",
"Haifeng Liu",
"Feng Lin",
"Tao Peng",
"Xin Liu",
"Guang Shi"
] | This paper introduces UI-TARS, a native GUI agent model that solely perceives the screenshots as input and performs human-like interactions (e.g., keyboard and mouse operations). Unlike prevailing agent frameworks that depend on heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts and workflows, UI-TARS is an end-to-end model that outperforms these sophisticated frameworks. Experiments demonstrate its superior performance: UI-TARS achieves SOTA performance in 10+ GUI agent benchmarks evaluating perception, grounding, and GUI task execution. Notably, in the OSWorld benchmark, UI-TARS achieves scores of 24.6 with 50 steps and 22.7 with 15 steps, outperforming Claude (22.0 and 14.9 respectively). In AndroidWorld, UI-TARS achieves 46.6, surpassing GPT-4o (34.5). UI-TARS incorporates several key innovations: (1) Enhanced Perception: leveraging a large-scale dataset of GUI screenshots for context-aware understanding of UI elements and precise captioning; (2) Unified Action Modeling, which standardizes actions into a unified space across platforms and achieves precise grounding and interaction through large-scale action traces; (3) System-2 Reasoning, which incorporates deliberate reasoning into multi-step decision making, involving multiple reasoning patterns such as task decomposition, reflection thinking, milestone recognition, etc. (4) Iterative Training with Reflective Online Traces, which addresses the data bottleneck by automatically collecting, filtering, and reflectively refining new interaction traces on hundreds of virtual machines. Through iterative training and reflection tuning, UI-TARS continuously learns from its mistakes and adapts to unforeseen situations with minimal human intervention. We also analyze the evolution path of GUI agents to guide the further development of this domain. |
|
2025-01-22T00:00:00 | 2501.10893 | Learn-by-interact: A Data-Centric Framework for Self-Adaptive Agents in Realistic Environments | [
"Hongjin Su",
"Ruoxi Sun",
"Jinsung Yoon",
"Pengcheng Yin",
"Tao Yu",
"Sercan Ö. Arık"
] | Autonomous agents powered by large language models (LLMs) have the potential to enhance human capabilities, assisting with digital tasks from sending emails to performing data analysis. The abilities of existing LLMs at such tasks are often hindered by the lack of high-quality agent data from the corresponding environments they interact with. We propose Learn-by-interact, a data-centric framework to adapt LLM agents to any given environments without human annotations. Learn-by-interact synthesizes trajectories of agent-environment interactions based on documentation, and constructs instructions by summarizing or abstracting the interaction histories, a process called backward construction. We assess the quality of our synthetic data by using them in both training-based scenarios and training-free in-context learning (ICL), where we craft innovative retrieval approaches optimized for agents. Extensive experiments on SWE-bench, WebArena, OSWorld and Spider2-V spanning across realistic coding, web, and desktop environments show the effectiveness of Learn-by-interact in various downstream agentic tasks -- baseline results are improved by up to 12.2\% for ICL with Claude-3.5 and 19.5\% for training with Codestral-22B. We further demonstrate the critical role of backward construction, which provides up to 14.0\% improvement for training. Our ablation studies demonstrate the efficiency provided by our synthesized data in ICL and the superiority of our retrieval pipeline over alternative approaches like conventional retrieval-augmented generation (RAG). We expect that Learn-by-interact will serve as a foundation for agent data synthesis as LLMs are increasingly deployed in real-world environments. |
|
2025-01-22T00:00:00 | 2501.10687 | EMO2: End-Effector Guided Audio-Driven Avatar Video Generation | [
"Linrui Tian",
"Siqi Hu",
"Qi Wang",
"Bang Zhang",
"Liefeng Bo"
] | In this paper, we propose a novel audio-driven talking head method capable of simultaneously generating highly expressive facial expressions and hand gestures. Unlike existing methods that focus on generating full-body or half-body poses, we investigate the challenges of co-speech gesture generation and identify the weak correspondence between audio features and full-body gestures as a key limitation. To address this, we redefine the task as a two-stage process. In the first stage, we generate hand poses directly from audio input, leveraging the strong correlation between audio signals and hand movements. In the second stage, we employ a diffusion model to synthesize video frames, incorporating the hand poses generated in the first stage to produce realistic facial expressions and body movements. Our experimental results demonstrate that the proposed method outperforms state-of-the-art approaches, such as CyberHost and Vlogger, in terms of both visual quality and synchronization accuracy. This work provides a new perspective on audio-driven gesture generation and a robust framework for creating expressive and natural talking head animations. |
|
2025-01-22T00:00:00 | 2501.12202 | Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation | [
"Zibo Zhao",
"Zeqiang Lai",
"Qingxiang Lin",
"Yunfei Zhao",
"Haolin Liu",
"Shuhui Yang",
"Yifei Feng",
"Mingxin Yang",
"Sheng Zhang",
"Xianghui Yang",
"Huiwen Shi",
"Sicong Liu",
"Junta Wu",
"Yihang Lian",
"Fan Yang",
"Ruining Tang",
"Zebin He",
"Xinzhou Wang",
"Jian Liu",
"Xuhui Zuo",
"Zhuo Chen",
"Biwen Lei",
"Haohan Weng",
"Jing Xu",
"Yiling Zhu",
"Xinhai Liu",
"Lixin Xu",
"Changrong Hu",
"Tianyu Huang",
"Lifu Wang",
"Jihong Zhang",
"Meng Chen",
"Liang Dong",
"Yiwen Jia",
"Yulin Cai",
"Jiaao Yu",
"Yixuan Tang",
"Hao Zhang",
"Zheng Ye",
"Peng He",
"Runzhou Wu",
"Chao Zhang",
"Yonghao Tan",
"Jie Xiao",
"Yangyu Tao",
"Jianchen Zhu",
"Jinbao Xue",
"Kai Liu",
"Chongqing Zhao",
"Xinming Wu",
"Zhichao Hu",
"Lei Qin",
"Jianbing Peng",
"Zhan Li",
"Minghui Chen",
"Xipeng Zhang",
"Lin Niu",
"Paige Wang",
"Yingkai Wang",
"Haozhao Kuang",
"Zhongyi Fan",
"Xu Zheng",
"Weihao Zhuang",
"YingPing He",
"Tian Liu",
"Yong Yang",
"Di Wang",
"Yuhong Liu",
"Jie Jiang",
"Jingwei Huang",
"Chunchao Guo"
] | https://github.com/Tencent/Hunyuan3D-2 | We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. This system includes two foundation components: a large-scale shape generation model -- Hunyuan3D-DiT, and a large-scale texture synthesis model -- Hunyuan3D-Paint. The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly aligns with a given condition image, laying a solid foundation for downstream applications. The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant texture maps for either generated or hand-crafted meshes. Furthermore, we build Hunyuan3D-Studio -- a versatile, user-friendly production platform that simplifies the re-creation process of 3D assets. It allows both professional and amateur users to manipulate or even animate their meshes efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models, including both open-source and closed-source models, in geometry details, condition alignment, texture quality, and more. Hunyuan3D 2.0 is publicly released in order to fill the gaps in the open-source 3D community for large-scale foundation generative models. The code and pre-trained weights of our models are available at: https://github.com/Tencent/Hunyuan3D-2 |
2025-01-22T00:00:00 | 2501.08331 | Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise | [
"Ryan Burgert",
"Yuancheng Xu",
"Wenqi Xian",
"Oliver Pilarski",
"Pascal Clausen",
"Mingming He",
"Li Ma",
"Yitong Deng",
"Lingxiao Li",
"Mohsen Mousavi",
"Michael Ryoo",
"Paul Debevec",
"Ning Yu"
] | https://github.com/VGenAI-Netflix-Eyeline-Research/Go-with-the-Flow | Generative modeling aims to transform random noise into structured outputs. In this work, we enhance video diffusion models by allowing motion control via structured latent noise sampling. This is achieved by just a change in data: we pre-process training videos to yield structured noise. Consequently, our method is agnostic to diffusion model design, requiring no changes to model architectures or training pipelines. Specifically, we propose a novel noise warping algorithm, fast enough to run in real time, that replaces random temporal Gaussianity with correlated warped noise derived from optical flow fields, while preserving the spatial Gaussianity. The efficiency of our algorithm enables us to fine-tune modern video diffusion base models using warped noise with minimal overhead, and provide a one-stop solution for a wide range of user-friendly motion control: local object motion control, global camera movement control, and motion transfer. The harmonization between temporal coherence and spatial Gaussianity in our warped noise leads to effective motion control while maintaining per-frame pixel quality. Extensive experiments and user studies demonstrate the advantages of our method, making it a robust and scalable approach for controlling motion in video diffusion models. Video results are available on our webpage: https://vgenai-netflix-eyeline-research.github.io/Go-with-the-Flow. Source code and model checkpoints are available on GitHub: https://github.com/VGenAI-Netflix-Eyeline-Research/Go-with-the-Flow. |
2025-01-22T00:00:00 | 2501.12375 | Video Depth Anything: Consistent Depth Estimation for Super-Long Videos | [
"Sili Chen",
"Hengkai Guo",
"Shengnan Zhu",
"Feihu Zhang",
"Zilong Huang",
"Jiashi Feng",
"Bingyi Kang"
] | Depth Anything has achieved remarkable success in monocular depth estimation with strong generalization ability. However, it suffers from temporal inconsistency in videos, hindering its practical applications. Various methods have been proposed to alleviate this issue by leveraging video generation models or introducing priors from optical flow and camera poses. Nonetheless, these methods are only applicable to short videos (< 10 seconds) and require a trade-off between quality and computational efficiency. We propose Video Depth Anything for high-quality, consistent depth estimation in super-long videos (over several minutes) without sacrificing efficiency. We base our model on Depth Anything V2 and replace its head with an efficient spatial-temporal head. We design a straightforward yet effective temporal consistency loss by constraining the temporal depth gradient, eliminating the need for additional geometric priors. The model is trained on a joint dataset of video depth and unlabeled images, similar to Depth Anything V2. Moreover, a novel key-frame-based strategy is developed for long video inference. Experiments show that our model can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Comprehensive evaluations on multiple video benchmarks demonstrate that our approach sets a new state-of-the-art in zero-shot video depth estimation. We offer models of different scales to support a range of scenarios, with our smallest model capable of real-time performance at 30 FPS. |
|
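The abstract mentions a temporal consistency loss that constrains the temporal depth gradient. The sketch below shows one plausible reading of that idea, penalizing the mismatch between predicted and reference frame-to-frame depth changes; the paper's exact formulation may differ, and the tensor shapes here are assumptions.

```python
import torch

def temporal_gradient_loss(pred_depth: torch.Tensor, ref_depth: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between predicted and reference temporal depth gradients.

    pred_depth, ref_depth: depth sequences of shape [T, H, W]. This is an illustrative guess
    at "constraining the temporal depth gradient", not the paper's exact loss.
    """
    pred_grad = pred_depth[1:] - pred_depth[:-1]   # frame-to-frame change of predicted depth
    ref_grad = ref_depth[1:] - ref_depth[:-1]      # frame-to-frame change of reference depth
    return (pred_grad - ref_grad).abs().mean()
```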
2025-01-22T00:00:00 | 2501.10057 | MSTS: A Multimodal Safety Test Suite for Vision-Language Models | [
"Paul Röttger",
"Giuseppe Attanasio",
"Felix Friedrich",
"Janis Goldzycher",
"Alicia Parrish",
"Rishabh Bhardwaj",
"Chiara Di Bonaventura",
"Roman Eng",
"Gaia El Khoury Geagea",
"Sujata Goswami",
"Jieun Han",
"Dirk Hovy",
"Seogyeong Jeong",
"Paloma Jeretič",
"Flor Miriam Plaza-del-Arco",
"Donya Rooein",
"Patrick Schramowski",
"Anastassia Shaitarova",
"Xudong Shen",
"Richard Willats",
"Andrea Zugarini",
"Bertie Vidgen"
] | Vision-language models (VLMs), which process image and text inputs, are increasingly integrated into chat assistants and other consumer AI applications. Without proper safeguards, however, VLMs may give harmful advice (e.g. how to self-harm) or encourage unsafe behaviours (e.g. to consume drugs). Despite these clear hazards, little work so far has evaluated VLM safety and the novel risks created by multimodal inputs. To address this gap, we introduce MSTS, a Multimodal Safety Test Suite for VLMs. MSTS comprises 400 test prompts across 40 fine-grained hazard categories. Each test prompt consists of a text and an image that only in combination reveal their full unsafe meaning. With MSTS, we find clear safety issues in several open VLMs. We also find some VLMs to be safe by accident, meaning that they are safe because they fail to understand even simple test prompts. We translate MSTS into ten languages, showing non-English prompts to increase the rate of unsafe model responses. We also show models to be safer when tested with text only rather than multimodal prompts. Finally, we explore the automation of VLM safety assessments, finding even the best safety classifiers to be lacking. |
|
2025-01-22T00:00:00 | 2501.12224 | TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space | [
"Daniel Garibi",
"Shahar Yadin",
"Roni Paiss",
"Omer Tov",
"Shiran Zada",
"Ariel Ephrat",
"Tomer Michaeli",
"Inbar Mosseri",
"Tali Dekel"
] | We present TokenVerse -- a method for multi-concept personalization, leveraging a pre-trained text-to-image diffusion model. Our framework can disentangle complex visual elements and attributes from as little as a single image, while enabling seamless plug-and-play generation of combinations of concepts extracted from multiple images. As opposed to existing works, TokenVerse can handle multiple images with multiple concepts each, and supports a wide range of concepts, including objects, accessories, materials, pose, and lighting. Our work exploits a DiT-based text-to-image model, in which the input text affects the generation through both attention and modulation (shift and scale). We observe that the modulation space is semantic and enables localized control over complex concepts. Building on this insight, we devise an optimization-based framework that takes as input an image and a text description, and finds for each word a distinct direction in the modulation space. These directions can then be used to generate new images that combine the learned concepts in a desired configuration. We demonstrate the effectiveness of TokenVerse in challenging personalization settings, and showcase its advantages over existing methods. The project webpage is at https://token-verse.github.io/ |
|
2025-01-22T00:00:00 | 2501.12368 | InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model | [
"Yuhang Zang",
"Xiaoyi Dong",
"Pan Zhang",
"Yuhang Cao",
"Ziyu Liu",
"Shengyuan Ding",
"Shenxi Wu",
"Yubo Ma",
"Haodong Duan",
"Wenwei Zhang",
"Kai Chen",
"Dahua Lin",
"Jiaqi Wang"
] | https://github.com/InternLM/InternLM-XComposer | Despite the promising performance of Large Vision Language Models (LVLMs) in visual understanding, they occasionally generate incorrect outputs. While reward models (RMs) with reinforcement learning or test-time scaling offer the potential for improving generation quality, a critical gap remains: publicly available multi-modal RMs for LVLMs are scarce, and the implementation details of proprietary models are often unclear. We bridge this gap with InternLM-XComposer2.5-Reward (IXC-2.5-Reward), a simple yet effective multi-modal reward model that aligns LVLMs with human preferences. To ensure the robustness and versatility of IXC-2.5-Reward, we set up a high-quality multi-modal preference corpus spanning text, image, and video inputs across diverse domains, such as instruction following, general understanding, text-rich documents, mathematical reasoning, and video understanding. IXC-2.5-Reward achieves excellent results on the latest multi-modal reward model benchmark and shows competitive performance on text-only reward model benchmarks. We further demonstrate three key applications of IXC-2.5-Reward: (1) Providing a supervisory signal for RL training. Integrating IXC-2.5-Reward with Proximal Policy Optimization (PPO) yields IXC-2.5-Chat, which shows consistent improvements in instruction following and multi-modal open-ended dialogue; (2) Selecting the best response from candidate responses for test-time scaling; and (3) Filtering outlier or noisy samples from existing image and video instruction tuning training data. To ensure reproducibility and facilitate further research, we have open-sourced all model weights and training recipes at https://github.com/InternLM/InternLM-XComposer. |
2025-01-22T00:00:00 | 2501.11900 | Panoramic Interests: Stylistic-Content Aware Personalized Headline Generation | [
"Junhong Lian",
"Xiang Ao",
"Xinyu Liu",
"Yang Liu",
"Qing He"
] | Personalized news headline generation aims to provide users with attention-grabbing headlines that are tailored to their preferences. Prevailing methods focus on user-oriented content preferences, but most of them overlook the fact that diverse stylistic preferences are integral to users' panoramic interests, leading to suboptimal personalization. In view of this, we propose a novel Stylistic-Content Aware Personalized Headline Generation (SCAPE) framework. SCAPE extracts both content and stylistic features from headlines with the aid of large language model (LLM) collaboration. It further adaptively integrates users' long- and short-term interests through a contrastive learning-based hierarchical fusion network. By incorporating the panoramic interests into the headline generator, SCAPE reflects users' stylistic-content preferences during the generation process. Extensive experiments on the real-world dataset PENS demonstrate the superiority of SCAPE over baselines. |
|
2025-01-22T00:00:00 | 2501.10573 | The Geometry of Tokens in Internal Representations of Large Language Models | [
"Karthik Viswanathan",
"Yuri Gardinazzi",
"Giada Panerai",
"Alberto Cazzaniga",
"Matteo Biagetti"
] | We investigate the relationship between the geometry of token embeddings and their role in the next token prediction within transformer models. An important aspect of this connection uses the notion of empirical measure, which encodes the distribution of token point clouds across transformer layers and drives the evolution of token representations in the mean-field interacting picture. We use metrics such as intrinsic dimension, neighborhood overlap, and cosine similarity to observationally probe these empirical measures across layers. To validate our approach, we compare these metrics to a dataset where the tokens are shuffled, which disrupts the syntactic and semantic structure. Our findings reveal a correlation between the geometric properties of token embeddings and the cross-entropy loss of next token predictions, implying that prompts with higher loss values have tokens represented in higher-dimensional spaces. |
|
2025-01-22T00:00:00 | 2501.12206 | Fixing Imbalanced Attention to Mitigate In-Context Hallucination of Large Vision-Language Model | [
"Kazi Hasan Ibn Arif",
"Sajib Acharjee Dip",
"Khizar Hussain",
"Lang Zhang",
"Chris Thomas"
] | Large Vision Language Models (LVLMs) have demonstrated remarkable capabilities in understanding and describing visual content, achieving state-of-the-art performance across various vision-language tasks. However, these models frequently exhibit hallucination behavior, where they generate descriptions containing objects or details absent in the input image. Our work investigates this phenomenon by analyzing attention patterns across transformer layers and heads, revealing that hallucinations often stem from progressive degradation of visual grounding in deeper layers. We propose a novel attention modification approach that combines selective token emphasis and head-specific modulation to maintain visual grounding throughout the generation process. Our method introduces two key components: (1) a dual-stream token selection mechanism that identifies and prioritizes both locally informative and spatially significant visual tokens, and (2) an attention head-specific modulation strategy that differentially amplifies visual information processing based on measured visual sensitivity of individual attention heads. Through extensive experimentation on the MSCOCO dataset, we demonstrate that our approach reduces hallucination rates by up to 62.3\% compared to baseline models while maintaining comparable task performance. Our analysis reveals that selectively modulating tokens across attention heads with varying levels of visual sensitivity can significantly improve visual grounding without requiring model retraining. |
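For the head-specific modulation idea described above, a hedged Python sketch is given below; the shapes, `head_sensitivity` scores, and scaling rule are illustrative assumptions, not the authors' exact formulation:

```python
import torch

def modulate_attention(attn: torch.Tensor, visual_mask: torch.Tensor,
                       head_sensitivity: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Amplify attention mass on visual tokens per head, then renormalize.

    attn: (heads, queries, keys) post-softmax attention weights.
    visual_mask: (keys,) 1.0 where the key is a visual token, else 0.0.
    head_sensitivity: (heads,) measured visual sensitivity of each head in [0, 1].
    """
    scale = 1.0 + alpha * head_sensitivity.view(-1, 1, 1) * visual_mask.view(1, 1, -1)
    boosted = attn * scale
    return boosted / boosted.sum(dim=-1, keepdim=True)

# Toy usage: 2 heads, 1 query, 4 keys, the last two keys are visual tokens.
attn = torch.softmax(torch.randn(2, 1, 4), dim=-1)
out = modulate_attention(attn, torch.tensor([0., 0., 1., 1.]), torch.tensor([0.2, 0.9]))
```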
|
2025-01-22T00:00:00 | 2501.12389 | Taming Teacher Forcing for Masked Autoregressive Video Generation | [
"Deyu Zhou",
"Quan Sun",
"Yuang Peng",
"Kun Yan",
"Runpei Dong",
"Duomin Wang",
"Zheng Ge",
"Nan Duan",
"Xiangyu Zhang",
"Lionel M. Ni",
"Heung-Yeung Shum"
] | We introduce MAGI, a hybrid video generation framework that combines masked modeling for intra-frame generation with causal modeling for next-frame generation. Our key innovation, Complete Teacher Forcing (CTF), conditions masked frames on complete observation frames rather than masked ones (namely Masked Teacher Forcing, MTF), enabling a smooth transition from token-level (patch-level) to frame-level autoregressive generation. CTF significantly outperforms MTF, achieving a +23% improvement in FVD scores on first-frame conditioned video prediction. To address issues like exposure bias, we employ targeted training strategies, setting a new benchmark in autoregressive video generation. Experiments show that MAGI can generate long, coherent video sequences exceeding 100 frames, even when trained on as few as 16 frames, highlighting its potential for scalable, high-quality video generation. |
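As a rough illustration of Complete Teacher Forcing, the toy sketch below masks only the target frame while leaving the conditioning frames fully observed; the tensor layout and masking rule are assumptions for illustration, not the MAGI implementation:

```python
import torch

def build_ctf_batch(frames: torch.Tensor, mask_ratio: float = 0.5):
    """frames: (T, N, D) frame tokens. The last frame is the prediction target:
    its tokens are partially masked for intra-frame masked modeling, while the
    conditioning context keeps the earlier frames fully observed (the 'complete'
    part of Complete Teacher Forcing). 0.0 stands in for a learned [MASK] token."""
    context = frames[:-1]                            # complete observation frames
    target = frames[-1]
    mask = torch.rand(target.shape[0]) < mask_ratio  # which target tokens are hidden
    masked_target = target.clone()
    masked_target[mask] = 0.0
    return context, masked_target, mask

context, masked_target, mask = build_ctf_batch(torch.randn(4, 16, 8))
```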
|
2025-01-23T00:00:00 | 2501.12909 | FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces | [
"Zhenran Xu",
"Longyue Wang",
"Jifang Wang",
"Zhouyi Li",
"Senbao Shi",
"Xue Yang",
"Yiyu Wang",
"Baotian Hu",
"Jun Yu",
"Min Zhang"
] | Virtual film production requires intricate decision-making processes, including scriptwriting, virtual cinematography, and precise actor positioning and actions. Motivated by recent advances in automated decision-making with language agent-based societies, this paper introduces FilmAgent, a novel LLM-based multi-agent collaborative framework for end-to-end film automation in our constructed 3D virtual spaces. FilmAgent simulates various crew roles, including directors, screenwriters, actors, and cinematographers, and covers key stages of a film production workflow: (1) idea development transforms brainstormed ideas into structured story outlines; (2) scriptwriting elaborates on dialogue and character actions for each scene; (3) cinematography determines the camera setups for each shot. A team of agents collaborates through iterative feedback and revisions, thereby verifying intermediate scripts and reducing hallucinations. We evaluate the generated videos on 15 ideas and 4 key aspects. Human evaluation shows that FilmAgent outperforms all baselines across all aspects and scores 3.98 out of 5 on average, showing the feasibility of multi-agent collaboration in filmmaking. Further analysis reveals that FilmAgent, despite using the less advanced GPT-4o model, surpasses the single-agent o1, showing the advantage of a well-coordinated multi-agent system. Lastly, we discuss the complementary strengths and weaknesses of OpenAI's text-to-video model Sora and our FilmAgent in filmmaking. |
|
2025-01-23T00:00:00 | 2501.12948 | DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning | [
"DeepSeek-AI",
"Daya Guo",
"Dejian Yang",
"Haowei Zhang",
"Junxiao Song",
"Ruoyu Zhang",
"Runxin Xu",
"Qihao Zhu",
"Shirong Ma",
"Peiyi Wang",
"Xiao Bi",
"Xiaokang Zhang",
"Xingkai Yu",
"Yu Wu",
"Z. F. Wu",
"Zhibin Gou",
"Zhihong Shao",
"Zhuoshu Li",
"Ziyi Gao",
"Aixin Liu",
"Bing Xue",
"Bingxuan Wang",
"Bochao Wu",
"Bei Feng",
"Chengda Lu",
"Chenggang Zhao",
"Chengqi Deng",
"Chenyu Zhang",
"Chong Ruan",
"Damai Dai",
"Deli Chen",
"Dongjie Ji",
"Erhang Li",
"Fangyun Lin",
"Fucong Dai",
"Fuli Luo",
"Guangbo Hao",
"Guanting Chen",
"Guowei Li",
"H. Zhang",
"Han Bao",
"Hanwei Xu",
"Haocheng Wang",
"Honghui Ding",
"Huajian Xin",
"Huazuo Gao",
"Hui Qu",
"Hui Li",
"Jianzhong Guo",
"Jiashi Li",
"Jiawei Wang",
"Jingchang Chen",
"Jingyang Yuan",
"Junjie Qiu",
"Junlong Li",
"J. L. Cai",
"Jiaqi Ni",
"Jian Liang",
"Jin Chen",
"Kai Dong",
"Kai Hu",
"Kaige Gao",
"Kang Guan",
"Kexin Huang",
"Kuai Yu",
"Lean Wang",
"Lecong Zhang",
"Liang Zhao",
"Litong Wang",
"Liyue Zhang",
"Lei Xu",
"Leyi Xia",
"Mingchuan Zhang",
"Minghua Zhang",
"Minghui Tang",
"Meng Li",
"Miaojun Wang",
"Mingming Li",
"Ning Tian",
"Panpan Huang",
"Peng Zhang",
"Qiancheng Wang",
"Qinyu Chen",
"Qiushi Du",
"Ruiqi Ge",
"Ruisong Zhang",
"Ruizhe Pan",
"Runji Wang",
"R. J. Chen",
"R. L. Jin",
"Ruyi Chen",
"Shanghao Lu",
"Shangyan Zhou",
"Shanhuang Chen",
"Shengfeng Ye",
"Shiyu Wang",
"Shuiping Yu",
"Shunfeng Zhou",
"Shuting Pan",
"S. S. Li",
"Shuang Zhou",
"Shaoqing Wu",
"Shengfeng Ye",
"Tao Yun",
"Tian Pei",
"Tianyu Sun",
"T. Wang",
"Wangding Zeng",
"Wanjia Zhao",
"Wen Liu",
"Wenfeng Liang",
"Wenjun Gao",
"Wenqin Yu",
"Wentao Zhang",
"W. L. Xiao",
"Wei An",
"Xiaodong Liu",
"Xiaohan Wang",
"Xiaokang Chen",
"Xiaotao Nie",
"Xin Cheng",
"Xin Liu",
"Xin Xie",
"Xingchao Liu",
"Xinyu Yang",
"Xinyuan Li",
"Xuecheng Su",
"Xuheng Lin",
"X. Q. Li",
"Xiangyue Jin",
"Xiaojin Shen",
"Xiaosha Chen",
"Xiaowen Sun",
"Xiaoxiang Wang",
"Xinnan Song",
"Xinyi Zhou",
"Xianzu Wang",
"Xinxia Shan",
"Y. K. Li",
"Y. Q. Wang",
"Y. X. Wei",
"Yang Zhang",
"Yanhong Xu",
"Yao Li",
"Yao Zhao",
"Yaofeng Sun",
"Yaohui Wang",
"Yi Yu",
"Yichao Zhang",
"Yifan Shi",
"Yiliang Xiong",
"Ying He",
"Yishi Piao",
"Yisong Wang",
"Yixuan Tan",
"Yiyang Ma",
"Yiyuan Liu",
"Yongqiang Guo",
"Yuan Ou",
"Yuduan Wang",
"Yue Gong",
"Yuheng Zou",
"Yujia He",
"Yunfan Xiong",
"Yuxiang Luo",
"Yuxiang You",
"Yuxuan Liu",
"Yuyang Zhou",
"Y. X. Zhu",
"Yanhong Xu",
"Yanping Huang",
"Yaohui Li",
"Yi Zheng",
"Yuchen Zhu",
"Yunxian Ma",
"Ying Tang",
"Yukun Zha",
"Yuting Yan",
"Z. Z. Ren",
"Zehui Ren",
"Zhangli Sha",
"Zhe Fu",
"Zhean Xu",
"Zhenda Xie",
"Zhengyan Zhang",
"Zhewen Hao",
"Zhicheng Ma",
"Zhigang Yan",
"Zhiyu Wu",
"Zihui Gu",
"Zijia Zhu",
"Zijun Liu",
"Zilin Li",
"Ziwei Xie",
"Ziyang Song",
"Zizheng Pan",
"Zhen Huang",
"Zhipeng Xu",
"Zhongyu Zhang",
"Zhen Zhang"
] | We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally develops numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama. |
|
2025-01-23T00:00:00 | 2501.12570 | O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning | [
"Haotian Luo",
"Li Shen",
"Haiying He",
"Yibo Wang",
"Shiwei Liu",
"Wei Li",
"Naiqiang Tan",
"Xiaochun Cao",
"Dacheng Tao"
] | https://github.com/StarDewXXX/O1-Pruner | Recently, long-thought reasoning LLMs, such as OpenAI's O1, adopt extended reasoning processes similar to how humans ponder over complex problems. This reasoning paradigm significantly enhances the model's problem-solving abilities and has achieved promising results. However, the long-thought reasoning process leads to a substantial increase in inference time. A pressing challenge is reducing the inference overhead of long-thought LLMs while ensuring accuracy. In this paper, we experimentally demonstrate that long-thought reasoning models struggle to effectively allocate token budgets based on problem difficulty and reasoning redundancies. To address this, we propose Length-Harmonizing Fine-Tuning (O1-Pruner), aiming at minimizing reasoning overhead while maintaining accuracy. This effective fine-tuning method first estimates the LLM's baseline performance through pre-sampling and then uses RL-style fine-tuning to encourage the model to generate shorter reasoning processes under accuracy constraints. This allows the model to achieve efficient reasoning with lower redundancy while maintaining accuracy. Experiments on various mathematical reasoning benchmarks show that O1-Pruner not only significantly reduces inference overhead but also achieves higher accuracy, providing a novel and promising solution to this challenge. Our code is coming soon at https://github.com/StarDewXXX/O1-Pruner |
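A hedged sketch of a length-harmonizing reward in the spirit described above (compare generated length and correctness against pre-sampled baselines); the exact functional form used by O1-Pruner may differ:

```python
def length_harmonizing_reward(correct: bool, num_tokens: int,
                              baseline_accuracy: float, baseline_tokens: float,
                              alpha: float = 1.0) -> float:
    """Reward shorter reasoning relative to the pre-sampled baseline length while
    penalizing any drop in accuracy relative to the baseline accuracy."""
    length_term = (baseline_tokens - num_tokens) / max(baseline_tokens, 1.0)
    accuracy_term = (1.0 if correct else 0.0) - baseline_accuracy
    return length_term + alpha * accuracy_term

# Toy usage: a correct answer using 600 tokens vs. a 1000-token, 70%-accurate baseline.
print(length_harmonizing_reward(True, 600, baseline_accuracy=0.7, baseline_tokens=1000.0))
```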
2025-01-23T00:00:00 | 2501.12599 | Kimi k1.5: Scaling Reinforcement Learning with LLMs | [
"Kimi Team",
"Angang Du",
"Bofei Gao",
"Bowei Xing",
"Changjiu Jiang",
"Cheng Chen",
"Cheng Li",
"Chenjun Xiao",
"Chenzhuang Du",
"Chonghua Liao",
"Chuning Tang",
"Congcong Wang",
"Dehao Zhang",
"Enming Yuan",
"Enzhe Lu",
"Fengxiang Tang",
"Flood Sung",
"Guangda Wei",
"Guokun Lai",
"Haiqing Guo",
"Han Zhu",
"Hao Ding",
"Hao Hu",
"Hao Yang",
"Hao Zhang",
"Haotian Yao",
"Haotian Zhao",
"Haoyu Lu",
"Haoze Li",
"Haozhen Yu",
"Hongcheng Gao",
"Huabin Zheng",
"Huan Yuan",
"Jia Chen",
"Jianhang Guo",
"Jianlin Su",
"Jianzhou Wang",
"Jie Zhao",
"Jin Zhang",
"Jingyuan Liu",
"Junjie Yan",
"Junyan Wu",
"Lidong Shi",
"Ling Ye",
"Longhui Yu",
"Mengnan Dong",
"Neo Zhang",
"Ningchen Ma",
"Qiwei Pan",
"Qucheng Gong",
"Shaowei Liu",
"Shengling Ma",
"Shupeng Wei",
"Sihan Cao",
"Siying Huang",
"Tao Jiang",
"Weihao Gao",
"Weimin Xiong",
"Weiran He",
"Weixiao Huang",
"Wenhao Wu",
"Wenyang He",
"Xianghui Wei",
"Xianqing Jia",
"Xingzhe Wu",
"Xinran Xu",
"Xinxing Zu",
"Xinyu Zhou",
"Xuehai Pan",
"Y. Charles",
"Yang Li",
"Yangyang Hu",
"Yangyang Liu",
"Yanru Chen",
"Yejie Wang",
"Yibo Liu",
"Yidao Qin",
"Yifeng Liu",
"Ying Yang",
"Yiping Bao",
"Yulun Du",
"Yuxin Wu",
"Yuzhi Wang",
"Zaida Zhou",
"Zhaoji Wang",
"Zhaowei Li",
"Zhen Zhu",
"Zheng Zhang",
"Zhexu Wang",
"Zhilin Yang",
"Zhiqi Huang",
"Zihao Huang",
"Ziyao Xu",
"Zonghan Yang"
] | Language model pretraining with next token prediction has proved effective for scaling compute but is limited by the amount of available training data. Scaling reinforcement learning (RL) unlocks a new axis for the continued improvement of artificial intelligence, with the promise that large language models (LLMs) can scale their training data by learning to explore with rewards. However, prior published work has not produced competitive results. In light of this, we report on the training practice of Kimi k1.5, our latest multi-modal LLM trained with RL, including its RL training techniques, multi-modal data recipes, and infrastructure optimization. Long context scaling and improved policy optimization methods are key ingredients of our approach, which establishes a simple yet effective RL framework without relying on more complex techniques such as Monte Carlo tree search, value functions, and process reward models. Notably, our system achieves state-of-the-art reasoning performance across multiple benchmarks and modalities -- e.g., 77.5 on AIME, 96.2 on MATH 500, 94th percentile on Codeforces, 74.9 on MathVista -- matching OpenAI's o1. Moreover, we present effective long2short methods that use long-CoT techniques to improve short-CoT models, yielding state-of-the-art short-CoT reasoning results -- e.g., 60.8 on AIME, 94.6 on MATH500, 47.3 on LiveCodeBench -- outperforming existing short-CoT models such as GPT-4o and Claude Sonnet 3.5 by a large margin (up to +550%). |
|
2025-01-23T00:00:00 | 2501.13074 | Autonomy-of-Experts Models | [
"Ang Lv",
"Ruobing Xie",
"Yining Qian",
"Songhao Wu",
"Xingwu Sun",
"Zhanhui Kang",
"Di Wang",
"Rui Yan"
] | Mixture-of-Experts (MoE) models mostly use a router to assign tokens to specific expert modules, activating only partial parameters and often outperforming dense models. We argue that the separation between the router's decision-making and the experts' execution is a critical yet overlooked issue, leading to suboptimal expert selection and ineffective learning. To address this, we propose Autonomy-of-Experts (AoE), a novel MoE paradigm in which experts autonomously select themselves to process inputs. AoE is based on the insight that an expert is aware of its own capacity to effectively process a token, an awareness reflected in the scale of its internal activations. In AoE, routers are removed; instead, experts pre-compute internal activations for inputs and are ranked based on their activation norms. Only the top-ranking experts proceed with the forward pass, while the others abort. The overhead of pre-computing activations is reduced through a low-rank weight factorization. This self-evaluating-then-partner-comparing approach ensures improved expert selection and effective learning. We pre-train language models ranging from 700M to 4B parameters, demonstrating that AoE outperforms traditional MoE models with comparable efficiency. |
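A toy PyTorch sketch of router-free, activation-norm-based expert selection as described above; the real AoE design pre-computes only a low-rank partial activation, which this sketch omits, and all layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class AoELayerSketch(nn.Module):
    """Router-free expert selection: each expert pre-computes its internal
    activation, the activation norm serves as the score, and only the top-k
    experts per token finish the forward pass."""

    def __init__(self, dim: int, hidden: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.up = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(num_experts)])
        self.down = nn.ModuleList([nn.Linear(hidden, dim) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (tokens, dim)
        acts = torch.stack([torch.relu(up(x)) for up in self.up], dim=1)  # (tokens, E, hidden)
        scores = acts.norm(dim=-1)                              # (tokens, E)
        topk = scores.topk(self.top_k, dim=-1).indices          # (tokens, k)
        out = torch.zeros_like(x)
        for e, down in enumerate(self.down):
            keep = (topk == e).any(dim=-1)                      # tokens that retain expert e
            if keep.any():
                out[keep] += down(acts[keep, e])
        return out

y = AoELayerSketch(dim=16, hidden=32, num_experts=4)(torch.randn(5, 16))
```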
|
2025-01-23T00:00:00 | 2501.11067 | IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems | [
"Elad Levi",
"Ilan Kadar"
] | https://github.com/plurai-ai/intellagent | Large Language Models (LLMs) are transforming artificial intelligence, evolving into task-oriented systems capable of autonomous planning and execution. One of the primary applications of LLMs is conversational AI systems, which must navigate multi-turn dialogues, integrate domain-specific APIs, and adhere to strict policy constraints. However, evaluating these agents remains a significant challenge, as traditional methods fail to capture the complexity and variability of real-world interactions. We introduce IntellAgent, a scalable, open-source multi-agent framework designed to evaluate conversational AI systems comprehensively. IntellAgent automates the creation of diverse, synthetic benchmarks by combining policy-driven graph modeling, realistic event generation, and interactive user-agent simulations. This innovative approach provides fine-grained diagnostics, addressing the limitations of static and manually curated benchmarks with coarse-grained metrics. IntellAgent represents a paradigm shift in evaluating conversational AI. By simulating realistic, multi-policy scenarios across varying levels of complexity, IntellAgent captures the nuanced interplay of agent capabilities and policy constraints. Unlike traditional methods, it employs a graph-based policy model to represent relationships, likelihoods, and complexities of policy interactions, enabling highly detailed diagnostics. IntellAgent also identifies critical performance gaps, offering actionable insights for targeted optimization. Its modular, open-source design supports seamless integration of new domains, policies, and APIs, fostering reproducibility and community collaboration. Our findings demonstrate that IntellAgent serves as an effective framework for advancing conversational AI by addressing challenges in bridging research and deployment. The framework is available at https://github.com/plurai-ai/intellagent |
2025-01-23T00:00:00 | 2501.12895 | Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback | [
"Yafu Li",
"Xuyang Hu",
"Xiaoye Qu",
"Linjie Li",
"Yu Cheng"
] | https://github.com/yafuly/TPO | Large language models (LLMs) demonstrate impressive performance but lack the flexibility to adapt to human preferences quickly without retraining. In this work, we introduce Test-time Preference Optimization (TPO), a framework that aligns LLM outputs with human preferences during inference, removing the need to update model parameters. Rather than relying on purely numerical rewards, TPO translates reward signals into textual critiques and uses them as textual rewards to iteratively refine its response. Evaluations on benchmarks covering instruction following, preference alignment, safety, and mathematics reveal that TPO progressively improves alignment with human preferences. Notably, after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can surpass the aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO scales efficiently with both the search width and depth during inference. Through case studies, we illustrate how TPO exploits the innate capacity of LLMs to interpret and act upon reward signals. Our findings establish TPO as a practical, lightweight alternative for test-time preference optimization, achieving alignment on the fly. Our code is publicly available at https://github.com/yafuly/TPO. |
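A minimal sketch of the test-time loop described above, assuming placeholder `generate` and `judge` callables (the judge returns a chosen response plus a textual critique); this is not the released TPO code:

```python
def test_time_preference_optimization(prompt, generate, judge, steps=3, width=4):
    """Sample `width` candidates per step, let the judge pick a winner and emit a
    textual critique, then condition the next round of generation on that critique."""
    critique, best = "", None
    for _ in range(steps):
        candidates = [generate(prompt, critique) for _ in range(width)]
        best, critique = judge(prompt, candidates)
    return best

# Toy usage with dummy callables.
generate = lambda p, fb: p + (f" [revised per: {fb}]" if fb else " [draft]")
judge = lambda p, cands: (max(cands, key=len), "add a concrete example")
print(test_time_preference_optimization("Explain KV caching.", generate, judge))
```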
2025-01-23T00:00:00 | 2501.13106 | VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding | [
"Boqiang Zhang",
"Kehan Li",
"Zesen Cheng",
"Zhiqiang Hu",
"Yuqian Yuan",
"Guanzheng Chen",
"Sicong Leng",
"Yuming Jiang",
"Hang Zhang",
"Xin Li",
"Peng Jin",
"Wenqi Zhang",
"Fan Wang",
"Lidong Bing",
"Deli Zhao"
] | In this paper, we propose VideoLLaMA3, a more advanced multimodal foundation model for image and video understanding. The core design philosophy of VideoLLaMA3 is vision-centric. The meaning of "vision-centric" is two-fold: the vision-centric training paradigm and vision-centric framework design. The key insight of our vision-centric training paradigm is that high-quality image-text data is crucial for both image and video understanding. Instead of preparing massive video-text datasets, we focus on constructing large-scale and high-quality image-text datasets. VideoLLaMA3 has four training stages: 1) vision-centric alignment stage, which warms up the vision encoder and projector; 2) vision-language pretraining stage, which jointly tunes the vision encoder, projector, and LLM with large-scale image-text data covering multiple types (including scene images, documents, charts) as well as text-only data; 3) multi-task fine-tuning stage, which incorporates image-text SFT data for downstream tasks and video-text data to establish a foundation for video understanding; and 4) video-centric fine-tuning, which further improves the model's capability in video understanding. As for the framework design, to better capture fine-grained details in images, the pretrained vision encoder is adapted to encode images of varying sizes into vision tokens with corresponding numbers, rather than a fixed number of tokens. For video inputs, we reduce the number of vision tokens according to their similarity so that the representation of videos will be more precise and compact. Benefiting from these vision-centric designs, VideoLLaMA3 achieves compelling performances in both image and video understanding benchmarks. |
|
2025-01-23T00:00:00 | 2501.13007 | Pairwise RM: Perform Best-of-N Sampling with Knockout Tournament | [
"Yantao Liu",
"Zijun Yao",
"Rui Min",
"Yixin Cao",
"Lei Hou",
"Juanzi Li"
] | Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large Language Models (LLMs), relies on reward models to select the best candidate solution from multiple generations. However, traditional reward models often assign arbitrary and inconsistent scores, limiting their effectiveness. To address this, we propose a Pairwise Reward Model (Pairwise RM) combined with a knockout tournament for BoN sampling. Instead of assigning absolute scores, given one math problem, Pairwise RM evaluates two candidate solutions' correctness simultaneously. This approach eliminates the need for arbitrary scoring and enables cross-validation of solutions through parallel comparison. In the knockout tournament, Pairwise RM conducts pairwise comparisons between candidate solutions and eliminates the incorrect ones iteratively. We construct a large-scale dataset of 443K pairwise comparisons derived from NumiaMath, annotated using gemini-1.5-flash, and train the Pairwise RM via supervised fine-tuning. Experiments on MATH-500 and the Olympiad Bench demonstrate significant improvements over traditional discriminative reward models. A 40\% to 60\% relative improvement is achieved on the top 50\% most challenging problems. |
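The knockout tournament itself is simple single-elimination selection driven by pairwise judgments; a sketch follows, with `prefer` standing in for the trained Pairwise RM:

```python
from typing import Callable, List

def knockout_tournament(problem: str, solutions: List[str],
                        prefer: Callable[[str, str, str], str]) -> str:
    """Single-elimination best-of-N: `prefer(problem, a, b)` returns whichever
    candidate the pairwise reward model judges more likely to be correct."""
    pool = list(solutions)
    while len(pool) > 1:
        next_round = [prefer(problem, pool[i], pool[i + 1])
                      for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2 == 1:              # an odd candidate advances on a bye
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]

# Toy usage: a dummy judge that prefers the longer derivation.
winner = knockout_tournament("1+1=?", ["2", "1+1 equals 2", "maybe 3"],
                             lambda q, a, b: max(a, b, key=len))
```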
|
2025-01-23T00:00:00 | 2501.13928 | Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass | [
"Jianing Yang",
"Alexander Sax",
"Kevin J. Liang",
"Mikael Henaff",
"Hao Tang",
"Ang Cao",
"Joyce Chai",
"Franziska Meier",
"Matt Feiszli"
] | Multi-view 3D reconstruction remains a core challenge in computer vision, particularly in applications requiring accurate and scalable representations across diverse perspectives. Current leading methods such as DUSt3R employ a fundamentally pairwise approach, processing images in pairs and necessitating costly global alignment procedures to reconstruct from multiple views. In this work, we propose Fast 3D Reconstruction (Fast3R), a novel multi-view generalization to DUSt3R that achieves efficient and scalable 3D reconstruction by processing many views in parallel. Fast3R's Transformer-based architecture forwards N images in a single forward pass, bypassing the need for iterative alignment. Through extensive experiments on camera pose estimation and 3D reconstruction, Fast3R demonstrates state-of-the-art performance, with significant improvements in inference speed and reduced error accumulation. These results establish Fast3R as a robust alternative for multi-view applications, offering enhanced scalability without compromising reconstruction accuracy. |
|
2025-01-24T00:00:00 | 2501.13629 | Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models | [
"Zhenghao Lin",
"Zihao Tang",
"Xiao Liu",
"Yeyun Gong",
"Yi Cheng",
"Qi Chen",
"Hang Li",
"Ying Xin",
"Ziyue Yang",
"Kailai Yang",
"Yu Yan",
"Xiao Liang",
"Shuai Lu",
"Yiming Huang",
"Zheheng Luo",
"Lei Qu",
"Xuan Feng",
"Yaoxiang Wang",
"Yuqing Xia",
"Feiyang Chen",
"Yuting Jiang",
"Yasen Hu",
"Hao Ni",
"Binyang Li",
"Guoshuai Zhao",
"Jui-Hao Chiang",
"Zhongxin Guo",
"Chen Lin",
"Kun Kuang",
"Wenjie Li",
"Yelong Shen",
"Jian Jiao",
"Peng Cheng",
"Mao Yang"
] | We introduce Sigma, an efficient large language model specialized for the system domain, empowered by a novel architecture including DiffQKV attention, and pre-trained on our meticulously collected system domain data. DiffQKV attention significantly enhances the inference efficiency of Sigma by optimizing the Query (Q), Key (K), and Value (V) components in the attention mechanism differentially, based on their varying impacts on the model performance and efficiency indicators. Specifically, we (1) conduct extensive experiments that demonstrate the model's varying sensitivity to the compression of K and V components, leading to the development of differentially compressed KV, and (2) propose augmented Q to expand the Q head dimension, which enhances the model's representation capacity with minimal impacts on the inference speed. Rigorous theoretical and empirical analyses reveal that DiffQKV attention significantly enhances efficiency, achieving up to a 33.36% improvement in inference speed over the conventional grouped-query attention (GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various sources, including 19.5B tokens of system domain data that we carefully collect and 1T tokens of synthesized and rewritten data. In general domains, Sigma achieves comparable performance to other state-of-the-art models. In the system domain, we introduce the first comprehensive benchmark AIMicius, where Sigma demonstrates remarkable performance across all tasks, significantly outperforming GPT-4 with an absolute improvement up to 52.5%. |
|
2025-01-24T00:00:00 | 2501.13926 | Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step | [
"Ziyu Guo",
"Renrui Zhang",
"Chengzhuo Tong",
"Zhizheng Zhao",
"Peng Gao",
"Hongsheng Li",
"Pheng-Ann Heng"
] | https://github.com/ZiyuGuo99/Image-Generation-CoT | Chain-of-Thought (CoT) reasoning has been extensively explored in large models to tackle complex understanding tasks. However, it still remains an open question whether such strategies can be applied to verifying and reinforcing image generation scenarios. In this paper, we provide the first comprehensive investigation of the potential of CoT reasoning to enhance autoregressive image generation. We focus on three techniques: scaling test-time computation for verification, aligning model preferences with Direct Preference Optimization (DPO), and integrating these techniques for complementary effects. Our results demonstrate that these approaches can be effectively adapted and combined to significantly improve image generation performance. Furthermore, given the pivotal role of reward models in our findings, we propose the Potential Assessment Reward Model (PARM) and PARM++, specialized for autoregressive image generation. PARM adaptively assesses each generation step through a potential assessment approach, merging the strengths of existing reward models, and PARM++ further introduces a reflection mechanism to self-correct the generated unsatisfactory image. Using our investigated reasoning strategies, we enhance a baseline model, Show-o, to achieve superior results, with a significant +24% improvement on the GenEval benchmark, surpassing Stable Diffusion 3 by +15%. We hope our study provides unique insights and paves a new path for integrating CoT reasoning with autoregressive image generation. Code and models are released at https://github.com/ZiyuGuo99/Image-Generation-CoT |
2025-01-24T00:00:00 | 2501.10799 | Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback | [
"Yen-Ting Lin",
"Di Jin",
"Tengyu Xu",
"Tianhao Wu",
"Sainbayar Sukhbaatar",
"Chen Zhu",
"Yun He",
"Yun-Nung Chen",
"Jason Weston",
"Yuandong Tian",
"Arash Rahnama",
"Sinong Wang",
"Hao Ma",
"Han Fang"
] | Large language models (LLMs) have recently demonstrated remarkable success in mathematical reasoning. Despite progress in methods like chain-of-thought prompting and self-consistency sampling, these advances often focus on final correctness without ensuring that the underlying reasoning process is coherent and reliable. This paper introduces Step-KTO, a training framework that combines process-level and outcome-level binary feedback to guide LLMs toward more trustworthy reasoning trajectories. By providing binary evaluations for both the intermediate reasoning steps and the final answer, Step-KTO encourages the model to adhere to logical progressions rather than relying on superficial shortcuts. Our experiments on challenging mathematical benchmarks show that Step-KTO significantly improves both final answer accuracy and the quality of intermediate reasoning steps. For example, on the MATH-500 dataset, Step-KTO achieves a notable improvement in Pass@1 accuracy over strong baselines. These results highlight the promise of integrating stepwise process feedback into LLM training, paving the way toward more interpretable and dependable reasoning capabilities. |
|
2025-01-24T00:00:00 | 2501.13124 | Debate Helps Weak-to-Strong Generalization | [
"Hao Lang",
"Fei Huang",
"Yongbin Li"
] | Common methods for aligning already-capable models with desired behavior rely on the ability of humans to provide supervision. However, future superhuman models will surpass the capability of humans. Therefore, humans will only be able to weakly supervise superhuman models. This expected deficiency of human evaluation would weaken the safety of future AI systems. Scalable oversight and weak-to-strong generalization are two complementary approaches to tackle this issue. In this paper, we attempt to combine the strengths of these two approaches to further improve alignment. Specifically, we investigate ways of improving human supervision with a strong pretrained model and then supervise the strong model with enhanced weak human supervision. To make iterative empirical progress, we consider an analogy: can we use a strong model to improve weak model supervision and then use it to supervise the strong model? We empirically test it by finetuning a small weak model on ground truth labels with the additional help from a large strong model, and then finetuning the strong model on labels generated by the weak model. We find that debate can assist a weak model in extracting trustworthy information from an untrustworthy strong model, which provides leverage as context on samples when training a weak model. We also show that an ensemble of weak models helps exploit long arguments generated by strong model debaters and obtain a more robust supervision estimate. Extensive experiments on the OpenAI weak-to-strong NLP benchmarks show that the combination approach leads to better alignment, which indicates that debate has the potential to help weak-to-strong generalization. |
|
2025-01-24T00:00:00 | 2501.13920 | IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models | [
"Jiayi Lei",
"Renrui Zhang",
"Xiangfei Hu",
"Weifeng Lin",
"Zhen Li",
"Wenjian Sun",
"Ruoyi Du",
"Le Zhuo",
"Zhongyu Li",
"Xinyue Li",
"Shitian Zhao",
"Ziyu Guo",
"Yiting Lu",
"Peng Gao",
"Hongsheng Li"
] | https://github.com/jylei16/Imagine-e | With the rapid development of diffusion models, text-to-image (T2I) models have made significant progress, showcasing impressive abilities in prompt following and image generation. Recently launched models such as FLUX.1 and Ideogram2.0, along with others like Dall-E3 and Stable Diffusion 3, have demonstrated exceptional performance across various complex tasks, raising questions about whether T2I models are moving towards general-purpose applicability. Beyond traditional image generation, these models exhibit capabilities across a range of fields, including controllable generation, image editing, video, audio, 3D, and motion generation, as well as computer vision tasks like semantic segmentation and depth estimation. However, current evaluation frameworks are insufficient to comprehensively assess these models' performance across expanding domains. To thoroughly evaluate these models, we developed IMAGINE-E and tested six prominent models: FLUX.1, Ideogram2.0, Midjourney, Dall-E3, Stable Diffusion 3, and Jimeng. Our evaluation is divided into five key domains: structured output generation, realism and physical consistency, specific domain generation, challenging scenario generation, and multi-style creation tasks. This comprehensive assessment highlights each model's strengths and limitations, particularly the outstanding performance of FLUX.1 and Ideogram2.0 in structured and specific domain tasks, underscoring the expanding applications and potential of T2I models as foundational AI tools. This study provides valuable insights into the current state and future trajectory of T2I models as they evolve towards general-purpose usability. Evaluation scripts will be released at https://github.com/jylei16/Imagine-e. |
2025-01-24T00:00:00 | 2501.13919 | Temporal Preference Optimization for Long-Form Video Understanding | [
"Rui Li",
"Xiaohan Wang",
"Yuhui Zhang",
"Zeyu Wang",
"Serena Yeung-Levy"
] | Despite significant advancements in video large multimodal models (video-LMMs), achieving effective temporal grounding in long-form videos remains a challenge for existing models. To address this limitation, we propose Temporal Preference Optimization (TPO), a novel post-training framework designed to enhance the temporal grounding capabilities of video-LMMs through preference learning. TPO adopts a self-training approach that enables models to differentiate between well-grounded and less accurate temporal responses by leveraging curated preference datasets at two granularities: localized temporal grounding, which focuses on specific video segments, and comprehensive temporal grounding, which captures extended temporal dependencies across entire video sequences. By optimizing on these preference datasets, TPO significantly enhances temporal understanding while reducing reliance on manually annotated data. Extensive experiments on three long-form video understanding benchmarks--LongVideoBench, MLVU, and Video-MME--demonstrate the effectiveness of TPO across two state-of-the-art video-LMMs. Notably, LLaVA-Video-TPO establishes itself as the leading 7B model on the Video-MME benchmark, underscoring the potential of TPO as a scalable and efficient solution for advancing temporal reasoning in long-form video understanding. Project page: https://ruili33.github.io/tpo_website. |
|
2025-01-24T00:00:00 | 2501.13075 | Evolution and The Knightian Blindspot of Machine Learning | [
"Joel Lehman",
"Elliot Meyerson",
"Tarek El-Gaaly",
"Kenneth O. Stanley",
"Tarin Ziyaee"
] | This paper claims that machine learning (ML) largely overlooks an important facet of general intelligence: robustness to a qualitatively unknown future in an open world. Such robustness relates to Knightian uncertainty (KU) in economics, i.e. uncertainty that cannot be quantified, which is excluded from consideration in ML's key formalisms. This paper aims to identify this blind spot, argue its importance, and catalyze research into addressing it, which we believe is necessary to create truly robust open-world AI. To help illuminate the blind spot, we contrast one area of ML, reinforcement learning (RL), with the process of biological evolution. Despite staggering ongoing progress, RL still struggles in open-world situations, often failing under unforeseen situations. For example, the idea of zero-shot transferring a self-driving car policy trained only in the US to the UK currently seems exceedingly ambitious. In dramatic contrast, biological evolution routinely produces agents that thrive within an open world, sometimes even to situations that are remarkably out-of-distribution (e.g. invasive species; or humans, who do undertake such zero-shot international driving). Interestingly, evolution achieves such robustness without explicit theory, formalisms, or mathematical gradients. We explore the assumptions underlying RL's typical formalisms, showing how they limit RL's engagement with the unknown unknowns characteristic of an ever-changing complex world. Further, we identify mechanisms through which evolutionary processes foster robustness to novel and unpredictable challenges, and discuss potential pathways to algorithmically embody them. The conclusion is that the intriguing remaining fragility of ML may result from blind spots in its formalisms, and that significant gains may result from direct confrontation with the challenge of KU. |
|
2025-01-24T00:00:00 | 2501.13200 | SRMT: Shared Memory for Multi-agent Lifelong Pathfinding | [
"Alsu Sagirova",
"Yuri Kuratov",
"Mikhail Burtsev"
] | https://github.com/Aloriosa/srmt | Multi-agent reinforcement learning (MARL) demonstrates significant progress in solving cooperative and competitive multi-agent problems in various environments. One of the principal challenges in MARL is the need for explicit prediction of the agents' behavior to achieve cooperation. To resolve this issue, we propose the Shared Recurrent Memory Transformer (SRMT) which extends memory transformers to multi-agent settings by pooling and globally broadcasting individual working memories, enabling agents to exchange information implicitly and coordinate their actions. We evaluate SRMT on the Partially Observable Multi-Agent Pathfinding problem in a toy Bottleneck navigation task that requires agents to pass through a narrow corridor and on a POGEMA benchmark set of tasks. In the Bottleneck task, SRMT consistently outperforms a variety of reinforcement learning baselines, especially under sparse rewards, and generalizes effectively to longer corridors than those seen during training. On POGEMA maps, including Mazes, Random, and MovingAI, SRMT is competitive with recent MARL, hybrid, and planning-based algorithms. These results suggest that incorporating shared recurrent memory into the transformer-based architectures can enhance coordination in decentralized multi-agent systems. The source code for training and evaluation is available on GitHub: https://github.com/Aloriosa/srmt. |
2025-01-24T00:00:00 | 2501.13452 | EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion | [
"Jiangchuan Wei",
"Shiyue Yan",
"Wenfeng Lin",
"Boyuan Liu",
"Renjie Chen",
"Mingyu Guo"
] | Recent advancements in video generation have significantly impacted various downstream applications, particularly in identity-preserving video generation (IPT2V). However, existing methods struggle with "copy-paste" artifacts and low similarity issues, primarily due to their reliance on low-level facial image information. This dependence can result in rigid facial appearances and artifacts reflecting irrelevant details. To address these challenges, we propose EchoVideo, which employs two key strategies: (1) an Identity Image-Text Fusion Module (IITF) that integrates high-level semantic features from text, capturing clean facial identity representations while discarding occlusions, poses, and lighting variations to avoid the introduction of artifacts; (2) a two-stage training strategy, incorporating a stochastic method in the second phase to randomly utilize shallow facial information. The objective is to balance the enhancements in fidelity provided by shallow features while mitigating excessive reliance on them. This strategy encourages the model to utilize high-level features during training, ultimately fostering a more robust representation of facial identities. EchoVideo effectively preserves facial identities and maintains full-body integrity. Extensive experiments demonstrate that it achieves excellent results in generating high-quality videos with strong controllability and fidelity. |
|
2025-01-24T00:00:00 | 2501.10283 | GSTAR: Gaussian Surface Tracking and Reconstruction | [
"Chengwei Zheng",
"Lixin Xue",
"Juan Zarate",
"Jie Song"
] | 3D Gaussian Splatting techniques have enabled efficient photo-realistic rendering of static scenes. Recent works have extended these approaches to support surface reconstruction and tracking. However, tracking dynamic surfaces with 3D Gaussians remains challenging due to complex topology changes, such as surfaces appearing, disappearing, or splitting. To address these challenges, we propose GSTAR, a novel method that achieves photo-realistic rendering, accurate surface reconstruction, and reliable 3D tracking for general dynamic scenes with changing topology. Given multi-view captures as input, GSTAR binds Gaussians to mesh faces to represent dynamic objects. For surfaces with consistent topology, GSTAR maintains the mesh topology and tracks the meshes using Gaussians. In regions where topology changes, GSTAR adaptively unbinds Gaussians from the mesh, enabling accurate registration and the generation of new surfaces based on these optimized Gaussians. Additionally, we introduce a surface-based scene flow method that provides robust initialization for tracking between frames. Experiments demonstrate that our method effectively tracks and reconstructs dynamic surfaces, enabling a range of applications. Our project page with the code release is available at https://eth-ait.github.io/GSTAR/. |
|
2025-01-24T00:00:00 | 2501.10018 | DiffuEraser: A Diffusion Model for Video Inpainting | [
"Xiaowen Li",
"Haolan Xue",
"Peiran Ren",
"Liefeng Bo"
] | Recent video inpainting algorithms integrate flow-based pixel propagation with transformer-based generation to leverage optical flow for restoring textures and objects using information from neighboring frames, while completing masked regions through visual Transformers. However, these approaches often encounter blurring and temporal inconsistencies when dealing with large masks, highlighting the need for models with enhanced generative capabilities. Recently, diffusion models have emerged as a prominent technique in image and video generation due to their impressive performance. In this paper, we introduce DiffuEraser, a video inpainting model based on stable diffusion, designed to fill masked regions with greater details and more coherent structures. We incorporate prior information to provide initialization and weak conditioning, which helps mitigate noisy artifacts and suppress hallucinations. Additionally, to improve temporal consistency during long-sequence inference, we expand the temporal receptive fields of both the prior model and DiffuEraser, and further enhance consistency by leveraging the temporal smoothing property of Video Diffusion Models. Experimental results demonstrate that our proposed method outperforms state-of-the-art techniques in both content completeness and temporal consistency while maintaining acceptable efficiency. |
|
2025-01-24T00:00:00 | 2501.13826 | Video-MMMU: Evaluating Knowledge Acquisition from Multi-Discipline Professional Videos | [
"Kairui Hu",
"Penghao Wu",
"Fanyi Pu",
"Wang Xiao",
"Yuanhan Zhang",
"Xiang Yue",
"Bo Li",
"Ziwei Liu"
] | Humans acquire knowledge through three cognitive stages: perceiving information, comprehending knowledge, and adapting knowledge to solve novel problems. Videos serve as an effective medium for this learning process, facilitating a progression through these cognitive stages. However, existing video benchmarks fail to systematically evaluate the knowledge acquisition capabilities in Large Multimodal Models (LMMs). To address this gap, we introduce Video-MMMU, a multi-modal, multi-disciplinary benchmark designed to assess LMMs' ability to acquire and utilize knowledge from videos. Video-MMMU features a curated collection of 300 expert-level videos and 900 human-annotated questions across six disciplines, evaluating knowledge acquisition through stage-aligned question-answer pairs: Perception, Comprehension, and Adaptation. A proposed knowledge gain metric, Δknowledge, quantifies improvement in performance after video viewing. Evaluation of LMMs reveals a steep decline in performance as cognitive demands increase and highlights a significant gap between human and model knowledge acquisition, underscoring the need for methods to enhance LMMs' capability to learn and adapt from videos. |
|
2025-01-24T00:00:00 | 2501.13918 | Improving Video Generation with Human Feedback | [
"Jie Liu",
"Gongye Liu",
"Jiajun Liang",
"Ziyang Yuan",
"Xiaokun Liu",
"Mingwu Zheng",
"Xiele Wu",
"Qiulin Wang",
"Wenyu Qin",
"Menghan Xia",
"Xintao Wang",
"Xiaohong Liu",
"Fei Yang",
"Pengfei Wan",
"Di Zhang",
"Kun Gai",
"Yujiu Yang",
"Wanli Ouyang"
] | Video generation has achieved significant advances through rectified flow techniques, but issues like unsmooth motion and misalignment between videos and prompts persist. In this work, we develop a systematic pipeline that harnesses human feedback to mitigate these problems and refine the video generation model. Specifically, we begin by constructing a large-scale human preference dataset focused on modern video generation models, incorporating pairwise annotations across multi-dimensions. We then introduce VideoReward, a multi-dimensional video reward model, and examine how annotations and various design choices impact its rewarding efficacy. From a unified reinforcement learning perspective aimed at maximizing reward with KL regularization, we introduce three alignment algorithms for flow-based models by extending those from diffusion models. These include two training-time strategies: direct preference optimization for flow (Flow-DPO) and reward weighted regression for flow (Flow-RWR), and an inference-time technique, Flow-NRG, which applies reward guidance directly to noisy videos. Experimental results indicate that VideoReward significantly outperforms existing reward models, and Flow-DPO demonstrates superior performance compared to both Flow-RWR and standard supervised fine-tuning methods. Additionally, Flow-NRG lets users assign custom weights to multiple objectives during inference, meeting personalized video quality needs. Project page: https://gongyeliu.github.io/videoalign. |
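Flow-DPO adapts preference optimization to rectified-flow models; as background, the sketch below shows the standard DPO objective it builds on, with sequence log-probabilities as placeholders for the flow-specific quantities used in the paper:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w: torch.Tensor, logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor, ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO: push the policy to widen the (preferred - rejected) margin
    relative to a frozen reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

# Toy usage with scalar log-probabilities for a preferred/rejected video pair.
w, l = torch.tensor([-5.0]), torch.tensor([-7.0])
print(dpo_loss(w, l, w.detach(), l.detach()))
```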
|
2025-01-24T00:00:00 | 2501.13554 | One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt | [
"Tao Liu",
"Kai Wang",
"Senmao Li",
"Joost van de Weijer",
"Fahad Shahbaz Khan",
"Shiqi Yang",
"Yaxing Wang",
"Jian Yang",
"Ming-Ming Cheng"
] | https://github.com/byliutao/1Prompt1Story | Text-to-image generation models can create high-quality images from input prompts. However, they struggle to support the consistent generation of identity-preserving requirements for storytelling. Existing approaches to this problem typically require extensive training in large datasets or additional modifications to the original model architectures. This limits their applicability across different domains and diverse diffusion model configurations. In this paper, we first observe the inherent capability of language models, coined context consistency, to comprehend identity through context with a single prompt. Drawing inspiration from the inherent context consistency, we propose a novel training-free method for consistent text-to-image (T2I) generation, termed "One-Prompt-One-Story" (1Prompt1Story). Our approach 1Prompt1Story concatenates all prompts into a single input for T2I diffusion models, initially preserving character identities. We then refine the generation process using two novel techniques: Singular-Value Reweighting and Identity-Preserving Cross-Attention, ensuring better alignment with the input description for each frame. In our experiments, we compare our method against various existing consistent T2I generation approaches to demonstrate its effectiveness through quantitative metrics and qualitative assessments. Code is available at https://github.com/byliutao/1Prompt1Story. |
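The core "one prompt" construction can be sketched in a few lines; the wording of the example prompts is purely illustrative, and the paper's Singular-Value Reweighting and Identity-Preserving Cross-Attention refinements are omitted:

```python
def one_prompt_one_story(identity_prompt: str, frame_prompts: list[str]) -> str:
    """Concatenate the shared identity description with every frame description
    so a single T2I prompt carries the character context for all frames."""
    return " ".join([identity_prompt] + frame_prompts)

single_prompt = one_prompt_one_story(
    "A watercolor painting of a small red fox",
    ["exploring a snowy forest.", "napping under a pine tree.", "chasing fireflies at dusk."])
print(single_prompt)
```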
2025-01-24T00:00:00 | 2501.13824 | Hallucinations Can Improve Large Language Models in Drug Discovery | [
"Shuzhou Yuan",
"Michael Färber"
] | Concerns about hallucinations in Large Language Models (LLMs) have been raised by researchers, yet their potential in areas where creativity is vital, such as drug discovery, merits exploration. In this paper, we come up with the hypothesis that hallucinations can improve LLMs in drug discovery. To verify this hypothesis, we use LLMs to describe the SMILES string of molecules in natural language and then incorporate these descriptions as part of the prompt to address specific tasks in drug discovery. Evaluated on seven LLMs and five classification tasks, our findings confirm the hypothesis: LLMs can achieve better performance with text containing hallucinations. Notably, Llama-3.1-8B achieves an 18.35% gain in ROC-AUC compared to the baseline without hallucination. Furthermore, hallucinations generated by GPT-4o provide the most consistent improvements across models. Additionally, we conduct empirical analyses and a case study to investigate key factors affecting performance and the underlying reasons. Our research sheds light on the potential use of hallucinations for LLMs and offers new perspectives for future research leveraging LLMs in drug discovery. |
|
2025-01-24T00:00:00 | 2501.10979 | Control LLM: Controlled Evolution for Intelligence Retention in LLM | [
"Haichao Wei",
"Yunxiang Ren",
"Zhoutong Fu",
"Aman Lunia",
"Yi-Lin Chen",
"Alice Leung",
"Ya Xu"
] | https://github.com/linkedin/ControlLLM | Large Language Models (LLMs) demand significant computational resources, making it essential to enhance their capabilities without retraining from scratch. A key challenge in this domain is catastrophic forgetting (CF), which hampers performance during Continuous Pre-training (CPT) and Continuous Supervised Fine-Tuning (CSFT). We propose Control LLM, a novel approach that leverages parallel pre-trained and expanded transformer blocks, aligning their hidden states through interpolation strategies. This method effectively preserves performance on existing tasks while seamlessly integrating new knowledge. Extensive experiments demonstrate the effectiveness of Control LLM in both CPT and CSFT. On Llama3.1-8B-Instruct, it achieves significant improvements in mathematical reasoning (+14.4% on Math-Hard) and coding performance (+10% on MBPP-PLUS). On Llama3.1-8B, it enhances multilingual capabilities (+10.6% on C-Eval, +6.8% on CMMLU, and +30.2% on CMMLU-0shot-CoT). It surpasses existing methods and achieves SOTA among open-source models tuned from the same base model, using substantially less data and compute. Crucially, these gains are realized while preserving strong original capabilities, with minimal degradation (<4.3% on MMLU) compared to >35% in open-source Math and Coding models. This approach has been successfully deployed in LinkedIn's GenAI-powered job seeker and Ads unit products. To support further research, we release the training and evaluation code (https://github.com/linkedin/ControlLLM) along with models trained on public datasets (https://huggingface.co/ControlLLM) to the community. |
2025-01-24T00:00:00 | 2501.11858 | EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents | [
"Zhili Cheng",
"Yuge Tu",
"Ran Li",
"Shiqi Dai",
"Jinyi Hu",
"Shengding Hu",
"Jiahao Li",
"Yang Shi",
"Tianyu Yu",
"Weize Chen",
"Lei Shi",
"Maosong Sun"
] | https://github.com/thunlp/EmbodiedEval | Multimodal Large Language Models (MLLMs) have shown significant advancements, providing a promising future for embodied agents. Existing benchmarks for evaluating MLLMs primarily utilize static images or videos, limiting assessments to non-interactive scenarios. Meanwhile, existing embodied AI benchmarks are task-specific and not diverse enough, which do not adequately evaluate the embodied capabilities of MLLMs. To address this, we propose EmbodiedEval, a comprehensive and interactive evaluation benchmark for MLLMs with embodied tasks. EmbodiedEval features 328 distinct tasks within 125 varied 3D scenes, each of which is rigorously selected and annotated. It covers a broad spectrum of existing embodied AI tasks with significantly enhanced diversity, all within a unified simulation and evaluation framework tailored for MLLMs. The tasks are organized into five categories: navigation, object interaction, social interaction, attribute question answering, and spatial question answering to assess different capabilities of the agents. We evaluated the state-of-the-art MLLMs on EmbodiedEval and found that they have a significant shortfall compared to human level on embodied tasks. Our analysis demonstrates the limitations of existing MLLMs in embodied capabilities, providing insights for their future development. We open-source all evaluation data and simulation framework at https://github.com/thunlp/EmbodiedEval. |
2025-01-27T00:00:00 | 2501.14492 | RealCritic: Towards Effectiveness-Driven Evaluation of Language Model Critiques | [
"Zhengyang Tang",
"Ziniu Li",
"Zhenyang Xiao",
"Tian Ding",
"Ruoyu Sun",
"Benyou Wang",
"Dayiheng Liu",
"Fei Huang",
"Tianyu Liu",
"Bowen Yu",
"Junyang Lin"
] | https://github.com/tangzhy/RealCritic | Critiques are important for enhancing the performance of Large Language Models (LLMs), enabling both self-improvement and constructive feedback for others by identifying flaws and suggesting improvements. However, evaluating the critique capabilities of LLMs presents a significant challenge due to the open-ended nature of the task. In this work, we introduce a new benchmark designed to assess the critique capabilities of LLMs. Unlike existing benchmarks, which typically function in an open-loop fashion, our approach employs a closed-loop methodology that evaluates the quality of corrections generated from critiques. Moreover, the benchmark incorporates features such as self-critique, cross-critique, and iterative critique, which are crucial for distinguishing the abilities of advanced reasoning models from more classical ones. We implement this benchmark using eight challenging reasoning tasks. We have several interesting findings. First, despite demonstrating comparable performance in direct chain-of-thought generation, classical LLMs significantly lag behind the advanced reasoning-based model o1-mini across all critique scenarios. Second, in self-critique and iterative critique settings, classical LLMs may even underperform relative to their baseline capabilities. We hope that this benchmark will serve as a valuable resource to guide future advancements. The code and data are available at https://github.com/tangzhy/RealCritic. |
2025-01-27T00:00:00 | 2501.14342 | Chain-of-Retrieval Augmented Generation | [
"Liang Wang",
"Haonan Chen",
"Nan Yang",
"Xiaolong Huang",
"Zhicheng Dou",
"Furu Wei"
] | This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer. Conventional RAG methods usually perform a single retrieval step before the generation process, which limits their effectiveness in addressing complex queries due to imperfect retrieval results. In contrast, our proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the model to dynamically reformulate the query based on the evolving state. To train CoRAG effectively, we utilize rejection sampling to automatically generate intermediate retrieval chains, thereby augmenting existing RAG datasets that only provide the correct final answer. At test time, we propose various decoding strategies to scale the model's test-time compute by controlling the length and number of sampled retrieval chains. Experimental results across multiple benchmarks validate the efficacy of CoRAG, particularly in multi-hop question answering tasks, where we observe more than 10 points improvement in EM score compared to strong baselines. On the KILT benchmark, CoRAG establishes a new state-of-the-art performance across a diverse range of knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to understand the scaling behavior of CoRAG, laying the groundwork for future research aimed at developing factual and grounded foundation models. |
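A hedged sketch of CoRAG-style chain-of-retrieval inference, with `retrieve`, `reason`, and `answer` as placeholder callables rather than the authors' components:

```python
def chain_of_retrieval(question, retrieve, reason, answer, max_steps=4):
    """Iteratively retrieve, reason, and reformulate a sub-query before answering.
    `reason` returns (thought, next_query, done) given the question, the chain so
    far, and the newly retrieved documents."""
    chain, query = [], question
    for _ in range(max_steps):
        docs = retrieve(query)
        thought, next_query, done = reason(question, chain, docs)
        chain.append((query, docs, thought))
        if done:
            break
        query = next_query
    return answer(question, chain)
```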