date | arxiv_id | title | authors | github | abstract |
---|---|---|---|---|---|
2025-01-27T00:00:00 | 2501.14249 | Humanity's Last Exam | [
"Long Phan",
"Alice Gatti",
"Ziwen Han",
"Nathaniel Li",
"Josephina Hu",
"Hugh Zhang",
"Sean Shi",
"Michael Choi",
"Anish Agrawal",
"Arnav Chopra",
"Adam Khoja",
"Ryan Kim",
"Jason Hausenloy",
"Oliver Zhang",
"Mantas Mazeika",
"Daron Anderson",
"Tung Nguyen",
"Mobeen Mahmood",
"Fiona Feng",
"Steven Y. Feng",
"Haoran Zhao",
"Michael Yu",
"Varun Gangal",
"Chelsea Zou",
"Zihan Wang",
"Jessica P. Wang",
"Pawan Kumar",
"Oleksandr Pokutnyi",
"Robert Gerbicz",
"Serguei Popov",
"John-Clark Levin",
"Mstyslav Kazakov",
"Johannes Schmitt",
"Geoff Galgon",
"Alvaro Sanchez",
"Yongki Lee",
"Will Yeadon",
"Scott Sauers",
"Marc Roth",
"Chidozie Agu",
"Søren Riis",
"Fabian Giska",
"Saiteja Utpala",
"Zachary Giboney",
"Gashaw M. Goshu",
"Joan of Arc Xavier",
"Sarah-Jane Crowson",
"Mohinder Maheshbhai Naiya",
"Noah Burns",
"Lennart Finke",
"Zerui Cheng",
"Hyunwoo Park",
"Francesco Fournier-Facio",
"John Wydallis",
"Mark Nandor",
"Ankit Singh",
"Tim Gehrunger",
"Jiaqi Cai",
"Ben McCarty",
"Darling Duclosel",
"Jungbae Nam",
"Jennifer Zampese",
"Ryan G. Hoerr",
"Aras Bacho",
"Gautier Abou Loume",
"Abdallah Galal",
"Hangrui Cao",
"Alexis C Garretson",
"Damien Sileo",
"Qiuyu Ren",
"Doru Cojoc",
"Pavel Arkhipov",
"Usman Qazi",
"Lianghui Li",
"Sumeet Motwani",
"Christian Schroeder de Witt",
"Edwin Taylor",
"Johannes Veith",
"Eric Singer",
"Taylor D. Hartman",
"Paolo Rissone",
"Jaehyeok Jin",
"Jack Wei Lun Shi",
"Chris G. Willcocks",
"Joshua Robinson",
"Aleksandar Mikov",
"Ameya Prabhu",
"Longke Tang",
"Xavier Alapont",
"Justine Leon Uro",
"Kevin Zhou",
"Emily de Oliveira Santos",
"Andrey Pupasov Maksimov",
"Edward Vendrow",
"Kengo Zenitani",
"Julien Guillod",
"Yuqi Li",
"Joshua Vendrow",
"Vladyslav Kuchkin",
"Ng Ze-An",
"Pierre Marion",
"Denis Efremov",
"Jayson Lynch",
"Kaiqu Liang",
"Andrew Gritsevskiy",
"Dakotah Martinez",
"Ben Pageler",
"Nick Crispino",
"Dimitri Zvonkine",
"Natanael Wildner Fraga",
"Saeed Soori",
"Ori Press",
"Henry Tang",
"Julian Salazar",
"Sean R. Green",
"Lina Brüssel",
"Moon Twayana",
"Aymeric Dieuleveut",
"T. Ryan Rogers",
"Wenjin Zhang",
"Bikun Li",
"Jinzhou Yang",
"Arun Rao",
"Gabriel Loiseau",
"Mikhail Kalinin",
"Marco Lukas",
"Ciprian Manolescu",
"Subrata Mishra",
"Ariel Ghislain Kemogne Kamdoum",
"Tobias Kreiman",
"Tad Hogg",
"Alvin Jin",
"Carlo Bosio",
"Gongbo Sun",
"Brian P Coppola",
"Tim Tarver",
"Haline Heidinger",
"Rafael Sayous",
"Stefan Ivanov",
"Joseph M Cavanagh",
"Jiawei Shen",
"Joseph Marvin Imperial",
"Philippe Schwaller",
"Shaipranesh Senthilkuma",
"Andres M Bran",
"Ali Dehghan",
"Andres Algaba",
"Brecht Verbeken",
"David Noever",
"Ragavendran P V",
"Lisa Schut",
"Ilia Sucholutsky",
"Evgenii Zheltonozhskii",
"Derek Lim",
"Richard Stanley",
"Shankar Sivarajan",
"Tong Yang",
"John Maar",
"Julian Wykowski",
"Martí Oller",
"Jennifer Sandlin",
"Anmol Sahu",
"Yuzheng Hu",
"Sara Fish",
"Nasser Heydari",
"Archimedes Apronti",
"Kaivalya Rawal",
"Tobias Garcia Vilchis",
"Yuexuan Zu",
"Martin Lackner",
"James Koppel",
"Jeremy Nguyen",
"Daniil S. Antonenko",
"Steffi Chern",
"Bingchen Zhao",
"Pierrot Arsene",
"Alan Goldfarb",
"Sergey Ivanov",
"Rafał Poświata",
"Chenguang Wang",
"Daofeng Li",
"Donato Crisostomi",
"Andrea Achilleos",
"Benjamin Myklebust",
"Archan Sen",
"David Perrella",
"Nurdin Kaparov",
"Mark H Inlow",
"Allen Zang",
"Elliott Thornley",
"Daniil Orel",
"Vladislav Poritski",
"Shalev Ben-David",
"Zachary Berger",
"Parker Whitfill",
"Michael Foster",
"Daniel Munro",
"Linh Ho",
"Dan Bar Hava",
"Aleksey Kuchkin",
"Robert Lauff",
"David Holmes",
"Frank Sommerhage",
"Keith Schneider",
"Zakayo Kazibwe",
"Nate Stambaugh",
"Mukhwinder Singh",
"Ilias Magoulas",
"Don Clarke",
"Dae Hyun Kim",
"Felipe Meneguitti Dias",
"Veit Elser",
"Kanu Priya Agarwal",
"Victor Efren Guadarrama Vilchis",
"Immo Klose",
"Christoph Demian",
"Ujjwala Anantheswaran",
"Adam Zweiger",
"Guglielmo Albani",
"Jeffery Li",
"Nicolas Daans",
"Maksim Radionov",
"Václav Rozhoň",
"Ziqiao Ma",
"Christian Stump",
"Mohammed Berkani",
"Jacob Platnick",
"Volodymyr Nevirkovets",
"Luke Basler",
"Marco Piccardo",
"Ferenc Jeanplong",
"Niv Cohen",
"Josef Tkadlec",
"Paul Rosu",
"Piotr Padlewski",
"Stanislaw Barzowski",
"Kyle Montgomery",
"Aline Menezes",
"Arkil Patel",
"Zixuan Wang",
"Jamie Tucker-Foltz",
"Jack Stade",
"Tom Goertzen",
"Fereshteh Kazemi",
"Jeremiah Milbauer",
"John Arnold Ambay",
"Abhishek Shukla",
"Yan Carlos Leyva Labrador",
"Alan Givré",
"Hew Wolff",
"Vivien Rossbach",
"Muhammad Fayez Aziz",
"Younesse Kaddar",
"Yanxu Chen",
"Robin Zhang",
"Jiayi Pan",
"Antonio Terpin",
"Niklas Muennighoff",
"Hailey Schoelkopf",
"Eric Zheng",
"Avishy Carmi",
"Adam Jones",
"Jainam Shah",
"Ethan D. L. Brown",
"Kelin Zhu",
"Max Bartolo",
"Richard Wheeler",
"Andrew Ho",
"Shaul Barkan",
"Jiaqi Wang",
"Martin Stehberger",
"Egor Kretov",
"Kaustubh Sridhar",
"Zienab EL-Wasif",
"Anji Zhang",
"Daniel Pyda",
"Joanna Tam",
"David M. Cunningham",
"Vladimir Goryachev",
"Demosthenes Patramanis",
"Michael Krause",
"Andrew Redenti",
"Daniel Bugas",
"David Aldous",
"Jesyin Lai",
"Shannon Coleman",
"Mohsen Bahaloo",
"Jiangnan Xu",
"Sangwon Lee",
"Sandy Zhao",
"Ning Tang",
"Michael K. Cohen",
"Micah Carroll",
"Orr Paradise",
"Jan Hendrik Kirchner",
"Stefan Steinerberger",
"Maksym Ovchynnikov",
"Jason O. Matos",
"Adithya Shenoy",
"Benedito Alves de Oliveira Junior",
"Michael Wang",
"Yuzhou Nie",
"Paolo Giordano",
"Philipp Petersen",
"Anna Sztyber-Betley",
"Priti Shukla",
"Jonathan Crozier",
"Antonella Pinto",
"Shreyas Verma",
"Prashant Joshi",
"Zheng-Xin Yong",
"Allison Tee",
"Jérémy Andréoletti",
"Orion Weller",
"Raghav Singhal",
"Gang Zhang",
"Alexander Ivanov",
"Seri Khoury",
"Hamid Mostaghimi",
"Kunvar Thaman",
"Qijia Chen",
"Tran Quoc Khánh",
"Jacob Loader",
"Stefano Cavalleri",
"Hannah Szlyk",
"Zachary Brown",
"Jonathan Roberts",
"William Alley",
"Kunyang Sun",
"Ryan Stendall",
"Max Lamparth",
"Anka Reuel",
"Ting Wang",
"Hanmeng Xu",
"Sreenivas Goud Raparthi",
"Pablo Hernández-Cámara",
"Freddie Martin",
"Dmitry Malishev",
"Thomas Preu",
"Tomek Korbak",
"Marcus Abramovitch",
"Dominic Williamson",
"Ziye Chen",
"Biró Bálint",
"M Saiful Bari",
"Peyman Kassani",
"Zihao Wang",
"Behzad Ansarinejad",
"Laxman Prasad Goswami",
"Yewen Sun",
"Hossam Elgnainy",
"Daniel Tordera",
"George Balabanian",
"Earth Anderson",
"Lynna Kvistad",
"Alejandro José Moyano",
"Rajat Maheshwari",
"Ahmad Sakor",
"Murat Eron",
"Isaac C. McAlister",
"Javier Gimenez",
"Innocent Enyekwe",
"Andrew Favre D. O.",
"Shailesh Shah",
"Xiaoxiang Zhou",
"Firuz Kamalov",
"Ronald Clark",
"Sherwin Abdoli",
"Tim Santens",
"Khalida Meer",
"Harrison K Wang",
"Kalyan Ramakrishnan",
"Evan Chen",
"Alessandro Tomasiello",
"G. Bruno De Luca",
"Shi-Zhuo Looi",
"Vinh-Kha Le",
"Noam Kolt",
"Niels Mündler",
"Avi Semler",
"Emma Rodman",
"Jacob Drori",
"Carl J Fossum",
"Milind Jagota",
"Ronak Pradeep",
"Honglu Fan",
"Tej Shah",
"Jonathan Eicher",
"Michael Chen",
"Kushal Thaman",
"William Merrill",
"Carter Harris",
"Jason Gross",
"Ilya Gusev",
"Asankhaya Sharma",
"Shashank Agnihotri",
"Pavel Zhelnov",
"Siranut Usawasutsakorn",
"Mohammadreza Mofayezi",
"Sergei Bogdanov",
"Alexander Piperski",
"Marc Carauleanu",
"David K. Zhang",
"Dylan Ler",
"Roman Leventov",
"Ignat Soroko",
"Thorben Jansen",
"Pascal Lauer",
"Joshua Duersch",
"Vage Taamazyan",
"Wiktor Morak",
"Wenjie Ma",
"William Held",
"Tran Đuc Huy",
"Ruicheng Xian",
"Armel Randy Zebaze",
"Mohanad Mohamed",
"Julian Noah Leser",
"Michelle X Yuan",
"Laila Yacar",
"Johannes Lengler",
"Hossein Shahrtash",
"Edson Oliveira",
"Joseph W. Jackson",
"Daniel Espinosa Gonzalez",
"Andy Zou",
"Muthu Chidambaram",
"Timothy Manik",
"Hector Haffenden",
"Dashiell Stander",
"Ali Dasouqi",
"Alexander Shen",
"Emilien Duc",
"Bita Golshani",
"David Stap",
"Mikalai Uzhou",
"Alina Borisovna Zhidkovskaya",
"Lukas Lewark",
"Mátyás Vincze",
"Dustin Wehr",
"Colin Tang",
"Zaki Hossain",
"Shaun Phillips",
"Jiang Muzhen",
"Fredrik Ekström",
"Angela Hammon",
"Oam Patel",
"Nicolas Remy",
"Faraz Farhidi",
"George Medley",
"Forough Mohammadzadeh",
"Madellene Peñaflor",
"Haile Kassahun",
"Alena Friedrich",
"Claire Sparrow",
"Taom Sakal",
"Omkar Dhamane",
"Ali Khajegili Mirabadi",
"Eric Hallman",
"Mike Battaglia",
"Mohammad Maghsoudimehrabani",
"Hieu Hoang",
"Alon Amit",
"Dave Hulbert",
"Roberto Pereira",
"Simon Weber",
"Stephen Mensah",
"Nathan Andre",
"Anton Peristyy",
"Chris Harjadi",
"Himanshu Gupta",
"Stephen Malina",
"Samuel Albanie",
"Will Cai",
"Mustafa Mehkary",
"Frank Reidegeld",
"Anna-Katharina Dick",
"Cary Friday",
"Jasdeep Sidhu",
"Wanyoung Kim",
"Mariana Costa",
"Hubeyb Gurdogan",
"Brian Weber",
"Harsh Kumar",
"Tong Jiang",
"Arunim Agarwal",
"Chiara Ceconello",
"Warren S. Vaz",
"Chao Zhuang",
"Haon Park",
"Andrew R. Tawfeek",
"Daattavya Aggarwal",
"Michael Kirchhof",
"Linjie Dai",
"Evan Kim",
"Johan Ferret",
"Yuzhou Wang",
"Minghao Yan",
"Krzysztof Burdzy",
"Lixin Zhang",
"Antonio Franca",
"Diana T. Pham",
"Kang Yong Loh",
"Joshua Robinson",
"Shreen Gul",
"Gunjan Chhablani",
"Zhehang Du",
"Adrian Cosma",
"Colin White",
"Robin Riblet",
"Prajvi Saxena",
"Jacob Votava",
"Vladimir Vinnikov",
"Ethan Delaney",
"Shiv Halasyamani",
"Syed M. Shahid",
"Jean-Christophe Mourrat",
"Lavr Vetoshkin",
"Renas Bacho",
"Vincent Ginis",
"Aleksandr Maksapetyan",
"Florencia de la Rosa",
"Xiuyu Li",
"Guillaume Malod",
"Leon Lang",
"Julien Laurendeau",
"Fatimah Adesanya",
"Julien Portier",
"Lawrence Hollom",
"Victor Souza",
"Yuchen Anna Zhou",
"Yiğit Yalın",
"Gbenga Daniel Obikoya",
"Luca Arnaboldi",
"Rai",
"Filippo Bigi",
"Kaniuar Bacho",
"Pierre Clavier",
"Gabriel Recchia",
"Mara Popescu",
"Nikita Shulga",
"Ngefor Mildred Tanwie",
"Thomas C. H. Lux",
"Ben Rank",
"Colin Ni",
"Alesia Yakimchyk",
"Huanxu",
"Liu",
"Olle Häggström",
"Emil Verkama",
"Himanshu Narayan",
"Hans Gundlach",
"Leonor Brito-Santana",
"Brian Amaro",
"Vivek Vajipey",
"Rynaa Grover",
"Yiyang Fan",
"Gabriel Poesia Reis e Silva",
"Linwei Xin",
"Yosi Kratish",
"Jakub Łucki",
"Wen-Ding Li",
"Justin Xu",
"Kevin Joseph Scaria",
"Freddie Vargus",
"Farzad Habibi",
"Long",
"Lian",
"Emanuele Rodolà",
"Jules Robins",
"Vincent Cheng",
"Declan Grabb",
"Ida Bosio",
"Tony Fruhauff",
"Ido Akov",
"Eve J. Y. Lo",
"Hao Qi",
"Xi Jiang",
"Ben Segev",
"Jingxuan Fan",
"Sarah Martinson",
"Erik Y. Wang",
"Kaylie Hausknecht",
"Michael P. Brenner",
"Mao Mao",
"Yibo Jiang",
"Xinyu Zhang",
"David Avagian",
"Eshawn Jessica Scipio",
"Muhammad Rehan Siddiqi",
"Alon Ragoler",
"Justin Tan",
"Deepakkumar Patil",
"Rebeka Plecnik",
"Aaron Kirtland",
"Roselynn Grace Montecillo",
"Stephane Durand",
"Omer Faruk Bodur",
"Zahra Adoul",
"Mohamed Zekry",
"Guillaume Douville",
"Ali Karakoc",
"Tania C. B. Santos",
"Samir Shamseldeen",
"Loukmane Karim",
"Anna Liakhovitskaia",
"Nate Resman",
"Nicholas Farina",
"Juan Carlos Gonzalez",
"Gabe Maayan",
"Sarah Hoback",
"Rodrigo De Oliveira Pena",
"Glen Sherman",
"Hodjat Mariji",
"Rasoul Pouriamanesh",
"Wentao Wu",
"Gözdenur Demir",
"Sandra Mendoza",
"Ismail Alarab",
"Joshua Cole",
"Danyelle Ferreira",
"Bryan Johnson",
"Hsiaoyun Milliron",
"Mohammad Safdari",
"Liangti Dai",
"Siriphan Arthornthurasuk",
"Alexey Pronin",
"Jing Fan",
"Angel Ramirez-Trinidad",
"Ashley Cartwright",
"Daphiny Pottmaier",
"Omid Taheri",
"David Outevsky",
"Stanley Stepanic",
"Samuel Perry",
"Luke Askew",
"Raúl Adrián Huerta Rodríguez",
"Abdelkader Dendane",
"Sam Ali",
"Ricardo Lorena",
"Krishnamurthy Iyer",
"Sk Md Salauddin",
"Murat Islam",
"Juan Gonzalez",
"Josh Ducey",
"Russell Campbell",
"Maja Somrak",
"Vasilios Mavroudis",
"Eric Vergo",
"Juehang Qin",
"Benjámin Borbás",
"Eric Chu",
"Jack Lindsey",
"Anil Radhakrishnan",
"Antoine Jallon",
"I. M. J. McInnis",
"Alex Hoover",
"Sören Möller",
"Song Bian",
"John Lai",
"Tejal Patwardhan",
"Summer Yue",
"Alexandr Wang",
"Dan Hendrycks"
] | Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai. |
|
2025-01-27T00:00:00 | 2501.14726 | Relightable Full-Body Gaussian Codec Avatars | [
"Shaofei Wang",
"Tomas Simon",
"Igor Santesteban",
"Timur Bagautdinov",
"Junxuan Li",
"Vasu Agrawal",
"Fabian Prada",
"Shoou-I Yu",
"Pace Nalbone",
"Matt Gramlich",
"Roman Lubachersky",
"Chenglei Wu",
"Javier Romero",
"Jason Saragih",
"Michael Zollhoefer",
"Andreas Geiger",
"Siyu Tang",
"Shunsuke Saito"
] | We propose Relightable Full-Body Gaussian Codec Avatars, a new approach for modeling relightable full-body avatars with fine-grained details including face and hands. The unique challenge for relighting full-body avatars lies in the large deformations caused by body articulation and the resulting impact on appearance caused by light transport. Changes in body pose can dramatically change the orientation of body surfaces with respect to lights, resulting in both local appearance changes due to changes in local light transport functions, as well as non-local changes due to occlusion between body parts. To address this, we decompose the light transport into local and non-local effects. Local appearance changes are modeled using learnable zonal harmonics for diffuse radiance transfer. Unlike spherical harmonics, zonal harmonics are highly efficient to rotate under articulation. This allows us to learn diffuse radiance transfer in a local coordinate frame, which disentangles the local radiance transfer from the articulation of the body. To account for non-local appearance changes, we introduce a shadow network that predicts shadows given precomputed incoming irradiance on a base mesh. This facilitates the learning of non-local shadowing between the body parts. Finally, we use a deferred shading approach to model specular radiance transfer and better capture reflections and highlights such as eye glints. We demonstrate that our approach successfully models both the local and non-local light transport required for relightable full-body avatars, with a superior generalization ability under novel illumination conditions and unseen poses. |
|
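To make the zonal-harmonics point in the abstract above concrete, here is a minimal sketch of evaluating a ZH lobe about an arbitrary axis; the function name, normalization convention, and toy coefficients are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def eval_zonal_harmonics(zh_coeffs, axis, dirs):
    """Evaluate a zonal-harmonic (ZH) expansion oriented along `axis` at unit
    query directions `dirs` (N, 3). A ZH lobe depends only on the angle to its
    axis, so "rotating" it under articulation just means changing `axis`,
    which is why ZH is cheap to rotate compared with full spherical harmonics."""
    cos_t = dirs @ axis                                   # (N,) cosine to the lobe axis
    out = np.zeros_like(cos_t)
    for l, z in enumerate(zh_coeffs):
        basis = np.zeros(l + 1)
        basis[l] = 1.0
        P_l = legendre.legval(cos_t, basis)               # Legendre polynomial P_l(cos t)
        out += z * np.sqrt((2 * l + 1) / (4 * np.pi)) * P_l
    return out

# toy example: a 3-band lobe evaluated before and after "rotating" its axis
coeffs = np.array([1.0, 0.5, 0.1])
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
print(eval_zonal_harmonics(coeffs, np.array([0.0, 0.0, 1.0]), dirs))
print(eval_zonal_harmonics(coeffs, np.array([0.0, 1.0, 0.0]), dirs))
```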
2025-01-27T00:00:00 | 2501.13953 | Redundancy Principles for MLLMs Benchmarks | [
"Zicheng Zhang",
"Xiangyu Zhao",
"Xinyu Fang",
"Chunyi Li",
"Xiaohong Liu",
"Xiongkuo Min",
"Haodong Duan",
"Kai Chen",
"Guangtao Zhai"
] | With the rapid iteration of Multi-modality Large Language Models (MLLMs) and the evolving demands of the field, the number of benchmarks produced annually has surged into the hundreds. This rapid growth has inevitably led to significant redundancy among benchmarks. Therefore, it is crucial to take a step back and critically assess the current state of redundancy and propose targeted principles for constructing effective MLLM benchmarks. In this paper, we focus on redundancy from three key perspectives: 1) Redundancy of benchmark capability dimensions, 2) Redundancy in the number of test questions, and 3) Cross-benchmark redundancy within specific domains. Through a comprehensive analysis of hundreds of MLLMs' performance across more than 20 benchmarks, we aim to quantitatively measure the level of redundancy present in existing MLLM evaluations, provide valuable insights to guide the future development of MLLM benchmarks, and offer strategies to refine and address redundancy issues effectively. |
|
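As a rough illustration of cross-benchmark redundancy, one simple proxy is the rank correlation of model scores across two benchmarks; the function and example numbers below are hypothetical and are not the paper's exact redundancy metric.

```python
from scipy.stats import spearmanr

def benchmark_redundancy(scores_a, scores_b):
    """Rank correlation between two benchmarks' scores over the same set of
    models; a value near 1 means the benchmarks rank models almost identically,
    i.e. they are largely redundant with each other."""
    rho, _ = spearmanr(scores_a, scores_b)
    return rho

# hypothetical accuracies of five MLLMs on two benchmarks
print(benchmark_redundancy([62.1, 70.4, 55.0, 68.2, 73.5],
                           [60.8, 71.0, 57.3, 66.9, 74.1]))
```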
2025-01-27T00:00:00 | 2403.14614 | AdaIR: Adaptive All-in-One Image Restoration via Frequency Mining and Modulation | [
"Yuning Cui",
"Syed Waqas Zamir",
"Salman Khan",
"Alois Knoll",
"Mubarak Shah",
"Fahad Shahbaz Khan"
] | https://github.com/c-yn/AdaIR | In the image acquisition process, various forms of degradation, including noise, haze, and rain, are frequently introduced. These degradations typically arise from the inherent limitations of cameras or unfavorable ambient conditions. To recover clean images from degraded versions, numerous specialized restoration methods have been developed, each targeting a specific type of degradation. Recently, all-in-one algorithms have garnered significant attention by addressing different types of degradations within a single model without requiring prior information of the input degradation type. However, these methods purely operate in the spatial domain and do not delve into the distinct frequency variations inherent to different degradation types. To address this gap, we propose an adaptive all-in-one image restoration network based on frequency mining and modulation. Our approach is motivated by the observation that different degradation types impact the image content on different frequency subbands, thereby requiring different treatments for each restoration task. Specifically, we first mine low- and high-frequency information from the input features, guided by the adaptively decoupled spectra of the degraded image. The extracted features are then modulated by a bidirectional operator to facilitate interactions between different frequency components. Finally, the modulated features are merged into the original input for a progressively guided restoration. With this approach, the model achieves adaptive reconstruction by accentuating the informative frequency subbands according to different input degradations. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on different image restoration tasks, including denoising, dehazing, deraining, motion deblurring, and low-light image enhancement. Our code is available at https://github.com/c-yn/AdaIR. |
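The frequency-mining idea in the AdaIR abstract can be pictured with a small sketch that splits a feature map into low- and high-frequency parts; the fixed radial cutoff here is an illustrative stand-in for the adaptive, input-dependent decoupling and bidirectional modulation used in the paper.

```python
import numpy as np

def split_frequencies(feat, cutoff=0.25):
    """Split a 2-D feature map into low- and high-frequency components with a
    centered radial mask in the Fourier domain. The fixed `cutoff` is only an
    illustrative stand-in for an adaptive, learned decoupling."""
    H, W = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat))
    yy, xx = np.mgrid[0:H, 0:W]
    radius = np.sqrt(((yy - H / 2) / H) ** 2 + ((xx - W / 2) / W) ** 2)
    low_mask = (radius <= cutoff).astype(feat.dtype)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * low_mask)))
    high = feat - low
    return low, high

low, high = split_frequencies(np.random.rand(64, 64).astype(np.float32))
```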
2025-01-27T00:00:00 | 2406.18516 | Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration | [
"Kang Liao",
"Zongsheng Yue",
"Zhouxia Wang",
"Chen Change Loy"
] | Although learning-based image restoration methods have made significant progress, they still struggle with limited generalization to real-world scenarios due to the substantial domain gap caused by training on synthetic data. Existing methods address this issue by improving data synthesis pipelines, estimating degradation kernels, employing deep internal learning, and performing domain adaptation and regularization. Previous domain adaptation methods have sought to bridge the domain gap by learning domain-invariant knowledge in either feature or pixel space. However, these techniques often struggle to extend to low-level vision tasks within a stable and compact framework. In this paper, we show that it is possible to perform domain adaptation via the noise space using diffusion models. In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss that guides the restoration model in progressively aligning both restored synthetic and real-world outputs with a target clean distribution. We refer to this method as denoising as adaptation. To prevent shortcuts during joint training, we present crucial strategies such as channel-shuffling layer and residual-swapping contrastive learning in the diffusion model. They implicitly blur the boundaries between conditioned synthetic and real data and prevent the reliance of the model on easily distinguishable features. Experimental results on three classical image restoration tasks, namely denoising, deblurring, and deraining, demonstrate the effectiveness of the proposed method. |
|
2025-01-27T00:00:00 | 2411.19458 | Multiview Equivariance Improves 3D Correspondence Understanding with Minimal Feature Finetuning | [
"Yang You",
"Yixin Li",
"Congyue Deng",
"Yue Wang",
"Leonidas Guibas"
] | https://github.com/qq456cvb/3DCorrEnhance | Vision foundation models, particularly the ViT family, have revolutionized image understanding by providing rich semantic features. However, despite their success in 2D comprehension, their abilities on grasping 3D spatial relationships are still unclear. In this work, we evaluate and enhance the 3D awareness of ViT-based models. We begin by systematically assessing their ability to learn 3D equivariant features, specifically examining the consistency of semantic embeddings across different viewpoints. Our findings indicate that improved 3D equivariance leads to better performance on various downstream tasks, including pose estimation, tracking, and semantic transfer. Building on this insight, we propose a simple yet effective finetuning strategy based on 3D correspondences, which significantly enhances the 3D correspondence understanding of existing vision models. Remarkably, even finetuning on a single object for just one iteration results in substantial performance gains. All code and resources will be made publicly available to support further advancements in 3D-aware vision models. Our code is available at https://github.com/qq456cvb/3DCorrEnhance. |
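A rough sketch of how feature equivariance can be measured across viewpoints, assuming ground-truth pixel correspondences between the two views are available; the metric and shapes are illustrative, not the paper's evaluation code.

```python
import torch
import torch.nn.functional as F

def multiview_feature_consistency(feat_a, feat_b, pix_a, pix_b):
    """Mean cosine similarity of dense ViT features at 3D-corresponding pixels
    in two views of the same object; higher values indicate more
    view-equivariant features. feat_*: (C, H, W), pix_*: (N, 2) int (row, col)."""
    fa = feat_a[:, pix_a[:, 0], pix_a[:, 1]].T        # (N, C)
    fb = feat_b[:, pix_b[:, 0], pix_b[:, 1]].T
    return F.cosine_similarity(fa, fb, dim=-1).mean()

# toy usage with random features and identical correspondences in both views
feat_a, feat_b = torch.randn(2, 384, 16, 16)
pix = torch.randint(0, 16, (100, 2))
print(multiview_feature_consistency(feat_a, feat_b, pix, pix))
```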
2025-01-27T00:00:00 | 2501.13925 | GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing | [
"Akashah Shabbir",
"Mohammed Zumri",
"Mohammed Bennamoun",
"Fahad S. Khan",
"Salman Khan"
] | Recent advances in large multimodal models (LMMs) have recognized fine-grained grounding as an imperative factor of visual understanding and dialogue. However, the benefits of such representation in LMMs are limited to the natural image domain, and these models perform poorly for remote sensing (RS). The distinct overhead viewpoint, scale variation, and presence of small objects in high-resolution RS imagery present a unique challenge in region-level comprehension. Moreover, the development of the grounding conversation capability of LMMs within RS is hindered by the lack of granular, RS domain-specific grounded data. Addressing these limitations, we propose GeoPixel - the first end-to-end high resolution RS-LMM that supports pixel-level grounding. This capability allows fine-grained visual perception by generating interleaved masks in conversation. GeoPixel supports up to 4K HD resolution in any aspect ratio, ideal for high-precision RS image analysis. To support the grounded conversation generation (GCG) in RS imagery, we curate a visually grounded dataset GeoPixelD through a semi-automated pipeline that utilizes set-of-marks prompting and spatial priors tailored for RS data to methodically control the data generation process. GeoPixel demonstrates superior performance in pixel-level comprehension, surpassing existing LMMs in both single-target and multi-target segmentation tasks. Our methodological ablation studies validate the effectiveness of each component in the overall architecture. Our code and data will be publicly released. |
|
2025-01-27T00:00:00 | 2501.14176 | RL + Transformer = A General-Purpose Problem Solver | [
"Micah Rentschler",
"Jesse Roberts"
] | What if artificial intelligence could not only solve problems for which it was trained but also learn to teach itself to solve new problems (i.e., meta-learn)? In this study, we demonstrate that a pre-trained transformer fine-tuned with reinforcement learning over multiple episodes develops the ability to solve problems that it has never encountered before - an emergent ability called In-Context Reinforcement Learning (ICRL). This powerful meta-learner not only excels in solving unseen in-distribution environments with remarkable sample efficiency, but also shows strong performance in out-of-distribution environments. In addition, we show that it exhibits robustness to the quality of its training data, seamlessly stitches together behaviors from its context, and adapts to non-stationary environments. These behaviors demonstrate that an RL-trained transformer can iteratively improve upon its own solutions, making it an excellent general-purpose problem solver. |
|
2025-01-27T00:00:00 | 2501.13687 | Question Answering on Patient Medical Records with Private Fine-Tuned LLMs | [
"Sara Kothari",
"Ayush Gupta"
] | Healthcare systems continuously generate vast amounts of electronic health records (EHRs), commonly stored in the Fast Healthcare Interoperability Resources (FHIR) standard. Despite the wealth of information in these records, their complexity and volume make it difficult for users to retrieve and interpret crucial health insights. Recent advances in Large Language Models (LLMs) offer a solution, enabling semantic question answering (QA) over medical data, allowing users to interact with their health records more effectively. However, ensuring privacy and compliance requires edge and private deployments of LLMs. This paper proposes a novel approach to semantic QA over EHRs by first identifying the most relevant FHIR resources for a user query (Task1) and subsequently answering the query based on these resources (Task2). We explore the performance of privately hosted, fine-tuned LLMs, evaluating them against benchmark models such as GPT-4 and GPT-4o. Our results demonstrate that fine-tuned LLMs, while 250x smaller in size, outperform GPT-4 family models by 0.55% in F1 score on Task1 and by 42% in METEOR score on Task2. Additionally, we examine advanced aspects of LLM usage, including sequential fine-tuning, model self-evaluation (narcissistic evaluation), and the impact of training data size on performance. The models and datasets are available here: https://huggingface.co/genloop |
|
2025-01-27T00:00:00 | 2501.11325 | CatV2TON: Taming Diffusion Transformers for Vision-Based Virtual Try-On with Temporal Concatenation | [
"Zheng Chong",
"Wenqing Zhang",
"Shiyue Zhang",
"Jun Zheng",
"Xiao Dong",
"Haoxiang Li",
"Yiling Wu",
"Dongmei Jiang",
"Xiaodan Liang"
] | Virtual try-on (VTON) technology has gained attention due to its potential to transform online retail by enabling realistic clothing visualization of images and videos. However, most existing methods struggle to achieve high-quality results across image and video try-on tasks, especially in long video scenarios. In this work, we introduce CatV2TON, a simple and effective vision-based virtual try-on (V2TON) method that supports both image and video try-on tasks with a single diffusion transformer model. By temporally concatenating garment and person inputs and training on a mix of image and video datasets, CatV2TON achieves robust try-on performance across static and dynamic settings. For efficient long-video generation, we propose an overlapping clip-based inference strategy that uses sequential frame guidance and Adaptive Clip Normalization (AdaCN) to maintain temporal consistency with reduced resource demands. We also present ViViD-S, a refined video try-on dataset, achieved by filtering back-facing frames and applying 3D mask smoothing for enhanced temporal consistency. Comprehensive experiments demonstrate that CatV2TON outperforms existing methods in both image and video try-on tasks, offering a versatile and reliable solution for realistic virtual try-ons across diverse scenarios. |
|
2025-01-28T00:00:00 | 2501.15368 | Baichuan-Omni-1.5 Technical Report | [
"Yadong Li",
"Jun Liu",
"Tao Zhang",
"Tao Zhang",
"Song Chen",
"Tianpeng Li",
"Zehuan Li",
"Lijun Liu",
"Lingfeng Ming",
"Guosheng Dong",
"Da Pan",
"Chong Li",
"Yuanbo Fang",
"Dongdong Kuang",
"Mingrui Wang",
"Chenglin Zhu",
"Youwei Zhang",
"Hongyu Guo",
"Fengyu Zhang",
"Yuran Wang",
"Bowen Ding",
"Wei Song",
"Xu Li",
"Yuqi Huo",
"Zheng Liang",
"Shusen Zhang",
"Xin Wu",
"Shuai Zhao",
"Linchu Xiong",
"Yozhen Wu",
"Jiahui Ye",
"Wenhao Lu",
"Bowen Li",
"Yan Zhang",
"Yaqi Zhou",
"Xin Chen",
"Lei Su",
"Hongda Zhang",
"Fuzhong Chen",
"Xuezhen Dong",
"Na Nie",
"Zhiying Wu",
"Bin Xiao",
"Ting Li",
"Shunya Dang",
"Ping Zhang",
"Yijia Sun",
"Jincheng Wu",
"Jinjie Yang",
"Xionghai Lin",
"Zhi Ma",
"Kegeng Wu",
"Jia li",
"Aiyuan Yang",
"Hui Liu",
"Jianqiang Zhang",
"Xiaoxi Chen",
"Guangwei Ai",
"Wentao Zhang",
"Yicong Chen",
"Xiaoqin Huang",
"Kun Li",
"Wenjing Luo",
"Yifei Duan",
"Lingling Zhu",
"Ran Xiao",
"Zhe Su",
"Jiani Pu",
"Dian Wang",
"Xu Jia",
"Tianyu Zhang",
"Mengyu Ai",
"Mang Wang",
"Yujing Qiao",
"Lei Zhang",
"Yanjun Shen",
"Fan Yang",
"Miao Zhen",
"Yijie Zhou",
"Mingyang Chen",
"Fei Li",
"Chenzheng Zhu",
"Keer Lu",
"Yaqi Zhao",
"Hao Liang",
"Youquan Li",
"Yanzhao Qin",
"Linzhuang Sun",
"Jianhua Xu",
"Haoze Sun",
"Mingan Lin",
"Zenan Zhou",
"Weipeng Chen"
] | We introduce Baichuan-Omni-1.5, an omni-modal model that not only has omni-modal understanding capabilities but also provides end-to-end audio generation capabilities. To achieve fluent and high-quality interaction across modalities without compromising the capabilities of any modality, we prioritized optimizing three key aspects. First, we establish a comprehensive data cleaning and synthesis pipeline for multimodal data, obtaining about 500B high-quality data (text, audio, and vision). Second, an audio-tokenizer (Baichuan-Audio-Tokenizer) has been designed to capture both semantic and acoustic information from audio, enabling seamless integration and enhanced compatibility with MLLM. Lastly, we designed a multi-stage training strategy that progressively integrates multimodal alignment and multitask fine-tuning, ensuring effective synergy across all modalities. Baichuan-Omni-1.5 leads contemporary models (including GPT4o-mini and MiniCPM-o 2.6) in terms of comprehensive omni-modal capabilities. Notably, it achieves results comparable to leading models such as Qwen2-VL-72B across various multimodal medical benchmarks. |
|
2025-01-28T00:00:00 | 2501.14912 | Feasible Learning | [
"Juan Ramirez",
"Ignacio Hounie",
"Juan Elenter",
"Jose Gallego-Posada",
"Meraj Hashemizadeh",
"Alejandro Ribeiro",
"Simon Lacoste-Julien"
] | We introduce Feasible Learning (FL), a sample-centric learning paradigm where models are trained by solving a feasibility problem that bounds the loss for each training sample. In contrast to the ubiquitous Empirical Risk Minimization (ERM) framework, which optimizes for average performance, FL demands satisfactory performance on every individual data point. Since any model that meets the prescribed performance threshold is a valid FL solution, the choice of optimization algorithm and its dynamics play a crucial role in shaping the properties of the resulting solutions. In particular, we study a primal-dual approach which dynamically re-weights the importance of each sample during training. To address the challenge of setting a meaningful threshold in practice, we introduce a relaxation of FL that incorporates slack variables of minimal norm. Our empirical analysis, spanning image classification, age regression, and preference optimization in large language models, demonstrates that models trained via FL can learn from data while displaying improved tail behavior compared to ERM, with only a marginal impact on average performance. |
|
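A minimal sketch of the primal-dual re-weighting described in the Feasible Learning abstract above; the function name, cross-entropy loss, and per-batch multiplier handling are simplifying assumptions (in practice one non-negative multiplier is kept per training sample).

```python
import torch
import torch.nn.functional as F

def primal_dual_step(model, optimizer, x, y, lam, eps=0.1, dual_lr=0.01):
    """One Feasible Learning style primal-dual step: each sample i carries a
    multiplier lam[i] for the constraint loss_i <= eps. The primal step minimizes
    the multiplier-weighted loss; the dual step ascends on the constraint
    violation. Illustrative sketch only; the paper also studies a slack-variable
    relaxation for choosing the threshold."""
    losses = F.cross_entropy(model(x), y, reduction="none")   # per-sample losses
    optimizer.zero_grad()
    (lam * losses).mean().backward()                           # primal update
    optimizer.step()
    with torch.no_grad():                                      # dual (multiplier) update
        lam += dual_lr * (losses.detach() - eps)
        lam.clamp_(min=0.0)
    return lam
```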
2025-01-28T00:00:00 | 2501.16295 | Mixture-of-Mamba: Enhancing Multi-Modal State-Space Models with Modality-Aware Sparsity | [
"Weixin Liang",
"Junhong Shen",
"Genghan Zhang",
"Ning Dong",
"Luke Zettlemoyer",
"Lili Yu"
] | https://github.com/Weixin-Liang/Mixture-of-Mamba | State Space Models (SSMs) have emerged as efficient alternatives to Transformers for sequential modeling, but their inability to leverage modality-specific features limits their performance in multi-modal pretraining. Here, we propose Mixture-of-Mamba, a novel SSM architecture that introduces modality-aware sparsity through modality-specific parameterization of the Mamba block. Building on Mixture-of-Transformers (W. Liang et al. arXiv:2411.04996; 2024), we extend the benefits of modality-aware sparsity to SSMs while preserving their computational efficiency. We evaluate Mixture-of-Mamba across three multi-modal pretraining settings: Transfusion (interleaved text and continuous image tokens with diffusion loss), Chameleon (interleaved text and discrete image tokens), and an extended three-modality framework incorporating speech. Mixture-of-Mamba consistently reaches the same loss values at earlier training steps with significantly reduced computational costs. In the Transfusion setting, Mixture-of-Mamba achieves equivalent image loss using only 34.76% of the training FLOPs at the 1.4B scale. In the Chameleon setting, Mixture-of-Mamba reaches similar image loss with just 42.50% of the FLOPs at the 1.4B scale, and similar text loss with just 65.40% of the FLOPs. In the three-modality setting, MoM matches speech loss at 24.80% of the FLOPs at the 1.4B scale. Our ablation study highlights the synergistic effects of decoupling projection components, where joint decoupling yields greater gains than individual modifications. These results establish modality-aware sparsity as a versatile and effective design principle, extending its impact from Transformers to SSMs and setting new benchmarks in multi-modal pretraining. Our code can be accessed at https://github.com/Weixin-Liang/Mixture-of-Mamba |
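The modality-aware parameterization described above amounts to routing each token to modality-specific weights; the standalone module below only illustrates that routing pattern (names and sizes are assumptions), whereas the paper applies it inside the Mamba block's projections.

```python
import torch
import torch.nn as nn

class ModalityAwareLinear(nn.Module):
    """A projection with one weight set per modality; each token is routed to
    the weights matching its modality id (illustrative routing sketch)."""
    def __init__(self, d_in, d_out, num_modalities=2):
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(num_modalities))

    def forward(self, x, modality_ids):
        # x: (num_tokens, d_in); modality_ids: (num_tokens,) in [0, num_modalities)
        out = x.new_empty(x.size(0), self.projs[0].out_features)
        for m, proj in enumerate(self.projs):
            mask = modality_ids == m
            if mask.any():
                out[mask] = proj(x[mask])
        return out

layer = ModalityAwareLinear(512, 1024)
tokens, ids = torch.randn(8, 512), torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(layer(tokens, ids).shape)   # torch.Size([8, 1024])
```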
2025-01-28T00:00:00 | 2501.16142 | Towards General-Purpose Model-Free Reinforcement Learning | [
"Scott Fujimoto",
"Pierluca D'Oro",
"Amy Zhang",
"Yuandong Tian",
"Michael Rabbat"
] | Reinforcement learning (RL) promises a framework for near-universal problem-solving. In practice however, RL algorithms are often tailored to specific benchmarks, relying on carefully tuned hyperparameters and algorithmic choices. Recently, powerful model-based RL methods have shown impressive general results across benchmarks but come at the cost of increased complexity and slow run times, limiting their broader applicability. In this paper, we attempt to find a unifying model-free deep RL algorithm that can address a diverse class of domains and problem settings. To achieve this, we leverage model-based representations that approximately linearize the value function, taking advantage of the denser task objectives used by model-based RL while avoiding the costs associated with planning or simulated trajectories. We evaluate our algorithm, MR.Q, on a variety of common RL benchmarks with a single set of hyperparameters and show a competitive performance against domain-specific and general baselines, providing a concrete step towards building general-purpose model-free deep RL algorithms. |
|
2025-01-28T00:00:00 | 2501.15383 | Qwen2.5-1M Technical Report | [
"An Yang",
"Bowen Yu",
"Chengyuan Li",
"Dayiheng Liu",
"Fei Huang",
"Haoyan Huang",
"Jiandong Jiang",
"Jianhong Tu",
"Jianwei Zhang",
"Jingren Zhou",
"Junyang Lin",
"Kai Dang",
"Kexin Yang",
"Le Yu",
"Mei Li",
"Minmin Sun",
"Qin Zhu",
"Rui Men",
"Tao He",
"Weijia Xu",
"Wenbiao Yin",
"Wenyuan Yu",
"Xiafei Qiu",
"Xingzhang Ren",
"Xinlong Yang",
"Yong Li",
"Zhiying Xu",
"Zipeng Zhang"
] | We introduce Qwen2.5-1M, a series of models that extend the context length to 1 million tokens. Compared to the previous 128K version, the Qwen2.5-1M series have significantly enhanced long-context capabilities through long-context pre-training and post-training. Key techniques such as long data synthesis, progressive pre-training, and multi-stage supervised fine-tuning are employed to effectively enhance long-context performance while reducing training costs. To promote the use of long-context models among a broader user base, we present and open-source our inference framework. This framework includes a length extrapolation method that can expand the model context lengths by at least four times, or even more, without additional training. To reduce inference costs, we implement a sparse attention method along with chunked prefill optimization for deployment scenarios and a sparsity refinement method to improve precision. Additionally, we detail our optimizations in the inference engine, including kernel optimization, pipeline parallelism, and scheduling optimization, which significantly enhance overall inference performance. By leveraging our inference framework, the Qwen2.5-1M models achieve a remarkable 3x to 7x prefill speedup in scenarios with 1 million tokens of context. This framework provides an efficient and powerful solution for developing applications that require long-context processing using open-source models. The Qwen2.5-1M series currently includes the open-source models Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, as well as the API-accessed model Qwen2.5-Turbo. Evaluations show that Qwen2.5-1M models have been greatly improved in long-context tasks without compromising performance in short-context scenarios. Specifically, the Qwen2.5-14B-Instruct-1M model significantly outperforms GPT-4o-mini in long-context tasks and supports contexts eight times longer. |
|
2025-01-28T00:00:00 | 2501.15369 | iFormer: Integrating ConvNet and Transformer for Mobile Application | [
"Chuanyang Zheng"
] | We present a new family of mobile hybrid vision networks, called iFormer, with a focus on optimizing latency and accuracy on mobile applications. iFormer effectively integrates the fast local representation capacity of convolution with the efficient global modeling ability of self-attention. The local interactions are derived from transforming a standard convolutional network, i.e., ConvNeXt, to design a more lightweight mobile network. Our newly introduced mobile modulation attention removes memory-intensive operations in MHA and employs an efficient modulation mechanism to boost dynamic global representational capacity. We conduct comprehensive experiments demonstrating that iFormer outperforms existing lightweight networks across various tasks. Notably, iFormer achieves an impressive Top-1 accuracy of 80.4% on ImageNet-1k with a latency of only 1.10 ms on an iPhone 13, surpassing the recently proposed MobileNetV4 under similar latency constraints. Additionally, our method shows significant improvements in downstream tasks, including COCO object detection, instance segmentation, and ADE20k semantic segmentation, while still maintaining low latency on mobile devices for high-resolution inputs in these scenarios. |
|
2025-01-28T00:00:00 | 2501.15570 | ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer | [
"Lin Yueyu",
"Li Zhiyuan",
"Peter Yue",
"Liu Xiao"
] | https://github.com/yynil/RWKVInside | As is known, hybrid quadratic and subquadratic attention models in multi-head architectures have surpassed both Transformer and Linear RNN models, with these works primarily focusing on reducing KV complexity and improving efficiency. For further research on expressiveness, we introduce our series of models distilled from Qwen 2.5, based on pure native RWKV-7 attention, which aims to make RNNs more expressive and demonstrates state tracking ability beyond transformers. We work with QRWK 32B based on the RWKV-6 architecture, another approach that reduces the entire knowledge processing time to just 8 hours using 16 AMD MI300X GPUs while maintaining Qwen 2.5's performance. In fact, the distillation process can utilize any LLM, not just Qwen, and enables knowledge transfer from larger LLMs to smaller ones with fewer tokens. We will explain the detailed process and share our insights on building more powerful foundation models. Please note that this is an ongoing work that will be updated continuously. The model checkpoints and source code are available at https://github.com/yynil/RWKVInside and https://huggingface.co/RWKV-Red-Team/ARWKV-7B-Preview-0.1. |
2025-01-28T00:00:00 | 2501.15907 | Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for Speech Generation | [
"Haorui He",
"Zengqiang Shang",
"Chaoren Wang",
"Xuyuan Li",
"Yicheng Gu",
"Hua Hua",
"Liwei Liu",
"Chen Yang",
"Jiaqi Li",
"Peiyang Shi",
"Yuancheng Wang",
"Kai Chen",
"Pengyuan Zhang",
"Zhizheng Wu"
] | Recent advancements in speech generation have been driven by the large-scale training datasets. However, current models fall short of capturing the spontaneity and variability inherent in real-world human speech, due to their reliance on audiobook datasets limited to formal read-aloud speech styles. To bridge this gap, we introduce Emilia-Pipe, an open-source preprocessing pipeline to extract high-quality training data from valuable yet underexplored in-the-wild data that capture spontaneous human speech in real-world contexts. By leveraging Emilia-Pipe, we construct Emilia, the first multilingual speech generation dataset derived from in-the-wild speech data. This dataset comprises over 101k hours of speech across six languages: English, Chinese, German, French, Japanese, and Korean. Besides, we expand Emilia to Emilia-Large, a dataset exceeding 216k hours, making it the largest open-source speech generation dataset available. Extensive experiments demonstrate that Emilia significantly outperforms traditional audiobook datasets in generating spontaneous and human-like speech, showcasing superior performance in capturing diverse speaker timbre and speaking styles of real-world human speech. Furthermore, this work underscores the importance of scaling dataset size to advance speech generation research and validates the effectiveness of Emilia for both multilingual and crosslingual speech generation. |
|
2025-01-28T00:00:00 | 2501.12370 | Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models | [
"Samira Abnar",
"Harshay Shah",
"Dan Busbridge",
"Alaaeldin Mohamed Elnouby Ali",
"Josh Susskind",
"Vimal Thilak"
] | Scaling the capacity of language models has consistently proven to be a reliable approach for improving performance and unlocking new capabilities. Capacity can be primarily defined by two dimensions: the number of model parameters and the compute per example. While scaling typically involves increasing both, the precise interplay between these factors and their combined contribution to overall capacity remains not fully understood. We explore this relationship in the context of sparse Mixture-of-Experts (MoEs), which allow scaling the number of parameters without proportionally increasing the FLOPs per example. We investigate how varying the sparsity level, i.e., the fraction of inactive parameters, impacts the model's performance during pretraining and downstream few-shot evaluation. We find that under different constraints (e.g., parameter size and total training compute), there is an optimal level of sparsity that improves both training efficiency and model performance. These results provide a better understanding of the impact of sparsity in scaling laws for MoEs and complement existing works in this area, offering insights for designing more efficient architectures. |
|
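The sparsity level studied above, i.e. the fraction of inactive parameters, can be made concrete with a small accounting sketch; the parameter counts are arbitrary examples and router parameters are ignored.

```python
def moe_sparsity(num_experts, top_k, expert_params, shared_params):
    """Fraction of parameters that are inactive for a given token in a top-k
    routed MoE (a simplified accounting that ignores the router)."""
    total = shared_params + num_experts * expert_params
    active = shared_params + top_k * expert_params
    return 1.0 - active / total

# e.g. 64 equally sized experts with top-2 routing
print(moe_sparsity(num_experts=64, top_k=2, expert_params=10e6, shared_params=100e6))
```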
2025-01-28T00:00:00 | 2501.15427 | OpenCharacter: Training Customizable Role-Playing LLMs with Large-Scale Synthetic Personas | [
"Xiaoyang Wang",
"Hongming Zhang",
"Tao Ge",
"Wenhao Yu",
"Dian Yu",
"Dong Yu"
] | Customizable role-playing in large language models (LLMs), also known as character generalization, is gaining increasing attention for its versatility and cost-efficiency in developing and deploying role-playing dialogue agents. This study explores a large-scale data synthesis approach to equip LLMs with character generalization capabilities. We begin by synthesizing large-scale character profiles using personas from Persona Hub and then explore two strategies: response rewriting and response generation, to create character-aligned instructional responses. To validate the effectiveness of our synthetic instruction tuning data for character generalization, we perform supervised fine-tuning (SFT) using the LLaMA-3 8B model. Our best-performing model strengthens the original LLaMA-3 8B Instruct model and achieves performance comparable to GPT-4o models on role-playing dialogue. We release our synthetic characters and instruction-tuning dialogues to support public research. |
|
2025-01-28T00:00:00 | 2501.16273 | Return of the Encoder: Maximizing Parameter Efficiency for SLMs | [
"Mohamed Elfeki",
"Rui Liu",
"Chad Voegele"
] | The dominance of large decoder-only language models has overshadowed encoder-decoder architectures, despite their fundamental efficiency advantages in sequence processing. For small language models (SLMs) - those with 1 billion parameters or fewer - our systematic analysis across GPU, CPU, and NPU platforms reveals that encoder-decoder architectures achieve 47% lower first-token latency and 4.7x higher throughput compared to decoder-only models on edge devices. These gains may be attributed to encoder-decoder's one-time input processing and efficient separation of understanding and generation phases. We introduce a novel knowledge distillation framework that enables encoder-decoder models to leverage capabilities from large scalable decoder-only teachers while preserving their architectural advantages, achieving up to 6 average performance points improvement across diverse tasks, with significant gains in asymmetric sequence tasks where input and output distributions can benefit from different processing approaches. When combined with modern advances like Rotary Positional Embeddings (RoPE) and Vision encoders, our systematic investigation demonstrates that encoder-decoder architectures provide a more practical path toward deploying capable language models in resource-constrained environments. Our findings challenge the prevailing trend toward decoder-only scaling, showing that architectural choices become increasingly crucial as parameter budgets decrease, particularly for on-device and edge deployments where computational efficiency is paramount. |
|
2025-01-28T00:00:00 | 2403.09193 | Are Vision Language Models Texture or Shape Biased and Can We Steer Them? | [
"Paul Gavrikov",
"Jovita Lukasik",
"Steffen Jung",
"Robert Geirhos",
"Bianca Lamm",
"Muhammad Jehanzeb Mirza",
"Margret Keuper",
"Janis Keuper"
] | Vision language models (VLMs) have drastically changed the computer vision model landscape in only a few years, opening an exciting array of new applications from zero-shot image classification, over to image captioning, and visual question answering. Unlike pure vision models, they offer an intuitive way to access visual content through language prompting. The wide applicability of such models encourages us to ask whether they also align with human vision - specifically, how far they adopt human-induced visual biases through multimodal fusion, or whether they simply inherit biases from pure vision models. One important visual bias is the texture vs. shape bias, or the dominance of local over global information. In this paper, we study this bias in a wide range of popular VLMs. Interestingly, we find that VLMs are often more shape-biased than their vision encoders, indicating that visual biases are modulated to some extent through text in multimodal models. If text does indeed influence visual biases, this suggests that we may be able to steer visual biases not just through visual input but also through language: a hypothesis that we confirm through extensive experiments. For instance, we are able to steer shape bias from as low as 49% to as high as 72% through prompting alone. For now, the strong human bias towards shape (96%) remains out of reach for all tested VLMs. |
|
2025-01-28T00:00:00 | 2501.15420 | Visual Generation Without Guidance | [
"Huayu Chen",
"Kai Jiang",
"Kaiwen Zheng",
"Jianfei Chen",
"Hang Su",
"Jun Zhu"
] | https://github.com/thu-ml/GFT | Classifier-Free Guidance (CFG) has been a default technique in various visual generative models, yet it requires inference from both conditional and unconditional models during sampling. We propose to build visual models that are free from guided sampling. The resulting algorithm, Guidance-Free Training (GFT), matches the performance of CFG while reducing sampling to a single model, halving the computational cost. Unlike previous distillation-based approaches that rely on pretrained CFG networks, GFT enables training directly from scratch. GFT is simple to implement. It retains the same maximum likelihood objective as CFG and differs mainly in the parameterization of conditional models. Implementing GFT requires only minimal modifications to existing codebases, as most design choices and hyperparameters are directly inherited from CFG. Our extensive experiments across five distinct visual models demonstrate the effectiveness and versatility of GFT. Across domains of diffusion, autoregressive, and masked-prediction modeling, GFT consistently achieves comparable or even lower FID scores, with similar diversity-fidelity trade-offs compared with CFG baselines, all while being guidance-free. Code will be available at https://github.com/thu-ml/GFT. |
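The inference-cost difference that motivates Guidance-Free Training can be shown schematically; the `model(x_t, t, cond)` signature is a placeholder, and the sketch only contrasts per-step cost rather than reproducing GFT's training parameterization.

```python
def cfg_step(model, x_t, t, cond, w=3.0):
    """Classifier-Free Guidance: two forward passes per denoising step,
    eps = eps_uncond + w * (eps_cond - eps_uncond)."""
    eps_uncond = model(x_t, t, None)
    eps_cond = model(x_t, t, cond)
    return eps_uncond + w * (eps_cond - eps_uncond)

def guidance_free_step(model, x_t, t, cond):
    """A guidance-free model (as trained by GFT) produces the guided prediction
    with a single conditional pass, halving per-step inference cost."""
    return model(x_t, t, cond)
```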
2025-01-28T00:00:00 | 2501.14723 | CodeMonkeys: Scaling Test-Time Compute for Software Engineering | [
"Ryan Ehrlich",
"Bradley Brown",
"Jordan Juravsky",
"Ronald Clark",
"Christopher Ré",
"Azalia Mirhoseini"
] | Scaling test-time compute is a promising axis for improving LLM capabilities. However, test-time compute can be scaled in a variety of ways, and effectively combining different approaches remains an active area of research. Here, we explore this problem in the context of solving real-world GitHub issues from the SWE-bench dataset. Our system, named CodeMonkeys, allows models to iteratively edit a codebase by jointly generating and running a testing script alongside their draft edit. We sample many of these multi-turn trajectories for every issue to generate a collection of candidate edits. This approach lets us scale "serial" test-time compute by increasing the number of iterations per trajectory and "parallel" test-time compute by increasing the number of trajectories per problem. With parallel scaling, we can amortize up-front costs across multiple downstream samples, allowing us to identify relevant codebase context using the simple method of letting an LLM read every file. In order to select between candidate edits, we combine voting using model-generated tests with a final multi-turn trajectory dedicated to selection. Overall, CodeMonkeys resolves 57.4% of issues from SWE-bench Verified using a budget of approximately 2300 USD. Our selection method can also be used to combine candidates from different sources. Selecting over an ensemble of edits from existing top SWE-bench Verified submissions obtains a score of 66.2% and outperforms the best member of the ensemble on its own. We fully release our code and data at https://scalingintelligence.stanford.edu/pubs/codemonkeys. |
|
2025-01-29T00:00:00 | 2501.15747 | IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding | [
"Sankalp KJ",
"Ashutosh Kumar",
"Laxmaan Balaji",
"Nikunj Kotecha",
"Vinija Jain",
"Aman Chadha",
"Sreyoshi Bhaduri"
] | Known by more than 1.5 billion people in the Indian subcontinent, Indic languages present unique challenges and opportunities for natural language processing (NLP) research due to their rich cultural heritage, linguistic diversity, and complex structures. IndicMMLU-Pro is a comprehensive benchmark designed to evaluate Large Language Models (LLMs) across Indic languages, building upon the MMLU Pro (Massive Multitask Language Understanding) framework. Covering major languages such as Hindi, Bengali, Gujarati, Marathi, Kannada, Punjabi, Tamil, Telugu, and Urdu, our benchmark addresses the unique challenges and opportunities presented by the linguistic diversity of the Indian subcontinent. This benchmark encompasses a wide range of tasks in language comprehension, reasoning, and generation, meticulously crafted to capture the intricacies of Indian languages. IndicMMLU-Pro provides a standardized evaluation framework to push the research boundaries in Indic language AI, facilitating the development of more accurate, efficient, and culturally sensitive models. This paper outlines the benchmarks' design principles, task taxonomy, and data collection methodology, and presents baseline results from state-of-the-art multilingual models. |
|
2025-01-29T00:00:00 | 2501.16372 | Low-Rank Adapters Meet Neural Architecture Search for LLM Compression | [
"J. Pablo Muñoz",
"Jinjie Yuan",
"Nilesh Jain"
] | https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning | The rapid expansion of Large Language Models (LLMs) has posed significant challenges regarding the computational resources required for fine-tuning and deployment. Recent advancements in low-rank adapters have demonstrated their efficacy in parameter-efficient fine-tuning (PEFT) of these models. This retrospective paper comprehensively discusses innovative approaches that synergize low-rank representations with Neural Architecture Search (NAS) techniques, particularly weight-sharing super-networks. Robust solutions for compressing and fine-tuning large pre-trained models are developed by integrating these methodologies. Our analysis highlights the potential of these combined strategies to democratize the use of LLMs, making them more accessible for deployment in resource-constrained environments. The resulting models exhibit reduced memory footprints and faster inference times, paving the way for more practical and scalable applications of LLMs. Models and code are available at https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning. |
2025-01-29T00:00:00 | 2501.16975 | Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling | [
"Hongzhi Huang",
"Defa Zhu",
"Banggu Wu",
"Yutao Zeng",
"Ya Wang",
"Qiyang Min",
"Xun Zhou"
] | Tokenization is a fundamental component of large language models (LLMs), yet its influence on model scaling and performance is not fully explored. In this paper, we introduce Over-Tokenized Transformers, a novel framework that decouples input and output vocabularies to improve language modeling performance. Specifically, our approach scales up input vocabularies to leverage multi-gram tokens. Through extensive experiments, we uncover a log-linear relationship between input vocabulary size and training loss, demonstrating that larger input vocabularies consistently enhance model performance, regardless of model size. Using a large input vocabulary, we achieve performance comparable to double-sized baselines with no additional cost. Our findings highlight the importance of tokenization in scaling laws and provide practical insight for tokenizer design, paving the way for more efficient and powerful LLMs. |
|
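One way to picture decoupled, scaled-up input vocabularies is an input embedding that adds hashed 2-gram embeddings to ordinary token embeddings while leaving the output vocabulary untouched; the hashing scheme and table size below are illustrative assumptions rather than the paper's exact construction.

```python
import torch
import torch.nn as nn

class OverTokenizedEmbedding(nn.Module):
    """Input embedding that augments unigram token embeddings with hashed
    2-gram embeddings, emulating a much larger input vocabulary
    (illustrative sketch with an assumed hash and bucket count)."""
    def __init__(self, vocab_size, d_model, bigram_buckets=8 * 32000):
        super().__init__()
        self.unigram = nn.Embedding(vocab_size, d_model)
        self.bigram = nn.Embedding(bigram_buckets, d_model)
        self.bigram_buckets = bigram_buckets

    def forward(self, token_ids):                       # (batch, seq)
        prev = torch.roll(token_ids, shifts=1, dims=1)
        prev[:, 0] = 0                                  # no left context at position 0
        bigram_ids = (prev * 1_000_003 + token_ids) % self.bigram_buckets
        return self.unigram(token_ids) + self.bigram(bigram_ids)

emb = OverTokenizedEmbedding(vocab_size=32000, d_model=256)
print(emb(torch.randint(0, 32000, (2, 16))).shape)      # torch.Size([2, 16, 256])
```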
2025-01-29T00:00:00 | 2501.16496 | Open Problems in Mechanistic Interpretability | [
"Lee Sharkey",
"Bilal Chughtai",
"Joshua Batson",
"Jack Lindsey",
"Jeff Wu",
"Lucius Bushnaq",
"Nicholas Goldowsky-Dill",
"Stefan Heimersheim",
"Alejandro Ortega",
"Joseph Bloom",
"Stella Biderman",
"Adria Garriga-Alonso",
"Arthur Conmy",
"Neel Nanda",
"Jessica Rumbelow",
"Martin Wattenberg",
"Nandi Schoots",
"Joseph Miller",
"Eric J. Michaud",
"Stephen Casper",
"Max Tegmark",
"William Saunders",
"David Bau",
"Eric Todd",
"Atticus Geiger",
"Mor Geva",
"Jesse Hoogland",
"Daniel Murfet",
"Tom McGrath"
] | Mechanistic interpretability aims to understand the computational mechanisms underlying neural networks' capabilities in order to accomplish concrete scientific and engineering goals. Progress in this field thus promises to provide greater assurance over AI system behavior and shed light on exciting scientific questions about the nature of intelligence. Despite recent progress toward these goals, there are many open problems in the field that require solutions before many scientific and practical benefits can be realized: Our methods require both conceptual and practical improvements to reveal deeper insights; we must figure out how best to apply our methods in pursuit of specific goals; and the field must grapple with socio-technical challenges that influence and are influenced by our work. This forward-facing review discusses the current frontier of mechanistic interpretability and the open problems that the field may benefit from prioritizing. |
|
2025-01-29T00:00:00 | 2501.17161 | SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training | [
"Tianzhe Chu",
"Yuexiang Zhai",
"Jihan Yang",
"Shengbang Tong",
"Saining Xie",
"Dale Schuurmans",
"Quoc V. Le",
"Sergey Levine",
"Yi Ma"
] | Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used post-training techniques for foundation models. However, their roles in enhancing model generalization capabilities remain unclear. This paper studies the difference between SFT and RL on generalization and memorization, focusing on text-based rule variants and visual variants. We introduce GeneralPoints, an arithmetic reasoning card game, and adopt V-IRL, a real-world navigation environment, to assess how models trained with SFT and RL generalize to unseen variants in both textual and visual domains. We show that RL, especially when trained with an outcome-based reward, generalizes across both rule-based textual and visual variants. SFT, in contrast, tends to memorize training data and struggles to generalize to out-of-distribution scenarios. Further analysis reveals that RL improves the model's underlying visual recognition capabilities, contributing to its enhanced generalization in the visual domain. Despite RL's superior generalization, we show that SFT remains essential for effective RL training; SFT stabilizes the model's output format, enabling subsequent RL to achieve its performance gains. These findings demonstrate the capability of RL for acquiring generalizable knowledge in complex, multi-modal tasks. |
|
2025-01-29T00:00:00 | 2501.17116 | Optimizing Large Language Model Training Using FP4 Quantization | [
"Ruizhe Wang",
"Yeyun Gong",
"Xiao Liu",
"Guoshuai Zhao",
"Ziyue Yang",
"Baining Guo",
"Zhengjun Zha",
"Peng Cheng"
] | The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training. |
|
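A toy illustration of the two ingredients named in the FP4-training abstract above: simulated FP4 (E2M1) quantization with a straight-through-style gradient, plus clamping of outliers at a high quantile before quantization. The grid values, clamping quantile, and straight-through gradient are stand-ins; the paper's differentiable estimator and its compensation strategy are more involved.

```python
import torch

# Representable magnitudes of an FP4 (E2M1) format, as commonly listed.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quant_fp4(x: torch.Tensor, clamp_q: float = 0.999) -> torch.Tensor:
    """Simulated FP4 quantization with outlier clamping and a straight-through gradient."""
    # 1) Clamp outliers at a high quantile so the scale is not dominated by them.
    thresh = torch.quantile(x.detach().abs().flatten(), clamp_q).item()
    x_clamped = x.clamp(-thresh, thresh)

    # 2) Scale into the FP4 range, snap each value to the nearest grid point, scale back.
    scale = thresh / FP4_GRID.max().item()
    idx = ((x_clamped.detach().abs() / scale).unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)
    q = torch.sign(x_clamped) * FP4_GRID[idx] * scale

    # 3) Straight-through estimator: forward pass uses q, backward passes gradients to x.
    return x + (q - x).detach()

w = torch.randn(4, 4, requires_grad=True)
fake_quant_fp4(w).sum().backward()
print(w.grad)  # gradients flow despite the non-differentiable rounding step
```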
2025-01-29T00:00:00 | 2501.16764 | DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation | [
"Chenguo Lin",
"Panwang Pan",
"Bangbang Yang",
"Zeming Li",
"Yadong Mu"
] | Recent advancements in 3D content generation from text or a single image struggle with limited high-quality 3D datasets and inconsistency from 2D multi-view generation. We introduce DiffSplat, a novel 3D generative framework that natively generates 3D Gaussian splats by taming large-scale text-to-image diffusion models. It differs from previous 3D generative models by effectively utilizing web-scale 2D priors while maintaining 3D consistency in a unified model. To bootstrap the training, a lightweight reconstruction model is proposed to instantly produce multi-view Gaussian splat grids for scalable dataset curation. In conjunction with the regular diffusion loss on these grids, a 3D rendering loss is introduced to facilitate 3D coherence across arbitrary views. The compatibility with image diffusion models enables seamless adaptions of numerous techniques for image generation to the 3D realm. Extensive experiments reveal the superiority of DiffSplat in text- and image-conditioned generation tasks and downstream applications. Thorough ablation studies validate the efficacy of each critical design choice and provide insights into the underlying mechanism. |
|
2025-01-29T00:00:00 | 2501.17117 | Histoires Morales: A French Dataset for Assessing Moral Alignment | [
"Thibaud Leteno",
"Irina Proskurina",
"Antoine Gourru",
"Julien Velcin",
"Charlotte Laclau",
"Guillaume Metzler",
"Christophe Gravier"
] | Aligning language models with human values is crucial, especially as they become more integrated into everyday life. While models are often adapted to user preferences, it is equally important to ensure they align with moral norms and behaviours in real-world social situations. Despite significant progress in languages like English and Chinese, French has seen little attention in this area, leaving a gap in understanding how LLMs handle moral reasoning in this language. To address this gap, we introduce Histoires Morales, a French dataset derived from Moral Stories, created through translation and subsequently refined with the assistance of native speakers to guarantee grammatical accuracy and adaptation to the French cultural context. We also rely on annotations of the moral values within the dataset to ensure their alignment with French norms. Histoires Morales covers a wide range of social situations, including differences in tipping practices, expressions of honesty in relationships, and responsibilities toward animals. To foster future research, we also conduct preliminary experiments on the alignment of multilingual models on French and English data and the robustness of the alignment. We find that while LLMs are generally aligned with human moral norms by default, they can be easily influenced with user-preference optimization for both moral and immoral data. |
|
2025-01-29T00:00:00 | 2501.14417 | DeepFlow: Serverless Large Language Model Serving at Scale | [
"Junhao Hu",
"Jiang Xu",
"Zhixia Liu",
"Yulong He",
"Yuetao Chen",
"Hao Xu",
"Jiang Liu",
"Baoquan Zhang",
"Shining Wan",
"Gengyuan Dan",
"Zhiyu Dong",
"Zhihao Ren",
"Jie Meng",
"Chao He",
"Changhong Liu",
"Tao Xie",
"Dayun Lin",
"Qin Zhang",
"Yue Yu",
"Hao Feng",
"Xusheng Chen",
"Yizhou Shan"
] | This paper introduces DeepFlow, a scalable and serverless AI platform designed to efficiently serve large language models (LLMs) at scale in cloud environments. DeepFlow addresses key challenges such as resource allocation, serving efficiency, and cold start latencies through four main design components. First, it uses a simple serverless abstraction called the request-job-task model, which helps manage AI workloads across post-training and model serving tasks. Second, it builds an in-house serving engine FlowServe using a microkernel-inspired design, NPU-centric execution, and SPMD-based parallelism to optimize LLM serving. The system also includes novel scheduling policies tailored for both PD-disaggregated and PD-colocated configurations. With optimizations like pre-warmed pods, DRAM pre-loading, and NPU-fork, DeepFlow can scale up to 64 instances in seconds. DeepFlow has been in production for over a year, operating on a large Ascend NPU cluster and providing industry-standard APIs for fine-tuning, agent serving, and model serving to our customers. |
|
2025-01-29T00:00:00 | 2501.16937 | TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models | [
"Makoto Shing",
"Kou Misaki",
"Han Bao",
"Sho Yokoi",
"Takuya Akiba"
] | Causal language models have demonstrated remarkable capabilities, but their size poses significant challenges for deployment in resource-constrained environments. Knowledge distillation, a widely-used technique for transferring knowledge from a large teacher model to a small student model, presents a promising approach for model compression. A significant remaining issue lies in the major differences between teacher and student models, namely the substantial capacity gap, mode averaging, and mode collapse, which pose barriers during distillation. To address these issues, we introduce Temporally Adaptive Interpolated Distillation (TAID), a novel knowledge distillation approach that dynamically interpolates student and teacher distributions through an adaptive intermediate distribution, gradually shifting from the student's initial distribution towards the teacher's distribution. We provide a theoretical analysis demonstrating TAID's ability to prevent mode collapse and empirically show its effectiveness in addressing the capacity gap while balancing mode averaging and mode collapse. Our comprehensive experiments demonstrate TAID's superior performance across various model sizes and architectures in both instruction tuning and pre-training scenarios. Furthermore, we showcase TAID's practical impact by developing two state-of-the-art compact foundation models: TAID-LLM-1.5B for language tasks and TAID-VLM-2B for vision-language tasks. These results demonstrate TAID's effectiveness in creating high-performing and efficient models, advancing the development of more accessible AI technologies. |
|
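A hedged sketch of the interpolation idea summarized in the TAID abstract above: the distillation target is a mixture of the (detached) student distribution and the teacher distribution, with the mixing weight moving from 0 toward 1 over training. The linear schedule and plain KL objective are simplifying assumptions; the paper defines the actual interpolation and its adaptive schedule.

```python
import torch
import torch.nn.functional as F

def taid_style_loss(student_logits, teacher_logits, progress: float):
    """KL(target || student) against an interpolated target distribution.

    progress in [0, 1]: 0 = pure (detached) student, 1 = pure teacher.
    """
    lam = progress  # assumption: simple linear schedule instead of TAID's adaptive one
    p_student = F.softmax(student_logits, dim=-1)
    p_teacher = F.softmax(teacher_logits, dim=-1)
    target = (1.0 - lam) * p_student.detach() + lam * p_teacher

    log_q = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_q, target, reduction="batchmean")

s = torch.randn(8, 32_000, requires_grad=True)
t = torch.randn(8, 32_000)
for step, total in [(0, 100), (50, 100), (100, 100)]:
    print(step, float(taid_style_loss(s, t, progress=step / total)))
```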
2025-01-30T00:00:00 | 2501.17703 | Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate | [
"Yubo Wang",
"Xiang Yue",
"Wenhu Chen"
] | Supervised Fine-Tuning (SFT) is commonly used to train language models to imitate annotated responses for given instructions. In this paper, we challenge this paradigm and propose Critique Fine-Tuning (CFT), a strategy where models learn to critique noisy responses rather than simply imitate correct ones. Inspired by human learning processes that emphasize critical thinking, CFT encourages deeper analysis and nuanced understanding, traits often overlooked by standard SFT. To validate the effectiveness of CFT, we construct a 50K-sample dataset from WebInstruct, using GPT-4o as the teacher to generate critiques in the form of (input=[query; noisy response], output=critique). CFT on this dataset yields a consistent 4-10% improvement over SFT on six math benchmarks with different base models like Qwen2.5, Qwen2.5-Math and DeepSeek-Math. We further expand to MetaMath and NuminaMath datasets and observe similar gains over SFT. Notably, our Qwen2.5-Math-CFT model, trained on just 50K samples, matches or outperforms competitive models such as AceMath and Qwen2.5-Math-Instruct on most benchmarks, both of which use over 2M samples. Ablation studies show that CFT is robust to the source of the noisy responses and to the teacher critique model. Through these findings, we argue that critique-based training offers a more effective alternative to advance the reasoning of language models. |
|
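A small sketch of how the critique-training pairs described in the CFT abstract above could be assembled, following the (input=[query; noisy response], output=critique) format it states. The prompt template, field names, and the `ask_teacher` helper are placeholders; the released dataset defines the actual schema.

```python
# Sketch: building Critique Fine-Tuning (CFT) samples in the
# (input=[query; noisy response], output=critique) format from the abstract.
# `ask_teacher` is a hypothetical wrapper around a teacher model such as GPT-4o.

def ask_teacher(prompt: str) -> str:
    raise NotImplementedError("placeholder for a call to the teacher model")

def build_cft_sample(query: str, noisy_response: str) -> dict:
    sample_input = (
        f"Question:\n{query}\n\n"
        f"Candidate solution:\n{noisy_response}\n\n"
        "Critique the candidate solution: point out any errors and explain how to fix them."
    )
    critique = ask_teacher(sample_input)          # teacher writes the critique
    return {"input": sample_input, "output": critique}

# The resulting records are then used exactly like ordinary SFT pairs, except the
# model learns to *critique* the noisy response rather than imitate a correct one.
```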
2025-01-30T00:00:00 | 2501.17195 | Atla Selene Mini: A General Purpose Evaluation Model | [
"Andrei Alexandru",
"Antonia Calvi",
"Henry Broomfield",
"Jackson Golden",
"Kyle Dai",
"Mathias Leys",
"Maurice Burger",
"Max Bartolo",
"Roman Engeler",
"Sashank Pisupati",
"Toby Drane",
"Young Sun Park"
] | We introduce Atla Selene Mini, a state-of-the-art small language model-as-a-judge (SLMJ). Selene Mini is a general-purpose evaluator that outperforms the best SLMJs and GPT-4o-mini on overall performance across 11 out-of-distribution benchmarks, spanning absolute scoring, classification, and pairwise preference tasks. It is the highest-scoring 8B generative model on RewardBench, surpassing strong baselines like GPT-4o and specialized judges. To achieve this, we develop a principled data curation strategy that augments public datasets with synthetically generated critiques and ensures high quality through filtering and dataset ablations. We train our model on a combined direct preference optimization (DPO) and supervised fine-tuning (SFT) loss, and produce a highly promptable evaluator that excels in real-world scenarios. Selene Mini shows dramatically improved zero-shot agreement with human expert evaluations on financial and medical industry datasets. It is also robust to variations in prompt format. Preliminary results indicate that Selene Mini is the top-ranking evaluator in a live, community-driven Judge Arena. We release the model weights on HuggingFace (https://hf.co/AtlaAI/Selene-1-Mini-Llama-3.1-8B) and Ollama to encourage widespread community adoption. |
|
2025-01-30T00:00:00 | 2501.17749 | Early External Safety Testing of OpenAI's o3-mini: Insights from the Pre-Deployment Evaluation | [
"Aitor Arrieta",
"Miriam Ugarte",
"Pablo Valle",
"José Antonio Parejo",
"Sergio Segura"
] | Large Language Models (LLMs) have become an integral part of our daily lives. However, they pose certain risks, including those that can harm individuals' privacy, perpetuate biases and spread misinformation. These risks highlight the need for robust safety mechanisms, ethical guidelines, and thorough testing to ensure their responsible deployment. Safety of LLMs is a key property that needs to be thoroughly tested before the model is deployed and made accessible to general users. This paper reports the external safety testing experience conducted by researchers from Mondragon University and University of Seville on OpenAI's new o3-mini LLM as part of OpenAI's early access for safety testing program. In particular, we apply our tool, ASTRAL, to automatically and systematically generate up-to-date unsafe test inputs (i.e., prompts) that help us test and assess different safety categories of LLMs. We automatically generate and execute a total of 10,080 unsafe test inputs on an early o3-mini beta version. After manually verifying the test cases classified as unsafe by ASTRAL, we identify a total of 87 actual instances of unsafe LLM behavior. We highlight key insights and findings uncovered during the pre-deployment external testing phase of OpenAI's latest LLM. |
|
2025-01-30T00:00:00 | 2501.17433 | Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation | [
"Tiansheng Huang",
"Sihao Hu",
"Fatih Ilhan",
"Selim Furkan Tekin",
"Ling Liu"
] | https://github.com/git-disl/Virus | Recent research shows that Large Language Models (LLMs) are vulnerable to harmful fine-tuning attacks -- models lose their safety alignment ability after fine-tuning on a few harmful samples. For risk mitigation, a guardrail is typically used to filter out harmful samples before fine-tuning. By designing a new red-teaming method, we show in this paper that purely relying on the moderation guardrail for data filtration is not reliable. Our proposed attack method, dubbed Virus, easily bypasses the guardrail moderation by slightly modifying the harmful data. Experimental results show that the harmful data optimized by Virus is not detectable by the guardrail with up to 100\% leakage ratio, and can simultaneously achieve superior attack performance. Finally, the key message we want to convey through this paper is that it is reckless to clutch at guardrail moderation as a defence against harmful fine-tuning attacks, as it cannot solve the inherent safety issue of the pre-trained LLMs. Our code is available at https://github.com/git-disl/Virus |
2025-01-30T00:00:00 | 2501.14334 | Exploring the sustainable scaling of AI dilemma: A projective study of corporations' AI environmental impacts | [
"Clément Desroches",
"Martin Chauvin",
"Louis Ladan",
"Caroline Vateau",
"Simon Gosset",
"Philippe Cordier"
] | The rapid growth of artificial intelligence (AI), particularly Large Language Models (LLMs), has raised concerns regarding its global environmental impact that extends beyond greenhouse gas emissions to include consideration of hardware fabrication and end-of-life processes. The opacity from major providers hinders companies' abilities to evaluate their AI-related environmental impacts and achieve net-zero targets. In this paper, we propose a methodology to estimate the environmental impact of a company's AI portfolio, providing actionable insights without necessitating extensive AI and Life-Cycle Assessment (LCA) expertise. Results confirm that large generative AI models consume up to 4600x more energy than traditional models. Our modelling approach, which accounts for increased AI usage, hardware computing efficiency, and changes in electricity mix in line with IPCC scenarios, forecasts AI electricity use up to 2030. Under a high adoption scenario, driven by widespread Generative AI and agent adoption associated with increasingly complex models and frameworks, AI electricity use is projected to rise by a factor of 24.4. Mitigating the environmental impact of Generative AI by 2030 requires coordinated efforts across the AI value chain. Isolated measures in hardware efficiency, model efficiency, or grid improvements alone are insufficient. We advocate for standardized environmental assessment frameworks, greater transparency from all actors of the value chain, and the introduction of a "Return on Environment" metric to align AI development with net-zero goals. |
|
2025-01-30T00:00:00 | 2501.15654 | People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text | [
"Jenna Russell",
"Marzena Karpinska",
"Mohit Iyyer"
] | In this paper, we study how well humans can detect text generated by commercial LLMs (GPT-4o, Claude, o1). We hire annotators to read 300 non-fiction English articles, label them as either human-written or AI-generated, and provide paragraph-length explanations for their decisions. Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such "expert" annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts' free-form explanations shows that while they rely heavily on specific lexical clues ('AI vocabulary'), they also pick up on more complex phenomena within the text (e.g., formality, originality, clarity) that are challenging to assess for automatic detectors. We release our annotated dataset and code to spur future research into both human and automated detection of AI-generated text. |
|
2025-01-30T00:00:00 | 2501.15891 | Any2AnyTryon: Leveraging Adaptive Position Embeddings for Versatile Virtual Clothing Tasks | [
"Hailong Guo",
"Bohan Zeng",
"Yiren Song",
"Wentao Zhang",
"Chuang Zhang",
"Jiaming Liu"
] | Image-based virtual try-on (VTON) aims to generate a virtual try-on result by transferring an input garment onto a target person's image. However, the scarcity of paired garment-model data makes it challenging for existing methods to achieve high generalization and quality in VTON. Also, it limits the ability to generate mask-free try-ons. To tackle the data scarcity problem, approaches such as Stable Garment and MMTryon use a synthetic data strategy, effectively increasing the amount of paired data on the model side. However, existing methods are typically limited to performing specific try-on tasks and lack user-friendliness. To enhance the generalization and controllability of VTON generation, we propose Any2AnyTryon, which can generate try-on results based on different textual instructions and model garment images to meet various needs, eliminating the reliance on masks, poses, or other conditions. Specifically, we first construct the virtual try-on dataset LAION-Garment, the largest known open-source garment try-on dataset. Then, we introduce adaptive position embedding, which enables the model to generate satisfactory outfitted model images or garment images based on input images of different sizes and categories, significantly enhancing the generalization and controllability of VTON generation. In our experiments, we demonstrate the effectiveness of our Any2AnyTryon and compare it with existing methods. The results show that Any2AnyTryon enables flexible, controllable, and high-quality image-based virtual try-on generation. https://logn-2024.github.io/Any2anyTryonProjectPage/ |
|
2025-01-31T00:00:00 | 2501.18492 | GuardReasoner: Towards Reasoning-based LLM Safeguards | [
"Yue Liu",
"Hongcheng Gao",
"Shengfang Zhai",
"Jun Xia",
"Tianyi Wu",
"Zhiwei Xue",
"Yulin Chen",
"Kenji Kawaguchi",
"Jiaheng Zhang",
"Bryan Hooi"
] | https://github.com/yueliu1999/GuardReasoner | As LLMs increasingly impact safety-critical applications, ensuring their safety using guardrails remains a key challenge. This paper proposes GuardReasoner, a new safeguard for LLMs, by guiding the guard model to learn to reason. Concretely, we first create the GuardReasonerTrain dataset, which consists of 127K samples with 460K detailed reasoning steps. Then, we introduce reasoning SFT to unlock the reasoning capability of guard models. In addition, we present hard sample DPO to further strengthen their reasoning ability. In this manner, GuardReasoner achieves better performance, explainability, and generalizability. Extensive experiments and analyses on 13 benchmarks of 3 guardrail tasks demonstrate its superiority. Remarkably, GuardReasoner 8B surpasses GPT-4o+CoT by 5.74% and LLaMA Guard 3 8B by 20.84% F1 score on average. We release the training data, code, and models with different scales (1B, 3B, 8B) of GuardReasoner : https://github.com/yueliu1999/GuardReasoner/. |
2025-01-31T00:00:00 | 2501.16411 | PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding | [
"Wei Chow",
"Jiageng Mao",
"Boyi Li",
"Daniel Seita",
"Vitor Guizilini",
"Yue Wang"
] | Understanding the physical world is a fundamental challenge in embodied AI, critical for enabling agents to perform complex tasks and operate safely in real-world environments. While Vision-Language Models (VLMs) have shown great promise in reasoning and task planning for embodied agents, their ability to comprehend physical phenomena remains extremely limited. To close this gap, we introduce PhysBench, a comprehensive benchmark designed to evaluate VLMs' physical world understanding capability across a diverse set of tasks. PhysBench contains 10,002 entries of interleaved video-image-text data, categorized into four major domains: physical object properties, physical object relationships, physical scene understanding, and physics-based dynamics, further divided into 19 subclasses and 8 distinct capability dimensions. Our extensive experiments, conducted on 75 representative VLMs, reveal that while these models excel in common-sense reasoning, they struggle with understanding the physical world -- likely due to the absence of physical knowledge in their training data and the lack of embedded physical priors. To tackle the shortfall, we introduce PhysAgent, a novel framework that combines the generalization strengths of VLMs with the specialized expertise of vision models, significantly enhancing VLMs' physical understanding across a variety of tasks, including an 18.4\% improvement on GPT-4o. Furthermore, our results demonstrate that enhancing VLMs' physical world understanding capabilities can help embodied agents such as MOKA. We believe that PhysBench and PhysAgent offer valuable insights and contribute to bridging the gap between VLMs and physical world understanding. |
|
2025-01-31T00:00:00 | 2501.18585 | Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs | [
"Yue Wang",
"Qiuzhi Liu",
"Jiahao Xu",
"Tian Liang",
"Xingyu Chen",
"Zhiwei He",
"Linfeng Song",
"Dian Yu",
"Juntao Li",
"Zhuosheng Zhang",
"Rui Wang",
"Zhaopeng Tu",
"Haitao Mi",
"Dong Yu"
] | Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable abilities in complex reasoning tasks by scaling test-time compute and exhibiting human-like deep thinking. However, we identify a phenomenon we term underthinking, where o1-like LLMs frequently switch between different reasoning thoughts without sufficiently exploring promising paths to reach a correct solution. This behavior leads to inadequate depth of reasoning and decreased performance, particularly on challenging mathematical problems. To systematically analyze this issue, we conduct experiments on three challenging test sets and two representative open-source o1-like models, revealing that frequent thought switching correlates with incorrect responses. We introduce a novel metric to quantify underthinking by measuring token efficiency in incorrect answers. To address underthinking, we propose a decoding strategy with thought switching penalty TIP that discourages premature transitions between thoughts, encouraging deeper exploration of each reasoning path. Experimental results demonstrate that our approach improves accuracy across challenging datasets without requiring model fine-tuning. Our findings contribute to understanding reasoning inefficiencies in o1-like LLMs and offer a practical solution to enhance their problem-solving capabilities. |
|
2025-01-31T00:00:00 | 2501.18009 | Large Language Models Think Too Fast To Explore Effectively | [
"Lan Pan",
"Hanbo Xie",
"Robert C. Wilson"
] | Many intellectual capacities have emerged in Large Language Models. While numerous benchmarks assess their intelligence, limited attention has been given to their ability to explore, an essential capacity for discovering new information and adapting to novel environments in both natural and artificial systems. The extent to which LLMs can effectively explore, particularly in open-ended tasks, remains unclear. This study investigates whether LLMs can surpass humans in exploration during an open-ended task, using Little Alchemy 2 as a paradigm, where agents combine elements to discover new ones. Results show most LLMs underperform compared to humans, except for the o1 model, with traditional LLMs relying primarily on uncertainty-driven strategies, unlike humans who balance uncertainty and empowerment. Representational analysis of the models with Sparse Autoencoders revealed that uncertainty and choices are represented at earlier transformer blocks, while empowerment values are processed later, causing LLMs to think too fast and make premature decisions, hindering effective exploration. These findings shed light on the limitations of LLM exploration and suggest directions for improving their adaptability. |
|
2025-01-31T00:00:00 | 2501.18438 | o3-mini vs DeepSeek-R1: Which One is Safer? | [
"Aitor Arrieta",
"Miriam Ugarte",
"Pablo Valle",
"José Antonio Parejo",
"Sergio Segura"
] | The irruption of DeepSeek-R1 constitutes a turning point for the AI industry in general and the LLMs in particular. Its capabilities have demonstrated outstanding performance in several tasks, including creative thinking, code generation, maths and automated program repair, at apparently lower execution cost. However, LLMs must adhere to an important qualitative property, i.e., their alignment with safety and human values. A clear competitor of DeepSeek-R1 is its American counterpart, OpenAI's o3-mini model, which is expected to set high standards in terms of performance, safety and cost. In this paper we conduct a systematic assessment of the safety level of both, DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version). To this end, we make use of our recently released automated safety testing tool, named ASTRAL. By leveraging this tool, we automatically and systematically generate and execute a total of 1260 unsafe test inputs on both models. After conducting a semi-automated assessment of the outcomes provided by both LLMs, the results indicate that DeepSeek-R1 is highly unsafe as compared to OpenAI's o3-mini. Based on our evaluation, DeepSeek-R1 answered unsafely to 11.98% of the executed prompts whereas o3-mini only to 1.19%. |
|
2025-01-31T00:00:00 | 2501.18362 | MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding | [
"Yuxin Zuo",
"Shang Qu",
"Yifei Li",
"Zhangren Chen",
"Xuekai Zhu",
"Ermo Hua",
"Kaiyan Zhang",
"Ning Ding",
"Bowen Zhou"
] | We introduce MedXpertQA, a highly challenging and comprehensive benchmark to evaluate expert-level medical knowledge and advanced reasoning. MedXpertQA includes 4,460 questions spanning 17 specialties and 11 body systems. It includes two subsets, Text for text evaluation and MM for multimodal evaluation. Notably, MM introduces expert-level exam questions with diverse images and rich clinical information, including patient records and examination results, setting it apart from traditional medical multimodal benchmarks with simple QA pairs generated from image captions. MedXpertQA applies rigorous filtering and augmentation to address the insufficient difficulty of existing benchmarks like MedQA, and incorporates specialty board questions to improve clinical relevance and comprehensiveness. We perform data synthesis to mitigate data leakage risk and conduct multiple rounds of expert reviews to ensure accuracy and reliability. We evaluate 16 leading models on MedXpertQA. Moreover, medicine is deeply connected to real-world decision-making, providing a rich and representative setting for assessing reasoning abilities beyond mathematics and code. To this end, we develop a reasoning-oriented subset to facilitate the assessment of o1-like models. |
|
2025-01-31T00:00:00 | 2501.18511 | WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training | [
"Benjamin Feuer",
"Chinmay Hegde"
] | https://github.com/penfever/wildchat-50m | Language model (LLM) post-training, from DPO to distillation, can refine behaviors and unlock new skills, but the open science supporting these post-training techniques is still in its infancy. One limiting factor has been the difficulty of conducting large-scale comparative analyses of synthetic data generating models and LLM judges. To close this gap, we introduce WILDCHAT-50M, the largest public chat dataset to date. We extend the existing WildChat dataset to include responses not only from GPT, but from over 50 different open-weight models, ranging in size from 0.5B to 104B parameters. We conduct an extensive comparative analysis and demonstrate the potential of this dataset by creating RE-WILD, our own public SFT mix, which outperforms the recent Tulu-3 SFT mixture from Allen AI with only 40% as many samples. Our dataset, samples and code are available at https://github.com/penfever/wildchat-50m. |
2025-01-31T00:00:00 | 2501.18512 | Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch | [
"Arthur Douillard",
"Yanislav Donchev",
"Keith Rush",
"Satyen Kale",
"Zachary Charles",
"Zachary Garrett",
"Gabriel Teston",
"Dave Lacey",
"Ross McIlroy",
"Jiajun Shen",
"Alexandre Ramé",
"Arthur Szlam",
"Marc'Aurelio Ranzato",
"Paul Barham"
] | Training of large language models (LLMs) is typically distributed across a large number of accelerators to reduce training time. Since internal states and parameter gradients need to be exchanged at each and every single gradient step, all devices need to be co-located using low-latency high-bandwidth communication links to support the required high volume of exchanged bits. Recently, distributed algorithms like DiLoCo have relaxed such co-location constraint: accelerators can be grouped into ``workers'', where synchronizations between workers only occur infrequently. This in turn means that workers can afford being connected by lower bandwidth communication links without affecting learning quality. However, in these methods, communication across workers still requires the same peak bandwidth as before, as the synchronizations require all parameters to be exchanged across all workers. In this paper, we improve DiLoCo in three ways. First, we synchronize only subsets of parameters in sequence, rather than all at once, which greatly reduces peak bandwidth. Second, we allow workers to continue training while synchronizing, which decreases wall clock time. Third, we quantize the data exchanged by workers, which further reduces bandwidth across workers. By properly combining these modifications, we show experimentally that we can distribute training of billion-scale parameters and reach similar quality as before, but reducing required bandwidth by two orders of magnitude. |
|
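A hedged sketch of two of the three modifications listed in the Streaming DiLoCo abstract above: synchronizing only one fragment of the parameters per outer step, and quantizing the exchanged deltas. Overlapping communication with computation is omitted, the outer update is simplified to plain averaging, and `all_reduce_mean` stands in for the actual cross-worker primitive (e.g. a torch.distributed collective).

```python
import torch

def quantize_fp16(t: torch.Tensor) -> torch.Tensor:
    # Assumption: exchanged deltas are cast to lower precision before communication.
    return t.to(torch.float16).to(torch.float32)

def streaming_outer_step(model, global_params, fragments, step, all_reduce_mean):
    """Synchronize only one parameter fragment per outer step (illustrative sketch).

    fragments: list of lists of parameter names, visited round-robin so that peak
    bandwidth per synchronization covers only a subset of the model.
    """
    names = fragments[step % len(fragments)]
    state = dict(model.named_parameters())
    for name in names:
        delta = state[name].data - global_params[name]   # local progress since last sync
        delta = quantize_fp16(delta)                      # low-precision exchange
        delta = all_reduce_mean(delta)                    # average across workers (stand-in)
        global_params[name] += delta                      # simplified outer update
        state[name].data.copy_(global_params[name])
```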
2025-01-31T00:00:00 | 2501.16609 | CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation | [
"Faria Huq",
"Zora Zhiruo Wang",
"Frank F. Xu",
"Tianyue Ou",
"Shuyan Zhou",
"Jeffrey P. Bigham",
"Graham Neubig"
] | While much work on web agents emphasizes the promise of autonomously performing tasks on behalf of users, in reality, agents often fall short on complex tasks in real-world contexts and modeling user preference. This presents an opportunity for humans to collaborate with the agent and leverage the agent's capabilities effectively. We propose CowPilot, a framework supporting autonomous as well as human-agent collaborative web navigation, and evaluation across task success and task efficiency. CowPilot reduces the number of steps humans need to perform by allowing agents to propose next steps, while users are able to pause, reject, or take alternative actions. During execution, users can interleave their actions with the agent by overriding suggestions or resuming agent control when needed. We conducted case studies on five common websites and found that the human-agent collaborative mode achieves the highest success rate of 95% while requiring humans to perform only 15.2% of the total steps. Even with human interventions during task execution, the agent successfully drives up to half of task success on its own. CowPilot can serve as a useful tool for data collection and agent evaluation across websites, which we believe will enable research in how users and agents can work together. Video demonstrations are available at https://oaishi.github.io/cowpilot.html |
|
2025-01-31T00:00:00 | 2501.18427 | SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer | [
"Enze Xie",
"Junsong Chen",
"Yuyang Zhao",
"Jincheng Yu",
"Ligeng Zhu",
"Yujun Lin",
"Zhekai Zhang",
"Muyang Li",
"Junyu Chen",
"Han Cai",
"Bingchen Liu",
"Daquan Zhou",
"Song Han"
] | This paper presents SANA-1.5, a linear Diffusion Transformer for efficient scaling in text-to-image generation. Building upon SANA-1.0, we introduce three key innovations: (1) Efficient Training Scaling: A depth-growth paradigm that enables scaling from 1.6B to 4.8B parameters with significantly reduced computational resources, combined with a memory-efficient 8-bit optimizer. (2) Model Depth Pruning: A block importance analysis technique for efficient model compression to arbitrary sizes with minimal quality loss. (3) Inference-time Scaling: A repeated sampling strategy that trades computation for model capacity, enabling smaller models to match larger model quality at inference time. Through these strategies, SANA-1.5 achieves a text-image alignment score of 0.72 on GenEval, which can be further improved to 0.80 through inference scaling, establishing a new SoTA on GenEval benchmark. These innovations enable efficient model scaling across different compute budgets while maintaining high quality, making high-quality image generation more accessible. |
|
2025-02-03T00:00:00 | 2501.19393 | s1: Simple test-time scaling | [
"Niklas Muennighoff",
"Zitong Yang",
"Weijia Shi",
"Xiang Lisa Li",
"Li Fei-Fei",
"Hannaneh Hajishirzi",
"Luke Zettlemoyer",
"Percy Liang",
"Emmanuel Candès",
"Tatsunori Hashimoto"
] | https://github.com/simplescaling/s1 | Test-time scaling is a promising new approach to language modeling that uses extra test-time compute to improve performance. Recently, OpenAI's o1 model showed this capability but did not publicly share its methodology, leading to many replication efforts. We seek the simplest approach to achieve test-time scaling and strong reasoning performance. First, we curate a small dataset s1K of 1,000 questions paired with reasoning traces relying on three criteria we validate through ablations: difficulty, diversity, and quality. Second, we develop budget forcing to control test-time compute by forcefully terminating the model's thinking process or lengthening it by appending "Wait" multiple times to the model's generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps. After supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and equipping it with budget forcing, our model s1 exceeds o1-preview on competition math questions by up to 27% (MATH and AIME24). Further, scaling s1 with budget forcing allows extrapolating beyond its performance without test-time intervention: from 50% to 57% on AIME24. Our model, data, and code are open-source at https://github.com/simplescaling/s1. |
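A hedged sketch of the "budget forcing" control described in the s1 abstract above: suppress the end of the thinking trace and append "Wait" to lengthen reasoning, or cut thinking off once a token budget is exhausted. The delimiter string, token accounting, and `generate` helper are placeholders, not the released implementation.

```python
# Sketch of budget forcing, assuming a hypothetical `generate(prompt, stop, max_tokens)`
# helper that returns generated text from the underlying model.

END_OF_THINKING = "</think>"   # assumption: some delimiter closes the reasoning trace

def generate(prompt: str, stop, max_tokens: int) -> str:
    raise NotImplementedError("placeholder for the underlying LLM call")

def budget_forced_reasoning(question: str, min_wait_rounds: int = 2,
                            max_thinking_tokens: int = 4096) -> str:
    trace = ""
    for _ in range(min_wait_rounds + 1):
        chunk = generate(question + trace, stop=END_OF_THINKING,
                         max_tokens=max(0, max_thinking_tokens - len(trace.split())))
        trace += chunk
        if len(trace.split()) >= max_thinking_tokens:
            break                      # budget exhausted: force the end of thinking
        trace += "\nWait"              # suppress the stop and nudge further reasoning
    trace += END_OF_THINKING
    # The final answer is generated after the (forcibly closed) reasoning trace.
    return generate(question + trace, stop=None, max_tokens=512)
```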
2025-02-03T00:00:00 | 2501.18841 | Trading Inference-Time Compute for Adversarial Robustness | [
"Wojciech Zaremba",
"Evgenia Nitishinskaya",
"Boaz Barak",
"Stephanie Lin",
"Sam Toyer",
"Yaodong Yu",
"Rachel Dias",
"Eric Wallace",
"Kai Xiao",
"Johannes Heidecke",
"Amelia Glaese"
] | We conduct experiments on the impact of increasing inference-time compute in reasoning models (specifically OpenAI o1-preview and o1-mini) on their robustness to adversarial attacks. We find that across a variety of attacks, increased inference-time compute leads to improved robustness. In many cases (with important exceptions), the fraction of model samples where the attack succeeds tends to zero as the amount of test-time compute grows. We perform no adversarial training for the tasks we study, and we increase inference-time compute by simply allowing the models to spend more compute on reasoning, independently of the form of attack. Our results suggest that inference-time compute has the potential to improve adversarial robustness for Large Language Models. We also explore new attacks directed at reasoning models, as well as settings where inference-time compute does not improve reliability, and speculate on the reasons for these as well as ways to address them. |
|
2025-02-03T00:00:00 | 2501.19324 | Reward-Guided Speculative Decoding for Efficient LLM Reasoning | [
"Baohao Liao",
"Yuhui Xu",
"Hanze Dong",
"Junnan Li",
"Christof Monz",
"Silvio Savarese",
"Doyen Sahoo",
"Caiming Xiong"
] | We introduce Reward-Guided Speculative Decoding (RSD), a novel framework aimed at improving the efficiency of inference in large language models (LLMs). RSD synergistically combines a lightweight draft model with a more powerful target model, incorporating a controlled bias to prioritize high-reward outputs, in contrast to existing speculative decoding methods that enforce strict unbiasedness. RSD employs a process reward model to evaluate intermediate decoding steps and dynamically decide whether to invoke the target model, optimizing the trade-off between computational cost and output quality. We theoretically demonstrate that a threshold-based mixture strategy achieves an optimal balance between resource utilization and performance. Extensive evaluations on challenging reasoning benchmarks, including Olympiad-level tasks, show that RSD delivers significant efficiency gains against decoding with the target model only (up to 4.4x fewer FLOPs), while achieving significantly better accuracy than parallel decoding methods on average (up to +3.5). These results highlight RSD as a robust and cost-effective approach for deploying LLMs in resource-intensive scenarios. |
|
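A minimal sketch of the threshold-based mixture described in the RSD abstract above: the draft model proposes each reasoning step, a process reward model scores it, and only low-reward steps are regenerated by the expensive target model. `draft_step`, `target_step`, `reward_of`, the threshold value, and the stopping condition are placeholders.

```python
# Sketch of reward-guided speculative decoding at the step level (illustrative only).

def draft_step(context: str) -> str: ...
def target_step(context: str) -> str: ...
def reward_of(context: str, step: str) -> float: ...

def rsd_generate(question: str, max_steps: int = 16, threshold: float = 0.7) -> str:
    context = question
    for _ in range(max_steps):
        step = draft_step(context)                # cheap proposal from the draft model
        if reward_of(context, step) < threshold:  # low process reward:
            step = target_step(context)           # fall back to the expensive target model
        context += "\n" + step
        if step.strip().startswith("Final answer"):
            break
    return context
```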
2025-02-03T00:00:00 | 2501.18837 | Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming | [
"Mrinank Sharma",
"Meg Tong",
"Jesse Mu",
"Jerry Wei",
"Jorrit Kruthoff",
"Scott Goodfriend",
"Euan Ong",
"Alwin Peng",
"Raj Agarwal",
"Cem Anil",
"Amanda Askell",
"Nathan Bailey",
"Joe Benton",
"Emma Bluemke",
"Samuel R. Bowman",
"Eric Christiansen",
"Hoagy Cunningham",
"Andy Dau",
"Anjali Gopal",
"Rob Gilson",
"Logan Graham",
"Logan Howard",
"Nimit Kalra",
"Taesung Lee",
"Kevin Lin",
"Peter Lofgren",
"Francesco Mosconi",
"Clare O'Hara",
"Catherine Olsson",
"Linda Petrini",
"Samir Rajani",
"Nikhil Saxena",
"Alex Silverstein",
"Tanya Singh",
"Theodore Sumers",
"Leonard Tang",
"Kevin K. Troy",
"Constantin Weisser",
"Ruiqi Zhong",
"Giulio Zhou",
"Jan Leike",
"Jared Kaplan",
"Ethan Perez"
] | Large language models (LLMs) are vulnerable to universal jailbreaks-prompting strategies that systematically bypass model safeguards and enable users to carry out harmful processes that require many model interactions, like manufacturing illegal substances at scale. To defend against these attacks, we introduce Constitutional Classifiers: safeguards trained on synthetic data, generated by prompting LLMs with natural language rules (i.e., a constitution) specifying permitted and restricted content. In over 3,000 estimated hours of red teaming, no red teamer found a universal jailbreak that could extract information from an early classifier-guarded LLM at a similar level of detail to an unguarded model across most target queries. On automated evaluations, enhanced classifiers demonstrated robust defense against held-out domain-specific jailbreaks. These classifiers also maintain deployment viability, with an absolute 0.38% increase in production-traffic refusals and a 23.7% inference overhead. Our work demonstrates that defending against universal jailbreaks while maintaining practical deployment viability is tractable. |
|
2025-02-03T00:00:00 | 2404.07097 | Fast Encoder-Based 3D from Casual Videos via Point Track Processing | [
"Yoni Kasten",
"Wuyue Lu",
"Haggai Maron"
] | This paper addresses the long-standing challenge of reconstructing 3D structures from videos with dynamic content. Current approaches to this problem were not designed to operate on casual videos recorded by standard cameras or require a long optimization time. Aiming to significantly improve the efficiency of previous approaches, we present TracksTo4D, a learning-based approach that enables inferring 3D structure and camera positions from dynamic content originating from casual videos using a single efficient feed-forward pass. To achieve this, we propose operating directly over 2D point tracks as input and designing an architecture tailored for processing 2D point tracks. Our proposed architecture is designed with two key principles in mind: (1) it takes into account the inherent symmetries present in the input point tracks data, and (2) it assumes that the movement patterns can be effectively represented using a low-rank approximation. TracksTo4D is trained in an unsupervised way on a dataset of casual videos utilizing only the 2D point tracks extracted from the videos, without any 3D supervision. Our experiments show that TracksTo4D can reconstruct a temporal point cloud and camera positions of the underlying video with accuracy comparable to state-of-the-art methods, while drastically reducing runtime by up to 95\%. We further show that TracksTo4D generalizes well to unseen videos of unseen semantic categories at inference time. |
|
2025-02-03T00:00:00 | 2411.04983 | DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning | [
"Gaoyue Zhou",
"Hengkai Pan",
"Yann LeCun",
"Lerrel Pinto"
] | The ability to predict future outcomes given control actions is fundamental for physical reasoning. However, such predictive models, often called world models, have proven challenging to learn and are typically developed for task-specific solutions with online policy learning. We argue that the true potential of world models lies in their ability to reason and plan across diverse problems using only passive data. Concretely, we require world models to have the following three properties: 1) be trainable on offline, pre-collected trajectories, 2) support test-time behavior optimization, and 3) facilitate task-agnostic reasoning. To realize this, we present DINO World Model (DINO-WM), a new method to model visual dynamics without reconstructing the visual world. DINO-WM leverages spatial patch features pre-trained with DINOv2, enabling it to learn from offline behavioral trajectories by predicting future patch features. This design allows DINO-WM to achieve observational goals through action sequence optimization, facilitating task-agnostic behavior planning by treating desired goal patch features as prediction targets. We evaluate DINO-WM across various domains, including maze navigation, tabletop pushing, and particle manipulation. Our experiments demonstrate that DINO-WM can generate zero-shot behavioral solutions at test time without relying on expert demonstrations, reward modeling, or pre-learned inverse models. Notably, DINO-WM exhibits strong generalization capabilities compared to prior state-of-the-art work, adapting to diverse task families such as arbitrarily configured mazes, push manipulation with varied object shapes, and multi-particle scenarios. |
|
2025-02-03T00:00:00 | 2501.18128 | Unraveling the Capabilities of Language Models in News Summarization | [
"Abdurrahman Odabaşı",
"Göksel Biricik"
] | Given the recent introduction of multiple language models and the ongoing demand for improved Natural Language Processing tasks, particularly summarization, this work provides a comprehensive benchmarking of 20 recent language models, focusing on smaller ones for the news summarization task. In this work, we systematically test the capabilities and effectiveness of these models in summarizing news article texts which are written in different styles and presented in three distinct datasets. Specifically, we focus in this study on zero-shot and few-shot learning settings and we apply a robust evaluation methodology that combines different evaluation concepts including automatic metrics, human evaluation, and LLM-as-a-judge. Interestingly, including demonstration examples in the few-shot learning setting did not enhance models' performance and, in some cases, even led to worse quality of the generated summaries. This issue arises mainly due to the poor quality of the gold summaries that have been used as reference summaries, which negatively impacts the models' performance. Furthermore, our study's results highlight the exceptional performance of GPT-3.5-Turbo and GPT-4, which generally dominate due to their advanced capabilities. However, among the public models evaluated, certain models such as Qwen1.5-7B, SOLAR-10.7B-Instruct-v1.0, Meta-Llama-3-8B and Zephyr-7B-Beta demonstrated promising results. These models showed significant potential, positioning them as competitive alternatives to large models for the task of news summarization. |
|
2025-02-03T00:00:00 | 2501.18119 | Self-supervised Quantized Representation for Seamlessly Integrating Knowledge Graphs with Large Language Models | [
"Qika Lin",
"Tianzhe Zhao",
"Kai He",
"Zhen Peng",
"Fangzhi Xu",
"Ling Huang",
"Jingying Ma",
"Mengling Feng"
] | Due to the natural gap between Knowledge Graph (KG) structures and natural language, the effective integration of holistic structural information of KGs with Large Language Models (LLMs) has emerged as a significant question. To this end, we propose a two-stage framework to learn and apply quantized codes for each entity, aiming for the seamless integration of KGs with LLMs. Firstly, a self-supervised quantized representation (SSQR) method is proposed to compress both KG structural and semantic knowledge into discrete codes (i.e., tokens) that align with the format of language sentences. We further design KG instruction-following data by viewing these learned codes as features to directly input to LLMs, thereby achieving seamless integration. The experiment results demonstrate that SSQR outperforms existing unsupervised quantized methods, producing more distinguishable codes. Further, the fine-tuned LLaMA2 and LLaMA3.1 also have superior performance on KG link prediction and triple classification tasks, utilizing only 16 tokens per entity instead of thousands in conventional prompting methods. |
|
2025-02-03T00:00:00 | 2501.18753 | INT: Instance-Specific Negative Mining for Task-Generic Promptable Segmentation | [
"Jian Hu",
"Zixu Cheng",
"Shaogang Gong"
] | Task-generic promptable image segmentation aims to achieve segmentation of diverse samples under a single task description by utilizing only one task-generic prompt. Current methods leverage the generalization capabilities of Vision-Language Models (VLMs) to infer instance-specific prompts from these task-generic prompts in order to guide the segmentation process. However, when VLMs struggle to generalise to some image instances, the quality of the predicted instance-specific prompts becomes poor. To solve this problem, we introduce Instance-specific Negative Mining for Task-Generic Promptable Segmentation (INT). The key idea of INT is to adaptively reduce the influence of irrelevant (negative) prior knowledge whilst increasing the use of the most plausible prior knowledge, selected by negative mining with higher contrast, in order to optimise instance-specific prompt generation. Specifically, INT consists of two components: (1) instance-specific prompt generation, which progressively filters out incorrect information in prompt generation; (2) semantic mask generation, which ensures that each image instance's segmentation correctly matches the semantics of the instance-specific prompts. INT is validated on six datasets, including camouflaged objects and medical images, demonstrating its effectiveness, robustness and scalability. |
|
2025-02-03T00:00:00 | 2501.19339 | PixelWorld: Towards Perceiving Everything as Pixels | [
"Zhiheng Lyu",
"Xueguang Ma",
"Wenhu Chen"
] | Existing foundation models typically process visual input as pixels and textual input as tokens, a paradigm that contrasts with human perception, where both modalities are processed in a unified manner. With the rise of embodied and agentic AI, where inputs primarily come from camera pixels, the need for a unified perception framework becomes increasingly evident. In this paper, we propose to unify all modalities (text, tables, code, diagrams, images, etc) as pixel inputs, i.e. "Perceive Everything as Pixels" (PEAP). We introduce PixelWorld, a novel evaluation suite that unifies all the mentioned modalities into pixel space to gauge the existing models' performance. Our findings show that (1) PEAP outperforms baseline with token-based input in multimodal datasets, benefiting from unified input for better disambiguation, (2) significant declines in reasoning and coding capabilities across all models when processing pixel-based input, underscoring the need to enhance foundation models' perceptual abilities, (3) larger models can maintain strong performance on non-reasoning tasks under PEAP, while smaller models like Phi-3.5-V suffer significant performance degradation, (4) the attention pattern of PEAP is highly aligned with text token input, (5) PEAP can be accelerated significantly by exploiting the spatial sparsity. We conclude that the existing frontier models are competent in pixel perception, however, there is still headroom for improvement. Our code, dataset will be released upon acceptance. |
|
2025-02-03T00:00:00 | 2501.18965 | The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training | [
"Fabian Schaipp",
"Alexander Hägele",
"Adrien Taylor",
"Umut Simsekli",
"Francis Bach"
] | We show that learning-rate schedules for large model training behave surprisingly similar to a performance bound from non-smooth convex optimization theory. We provide a bound for the constant schedule with linear cooldown; in particular, the practical benefit of cooldown is reflected in the bound due to the absence of logarithmic terms. Further, we show that this surprisingly close match between optimization theory and practice can be exploited for learning-rate tuning: we achieve noticeable improvements for training 124M and 210M Llama-type models by (i) extending the schedule for continued training with optimal learning-rate, and (ii) transferring the optimal learning-rate across schedules. |
|
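A small sketch of the constant-with-linear-cooldown learning-rate schedule referenced in the abstract above. The base rate, cooldown fraction, and omission of warmup are illustrative assumptions, not values from the paper.

```python
def constant_with_linear_cooldown(step: int, total_steps: int,
                                  base_lr: float = 3e-4,
                                  cooldown_frac: float = 0.2) -> float:
    """Constant learning rate followed by a linear cooldown to zero.

    cooldown_frac is an assumed fraction of training spent in cooldown;
    warmup is omitted for brevity.
    """
    cooldown_start = int(total_steps * (1.0 - cooldown_frac))
    if step < cooldown_start:
        return base_lr
    remaining = total_steps - step
    return base_lr * max(remaining, 0) / (total_steps - cooldown_start)

# Example: peak LR held for 80% of training, then decayed linearly to zero.
print([round(constant_with_linear_cooldown(s, 10), 5) for s in range(11)])
```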
2025-02-03T00:00:00 | 2501.19399 | Scalable-Softmax Is Superior for Attention | [
"Ken M. Nakanishi"
] | The maximum element of the vector output by the Softmax function approaches zero as the input vector size increases. Transformer-based language models rely on Softmax to compute attention scores, causing the attention distribution to flatten as the context size grows. This reduces the model's ability to prioritize key information effectively and potentially limits its length generalization. To address this problem, we propose Scalable-Softmax (SSMax), which replaces Softmax in scenarios where the input vector size varies. SSMax can be seamlessly integrated into existing Transformer-based architectures. Experimental results in language modeling show that models using SSMax not only achieve faster loss reduction during pretraining but also significantly improve performance in long contexts and key information retrieval. Furthermore, an analysis of attention scores reveals that SSMax enables the model to focus attention on key information even in long contexts. Additionally, although models that use SSMax from the beginning of pretraining achieve better length generalization, those that have already started pretraining can still gain some of this ability by replacing Softmax in the attention layers with SSMax, either during or after pretraining. |
|
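A hedged sketch of Scalable-Softmax as commonly described: the attention scores are rescaled by s * log(n) before the softmax, where n is the number of attended positions and s is a (learnable) scalar, so the distribution does not flatten as the context grows. The exact parameterization here is an assumption based on the abstract, not a transcription of the paper.

```python
import math
import torch
import torch.nn.functional as F

def ssmax(scores: torch.Tensor, s: float = 0.4) -> torch.Tensor:
    """Scalable-Softmax over the last dimension (illustrative form).

    Assumption: logits are rescaled by s * log(n), with n the number of attended
    positions; in the paper s is a learnable per-head parameter.
    """
    n = scores.size(-1)
    return F.softmax(s * math.log(n) * scores, dim=-1)

for n in (16, 1024, 65_536):
    z = torch.zeros(n)
    z[0] = 5.0                      # one "key" logit above the rest
    p_soft = F.softmax(z, dim=-1)[0]
    p_ssmax = ssmax(z)[0]
    print(n, float(p_soft), float(p_ssmax))
# Softmax's top probability decays as n grows; SSMax keeps it from collapsing.
```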
2025-02-03T00:00:00 | 2501.14677 | MatAnyone: Stable Video Matting with Consistent Memory Propagation | [
"Peiqing Yang",
"Shangchen Zhou",
"Jixin Zhao",
"Qingyi Tao",
"Chen Change Loy"
] | Auxiliary-free human video matting methods, which rely solely on input frames, often struggle with complex or ambiguous backgrounds. To address this, we propose MatAnyone, a robust framework tailored for target-assigned video matting. Specifically, building on a memory-based paradigm, we introduce a consistent memory propagation module via region-adaptive memory fusion, which adaptively integrates memory from the previous frame. This ensures semantic stability in core regions while preserving fine-grained details along object boundaries. For robust training, we present a larger, high-quality, and diverse dataset for video matting. Additionally, we incorporate a novel training strategy that efficiently leverages large-scale segmentation data, boosting matting stability. With this new network design, dataset, and training strategy, MatAnyone delivers robust and accurate video matting results in diverse real-world scenarios, outperforming existing methods. |
|
2025-02-03T00:00:00 | 2501.18804 | Zero-Shot Novel View and Depth Synthesis with Multi-View Geometric Diffusion | [
"Vitor Guizilini",
"Muhammad Zubair Irshad",
"Dian Chen",
"Greg Shakhnarovich",
"Rares Ambrus"
] | Current methods for 3D scene reconstruction from sparse posed images employ intermediate 3D representations such as neural fields, voxel grids, or 3D Gaussians, to achieve multi-view consistent scene appearance and geometry. In this paper we introduce MVGD, a diffusion-based architecture capable of direct pixel-level generation of images and depth maps from novel viewpoints, given an arbitrary number of input views. Our method uses raymap conditioning to both augment visual features with spatial information from different viewpoints, as well as to guide the generation of images and depth maps from novel views. A key aspect of our approach is the multi-task generation of images and depth maps, using learnable task embeddings to guide the diffusion process towards specific modalities. We train this model on a collection of more than 60 million multi-view samples from publicly available datasets, and propose techniques to enable efficient and consistent learning in such diverse conditions. We also propose a novel strategy that enables the efficient training of larger models by incrementally fine-tuning smaller ones, with promising scaling behavior. Through extensive experiments, we report state-of-the-art results in multiple novel view synthesis benchmarks, as well as multi-view stereo and video depth estimation. |
|
2025-02-03T00:00:00 | 2501.18052 | SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders | [
"Bartosz Cywiński",
"Kamil Deja"
] | https://github.com/cywinski/SAeUron | Diffusion models, while powerful, can inadvertently generate harmful or undesirable content, raising significant ethical and safety concerns. Recent machine unlearning approaches offer potential solutions but often lack transparency, making it difficult to understand the changes they introduce to the base model. In this work, we introduce SAeUron, a novel method leveraging features learned by sparse autoencoders (SAEs) to remove unwanted concepts in text-to-image diffusion models. First, we demonstrate that SAEs, trained in an unsupervised manner on activations from multiple denoising timesteps of the diffusion model, capture sparse and interpretable features corresponding to specific concepts. Building on this, we propose a feature selection method that enables precise interventions on model activations to block targeted content while preserving overall performance. Evaluation with the competitive UnlearnCanvas benchmark on object and style unlearning highlights SAeUron's state-of-the-art performance. Moreover, we show that with a single SAE, we can remove multiple concepts simultaneously and that in contrast to other methods, SAeUron mitigates the possibility of generating unwanted content, even under adversarial attack. Code and checkpoints are available at: https://github.com/cywinski/SAeUron. |
2025-02-03T00:00:00 | 2502.00299 | ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference | [
"Xiang Liu",
"Zhenheng Tang",
"Peijie Dong",
"Zeyu Li",
"Bo Li",
"Xuming Hu",
"Xiaowen Chu"
] | To reduce memory costs in long-context inference with Large Language Models (LLMs), many recent works focus on compressing the key-value (KV) cache of different tokens. However, we identify that previous KV cache compression methods measure token importance individually, neglecting the dependencies between tokens that characterize real-world language. In light of this, we introduce ChunkKV, which groups the tokens in a chunk as a basic compression unit, retaining the most informative semantic chunks while discarding the less important ones. Furthermore, observing that ChunkKV exhibits higher similarity in the preserved indices across different layers, we propose layer-wise index reuse to further reduce computational overhead. We evaluated ChunkKV on cutting-edge long-context benchmarks including LongBench and Needle-In-A-HayStack, as well as the GSM8K and JailbreakV in-context learning benchmarks. Our experiments with instruction-tuned and multi-step reasoning (O1 and R1) LLMs achieve up to a 10\% performance improvement under aggressive compression ratios compared to existing methods. |
|
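The chunk-level selection idea in the ChunkKV abstract lends itself to a short illustration. The following is a minimal PyTorch sketch, not the authors' implementation: the chunk size, the attention-based scoring heuristic, and the function names are assumptions made here for illustration only.

```python
import torch

def chunk_kv_compress(keys, values, attn_scores, chunk_size=16, keep_ratio=0.5):
    """Keep only the KV entries of the highest-scoring chunks (illustrative)."""
    seq_len = keys.shape[0]
    n_chunks = (seq_len + chunk_size - 1) // chunk_size
    # Score each chunk by the mean attention its tokens received.
    chunk_scores = torch.stack([
        attn_scores[i * chunk_size:(i + 1) * chunk_size].mean()
        for i in range(n_chunks)
    ])
    n_keep = max(1, int(n_chunks * keep_ratio))
    keep = torch.topk(chunk_scores, n_keep).indices.sort().values  # keep original order
    token_idx = torch.cat([
        torch.arange(i * chunk_size, min((i + 1) * chunk_size, seq_len))
        for i in keep.tolist()
    ])
    return keys[token_idx], values[token_idx], token_idx

# Example: halve the chunks of a 128-token cache for one head/layer.
k, v, scores = torch.randn(128, 64), torch.randn(128, 64), torch.rand(128)
k_c, v_c, kept = chunk_kv_compress(k, v, scores)
```

Under this sketch, the layer-wise index reuse mentioned in the abstract would correspond to computing `kept` once at a chosen layer and reusing it for subsequent layers.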
2025-02-04T00:00:00 | 2502.00094 | AIN: The Arabic INclusive Large Multimodal Model | [
"Ahmed Heakl",
"Sara Ghaboura",
"Omkar Thawkar",
"Fahad Shahbaz Khan",
"Hisham Cholakkal",
"Rao Muhammad Anwer",
"Salman Khan"
] | Amid the swift progress of large language models (LLMs) and their evolution into large multimodal models (LMMs), significant strides have been made in high-resource languages such as English and Chinese. While Arabic LLMs have seen notable progress, Arabic LMMs remain largely unexplored, often narrowly focusing on a few specific aspects of the language and visual understanding. To bridge this gap, we introduce AIN (the Arabic Inclusive Multimodal Model), designed to excel across diverse domains. AIN is an English-Arabic bilingual LMM designed to excel in English and Arabic, leveraging 3.6 million carefully constructed, high-quality Arabic-English multimodal data samples. AIN demonstrates state-of-the-art Arabic performance, while also possessing strong English-language visual capabilities. On the recent CAMEL-Bench benchmark comprising 38 sub-domains, including multi-image understanding, complex visual perception, handwritten document understanding, video understanding, medical imaging, plant diseases, and remote sensing-based land use understanding, our AIN demonstrates strong performance, with the 7B model outperforming GPT-4o by an absolute gain of 3.4% averaged over eight domains and 38 sub-domains. AIN's superior capabilities position it as a significant step toward empowering Arabic speakers with advanced multimodal generative AI tools across diverse applications. |
|
2025-02-04T00:00:00 | 2502.01441 | Improved Training Technique for Latent Consistency Models | [
"Quan Dao",
"Khanh Doan",
"Di Liu",
"Trung Le",
"Dimitris Metaxas"
] | https://github.com/quandao10/sLCT | Consistency models are a new family of generative models capable of producing high-quality samples in either a single step or multiple steps. Recently, consistency models have demonstrated impressive performance, achieving results on par with diffusion models in the pixel space. However, the success of scaling consistency training to large-scale datasets, particularly for text-to-image and video generation tasks, is determined by performance in the latent space. In this work, we analyze the statistical differences between pixel and latent spaces, discovering that latent data often contains highly impulsive outliers, which significantly degrade the performance of iCT in the latent space. To address this, we replace Pseudo-Huber losses with Cauchy losses, effectively mitigating the impact of outliers. Additionally, we introduce a diffusion loss at early timesteps and employ optimal transport (OT) coupling to further enhance performance. Lastly, we introduce the adaptive scaling-c scheduler to manage the robust training process and adopt Non-scaling LayerNorm in the architecture to better capture the statistics of the features and reduce outlier impact. With these strategies, we successfully train latent consistency models capable of high-quality sampling with one or two steps, significantly narrowing the performance gap between latent consistency and diffusion models. The implementation is released here: https://github.com/quandao10/sLCT/ |
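The loss substitution at the heart of the training recipe above is easy to state concretely. Below is a minimal PyTorch sketch, assuming a per-sample squared distance between predictions; the constants `c` and `gamma` are illustrative stand-ins, not the paper's exact parameterization.

```python
import torch

def pseudo_huber_loss(x, y, c=0.03):
    # Smooth L1-like loss; large residuals still grow roughly linearly.
    return torch.sqrt((x - y).pow(2).sum(dim=-1) + c ** 2) - c

def cauchy_loss(x, y, gamma=0.03):
    # Cauchy (Lorentzian) loss grows only logarithmically, so impulsive
    # latent-space outliers contribute far less to the total loss.
    return torch.log1p((x - y).pow(2).sum(dim=-1) / gamma ** 2)

a, b = torch.randn(4, 16), torch.randn(4, 16)
b[0] += 100.0                      # simulate an impulsive latent outlier
print(pseudo_huber_loss(a, b))     # the outlier dominates the batch loss
print(cauchy_loss(a, b))           # the outlier is heavily down-weighted
```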
2025-02-04T00:00:00 | 2502.01456 | Process Reinforcement through Implicit Rewards | [
"Ganqu Cui",
"Lifan Yuan",
"Zefan Wang",
"Hanbin Wang",
"Wendi Li",
"Bingxiang He",
"Yuchen Fan",
"Tianyu Yu",
"Qixin Xu",
"Weize Chen",
"Jiarui Yuan",
"Huayu Chen",
"Kaiyan Zhang",
"Xingtai Lv",
"Shuo Wang",
"Yuan Yao",
"Xu Han",
"Hao Peng",
"Yu Cheng",
"Zhiyuan Liu",
"Maosong Sun",
"Bowen Zhou",
"Ning Ding"
] | Dense process rewards have proven to be a more effective alternative to sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. While dense rewards also offer an appealing choice for the reinforcement learning (RL) of LLMs, since their fine-grained rewards have the potential to address some inherent issues of outcome rewards, such as training efficiency and credit assignment, this potential remains largely unrealized. This can be primarily attributed to the challenges of training process reward models (PRMs) online, where collecting high-quality process labels is prohibitively expensive, making them particularly vulnerable to reward hacking. To address these challenges, we propose PRIME (Process Reinforcement through IMplicit rEwards), which enables online PRM updates using only policy rollouts and outcome labels, through implicit process rewards. PRIME combines well with various advantage functions and forgoes the dedicated reward model training phase that existing approaches require, substantially reducing the development overhead. We demonstrate PRIME's effectiveness on competition-level math and coding. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several key reasoning benchmarks over the SFT model. Notably, our resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks with 10% of its training data. |
|
2025-02-04T00:00:00 | 2502.01068 | FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation | [
"Dongwon Jo",
"Jiwon Song",
"Yulhwa Kim",
"Jae-Joon Kim"
] | https://github.com/dongwonjo/FastKV | While large language models (LLMs) excel at handling long-context sequences, they require substantial key-value (KV) caches to store contextual information, which can heavily burden computational efficiency and memory usage. Previous efforts to compress these KV caches primarily focused on reducing memory demands but were limited in improving latency. To address this issue, we introduce FastKV, a KV cache compression method designed to reduce latency for long-context sequences. To enhance processing speed while maintaining accuracy, FastKV adopts a novel Token-Selective Propagation (TSP) approach that retains the full context information in the initial layers of LLMs and selectively propagates only a portion of this information in deeper layers, even in the prefill stage. Additionally, FastKV incorporates grouped-query attention (GQA)-aware KV cache compression to exploit the advantages of GQA in both memory and computational efficiency. Our experimental results show that FastKV achieves 2.00× and 1.40× improvements in time-to-first-token (TTFT) and throughput, respectively, compared to HeadKV, the state-of-the-art KV cache compression method. Moreover, FastKV successfully maintains accuracy on long-context benchmarks at levels comparable to the baselines. Our code is available at https://github.com/dongwonjo/FastKV. |
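A rough sketch of the token-selective propagation idea is shown below; the attention-based importance heuristic, the keep ratio, and the function name are assumptions for illustration, not the released FastKV code.

```python
import torch

def token_selective_propagation(hidden, attn_weights, keep_ratio=0.25):
    """Select the most-attended prefill tokens and propagate only those
    to deeper layers (illustrative heuristic, not the official method)."""
    # attn_weights: [heads, q_len, k_len] from the layer where selection happens
    importance = attn_weights.mean(dim=(0, 1))             # [k_len]
    n_keep = max(1, int(importance.numel() * keep_ratio))
    idx = torch.topk(importance, n_keep).indices.sort().values
    return hidden[idx], idx

h = torch.randn(512, 256)          # prefill hidden states of one layer
attn = torch.rand(8, 512, 512)     # dummy attention weights
h_small, kept = token_selective_propagation(h, attn)      # 128 tokens kept
```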
2025-02-04T00:00:00 | 2502.01061 | OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models | [
"Gaojie Lin",
"Jianwen Jiang",
"Jiaqi Yang",
"Zerong Zheng",
"Chao Liang"
] | End-to-end human animation, such as audio-driven talking human generation, has undergone notable advancements in recent years. However, existing methods still struggle to scale up as large general video generation models do, limiting their potential in real applications. In this paper, we propose OmniHuman, a Diffusion Transformer-based framework that scales up data by mixing motion-related conditions into the training phase. To this end, we introduce two training principles for these mixed conditions, along with the corresponding model architecture and inference strategy. These designs enable OmniHuman to fully leverage data-driven motion generation, ultimately achieving highly realistic human video generation. More importantly, OmniHuman supports various portrait contents (face close-up, portrait, half-body, full-body), supports both talking and singing, handles human-object interactions and challenging body poses, and accommodates different image styles. Compared to existing end-to-end audio-driven methods, OmniHuman not only produces more realistic videos, but also offers greater flexibility in inputs. It also supports multiple driving modalities (audio-driven, video-driven and combined driving signals). Video samples are provided on the project page (https://omnihuman-lab.github.io). |
|
2025-02-04T00:00:00 | 2502.01100 | ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning | [
"Bill Yuchen Lin",
"Ronan Le Bras",
"Kyle Richardson",
"Ashish Sabharwal",
"Radha Poovendran",
"Peter Clark",
"Yejin Choi"
] | We investigate the logical reasoning capabilities of large language models (LLMs) and their scalability in complex non-monotonic reasoning. To this end, we introduce ZebraLogic, a comprehensive evaluation framework for assessing LLM reasoning performance on logic grid puzzles derived from constraint satisfaction problems (CSPs). ZebraLogic enables the generation of puzzles with controllable and quantifiable complexity, facilitating a systematic study of the scaling limits of models such as Llama, o1 models, and DeepSeek-R1. By encompassing a broad range of search space complexities and diverse logical constraints, ZebraLogic provides a structured environment to evaluate reasoning under increasing difficulty. Our results reveal a significant decline in accuracy as problem complexity grows -- a phenomenon we term the curse of complexity. This limitation persists even with larger models and increased inference-time computation, suggesting inherent constraints in current LLM reasoning capabilities. Additionally, we explore strategies to enhance logical reasoning, including Best-of-N sampling, backtracking mechanisms, and self-verification prompts. Our findings offer critical insights into the scalability of LLM reasoning, highlight fundamental limitations, and outline potential directions for improvement. |
|
2025-02-04T00:00:00 | 2502.01081 | The Jumping Reasoning Curve? Tracking the Evolution of Reasoning Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles | [
"Vernon Y. H. Toh",
"Yew Ken Chia",
"Deepanway Ghosal",
"Soujanya Poria"
] | https://github.com/declare-lab/LLM-PuzzleTest | The releases of OpenAI's o1 and o3 mark a significant paradigm shift in Large Language Models towards advanced reasoning capabilities. Notably, o3 outperformed humans in novel problem-solving and skill acquisition on the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). However, this benchmark is limited to symbolic patterns, whereas humans often perceive and reason about multimodal scenarios involving both vision and language data. Thus, there is an urgent need to investigate advanced reasoning capabilities in multimodal tasks. To this end, we track the evolution of the GPT-[n] and o-[n] series models on challenging multimodal puzzles that require fine-grained visual perception together with abstract or algorithmic reasoning. The superior performance of o1 comes at nearly 750 times the computational cost of GPT-4o, raising concerns about its efficiency. Our results reveal a clear upward trend in reasoning capabilities across model iterations, with notable performance jumps across GPT-series models and subsequently to o1. Nonetheless, we observe that the o1 model still struggles with simple multimodal puzzles requiring abstract reasoning. Furthermore, its performance on algorithmic puzzles remains poor. We plan to continuously track new models in the series and update our results in this paper accordingly. All resources used in this evaluation are openly available at https://github.com/declare-lab/LLM-PuzzleTest. |
2025-02-04T00:00:00 | 2502.01637 | Scaling Embedding Layers in Language Models | [
"Da Yu",
"Edith Cohen",
"Badih Ghazi",
"Yangsibo Huang",
"Pritish Kamath",
"Ravi Kumar",
"Daogao Liu",
"Chiyuan Zhang"
] | We propose SCONE (Scalable, Contextualized, Offloaded, N-gram Embedding), a method for extending input embedding layers to enhance language model performance as layer size scales. To avoid increased decoding costs, SCONE retains the original vocabulary while introducing embeddings for a set of frequent n-grams. These embeddings provide contextualized representation for each input token and are learned with a separate model during training. During inference, they are precomputed and stored in off-accelerator memory with minimal impact on inference speed. SCONE enables two new scaling strategies: increasing the number of cached n-gram embeddings and scaling the model used to learn them, all while maintaining fixed inference-time FLOPS. We show that scaling both aspects allows SCONE to outperform a 1.9B parameter baseline across diverse corpora, while using only half the inference-time FLOPS. |
|
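The n-gram embedding mechanism described in the SCONE abstract can be sketched in a few lines. The hash-based bigram lookup below is an assumption made for illustration; SCONE learns and precomputes its n-gram embeddings with a separate model, which is not reproduced here.

```python
import torch
import torch.nn as nn

class NGramAugmentedEmbedding(nn.Module):
    """Token embedding plus a cached bigram embedding (illustrative sketch)."""

    def __init__(self, vocab_size, dim, n_gram_slots):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        # Stand-in for the precomputed table kept in off-accelerator memory.
        self.ngram_emb = nn.Embedding(n_gram_slots, dim)
        self.n_gram_slots = n_gram_slots

    def forward(self, token_ids):               # token_ids: [batch, seq]
        x = self.tok_emb(token_ids)
        prev = torch.roll(token_ids, shifts=1, dims=1)
        prev[:, 0] = 0                          # no left context for the first token
        # Hash each (prev, current) bigram into a fixed-size slot table.
        slots = (prev * 1_000_003 + token_ids) % self.n_gram_slots
        return x + self.ngram_emb(slots)

emb = NGramAugmentedEmbedding(vocab_size=32000, dim=64, n_gram_slots=100_000)
out = emb(torch.randint(0, 32000, (2, 10)))     # [2, 10, 64]
```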
2025-02-04T00:00:00 | 2502.01591 | Improving Transformer World Models for Data-Efficient RL | [
"Antoine Dedieu",
"Joseph Ortiz",
"Xinghua Lou",
"Carter Wendelken",
"Wolfgang Lehrach",
"J Swaroop Guntupalli",
"Miguel Lazaro-Gredilla",
"Kevin Patrick Murphy"
] | We present an approach to model-based RL that achieves new state-of-the-art performance on the challenging Craftax-classic benchmark, an open-world 2D survival game that requires agents to exhibit a wide range of general abilities -- such as strong generalization, deep exploration, and long-term reasoning. With a series of careful design choices aimed at improving sample efficiency, our MBRL algorithm achieves a reward of 67.4% after only 1M environment steps, significantly outperforming DreamerV3, which achieves 53.2%, and, for the first time, exceeds human performance of 65.0%. Our method starts by constructing a SOTA model-free baseline, using a novel policy architecture that combines CNNs and RNNs. We then add three improvements to the standard MBRL setup: (a) "Dyna with warmup", which trains the policy on real and imaginary data, (b) a "nearest neighbor tokenizer" on image patches, which improves the scheme for creating the transformer world model (TWM) inputs, and (c) "block teacher forcing", which allows the TWM to reason jointly about the future tokens of the next timestep. |
|
2025-02-04T00:00:00 | 2502.01534 | Preference Leakage: A Contamination Problem in LLM-as-a-judge | [
"Dawei Li",
"Renliang Sun",
"Yue Huang",
"Ming Zhong",
"Bohan Jiang",
"Jiawei Han",
"Xiangliang Zhang",
"Wei Wang",
"Huan Liu"
] | https://github.com/David-Li0406/Preference-Leakage | Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods in model development. While their combination significantly enhances the efficiency of model training and evaluation, little attention has been given to the potential contamination brought by this new model development paradigm. In this work, we expose preference leakage, a contamination problem in LLM-as-a-judge caused by the relatedness between the synthetic data generators and LLM-based evaluators. To study this issue, we first define three common relatednesses between data generator LLM and judge LLM: being the same model, having an inheritance relationship, and belonging to the same model family. Through extensive experiments, we empirically confirm the bias of judges towards their related student models caused by preference leakage across multiple LLM baselines and benchmarks. Further analysis suggests that preference leakage is a pervasive issue that is harder to detect compared to previously identified biases in LLM-as-a-judge scenarios. All of these findings imply that preference leakage is a widespread and challenging problem in the area of LLM-as-a-judge. We release all codes and data at: https://github.com/David-Li0406/Preference-Leakage. |
2025-02-04T00:00:00 | 2502.01636 | Lifelong Sequential Knowledge Editing without Model Degradation | [
"Akshat Gupta",
"Phudish Prateepamornkul",
"Maochuan Lu",
"Ahmed Alaa",
"Thomas Hartvigsen",
"Gopala Anumanchipalli"
] | Prior work in parameter-modifying knowledge editing has shown that large-scale sequential editing leads to significant model degradation. In this paper, we study the reasons behind this and scale sequential knowledge editing to 10,000 sequential edits, while maintaining the downstream performance of the original model. We first show that locate-then-edit knowledge editing methods lead to overfitting on the edited facts. We also show that continuous knowledge editing using these methods leads to disproportionate growth in the norm of the edited matrix. We then provide a crucial insight into the inner workings of locate-then-edit methods. We show that norm-growth is a hidden trick employed by these methods that gives larger importance to the output activations produced from the edited layers. With this "importance hacking", the edited layers provide a much larger contribution to the model's output. To mitigate these issues, we present ENCORE - Early stopping and Norm-Constrained Robust knowledge Editing. ENCORE controls for overfitting and the disproportionate norm-growth to enable long-term sequential editing, where we are able to perform up to 10,000 sequential edits without loss of downstream performance. ENCORE is also 61% faster than MEMIT and 64% faster than AlphaEdit on Llama3-8B. |
|
2025-02-04T00:00:00 | 2502.01237 | The Differences Between Direct Alignment Algorithms are a Blur | [
"Alexey Gorbatovski",
"Boris Shaposhnikov",
"Viacheslav Sinii",
"Alexey Malakhov",
"Daniil Gavrilov"
] | Direct Alignment Algorithms (DAAs) simplify language model alignment by replacing reinforcement learning (RL) and reward modeling (RM) in Reinforcement Learning from Human Feedback (RLHF) with direct policy optimization. DAAs can be classified by their ranking losses (pairwise vs. pointwise), by the rewards used in those losses (e.g., likelihood ratios of policy and reference policy, or odds ratios), or by whether a Supervised Fine-Tuning (SFT) phase is required (two-stage vs. one-stage). We first show that one-stage methods underperform two-stage methods. To address this, we incorporate an explicit SFT phase and introduce the beta parameter, controlling the strength of preference optimization, into single-stage ORPO and ASFT. These modifications improve their performance in Alpaca Eval 2 by +3.46 (ORPO) and +8.27 (ASFT), matching two-stage methods like DPO. Further analysis reveals that the key factor is whether the approach uses pairwise or pointwise objectives, rather than the specific implicit reward or loss function. These results highlight the importance of careful evaluation to avoid premature claims of performance gains or overall superiority in alignment algorithms. |
|
2025-02-04T00:00:00 | 2501.18636 | SafeRAG: Benchmarking Security in Retrieval-Augmented Generation of Large Language Model | [
"Xun Liang",
"Simin Niu",
"Zhiyu Li",
"Sensen Zhang",
"Hanyu Wang",
"Feiyu Xiong",
"Jason Zhaoxin Fan",
"Bo Tang",
"Shichao Song",
"Mengwei Wang",
"Jiawei Yang"
] | https://github.com/IAAR-Shanghai/SafeRAG | The indexing-retrieval-generation paradigm of retrieval-augmented generation (RAG) has been highly successful in solving knowledge-intensive tasks by integrating external knowledge into large language models (LLMs). However, the incorporation of external and unverified knowledge increases the vulnerability of LLMs because attackers can perform attack tasks by manipulating knowledge. In this paper, we introduce a benchmark named SafeRAG designed to evaluate RAG security. First, we classify attack tasks into silver noise, inter-context conflict, soft ad, and white Denial-of-Service. Next, we construct a RAG security evaluation dataset (i.e., the SafeRAG dataset), largely by hand, for each task. We then utilize the SafeRAG dataset to simulate various attack scenarios that RAG may encounter. Experiments conducted on 14 representative RAG components demonstrate that RAG exhibits significant vulnerability to all attack tasks, and that even the most apparent attack task can easily bypass existing retrievers, filters, or advanced LLMs, resulting in the degradation of RAG service quality. Code is available at: https://github.com/IAAR-Shanghai/SafeRAG. |
2025-02-04T00:00:00 | 2502.00314 | A Study on the Performance of U-Net Modifications in Retroperitoneal Tumor Segmentation | [
"Moein Heidari",
"Ehsan Khodapanah Aghdam",
"Alexander Manzella",
"Daniel Hsu",
"Rebecca Scalabrino",
"Wenjin Chen",
"David J. Foran",
"Ilker Hacihaliloglu"
] | The retroperitoneum hosts a variety of tumors, including rare benign and malignant types, which pose diagnostic and treatment challenges due to their infrequency and proximity to vital structures. Estimating tumor volume is difficult due to their irregular shapes, and manual segmentation is time-consuming. Automatic segmentation using U-Net and its variants, incorporating Vision Transformer (ViT) elements, has shown promising results but struggles with high computational demands. To address this, architectures like the Mamba State Space Model (SSM) and Extended Long-Short Term Memory (xLSTM) offer efficient solutions by handling long-range dependencies with lower resource consumption. This study evaluates U-Net enhancements, including CNN, ViT, Mamba, and xLSTM, on a new in-house CT dataset and a public organ segmentation dataset. The proposed ViLU-Net model integrates Vi-blocks for improved segmentation. Results highlight xLSTM's efficiency in the U-Net framework. The code is publicly accessible on GitHub. |
|
2025-02-04T00:00:00 | 2502.01142 | DeepRAG: Thinking to Retrieval Step by Step for Large Language Models | [
"Xinyan Guan",
"Jiali Zeng",
"Fandong Meng",
"Chunlei Xin",
"Yaojie Lu",
"Hongyu Lin",
"Xianpei Han",
"Le Sun",
"Jie Zhou"
] | Large Language Models (LLMs) have shown remarkable potential in reasoning, yet they still suffer from severe factual hallucinations due to the limited timeliness, accuracy, and coverage of their parametric knowledge. Meanwhile, integrating reasoning with retrieval-augmented generation (RAG) remains challenging due to ineffective task decomposition and redundant retrieval, which can introduce noise and degrade response quality. In this paper, we propose DeepRAG, a framework that models retrieval-augmented reasoning as a Markov Decision Process (MDP), enabling strategic and adaptive retrieval. By iteratively decomposing queries, DeepRAG dynamically determines whether to retrieve external knowledge or rely on parametric reasoning at each step. Experiments show that DeepRAG improves retrieval efficiency while raising answer accuracy by 21.99%, demonstrating its effectiveness in optimizing retrieval-augmented reasoning. |
|
2025-02-04T00:00:00 | 2502.01208 | Almost Surely Safe Alignment of Large Language Models at Inference-Time | [
"Xiaotong Ji",
"Shyam Sundhar Ramesh",
"Matthieu Zimmer",
"Ilija Bogunovic",
"Jun Wang",
"Haitham Bou Ammar"
] | Even highly capable large language models (LLMs) can produce biased or unsafe responses, and alignment techniques aimed at mitigating this issue, such as RLHF, are expensive and prone to overfitting as they retrain the LLM. This paper introduces a novel inference-time alignment approach that ensures LLMs generate safe responses almost surely, i.e., with a probability approaching one. We achieve this by framing the safe generation of inference-time responses as a constrained Markov decision process within the LLM's latent space. Crucially, we augment the latent state with a safety state that tracks the evolution of safety constraints, which enables us to demonstrate formal safety guarantees upon solving the MDP in the latent space. Building on this foundation, we propose InferenceGuard, a practical implementation that safely aligns LLMs without modifying the model weights. Empirically, we demonstrate that InferenceGuard effectively balances safety and task performance, outperforming existing inference-time alignment methods in generating safe and aligned responses. |
|
2025-02-04T00:00:00 | 2502.01584 | PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models | [
"Carolyn Jane Anderson",
"Joydeep Biswas",
"Aleksander Boruch-Gruszecki",
"Federico Cassano",
"Molly Q Feldman",
"Arjun Guha",
"Francesca Lucchetti",
"Zixuan Wu"
] | Existing benchmarks for frontier models often test specialized, ``PhD-level'' knowledge that is difficult for non-experts to grasp. In contrast, we present a benchmark based on the NPR Sunday Puzzle Challenge that requires only general knowledge. Our benchmark is challenging for both humans and models; however, correct solutions are easy to verify, and models' mistakes are easy to spot. Our work reveals capability gaps that are not evident in existing benchmarks: OpenAI o1 significantly outperforms other reasoning models that perform comparably on benchmarks testing specialized knowledge. Furthermore, our analysis of reasoning outputs uncovers new kinds of failures. DeepSeek R1, for instance, often concedes with ``I give up'' before providing an answer that it knows is wrong. R1 can also be remarkably ``uncertain'' in its output, and in rare cases it does not ``finish thinking,'' which suggests the need for an inference-time technique to ``wrap up'' before the context window limit is reached. We also quantify the effectiveness of reasoning longer with R1 and Gemini Thinking to identify the point beyond which more reasoning is unlikely to improve accuracy on our benchmark. |
|
2025-02-04T00:00:00 | 2501.18055 | Current Pathology Foundation Models are unrobust to Medical Center Differences | [
"Edwin D. de Jong",
"Eric Marcus",
"Jonas Teuwen"
] | Pathology Foundation Models (FMs) hold great promise for healthcare. Before they can be used in clinical practice, it is essential to ensure they are robust to variations between medical centers. We measure whether pathology FMs focus on biological features like tissue and cancer type, or on the well known confounding medical center signatures introduced by staining procedure and other differences. We introduce the Robustness Index. This novel robustness metric reflects to what degree biological features dominate confounding features. Ten current publicly available pathology FMs are evaluated. We find that all current pathology foundation models evaluated represent the medical center to a strong degree. Significant differences in the robustness index are observed. Only one model so far has a robustness index greater than one, meaning biological features dominate confounding features, but only slightly. A quantitative approach to measure the influence of medical center differences on FM-based prediction performance is described. We analyze the impact of unrobustness on classification performance of downstream models, and find that cancer-type classification errors are not random, but specifically attributable to same-center confounders: images of other classes from the same medical center. We visualize FM embedding spaces, and find these are more strongly organized by medical centers than by biological factors. As a consequence, the medical center of origin is predicted more accurately than the tissue source and cancer type. The robustness index introduced here is provided with the aim of advancing progress towards clinical adoption of robust and reliable pathology FMs. |
|
2025-02-04T00:00:00 | 2502.01639 | SliderSpace: Decomposing the Visual Capabilities of Diffusion Models | [
"Rohit Gandikota",
"Zongze Wu",
"Richard Zhang",
"David Bau",
"Eli Shechtman",
"Nick Kolkin"
] | We present SliderSpace, a framework for automatically decomposing the visual capabilities of diffusion models into controllable and human-understandable directions. Unlike existing control methods that require a user to specify attributes for each edit direction individually, SliderSpace discovers multiple interpretable and diverse directions simultaneously from a single text prompt. Each direction is trained as a low-rank adaptor, enabling compositional control and the discovery of surprising possibilities in the model's latent space. Through extensive experiments on state-of-the-art diffusion models, we demonstrate SliderSpace's effectiveness across three applications: concept decomposition, artistic style exploration, and diversity enhancement. Our quantitative evaluation shows that SliderSpace-discovered directions decompose the visual structure of the model's knowledge effectively, offering insights into the latent capabilities encoded within diffusion models. User studies further validate that our method produces more diverse and useful variations compared to baselines. Our code, data and trained weights are available at https://sliderspace.baulab.info |
|
2025-02-04T00:00:00 | 2502.00698 | MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models | [
"Huanqia Cai",
"Yijun Yang",
"Winston Hu"
] | IQ testing has served as a foundational methodology for evaluating human cognitive capabilities, deliberately decoupling assessment from linguistic background, language proficiency, or domain-specific knowledge to isolate core competencies in abstraction and reasoning. Yet, artificial intelligence research currently lacks systematic benchmarks to quantify these critical cognitive dimensions in multimodal systems. To address this critical gap, we propose MM-IQ, a comprehensive evaluation framework comprising 2,710 meticulously curated test items spanning 8 distinct reasoning paradigms. Through systematic evaluation of leading open-source and proprietary multimodal models, our benchmark reveals striking limitations: even state-of-the-art architectures achieve only marginally superior performance to random chance (27.49% vs. 25% baseline accuracy). This substantial performance chasm highlights the inadequacy of current multimodal systems in approximating fundamental human reasoning capacities, underscoring the need for paradigm-shifting advancements to bridge this cognitive divide. |
|
2025-02-04T00:00:00 | 2502.01341 | AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding | [
"Ahmed Masry",
"Juan A. Rodriguez",
"Tianyu Zhang",
"Suyuchen Wang",
"Chao Wang",
"Aarash Feizi",
"Akshay Kalkunte Suresh",
"Abhay Puri",
"Xiangru Jian",
"Pierre-André Noël",
"Sathwik Tejaswi Madhusudhan",
"Marco Pedersoli",
"Bang Liu",
"Nicolas Chapados",
"Yoshua Bengio",
"Enamul Hoque",
"Christopher Pal",
"Issam H. Laradji",
"David Vazquez",
"Perouz Taslakian",
"Spandana Gella",
"Sai Rajeswar"
] | Aligning visual features with language embeddings is a key challenge in vision-language models (VLMs). The performance of such models hinges on having a good connector that maps visual features generated by a vision encoder to a shared embedding space with the LLM while preserving semantic similarity. Existing connectors, such as multilayer perceptrons (MLPs), often produce out-of-distribution or noisy inputs, leading to misalignment between the modalities. In this work, we propose a novel vision-text alignment method, AlignVLM, that maps visual features to a weighted average of LLM text embeddings. Our approach leverages the linguistic priors encoded by the LLM to ensure that visual features are mapped to regions of the space that the LLM can effectively interpret. AlignVLM is particularly effective for document understanding tasks, where scanned document images must be accurately mapped to their textual content. Our extensive experiments show that AlignVLM achieves state-of-the-art performance compared to prior alignment methods. We provide further analysis demonstrating improved vision-text feature alignment and robustness to noise. |
|
2025-02-04T00:00:00 | 2502.01619 | Learning to Generate Unit Tests for Automated Debugging | [
"Archiki Prasad",
"Elias Stengel-Eskin",
"Justin Chih-Yao Chen",
"Zaid Khan",
"Mohit Bansal"
] | Unit tests (UTs) play an instrumental role in assessing code correctness as well as providing feedback to a large language model (LLM) as it iteratively debugs faulty code, motivating automated test generation. However, we uncover a trade-off between generating unit test inputs that reveal errors when given faulty code and correctly predicting the unit test output without access to the gold solution. To address this trade-off, we propose UTGen, which teaches LLMs to generate unit test inputs that reveal errors along with their correct expected outputs based on task descriptions and candidate code. We integrate UTGen into UTDebug, a robust debugging pipeline that uses generated tests to help LLMs debug effectively. Since model-generated tests can provide noisy signals (e.g., from incorrectly predicted outputs), UTDebug (i) scales UTGen via test-time compute to improve UT output prediction, and (ii) validates and back-tracks edits based on multiple generated UTs to avoid overfitting. We show that UTGen outperforms UT generation baselines by 7.59% based on a metric measuring the presence of both error-revealing UT inputs and correct UT outputs. When used with UTDebug, we find that feedback from UTGen's unit tests improves the pass@1 accuracy of Qwen-2.5 7B on HumanEvalFix and our own harder debugging split of MBPP+ by over 3% and 12.35%, respectively, over other LLM-based UT generation baselines. |
|
2025-02-04T00:00:00 | 2502.01126 | Language Models Prefer What They Know: Relative Confidence Estimation via Confidence Preferences | [
"Vaishnavi Shrivastava",
"Ananya Kumar",
"Percy Liang"
] | Language models (LMs) should provide reliable confidence estimates to help users detect mistakes in their outputs and defer to human experts when necessary. Asking a language model to assess its confidence ("Score your confidence from 0-1.") is a natural way of evaluating its uncertainty. However, models struggle to provide absolute assessments of confidence (i.e. judging confidence in answering a question independent of other questions) and the coarse-grained scores they produce are not useful for evaluating the correctness of their answers. We propose relative confidence estimation, where we match up questions against each other and ask the model to make relative judgments of confidence ("Which question are you more confident in answering correctly?"). Treating each question as a "player" in a series of matchups against other questions and the model's preferences as match outcomes, we can use rank aggregation methods like Elo rating and Bradley-Terry to translate the model's confidence preferences into confidence scores. We evaluate relative confidence estimation against absolute confidence estimation and self-consistency confidence methods on five state-of-the-art LMs -- GPT-4, GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, and Llama 3.1 405B -- across 14 challenging STEM, social science, and commonsense reasoning question answering tasks. Our results demonstrate that relative confidence estimation consistently provides more reliable confidence scores than absolute confidence estimation, with average gains of 3.5% in selective classification AUC over direct absolute confidence estimation methods and 1.7% over self-consistency approaches across all models and datasets. |
|
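The aggregation step described in the abstract above can be illustrated with a small Bradley-Terry fit. The minorization-maximization update below is a generic sketch, not the authors' exact pipeline, and the matchup format is an assumption for illustration.

```python
import numpy as np

def bradley_terry_scores(n_items, matchups, iters=200):
    """Fit Bradley-Terry strengths from pairwise preference outcomes.

    matchups: list of (winner, loser) index pairs, e.g. the model saying it is
    more confident in answering question `winner` than question `loser`.
    """
    wins = np.zeros((n_items, n_items))
    for w, l in matchups:
        wins[w, l] += 1
    strength = np.ones(n_items)
    for _ in range(iters):
        for i in range(n_items):
            total_wins = wins[i].sum()
            games = wins[i] + wins[:, i]                      # matches involving i
            denom = (games / (strength[i] + strength)).sum()
            strength[i] = total_wins / max(denom, 1e-9)
        strength /= strength.sum()                            # fix the overall scale
    return strength

# Question 0 is preferred over 1 and 2; question 1 is preferred over 2.
print(bradley_terry_scores(3, [(0, 1), (0, 2), (1, 2), (0, 1)]))
```

The fitted strengths then play the role of relative confidence scores that can be thresholded for selective classification.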
2025-02-04T00:00:00 | 2502.01572 | MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation | [
"Yiren Song",
"Cheng Liu",
"Mike Zheng Shou"
] | A hallmark of human intelligence is the ability to create complex artifacts through structured multi-step processes. Generating procedural tutorials with AI is a longstanding but challenging goal, facing three key obstacles: (1) scarcity of multi-task procedural datasets, (2) maintaining logical continuity and visual consistency between steps, and (3) generalizing across multiple domains. To address these challenges, we propose a multi-domain dataset covering 21 tasks with over 24,000 procedural sequences. Building upon this foundation, we introduce MakeAnything, a framework based on the diffusion transformer (DIT), which leverages fine-tuning to activate the in-context capabilities of DIT for generating consistent procedural sequences. We introduce asymmetric low-rank adaptation (LoRA) for image generation, which balances generalization capabilities and task-specific performance by freezing encoder parameters while adaptively tuning decoder layers. Additionally, our ReCraft model enables image-to-process generation through spatiotemporal consistency constraints, allowing static images to be decomposed into plausible creation sequences. Extensive experiments demonstrate that MakeAnything surpasses existing methods, setting new performance benchmarks for procedural generation tasks. |
|
2025-02-04T00:00:00 | 2502.00987 | RandLoRA: Full-rank parameter-efficient fine-tuning of large models | [
"Paul Albert",
"Frederic Z. Zhang",
"Hemanth Saratchandran",
"Cristian Rodriguez-Opazo",
"Anton van den Hengel",
"Ehsan Abbasnejad"
] | Low-Rank Adaptation (LoRA) and its variants have shown impressive results in reducing the number of trainable parameters and memory requirements of large transformer networks while maintaining fine-tuning performance. However, the low-rank nature of the weight update inherently limits the representation power of fine-tuned models, potentially compromising performance on complex tasks. This raises a critical question: when a performance gap between LoRA and standard fine-tuning is observed, is it due to the reduced number of trainable parameters or the rank deficiency? This paper aims to answer this question by introducing RandLoRA, a parameter-efficient method that performs full-rank updates using learned linear combinations of low-rank, non-trainable random matrices. Our method limits the number of trainable parameters by restricting optimization to diagonal scaling matrices applied to the fixed random matrices. This allows us to effectively overcome the low-rank limitations while maintaining parameter and memory efficiency during training. Through extensive experimentation across vision, language, and vision-language benchmarks, we systematically evaluate the limitations of LoRA and existing random basis methods. Our findings reveal that full-rank updates are beneficial across vision and language tasks individually, and even more so for vision-language tasks, where RandLoRA significantly reduces -- and sometimes eliminates -- the performance gap between standard fine-tuning and LoRA, demonstrating its efficacy. |
|
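The parameterization described in the RandLoRA abstract (trainable diagonal scalings over frozen random low-rank bases) can be sketched compactly. The dimensions, initialization scales, and class name below are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class RandLoRALinear(nn.Module):
    """Full-rank weight update from frozen random low-rank bases (sketch)."""

    def __init__(self, in_dim, out_dim, rank=4, n_bases=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02,
                                   requires_grad=False)       # frozen base weight
        # Frozen random low-rank factors; stacking several can reach full rank.
        self.A = nn.Parameter(torch.randn(n_bases, out_dim, rank) / rank ** 0.5,
                              requires_grad=False)
        self.B = nn.Parameter(torch.randn(n_bases, rank, in_dim) / in_dim ** 0.5,
                              requires_grad=False)
        # Only these small diagonal scalings are trained.
        self.scale = nn.Parameter(torch.zeros(n_bases, rank))

    def forward(self, x):
        # delta_W = sum_k A_k diag(scale_k) B_k
        delta = torch.einsum('kor,kr,kri->oi', self.A, self.scale, self.B)
        return x @ (self.weight + delta).t()

layer = RandLoRALinear(32, 32)
y = layer(torch.randn(5, 32))                                 # [5, 32]
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 32 trainable
```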
2025-02-04T00:00:00 | 2502.02095 | LongDPO: Unlock Better Long-form Generation Abilities for LLMs via Critique-augmented Stepwise Information | [
"Bowen Ping",
"Jiali Zeng",
"Fandong Meng",
"Shuo Wang",
"Jie Zhou",
"Shanghang Zhang"
] | Long-form generation is crucial for academic paper writing and repo-level code generation. Despite this, current models, including GPT-4o, still exhibit unsatisfactory performance. Existing methods that utilize preference learning with outcome supervision often fail to provide detailed feedback for extended contexts. This shortcoming can lead to content that does not fully satisfy query requirements, resulting in issues like length deviations and diminished quality. In this paper, we propose enhancing long-form generation by incorporating process supervision. We employ Monte Carlo Tree Search to gather stepwise preference pairs, utilizing a global memory pool to maintain consistency. To address the issue of suboptimal candidate selection, we integrate external critiques to refine and improve the quality of the preference pairs. Finally, we apply step-level DPO using the collected stepwise preference pairs. Experimental results show that our method improves length adherence and quality on long-form generation benchmarks, with almost lossless performance on general benchmarks across various model backbones. |
|
2025-02-05T00:00:00 | 2502.02584 | QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search | [
"Zongyu Lin",
"Yao Tang",
"Xingcheng Yao",
"Da Yin",
"Ziniu Hu",
"Yizhou Sun",
"Kai-Wei Chang"
] | Language agents have become a promising solution to complex interactive tasks. One of the key ingredients to the success of language agents is the reward model on the trajectory of the agentic workflow, which provides valuable guidance during training or inference. However, due to the lack of annotations of intermediate interactions, most existing works use an outcome reward model to optimize policies across entire trajectories. This may lead to sub-optimal policies and hinder the overall performance. To address this, we propose QLASS (Q-guided Language Agent Stepwise Search), to automatically generate annotations by estimating Q-values in a stepwise manner for open language agents. By introducing a reasoning tree and performing process reward modeling, QLASS provides effective intermediate guidance for each step. With the stepwise guidance, we propose a Q-guided generation strategy to enable language agents to better adapt to long-term value, resulting in significant performance improvement during model inference on complex interactive agent tasks. Notably, even with almost half the annotated data, QLASS retains strong performance, demonstrating its efficiency in handling limited supervision. We also empirically demonstrate that QLASS can lead to more effective decision making through qualitative analysis. We will release our code and data. |
|
2025-02-05T00:00:00 | 2502.01718 | ACECODER: Acing Coder RL via Automated Test-Case Synthesis | [
"Huaye Zeng",
"Dongfu Jiang",
"Haozhe Wang",
"Ping Nie",
"Xiaotong Chen",
"Wenhu Chen"
] | Most progress in recent coder models has been driven by supervised fine-tuning (SFT), while the potential of reinforcement learning (RL) remains largely unexplored, primarily due to the lack of reliable reward data/models in the code domain. In this paper, we address this challenge by leveraging automated large-scale test-case synthesis to enhance code model training. Specifically, we design a pipeline that generates extensive (question, test-cases) pairs from existing code data. Using these test cases, we construct preference pairs based on pass rates over sampled programs to train reward models with a Bradley-Terry loss. This yields an average 10-point improvement for Llama-3.1-8B-Ins and a 5-point improvement for Qwen2.5-Coder-7B-Ins through best-of-32 sampling, making the 7B model on par with 236B DeepSeek-V2.5. Furthermore, we conduct reinforcement learning with both reward models and test-case pass rewards, leading to consistent improvements across HumanEval, MBPP, BigCodeBench, and LiveCodeBench (V4). Notably, we follow R1-style training, starting from Qwen2.5-Coder-base directly, and show that our RL training can improve the model on HumanEval-plus by over 25\% and on MBPP-plus by 6\% in merely 80 optimization steps. We believe our results highlight the huge potential of reinforcement learning in coder models. |
|
2025-02-05T00:00:00 | 2502.02508 | Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search | [
"Maohao Shen",
"Guangtao Zeng",
"Zhenting Qi",
"Zhang-Wei Hong",
"Zhenfang Chen",
"Wei Lu",
"Gregory Wornell",
"Subhro Das",
"David Cox",
"Chuang Gan"
] | Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a two-player system. Despite external guidance, the effectiveness of this system demonstrates the potential of a single LLM to tackle complex tasks. Thus, we pose a new research problem: Can we internalize the searching capabilities to fundamentally enhance the reasoning abilities of a single LLM? This work explores an orthogonal direction focusing on post-training LLMs for autoregressive searching (i.e., an extended reasoning process with self-reflection and self-exploration of new strategies). To achieve this, we propose Chain-of-Action-Thought (COAT) reasoning and a two-stage training paradigm: 1) a small-scale format tuning stage to internalize the COAT reasoning format and 2) a large-scale self-improvement stage leveraging reinforcement learning. Our approach results in Satori, a 7B LLM trained on open-source models and data. Extensive empirical evaluations demonstrate that Satori achieves state-of-the-art performance on mathematical reasoning benchmarks while exhibiting strong generalization to out-of-domain tasks. Code, data, and models will be fully open-sourced. |
|
2025-02-05T00:00:00 | 2502.01941 | Can LLMs Maintain Fundamental Abilities under KV Cache Compression? | [
"Xiang Liu",
"Zhenheng Tang",
"Hong Chen",
"Peijie Dong",
"Zeyu Li",
"Xiuze Zhou",
"Bo Li",
"Xuming Hu",
"Xiaowen Chu"
] | This paper investigates an under-explored challenge in large language models (LLMs): the impact of KV cache compression methods on LLMs' fundamental capabilities. While existing methods achieve impressive compression ratios on long-context benchmarks, their effects on core model capabilities remain understudied. We present a comprehensive empirical study evaluating prominent KV cache compression methods across diverse tasks, spanning world knowledge, commonsense reasoning, arithmetic reasoning, code generation, safety, and long-context understanding and generation. Our analysis reveals that KV cache compression methods exhibit task-specific performance degradation. Arithmetic reasoning tasks prove particularly sensitive to aggressive compression, with different methods showing performance drops of 17.4%-43.3%. Notably, the DeepSeek R1 Distill model exhibits more robust compression tolerance compared to instruction-tuned models, showing only 9.67%-25.53% performance degradation. Based on our analysis of attention patterns and cross-task compression performance, we propose ShotKV, a novel compression approach that distinctly handles prefill and decoding phases while maintaining shot-level semantic coherence. Empirical results show that ShotKV achieves 9%-18% performance improvements on long-context generation tasks under aggressive compression ratios. |
|
2025-02-05T00:00:00 | 2502.02492 | VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models | [
"Hila Chefer",
"Uriel Singer",
"Amit Zohar",
"Yuval Kirstain",
"Adam Polyak",
"Yaniv Taigman",
"Lior Wolf",
"Shelly Sheynin"
] | Despite tremendous recent progress, generative video models still struggle to capture real-world motion, dynamics, and physics. We show that this limitation arises from the conventional pixel reconstruction objective, which biases models toward appearance fidelity at the expense of motion coherence. To address this, we introduce VideoJAM, a novel framework that instills an effective motion prior into video generators by encouraging the model to learn a joint appearance-motion representation. VideoJAM is composed of two complementary units. During training, we extend the objective to predict both the generated pixels and their corresponding motion from a single learned representation. During inference, we introduce Inner-Guidance, a mechanism that steers the generation toward coherent motion by leveraging the model's own evolving motion prediction as a dynamic guidance signal. Notably, our framework can be applied to any video model with minimal adaptations, requiring no modifications to the training data or scaling of the model. VideoJAM achieves state-of-the-art performance in motion coherence, surpassing highly competitive proprietary models while also enhancing the perceived visual quality of the generations. These findings emphasize that appearance and motion can be complementary and, when effectively integrated, enhance both the visual quality and the coherence of video generation. Project website: https://hila-chefer.github.io/videojam-paper.github.io/ |
|
2025-02-05T00:00:00 | 2502.01720 | Generating Multi-Image Synthetic Data for Text-to-Image Customization | [
"Nupur Kumari",
"Xi Yin",
"Jun-Yan Zhu",
"Ishan Misra",
"Samaneh Azadi"
] | Customization of text-to-image models enables users to insert custom concepts and generate the concepts in unseen settings. Existing methods either rely on costly test-time optimization or train encoders on single-image training datasets without multi-image supervision, leading to worse image quality. We propose a simple approach that addresses both limitations. We first leverage existing text-to-image models and 3D datasets to create a high-quality Synthetic Customization Dataset (SynCD) consisting of multiple images of the same object in different lighting, backgrounds, and poses. We then propose a new encoder architecture based on shared attention mechanisms that better incorporate fine-grained visual details from input images. Finally, we propose a new inference technique that mitigates overexposure issues during inference by normalizing the text and image guidance vectors. Through extensive experiments, we show that our model, trained on the synthetic dataset with the proposed encoder and inference algorithm, outperforms existing tuning-free methods on standard customization benchmarks. |
|
2025-02-05T00:00:00 | 2502.01362 | Inverse Bridge Matching Distillation | [
"Nikita Gushchin",
"David Li",
"Daniil Selikhanovych",
"Evgeny Burnaev",
"Dmitry Baranchuk",
"Alexander Korotin"
] | Learning diffusion bridge models is easy; making them fast and practical is an art. Diffusion bridge models (DBMs) are a promising extension of diffusion models for applications in image-to-image translation. However, like many modern diffusion and flow models, DBMs suffer from the problem of slow inference. To address it, we propose a novel distillation technique based on the inverse bridge matching formulation and derive the tractable objective to solve it in practice. Unlike previously developed DBM distillation techniques, the proposed method can distill both conditional and unconditional types of DBMs, distill models into a one-step generator, and use only the corrupted images for training. We evaluate our approach for both conditional and unconditional types of bridge matching on a wide set of setups, including super-resolution, JPEG restoration, sketch-to-image, and other tasks, and show that our distillation technique allows us to accelerate the inference of DBMs by 4x to 100x and even provide better generation quality than the teacher model used, depending on the particular setup. |