Each record below lists the following fields: abs (paper page URL), Download PDF (PDF URL), OpenReview (forum URL), title, url, authors, detail_url, tags (venue), and abstract.
https://proceedings.mlr.press/v235/kolesov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kolesov24a/kolesov24a.pdf
https://openreview.net/forum?id=ymgcTqrZLT
Estimating Barycenters of Distributions with Neural Optimal Transport
https://proceedings.mlr.press/v235/kolesov24a.html
Alexander Kolesov, Petr Mokrov, Igor Udovichenko, Milena Gazdieva, Gudmund Pammer, Evgeny Burnaev, Alexander Korotin
https://proceedings.mlr.press/v235/kolesov24a.html
ICML 2024
Given a collection of probability measures, a practitioner sometimes needs to find an "average" distribution which adequately aggregates reference distributions. A theoretically appealing notion of such an average is the Wasserstein barycenter, which is the primary focus of our work. By building upon the dual formulation of Optimal Transport (OT), we propose a new scalable approach for solving the Wasserstein barycenter problem. Our methodology is based on the recent Neural OT solver: it has a bi-level adversarial learning objective and works for general cost functions. These are key advantages of our method, since typical adversarial algorithms for barycenter tasks utilize tri-level optimization and focus mostly on quadratic cost. We also establish theoretical error bounds for our proposed approach and showcase its applicability and effectiveness in illustrative scenarios and image data setups. Our source code is available at https://github.com/justkolesov/NOTBarycenters.
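A note of context: the paper's neural OT solver does not fit in a short snippet, but the classical entropic-regularized barycenter of discrete measures (iterative Bregman projections) is a minimal, hedged point of reference for what a Wasserstein barycenter computes. The grid, cost matrix, and weights below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def entropic_barycenter(B, C, weights, reg=0.05, n_iter=200):
    """Entropic-regularized Wasserstein barycenter of discrete measures
    via iterative Bregman projections (Benamou et al., 2015).

    B       : (n, k) array, each column a probability vector on a shared grid
    C       : (n, n) ground cost matrix
    weights : (k,) barycentric weights summing to 1
    """
    n, k = B.shape
    K = np.exp(-C / reg)                 # Gibbs kernel
    U = np.ones((n, k)) / n              # scaling vectors, one per input measure
    a = np.ones(n) / n                   # current barycenter estimate
    for _ in range(n_iter):
        V = B / (K.T @ U)                # enforce the input-measure marginals
        KV = K @ V
        # weighted geometric mean of the transported measures is the barycenter
        a = np.exp((weights * np.log(np.maximum(KV, 1e-300))).sum(axis=1))
        U = a[:, None] / KV
    return a / a.sum()

# toy usage: barycenter of two 1-D histograms on a shared grid
x = np.linspace(0, 1, 100)
C = (x[:, None] - x[None, :]) ** 2
b1 = np.exp(-(x - 0.2) ** 2 / 0.005); b1 /= b1.sum()
b2 = np.exp(-(x - 0.8) ** 2 / 0.005); b2 /= b2.sum()
bary = entropic_barycenter(np.stack([b1, b2], axis=1), C, np.array([0.5, 0.5]))
```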
https://proceedings.mlr.press/v235/kolluru24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kolluru24a/kolluru24a.pdf
https://openreview.net/forum?id=ZMgpE58PMj
AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion
https://proceedings.mlr.press/v235/kolluru24a.html
Adeesh Kolluru, John R. Kitchin
https://proceedings.mlr.press/v235/kolluru24a.html
ICML 2024
Determining the optimal configuration of adsorbates on a slab (adslab) is pivotal in the exploration of novel catalysts across diverse applications. Traditionally, the quest for the lowest energy adslab configuration involves placing the adsorbate onto the slab followed by an optimization process. Prior methodologies have relied on heuristics, problem-specific intuitions, or brute-force approaches to guide adsorbate placement. In this work, we propose a novel framework for adsorbate placement using denoising diffusion. The model is designed to predict the optimal adsorbate site and orientation corresponding to the lowest energy configuration. Further, we have an end-to-end evaluation framework where the diffusion-predicted adslab configuration is optimized with a pretrained machine learning force field and finally evaluated with Density Functional Theory (DFT). Our findings demonstrate an acceleration of up to 5x, or a 3.5x improvement in accuracy, compared to the previous best approach. Given the novelty of this framework and application, we provide insights into the impact of pretraining and model architectures, and conduct extensive experiments to underscore the significance of this approach.
https://proceedings.mlr.press/v235/koloskova24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/koloskova24a/koloskova24a.pdf
https://openreview.net/forum?id=ZRMQX6aTUS
On Convergence of Incremental Gradient for Non-convex Smooth Functions
https://proceedings.mlr.press/v235/koloskova24a.html
Anastasia Koloskova, Nikita Doikov, Sebastian U Stich, Martin Jaggi
https://proceedings.mlr.press/v235/koloskova24a.html
ICML 2024
In machine learning and neural network optimization, algorithms like incremental gradient, single shuffle SGD, and random reshuffle SGD are popular due to their cache efficiency and good practical convergence behavior. However, their optimization properties in theory, especially for non-convex smooth functions, remain incompletely explored. This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering, within a broad framework for non-convex smooth functions. Our findings show enhanced convergence guarantees for incremental gradient and single shuffle SGD. In particular, if $n$ is the training set size, we improve the optimization term of the convergence guarantee to reach accuracy $\epsilon$ by a factor of $n$, from $O \left( \frac{n}{\epsilon} \right)$ to $O \left( \frac{1}{\epsilon}\right)$.
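As a hedged illustration of the data orderings the abstract compares, the sketch below runs plain SGD over a finite sum under an incremental (fixed) order, a single shuffle, or random reshuffling; the toy least-squares objective is an assumption for demonstration only.

```python
import numpy as np

def sgd_with_ordering(grads, w0, lr, epochs, ordering, rng=None):
    """Run SGD over a finite sum with one of three data orderings.

    grads    : list of per-example gradient functions g_i(w)
    ordering : "incremental" (fixed order), "single_shuffle" (one permutation
               reused every epoch), or "random_reshuffle" (fresh permutation
               every epoch)
    """
    rng = rng or np.random.default_rng(0)
    n, w = len(grads), np.asarray(w0, dtype=float)
    fixed_perm = rng.permutation(n)           # used by single_shuffle
    for _ in range(epochs):
        if ordering == "incremental":
            perm = np.arange(n)
        elif ordering == "single_shuffle":
            perm = fixed_perm
        elif ordering == "random_reshuffle":
            perm = rng.permutation(n)
        else:
            raise ValueError(ordering)
        for i in perm:
            w = w - lr * grads[i](w)
    return w

# toy usage: scalar least squares on n = 5 examples
xs, ys = np.arange(1.0, 6.0), 2.0 * np.arange(1.0, 6.0)
grads = [lambda w, x=x, y=y: 2 * x * (w * x - y) for x, y in zip(xs, ys)]
w_hat = sgd_with_ordering(grads, w0=0.0, lr=0.01, epochs=50, ordering="single_shuffle")
```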
https://proceedings.mlr.press/v235/komodromos24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/komodromos24a/komodromos24a.pdf
https://openreview.net/forum?id=3FBO41d4T2
Logistic Variational Bayes Revisited
https://proceedings.mlr.press/v235/komodromos24a.html
Michael Komodromos, Marina Evangelou, Sarah Lucie Filippi
https://proceedings.mlr.press/v235/komodromos24a.html
ICML 2024
Variational logistic regression is a popular method for approximate Bayesian inference, seeing widespread use in many areas of machine learning, including Bayesian optimization, reinforcement learning, and multi-instance learning, to name a few. However, due to the intractability of the Evidence Lower Bound, authors have turned to Monte Carlo, quadrature, or bounds to perform inference, methods which are costly or give poor approximations to the true posterior. In this paper we introduce a new bound for the expectation of the softplus function and subsequently show how this can be applied to variational logistic regression and Gaussian process classification. Unlike other bounds, our proposal does not rely on extending the variational family, or introducing additional parameters to ensure the bound is tight. In fact, we show that this bound is tighter than the state-of-the-art, and that the resulting variational posterior achieves state-of-the-art performance, whilst being significantly faster to compute than Monte Carlo methods.
https://proceedings.mlr.press/v235/kondratyuk24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kondratyuk24a/kondratyuk24a.pdf
https://openreview.net/forum?id=LRkJwPIDuE
VideoPoet: A Large Language Model for Zero-Shot Video Generation
https://proceedings.mlr.press/v235/kondratyuk24a.html
Dan Kondratyuk, Lijun Yu, Xiuye Gu, Jose Lezama, Jonathan Huang, Grant Schindler, Rachel Hornung, Vighnesh Birodkar, Jimmy Yan, Ming-Chang Chiu, Krishna Somandepalli, Hassan Akbari, Yair Alon, Yong Cheng, Joshua V. Dillon, Agrim Gupta, Meera Hahn, Anja Hauth, David Hendon, Alonso Martinez, David Minnen, Mikhail Sirotenko, Kihyuk Sohn, Xuan Yang, Hartwig Adam, Ming-Hsuan Yang, Irfan Essa, Huisheng Wang, David A Ross, Bryan Seybold, Lu Jiang
https://proceedings.mlr.press/v235/kondratyuk24a.html
ICML 2024
We present VideoPoet, a language model capable of synthesizing high-quality video from a large variety of conditioning signals. VideoPoet employs a decoder-only transformer architecture that processes multimodal inputs – including images, videos, text, and audio. The training protocol follows that of Large Language Models (LLMs), consisting of two stages: pretraining and task-specific adaptation. During pretraining, VideoPoet incorporates a mixture of multimodal generative objectives within an autoregressive Transformer framework. The pretrained LLM serves as a foundation that can be adapted for a range of video generation tasks. We present empirical results demonstrating the model’s state-of-the-art capabilities in zero-shot video generation, specifically highlighting the ability to generate high-fidelity motions. Project page: http://sites.research.google/videopoet/
https://proceedings.mlr.press/v235/kong24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kong24a/kong24a.pdf
https://openreview.net/forum?id=WYi3WKZjYe
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
https://proceedings.mlr.press/v235/kong24a.html
Zhifeng Kong, Arushi Goel, Rohan Badlani, Wei Ping, Rafael Valle, Bryan Catanzaro
https://proceedings.mlr.press/v235/kong24a.html
ICML 2024
Augmenting large language models (LLMs) to understand audio – including non-speech sounds and non-verbal speech – is critically important for diverse real-world applications of LLMs. In this paper, we propose Audio Flamingo, a novel audio language model with 1) strong audio understanding abilities, 2) the ability to quickly adapt to unseen tasks via in-context learning and retrieval, and 3) strong multi-turn dialogue abilities. We introduce a series of training techniques, architecture design, and data strategies to enhance our model with these abilities. Extensive evaluations across various audio understanding tasks confirm the efficacy of our method, setting new state-of-the-art benchmarks. Our demo website is https://audioflamingo.github.io/ and the code is open-sourced at https://github.com/NVIDIA/audio-flamingo.
https://proceedings.mlr.press/v235/kong24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kong24b/kong24b.pdf
https://openreview.net/forum?id=dWxb80a0TW
Generalist Equivariant Transformer Towards 3D Molecular Interaction Learning
https://proceedings.mlr.press/v235/kong24b.html
Xiangzhe Kong, Wenbing Huang, Yang Liu
https://proceedings.mlr.press/v235/kong24b.html
ICML 2024
Many processes in biology and drug discovery involve various 3D interactions between molecules, such as protein and protein, protein and small molecule, etc. Given that different molecules are usually represented at different granularities, existing methods usually encode each type of molecule independently with a different model, which makes it difficult to learn the underlying interaction physics shared across types. In this paper, we first propose to universally represent an arbitrary 3D complex as a geometric graph of sets, shedding light on encoding all types of molecules with one model. We then propose a Generalist Equivariant Transformer (GET) to effectively capture both domain-specific hierarchies and domain-agnostic interaction physics. To be specific, GET consists of a bilevel attention module, a feed-forward module and a layer normalization module, where each module is E(3) equivariant and specialized for handling sets of variable sizes. Notably, in contrast to conventional pooling-based hierarchical models, our GET is able to retain fine-grained information of all levels. Extensive experiments on the interactions between proteins, small molecules and RNA/DNAs verify the effectiveness and generalization capability of our proposed method across different domains.
https://proceedings.mlr.press/v235/kong24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kong24c/kong24c.pdf
https://openreview.net/forum?id=Ug1m4P4AKf
ELF: Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis
https://proceedings.mlr.press/v235/kong24c.html
Jungil Kong, Junmo Lee, Jeongmin Kim, Beomjeong Kim, Jihoon Park, Dohee Kong, Changheon Lee, Sangjin Kim
https://proceedings.mlr.press/v235/kong24c.html
ICML 2024
In this work, we propose a novel method for modeling numerous speakers, which enables expressing the overall characteristics of speakers in detail like a trained multi-speaker model, without additional training on the target speaker’s dataset. Although various works with similar purposes have been actively studied, their performance has not yet reached that of trained multi-speaker models due to their fundamental limitations. To overcome previous limitations, we propose effective methods for feature learning and representing target speakers’ speech characteristics by discretizing the features and conditioning them to a speech synthesis model. Our method obtained a significantly higher similarity mean opinion score (SMOS) in subjective similarity evaluation than seen speakers of a high-performance multi-speaker model, even with unseen speakers. The proposed method also outperforms a zero-shot method by significant margins. Furthermore, our method shows remarkable performance in generating new artificial speakers. In addition, we demonstrate that the encoded latent features are sufficiently informative to reconstruct an original speaker’s speech completely. This implies that our method can be used as a general methodology to encode and reconstruct speakers’ characteristics in various tasks.
https://proceedings.mlr.press/v235/kontogiannis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kontogiannis24a/kontogiannis24a.pdf
https://openreview.net/forum?id=t8WDBcegae
The Computational Complexity of Finding Second-Order Stationary Points
https://proceedings.mlr.press/v235/kontogiannis24a.html
Andreas Kontogiannis, Vasilis Pollatos, Sotiris Kanellopoulos, Panayotis Mertikopoulos, Aris Pagourtzis, Ioannis Panageas
https://proceedings.mlr.press/v235/kontogiannis24a.html
ICML 2024
Non-convex minimization problems are universally considered hard, and even guaranteeing that a computed solution is locally minimizing is known to be NP-hard. In this general context, our paper focuses on the problem of finding stationary points that satisfy an approximate second-order optimality condition, which serves to exclude strict saddles and other non-minimizing stationary points. Our main result is that the problem of finding approximate second-order stationary points (SOSPs) is PLS-complete, i.e., of the same complexity as the problem of finding first-order stationary points (FOSPs), thus resolving an open question in the field. In particular, our results imply that, under the widely believed complexity conjecture that PLS $\neq$ FNP, finding approximate SOSPs in unconstrained domains is easier than in constrained domains, which is known to be NP-hard. This comes in stark contrast with earlier results which implied that, unless PLS = CLS, finding approximate FOSPs in unconstrained domains is harder than in constrained domains.
https://proceedings.mlr.press/v235/koppel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/koppel24a/koppel24a.pdf
https://openreview.net/forum?id=JOKOsJHSao
Information-Directed Pessimism for Offline Reinforcement Learning
https://proceedings.mlr.press/v235/koppel24a.html
Alec Koppel, Sujay Bhatt, Jiacheng Guo, Joe Eappen, Mengdi Wang, Sumitra Ganesh
https://proceedings.mlr.press/v235/koppel24a.html
ICML 2024
Policy optimization from batch data, i.e., offline reinforcement learning (RL), is important when collecting data from a current policy is not possible. This setting incurs distribution mismatch between the batch training data and trajectories from the current policy. Pessimistic offsets estimate this mismatch using concentration bounds, which possess strong theoretical guarantees and are simple to implement. The mismatch estimate may be conservative in sparse data regions and less so otherwise, which can result in such methods under-performing their no-penalty variants in practice. We derive a new pessimistic penalty as the distance between the data and the true distribution using an evaluable one-sample test known as Stein Discrepancy that requires minimal smoothness conditions and, notably, allows a mixture family representation of the distribution over next states. This quantity forms a quantifier of information in the offline data, which justifies calling this approach information-directed pessimism (IDP) for offline RL. We further establish that this new penalty based on discrete Stein discrepancy yields practical gains in performance while generalizing the regret of prior art to multimodal distributions.
https://proceedings.mlr.press/v235/korkmaz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/korkmaz24a/korkmaz24a.pdf
https://openreview.net/forum?id=s9RKqT7jVM
Understanding and Diagnosing Deep Reinforcement Learning
https://proceedings.mlr.press/v235/korkmaz24a.html
Ezgi Korkmaz
https://proceedings.mlr.press/v235/korkmaz24a.html
ICML 2024
Deep neural policies have recently been deployed in a diverse range of settings, from biotechnology to automated financial systems. However, the use of deep neural networks to approximate the value function raises concerns about the stability of the decision boundary, in particular with regard to the sensitivity of policy decision making to indiscernible, non-robust features due to highly non-convex and complex deep neural manifolds. These concerns constitute an obstruction to understanding the reasoning made by deep neural policies and their foundational limitations. Hence, it is crucial to develop techniques that aim to understand the sensitivities in the learnt representations of neural network policies. To achieve this, we introduce a theoretically founded method that provides a systematic analysis of the unstable directions in the deep neural policy decision boundary across both time and space. Through experiments in the Arcade Learning Environment (ALE), we demonstrate the effectiveness of our technique for identifying correlated directions of instability, and for measuring how sample shifts remold the set of sensitive directions in the neural policy landscape. Most importantly, we demonstrate that state-of-the-art robust training techniques yield learning of disjoint unstable directions, with dramatically larger oscillations over time, when compared to standard training. We believe our results reveal the fundamental properties of the decision process made by reinforcement learning policies, and can help in constructing reliable and robust deep neural policies.
https://proceedings.mlr.press/v235/koromilas24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/koromilas24a/koromilas24a.pdf
https://openreview.net/forum?id=SvvvB5t5EW
Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning: From InfoNCE to Kernel-Based Losses
https://proceedings.mlr.press/v235/koromilas24a.html
Panagiotis Koromilas, Giorgos Bouritsas, Theodoros Giannakopoulos, Mihalis Nicolaou, Yannis Panagakis
https://proceedings.mlr.press/v235/koromilas24a.html
ICML 2024
What do different contrastive learning (CL) losses actually optimize for? Although multiple CL methods have demonstrated remarkable representation learning capabilities, the differences in their inner workings remain largely opaque. In this work, we analyse several CL families and prove that, under certain conditions, they admit the same minimisers when optimizing either their batch-level objectives or their expectations asymptotically. In both cases, an intimate connection with the hyperspherical energy minimisation (HEM) problem resurfaces. Drawing inspiration from this, we introduce a novel CL objective, coined Decoupled Hyperspherical Energy Loss (DHEL). DHEL simplifies the problem by decoupling the target hyperspherical energy from the alignment of positive examples while preserving the same theoretical guarantees. Going one step further, we show the same results hold for another relevant CL family, namely kernel contrastive learning (KCL), with the additional advantage of the expected loss being independent of batch size, thus identifying the minimisers in the non-asymptotic regime. Empirical results demonstrate improved downstream performance and robustness across combinations of different batch sizes and hyperparameters and reduced dimensionality collapse, on several computer vision datasets.
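DHEL and the kernel losses are defined in the paper; as a related, hedged illustration of the hyperspherical-energy connection the abstract mentions, the sketch below implements the classical alignment/uniformity objective of Wang & Isola (2020), whose uniformity term is a Gaussian-kernel hyperspherical energy. The batch size, embedding dimension, and weighting are illustrative assumptions, not the DHEL loss itself.

```python
import torch
import torch.nn.functional as F

def alignment_uniformity_loss(z1, z2, lam=1.0, t=2.0):
    """Alignment + uniformity objective (Wang & Isola, 2020) on the unit sphere.

    z1, z2 : (batch, dim) embeddings of two views of the same examples.
    The uniformity term is a Gaussian-kernel hyperspherical energy, the kind of
    quantity the HEM connection in the abstract refers to.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    align = (z1 - z2).pow(2).sum(dim=1).mean()          # pull positive pairs together

    def _uniformity(z):
        # log of the mean Gaussian kernel over all pairs: spreads points on the sphere
        return torch.log(torch.exp(-t * torch.pdist(z, p=2).pow(2)).mean())

    uniform = 0.5 * (_uniformity(z1) + _uniformity(z2))
    return align + lam * uniform

# toy usage with random "embeddings"
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = alignment_uniformity_loss(z1, z2)
```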
https://proceedings.mlr.press/v235/koshkin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/koshkin24a/koshkin24a.pdf
https://openreview.net/forum?id=KVa4i4RR1O
convSeq: Fast and Scalable Method for Detecting Patterns in Spike Data
https://proceedings.mlr.press/v235/koshkin24a.html
Roman Koshkin, Tomoki Fukai
https://proceedings.mlr.press/v235/koshkin24a.html
ICML 2024
Spontaneous neural activity, crucial in memory, learning, and spatial navigation, often manifests itself as repetitive spatiotemporal patterns. Despite their importance, analyzing these patterns in large neural recordings remains challenging due to a lack of efficient and scalable detection methods. Addressing this gap, we introduce convSeq, an unsupervised method that employs backpropagation for optimizing spatiotemporal filters that effectively identify these neural patterns. Our method’s performance is validated on various synthetic data and real neural recordings, revealing spike sequences with unprecedented scalability and efficiency. Significantly surpassing existing methods in speed, convSeq sets a new standard for analyzing spontaneous neural activity, potentially advancing our understanding of information processing in neural circuits.
https://proceedings.mlr.press/v235/koskela24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/koskela24a/koskela24a.pdf
https://openreview.net/forum?id=hgHQvrvwH9
Privacy Profiles for Private Selection
https://proceedings.mlr.press/v235/koskela24a.html
Antti Koskela, Rachel Emily Redberg, Yu-Xiang Wang
https://proceedings.mlr.press/v235/koskela24a.html
ICML 2024
Private selection mechanisms (e.g., Report Noisy Max, Sparse Vector) are fundamental primitives of differentially private (DP) data analysis with wide applications to private query release, voting, and hyperparameter tuning. Recent work (Liu and Talwar, 2019; Papernot and Steinke, 2022) has made significant progress in both generalizing private selection mechanisms and tightening their privacy analysis using modern numerical privacy accounting tools, e.g., Rényi DP. But Rényi DP is known to be lossy when $(\epsilon,\delta)$-DP is ultimately needed, and there is a trend to close the gap by directly handling privacy profiles, i.e., $\delta$ as a function of $\epsilon$, or their equivalent dual form known as $f$-DP. In this paper, we work out an easy-to-use recipe that bounds the privacy profiles of ReportNoisyMax and PrivateTuning using the privacy profiles of the base algorithms they corral. Numerically, our approach improves over the RDP-based accounting in all regimes of interest and leads to substantial benefits in end-to-end private learning experiments. Our analysis also suggests new distributions, e.g., the binomial distribution for randomizing the number of rounds, that lead to more substantial improvements in certain regimes.
https://proceedings.mlr.press/v235/kosson24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kosson24a/kosson24a.pdf
https://openreview.net/forum?id=MQirNNU2pC
Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks
https://proceedings.mlr.press/v235/kosson24a.html
Atli Kosson, Bettina Messmer, Martin Jaggi
https://proceedings.mlr.press/v235/kosson24a.html
ICML 2024
This study investigates how weight decay affects the update behavior of individual neurons in deep neural networks through a combination of applied analysis and experimentation. Weight decay can cause the expected magnitude and angular updates of a neuron’s weight vector to converge to a steady state we call rotational equilibrium. These states can be highly homogeneous, effectively balancing the average rotation—a proxy for the effective learning rate—across different layers and neurons. Our work analyzes these dynamics across optimizers like Adam, Lion, and SGD with momentum, offering a new simple perspective on training that elucidates the efficacy of widely used but poorly understood methods in deep learning. We demonstrate how balanced rotation plays a key role in the effectiveness of normalization like Weight Standardization, as well as that of AdamW over Adam with L2-regularization. Finally, we show that explicitly controlling the rotation provides the benefits of weight decay while substantially reducing the need for learning rate warmup.
https://proceedings.mlr.press/v235/kostic24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kostic24a/kostic24a.pdf
https://openreview.net/forum?id=dfR6FU53qk
Consistent Long-Term Forecasting of Ergodic Dynamical Systems
https://proceedings.mlr.press/v235/kostic24a.html
Vladimir R Kostic, Karim Lounici, Prune Inzerilli, Pietro Novelli, Massimiliano Pontil
https://proceedings.mlr.press/v235/kostic24a.html
ICML 2024
We study the problem of forecasting the evolution of a function of the state (observable) of a discrete ergodic dynamical system over multiple time steps. The elegant theory of Koopman and transfer operators can be used to evolve any such function forward in time. However, their estimators are usually unreliable in long-term forecasting. We show how classical techniques of eigenvalue deflation from operator theory and feature centering from statistics can be exploited to enhance standard estimators. We develop a novel technique to derive high probability bounds on powers of empirical estimators. Our approach, rooted in the stability theory of non-normal operators, allows us to establish uniform in time bounds for the forecasting error, which hold even on infinite time horizons. We further show that our approach can be seamlessly employed to forecast future state distributions from an initial one, with provably uniform error bounds. Numerical experiments illustrate the advantages of our approach in practice.
https://proceedings.mlr.press/v235/kotlowski24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kotlowski24a/kotlowski24a.pdf
https://openreview.net/forum?id=pfnBLXgFVS
A General Online Algorithm for Optimizing Complex Performance Metrics
https://proceedings.mlr.press/v235/kotlowski24a.html
Wojciech Kotlowski, Marek Wydmuch, Erik Schultheis, Rohit Babbar, Krzysztof Dembczynski
https://proceedings.mlr.press/v235/kotlowski24a.html
ICML 2024
We consider sequential maximization of performance metrics that are general functions of a confusion matrix of a classifier (such as precision, F-measure, or G-mean). Such metrics are, in general, non-decomposable over individual instances, making their optimization very challenging. While they have been extensively studied under different frameworks in the batch setting, their analysis in the online learning regime is very limited, with only a few distinguished exceptions. In this paper, we introduce and analyze a general online algorithm that can be used in a straightforward way with a variety of complex performance metrics in binary, multi-class, and multi-label classification problems. The algorithm’s update and prediction rules are appealingly simple and computationally efficient without the need to store any past data. We show the algorithm attains $\mathcal{O}(\frac{\ln n}{n})$ regret for concave and smooth metrics and verify the efficiency of the proposed algorithm in empirical studies.
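The paper's exact update and prediction rules are given in the text; the sketch below only illustrates the general idea of optimizing a confusion-matrix metric online, using a greedy rule that picks the label maximizing the expected updated metric under the model's own probability estimate. The metric (F1), the greedy rule, and the probability stream are illustrative assumptions, not the paper's algorithm.

```python
def online_metric_classifier(prob_stream, label_stream, metric):
    """Greedy online classification for a confusion-matrix metric.

    At each step, pick the label whose expected updated metric (under the
    model's own probability estimate p) is largest, then update the running
    confusion matrix with the revealed true label.
    """
    tp = fp = fn = tn = 0
    preds = []
    for p, y in zip(prob_stream, label_stream):
        best, best_val = 0, -1.0
        for yhat in (0, 1):
            # expected metric if we predict yhat and the true label ~ Bernoulli(p)
            val = (p       * metric(tp + (yhat == 1), fp,               fn + (yhat == 0), tn) +
                   (1 - p) * metric(tp,               fp + (yhat == 1), fn,               tn + (yhat == 0)))
            if val > best_val:
                best, best_val = yhat, val
        preds.append(best)
        tp += (best == 1 and y == 1); fp += (best == 1 and y == 0)
        fn += (best == 0 and y == 1); tn += (best == 0 and y == 0)
    return preds

def f1(tp, fp, fn, tn):
    return 2 * tp / max(2 * tp + fp + fn, 1)

# toy usage with assumed probability estimates and labels
preds = online_metric_classifier([0.9, 0.2, 0.6, 0.4], [1, 0, 1, 0], f1)
```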
https://proceedings.mlr.press/v235/kou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kou24a/kou24a.pdf
https://openreview.net/forum?id=8uzBOVmh8H
CLLMs: Consistency Large Language Models
https://proceedings.mlr.press/v235/kou24a.html
Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang
https://proceedings.mlr.press/v235/kou24a.html
ICML 2024
Jacobi decoding shows promise for more efficient LLM inference as it breaks the sequential nature of the LLM decoding process and transforms it into more parallelizable computation. However, in practice, it achieves little speedup compared to traditional autoregressive (AR) decoding, primarily because Jacobi decoding seldom accurately predicts more than one token in a single fixed-point iteration step. To address this, we develop a new approach aimed at realizing fast convergence from any state to the fixed point in a Jacobi trajectory. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality across both domain-specific and open-domain benchmarks.
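As context for the abstract, here is a hedged sketch of vanilla Jacobi (fixed-point) decoding over a block of tokens; the model interface, greedy argmax choice, and toy example are assumptions, and the consistency-training refinement that CLLMs add is not shown.

```python
import numpy as np

def jacobi_decode(next_token_logits, prefix, block_len, vocab_size, max_iters=50, rng=None):
    """Vanilla Jacobi (fixed-point) decoding of a block of tokens.

    next_token_logits(tokens) must return, for a full sequence, the logits of
    the token following every position -- so all block positions can be
    refreshed in one parallel call.
    """
    rng = rng or np.random.default_rng(0)
    guess = rng.integers(vocab_size, size=block_len)      # arbitrary initial guess
    for _ in range(max_iters):
        seq = np.concatenate([prefix, guess])
        logits = next_token_logits(seq)                   # (len(seq), vocab_size)
        # block position i depends only on prefix + guess[:i]
        new = logits[len(prefix) - 1 : len(seq) - 1].argmax(axis=1)
        if np.array_equal(new, guess):                    # fixed point reached:
            break                                         # identical to greedy AR decoding
        guess = new
    return guess

# toy usage: a "model" whose logits always favor repeating the previous token
toy = lambda seq: np.eye(10)[seq]
out = jacobi_decode(toy, prefix=np.array([3]), block_len=4, vocab_size=10)
```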
https://proceedings.mlr.press/v235/kou24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kou24b/kou24b.pdf
https://openreview.net/forum?id=oCI9gHocws
KISA: A Unified Keyframe Identifier and Skill Annotator for Long-Horizon Robotics Demonstrations
https://proceedings.mlr.press/v235/kou24b.html
Longxin Kou, Fei Ni, Yan Zheng, Jinyi Liu, Yifu Yuan, Zibin Dong, Jianye Hao
https://proceedings.mlr.press/v235/kou24b.html
ICML 2024
Robotic manipulation tasks often span long horizons and encapsulate multiple subtasks with different skills. Learning policies directly from long-horizon demonstrations is challenging without intermediate keyframe guidance and corresponding skill annotations. Existing approaches for keyframe identification often struggle to offer reliable decomposition due to low accuracy and fail to provide semantic relevance between keyframes and skills. For this, we propose a unified Keyframe Identifier and Skill Annotator (KISA) that utilizes pretrained visual-language representations for precise and interpretable decomposition of unlabeled demonstrations. Specifically, we develop a simple yet effective temporal enhancement module that enriches frame-level representations with expanded receptive fields to capture semantic dynamics at the video level. We further propose coarse contrastive learning and fine-grained monotonic encouragement to enhance the alignment between visual representations from keyframes and language representations from skills. The experimental results across three benchmarks demonstrate that KISA outperforms competitive baselines in terms of accuracy and interpretability of keyframe identification. Moreover, KISA exhibits robust generalization capabilities and the flexibility to incorporate various pretrained representations.
https://proceedings.mlr.press/v235/koul24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/koul24a/koul24a.pdf
https://openreview.net/forum?id=AaTYLZQPyC
PcLast: Discovering Plannable Continuous Latent States
https://proceedings.mlr.press/v235/koul24a.html
Anurag Koul, Shivakanth Sujit, Shaoru Chen, Ben Evans, Lili Wu, Byron Xu, Rajan Chari, Riashat Islam, Raihan Seraj, Yonathan Efroni, Lekan P Molu, Miroslav Dudı́k, John Langford, Alex Lamb
https://proceedings.mlr.press/v235/koul24a.html
ICML 2024
Goal-conditioned planning benefits from learned low-dimensional representations of rich observations. While compact latent representations typically learned from variational autoencoders or inverse dynamics enable goal-conditioned decision making, they ignore state reachability, hampering their performance. In this paper, we learn a representation that associates reachable states together for effective planning and goal-conditioned policy learning. We first learn a latent representation with multi-step inverse dynamics (to remove distracting information), and then transform this representation to associate reachable states together in $\ell_2$ space. Our proposals are rigorously tested in various simulation testbeds. Numerical results in reward-based settings show significant improvements in sampling efficiency. Further, in reward-free settings this approach yields layered state abstractions that enable computationally efficient hierarchical planning for reaching ad hoc goals with zero additional samples.
https://proceedings.mlr.press/v235/kozdoba24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kozdoba24a/kozdoba24a.pdf
https://openreview.net/forum?id=PMASooqgoq
Sobolev Space Regularised Pre Density Models
https://proceedings.mlr.press/v235/kozdoba24a.html
Mark Kozdoba, Binyamin Perets, Shie Mannor
https://proceedings.mlr.press/v235/kozdoba24a.html
ICML 2024
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density. This method is statistically consistent, and makes the inductive bias of the model clear and interpretable. While there is no closed analytic form for the associated kernel, we show that one can approximate it using sampling. The optimization problem needed to determine the density is non-convex, and standard gradient methods do not perform well. However, we show that with an appropriate initialization and using natural gradients, one can obtain well performing solutions. Finally, while the approach provides pre-densities (i.e. not necessarily integrating to 1), which prevents the use of log-likelihood for cross validation, we show that one can instead adapt Fisher divergence based score matching methods for this task. We evaluate the resulting method on the comprehensive recent anomaly detection benchmark suite, ADBench, and find that it ranks second best, among more than 15 algorithms.
https://proceedings.mlr.press/v235/krasheninnikov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/krasheninnikov24a/krasheninnikov24a.pdf
https://openreview.net/forum?id=Fzp1DRzCIN
Implicit meta-learning may lead language models to trust more reliable sources
https://proceedings.mlr.press/v235/krasheninnikov24a.html
Dmitrii Krasheninnikov, Egor Krasheninnikov, Bruno Kacper Mlodozeniec, Tegan Maharaj, David Krueger
https://proceedings.mlr.press/v235/krasheninnikov24a.html
ICML 2024
We demonstrate that large language models (LLMs) may learn indicators of document usefulness and modulate their updates accordingly. We introduce random strings ("tags") as indicators of usefulness in a synthetic fine-tuning dataset. Fine-tuning on this dataset leads to implicit meta-learning (IML): in further fine-tuning, the model updates to make more use of text that is tagged as useful. We perform a thorough empirical investigation of this phenomenon, finding (among other things) that (i) it occurs in both pretrained LLMs and those trained from scratch, as well as on a vision task, and (ii) larger models and smaller batch sizes tend to give more IML. We also use probing to examine how IML changes the way models store knowledge in their parameters. Finally, we reflect on what our results might imply about the capabilities, risks, and controllability of future AI systems.
https://proceedings.mlr.press/v235/kremer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kremer24a/kremer24a.pdf
https://openreview.net/forum?id=6KLNiRdWH6
Geometry-Aware Instrumental Variable Regression
https://proceedings.mlr.press/v235/kremer24a.html
Heiner Kremer, Bernhard Schölkopf
https://proceedings.mlr.press/v235/kremer24a.html
ICML 2024
Instrumental variable (IV) regression can be approached through its formulation in terms of conditional moment restrictions (CMR). Building on variants of the generalized method of moments, most CMR estimators are implicitly based on approximating the population data distribution via reweightings of the empirical sample. While for large sample sizes, in the independent identically distributed (IID) setting, reweightings can provide sufficient flexibility, they might fail to capture the relevant information in presence of corrupted data or data prone to adversarial attacks. To address these shortcomings, we propose the Sinkhorn Method of Moments, an optimal transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information. We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings but improves robustness against data corruption and adversarial attacks.
https://proceedings.mlr.press/v235/krishna24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/krishna24a/krishna24a.pdf
https://openreview.net/forum?id=KjazcKPMME
Understanding the Effects of Iterative Prompting on Truthfulness
https://proceedings.mlr.press/v235/krishna24a.html
Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
https://proceedings.mlr.press/v235/krishna24a.html
ICML 2024
The development of Large Language Models (LLMs) has notably transformed numerous sectors, offering impressive text generation capabilities. Yet, the reliability and truthfulness of these models remain pressing concerns. To this end, we investigate iterative prompting, a strategy hypothesized to refine LLM responses, assessing its impact on LLM truthfulness, an area which has not been thoroughly explored. Our extensive experiments explore the intricacies of iterative prompting variants, examining their influence on the accuracy and calibration of model responses. Our findings reveal that naive prompting methods significantly undermine truthfulness, leading to exacerbated calibration errors. In response to these challenges, we introduce several prompting variants designed to address the identified issues. These variants demonstrate marked improvements over existing baselines, signaling a promising direction for future research. Our work provides a nuanced understanding of iterative prompting and introduces novel approaches to enhance the truthfulness of LLMs, thereby contributing to the development of more accurate and trustworthy AI systems.
https://proceedings.mlr.press/v235/kristiadi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kristiadi24a/kristiadi24a.pdf
https://openreview.net/forum?id=Pa3GyTe3kf
A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?
https://proceedings.mlr.press/v235/kristiadi24a.html
Agustinus Kristiadi, Felix Strieth-Kalthoff, Marta Skreta, Pascal Poupart, Alan Aspuru-Guzik, Geoff Pleiss
https://proceedings.mlr.press/v235/kristiadi24a.html
ICML 2024
Automation is one of the cornerstones of contemporary material discovery. Bayesian optimization (BO) is an essential part of such workflows, enabling scientists to leverage prior domain knowledge into efficient exploration of a large molecular space. While such prior knowledge can take many forms, there has been significant fanfare around the ancillary scientific knowledge encapsulated in large language models (LLMs). However, existing work thus far has only explored LLMs for heuristic materials searches. Indeed, recent work obtains the uncertainty estimate—an integral part of BO—from point-estimated, non-Bayesian LLMs. In this work, we study the question of whether LLMs are actually useful to accelerate principled Bayesian optimization in the molecular space. We take a sober, dispassionate stance in answering this question. This is done by carefully (i) viewing LLMs as fixed feature extractors for standard but principled BO surrogate models and by (ii) leveraging parameter-efficient finetuning methods and Bayesian neural networks to obtain the posterior of the LLM surrogate. Our extensive experiments with real-world chemistry problems show that LLMs can be useful for BO over molecules, but only if they have been pretrained or finetuned with domain-specific data.
https://proceedings.mlr.press/v235/kuang24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kuang24a/kuang24a.pdf
https://openreview.net/forum?id=ULleq1Dtaw
Towards General Algorithm Discovery for Combinatorial Optimization: Learning Symbolic Branching Policy from Bipartite Graph
https://proceedings.mlr.press/v235/kuang24a.html
Yufei Kuang, Jie Wang, Yuyan Zhou, Xijun Li, Fangzhou Zhu, Jianye Hao, Feng Wu
https://proceedings.mlr.press/v235/kuang24a.html
ICML 2024
Machine learning (ML) approaches have been successfully applied to accelerating exact combinatorial optimization (CO) solvers. However, many of them fail to explain what patterns they have learned that accelerate the CO algorithms due to the black-box nature of ML models like neural networks, and thus they prevent researchers from further understanding the tasks they are interested in. To tackle this problem, we propose the first graph-based algorithm discovery framework—namely, graph symbolic discovery for exact combinatorial optimization solver (GS4CO)—that learns interpretable branching policies directly from the general bipartite graph representation of CO problems. Specifically, we design a unified representation for symbolic policies with graph inputs, and then we employ a Transformer with multiple tree-structural encodings to generate symbolic trees end-to-end, which effectively reduces the cumulative error from iteratively distilling graph neural networks. Experiments show that the interpretable and lightweight policies learned by GS4CO outperform all baselines on CPU machines, including both human-designed and learning-based ones. GS4CO shows an encouraging step towards general algorithm discovery on modern CO solvers.
https://proceedings.mlr.press/v235/kulesza24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kulesza24a/kulesza24a.pdf
https://openreview.net/forum?id=cwIhvoTzuK
Mean Estimation in the Add-Remove Model of Differential Privacy
https://proceedings.mlr.press/v235/kulesza24a.html
Alex Kulesza, Ananda Theertha Suresh, Yuyan Wang
https://proceedings.mlr.press/v235/kulesza24a.html
ICML 2024
Differential privacy is often studied under two different models of neighboring datasets: the add-remove model and the swap model. While the swap model is frequently used in the academic literature to simplify analysis, many practical applications rely on the more conservative add-remove model, where obtaining tight results can be difficult. Here, we study the problem of one-dimensional mean estimation under the add-remove model. We propose a new algorithm and show that it is min-max optimal, achieving the best possible constant in the leading term of the mean squared error for all $\epsilon$, and that this constant is the same as the optimal algorithm under the swap model. These results show that the add-remove and swap models give nearly identical errors for mean estimation, even though the add-remove model cannot treat the size of the dataset as public information. We also demonstrate empirically that our proposed algorithm yields at least a factor of two improvement in mean squared error over algorithms frequently used in practice. One of our main technical contributions is a new hourglass mechanism, which might be of independent interest in other scenarios.
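For context, here is a standard Laplace-mechanism baseline for mean estimation under the add-remove model, where the dataset size is not public and so the clipped sum and the count are each noised with half the budget; this is the kind of commonly used baseline the paper improves on, not the proposed hourglass mechanism. The [0, 1] clipping range and even budget split are assumptions.

```python
import numpy as np

def dp_mean_add_remove(x, eps, rng=None):
    """Laplace-mechanism mean estimate under the add-remove neighboring model.

    Values are clipped to [0, 1]. Adding or removing one record changes the
    clipped sum by at most 1 and the count by exactly 1, so each is released
    with Laplace noise calibrated to sensitivity 1 using half the budget.
    """
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    noisy_sum = x.sum() + rng.laplace(scale=2.0 / eps)    # eps/2 for the sum
    noisy_cnt = len(x) + rng.laplace(scale=2.0 / eps)     # eps/2 for the count
    return noisy_sum / max(noisy_cnt, 1.0)

# toy usage
est = dp_mean_add_remove(np.random.rand(1000), eps=1.0)
```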
https://proceedings.mlr.press/v235/kumar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kumar24a/kumar24a.pdf
https://openreview.net/forum?id=Uzb45nolTb
No Free Prune: Information-Theoretic Barriers to Pruning at Initialization
https://proceedings.mlr.press/v235/kumar24a.html
Tanishq Kumar, Kevin Luo, Mark Sellke
https://proceedings.mlr.press/v235/kumar24a.html
ICML 2024
The existence of “lottery tickets” (Frankle & Carbin, 2018) at or near initialization raises the tantalizing question of whether large models are necessary in deep learning, or whether sparse networks can be quickly identified and trained without ever training the dense models that contain them. However, efforts to find these sparse subnetworks without training the dense model (“pruning at initialization”) have been broadly unsuccessful (Frankle et al., 2020b). We put forward a theoretical explanation for this, based on the model’s effective parameter count, $p_\text{eff}$, given by the sum of the number of non-zero weights in the final network and the mutual information between the sparsity mask and the data. We show the Law of Robustness of (Bubeck & Sellke, 2023) extends to sparse networks with the usual parameter count replaced by $p_\text{eff}$, meaning a sparse neural network which robustly interpolates noisy data requires a heavily data-dependent mask. We posit that pruning during and after training outputs masks with higher mutual information than those produced by pruning at initialization. Thus two networks may have the same sparsities, but differ in effective parameter count based on how they were trained. This suggests that pruning near initialization may be infeasible and explains why lottery tickets exist, but cannot be found fast (i.e. without training the full network). Experiments on neural networks confirm that information gained during training may indeed affect model capacity.
https://proceedings.mlr.press/v235/kumar24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kumar24b/kumar24b.pdf
https://openreview.net/forum?id=J4LTDgwAZq
Efficient Value Iteration for s-rectangular Robust Markov Decision Processes
https://proceedings.mlr.press/v235/kumar24b.html
Navdeep Kumar, Kaixin Wang, Kfir Yehuda Levy, Shie Mannor
https://proceedings.mlr.press/v235/kumar24b.html
ICML 2024
We focus on s-rectangular robust Markov decision processes (MDPs), which capture interconnected uncertainties across different actions within each state. This framework is more general compared to sa-rectangular robust MDPs, where uncertainties in each action are independent. However, the introduced interdependence significantly amplifies the complexity of the problem. Existing methods either have slow performance guarantees or are inapplicable to even moderately large state spaces. In this work, we derive optimal robust Bellman operators in explicit forms. This leads to robust value iteration methods with significantly faster time complexities than existing approaches, which can be used in large state spaces. Further, our findings reveal that the optimal policies demonstrate a novel threshold behavior, selectively favoring a limited set of actions based on their respective advantage functions. Additionally, our study uncovers a noteworthy connection between the robustness of a policy and the variance in its value function, highlighting that policies with lower variance exhibit greater resilience.
https://proceedings.mlr.press/v235/kur24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kur24a/kur24a.pdf
https://openreview.net/forum?id=G4b32bKnBy
Minimum Norm Interpolation Meets The Local Theory of Banach Spaces
https://proceedings.mlr.press/v235/kur24a.html
Gil Kur, Pedro Abdalla, Pierre Bizeul, Fanny Yang
https://proceedings.mlr.press/v235/kur24a.html
ICML 2024
Minimum-norm interpolators have recently gained attention primarily as an analyzable model to shed light on the double descent phenomenon observed for neural networks. The majority of the work has focused on analyzing interpolators in Hilbert spaces, where typically an effectively low-rank structure of the feature covariance prevents a large bias. More recently, tight vanishing bounds have also been shown for isotropic high-dimensional data for $\ell_p$-spaces with $p\in[1,2)$, leveraging sparse structure of the ground truth. However, these proofs are tailored to specific settings and hard to generalize. This paper takes a first step towards establishing a general framework that connects generalization properties of the interpolators to well-known concepts from high-dimensional geometry, specifically, from the local theory of Banach spaces. In particular, we show that under $2$-uniform convexity, the bias of the minimal norm solution is bounded by the Gaussian complexity of the class. We then prove a “reverse” Efron-Stein lower bound on the expected conditional variance of the minimal norm solution under cotype $2$. Finally, we prove that this bound is sharp for $\ell_p$-linear regression under sub-Gaussian covariates.
https://proceedings.mlr.press/v235/kwon24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kwon24a/kwon24a.pdf
https://openreview.net/forum?id=p5FIjG9fbs
Prospective Side Information for Latent MDPs
https://proceedings.mlr.press/v235/kwon24a.html
Jeongyeol Kwon, Yonathan Efroni, Shie Mannor, Constantine Caramanis
https://proceedings.mlr.press/v235/kwon24a.html
ICML 2024
In many interactive decision-making problems, there is contextual side information that remains fixed within the course of an interaction. This problem has been studied quite extensively under the assumption that the context is fully observed, as well as in the opposing limit when the context is unobserved, a special type of POMDP also referred to as a Latent MDP (LMDP). In this work, we consider a class of decision problems that interpolates between these settings, namely, between the case in which the context is fully observed and the case in which it is unobserved. We refer to this class of decision problems as LMDPs with prospective side information. In such an environment, an agent receives additional, weakly revealing, information on the latent context at the beginning of each episode. We show that, surprisingly, this problem is not captured by contemporary POMDP settings and is not solved by RL algorithms designed for partially observed environments. We then establish that any sample efficient algorithm must suffer at least $\Omega(K^{2/3})$-regret, as opposed to standard $\Omega(\sqrt{K})$ lower bounds. We design an algorithm with a matching upper bound that depends only polynomially on the problem parameters. This establishes an exponential improvement in the sample complexity relative to the existing LMDP lower bound from prior work, in which prospective side information is not available.
https://proceedings.mlr.press/v235/kwon24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kwon24b/kwon24b.pdf
https://openreview.net/forum?id=OQ97v7uRGc
On The Complexity of First-Order Methods in Stochastic Bilevel Optimization
https://proceedings.mlr.press/v235/kwon24b.html
Jeongyeol Kwon, Dohyun Kwon, Hanbaek Lyu
https://proceedings.mlr.press/v235/kwon24b.html
ICML 2024
We consider the problem of finding stationary points in bilevel optimization when the lower-level problem is unconstrained and strongly convex. The problem has been extensively studied in recent years; the main technical challenge is to keep track of lower-level solutions $y^*(x)$ in response to changes in the upper-level variables $x$. Subsequently, all existing approaches tie their analyses to a genie algorithm that knows lower-level solutions and, therefore, need not query any points far from them. We consider a dual question to such approaches: suppose we have an oracle, which we call $y^*$-aware, that returns an $O(\epsilon)$-estimate of the lower-level solution, in addition to first-order gradient estimators locally unbiased within the $\Theta(\epsilon)$-ball around $y^*(x)$. We study the complexity of finding stationary points with such a $y^*$-aware oracle: we propose a simple first-order method that converges to an $\epsilon$-stationary point using $O(\epsilon^{-6}), O(\epsilon^{-4})$ access to first-order $y^*$-aware oracles. Our upper bounds also apply to standard unbiased first-order oracles, improving the best-known complexity of first-order methods by $O(\epsilon)$ with minimal assumptions. We then provide the matching $\Omega(\epsilon^{-6})$, $\Omega(\epsilon^{-4})$ lower bounds without and with an additional smoothness assumption, respectively. Our results imply that any approach that simulates an algorithm with a $y^*$-aware oracle must suffer the same lower bounds.
https://proceedings.mlr.press/v235/kwon24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kwon24c/kwon24c.pdf
https://openreview.net/forum?id=MWZWUyfFHC
TinyTrain: Resource-Aware Task-Adaptive Sparse Training of DNNs at the Data-Scarce Edge
https://proceedings.mlr.press/v235/kwon24c.html
Young D. Kwon, Rui Li, Stylianos Venieris, Jagmohan Chauhan, Nicholas Donald Lane, Cecilia Mascolo
https://proceedings.mlr.press/v235/kwon24c.html
ICML 2024
On-device training is essential for user personalisation and privacy. With the pervasiveness of IoT devices and microcontroller units (MCUs), this task becomes more challenging due to the constrained memory and compute resources, and the limited availability of labelled user data. Nonetheless, prior works neglect the data scarcity issue, require excessively long training time ($\textit{e.g.}$ a few hours), or induce substantial accuracy loss ($\geq$10%). In this paper, we propose TinyTrain, an on-device training approach that drastically reduces training time by selectively updating parts of the model and explicitly coping with data scarcity. TinyTrain introduces a task-adaptive sparse-update method that $\textit{dynamically}$ selects the layer/channel to update based on a multi-objective criterion that jointly captures user data, the memory, and the compute capabilities of the target device, leading to high accuracy on unseen tasks with reduced computation and memory footprint. TinyTrain outperforms vanilla fine-tuning of the entire network by 3.6-5.0% in accuracy, while reducing the backward-pass memory and computation cost by up to 1,098$\times$ and 7.68$\times$, respectively. Targeting broadly used real-world edge devices, TinyTrain achieves 9.5$\times$ faster and 3.5$\times$ more energy-efficient training over status-quo approaches, and 2.23$\times$ smaller memory footprint than SOTA methods, while remaining within the 1 MB memory envelope of MCU-grade platforms.
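The multi-objective layer/channel selection criterion is the paper's contribution; the sketch below shows only the generic mechanical part in PyTorch, freezing all parameters and re-enabling gradients for a top-scoring subset. The scoring function and budget are placeholder assumptions.

```python
import torch

def apply_sparse_update_mask(model, score_fn, budget=3):
    """Freeze all parameters, then re-enable gradients only for the top-scoring
    ones -- the mechanical part of a task-adaptive sparse-update scheme.

    score_fn(name, param) -> float is a placeholder for a selection criterion
    (e.g. one trading expected accuracy gain against memory/compute cost).
    """
    for p in model.parameters():
        p.requires_grad_(False)
    scored = sorted(model.named_parameters(),
                    key=lambda item: score_fn(*item), reverse=True)
    for name, p in scored[:budget]:
        p.requires_grad_(True)
    return [name for name, p in model.named_parameters() if p.requires_grad]

# toy usage: prefer later layers, as a stand-in criterion
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
updated = apply_sparse_update_mask(model, lambda name, p: int(name.split(".")[0]))
```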
https://proceedings.mlr.press/v235/laenen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/laenen24a/laenen24a.pdf
https://openreview.net/forum?id=coP4kPdhKr
Dynamic Spectral Clustering with Provable Approximation Guarantee
https://proceedings.mlr.press/v235/laenen24a.html
Steinar Laenen, He Sun
https://proceedings.mlr.press/v235/laenen24a.html
ICML 2024
This paper studies clustering algorithms for dynamically evolving graphs $\{G_t\}$, in which new edges (and potential new vertices) are added into a graph, and the underlying cluster structure of the graph can gradually change. The paper proves that, under some mild condition on the cluster-structure, the clusters of the final graph $G_T$ of $n_T$ vertices at time $T$ can be well approximated by a dynamic variant of the spectral clustering algorithm. The algorithm runs in amortised update time $O(1)$ and query time $o(n_T)$. Experimental studies on both synthetic and real-world datasets further confirm the practicality of our designed algorithm.
https://proceedings.mlr.press/v235/lai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lai24a/lai24a.pdf
https://openreview.net/forum?id=DhxZVq1ZOo
Collective Certified Robustness against Graph Injection Attacks
https://proceedings.mlr.press/v235/lai24a.html
Yuni Lai, Bailin Pan, Kaihuang Chen, Yancheng Yuan, Kai Zhou
https://proceedings.mlr.press/v235/lai24a.html
ICML 2024
We investigate certified robustness for GNNs under graph injection attacks. Existing research only provides sample-wise certificates by verifying each node independently, leading to very limited certifying performance. In this paper, we present the first collective certificate, which certifies a set of target nodes simultaneously. To achieve it, we formulate the problem as a binary integer quadratic constrained linear programming (BQCLP). We further develop a customized linearization technique that allows us to relax the BQCLP into linear programming (LP) that can be efficiently solved. Through comprehensive experiments, we demonstrate that our collective certification scheme significantly improves certification performance with minimal computational overhead. For instance, by solving the LP within 1 minute on the Citeseer dataset, we achieve a significant increase in the certified ratio from 0.0% to 81.2% when the injected node number is 5% of the graph size. Our paper marks a crucial step towards making provable defense more practical. Our source code is available at https://github.com/Yuni-Lai/CollectiveLPCert.
https://proceedings.mlr.press/v235/lai24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lai24b/lai24b.pdf
https://openreview.net/forum?id=u6PeRHEsjL
Position: Evolving AI Collectives Enhance Human Diversity and Enable Self-Regulation
https://proceedings.mlr.press/v235/lai24b.html
Shiyang Lai, Yujin Potter, Junsol Kim, Richard Zhuang, Dawn Song, James Evans
https://proceedings.mlr.press/v235/lai24b.html
ICML 2024
Large language model behavior is shaped by the language of those with whom they interact. This capacity and their increasing prevalence online portend that they will intentionally or unintentionally "program" one another and form emergent AI subjectivities, relationships, and collectives. Here, we call upon the research community to investigate these "societies" of interacting artificial intelligences to increase their rewards and reduce their risks for human society and the health of online environments. We use a small "community" of models and their evolving outputs to illustrate how such emergent, decentralized AI collectives can spontaneously expand the bounds of human diversity and reduce the risk of toxic, anti-social behavior online. Finally, we discuss opportunities for AI cross-moderation and address ethical issues and design challenges associated with creating and maintaining free-formed AI collectives.
https://proceedings.mlr.press/v235/lai24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lai24c/lai24c.pdf
https://openreview.net/forum?id=P7qwBmzwwZ
Invariant Risk Minimization Is A Total Variation Model
https://proceedings.mlr.press/v235/lai24c.html
Zhao-Rong Lai, Weiwen Wang
https://proceedings.mlr.press/v235/lai24c.html
ICML 2024
Invariant risk minimization (IRM) is an emerging approach in machine learning for learning invariant features that generalize across different environments. While most related works focus on new IRM settings or new application scenarios, the mathematical essence of IRM remains to be properly explained. We verify that IRM is essentially a total variation model based on the $L^2$ norm (TV-$\ell_2$) of the learning risk with respect to the classifier variable. Moreover, we propose a novel IRM framework based on the TV-$\ell_1$ model. It not only expands the classes of functions that can be used as the learning risk and the feature extractor, but also has robust performance in denoising and invariant feature preservation based on the coarea formula. We also illustrate some requirements for IRM-TV-$\ell_1$ to achieve out-of-distribution generalization. Experimental results show that the proposed framework achieves competitive performance in several benchmark machine learning scenarios.
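As a point of reference for the penalty being reinterpreted here, below is a minimal PyTorch sketch of the standard IRMv1 gradient penalty (Arjovsky et al., 2019), i.e., the squared gradient of the per-environment risk with respect to a dummy classifier scale. It is not the TV-$\ell_1$ framework proposed in the paper, and the toy network, data, and penalty weight are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    """Squared gradient norm of the risk w.r.t. a dummy classifier scale
    (the standard IRMv1 penalty; illustrative only)."""
    scale = torch.tensor(1.0, requires_grad=True)
    risk = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

# Toy usage with two synthetic "environments" (shapes and weight are assumptions).
torch.manual_seed(0)
net = torch.nn.Linear(4, 2)
envs = [(torch.randn(16, 4), torch.randint(0, 2, (16,))) for _ in range(2)]
erm_risk = sum(F.cross_entropy(net(x), y) for x, y in envs)
penalty = sum(irmv1_penalty(net(x), y) for x, y in envs)
(erm_risk + 10.0 * penalty).backward()   # gradients now include the invariance term
```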
https://proceedings.mlr.press/v235/lalanne24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lalanne24a/lalanne24a.pdf
https://openreview.net/forum?id=NeEbsvnaWE
Privately Learning Smooth Distributions on the Hypercube by Projections
https://proceedings.mlr.press/v235/lalanne24a.html
Clément Lalanne, Sébastien Gadat
https://proceedings.mlr.press/v235/lalanne24a.html
ICML 2024
Fueled by the ever-increasing need for statistics that guarantee the privacy of their training sets, this article studies the centrally-private estimation of Sobolev-smooth probability densities over the hypercube in dimension $d$. The contributions of this article are two-fold. First, it generalizes the one-dimensional results of (Lalanne et al., 2023) to non-integer levels of smoothness and to a high-dimensional setting, which is important for two reasons: it is more suited for modern learning tasks, and it allows understanding the relations between privacy, dimensionality and smoothness, which is a central question with differential privacy. Second, this article presents a data-driven (usually referred to as adaptive in statistics) private estimation strategy in order to privately choose an estimator that achieves a good bias-variance trade-off among a finite family of private projection estimators without prior knowledge of the ground-truth smoothness $\beta$. This is achieved by adapting the Lepskii method for private selection, adding a new penalization term that makes the estimation privacy-aware.
https://proceedings.mlr.press/v235/lan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lan24a/lan24a.pdf
https://openreview.net/forum?id=VAKkoJjVpn
A Neural-Preconditioned Poisson Solver for Mixed Dirichlet and Neumann Boundary Conditions
https://proceedings.mlr.press/v235/lan24a.html
Kai Weixian Lan, Elias Gueidon, Ayano Kaneda, Julian Panetta, Joseph Teran
https://proceedings.mlr.press/v235/lan24a.html
ICML 2024
We introduce a neural-preconditioned iterative solver for Poisson equations with mixed boundary conditions. Typical Poisson discretizations yield large, ill-conditioned linear systems. Iterative solvers can be effective for these problems, but only when equipped with powerful preconditioners. Unfortunately, effective preconditioners like multigrid require costly setup phases that must be re-executed every time domain shapes or boundary conditions change, forming a severe bottleneck for problems with evolving boundaries. In contrast, we present a neural preconditioner trained to efficiently approximate the inverse of the discrete Laplacian in the presence of such changes. Our approach generalizes to domain shapes, boundary conditions, and grid sizes outside the training set. The key to our preconditioner’s success is a novel, lightweight neural network architecture featuring spatially varying convolution kernels and supporting fast inference. We demonstrate that our solver outperforms state-of-the-art methods like algebraic multigrid as well as recently proposed neural preconditioners on challenging test cases arising from incompressible fluid simulations.
https://proceedings.mlr.press/v235/lao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lao24a/lao24a.pdf
https://openreview.net/forum?id=6DBvBcW770
Sub-token ViT Embedding via Stochastic Resonance Transformers
https://proceedings.mlr.press/v235/lao24a.html
Dong Lao, Yangchao Wu, Tian Yu Liu, Alex Wong, Stefano Soatto
https://proceedings.mlr.press/v235/lao24a.html
ICML 2024
Vision Transformer (ViT) architectures represent images as collections of high-dimensional vectorized tokens, each corresponding to a rectangular non-overlapping patch. This representation trades spatial granularity for embedding dimensionality, and results in semantically rich but spatially coarsely quantized feature maps. In order to retrieve spatial details beneficial to fine-grained inference tasks we propose a training-free method inspired by "stochastic resonance." Specifically, we perform sub-token spatial transformations to the input data, and aggregate the resulting ViT features after applying the inverse transformation. The resulting "Stochastic Resonance Transformer" (SRT) retains the rich semantic information of the original representation, but grounds it on a finer-scale spatial domain, partly mitigating the coarse effect of spatial tokenization. SRT is applicable across any layer of any ViT architecture, consistently boosting performance on several tasks including segmentation, classification, depth estimation, and others by up to 14.9% without the need for any fine-tuning. Code: https://github.com/donglao/srt.
https://proceedings.mlr.press/v235/laszkiewicz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/laszkiewicz24a/laszkiewicz24a.pdf
https://openreview.net/forum?id=Hs9GcILuZN
Single-Model Attribution of Generative Models Through Final-Layer Inversion
https://proceedings.mlr.press/v235/laszkiewicz24a.html
Mike Laszkiewicz, Jonas Ricker, Johannes Lederer, Asja Fischer
https://proceedings.mlr.press/v235/laszkiewicz24a.html
ICML 2024
Recent breakthroughs in generative modeling have sparked interest in practical single-model attribution. Such methods predict whether a sample was generated by a specific generator or not, for instance, to prove intellectual property theft. However, previous works are either limited to the closed-world setting or require undesirable changes to the generative model. We address these shortcomings by, first, viewing single-model attribution through the lens of anomaly detection. Arising from this change of perspective, we propose FLIPAD, a new approach for single-model attribution in the open-world setting based on final-layer inversion and anomaly detection. We show that the utilized final-layer inversion can be reduced to a convex lasso optimization problem, making our approach theoretically sound and computationally efficient. The theoretical findings are accompanied by an experimental study demonstrating the effectiveness of our approach and its flexibility to various domains.
https://proceedings.mlr.press/v235/lavie24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lavie24a/lavie24a.pdf
https://openreview.net/forum?id=HOMXUneCTR
Towards Understanding Inductive Bias in Transformers: A View From Infinity
https://proceedings.mlr.press/v235/lavie24a.html
Itay Lavie, Guy Gur-Ari, Zohar Ringel
https://proceedings.mlr.press/v235/lavie24a.html
ICML 2024
We study inductive bias in Transformers in the infinitely over-parameterized Gaussian process limit and argue that transformers tend to be biased towards more permutation-symmetric functions in sequence space. We show that the representation theory of the symmetric group can be used to give quantitative analytical predictions when the dataset is symmetric under permutations of tokens. We present a simplified transformer block and solve the model in the limit, including accurate predictions for the learning curves and network outputs. We show that in common setups, one can derive tight bounds in the form of a scaling law for the learnability as a function of the context length. Finally, we argue that the WikiText dataset does indeed possess a degree of permutation symmetry.
https://proceedings.mlr.press/v235/lavoie24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lavoie24a/lavoie24a.pdf
https://openreview.net/forum?id=iaV2fU6Dif
Modeling Caption Diversity in Contrastive Vision-Language Pretraining
https://proceedings.mlr.press/v235/lavoie24a.html
Samuel Lavoie, Polina Kirichenko, Mark Ibrahim, Mido Assran, Andrew Gordon Wilson, Aaron Courville, Nicolas Ballas
https://proceedings.mlr.press/v235/lavoie24a.html
ICML 2024
There are a thousand ways to caption an image. Contrastive Language-Image Pretraining (CLIP), on the other hand, works by mapping an image and its caption to a single vector – limiting how well CLIP-like models can represent the diverse ways to describe an image. In this work, we introduce Llip, Latent Language Image Pretraining, which models the diversity of captions that could match an image. Llip’s vision encoder outputs a set of visual features that are mixed into a final representation by conditioning on information derived from the text. We show that Llip outperforms non-contextualized baselines like CLIP and SigLIP on a variety of tasks even with large-scale encoders. Llip improves zero-shot classification by an average of 2.9% across zero-shot classification benchmarks with a ViT-G/14 encoder. Specifically, Llip attains a zero-shot top-1 accuracy of 83.5% on ImageNet, outperforming a similarly sized CLIP by 1.4%. We also demonstrate improvement on zero-shot retrieval on MS-COCO by 6.0%. We provide a comprehensive analysis of the components introduced by the method and demonstrate that Llip leads to richer visual representations.
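A toy sketch of the general idea of text-conditioned mixing of visual features follows; it is not the authors' Llip architecture, and the attention-style pooling, dimensions, and variable names are assumptions made purely for illustration.

```python
import torch

def text_conditioned_pooling(visual_tokens, text_query):
    """Pool a set of visual feature vectors into one representation using
    attention weights derived from a text embedding (toy stand-in only)."""
    # visual_tokens: (n_tokens, d); text_query: (d,)
    scores = visual_tokens @ text_query / visual_tokens.shape[-1] ** 0.5
    weights = torch.softmax(scores, dim=0)
    return weights @ visual_tokens          # (d,) text-dependent image representation

v = torch.randn(16, 64)    # hypothetical per-token visual features
t = torch.randn(64)        # hypothetical pooled caption embedding
print(text_conditioned_pooling(v, t).shape)   # torch.Size([64])
```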
https://proceedings.mlr.press/v235/lazzati24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lazzati24a/lazzati24a.pdf
https://openreview.net/forum?id=23tMOWscus
Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms
https://proceedings.mlr.press/v235/lazzati24a.html
Filippo Lazzati, Mirco Mutti, Alberto Maria Metelli
https://proceedings.mlr.press/v235/lazzati24a.html
ICML 2024
Inverse reinforcement learning (IRL) aims to recover the reward function of an expert agent from demonstrations of behavior. It is well-known that the IRL problem is fundamentally ill-posed, i.e., many reward functions can explain the demonstrations. For this reason, IRL has been recently reframed in terms of estimating the feasible reward set (Metelli et al., 2021), thus postponing the selection of a single reward. However, so far, the available formulations and algorithmic solutions have been proposed and analyzed mainly for the online setting, where the learner can interact with the environment and query the expert at will. This is clearly unrealistic in most practical applications, where the availability of an offline dataset is a much more common scenario. In this paper, we introduce a novel notion of feasible reward set capturing the opportunities and limitations of the offline setting and we analyze the complexity of its estimation. This requires the introduction of an original learning framework that copes with the intrinsic difficulty of the setting, for which data coverage is not under control. Then, we propose two computationally and statistically efficient algorithms, IRLO and PIRLO, for addressing the problem. In particular, the latter adopts a specific form of pessimism to enforce the novel, desirable property of inclusion monotonicity of the delivered feasible set. With this work, we aim to provide a panorama of the challenges of the offline IRL problem and how they can be fruitfully addressed.
https://proceedings.mlr.press/v235/le24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/le24a/le24a.pdf
https://openreview.net/forum?id=0GC0NG6Orr
Generalized Sobolev Transport for Probability Measures on a Graph
https://proceedings.mlr.press/v235/le24a.html
Tam Le, Truyen Nguyen, Kenji Fukumizu
https://proceedings.mlr.press/v235/le24a.html
ICML 2024
We study the optimal transport (OT) problem for measures supported on a graph metric space. Recently, Le et al. (2022) leverage the graph structure and propose a variant of OT, namely Sobolev transport (ST), which yields a closed-form expression for fast computation. However, ST is essentially coupled with the $L^p$ geometric structure within its definition, which makes it nontrivial to utilize ST for other prior structures. In contrast, the classic OT has the flexibility to adapt to various geometric structures by modifying the underlying cost function. An important instance is the Orlicz-Wasserstein (OW) distance, which moves beyond the $L^p$ structure by leveraging the Orlicz geometric structure. Compared to the standard $p$-order Wasserstein distance, OW remarkably helps to advance certain machine learning approaches. Nevertheless, OW poses a new computational challenge due to its two-level optimization formulation. In this work, we leverage a specific class of convex functions for the Orlicz structure to propose the generalized Sobolev transport (GST). GST encompasses ST as a special case, and can be utilized for prior structures beyond the $L^p$ geometry. In connection with OW, we show that one only needs to solve a univariate optimization problem to compute the GST, unlike the complex two-level optimization problem in OW. We empirically illustrate that GST is several orders of magnitude faster than OW. Moreover, we provide preliminary evidence of the advantages of GST for document classification and for several tasks in topological data analysis.
https://proceedings.mlr.press/v235/le24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/le24b/le24b.pdf
https://openreview.net/forum?id=dwWef5w2cR
Robust Inverse Graphics via Probabilistic Inference
https://proceedings.mlr.press/v235/le24b.html
Tuan Anh Le, Pavel Sountsov, Matthew Douglas Hoffman, Ben Lee, Brian Patton, Rif A. Saurous
https://proceedings.mlr.press/v235/le24b.html
ICML 2024
How do we infer a 3D scene from a single image in the presence of corruptions like rain, snow or fog? Straightforward domain randomization relies on knowing the family of corruptions ahead of time. Here, we propose a Bayesian approach—dubbed robust inverse graphics (RIG)—that relies on a strong scene prior and an uninformative uniform corruption prior, making it applicable to a wide range of corruptions. Given a single image, RIG performs posterior inference jointly over the scene and the corruption. We demonstrate this idea by training a neural radiance field (NeRF) scene prior and using a secondary NeRF to represent the corruptions over which we place an uninformative prior. RIG, trained only on clean data, outperforms depth estimators and alternative NeRF approaches that perform point estimation instead of full inference. The results hold for a number of scene prior architectures based on normalizing flows and diffusion models. For the latter, we develop reconstruction-guidance with auxiliary latents (ReGAL)—a diffusion conditioning algorithm that is applicable in the presence of auxiliary latent variables such as the corruption. RIG demonstrates how scene priors can be used beyond generation tasks.
https://proceedings.mlr.press/v235/le24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/le24c/le24c.pdf
https://openreview.net/forum?id=Al5GlVytqi
Knowledge Graphs Can be Learned with Just Intersection Features
https://proceedings.mlr.press/v235/le24c.html
Duy Le, Shaochen Zhong, Zirui Liu, Shuai Xu, Vipin Chaudhary, Kaixiong Zhou, Zhaozhuo Xu
https://proceedings.mlr.press/v235/le24c.html
ICML 2024
Knowledge Graphs (KGs) are potent frameworks for knowledge representation and reasoning. Nevertheless, KGs are inherently incomplete, leaving numerous uncharted relationships and facts awaiting discovery. Deep learning methodologies have proven effective in enhancing KG completion by framing it as a link prediction task, where the goal is to discern the validity of a triple comprising a head, relation, and tail. The significance of structural information in assessing the validity of a triple within a KG is well-established. However, quantifying this structural information poses a challenge. We need to pinpoint the metric that encapsulates the structural information of a triple and smoothly incorporate this metric into the link prediction learning process. In this study, we recognize the critical importance of the intersection among the $k$-hop neighborhoods of the head, relation, and tail when determining the validity of a triple. To address this, we introduce a novel randomized algorithm designed to efficiently generate intersection features for candidate triples. Our experimental results demonstrate that a straightforward fully-connected network leveraging these intersection features can surpass the performance of established KG embedding models and even outperform graph neural network baselines. Additionally, we highlight the substantial training time efficiency gains achieved by our network trained on intersection features.
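The sketch below illustrates, in plain Python, what a k-hop neighborhood-intersection feature for a candidate triple might look like. It uses exact breadth-first search rather than the paper's randomized algorithm, ignores relation-specific neighborhoods, and the function names and toy graph are hypothetical.

```python
from collections import deque

def k_hop_neighbors(adj, start, k):
    """BFS returning the set of nodes within k hops of `start`.
    `adj` maps node -> iterable of neighboring nodes."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

def intersection_features(adj, head, tail, k=2):
    """Toy intersection features for a candidate (head, relation, tail) triple:
    sizes of the k-hop neighborhoods, their intersection, and a Jaccard-like ratio."""
    nh = k_hop_neighbors(adj, head, k)
    nt = k_hop_neighbors(adj, tail, k)
    common = nh & nt
    return [len(nh), len(nt), len(common), len(common) / (len(nh | nt) or 1)]

# Usage on a tiny toy graph
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(intersection_features(adj, head=0, tail=4, k=2))
```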
https://proceedings.mlr.press/v235/le-bars24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/le-bars24a/le-bars24a.pdf
https://openreview.net/forum?id=JKPhWzp7Oi
Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm
https://proceedings.mlr.press/v235/le-bars24a.html
Batiste Le Bars, Aurélien Bellet, Marc Tommasi, Kevin Scaman, Giovanni Neglia
https://proceedings.mlr.press/v235/le-bars24a.html
ICML 2024
This paper presents a new generalization error analysis for Decentralized Stochastic Gradient Descent (D-SGD) based on algorithmic stability. The obtained results overhaul a series of recent works that suggested an increased instability due to decentralization and a detrimental impact of poorly-connected communication graphs on generalization. On the contrary, we show, for convex, strongly convex and non-convex functions, that D-SGD can always recover generalization bounds analogous to those of classical SGD, suggesting that the choice of graph does not matter. We then argue that this result is coming from a worst-case analysis, and we provide a refined optimization-dependent generalization bound for general convex functions. This new bound reveals that the choice of graph can in fact improve the worst-case bound in certain regimes, and that surprisingly, a poorly-connected graph can even be beneficial for generalization.
https://proceedings.mlr.press/v235/leahy24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/leahy24a/leahy24a.pdf
https://openreview.net/forum?id=E4ItiEU8Iu
Run-Time Task Composition with Safety Semantics
https://proceedings.mlr.press/v235/leahy24a.html
Kevin Leahy, Makai Mann, Zachary Serlin
https://proceedings.mlr.press/v235/leahy24a.html
ICML 2024
Compositionality is a critical aspect of scalable system design. Here, we focus on Boolean composition of learned tasks as opposed to functional or sequential composition. Existing Boolean composition for Reinforcement Learning focuses on reaching a satisfying absorbing state in environments with discrete action spaces, but does not support composable safety (i.e., avoidance) constraints. We provide three contributions: i) introduce two distinct notions of compositional safety semantics; ii) show how to enforce either safety semantics, prove correctness, and analyze the trade-offs between the two safety notions; and iii) extend Boolean composition from discrete action spaces to continuous action spaces. We demonstrate these techniques using modified versions of value iteration in a grid world, Deep Q-Network (DQN) in a grid world with image observations, and Twin Delayed DDPG (TD3) in a continuous-observation and continuous-action Bullet physics environment.
https://proceedings.mlr.press/v235/lechowicz24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lechowicz24a/lechowicz24a.pdf
https://openreview.net/forum?id=hRBdOHVn7y
Chasing Convex Functions with Long-term Constraints
https://proceedings.mlr.press/v235/lechowicz24a.html
Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, Prashant Shenoy
https://proceedings.mlr.press/v235/lechowicz24a.html
ICML 2024
We introduce and study a family of online metric problems with long-term constraints. In these problems, an online player makes decisions $\mathbf{x}_t$ in a metric space $(X,d)$ to simultaneously minimize their hitting cost $f_t(\mathbf{x}_t)$ and switching cost as determined by the metric. Over the time horizon $T$, the player must satisfy a long-term demand constraint $\sum_t c(\mathbf{x}_t) \geq 1$, where $c(\mathbf{x}_t)$ denotes the fraction of demand satisfied at time $t$. Such problems can find a wide array of applications to online resource allocation in sustainable energy/computing systems. We devise optimal competitive and learning-augmented algorithms for the case of bounded hitting cost gradients and weighted $\ell_1$ metrics, and further show that our proposed algorithms perform well in numerical experiments.
https://proceedings.mlr.press/v235/ledent24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ledent24a/ledent24a.pdf
https://openreview.net/forum?id=40foON48am
Generalization Analysis of Deep Non-linear Matrix Completion
https://proceedings.mlr.press/v235/ledent24a.html
Antoine Ledent, Rodrigo Alves
https://proceedings.mlr.press/v235/ledent24a.html
ICML 2024
We provide generalization bounds for matrix completion with Schatten $p$ quasi-norm constraints, which is equivalent to deep matrix factorization with Frobenius constraints. In the uniform sampling regime, the sample complexity scales like $\widetilde{O}\left(rn\right)$, where $n$ is the size of the matrix and $r$ is a constraint of the same order as the ground truth rank in the isotropic case. In the distribution-free setting, the bounds scale as $\widetilde{O}\left(r^{1-\frac{p}{2}}n^{1+\frac{p}{2}}\right)$, which reduces to the familiar $\sqrt{r}n^{\frac{3}{2}}$ for $p=1$. Furthermore, we provide an analogue of the weighted trace norm for this setting, which brings the sample complexity down to $\widetilde{O}(nr)$ in all cases. We then present a non-linear model, Functionally Rescaled Matrix Completion (FRMC), which applies a single trainable function from $\mathbb{R}\rightarrow \mathbb{R}$ to each entry of a latent matrix, and prove that this adds only negligible terms to the overall sample complexity, whilst experiments demonstrate that this simple model improvement already leads to significant gains on real data. We also provide extensions of our results to various neural architectures, thereby providing the first comprehensive uniform convergence PAC analysis of neural network matrix completion.
https://proceedings.mlr.press/v235/lee24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24a/lee24a.pdf
https://openreview.net/forum?id=dBqHGZPGZI
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
https://proceedings.mlr.press/v235/lee24a.html
Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, Rada Mihalcea
https://proceedings.mlr.press/v235/lee24a.html
ICML 2024
While alignment algorithms are commonly used to tune pre-trained language models towards user preferences, we lack explanations for the underlying mechanisms in which models become “aligned”, thus making it difficult to explain phenomena like jailbreaks. In this work we study a popular algorithm, direct preference optimization (DPO), and the mechanisms by which it reduces toxicity. Namely, we first study how toxicity is represented and elicited in pre-trained language models (GPT2-medium, Llama2-7b). We then apply DPO with a carefully crafted pairwise dataset to reduce toxicity. We examine how the resulting models avert toxic outputs, and find that capabilities learned from pre-training are not removed, but rather bypassed. We use this insight to demonstrate a simple method to un-align the models, reverting them back to their toxic behavior.
https://proceedings.mlr.press/v235/lee24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24b/lee24b.pdf
https://openreview.net/forum?id=HmKMpJXH67
Stationary Latent Weight Inference for Unreliable Observations from Online Test-Time Adaptation
https://proceedings.mlr.press/v235/lee24b.html
Jae-Hong Lee, Joon-Hyuk Chang
https://proceedings.mlr.press/v235/lee24b.html
ICML 2024
In the rapidly evolving field of online test-time adaptation (OTTA), effectively managing distribution shifts is a pivotal concern. State-of-the-art OTTA methodologies often face limitations such as inadequate integration of target-domain information, leading to significant issues like catastrophic forgetting and a lack of adaptability in dynamically changing environments. In this paper, we introduce a stationary latent weight inference (SLWI) framework, a novel approach to overcome these challenges. The proposed SLWI uniquely incorporates Bayesian filtering to continually track and update the target model weights along with the source model weights in online settings, thereby ensuring that the adapted model remains responsive to ongoing changes in the target domain. The proposed framework has the distinctive property of identifying and backtracking nonlinear weights that exhibit local non-stationarity, thereby mitigating error propagation, a common pitfall of previous approaches. By integrating and refining information from both source and target domains, SLWI presents a robust solution to the persistent issue of domain adaptation in OTTA, significantly improving existing methodologies. The efficacy of SLWI is demonstrated through various experimental setups, showcasing its superior performance in diverse distribution shift scenarios.
https://proceedings.mlr.press/v235/lee24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24c/lee24c.pdf
https://openreview.net/forum?id=OTmcsyEO5G
A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
https://proceedings.mlr.press/v235/lee24c.html
Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer
https://proceedings.mlr.press/v235/lee24c.html
ICML 2024
Current Large Language Models (LLMs) are not only limited to some maximum context length, but also are not able to robustly consume long inputs. To address these limitations, we propose ReadAgent, an LLM agent system that increases effective context length up to 20x in our experiments. Inspired by how humans interactively read long documents, we implement ReadAgent as a simple prompting system that uses the advanced language capabilities of LLMs to (1) decide what content to store together in a memory episode, (2) compress those memory episodes into short episodic memories called gist memories, and (3) take actions to look up passages in the original text if ReadAgent needs to remind itself of relevant details to complete a task. We evaluate ReadAgent against baselines using retrieval methods, using the original long contexts, and using the gist memories. These evaluations are performed on three long-document reading comprehension tasks: QuALITY, NarrativeQA, and QMSum. ReadAgent outperforms the baselines on all three tasks while extending the effective context window by 3.5-20x.
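A heavily simplified sketch of the gist-then-lookup flow described above is given below. The `llm` callable, prompts, and page format are hypothetical stand-ins (the paper describes a prompting system, not this code), and the stub at the end exists only to show the control flow.

```python
def read_agent_answer(pages, question, llm):
    """Toy ReadAgent-style flow under a hypothetical `llm(prompt) -> str` callable:
    (1) compress each page into a short gist, (2) ask which pages to re-read,
    (3) answer using gists plus the retrieved original pages. A sketch only."""
    gists = [llm(f"Summarize in 2 sentences:\n{p}") for p in pages]
    memory = "\n".join(f"[{i}] {g}" for i, g in enumerate(gists))
    wanted = llm(f"Question: {question}\nGists:\n{memory}\n"
                 "Which page indices should be re-read? Answer as comma-separated integers.")
    indices = [int(s) for s in wanted.split(",") if s.strip().isdigit()]
    context = memory + "\n" + "\n".join(pages[i] for i in indices if i < len(pages))
    return llm(f"Context:\n{context}\n\nAnswer the question: {question}")

# Smoke test with a stub "LLM" (canned responses), just to show the flow.
pages = ["Alice met Bob in Paris.", "Later, Bob moved to Rome."]
canned = iter(["Alice and Bob meet in Paris.", "Bob relocates to Rome.", "1", "Rome"])
print(read_agent_answer(pages, "Where does Bob move to?", lambda prompt: next(canned)))
```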
https://proceedings.mlr.press/v235/lee24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24d/lee24d.pdf
https://openreview.net/forum?id=VF177x7Syw
Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks
https://proceedings.mlr.press/v235/lee24d.html
Hojoon Lee, Hyeonseo Cho, Hyunseung Kim, Donghu Kim, Dugki Min, Jaegul Choo, Clare Lyle
https://proceedings.mlr.press/v235/lee24d.html
ICML 2024
This study investigates the loss of generalization ability in neural networks, revisiting warm-starting experiments from Ash & Adams. Our empirical analysis reveals that common methods designed to enhance plasticity by maintaining trainability provide limited benefits to generalization. While reinitializing the network can be effective, it also risks losing valuable prior knowledge. To this end, we introduce the Hare & Tortoise, inspired by the brain’s complementary learning system. Hare & Tortoise consists of two components: the Hare network, which rapidly adapts to new information analogously to the hippocampus, and the Tortoise network, which gradually integrates knowledge akin to the neocortex. By periodically reinitializing the Hare network to the Tortoise’s weights, our method preserves plasticity while retaining general knowledge. Hare & Tortoise can effectively maintain the network’s ability to generalize, which improves advanced reinforcement learning algorithms on the Atari-100k benchmark. The code is available at https://github.com/dojeon-ai/hare-tortoise.
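As a rough illustration of the two-timescale weight dynamics described above, the sketch below keeps a slow "tortoise" copy as an exponential moving average of a fast "hare" learner and periodically resets the hare to the tortoise. The EMA rate, reset period, toy model, and data are assumptions, not the authors' configuration.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(tortoise, hare, tau=0.999):
    """Slowly integrate the hare's weights into the tortoise (EMA consolidation)."""
    with torch.no_grad():
        for p_t, p_h in zip(tortoise.parameters(), hare.parameters()):
            p_t.mul_(tau).add_(p_h, alpha=1.0 - tau)

def reset_hare_to_tortoise(hare, tortoise):
    """Periodically reinitialize the fast learner to the slow learner's weights."""
    hare.load_state_dict(copy.deepcopy(tortoise.state_dict()))

# Toy training loop (assumed shapes, optimizer, and reset period).
hare = torch.nn.Linear(8, 2)
tortoise = copy.deepcopy(hare)
opt = torch.optim.SGD(hare.parameters(), lr=0.1)
for step in range(1000):
    x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
    loss = F.cross_entropy(hare(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(tortoise, hare)
    if (step + 1) % 200 == 0:          # reset period is a free hyperparameter
        reset_hare_to_tortoise(hare, tortoise)
```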
https://proceedings.mlr.press/v235/lee24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24e/lee24e.pdf
https://openreview.net/forum?id=s6ZAT8MLKU
Fundamental Benefit of Alternating Updates in Minimax Optimization
https://proceedings.mlr.press/v235/lee24e.html
Jaewook Lee, Hanseul Cho, Chulhee Yun
https://proceedings.mlr.press/v235/lee24e.html
ICML 2024
The Gradient Descent-Ascent (GDA) algorithm, designed to solve minimax optimization problems, takes the descent and ascent steps either simultaneously (Sim-GDA) or alternately (Alt-GDA). While Alt-GDA is commonly observed to converge faster, the performance gap between the two is not yet well understood theoretically, especially in terms of global convergence rates. To address this theory-practice gap, we present fine-grained convergence analyses of both algorithms for strongly-convex-strongly-concave and Lipschitz-gradient objectives. Our new iteration-complexity upper bound for Alt-GDA is strictly smaller than the lower bound for Sim-GDA; i.e., Alt-GDA is provably faster. Moreover, we propose Alternating-Extrapolation GDA (Alex-GDA), a general algorithmic framework that subsumes Sim-GDA and Alt-GDA, whose main idea is to alternately take gradients from extrapolations of the iterates. We show that Alex-GDA satisfies a smaller iteration complexity bound, identical to that of the Extra-gradient method, while requiring fewer gradient computations. We also prove that Alex-GDA enjoys linear convergence for bilinear problems, for which both Sim-GDA and Alt-GDA fail to converge at all.
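A toy numerical illustration of the Sim-GDA versus Alt-GDA distinction follows, on a simple strongly-convex-strongly-concave quadratic (chosen for illustration; it is not one of the paper's analyzed settings, nor its Alex-GDA method), with an arbitrary step size and iteration count.

```python
import numpy as np

def grad_x(x, y):  # f(x, y) = 0.5*x**2 + x*y - 0.5*y**2, saddle at (0, 0)
    return x + y

def grad_y(x, y):
    return x - y

def sim_gda(x, y, lr, steps):
    """Simultaneous GDA: both players update from the same iterate."""
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - lr * gx, y + lr * gy
    return x, y

def alt_gda(x, y, lr, steps):
    """Alternating GDA: the ascent step sees the freshly updated x."""
    for _ in range(steps):
        x = x - lr * grad_x(x, y)
        y = y + lr * grad_y(x, y)
    return x, y

x0, y0, lr, steps = 1.0, 1.0, 0.1, 200
for name, fn in [("Sim-GDA", sim_gda), ("Alt-GDA", alt_gda)]:
    x, y = fn(x0, y0, lr, steps)
    print(f"{name}: distance to saddle = {np.hypot(x, y):.3e}")
# Alt-GDA typically reaches the saddle in fewer iterations on this toy problem.
```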
https://proceedings.mlr.press/v235/lee24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24f/lee24f.pdf
https://openreview.net/forum?id=szvKJgmubh
DataFreeShield: Defending Adversarial Attacks without Training Data
https://proceedings.mlr.press/v235/lee24f.html
Hyeyoon Lee, Kanghyun Choi, Dain Kwon, Sunjong Park, Mayoore Selvarasa Jaiswal, Noseong Park, Jonghyun Choi, Jinho Lee
https://proceedings.mlr.press/v235/lee24f.html
ICML 2024
Recent advances in adversarial robustness rely on an abundant set of training data, where using external or additional datasets has become a common setting. However, in real life, the training data is often kept private for security and privacy issues, while only the pretrained weight is available to the public. In such scenarios, existing methods that assume accessibility to the original data become inapplicable. Thus we investigate the pivotal problem of data-free adversarial robustness, where we try to achieve adversarial robustness without accessing any real data. Through a preliminary study, we highlight the severity of the problem by showing that robustness without the original dataset is difficult to achieve, even with similar domain datasets. To address this issue, we propose DataFreeShield, which tackles the problem from two perspectives: surrogate dataset generation and adversarial training using the generated data. Through extensive validation, we show that DataFreeShield outperforms baselines, demonstrating that the proposed method sets the first entirely data-free solution for the adversarial robustness problem.
https://proceedings.mlr.press/v235/lee24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24g/lee24g.pdf
https://openreview.net/forum?id=pTFud6SetK
SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching
https://proceedings.mlr.press/v235/lee24g.html
Yongmin Lee, Hye Won Chung
https://proceedings.mlr.press/v235/lee24g.html
ICML 2024
Dataset distillation aims to synthesize a small number of images per class (IPC) from a large dataset to approximate full dataset training with minimal performance loss. While effective in very small IPC ranges, many distillation methods become less effective, even underperforming random sample selection, as IPC increases. Our examination of state-of-the-art trajectory-matching based distillation methods across various IPC scales reveals that these methods struggle to incorporate the complex, rare features of harder samples into the synthetic dataset even with the increased IPC, resulting in a persistent coverage gap between easy and hard test samples. Motivated by such observations, we introduce SelMatch, a novel distillation method that effectively scales with IPC. SelMatch uses selection-based initialization and partial updates through trajectory matching to manage the synthetic dataset’s desired difficulty level tailored to IPC scales. When tested on CIFAR-10/100 and TinyImageNet, SelMatch consistently outperforms leading selection-only and distillation-only methods across subset ratios from 5% to 30%.
https://proceedings.mlr.press/v235/lee24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24h/lee24h.pdf
https://openreview.net/forum?id=w4B42sxNq3
Recurrent Early Exits for Federated Learning with Heterogeneous Clients
https://proceedings.mlr.press/v235/lee24h.html
Royson Lee, Javier Fernandez-Marques, Shell Xu Hu, Da Li, Stefanos Laskaridis, Łukasz Dudziak, Timothy Hospedales, Ferenc Huszár, Nicholas Donald Lane
https://proceedings.mlr.press/v235/lee24h.html
ICML 2024
Federated learning (FL) has enabled distributed learning of a model across multiple clients in a privacy-preserving manner. One of the main challenges of FL is to accommodate clients with varying hardware capacities; clients have differing compute and memory requirements. To tackle this challenge, recent state-of-the-art approaches leverage the use of early exits. Nonetheless, these approaches fall short of mitigating the challenges of jointly learning multiple exit classifiers, often relying on hand-picked heuristic solutions for knowledge distillation among classifiers and/or utilizing additional layers for weaker classifiers. In this work, instead of utilizing multiple classifiers, we propose a recurrent early exit approach named ReeFL that fuses features from different sub-models into a single shared classifier. Specifically, we use a transformer-based early-exit module shared among sub-models to i) better exploit multi-layer feature representations for task-specific prediction and ii) modulate the feature representation of the backbone model for subsequent predictions. We additionally present a per-client self-distillation approach where the best sub-model is automatically selected as the teacher of the other sub-models at each client. Our experiments on standard image and speech classification benchmarks across various emerging federated fine-tuning baselines demonstrate ReeFL's effectiveness over previous works.
https://proceedings.mlr.press/v235/lee24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24i/lee24i.pdf
https://openreview.net/forum?id=sOyJSNUrzQ
PAC-Bayesian Generalization Bounds for Knowledge Graph Representation Learning
https://proceedings.mlr.press/v235/lee24i.html
Jaejun Lee, Minsung Hwang, Joyce Jiyoung Whang
https://proceedings.mlr.press/v235/lee24i.html
ICML 2024
While a number of knowledge graph representation learning (KGRL) methods have been proposed over the past decade, very few theoretical analyses have been conducted on them. In this paper, we present the first PAC-Bayesian generalization bounds for KGRL methods. To analyze a broad class of KGRL models, we propose a generic framework named ReED (Relation-aware Encoder-Decoder), which consists of a relation-aware message passing encoder and a triplet classification decoder. Our ReED framework can express at least 15 different existing KGRL models, including not only graph neural network-based models such as R-GCN and CompGCN but also shallow-architecture models such as RotatE and ANALOGY. Our generalization bounds for the ReED framework provide theoretical grounds for the commonly used tricks in KGRL, e.g., parameter-sharing and weight normalization schemes, and guide desirable design choices for practical KGRL methods. We empirically show that the critical factors in our generalization bounds can explain actual generalization errors on three real-world knowledge graphs.
https://proceedings.mlr.press/v235/lee24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24j/lee24j.pdf
https://openreview.net/forum?id=IpPnmhjw30
Learning to Continually Learn with the Bayesian Principle
https://proceedings.mlr.press/v235/lee24j.html
Soochan Lee, Hyeonseong Jeon, Jaehyeon Son, Gunhee Kim
https://proceedings.mlr.press/v235/lee24j.html
ICML 2024
In the present era of deep learning, continual learning research is mainly focused on mitigating forgetting when training a neural network with stochastic gradient descent on a non-stationary stream of data. On the other hand, in the more classical literature of statistical machine learning, many models have sequential Bayesian update rules that yield the same learning outcome as batch training, i.e., they are completely immune to catastrophic forgetting. However, they are often too simple to model complex real-world data. In this work, we adopt the meta-learning paradigm to combine the strong representational power of neural networks and simple statistical models’ robustness to forgetting. In our novel meta-continual learning framework, continual learning takes place only in statistical models via ideal sequential Bayesian update rules, while neural networks are meta-learned to bridge the raw data and the statistical models. Since the neural networks remain fixed during continual learning, they are protected from catastrophic forgetting. This approach not only achieves significantly improved performance but also exhibits excellent scalability. Since our approach is domain-agnostic and model-agnostic, it can be applied to a wide range of problems and easily integrated with existing model architectures.
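To make the "sequential Bayesian update" idea concrete, here is a minimal sketch in which a frozen encoder (a stand-in for the meta-learned one) feeds a running per-class mean model whose sequential updates are order-invariant and hence forget nothing. The encoder, class count, and data stream are hypothetical, and the actual framework uses richer statistical models than this.

```python
import numpy as np

class RunningClassMeans:
    """Sequential (order-invariant) per-class mean of embeddings: the running
    average equals the batch average, so nothing is forgotten."""
    def __init__(self, n_classes, dim):
        self.sums = np.zeros((n_classes, dim))
        self.counts = np.zeros(n_classes)

    def update(self, z, y):
        self.sums[y] += z
        self.counts[y] += 1

    def predict(self, z):
        means = self.sums / np.maximum(self.counts, 1)[:, None]
        return int(np.argmin(((means - z) ** 2).sum(axis=1)))

encoder = lambda x: x          # stand-in for the meta-learned (frozen) encoder
model = RunningClassMeans(n_classes=2, dim=3)
stream = [([0.1, 0.0, 0.2], 0), ([1.0, 0.9, 1.1], 1), ([0.0, 0.2, 0.1], 0)]
for x, y in stream:
    model.update(np.asarray(encoder(x)), y)
print(model.predict(np.asarray([0.9, 1.0, 1.0])))   # -> 1
```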
https://proceedings.mlr.press/v235/lee24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24k/lee24k.pdf
https://openreview.net/forum?id=VyfEv6EjKR
Graph Neural Networks with a Distribution of Parametrized Graphs
https://proceedings.mlr.press/v235/lee24k.html
See Hian Lee, Feng Ji, Kelin Xia, Wee Peng Tay
https://proceedings.mlr.press/v235/lee24k.html
ICML 2024
Traditionally, graph neural networks have been trained using a single observed graph. However, the observed graph represents only one possible realization. In many applications, the graph may encounter uncertainties, such as having erroneous or missing edges, as well as edge weights that provide little informative value. To address these challenges and capture additional information previously absent in the observed graph, we introduce latent variables to parameterize and generate multiple graphs. The parameters follow an unknown distribution to be estimated. We propose a formulation in terms of maximum likelihood estimation of the network parameters. Therefore, it is possible to devise an algorithm based on Expectation-Maximization (EM). Specifically, we iteratively determine the distribution of the graphs using a Markov Chain Monte Carlo (MCMC) method, incorporating the principles of PAC-Bayesian theory. Numerical experiments demonstrate improvements in performance against baseline models on node classification for both heterogeneous and homogeneous graphs.
https://proceedings.mlr.press/v235/lee24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24l/lee24l.pdf
https://openreview.net/forum?id=qY622O6Ehg
Pausing Policy Learning in Non-stationary Reinforcement Learning
https://proceedings.mlr.press/v235/lee24l.html
Hyunin Lee, Ming Jin, Javad Lavaei, Somayeh Sojoudi
https://proceedings.mlr.press/v235/lee24l.html
ICML 2024
Real-time inference is a challenge in real-world reinforcement learning due to temporal differences in time-varying environments: the system collects data from the past, updates the decision model in the present, and deploys it in the future. We challenge the common belief that continually updating the decision is optimal for minimizing the temporal gap. We propose a forecasting online reinforcement learning framework and show that strategically pausing decision updates yields better overall performance by effectively managing aleatoric uncertainty. Theoretically, we compute an optimal ratio between policy update and hold duration, and show that a non-zero policy hold duration provides a sharper upper bound on the dynamic regret. Our experimental evaluations on three different environments also reveal that a non-zero policy hold duration yields higher rewards compared to continuous decision updates.
https://proceedings.mlr.press/v235/lee24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24m/lee24m.pdf
https://openreview.net/forum?id=buW1Bi6XFw
Feature Distribution on Graph Topology Mediates the Effect of Graph Convolution: Homophily Perspective
https://proceedings.mlr.press/v235/lee24m.html
Soo Yong Lee, Sunwoo Kim, Fanchen Bu, Jaemin Yoo, Jiliang Tang, Kijung Shin
https://proceedings.mlr.press/v235/lee24m.html
ICML 2024
How would randomly shuffling feature vectors among nodes from the same class affect graph neural networks (GNNs)? The feature shuffle, intuitively, perturbs the dependence between graph topology and features (A-X dependence) for GNNs to learn from. Surprisingly, we observe a consistent and significant improvement in GNN performance following the feature shuffle. Having overlooked the impact of A-X dependence on GNNs, the prior literature does not provide a satisfactory understanding of the phenomenon. Thus, we raise two research questions. First, how should A-X dependence be measured, while controlling for potential confounds? Second, how does A-X dependence affect GNNs? In response, we (i) propose a principled measure for A-X dependence, (ii) design a random graph model that controls A-X dependence, (iii) establish a theory on how A-X dependence relates to graph convolution, and (iv) present empirical analysis on real-world graphs that align with the theory. We conclude that A-X dependence mediates the effect of graph convolution, such that smaller dependence improves GNN-based node classification.
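A minimal sketch of the intra-class feature shuffle used as the perturbation in this analysis might look as follows; the function and toy data are illustrative, not the authors' code.

```python
import numpy as np

def intra_class_feature_shuffle(X, labels, rng=None):
    """Randomly permute feature vectors among nodes that share the same class
    label, perturbing the topology-feature (A-X) dependence while leaving the
    class-conditional feature distribution unchanged."""
    if rng is None:
        rng = np.random.default_rng(0)
    X_shuffled = X.copy()
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        X_shuffled[idx] = X[rng.permutation(idx)]
    return X_shuffled

# Usage: 6 nodes, 4-dim features, 2 classes
X = np.arange(24, dtype=float).reshape(6, 4)
labels = np.array([0, 0, 0, 1, 1, 1])
print(intra_class_feature_shuffle(X, labels))
```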
https://proceedings.mlr.press/v235/lee24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24n/lee24n.pdf
https://openreview.net/forum?id=u8TZ9gm4im
Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity
https://proceedings.mlr.press/v235/lee24n.html
Hagyeong Lee, Minkyu Kim, Jun-Hyuk Kim, Seungeon Kim, Dokwan Oh, Jaeho Lee
https://proceedings.mlr.press/v235/lee24n.html
ICML 2024
Recent advances in text-guided image compression have shown great potential to enhance the perceptual quality of reconstructed images. These methods, however, tend to have significantly degraded pixel-wise fidelity, limiting their practicality. To fill this gap, we develop a new text-guided image compression algorithm that achieves both high perceptual and pixel-wise fidelity. In particular, we propose a compression framework that leverages text information mainly by text-adaptive encoding and training with joint image-text loss. By doing so, we avoid decoding based on text-guided generative models—known for high generative diversity—and effectively utilize the semantic information of text at a global level. Experimental results on various datasets show that our method can achieve high pixel-level and perceptual quality, with either human- or machine-generated captions. In particular, our method outperforms all baselines in terms of LPIPS, with some room for even more improvements when we use more carefully generated captions.
https://proceedings.mlr.press/v235/lee24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24o/lee24o.pdf
https://openreview.net/forum?id=xuX2rDSSco
Drug Discovery with Dynamic Goal-aware Fragments
https://proceedings.mlr.press/v235/lee24o.html
Seul Lee, Seanie Lee, Kenji Kawaguchi, Sung Ju Hwang
https://proceedings.mlr.press/v235/lee24o.html
ICML 2024
Fragment-based drug discovery is an effective strategy for discovering drug candidates in the vast chemical space, and has been widely employed in molecular generative models. However, many existing fragment extraction methods in such models do not take the target chemical properties into account or rely on heuristic rules. Additionally, the existing fragment-based generative models cannot update the fragment vocabulary with goal-aware fragments newly discovered during the generation. To this end, we propose a molecular generative framework for drug discovery, named Goal-aware fragment Extraction, Assembly, and Modification (GEAM). GEAM consists of three modules, each responsible for goal-aware fragment extraction, fragment assembly, and fragment modification. The fragment extraction module identifies important fragments contributing to the desired target properties with the information bottleneck principle, thereby constructing an effective goal-aware fragment vocabulary. Moreover, GEAM can explore beyond the initial vocabulary with the fragment modification module, and the exploration is further enhanced through the dynamic goal-aware vocabulary update. We experimentally demonstrate that GEAM effectively discovers drug candidates through the generative cycle of the three modules in various drug discovery tasks. Our code is available at https://github.com/SeulLee05/GEAM.
https://proceedings.mlr.press/v235/lee24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24p/lee24p.pdf
https://openreview.net/forum?id=YlJy1FcM9E
Supervised Matrix Factorization: Local Landscape Analysis and Applications
https://proceedings.mlr.press/v235/lee24p.html
Joowon Lee, Hanbaek Lyu, Weixin Yao
https://proceedings.mlr.press/v235/lee24p.html
ICML 2024
Supervised matrix factorization (SMF) is a classical machine learning method that seeks low-dimensional feature extraction and classification tasks at the same time. Training an SMF model involves solving a non-convex and factor-wise constrained optimization problem with at least three blocks of parameters. Due to the high non-convexity and constraints, theoretical understanding of the optimization landscape of SMF has been limited. In this paper, we provide an extensive local landscape analysis for SMF and derive several theoretical and practical applications. Analyzing diagonal blocks of the Hessian naturally leads to a block coordinate descent (BCD) algorithm with adaptive step sizes. We provide global convergence and iteration complexity guarantees for this algorithm. Full Hessian analysis gives minimum $L_{2}$-regularization to guarantee local strong convexity and robustness of parameters. We establish a local estimation guarantee under a statistical SMF model. We also propose a novel GPU-friendly neural implementation of the BCD algorithm and validate our theoretical findings through numerical experiments. Our work contributes to a deeper understanding of SMF optimization, offering insights into the optimization landscape and providing practical solutions to enhance its performance.
https://proceedings.mlr.press/v235/lee24q.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24q/lee24q.pdf
https://openreview.net/forum?id=qXoqV40imX
Defining Neural Network Architecture through Polytope Structures of Datasets
https://proceedings.mlr.press/v235/lee24q.html
Sangmin Lee, Abbas Mammadov, Jong Chul Ye
https://proceedings.mlr.press/v235/lee24q.html
ICML 2024
Current theoretical and empirical research in neural networks suggests that complex datasets require large network architectures for thorough classification, yet the precise nature of this relationship remains unclear. This paper tackles this issue by defining upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question. We also delve into the application of these principles to simplicial complexes and specific manifold shapes, explaining how the requirement for network width varies in accordance with the geometric complexity of the dataset. Moreover, we develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks. Through our algorithm, it is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
https://proceedings.mlr.press/v235/lee24r.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24r/lee24r.pdf
https://openreview.net/forum?id=S0DPCE7tt4
Why Do Animals Need Shaping? A Theory of Task Composition and Curriculum Learning
https://proceedings.mlr.press/v235/lee24r.html
Jin Hwa Lee, Stefano Sarao Mannelli, Andrew M Saxe
https://proceedings.mlr.press/v235/lee24r.html
ICML 2024
Diverse studies in systems neuroscience begin with extended periods of curriculum training known as ‘shaping’ procedures. These involve progressively studying component parts of more complex tasks, and can make the difference between learning a task quickly, slowly or not at all. Despite the importance of shaping to the acquisition of complex tasks, there is as yet no theory that can help guide the design of shaping procedures, or more fundamentally, provide insight into its key role in learning. Modern deep reinforcement learning systems might implicitly learn compositional primitives within their multilayer policy networks. Inspired by these models, we propose and analyse a model of deep policy gradient learning of simple compositional reinforcement learning tasks. Using the tools of statistical physics, we solve for exact learning dynamics and characterise different learning strategies including primitives pre-training, in which task primitives are studied individually before learning compositional tasks. We find a complex interplay between task complexity and the efficacy of shaping strategies. Overall, our theory provides an analytical understanding of the benefits of shaping in a class of compositional tasks and a quantitative account of how training protocols can disclose useful task primitives, ultimately yielding faster and more robust learning.
https://proceedings.mlr.press/v235/lee24s.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24s/lee24s.pdf
https://openreview.net/forum?id=FYQIgQWH3d
3D Geometric Shape Assembly via Efficient Point Cloud Matching
https://proceedings.mlr.press/v235/lee24s.html
Nahyuk Lee, Juhong Min, Junha Lee, Seungwook Kim, Kanghee Lee, Jaesik Park, Minsu Cho
https://proceedings.mlr.press/v235/lee24s.html
ICML 2024
Learning to assemble geometric shapes into a larger target structure is a pivotal task in various practical applications. In this work, we tackle this problem by establishing local correspondences between point clouds of part shapes in both coarse- and fine-levels. To this end, we introduce Proxy Match Transform (PMT), an approximate high-order feature transform layer that enables reliable matching between mating surfaces of parts while incurring low costs in memory and compute. Building upon PMT, we introduce a new framework, dubbed Proxy Match TransformeR (PMTR), for the geometric assembly task. We evaluate the proposed PMTR on the large-scale 3D geometric shape assembly benchmark dataset of Breaking Bad and demonstrate its superior performance and efficiency compared to state-of-the-art methods. Project page: https://nahyuklee.github.io/pmtr
https://proceedings.mlr.press/v235/lee24t.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24t/lee24t.pdf
https://openreview.net/forum?id=uydQ2W41KO
RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
https://proceedings.mlr.press/v235/lee24t.html
Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Ren Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash
https://proceedings.mlr.press/v235/lee24t.html
ICML 2024
Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences, but gathering high-quality preference labels is expensive. RL from AI Feedback (RLAIF), introduced in Bai et al. (2022b), offers a promising alternative that trains the reward model (RM) on preferences generated by an off-the-shelf LLM. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, we show that RLAIF achieves comparable performance to RLHF. Furthermore, we take a step towards "self-improvement" by demonstrating that RLAIF can outperform a supervised fine-tuned baseline even when the AI labeler is the same size as the policy, or even the exact same checkpoint as the initial policy. Finally, we introduce direct-RLAIF (d-RLAIF) - a technique that circumvents RM training by obtaining rewards directly from an off-the-shelf LLM during RL, which achieves superior performance to canonical RLAIF. Our results suggest that RLAIF can achieve performance on-par with using human feedback, offering a potential solution to the scalability limitations of RLHF.
https://proceedings.mlr.press/v235/lee24u.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24u/lee24u.pdf
https://openreview.net/forum?id=kLZZWvqlEm
StrWAEs to Invariant Representations
https://proceedings.mlr.press/v235/lee24u.html
Hyunjong Lee, Yedarm Seong, Sungdong Lee, Joong-Ho Won
https://proceedings.mlr.press/v235/lee24u.html
ICML 2024
Autoencoders have become an indispensable tool for generative modeling and representation learning in high dimensions. Imposing structural constraints such as conditional independence in order to capture invariance of latent variables to nuisance information has been attempted through adding ad hoc penalties to the loss function, mostly in the variational autoencoder (VAE) context and often based on heuristics. This paper demonstrates that Wasserstein autoencoders (WAEs) are highly flexible in embracing such structural constraints. Well-known extensions of VAEs for this purpose are gracefully handled within the framework of WAEs. In particular, given a conditional independence structure of the generative model (decoder), the corresponding encoder structure and penalties are derived from the functional constraints that define the WAE. These structural uses of WAEs, termed StrWAEs (“stairways”), open up a principled way of penalizing autoencoders to impose structural constraints. Utilizing these advantages, we present a handful of results on semi-supervised classification, conditional generation, and invariant representation tasks.
https://proceedings.mlr.press/v235/lee24v.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24v/lee24v.pdf
https://openreview.net/forum?id=ErkzxOlOLy
Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains
https://proceedings.mlr.press/v235/lee24v.html
Kyungeun Lee, Ye Seul Sim, Hyeseung Cho, Moonjung Eo, Suhee Yoon, Sanghyu Yoon, Woohyung Lim
https://proceedings.mlr.press/v235/lee24v.html
ICML 2024
The ability of deep networks to learn superior representations hinges on leveraging the proper inductive biases, considering the inherent properties of datasets. In tabular domains, it is critical to effectively handle heterogeneous features (both categorical and numerical) in a unified manner and to grasp irregular functions like piecewise constant functions. To address the challenges in the self-supervised learning framework, we propose a novel pretext task based on the classical binning method. The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values. This pretext task provides the encoder with an inductive bias to capture the irregular dependencies, mapping from continuous inputs to discretized bins, and mitigates the feature heterogeneity by setting all features to have category-type targets. Our empirical investigations ascertain several advantages of binning: capturing the irregular function, compatibility with encoder architecture and additional modifications, standardizing all features into equal sets, grouping similar values within a feature, and providing ordering information. Comprehensive evaluations across diverse tabular datasets corroborate that our method consistently improves tabular representation learning performance for a wide range of downstream tasks. The code is available at https://github.com/kyungeun-lee/tabularbinning.
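A minimal sketch of how bin-index pretext targets could be constructed for numerical features is given below; the quantile binning, bin count, and names are assumptions for illustration rather than the paper's exact recipe.

```python
import numpy as np

def bin_index_targets(X, n_bins=10):
    """Map each numerical feature to quantile-bin indices; reconstructing these
    indices (rather than raw values) is the pretext target described above."""
    targets = np.empty_like(X, dtype=int)
    for j in range(X.shape[1]):
        # quantile edges make every bin roughly equally populated
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        targets[:, j] = np.digitize(X[:, j], edges)
    return targets  # values in {0, ..., n_bins-1}: one classification target per feature

X = np.random.default_rng(0).normal(size=(1000, 5))
y_bins = bin_index_targets(X, n_bins=8)
print(y_bins.min(), y_bins.max())   # 0 7
```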
https://proceedings.mlr.press/v235/lee24w.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24w/lee24w.pdf
https://openreview.net/forum?id=jP8mf34iCW
Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization
https://proceedings.mlr.press/v235/lee24w.html
Deokjae Lee, Hyun Oh Song, Kyunghyun Cho
https://proceedings.mlr.press/v235/lee24w.html
ICML 2024
Active learning is increasingly adopted for expensive multi-objective combinatorial optimization problems, but it involves a challenging subset selection problem: optimizing the batch acquisition score that quantifies the goodness of a batch for evaluation. Due to the excessively large search space of the subset selection problem, prior methods either optimize the batch acquisition on a latent space, which has discrepancies with the actual space, or optimize individual acquisition scores without considering the dependencies among candidates in a batch, rather than directly optimizing the batch acquisition. To manage the vast search space, a simple and effective approach is the greedy method, which decomposes the problem into smaller subproblems, yet it is difficult to parallelize since each subproblem depends on the outcome of the previous ones. To this end, we introduce a novel greedy-style subset selection algorithm that optimizes the batch acquisition directly on the combinatorial space by sequential greedy sampling from a greedy policy, specifically trained to address all greedy subproblems concurrently. Notably, our experiments on the red fluorescent protein design task show that our proposed method matches the baseline performance with 1.69x fewer queries, demonstrating its efficiency.
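For context on the greedy decomposition the abstract builds on, here is a minimal sketch of the classical sequential greedy baseline, assuming a toy batch acquisition score (`batch_score` below mixes utility and diversity and is purely an illustrative assumption, not the paper's acquisition); the paper's contribution replaces this sequential loop with a trained greedy policy that can be sampled in parallel.

```python
# Minimal sketch of sequential greedy batch selection: candidates are added one
# at a time, each chosen to maximize the marginal gain of a batch score.
import numpy as np

def batch_score(candidates: np.ndarray, batch_idx: list[int]) -> float:
    if not batch_idx:
        return 0.0
    batch = candidates[batch_idx]
    utility = batch[:, 0].sum()  # toy assumption: column 0 is a per-candidate utility
    diversity = np.linalg.norm(batch[:, None] - batch[None, :], axis=-1).mean()
    return float(utility + diversity)

def greedy_batch(candidates: np.ndarray, batch_size: int) -> list[int]:
    selected: list[int] = []
    for _ in range(batch_size):
        gains = [
            (batch_score(candidates, selected + [i]), i)
            for i in range(len(candidates)) if i not in selected
        ]
        _, best = max(gains)        # pick the candidate with the largest marginal gain
        selected.append(best)
    return selected

pool = np.random.default_rng(0).normal(size=(50, 4))
print(greedy_batch(pool, batch_size=5))
```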
https://proceedings.mlr.press/v235/lee24x.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24x/lee24x.pdf
https://openreview.net/forum?id=0zbxwvJqwf
Robust Optimization in Protein Fitness Landscapes Using Reinforcement Learning in Latent Space
https://proceedings.mlr.press/v235/lee24x.html
Minji Lee, Luiz Felipe Vecchietti, Hyunkyu Jung, Hyun Joo Ro, Meeyoung Cha, Ho Min Kim
https://proceedings.mlr.press/v235/lee24x.html
ICML 2024
Proteins are complex molecules responsible for different functions in nature. Enhancing the functionality of proteins and cellular fitness can significantly impact various industries. However, protein optimization using computational methods remains challenging, especially when starting from low-fitness sequences. We propose LatProtRL, an optimization method to efficiently traverse a latent space learned by an encoder-decoder leveraging a large protein language model. To escape local optima, our optimization is modeled as a Markov decision process using reinforcement learning acting directly in latent space. We evaluate our approach on two important fitness optimization tasks, demonstrating its ability to achieve comparable or superior fitness over baseline methods. Our findings and in vitro evaluation show that the generated sequences can reach high-fitness regions, suggesting a substantial potential of LatProtRL in lab-in-the-loop scenarios.
https://proceedings.mlr.press/v235/lee24y.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24y/lee24y.pdf
https://openreview.net/forum?id=hoVwecMqV5
Behavior Generation with Latent Actions
https://proceedings.mlr.press/v235/lee24y.html
Seungjae Lee, Yibin Wang, Haritheja Etukuru, H. Jin Kim, Nur Muhammad Mahi Shafiullah, Lerrel Pinto
https://proceedings.mlr.press/v235/lee24y.html
ICML 2024
Generative modeling of complex behaviors from labeled datasets has been a longstanding problem in decision-making. Unlike language or image generation, decision-making requires modeling actions – continuous-valued vectors that are multimodal in their distribution, potentially drawn from uncurated sources, where generation errors can compound in sequential prediction. A recent class of models called Behavior Transformers (BeT) addresses this by discretizing actions using k-means clustering to capture different modes. However, k-means struggles to scale to high-dimensional action spaces or long sequences and lacks gradient information, so BeT suffers in modeling long-range actions. In this work, we present Vector-Quantized Behavior Transformer (VQ-BeT), a versatile model for behavior generation that handles multimodal action prediction, conditional generation, and partial observations. VQ-BeT augments BeT by tokenizing continuous actions with a hierarchical vector quantization module. Across seven environments spanning simulated manipulation, autonomous driving, and robotics, VQ-BeT improves on state-of-the-art models such as BeT and Diffusion Policies. Importantly, we demonstrate VQ-BeT’s improved ability to capture behavior modes while accelerating inference speed 5× over Diffusion Policies. Videos can be found at https://sjlee.cc/vq-bet/
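As an illustration of the hierarchical vector quantization idea that replaces k-means here, the following is a minimal residual vector quantization sketch: a coarse codebook quantizes the action and a second codebook quantizes the residual. The two-level structure, codebook sizes, and action dimensionality are illustrative assumptions, not the VQ-BeT implementation.

```python
# Minimal sketch of residual (hierarchical) vector quantization of continuous actions.
import torch

def residual_vq(actions: torch.Tensor, codebooks: list[torch.Tensor]):
    """Quantize each action with a stack of codebooks; return codes and reconstruction."""
    residual = actions
    recon = torch.zeros_like(actions)
    codes = []
    for cb in codebooks:                              # cb: (num_codes, action_dim)
        dists = torch.cdist(residual, cb)             # (batch, num_codes)
        idx = dists.argmin(dim=-1)                    # nearest code per action
        chosen = cb[idx]
        codes.append(idx)
        recon = recon + chosen
        residual = residual - chosen                  # next level quantizes the residual
    return torch.stack(codes, dim=-1), recon

actions = torch.randn(32, 7)                          # e.g. 7-DoF robot actions (assumption)
codebooks = [torch.randn(16, 7), torch.randn(16, 7)]  # coarse + fine codebook (assumption)
codes, recon = residual_vq(actions, codebooks)
print(codes.shape, (actions - recon).abs().mean())
```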
https://proceedings.mlr.press/v235/lee24z.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24z/lee24z.pdf
https://openreview.net/forum?id=eCCaHZKdl4
Improving Instruction Following in Language Models through Proxy-Based Uncertainty Estimation
https://proceedings.mlr.press/v235/lee24z.html
Joonho Lee, Jae Oh Woo, Juree Seok, Parisa Hassanzadeh, Wooseok Jang, Juyoun Son, Sima Didari, Baruch Gutow, Heng Hao, Hankyu Moon, Wenjun Hu, Yeong-Dae Kwon, Taehee Lee, Seungjai Min
https://proceedings.mlr.press/v235/lee24z.html
ICML 2024
Assessing response quality to instructions in language models is vital but challenging due to the complexity of human language across different contexts. This complexity often results in ambiguous or inconsistent interpretations, making accurate assessment difficult. To address this issue, we propose a novel Uncertainty-aware Reward Model (URM) that introduces a robust uncertainty estimation for the quality of paired responses based on Bayesian approximation. Trained with preference datasets, our uncertainty-enabled proxy not only scores rewards for responses but also evaluates their inherent uncertainty. Empirical results demonstrate significant benefits of incorporating the proposed proxy into language model training. Our method boosts the instruction following capability of language models by refining data curation for training and improving policy optimization objectives, thereby surpassing existing methods by a large margin on benchmarks such as Vicuna and MT-bench. These findings highlight that our proposed approach substantially advances language model training and paves a new way of harnessing uncertainty within language models.
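One common way to realize a Bayesian-approximation uncertainty estimate of this kind is Monte Carlo dropout; the sketch below scores a pooled response embedding under several dropout masks and reports the mean and spread. The scorer architecture, dropout rate, and number of MC samples are illustrative assumptions and not necessarily the URM design.

```python
# Minimal sketch of a reward proxy with MC-dropout uncertainty: score the same
# input under multiple dropout masks and use the spread as the uncertainty.
import torch
import torch.nn as nn

class UncertainRewardModel(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Dropout(p=0.1), nn.Linear(64, 1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h).squeeze(-1)

    @torch.no_grad()
    def score_with_uncertainty(self, h: torch.Tensor, n_samples: int = 16):
        self.train()                       # keep dropout active for MC sampling
        scores = torch.stack([self(h) for _ in range(n_samples)])
        return scores.mean(dim=0), scores.std(dim=0)

rm = UncertainRewardModel()
h = torch.randn(4, 128)                    # pooled response embeddings (assumption)
mean_reward, uncertainty = rm.score_with_uncertainty(h)
print(mean_reward, uncertainty)
```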
https://proceedings.mlr.press/v235/lee24aa.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24aa/lee24aa.pdf
https://openreview.net/forum?id=6TM62kpI5c
Rethinking the Flat Minima Searching in Federated Learning
https://proceedings.mlr.press/v235/lee24aa.html
Taehwan Lee, Sung Whan Yoon
https://proceedings.mlr.press/v235/lee24aa.html
ICML 2024
Despite the success of federated learning (FL) in decentralized training, bolstering the generalization of models by overcoming heterogeneity across clients remains a huge challenge. Aiming at improved generalization of FL, a group of recent works pursues flatter minima of models by employing sharpness-aware minimization in the local training at the client side. However, we observe that the global model, i.e., the aggregated model, does not lie on flat minima of the global objective, even with the effort of flatness searching in local training, which we define as flatness discrepancy. By rethinking and theoretically analyzing flatness searching in FL through the lens of the discrepancy problem, we propose a method called Federated Learning for Global Flatness (FedGF) that explicitly pursues the flatter minima of the global models, leading to a reduced flatness discrepancy and remarkable performance gains on heterogeneous FL benchmarks.
https://proceedings.mlr.press/v235/lee24ab.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24ab/lee24ab.pdf
https://openreview.net/forum?id=5zXTwX92qv
BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation
https://proceedings.mlr.press/v235/lee24ab.html
Daeun Lee, Jaehong Yoon, Sung Ju Hwang
https://proceedings.mlr.press/v235/lee24ab.html
ICML 2024
Continual Test-Time Adaptation (CTTA) is designed to optimize the model during deployment under changing conditions. CTTA is an important problem as it enables models to remain effective and reliable in dynamic and evolving environments. However, tackling the CTTA problem is nontrivial. The model needs to be computationally and memory-efficient to rapidly update its parameters for ever-changing environments in real-time. Also, the model should generalize well to new unseen domains while maintaining its capability on previously encountered ones, as old domains can be revisited in future adaptation phases. To tackle these challenges, this paper proposes BECoTTA, a parameter/memory-efficient yet powerful framework for CTTA. We introduce Mixture-of-Domain Low-rank Experts (MoDE), which contains two core components: (i) Domain-Adaptive Routing, which aids in selectively capturing domain-adaptive knowledge, and (ii) Domain-Expert Synergy Loss, which maximizes the dependency between each domain and expert. We validate our proposed method on multiple CTTA benchmarks, achieving a 5.81% performance gain while requiring only 0.001x trainable parameters. We also provide analyses of our BECoTTA, including expert assignment and target domain relation.
https://proceedings.mlr.press/v235/lee24ac.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24ac/lee24ac.pdf
https://openreview.net/forum?id=u4VR3WBH7a
STELLA: Continual Audio-Video Pre-training with SpatioTemporal Localized Alignment
https://proceedings.mlr.press/v235/lee24ac.html
Jaewoo Lee, Jaehong Yoon, Wonjae Kim, Yunji Kim, Sung Ju Hwang
https://proceedings.mlr.press/v235/lee24ac.html
ICML 2024
Continuously learning a variety of audio-video semantics over time is crucial for audio-related reasoning tasks in our ever-evolving world. However, this is a nontrivial problem and poses two critical challenges: sparse spatio-temporal correlation between audio-video pairs and multimodal correlation overwriting that forgets audio-video relations. To tackle this problem, we propose a new continual audio-video pre-training method with two novel ideas: (1) Localized Patch Importance Scoring: we introduce a multimodal encoder to determine the importance score for each patch, emphasizing semantically intertwined audio-video patches. (2) Replay-guided Correlation Assessment: to reduce the corruption of previously learned audiovisual knowledge due to drift, we propose to assess the correlation of the current patches with the past steps and identify the patches exhibiting high correlations. Based on the results from the two ideas, we perform probabilistic patch selection for effective continual audio-video pre-training. Experimental validation on multiple benchmarks shows that our method achieves a 3.69%p relative performance gain in zero-shot retrieval tasks compared to strong continual learning baselines, while reducing memory consumption by approximately 45%.
https://proceedings.mlr.press/v235/lee24ad.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lee24ad/lee24ad.pdf
https://openreview.net/forum?id=Lb8G2dZjcB
Sign Rank Limitations for Inner Product Graph Decoders
https://proceedings.mlr.press/v235/lee24ad.html
Su Hyeong Lee, Qingqi Zhang, Risi Kondor
https://proceedings.mlr.press/v235/lee24ad.html
ICML 2024
Inner product-based decoders are among the most influential frameworks used to extract meaningful data from latent embeddings. However, such decoders have shown limitations in representation capacity in numerous works within the literature, which have been particularly notable in graph reconstruction problems. In this paper, we provide the first theoretical elucidation of this pervasive phenomenon in graph data, and suggest straightforward modifications to circumvent this issue without deviating from the inner product framework.
https://proceedings.mlr.press/v235/legacci24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/legacci24a/legacci24a.pdf
https://openreview.net/forum?id=7RSIGQRT1F
A Geometric Decomposition of Finite Games: Convergence vs. Recurrence under Exponential Weights
https://proceedings.mlr.press/v235/legacci24a.html
Davide Legacci, Panayotis Mertikopoulos, Bary Pradelski
https://proceedings.mlr.press/v235/legacci24a.html
ICML 2024
In view of the complexity of the dynamics of learning in games, we seek to decompose a game into simpler components where the dynamics’ long-run behavior is well understood. A natural starting point for this is Helmholtz’s theorem, which decomposes a vector field into a potential and an incompressible component. However, the geometry of game dynamics - and, in particular, the dynamics of exponential / multiplicative weights (EW) schemes - is not compatible with the Euclidean underpinnings of Helmholtz’s theorem. This leads us to consider a specific Riemannian framework based on the so-called Shahshahani metric, and introduce the class of incompressible games, for which we establish the following results: First, in addition to being volume-preserving, the continuous-time EW dynamics in incompressible games admit a constant of motion and are Poincaré recurrent - i.e., almost every trajectory of play comes arbitrarily close to its starting point infinitely often. Second, we establish a deep connection with a well-known decomposition of games into a potential and harmonic component (where the players’ objectives are aligned and anti-aligned respectively): a game is incompressible if and only if it is harmonic, implying in turn that the EW dynamics lead to Poincaré recurrence in harmonic games.
https://proceedings.mlr.press/v235/lei24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lei24a/lei24a.pdf
https://openreview.net/forum?id=xgoilgLPGD
Langevin Policy for Safe Reinforcement Learning
https://proceedings.mlr.press/v235/lei24a.html
Fenghao Lei, Long Yang, Shiting Wen, Zhixiong Huang, Zhiwang Zhang, Chaoyi Pang
https://proceedings.mlr.press/v235/lei24a.html
ICML 2024
Optimization-based and sampling-based algorithms are two branches of methods in machine learning. While existing safe reinforcement learning (RL) algorithms are mainly based on optimization, it is still unclear whether sampling-based methods can lead to desirable performance with a safe policy. This paper formulates the Langevin policy for safe RL, and proposes Langevin Actor-Critic (LAC) to accelerate the process of policy inference. Concretely, instead of a parametric policy, the proposed Langevin policy provides a stochastic process that directly infers actions, acting as the numerical solver of the Langevin dynamics of actions in continuous time. Furthermore, to make the Langevin policy practical on RL tasks, the proposed LAC accumulates the transitions induced by the Langevin policy and reproduces them with a generator. Finally, extensive empirical results show the effectiveness and superiority of LAC on the MuJoCo-based and Safety Gym tasks.
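A minimal sketch of Langevin-style action inference follows: starting from an initial action, the action is refined by noisy gradient steps on a learned critic Q(s, a). The toy critic, step size, and number of steps are illustrative assumptions; LAC's accumulation of transitions, generator, and safety handling are not shown.

```python
# Minimal sketch of Langevin action inference: noisy gradient ascent of actions
# on a learned critic Q(s, a), instead of sampling from a parametric policy.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(4 + 2, 64), nn.ReLU(), nn.Linear(64, 1))  # Q(s, a), toy sizes

def langevin_action(state: torch.Tensor, steps: int = 50, step_size: float = 0.05) -> torch.Tensor:
    action = torch.zeros(state.shape[0], 2, requires_grad=True)
    for _ in range(steps):
        q = critic(torch.cat([state, action], dim=-1)).sum()
        grad, = torch.autograd.grad(q, action)          # d Q / d a
        noise = torch.randn_like(action)
        with torch.no_grad():
            # Langevin step: gradient ascent plus sqrt(2 * step_size) noise
            action += step_size * grad + (2 * step_size) ** 0.5 * noise
    return action.detach()

state = torch.randn(8, 4)
print(langevin_action(state))
```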
https://proceedings.mlr.press/v235/leluc24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/leluc24a/leluc24a.pdf
https://openreview.net/forum?id=28SEr5iFyT
Sliced-Wasserstein Estimation with Spherical Harmonics as Control Variates
https://proceedings.mlr.press/v235/leluc24a.html
Rémi Leluc, Aymeric Dieuleveut, François Portier, Johan Segers, Aigerim Zhuman
https://proceedings.mlr.press/v235/leluc24a.html
ICML 2024
The Sliced-Wasserstein (SW) distance between probability measures is defined as the average of the Wasserstein distances resulting from the associated one-dimensional projections. As a consequence, the SW distance can be written as an integral with respect to the uniform measure on the sphere, and the Monte Carlo framework can be employed for calculating the SW distance. Spherical harmonics are polynomials on the sphere that form an orthonormal basis of the set of square-integrable functions on the sphere. Putting these two facts together, a new Monte Carlo method, hereby referred to as Spherical Harmonics Control Variates (SHCV), is proposed for approximating the SW distance using spherical harmonics as control variates. The resulting approach is shown to have good theoretical properties, e.g., a no-error property for Gaussian measures under a certain form of linear dependency between the variables. Moreover, an improved rate of convergence, compared to Monte Carlo, is established for general measures. The convergence analysis relies on the Lipschitz property associated with the SW integrand. Several numerical experiments demonstrate the superior performance of SHCV against state-of-the-art methods for SW distance computation.
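For reference, the plain Monte Carlo estimator of the SW distance that SHCV improves upon can be sketched in a few lines: draw random directions on the sphere, project both samples, and average the resulting one-dimensional Wasserstein distances (the spherical-harmonics control variates themselves are not shown). Sample sizes and the number of projections below are illustrative assumptions.

```python
# Minimal sketch of the plain Monte Carlo Sliced-Wasserstein estimator for two
# empirical measures with equal sample sizes.
import numpy as np

def sliced_wasserstein_mc(X: np.ndarray, Y: np.ndarray, n_proj: int = 200, p: int = 2) -> float:
    rng = np.random.default_rng(0)
    d = X.shape[1]
    dirs = rng.normal(size=(n_proj, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)       # uniform directions on the sphere
    total = 0.0
    for u in dirs:
        x_proj = np.sort(X @ u)
        y_proj = np.sort(Y @ u)
        total += np.mean(np.abs(x_proj - y_proj) ** p)         # 1-D Wasserstein-p^p (equal sizes)
    return (total / n_proj) ** (1.0 / p)

X = np.random.default_rng(1).normal(size=(500, 3))
Y = np.random.default_rng(2).normal(loc=0.5, size=(500, 3))
print(sliced_wasserstein_mc(X, Y))
```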
https://proceedings.mlr.press/v235/lemercier24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lemercier24a/lemercier24a.pdf
https://openreview.net/forum?id=CLJZI5kDhX
An Independence-promoting Loss for Music Generation with Language Models
https://proceedings.mlr.press/v235/lemercier24a.html
Jean-Marie Lemercier, Simon Rouard, Jade Copet, Yossi Adi, Alexandre Défossez
https://proceedings.mlr.press/v235/lemercier24a.html
ICML 2024
Music generation schemes using language modeling rely on a vocabulary of audio tokens, generally provided as codes in a discrete latent space learnt by an auto-encoder. Multi-stage quantizers are often employed to produce these tokens; therefore, the decoding strategy used for token prediction must be adapted to account for multiple codebooks: either it should model the joint distribution over all codebooks, or fit the product of the codebook marginal distributions. Modelling the joint distribution requires a costly increase in the number of auto-regressive steps, while fitting the product of the marginals yields an inexact model unless the codebooks are mutually independent. In this work, we introduce an independence-promoting loss to regularize the auto-encoder used as the tokenizer in language models for music generation. The proposed loss is a proxy for mutual information based on the maximum mean discrepancy principle, applied in reproducing kernel Hilbert spaces. Our criterion is simple to implement and train, and it is generalizable to other multi-stream codecs. We show that it reduces the statistical dependence between codebooks during auto-encoding. This leads to an increase in the generated music quality when modelling the product of the marginal distributions, while generating audio much faster than the joint distribution model.
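A maximum-mean-discrepancy proxy for independence between two latent streams can be sketched as follows: compare samples from the joint distribution of (z1, z2) against samples from the product of marginals obtained by shuffling z2 within the batch. The Gaussian kernel, its bandwidth, and the latent dimensions are illustrative assumptions rather than the paper's exact criterion.

```python
# Minimal sketch of an MMD-based independence penalty between two codebook streams.
import torch

def gaussian_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

def mmd_independence_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    joint = torch.cat([z1, z2], dim=1)                      # samples from the joint
    perm = torch.randperm(z2.shape[0])
    product = torch.cat([z1, z2[perm]], dim=1)              # samples from the product of marginals
    k_jj = gaussian_kernel(joint, joint).mean()
    k_pp = gaussian_kernel(product, product).mean()
    k_jp = gaussian_kernel(joint, product).mean()
    return k_jj + k_pp - 2 * k_jp                           # (biased) MMD^2 estimate

z1, z2 = torch.randn(64, 8), torch.randn(64, 8)
print(mmd_independence_loss(z1, z2))
```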
https://proceedings.mlr.press/v235/lemos24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lemos24a/lemos24a.pdf
https://openreview.net/forum?id=K5h6VAsJaV
Improving Gradient-Guided Nested Sampling for Posterior Inference
https://proceedings.mlr.press/v235/lemos24a.html
Pablo Lemos, Nikolay Malkin, Will Handley, Yoshua Bengio, Yashar Hezaveh, Laurence Perreault-Levasseur
https://proceedings.mlr.press/v235/lemos24a.html
ICML 2024
We present a performant, general-purpose gradient-guided nested sampling (GGNS) algorithm, combining the state of the art in differentiable programming, Hamiltonian slice sampling, clustering, mode separation, dynamic nested sampling, and parallelization. This unique combination allows GGNS to scale well with dimensionality and perform competitively on a variety of synthetic and real-world problems. We also show the potential of combining nested sampling with generative flow networks to obtain large amounts of high-quality samples from the posterior distribution. This combination leads to faster mode discovery and more accurate estimates of the partition function.
https://proceedings.mlr.press/v235/letzelter24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/letzelter24a/letzelter24a.pdf
https://openreview.net/forum?id=SAbL40d8A4
Winner-takes-all learners are geometry-aware conditional density estimators
https://proceedings.mlr.press/v235/letzelter24a.html
Victor Letzelter, David Perera, Cédric Rommel, Mathieu Fontaine, Slim Essid, Gaël Richard, Patrick Perez
https://proceedings.mlr.press/v235/letzelter24a.html
ICML 2024
Winner-takes-all training is a simple learning paradigm, which handles ambiguous tasks by predicting a set of plausible hypotheses. Recently, a connection was established between Winner-takes-all training and centroidal Voronoi tessellations, showing that, once trained, the hypotheses should optimally quantize the shape of the conditional distribution to be predicted. However, the best use of these hypotheses for uncertainty quantification is still an open question. In this work, we show how to leverage the appealing geometric properties of Winner-takes-all learners for conditional density estimation, without modifying the original training scheme. We theoretically establish the advantages of our novel estimator both in terms of quantization and density estimation, and we demonstrate its competitiveness on synthetic and real-world datasets, including audio data.
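The winner-takes-all training rule itself is simple to sketch: a network emits K hypotheses and only the hypothesis closest to the target receives gradient. The small MLP head and K = 5 below are illustrative assumptions; the paper's density estimator built on top of the trained hypotheses is not shown.

```python
# Minimal sketch of winner-takes-all training with K hypotheses per input.
import torch
import torch.nn as nn

class WTAHead(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_hyp: int = 5):
        super().__init__()
        self.num_hyp, self.out_dim = num_hyp, out_dim
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_hyp * out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(-1, self.num_hyp, self.out_dim)

def wta_loss(hypotheses: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # hypotheses: (batch, K, out_dim), target: (batch, out_dim)
    errors = ((hypotheses - target.unsqueeze(1)) ** 2).sum(dim=-1)   # (batch, K)
    best = errors.min(dim=1).values                                  # only the winner contributes
    return best.mean()

model = WTAHead(in_dim=3, out_dim=2)
x, y = torch.randn(32, 3), torch.randn(32, 2)
loss = wta_loss(model(x), y)
loss.backward()
```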
https://proceedings.mlr.press/v235/leveni24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/leveni24a/leveni24a.pdf
https://openreview.net/forum?id=CbIZatwz9z
Online Isolation Forest
https://proceedings.mlr.press/v235/leveni24a.html
Filippo Leveni, Guilherme Weigert Cassales, Bernhard Pfahringer, Albert Bifet, Giacomo Boracchi
https://proceedings.mlr.press/v235/leveni24a.html
ICML 2024
The anomaly detection literature is abundant with offline methods, which require repeated access to data in memory, and impose impractical assumptions when applied to a streaming context. Existing online anomaly detection methods also generally fail to address these constraints, resorting to periodic retraining to adapt to the online context. We propose Online-iForest, a novel method explicitly designed for streaming conditions that seamlessly tracks the data generating process as it evolves over time. Experimental validation on real-world datasets demonstrated that Online-iForest is on par with online alternatives and closely rivals state-of-the-art offline anomaly detection techniques that undergo periodic retraining. Notably, Online-iForest consistently outperforms all competitors in terms of efficiency, making it a promising solution in applications where fast identification of anomalies is of primary importance such as cybersecurity, fraud and fault detection.
https://proceedings.mlr.press/v235/levine24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/levine24a/levine24a.pdf
https://openreview.net/forum?id=EWt5wsEdvc
Cell2Sentence: Teaching Large Language Models the Language of Biology
https://proceedings.mlr.press/v235/levine24a.html
Daniel Levine, Syed A Rizvi, Sacha Lévy, Nazreen Pallikkavaliyaveetil, David Zhang, Xingyu Chen, Sina Ghadermarzi, Ruiming Wu, Zihe Zheng, Ivan Vrkic, Anna Zhong, Daphne Raskin, Insu Han, Antonio Henrique De Oliveira Fonseca, Josue Ortega Caro, Amin Karbasi, Rahul Madhav Dhodapkar, David Van Dijk
https://proceedings.mlr.press/v235/levine24a.html
ICML 2024
We introduce Cell2Sentence (C2S), a novel method to directly adapt large language models to a biological context, specifically single-cell transcriptomics. By transforming gene expression data into "cell sentences," C2S bridges the gap between natural language processing and biology. We demonstrate cell sentences enable the fine-tuning of language models for diverse tasks in biology, including cell generation, complex cell-type annotation, and direct data-driven text generation. Our experiments reveal that GPT-2, when fine-tuned with C2S, can generate biologically valid cells based on cell type inputs, and accurately predict cell types from cell sentences. This illustrates that language models, through C2S fine-tuning, can acquire a significant understanding of single-cell biology while maintaining robust text generation capabilities. C2S offers a flexible, accessible framework to integrate natural language processing with transcriptomics, utilizing existing models and libraries for a wide range of biological applications.
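The core "cell sentence" transformation can be sketched directly: order the genes of a cell by decreasing expression and emit their names as a sentence for a language model to consume. The gene names, the top-k cutoff, and the zero-expression filter below are illustrative assumptions.

```python
# Minimal sketch of turning a single-cell expression vector into a "cell sentence":
# gene names ordered by decreasing expression, restricted to expressed genes.
import numpy as np

def cell_to_sentence(expression: np.ndarray, gene_names: list[str], top_k: int = 10) -> str:
    order = np.argsort(expression)[::-1]                    # most-expressed genes first
    expressed = [gene_names[i] for i in order[:top_k] if expression[i] > 0]
    return " ".join(expressed)

genes = [f"GENE{i}" for i in range(100)]                    # placeholder gene names
expr = np.random.default_rng(0).poisson(lam=1.0, size=100).astype(float)
print(cell_to_sentence(expr, genes))
```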
https://proceedings.mlr.press/v235/levy24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/levy24a/levy24a.pdf
https://openreview.net/forum?id=47jMS97wJX
Eluder-based Regret for Stochastic Contextual MDPs
https://proceedings.mlr.press/v235/levy24a.html
Orin Levy, Asaf Cassel, Alon Cohen, Yishay Mansour
https://proceedings.mlr.press/v235/levy24a.html
ICML 2024
We present the E-UC$^3$RL algorithm for regret minimization in Stochastic Contextual Markov Decision Processes (CMDPs). The algorithm operates under the minimal assumptions of a realizable function class and access to offline least squares and log loss regression oracles. Our algorithm is efficient (assuming efficient offline regression oracles) and enjoys a regret guarantee of $\widetilde{O}(H^3 \sqrt{T |S| |A| d_{\mathrm{E}}(\mathcal{P}) \log (|\mathcal{F}| |\mathcal{P}| / \delta)})$, with $T$ being the number of episodes, $S$ the state space, $A$ the action space, $H$ the horizon, $\mathcal{P}$ and $\mathcal{F}$ finite function classes used to approximate the context-dependent dynamics and rewards, respectively, and $d_{\mathrm{E}}(\mathcal{P})$ the Eluder dimension of $\mathcal{P}$ w.r.t. the Hellinger distance. To the best of our knowledge, our algorithm is the first efficient and rate-optimal regret minimization algorithm for CMDPs that operates under the general offline function approximation setting. In addition, we extend the Eluder dimension to general bounded metrics, which may be of independent interest.
https://proceedings.mlr.press/v235/li24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24a/li24a.pdf
https://openreview.net/forum?id=wdTiuvd0fR
Feature Reuse and Scaling: Understanding Transfer Learning with Protein Language Models
https://proceedings.mlr.press/v235/li24a.html
Francesca-Zhoufan Li, Ava P Amini, Yisong Yue, Kevin K Yang, Alex Xijie Lu
https://proceedings.mlr.press/v235/li24a.html
ICML 2024
Large pretrained protein language models (PLMs) have improved protein property and structure prediction from sequences via transfer learning, in which weights and representations from PLMs are repurposed for downstream tasks. Although PLMs have shown great promise, currently there is little understanding of how the features learned by pretraining relate to and are useful for downstream tasks. We perform a systematic analysis of transfer learning using PLMs, conducting 370 experiments across a comprehensive suite of factors including different downstream tasks, architectures, model sizes, model depths, and pretraining time. We observe that while almost all downstream tasks do benefit from pretrained models compared to naive sequence representations, for the majority of tasks performance does not scale with pretraining, and instead relies on low-level features learned early in pretraining. Our results point to a mismatch between current PLM pretraining paradigms and most applications of these models, indicating a need for better pretraining methods.
https://proceedings.mlr.press/v235/li24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24b/li24b.pdf
https://openreview.net/forum?id=7KtFQnF368
Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms
https://proceedings.mlr.press/v235/li24b.html
Yuchen Li, Laura Balzano, Deanna Needell, Hanbaek Lyu
https://proceedings.mlr.press/v235/li24b.html
ICML 2024
We analyze inexact Riemannian gradient descent (RGD) where Riemannian gradients and retractions are inexactly (and cheaply) computed. Our focus is on understanding when inexact RGD converges and what is the complexity in the general nonconvex and constrained setting. We answer these questions in a general framework of tangential Block Majorization-Minimization (tBMM). We establish that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations. Under a mild assumption, the results still hold when the subproblem is solved inexactly in each iteration provided the total optimality gap is bounded. Our general analysis applies to a wide range of classical algorithms with Riemannian constraints including inexact RGD and proximal gradient method on Stiefel manifolds. We numerically validate that tBMM shows improved performance over existing methods when applied to various problems, including nonnegative tensor decomposition with Riemannian constraints, regularized nonnegative matrix factorization, and low-rank matrix recovery problems.
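As a concrete instance of Riemannian gradient descent with retractions, the sketch below maximizes tr(X^T A X) over the Stiefel manifold using the tangent-space projection of the Euclidean gradient and a QR retraction; it is written as ascent on this objective, i.e., RGD on the negated objective. Problem sizes, step size, and iteration count are illustrative assumptions, and the paper's tBMM machinery and inexactness analysis are not shown.

```python
# Minimal sketch of Riemannian gradient steps on the Stiefel manifold for the
# symmetric eigenvalue problem max tr(X^T A X), using a QR retraction.
import numpy as np

def stiefel_rgd(A: np.ndarray, p: int, steps: int = 300, lr: float = 0.05) -> np.ndarray:
    n = A.shape[0]
    X, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(n, p)))   # random start on the manifold
    for _ in range(steps):
        egrad = 2 * A @ X                                  # Euclidean gradient of tr(X^T A X)
        sym = (X.T @ egrad + egrad.T @ X) / 2
        rgrad = egrad - X @ sym                            # projection onto the tangent space at X
        X, _ = np.linalg.qr(X + lr * rgrad)                # ascent step followed by QR retraction
    return X

A = np.random.default_rng(1).normal(size=(20, 20))
A = (A + A.T) / 2                                          # symmetric test matrix
X = stiefel_rgd(A, p=3)
# Compare the achieved objective with the sum of the top-3 eigenvalues of A.
print(np.trace(X.T @ A @ X), np.sort(np.linalg.eigvalsh(A))[-3:].sum())
```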
https://proceedings.mlr.press/v235/li24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24c/li24c.pdf
https://openreview.net/forum?id=SBR8Gwe1E2
DetKDS: Knowledge Distillation Search for Object Detectors
https://proceedings.mlr.press/v235/li24c.html
Lujun Li, Yufan Bao, Peijie Dong, Chuanguang Yang, Anggeng Li, Wenhan Luo, Qifeng Liu, Wei Xue, Yike Guo
https://proceedings.mlr.press/v235/li24c.html
ICML 2024
In this paper, we present DetKDS, the first framework that searches for optimal detection distillation policies. Manual design of detection distillers becomes challenging and time-consuming due to significant disparities in distillation behaviors between detectors with different backbones, paradigms, and label assignments. To tackle these challenges, we leverage search algorithms to discover optimal distillers for homogeneous and heterogeneous student-teacher pairs. Firstly, our search space encompasses global features, foreground-background features, instance features, logits response, and localization response as inputs. Then, we construct omni-directional cascaded transformations and obtain the distiller by selecting the advanced distance function and common weight value options. Finally, we present a divide-and-conquer evolutionary algorithm to handle the explosion of the search space. In this strategy, we first evolve the best distiller formulations of individual knowledge inputs and then optimize the combined weights of these multiple distillation losses. DetKDS automates the distillation process without requiring expert design or additional tuning, effectively reducing the teacher-student gap in various scenarios. Based on the analysis of our search results, we provide valuable guidance that contributes to detection distillation designs. Comprehensive experiments on different detectors demonstrate that DetKDS outperforms state-of-the-art methods in detection and instance segmentation tasks. For instance, DetKDS achieves significant gains over baseline detectors: $+3.7$, $+4.1$, $+4.0$, $+3.7$, and $+3.5$ AP on RetinaNet, Faster-RCNN, FCOS, RepPoints, and GFL, respectively. Code at: https://github.com/lliai/DetKDS.
https://proceedings.mlr.press/v235/li24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24d/li24d.pdf
https://openreview.net/forum?id=dW29JZj0G5
Denoising Autoregressive Representation Learning
https://proceedings.mlr.press/v235/li24d.html
Yazhe Li, Jorg Bornschein, Ting Chen
https://proceedings.mlr.press/v235/li24d.html
ICML 2024
In this paper, we explore a new generative approach for learning visual representations. Our method, DARL, employs a decoder-only Transformer to predict image patches autoregressively. We find that training with Mean Squared Error (MSE) alone leads to strong representations. To enhance the image generation ability, we replace the MSE loss with the diffusion objective by using a denoising patch decoder. We show that the learned representation can be improved by using tailored noise schedules and longer training in larger models. Notably, the optimal schedule differs significantly from the typical ones used in standard image diffusion models. Overall, despite its simple architecture, DARL delivers performance remarkably close to state-of-the-art masked prediction models under the fine-tuning protocol. This marks an important step towards a unified model capable of both visual perception and generation, effectively combining the strengths of autoregressive and denoising diffusion models.
https://proceedings.mlr.press/v235/li24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24e/li24e.pdf
https://openreview.net/forum?id=CEfr3h68KU
Purifying Quantization-conditioned Backdoors via Layer-wise Activation Correction with Distribution Approximation
https://proceedings.mlr.press/v235/li24e.html
Boheng Li, Yishuo Cai, Jisong Cai, Yiming Li, Han Qiu, Run Wang, Tianwei Zhang
https://proceedings.mlr.press/v235/li24e.html
ICML 2024
Model quantization is a compression technique that converts a full-precision model to a more compact low-precision version for better storage. Despite the great success of quantization, recent studies revealed the feasibility of maliciously exploiting model quantization by implanting quantization-conditioned backdoors (QCBs). These special backdoors remain dormant in full-precision models but are exposed upon quantization. Unfortunately, existing defenses have limited effects on mitigating QCBs. In this paper, we conduct an in-depth analysis of QCBs. We reveal an intriguing characteristic of QCBs, where the activations of backdoor-related neurons exhibit a distribution drift after quantization even on benign samples, although this drift is more significant on poisoned samples. Motivated by this finding, we propose to purify the backdoor-exposed quantized model by aligning its layer-wise activation with its full-precision version. To further exploit the more pronounced activation drifts on poisoned samples, we design an additional module to layer-wisely approximate the poisoned activation distribution based on batch normalization statistics of the full-precision model. Extensive experiments are conducted, verifying the effectiveness of our defense. Our code is publicly available.
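The layer-wise activation alignment at the heart of this purification can be sketched as follows: on benign data, the (backdoor-exposed) quantized model is fine-tuned so that its intermediate activations match those of the full-precision model. The hook targets, the plain MSE alignment objective, and the stand-in "quantized" network are illustrative assumptions; the batch-normalization-based distribution approximation module is not shown.

```python
# Minimal sketch of layer-wise activation alignment between a quantized model
# and its full-precision reference on benign inputs.
import torch
import torch.nn as nn

def collect_activations(model: nn.Module, x: torch.Tensor) -> list[torch.Tensor]:
    acts, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.Linear):
            hooks.append(m.register_forward_hook(lambda _m, _i, out: acts.append(out)))
    model(x)
    for h in hooks:
        h.remove()
    return acts

full_precision = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
quantized = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))  # stand-in for the quantized model

x = torch.randn(64, 16)                                   # benign calibration batch (assumption)
with torch.no_grad():
    ref_acts = collect_activations(full_precision, x)     # reference activations, no gradient
acts = collect_activations(quantized, x)
align_loss = sum(nn.functional.mse_loss(a, r) for a, r in zip(acts, ref_acts))
align_loss.backward()
```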
https://proceedings.mlr.press/v235/li24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24f/li24f.pdf
https://openreview.net/forum?id=JObct1zyTb
Improving Neural Logic Machines via Failure Reflection
https://proceedings.mlr.press/v235/li24f.html
Zhiming Li, Yushi Cao, Yan Zheng, Xu Liu, Bozhi Wu, Tianlin Li, Xiufeng Xu, Junzhe Jiang, Yon Shin Teo, Shang-Wei Lin, Yang Liu
https://proceedings.mlr.press/v235/li24f.html
ICML 2024
Reasoning is a fundamental ability towards artificial general intelligence (AGI). Fueled by the success of deep learning, neural logic machines (NLMs) have introduced novel neural-symbolic structures and demonstrate great performance and generalization on reasoning and decision-making tasks. However, the original training approaches of NLMs are still far from perfect: the models repeat similar mistakes during the training process, which leads to sub-optimal performance. To mitigate this issue, we present a novel framework named Failure Reflection Guided Regularizer (FRGR). FRGR first dynamically identifies and summarizes the root cause if the model repeats similar mistakes during training. Then it penalizes the model if it makes similar mistakes in future training iterations. In this way, the model is expected to avoid repeating errors of similar root causes and converge faster to a better-performing optimum. Experimental results on multiple relational reasoning and decision-making tasks demonstrate the effectiveness of FRGR in improving performance, generalization, training efficiency, and data efficiency.
https://proceedings.mlr.press/v235/li24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24g/li24g.pdf
https://openreview.net/forum?id=a8ZpjLJuKk
Critical windows: non-asymptotic theory for feature emergence in diffusion models
https://proceedings.mlr.press/v235/li24g.html
Marvin Li, Sitan Chen
https://proceedings.mlr.press/v235/li24g.html
ICML 2024
We develop theory to understand an intriguing property of diffusion models for image generation that we term critical windows. Empirically, it has been observed that there are narrow time intervals in sampling during which particular features of the final image emerge, e.g. the image class or background color (Ho et al., 2020b; Meng et al., 2022; Choi et al., 2022; Raya & Ambrogioni, 2023; Georgiev et al., 2023; Sclocchi et al., 2024; Biroli et al., 2024). While this is advantageous for interpretability as it implies one can localize properties of the generation to a small segment of the trajectory, it seems at odds with the continuous nature of the diffusion. We propose a formal framework for studying these windows and show that for data coming from a mixture of strongly log-concave densities, these windows can be provably bounded in terms of certain measures of inter- and intra-group separation. We also instantiate these bounds for concrete examples like well-conditioned Gaussian mixtures. Finally, we use our bounds to give a rigorous interpretation of diffusion models as hierarchical samplers that progressively “decide” output features over a discrete sequence of times. We validate our bounds with experiments on synthetic data and show that critical windows may serve as a useful tool for diagnosing fairness and privacy violations in real-world diffusion models.
https://proceedings.mlr.press/v235/li24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24h/li24h.pdf
https://openreview.net/forum?id=LvuuYqU0BW
Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition
https://proceedings.mlr.press/v235/li24h.html
Yuke Li, Guangyi Chen, Ben Abramowitz, Stefano Anzellotti, Donglai Wei
https://proceedings.mlr.press/v235/li24h.html
ICML 2024
Few-shot action recognition aims at quickly adapting a pre-trained model to novel data with a distribution shift using only a limited number of samples. Key challenges include how to identify and leverage the transferable knowledge learned by the pre-trained model. We therefore propose CDTD, or Causal Domain-Invariant Temporal Dynamics, for knowledge transfer. To identify the temporally invariant and variant representations, we employ causal representation learning methods for unsupervised pretraining, and then tune the classifier with supervision in the next stage. Specifically, we assume the domain information can be well estimated and the pre-trained temporal dynamic generation and transition models can be well transferred. During adaptation, we fix the transferable temporal dynamics and update the image encoder and domain estimator. The efficacy of our approach is revealed by the superior accuracy of CDTD over leading alternatives across standard few-shot action recognition datasets.
https://proceedings.mlr.press/v235/li24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24i/li24i.pdf
https://openreview.net/forum?id=qKL25sGjxL
GiLOT: Interpreting Generative Language Models via Optimal Transport
https://proceedings.mlr.press/v235/li24i.html
Xuhong Li, Jiamin Chen, Yekun Chai, Haoyi Xiong
https://proceedings.mlr.press/v235/li24i.html
ICML 2024
While large language models (LLMs) surge with the rise of generative AI, algorithms to explain LLMs are highly desired. Existing feature attribution methods that are adequate for discriminative language models like BERT often fail to deliver faithful explanations for LLMs, primarily due to two issues: (1) For every specific prediction, the LLM outputs a probability distribution over the vocabulary, a large number of tokens with unequal semantic distances; (2) As an autoregressive language model, the LLM handles input tokens while generating a sequence of probability distributions over various tokens. To address the above two challenges, this work proposes GiLOT, which leverages Optimal Transport to measure the distributional change of all possible generated sequences upon the absence of every input token, while taking into account the tokens’ similarity, so as to faithfully estimate feature attribution for LLMs. We have carried out extensive experiments on top of the Llama families and their fine-tuned derivatives across various scales to validate the effectiveness of GiLOT for estimating input attributions. The results show that GiLOT outperforms existing solutions on a number of faithfulness metrics under fair comparison settings. Source code is publicly available at https://github.com/holyseven/GiLOT.
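The quantity GiLOT is built on, an optimal-transport distance between next-token distributions with and without a given input token under a token-similarity cost, can be sketched with the POT library (an assumption; the paper's exact cost construction and sequence-level aggregation are not shown). The toy embeddings and distributions below are illustrative.

```python
# Minimal sketch of an OT-based attribution signal: the transport cost between
# the model's next-token distribution with and without a given input token,
# under a cost matrix derived from token-embedding distances.
# Assumes the POT library is available (`pip install pot`).
import numpy as np
import ot  # Python Optimal Transport

vocab_size, emb_dim = 50, 16
rng = np.random.default_rng(0)
emb = rng.normal(size=(vocab_size, emb_dim))
cost = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)   # semantic token distances

def softmax(x: np.ndarray) -> np.ndarray:
    z = np.exp(x - x.max())
    return z / z.sum()

p_full = softmax(rng.normal(size=vocab_size))        # next-token dist. with the token present (toy)
p_ablated = softmax(rng.normal(size=vocab_size))     # next-token dist. with the token removed (toy)

attribution = ot.emd2(p_full, p_ablated, cost)       # OT cost used as the token's attribution signal
print(attribution)
```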
https://proceedings.mlr.press/v235/li24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24j/li24j.pdf
https://openreview.net/forum?id=nLgtHHBgl3
Completing Visual Objects via Bridging Generation and Segmentation
https://proceedings.mlr.press/v235/li24j.html
Xiang Li, Yinpeng Chen, Chung-Ching Lin, Hao Chen, Kai Hu, Rita Singh, Bhiksha Raj, Lijuan Wang, Zicheng Liu
https://proceedings.mlr.press/v235/li24j.html
ICML 2024
This paper presents a novel approach to object completion, with the primary goal of reconstructing a complete object from its partially visible components. Our method, named MaskComp, delineates the completion process through iterative stages of generation and segmentation. In each iteration, the object mask is provided as an additional condition to boost image generation, and, in return, the generated images can lead to a more accurate mask by fusing the segmentation of images. We demonstrate that the combination of one generation and one segmentation stage effectively functions as a mask denoiser. Through alternation between the generation and segmentation stages, the partial object mask is progressively refined, providing precise shape guidance and yielding superior object completion results. Our experiments demonstrate the superiority of MaskComp over existing approaches, e.g., ControlNet and Stable Diffusion, establishing it as an effective solution for object completion.