Fields per record: abs, Download PDF, OpenReview, title, url, authors, detail_url, tags, abstract
https://proceedings.mlr.press/v235/shah24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shah24a/shah24a.pdf
https://openreview.net/forum?id=rTBR0eqE4G
Decomposing and Editing Predictions by Modeling Model Computation
https://proceedings.mlr.press/v235/shah24a.html
Harshay Shah, Andrew Ilyas, Aleksander Madry
https://proceedings.mlr.press/v235/shah24a.html
ICML 2024
How does the internal computation of a machine learning model transform inputs into predictions? To tackle this question, we introduce a framework called component modeling for decomposing a model prediction in terms of its components—architectural "building blocks" such as convolution filters or attention heads. We focus on a special case of this framework, component attribution, where the goal is to estimate the counterfactual impact of individual components on a given prediction. We then present COAR, a scalable algorithm for estimating component attributions, and demonstrate its effectiveness across models, datasets and modalities. Finally, we show that COAR directly enables effective model editing. Our code is available at github.com/MadryLab/modelcomponents.
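To make the idea of component attribution concrete, here is a minimal numpy sketch in the spirit of the abstract: ablate random subsets of a toy model's "components" and fit a linear surrogate whose coefficients estimate each component's counterfactual effect on one prediction. The toy linear model, the ablation rate, and the least-squares estimator are illustrative assumptions, not the COAR algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": the prediction is a weighted sum of C component outputs.
C = 50
component_outputs = rng.normal(size=C)   # fixed component activations for one input
readout = rng.normal(size=C)             # how each component feeds the prediction

def predict(mask):
    """Model output when components with mask == 0 are ablated (zeroed out)."""
    return float(readout @ (component_outputs * mask))

# Component attribution: regress the prediction on random ablation masks,
# so each coefficient estimates a component's counterfactual effect.
n_samples = 2000
masks = (rng.random((n_samples, C)) > 0.1).astype(float)   # ablate ~10% of components
preds = np.array([predict(m) for m in masks])

design = np.c_[masks, np.ones(n_samples)]                  # masks plus an intercept
coef, *_ = np.linalg.lstsq(design, preds, rcond=None)
attributions = coef[:C]

# In this linear toy model the attributions recover readout * component_outputs.
print(np.allclose(attributions, readout * component_outputs, atol=1e-5))
```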
https://proceedings.mlr.press/v235/shaham24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shaham24a/shaham24a.pdf
https://openreview.net/forum?id=mDw42ZanmE
A Multimodal Automated Interpretability Agent
https://proceedings.mlr.press/v235/shaham24a.html
Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba
https://proceedings.mlr.press/v235/shaham24a.html
ICML 2024
This paper describes MAIA, a Multimodal Automated Interpretability Agent. MAIA is a system that uses neural models to automate neural model understanding tasks like feature interpretation and failure mode discovery. It equips a pre-trained vision-language model with a set of tools that support iterative experimentation on subcomponents of other models to explain their behavior. These include tools commonly used by human interpretability researchers: for synthesizing and editing inputs, computing maximally activating exemplars from real-world datasets, and summarizing and describing experimental results. Interpretability experiments proposed by MAIA compose these tools to describe and explain system behavior. We evaluate applications of MAIA to computer vision models. We first characterize MAIA’s ability to describe (neuron-level) features in learned representations of images. Across several trained models and a novel dataset of synthetic vision neurons with paired ground-truth descriptions, MAIA produces descriptions comparable to those generated by expert human experimenters. We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be misclassified.
https://proceedings.mlr.press/v235/shahroudi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shahroudi24a/shahroudi24a.pdf
https://openreview.net/forum?id=FCmWhJQ14I
Evaluation of Trajectory Distribution Predictions with Energy Score
https://proceedings.mlr.press/v235/shahroudi24a.html
Novin Shahroudi, Mihkel Lepson, Meelis Kull
https://proceedings.mlr.press/v235/shahroudi24a.html
ICML 2024
Predicting the future trajectory of surrounding objects is inherently uncertain and vital in the safe and reliable planning of autonomous systems such as in self-driving cars. Although trajectory prediction models have become increasingly sophisticated in dealing with the complexities of spatiotemporal data, the evaluation methods used to assess these models have not kept pace. "Minimum of N" is a common family of metrics used to assess the rich outputs of such models. We critically examine the Minimum of N within the proper scoring rules framework to show that it is not strictly proper and demonstrate how that could lead to a misleading assessment of multimodal trajectory predictions. As an alternative, we propose using Energy Score-based evaluation measures, leveraging their proven propriety for a more reliable evaluation of trajectory distribution predictions.
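For reference, the energy score of a sample-based trajectory prediction against an observed trajectory can be estimated with the generic Monte-Carlo form E||X - y|| - 0.5 E||X - X'||, as sketched below. The toy trajectory shapes and the flattening of each trajectory into a single vector are assumptions for illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def energy_score(samples, y):
    """Monte-Carlo energy score of a predictive distribution given by `samples`.

    samples: (n, T, 2) array of n predicted trajectories (T steps, 2-D positions).
    y:       (T, 2) observed trajectory.
    Lower is better; the energy score is a proper scoring rule for
    distribution-valued predictions.
    """
    samples = samples.reshape(len(samples), -1)   # flatten each trajectory
    y = y.reshape(-1)
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    # All-pairs term (includes the zero i = j pairs; the small bias vanishes
    # as the number of samples grows).
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=-1))
    return term1 - term2

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(size=(12, 2)), axis=0)        # one observed trajectory
preds = truth + rng.normal(scale=0.5, size=(100, 12, 2))   # 100 sampled predictions
print(energy_score(preds, truth))
```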
https://proceedings.mlr.press/v235/shalam24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shalam24a/shalam24a.pdf
https://openreview.net/forum?id=pspyQm4ko0
The Balanced-Pairwise-Affinities Feature Transform
https://proceedings.mlr.press/v235/shalam24a.html
Daniel Shalam, Simon Korman
https://proceedings.mlr.press/v235/shalam24a.html
ICML 2024
The Balanced-Pairwise-Affinities (BPA) feature transform is designed to upgrade the features of a set of input items to facilitate downstream matching- or grouping-related tasks. The transformed set encodes a rich representation of high order relations between the input features. A particular min-cost-max-flow fractional matching problem, whose entropy regularized version can be approximated by an optimal transport (OT) optimization, leads to a transform which is efficient, differentiable, equivariant, parameterless and probabilistically interpretable. While the Sinkhorn OT solver has been adapted extensively in many contexts, we use it differently by minimizing the cost from a set of features to itself and using the transport plan’s rows as the new representation. Empirically, the transform is highly effective and flexible in its use and consistently improves networks it is inserted into, in a variety of tasks and training schemes. We demonstrate state-of-the-art results in few-shot classification, unsupervised image clustering and person re-identification. Code is available at github.com/DanielShalam/BPA.
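A minimal numpy sketch of the mechanism the abstract describes: run entropically regularized (Sinkhorn) optimal transport from a feature set to itself and take the rows of the transport plan as the new representation. The cost normalization, the large diagonal penalty, the regularization strength eps, and the iteration count are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def bpa_transform(X, eps=0.1, iters=200):
    """Balanced-pairwise-affinities-style transform (sketch).

    X: (n, d) set of features. Returns an (n, n) representation whose i-th row
    is item i's (row-normalized) self-transport plan over the whole set.
    """
    C = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise cost
    C = C / C.max()                                        # scale costs to [0, 1]
    np.fill_diagonal(C, 1e3)                               # discourage self-matching

    K = np.exp(-C / eps)
    n = len(X)
    a = b = np.full(n, 1.0 / n)          # uniform marginals (balanced matching)
    u = np.ones(n)
    for _ in range(iters):               # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # entropic transport plan
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 16)), rng.normal(4, 1, (5, 16))])
Z = bpa_transform(X)
print(Z.shape)   # (10, 10): each item is represented by its affinities to the set
```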
https://proceedings.mlr.press/v235/shamshoum24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shamshoum24a/shamshoum24a.pdf
https://openreview.net/forum?id=tu5fCCuua2
DNCs Require More Planning Steps
https://proceedings.mlr.press/v235/shamshoum24a.html
Yara Shamshoum, Nitzan Hodos, Yuval Sieradzki, Assaf Schuster
https://proceedings.mlr.press/v235/shamshoum24a.html
ICML 2024
Many recent works use machine learning models to solve various complex algorithmic problems. However, these models attempt to reach a solution without considering the problem’s required computational complexity, which can be detrimental to their ability to solve it correctly. In this work we investigate the effect of computational time and memory on generalization of implicit algorithmic solvers. To do so, we focus on the Differentiable Neural Computer (DNC), a general problem solver that also lets us reason directly about its usage of time and memory. In this work, we argue that the number of planning steps the model is allowed to take, which we call “planning budget”, is a constraint that can cause the model to generalize poorly and hurt its ability to fully utilize its external memory. We evaluate our method on Graph Shortest Path, Convex Hull, Graph MinCut and Associative Recall, and show how the planning budget can drastically change the behavior of the learned algorithm, in terms of learned time complexity, training time, stability and generalization to inputs larger than those seen during training.
https://proceedings.mlr.press/v235/shamsian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shamsian24a/shamsian24a.pdf
https://openreview.net/forum?id=3o7G6tIo4X
Improved Generalization of Weight Space Networks via Augmentations
https://proceedings.mlr.press/v235/shamsian24a.html
Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron
https://proceedings.mlr.press/v235/shamsian24a.html
ICML 2024
Learning in deep weight spaces (DWS), where neural networks process the weights of other neural networks, is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs), as well as making inferences about other types of neural networks. Unfortunately, weight space models tend to suffer from substantial overfitting. We empirically analyze the reasons for this overfitting and find that a key reason is the lack of diversity in DWS datasets. While a given object can be represented by many different weight configurations, typical INR training sets fail to capture variability across INRs that represent the same object. To address this, we explore strategies for data augmentation in weight spaces and propose a MixUp method adapted for weight spaces. We demonstrate the effectiveness of these methods in two setups. In classification, they improve performance similarly to having up to 10 times more data. In self-supervised contrastive learning, they yield substantial 5-10% gains in downstream classification.
https://proceedings.mlr.press/v235/shankar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shankar24a/shankar24a.pdf
https://openreview.net/forum?id=merZTLSdC9
On Online Experimentation without Device Identifiers
https://proceedings.mlr.press/v235/shankar24a.html
Shiv Shankar, Ritwik Sinha, Madalina Fiterau
https://proceedings.mlr.press/v235/shankar24a.html
ICML 2024
Measuring human feedback via randomized experimentation is a cornerstone of data-driven decision-making. The methodology used to estimate user preferences from their online behaviours is critically dependent on user identifiers. However, in today’s digital landscape, consumers frequently interact with content across multiple devices, which are often recorded with different identifiers for the same consumer. The inability to match different device identities across consumers poses significant challenges for accurately estimating human preferences and other causal effects. Moreover, without strong assumptions about the device-user graph, the causal effects might not be identifiable. In this paper, we propose HIFIVE, a variational method to solve the problem of estimating global average treatment effects (GATE) from a fragmented view of exposures and outcomes. Experiments show that our estimator is superior to standard estimators, with a lower bias and greater robustness to network uncertainty.
https://proceedings.mlr.press/v235/shao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shao24a/shao24a.pdf
https://openreview.net/forum?id=D7wi9LIE6i
Improved Dimensionality Dependence for Zeroth-Order Optimisation over Cross-Polytopes
https://proceedings.mlr.press/v235/shao24a.html
Weijia Shao
https://proceedings.mlr.press/v235/shao24a.html
ICML 2024
This work proposes an algorithm improving the dimensionality dependence for gradient-free optimisation over cross-polytopes, which has many applications such as adversarial attacks, explainable AI and sparse regression. For bandit convex optimisation with two-point feedback over cross-polytopes, the state-of-the-art algorithms have a dimensionality dependence of $\mathcal{O}(\sqrt{d\log d})$, while the known lower bound is of the form $\Omega(\sqrt{d(\log d)^{-1}})$. We propose a mirror descent algorithm equipped with a symmetric version of the negative $\frac{1}{2}$-Tsallis entropy. Combined with an $\ell_1$-ellipsoidal smoothing-based gradient estimator, the proposed algorithm guarantees a dimensionality dependence of $\mathcal{O}(\sqrt{d})$, which improves on the state-of-the-art algorithms by a factor of $\sqrt{\log d}$. The idea can be further applied to optimising non-smooth and non-convex functions. We propose an algorithm whose convergence depends on the dimension as $\mathcal{O}(d)$, which is the best-known dimensionality dependence.
https://proceedings.mlr.press/v235/shao24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shao24b/shao24b.pdf
https://openreview.net/forum?id=j35VcooKG8
On Multi-Armed Bandit with Impatient Arms
https://proceedings.mlr.press/v235/shao24b.html
Yuming Shao, Zhixuan Fang
https://proceedings.mlr.press/v235/shao24b.html
ICML 2024
In this paper, we investigate a Multi-Armed Bandit (MAB) setting where an arm exits the game if the algorithm continuously neglects it. This setup is motivated by real-world scenarios, such as online advertising and crowdsourcing, where arms only gain benefits after being pulled by the algorithm. We identify the intrinsic hardness of this problem and limitations in existing approaches. We propose the FC-SE algorithm, with expected regret upper bounds, as our solution to this problem. As an extension, we further allow new arms to enter after the game starts and design the FC-Entry algorithm with performance guarantees for this setup. Finally, we conduct experiments to validate our theoretical results.
https://proceedings.mlr.press/v235/shao24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shao24c/shao24c.pdf
https://openreview.net/forum?id=LALSZ88Xpx
Language Generation with Strictly Proper Scoring Rules
https://proceedings.mlr.press/v235/shao24c.html
Chenze Shao, Fandong Meng, Yijin Liu, Jie Zhou
https://proceedings.mlr.press/v235/shao24c.html
ICML 2024
Language generation based on maximum likelihood estimation (MLE) has become the fundamental approach for text generation. Maximum likelihood estimation is typically performed by minimizing the log-likelihood loss, also known as the logarithmic score in statistical decision theory. The logarithmic score is strictly proper in the sense that it encourages honest forecasts, where the expected score is maximized only when the model reports true probabilities. Although many strictly proper scoring rules exist, the logarithmic score is the only local scoring rule among them that depends exclusively on the probability of the observed sample, making it capable of handling the exponentially large sample space of natural text. In this work, we propose a straightforward strategy for adapting scoring rules to language generation, allowing for language modeling with any non-local scoring rules. Leveraging this strategy, we train language generation models using two classic strictly proper scoring rules, the Brier score and the Spherical score, as alternatives to the logarithmic score. Experimental results indicate that simply substituting the loss function, without adjusting other hyperparameters, can yield substantial improvements in the model’s generation capabilities. Moreover, these improvements can scale up to large language models (LLMs) such as LLaMA-7B and LLaMA-13B. Source code: https://github.com/shaochenze/ScoringRulesLM.
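As a concrete illustration of swapping the loss, here is a token-level Brier-score loss in PyTorch that can stand in for cross-entropy in a standard language-modeling loop. The dense one-hot construction is only practical for modest vocabularies and is a simplification; the paper's adaptation strategy for non-local scoring rules over natural text is more involved.

```python
import torch
import torch.nn.functional as F

def brier_loss(logits, targets):
    """Token-level Brier score: sum_k (p_k - 1[k == target])^2, averaged over tokens.

    logits:  (batch, seq, vocab) raw model outputs
    targets: (batch, seq) gold token ids
    Minimizing it is a strictly proper alternative to the logarithmic score.
    """
    probs = F.softmax(logits, dim=-1)
    onehot = F.one_hot(targets, num_classes=probs.size(-1)).to(probs.dtype)
    return ((probs - onehot) ** 2).sum(dim=-1).mean()

# Toy usage with random logits; in training these would come from the LM.
logits = torch.randn(2, 8, 1000, requires_grad=True)
targets = torch.randint(0, 1000, (2, 8))
loss = brier_loss(logits, targets)
loss.backward()
print(float(loss))
```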
https://proceedings.mlr.press/v235/shao24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shao24d/shao24d.pdf
https://openreview.net/forum?id=iRcmqXZjeK
Learning Decision Policies with Instrumental Variables through Double Machine Learning
https://proceedings.mlr.press/v235/shao24d.html
Daqian Shao, Ashkan Soleymani, Francesco Quinzan, Marta Kwiatkowska
https://proceedings.mlr.press/v235/shao24d.html
ICML 2024
A common issue in learning decision-making policies in data-rich settings is spurious correlations in the offline dataset, which can be caused by hidden confounders. Instrumental variable (IV) regression, which utilises a key unconfounded variable called the instrument, is a standard technique for learning causal relationships between confounded action, outcome and context variables. Most recent IV regression algorithms use a two-stage approach, where a deep neural network (DNN) estimator learnt in the first stage is directly plugged into the second stage, in which another DNN is used to estimate the causal effect. Naively plugging the estimator can cause heavy bias in the second stage, especially when regularisation bias is present in the first stage estimator. We propose DML-IV, a non-linear IV regression method that reduces the bias in two-stage IV regressions and effectively learns high-performing policies. We derive a novel learning objective to reduce bias and design the DML-IV algorithm following the double/debiased machine learning (DML) framework. The learnt DML-IV estimator has a strong convergence rate and $O(N^{-1/2})$ suboptimality guarantees that match those when the dataset is unconfounded. DML-IV outperforms state-of-the-art IV regression methods on IV regression benchmarks and learns high-performing policies in the presence of instruments.
https://proceedings.mlr.press/v235/sharma24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sharma24a/sharma24a.pdf
https://openreview.net/forum?id=RfQT6vJt8b
How Far Can Fairness Constraints Help Recover From Biased Data?
https://proceedings.mlr.press/v235/sharma24a.html
Mohit Sharma, Amit Deshpande
https://proceedings.mlr.press/v235/sharma24a.html
ICML 2024
A general belief in fair classification is that fairness constraints incur a trade-off with accuracy, which biased data may worsen. Contrary to this belief, Blum & Stangl (2019) show that fair classification with equal opportunity constraints even on extremely biased data can recover optimally accurate and fair classifiers on the original data distribution. Their result is interesting because it demonstrates that fairness constraints can implicitly rectify data bias and simultaneously overcome a perceived fairness-accuracy trade-off. Their data bias model simulates under-representation and label bias in the underprivileged population, and they show the above result on a stylized data distribution with i.i.d. label noise, under simple conditions on the data distribution and bias parameters. We propose a general approach to extend the result of Blum & Stangl (2019) to different fairness constraints, data bias models, data distributions, and hypothesis classes. We strengthen their result, and extend it to the case when their stylized distribution has labels with Massart noise instead of i.i.d. noise. We prove a similar recovery result for arbitrary data distributions using fair reject option classifiers. We further generalize it to arbitrary data distributions and arbitrary hypothesis classes, i.e., we prove that for any data distribution, if the optimally accurate classifier in a given hypothesis class is fair and robust, then it can be recovered through fair classification with equal opportunity constraints on the biased distribution whenever the bias parameters satisfy certain simple conditions. Finally, we show applications of our technique to time-varying data bias in classification and fair machine learning pipelines.
https://proceedings.mlr.press/v235/sharma24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sharma24b/sharma24b.pdf
https://openreview.net/forum?id=ia0Z8d1DbY
Diffuse, Sample, Project: Plug-And-Play Controllable Graph Generation
https://proceedings.mlr.press/v235/sharma24b.html
Kartik Sharma, Srijan Kumar, Rakshit Trivedi
https://proceedings.mlr.press/v235/sharma24b.html
ICML 2024
Diffusion models lend transformative capabilities to the graph generation task, yet controlling the properties of the generated graphs remains challenging. Recent approaches augment support for controlling soft, differentiable properties but they fail to handle user-specified hard constraints that are non-differentiable. This often results in vague control, unsuitable for applications like drug discovery that demand satisfaction of precise constraints, e.g., the maximum number of bonds. To address this, we formalize the problem of controlled graph generation and introduce PRODIGY (PROjected DIffusion for controlled Graph Generation), an innovative plug-and-play approach enabling the generation of graphs with precise control, from any pre-trained diffusion model. PRODIGY employs a novel operator to project the samples at each diffusion step onto the specified constrained space. For a large class of practical constraints and a variety of graphs, our extensive experiments demonstrate that PRODIGY empowers state-of-the-art continuous and discrete diffusion models to produce graphs meeting specific, hard constraints. Our approach achieves up to 100% constraint satisfaction for non-attributed and molecular graphs, under a variety of constraints, marking a significant step forward in precise, interpretable graph generation. Code is provided on the project webpage: https://prodigy-diffusion.github.io/.
https://proceedings.mlr.press/v235/sharrock24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sharrock24a/sharrock24a.pdf
https://openreview.net/forum?id=8viuf9PdzU
Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models
https://proceedings.mlr.press/v235/sharrock24a.html
Louis Sharrock, Jack Simons, Song Liu, Mark Beaumont
https://proceedings.mlr.press/v235/sharrock24a.html
ICML 2024
We introduce Sequential Neural Posterior Score Estimation (SNPSE), a score-based method for Bayesian inference in simulator-based models. Our method, inspired by the remarkable success of score-based methods in generative modelling, leverages conditional score-based diffusion models to generate samples from the posterior distribution of interest. The model is trained using an objective function which directly estimates the score of the posterior. We embed the model into a sequential training procedure, which guides simulations using the current approximation of the posterior at the observation of interest, thereby reducing the simulation cost. We also introduce several alternative sequential approaches, and discuss their relative merits. We then validate our method, as well as its amortised, non-sequential, variant on several numerical examples, demonstrating comparable or superior performance to existing state-of-the-art methods such as Sequential Neural Posterior Estimation (SNPE).
https://proceedings.mlr.press/v235/shaul24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shaul24a/shaul24a.pdf
https://openreview.net/forum?id=FCtO757Onl
Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models
https://proceedings.mlr.press/v235/shaul24a.html
Neta Shaul, Uriel Singer, Ricky T. Q. Chen, Matthew Le, Ali Thabet, Albert Pumarola, Yaron Lipman
https://proceedings.mlr.press/v235/shaul24a.html
ICML 2024
This paper introduces Bespoke Non-Stationary (BNS) Solvers, a solver distillation approach to improve sample efficiency of Diffusion and Flow models. BNS solvers are based on a family of non-stationary solvers that provably subsumes existing numerical ODE solvers and consequently demonstrate considerable improvement in sample approximation (PSNR) over these baselines. Compared to model distillation, BNS solvers benefit from a tiny parameter space ($<$200 parameters), fast optimization (two orders of magnitude faster), maintain diversity of samples, and in contrast to previous solver distillation approaches nearly close the gap from standard distillation methods such as Progressive Distillation in the low-medium NFE regime. For example, the BNS solver achieves 45 PSNR / 1.76 FID using 16 NFE in class-conditional ImageNet-64. We experimented with BNS solvers for conditional image generation, text-to-image generation, and text-to-audio generation, showing significant improvement in sample approximation (PSNR) in all cases.
https://proceedings.mlr.press/v235/shekhar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shekhar24a/shekhar24a.pdf
https://openreview.net/forum?id=EZLsxOgcDg
Reducing sequential change detection to sequential estimation
https://proceedings.mlr.press/v235/shekhar24a.html
Shubhanshu Shekhar, Aaditya Ramdas
https://proceedings.mlr.press/v235/shekhar24a.html
ICML 2024
We consider the problem of sequential change detection under minimal assumptions on the distribution generating the stream of observations. Formally, our goal is to design a scheme for detecting any changes in a parameter or functional $\theta$ of the data stream distribution that has small detection delay, but guarantees control on the frequency of false alarms in the absence of changes. We describe a simple reduction from sequential change detection to sequential estimation using confidence sequences (CSs): begin a new level-$(1-\alpha)$ CS at each time step, and proclaim a change as soon as the intersection of all active CSs becomes empty. We prove that the average run length of our scheme is at least $1/\alpha$, resulting in a change detection scheme with minimal structural assumptions (thus allowing for possibly dependent observations, and nonparametric distribution classes), but strong guarantees. We also describe an interesting parallel with Lorden’s reduction from change detection to sequential testing and connections to the recent “e-detector” framework.
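The stated reduction is simple enough to sketch end to end. The version below uses a crude time-uniform Hoeffding-style confidence sequence for the mean of [0,1]-valued observations (a union bound with alpha_n = alpha / (n(n+1))); the paper allows far more general confidence sequences, so treat the CS construction, alpha, and the toy stream as illustrative assumptions only.

```python
import numpy as np

def cs_radius(n, alpha):
    """Loose but valid time-uniform Hoeffding radius for [0, 1] observations,
    obtained from a union bound with alpha_n = alpha / (n * (n + 1))."""
    return np.sqrt(np.log(2 * n * (n + 1) / alpha) / (2 * n))

def detect_change(stream, alpha=0.05):
    """Start a new level-(1 - alpha) confidence sequence at every step and
    declare a change once the intersection of all active CSs is empty.
    Returns the (1-indexed) detection time, or None if no change is declared."""
    sums = []                                  # running sums, one per active CS
    for t, x in enumerate(stream, start=1):
        sums.append(0.0)                       # CS started at this step
        sums = [s + x for s in sums]
        lo, hi = 0.0, 1.0
        for i, s in enumerate(sums):           # CS i has n = t - i observations
            n = t - i
            m, r = s / n, cs_radius(n, alpha)
            lo, hi = max(lo, m - r), min(hi, m + r)
        if lo > hi:                            # empty intersection => change
            return t
    return None

rng = np.random.default_rng(0)
stream = np.concatenate([rng.uniform(0.0, 0.4, 300), rng.uniform(0.6, 1.0, 300)])
print(detect_change(stream))   # fires some time after the change point at t = 300
```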
https://proceedings.mlr.press/v235/shen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24a/shen24a.pdf
https://openreview.net/forum?id=QgMqvxvWpX
Exploring the Complexity of Deep Neural Networks through Functional Equivalence
https://proceedings.mlr.press/v235/shen24a.html
Guohao Shen
https://proceedings.mlr.press/v235/shen24a.html
ICML 2024
We investigate the complexity of deep neural networks through the lens of functional equivalence, which posits that different parameterizations can yield the same network function. Leveraging the equivalence property, we present a novel bound on the covering number for deep neural networks, which reveals that the complexity of neural networks can be reduced. Additionally, we demonstrate that functional equivalence benefits optimization, as overparameterized networks tend to be easier to train since increasing network width leads to a diminishing volume of the effective parameter space. These findings can offer valuable insights into the phenomenon of overparameterization and have implications for understanding generalization and optimization in deep learning.
https://proceedings.mlr.press/v235/shen24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24b/shen24b.pdf
https://openreview.net/forum?id=cXBv07GKvk
Variational Learning is Effective for Large Deep Networks
https://proceedings.mlr.press/v235/shen24b.html
Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Bazan Clement Emile Marcel Raoul, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan, Thomas Möllenhoff
https://proceedings.mlr.press/v235/shen24b.html
ICML 2024
We give extensive empirical evidence against the common belief that variational learning is ineffective for large neural networks. We show that an optimizer called Improved Variational Online Newton (IVON) consistently matches or outperforms Adam for training large networks such as GPT-2 and ResNets from scratch. IVON’s computational costs are nearly identical to Adam but its predictive uncertainty is better. We show several new use cases of IVON where we improve finetuning and model merging in Large Language Models, accurately predict generalization error, and faithfully estimate sensitivity to data. We find overwhelming evidence that variational learning is effective. Code is available at https://github.com/team-approx-bayes/ivon.
https://proceedings.mlr.press/v235/shen24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24c/shen24c.pdf
https://openreview.net/forum?id=nP7Q1PnuLK
Thermometer: Towards Universal Calibration for Large Language Models
https://proceedings.mlr.press/v235/shen24c.html
Maohao Shen, Subhro Das, Kristjan Greenewald, Prasanna Sattigeri, Gregory W. Wornell, Soumya Ghosh
https://proceedings.mlr.press/v235/shen24c.html
ICML 2024
We consider the issue of calibration in large language models (LLM). Recent studies have found that common interventions such as instruction tuning often result in poorly calibrated LLMs. Although calibration is well-explored in traditional applications, calibrating LLMs is uniquely challenging. These challenges stem as much from the severe computational requirements of LLMs as from their versatility, which allows them to be applied to diverse tasks. Addressing these challenges, we propose THERMOMETER, a calibration approach tailored to LLMs. THERMOMETER learns an auxiliary model, given data from multiple tasks, for calibrating an LLM. It is computationally efficient, preserves the accuracy of the LLM, and produces better-calibrated responses for new tasks. Extensive empirical evaluations across various benchmarks demonstrate the effectiveness of the proposed method.
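For context, the simplest form of post-hoc calibration that such an auxiliary calibration model generalizes is temperature scaling, fitted on held-out labelled data as sketched below. This is only a labelled-data baseline for intuition; THERMOMETER itself learns an auxiliary model across tasks so that new tasks can be calibrated without their labels, and the toy data here is an assumption.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.05):
    """Post-hoc temperature scaling: find T > 0 minimizing the NLL of
    softmax(logits / T) on held-out labelled data."""
    log_t = torch.zeros(1, requires_grad=True)   # optimize log T so that T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return float(log_t.exp())

# Toy miscalibrated classifier: huge margins, but labels agree only 70% of the time.
torch.manual_seed(0)
true = torch.randint(0, 10, (500,))
noisy = torch.where(torch.rand(500) < 0.7, true, torch.randint(0, 10, (500,)))
logits = torch.randn(500, 10)
logits[torch.arange(500), true] += 5.0
print(fit_temperature(logits, noisy))   # noticeably > 1: raw logits were over-confident
```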
https://proceedings.mlr.press/v235/shen24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24d/shen24d.pdf
https://openreview.net/forum?id=WsawczEqO6
Position: Do pretrained Transformers Learn In-Context by Gradient Descent?
https://proceedings.mlr.press/v235/shen24d.html
Lingfeng Shen, Aayush Mishra, Daniel Khashabi
https://proceedings.mlr.press/v235/shen24d.html
ICML 2024
The emergence of In-Context Learning (ICL) in LLMs remains a remarkable phenomenon that is partially understood. To explain ICL, recent studies have created theoretical connections to Gradient Descent (GD). We ask: do such connections hold up in actual pre-trained language models? We highlight the limiting assumptions in prior works that make their setup considerably different from the practical setup in which language models are trained. For example, their experimental verification uses an ICL objective (training models explicitly for ICL), which differs from the emergent ICL in the wild. Furthermore, the theoretical hand-constructed weights used in these studies have properties that don’t match those of real LLMs. We also look for evidence in real models. We observe that ICL and GD have different sensitivity to the order in which they observe demonstrations. Finally, we probe and compare the ICL vs. GD hypothesis in a natural setting. We conduct comprehensive empirical analyses on language models pre-trained on natural data (LLaMa-7B). Our comparisons of three performance metrics highlight the inconsistent behavior of ICL and GD as a function of various factors such as datasets, models, and the number of demonstrations. We observe that ICL and GD modify the output distribution of language models differently. These results indicate that the equivalence between ICL and GD remains an open hypothesis and calls for further studies.
https://proceedings.mlr.press/v235/shen24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24e/shen24e.pdf
https://openreview.net/forum?id=7iH9RgMrzX
Adaptive Stabilization Based on Machine Learning for Column Generation
https://proceedings.mlr.press/v235/shen24e.html
Yunzhuang Shen, Yuan Sun, Xiaodong Li, Zhiguang Cao, Andrew Eberhard, Guangquan Zhang
https://proceedings.mlr.press/v235/shen24e.html
ICML 2024
Column generation (CG) is a well-established method for solving large-scale linear programs. It involves iteratively optimizing a subproblem containing a subset of columns and using its dual solution to generate new columns with negative reduced costs. This process continues until the dual values converge to the optimal dual solution to the original problem. A natural phenomenon in CG is the heavy oscillation of the dual values during iterations, which can lead to a substantial slowdown in the convergence rate. Stabilization techniques are devised to accelerate the convergence of dual values by using information beyond the state of the current subproblem. However, there remains a significant gap in obtaining more accurate dual values at an earlier stage. To further narrow this gap, this paper introduces a novel approach consisting of 1) a machine learning approach for accurate prediction of optimal dual solutions and 2) an adaptive stabilization technique that effectively capitalizes on accurate predictions. On the graph coloring problem, we show that our method achieves a significantly improved convergence rate compared to traditional methods.
https://proceedings.mlr.press/v235/shen24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24f/shen24f.pdf
https://openreview.net/forum?id=LlqphyBdeT
Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains
https://proceedings.mlr.press/v235/shen24f.html
Junhong Shen, Neil Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolo Fusi
https://proceedings.mlr.press/v235/shen24f.html
ICML 2024
Large Language Models (LLMs) have demonstrated remarkable proficiency in understanding and generating natural language. However, their capabilities wane in highly specialized domains underrepresented in the pretraining corpus, such as physical and biomedical sciences. This work explores how to repurpose general LLMs into effective task solvers for specialized domains. We introduce a novel, model-agnostic framework for learning custom input tags, which are parameterized as continuous vectors appended to the LLM’s embedding layer, to condition the LLM. We design two types of input tags: domain tags are used to delimit specialized representations (e.g., chemical formulas) and provide domain-relevant context; function tags are used to represent specific functions (e.g., predicting molecular properties) and compress function-solving instructions. We develop a three-stage protocol to learn these tags using auxiliary data and domain knowledge. By explicitly disentangling task domains from task functions, our method enables zero-shot generalization to unseen problems through diverse combinations of the input tags. It also boosts the LLM’s performance in various specialized domains, such as predicting protein or chemical properties and modeling drug-target interactions, outperforming expert models tailored to these tasks.
https://proceedings.mlr.press/v235/shen24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shen24g/shen24g.pdf
https://openreview.net/forum?id=Xb3IXEBYuw
Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF
https://proceedings.mlr.press/v235/shen24g.html
Han Shen, Zhuoran Yang, Tianyi Chen
https://proceedings.mlr.press/v235/shen24g.html
ICML 2024
Bilevel optimization has been recently applied to many machine learning tasks. However, its applications have been restricted to the supervised learning setting, where static objective functions with benign structures are considered. But bilevel problems such as incentive design, inverse reinforcement learning (RL), and RL from human feedback (RLHF) often involve dynamic objective functions that go beyond the simple static objective structures, which poses significant challenges for existing bilevel solutions. To tackle this new class of bilevel problems, we introduce the first principled algorithmic framework for solving bilevel RL problems through the lens of penalty formulation. We provide theoretical studies of the problem landscape and its penalty-based (policy) gradient algorithms. We demonstrate the effectiveness of our algorithms via simulations in the Stackelberg game and RLHF.
https://proceedings.mlr.press/v235/shenouda24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shenouda24a/shenouda24a.pdf
https://openreview.net/forum?id=srejp9uOx7
ReLUs Are Sufficient for Learning Implicit Neural Representations
https://proceedings.mlr.press/v235/shenouda24a.html
Joseph Shenouda, Yamin Zhou, Robert D Nowak
https://proceedings.mlr.press/v235/shenouda24a.html
ICML 2024
Motivated by the growing theoretical understanding of neural networks that employ the Rectified Linear Unit (ReLU) as their activation function, we revisit the use of ReLU activation functions for learning implicit neural representations (INRs). Inspired by second order B-spline wavelets, we incorporate a set of simple constraints to the ReLU neurons in each layer of a deep neural network (DNN) to remedy the spectral bias. This in turn enables its use for various INR tasks. Empirically, we demonstrate that, contrary to popular belief, one can learn state-of-the-art INRs based on a DNN composed of only ReLU neurons. Next, by leveraging recent theoretical works which characterize the kinds of functions ReLU neural networks learn, we provide a way to quantify the regularity of the learned function. This offers a principled approach to selecting the hyperparameters in INR architectures. We substantiate our claims through experiments in signal representation, super resolution, and computed tomography, demonstrating the versatility and effectiveness of our method. The code for all experiments can be found at https://github.com/joeshenouda/relu-inrs.
https://proceedings.mlr.press/v235/sherman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sherman24a/sherman24a.pdf
https://openreview.net/forum?id=VJwsDwuiuH
Rate-Optimal Policy Optimization for Linear Markov Decision Processes
https://proceedings.mlr.press/v235/sherman24a.html
Uri Sherman, Alon Cohen, Tomer Koren, Yishay Mansour
https://proceedings.mlr.press/v235/sherman24a.html
ICML 2024
We study regret minimization in online episodic linear Markov Decision Processes, and propose a policy optimization algorithm that is computationally efficient, and obtains rate optimal $\widetilde O (\sqrt K)$ regret where $K$ denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of $K$) of convergence in the stochastic setting with bandit feedback using a policy optimization based approach, and the first to establish the optimal rate in the adversarial setup with full information feedback, for which no algorithm with an optimal rate guarantee was previously known.
https://proceedings.mlr.press/v235/shi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24a/shi24a.pdf
https://openreview.net/forum?id=7OPHCeXcSS
Double Momentum Method for Lower-Level Constrained Bilevel Optimization
https://proceedings.mlr.press/v235/shi24a.html
Wanli Shi, Yi Chang, Bin Gu
https://proceedings.mlr.press/v235/shi24a.html
ICML 2024
Bilevel optimization (BO) has gained prominence in many machine learning applications due to its ability to capture the nested structure inherent in these problems. Recently, many hypergradient methods have been proposed as effective solutions for solving large-scale problems. However, current hypergradient methods for lower-level constrained bilevel optimization (LCBO) problems rely on very restrictive assumptions, namely that the optimality conditions satisfy differentiability and invertibility conditions, and they lack a solid analysis of the convergence rate. What is worse, existing methods require double-loop updates, which are sometimes less efficient. To solve this problem, in this paper, we propose a new hypergradient for LCBO leveraging the theory of the nonsmooth implicit function theorem instead of relying on these restrictive assumptions. In addition, we propose a single-loop single-timescale algorithm based on the double-momentum method and an adaptive step size method and prove it can return a $(\delta, \epsilon)$-stationary point with $\tilde{\mathcal{O}}(d_2^2\epsilon^{-4})$ iterations. Experiments on two applications demonstrate the effectiveness of our proposed method.
https://proceedings.mlr.press/v235/shi24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24b/shi24b.pdf
https://openreview.net/forum?id=X8uQ1TslUc
OT-CLIP: Understanding and Generalizing CLIP via Optimal Transport
https://proceedings.mlr.press/v235/shi24b.html
Liangliang Shi, Jack Fan, Junchi Yan
https://proceedings.mlr.press/v235/shi24b.html
ICML 2024
We propose to understand the Contrastive Language-Image Pretraining (CLIP) model from the Optimal Transport (OT) perspective. Specifically, we show that the training of CLIP is an embodiment of inverse OT and that the two InfoNCE losses adopted in CLIP correspond to a special case of bilevel optimization of modified entropic OT. We then generalize the original CLIP loss to an OT-based loss family using variants of Regularized OT (e.g. Fused Gromov OT, unbalanced OT, etc.), and demonstrate their superior performance on public datasets for both image and text downstream tasks. We also rethink the inference stage of CLIP by using the tool of OT, and propose to adopt the fused Gromov OT for (zero-shot) classification, in which the prediction is based on the graph representation whereby images and texts are nodes for graph matching. By our new technique, we show how to generalize zero-shot classification to other more flexible zero-shot tasks with competitive performance: long-tailed classification and selective classification. The former assumes the known prior distribution of labels, while in the latter case, only a subset of samples are asked to predict, yet with high prediction confidence.
https://proceedings.mlr.press/v235/shi24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24c/shi24c.pdf
https://openreview.net/forum?id=HPXRzM9BYZ
LCA-on-the-Line: Benchmarking Out of Distribution Generalization with Class Taxonomies
https://proceedings.mlr.press/v235/shi24c.html
Jia Shi, Gautam Rajendrakumar Gare, Jinjin Tian, Siqi Chai, Zhiqiu Lin, Arun Balajee Vasudevan, Di Feng, Francesco Ferroni, Shu Kong
https://proceedings.mlr.press/v235/shi24c.html
ICML 2024
We tackle the challenge of predicting models’ Out-of-Distribution (OOD) performance using in-distribution (ID) measurements without requiring OOD data. Existing evaluations with “Effective robustness”, which use ID accuracy as an indicator of OOD accuracy, encounter limitations when models are trained with diverse supervision and distributions, such as class labels (Vision Models, VMs, on ImageNet) and textual descriptions (Visual-Language Models, VLMs, on LAION). VLMs often generalize better to OOD data than VMs despite having similar or lower ID performance. To improve the prediction of models’ OOD performance from ID measurements, we introduce the Lowest Common Ancestor (LCA)-on-the-Line framework. This approach revisits the established concept of LCA distance, which measures the hierarchical distance between labels and predictions within a predefined class hierarchy, such as WordNet. We assess 75 models using ImageNet as the ID dataset and five significantly shifted OOD variants, uncovering a strong linear correlation between ID LCA distance and OOD top-1 accuracy. Our method provides a compelling alternative for understanding why VLMs tend to generalize better. Additionally, we propose a technique to construct a taxonomic hierarchy on any dataset using $K$-means clustering, demonstrating that LCA distance is robust to the constructed taxonomic hierarchy. Moreover, we demonstrate that aligning model predictions with class taxonomies, through soft labels or prompt engineering, can enhance model generalization. Open-source code is available on our project page.
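The core quantity, a hierarchical (LCA) distance between a predicted class and the true label, is easy to compute on any class hierarchy; below is a toy version over a hand-written child-to-parent map. The paper uses WordNet (or K-means-derived hierarchies) and its own distance convention, so the tiny hierarchy and the "steps up to the LCA" proxy here are illustrative assumptions.

```python
def lca_height(parent, a, b):
    """Number of steps from class `a` up to its lowest common ancestor with `b`,
    in a class hierarchy given as a child -> parent mapping (root maps to None).
    A crude proxy for the LCA distance used with WordNet in the paper."""
    def ancestors(n):
        path = []
        while n is not None:
            path.append(n)
            n = parent[n]
        return path
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)   # lowest common ancestor
    return pa.index(common)

parent = {"entity": None, "animal": "entity", "vehicle": "entity",
          "dog": "animal", "cat": "animal", "car": "vehicle"}
print(lca_height(parent, "dog", "cat"), lca_height(parent, "dog", "car"))   # 1 2
```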
https://proceedings.mlr.press/v235/shi24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24d/shi24d.pdf
https://openreview.net/forum?id=qDw4FxMubj
Sample-Efficient Robust Multi-Agent Reinforcement Learning in the Face of Environmental Uncertainty
https://proceedings.mlr.press/v235/shi24d.html
Laixi Shi, Eric Mazumdar, Yuejie Chi, Adam Wierman
https://proceedings.mlr.press/v235/shi24d.html
ICML 2024
To overcome the sim-to-real gap in reinforcement learning (RL), learned policies must maintain robustness against environmental uncertainties. While robust RL has been widely studied in single-agent regimes, in multi-agent environments, the problem remains understudied—despite the fact that the problems posed by environmental uncertainties are often exacerbated by strategic interactions. This work focuses on learning in distributionally robust Markov games (RMGs), a robust variant of standard Markov games, wherein each agent aims to learn a policy that maximizes its own worst-case performance when the deployed environment deviates within its own prescribed uncertainty set. This results in a set of robust equilibrium strategies for all agents that align with classic notions of game-theoretic equilibria. Assuming a non-adaptive sampling mechanism from a generative model, we propose a sample-efficient model-based algorithm (DRNVI) with finite-sample complexity guarantees for learning robust variants of various notions of game-theoretic equilibria. We also establish an information-theoretic lower bound for solving RMGs, which confirms the near-optimal sample complexity of DRNVI with respect to problem-dependent factors such as the size of the state space, the target accuracy, and the horizon length.
https://proceedings.mlr.press/v235/shi24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24e/shi24e.pdf
https://openreview.net/forum?id=CSIfCpXhCF
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
https://proceedings.mlr.press/v235/shi24e.html
Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi Wang
https://proceedings.mlr.press/v235/shi24e.html
ICML 2024
Recent vision-language models have achieved tremendous advances. However, their computational costs are also escalating dramatically, making model acceleration exceedingly critical. To pursue more efficient vision-language Transformers, this paper introduces Cross-Guided Ensemble of Tokens (CrossGET), a general acceleration framework for vision-language Transformers. This framework adaptively combines tokens in real-time during inference, significantly reducing computational costs while maintaining high performance. CrossGET features two primary innovations: 1) Cross-Guided Matching and Ensemble. CrossGET leverages cross-modal guided token matching and ensemble to effectively utilize cross-modal information, achieving wider applicability across both modality-independent models, e.g., CLIP, and modality-dependent ones, e.g., BLIP2. 2) Complete-Graph Soft Matching. CrossGET introduces an algorithm for the token-matching mechanism, ensuring reliable matching results while facilitating parallelizability and high efficiency. Extensive experiments have been conducted on various vision-language tasks, such as image-text retrieval, visual reasoning, image captioning, and visual question answering. The performance on both classic multimodal architectures and emerging multimodal LLMs demonstrates the framework’s effectiveness and versatility. The code is available at https://github.com/sdc17/CrossGET.
https://proceedings.mlr.press/v235/shi24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24f/shi24f.pdf
https://openreview.net/forum?id=WOa96EG26M
Why Larger Language Models Do In-context Learning Differently?
https://proceedings.mlr.press/v235/shi24f.html
Zhenmei Shi, Junyi Wei, Zhuoyan Xu, Yingyu Liang
https://proceedings.mlr.press/v235/shi24f.html
ICML 2024
Large language models (LLMs) have emerged as a powerful tool for AI, with the key ability of in-context learning (ICL), where they can perform well on unseen tasks based on a brief series of task examples without necessitating any adjustments to the model parameters. One recent and intriguing observation is that models of different scales may have different ICL behaviors: larger models tend to be more sensitive to noise in the test context. This work studies this observation theoretically, aiming to improve the understanding of LLMs and ICL. We analyze two stylized settings: (1) linear regression with one-layer single-head linear transformers and (2) parity classification with two-layer multiple attention heads transformers (non-linear data and non-linear model). In both settings, we give closed-form optimal solutions and find that smaller models emphasize important hidden features while larger ones cover more hidden features; thus, smaller models are more robust to noise while larger ones are more easily distracted, leading to different ICL behaviors. This sheds light on where transformers pay attention to and how that affects ICL. Preliminary experimental results on large base and chat models provide positive support for our analysis.
https://proceedings.mlr.press/v235/shi24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shi24g/shi24g.pdf
https://openreview.net/forum?id=ccSSKTz9LX
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts
https://proceedings.mlr.press/v235/shi24g.html
Jiang-Xin Shi, Tong Wei, Zhi Zhou, Jie-Jing Shao, Xin-Yan Han, Yu-Feng Li
https://proceedings.mlr.press/v235/shi24g.html
ICML 2024
The fine-tuning paradigm in addressing long-tail learning tasks has sparked significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning has not been explicitly quantified. In this paper, we disclose that heavy fine-tuning may even lead to non-negligible performance deterioration on tail classes, and lightweight fine-tuning is more effective. The reason is attributed to inconsistent class conditions caused by heavy fine-tuning. With the observation above, we develop a low-complexity and accurate long-tail learning algorithm, LIFT, with the goal of facilitating fast prediction and compact models by adaptive lightweight fine-tuning. Experiments clearly verify that both the training time and the learned parameters are significantly reduced, with more accurate predictive performance, compared with state-of-the-art approaches. The implementation code is available at https://github.com/shijxcs/LIFT.
https://proceedings.mlr.press/v235/shimizu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shimizu24a/shimizu24a.pdf
https://openreview.net/forum?id=0wso32h0jc
Neural-Kernel Conditional Mean Embeddings
https://proceedings.mlr.press/v235/shimizu24a.html
Eiki Shimizu, Kenji Fukumizu, Dino Sejdinovic
https://proceedings.mlr.press/v235/shimizu24a.html
ICML 2024
Kernel conditional mean embeddings (CMEs) offer a powerful framework for representing conditional distributions, but they often face scalability and expressiveness challenges. In this work, we propose a new method that effectively combines the strengths of deep learning with CMEs in order to address these challenges. Specifically, our approach leverages the end-to-end neural network (NN) optimization framework using a kernel-based objective. This design circumvents the computationally expensive Gram matrix inversion required by current CME methods. To further enhance performance, we provide efficient strategies to optimize the remaining kernel hyperparameters. In conditional density estimation tasks, our NN-CME hybrid achieves competitive performance and often surpasses existing deep learning-based methods. Lastly, we showcase its remarkable versatility by seamlessly integrating it into reinforcement learning (RL) contexts. Building on Q-learning, our approach naturally leads to a new variant of distributional RL methods, which demonstrates consistent effectiveness across different environments.
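For orientation, the classic kernel CME estimator whose Gram-matrix inversion the proposed NN-CME hybrid is designed to avoid looks as follows; the RBF kernel, bandwidth, regularization strength, and toy regression data are arbitrary assumptions for the example.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between row sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cme_conditional_mean(X, Y, x_query, f, lam=1e-2):
    """Classic kernel CME estimate of E[f(Y) | X = x_query].

    Requires solving an (n x n) regularized Gram system, which is the
    scalability bottleneck the paper's neural objective circumvents.
    """
    n = len(X)
    K = rbf(X, X)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), rbf(X, x_query))
    return f(Y).T @ alpha            # weights on training outputs

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 1))
Y = np.sin(X) + 0.1 * rng.normal(size=(400, 1))
xq = np.array([[1.0], [2.0]])
est = cme_conditional_mean(X, Y, xq, f=lambda y: y)
print(est.ravel(), np.sin(xq).ravel())   # estimates vs. true conditional means
```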
https://proceedings.mlr.press/v235/shiragur24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shiragur24a/shiragur24a.pdf
https://openreview.net/forum?id=HpT19AKddu
Causal Discovery with Fewer Conditional Independence Tests
https://proceedings.mlr.press/v235/shiragur24a.html
Kirankumar Shiragur, Jiaqi Zhang, Caroline Uhler
https://proceedings.mlr.press/v235/shiragur24a.html
ICML 2024
Many questions in science center around the fundamental problem of understanding causal relationships. However, most constraint-based causal discovery algorithms, including the well-celebrated PC algorithm, often incur an exponential number of conditional independence (CI) tests, posing limitations in various applications. Addressing this, our work focuses on characterizing what can be learned about the underlying causal graph with a reduced number of CI tests. We show that it is possible to learn a coarser representation of the hidden causal graph with a polynomial number of tests. This coarser representation, named Causal Consistent Partition Graph (CCPG), comprises a partition of the vertices and a directed graph defined over its components. CCPG satisfies consistency of orientations and additional constraints which favor finer partitions. Furthermore, it reduces to the underlying causal graph when the causal graph is identifiable. As a consequence, our results offer the first efficient algorithm for recovering the true causal graph with a polynomial number of tests, in special cases where the causal graph is fully identifiable through observational data and potentially additional interventions.
https://proceedings.mlr.press/v235/shiraishi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shiraishi24a/shiraishi24a.pdf
https://openreview.net/forum?id=uLonuOfrwp
Statistical Test for Attention Maps in Vision Transformers
https://proceedings.mlr.press/v235/shiraishi24a.html
Tomohiro Shiraishi, Daiki Miwa, Teruyuki Katsuoka, Vo Nguyen Le Duy, Kouichi Taji, Ichiro Takeuchi
https://proceedings.mlr.press/v235/shiraishi24a.html
ICML 2024
The Vision Transformer (ViT) demonstrates exceptional performance in various computer vision tasks. Attention is crucial for ViT to capture complex wide-ranging relationships among image patches, allowing the model to weigh the importance of image patches and aiding our understanding of the decision-making process. However, when utilizing the attention of ViT as evidence in high-stakes decision-making tasks such as medical diagnostics, a challenge arises due to the potential of attention mechanisms erroneously focusing on irrelevant regions. In this study, we propose a statistical test for ViT’s attentions, enabling us to use the attentions as reliable quantitative evidence indicators for ViT’s decision-making with a rigorously controlled error rate. Using the framework called selective inference, we quantify the statistical significance of attentions in the form of p-values, which enables the theoretically grounded quantification of the false positive detection probability of attentions. We demonstrate the validity and the effectiveness of the proposed method through numerical experiments and applications to brain image diagnoses.
https://proceedings.mlr.press/v235/shirakawa24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shirakawa24a/shirakawa24a.pdf
https://openreview.net/forum?id=CHz7WshPcp
Longitudinal Targeted Minimum Loss-based Estimation with Temporal-Difference Heterogeneous Transformer
https://proceedings.mlr.press/v235/shirakawa24a.html
Toru Shirakawa, Yi Li, Yulun Wu, Sky Qiu, Yuxuan Li, Mingduo Zhao, Hiroyasu Iso, Mark J. Van Der Laan
https://proceedings.mlr.press/v235/shirakawa24a.html
ICML 2024
We propose Deep Longitudinal Targeted Minimum Loss-based Estimation (Deep LTMLE), a novel approach to estimate the counterfactual mean of an outcome under dynamic treatment policies in longitudinal problem settings. Our approach utilizes a transformer architecture with heterogeneous type embedding trained using temporal-difference learning. After obtaining an initial estimate using the transformer, following the targeted minimum loss-based likelihood estimation (TMLE) framework, we statistically correct for the bias commonly associated with machine learning algorithms. Furthermore, our method facilitates statistical inference by enabling the provision of 95% confidence intervals grounded in asymptotic statistical theory. Simulation results demonstrate our method’s superior performance over existing approaches, particularly in complex, long time-horizon scenarios. It remains effective in small-sample, short-duration contexts, matching the performance of asymptotically efficient estimators. To demonstrate our method in practice, we applied our method to estimate counterfactual mean outcomes for standard versus intensive blood pressure management strategies in a real-world cardiovascular epidemiology cohort study.
https://proceedings.mlr.press/v235/shirali24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shirali24a/shirali24a.pdf
https://openreview.net/forum?id=WUicA0hOF9
Allocation Requires Prediction Only if Inequality Is Low
https://proceedings.mlr.press/v235/shirali24a.html
Ali Shirali, Rediet Abebe, Moritz Hardt
https://proceedings.mlr.press/v235/shirali24a.html
ICML 2024
Algorithmic predictions are emerging as a promising solution concept for efficiently allocating societal resources. Fueling their use is an underlying assumption that such systems are necessary to identify individuals for interventions. We propose a principled framework for assessing this assumption: Using a simple mathematical model, we evaluate the efficacy of prediction-based allocations in settings where individuals belong to larger units such as hospitals, neighborhoods, or schools. We find that prediction-based allocations outperform baseline methods using aggregate unit-level statistics only when between-unit inequality is low and the intervention budget is high. Our results hold for a wide range of settings for the price of prediction, treatment effect heterogeneity, and unit-level statistics’ learnability. Combined, we highlight the potential limits to improving the efficacy of interventions through prediction.
https://proceedings.mlr.press/v235/shoushtari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shoushtari24a/shoushtari24a.pdf
https://openreview.net/forum?id=AYWBRwsZ8z
Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis
https://proceedings.mlr.press/v235/shoushtari24a.html
Shirin Shoushtari, Jiaming Liu, Edward P. Chandler, M. Salman Asif, Ulugbek S. Kamilov
https://proceedings.mlr.press/v235/shoushtari24a.html
ICML 2024
Plug-and-Play (PnP) priors are a widely used family of methods for solving imaging inverse problems by integrating physical measurement models with image priors specified using image denoisers. PnP methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful deep denoisers. Despite extensive work on PnP, the topic of distribution mismatch between the training and testing data has often been overlooked in the PnP literature. This paper presents a set of new theoretical and numerical results on the topic of prior distribution mismatch and domain adaptation for the alternating direction method of multipliers (ADMM) variant of PnP. Our theoretical result provides an explicit error bound for PnP-ADMM due to the mismatch between the desired denoiser and the one used for inference. Our analysis contributes to the work in the area by considering the mismatch under nonconvex data-fidelity terms and expansive denoisers. Our first set of numerical results quantifies the impact of the prior distribution mismatch on the performance of PnP-ADMM on the problem of image super-resolution. Our second set of numerical results considers a simple and effective domain adaptation strategy that closes the performance gap due to the use of mismatched denoisers. Our results suggest the relative robustness of PnP-ADMM to prior distribution mismatch, while also showing that the performance gap can be significantly reduced with only a few training samples from the desired distribution.
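For readers unfamiliar with the PnP-ADMM iteration discussed in the abstract above, the following minimal Python sketch shows a generic PnP-ADMM loop in which a median filter stands in for a learned deep denoiser. The operators, step sizes, and function names are illustrative assumptions and do not reproduce the authors' implementation or their domain-adaptation strategy.

```python
# A minimal, generic PnP-ADMM sketch (not the authors' implementation).
# A median filter stands in for the learned deep denoiser in the abstract.
import numpy as np
from scipy.ndimage import median_filter

def pnp_admm(y, A, At, denoise, rho=1.0, n_iter=50):
    """Approximately solve min_x 0.5||Ax - y||^2 + prior(x) with a plug-in denoiser.

    A, At: forward operator and its adjoint (callables).
    denoise: callable used in place of the proximal operator of the prior.
    """
    x = At(y)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: a few gradient steps on the data-fidelity subproblem
        for _ in range(5):
            grad = At(A(x) - y) + rho * (x - z + u)
            x = x - 0.1 * grad
        # z-update: plug in the denoiser in place of a proximal map
        z = denoise(x + u)
        # dual update
        u = u + x - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    A = lambda v: v                      # identity forward model (pure denoising)
    At = lambda v: v
    y = img + 0.1 * rng.standard_normal(img.shape)
    den = lambda v: median_filter(v, size=3)   # stand-in for a deep denoiser
    x_hat = pnp_admm(y, A, At, den, rho=0.5, n_iter=20)
    print("relative residual:", np.linalg.norm(x_hat - img) / np.linalg.norm(img))
```

Swapping the median filter for denoisers trained on different distributions is one simple way to observe the kind of prior mismatch the paper analyzes.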
https://proceedings.mlr.press/v235/shu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shu24a/shu24a.pdf
https://openreview.net/forum?id=IxZ4xaHSYG
Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing
https://proceedings.mlr.press/v235/shu24a.html
Youwei Shu, Xi Xiao, Derui Wang, Yuxin Cao, Siji Chen, Jason Xue, Linyi Li, Bo Li
https://proceedings.mlr.press/v235/shu24a.html
ICML 2024
Randomized Smoothing (RS) is currently a scalable certified defense method providing robustness certification against adversarial examples. Although significant progress has been achieved in providing defenses against $\ell_p$ adversaries, the interaction between the smoothing distribution and the robustness certification still remains vague. In this work, we comprehensively study the effect of two families of distributions, named Exponential Standard Gaussian (ESG) and Exponential General Gaussian (EGG) distributions, on Randomized Smoothing and Double Sampling Randomized Smoothing (DSRS). We derive an analytic formula for ESG’s certified radius, which converges to the original formula of RS as the dimension $d$ increases. Additionally, we prove that EGG can provide tighter constant factors than DSRS in providing $\Omega(\sqrt{d})$ lower bounds of $\ell_2$ certified radius, and thus further addresses the curse of dimensionality in RS. Our experiments on real-world datasets confirm our theoretical analysis of the ESG distributions, that they provide almost the same certification under different exponents $\eta$ for both RS and DSRS. In addition, EGG brings a significant improvement to the DSRS certification, but the mechanism can be different when the classifier properties are different. Compared to the primitive DSRS, the increase in certified accuracy provided by EGG is prominent, up to 6.4% on ImageNet.
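As background for the abstract above, here is a rough sketch of standard Gaussian randomized-smoothing certification in Python. It uses a crude normal-approximation confidence bound rather than a rigorous Clopper-Pearson bound, and it does not implement the ESG or EGG distributions studied in the paper; all names and constants are illustrative.

```python
# Minimal sketch of standard Gaussian randomized-smoothing certification,
# not the ESG/EGG variants analyzed in the paper.
import numpy as np
from scipy.stats import norm

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
    """Return (predicted class, certified l2 radius) for input x."""
    d = x.shape[0]
    noise = sigma * np.random.randn(n, d)
    preds = np.array([base_classifier(x + e) for e in noise])
    counts = np.bincount(preds)
    c_hat = counts.argmax()
    # Crude lower confidence bound on P(f(x + noise) = c_hat); a rigorous
    # certificate would use a Clopper-Pearson bound instead.
    p_hat = counts[c_hat] / n
    p_lower = p_hat - norm.ppf(1 - alpha) * np.sqrt(p_hat * (1 - p_hat) / n)
    p_lower = min(p_lower, 1 - 1e-6)   # keep the radius finite when p_hat = 1
    if p_lower <= 0.5:
        return None, 0.0
    return c_hat, sigma * norm.ppf(p_lower)

if __name__ == "__main__":
    np.random.seed(0)
    # Toy linear "classifier" over 2 classes in 10 dimensions.
    w = np.ones(10)
    clf = lambda z: int(z @ w > 0)
    x = np.full(10, 0.5)
    print(certify(clf, x, sigma=0.25, n=2000))
```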
https://proceedings.mlr.press/v235/shubham24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shubham24a/shubham24a.pdf
https://openreview.net/forum?id=8ySQaphUYH
WISER: Weak Supervision and Supervised Representation Learning to Improve Drug Response Prediction in Cancer
https://proceedings.mlr.press/v235/shubham24a.html
Kumar Shubham, Aishwarya Jayagopal, Syed Mohammed Danish, Prathosh Ap, Vaibhav Rajan
https://proceedings.mlr.press/v235/shubham24a.html
ICML 2024
Cancer, a leading cause of death globally, occurs due to genomic changes and manifests heterogeneously across patients. To advance research on personalized treatment strategies, the effectiveness of various drugs on cells derived from cancers (‘cell lines’) is experimentally determined in laboratory settings. Nevertheless, variations in the distribution of genomic data and drug responses between cell lines and humans arise due to biological and environmental differences. Moreover, while genomic profiles of many cancer patients are readily available, the scarcity of corresponding drug response data limits the ability to train machine learning models that can predict drug response in patients effectively. Recent cancer drug response prediction methods have largely followed the paradigm of unsupervised domain-invariant representation learning followed by a downstream drug response classification step. Introducing supervision in both stages is challenging due to heterogeneous patient response to drugs and limited drug response data. This paper addresses these challenges through a novel representation learning method in the first phase and weak supervision in the second. Experimental results on real patient data demonstrate the efficacy of our method WISER (Weak supervISion and supErvised Representation learning) over state-of-the-art alternatives on predicting personalized drug response. Our implementation is available at https://github.com/kyrs/WISER
https://proceedings.mlr.press/v235/shukla24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shukla24a/shukla24a.pdf
https://openreview.net/forum?id=zdNTiTs5gU
TIC-TAC: A Framework For Improved Covariance Estimation In Deep Heteroscedastic Regression
https://proceedings.mlr.press/v235/shukla24a.html
Megh Shukla, Mathieu Salzmann, Alexandre Alahi
https://proceedings.mlr.press/v235/shukla24a.html
ICML 2024
Deep heteroscedastic regression involves jointly optimizing the mean and covariance of the predicted distribution using the negative log-likelihood. However, recent works show that this may result in sub-optimal convergence due to the challenges associated with covariance estimation. While the literature addresses this by proposing alternate formulations to mitigate the impact of the predicted covariance, we focus on improving the predicted covariance itself. We study two questions: (1) Does the predicted covariance truly capture the randomness of the predicted mean? (2) In the absence of supervision, how can we quantify the accuracy of covariance estimation? We address (1) with a Taylor Induced Covariance (TIC), which captures the randomness of the predicted mean by incorporating its gradient and curvature through the second order Taylor polynomial. Furthermore, we tackle (2) by introducing a Task Agnostic Correlations (TAC) metric, which combines the notion of correlations and absolute error to evaluate the covariance. We evaluate TIC-TAC across multiple experiments spanning synthetic and real-world datasets. Our results show that not only does TIC accurately learn the covariance, it additionally facilitates an improved convergence of the negative log-likelihood. Our code is available at https://github.com/vita-epfl/TIC-TAC
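To make the setting of the abstract above concrete, the following PyTorch sketch shows the standard deep heteroscedastic regression baseline: a network predicts a mean and a diagonal covariance and is trained with the Gaussian negative log-likelihood. This is only the starting point the paper improves upon; it does not implement the Taylor Induced Covariance (TIC) or the TAC metric, and the architecture and data are illustrative assumptions.

```python
# Heteroscedastic-regression baseline: predict mean and diagonal covariance,
# train with the Gaussian negative log-likelihood. Not TIC or TAC themselves.
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    def __init__(self, d_in, d_out, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, d_out)
        self.logvar_head = nn.Linear(hidden, d_out)  # diagonal covariance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def nll(mean, logvar, y):
    # Gaussian NLL with diagonal covariance, averaged over the batch.
    return 0.5 * ((y - mean) ** 2 * torch.exp(-logvar) + logvar).sum(-1).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 4)
    # Toy data whose noise level depends on the input (heteroscedastic).
    y = x.sum(-1, keepdim=True) + 0.5 * torch.randn(256, 1) * x[:, :1].abs()
    model = HeteroscedasticNet(4, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(200):
        mean, logvar = model(x)
        loss = nll(mean, logvar, y)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final NLL:", loss.item())
```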
https://proceedings.mlr.press/v235/shulgin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shulgin24a/shulgin24a.pdf
https://openreview.net/forum?id=XUc29ydmLX
Towards a Better Theoretical Understanding of Independent Subnetwork Training
https://proceedings.mlr.press/v235/shulgin24a.html
Egor Shulgin, Peter Richtárik
https://proceedings.mlr.press/v235/shulgin24a.html
ICML 2024
Modern advancements in large-scale machine learning would be impossible without the paradigm of data-parallel distributed computing. Since distributed computing with large-scale models imparts excessive pressure on communication channels, significant recent research has been directed toward co-designing communication compression strategies and training algorithms with the goal of reducing communication costs. While pure data parallelism allows better data scaling, it suffers from poor model scaling properties. Indeed, compute nodes are severely limited by memory constraints, preventing further increases in model size. For this reason, the latest achievements in training giant neural network models also rely on some form of model parallelism. In this work, we take a closer theoretical look at Independent Subnetwork Training (IST), which is a recently proposed and highly effective technique for solving the aforementioned problems. We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication, and provide a precise analysis of its optimization performance on a quadratic model.
https://proceedings.mlr.press/v235/shumaylov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shumaylov24a/shumaylov24a.pdf
https://openreview.net/forum?id=E8FpcUyPuS
Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation
https://proceedings.mlr.press/v235/shumaylov24a.html
Zakhar Shumaylov, Jeremy Budd, Subhadip Mukherjee, Carola-Bibiane Schönlieb
https://proceedings.mlr.press/v235/shumaylov24a.html
ICML 2024
Variational regularisation is the primary method for solving inverse problems, and recently there has been considerable work leveraging deeply learned regularisation for enhanced performance. However, few results exist addressing the convergence of such regularisation, particularly within the context of critical points as opposed to global minimisers. In this paper, we present a generalised formulation of convergent regularisation in terms of critical points, and show that this is achieved by a class of weakly convex regularisers. We prove convergence of the primal-dual hybrid gradient method for the associated variational problem, and, given a Kurdyka-Łojasiewicz condition, an $\mathcal{O}(\log{k}/k)$ ergodic convergence rate. Finally, applying this theory to learned regularisation, we prove universal approximation for input weakly convex neural networks (IWCNN), and show empirically that IWCNNs can lead to improved performance of learned adversarial regularisers for computed tomography (CT) reconstruction.
https://proceedings.mlr.press/v235/shumilin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shumilin24a/shumilin24a.pdf
https://openreview.net/forum?id=kMBvZ40Iu9
Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation
https://proceedings.mlr.press/v235/shumilin24a.html
Sergei Shumilin, Alexander Ryabov, Nikolay Yavich, Evgeny Burnaev, Vladimir Vanovskiy
https://proceedings.mlr.press/v235/shumilin24a.html
ICML 2024
Due to the high computational load of modern numerical simulation, there is a demand for approaches that would reduce the size of discrete problems while keeping the accuracy reasonable. In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We achieve this by employing $k$-means clustering, autodifferentiation and stochastic minimization algorithms. We demonstrate performance of the designed algorithm on two PDEs: a linear parabolic equation which governs slightly compressible fluid flow in porous media and the wave equation. Our results show that in the considered scenarios, we reduced the number of grid points up to 10 times while preserving the modeled variable dynamics in the points of interest. The proposed approach can be applied to the simulation of an arbitrary system described by evolutionary partial differential equations.
https://proceedings.mlr.press/v235/shumitskaya24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/shumitskaya24a/shumitskaya24a.pdf
https://openreview.net/forum?id=Chy4rSqy4Y
IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics
https://proceedings.mlr.press/v235/shumitskaya24a.html
Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy S. Vatolin
https://proceedings.mlr.press/v235/shumitskaya24a.html
ICML 2024
No-reference image- and video-quality metrics are widely used in video processing benchmarks. The robustness of learning-based metrics under video attacks has not been widely studied. Besides being successful, attacks on metrics that are to be employed in video processing benchmarks must be fast and imperceptible. This paper introduces an Invisible One-Iteration (IOI) adversarial attack on no-reference image and video quality metrics. The proposed method uses two modules to ensure high visual quality and temporal stability of adversarial videos and runs for one iteration, which makes it fast. We compared our method with eight prior approaches using image and video datasets via objective and subjective tests. Our method exhibited superior visual quality across various attacked metric architectures while maintaining comparable attack success and speed. We made the code available on GitHub: https://github.com/katiashh/ioi-attack.
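As a hedged illustration of the general attack setting in the abstract above, the sketch below performs a single signed-gradient step that increases the score of a differentiable no-reference quality metric under an l-infinity budget. It omits IOI's invisibility and temporal-stability modules; the toy metric, budget, and function names are assumptions for illustration only.

```python
# Generic one-step gradient attack on a differentiable no-reference quality
# metric. IOI's invisibility and temporal-stability modules are not reproduced.
import torch

def one_step_metric_attack(metric, frames, epsilon=2.0 / 255):
    """Increase metric(frames) with a single signed-gradient step.

    metric: callable mapping a (T, C, H, W) video tensor to a scalar score.
    frames: input video in [0, 1]; the perturbation is bounded in l_inf.
    """
    frames = frames.clone().requires_grad_(True)
    score = metric(frames)
    score.backward()
    adv = frames + epsilon * frames.grad.sign()     # push the score up
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    video = torch.rand(8, 3, 32, 32)
    # Toy "metric": mean brightness stands in for a learned quality model.
    toy_metric = lambda v: v.mean()
    adv_video = one_step_metric_attack(toy_metric, video)
    print("score before:", toy_metric(video).item(),
          "score after:", toy_metric(adv_video).item())
```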
https://proceedings.mlr.press/v235/si24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/si24a/si24a.pdf
https://openreview.net/forum?id=or8BQ4ohGb
InterpreTabNet: Distilling Predictive Signals from Tabular Data by Salient Feature Interpretation
https://proceedings.mlr.press/v235/si24a.html
Jacob Yoke Hong Si, Wendy Yusi Cheng, Michael Cooper, Rahul Krishnan
https://proceedings.mlr.press/v235/si24a.html
ICML 2024
Tabular data are omnipresent in various sectors of industries. Neural networks for tabular data such as TabNet have been proposed to make predictions while leveraging the attention mechanism for interpretability. However, the inferred attention masks are often dense, making it challenging to identify rationales for the predictive signal. To remedy this, we propose InterpreTabNet, a variant of the TabNet model that models the attention mechanism as a latent variable sampled from a Gumbel-Softmax distribution. This enables us to regularize the model to learn distinct concepts in the attention masks via a KL Divergence regularizer. It prevents overlapping feature selection by promoting sparsity, which maximizes the model’s efficacy and improves interpretability in determining the important features when predicting the outcome. To assist in the interpretation of feature interdependencies from our model, we employ a large language model (GPT-4) and use prompt engineering to map the learned feature mask onto natural language text describing the learned signal. Through comprehensive experiments on real-world datasets, we demonstrate that InterpreTabNet outperforms previous methods for interpreting tabular data while attaining competitive accuracy.
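The core mechanism in the abstract above, sampling feature masks from a Gumbel-Softmax distribution and penalizing overlap between decision steps, can be sketched in a few lines of PyTorch. The number of steps, the exact regularizer, and the layer sizes below are illustrative assumptions, not the paper's architecture.

```python
# Sample per-step feature masks from a Gumbel-Softmax distribution and
# penalize overlap between steps with a KL term (illustrative, not the
# full InterpreTabNet model).
import torch
import torch.nn.functional as F

def sample_masks(logits, tau=0.5):
    # logits: (n_steps, n_features) -> one relaxed mask per decision step.
    return F.gumbel_softmax(logits, tau=tau, hard=False, dim=-1)

def overlap_penalty(logits):
    # Encourage distinct masks: negative sum of pairwise KL divergences,
    # so minimizing this term pushes the step distributions apart.
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    kl = 0.0
    for i in range(len(logits)):
        for j in range(len(logits)):
            if i != j:
                kl = kl + F.kl_div(log_probs[j], probs[i], reduction="sum")
    return -kl

if __name__ == "__main__":
    torch.manual_seed(0)
    mask_logits = torch.randn(3, 10, requires_grad=True)   # 3 steps, 10 features
    masks = sample_masks(mask_logits)
    reg = overlap_penalty(mask_logits)
    (masks.sum() + 0.1 * reg).backward()
    print("mask shape:", masks.shape, "overlap penalty:", reg.item())
```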
https://proceedings.mlr.press/v235/silva24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/silva24a/silva24a.pdf
https://openreview.net/forum?id=KJhLpzqNri
Embarrassingly Parallel GFlowNets
https://proceedings.mlr.press/v235/silva24a.html
Tiago Silva, Luiz Max Carvalho, Amauri H Souza, Samuel Kaski, Diego Mesquita
https://proceedings.mlr.press/v235/silva24a.html
ICML 2024
GFlowNets are a promising alternative to MCMC sampling for discrete compositional random variables. Training GFlowNets requires repeated evaluations of the unnormalized target distribution, or reward function. However, for large-scale posterior sampling, this may be prohibitive since it incurs traversing the data several times. Moreover, if the data are distributed across clients, employing standard GFlowNets leads to intensive client-server communication. To alleviate both these issues, we propose embarrassingly parallel GFlowNet (EP-GFlowNet). EP-GFlowNet is a provably correct divide-and-conquer method to sample from product distributions of the form $R(\cdot) \propto R_1(\cdot) ... R_N(\cdot)$ — e.g., in parallel or federated Bayes, where each $R_n$ is a local posterior defined on a data partition. First, in parallel, we train a local GFlowNet targeting each $R_n$ and send the resulting models to the server. Then, the server learns a global GFlowNet by enforcing our newly proposed aggregating balance condition, requiring a single communication step. Importantly, EP-GFlowNets can also be applied to multi-objective optimization and model reuse. Our experiments illustrate the effectiveness of EP-GFlowNets on multiple tasks, including parallel Bayesian phylogenetics, multi-objective multiset and sequence generation, and federated Bayesian structure learning.
https://proceedings.mlr.press/v235/silva24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/silva24b/silva24b.pdf
https://openreview.net/forum?id=ie3vXkMvRY
On the Unexpected Effectiveness of Reinforcement Learning for Sequential Recommendation
https://proceedings.mlr.press/v235/silva24b.html
Álvaro Labarca Silva, Denis Parra, Rodrigo Toro Icarte
https://proceedings.mlr.press/v235/silva24b.html
ICML 2024
In recent years, Reinforcement Learning (RL) has shown great promise in session-based recommendation. Sequential models that use RL have reached state-of-the-art performance for the Next-item Prediction (NIP) task. This result is intriguing, as the NIP task only evaluates how well the system can correctly recommend the next item to the user, while the goal of RL is to find a policy that optimizes rewards in the long term – sometimes at the expense of suboptimal short-term performance. Then, how can RL improve the system’s performance on short-term metrics? This article investigates this question by exploring proxy learning objectives, which we identify as goals RL models might be following, and thus could explain the performance boost. We found that RL – when used as an auxiliary loss – promotes the learning of embeddings that capture information about the user’s previously interacted items. Subsequently, we replaced the RL objective with a straightforward auxiliary loss designed to predict the number of items the user interacted with. This substitution results in performance gains comparable to RL. These findings pave the way to improve performance and understanding of RL methods for recommender systems.
https://proceedings.mlr.press/v235/silva24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/silva24c/silva24c.pdf
https://openreview.net/forum?id=Ed4KgHoKNe
Learning from Memory: Non-Parametric Memory Augmented Self-Supervised Learning of Visual Features
https://proceedings.mlr.press/v235/silva24c.html
Thalles Silva, Helio Pedrini, Adı́n Ramı́rez Rivera
https://proceedings.mlr.press/v235/silva24c.html
ICML 2024
This paper introduces a novel approach to improving the training stability of self-supervised learning (SSL) methods by leveraging a non-parametric memory of seen concepts. The proposed method involves augmenting a neural network with a memory component to stochastically compare current image views with previously encountered concepts. Additionally, we introduce stochastic memory blocks to regularize training and enforce consistency between image views. We extensively benchmark our method on many vision tasks, such as linear probing, transfer learning, few-shot classification, and image retrieval on many datasets. The experimental results consolidate the effectiveness of the proposed approach in achieving stable SSL training without additional regularizers while learning highly transferable representations and requiring less computing time and resources.
https://proceedings.mlr.press/v235/sim24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sim24a/sim24a.pdf
https://openreview.net/forum?id=ecvuJWE1YY
Deletion-Anticipative Data Selection with a Limited Budget
https://proceedings.mlr.press/v235/sim24a.html
Rachael Hwee Ling Sim, Jue Fan, Xiao Tian, Patrick Jaillet, Bryan Kian Hsiang Low
https://proceedings.mlr.press/v235/sim24a.html
ICML 2024
Learners with a limited budget can use supervised data subset selection and active learning techniques to select a smaller training set and reduce the cost of acquiring data and training machine learning (ML) models. However, the resulting high model performance, measured by a data utility function, may not be preserved when some data owners, enabled by the GDPR’s right to erasure, request their data to be deleted from the ML model. This raises an important question for learners who are temporarily unable or unwilling to acquire data again: During the initial data acquisition of a training set of size $k$, can we proactively maximize the data utility after future unknown deletions? We propose that the learner anticipates/estimates the probability that (i) each data owner in the feasible set will independently delete its data or (ii) a number of deletions occur out of $k$, and justify our proposal with concrete real-world use cases. Then, instead of directly maximizing the data utility function, the learner can maximize the expected or risk-averse post-deletion utility based on the anticipated probabilities. We further propose how to construct these deletion-anticipative data selection ($\texttt{DADS}$) maximization objectives so as to preserve monotone submodularity and near-optimality of greedy solutions and how to optimize the objectives, and we empirically evaluate $\texttt{DADS}$’ performance on real-world datasets.
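The following toy Python sketch illustrates the expected-post-deletion idea described in the abstract above: greedily pick data owners to maximize a Monte Carlo estimate of the expected utility after independent deletions. The coverage utility, deletion probabilities, and greedy routine are stand-ins chosen for illustration; the paper's objectives and optimization are more refined.

```python
# Deletion-anticipative greedy selection with a toy coverage utility.
import numpy as np

rng = np.random.default_rng(0)

def expected_post_deletion_utility(subset, covers, p_delete, n_mc=200):
    """Estimate E[utility(S minus deleted)] with owner i deleted w.p. p_delete[i]."""
    subset = list(subset)
    total = 0.0
    for _ in range(n_mc):
        kept = [i for i in subset if rng.random() > p_delete[i]]
        covered = set().union(*(covers[i] for i in kept)) if kept else set()
        total += len(covered)                     # coverage utility
    return total / n_mc

def greedy_dads(n_owners, covers, p_delete, k):
    selected = []
    for _ in range(k):
        gains = [
            (expected_post_deletion_utility(selected + [i], covers, p_delete), i)
            for i in range(n_owners) if i not in selected
        ]
        selected.append(max(gains)[1])            # pick the best marginal gain
    return selected

if __name__ == "__main__":
    n = 8
    covers = [set(rng.choice(30, size=6, replace=False)) for _ in range(n)]
    p_delete = rng.uniform(0.0, 0.5, size=n)
    print("selected owners:", greedy_dads(n, covers, p_delete, k=3))
```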
https://proceedings.mlr.press/v235/simmons-edler24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/simmons-edler24a/simmons-edler24a.pdf
https://openreview.net/forum?id=ZwUThOE7Zc
Position: AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research
https://proceedings.mlr.press/v235/simmons-edler24a.html
Riley Simmons-Edler, Ryan Paul Badman, Shayne Longpre, Kanaka Rajan
https://proceedings.mlr.press/v235/simmons-edler24a.html
ICML 2024
The recent embrace of machine learning (ML) in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research. This topic has received comparatively little attention of late compared to risks stemming from superintelligent artificial general intelligence (AGI), but requires fewer assumptions about the course of technological development and is thus a nearer-future issue. ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war. In the case of peer adversaries, this increases the likelihood of "low intensity" conflicts which risk escalation to broader warfare. In the case of non-peer adversaries, it reduces the domestic blowback to wars of aggression. This effect can occur regardless of other ethical issues around the use of military AI such as the risk of civilian casualties, and does not require any superhuman AI capabilities. Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research. Our goal in this paper is to raise awareness among the public and ML researchers on the near-future risks posed by full or near-full autonomy in military technology, and we provide regulatory suggestions to mitigate these risks. We call upon AI policy experts and the defense AI community in particular to embrace transparency and caution in their development and deployment of AWS to avoid the negative effects on global stability and AI research that we highlight here.
https://proceedings.mlr.press/v235/sinelnikov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sinelnikov24a/sinelnikov24a.pdf
https://openreview.net/forum?id=g1Gf0hoPSz
Latent variable model for high-dimensional point process with structured missingness
https://proceedings.mlr.press/v235/sinelnikov24a.html
Maksim Sinelnikov, Manuel Haussmann, Harri Lähdesmäki
https://proceedings.mlr.press/v235/sinelnikov24a.html
ICML 2024
Longitudinal data are important in numerous fields, such as healthcare, sociology and seismology, but real-world datasets present notable challenges for practitioners because they can be high-dimensional, contain structured missingness patterns, and measurement time points can be governed by an unknown stochastic process. While various solutions have been suggested, the majority of them have been designed to account for only one of these challenges. In this work, we propose a flexible and efficient latent-variable model that is capable of addressing all these limitations. Our approach utilizes Gaussian processes to capture correlations between samples and their associated missingness masks as well as to model the underlying point process. We construct our model as a variational autoencoder together with deep neural network parameterised decoder and encoder models, and develop a scalable amortised variational inference approach for efficient model training. We demonstrate competitive performance using both simulated and real datasets.
https://proceedings.mlr.press/v235/singh24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24a/singh24a.pdf
https://openreview.net/forum?id=yFUdZfbEme
Domain Generalisation via Imprecise Learning
https://proceedings.mlr.press/v235/singh24a.html
Anurag Singh, Siu Lun Chau, Shahine Bouabid, Krikamol Muandet
https://proceedings.mlr.press/v235/singh24a.html
ICML 2024
Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g. optimising based on the average-case risk, worst-case risk, or interpolations thereof. While this decision should in principle be made by the model operator, such as a medical doctor in practice, this information might not always be available at training time. This situation leads to arbitrary commitments to specific generalisation strategies by machine learners due to these deployment uncertainties. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Our work, supported by theoretical and empirical evidence, showcases the benefits of integrating imprecision into domain generalisation.
https://proceedings.mlr.press/v235/singh24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24b/singh24b.pdf
https://openreview.net/forum?id=tTtSnpH4fc
Finite Time Logarithmic Regret Bounds for Self-Tuning Regulation
https://proceedings.mlr.press/v235/singh24b.html
Rahul Singh, Akshay Mete, Avik Kar, Panganamala Kumar
https://proceedings.mlr.press/v235/singh24b.html
ICML 2024
We establish the first finite-time logarithmic regret bounds for the self-tuning regulation problem. We introduce a modified version of the certainty equivalence algorithm, which we call PIECE, that clips inputs in addition to utilizing probing inputs for exploration. We show that it has a $C \log T$ upper bound on the regret after $T$ time-steps for bounded noise, and $C\log^3 T$ in the case of sub-Gaussian noise, unlike the LQ problem, where logarithmic regret has been shown to be impossible. The PIECE algorithm is also designed to address the critical challenge of poor initial transient performance of reinforcement learning algorithms for linear systems. Comparative simulation results illustrate the improved performance of PIECE.
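To give a feel for the two ingredients the abstract above highlights, clipped inputs and probing inputs, the toy Python loop below runs a generic certainty-equivalence controller on a scalar system with least-squares parameter estimation. The constants, estimator, and probing schedule are illustrative assumptions and do not reproduce the PIECE algorithm or its regret guarantees.

```python
# Toy certainty-equivalence loop for x_{t+1} = a x_t + b u_t + w_t with
# clipped inputs and decaying probing noise (illustrative, not PIECE).
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5          # unknown to the controller
x, T = 1.0, 2000
u_max, probe_scale = 5.0, 0.5

phis, xs_next = [], []
a_hat, b_hat = 0.0, 1.0            # initial parameter guesses
total_cost = 0.0

for t in range(1, T + 1):
    # Certainty-equivalence input driving the estimated system toward zero,
    # clipped to a bounded range, plus decaying probing noise for excitation.
    u = np.clip(-(a_hat / b_hat) * x, -u_max, u_max)
    u += probe_scale * rng.standard_normal() / np.sqrt(t)
    x_next = a_true * x + b_true * u + 0.1 * rng.standard_normal()

    phis.append([x, u]); xs_next.append(x_next)
    # Least-squares re-estimate of (a, b) from the history so far.
    Phi, y = np.array(phis), np.array(xs_next)
    a_hat, b_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
    b_hat = b_hat if abs(b_hat) > 1e-2 else 1e-2   # avoid division blow-ups

    total_cost += x**2 + u**2
    x = x_next

print(f"estimates a={a_hat:.3f}, b={b_hat:.3f}, average cost={total_cost / T:.3f}")
```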
https://proceedings.mlr.press/v235/singh24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24c/singh24c.pdf
https://openreview.net/forum?id=O8rrXl71D5
What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation
https://proceedings.mlr.press/v235/singh24c.html
Aaditya K Singh, Ted Moskovitz, Felix Hill, Stephanie C.Y. Chan, Andrew M Saxe
https://proceedings.mlr.press/v235/singh24c.html
ICML 2024
In-context learning is a powerful emergent ability in transformer models. Prior work in mechanistic interpretability has identified a circuit element that may be critical for in-context learning – the induction head (IH), which performs a match-and-copy operation. During training of large transformers on natural language data, IHs emerge around the same time as a notable phase change in the loss. Despite the robust evidence for IHs and this interesting coincidence with the phase change, relatively little is known about the diversity and emergence dynamics of IHs. Why is there more than one IH, and how are they dependent on each other? Why do IHs appear all of a sudden, and what are the subcircuits that enable them to emerge? We answer these questions by studying IH emergence dynamics in a controlled setting by training on synthetic data. In doing so, we develop and share a novel optogenetics-inspired causal framework for modifying activations throughout training. Using this framework, we delineate the diverse and additive nature of IHs. By "clamping" subsets of activations throughout training, we then identify three underlying subcircuits that interact to drive IH formation, yielding the phase change. Furthermore, these subcircuits shed light on data-dependent properties of formation, such as phase change timing, already showing the promise of this more in-depth understanding of subcircuits that need to "go right" for an induction head.
https://proceedings.mlr.press/v235/singh24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24d/singh24d.pdf
https://openreview.net/forum?id=GwA4go0Mw4
Representation Surgery: Theory and Practice of Affine Steering
https://proceedings.mlr.press/v235/singh24d.html
Shashwat Singh, Shauli Ravfogel, Jonathan Herzig, Roee Aharoni, Ryan Cotterell, Ponnurangam Kumaraguru
https://proceedings.mlr.press/v235/singh24d.html
ICML 2024
Language models often exhibit undesirable behavior, e.g., generating toxic or gender-biased text. In the case of neural language models, an encoding of the undesirable behavior is often present in the model’s representations. Thus, one natural (and common) approach to prevent the model from exhibiting undesirable behavior is to steer the model’s representations in a manner that reduces the probability of it generating undesirable text. This paper investigates the formal and empirical properties of steering functions, i.e., transformation of the neural language model’s representations that alter its behavior. First, we derive two optimal, in the least-squares sense, affine steering functions under different constraints. Our theory provides justification for existing approaches and offers a novel, improved steering approach. Second, we offer a series of experiments that demonstrate the empirical effectiveness of the methods in mitigating bias and reducing toxic generation.
https://proceedings.mlr.press/v235/singh24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24e/singh24e.pdf
https://openreview.net/forum?id=l6Hef6FVd0
PIPER: Primitive-Informed Preference-based Hierarchical Reinforcement Learning via Hindsight Relabeling
https://proceedings.mlr.press/v235/singh24e.html
Utsav Singh, Wesley A Suttle, Brian M. Sadler, Vinay P. Namboodiri, Amrit Bedi
https://proceedings.mlr.press/v235/singh24e.html
ICML 2024
In this work, we introduce PIPER: Primitive-Informed Preference-based Hierarchical reinforcement learning via Hindsight Relabeling, a novel approach that leverages preference-based learning to learn a reward model, and subsequently uses this reward model to relabel higher-level replay buffers. Since this reward is unaffected by lower primitive behavior, our relabeling-based approach is able to mitigate non-stationarity, which is common in existing hierarchical approaches, and demonstrates impressive performance across a range of challenging sparse-reward tasks. Since obtaining human feedback is typically impractical, we propose to replace the human-in-the-loop approach with our primitive-in-the-loop approach, which generates feedback using sparse rewards provided by the environment. Moreover, in order to prevent infeasible subgoal prediction and avoid degenerate solutions, we propose primitive-informed regularization that conditions higher-level policies to generate feasible subgoals for lower-level policies. We perform extensive experiments to show that PIPER mitigates non-stationarity in hierarchical reinforcement learning and achieves greater than 50$\%$ success rates in challenging, sparse-reward robotic environments, where most other baselines fail to achieve any significant progress.
https://proceedings.mlr.press/v235/singh24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24f/singh24f.pdf
https://openreview.net/forum?id=q5q59s2WJy
Byzantine Resilient and Fast Federated Few-Shot Learning
https://proceedings.mlr.press/v235/singh24f.html
Ankit Pratap Singh, Namrata Vaswani
https://proceedings.mlr.press/v235/singh24f.html
ICML 2024
This work introduces a Byzantine resilient solution for learning a low-dimensional linear representation. Our main contribution is the development of a provably Byzantine-resilient AltGDmin algorithm for solving this problem in a federated setting. We argue that our solution is sample-efficient, fast, and communication-efficient. In solving this problem, we also introduce a novel secure solution to the federated subspace learning meta-problem that occurs in many different applications.
https://proceedings.mlr.press/v235/singh24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singh24g/singh24g.pdf
https://openreview.net/forum?id=KpeGdDzucX
Parallelized Spatiotemporal Slot Binding for Videos
https://proceedings.mlr.press/v235/singh24g.html
Gautam Singh, Yue Wang, Jiawei Yang, Boris Ivanovic, Sungjin Ahn, Marco Pavone, Tong Che
https://proceedings.mlr.press/v235/singh24g.html
ICML 2024
While modern best practices advocate for scalable architectures that support long-range interactions, object-centric models are yet to fully embrace these architectures. In particular, existing object-centric models for handling sequential inputs, due to their reliance on RNN-based implementation, show poor stability and capacity and are slow to train on long sequences. We introduce Parallelizable Spatiotemporal Binder or PSB, the first temporally-parallelizable slot learning architecture for sequential inputs. Unlike conventional RNN-based approaches, PSB produces object-centric representations, known as slots, for all time-steps in parallel. This is achieved by refining the initial slots across all time-steps through a fixed number of layers equipped with causal attention. By capitalizing on the parallelism induced by our architecture, the proposed model exhibits a significant boost in efficiency. In experiments, we test PSB extensively as an encoder within an auto-encoding framework paired with a wide variety of decoder options. Compared to the state-of-the-art, our architecture demonstrates stable training on longer sequences, achieves parallelization that results in a 60% increase in training speed, and yields performance that is on par with or better on unsupervised 2D and 3D object-centric scene decomposition and understanding.
https://proceedings.mlr.press/v235/singhal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singhal24a/singhal24a.pdf
https://openreview.net/forum?id=wLoESsgZIq
What’s the score? Automated Denoising Score Matching for Nonlinear Diffusions
https://proceedings.mlr.press/v235/singhal24a.html
Raghav Singhal, Mark Goldstein, Rajesh Ranganath
https://proceedings.mlr.press/v235/singhal24a.html
ICML 2024
Reversing a diffusion process by learning its score forms the heart of diffusion-based generative modeling and of methods for estimating properties of scientific systems. The diffusion processes that are tractable center on linear processes with a Gaussian stationary distribution, limiting the kinds of models that can be built to those that target a Gaussian prior and, more generally, limiting the kinds of problems that can be generically solved to those that have conditionally linear score functions. In this work, we introduce a family of tractable denoising score matching objectives, called local-DSM, built using local increments of the diffusion process. We show how local-DSM melded with Taylor expansions enables automated training and score estimation with nonlinear diffusion processes. To demonstrate these ideas, we use automated-DSM to train generative models using non-Gaussian priors on challenging low dimensional distributions and the CIFAR10 image dataset. Additionally, we use the automated-DSM to learn the scores for nonlinear processes studied in statistical physics.
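For context on the objective the abstract above generalizes, here is a compact PyTorch sketch of classical denoising score matching for a Gaussian perturbation kernel. It covers only the linear, Gaussian case; the paper's local-DSM extension to nonlinear diffusions is not implemented, and the network size and noise schedule are illustrative assumptions.

```python
# Classical denoising score matching with a Gaussian perturbation kernel.
# The paper's local-DSM generalizes this to nonlinear diffusions.
import torch
import torch.nn as nn

score_net = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 1))

def dsm_loss(x0, sigma_min=0.01, sigma_max=1.0):
    # Sample a noise level, perturb the data, and regress the score of the
    # Gaussian kernel: grad_x log q(x_t | x_0) = -(x_t - x_0) / sigma^2.
    sigma = sigma_min + (sigma_max - sigma_min) * torch.rand(x0.shape[0], 1)
    noise = torch.randn_like(x0)
    xt = x0 + sigma * noise
    target = -noise / sigma
    pred = score_net(torch.cat([xt, sigma], dim=1))
    return ((sigma * pred - sigma * target) ** 2).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    data = torch.randn(4096, 1) * 0.3 + 2.0        # toy 1-D data distribution
    opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
    for step in range(500):
        loss = dsm_loss(data[torch.randint(0, 4096, (256,))])
        opt.zero_grad(); loss.backward(); opt.step()
    print("final DSM loss:", loss.item())
```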
https://proceedings.mlr.press/v235/singla24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/singla24a/singla24a.pdf
https://openreview.net/forum?id=4dOJAfXhNV
SAPG: Split and Aggregate Policy Gradients
https://proceedings.mlr.press/v235/singla24a.html
Jayesh Singla, Ananye Agarwal, Deepak Pathak
https://proceedings.mlr.press/v235/singla24a.html
ICML 2024
Despite extreme sample inefficiency, on-policy reinforcement learning, aka policy gradients, has become a fundamental tool in decision-making problems. With the recent advances in GPU-driven simulation, the ability to collect large amounts of data for RL training has scaled exponentially. However, we show that current RL methods, e.g. PPO, fail to reap the benefit of parallelized environments beyond a certain point and their performance saturates. To address this, we propose a new on-policy RL algorithm that can effectively leverage large-scale environments by splitting them into chunks and fusing them back together via importance sampling. Our algorithm, termed SAPG, shows significantly higher performance across a variety of challenging environments where vanilla PPO and other strong baselines fail to achieve high performance. Webpage at https://sapg-rl.github.io/.
https://proceedings.mlr.press/v235/sinii24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sinii24a/sinii24a.pdf
https://openreview.net/forum?id=pp3v2ch5Sd
In-Context Reinforcement Learning for Variable Action Spaces
https://proceedings.mlr.press/v235/sinii24a.html
Viacheslav Sinii, Alexander Nikulin, Vladislav Kurenkov, Ilya Zisman, Sergey Kolesnikov
https://proceedings.mlr.press/v235/sinii24a.html
ICML 2024
Recently, it has been shown that transformers pre-trained on diverse datasets with multi-episode contexts can generalize to new reinforcement learning tasks in-context. A key limitation of previously proposed models is their reliance on a predefined action space size and structure. The introduction of a new action space often requires data re-collection and model re-training, which can be costly for some applications. In our work, we show that it is possible to mitigate this issue by proposing the Headless-AD model that, despite being trained only once, is capable of generalizing to discrete action spaces of variable size, semantic content and order. By experimenting with Bernoulli and contextual bandits, as well as a gridworld environment, we show that Headless-AD exhibits significant capability to generalize to action spaces it has never encountered, even outperforming specialized models trained for a specific set of actions on several environment configurations.
https://proceedings.mlr.press/v235/sittoni24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sittoni24a/sittoni24a.pdf
https://openreview.net/forum?id=YBXwr7wF7i
Subhomogeneous Deep Equilibrium Models
https://proceedings.mlr.press/v235/sittoni24a.html
Pietro Sittoni, Francesco Tudisco
https://proceedings.mlr.press/v235/sittoni24a.html
ICML 2024
Implicit-depth neural networks have grown as powerful alternatives to traditional networks in various applications in recent years. However, these models often lack guarantees of existence and uniqueness, raising stability, performance, and reproducibility issues. In this paper, we present a new analysis of the existence and uniqueness of fixed points for implicit-depth neural networks based on the concept of subhomogeneous operators and the nonlinear Perron-Frobenius theory. Compared to previous similar analyses, our theory allows for weaker assumptions on the parameter matrices, thus yielding a more flexible framework for well-defined implicit networks. We illustrate the performance of the resulting subhomogeneous networks on feedforward, convolutional, and graph neural network examples.
https://proceedings.mlr.press/v235/sivagnanam24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sivagnanam24a/sivagnanam24a.pdf
https://openreview.net/forum?id=TTZXl9WYFF
Multi-Agent Reinforcement Learning with Hierarchical Coordination for Emergency Responder Stationing
https://proceedings.mlr.press/v235/sivagnanam24a.html
Amutheezan Sivagnanam, Ava Pettet, Hunter Lee, Ayan Mukhopadhyay, Abhishek Dubey, Aron Laszka
https://proceedings.mlr.press/v235/sivagnanam24a.html
ICML 2024
An emergency responder management (ERM) system dispatches responders, such as ambulances, when it receives requests for medical aid. ERM systems can also proactively reposition responders between predesignated waiting locations to cover any gaps that arise due to the prior dispatch of responders or significant changes in the distribution of anticipated requests. Optimal repositioning is computationally challenging due to the exponential number of ways to allocate responders between locations and the uncertainty in future requests. The state-of-the-art approach in proactive repositioning is a hierarchical approach based on spatial decomposition and online Monte Carlo tree search, which may require minutes of computation for each decision in a domain where seconds can save lives. We address the issue of long decision times by introducing a novel reinforcement learning (RL) approach, based on the same hierarchical decomposition, but replacing online search with learning. To address the computational challenges posed by large, variable-dimensional, and discrete state and action spaces, we propose: (1) actor-critic based agents that incorporate transformers to handle variable-dimensional states and actions, (2) projections to fixed-dimensional observations to handle complex states, and (3) combinatorial techniques to map continuous actions to discrete allocations. We evaluate our approach using real-world data from two U.S. cities, Nashville, TN and Seattle, WA. Our experiments show that compared to the state of the art, our approach reduces computation time per decision by three orders of magnitude, while also slightly reducing average ambulance response time by 5 seconds.
https://proceedings.mlr.press/v235/smee24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/smee24a/smee24a.pdf
https://openreview.net/forum?id=p7gpooFIr3
Inexact Newton-type Methods for Optimisation with Nonnegativity Constraints
https://proceedings.mlr.press/v235/smee24a.html
Oscar Smee, Fred Roosta
https://proceedings.mlr.press/v235/smee24a.html
ICML 2024
We consider solving large scale nonconvex optimisation problems with nonnegativity constraints. Such problems arise frequently in machine learning, such as nonnegative least-squares, nonnegative matrix factorisation, as well as problems with sparsity-inducing regularisation. In such settings, first-order methods, despite their simplicity, can be prohibitively slow on ill-conditioned problems or become trapped near saddle regions, while most second-order alternatives involve non-trivially challenging subproblems. The two-metric projection framework, initially proposed by Bertsekas (1982), alleviates these issues and achieves the best of both worlds by combining projected gradient steps at the boundary of the feasible region with Newton steps in the interior in such a way that feasibility can be maintained by simple projection onto the nonnegative orthant. We develop extensions of the two-metric projection framework, which by inexactly solving the subproblems as well as employing non-positive curvature directions, are suitable for large scale and nonconvex settings. We obtain state-of-the-art convergence rates for various classes of non-convex problems and demonstrate competitive practical performance on a variety of problems.
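The abstract above builds on the two-metric projection framework; the small Python sketch below shows its basic structure on nonnegative least squares: gradient steps on the coordinates pinned at the boundary, a Newton step on the free block, and projection onto the nonnegative orthant. Thresholds, step sizes, and the exact active-set rule are illustrative assumptions, and none of the paper's inexactness or negative-curvature extensions are included.

```python
# Compact two-metric projection sketch for min_x 0.5*||Ax - b||^2, x >= 0.
import numpy as np

def two_metric_projection(A, b, n_iter=100, eps=1e-6, step=1e-2):
    n = A.shape[1]
    x = np.zeros(n)
    H = A.T @ A                                      # Hessian of the objective
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                        # gradient
        # Coordinates pinned at the boundary whose gradient pushes them outward.
        active = (x <= eps) & (g > 0)
        free = ~active
        x_new = x.copy()
        x_new[active] = x[active] - step * g[active]          # gradient step
        if free.any():                                        # Newton step
            Hff = H[np.ix_(free, free)] + 1e-8 * np.eye(int(free.sum()))
            x_new[free] = x[free] - np.linalg.solve(Hff, g[free])
        x = np.maximum(x_new, 0.0)               # projection keeps feasibility
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    x_true = np.maximum(rng.standard_normal(10), 0.0)
    b = A @ x_true
    x_hat = two_metric_projection(A, b)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```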
https://proceedings.mlr.press/v235/smit24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/smit24a/smit24a.pdf
https://openreview.net/forum?id=CrUmgUaAQp
Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs
https://proceedings.mlr.press/v235/smit24a.html
Andries Petrus Smit, Nathan Grinsztajn, Paul Duckworth, Thomas D Barrett, Arnu Pretorius
https://proceedings.mlr.press/v235/smit24a.html
ICML 2024
Recent advancements in large language models (LLMs) underscore their potential for responding to inquiries in various domains. However, ensuring that generative agents provide accurate and reliable answers remains an ongoing challenge. In this context, multi-agent debate (MAD) has emerged as a promising strategy for enhancing the truthfulness of LLMs. We benchmark a range of debating and prompting strategies to explore the trade-offs between cost, time, and accuracy. Importantly, we find that multi-agent debating systems, in their current form, do not reliably outperform other proposed prompting strategies, such as self-consistency and ensembling using multiple reasoning paths. However, when performing hyperparameter tuning, several MAD systems, such as Multi-Persona, perform better. This suggests that MAD protocols might not be inherently worse than other approaches, but that they are more sensitive to different hyperparameter settings and difficult to optimize. We build on these results to offer insights into improving debating strategies, such as adjusting agent agreement levels, which can significantly enhance performance and even surpass all other non-debate protocols we evaluated. We provide an open-source repository to the community with several state-of-the-art protocols together with evaluation scripts to benchmark across popular research datasets.
https://proceedings.mlr.press/v235/soares24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/soares24a/soares24a.pdf
https://openreview.net/forum?id=4zOZ0yKhm6
Probabilistic Modeling of Interpersonal Coordination Processes
https://proceedings.mlr.press/v235/soares24a.html
Paulo Soares, Adarsh Pyarelal, Meghavarshini Krishnaswamy, Emily Butler, Kobus Barnard
https://proceedings.mlr.press/v235/soares24a.html
ICML 2024
We develop a novel probabilistic model for interpersonal coordination as a latent phenomenon explaining statistical temporal influence between multiple components in a system. For example, the state of one person can influence that of another at a later time, as indicated by their observed behaviors. We characterize coordination as the degree to which the distributions for such states at one time point are merged for the next salient time point. We evaluate our model in the context of three-person teams executing a virtual search and rescue (SAR) mission. We first use synthetic data to confirm that our technical definition of coordination is consistent with expectations and that we can recover generated coordination despite noise. We then show that captured coordination can be predictive of team performance on real data. Here we use speech vocalics and semantics to infer coordination for 36 teams carrying out two successive SAR missions. In two different datasets, we find that coordination is generally predictive of team score for the second mission, but not for the first, where teams are largely learning to play the game. In addition, we found that including a semantic modality improves prediction in some scenarios. This shows that our intuitive technical definition can capture useful explanatory aspects of team behavior.
https://proceedings.mlr.press/v235/sohrabi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sohrabi24a/sohrabi24a.pdf
https://openreview.net/forum?id=1khG2xf1yt
On PI Controllers for Updating Lagrange Multipliers in Constrained Optimization
https://proceedings.mlr.press/v235/sohrabi24a.html
Motahareh Sohrabi, Juan Ramirez, Tianyue H. Zhang, Simon Lacoste-Julien, Jose Gallego-Posada
https://proceedings.mlr.press/v235/sohrabi24a.html
ICML 2024
Constrained optimization offers a powerful framework to prescribe desired behaviors in neural network models. Typically, constrained problems are solved via their min-max Lagrangian formulations, which exhibit unstable oscillatory dynamics when optimized using gradient descent-ascent. The adoption of constrained optimization techniques in the machine learning community is currently limited by the lack of reliable, general-purpose update schemes for the Lagrange multipliers. This paper proposes the νPI algorithm and contributes an optimization perspective on Lagrange multiplier updates based on PI controllers, extending the work of Stooke, Achiam and Abbeel (2020). We provide theoretical and empirical insights explaining the inability of momentum methods to address the shortcomings of gradient descent-ascent, and contrast this with the empirical success of our proposed νPI controller. Moreover, we prove that νPI generalizes popular momentum methods for single-objective minimization. Our experiments demonstrate that νPI reliably stabilizes the multiplier dynamics and its hyperparameters enjoy robust and predictable behavior.
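To illustrate the PI-controller view of multiplier updates described in the abstract above, the toy loop below solves min (x - 2)^2 subject to x <= 1 with gradient descent on the primal variable and a PI update on the Lagrange multiplier driven by the constraint violation. The gains and the exact update form are illustrative and mirror the spirit of the approach rather than reproducing the νPI algorithm itself.

```python
# Gradient-descent / PI-multiplier-ascent loop for a toy constrained problem.
import numpy as np

def constrained_pi(steps=500, lr_x=0.05, k_p=0.5, k_i=0.5):
    x, lam, prev_violation = 0.0, 0.0, 0.0
    for _ in range(steps):
        violation = x - 1.0                        # constraint g(x) = x - 1 <= 0
        # PI update: the integral term accumulates violation, the proportional
        # term reacts to its change; projection keeps the multiplier nonnegative.
        lam = max(0.0, lam + k_i * violation + k_p * (violation - prev_violation))
        prev_violation = violation
        # Primal descent on the Lagrangian (x - 2)^2 + lam * (x - 1).
        grad_x = 2.0 * (x - 2.0) + lam
        x -= lr_x * grad_x
    return x, lam

if __name__ == "__main__":
    x_star, lam_star = constrained_pi()
    print(f"x* = {x_star:.3f} (expect 1.0), lambda* = {lam_star:.3f} (expect 2.0)")
```

Setting k_p to zero recovers plain gradient ascent on the multiplier, which oscillates more; the proportional term acts as damping, which is one intuition behind the PI perspective.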
https://proceedings.mlr.press/v235/soltani-moakhar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/soltani-moakhar24a/soltani-moakhar24a.pdf
https://openreview.net/forum?id=oBYv73nOoA
SPADE: Sparsity-Guided Debugging for Deep Neural Networks
https://proceedings.mlr.press/v235/soltani-moakhar24a.html
Arshia Soltani Moakhar, Eugenia Iofinova, Elias Frantar, Dan Alistarh
https://proceedings.mlr.press/v235/soltani-moakhar24a.html
ICML 2024
It is known that sparsity can improve interpretability for deep neural networks. However, existing methods in the area either require networks that are pre-trained with sparsity constraints, or impose sparsity after the fact, altering the network’s general behavior. In this paper, we demonstrate, for the first time, that sparsity can instead be incorporated into the interpretation process itself, as a sample-specific preprocessing step. Unlike previous work, this approach, which we call SPADE, does not place constraints on the trained model and does not affect its behavior during inference on the sample. Given a trained model and a target sample, SPADE uses sample-targeted pruning to provide a "trace" of the network’s execution on the sample, reducing the network to the most important connections prior to computing an interpretation. We demonstrate that preprocessing with SPADE significantly increases the accuracy of image saliency maps across several interpretability methods. Additionally, SPADE improves the usefulness of neuron visualizations, aiding humans in reasoning about network behavior. Our code is available at https://github.com/IST-DASLab/SPADE.
https://proceedings.mlr.press/v235/sommer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sommer24a/sommer24a.pdf
https://openreview.net/forum?id=tc3Nmcpmnx
Connecting the Dots: Is Mode-Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks?
https://proceedings.mlr.press/v235/sommer24a.html
Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer
https://proceedings.mlr.press/v235/sommer24a.html
ICML 2024
A major challenge in sample-based inference (SBI) for Bayesian neural networks is the size and structure of the networks’ parameter space. Our work shows that successful SBI is possible by embracing the characteristic relationship between weight and function space, uncovering a systematic link between overparameterization and the difficulty of the sampling problem. Through extensive experiments, we establish practical guidelines for sampling and convergence diagnosis. As a result, we present a deep ensemble initialized approach as an effective solution with competitive performance and uncertainty quantification.
https://proceedings.mlr.press/v235/song24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24a/song24a.pdf
https://openreview.net/forum?id=c6rVlTKpb5
Hybrid Reinforcement Learning from Offline Observation Alone
https://proceedings.mlr.press/v235/song24a.html
Yuda Song, Drew Bagnell, Aarti Singh
https://proceedings.mlr.press/v235/song24a.html
ICML 2024
We consider the hybrid reinforcement learning setting where the agent has access to both offline data and online interactive access. While RL research typically assumes offline data contains complete action, reward and transition information, datasets with only state information (also known as observation-only datasets) are more general, abundant and practical. This motivates our study of the hybrid RL with observation-only offline dataset framework. While the task of competing with the best policy “covered” by the offline data can be solved if a reset model of the environment is provided (i.e., one that can be reset to any state), we show evidence of hardness of competing when only given the weaker trace model (i.e., one can only reset to the initial states and must produce full traces through the environment), without further assumption of admissibility of the offline data. Under the admissibility assumptions– that the offline data could actually be produced by the policy class we consider– we propose the first algorithm in the trace model setting that provably matches the performance of algorithms that leverage a reset model. We also perform proof-of-concept experiments that suggest the effectiveness of our algorithm in practice.
https://proceedings.mlr.press/v235/song24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24b/song24b.pdf
https://openreview.net/forum?id=3MfvxH3Gia
Multimodal Prototyping for cancer survival prediction
https://proceedings.mlr.press/v235/song24b.html
Andrew H. Song, Richard J. Chen, Guillaume Jaume, Anurag Jayant Vaidya, Alexander Baras, Faisal Mahmood
https://proceedings.mlr.press/v235/song24b.html
ICML 2024
Multimodal survival methods combining gigapixel histology whole-slide images (WSIs) and transcriptomic profiles are particularly promising for patient prognostication and stratification. Current approaches involve tokenizing the WSIs into smaller patches ($>10^4$ patches) and transcriptomics into gene groups, which are then integrated using a Transformer for predicting outcomes. However, this process generates many tokens, which leads to high memory requirements for computing attention and complicates post-hoc interpretability analyses. Instead, we hypothesize that we can: (1) effectively summarize the morphological content of a WSI by condensing its constituting tokens using morphological prototypes, achieving more than $300\times$ compression; and (2) accurately characterize cellular functions by encoding the transcriptomic profile with biological pathway prototypes, all in an unsupervised fashion. The resulting multimodal tokens are then processed by a fusion network, either with a Transformer or an optimal transport cross-alignment, which now operates with a small and fixed number of tokens without approximations. Extensive evaluation on six cancer types shows that our framework outperforms state-of-the-art methods with much less computation while unlocking new interpretability analyses. The code is available at https://github.com/mahmoodlab/MMP.
https://proceedings.mlr.press/v235/song24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24c/song24c.pdf
https://openreview.net/forum?id=a8QpoEJCRI
SurfPro: Functional Protein Design Based on Continuous Surface
https://proceedings.mlr.press/v235/song24c.html
Zhenqiao Song, Tinglin Huang, Lei Li, Wengong Jin
https://proceedings.mlr.press/v235/song24c.html
ICML 2024
How can we design proteins with desired functions? We are motivated by a chemical intuition that both geometric structure and biochemical properties are critical to a protein’s function. In this paper, we propose SurfPro, a new method to generate functional proteins given a desired surface and its associated biochemical properties. SurfPro comprises a hierarchical encoder that progressively models the geometric shape and biochemical features of a protein surface, and an autoregressive decoder to produce an amino acid sequence. We evaluate SurfPro on a standard inverse folding benchmark CATH 4.2 and two functional protein design tasks: protein binder design and enzyme design. Our SurfPro consistently surpasses previous state-of-the-art inverse folding methods, achieving a recovery rate of 57.78% on CATH 4.2 and higher success rates in terms of protein-protein binding and enzyme-substrate interaction scores.
https://proceedings.mlr.press/v235/song24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24d/song24d.pdf
https://openreview.net/forum?id=TTYVG17wfc
OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos
https://proceedings.mlr.press/v235/song24d.html
Ziyang Song, Jinxi Li, Bo Yang
https://proceedings.mlr.press/v235/song24d.html
ICML 2024
It has long been challenging to recover the underlying dynamic 3D scene representations from a monocular RGB video. Existing works formulate this problem as finding a single most plausible solution by adding various constraints such as depth priors and strong geometry constraints, ignoring the fact that there could be infinitely many 3D scene representations corresponding to a single dynamic video. In this paper, we aim to learn all plausible 3D scene configurations that match the input video, instead of just inferring a specific one. To achieve this ambitious goal, we introduce a new framework, called OSN. The key to our approach is a simple yet innovative object scale network together with a joint optimization module to learn an accurate scale range for every dynamic 3D object. This allows us to sample as many faithful 3D scene configurations as possible. Extensive experiments show that our method surpasses all baselines and achieves superior accuracy in dynamic novel view synthesis on multiple synthetic and real-world datasets. Most notably, our method demonstrates a clear advantage in learning fine-grained 3D scene geometry.
https://proceedings.mlr.press/v235/song24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24e/song24e.pdf
https://openreview.net/forum?id=10hu2D3hAg
Sparse is Enough in Fine-tuning Pre-trained Large Language Models
https://proceedings.mlr.press/v235/song24e.html
Weixi Song, Zuchao Li, Lefei Zhang, Hai Zhao, Bo Du
https://proceedings.mlr.press/v235/song24e.html
ICML 2024
With the prevalence of the pre-training-fine-tuning paradigm, how to efficiently adapt the pre-trained model to downstream tasks has been an intriguing issue. $\textbf{P}$arameter-$\textbf{E}$fficient $\textbf{F}$ine-$\textbf{T}$uning (PEFT) methods have been proposed for low-cost adaptation. Although PEFT has demonstrated effectiveness and been widely applied, the underlying principles are still unclear. In this paper, we adopt the PAC-Bayesian generalization error bound, viewing pre-training as a shift of the prior distribution which leads to a tighter bound for generalization error. We validate this shift from the perspectives of oscillations in the loss landscape and the quasi-sparsity in gradient distribution. Based on this, we propose a gradient-based sparse fine-tuning algorithm, named $\textbf{S}$parse $\textbf{I}$ncrement $\textbf{F}$ine-$\textbf{T}$uning (SIFT), and validate its effectiveness on a range of tasks including the GLUE Benchmark and Instruction-tuning. The code is accessible at https://github.com/song-wx/SIFT/.
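The sketch below illustrates the general idea of gradient-based sparse fine-tuning, i.e., applying only the largest-magnitude gradient coordinates in each step. It is a minimal, assumption-laden illustration (global top-k thresholding, a generic `loss_fn`), not the authors' SIFT implementation.

```python
# Minimal sketch of sparse-increment fine-tuning: keep only the top-k gradient entries per step.
import torch

def sparse_finetune_step(model, loss_fn, batch, optimizer, keep_ratio=0.01):
    optimizer.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    # Pick a global magnitude threshold so that only a `keep_ratio` fraction of gradients survive.
    grads = torch.cat([p.grad.abs().flatten() for p in model.parameters() if p.grad is not None])
    k = max(1, int(keep_ratio * grads.numel()))
    threshold = torch.topk(grads, k).values.min()
    # Zero out all smaller gradients so the resulting parameter update is sparse.
    for p in model.parameters():
        if p.grad is not None:
            p.grad.mul_((p.grad.abs() >= threshold).float())
    optimizer.step()
    return loss.item()
```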
https://proceedings.mlr.press/v235/song24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24f/song24f.pdf
https://openreview.net/forum?id=fuX4hyLPmO
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks
https://proceedings.mlr.press/v235/song24f.html
Jiwon Song, Kyungseok Oh, Taesu Kim, Hyungjun Kim, Yulhwa Kim, Jae-Joon Kim
https://proceedings.mlr.press/v235/song24f.html
ICML 2024
Large language models (LLMs) have proven to be highly effective across various natural language processing tasks. However, their large number of parameters poses significant challenges for practical deployment. Pruning, a technique aimed at reducing the size and complexity of LLMs, offers a potential solution by removing redundant components from the network. Despite the promise of pruning, existing methods often struggle to achieve substantial end-to-end LLM inference speedup. In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks. We choose the transformer block as the fundamental unit for pruning, because LLMs exhibit block-level redundancy with high similarity between the outputs of neighboring blocks. This choice allows us to effectively enhance the processing speed of LLMs. Our experimental results demonstrate that SLEB outperforms previous LLM pruning methods in accelerating LLM inference while also maintaining superior perplexity and accuracy, making SLEB a promising technique for enhancing the efficiency of LLMs. The code is available at: https://github.com/jiwonsong-dev/SLEB.
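To make block-level redundancy concrete, the diagnostic below scores each transformer block by how little it changes its input, under the simplifying assumption that every block maps hidden states of shape (batch, tokens, dim) to the same shape. It is an illustrative measurement, not SLEB's block-selection algorithm.

```python
# Illustrative redundancy diagnostic: cosine similarity between each block's input and output.
import torch

@torch.no_grad()
def block_redundancy_scores(blocks, hidden_states):
    """blocks: iterable of modules mapping (B, T, D) -> (B, T, D); returns one score per block."""
    scores = []
    x = hidden_states
    for block in blocks:
        y = block(x)
        sim = torch.nn.functional.cosine_similarity(x.flatten(1), y.flatten(1), dim=1).mean()
        scores.append(sim.item())  # a score near 1 means the block barely changes the representation
        x = y
    return scores
```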
https://proceedings.mlr.press/v235/song24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24g/song24g.pdf
https://openreview.net/forum?id=5hfvLBgnNE
Unveiling the Dynamics of Information Interplay in Supervised Learning
https://proceedings.mlr.press/v235/song24g.html
Kun Song, Zhiquan Tan, Bochao Zou, Huimin Ma, Weiran Huang
https://proceedings.mlr.press/v235/song24g.html
ICML 2024
In this paper, we use matrix information theory as an analytical tool to study the dynamics of the information interplay between data representations and classification head vectors in the supervised learning process. Specifically, inspired by the theory of Neural Collapse, we introduce the matrix mutual information ratio (MIR) and matrix entropy difference ratio (HDR) to assess the interactions between data representations and classification heads in supervised learning, and we determine the theoretical optimal values for MIR and HDR when Neural Collapse happens. Our experiments show that MIR and HDR can effectively explain many phenomena occurring in neural networks, for example, the standard supervised training dynamics, linear mode connectivity, and the performance of label smoothing and pruning. Additionally, we use MIR and HDR to gain insights into the dynamics of grokking, an intriguing phenomenon observed in supervised training where the model demonstrates generalization capabilities long after it has learned to fit the training data. Furthermore, we introduce MIR and HDR as loss terms in supervised and semi-supervised learning to optimize the information interactions among samples and classification heads. The empirical results provide evidence of the method’s effectiveness, demonstrating that the utilization of MIR and HDR not only aids in comprehending the dynamics throughout the training process but can also enhance the training procedure itself.
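For readers unfamiliar with matrix information quantities, one standard matrix-based entropy that analyses of this kind typically build on is written out below; the paper's exact MIR and HDR definitions may normalize or combine these quantities differently, so this is only an orienting sketch.

```latex
% A common matrix-based entropy and a mutual-information-style quantity (orienting sketch only).
\[
  \mathrm{H}(K) = -\operatorname{tr}\!\big(\bar{K}\log\bar{K}\big),
  \qquad \bar{K} = \frac{K}{\operatorname{tr}(K)},
\]
\[
  \mathrm{MI}(K_Z, K_W) = \mathrm{H}(K_Z) + \mathrm{H}(K_W) - \mathrm{H}(K_Z \odot K_W),
\]
% where $K_Z$ and $K_W$ are Gram matrices built from the data representations and the
% classification-head vectors, and $\odot$ is the Hadamard product. A ratio such as
% $\mathrm{MI}(K_Z, K_W)/\mathrm{H}(K_Z)$ then quantifies how strongly the two align.
```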
https://proceedings.mlr.press/v235/song24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24h/song24h.pdf
https://openreview.net/forum?id=ea2MgKn3sV
Position: Leverage Foundational Models for Black-Box Optimization
https://proceedings.mlr.press/v235/song24h.html
Xingyou Song, Yingtao Tian, Robert Tjarko Lange, Chansoo Lee, Yujin Tang, Yutian Chen
https://proceedings.mlr.press/v235/song24h.html
ICML 2024
Undeniably, Large Language Models (LLMs) have stirred an extraordinary wave of innovation in the machine learning research domain, resulting in substantial impact across diverse fields such as reinforcement learning, robotics, and computer vision. Their incorporation has been rapid and transformative, marking a significant paradigm shift in the field of machine learning research. However, the field of experimental design, grounded on black-box optimization, has been much less affected by such a paradigm shift, even though integrating LLMs with optimization presents a unique landscape ripe for exploration. In this position paper, we frame the field of black-box optimization around sequence-based foundation models and organize their relationship with previous literature. We discuss the most promising ways foundational language models can revolutionize optimization, which include harnessing the vast wealth of information encapsulated in free-form text to enrich task comprehension, utilizing highly flexible sequence models such as Transformers to engineer superior optimization strategies, and enhancing performance prediction over previously unseen search spaces.
https://proceedings.mlr.press/v235/song24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24i/song24i.pdf
https://openreview.net/forum?id=Gq1ajaKhBC
Rich-Observation Reinforcement Learning with Continuous Latent Dynamics
https://proceedings.mlr.press/v235/song24i.html
Yuda Song, Lili Wu, Dylan J Foster, Akshay Krishnamurthy
https://proceedings.mlr.press/v235/song24i.html
ICML 2024
Sample-efficiency and reliability remain major bottlenecks toward wide adoption of reinforcement learning algorithms in continuous settings with high-dimensional perceptual inputs. Toward addressing these challenges, we introduce a new theoretical framework, RichCLD (“Rich-Observation RL with Continuous Latent Dynamics”), in which the agent performs control based on high-dimensional observations, but the environment is governed by low-dimensional latent states and Lipschitz continuous dynamics. Our main contribution is a new algorithm for this setting that is provably statistically and computationally efficient. The core of our algorithm is a new representation learning objective; we show that prior representation learning schemes tailored to discrete dynamics do not naturally extend to the continuous setting. Our new objective is amenable to practical implementation, and empirically, we find that it compares favorably to prior schemes in a standard evaluation protocol. We further provide several insights into the statistical complexity of the RichCLD framework, in particular proving that certain notions of Lipschitzness that admit sample-efficient learning in the absence of rich observations are insufficient in the rich-observation setting.
https://proceedings.mlr.press/v235/song24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24j/song24j.pdf
https://openreview.net/forum?id=pwfcwEqdUz
Latent Logic Tree Extraction for Event Sequence Explanation from LLMs
https://proceedings.mlr.press/v235/song24j.html
Zitao Song, Chao Yang, Chaojie Wang, Bo An, Shuang Li
https://proceedings.mlr.press/v235/song24j.html
ICML 2024
Modern high-stakes systems, such as healthcare or robotics, often generate vast streaming event sequences. Our goal is to design an efficient, plug-and-play tool to elicit logic tree-based explanations from Large Language Models (LLMs) to provide customized insights into each observed event sequence. Built on the temporal point process model for events, our method employs the likelihood function as a score to evaluate generated logic trees. We propose an amortized Expectation-Maximization (EM) learning framework and treat the logic trees as latent variables. In the E-step, we evaluate the posterior distribution over the latent logic trees using an LLM prior and the likelihood of the observed event sequences. The LLM provides a high-quality prior for the latent logic trees; however, since the posterior is defined over a discrete combinatorial space, a closed-form solution is not available. We propose to generate logic tree samples from the posterior using a learnable GFlowNet, which is a diversity-seeking generator for structured discrete variables. The M-step employs the generated logic rules to approximate marginalization over the posterior, facilitating the learning of model parameters and refining the tunable LLM prior parameters. In the online setting, our locally built, lightweight model will iteratively extract the most relevant rules from LLMs for each sequence using only a few iterations. Empirical demonstrations showcase the promising performance and adaptability of our framework.
https://proceedings.mlr.press/v235/song24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24k/song24k.pdf
https://openreview.net/forum?id=ATvN9JnqZ8
Generative Enzyme Design Guided by Functionally Important Sites and Small-Molecule Substrates
https://proceedings.mlr.press/v235/song24k.html
Zhenqiao Song, Yunlong Zhao, Wenxian Shi, Wengong Jin, Yang Yang, Lei Li
https://proceedings.mlr.press/v235/song24k.html
ICML 2024
Enzymes are genetically encoded biocatalysts capable of accelerating chemical reactions. How can we automatically design functional enzymes? In this paper, we propose EnzyGen, an approach to learn a unified model to design enzymes across all functional families. Our key idea is to generate an enzyme’s amino acid sequence and their three-dimensional (3D) coordinates based on functionally important sites and substrates corresponding to a desired catalytic function. These sites are automatically mined from enzyme databases. EnzyGen consists of a novel interleaving network of attention and neighborhood equivariant layers, which captures both long-range correlation in an entire protein sequence and local influence from nearest amino acids in 3D space. To learn the generative model, we devise a joint training objective, including a sequence generation loss, a position prediction loss and an enzyme-substrate interaction loss. We further construct EnzyBench, a dataset with 3157 enzyme families, covering all available enzymes within the protein data bank (PDB). Experimental results show that our EnzyGen consistently achieves the best performance across all 323 testing families, surpassing the best baseline by 10.79% in terms of substrate binding affinity. These findings demonstrate EnzyGen’s superior capability in designing well-folded and effective enzymes binding to specific substrates with high affinities. Our code, model and dataset are provided at https://github.com/LeiLiLab/EnzyGen.
https://proceedings.mlr.press/v235/sorensen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sorensen24a/sorensen24a.pdf
https://openreview.net/forum?id=gQpBnRHwxM
Position: A Roadmap to Pluralistic Alignment
https://proceedings.mlr.press/v235/sorensen24a.html
Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi
https://proceedings.mlr.press/v235/sorensen24a.html
ICML 2024
With increased power and prevalence of AI systems, it is ever more critical that AI systems are designed to serve all, i.e., people with diverse values and perspectives. However, aligning models to serve pluralistic human values remains an open research question. In this piece, we propose a roadmap to pluralistic alignment, specifically using large language models as a test bed. We identify and formalize three possible ways to define and operationalize pluralism in AI systems: 1) Overton pluralistic models that present a spectrum of reasonable responses; 2) Steerably pluralistic models that can steer to reflect certain perspectives; and 3) Distributionally pluralistic models that are well-calibrated to a given population in distribution. We also formalize and discuss three possible classes of pluralistic benchmarks: 1) Multi-objective benchmarks, 2) Trade-off steerable benchmarks that incentivize models to steer to arbitrary trade-offs, and 3) Jury-pluralistic benchmarks that explicitly model diverse human ratings. We use this framework to argue that current alignment techniques may be fundamentally limited for pluralistic AI; indeed, we highlight empirical evidence, both from our own experiments and from other work, that standard alignment procedures might reduce distributional pluralism in models, motivating the need for further research on pluralistic alignment.
https://proceedings.mlr.press/v235/spangher24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/spangher24a/spangher24a.pdf
https://openreview.net/forum?id=arwP5FA2dO
Position: Opportunities Exist for Machine Learning in Magnetic Fusion Energy
https://proceedings.mlr.press/v235/spangher24a.html
Lucas Spangher, Allen M. Wang, Andrew Maris, Myles Stapelberg, Viraj Mehta, Alex Saperstein, Stephen Lane-Walsh, Akshata Kishore Moharir, Alessandro Pau, Cristina Rea
https://proceedings.mlr.press/v235/spangher24a.html
ICML 2024
Magnetic confinement fusion may one day provide reliable, carbon-free energy, but the field currently faces technical hurdles. In this position paper, we highlight six key research challenges in the field of fusion energy that we believe should be research priorities for the Machine Learning (ML) community because they are especially ripe for ML applications: (1) disruption prediction, (2) simulation and dynamics modeling, (3) resolving partially observed data, (4) improving controls, (5) guiding experiments with optimal design, and (6) enhancing materials discovery. For each problem, we give background, review past ML work, suggest features of future models, and list challenges and idiosyncrasies facing ML development. We also discuss ongoing efforts to update the fusion data ecosystem and identify opportunities further down the line that will be enabled as fusion and its data infrastructure advance. It is our position that fusion energy offers especially exciting opportunities for ML practitioners to impact decarbonization and the future of energy.
https://proceedings.mlr.press/v235/springenberg24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/springenberg24a/springenberg24a.pdf
https://openreview.net/forum?id=tl2qmO5kpD
Offline Actor-Critic Reinforcement Learning Scales to Large Models
https://proceedings.mlr.press/v235/springenberg24a.html
Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Maria Elisabeth Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller
https://proceedings.mlr.press/v235/springenberg24a.html
ICML 2024
We show that offline actor-critic reinforcement learning can scale to large models - such as transformers - and follows similar scaling laws as supervised learning. We find that offline actor-critic algorithms can outperform strong, supervised, behavioral cloning baselines for multi-task training on a large dataset containing both sub-optimal and expert behavior on 132 continuous control tasks. We introduce a Perceiver-based actor-critic model and elucidate the key features needed to make offline RL work with self- and cross-attention modules. Overall, we find that: i) simple offline actor-critic algorithms are a natural choice for gradually moving away from the currently predominant paradigm of behavioral cloning, and ii) via offline RL it is possible to learn multi-task policies that master many domains simultaneously, including real robotics tasks, from sub-optimal demonstrations or self-generated data.
https://proceedings.mlr.press/v235/sprueill24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sprueill24a/sprueill24a.pdf
https://openreview.net/forum?id=3tJDnEszco
CHEMREASONER: Heuristic Search over a Large Language Model’s Knowledge Space using Quantum-Chemical Feedback
https://proceedings.mlr.press/v235/sprueill24a.html
Henry W. Sprueill, Carl Edwards, Khushbu Agarwal, Mariefel V Olarte, Udishnu Sanyal, Conrad Johnston, Hongbin Liu, Heng Ji, Sutanay Choudhury
https://proceedings.mlr.press/v235/sprueill24a.html
ICML 2024
The discovery of new catalysts is essential for the design of new and more efficient chemical processes in order to transition to a sustainable future. We introduce an AI-guided computational screening framework unifying linguistic reasoning with quantum-chemistry based feedback from 3D atomistic representations. Our approach formulates catalyst discovery as an uncertain environment where an agent actively searches for highly effective catalysts via the iterative combination of large language model (LLM)-derived hypotheses and atomistic graph neural network (GNN)-derived feedback. Identified catalysts in intermediate search steps undergo structural evaluation based on spatial orientation, reaction pathways, and stability. Scoring functions based on adsorption energies and reaction energy barriers steer the exploration in the LLM’s knowledge space toward energetically favorable, high-efficiency catalysts. We introduce planning methods that automatically guide the exploration without human input, providing competitive performance against expert-enumerated chemical descriptor-based implementations. By integrating language-guided reasoning with computational chemistry feedback, our work pioneers AI-accelerated, trustworthy catalyst discovery.
https://proceedings.mlr.press/v235/sristi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/sristi24a/sristi24a.pdf
https://openreview.net/forum?id=E6Nm3x7acv
Contextual Feature Selection with Conditional Stochastic Gates
https://proceedings.mlr.press/v235/sristi24a.html
Ram Dyuthi Sristi, Ofir Lindenbaum, Shira Lifshitz, Maria Lavzin, Jackie Schiller, Gal Mishne, Hadas Benisty
https://proceedings.mlr.press/v235/sristi24a.html
ICML 2024
Feature selection is a crucial tool in machine learning and is widely applied across various scientific disciplines. Traditional supervised methods generally identify a universal set of informative features for the entire population. However, feature relevance often varies with context, while the context itself may not directly affect the outcome variable. Here, we propose a novel architecture for contextual feature selection where the subset of selected features is conditioned on the value of context variables. Our new approach, Conditional Stochastic Gates (c-STG), models the importance of features using conditional Bernoulli variables whose parameters are predicted based on contextual variables. We introduce a hypernetwork that maps context variables to feature selection parameters to learn the context-dependent gates along with a prediction model. We further present a theoretical analysis of our model, indicating that it can improve performance and flexibility over population-level methods in complex feature selection settings. Finally, we conduct an extensive benchmark using simulated and real-world datasets across multiple domains demonstrating that c-STG can lead to improved feature selection capabilities while enhancing prediction accuracy and interpretability.
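A minimal sketch of contextual gating in this spirit is given below, with a simple hypernetwork and a relaxed sigmoid gate standing in for the conditional stochastic (Bernoulli) gates; the module names and sizes are illustrative assumptions, not the authors' c-STG code.

```python
# Illustrative contextual feature gating: a hypernetwork maps context to per-feature gates.
import torch
import torch.nn as nn

class ContextualGates(nn.Module):
    def __init__(self, context_dim: int, num_features: int, hidden: int = 64):
        super().__init__()
        self.hyper = nn.Sequential(             # hypernetwork: context -> gate logits
            nn.Linear(context_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_features)
        )
        self.head = nn.Linear(num_features, 1)  # prediction model on the gated features

    def forward(self, x, context):
        gates = torch.sigmoid(self.hyper(context))  # relaxed stand-in for stochastic Bernoulli gates
        return self.head(x * gates), gates

# Sparsity of the selection can be encouraged by adding, e.g., gates.mean() to the training loss.
```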
https://proceedings.mlr.press/v235/stadler24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stadler24a/stadler24a.pdf
https://openreview.net/forum?id=ZGEICuuUJo
The Fundamental Limits of Least-Privilege Learning
https://proceedings.mlr.press/v235/stadler24a.html
Theresa Stadler, Bogdan Kulynych, Michael Gastpar, Nicolas Papernot, Carmela Troncoso
https://proceedings.mlr.press/v235/stadler24a.html
ICML 2024
The promise of least-privilege learning – to find feature representations that are useful for a learning task but prevent inference of any sensitive information unrelated to this task – is highly appealing. However, so far this concept has only been stated informally. It thus remains an open question whether and how we can achieve this goal. In this work, we provide the first formalisation of the least-privilege principle for machine learning and characterise its feasibility. We prove that there is a fundamental trade-off between a representation’s utility for a given task and its leakage beyond the intended task: it is not possible to learn representations that have high utility for the intended task but, at the same time, prevent inference of any attribute other than the task label itself. This trade-off holds regardless of the technique used to learn the feature mappings that produce these representations. We empirically validate this result for a wide range of learning techniques, model architectures, and datasets.
https://proceedings.mlr.press/v235/stanczuk24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stanczuk24a/stanczuk24a.pdf
https://openreview.net/forum?id=a0XiA6v256
Diffusion Models Encode the Intrinsic Dimension of Data Manifolds
https://proceedings.mlr.press/v235/stanczuk24a.html
Jan Pawel Stanczuk, Georgios Batzolis, Teo Deveney, Carola-Bibiane Schönlieb
https://proceedings.mlr.press/v235/stanczuk24a.html
ICML 2024
In this work, we provide a mathematical proof that diffusion models encode data manifolds by approximating their normal bundles. Based on this observation, we propose a novel method for extracting the intrinsic dimension of the data manifold from a trained diffusion model. Our insights are based on the fact that a diffusion model approximates the score function, i.e., the gradient of the log density of a noise-corrupted version of the target distribution for varying levels of corruption. We prove that as the level of corruption decreases, the score function points towards the manifold, as this direction becomes the direction of maximal likelihood increase. Therefore, at low noise levels, the diffusion model provides us with an approximation of the manifold’s normal bundle, allowing for an estimation of the manifold’s intrinsic dimension. To the best of our knowledge, our method is the first estimator of intrinsic dimension based on diffusion models and it outperforms well-established estimators in controlled experiments on both Euclidean and image data.
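A simplified numerical illustration of this idea is sketched below: at low noise, scores sampled around a point approximately span the manifold's normal space, so the ambient dimension minus the effective rank of a matrix of score samples gives an intrinsic-dimension estimate. The sampling scheme, the `score_fn` interface, and the energy threshold are assumptions for illustration, not the paper's estimator.

```python
# Simplified intrinsic-dimension estimate from score samples at low noise (illustration only).
import numpy as np

def estimate_intrinsic_dim(score_fn, x, sigma=1e-2, n_samples=256, energy=0.99):
    """score_fn(x_noisy, sigma) -> score vector; x is a point near the data manifold."""
    d = x.shape[-1]
    scores = np.stack([
        score_fn(x + sigma * np.random.randn(d), sigma) for _ in range(n_samples)
    ])
    # Singular values reveal how many directions the scores occupy (the approximate normal space).
    s = np.linalg.svd(scores - scores.mean(axis=0), compute_uv=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    normal_dim = int(np.searchsorted(cumulative, energy)) + 1
    return d - normal_dim
```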
https://proceedings.mlr.press/v235/stander24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stander24a/stander24a.pdf
https://openreview.net/forum?id=hcQfTsVnBo
Grokking Group Multiplication with Cosets
https://proceedings.mlr.press/v235/stander24a.html
Dashiell Stander, Qinan Yu, Honglu Fan, Stella Biderman
https://proceedings.mlr.press/v235/stander24a.html
ICML 2024
The complex and unpredictable nature of deep neural networks prevents their safe use in many high-stakes applications. There have been many techniques developed to interpret deep neural networks, but all have substantial limitations. Algorithmic tasks have proven to be a fruitful test ground for interpreting a neural network end-to-end. Building on previous work, we completely reverse engineer fully connected one-hidden layer networks that have “grokked” the arithmetic of the permutation groups $S_5$ and $S_6$. The models discover the true subgroup structure of the full group and converge on neural circuits that decompose the group arithmetic using the permutation group’s subgroups. We relate how we reverse engineered the model’s mechanisms and confirmed our theory was a faithful description of the circuit’s functionality. We also draw attention to current challenges in conducting interpretability research by comparing our work to Chughtai et al. (2023) which alleges to find a different algorithm for this same problem.
https://proceedings.mlr.press/v235/stark24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stark24a/stark24a.pdf
https://openreview.net/forum?id=XTrMY9sHKF
Harmonic Self-Conditioned Flow Matching for joint Multi-Ligand Docking and Binding Site Design
https://proceedings.mlr.press/v235/stark24a.html
Hannes Stark, Bowen Jing, Regina Barzilay, Tommi Jaakkola
https://proceedings.mlr.press/v235/stark24a.html
ICML 2024
A significant amount of protein function requires binding small molecules, including enzymatic catalysis. As such, designing binding pockets for small molecules has several impactful applications ranging from drug synthesis to energy storage. Towards this goal, we first develop HarmonicFlow, an improved generative process over 3D protein-ligand binding structures based on our self-conditioned flow matching objective. FlowSite extends this flow model to jointly generate a protein pocket’s discrete residue types and the molecule’s binding 3D structure. We show that HarmonicFlow improves upon state-of-the-art generative processes for docking in simplicity, generality, and average sample quality in pocket-level docking. Enabled by this structure modeling, FlowSite designs binding sites substantially better than baseline approaches.
https://proceedings.mlr.press/v235/stark24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stark24b/stark24b.pdf
https://openreview.net/forum?id=syXFAVqx85
Dirichlet Flow Matching with Applications to DNA Sequence Design
https://proceedings.mlr.press/v235/stark24b.html
Hannes Stark, Bowen Jing, Chenyu Wang, Gabriele Corso, Bonnie Berger, Regina Barzilay, Tommi Jaakkola
https://proceedings.mlr.press/v235/stark24b.html
ICML 2024
Discrete diffusion or flow models could enable faster and more controllable sequence generation than autoregressive models. We show that naive linear flow matching on the simplex is insufficient toward this goal since it suffers from discontinuities in the training target and further pathologies. To overcome this, we develop Dirichlet flow matching on the simplex based on mixtures of Dirichlet distributions as probability paths. In this framework, we derive a connection between the mixtures’ scores and the flow’s vector field that allows for classifier and classifier-free guidance. Further, we provide distilled Dirichlet flow matching, which enables one-step sequence generation with minimal performance hits, resulting in $O(L)$ speedups compared to autoregressive models. On complex DNA sequence generation tasks, we demonstrate superior performance compared to all baselines in distributional metrics and in achieving desired design targets for generated sequences. Finally, we show that our classifier-free guidance approach improves unconditional generation and is effective for generating DNA that satisfies design targets.
https://proceedings.mlr.press/v235/stein24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stein24a/stein24a.pdf
https://openreview.net/forum?id=vYYIuJDTHq
Partial Optimality in the Linear Ordering Problem
https://proceedings.mlr.press/v235/stein24a.html
David Stein, Bjoern Andres
https://proceedings.mlr.press/v235/stein24a.html
ICML 2024
The linear ordering problem consists in finding a linear order $<$ on a finite set $A$ so as to minimize the sum of costs associated with pairs of elements $a, b$ for which $a < b$. The problem is NP-hard and APX-hard. We introduce algorithms for solving the problem partially by deciding efficiently for some pairs $(a,b)$ whether $a < b$ is in an optimal solution. To do so, we construct maps from the feasible set of orders to itself and establish efficiently testable conditions on the cost function of the problem for which these maps are improving. We examine the effectiveness and efficiency of these conditions and algorithms empirically, on two data sets.
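Restating the abstract's objective in symbols: the task is to pick a linear order on $A$ minimizing the total cost of the ordered pairs it induces, and partial optimality amounts to certifying, for some pairs, the relation they take in some optimal order without solving the full problem.

```latex
% The linear ordering objective as stated in the abstract.
\[
  \min_{<\,\in\,\mathcal{O}(A)} \;\sum_{\substack{a,b \in A \\ a < b}} c(a,b),
\]
% where $\mathcal{O}(A)$ denotes the set of linear orders on the finite set $A$
% and $c(a,b)$ is the cost incurred when $a$ precedes $b$.
```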
https://proceedings.mlr.press/v235/stein24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stein24b/stein24b.pdf
https://openreview.net/forum?id=upO8FUwf92
Towards Compositionality in Concept Learning
https://proceedings.mlr.press/v235/stein24b.html
Adam Stein, Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong
https://proceedings.mlr.press/v235/stein24b.html
ICML 2024
Concept-based interpretability methods offer a lens into the internals of foundation models by decomposing their embeddings into high-level concepts. These concept representations are most useful when they are compositional, meaning that the individual concepts compose to explain the full sample. We show that existing unsupervised concept extraction methods find concepts which are not compositional. To automatically discover compositional concept representations, we identify two salient properties of such representations, and propose Compositional Concept Extraction (CCE) for finding concepts which obey these properties. We evaluate CCE on five different datasets over image and text data. Our evaluation shows that CCE finds more compositional concept representations than baselines and yields better accuracy on four downstream classification tasks.
https://proceedings.mlr.press/v235/steinmann24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/steinmann24a/steinmann24a.pdf
https://openreview.net/forum?id=gEbl6XNLK6
Learning to Intervene on Concept Bottlenecks
https://proceedings.mlr.press/v235/steinmann24a.html
David Steinmann, Wolfgang Stammer, Felix Friedrich, Kristian Kersting
https://proceedings.mlr.press/v235/steinmann24a.html
ICML 2024
While deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Moreover, they allow users to perform interventional interactions on these concepts by updating the concept values and thus correcting the predictive output of the model. Up to this point, these interventions were typically applied to the model just once and then discarded. To rectify this, we present concept bottleneck memory models (CB2Ms), which keep a memory of past interventions. Specifically, CB2Ms leverage a two-fold memory to generalize interventions to appropriate novel situations, enabling the model to identify errors and reapply previous interventions. This way, a CB2M learns to automatically improve model performance from a few initially obtained interventions. If no prior human interventions are available, a CB2M can detect potential mistakes of the CBM bottleneck and request targeted interventions. Our experimental evaluations on challenging scenarios like handling distribution shifts and confounded data demonstrate that CB2Ms are able to successfully generalize interventions to unseen data and can indeed identify wrongly inferred concepts. Hence, CB2Ms are a valuable tool for users to provide interactive feedback on CBMs, by guiding a user’s interaction and requiring fewer interventions.
https://proceedings.mlr.press/v235/stella24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stella24a/stella24a.pdf
https://openreview.net/forum?id=pAzDdYzEva
QORA: Zero-Shot Transfer via Interpretable Object-Relational Model Learning
https://proceedings.mlr.press/v235/stella24a.html
Gabriel Stella, Dmitri Loguinov
https://proceedings.mlr.press/v235/stella24a.html
ICML 2024
Although neural networks have demonstrated significant success in various reinforcement-learning tasks, even the highest-performing deep models often fail to generalize. As an alternative, object-oriented approaches offer a promising path towards better efficiency and generalization; however, they typically address narrow problem classes and require extensive domain knowledge. To overcome these limitations, we introduce QORA, an algorithm that constructs models expressive enough to solve a variety of domains, including those with stochastic transition functions, directly from a domain-agnostic object-based state representation. We also provide a novel benchmark suite to evaluate learners’ generalization capabilities. In our test domains, QORA achieves 100% predictive accuracy using almost four orders of magnitude fewer observations than a neural-network baseline, demonstrates zero-shot transfer to modified environments, and adapts rapidly when applied to tasks involving previously unseen object interactions. Finally, we give examples of QORA’s learned rules, showing them to be easily interpretable.
https://proceedings.mlr.press/v235/stemmer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stemmer24a/stemmer24a.pdf
https://openreview.net/forum?id=BdQTCAuT6L
Private Truly-Everlasting Robust-Prediction
https://proceedings.mlr.press/v235/stemmer24a.html
Uri Stemmer
https://proceedings.mlr.press/v235/stemmer24a.html
ICML 2024
Private everlasting prediction (PEP), recently introduced by Naor et al. [2023], is a model for differentially private learning in which the learner never publicly releases a hypothesis. Instead, it provides black-box access to a "prediction oracle" that can predict the labels of an endless stream of unlabeled examples drawn from the underlying distribution. Importantly, PEP provides privacy both for the initial training set and for the endless stream of classification queries. We present two conceptual modifications to the definition of PEP, as well as new constructions exhibiting significant improvements over prior work. Specifically, we incorporate robustness against poisoning attacks into the definition of PEP; we present a relaxed privacy definition, suitable for PEP, that allows us to disconnect the privacy parameter $\delta$ from the number of total time steps $T$; and we present new constructions for axis-aligned rectangles and decision-stumps exhibiting improved sample complexity and runtime.
https://proceedings.mlr.press/v235/stengel-eskin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stengel-eskin24a/stengel-eskin24a.pdf
https://openreview.net/forum?id=FovMAzXUpj
ReGAL: Refactoring Programs to Discover Generalizable Abstractions
https://proceedings.mlr.press/v235/stengel-eskin24a.html
Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal
https://proceedings.mlr.press/v235/stengel-eskin24a.html
ICML 2024
While large language models (LLMs) are increasingly being used for program synthesis, they lack the global view needed to develop useful abstractions; they generally predict programs one at a time, often repeating the same functionality. Generating redundant code from scratch is both inefficient and error-prone. To address this, we propose Refactoring for Generalizable Abstraction Learning (ReGAL), a gradient-free method for learning a library of reusable functions via code refactorization, i.e., restructuring code without changing its execution output. ReGAL learns from a small set of existing programs, iteratively verifying and refining its abstractions via execution. We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains. On five datasets – LOGO graphics generation, Date reasoning, TextCraft (a Minecraft-based text game), MATH, and TabMWP – both open-source and proprietary LLMs improve in accuracy when predicting programs with ReGAL functions. For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains. Our analysis reveals that ReGAL’s abstractions encapsulate frequently-used subroutines as well as environment dynamics.
https://proceedings.mlr.press/v235/stephan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stephan24a/stephan24a.pdf
https://openreview.net/forum?id=ngcZhfXCBW
RLVF: Learning from Verbal Feedback without Overgeneralization
https://proceedings.mlr.press/v235/stephan24a.html
Moritz Pascal Stephan, Alexander Khazatsky, Eric Mitchell, Annie S Chen, Sheryl Hsu, Archit Sharma, Chelsea Finn
https://proceedings.mlr.press/v235/stephan24a.html
ICML 2024
The diversity of contexts in which large language models (LLMs) are deployed requires the ability to modify or customize default model behaviors to incorporate nuanced requirements and preferences. A convenient interface to specify such model adjustments is high-level verbal feedback, such as “Don’t use emojis when drafting emails to my boss.” However, while writing high-level feedback is far simpler than collecting annotations for reinforcement learning from human feedback (RLHF), we find that simply prompting a model with such feedback leads to $\textbf{overgeneralization}$–applying feedback in contexts where it is not relevant. We propose a new method Contextualized Critiques with Constrained Preference Optimization (C3PO) to learn from high-level verbal feedback while reducing overgeneralization compared to current work. C3PO uses a piece of high-level feedback to generate a small synthetic preference dataset to specify when and how the feedback should (and should not) be applied. It then fine-tunes the model in accordance with the synthetic preference data while minimizing the divergence from the original model for prompts where the feedback does not apply. Our experimental results indicate that our approach effectively applies verbal feedback to relevant scenarios while preserving existing behaviors for other contexts more than current methods. For both human- and GPT-4-generated high-level feedback, C3PO effectively adheres to the given feedback comparably to in-context baselines while reducing overgeneralization by 30%.
https://proceedings.mlr.press/v235/stoica24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stoica24a/stoica24a.pdf
https://openreview.net/forum?id=HZ6lrZzB02
Causal Inference from Competing Treatments
https://proceedings.mlr.press/v235/stoica24a.html
Ana-Andreea Stoica, Vivian Yvonne Nastl, Moritz Hardt
https://proceedings.mlr.press/v235/stoica24a.html
ICML 2024
Many applications of RCTs involve the presence of multiple treatment administrators—from field experiments to online advertising—that compete for the subjects’ attention. In the face of competition, estimating a causal effect becomes difficult, as the position at which a subject sees a treatment influences their response, and thus the treatment effect. In this paper, we build a game-theoretic model of agents who wish to estimate causal effects in the presence of competition, through a bidding system and a utility function that minimizes estimation error. Our main technical result establishes an approximation with a tractable objective that maximizes the sample value obtained through strategically allocating budget on subjects. This allows us to find an equilibrium in our model: we show that the tractable objective has a pure Nash equilibrium, and that any Nash equilibrium is an approximate equilibrium for our general objective that minimizes estimation error under broad conditions. Conceptually, our work successfully combines elements from causal inference and game theory to shed light on the equilibrium behavior of experimentation under competition.
https://proceedings.mlr.press/v235/stradi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/stradi24a/stradi24a.pdf
https://openreview.net/forum?id=Qv5szC1zp7
Online Learning in CMDPs: Handling Stochastic and Adversarial Constraints
https://proceedings.mlr.press/v235/stradi24a.html
Francesco Emanuele Stradi, Jacopo Germano, Gianmarco Genalti, Matteo Castiglioni, Alberto Marchesi, Nicola Gatti
https://proceedings.mlr.press/v235/stradi24a.html
ICML 2024
We study online learning in episodic constrained Markov decision processes (CMDPs), where the learner aims at collecting as much reward as possible over the episodes, while satisfying some long-term constraints during the learning process. Rewards and constraints can be selected either stochastically or adversarially, and the transition function is not known to the learner. While online learning in classical (unconstrained) MDPs has received considerable attention over the last years, the setting of CMDPs is still largely unexplored. This is surprising, since in real-world applications, such as autonomous driving, automated bidding, and recommender systems, there are usually additional constraints and specifications that an agent has to obey during the learning process. In this paper, we provide the first best-of-both-worlds algorithm for CMDPs with long-term constraints, in the flavor of Balseiro et al. (2023). Our algorithm is capable of handling settings in which rewards and constraints are selected either stochastically or adversarially, without requiring any knowledge of the underlying process. Moreover, our algorithm matches state-of-the-art regret and constraint violation bounds for settings in which constraints are selected stochastically, while it is the first to provide guarantees in the case in which they are chosen adversarially.