abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v235/malagon24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/malagon24a/malagon24a.pdf
|
https://openreview.net/forum?id=f5gtX2VWSB
|
Self-Composing Policies for Scalable Continual Reinforcement Learning
|
https://proceedings.mlr.press/v235/malagon24a.html
|
Mikel Malagon, Josu Ceberio, Jose A. Lozano
|
https://proceedings.mlr.press/v235/malagon24a.html
|
ICML 2024
|
This work introduces a growable and modular neural network architecture that naturally avoids catastrophic forgetting and interference in continual reinforcement learning. The structure of each module allows the selective combination of previous policies along with its internal policy, accelerating the learning process on the current task. Unlike previous growing neural network approaches, we show that the number of parameters of the proposed approach grows linearly with respect to the number of tasks, and does not sacrifice plasticity to scale. Experiments conducted in benchmark continuous control and visual problems reveal that the proposed approach achieves greater knowledge transfer and performance than alternative methods.
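A minimal sketch of the composition idea described above: a new task's module mixes the outputs of frozen previous-task policies with its own internal policy through state-dependent weights. The class name, layer sizes, and mixing mechanism are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ComposingModule(nn.Module):
    """Sketch of a per-task module that selectively combines frozen previous-task
    policies with its own internal policy via state-dependent mixing weights.
    Names, layer sizes, and the mixing mechanism are illustrative assumptions."""

    def __init__(self, obs_dim, act_dim, prev_policies):
        super().__init__()
        self.prev_policies = nn.ModuleList(prev_policies)
        for p in self.prev_policies:
            p.requires_grad_(False)  # freeze earlier policies: no forgetting or interference
        self.internal = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
        )
        # one mixing weight per candidate policy (all previous modules plus the internal one)
        self.mix = nn.Linear(obs_dim, len(prev_policies) + 1)

    def forward(self, obs):
        candidates = [p(obs) for p in self.prev_policies] + [self.internal(obs)]
        weights = torch.softmax(self.mix(obs), dim=-1)        # (batch, k + 1)
        stacked = torch.stack(candidates, dim=-1)             # (batch, act_dim, k + 1)
        return (stacked * weights.unsqueeze(1)).sum(dim=-1)   # composed action
```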
|
https://proceedings.mlr.press/v235/malekmohammadi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/malekmohammadi24a/malekmohammadi24a.pdf
|
https://openreview.net/forum?id=wuQ2DRPAuy
|
Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning
|
https://proceedings.mlr.press/v235/malekmohammadi24a.html
|
Saber Malekmohammadi, Yaoliang Yu, Yang Cao
|
https://proceedings.mlr.press/v235/malekmohammadi24a.html
|
ICML 2024
|
High utility and rigorous data privacy are among the main goals of a federated learning (FL) system, which learns a model from data distributed among clients. The latter is typically pursued by using differential privacy in FL (DPFL). There is often heterogeneity in clients’ privacy requirements, and existing DPFL works either assume uniform privacy requirements for clients or are not applicable when the server is not fully trusted (our setting). Furthermore, there is often heterogeneity in the batch and/or dataset sizes of clients, which, as we show, results in extra variation in the DP noise level across clients’ model updates. With these sources of heterogeneity, straightforward aggregation strategies, e.g., assigning clients’ aggregation weights proportional to their privacy parameters ($\epsilon$), lead to lower utility. We propose Robust-HDP, which efficiently estimates the true noise level in clients’ model updates and reduces the noise level in the aggregated model updates considerably. Robust-HDP improves utility and convergence speed, while being robust to clients that may maliciously send a falsified privacy parameter $\epsilon$ to the server. Extensive experimental results on multiple datasets and our theoretical analysis confirm the effectiveness of Robust-HDP. Our code can be found here: https://github.com/Saber-mm/HDPFL.git
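A toy sketch of the noise-aware aggregation idea: rather than weighting clients by their reported $\epsilon$, weight each update inversely to an estimate of the DP noise it actually carries. The variance proxy below is a crude illustrative stand-in for Robust-HDP's estimator, and all names and numbers are hypothetical.

```python
import numpy as np

def noise_aware_aggregate(client_updates):
    """Toy sketch of noise-aware aggregation: weight each client's model update
    inversely to an estimated DP noise variance instead of by its reported epsilon.
    The variance proxy (empirical variance of the flattened update) is a crude
    illustrative stand-in for Robust-HDP's estimator."""
    updates = [np.asarray(u, dtype=float).ravel() for u in client_updates]
    est_noise = np.array([u.var() for u in updates]) + 1e-12  # avoid division by zero
    weights = 1.0 / est_noise
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Example: three clients, the last one carrying much heavier perturbation noise.
rng = np.random.default_rng(0)
true_update = rng.normal(size=100)
clients = [true_update + s * rng.normal(size=100) for s in (0.1, 0.2, 2.0)]
print(noise_aware_aggregate(clients)[:3])
```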
|
https://proceedings.mlr.press/v235/malherbe24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/malherbe24a/malherbe24a.pdf
|
https://openreview.net/forum?id=YoUb2vW9WP
|
Measures of diversity and space-filling designs for categorical data
|
https://proceedings.mlr.press/v235/malherbe24a.html
|
Cedric Malherbe, Emilio Domı́nguez-Sánchez, Merwan Barlier, Igor Colin, Haitham Bou Ammar, Tom Diethe
|
https://proceedings.mlr.press/v235/malherbe24a.html
|
ICML 2024
|
Selecting a small subset of items that represent the diversity of a larger population lies at the heart of many data analysis and machine learning applications. However, when it comes to items described by discrete features, the lack of natural ordering and the combinatorial nature of the search space pose significant challenges to current selection techniques and make existing methods ill-suited. In this paper, we take a step in that direction by proposing novel methods to select subsets of diverse categorical data based on advances in combinatorial optimization. First, we cast the subset selection problem through the lens of the optimization of three diversity metrics. We then provide novel bounds for this problem and present exact solvers that unfortunately come with a high computational cost. To overcome this bottleneck, we show how to employ tools from linear programming and submodular optimization by introducing two computationally tractable methods that still come with approximation guarantees on the diversity metrics. Finally, a numerical assessment is provided to illustrate the potential of the designs with respect to state-of-the-art methods.
|
https://proceedings.mlr.press/v235/malla24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/malla24a/malla24a.pdf
|
https://openreview.net/forum?id=Lt8Lk7IQ5b
|
COPAL: Continual Pruning in Large Language Generative Models
|
https://proceedings.mlr.press/v235/malla24a.html
|
Srikanth Malla, Joon Hee Choi, Chiho Choi
|
https://proceedings.mlr.press/v235/malla24a.html
|
ICML 2024
|
Adapting pre-trained large language models to different domains in natural language processing requires addressing two key issues: high computational demands and the model’s inability to adapt continually. To address both simultaneously, this paper presents COPAL (COntinual Pruning in Adaptive Language settings), an algorithm for pruning large language generative models under a continual model adaptation setting. While avoiding resource-heavy finetuning or retraining, our pruning process is guided by the proposed sensitivity analysis. The sensitivity effectively measures the model’s ability to withstand perturbations introduced by the new dataset and finds the model’s weights that are relevant for all encountered datasets. As a result, COPAL allows seamless model adaptation to new domains while enhancing resource efficiency. Our empirical evaluation on LLMs of various sizes shows that COPAL outperforms baseline models, demonstrating its efficacy in efficiency and adaptability.
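The snippet below sketches a generic first-order sensitivity score, $|w \cdot \partial L / \partial w|$ per parameter, of the kind often used to guide pruning. It only illustrates the idea of sensitivity-guided pruning; COPAL's actual measure, defined with respect to perturbations introduced by newly encountered datasets, is not reproduced here.

```python
import torch

def first_order_sensitivity(model, loss):
    """Illustrative first-order sensitivity: |w * dL/dw| per parameter, a common proxy
    for how much perturbing each weight changes the loss. A sketch of the general
    concept only, not COPAL's measure."""
    model.zero_grad()
    loss.backward()
    return {
        name: (p.detach() * p.grad.detach()).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
```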
|
https://proceedings.mlr.press/v235/mallinar24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mallinar24a/mallinar24a.pdf
|
https://openreview.net/forum?id=Zw7TcnTmHj
|
Minimum-Norm Interpolation Under Covariate Shift
|
https://proceedings.mlr.press/v235/mallinar24a.html
|
Neil Rohit Mallinar, Austin Zane, Spencer Frei, Bin Yu
|
https://proceedings.mlr.press/v235/mallinar24a.html
|
ICML 2024
|
Transfer learning is a critical part of real-world machine learning deployments and has been extensively studied in experimental works with overparameterized neural networks. However, even in the simplest setting of linear regression, a notable gap still exists in the theoretical understanding of transfer learning. In-distribution research on high-dimensional linear regression has led to the identification of a phenomenon known as benign overfitting, in which linear interpolators overfit to noisy training labels and yet still generalize well. This behavior occurs under specific conditions on the source covariance matrix and input data dimension. Therefore, it is natural to wonder how such high-dimensional linear models behave under transfer learning. We prove the first non-asymptotic excess risk bounds for benignly-overfit linear interpolators in the transfer learning setting. From our analysis, we propose a taxonomy of beneficial and malignant covariate shifts based on the degree of overparameterization. We follow our analysis with empirical studies that show these beneficial and malignant covariate shifts for linear interpolators on real image data, and for fully-connected neural networks in settings where the input data dimension is larger than the training sample size.
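For readers unfamiliar with the object of study, the sketch below constructs the minimum-norm (ridgeless) interpolator in an overparameterized linear regression and evaluates its excess risk under a simple, made-up covariate shift; the dimensions and the shift are illustrative, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                                   # overparameterized regime: d > n
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d) / np.sqrt(d)
y = X @ w_star + 0.1 * rng.normal(size=n)        # noisy training labels

# Minimum-norm (ridgeless) interpolator: w = X^T (X X^T)^{-1} y solves X w = y exactly.
w_mn = X.T @ np.linalg.solve(X @ X.T, y)
assert np.allclose(X @ w_mn, y)

# Made-up covariate shift: anisotropic rescaling of feature variances at test time.
shift = np.linspace(0.5, 2.0, d)
X_test = rng.normal(size=(2000, d)) * np.sqrt(shift)
excess_risk = np.mean((X_test @ (w_mn - w_star)) ** 2)
print(f"excess risk under the illustrative shift: {excess_risk:.4f}")
```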
|
https://proceedings.mlr.press/v235/mannelli24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mannelli24a/mannelli24a.pdf
|
https://openreview.net/forum?id=9L7BZiTtJR
|
Tilting the Odds at the Lottery: the Interplay of Overparameterisation and Curricula in Neural Networks
|
https://proceedings.mlr.press/v235/mannelli24a.html
|
Stefano Sarao Mannelli, Yaraslau Ivashynka, Andrew M Saxe, Luca Saglietti
|
https://proceedings.mlr.press/v235/mannelli24a.html
|
ICML 2024
|
A wide range of empirical and theoretical works have shown that overparameterisation can amplify the performance of neural networks. According to the lottery ticket hypothesis, overparameterised networks have an increased chance of containing a sub-network that is well-initialised to solve the task at hand. A more parsimonious approach, inspired by animal learning, consists in guiding the learner towards solving the task by curating the order of the examples, i.e., providing a curriculum. However, this learning strategy seems to be hardly beneficial in deep learning applications. In this work, we propose a theoretical analysis that connects curriculum learning and overparameterisation. In particular, we investigate their interplay in the online learning setting for a 2-layer network in the XOR-like Gaussian Mixture problem. Our results show that a high degree of overparameterisation—while simplifying the problem—can limit the benefit from curricula, providing a theoretical account of the ineffectiveness of curricula in deep learning.
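A small sampler for an XOR-like Gaussian mixture of the kind analyzed in the paper, useful for reproducing the setting informally; the cluster placement, noise level, and default parameters are illustrative choices rather than the paper's exact specification.

```python
import numpy as np

def xor_gaussian_mixture(n, d=10, sigma=0.3, seed=0):
    """Sample an XOR-like Gaussian mixture: four clusters centered at (+-1, +-1) in the
    first two coordinates, label = product of the two signs, remaining d-2 coordinates
    pure noise. Defaults are illustrative, not the paper's specification."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(n, 2))
    x = sigma * rng.normal(size=(n, d))
    x[:, :2] += signs
    y = signs[:, 0] * signs[:, 1]            # labels in {-1, +1}; not linearly separable
    return x, y

x, y = xor_gaussian_mixture(1000)
print(x.shape, y.mean())
```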
|
https://proceedings.mlr.press/v235/manor24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/manor24a/manor24a.pdf
|
https://openreview.net/forum?id=mCzyRdDak5
|
Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion
|
https://proceedings.mlr.press/v235/manor24a.html
|
Hila Manor, Tomer Michaeli
|
https://proceedings.mlr.press/v235/manor24a.html
|
ICML 2024
|
Editing signals using large pre-trained models, in a zero-shot manner, has recently seen rapid advancements in the image domain. However, this wave has yet to reach the audio domain. In this paper, we explore two zero-shot editing techniques for audio signals, which use DDPM inversion with pre-trained diffusion models. The first, which we coin ZEro-shot Text-based Audio (ZETA) editing, is adopted from the image domain. The second, named ZEro-shot UnSupervized (ZEUS) editing, is a novel approach for discovering semantically meaningful editing directions without supervision. When applied to music signals, this method exposes a range of musically interesting modifications, from controlling the participation of specific instruments to improvisations on the melody. Samples and code can be found on our examples page.
|
https://proceedings.mlr.press/v235/manupriya24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/manupriya24a/manupriya24a.pdf
|
https://openreview.net/forum?id=bfQCO9Vqhk
|
Submodular framework for structured-sparse optimal transport
|
https://proceedings.mlr.press/v235/manupriya24a.html
|
Piyushi Manupriya, Pratik Jawanpuria, Karthik S. Gurumoorthy, Sakethanath Jagarlapudi, Bamdev Mishra
|
https://proceedings.mlr.press/v235/manupriya24a.html
|
ICML 2024
|
Unbalanced optimal transport (UOT) has recently gained much attention due to its flexible framework for handling un-normalized measures and its robustness properties. In this work, we explore learning (structured) sparse transport plans in the UOT setting, i.e., transport plans have an upper bound on the number of non-sparse entries in each column (structured sparse pattern) or in the whole plan (general sparse pattern). We propose novel sparsity-constrained UOT formulations building on the recently explored maximum mean discrepancy based UOT. We show that the proposed optimization problem is equivalent to the maximization of a weakly submodular function over a uniform matroid or a partition matroid. We develop efficient gradient-based discrete greedy algorithms and provide the corresponding theoretical guarantees. Empirically, we observe that our proposed greedy algorithms select a diverse support set and we illustrate the efficacy of the proposed approach in various applications.
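The schematic loop below shows what greedy maximization over a partition matroid looks like in this context: transport-plan entries are added one at a time while keeping at most k selected entries per column. The marginal-gain oracle `gain` is left abstract, and this plain greedy loop is a stand-in for, not a reproduction of, the paper's gradient-based discrete greedy algorithms.

```python
import numpy as np

def greedy_partition_matroid(gain, n_rows, n_cols, k):
    """Schematic greedy selection over a partition matroid: pick transport-plan entries
    (i, j) one at a time, allowing at most k selected entries per column.
    `gain(support, entry)` is an abstract marginal-gain oracle."""
    support, per_col = set(), np.zeros(n_cols, dtype=int)
    for _ in range(k * n_cols):
        feasible = [
            (i, j)
            for i in range(n_rows)
            for j in range(n_cols)
            if (i, j) not in support and per_col[j] < k
        ]
        if not feasible:
            break
        best = max(feasible, key=lambda e: gain(support, e))
        support.add(best)
        per_col[best[1]] += 1
    return support

# Example with a toy modular gain based on made-up scores.
scores = np.random.default_rng(0).random((4, 3))
chosen = greedy_partition_matroid(lambda S, e: scores[e], 4, 3, k=2)
print(sorted(chosen))
```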
|
https://proceedings.mlr.press/v235/manvi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/manvi24a/manvi24a.pdf
|
https://openreview.net/forum?id=sHtIStlg0v
|
Large Language Models are Geographically Biased
|
https://proceedings.mlr.press/v235/manvi24a.html
|
Rohin Manvi, Samar Khanna, Marshall Burke, David B. Lobell, Stefano Ermon
|
https://proceedings.mlr.press/v235/manvi24a.html
|
ICML 2024
|
Large Language Models (LLMs) inherently carry the biases contained in their training corpora, which can lead to the perpetuation of societal harm. As the impact of these foundation models grows, understanding and evaluating their biases becomes crucial to achieving fairness and accuracy. We propose to study what LLMs know about the world we live in through the lens of geography. This approach is particularly powerful as there is ground truth for the numerous aspects of human life that are meaningfully projected onto geographic space such as culture, race, language, politics, and religion. We show various problematic geographic biases, which we define as systemic errors in geospatial predictions. Initially, we demonstrate that LLMs are capable of making accurate zero-shot geospatial predictions in the form of ratings that show strong monotonic correlation with ground truth (Spearman’s $\rho$ of up to 0.89). We then show that LLMs exhibit common biases across a range of objective and subjective topics. In particular, LLMs are clearly biased against locations with lower socioeconomic conditions (e.g. most of Africa) on a variety of sensitive subjective topics such as attractiveness, morality, and intelligence (Spearman’s $\rho$ of up to 0.70). Finally, we introduce a bias score to quantify this and find that there is significant variation in the magnitude of bias across existing LLMs. Code is available on the project website: https://rohinmanvi.github.io/GeoLLM.
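As a concrete illustration of the kind of monotonic-correlation evaluation mentioned above, one can compare zero-shot LLM ratings for a set of locations against a ground-truth index using Spearman's $\rho$; all numbers below are made up for demonstration.

```python
from scipy.stats import spearmanr

# Made-up LLM ratings and ground-truth index values for six hypothetical locations.
llm_ratings  = [7.5, 3.2, 9.0, 4.1, 6.8, 2.0]
ground_truth = [0.81, 0.42, 0.95, 0.47, 0.73, 0.30]

rho, pval = spearmanr(llm_ratings, ground_truth)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```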
|
https://proceedings.mlr.press/v235/mao24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mao24a/mao24a.pdf
|
https://openreview.net/forum?id=Edz0QXKKAo
|
Position: Graph Foundation Models Are Already Here
|
https://proceedings.mlr.press/v235/mao24a.html
|
Haitao Mao, Zhikai Chen, Wenzhuo Tang, Jianan Zhao, Yao Ma, Tong Zhao, Neil Shah, Mikhail Galkin, Jiliang Tang
|
https://proceedings.mlr.press/v235/mao24a.html
|
ICML 2024
|
Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain, aiming to develop graph models trained on extensive and diverse data to enhance their applicability across various tasks and domains. Developing GFMs presents unique challenges over traditional Graph Neural Networks (GNNs), which are typically trained from scratch for specific tasks on particular datasets. The primary challenge in constructing GFMs lies in effectively leveraging vast and diverse graph data to achieve positive transfer. Drawing inspiration from existing foundation models in the CV and NLP domains, we propose a novel perspective for the GFM development by advocating for a “graph vocabulary”, in which the basic transferable units underlying graphs encode the invariance on graphs. We ground the graph vocabulary construction from essential aspects including network analysis, expressiveness, and stability. Such a vocabulary perspective can potentially advance the future GFM design in line with the neural scaling laws. All relevant resources with GFM design can be found here.
|
https://proceedings.mlr.press/v235/mao24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mao24b/mao24b.pdf
|
https://openreview.net/forum?id=FNKnLhLuhY
|
Towards General Neural Surrogate Solvers with Specialized Neural Accelerators
|
https://proceedings.mlr.press/v235/mao24b.html
|
Chenkai Mao, Robert Lupoiu, Tianxiang Dai, Mingkun Chen, Jonathan Fan
|
https://proceedings.mlr.press/v235/mao24b.html
|
ICML 2024
|
Surrogate neural network-based partial differential equation (PDE) solvers have the potential to solve PDEs in an accelerated manner, but they are largely limited to systems featuring fixed domain sizes, geometric layouts, and boundary conditions. We propose Specialized Neural Accelerator-Powered Domain Decomposition Methods (SNAP-DDM), a DDM-based approach to PDE solving in which subdomain problems containing arbitrary boundary conditions and geometric parameters are accurately solved using an ensemble of specialized neural operators. We tailor SNAP-DDM to 2D electromagnetics and fluidic flow problems and show how innovations in network architecture and loss function engineering can produce specialized surrogate subdomain solvers with near unity accuracy. We utilize these solvers with standard DDM algorithms to accurately solve freeform electromagnetics and fluids problems featuring a wide range of domain sizes.
|
https://proceedings.mlr.press/v235/mao24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mao24c/mao24c.pdf
|
https://openreview.net/forum?id=nvHlHfjJPe
|
$H$-Consistency Guarantees for Regression
|
https://proceedings.mlr.press/v235/mao24c.html
|
Anqi Mao, Mehryar Mohri, Yutao Zhong
|
https://proceedings.mlr.press/v235/mao24c.html
|
ICML 2024
|
We present a detailed study of $H$-consistency bounds for regression. We first present new theorems that generalize the tools previously given to establish $H$-consistency bounds. This generalization proves essential for analyzing $H$-consistency bounds specific to regression. Next, we prove a series of novel $H$-consistency bounds for surrogate loss functions of the squared loss, under the assumption of a symmetric distribution and a bounded hypothesis set. This includes positive results for the Huber loss, all $\ell_p$ losses, $p \geq 1$, the squared $\epsilon$-insensitive loss, as well as a negative result for the $\epsilon$-insensitive loss used in Support Vector Regression (SVR). We further leverage our analysis of $H$-consistency for regression and derive principled surrogate losses for adversarial regression (Section 5). This readily establishes novel algorithms for adversarial regression, for which we report favorable experimental results in Section 6.
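For reference, the standard definitions of the surrogate losses named above; these are textbook forms, not notation taken from the paper.

```latex
% Textbook definitions (not the paper's notation):
\ell_{\mathrm{Huber},\delta}(y,\hat y)=
\begin{cases}
\tfrac{1}{2}(y-\hat y)^2, & |y-\hat y|\le\delta,\\
\delta\,|y-\hat y|-\tfrac{1}{2}\delta^{2}, & \text{otherwise},
\end{cases}
\qquad
\ell_{p}(y,\hat y)=|y-\hat y|^{p}\ (p\ge 1),
\qquad
\ell_{\epsilon}(y,\hat y)=\max\bigl(0,\,|y-\hat y|-\epsilon\bigr),
\qquad
\ell_{\epsilon}^{\mathrm{sq}}(y,\hat y)=\max\bigl(0,\,|y-\hat y|-\epsilon\bigr)^{2}.
```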
|
https://proceedings.mlr.press/v235/mao24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mao24d/mao24d.pdf
|
https://openreview.net/forum?id=5NTTCCO74S
|
Regression with Multi-Expert Deferral
|
https://proceedings.mlr.press/v235/mao24d.html
|
Anqi Mao, Mehryar Mohri, Yutao Zhong
|
https://proceedings.mlr.press/v235/mao24d.html
|
ICML 2024
|
Learning to defer with multiple experts is a framework where the learner can choose to defer the prediction to several experts. While this problem has received significant attention in classification contexts, it presents unique challenges in regression due to the infinite and continuous nature of the label space. In this work, we introduce a novel framework of regression with deferral, which involves deferring the prediction to multiple experts. We present a comprehensive analysis for both the single-stage scenario, where there is simultaneous learning of predictor and deferral functions, and the two-stage scenario, which involves a pre-trained predictor with a learned deferral function. We introduce new surrogate loss functions for both scenarios and prove that they are supported by $H$-consistency bounds. These bounds provide consistency guarantees that are stronger than Bayes consistency, as they are non-asymptotic and hypothesis set-specific. Our framework is versatile, applying to multiple experts, accommodating any bounded regression losses, addressing both instance-dependent and label-dependent costs, and supporting both single-stage and two-stage methods. Our single-stage formulation subsumes as a special case the recent regression with abstention (Cheng et al., 2023) framework, where only a single expert is considered, specifically for the squared loss and a label-independent cost. Minimizing our proposed loss functions directly leads to novel algorithms for regression with deferral. We report the results of extensive experiments showing the effectiveness of our proposed algorithms.
|
https://proceedings.mlr.press/v235/maran24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/maran24a/maran24a.pdf
|
https://openreview.net/forum?id=GGnYDXZC1B
|
No-Regret Reinforcement Learning in Smooth MDPs
|
https://proceedings.mlr.press/v235/maran24a.html
|
Davide Maran, Alberto Maria Metelli, Matteo Papini, Marcello Restelli
|
https://proceedings.mlr.press/v235/maran24a.html
|
ICML 2024
|
Obtaining no-regret guarantees for reinforcement learning (RL) in the case of problems with continuous state and/or action spaces is still one of the major open challenges in the field. Recently, a variety of solutions have been proposed, but besides very specific settings, the general problem remains unsolved. In this paper, we introduce a novel structural assumption on the Markov decision processes (MDPs), namely $\nu$-smoothness, that generalizes most of the settings proposed so far (e.g., linear MDPs and Lipschitz MDPs). To face this challenging scenario, we propose two algorithms for regret minimization in $\nu$-smooth MDPs. Both algorithms build upon the idea of constructing an MDP representation through an orthogonal feature map based on Legendre polynomials. The first algorithm, Legendre-Eleanor, achieves the no-regret property under weaker assumptions but is computationally inefficient, whereas the second one, Legendre-LSVI, runs in polynomial time, although for a smaller class of problems. After analyzing their regret properties, we compare our results with state-of-the-art ones from RL theory, showing that our algorithms achieve the best guarantees.
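A minimal sketch of the orthogonal feature map underlying both algorithms: evaluating the first few Legendre polynomials on inputs scaled to $[-1, 1]$. This illustrates only the representation idea; the function name and interface are illustrative, and the regret-minimization algorithms themselves are not shown.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_features(x, degree):
    """Orthogonal feature map from Legendre polynomials P_0, ..., P_degree on [-1, 1].
    Illustrative interface; the full Legendre-Eleanor / Legendre-LSVI algorithms
    are not reproduced here."""
    x = np.asarray(x)
    coeffs = np.eye(degree + 1)                      # row k selects the k-th Legendre polynomial
    feats = [legendre.legval(x, coeffs[k]) for k in range(degree + 1)]
    return np.stack(feats, axis=-1)                  # shape (..., degree + 1)

phi = legendre_features(np.linspace(-1.0, 1.0, 5), degree=3)
print(phi.shape)  # (5, 4)
```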
|
https://proceedings.mlr.press/v235/marcotte24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/marcotte24a/marcotte24a.pdf
|
https://openreview.net/forum?id=hG6gddAKnJ
|
Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows
|
https://proceedings.mlr.press/v235/marcotte24a.html
|
Sibylle Marcotte, Rémi Gribonval, Gabriel Peyré
|
https://proceedings.mlr.press/v235/marcotte24a.html
|
ICML 2024
|
Conservation laws are well-established in the context of Euclidean gradient flow dynamics, notably for linear or ReLU neural network training. Yet, their existence and principles for non-Euclidean geometries and momentum-based dynamics remain largely unknown. In this paper, we characterize "all" conservation laws in this general setting. In stark contrast to the case of gradient flows, we prove that the conservation laws for momentum-based dynamics exhibit temporal dependence. Additionally, we often observe a "conservation loss" when transitioning from gradient flow to momentum dynamics. Specifically, for linear networks, our framework allows us to identify all momentum conservation laws, which are less numerous than in the gradient flow case except in sufficiently over-parameterized regimes. With ReLU networks, no conservation law remains. This phenomenon also manifests in non-Euclidean metrics, used e.g. for Nonnegative Matrix Factorization (NMF): all conservation laws can be determined in the gradient flow context, yet none persists in the momentum case.
|
https://proceedings.mlr.press/v235/mariella24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mariella24a/mariella24a.pdf
|
https://openreview.net/forum?id=A9hJvQHEEP
|
Quantum Theory and Application of Contextual Optimal Transport
|
https://proceedings.mlr.press/v235/mariella24a.html
|
Nicola Mariella, Albert Akhriev, Francesco Tacchino, Christa Zoufal, Juan Carlos Gonzalez-Espitia, Benedek Harsanyi, Eugene Koskin, Ivano Tavernelli, Stefan Woerner, Marianna Rapsomaniki, Sergiy Zhuk, Jannis Born
|
https://proceedings.mlr.press/v235/mariella24a.html
|
ICML 2024
|
Optimal Transport (OT) has fueled machine learning (ML) across many domains. When paired data measurements $(\boldsymbol{\mu}, \boldsymbol{\nu})$ are coupled to covariates, a challenging conditional distribution learning setting arises. Existing approaches for learning a global transport map parameterized through a potentially unseen context utilize Neural OT and largely rely on Brenier’s theorem. Here, we propose a first-of-its-kind quantum computing formulation for amortized optimization of contextualized transportation plans. We exploit a direct link between doubly stochastic matrices and unitary operators thus unravelling a natural connection between OT and quantum computation. We verify our method (QontOT) on synthetic and real data by predicting variations in cell type distributions conditioned on drug dosage. Importantly we conduct a 24-qubit hardware experiment on a task challenging for classical computers and report a performance that cannot be matched with our classical neural OT approach. In sum, this is a first step toward learning to predict contextualized transportation plans through quantum computing.
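The direct link between doubly stochastic matrices and unitary operators referred to above can be checked numerically: for any unitary $U$, the matrix of squared moduli $|U_{ij}|^2$ is doubly stochastic, since the rows and columns of $U$ are orthonormal. A quick sanity check, independent of the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                     # QR of a random complex matrix yields a unitary Q
B = np.abs(U) ** 2                         # B_ij = |U_ij|^2

print(np.allclose(B.sum(axis=0), 1.0))    # True: columns sum to 1
print(np.allclose(B.sum(axis=1), 1.0))    # True: rows sum to 1
```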
|
https://proceedings.mlr.press/v235/marisca24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/marisca24a/marisca24a.pdf
|
https://openreview.net/forum?id=uYIFQOtb58
|
Graph-based Forecasting with Missing Data through Spatiotemporal Downsampling
|
https://proceedings.mlr.press/v235/marisca24a.html
|
Ivan Marisca, Cesare Alippi, Filippo Maria Bianchi
|
https://proceedings.mlr.press/v235/marisca24a.html
|
ICML 2024
|
Given a set of synchronous time series, each associated with a sensor-point in space and characterized by inter-series relationships, the problem of spatiotemporal forecasting consists of predicting future observations for each point. Spatiotemporal graph neural networks achieve striking results by representing the relationships across time series as a graph. Nonetheless, most existing methods rely on the often unrealistic assumption that inputs are always available and fail to capture hidden spatiotemporal dynamics when part of the data is missing. In this work, we tackle this problem through hierarchical spatiotemporal downsampling. The input time series are progressively coarsened over time and space, obtaining a pool of representations that capture heterogeneous temporal and spatial dynamics. Conditioned on observations and missing data patterns, such representations are combined by an interpretable attention mechanism to generate the forecasts. Our approach outperforms state-of-the-art methods on synthetic and real-world benchmarks under different missing data distributions, particularly in the presence of contiguous blocks of missing values.
|
https://proceedings.mlr.press/v235/marnissi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/marnissi24a/marnissi24a.pdf
|
https://openreview.net/forum?id=dV9QGostQk
|
A Unified View of FANOVA: A Comprehensive Bayesian Framework for Component Selection and Estimation
|
https://proceedings.mlr.press/v235/marnissi24a.html
|
Yosra Marnissi, Maxime Leiber
|
https://proceedings.mlr.press/v235/marnissi24a.html
|
ICML 2024
|
This paper presents a comprehensive Bayesian framework for FANOVA models. We provide guidelines for tuning and practical implementation to improve scalability of learning and prediction. Our model is very flexible and can handle different levels of sparsity across and within decomposition orders, as well as among covariates. This flexibility enables the modeling of complex real-world data while enhancing interpretability. Additionally, it allows our model to unify diverse deterministic and Bayesian non-parametric approaches into a single equation, making comparisons and understanding easier. Notably, our model serves as the Bayesian counterpart of several deterministic methods allowing uncertainty quantification. This general framework unlocks potential for novel model developments that have been previously overlooked, such as the proposed Dirichlet mixing model that addresses limitations of existing models.
|
https://proceedings.mlr.press/v235/martinelli24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/martinelli24a/martinelli24a.pdf
|
https://openreview.net/forum?id=3MIuPRJYwf
|
Expand-and-Cluster: Parameter Recovery of Neural Networks
|
https://proceedings.mlr.press/v235/martinelli24a.html
|
Flavio Martinelli, Berfin Simsek, Wulfram Gerstner, Johanni Brea
|
https://proceedings.mlr.press/v235/martinelli24a.html
|
ICML 2024
|
Can we identify the weights of a neural network by probing its input-output mapping? At first glance, this problem seems to have many solutions because of permutation, overparameterisation and activation function symmetries. Yet, we show that the incoming weight vector of each neuron is identifiable up to sign or scaling, depending on the activation function. Our novel method ‘Expand-and-Cluster’ can identify layer sizes and weights of a target network for all commonly used activation functions. Expand-and-Cluster consists of two phases: (i) to relax the non-convex optimisation problem, we train multiple overparameterised student networks to best imitate the target function; (ii) to reverse engineer the target network’s weights, we employ an ad-hoc clustering procedure that reveals the learnt weight vectors shared between students – these correspond to the target weight vectors. We demonstrate successful weight and size recovery of trained shallow and deep networks with less than 10% overhead in the layer size and describe an ‘ease-of-identifiability’ axis by analysing 150 synthetic problems of variable difficulty.
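A rough sketch of the clustering phase (ii): pool the normalized incoming weight vectors of one layer across several trained student networks and cluster them, treating cluster centers as candidate target weight vectors (identifiable only up to sign or scaling). The normalization and the use of plain k-means are illustrative simplifications of the paper's ad-hoc procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_student_neurons(student_weight_matrices, n_clusters):
    """Pool row-normalized incoming weight vectors across student networks and cluster
    them; cluster centers serve as candidate target weight vectors (up to sign/scale).
    Illustrative simplification of the paper's clustering procedure."""
    rows = []
    for W in student_weight_matrices:                 # each W: (n_hidden_units, input_dim)
        W = np.asarray(W, dtype=float)
        rows.append(W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12))
    pooled = np.vstack(rows)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pooled)
    return km.cluster_centers_
```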
|
https://proceedings.mlr.press/v235/marti-nez-rubio24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/marti-nez-rubio24a/marti-nez-rubio24a.pdf
|
https://openreview.net/forum?id=ltb2XaIr9p
|
Convergence and Trade-Offs in Riemannian Gradient Descent and Riemannian Proximal Point
|
https://proceedings.mlr.press/v235/marti-nez-rubio24a.html
|
David Martı́nez-Rubio, Christophe Roux, Sebastian Pokutta
|
https://proceedings.mlr.press/v235/marti-nez-rubio24a.html
|
ICML 2024
|
In this work, we analyze two of the most fundamental algorithms in geodesically convex optimization: Riemannian gradient descent and (possibly inexact) Riemannian proximal point. We quantify their rates of convergence and produce different variants with several trade-offs. Crucially, we show the iterates naturally stay in a ball around an optimizer, of radius depending on the initial distance and, in some cases, on the curvature. Previous works simply assumed bounded iterates, resulting in rates that were not fully quantified. We also provide an implementable inexact proximal point algorithm and prove several new useful properties of Riemannian proximal methods: they work when positive curvature is present, the proximal operator does not move points away from any optimizer, and we quantify the smoothness of its induced Moreau envelope. Further, we explore beyond our theory with empirical tests.
|
https://proceedings.mlr.press/v235/marusich24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/marusich24a/marusich24a.pdf
|
https://openreview.net/forum?id=fowZNENcVJ
|
Using AI Uncertainty Quantification to Improve Human Decision-Making
|
https://proceedings.mlr.press/v235/marusich24a.html
|
Laura Marusich, Jonathan Bakdash, Yan Zhou, Murat Kantarcioglu
|
https://proceedings.mlr.press/v235/marusich24a.html
|
ICML 2024
|
AI Uncertainty Quantification (UQ) has the potential to improve human decision-making beyond AI predictions alone by providing additional probabilistic information to users. The majority of past research on AI and human decision-making has concentrated on model explainability and interpretability, with little focus on understanding the potential impact of UQ on human decision-making. We evaluated the impact on human decision-making for instance-level UQ, calibrated using a strict scoring rule, in two online behavioral experiments. In the first experiment, our results showed that UQ was beneficial for decision-making performance compared to only AI predictions. In the second experiment, we found UQ had generalizable benefits for decision-making across a variety of representations for probabilistic information. These results indicate that implementing high quality, instance-level UQ for AI may improve decision-making with real systems compared to AI predictions alone.
|
https://proceedings.mlr.press/v235/marzouk24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/marzouk24a/marzouk24a.pdf
|
https://openreview.net/forum?id=htq0FbPOsY
|
On the Tractability of SHAP Explanations under Markovian Distributions
|
https://proceedings.mlr.press/v235/marzouk24a.html
|
Reda Marzouk, Colin De La Higuera
|
https://proceedings.mlr.press/v235/marzouk24a.html
|
ICML 2024
|
Thanks to its solid theoretical foundation, the SHAP framework is arguably one of the most widely utilized frameworks for local explainability of ML models. Despite its popularity, its exact computation is known to be very challenging, proven to be NP-Hard in various configurations. Recent works have unveiled positive complexity results regarding the computation of the SHAP score for specific model families, encompassing decision trees, random forests, and some classes of boolean circuits. Yet, all these positive results hinge on the assumption of feature independence, often simplistic in real-world scenarios. In this article, we investigate the computational complexity of the SHAP score by relaxing this assumption and introducing a Markovian perspective. We show that, under the Markovian assumption, computing the SHAP score for the class of Weighted automata, Disjoint DNFs and Decision Trees can be performed in polynomial time, offering a first positive complexity result for the problem of SHAP score computation that transcends the limitations of the feature independence assumption.
|
https://proceedings.mlr.press/v235/masserano24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/masserano24a/masserano24a.pdf
|
https://openreview.net/forum?id=RXxTuxPopa
|
Classification under Nuisance Parameters and Generalized Label Shift in Likelihood-Free Inference
|
https://proceedings.mlr.press/v235/masserano24a.html
|
Luca Masserano, Alexander Shen, Michele Doro, Tommaso Dorigo, Rafael Izbicki, Ann B. Lee
|
https://proceedings.mlr.press/v235/masserano24a.html
|
ICML 2024
|
An open scientific challenge is how to classify events with reliable measures of uncertainty, when we have a mechanistic model of the data-generating process but the distribution over both labels and latent nuisance parameters is different between train and target data. We refer to this type of distributional shift as generalized label shift (GLS). Direct classification using observed data $\mathbf{X}$ as covariates leads to biased predictions and invalid uncertainty estimates of labels $Y$. We overcome these biases by proposing a new method for robust uncertainty quantification that casts classification as a hypothesis testing problem under nuisance parameters. The key idea is to estimate the classifier’s receiver operating characteristic (ROC) across the entire nuisance parameter space, which allows us to devise cutoffs that are invariant under GLS. Our method effectively endows a pre-trained classifier with domain adaptation capabilities and returns valid prediction sets while maintaining high power. We demonstrate its performance on two challenging scientific problems in biology and astroparticle physics with data from realistic mechanistic models.
|
https://proceedings.mlr.press/v235/massiani24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/massiani24a/massiani24a.pdf
|
https://openreview.net/forum?id=AEHXvoOxV9
|
On the Consistency of Kernel Methods with Dependent Observations
|
https://proceedings.mlr.press/v235/massiani24a.html
|
Pierre-François Massiani, Sebastian Trimpe, Friedrich Solowjow
|
https://proceedings.mlr.press/v235/massiani24a.html
|
ICML 2024
|
The consistency of a learning method is usually established under the assumption that the observations are a realization of an independent and identically distributed (i.i.d.) or mixing process. Yet, kernel methods such as support vector machines (SVMs), Gaussian processes, or conditional kernel mean embeddings (CKMEs) all give excellent performance under sampling schemes that are obviously non-i.i.d., such as when data comes from a dynamical system. We propose the new notion of empirical weak convergence (EWC) as a general assumption explaining such phenomena for kernel methods. It assumes the existence of a random asymptotic data distribution and is a strict weakening of previous assumptions in the field. Our main results then establish consistency of SVMs, kernel mean embeddings, and general Hilbert-space valued empirical expectations with EWC data. Our analysis holds for both finite- and infinite-dimensional outputs, as we extend classical results of statistical learning to the latter case. In particular, it is also applicable to CKMEs. Overall, our results open new classes of processes to statistical learning and can serve as a foundation for a theory of learning beyond i.i.d. and mixing.
|
https://proceedings.mlr.press/v235/mastrototaro24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mastrototaro24a/mastrototaro24a.pdf
|
https://openreview.net/forum?id=jbPc3pW6sC
|
Online Variational Sequential Monte Carlo
|
https://proceedings.mlr.press/v235/mastrototaro24a.html
|
Alessandro Mastrototaro, Jimmy Olsson
|
https://proceedings.mlr.press/v235/mastrototaro24a.html
|
ICML 2024
|
Being the most classical generative model for serial data, state-space models (SSM) are fundamental in AI and statistical machine learning. In SSM, any form of parameter learning or latent state inference typically involves the computation of complex latent-state posteriors. In this work, we build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference by combining particle methods and variational inference. While standard VSMC operates in the offline mode, by re-processing repeatedly a given batch of data, we distribute the approximation of the gradient of the VSMC surrogate ELBO in time using stochastic approximation, allowing for online learning in the presence of streams of data. This results in an algorithm, online VSMC, that is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation. In addition, we provide rigorous theoretical results describing the algorithm’s convergence properties as the number of observations tends to infinity as well as numerical illustrations of its excellent convergence properties and usefulness also in batch-processing settings.
|
https://proceedings.mlr.press/v235/matias24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/matias24a/matias24a.pdf
|
https://openreview.net/forum?id=MSMKQuZhD5
|
Amortized Variational Deep Kernel Learning
|
https://proceedings.mlr.press/v235/matias24a.html
|
Alan L. S. Matias, César Lincoln Mattos, João Paulo Pordeus Gomes, Diego Mesquita
|
https://proceedings.mlr.press/v235/matias24a.html
|
ICML 2024
|
Deep kernel learning (DKL) marries the uncertainty quantification of Gaussian processes (GPs) and the representational power of deep neural networks. However, training DKL is challenging and often leads to overfitting. Most notably, DKL often learns “non-local” kernels — incurring spurious correlations. To remedy this issue, we propose using amortized inducing points and a parameter-sharing scheme, which ties together the amortization and DKL networks. This design imposes an explicit dependency between the ELBO’s model fit and capacity terms. In turn, this prevents the former from dominating the optimization procedure and incurring the aforementioned spurious correlations. Extensive experiments show that our resulting method, amortized variational DKL (AVDKL), i) consistently outperforms DKL and standard GPs for tabular data; ii) achieves significantly higher accuracy than DKL in node classification tasks; and iii) leads to substantially better accuracy and negative log-likelihood than DKL on CIFAR100.
|
https://proceedings.mlr.press/v235/mattes24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mattes24a/mattes24a.pdf
|
https://openreview.net/forum?id=IUBhvyJ9Sr
|
Hieros: Hierarchical Imagination on Structured State Space Sequence World Models
|
https://proceedings.mlr.press/v235/mattes24a.html
|
Paul Mattes, Rainer Schlosser, Ralf Herbrich
|
https://proceedings.mlr.press/v235/mattes24a.html
|
ICML 2024
|
One of the biggest challenges to modern deep reinforcement learning (DRL) algorithms is sample efficiency. Many approaches learn a world model in order to train an agent entirely in imagination, eliminating the need for direct environment interaction during training. However, these methods often suffer from either a lack of imagination accuracy, exploration capabilities, or runtime efficiency. We propose HIEROS, a hierarchical policy that learns time abstracted world representations and imagines trajectories at multiple time scales in latent space. HIEROS uses an S5 layer-based world model, which predicts next world states in parallel during training and iteratively during environment interaction. Due to the special properties of S5 layers, our method can train in parallel and predict next world states iteratively during imagination. This allows for more efficient training than RNN-based world models and more efficient imagination than Transformer-based world models. We show that our approach outperforms the state of the art in terms of mean and median normalized human score on the Atari 100k benchmark, and that our proposed world model is able to predict complex dynamics very accurately. We also show that HIEROS displays superior exploration capabilities compared to existing approaches.
|
https://proceedings.mlr.press/v235/matthews24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/matthews24a/matthews24a.pdf
|
https://openreview.net/forum?id=hg4wXlrQCV
|
Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning
|
https://proceedings.mlr.press/v235/matthews24a.html
|
Michael Matthews, Michael Beukman, Benjamin Ellis, Mikayel Samvelyan, Matthew Thomas Jackson, Samuel Coward, Jakob Nicolaus Foerster
|
https://proceedings.mlr.press/v235/matthews24a.html
|
ICML 2024
|
Benchmarks play a crucial role in the development and analysis of reinforcement learning (RL) algorithms. We identify that existing benchmarks used for research into open-ended learning fall into one of two categories. Either they are too slow for meaningful research to be performed without enormous computational resources, like Crafter, NetHack and Minecraft, or they are not complex enough to pose a significant challenge, like Minigrid and Procgen. To remedy this, we first present Craftax-Classic: a ground-up rewrite of Crafter in JAX that runs up to 250x faster than the Python-native original. A run of PPO using 1 billion environment interactions finishes in under an hour using only a single GPU and averages 90% of the optimal reward. To provide a more compelling challenge we present the main Craftax benchmark, a significant extension of the Crafter mechanics with elements inspired from NetHack. Solving Craftax requires deep exploration, long term planning and memory, as well as continual adaptation to novel situations as more of the world is discovered. We show that existing methods including global and episodic exploration, as well as unsupervised environment design fail to make material progress on the benchmark. We therefore believe that Craftax can for the first time allow researchers to experiment in a complex, open-ended environment with limited computational resources.
|
https://proceedings.mlr.press/v235/maurais24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/maurais24a/maurais24a.pdf
|
https://openreview.net/forum?id=rtyqBfcg8j
|
Sampling in Unit Time with Kernel Fisher-Rao Flow
|
https://proceedings.mlr.press/v235/maurais24a.html
|
Aimee Maurais, Youssef Marzouk
|
https://proceedings.mlr.press/v235/maurais24a.html
|
ICML 2024
|
We introduce a new mean-field ODE and corresponding interacting particle systems (IPS) for sampling from an unnormalized target density. The IPS are gradient-free, available in closed form, and only require the ability to sample from a reference density and compute the (unnormalized) target-to-reference density ratio. The mean-field ODE is obtained by solving a Poisson equation for a velocity field that transports samples along the geometric mixture of the two densities, $\pi_0^{1-t} \pi_1^t$, which is the path of a particular Fisher-Rao gradient flow. We employ an RKHS ansatz for the velocity field, which makes the Poisson equation tractable and enables discretization of the resulting mean-field ODE over finite samples. The mean-field ODE can additionally be derived from a discrete-time perspective as the limit of successive linearizations of the Monge-Ampère equations within a framework known as sample-driven optimal transport. We introduce a stochastic variant of our approach and demonstrate empirically that our IPS can produce high-quality samples from varied target distributions, outperforming comparable gradient-free particle systems and competitive with gradient-based alternatives.
|
https://proceedings.mlr.press/v235/maus24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/maus24a/maus24a.pdf
|
https://openreview.net/forum?id=wkCUmO7oi2
|
Joint Composite Latent Space Bayesian Optimization
|
https://proceedings.mlr.press/v235/maus24a.html
|
Natalie Maus, Zhiyuan Jerry Lin, Maximilian Balandat, Eytan Bakshy
|
https://proceedings.mlr.press/v235/maus24a.html
|
ICML 2024
|
Bayesian Optimization (BO) is a technique for sample-efficient black-box optimization that employs probabilistic models to identify promising input for evaluation. When dealing with composite-structured functions, such as $f=g \circ h$, evaluating a specific location $x$ yields observations of both the final outcome $f(x) = g(h(x))$ as well as the intermediate output(s) $h(x)$. Previous research has shown that integrating information from these intermediate outputs can enhance BO performance substantially. However, existing methods struggle if the outputs $h(x)$ are high-dimensional. Many relevant problems fall into this setting, including in the context of generative AI, molecular design, or robotics. To effectively tackle these challenges, we introduce Joint Composite Latent Space Bayesian Optimization (JoCo), a novel framework that jointly trains neural network encoders and probabilistic models to adaptively compress high-dimensional input and output spaces into manageable latent representations. This enables effective BO on these compressed representations, allowing JoCo to outperform other state-of-the-art methods in high-dimensional BO on a wide variety of simulated and real-world problems.
|
https://proceedings.mlr.press/v235/mazeika24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mazeika24a/mazeika24a.pdf
|
https://openreview.net/forum?id=f3TUipYU3U
|
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
|
https://proceedings.mlr.press/v235/mazeika24a.html
|
Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, David Forsyth, Dan Hendrycks
|
https://proceedings.mlr.press/v235/mazeika24a.html
|
ICML 2024
|
Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess new methods. To address this issue, we introduce HarmBench, a standardized evaluation framework for automated red teaming. We identify several desirable properties previously unaccounted for in red teaming evaluations and systematically design HarmBench to meet these criteria. Using HarmBench, we conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, yielding novel insights. We also introduce a highly efficient adversarial training method that greatly enhances LLM robustness across a wide range of attacks, demonstrating how HarmBench enables codevelopment of attacks and defenses. We open source HarmBench at https://github.com/centerforaisafety/HarmBench.
|
https://proceedings.mlr.press/v235/mazzawi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mazzawi24a/mazzawi24a.pdf
|
https://openreview.net/forum?id=4PuM6iGPPi
|
Deep Fusion: Efficient Network Training via Pre-trained Initializations
|
https://proceedings.mlr.press/v235/mazzawi24a.html
|
Hanna Mazzawi, Javier Gonzalvo, Michael Wunder, Sammy Jerome, Benoit Dherin
|
https://proceedings.mlr.press/v235/mazzawi24a.html
|
ICML 2024
|
Training deep neural networks for large language models (LLMs) remains computationally very expensive. To mitigate this, network growing algorithms offer potential cost savings, but their underlying mechanisms are poorly understood. In this paper, we propose a theoretical framework using backward error analysis to illuminate the dynamics of mid-training network growth. Furthermore, we introduce Deep Fusion, an efficient network training approach that leverages pre-trained initializations of smaller networks, facilitating network growth from diverse sources. Our experiments validate the power of our theoretical framework in guiding the optimal use of Deep Fusion. With carefully optimized training dynamics, Deep Fusion demonstrates significant reductions in both training time and resource consumption. Importantly, these gains are achieved without sacrificing performance. We demonstrate reduced computational requirements, and improved generalization performance on a variety of NLP tasks and T5 model sizes.
|
https://proceedings.mlr.press/v235/mccauley24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mccauley24a/mccauley24a.pdf
|
https://openreview.net/forum?id=wea7nsJdMc
|
Incremental Topological Ordering and Cycle Detection with Predictions
|
https://proceedings.mlr.press/v235/mccauley24a.html
|
Samuel Mccauley, Benjamin Moseley, Aidin Niaparast, Shikha Singh
|
https://proceedings.mlr.press/v235/mccauley24a.html
|
ICML 2024
|
This paper leverages the framework of algorithms-with-predictions to design data structures for two fundamental dynamic graph problems: incremental topological ordering and cycle detection. In these problems, the input is a directed graph on $n$ nodes, and the $m$ edges arrive one by one. The data structure must maintain a topological ordering of the vertices at all times and detect if the newly inserted edge creates a cycle. The theoretically best worst-case algorithms for these problems have high update cost (polynomial in $n$ and $m$). In practice, greedy heuristics (that recompute the solution from scratch each time) perform well but can have high update cost in the worst case. In this paper, we bridge this gap by leveraging predictions to design a new learned data structure for these problems. Our data structure guarantees consistency, robustness, and smoothness with respect to predictions—that is, it has the best possible running time under perfect predictions, never performs worse than the best-known worst-case methods, and its running time degrades smoothly with the prediction error. Moreover, we demonstrate empirically that predictions, learned from a very small training dataset, are sufficient to provide significant speed-ups on real datasets.
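For context, here is a naive baseline for the cycle-detection half of the problem: before inserting an edge (u, v), check whether v already reaches u; if it does, the insertion would close a cycle. The paper's learned data structure augments this kind of procedure with predictions, which is not shown here.

```python
from collections import defaultdict

def reaches(adj, src, dst):
    """Iterative DFS reachability check."""
    stack, seen = [src], set()
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return False

class NaiveIncrementalDAG:
    """Naive incremental cycle detection: reject edge (u, v) if v already reaches u.
    Baseline sketch only; the paper's prediction-augmented structure is not shown."""
    def __init__(self):
        self.adj = defaultdict(list)

    def insert(self, u, v):
        if reaches(self.adj, v, u):
            return False          # rejected: (u, v) would create a cycle
        self.adj[u].append(v)
        return True

g = NaiveIncrementalDAG()
print(g.insert(1, 2), g.insert(2, 3), g.insert(3, 1))  # True True False
```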
|
https://proceedings.mlr.press/v235/mcduff24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mcduff24a/mcduff24a.pdf
|
https://openreview.net/forum?id=7JKVPNEBkU
|
Position: Standardization of Behavioral Use Clauses is Necessary for the Adoption of Responsible Licensing of AI
|
https://proceedings.mlr.press/v235/mcduff24a.html
|
Daniel Mcduff, Tim Korjakow, Scott Cambo, Jesse Josua Benjamin, Jenny Lee, Yacine Jernite, Carlos Muñoz Ferrandis, Aaron Gokaslan, Alek Tarkowski, Joseph Lindley, A. Feder Cooper, Danish Contractor
|
https://proceedings.mlr.press/v235/mcduff24a.html
|
ICML 2024
|
Growing concerns over negligent or malicious uses of AI have increased the appetite for tools that help manage the risks of the technology. In 2018, licenses with behavioral-use clauses (commonly referred to as Responsible AI Licenses) were proposed to give developers a framework for releasing AI assets while specifying restrictions on use to mitigate negative applications. As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses. Notable models licensed with behavioral-use clauses include BLOOM and LLaMA2 (language), Stable Diffusion (image), and GRID (robotics). This paper explores why and how these licenses have been adopted, and why and how they have been adapted to fit particular use cases. We use a mixed-methods methodology of qualitative interviews, clustering of license clauses, and quantitative analysis of license adoption. Based on this evidence we take the position that responsible AI licenses need standardization to avoid confusing users or diluting their impact. At the same time, customization of behavioral restrictions is also appropriate in some contexts (e.g., medical domains). We advocate for “standardized customization” that can meet users’ needs and can be supported via tooling.
|
https://proceedings.mlr.press/v235/mcmahan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mcmahan24a/mcmahan24a.pdf
|
https://openreview.net/forum?id=y6y2HauOpR
|
Roping in Uncertainty: Robustness and Regularization in Markov Games
|
https://proceedings.mlr.press/v235/mcmahan24a.html
|
Jeremy Mcmahan, Giovanni Artiglio, Qiaomin Xie
|
https://proceedings.mlr.press/v235/mcmahan24a.html
|
ICML 2024
|
We study robust Markov games (RMG) with $s$-rectangular uncertainty. We show a general equivalence between computing a robust Nash equilibrium (RNE) of an $s$-rectangular RMG and computing a Nash equilibrium (NE) of an appropriately constructed regularized MG. The equivalence result yields a planning algorithm for solving $s$-rectangular RMGs, as well as provable robustness guarantees for policies computed using regularized methods. However, we show that even for just reward-uncertain two-player zero-sum matrix games, computing an RNE is PPAD-hard. Consequently, we derive a special uncertainty structure called efficient player-decomposability and show that RNEs for two-player zero-sum RMGs in this class can be provably computed in polynomial time. This class includes commonly used uncertainty sets such as $L_1$ and $L_\infty$ ball uncertainty sets.
|
https://proceedings.mlr.press/v235/meeus24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/meeus24a/meeus24a.pdf
|
https://openreview.net/forum?id=LDq1JPdc55
|
Copyright Traps for Large Language Models
|
https://proceedings.mlr.press/v235/meeus24a.html
|
Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre De Montjoye
|
https://proceedings.mlr.press/v235/meeus24a.html
|
ICML 2024
|
Questions of fair use of copyright-protected content to train Large Language Models (LLMs) are being actively debated. Document-level inference has been proposed as a new task: inferring from black-box access to the trained model whether a piece of content has been seen during training. SOTA methods however rely on naturally occurring memorization of (part of) the content. While very effective against models that memorize significantly, we hypothesize - and later confirm - that they will not work against models that do not naturally memorize, e.g. medium-size 1B models. We here propose to use copyright traps, the inclusion of fictitious entries in original content, to detect the use of copyrighted materials in LLMs with a focus on models where memorization does not naturally occur. We carefully design a randomized controlled experimental setup, inserting traps into original content (books) and train a 1.3B LLM from scratch. We first validate that the use of content in our target model would be undetectable using existing methods. We then show, contrary to intuition, that even medium-length trap sentences repeated a significant number of times (100) are not detectable using existing methods. However, we show that longer sequences repeated a large number of times can be reliably detected (AUC=0.75) and used as copyright traps. Beyond copyright applications, our findings contribute to the study of LLM memorization: the randomized controlled setup enables us to draw causal relationships between memorization and certain sequence properties such as repetition in model training data and perplexity.
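A hedged sketch of a perplexity-style check in the spirit of the detection methods discussed above: trap sequences repeated many times during training should receive markedly lower perplexity than unseen control text. The "gpt2" model and the sentences below are placeholders; the paper's actual detection statistic and its 1.3B target model trained from scratch are not reproduced here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    """Perplexity of `text` under a causal LM (exponential of the mean next-token loss)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Placeholder model and made-up sentences; illustrative comparison only.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ppl_trap = perplexity(lm, tok, "a fictitious trap sentence inserted repeatedly into the corpus")
ppl_ctrl = perplexity(lm, tok, "a control sentence the model has never been trained on")
print(ppl_trap, ppl_ctrl)
```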
|
https://proceedings.mlr.press/v235/melas-kyriazi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/melas-kyriazi24a/melas-kyriazi24a.pdf
|
https://openreview.net/forum?id=swTG6xju8O
|
IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation
|
https://proceedings.mlr.press/v235/melas-kyriazi24a.html
|
Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos
|
https://proceedings.mlr.press/v235/melas-kyriazi24a.html
|
ICML 2024
|
Most text-to-3D generators build upon off-the-shelf text-to-image models trained on billions of images. They use variants of Score Distillation Sampling (SDS), which is slow, somewhat unstable, and prone to artifacts. A mitigation is to fine-tune the 2D generator to be multi-view aware, which can help distillation or can be combined with reconstruction networks to output 3D objects directly. In this paper, we further explore the design space of text-to-3D models. We significantly improve multi-view generation by considering video instead of image generators. Combined with a 3D reconstruction algorithm which, by using Gaussian splatting, can optimize a robust image-based loss, we directly produce high-quality 3D outputs from the generated views. Our new method, IM-3D, reduces the number of evaluations of the 2D generator network 10-100$\times$, resulting in a much more efficient pipeline, better quality, fewer geometric inconsistencies, and higher yield of usable 3D assets.
|
https://proceedings.mlr.press/v235/melnyk24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/melnyk24a/melnyk24a.pdf
|
https://openreview.net/forum?id=pFWmHUdJE5
|
O$n$ Learning Deep O($n$)-Equivariant Hyperspheres
|
https://proceedings.mlr.press/v235/melnyk24a.html
|
Pavlo Melnyk, Michael Felsberg, Mårten Wadenbäck, Andreas Robinson, Cuong Le
|
https://proceedings.mlr.press/v235/melnyk24a.html
|
ICML 2024
|
In this paper, we utilize hyperspheres and regular $n$-simplexes and propose an approach to learning deep features equivariant under the transformations of $n$D reflections and rotations, encompassed by the powerful group of O$(n)$. Namely, we propose O$(n)$-equivariant neurons with spherical decision surfaces that generalize to any dimension $n$, which we call Deep Equivariant Hyperspheres. We demonstrate how to combine them in a network that directly operates on the basis of the input points and propose an invariant operator based on the relation between two points and a sphere, which as we show, turns out to be a Gram matrix. Using synthetic and real-world data in $n$D, we experimentally verify our theoretical contributions and find that our approach is superior to the competing methods for O$(n)$-equivariant benchmark datasets (classification and regression), demonstrating a favorable speed/performance trade-off. The code is available on GitHub.
|
https://proceedings.mlr.press/v235/memmel24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/memmel24a/memmel24a.pdf
|
https://openreview.net/forum?id=mcg6jppkwb
|
Position: Tensor Networks are a Valuable Asset for Green AI
|
https://proceedings.mlr.press/v235/memmel24a.html
|
Eva Memmel, Clara Menzen, Jetze Schuurmans, Frederiek Wesel, Kim Batselier
|
https://proceedings.mlr.press/v235/memmel24a.html
|
ICML 2024
|
For the first time, this position paper introduces a fundamental link between tensor networks (TNs) and Green AI, highlighting their synergistic potential to enhance both the inclusivity and sustainability of AI research. We argue that TNs are valuable for Green AI due to their strong mathematical backbone and inherent logarithmic compression potential. We undertake a comprehensive review of the ongoing discussions on Green AI, emphasizing the importance of sustainability and inclusivity in AI research to demonstrate the significance of establishing the link between Green AI and TNs. To support our position, we first provide a comprehensive overview of efficiency metrics proposed in Green AI literature and then evaluate examples of TNs in the fields of kernel machines and deep learning using the proposed efficiency metrics. This position paper aims to incentivize meaningful, constructive discussions by bridging fundamental principles of Green AI and TNs. We advocate for researchers to seriously evaluate the integration of TNs into their research projects, and in alignment with the link established in this paper, we support prior calls encouraging researchers to treat Green AI principles as a research priority.
|
https://proceedings.mlr.press/v235/meng24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/meng24a/meng24a.pdf
|
https://openreview.net/forum?id=ZctlF8RlV4
|
OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization
|
https://proceedings.mlr.press/v235/meng24a.html
|
Xiang Meng, Shibal Ibrahim, Kayhan Behdin, Hussein Hazimeh, Natalia Ponomareva, Rahul Mazumder
|
https://proceedings.mlr.press/v235/meng24a.html
|
ICML 2024
|
Structured pruning is a promising approach for reducing the inference costs of large vision and language models. By removing carefully chosen structures, e.g., neurons or attention heads, the improvements from this approach can be realized on standard deep learning hardware. In this work, we focus on structured pruning in the one-shot (post-training) setting, which does not require model retraining after pruning. We propose a novel combinatorial optimization framework for this problem, based on a layer-wise reconstruction objective and a careful reformulation that allows for scalable optimization. Moreover, we design a new local combinatorial optimization algorithm, which exploits low-rank updates for efficient local search. Our framework is time and memory-efficient and considerably improves upon state-of-the-art one-shot methods on vision models (e.g., ResNet50, MobileNet) and language models (e.g., OPT-1.3B – OPT-30B). For language models, e.g., OPT-2.7B, OSSCAR can lead to $125\times$ lower test perplexity on WikiText with $2\times$ inference time speedup in comparison to the state-of-the-art ZipLM approach. Our framework is also $6\times$ – $8\times$ faster. Notably, our work considers models with tens of billions of parameters, which is up to $100\times$ larger than what has been previously considered in the structured pruning literature. Our code is available at https://github.com/mazumder-lab/OSSCAR.
|
https://proceedings.mlr.press/v235/meng24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/meng24b/meng24b.pdf
|
https://openreview.net/forum?id=sZla6SnooP
|
Physics-Informed Neural Network Policy Iteration: Algorithms, Convergence, and Verification
|
https://proceedings.mlr.press/v235/meng24b.html
|
Yiming Meng, Ruikun Zhou, Amartya Mukherjee, Maxwell Fitzsimmons, Christopher Song, Jun Liu
|
https://proceedings.mlr.press/v235/meng24b.html
|
ICML 2024
|
Solving nonlinear optimal control problems is a challenging task, particularly for high-dimensional problems. We propose algorithms for model-based policy iterations to solve nonlinear optimal control problems with convergence guarantees. The main component of our approach is an iterative procedure that utilizes neural approximations to solve linear partial differential equations (PDEs), ensuring convergence. We present two variants of the algorithms. The first variant formulates the optimization problem as a linear least square problem, drawing inspiration from extreme learning machine (ELM) for solving PDEs. This variant efficiently handles low-dimensional problems with high accuracy. The second variant is based on a physics-informed neural network (PINN) for solving PDEs and has the potential to address high-dimensional problems. We demonstrate that both algorithms outperform traditional approaches, such as Galerkin methods, by a significant margin. We provide a theoretical analysis of both algorithms in terms of convergence of neural approximations towards the true optimal solutions in a general setting. Furthermore, we employ formal verification techniques to demonstrate the verifiable stability of the resulting controllers.
|
https://proceedings.mlr.press/v235/meng24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/meng24c/meng24c.pdf
|
https://openreview.net/forum?id=EhU0xBSP4l
|
Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data
|
https://proceedings.mlr.press/v235/meng24c.html
|
Xuran Meng, Difan Zou, Yuan Cao
|
https://proceedings.mlr.press/v235/meng24c.html
|
ICML 2024
|
Modern deep learning models are usually highly over-parameterized so that they can overfit the training data. Surprisingly, such overfitting neural networks can usually still achieve high prediction accuracy. To study this “benign overfitting” phenomenon, a line of recent works has theoretically studied the learning of linear models and two-layer neural networks. However, most of these analyses are still limited to the very simple learning problems where the Bayes-optimal classifier is linear. In this work, we investigate a class of XOR-type classification tasks with label-flipping noises. We show that, under a certain condition on the sample complexity and signal-to-noise ratio, an over-parameterized ReLU CNN trained by gradient descent can achieve near Bayes-optimal accuracy. Moreover, we also establish a matching lower bound result showing that when the previous condition is not satisfied, the prediction accuracy of the obtained CNN is an absolute constant away from the Bayes-optimal rate. Our result demonstrates that CNNs have a remarkable capacity to efficiently learn XOR problems, even in the presence of highly correlated features.
|
https://proceedings.mlr.press/v235/mergny24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mergny24a/mergny24a.pdf
|
https://openreview.net/forum?id=W97gFmrKe6
|
Spectral Phase Transition and Optimal PCA in Block-Structured Spiked Models
|
https://proceedings.mlr.press/v235/mergny24a.html
|
Pierre Mergny, Justin Ko, Florent Krzakala
|
https://proceedings.mlr.press/v235/mergny24a.html
|
ICML 2024
|
We discuss the inhomogeneous Wigner spike model, a theoretical framework recently introduced to study structured noise in various learning scenarios, through the prism of random matrix theory, with a specific focus on its spectral properties. Our primary objective is to find an optimal spectral method, and to extend the celebrated (BBP) phase transition criterion —well-known in the homogeneous case— to our inhomogeneous, block-structured, Wigner model. We provide a thorough rigorous analysis of a transformed matrix and show that the transition for the appearance of 1) an outlier outside the bulk of the limiting spectral distribution and 2) a positive overlap between the associated eigenvector and the signal, occurs precisely at the optimal threshold, making the proposed spectral method optimal within the class of iterative methods for the inhomogeneous Wigner problem.
|
https://proceedings.mlr.press/v235/merrill24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/merrill24a/merrill24a.pdf
|
https://openreview.net/forum?id=QZgo9JZpLq
|
The Illusion of State in State-Space Models
|
https://proceedings.mlr.press/v235/merrill24a.html
|
William Merrill, Jackson Petty, Ashish Sabharwal
|
https://proceedings.mlr.press/v235/merrill24a.html
|
ICML 2024
|
State-space models (SSMs) have emerged as a potential alternative architecture for building large language models (LLMs) compared to the previously ubiquitous transformer architecture. One theoretical weakness of transformers is that they cannot express certain kinds of sequential computation and state tracking (Merrill & Sabharwal, 2023), which SSMs are explicitly designed to address via their close architectural similarity to recurrent neural networks (RNNs). But do SSMs truly have an advantage (over transformers) in expressive power for state tracking? Surprisingly, the answer is no. Our analysis reveals that the expressive power of SSMs is limited very similarly to transformers: SSMs cannot express computation outside the complexity class $\mathsf{TC}^0$. In particular, this means they cannot solve simple state-tracking problems like permutation composition. It follows that SSMs are provably unable to accurately track chess moves with certain notation, evaluate code, or track entities in a long narrative. To supplement our formal analysis, we report experiments showing that Mamba-style SSMs indeed struggle with state tracking. Thus, despite its recurrent formulation, the “state” in an SSM is an illusion: SSMs have similar expressiveness limitations to non-recurrent models like transformers, which may fundamentally limit their ability to solve real-world state-tracking problems.
|
https://proceedings.mlr.press/v235/merth24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/merth24a/merth24a.pdf
|
https://openreview.net/forum?id=r8k5JrGip6
|
Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation
|
https://proceedings.mlr.press/v235/merth24a.html
|
Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi
|
https://proceedings.mlr.press/v235/merth24a.html
|
ICML 2024
|
Despite the successes of large language models (LLMs), they exhibit significant drawbacks, particularly when processing long contexts. Their inference cost scales quadratically with respect to sequence length, making it expensive for deployment in some real-world text processing applications, such as retrieval-augmented generation (RAG). Additionally, LLMs also exhibit the "distraction phenomenon", where irrelevant context in the prompt degrades output quality. To address these drawbacks, we propose a novel RAG prompting methodology, superposition prompting, which can be directly applied to pre-trained transformer-based LLMs without the need for fine-tuning. At a high level, superposition prompting allows the LLM to process input documents in parallel prompt paths, discarding paths once they are deemed irrelevant. We demonstrate the capability of our method to simultaneously enhance time efficiency across a variety of question-answering benchmarks using multiple pre-trained LLMs. Furthermore, our technique significantly improves accuracy when the retrieved context is large relative to the context the model was trained on. For example, our approach facilitates a $93\times$ reduction in compute time while improving accuracy by 43% on the NaturalQuestions-Open dataset with the MPT-7B instruction-tuned model over naive RAG.
|
https://proceedings.mlr.press/v235/miao24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/miao24a/miao24a.pdf
|
https://openreview.net/forum?id=f6QenZyyeP
|
How Deep Do We Need: Accelerating Training and Inference of Neural ODEs via Control Perspective
|
https://proceedings.mlr.press/v235/miao24a.html
|
Keyan Miao, Konstantinos Gatsis
|
https://proceedings.mlr.press/v235/miao24a.html
|
ICML 2024
|
Neural Ordinary Differential Equations (ODEs) have shown promise in learning continuous dynamics. However, their slow training and inference speed hinder wider applications. In this paper, we propose to optimize Neural ODEs from a spatial and temporal perspective, drawing inspiration from control theory. We aim to find a reasonable depth of the network, accelerating both training and inference while maintaining network performance. Two approaches are proposed. One reformulates training as a minimum-time optimal control problem directly in a single stage to search for the terminal time and network weights. The second approach uses pre-training coupled with a Lyapunov method in an initial stage, and then at a secondary stage introduces a safe terminal time updating mechanism in the forward direction. Experimental results demonstrate the effectiveness of speeding up Neural ODEs.
|
https://proceedings.mlr.press/v235/miao24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/miao24b/miao24b.pdf
|
https://openreview.net/forum?id=vJx6fld6l0
|
Locality-Sensitive Hashing-Based Efficient Point Transformer with Applications in High-Energy Physics
|
https://proceedings.mlr.press/v235/miao24b.html
|
Siqi Miao, Zhiyuan Lu, Mia Liu, Javier Duarte, Pan Li
|
https://proceedings.mlr.press/v235/miao24b.html
|
ICML 2024
|
This study introduces a novel transformer model optimized for large-scale point cloud processing in scientific domains such as high-energy physics (HEP) and astrophysics. Addressing the limitations of graph neural networks and standard transformers, our model integrates local inductive bias and achieves near-linear complexity with hardware-friendly regular operations. One contribution of this work is the quantitative analysis of the error-complexity tradeoff of various sparsification techniques for building efficient transformers. Our findings highlight the superiority of using locality-sensitive hashing (LSH), especially OR & AND-construction LSH, in kernel approximation for large-scale point cloud data with local inductive bias. Based on this finding, we propose LSH-based Efficient Point Transformer (HEPT), which combines E$^2$LSH with OR & AND constructions and is built upon regular computations. HEPT demonstrates remarkable performance on two critical yet time-consuming HEP tasks, significantly outperforming existing GNNs and transformers in accuracy and computational speed, marking a significant advancement in geometric deep learning and large-scale scientific data processing. Our code is available at https://github.com/Graph-COM/HEPT.
|
https://proceedings.mlr.press/v235/miao24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/miao24c/miao24c.pdf
|
https://openreview.net/forum?id=zrQIc9mQQN
|
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data
|
https://proceedings.mlr.press/v235/miao24c.html
|
Rui Miao, Kaixiong Zhou, Yili Wang, Ninghao Liu, Ying Wang, Xin Wang
|
https://proceedings.mlr.press/v235/miao24c.html
|
ICML 2024
|
Graph neural networks (GNNs) have exhibited prominent performance in learning graph-structured data. For the node classification task, based on the i.i.d. assumption among node labels, traditional supervised learning simply sums up the cross-entropy losses of the independent training nodes and applies the average loss to optimize GNNs’ weights. But unlike other data formats, the nodes are naturally connected. It is found that the independent distribution modeling of node labels restricts GNNs’ capability to generalize over the entire graph and defend against adversarial attacks. In this work, we propose a new framework, termed joint-cluster supervised learning, to model the joint distribution of each node with its corresponding cluster. We learn the joint distribution of node and cluster labels conditioned on their representations, and train GNNs with the obtained joint loss. In this way, the data-label reference signals extracted from the local cluster explicitly strengthen the discrimination ability on the target node. Extensive experiments demonstrate that our joint-cluster supervised learning can effectively bolster GNNs’ node classification accuracy. Furthermore, benefiting from reference signals that may be free from malicious interference, our learning paradigm significantly protects node classification from being affected by adversarial attacks.
|
https://proceedings.mlr.press/v235/miao24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/miao24d/miao24d.pdf
|
https://openreview.net/forum?id=AFAX28TdO4
|
DFlow: A Generative Model Combining Denoising AutoEncoder and Normalizing Flow for High Fidelity Waveform Generation
|
https://proceedings.mlr.press/v235/miao24d.html
|
Chenfeng Miao, Qingying Zhu, Minchuan Chen, Wei Hu, Zijian Li, Shaojun Wang, Jing Xiao
|
https://proceedings.mlr.press/v235/miao24d.html
|
ICML 2024
|
In this work, we present DFlow, a novel generative framework that combines Normalizing Flow (NF) with a Denoising AutoEncoder (DAE), for high-fidelity waveform generation. With a carefully designed structure, DFlow seamlessly integrates the capabilities of both NF and DAE, resulting in significantly improved performance compared to standard NF models. Experimental results showcase DFlow’s superiority, achieving the highest MOS score among the existing methods on commonly used datasets and the fastest synthesis speed among all likelihood models. We further demonstrate the generalization ability of DFlow by generating high-quality out-of-distribution audio samples, such as singing and music audio. Additionally, we extend the model capacity of DFlow by scaling up both the model size and training set size. Our large-scale universal vocoder, DFlow-XL, achieves highly competitive performance against the best universal vocoder, BigVGAN.
|
https://proceedings.mlr.press/v235/michel24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/michel24a/michel24a.pdf
|
https://openreview.net/forum?id=UW5nO9NGjt
|
Rethinking Momentum Knowledge Distillation in Online Continual Learning
|
https://proceedings.mlr.press/v235/michel24a.html
|
Nicolas Michel, Maorong Wang, Ling Xiao, Toshihiko Yamasaki
|
https://proceedings.mlr.press/v235/michel24a.html
|
ICML 2024
|
Online Continual Learning (OCL) addresses the problem of training neural networks on a continuous data stream where multiple classification tasks emerge in sequence. In contrast to offline Continual Learning, data can be seen only once in OCL, which is a very severe constraint. In this context, replay-based strategies have achieved impressive results and most state-of-the-art approaches heavily depend on them. While Knowledge Distillation (KD) has been extensively used in offline Continual Learning, it remains under-exploited in OCL, despite its high potential. In this paper, we analyze the challenges in applying KD to OCL and give empirical justifications. We introduce a direct yet effective methodology for applying Momentum Knowledge Distillation (MKD) to many flagship OCL methods and demonstrate its capabilities to enhance existing approaches. In addition to improving existing state-of-the-art accuracy by more than 10 percentage points on ImageNet100, we shed light on MKD's internal mechanics and impact during training in OCL. We argue that, similar to replay, MKD should be considered a central component of OCL. The code is available at https://github.com/Nicolas1203/mkd_ocl.
|
https://proceedings.mlr.press/v235/micheli24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/micheli24a/micheli24a.pdf
|
https://openreview.net/forum?id=BiWIERWBFX
|
Efficient World Models with Context-Aware Tokenization
|
https://proceedings.mlr.press/v235/micheli24a.html
|
Vincent Micheli, Eloi Alonso, François Fleuret
|
https://proceedings.mlr.press/v235/micheli24a.html
|
ICML 2024
|
Scaling up deep Reinforcement Learning (RL) methods presents a significant challenge. Following developments in generative modelling, model-based RL positions itself as a strong contender. Recent advances in sequence modelling have led to effective transformer-based world models, albeit at the price of heavy computations due to the long sequences of tokens required to accurately simulate environments. In this work, we propose $\Delta$-IRIS, a new agent with a world model architecture composed of a discrete autoencoder that encodes stochastic deltas between time steps and an autoregressive transformer that predicts future deltas by summarizing the current state of the world with continuous tokens. In the Crafter benchmark, $\Delta$-IRIS sets a new state of the art at multiple frame budgets, while being an order of magnitude faster to train than previous attention-based approaches. We release our code and models at https://github.com/vmicheli/delta-iris.
|
https://proceedings.mlr.press/v235/mihelich24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mihelich24a/mihelich24a.pdf
|
https://openreview.net/forum?id=ALc7DmOTI2
|
Interplay of ROC and Precision-Recall AUCs: Theoretical Limits and Practical Implications in Binary Classification
|
https://proceedings.mlr.press/v235/mihelich24a.html
|
Martin Mihelich, François Castagnos, Charles Dognin
|
https://proceedings.mlr.press/v235/mihelich24a.html
|
ICML 2024
|
In this paper, we present two key theorems that should have significant implications for machine learning practitioners working with binary classification models. The first theorem provides a formula to calculate the maximum and minimum Precision-Recall AUC ($AUC_{PR}$) for a fixed Receiver Operating Characteristic AUC ($AUC_{ROC}$), demonstrating the variability of $AUC_{PR}$ even with a high $AUC_{ROC}$. This is particularly relevant for imbalanced datasets, where a good $AUC_{ROC}$ does not necessarily imply a high $AUC_{PR}$. The second theorem inversely establishes the bounds of $AUC_{ROC}$ given a fixed $AUC_{PR}$. Our findings highlight that in certain situations, especially for imbalanced datasets, it is more informative to prioritize $AUC_{PR}$ over $AUC_{ROC}$. Additionally, we introduce a method to determine when a higher $AUC_{ROC}$ in one model implies a higher $AUC_{PR}$ in another and vice versa, streamlining the model evaluation process.
|
https://proceedings.mlr.press/v235/mikhael24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mikhael24a/mikhael24a.pdf
|
https://openreview.net/forum?id=0mYAK6Yhhm
|
CLIPZyme: Reaction-Conditioned Virtual Screening of Enzymes
|
https://proceedings.mlr.press/v235/mikhael24a.html
|
Peter Mikhael, Itamar Chinn, Regina Barzilay
|
https://proceedings.mlr.press/v235/mikhael24a.html
|
ICML 2024
|
Computational screening of naturally occurring proteins has the potential to identify efficient catalysts among the hundreds of millions of sequences that remain uncharacterized. Current experimental methods remain time-, cost-, and labor-intensive, limiting the number of enzymes they can reasonably screen. In this work, we propose a computational framework for in-silico enzyme screening. Through a contrastive objective, we train CLIPZyme to encode and align representations of enzyme structures and reaction pairs. With no standard computational baseline, we compare CLIPZyme to existing EC (enzyme commission) predictors applied to virtual enzyme screening and show improved performance in scenarios where limited information on the reaction is available (BEDROC$_{85}$ of 44.69%). Additionally, we evaluate combining EC predictors with CLIPZyme and show its generalization capacity on both unseen reactions and protein clusters.
|
https://proceedings.mlr.press/v235/miller24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/miller24a/miller24a.pdf
|
https://openreview.net/forum?id=W4pB7VbzZI
|
FlowMM: Generating Materials with Riemannian Flow Matching
|
https://proceedings.mlr.press/v235/miller24a.html
|
Benjamin Kurt Miller, Ricky T. Q. Chen, Anuroop Sriram, Brandon M Wood
|
https://proceedings.mlr.press/v235/miller24a.html
|
ICML 2024
|
Crystalline materials are a fundamental component in next-generation technologies, yet modeling their distribution presents unique computational challenges. Of the plausible arrangements of atoms in a periodic lattice only a vanishingly small percentage are thermodynamically stable, which is a key indicator of the materials that can be experimentally realized. Two fundamental tasks in this area are to (a) predict the stable crystal structure of a known composition of elements and (b) propose novel compositions along with their stable structures. We present FlowMM, a pair of generative models that achieve state-of-the-art performance on both tasks while being more efficient and more flexible than competing methods. We extend Riemannian Flow Matching to suit the symmetries inherent to crystals: translation, rotation, permutation, and periodic boundary conditions. Our framework enables the freedom to choose the flow base distributions, drastically simplifying the problem of learning crystal structures compared with diffusion models. In addition to standard benchmarks, we validate FlowMM’s generated structures with quantum chemistry calculations, demonstrating that it is $\sim$3x more efficient, in terms of integration steps, at finding stable materials compared to previous open methods.
|
https://proceedings.mlr.press/v235/min24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/min24a/min24a.pdf
|
https://openreview.net/forum?id=GYGkt2M8ee
|
Can Implicit Bias Imply Adversarial Robustness?
|
https://proceedings.mlr.press/v235/min24a.html
|
Hancheng Min, Rene Vidal
|
https://proceedings.mlr.press/v235/min24a.html
|
ICML 2024
|
The implicit bias of gradient-based training algorithms has been considered mostly beneficial as it leads to trained networks that often generalize well. However, Frei et al. (2023) show that such implicit bias can harm adversarial robustness. Specifically, they show that if the data consists of clusters with small inter-cluster correlation, a shallow (two-layer) ReLU network trained by gradient flow generalizes well, but it is not robust to adversarial attacks of small radius. Moreover, this phenomenon occurs despite the existence of a much more robust classifier that can be explicitly constructed from a shallow network. In this paper, we extend recent analyses of neuron alignment to show that a shallow network with a polynomial ReLU activation (pReLU) trained by gradient flow not only generalizes well but is also robust to adversarial attacks. Our results highlight the importance of the interplay between data structure and architecture design in the implicit bias and robustness of trained networks.
|
https://proceedings.mlr.press/v235/ming24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ming24a/ming24a.pdf
|
https://openreview.net/forum?id=RIMRKeeVsr
|
Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models
|
https://proceedings.mlr.press/v235/ming24a.html
|
Yifei Ming, Yixuan Li
|
https://proceedings.mlr.press/v235/ming24a.html
|
ICML 2024
|
Pre-trained contrastive vision-language models have demonstrated remarkable performance across a wide range of tasks. However, they often struggle on fine-grained datasets with categories not adequately represented during pre-training, which makes adaptation necessary. Recent works have shown promising results by utilizing samples from web-scale databases for retrieval-augmented adaptation, especially in low-data regimes. Despite the empirical success, understanding how retrieval impacts the adaptation of vision-language models remains an open research question. In this work, we adopt a reflective perspective by presenting a systematic study to understand the roles of key components in retrieval-augmented adaptation. We unveil new insights on uni-modal and cross-modal retrieval and highlight the critical role of logit ensemble for effective adaptation. We further present theoretical underpinnings that directly support our empirical observations.
|
https://proceedings.mlr.press/v235/mirzaei24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mirzaei24a/mirzaei24a.pdf
|
https://openreview.net/forum?id=yOe5lqDPvM
|
RODEO: Robust Outlier Detection via Exposing Adaptive Out-of-Distribution Samples
|
https://proceedings.mlr.press/v235/mirzaei24a.html
|
Hossein Mirzaei, Mohammad Jafari, Hamid Reza Dehbashi, Ali Ansari, Sepehr Ghobadi, Masoud Hadi, Arshia Soltani Moakhar, Mohammad Azizmalayeri, Mahdieh Soleymani Baghshah, Mohammad Hossein Rohban
|
https://proceedings.mlr.press/v235/mirzaei24a.html
|
ICML 2024
|
In recent years, there have been significant improvements in various forms of image outlier detection. However, outlier detection performance under adversarial settings lags far behind that in standard settings. This is due to the lack of effective exposure to adversarial scenarios during training, especially on unseen outliers, leaving detection models unable to learn robust features. To bridge this gap, we introduce RODEO, a data-centric approach that generates effective outliers for robust outlier detection. More specifically, we show that incorporating outlier exposure (OE) and adversarial training could be an effective strategy for this purpose, as long as the exposed training outliers meet certain characteristics, including diversity, conceptual differentiability, and analogy to the inlier samples. We leverage a text-to-image model to achieve this goal. We demonstrate both quantitatively and qualitatively that our adaptive OE method effectively generates “diverse” and “near-distribution” outliers, leveraging information from both text and image domains. Moreover, our experimental results show that utilizing our synthesized outliers significantly enhances the performance of the outlier detector, particularly in adversarial settings.
|
https://proceedings.mlr.press/v235/mishchenko24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mishchenko24a/mishchenko24a.pdf
|
https://openreview.net/forum?id=JJpOssn0uP
|
Prodigy: An Expeditiously Adaptive Parameter-Free Learner
|
https://proceedings.mlr.press/v235/mishchenko24a.html
|
Konstantin Mishchenko, Aaron Defazio
|
https://proceedings.mlr.press/v235/mishchenko24a.html
|
ICML 2024
|
We consider the problem of estimating the learning rate in adaptive methods, such as AdaGrad and Adam. We propose Prodigy, an algorithm that provably estimates the distance to the solution $D$, which is needed to set the learning rate optimally. At its core, Prodigy is a modification of the D-Adaptation method for learning-rate-free learning. It improves upon the convergence rate of D-Adaptation by a factor of $\mathcal{O}(\sqrt{\log(D/d_0)})$, where $d_0$ is the initial estimate of $D$. We test Prodigy on 12 common logistic-regression benchmark datasets, VGG11 and ResNet-50 training on CIFAR10, ViT training on Imagenet, LSTM training on IWSLT14, DLRM training on Criteo dataset, VarNet on Knee MRI dataset, as well as RoBERTa and GPT transformer training on BookWiki. Our experimental results show that our approach consistently outperforms D-Adaptation and reaches test accuracy values close to that of hand-tuned Adam.
|
https://proceedings.mlr.press/v235/mishra24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mishra24a/mishra24a.pdf
|
https://openreview.net/forum?id=GJzqRKOdRi
|
From Inverse Optimization to Feasibility to ERM
|
https://proceedings.mlr.press/v235/mishra24a.html
|
Saurabh Kumar Mishra, Anant Raj, Sharan Vaswani
|
https://proceedings.mlr.press/v235/mishra24a.html
|
ICML 2024
|
Inverse optimization involves inferring unknown parameters of an optimization problem from known solutions and is widely used in fields such as transportation, power systems, and healthcare. We study the contextual inverse optimization setting that utilizes additional contextual information to better predict the unknown problem parameters. We focus on contextual inverse linear programming (CILP) addressing the challenges posed by the non-differentiable nature of LPs. For a linear prediction model, we reduce CILP to a convex feasibility problem allowing the use of standard algorithms such as alternating projections. The resulting algorithm for CILP is equipped with theoretical convergence guarantees without additional assumptions such as degeneracy or interpolation. Next, we reduce CILP to empirical risk minimization (ERM) on a smooth, convex loss that satisfies the Polyak-Lojasiewicz condition. This reduction enables the use of scalable first-order optimization methods to solve large non-convex problems while maintaining theoretical guarantees in the convex setting. Subsequently, we use the reduction to ERM to quantify the generalization performance of the proposed algorithm on previously unseen instances. Finally, we experimentally validate our approach on synthetic and real-world problems and demonstrate improved performance compared to existing methods.
|
https://proceedings.mlr.press/v235/misra24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/misra24a/misra24a.pdf
|
https://openreview.net/forum?id=CgO2cuWWLV
|
Provable Interactive Learning with Hindsight Instruction Feedback
|
https://proceedings.mlr.press/v235/misra24a.html
|
Dipendra Misra, Aldo Pacchiano, Robert E. Schapire
|
https://proceedings.mlr.press/v235/misra24a.html
|
ICML 2024
|
We study interactive learning in a setting where the agent has to generate a response (e.g., an action or trajectory) given a context and an instruction. In contrast to typical approaches that train the system using reward or expert supervision on the response, we study learning with hindsight labeling, where a teacher provides an instruction that is most suitable for the agent’s generated response. This hindsight labeling of an instruction is often easier to provide than expert supervision of the optimal response, which may require expert knowledge or be impractical to elicit. We initiate the theoretical analysis of interactive learning with hindsight labeling. We first provide a lower bound showing that in general, the regret of any algorithm must scale with the size of the agent’s response space. Next, we study a specialized setting where the underlying instruction-response distribution can be decomposed as a low-rank matrix. We introduce an algorithm called LORIL for this setting and show that it is a no-regret algorithm whose regret scales with $\sqrt{T}$ and depends on the intrinsic rank but not on the agent’s response space. We provide experiments showing the performance of LORIL in practice in two domains.
|
https://proceedings.mlr.press/v235/mitelut24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mitelut24a/mitelut24a.pdf
|
https://openreview.net/forum?id=rfvgdfd1K9
|
Position: Intent-aligned AI Systems Must Optimize for Agency Preservation
|
https://proceedings.mlr.press/v235/mitelut24a.html
|
Catalin Mitelut, Benjamin Smith, Peter Vamplew
|
https://proceedings.mlr.press/v235/mitelut24a.html
|
ICML 2024
|
A central approach to AI-safety research has been to generate aligned AI systems: i.e., systems that do not deceive users and yield actions or recommendations that humans might judge as consistent with their intentions and goals. Here we argue that truthful AIs aligned solely to human intent are insufficient and that preservation of humans' long-term agency may be a more robust standard that may need to be separated out and explicitly optimized for. We discuss the science of intent and control, examine how human intent can be manipulated, and provide a formal definition of agency-preserving AI-human interactions, focusing on forward-looking, explicit agency evaluations. Our work points to a novel pathway for human harm in AI-human interactions and proposes solutions to this challenge.
|
https://proceedings.mlr.press/v235/mitrovic24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mitrovic24a/mitrovic24a.pdf
|
https://openreview.net/forum?id=6h6ovHcC9G
|
Faster Streaming and Scalable Algorithms for Finding Directed Dense Subgraphs in Large Graphs
|
https://proceedings.mlr.press/v235/mitrovic24a.html
|
Slobodan Mitrovic, Theodore Pan
|
https://proceedings.mlr.press/v235/mitrovic24a.html
|
ICML 2024
|
Finding dense subgraphs is a fundamental algorithmic tool in data mining, community detection, and clustering. In this problem, the aim is to find an induced subgraph whose edge-to-vertex ratio is maximized. We show how to find a $(2+\epsilon)$-approximation of the directed densest subgraph on randomized streams in a single pass while using $O(n \cdot {\rm poly} \log n)$ memory on $n$-vertex graphs. In contrast, the approach by Bahmani et al. (VLDB 2012) uses $O(\log n)$ passes, and that of Esfandiari et al. (2015) makes one pass but uses $O(n^{3/2})$ memory; both algorithms also apply to arbitrary-ordered streams. Our techniques extend to Massively Parallel Computation (MPC), yielding a quadratic improvement over the state-of-the-art by Bahmani et al. (VLDB 2012 and WAW 2014). We empirically show that the quality of our output is essentially the same as that of Bahmani et al. (VLDB 2012) while being $2$ times faster on large graphs, even on non-randomly ordered streams.
|
https://proceedings.mlr.press/v235/mo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mo24a/mo24a.pdf
|
https://openreview.net/forum?id=lpHjmPvxW1
|
TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors
|
https://proceedings.mlr.press/v235/mo24a.html
|
Yichuan Mo, Hui Huang, Mingjie Li, Ang Li, Yisen Wang
|
https://proceedings.mlr.press/v235/mo24a.html
|
ICML 2024
|
Diffusion models have achieved notable success in image generation, but they remain highly vulnerable to backdoor attacks, which compromise their integrity by producing specific undesirable outputs when presented with a pre-defined trigger. In this paper, we investigate how to protect diffusion models from this dangerous threat. Specifically, we propose TERD, a backdoor defense framework that builds unified modeling for current attacks, which enables us to derive an accessible reversed loss. A trigger reversion strategy is further employed: an initial approximation of the trigger through noise sampled from a prior distribution, followed by refinement through differential multi-step samplers. Additionally, with the reversed trigger, we propose backdoor detection from the noise space, introducing the first backdoor input detection approach for diffusion models and a novel model detection algorithm that calculates the KL divergence between reversed and benign distributions. Extensive evaluations demonstrate that TERD secures a 100% True Positive Rate (TPR) and True Negative Rate (TNR) across datasets of varying resolutions. TERD also demonstrates nice adaptability to other Stochastic Differential Equation (SDE)-based models. Our code is available at https://github.com/PKU-ML/TERD.
|
https://proceedings.mlr.press/v235/modoranu24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/modoranu24a/modoranu24a.pdf
|
https://openreview.net/forum?id=OJTKlubFk1
|
Error Feedback Can Accurately Compress Preconditioners
|
https://proceedings.mlr.press/v235/modoranu24a.html
|
Ionut-Vlad Modoranu, Aleksei Kalinov, Eldar Kurtic, Elias Frantar, Dan Alistarh
|
https://proceedings.mlr.press/v235/modoranu24a.html
|
ICML 2024
|
Leveraging second-order information about the loss at the scale of deep networks is one of the main lines of approach for improving the performance of current optimizers for deep learning. Yet, existing approaches for accurate full-matrix preconditioning, such as Full-Matrix Adagrad (GGT) or Matrix-Free Approximate Curvature (M-FAC), suffer from massive storage costs when applied even to small-scale models, as they must store a sliding window of gradients, whose memory requirements are multiplicative in the model dimension. In this paper, we address this issue via a novel and efficient error-feedback technique that can be applied to compress preconditioners by up to two orders of magnitude in practice, without loss of convergence. Specifically, our approach compresses the gradient information via sparsification or low-rank compression before it is fed into the preconditioner, feeding the compression error back into future iterations. Extensive experiments on deep neural networks show that this approach can compress full-matrix preconditioners to up to 99% sparsity without accuracy loss, effectively removing the memory overhead of full-matrix preconditioners such as GGT and M-FAC.
|
https://proceedings.mlr.press/v235/mohamadi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mohamadi24a/mohamadi24a.pdf
|
https://openreview.net/forum?id=ad5I6No9G1
|
Why Do You Grok? A Theoretical Analysis on Grokking Modular Addition
|
https://proceedings.mlr.press/v235/mohamadi24a.html
|
Mohamad Amin Mohamadi, Zhiyuan Li, Lei Wu, Danica J. Sutherland
|
https://proceedings.mlr.press/v235/mohamadi24a.html
|
ICML 2024
|
We present a theoretical explanation of the “grokking” phenomenon (Power et al., 2022), where a model generalizes long after overfitting, for the originally-studied problem of modular addition. First, we show that early in gradient descent, so that the “kernel regime” approximately holds, no permutation-equivariant model can achieve small population error on modular addition unless it sees at least a constant fraction of all possible data points. Eventually, however, models escape the kernel regime. We show that one-hidden-layer quadratic networks that achieve zero training loss with bounded $\ell_\infty$ norm generalize well with substantially fewer training points, and further show such networks exist and can be found by gradient descent with small $\ell_\infty$ regularization. We further provide empirical evidence that these networks leave the kernel regime only after initially overfitting. Taken together, our results strongly support the case for grokking as a consequence of the transition from kernel-like behavior to limiting behavior of gradient descent on deep networks.
|
https://proceedings.mlr.press/v235/mohamed24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mohamed24a/mohamed24a.pdf
|
https://openreview.net/forum?id=Oj18qGN1gC
|
Straight-Through Meets Sparse Recovery: the Support Exploration Algorithm
|
https://proceedings.mlr.press/v235/mohamed24a.html
|
Mimoun Mohamed, Francois Malgouyres, Valentin Emiya, Caroline Chaux
|
https://proceedings.mlr.press/v235/mohamed24a.html
|
ICML 2024
|
The straight-through estimator (STE) is commonly used to optimize quantized neural networks, yet its contexts of effective performance are still unclear despite empirical successes. To make a step forward in this comprehension, we apply STE to a well-understood problem: sparse support recovery. We introduce the Support Exploration Algorithm (SEA), a novel algorithm promoting sparsity, and we analyze its performance in support recovery (a.k.a. model selection) problems. SEA explores more supports than the state-of-the-art, leading to superior performance in experiments, especially when the columns of $A$ are strongly coherent. The theoretical analysis considers recovery guarantees when the linear measurements matrix $A$ satisfies the Restricted Isometry Property (RIP). The sufficient conditions of recovery are comparable but more stringent than those of the state-of-the-art in sparse support recovery. Their significance lies mainly in their applicability to an instance of the STE.
|
https://proceedings.mlr.press/v235/mohan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mohan24a/mohan24a.pdf
|
https://openreview.net/forum?id=Cbacx90Wkt
|
OAK: Enriching Document Representations using Auxiliary Knowledge for Extreme Classification
|
https://proceedings.mlr.press/v235/mohan24a.html
|
Shikhar Mohan, Deepak Saini, Anshul Mittal, Sayak Ray Chowdhury, Bhawna Paliwal, Jian Jiao, Manish Gupta, Manik Varma
|
https://proceedings.mlr.press/v235/mohan24a.html
|
ICML 2024
|
The objective in eXtreme Classification (XC) is to find relevant labels for a document from an exceptionally large label space. Most XC application scenarios have rich auxiliary data associated with the input documents, e.g., frequently clicked webpages for search queries in sponsored search. Unfortunately, most of the existing XC methods do not use any auxiliary data. In this paper, we propose a novel framework, Online Auxiliary Knowledge (OAK), which harnesses auxiliary information linked to the document to improve XC accuracy. OAK stores information learnt from the auxiliary data in a knowledge bank and, during a forward pass, retrieves relevant auxiliary knowledge embeddings for a given document. An enriched embedding is obtained by fusing these auxiliary knowledge embeddings with the document’s embedding, thereby enabling much more precise candidate label selection and final classification. OAK training involves three stages. (1) Training a linker module to link documents to relevant auxiliary data points. (2) Learning an embedding for documents enriched using linked auxiliary information. (3) Using the enriched document embeddings to learn the final classifiers. OAK outperforms current state-of-the-art XC methods by up to $\sim$5% on academic datasets, and by $\sim$3% on an auxiliary data-augmented variant of the LF-ORCAS-800K dataset in Precision@1. OAK also demonstrates statistically significant improvements in sponsored search metrics when deployed on a large-scale search engine.
|
https://proceedings.mlr.press/v235/mohri24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mohri24a/mohri24a.pdf
|
https://openreview.net/forum?id=uYISs2tpwP
|
Language Models with Conformal Factuality Guarantees
|
https://proceedings.mlr.press/v235/mohri24a.html
|
Christopher Mohri, Tatsunori Hashimoto
|
https://proceedings.mlr.press/v235/mohri24a.html
|
ICML 2024
|
Guaranteeing the correctness and factuality of language model (LM) outputs is a major open problem. In this work, we propose conformal factuality, a framework that can ensure high probability correctness guarantees for LMs by connecting language modeling and conformal prediction. Our insight is that the correctness of an LM output is equivalent to an uncertainty quantification problem, where the uncertainty sets are defined as the entailment set of an LM’s output. Using this connection, we show that conformal prediction in language models corresponds to a back-off algorithm that provides high probability correctness guarantees by progressively making LM outputs less specific (and expanding the associated uncertainty sets). This approach applies to any black-box LM and requires very few human-annotated samples. Evaluations of our approach on closed book QA (FActScore, NaturalQuestions) and reasoning tasks (MATH) show that our approach can provide 80-90% correctness guarantees while retaining the majority of the LM’s original output.
|
https://proceedings.mlr.press/v235/moller24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/moller24a/moller24a.pdf
|
https://openreview.net/forum?id=Hzpt1Gws9g
|
Finding NEM-U: Explaining unsupervised representation learning through neural network generated explanation masks
|
https://proceedings.mlr.press/v235/moller24a.html
|
Bjørn Leth Møller, Christian Igel, Kristoffer Knutsen Wickstrøm, Jon Sporring, Robert Jenssen, Bulat Ibragimov
|
https://proceedings.mlr.press/v235/moller24a.html
|
ICML 2024
|
Unsupervised representation learning has become an important ingredient of today’s deep learning systems. However, only a few methods exist that explain a learned vector embedding in the sense of providing information about which parts of an input are the most important for its representation. These methods generate the explanation for a given input after the model has been evaluated and tend to produce either inaccurate explanations or are slow, which limits their practical use. To address these limitations, we introduce the Neural Explanation Masks (NEM) framework, which turns a fixed representation model into a self-explaining model by augmenting it with a masking network. This network provides occlusion-based explanations in parallel to computing the representations during inference. We present an instance of this framework, the NEM-U (NEM using U-net structure) architecture, which leverages similarities between segmentation and occlusion-based masks. Our experiments show that NEM-U generates explanations faster and with lower complexity compared to the current state-of-the-art while maintaining high accuracy as measured by locality.
|
https://proceedings.mlr.press/v235/monath24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/monath24a/monath24a.pdf
|
https://openreview.net/forum?id=ScRhEuj480
|
A Fresh Take on Stale Embeddings: Improving Dense Retriever Training with Corrector Networks
|
https://proceedings.mlr.press/v235/monath24a.html
|
Nicholas Monath, Will Sussman Grathwohl, Michael Boratko, Rob Fergus, Andrew Mccallum, Manzil Zaheer
|
https://proceedings.mlr.press/v235/monath24a.html
|
ICML 2024
|
In dense retrieval, deep encoders provide embeddings for both inputs and targets, and the softmax function is used to parameterize a distribution over a large number of candidate targets (e.g., textual passages for information retrieval). Significant challenges arise in training such encoders in the increasingly prevalent scenario of (1) a large number of targets, (2) a computationally expensive target encoder model, (3) cached target embeddings that are out-of-date due to ongoing training of target encoder parameters. This paper presents a simple and highly scalable response to these challenges by training a small parametric corrector network that adjusts stale cached target embeddings, enabling an accurate softmax approximation and thereby sampling of up-to-date high scoring "hard negatives." We theoretically investigate the generalization properties of our proposed target corrector, relating the complexity of the network, staleness of cached representations, and the amount of training data. We present experimental results on large benchmark dense retrieval datasets as well as on QA with retrieval augmented language models. Our approach matches state-of-the-art results even when no target embedding updates are made during training beyond an initial cache from the unsupervised pre-trained model, providing a 4-80x reduction in re-embedding computational cost.
|
https://proceedings.mlr.press/v235/mondal24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mondal24a/mondal24a.pdf
|
https://openreview.net/forum?id=duyl8sy8qV
|
Slot Abstractors: Toward Scalable Abstract Visual Reasoning
|
https://proceedings.mlr.press/v235/mondal24a.html
|
Shanka Subhra Mondal, Jonathan D. Cohen, Taylor Whittington Webb
|
https://proceedings.mlr.press/v235/mondal24a.html
|
ICML 2024
|
Abstract visual reasoning is a characteristically human ability, allowing the identification of relational patterns that are abstracted away from object features, and the systematic generalization of those patterns to unseen problems. Recent work has demonstrated strong systematic generalization in visual reasoning tasks involving multi-object inputs, through the integration of slot-based methods used for extracting object-centric representations coupled with strong inductive biases for relational abstraction. However, this approach was limited to problems containing a single rule, and was not scalable to visual reasoning problems containing a large number of objects. Other recent work proposed Abstractors, an extension of Transformers that incorporates strong relational inductive biases, thereby inheriting the Transformer’s scalability and multi-head architecture, but it has yet to be demonstrated how this approach might be applied to multi-object visual inputs. Here we combine the strengths of the above approaches and propose Slot Abstractors, an approach to abstract visual reasoning that can be scaled to problems involving a large number of objects and multiple relations among them. The approach displays state-of-the-art performance across four abstract visual reasoning tasks, as well as an abstract reasoning task involving real-world images.
|
https://proceedings.mlr.press/v235/moniri24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/moniri24a/moniri24a.pdf
|
https://openreview.net/forum?id=TWu1fzFJm0
|
A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
|
https://proceedings.mlr.press/v235/moniri24a.html
|
Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban
|
https://proceedings.mlr.press/v235/moniri24a.html
|
ICML 2024
|
Feature learning is thought to be one of the fundamental reasons for the success of deep neural networks. It is rigorously known that in two-layer fully-connected neural networks under certain conditions, one step of gradient descent on the first layer can lead to feature learning; characterized by the appearance of a separated rank-one component—spike—in the spectrum of the feature matrix. However, with a constant gradient descent step size, this spike only carries information from the linear component of the target function and therefore learning non-linear components is impossible. We show that with a learning rate that grows with the sample size, such training in fact introduces multiple rank-one components, each corresponding to a specific polynomial feature. We further prove that the limiting large-dimensional and large sample training and test errors of the updated neural networks are fully characterized by these spikes. By precisely analyzing the improvement in the training and test errors, we demonstrate that these non-linear features can enhance learning.
|
https://proceedings.mlr.press/v235/montenegro24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/montenegro24a/montenegro24a.pdf
|
https://openreview.net/forum?id=ABt0jlLZtX
|
Learning Optimal Deterministic Policies with Stochastic Policy Gradients
|
https://proceedings.mlr.press/v235/montenegro24a.html
|
Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini
|
https://proceedings.mlr.press/v235/montenegro24a.html
|
ICML 2024
|
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems. They learn stochastic parametric (hyper)policies by either exploring in the space of actions or in the space of parameters. Stochastic controllers, however, are often undesirable from a practical perspective because of their lack of robustness, safety, and traceability. In common practice, stochastic (hyper)policies are learned only to deploy their deterministic version. In this paper, we make a step towards the theoretical understanding of this practice. After introducing a novel framework for modeling this scenario, we study the global convergence to the best deterministic policy, under (weak) gradient domination assumptions. Then, we illustrate how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy. Finally, we quantitatively compare action-based and parameter-based exploration, giving a formal guise to intuitive results.
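A minimal sketch of the practice analyzed above, under assumed details: a Gaussian hyperpolicy over policy parameters is trained with a PGPE-style parameter-based gradient (antithetic sampling), and only its deterministic mean is deployed. The toy quadratic objective stands in for an episode return.

```python
# Sketch: learn a stochastic (hyper)policy, deploy its deterministic mean.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, lr = np.zeros(3), 0.5, 0.05          # hyperpolicy mean, exploration level, step size
target = np.array([1.0, -0.5, 2.0])

def episode_return(params):
    # Placeholder standing in for a rollout of the deterministic policy with these parameters.
    return -np.sum((params - target) ** 2)

for _ in range(500):
    eps = rng.standard_normal(3)                    # parameter-based exploration noise
    g = (episode_return(theta + sigma * eps)
         - episode_return(theta - sigma * eps)) * eps / (2 * sigma)
    theta += lr * g                                  # stochastic gradient on the hyperpolicy mean

deterministic_policy = theta                         # deployment: drop the noise, keep the mean
print(np.round(deterministic_policy, 3))
```

The exploration level sigma is exactly the quantity whose tuning the paper studies: it trades off sample complexity during learning against the quality of the deployed deterministic policy.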
|
https://proceedings.mlr.press/v235/moon24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/moon24a/moon24a.pdf
|
https://openreview.net/forum?id=OnOaj3g9fi
|
A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models
|
https://proceedings.mlr.press/v235/moon24a.html
|
Taehong Moon, Moonseok Choi, Eunggu Yun, Jongmin Yoon, Gayoung Lee, Jaewoong Cho, Juho Lee
|
https://proceedings.mlr.press/v235/moon24a.html
|
ICML 2024
|
Diffusion models have shown remarkable performance in generation problems over various domains including images, videos, text, and audio. A practical bottleneck of diffusion models is their sampling speed, due to the repeated evaluation of score estimation networks during the inference. In this work, we propose a novel framework capable of adaptively allocating compute required for the score estimation, thereby reducing the overall sampling time of diffusion models. We observe that the amount of computation required for the score estimation may vary along the time step for which the score is estimated. Based on this observation, we propose an early-exiting scheme, where we skip the subset of parameters in the score estimation network during the inference, based on a time-dependent exit schedule. Using the diffusion models for image synthesis, we show that our method could significantly improve the sampling throughput of the diffusion models without compromising image quality. Furthermore, we also demonstrate that our method seamlessly integrates with various types of solvers for faster sampling, capitalizing on their compatibility to enhance overall efficiency.
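The sketch below illustrates the early-exiting idea with an assumed block-structured score network and a simple linear time-dependent exit schedule; the schedule shape, block design, and dimensions are placeholders rather than the paper's configuration.

```python
# Sketch: time-dependent early exiting inside a block-structured score network.
import torch
import torch.nn as nn

class EarlyExitScoreNet(nn.Module):
    def __init__(self, dim=64, num_blocks=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.SiLU()) for _ in range(num_blocks)]
        )
        self.head = nn.Linear(dim, dim)

    def blocks_to_run(self, t, T):
        # Illustrative schedule: allocate different amounts of compute to different time steps.
        frac = 0.5 + 0.5 * (t / T)
        return max(1, int(round(frac * len(self.blocks))))

    def forward(self, x, t, T):
        for block in self.blocks[: self.blocks_to_run(t, T)]:
            x = x + block(x)          # blocks past the exit point are simply skipped
        return self.head(x)

net = EarlyExitScoreNet()
score = net(torch.randn(4, 64), t=100, T=1000)   # fewer blocks are evaluated under this schedule
```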
|
https://proceedings.mlr.press/v235/mordido24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mordido24a/mordido24a.pdf
|
https://openreview.net/forum?id=vCN5lwcWWE
|
Lookbehind-SAM: k steps back, 1 step forward
|
https://proceedings.mlr.press/v235/mordido24a.html
|
Goncalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar
|
https://proceedings.mlr.press/v235/mordido24a.html
|
ICML 2024
|
Sharpness-aware minimization (SAM) methods have gained increasing popularity by formulating the problem of minimizing both loss value and loss sharpness as a minimax objective. In this work, we increase the efficiency of the maximization and minimization parts of SAM’s objective to achieve a better loss-sharpness trade-off. By taking inspiration from the Lookahead optimizer, which uses multiple descent steps ahead, we propose Lookbehind, which performs multiple ascent steps behind to enhance the maximization step of SAM and find a worst-case perturbation with higher loss. Then, to mitigate the variance in the descent step arising from the gathered gradients across the multiple ascent steps, we employ linear interpolation to refine the minimization step. Lookbehind leads to a myriad of benefits across a variety of tasks. Particularly, we show increased generalization performance, greater robustness against noisy weights, as well as improved learning and less catastrophic forgetting in lifelong learning settings. Our code is available at https://github.com/chandar-lab/Lookbehind-SAM.
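Below is a rough sketch of the update pattern described above (k ascent steps, descent with the gathered gradients, then a Lookahead-style linear interpolation). The per-parameter gradient normalization, hyperparameter values, and exact ordering of operations are assumptions, not the official Lookbehind-SAM code; `params` is assumed to be a list of leaf tensors with `requires_grad=True`, and `loss_fn()` recomputes the loss on the current minibatch.

```python
# Hedged sketch of a Lookbehind-style step: multiple ascent steps, then an interpolated update.
import torch

def lookbehind_sam_step(params, loss_fn, k=2, rho=0.05, lr=0.1, alpha=0.5):
    slow = [p.detach().clone() for p in params]          # "slow" weights kept for interpolation
    perturb = [torch.zeros_like(p) for p in params]
    for _ in range(k):
        with torch.no_grad():
            for p, e in zip(params, perturb):
                p.add_(e)                                 # evaluate at the currently perturbed point
        loss = loss_fn()
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, e in zip(params, perturb):
                p.sub_(e)                                 # undo the perturbation
            for e, g in zip(perturb, grads):
                e.add_(rho * g / (g.norm() + 1e-12))      # ascent: grow the worst-case perturbation
            for p, g in zip(params, grads):
                p.sub_(lr * g)                            # fast descent with the gathered gradient
    with torch.no_grad():
        for p, s in zip(params, slow):
            p.copy_(s + alpha * (p - s))                  # linear interpolation back toward the start
```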
|
https://proceedings.mlr.press/v235/morioka24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/morioka24a/morioka24a.pdf
|
https://openreview.net/forum?id=SL6V527p1F
|
Causal Representation Learning Made Identifiable by Grouping of Observational Variables
|
https://proceedings.mlr.press/v235/morioka24a.html
|
Hiroshi Morioka, Aapo Hyvarinen
|
https://proceedings.mlr.press/v235/morioka24a.html
|
ICML 2024
|
A topic of great current interest is Causal Representation Learning (CRL), whose goal is to learn a causal model for hidden features in a data-driven manner. Unfortunately, CRL is severely ill-posed since it is a combination of the two notoriously ill-posed problems of representation learning and causal discovery. Yet, finding practical identifiability conditions that guarantee a unique solution is crucial for its practical applicability. Most approaches so far have been based on assumptions on the latent causal mechanisms, such as temporal causality, or existence of supervision or interventions; these can be too restrictive in actual applications. Here, we show identifiability based on novel, weak constraints, which requires no temporal structure, intervention, nor weak supervision. The approach is based on assuming the observational mixing exhibits a suitable grouping of the observational variables. We also propose a novel self-supervised estimation framework consistent with the model, prove its statistical consistency, and experimentally show its superior CRL performances compared to the state-of-the-art baselines. We further demonstrate its robustness against latent confounders and causal cycles.
|
https://proceedings.mlr.press/v235/morris24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/morris24a/morris24a.pdf
|
https://openreview.net/forum?id=wBr5ozDEKp
|
Position: Future Directions in the Theory of Graph Machine Learning
|
https://proceedings.mlr.press/v235/morris24a.html
|
Christopher Morris, Fabrizio Frasca, Nadav Dym, Haggai Maron, Ismail Ilkan Ceylan, Ron Levie, Derek Lim, Michael M. Bronstein, Martin Grohe, Stefanie Jegelka
|
https://proceedings.mlr.press/v235/morris24a.html
|
ICML 2024
|
Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from life to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete. Recent theoretical advancements primarily focus on elucidating the coarse-grained expressive power of GNNs, predominantly employing combinatorial techniques. However, these studies do not perfectly align with practice, particularly in understanding the generalization behavior of GNNs when trained with stochastic first-order optimization techniques. In this position paper, we argue that the graph machine learning community needs to shift its attention to developing a balanced theory of graph machine learning, focusing on a more thorough understanding of the interplay of expressive power, generalization, and optimization.
|
https://proceedings.mlr.press/v235/morris24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/morris24b/morris24b.pdf
|
https://openreview.net/forum?id=0ofzEysK2D
|
Position: Levels of AGI for Operationalizing Progress on the Path to AGI
|
https://proceedings.mlr.press/v235/morris24b.html
|
Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, Shane Legg
|
https://proceedings.mlr.press/v235/morris24b.html
|
ICML 2024
|
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. With these principles in mind, we propose “Levels of AGI” based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
|
https://proceedings.mlr.press/v235/motamedi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/motamedi24a/motamedi24a.pdf
|
https://openreview.net/forum?id=iGMTxygzcJ
|
Gibbs Sampling of Continuous Potentials on a Quantum Computer
|
https://proceedings.mlr.press/v235/motamedi24a.html
|
Arsalan Motamedi, Pooya Ronagh
|
https://proceedings.mlr.press/v235/motamedi24a.html
|
ICML 2024
|
Gibbs sampling from continuous real-valued functions is a challenging problem of interest in machine learning. Here we leverage quantum Fourier transforms to build a quantum algorithm for this task when the function is periodic. We use the quantum algorithms for solving linear ordinary differential equations to solve the Fokker–Planck equation and prepare a quantum state encoding the Gibbs distribution. We show that the efficiency of interpolation and differentiation of these functions on a quantum computer depends on the rate of decay of the Fourier coefficients of the Fourier transform of the function. We view this property as a concentration of measure in the Fourier domain, and also provide functional analytic conditions for it. Our algorithm makes zeroth order queries to a quantum oracle of the function and achieves polynomial quantum speedups in mean estimation in the Gibbs measure for generic non-convex periodic functions. At high temperatures the algorithm also allows for exponentially improved precision in sampling from Morse functions.

|
https://proceedings.mlr.press/v235/mouli24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mouli24a/mouli24a.pdf
|
https://openreview.net/forum?id=Y50K6DSrWo
|
Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs
|
https://proceedings.mlr.press/v235/mouli24a.html
|
S Chandra Mouli, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, Andrew Stuart, Michael W. Mahoney, Bernie Wang
|
https://proceedings.mlr.press/v235/mouli24a.html
|
ICML 2024
|
Existing work in scientific machine learning (SciML) has shown that data-driven learning of solution operators can provide a fast approximate alternative to classical numerical partial differential equation (PDE) solvers. Of these, Neural Operators (NOs) have emerged as particularly promising. We observe that several uncertainty quantification (UQ) methods for NOs fail for test inputs that are even moderately out-of-domain (OOD), even when the model approximates the solution well for in-domain tasks. To address this limitation, we show that ensembling several NOs can identify high-error regions and provide good uncertainty estimates that are well-correlated with prediction errors. Based on this, we propose a cost-effective alternative, DiverseNO, that mimics the properties of the ensemble by encouraging diverse predictions from its multiple heads in the last feed-forward layer. We then introduce Operator-ProbConserv, a method that uses these well-calibrated UQ estimates within the ProbConserv framework to update the model. Our empirical results show that Operator-ProbConserv enhances OOD model performance for a variety of challenging PDE problems and satisfies physical constraints such as conservation laws.
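A minimal sketch of the multi-head idea, under assumptions: a generic trunk network feeds several linear heads, every head fits the data, and a simple variance term encourages the heads to disagree so that their spread can serve as an uncertainty estimate. The penalty form and weighting below are illustrative, not the DiverseNO objective.

```python
# Sketch: multiple diverse heads in the last layer as a cheap stand-in for an ensemble.
import torch
import torch.nn as nn

class MultiHeadOperator(nn.Module):
    def __init__(self, trunk, hidden_dim, out_dim, num_heads=4):
        super().__init__()
        self.trunk = trunk
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, out_dim) for _ in range(num_heads)])

    def forward(self, x):
        h = self.trunk(x)
        return torch.stack([head(h) for head in self.heads], dim=0)   # (num_heads, ..., out_dim)

def diverse_loss(preds, target, lam=0.1):
    fit = ((preds - target.unsqueeze(0)) ** 2).mean()   # every head fits the data
    diversity = preds.var(dim=0).mean()                 # heads are encouraged to disagree
    return fit - lam * diversity                        # head spread then acts as an uncertainty estimate
```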
|
https://proceedings.mlr.press/v235/mrini24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mrini24a/mrini24a.pdf
|
https://openreview.net/forum?id=mggc3oYHy4
|
Privacy Attacks in Decentralized Learning
|
https://proceedings.mlr.press/v235/mrini24a.html
|
Abdellah El Mrini, Edwige Cyffers, Aurélien Bellet
|
https://proceedings.mlr.press/v235/mrini24a.html
|
ICML 2024
|
Decentralized Gradient Descent (D-GD) allows a set of users to perform collaborative learning without sharing their data by iteratively averaging local model updates with their neighbors in a network graph. The absence of direct communication between non-neighbor nodes might lead to the belief that users cannot infer precise information about the data of others. In this work, we demonstrate the opposite, by proposing the first attack against D-GD that enables a user (or set of users) to reconstruct the private data of other users outside their immediate neighborhood. Our approach is based on a reconstruction attack against the gossip averaging protocol, which we then extend to handle the additional challenges raised by D-GD. We validate the effectiveness of our attack on real graphs and datasets, showing that the number of users compromised by a single or a handful of attackers is often surprisingly large. We empirically investigate some of the factors that affect the performance of the attack, namely the graph topology, the number of attackers, and their position in the graph.
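The toy sketch below illustrates the core observation behind the attack on gossip averaging: what an attacker observes over the rounds is a known linear function of the private initial values, so the values of non-neighbor nodes can be recovered by solving a linear system. The path graph, Metropolis gossip matrix, and the attacker's exact knowledge of that matrix are assumptions of the toy example.

```python
# Toy reconstruction from gossip averaging on the path graph 0-1-2-3.
import numpy as np

W = np.array([[2/3, 1/3, 0,   0  ],       # Metropolis gossip (averaging) matrix
              [1/3, 1/3, 1/3, 0  ],
              [0,   1/3, 1/3, 1/3],
              [0,   0,   1/3, 2/3]])
x0 = np.array([1.0, -2.0, 0.5, 3.0])      # private initial values; node 0 is the attacker
neighbor = 1                               # the only node the attacker hears from

rows, obs, x = [], [], x0.copy()
for t in range(1, 7):
    x = W @ x                                              # one gossip averaging round
    rows.append(np.linalg.matrix_power(W, t)[neighbor])    # known linear map applied to x0
    obs.append(x[neighbor])                                # what the attacker observes

recovered = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)[0]
print(np.round(recovered, 6))   # recovers x0, including non-neighbor nodes 2 and 3
```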
|
https://proceedings.mlr.press/v235/mu24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mu24a/mu24a.pdf
|
https://openreview.net/forum?id=xnQ1qoly7Q
|
RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis
|
https://proceedings.mlr.press/v235/mu24a.html
|
Yao Mu, Junting Chen, Qing-Long Zhang, Shoufa Chen, Qiaojun Yu, Chongjian Ge, Runjian Chen, Zhixuan Liang, Mengkang Hu, Chaofan Tao, Peize Sun, Haibao Yu, Chao Yang, Wenqi Shao, Wenhai Wang, Jifeng Dai, Yu Qiao, Mingyu Ding, Ping Luo
|
https://proceedings.mlr.press/v235/mu24a.html
|
ICML 2024
|
Robotic behavior synthesis, the problem of understanding multimodal inputs and generating precise physical control for robots, is an important part of Embodied AI. Despite successes in applying multimodal large language models for high-level understanding, it remains challenging to translate these conceptual understandings into detailed robotic actions while achieving generalization across various scenarios. In this paper, we propose a tree-structured multimodal code generation framework for generalized robotic behavior synthesis, termed RoboCodeX. RoboCodeX decomposes high-level human instructions into multiple object-centric manipulation units consisting of physical preferences such as affordance and safety constraints, and applies code generation to introduce generalization ability across various robotics platforms. To further enhance the capability to map conceptual and perceptual understanding into control commands, a specialized multimodal reasoning dataset is collected for pre-training and an iterative self-updating methodology is introduced for supervised fine-tuning. Extensive experiments demonstrate that RoboCodeX achieves state-of-the-art performance in both simulators and real robots on four different kinds of manipulation tasks and one embodied navigation task.
|
https://proceedings.mlr.press/v235/mu24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mu24b/mu24b.pdf
|
https://openreview.net/forum?id=RfsagmV1AG
|
On the Second-Order Convergence of Biased Policy Gradient Algorithms
|
https://proceedings.mlr.press/v235/mu24b.html
|
Siqiao Mu, Diego Klabjan
|
https://proceedings.mlr.press/v235/mu24b.html
|
ICML 2024
|
Since the objective functions of reinforcement learning problems are typically highly nonconvex, it is desirable that policy gradient, the most popular algorithm, escapes saddle points and arrives at second-order stationary points. Existing results only consider vanilla policy gradient algorithms with unbiased gradient estimators, but practical implementations under the infinite-horizon discounted reward setting are biased due to finite-horizon sampling. Moreover, actor-critic methods, whose second-order convergence has not yet been established, are also biased due to the critic approximation of the value function. We provide a novel second-order analysis of biased policy gradient methods, including the vanilla gradient estimator computed from Monte-Carlo sampling of trajectories as well as the double-loop actor-critic algorithm, where in the inner loop the critic improves the approximation of the value function via TD(0) learning. Separately, we also establish the convergence of TD(0) on Markov chains irrespective of initial state distribution.
|
https://proceedings.mlr.press/v235/mudgal24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mudgal24a/mudgal24a.pdf
|
https://openreview.net/forum?id=bVIcZb7Qa0
|
Controlled Decoding from Language Models
|
https://proceedings.mlr.press/v235/mudgal24a.html
|
Sidharth Mudgal, Jong Lee, Harish Ganapathy, Yaguang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, Jilin Chen, Alex Beutel, Ahmad Beirami
|
https://proceedings.mlr.press/v235/mudgal24a.html
|
ICML 2024
|
KL-regularized reinforcement learning (RL) is a popular alignment framework to control the language model responses towards high reward outcomes. We pose a tokenwise RL objective and propose a modular solver for it, called controlled decoding (CD). CD exerts control through a separate prefix scorer module, which is trained to learn a value function for the reward. The prefix scorer is used at inference time to control the generation from a frozen base model, provably sampling from a solution to the RL objective. We empirically demonstrate that CD is effective as a control mechanism on popular benchmarks. We also show that prefix scorers for multiple rewards may be combined at inference time, effectively solving a multi-objective RL problem with no additional training. We show that the benefits of applying CD transfer to an unseen base model with no further tuning as well. Finally, we show that CD can be applied in a blockwise decoding fashion at inference-time, essentially bridging the gap between the popular best-of-$K$ strategy and tokenwise control through reinforcement learning. This makes CD a promising approach for alignment of language models.
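A minimal sketch of the tokenwise control mechanism, with assumed interfaces: the frozen base model supplies next-token logits, the prefix scorer supplies a value estimate per candidate token, and sampling uses their sum, with a strength parameter playing the role of the KL-regularization weight.

```python
# Sketch: prefix-scorer-controlled sampling from a frozen base model.
import torch

def controlled_decode_step(base_logits, prefix_values, strength=1.0):
    """base_logits, prefix_values: (vocab_size,) tensors for the current prefix."""
    adjusted = base_logits + strength * prefix_values   # value-shifted logits
    probs = torch.softmax(adjusted, dim=-1)
    return torch.multinomial(probs, 1).item()           # next token id
```

In this picture, combining several rewards at inference time amounts to summing the values produced by their respective prefix scorers before the shift.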
|
https://proceedings.mlr.press/v235/mudrik24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mudrik24a/mudrik24a.pdf
|
https://openreview.net/forum?id=h8aTi32tul
|
SiBBlInGS: Similarity-driven Building-Block Inference using Graphs across States
|
https://proceedings.mlr.press/v235/mudrik24a.html
|
Noga Mudrik, Gal Mishne, Adam Shabti Charles
|
https://proceedings.mlr.press/v235/mudrik24a.html
|
ICML 2024
|
Time series data across scientific domains are often collected under distinct states (e.g., tasks), wherein latent processes (e.g., biological factors) create complex inter- and intra-state variability. A key approach to capture this complexity is to uncover fundamental interpretable units within the data, Building Blocks (BBs), which modulate their activity and adjust their structure across observations. Existing methods for identifying BBs in multi-way data often overlook inter- vs. intra-state variability, produce uninterpretable components, or do not align with properties of real-world data, such as missing samples and sessions of different duration. Here, we present a framework for Similarity-driven Building Block Inference using Graphs across States (SiBBlInGS). SiBBlInGS offers a graph-based dictionary learning approach for discovering sparse BBs along with their temporal traces, based on co-activity patterns and inter- vs. intra-state relationships. Moreover, SiBBlInGS captures per-trial temporal variability and controlled cross-state structural BB adaptations, identifies state-specific vs. state-invariant components, and accommodates variability in the number and duration of observed sessions across states. We demonstrate SiBBlInGS’s ability to reveal insights into complex phenomena as well as its robustness to noise and missing samples through several synthetic and real-world examples, including web search and neural data.
|
https://proceedings.mlr.press/v235/mukherjee24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mukherjee24a/mukherjee24a.pdf
|
https://openreview.net/forum?id=reB9FFAaKw
|
SaVeR: Optimal Data Collection Strategy for Safe Policy Evaluation in Tabular MDP
|
https://proceedings.mlr.press/v235/mukherjee24a.html
|
Subhojyoti Mukherjee, Josiah P. Hanna, Robert D Nowak
|
https://proceedings.mlr.press/v235/mukherjee24a.html
|
ICML 2024
|
In this paper, we study safe data collection for the purpose of policy evaluation in tabular Markov decision processes (MDPs). In policy evaluation, we are given a target policy and asked to estimate the expected cumulative reward it will obtain. Policy evaluation requires data and we are interested in the question of what behavior policy should collect the data for the most accurate evaluation of the target policy. While prior work has considered behavior policy selection, in this paper, we additionally consider a safety constraint on the behavior policy. Namely, we assume there exists a known default policy that incurs a particular expected cost when run and we enforce that the cumulative cost of all behavior policies ran is better than a constant factor of the cost that would be incurred had we always run the default policy. We first show that there exists a class of intractable MDPs where no safe oracle algorithm with knowledge about problem parameters can efficiently collect data and satisfy the safety constraints. We then define the tractability condition for an MDP such that a safe oracle algorithm can efficiently collect data and using that we prove the first lower bound for this setting. We then introduce an algorithm SaVeR for this problem that approximates the safe oracle algorithm and bound the finite-sample mean squared error of the algorithm while ensuring it satisfies the safety constraint. Finally, we show in simulations that SaVeR produces low MSE policy evaluation while satisfying the safety constraint.
|
https://proceedings.mlr.press/v235/muldrew24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/muldrew24a/muldrew24a.pdf
|
https://openreview.net/forum?id=CTgEV6qgUy
|
Active Preference Learning for Large Language Models
|
https://proceedings.mlr.press/v235/muldrew24a.html
|
William Muldrew, Peter Hayes, Mingtian Zhang, David Barber
|
https://proceedings.mlr.press/v235/muldrew24a.html
|
ICML 2024
|
As large language models (LLMs) become more capable, fine-tuning techniques for aligning with human intent are increasingly important. A key consideration for aligning these models is how to most effectively use human resources, or model resources in the case where LLMs themselves are used as oracles. Reinforcement learning from Human or AI preferences (RLHF/RLAIF) is the most prominent example of such a technique, but is complex and often unstable. Direct Preference Optimization (DPO) has recently been proposed as a simpler and more stable alternative. In this work, we develop an active learning strategy for DPO to make better use of preference labels. We propose a practical acquisition function for prompt/completion pairs based on the predictive entropy of the language model and a measure of certainty of the implicit preference model optimized by DPO. We demonstrate how our approach improves both the rate of learning and final performance of fine-tuning on pairwise preference data.
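A hedged sketch of the acquisition idea: score a candidate prompt/completion pair by the language model's predictive entropy and by how uncertain DPO's implicit preference model is about the pair (a small implicit reward margin). The exact combination rule below is an assumption, not the paper's acquisition function.

```python
# Sketch: scoring candidate pairs for active preference labeling.
import math

def implicit_reward(logp_policy, logp_ref, beta=0.1):
    # DPO's implicit reward is beta * (log pi(y|x) - log pi_ref(y|x)).
    return beta * (logp_policy - logp_ref)

def acquisition_score(entropy, logp_pol_a, logp_ref_a, logp_pol_b, logp_ref_b, beta=0.1):
    margin = abs(implicit_reward(logp_pol_a, logp_ref_a, beta)
                 - implicit_reward(logp_pol_b, logp_ref_b, beta))
    certainty = 1.0 / (1.0 + math.exp(-margin))   # confidence of the implicit preference model
    return entropy * (1.0 - certainty)            # prefer uncertain, informative pairs for labeling
```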
|
https://proceedings.mlr.press/v235/muller24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/muller24a/muller24a.pdf
|
https://openreview.net/forum?id=UE79AkNg60
|
Is Kernel Prediction More Powerful than Gating in Convolutional Neural Networks?
|
https://proceedings.mlr.press/v235/muller24a.html
|
Lorenz K Muller
|
https://proceedings.mlr.press/v235/muller24a.html
|
ICML 2024
|
Neural networks whose weights are the output of a predictor (HyperNetworks) achieve excellent performance on many tasks. In ConvNets, kernel prediction layers are a popular type of HyperNetwork. Previous theoretical work has argued that a hierarchy of multiplicative interactions exists in which gating is at the bottom and full weight prediction, as in HyperNetworks, is at the top. In this paper, we constructively demonstrate an equivalence between gating combined with fixed weight layers and weight prediction, relativizing the notion of a hierarchy of multiplicative interactions. We further derive an equivalence between a restricted type of HyperNetwork and factorization machines. Finally, we find empirically that gating layers can learn to imitate weight prediction layers with an SGD variant and show a novel practical application in image denoising using kernel prediction networks. Our reformulation of predicted kernels, combining fixed layers and gating, reduces memory requirements.
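The constructive equivalence has a simple numerical illustration when the predicted kernel is restricted to a gated combination of fixed basis weights (an assumed, restricted form): applying the predicted weight to an input equals gating the outputs of fixed weight layers.

```python
# Tiny numerical check: weight prediction as gating over fixed layers (restricted linear form).
import numpy as np

rng = np.random.default_rng(0)
num_basis, d_in, d_out = 4, 5, 3
Ws = rng.standard_normal((num_basis, d_out, d_in))   # fixed weight layers
g = rng.random(num_basis)                             # gate values (would come from a predictor network)
h = rng.standard_normal(d_in)

predicted_kernel_out = np.tensordot(g, Ws, axes=1) @ h          # weight-prediction view: W(x) h
gated_out = sum(gi * (Wi @ h) for gi, Wi in zip(g, Ws))         # gating + fixed-layers view
print(np.allclose(predicted_kernel_out, gated_out))             # True
```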
|
https://proceedings.mlr.press/v235/muller24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/muller24b/muller24b.pdf
|
https://openreview.net/forum?id=hrWte3nlzr
|
Truly No-Regret Learning in Constrained MDPs
|
https://proceedings.mlr.press/v235/muller24b.html
|
Adrian Müller, Pragnya Alatur, Volkan Cevher, Giorgia Ramponi, Niao He
|
https://proceedings.mlr.press/v235/muller24b.html
|
ICML 2024
|
Constrained Markov decision processes (CMDPs) are a common way to model safety constraints in reinforcement learning. State-of-the-art methods for efficiently solving CMDPs are based on primal-dual algorithms. For these algorithms, all currently known regret bounds allow for error cancellations — one can compensate for a constraint violation in one round with a strict constraint satisfaction in another. This makes the online learning process unsafe since it only guarantees safety for the final (mixture) policy but not during learning. As Efroni et al. (2020) pointed out, it is an open question whether primal-dual algorithms can provably achieve sublinear regret if we do not allow error cancellations. In this paper, we give the first affirmative answer. We first generalize a result on last-iterate convergence of regularized primal-dual schemes to CMDPs with multiple constraints. Building upon this insight, we propose a model-based primal-dual algorithm to learn in an unknown CMDP. We prove that our algorithm achieves sublinear regret without error cancellations.
|
https://proceedings.mlr.press/v235/muller24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/muller24c/muller24c.pdf
|
https://openreview.net/forum?id=4FJJfYjUQR
|
Aligning Transformers with Weisfeiler-Leman
|
https://proceedings.mlr.press/v235/muller24c.html
|
Luis Müller, Christopher Morris
|
https://proceedings.mlr.press/v235/muller24c.html
|
ICML 2024
|
Graph neural network architectures aligned with the $k$-dimensional Weisfeiler–Leman ($k$-WL) hierarchy offer theoretically well-understood expressive power. However, these architectures often fail to deliver state-of-the-art predictive performance on real-world graphs, limiting their practical utility. While recent works aligning graph transformer architectures with the $k$-WL hierarchy have shown promising empirical results, employing transformers for higher orders of $k$ remains challenging due to a prohibitive runtime and memory complexity of self-attention as well as impractical architectural assumptions, such as an infeasible number of attention heads. Here, we advance the alignment of transformers with the $k$-WL hierarchy, showing stronger expressivity results for each $k$, making them more feasible in practice. In addition, we develop a theoretical framework that allows the study of established positional encodings such as Laplacian PEs and SPE. We evaluate our transformers on the large-scale PCQM4Mv2 dataset, showing competitive predictive performance with the state-of-the-art and demonstrating strong downstream performance when fine-tuning them on small-scale molecular datasets.
|
https://proceedings.mlr.press/v235/muller24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/muller24d/muller24d.pdf
|
https://openreview.net/forum?id=MOrvoYrlOg
|
Position: Optimization in SciML Should Employ the Function Space Geometry
|
https://proceedings.mlr.press/v235/muller24d.html
|
Johannes Müller, Marius Zeinhofer
|
https://proceedings.mlr.press/v235/muller24d.html
|
ICML 2024
|
We provide an infinite-dimensional view on optimization problems encountered in scientific machine learning (SciML) and advocate for the paradigm first optimize, then discretize for their solution. This amounts to first choosing an appropriate infinite-dimensional algorithm which is then discretized in a second step. To illustrate this point, we discuss recently proposed state-of-the-art algorithms for SciML applications and see that they can be derived within this framework. Hence, this perspective allows for a principled guide for the design of optimization algorithms for SciML. As the infinite-dimensional viewpoint is presently underdeveloped we formalize it here to foster the development of novel optimization algorithms.
|
https://proceedings.mlr.press/v235/munagala24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/munagala24a/munagala24a.pdf
|
https://openreview.net/forum?id=8f8SI9X9ox
|
Individual Fairness in Graph Decomposition
|
https://proceedings.mlr.press/v235/munagala24a.html
|
Kamesh Munagala, Govind S. Sankar
|
https://proceedings.mlr.press/v235/munagala24a.html
|
ICML 2024
|
In this paper, we consider classic randomized low diameter decomposition procedures for planar graphs that obtain connected clusters that are cohesive in that close by pairs of nodes are assigned to the same cluster with high probability. We consider the additional aspect of individual fairness – pairs of nodes at comparable distances should be separated with comparable probability. We show that classic decomposition procedures do not satisfy this property. We present novel algorithms that achieve various trade-offs between this property and additional desiderata of connectivity of the clusters and optimality in number of clusters. We show that our individual fairness bounds may be difficult to improve by tying the improvement to resolving a major open question in metric embeddings. We finally show the efficacy of our algorithms on real planar networks modeling Congressional redistricting.
|
https://proceedings.mlr.press/v235/munos24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/munos24a/munos24a.pdf
|
https://openreview.net/forum?id=Y5AmNYiyCQ
|
Nash Learning from Human Feedback
|
https://proceedings.mlr.press/v235/munos24a.html
|
Remi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J Mankowitz, Doina Precup, Bilal Piot
|
https://proceedings.mlr.press/v235/munos24a.html
|
ICML 2024
|
Reinforcement learning from human feedback (RLHF) has emerged as the main paradigm for aligning large language models (LLMs) with human preferences. Traditionally, RLHF involves the initial step of learning a reward model from pairwise human feedback, i.e., expressed as preferences between pairs of text generations. Subsequently, the LLM’s policy is fine-tuned to maximize the reward through a reinforcement learning algorithm. In this study, we introduce an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails the initial learning of a pairwise preference model, which is conditioned on two inputs (instead of a single input in the case of a reward model) given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF). In the context of a tabular policy representation, we present a novel algorithmic solution, Nash-MD, founded on the principles of mirror descent. This algorithm produces a sequence of policies, with the last iteration converging to the regularized Nash equilibrium. Additionally, we explore parametric representations of policies and introduce gradient descent algorithms for deep-learning architectures. We illustrate the effectiveness of our approach by presenting experimental results on a text summarization task. We believe NLHF offers a compelling avenue for fine-tuning LLMs and enhancing the alignment of LLMs with human preferences.
|
https://proceedings.mlr.press/v235/munteanu24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/munteanu24a/munteanu24a.pdf
|
https://openreview.net/forum?id=ohH3sbUue2
|
Optimal bounds for $\ell_p$ sensitivity sampling via $\ell_2$ augmentation
|
https://proceedings.mlr.press/v235/munteanu24a.html
|
Alexander Munteanu, Simon Omlor
|
https://proceedings.mlr.press/v235/munteanu24a.html
|
ICML 2024
|
Data subsampling is one of the most natural methods to approximate a massively large data set by a small representative proxy. In particular, sensitivity sampling received a lot of attention, which samples points proportional to an individual importance measure called sensitivity. This framework reduces in very general settings the size of data to roughly the VC dimension $d$ times the total sensitivity $\mathfrak S$ while providing strong $(1\pm\varepsilon)$ guarantees on the quality of approximation. The recent work of Woodruff & Yasuda (2023c) improved substantially over the general $\tilde O(\varepsilon^{-2}\mathfrak Sd)$ bound for the important problem of $\ell_p$ subspace embeddings to $\tilde O(\varepsilon^{-2}\mathfrak S^{2/p})$ for $p\in[1,2]$. Their result was subsumed by an earlier $\tilde O(\varepsilon^{-2}\mathfrak Sd^{1-p/2})$ bound which was implicitly given in the work of Chen & Derezinski (2021). We show that their result is tight when sampling according to plain $\ell_p$ sensitivities. We observe that by augmenting the $\ell_p$ sensitivities by $\ell_2$ sensitivities, we obtain better bounds improving over the aforementioned results to optimal linear $\tilde O(\varepsilon^{-2}(\mathfrak S+d)) = \tilde O(\varepsilon^{-2}d)$ sampling complexity for all $p \in [1,2]$. In particular, this resolves an open question of Woodruff & Yasuda (2023c) in the affirmative for $p \in [1,2]$ and brings sensitivity subsampling into the regime that was previously only known to be possible using Lewis weights (Cohen & Peng, 2015). As an application of our main result, we also obtain an $\tilde O(\varepsilon^{-2}\mu d)$ sensitivity sampling bound for logistic regression, where $\mu$ is a natural complexity measure for this problem. This improves over the previous $\tilde O(\varepsilon^{-2}\mu^2 d)$ bound of Mai et al. (2021) which was based on Lewis weights subsampling.
|
https://proceedings.mlr.press/v235/munteanu24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/munteanu24b/munteanu24b.pdf
|
https://openreview.net/forum?id=l4ZjeDDnu9
|
Turnstile $\ell_p$ leverage score sampling with applications
|
https://proceedings.mlr.press/v235/munteanu24b.html
|
Alexander Munteanu, Simon Omlor
|
https://proceedings.mlr.press/v235/munteanu24b.html
|
ICML 2024
|
The turnstile data stream model offers the most flexible framework where data can be manipulated dynamically, i.e., rows, columns, and even single entries of an input matrix can be added, deleted, or updated multiple times in a data stream. We develop a novel algorithm for sampling rows $a_i$ of a matrix $A\in\mathbb{R}^{n\times d}$, proportional to their $\ell_p$ norm, when $A$ is presented in a turnstile data stream. Our algorithm not only returns the set of sampled row indexes, it also returns slightly perturbed rows $\tilde{a}_i \approx a_i$, and approximates their sampling probabilities up to $\varepsilon$ relative error. When combined with preconditioning techniques, our algorithm extends to $\ell_p$ leverage score sampling over turnstile data streams. With these properties in place, it allows us to simulate subsampling constructions of coresets for important regression problems to operate over turnstile data streams with very little overhead compared to their respective off-line subsampling algorithms. For logistic regression, our framework yields the first algorithm that achieves a $(1+\varepsilon)$ approximation and works in a turnstile data stream using polynomial sketch/subsample size, improving over $O(1)$ approximations, or $\exp(1/\varepsilon)$ sketch size of previous work. We compare experimentally to plain oblivious sketching and plain leverage score sampling algorithms for $\ell_p$ and logistic regression.
|
https://proceedings.mlr.press/v235/muqeeth24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/muqeeth24a/muqeeth24a.pdf
|
https://openreview.net/forum?id=r0qcGcFL4U
|
Learning to Route Among Specialized Experts for Zero-Shot Generalization
|
https://proceedings.mlr.press/v235/muqeeth24a.html
|
Mohammed Muqeeth, Haokun Liu, Yufan Liu, Colin Raffel
|
https://proceedings.mlr.press/v235/muqeeth24a.html
|
ICML 2024
|
Recently, there has been a widespread proliferation of "expert" language models that are specialized to a specific task or domain through parameter-efficient fine-tuning. How can we recycle large collections of expert language models to improve zero-shot generalization to unseen tasks? In this work, we propose $\textbf{P}$ost-$\textbf{H}$oc $\textbf{A}$daptive $\textbf{T}$okenwise $\textbf{G}$ating $\textbf{O}$ver an $\textbf{O}$cean of $\textbf{S}$pecialized $\textbf{E}$xperts (PHATGOOSE), which learns to route among specialized modules that were produced through parameter-efficient fine-tuning. Unlike past methods that learn to route among specialized models, PHATGOOSE explores the possibility that zero-shot generalization will be improved if different experts can be adaptively chosen for each token and at each layer in the model. Crucially, our method is post-hoc - it does not require simultaneous access to the datasets used to create the specialized models and only requires a modest amount of additional compute after each expert model is trained. In experiments covering a range of specialized model collections and zero-shot generalization benchmarks, we find that PHATGOOSE outperforms past methods for post-hoc routing and, in some cases, outperforms explicit multitask training (which requires simultaneous data access). To better understand the routing strategy learned by PHATGOOSE, we perform qualitative experiments to validate that PHATGOOSE’s performance stems from its ability to make adaptive per-token and per-module expert choices.
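A minimal sketch of adaptive per-token routing over a collection of expert modules, with assumed shapes and a simple dot-product gate per expert; the actual method's gate training and its integration with parameter-efficient modules are not reproduced here.

```python
# Sketch: tokenwise soft routing among specialized expert modules.
import torch
import torch.nn as nn

class TokenwiseRouter(nn.Module):
    def __init__(self, expert_gates):
        super().__init__()
        # expert_gates: one learned gate vector per specialized expert module.
        self.gates = nn.Parameter(torch.stack(expert_gates))        # (E, d_model)

    def forward(self, hidden, expert_outputs):
        """hidden: (T, d_model); expert_outputs: (E, T, d_model)."""
        scores = hidden @ self.gates.t()                            # (T, E) per-token affinities
        weights = torch.softmax(scores, dim=-1)                     # adaptive per-token expert choice
        return torch.einsum("te,etd->td", weights, expert_outputs)  # mixture of expert outputs
```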
|
https://proceedings.mlr.press/v235/murphy24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/murphy24a/murphy24a.pdf
|
https://openreview.net/forum?id=bylZbZOsGA
|
Autoformalizing Euclidean Geometry
|
https://proceedings.mlr.press/v235/murphy24a.html
|
Logan Murphy, Kaiyu Yang, Jialiang Sun, Zhaoyu Li, Anima Anandkumar, Xujie Si
|
https://proceedings.mlr.press/v235/murphy24a.html
|
ICML 2024
|
Autoformalization involves automatically translating informal math into formal theorems and proofs that are machine-verifiable. Euclidean geometry provides an interesting and controllable domain for studying autoformalization. In this paper, we introduce a neuro-symbolic framework for autoformalizing Euclidean geometry, which combines domain knowledge, SMT solvers, and large language models (LLMs). One challenge in Euclidean geometry is that informal proofs rely on diagrams, leaving gaps in texts that are hard to formalize. To address this issue, we use theorem provers to fill in such diagrammatic information automatically, so that the LLM only needs to autoformalize the explicit textual steps, making it easier for the model. We also provide automatic semantic evaluation for autoformalized theorem statements. We construct LeanEuclid, an autoformalization benchmark consisting of problems from Euclid’s Elements and the UniGeo dataset formalized in the Lean proof assistant. Experiments with GPT-4 and GPT-4V show the capability and limitations of state-of-the-art LLMs on autoformalizing geometry problems. The data and code are available at https://github.com/loganrjmurphy/LeanEuclid.
|
https://proceedings.mlr.press/v235/murty24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/murty24a/murty24a.pdf
|
https://openreview.net/forum?id=VsvfSMI5bs
|
BAGEL: Bootstrapping Agents by Guiding Exploration with Language
|
https://proceedings.mlr.press/v235/murty24a.html
|
Shikhar Murty, Christopher D Manning, Peter Shaw, Mandar Joshi, Kenton Lee
|
https://proceedings.mlr.press/v235/murty24a.html
|
ICML 2024
|
Following natural language instructions by executing actions in digital environments (e.g. web-browsers and REST APIs) is a challenging task for language model (LM) agents. Unfortunately, LM agents often fail to generalize to new environments without human demonstrations. This work presents BAGEL, a method for bootstrapping LM agents without human supervision. BAGEL converts a seed set of randomly explored trajectories to synthetic demonstrations via round-trips between two noisy LM components: an LM labeler which converts a trajectory into a synthetic instruction, and a zero-shot LM agent which maps the synthetic instruction into a refined trajectory. By performing these round-trips iteratively, BAGEL quickly converts the initial distribution of trajectories towards those that are well-described by natural language. We adapt the base LM agent at test time with in-context learning by retrieving relevant BAGEL demonstrations based on the instruction, and find improvements of over 2-13% absolute on ToolQA and MiniWob++, with up to 13x reduction in execution failures.
|
https://proceedings.mlr.press/v235/mussi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mussi24a/mussi24a.pdf
|
https://openreview.net/forum?id=C7Z8EhZ6bl
|
Factored-Reward Bandits with Intermediate Observations
|
https://proceedings.mlr.press/v235/mussi24a.html
|
Marco Mussi, Simone Drago, Marcello Restelli, Alberto Maria Metelli
|
https://proceedings.mlr.press/v235/mussi24a.html
|
ICML 2024
|
In several real-world sequential decision problems, at every step, the learner is required to select different actions. Every action affects a specific part of the system and generates an observable intermediate effect. In this paper, we introduce the Factored-Reward Bandits (FRBs), a novel setting able to effectively capture and exploit the structure of this class of scenarios, where the reward is computed as the product of the action intermediate observations. We characterize the statistical complexity of the learning problem in the FRBs, by deriving worst-case and asymptotic instance-dependent regret lower bounds. Then, we devise and analyze two regret minimization algorithms. The former, F-UCB, is an anytime optimistic approach matching the worst-case lower bound (up to logarithmic factors) but fails to perform optimally from the instance-dependent perspective. The latter, F-Track, is a bound-tracking approach, that enjoys optimal asymptotic instance-dependent regret guarantees.
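An illustrative optimistic strategy for this setting (not the exact F-UCB confidence constants or analysis): each action component keeps its own empirical mean and confidence bonus computed from its intermediate observations, and the played action vector takes the per-component index maximizer, which maximizes the product of (nonnegative) indices.

```python
# Sketch: per-component optimism for factored rewards with intermediate observations.
import math

class FactoredUCB:
    def __init__(self, arms_per_component):
        self.counts = [[0] * k for k in arms_per_component]
        self.means = [[0.0] * k for k in arms_per_component]
        self.t = 0

    def select(self):
        self.t += 1
        action = []
        for cnt, mean in zip(self.counts, self.means):
            ucbs = [m + math.sqrt(2 * math.log(self.t) / n) if n > 0 else float("inf")
                    for m, n in zip(mean, cnt)]
            action.append(max(range(len(ucbs)), key=ucbs.__getitem__))
        return action

    def update(self, action, intermediate_obs):
        # intermediate_obs[i] is the observed effect of the i-th action component;
        # the reward is the product of these observations.
        for i, (a, x) in enumerate(zip(action, intermediate_obs)):
            self.counts[i][a] += 1
            n = self.counts[i][a]
            self.means[i][a] += (x - self.means[i][a]) / n
```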
|
https://proceedings.mlr.press/v235/mussi24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mussi24b/mussi24b.pdf
|
https://openreview.net/forum?id=WwLtwPHmSM
|
Best Arm Identification for Stochastic Rising Bandits
|
https://proceedings.mlr.press/v235/mussi24b.html
|
Marco Mussi, Alessandro Montenegro, Francesco Trovò, Marcello Restelli, Alberto Maria Metelli
|
https://proceedings.mlr.press/v235/mussi24b.html
|
ICML 2024
|
Stochastic Rising Bandits (SRBs) model sequential decision-making problems in which the expected reward of the available options increases every time they are selected. This setting captures a wide range of scenarios in which the available options are learning entities whose performance improves (in expectation) over time (e.g., online best model selection). While previous works addressed the regret minimization problem, this paper focuses on the fixed-budget Best Arm Identification (BAI) problem for SRBs. In this scenario, given a fixed budget of rounds, we are asked to provide a recommendation about the best option at the end of the identification process. We propose two algorithms to tackle the above-mentioned setting, namely R-UCBE, which resorts to a UCB-like approach, and R-SR, which employs a successive reject procedure. Then, we prove that, with a sufficiently large budget, they provide guarantees on the probability of properly identifying the optimal option at the end of the learning process and on the simple regret. Furthermore, we derive a lower bound on the error probability, matched by our R-SR (up to constants), and illustrate how the need for a sufficiently large budget is unavoidable in the SRB setting. Finally, we numerically validate the proposed algorithms in both synthetic and realistic environments.
|