Record fields: abs (proceedings page URL) · Download PDF (PDF URL) · OpenReview (forum URL) · title · url · authors · detail_url · tags (ICML 2024) · abstract
https://proceedings.mlr.press/v235/eringis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/eringis24a/eringis24a.pdf
https://openreview.net/forum?id=a1Olc2QhPv
PAC-Bayesian Error Bound, via Rényi Divergence, for a Class of Linear Time-Invariant State-Space Models
https://proceedings.mlr.press/v235/eringis24a.html
Deividas Eringis, John Leth, Zheng-Hua Tan, Rafal Wisniewski, Mihaly Petreczky
https://proceedings.mlr.press/v235/eringis24a.html
ICML 2024
In this paper we derive a PAC-Bayesian error bound for a class of stochastic dynamical systems with inputs, namely, for linear time-invariant stochastic state-space models (stochastic LTI systems for short). This class of systems is widely used in control engineering and econometrics; in particular, they represent a special case of recurrent neural networks. In this paper we 1) formalize the learning problem for stochastic LTI systems with inputs, 2) derive a PAC-Bayesian error bound for such systems, and 3) discuss various consequences of this error bound.
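For reference, a minimal sketch of the kind of model class the abstract refers to. The exact parameterization and noise assumptions used in the paper are not reproduced here; the form below is a standard discrete-time stochastic LTI state-space model rather than the authors' precise definition.

```latex
% A standard discrete-time stochastic LTI state-space model with input u_t,
% latent state x_t, output y_t, and i.i.d. noise e_t (the paper's exact
% class and assumptions may differ):
\begin{aligned}
x_{t+1} &= A x_t + B u_t + K e_t, \\
y_t     &= C x_t + D u_t + e_t .
\end{aligned}
```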
https://proceedings.mlr.press/v235/esfandiari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/esfandiari24a/esfandiari24a.pdf
https://openreview.net/forum?id=yQfA0etfB7
High-Dimensional Geometric Streaming for Nearly Low Rank Data
https://proceedings.mlr.press/v235/esfandiari24a.html
Hossein Esfandiari, Praneeth Kacham, Vahab Mirrokni, David Woodruff, Peilin Zhong
https://proceedings.mlr.press/v235/esfandiari24a.html
ICML 2024
We study streaming algorithms for the $\ell_p$ subspace approximation problem. Given points $a_1, \ldots, a_n$ as an insertion-only stream and a rank parameter $k$, the $\ell_p$ subspace approximation problem is to find a $k$-dimensional subspace $V$ such that $(\sum_{i=1}^n d(a_i, V)^p)^{1/p}$ is minimized, where $d(a, V)$ denotes the Euclidean distance between $a$ and $V$ defined as $\min_{v \in V} ||a - v||$. When $p = \infty$, we need to find a subspace $V$ that minimizes $\max_i d(a_i, V)$. For $\ell_{\infty}$ subspace approximation, we give a deterministic strong coreset construction algorithm and show that it can be used to compute a $\mathrm{poly}(k, \log n)$ approximate solution. We show that the distortion obtained by our coreset is nearly tight for any sublinear space algorithm. For $\ell_p$ subspace approximation, we show that suitably scaling the points and then using our $\ell_{\infty}$ coreset construction, we can compute a $\mathrm{poly}(k, \log n)$ approximation. Our algorithms are easy to implement and run very fast on large datasets. We also use our strong coreset construction to improve the results in a recent work of Woodruff and Yasuda (FOCS 2022) which gives streaming algorithms for high-dimensional geometric problems such as width estimation, convex hull estimation, and volume estimation.
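As a concrete illustration of the objective (not the authors' coreset algorithm), here is a small sketch that evaluates the $\ell_\infty$ subspace approximation cost of a candidate subspace given an orthonormal basis; the variable names and data are hypothetical.

```python
import numpy as np

def linf_subspace_cost(points: np.ndarray, basis: np.ndarray) -> float:
    """max_i d(a_i, V) where V = span(basis) and basis has orthonormal columns."""
    # Residual of each point after projecting onto V: a - V V^T a.
    residuals = points - points @ basis @ basis.T
    return float(np.max(np.linalg.norm(residuals, axis=1)))

# Usage: 1000 points in R^50 that are nearly rank-3, plus noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 50)) + 1e-3 * rng.normal(size=(1000, 50))
V, _ = np.linalg.qr(rng.normal(size=(50, 3)))  # a random rank-3 candidate subspace
print(linf_subspace_cost(A, V))
```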
https://proceedings.mlr.press/v235/esser24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/esser24a/esser24a.pdf
https://openreview.net/forum?id=FPnUhsQJ5B
Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
https://proceedings.mlr.press/v235/esser24a.html
Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Robin Rombach
https://proceedings.mlr.press/v235/esser24a.html
ICML 2024
Diffusion models create data from noise by inverting the forward paths of data towards noise and have emerged as a powerful generative modeling technique for high-dimensional, perceptual data such as images and videos. Rectified flow is a recent generative model formulation that connects data and noise in a straight line. Despite its better theoretical properties and conceptual simplicity, it is not yet decisively established as standard practice. In this work, we improve existing noise sampling techniques for training rectified flow models by biasing them towards perceptually relevant scales. Through a large-scale study, we demonstrate the superior performance of this approach compared to established diffusion formulations for high-resolution text-to-image synthesis. Additionally, we present a novel transformer-based architecture for text-to-image generation that uses separate weights for the two modalities and enables a bidirectional flow of information between image and text tokens, improving text comprehension, typography, and human preference ratings. We demonstrate that this architecture follows predictable scaling trends and correlates lower validation loss to improved text-to-image synthesis as measured by various metrics and human evaluations. Our largest models outperform state-of-the-art models. Stability AI is considering making experimental data, code, and model weights publicly available.
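A minimal sketch of a rectified-flow training step of the kind described, assuming the common straight-line interpolation between data and noise and a timestep sampler biased toward intermediate noise levels; the paper's exact sampler and loss weighting are not reproduced, and `model` / `x0` are assumed inputs.

```python
import torch

def rectified_flow_loss(model, x0: torch.Tensor) -> torch.Tensor:
    """One training step for a velocity-prediction rectified flow model."""
    noise = torch.randn_like(x0)
    # Bias timesteps toward intermediate noise levels (logit-normal sampling is
    # one such choice; the paper studies several biased samplers).
    t = torch.sigmoid(torch.randn(x0.shape[0], device=x0.device))
    t_ = t.view(-1, *([1] * (x0.dim() - 1)))
    x_t = (1.0 - t_) * x0 + t_ * noise        # straight-line path between data and noise
    target_velocity = noise - x0              # constant velocity along that line
    pred = model(x_t, t)
    return torch.mean((pred - target_velocity) ** 2)
```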
https://proceedings.mlr.press/v235/ethayarajh24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ethayarajh24a/ethayarajh24a.pdf
https://openreview.net/forum?id=iUwHnoENnl
Model Alignment as Prospect Theoretic Optimization
https://proceedings.mlr.press/v235/ethayarajh24a.html
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela
https://proceedings.mlr.press/v235/ethayarajh24a.html
ICML 2024
Kahneman & Tversky’s $\textit{prospect theory}$ tells us that humans perceive random variables in a biased but well-defined manner (1992); for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases—the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them belonging to a family of loss functions that we call $\textit{human-aware losses}$ (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach KTO, and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B, despite only learning from a binary signal of whether an output is desirable. More broadly, our work suggests that there is no one HALO that is universally superior; the best loss depends on the inductive biases most appropriate for a given setting, an oft-overlooked consideration.
https://proceedings.mlr.press/v235/evans24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/evans24a/evans24a.pdf
https://openreview.net/forum?id=jOlO8t1xdx
Fast Timing-Conditioned Latent Audio Diffusion
https://proceedings.mlr.press/v235/evans24a.html
Zach Evans, Cj Carr, Josiah Taylor, Scott H. Hawley, Jordi Pons
https://proceedings.mlr.press/v235/evans24a.html
ICML 2024
Generating long-form 44.1kHz stereo audio from text prompts can be computationally demanding. Further, most previous works do not tackle that music and sound effects naturally vary in their duration. Our research focuses on the efficient generation of long-form, variable-length stereo music and sounds at 44.1kHz using text prompts with a generative model. It is based on latent diffusion, with its latent defined by a fully-convolutional variational autoencoder. The generative model is conditioned on text prompts as well as timing embeddings, allowing for fine control over both the content and length of the generated music and sounds. It is capable of rendering stereo signals of up to 95 sec at 44.1kHz in 8 sec on an A100 GPU. Despite its compute efficiency and fast inference, the proposed model is one of the best in two public text-to-music and -audio benchmarks and, differently from state-of-the-art models, can generate music with structure and stereo sounds.
https://proceedings.mlr.press/v235/everett24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/everett24a/everett24a.pdf
https://openreview.net/forum?id=0ksNeD1SJT
Scaling Exponents Across Parameterizations and Optimizers
https://proceedings.mlr.press/v235/everett24a.html
Katie E Everett, Lechao Xiao, Mitchell Wortsman, Alexander A Alemi, Roman Novak, Peter J Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, Jeffrey Pennington
https://proceedings.mlr.press/v235/everett24a.html
ICML 2024
Robust and effective scaling of models from small to large width typically requires the precise adjustment of many algorithmic and architectural details, such as parameterization and optimizer choices. In this work, we propose a new perspective on parameterization by investigating a key assumption in prior work about the alignment between parameters and data and derive new theoretical results under weaker assumptions and a broader set of optimizers. Our extensive empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes up to 27B parameters. We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work. Our results show that all parameterizations, not just maximal update parameterization (muP), can achieve hyperparameter transfer; moreover, our novel per-layer learning rate prescription for standard parameterization outperforms muP. Finally, we demonstrate that an overlooked aspect of parameterization, the epsilon parameter in Adam, must be scaled correctly to avoid gradient underflow and propose Adam-atan2, a new numerically stable, scale-invariant version of Adam that eliminates the epsilon hyperparameter entirely.
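The Adam-atan2 idea mentioned above can be sketched as replacing Adam's epsilon-stabilized division with an atan2 call, which stays finite when the second moment underflows and is invariant to jointly rescaling numerator and denominator. The snippet below illustrates that idea under these assumptions; it is not the paper's exact update rule or scaling constants.

```python
import torch

@torch.no_grad()
def adam_atan2_step(param, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999, step=1):
    """One parameter update in the spirit of Adam-atan2: no epsilon hyperparameter.

    Sketch only: the paper's exact scaling of the atan2 arguments is not reproduced.
    """
    m.mul_(beta1).add_(grad, alpha=1 - beta1)             # first-moment EMA
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)   # second-moment EMA
    m_hat = m / (1 - beta1 ** step)                       # bias correction
    v_hat = v / (1 - beta2 ** step)
    # atan2(m, sqrt(v)) ~ m / sqrt(v) when |m| << sqrt(v), but remains finite
    # even when v underflows to zero, removing the need for epsilon.
    update = torch.atan2(m_hat, torch.sqrt(v_hat))
    param.add_(update, alpha=-lr)
    return param, m, v
```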
https://proceedings.mlr.press/v235/eyre24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/eyre24a/eyre24a.pdf
https://openreview.net/forum?id=H3bATm4mKn
Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift
https://proceedings.mlr.press/v235/eyre24a.html
Benjamin Eyre, Elliot Creager, David Madras, Vardan Papyan, Richard Zemel
https://proceedings.mlr.press/v235/eyre24a.html
ICML 2024
Designing deep neural network classifiers that perform robustly on distributions differing from the available training data is an active area of machine learning research. However, out-of-distribution generalization for regression—the analogous problem for modeling continuous targets—remains relatively unexplored. To tackle this problem, we return to first principles and analyze how the closed-form solution for Ordinary Least Squares (OLS) regression is sensitive to covariate shift. We characterize the out-of-distribution risk of the OLS model in terms of the eigenspectrum decomposition of the source and target data. We then use this insight to propose a method called Spectral Adapted Regressor (SpAR) for adapting the weights of the last layer of a pre-trained neural regression model to perform better on input data originating from a different distribution. We demonstrate how this lightweight spectral adaptation procedure can improve out-of-distribution performance for synthetic and real-world datasets.
https://proceedings.mlr.press/v235/fabian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fabian24a/fabian24a.pdf
https://openreview.net/forum?id=V3OpGwo68Z
Adapt and Diffuse: Sample-adaptive Reconstruction via Latent Diffusion Models
https://proceedings.mlr.press/v235/fabian24a.html
Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi
https://proceedings.mlr.press/v235/fabian24a.html
ICML 2024
Inverse problems arise in a multitude of applications, where the goal is to recover a clean signal from noisy and possibly (non)linear observations. The difficulty of a reconstruction problem depends on multiple factors, such as the ground truth signal structure, the severity of the degradation and the complex interactions between the above. This results in natural sample-by-sample variation in the difficulty of a reconstruction problem. Our key observation is that most existing inverse problem solvers lack the ability to adapt their compute power to the difficulty of the reconstruction task, resulting in subpar performance and wasteful resource allocation. We propose a novel method, severity encoding, to estimate the degradation severity of corrupted signals in the latent space of an autoencoder. We show that the estimated severity has strong correlation with the true corruption level and can provide useful hints on the difficulty of reconstruction problems on a sample-by-sample basis. Furthermore, we propose a reconstruction method based on latent diffusion models that leverages the predicted degradation severities to fine-tune the reverse diffusion sampling trajectory and thus achieve sample-adaptive inference times. Our framework, Flash-Diffusion, acts as a wrapper that can be combined with any latent diffusion-based baseline solver, imbuing it with sample-adaptivity and acceleration. We perform experiments on both linear and nonlinear inverse problems and demonstrate that our technique greatly improves the performance of the baseline solver and achieves up to $10\times$ acceleration in mean sampling speed.
https://proceedings.mlr.press/v235/fabian24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fabian24b/fabian24b.pdf
https://openreview.net/forum?id=ibwxzYCep9
DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency
https://proceedings.mlr.press/v235/fabian24b.html
Zalan Fabian, Berk Tinaz, Mahdi Soltanolkotabi
https://proceedings.mlr.press/v235/fabian24b.html
ICML 2024
Diffusion models have established new state of the art in a multitude of computer vision tasks, including image restoration. Diffusion-based inverse problem solvers generate reconstructions of exceptional visual quality from heavily corrupted measurements. However, in what is widely known as the perception-distortion trade-off, the price of perceptually appealing reconstructions is often paid in declined distortion metrics, such as PSNR. Distortion metrics measure faithfulness to the observation, a crucial requirement in inverse problems. In this work, we propose a novel framework for inverse problem solving, namely we assume that the observation comes from a stochastic degradation process that gradually degrades and noises the original clean image. We learn to reverse the degradation process in order to recover the clean image. Our technique maintains consistency with the original measurement throughout the reverse process, and allows for great flexibility in trading off perceptual quality for improved distortion metrics and sampling speedup via early-stopping. We demonstrate the efficiency of our method on different high-resolution datasets and inverse problems, achieving great improvements over other state-of-the-art diffusion-based methods with respect to both perceptual and distortion metrics.
https://proceedings.mlr.press/v235/falck24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/falck24a/falck24a.pdf
https://openreview.net/forum?id=b1YQ5WKY3w
Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective
https://proceedings.mlr.press/v235/falck24a.html
Fabian Falck, Ziyu Wang, Christopher C. Holmes
https://proceedings.mlr.press/v235/falck24a.html
ICML 2024
In-context learning (ICL) has emerged as a particularly remarkable characteristic of Large Language Models (LLM): given a pretrained LLM and an observed dataset, LLMs can make predictions for new data points from the same distribution without fine-tuning. Numerous works have postulated ICL as approximately Bayesian inference, rendering this a natural hypothesis. In this work, we analyse this hypothesis from a new angle through the martingale property, a fundamental requirement of a Bayesian learning system for exchangeable data. We show that the martingale property is a necessary condition for unambiguous predictions in such scenarios, and enables a principled, decomposed notion of uncertainty vital in trustworthy, safety-critical systems. We derive actionable checks with corresponding theory and test statistics which must hold if the martingale property is satisfied. We also examine if uncertainty in LLMs decreases as expected in Bayesian learning when more data is observed. In three experiments, we provide evidence for violations of the martingale property, and deviations from a Bayesian scaling behaviour of uncertainty, falsifying the hypothesis that ICL is Bayesian.
https://proceedings.mlr.press/v235/fan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fan24a/fan24a.pdf
https://openreview.net/forum?id=bZNH0SU37Y
On the Recoverability of Causal Relations from Temporally Aggregated I.I.D. Data
https://proceedings.mlr.press/v235/fan24a.html
Shunxing Fan, Mingming Gong, Kun Zhang
https://proceedings.mlr.press/v235/fan24a.html
ICML 2024
We consider the effect of temporal aggregation on instantaneous (non-temporal) causal discovery in a general setting. This is motivated by the observation that the true causal time lag is often considerably shorter than the observational interval. This discrepancy leads to high aggregation, causing time-delay causality to vanish and instantaneous dependence to manifest. Although we expect such instantaneous dependence to be consistent with the true causal relation in a certain sense, so that the discovery results remain meaningful, it remains unclear what type of consistency we need and when such consistency will be satisfied. We formally propose functional consistency and conditional independence consistency, corresponding to functional causal model-based methods and conditional independence-based methods respectively, and provide the conditions under which these consistencies hold. We show theoretically and experimentally that causal discovery results may be seriously distorted by aggregation, especially in the completely nonlinear case, and we also find that causal relations remain recoverable from aggregated data given partial linearity or an appropriate prior. Our findings suggest that the community should take a cautious and meticulous approach when interpreting causal discovery results from such data, and they show why and when aggregation will distort the performance of causal discovery methods.
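For intuition, temporal aggregation of the kind discussed can be written as averaging fast-timescale samples over each observational window; the notation below is illustrative rather than the paper's.

```latex
% If the underlying process evolves on a fine time scale t but only block
% averages over windows of length m are recorded, the observed variables are
\bar{X}_k = \frac{1}{m} \sum_{t=(k-1)m+1}^{km} X_t ,
% so a lagged effect X_{t-1} \to Y_t inside a window surfaces as an apparently
% instantaneous dependence between \bar{X}_k and \bar{Y}_k.
```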
https://proceedings.mlr.press/v235/fan24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fan24b/fan24b.pdf
https://openreview.net/forum?id=VDgfJnOEMV
On the Convergence of Projected Bures-Wasserstein Gradient Descent under Euclidean Strong Convexity
https://proceedings.mlr.press/v235/fan24b.html
Junyi Fan, Yuxuan Han, Zijian Liu, Jian-Feng Cai, Yang Wang, Zhengyuan Zhou
https://proceedings.mlr.press/v235/fan24b.html
ICML 2024
The Bures-Wasserstein (BW) gradient descent method has gained considerable attention in various domains, including Gaussian barycenter, matrix recovery and variational inference problems, due to its alignment with the Wasserstein geometry of normal distributions. Despite its popularity, existing convergence analyses are often contingent upon specific loss functions, and the exploration of constrained settings within this framework remains limited. In this work, we make an attempt to bridge this gap by providing a general convergence rate guarantee for BW gradient descent when the Euclidean strong convexity of the loss and the constraints is assumed. In an effort to advance practical implementations, we also derive a closed-form solution for the projection onto BW distance-constrained sets, which enables the fast implementation of projected BW gradient descent for problems that arise in the constrained barycenter and distributionally robust optimization literature. Experimental results demonstrate significant improvements in computational efficiency and convergence speed, underscoring the efficacy of our method in practical scenarios.
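For reference, the Bures-Wasserstein distance underlying the geometry in this abstract, defined between positive semi-definite matrices (e.g., covariances of centered Gaussians), has the standard closed form below.

```latex
% Squared Bures-Wasserstein distance between PSD matrices \Sigma_1, \Sigma_2:
d_{\mathrm{BW}}^2(\Sigma_1, \Sigma_2)
  = \operatorname{tr}(\Sigma_1) + \operatorname{tr}(\Sigma_2)
    - 2 \operatorname{tr}\!\left( \left( \Sigma_1^{1/2} \Sigma_2 \Sigma_1^{1/2} \right)^{1/2} \right).
```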
https://proceedings.mlr.press/v235/fan24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fan24c/fan24c.pdf
https://openreview.net/forum?id=6axTFAlzRV
Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization
https://proceedings.mlr.press/v235/fan24c.html
Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya Zhang, Masashi Sugiyama, Yanfeng Wang
https://proceedings.mlr.press/v235/fan24c.html
ICML 2024
In federated learning (FL), the multi-step update and data heterogeneity among clients often lead to a loss landscape with sharper minima, degrading the performance of the resulting global model. Prevalent federated approaches incorporate sharpness-aware minimization (SAM) into local training to mitigate this problem. However, the local loss landscapes may not accurately reflect the flatness of the global loss landscape in heterogeneous environments; as a result, minimizing local sharpness and calculating perturbations on client data might not align the efficacy of SAM in FL with that of centralized training. To overcome this challenge, we propose FedLESAM, a novel algorithm that locally estimates the direction of the global perturbation on the client side as the difference between the global models received in the previous active round and the current round. Besides the improved quality, FedLESAM also speeds up federated SAM-based approaches, since it performs only one backpropagation per iteration. Theoretically, we prove a slightly tighter bound than the original FedSAM by ensuring consistent perturbation. Empirically, we conduct comprehensive experiments on four federated benchmark datasets under three partition strategies to demonstrate the superior performance and efficiency of FedLESAM.
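A minimal client-side sketch of the locally estimated global perturbation described above; the normalization, hyperparameter names, and surrounding training loop are assumptions for illustration, not the authors' implementation.

```python
import torch

def lesam_local_step(model, loss_fn, inputs, targets, w_global_prev, w_global_curr, rho=0.05):
    """One local step that perturbs weights along an estimate of the global ascent direction.

    The direction is estimated as (previous global model - current global model),
    so no extra backward pass is needed to compute the perturbation.
    """
    params = list(model.parameters())
    direction = [wp - wc for wp, wc in zip(w_global_prev, w_global_curr)]
    scale = rho / (float(torch.sqrt(sum((d ** 2).sum() for d in direction))) + 1e-12)
    with torch.no_grad():
        for p, d in zip(params, direction):
            p.add_(d, alpha=scale)           # temporarily perturb along the estimated direction
    loss = loss_fn(model(inputs), targets)
    loss.backward()                          # single backpropagation per iteration
    with torch.no_grad():
        for p, d in zip(params, direction):
            p.sub_(d, alpha=scale)           # undo perturbation; optimizer then applies p.grad
    return loss.detach()
```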
https://proceedings.mlr.press/v235/fan24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fan24d/fan24d.pdf
https://openreview.net/forum?id=NZgbwzaOIx
Revisit the Essence of Distilling Knowledge through Calibration
https://proceedings.mlr.press/v235/fan24d.html
Wen-Shu Fan, Su Lu, Xin-Chun Li, De-Chuan Zhan, Le Gan
https://proceedings.mlr.press/v235/fan24d.html
ICML 2024
Knowledge Distillation (KD) has evolved into a practical technology for transferring knowledge from a well-performing model (teacher) to a weak model (student). A counter-intuitive phenomenon known as capacity mismatch has been identified, wherein KD performance may not be good when a better teacher instructs the student. Various preliminary methods have been proposed to alleviate capacity mismatch, but a unifying explanation for its cause remains lacking. In this paper, we propose a unifying analytical framework to pinpoint the core of capacity mismatch based on calibration. Through extensive analytical experiments, we observe a positive correlation between the calibration of the teacher model and the KD performance with original KD methods. As this correlation arises due to the sensitivity of metrics (e.g., KL divergence) to calibration, we recommend employing measurements insensitive to calibration such as ranking-based loss. Our experiments demonstrate that ranking-based loss can effectively replace KL divergence, aiding large models with poor calibration to teach better.
https://proceedings.mlr.press/v235/fan24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fan24e/fan24e.pdf
https://openreview.net/forum?id=7rfZ6bMZq4
DOGE: Domain Reweighting with Generalization Estimation
https://proceedings.mlr.press/v235/fan24e.html
Simin Fan, Matteo Pagliardini, Martin Jaggi
https://proceedings.mlr.press/v235/fan24e.html
ICML 2024
The coverage and composition of the pretraining data significantly impact the generalization ability of Large Language Models (LLMs). Despite their importance, recent LLMs still rely on heuristics and trial and error to increase or reduce the influence of data domains. We propose DOmain reweighting with Generalization Estimation (DoGE), which optimizes the probability of sampling from each domain (domain weights) in a principled way. Our approach is a two-stage process consisting of (i) training a proxy model to obtain domain weights using a bi-level optimization algorithm, and (ii) training a larger base model by sampling training domains according to the learned domain weights. In our experiments, we extensively show how DoGE improves the generalization of the base model to any target data mixture. On the SlimPajama dataset, our base model achieves better perplexity and few-shot reasoning accuracies across 6 tasks compared to baseline methods. Moreover, when generalizing to out-of-domain target tasks that are unseen in the pretraining corpus (OOD domains), DoGE effectively identifies inter-domain dependencies and consistently achieves better test perplexity on the target domain.
https://proceedings.mlr.press/v235/fan24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fan24f/fan24f.pdf
https://openreview.net/forum?id=Kt4fwiuKqf
Path-Guided Particle-based Sampling
https://proceedings.mlr.press/v235/fan24f.html
Mingzhou Fan, Ruida Zhou, Chao Tian, Xiaoning Qian
https://proceedings.mlr.press/v235/fan24f.html
ICML 2024
Particle-based Bayesian inference methods that sample from a partition-free target (posterior) distribution, e.g., Stein variational gradient descent (SVGD), have attracted significant attention. We propose a path-guided particle-based sampling (PGPS) method based on a novel Log-weighted Shrinkage (LwS) density path linking an initial distribution to the target distribution. We propose to utilize a neural network to learn a vector field motivated by the Fokker-Planck equation of the designed density path. Particles, initialized from the initial distribution, evolve according to the ordinary differential equation defined by the vector field. The distribution of these particles is guided along a density path from the initial distribution to the target distribution. The proposed LwS density path allows for an efficient search of the modes of the target distribution where canonical methods fail. We theoretically analyze the Wasserstein distance between the distribution of the PGPS-generated samples and the target distribution, accounting for approximation and discretization errors. Practically, the proposed PGPS-LwS method demonstrates higher Bayesian inference accuracy and better calibration in experiments conducted on both synthetic and real-world Bayesian learning tasks, compared to baselines such as SVGD and Langevin dynamics.
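A minimal sketch of the particle-evolution step described above: particles drawn from an initial distribution are pushed forward by Euler-integrating a learned, time-dependent vector field. The network architecture and the LwS path itself are not reproduced; `velocity_field` is an assumed callable.

```python
import torch

def evolve_particles(velocity_field, particles: torch.Tensor, n_steps: int = 100) -> torch.Tensor:
    """Euler-integrate dx/dt = v(x, t) from t=0 to t=1 for a batch of particles.

    `velocity_field(x, t)` is assumed to return a tensor shaped like `x`;
    in PGPS this field would be trained to follow the chosen density path.
    """
    dt = 1.0 / n_steps
    x = particles.clone()
    for step in range(n_steps):
        t = torch.full((x.shape[0], 1), step * dt, device=x.device)
        with torch.no_grad():
            x = x + dt * velocity_field(x, t)
    return x

# Usage: push 1024 particles from a standard 2D Gaussian toward the target.
# particles0 = torch.randn(1024, 2)
# samples = evolve_particles(trained_velocity_field, particles0)
```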
https://proceedings.mlr.press/v235/fang24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fang24a/fang24a.pdf
https://openreview.net/forum?id=knZ4NYzGUd
Bayesian Knowledge Distillation: A Bayesian Perspective of Distillation with Uncertainty Quantification
https://proceedings.mlr.press/v235/fang24a.html
Luyang Fang, Yongkai Chen, Wenxuan Zhong, Ping Ma
https://proceedings.mlr.press/v235/fang24a.html
ICML 2024
Knowledge distillation (KD) has been widely used for model compression and deployment acceleration. Nonetheless, the statistical insight behind the remarkable performance of KD remains elusive, and methods for evaluating the uncertainty of the distilled/student model are lacking. To address these issues, we establish a close connection between KD and a Bayesian model. In particular, we develop an innovative method named Bayesian Knowledge Distillation (BKD) to provide a transparent interpretation of the working mechanism of KD, and a suite of Bayesian inference tools for the uncertainty quantification of the student model. In BKD, the regularization imposed by the teacher model in KD is formulated as a teacher-informed prior for the student model’s parameters. Consequently, we establish the equivalence between minimizing the KD loss and estimating the posterior mode in BKD. Efficient Bayesian inference algorithms are developed based on stochastic gradient Langevin Monte Carlo and examined with extensive experiments on uncertainty ranking and credible interval construction for predicted class probabilities.
https://proceedings.mlr.press/v235/fang24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fang24b/fang24b.pdf
https://openreview.net/forum?id=O3CFN1VIwt
Exploring Correlations of Self-Supervised Tasks for Graphs
https://proceedings.mlr.press/v235/fang24b.html
Taoran Fang, Wei Chow, Yifei Sun, Kaiqiao Han, Lvbin Ma, Yang Yang
https://proceedings.mlr.press/v235/fang24b.html
ICML 2024
Graph self-supervised learning has sparked a research surge in training informative representations without accessing any labeled data. However, our understanding of graph self-supervised learning remains limited, and the inherent relationships between various self-supervised tasks are still unexplored. Our paper aims to provide a fresh understanding of graph self-supervised learning based on task correlations. Specifically, we evaluate the performance of the representations trained by one specific task on other tasks and define correlation values to quantify task correlations. Through this process, we unveil the task correlations between various self-supervised tasks and can measure their expressive capabilities, which are closely related to downstream performance. By analyzing the correlation values between tasks across various datasets, we reveal the complexity of task correlations and the limitations of existing multi-task learning methods. To obtain more capable representations, we propose Graph Task Correlation Modeling (GraphTCM) to illustrate the task correlations and utilize it to enhance graph self-supervised training. The experimental results indicate that our method significantly outperforms existing methods across various downstream tasks.
https://proceedings.mlr.press/v235/fang24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fang24c/fang24c.pdf
https://openreview.net/forum?id=1IZLOPxtfK
INViT: A Generalizable Routing Problem Solver with Invariant Nested View Transformer
https://proceedings.mlr.press/v235/fang24c.html
Han Fang, Zhihao Song, Paul Weng, Yutong Ban
https://proceedings.mlr.press/v235/fang24c.html
ICML 2024
Recently, deep reinforcement learning has shown promising results for learning fast heuristics to solve routing problems. Meanwhile, most of these solvers struggle to generalize to unseen distributions or distributions with different scales. To address this issue, we propose a novel architecture, called Invariant Nested View Transformer (INViT), which is designed to enforce a nested design together with invariant views inside the encoders to promote the generalizability of the learned solver. It applies a modified policy gradient algorithm enhanced with data augmentations. We demonstrate that the proposed INViT achieves a dominant generalization performance on both TSP and CVRP problems with various distributions and different problem scales. Our source code and datasets are available in the supplementary materials.
https://proceedings.mlr.press/v235/fang24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fang24d/fang24d.pdf
https://openreview.net/forum?id=aGBpiEcB8z
BayOTIDE: Bayesian Online Multivariate Time Series Imputation with Functional Decomposition
https://proceedings.mlr.press/v235/fang24d.html
Shikai Fang, Qingsong Wen, Yingtao Luo, Shandian Zhe, Liang Sun
https://proceedings.mlr.press/v235/fang24d.html
ICML 2024
In real-world scenarios such as traffic and energy management, we frequently encounter large volumes of time-series data characterized by missing values, noise, and irregular sampling patterns. While numerous imputation methods have been proposed, the majority tend to operate within a local horizon, which involves dividing long sequences into batches of fixed-length segments for model training. This local horizon often leads to the overlooking of global trends and periodic patterns. More importantly, most methods assume the observations are sampled at regular timestamps, and fail to handle the complex irregularly sampled time series found in various applications. Additionally, most existing methods are learned in an offline manner. Thus, they are not suitable for applications with rapidly arriving streaming data. To address these challenges, we propose BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition. Our method conceptualizes multivariate time series as the weighted combination of groups of low-rank temporal factors with different patterns. We employ a suite of Gaussian Processes (GPs), each with a unique kernel, as functional priors to model these factors. For computational efficiency, we further convert the GPs into a state-space prior by constructing an equivalent stochastic differential equation (SDE), and develop a scalable algorithm for online inference. The proposed method can not only handle imputation over arbitrary timestamps, but also offer uncertainty quantification and interpretability for the downstream application. We evaluate our method on both synthetic and real-world datasets. We release the code at https://github.com/xuangu-fang/BayOTIDE.
https://proceedings.mlr.press/v235/fang24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fang24e/fang24e.pdf
https://openreview.net/forum?id=gn5AsHIIwb
StackSight: Unveiling WebAssembly through Large Language Models and Neurosymbolic Chain-of-Thought Decompilation
https://proceedings.mlr.press/v235/fang24e.html
Weike Fang, Zhejian Zhou, Junzhou He, Weihang Wang
https://proceedings.mlr.press/v235/fang24e.html
ICML 2024
WebAssembly enables near-native execution in web applications and is increasingly adopted for tasks that demand high performance and robust security. However, its assembly-like syntax, implicit stack machine, and low-level data types make it extremely difficult for human developers to understand, spurring the need for effective WebAssembly reverse engineering techniques. In this paper, we propose StackSight, a novel neurosymbolic approach that combines Large Language Models (LLMs) with advanced program analysis to decompile complex WebAssembly code into readable C++ snippets. StackSight visualizes and tracks virtual stack alterations via a static analysis algorithm and then applies chain-of-thought prompting to harness LLM’s complex reasoning capabilities. Evaluation results show that StackSight significantly improves WebAssembly decompilation. Our user study also demonstrates that code snippets generated by StackSight have significantly higher win rates and enable a better grasp of code semantics.
https://proceedings.mlr.press/v235/fani-24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fani-24a/fani-24a.pdf
https://openreview.net/forum?id=cMige5MK1N
Accelerating Heterogeneous Federated Learning with Closed-form Classifiers
https://proceedings.mlr.press/v235/fani-24a.html
Eros Fanì, Raffaello Camoriano, Barbara Caputo, Marco Ciccone
https://proceedings.mlr.press/v235/fani-24a.html
ICML 2024
Federated Learning (FL) methods often struggle in highly statistically heterogeneous settings. Indeed, non-IID data distributions cause client drift and biased local solutions, particularly pronounced in the final classification layer, negatively impacting convergence speed and accuracy. To address this issue, we introduce Federated Recursive Ridge Regression (Fed3R). Our method fits a Ridge Regression classifier computed in closed form leveraging pre-trained features. Fed3R is immune to statistical heterogeneity and is invariant to the sampling order of the clients. Therefore, it proves particularly effective in cross-device scenarios. Furthermore, it is fast and efficient in terms of communication and computation costs, requiring up to two orders of magnitude fewer resources than the competitors. Finally, we propose to leverage the Fed3R parameters as an initialization for a softmax classifier and subsequently fine-tune the model using any FL algorithm (Fed3R with Fine-Tuning, Fed3R+FT). Our findings also indicate that maintaining a fixed classifier aids in stabilizing the training and learning more discriminative features in cross-device settings. Official website: https://fed-3r.github.io/.
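The closed-form, order-invariant aggregation behind a federated ridge-regression classifier can be sketched as each client sending sufficient statistics that the server sums before a single matrix solve; the variable names and the use of one-hot targets are illustrative assumptions, not the authors' code.

```python
import numpy as np

def client_statistics(features: np.ndarray, labels: np.ndarray, num_classes: int):
    """Sufficient statistics of one client: Gram matrix and feature-target product."""
    Y = np.eye(num_classes)[labels]          # one-hot targets
    return features.T @ features, features.T @ Y

def server_ridge_classifier(stats, dim: int, lam: float = 1.0) -> np.ndarray:
    """Closed-form ridge solution from summed client statistics.

    Because the statistics are simply summed, the result is invariant to the
    order in which clients are sampled, as the abstract emphasizes.
    """
    A = sum(a for a, _ in stats) + lam * np.eye(dim)
    B = sum(b for _, b in stats)
    return np.linalg.solve(A, B)             # weights W with logits = features @ W

# Usage: two clients with 512-dim pre-trained features and 10 classes.
rng = np.random.default_rng(0)
stats = [client_statistics(rng.normal(size=(100, 512)), rng.integers(0, 10, 100), 10)
         for _ in range(2)]
W = server_ridge_classifier(stats, dim=512)
```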
https://proceedings.mlr.press/v235/farebrother24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/farebrother24a/farebrother24a.pdf
https://openreview.net/forum?id=dVpFKfqF3R
Stop Regressing: Training Value Functions via Classification for Scalable Deep RL
https://proceedings.mlr.press/v235/farebrother24a.html
Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taiga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, Aviral Kumar, Rishabh Agarwal
https://proceedings.mlr.press/v235/farebrother24a.html
ICML 2024
Value functions are an essential component in deep reinforcement learning (RL) and are typically trained via mean squared error regression to match bootstrapped target values. However, scaling value-based RL methods to large networks has proven challenging. This difficulty is in stark contrast to supervised learning: by leveraging a cross-entropy classification loss, supervised methods have scaled reliably to massive networks. Observing this discrepancy, in this paper, we investigate whether the scalability of deep RL can also be improved simply by using classification in place of regression for training value functions. We show that training value functions with categorical cross-entropy significantly enhances performance and scalability across various domains, including single-task RL on Atari 2600 games, multi-task RL on Atari with large-scale ResNets, robotic manipulation with Q-transformers, playing Chess without search, and a language-agent Wordle task with high-capacity Transformers, achieving state-of-the-art results on these domains. Through careful analysis, we show that categorical cross-entropy mitigates issues inherent to value-based RL, such as noisy targets and non-stationarity. We argue that shifting to categorical cross-entropy for training value functions can substantially improve the scalability of deep RL at little-to-no cost.
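One common way to cast a scalar value-regression target as a classification target, in the spirit of the approach above, is a "two-hot" projection onto a fixed grid of bins followed by cross-entropy; the bin range and count here are illustrative assumptions, and the paper also studies other categorical target constructions.

```python
import torch
import torch.nn.functional as F

def two_hot(targets: torch.Tensor, v_min: float, v_max: float, num_bins: int) -> torch.Tensor:
    """Project scalar targets onto a categorical distribution over fixed bins.

    Each target puts mass on its two neighbouring bin centres so that the
    expected bin value equals the original scalar.
    """
    centres = torch.linspace(v_min, v_max, num_bins, device=targets.device)
    t = targets.clamp(v_min, v_max)
    idx = torch.clamp(((t - v_min) / (v_max - v_min) * (num_bins - 1)).floor().long(), 0, num_bins - 2)
    upper_w = (t - centres[idx]) / (centres[idx + 1] - centres[idx])
    dist = torch.zeros(targets.shape[0], num_bins, device=targets.device)
    dist.scatter_(1, idx.unsqueeze(1), (1 - upper_w).unsqueeze(1))
    dist.scatter_(1, (idx + 1).unsqueeze(1), upper_w.unsqueeze(1))
    return dist

def categorical_value_loss(logits: torch.Tensor, td_targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between predicted bin logits and two-hot TD targets."""
    target_dist = two_hot(td_targets, v_min=-10.0, v_max=10.0, num_bins=51)
    return -(target_dist * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```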
https://proceedings.mlr.press/v235/farnadi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/farnadi24a/farnadi24a.pdf
https://openreview.net/forum?id=XDz9leJ9iK
Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities
https://proceedings.mlr.press/v235/farnadi24a.html
Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh
https://proceedings.mlr.press/v235/farnadi24a.html
ICML 2024
The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind. In this position paper, we discuss that disparities towards marginalized communities – performance, representation, privacy, robustness, interpretability and safety – are not isolated concerns but rather interconnected elements of a cascading disparity phenomenon. We contrast foundation models with traditional models and highlight the potential for exacerbated disparity against marginalized communities. Moreover, we emphasize the unique threat of cascading impacts in foundation models, where interconnected disparities can trigger long-lasting negative consequences, specifically to the people on the margin. We define marginalized communities within the machine learning context and explore the multifaceted nature of disparities. We analyze the sources of these disparities, tracing them from data creation, training and deployment procedures to highlight the complex technical and socio-technical landscape. To mitigate the pressing crisis, we conclude with a set of calls to action to mitigate disparity at its source.
https://proceedings.mlr.press/v235/farzam24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/farzam24a/farzam24a.pdf
https://openreview.net/forum?id=4DAl3IsvlU
From Geometry to Causality- Ricci Curvature and the Reliability of Causal Inference on Networks
https://proceedings.mlr.press/v235/farzam24a.html
Amirhossein Farzam, Allen Tannenbaum, Guillermo Sapiro
https://proceedings.mlr.press/v235/farzam24a.html
ICML 2024
Causal inference on networks faces challenges posed in part by violations of standard identification assumptions due to dependencies between treatment units. Although graph geometry fundamentally influences such dependencies, the potential of geometric tools for causal inference on networked treatment units is yet to be unlocked. Moreover, despite significant progress utilizing graph neural networks (GNNs) for causal inference on networks, methods for evaluating their achievable reliability without ground truth are lacking. In this work we establish for the first time a theoretical link between network geometry, the graph Ricci curvature in particular, and causal inference, formalizing the intrinsic challenges that negative curvature poses to estimating causal parameters. The Ricci curvature can then be used to assess the reliability of causal estimates in structured data, as we empirically demonstrate. Informed by this finding, we propose a method using the geometric Ricci flow to reduce causal effect estimation error in networked data, showcasing how this newfound connection between graph geometry and causal inference could improve GNN-based causal inference. Bridging graph geometry and causal inference, this paper opens the door to geometric techniques for improving causal estimation on networks.
https://proceedings.mlr.press/v235/fei24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fei24a/fei24a.pdf
https://openreview.net/forum?id=fO31YAyNbI
Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition
https://proceedings.mlr.press/v235/fei24a.html
Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Meishan Zhang, Mong-Li Lee, Wynne Hsu
https://proceedings.mlr.press/v235/fei24a.html
ICML 2024
Existing research of video understanding still struggles to achieve in-depth comprehension and reasoning in complex videos, primarily due to the under-exploration of two key bottlenecks: fine-grained spatial-temporal perceptive understanding and cognitive-level video scene comprehension. This paper bridges the gap by presenting a novel solution. We first introduce a novel video Multimodal Large Language Model (MLLM), MotionEpic, which achieves fine-grained pixel-level spatial-temporal video grounding by integrating video spatial-temporal scene graph (STSG) representation. Building upon MotionEpic, we then develop a Video-of-Thought (VoT) reasoning framework. VoT inherits the Chain-of-Thought (CoT) core, breaking down a complex task into simpler and manageable sub-problems, and addressing them step-by-step from a low-level pixel perception to high-level cognitive interpretation. Extensive experiments across various complex video QA benchmarks demonstrate that our overall framework strikingly boosts existing state-of-the-art. To our knowledge, this is the first attempt at successfully implementing the CoT technique for achieving human-level video reasoning, where we show great potential in extending it to a wider range of video understanding scenarios. Systems and codes will be open later.
https://proceedings.mlr.press/v235/fellows24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fellows24a/fellows24a.pdf
https://openreview.net/forum?id=OYw6sS8QmL
Bayesian Exploration Networks
https://proceedings.mlr.press/v235/fellows24a.html
Mattie Fellows, Brandon Gary Kaplowitz, Christian Schroeder De Witt, Shimon Whiteson
https://proceedings.mlr.press/v235/fellows24a.html
ICML 2024
Bayesian reinforcement learning (RL) offers a principled and elegant approach for sequential decision making under uncertainty. Most notably, Bayesian agents do not face an exploration/exploitation dilemma, a major pathology of frequentist methods. However, theoretical understanding of model-free approaches is lacking. In this paper, we introduce a novel Bayesian model-free formulation and the first analysis showing that model-free approaches can yield Bayes-optimal policies. We show all existing model-free approaches make approximations that yield policies that can be arbitrarily Bayes-suboptimal. As a first step towards model-free Bayes optimality, we introduce the Bayesian exploration network (BEN) which uses normalising flows to model both the aleatoric uncertainty (via density estimation) and epistemic uncertainty (via variational inference) in the Bellman operator. In the limit of complete optimisation, BEN learns true Bayes-optimal policies, but like in variational expectation-maximisation, partial optimisation renders our approach tractable. Empirical results demonstrate that BEN can learn true Bayes-optimal policies in tasks where existing model-free approaches fail.
https://proceedings.mlr.press/v235/feng24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24a/feng24a.pdf
https://openreview.net/forum?id=3ZM8MXGFRA
Auto-Linear Phenomenon in Subsurface Imaging
https://proceedings.mlr.press/v235/feng24a.html
Yinan Feng, Yinpeng Chen, Peng Jin, Shihang Feng, Youzuo Lin
https://proceedings.mlr.press/v235/feng24a.html
ICML 2024
Subsurface imaging involves solving full waveform inversion (FWI) to predict geophysical properties from measurements. This problem can be reframed as an image-to-image translation, with the usual approach being to train an encoder-decoder network using paired data from two domains: geophysical property and measurement. A recent seminal work (InvLINT) demonstrates there is only a linear mapping between the latent spaces of the two domains, and the decoder requires paired data for training. This paper extends this direction by demonstrating that only the linear mapping necessitates paired data, while both the encoder and decoder can be learned from their respective domains through self-supervised learning. This unveils an intriguing phenomenon (named Auto-Linear) where the self-learned features of two separate domains are automatically linearly correlated. Compared with existing methods, our Auto-Linear has four advantages: (a) solving both forward and inverse modeling simultaneously, (b) reducing model size, (c) enhanced performance, especially when the paired data is limited, and (d) strong generalization ability of the trained encoder and decoder.
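A minimal sketch of the check the Auto-Linear phenomenon suggests: after training the two encoders independently with self-supervision, fit only a linear map between their latent spaces on the limited paired data. The encoder names and shapes below are hypothetical.

```python
import numpy as np

def fit_latent_linear_map(z_measurement: np.ndarray, z_property: np.ndarray) -> np.ndarray:
    """Least-squares linear map W so that z_measurement @ W ~ z_property.

    Only this map is learned from paired data; the encoders producing the
    latents are assumed to have been trained self-supervised on their own domains.
    """
    W, *_ = np.linalg.lstsq(z_measurement, z_property, rcond=None)
    return W

# Usage with hypothetical latents of paired samples:
# z_m = measurement_encoder(seismic_data)      # shape (n_pairs, d_m)
# z_p = property_encoder(velocity_maps)        # shape (n_pairs, d_p)
# W = fit_latent_linear_map(z_m, z_p)
# predicted_property = property_decoder(z_m_test @ W)
```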
https://proceedings.mlr.press/v235/feng24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24b/feng24b.pdf
https://openreview.net/forum?id=NbYAmsFJrc
Resisting Stochastic Risks in Diffusion Planners with the Trajectory Aggregation Tree
https://proceedings.mlr.press/v235/feng24b.html
Lang Feng, Pengjie Gu, Bo An, Gang Pan
https://proceedings.mlr.press/v235/feng24b.html
ICML 2024
Diffusion planners have shown promise in handling long-horizon and sparse-reward tasks due to the non-autoregressive plan generation. However, their inherent stochastic risk of generating infeasible trajectories presents significant challenges to their reliability and stability. We introduce a novel approach, the Trajectory Aggregation Tree (TAT), to address this issue in diffusion planners. Compared to prior methods that rely solely on raw trajectory predictions, TAT aggregates information from both historical and current trajectories, forming a dynamic tree-like structure. Each trajectory is conceptualized as a branch and individual states as nodes. As the structure evolves with the integration of new trajectories, unreliable states are marginalized, and the most impactful nodes are prioritized for decision-making. TAT can be deployed without modifying the original training and sampling pipelines of diffusion planners, making it a training-free, ready-to-deploy solution. We provide both theoretical analysis and empirical evidence to support TAT’s effectiveness. Our results highlight its remarkable ability to resist the risk from unreliable trajectories, guarantee a performance boost for diffusion planners in 100% of tasks, and exhibit an appreciable tolerance margin for sample quality, thereby enabling planning with a more than $3\times$ acceleration.
https://proceedings.mlr.press/v235/feng24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24c/feng24c.pdf
https://openreview.net/forum?id=uaExqhJ2Ag
Fast White-Box Adversarial Streaming Without a Random Oracle
https://proceedings.mlr.press/v235/feng24c.html
Ying Feng, Aayush Jain, David Woodruff
https://proceedings.mlr.press/v235/feng24c.html
ICML 2024
Recently, the question of adversarially robust streaming, where the stream is allowed to depend on the randomness of the streaming algorithm, has gained a lot of attention. In this work, we consider a strong white-box adversarial model (Ajtai et al. PODS 2022), in which the adversary has access to all past random coins and the parameters used by the streaming algorithm. We focus on the sparse recovery problem and extend our result to other tasks such as distinct element estimation and low-rank approximation of matrices and tensors. The main drawback of previous work is that it requires a random oracle, which is especially problematic in the streaming model since the amount of randomness is counted in the space complexity of a streaming algorithm. Also, the previous work suffers from large update time. We construct a near-optimal solution for the sparse recovery problem in white-box adversarial streams, based on the subexponentially secure Learning with Errors assumption. Importantly, our solution does not require a random oracle and has a polylogarithmic per item processing time. We also give results in a related white-box adversarially robust distributed model. Our constructions are based on homomorphic encryption schemes satisfying very mild structural properties that are currently satisfied by most known schemes.
https://proceedings.mlr.press/v235/feng24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24d/feng24d.pdf
https://openreview.net/forum?id=zS8zUuAU8T
DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection
https://proceedings.mlr.press/v235/feng24d.html
Yongchao Feng, Shiwei Li, Yingjie Gao, Ziyue Huang, Yanan Zhang, Qingjie Liu, Yunhong Wang
https://proceedings.mlr.press/v235/feng24d.html
ICML 2024
Though feature-alignment based Domain Adaptive Object Detection (DAOD) methods have achieved remarkable progress, they ignore the source bias issue, i.e., the detector tends to acquire more source-specific knowledge, impeding its generalization capabilities in the target domain. Furthermore, these methods face a more formidable challenge in achieving consistent classification and localization in the target domain compared to the source domain. To overcome these challenges, we propose a novel Distillation-based Source Debiasing (DSD) framework for DAOD, which can distill domain-agnostic knowledge from a pre-trained teacher model, improving the detector’s performance on both domains. In addition, we design a Target-Relevant Object Localization Network (TROLN), which can mine target-related localization information from source and target-style mixed data. Accordingly, we present a Domain-aware Consistency Enhancing (DCE) strategy, in which this information is formulated into a new localization representation to further refine classification scores in the testing stage, achieving a harmonization between classification and localization. Extensive experiments have been conducted to demonstrate the effectiveness of this method, which consistently improves the strong baseline by large margins, outperforming existing alignment-based works.
https://proceedings.mlr.press/v235/feng24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24e/feng24e.pdf
https://openreview.net/forum?id=tgsSKziIEa
Keypoint-based Progressive Chain-of-Thought Distillation for LLMs
https://proceedings.mlr.press/v235/feng24e.html
Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou, Ye Yuan, Guoren Wang
https://proceedings.mlr.press/v235/feng24e.html
ICML 2024
Chain-of-thought distillation is a powerful technique for transferring reasoning abilities from large language models (LLMs) to smaller student models. Previous methods typically require the student to mimic the step-by-step rationale produced by LLMs, often facing the following challenges: (i) Tokens within a rationale vary in significance, and treating them equally may fail to accurately mimic keypoint tokens, leading to reasoning errors. (ii) They usually distill knowledge by consistently predicting all the steps in a rationale, which falls short in distinguishing the learning order of step generation. This diverges from the human cognitive progression of starting with easy tasks and advancing to harder ones, resulting in sub-optimal outcomes. To this end, we propose a unified framework, called KPOD, to address these issues. Specifically, we propose a token weighting module utilizing mask learning to encourage accurate mimicry of keypoint tokens by the student during distillation. Besides, we develop an in-rationale progressive distillation strategy, starting with training the student to generate the final reasoning steps and gradually extending to cover the entire rationale. To accomplish this, a weighted token generation loss is proposed to assess step reasoning difficulty, and a value function is devised to schedule the progressive distillation by considering both step difficulty and question diversity. Extensive experiments on four reasoning benchmarks illustrate our KPOD outperforms previous methods by a large margin.
https://proceedings.mlr.press/v235/feng24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24f/feng24f.pdf
https://openreview.net/forum?id=2NfpFwJfKu
UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning
https://proceedings.mlr.press/v235/feng24f.html
Shikun Feng, Yuyan Ni, Minghao Li, Yanwen Huang, Zhi-Ming Ma, Wei-Ying Ma, Yanyan Lan
https://proceedings.mlr.press/v235/feng24f.html
ICML 2024
Recently, a noticeable trend has emerged in developing pre-trained foundation models in the domains of CV and NLP. However, molecular pre-training lacks a universal model capable of effectively applying to various categories of molecular tasks, since existing prevalent pre-training methods are effective only for specific types of downstream tasks. Furthermore, the lack of profound understanding of existing pre-training methods, including 2D graph masking, 2D-3D contrastive learning, and 3D denoising, hampers the advancement of molecular foundation models. In this work, we provide a unified comprehension of existing pre-training methods through the lens of contrastive learning: their distinctions lie in clustering different views of molecules, which is shown to benefit specific downstream tasks. To achieve a complete and general-purpose molecular representation, we propose a novel pre-training framework, named UniCorn, that inherits the merits of the three methods, depicting molecular views at three different levels. SOTA performance across quantum, physicochemical, and biological tasks, along with a comprehensive ablation study, validates the universality and effectiveness of UniCorn.
https://proceedings.mlr.press/v235/feng24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24g/feng24g.pdf
https://openreview.net/forum?id=Y0sH9HGMwq
Prediction Accuracy of Learning in Games: Follow-the-Regularized-Leader meets Heisenberg
https://proceedings.mlr.press/v235/feng24g.html
Yi Feng, Georgios Piliouras, Xiao Wang
https://proceedings.mlr.press/v235/feng24g.html
ICML 2024
We investigate the accuracy of prediction in deterministic learning dynamics of zero-sum games with random initializations, specifically focusing on observer uncertainty and its relationship to the evolution of covariances. Zero-sum games are a prominent field of interest in machine learning due to their various applications. Concurrently, the accuracy of prediction in dynamical systems from mechanics has long been a classic subject of investigation since the discovery of the Heisenberg Uncertainty Principle. This principle employs covariance and standard deviation of particle states to measure prediction accuracy. In this study, we bring these two approaches together to analyze the Follow-the-Regularized-Leader (FTRL) algorithm in two-player zero-sum games. We provide growth rates of covariance information for continuous-time FTRL, as well as its two canonical discretization methods (Euler and Symplectic). A Heisenberg-type inequality is established for FTRL. Our analysis and experiments also show that employing Symplectic discretization enhances the accuracy of prediction in learning dynamics.
https://proceedings.mlr.press/v235/feng24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24h/feng24h.pdf
https://openreview.net/forum?id=7yixJXmzb8
Privacy Backdoors: Stealing Data with Corrupted Pretrained Models
https://proceedings.mlr.press/v235/feng24h.html
Shanglun Feng, Florian Tramèr
https://proceedings.mlr.press/v235/feng24h.html
ICML 2024
Practitioners commonly download pretrained machine learning models from open repositories and finetune them to fit specific applications. We show that this practice introduces a new risk of privacy backdoors. By tampering with a pretrained model’s weights, an attacker can fully compromise the privacy of the finetuning data. We show how to build privacy backdoors for a variety of models, including transformers, which enable an attacker to reconstruct individual finetuning samples with guaranteed success. We further show that backdoored models allow for tight privacy attacks on models trained with differential privacy (DP). The common optimistic practice of training DP models with loose privacy guarantees is thus insecure if the model is not trusted. Overall, our work highlights a crucial and overlooked supply chain attack on machine learning privacy.
https://proceedings.mlr.press/v235/feng24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24i/feng24i.pdf
https://openreview.net/forum?id=xtwCf7iAs2
Memory Efficient Neural Processes via Constant Memory Attention Block
https://proceedings.mlr.press/v235/feng24i.html
Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Yoshua Bengio, Mohamed Osama Ahmed
https://proceedings.mlr.press/v235/feng24i.html
ICML 2024
Neural Processes (NPs) are popular meta-learning methods for efficiently modelling predictive uncertainty. Recent state-of-the-art methods, however, leverage expensive attention mechanisms, limiting their applications, particularly in low-resource settings. In this work, we propose Constant Memory Attentive Neural Processes (CMANPs), an NP variant that only requires constant memory. To do so, we first propose an efficient update operation for Cross Attention. Leveraging the update operation, we propose Constant Memory Attention Block (CMAB), a novel attention block that (i) is permutation invariant, (ii) computes its output in constant memory, and (iii) performs constant computation updates. Finally, building on CMAB, we detail Constant Memory Attentive Neural Processes. Empirically, we show CMANPs achieve state-of-the-art results on popular NP benchmarks while being significantly more memory efficient than prior methods.
https://proceedings.mlr.press/v235/feng24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24j/feng24j.pdf
https://openreview.net/forum?id=ZVmMV3AHjC
Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games
https://proceedings.mlr.press/v235/feng24j.html
Songtao Feng, Ming Yin, Yu-Xiang Wang, Jing Yang, Yingbin Liang
https://proceedings.mlr.press/v235/feng24j.html
ICML 2024
The problem of two-player zero-sum Markov games has recently attracted increasing interest in theoretical studies of multi-agent reinforcement learning (RL). In particular, for finite-horizon episodic Markov decision processes (MDPs), it has been shown that model-based algorithms can find an $\epsilon$-optimal Nash Equilibrium (NE) with the sample complexity of $O(H^3SAB/\epsilon^2)$, which is optimal in the dependence on the horizon $H$ and the number of states $S$ (where $A$ and $B$ denote the number of actions of the two players, respectively). However, none of the existing model-free algorithms can achieve such optimality. In this work, we propose a model-free stage-based algorithm and show that it achieves the same sample complexity as the best model-based algorithm, and hence for the first time demonstrate that model-free algorithms can enjoy the same optimality in the $H$ dependence as model-based algorithms. The main improvement in the dependency on $H$ arises by leveraging the popular variance reduction technique based on the reference-advantage decomposition previously used only for single-agent RL. However, such a technique relies on a critical monotonicity property of the value function, which does not hold in Markov games due to the update of the policy via the coarse correlated equilibrium (CCE) oracle. Thus, to extend such a technique to Markov games, our algorithm features a key novel design of updating the reference value functions as the pair of optimistic and pessimistic value functions whose value difference is the smallest in the history, in order to achieve the desired improvement in sample efficiency.
https://proceedings.mlr.press/v235/feng24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/feng24k/feng24k.pdf
https://openreview.net/forum?id=8xKGZsnV2a
AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA
https://proceedings.mlr.press/v235/feng24k.html
Weitao Feng, Wenbo Zhou, Jiyan He, Jie Zhang, Tianyi Wei, Guanlin Li, Tianwei Zhang, Weiming Zhang, Nenghai Yu
https://proceedings.mlr.press/v235/feng24k.html
ICML 2024
Diffusion models have achieved remarkable success in generating high-quality images. Recently, the open-source models represented by Stable Diffusion (SD) are thriving and are accessible for customization, giving rise to a vibrant community of creators and enthusiasts. However, the widespread availability of customized SD models has led to copyright concerns, such as unauthorized model distribution and unconsented commercial use. To address this, recent works aim to make SD models output watermarked content for post-hoc forensics. Unfortunately, none of them achieves the challenging white-box protection setting, in which a malicious user can easily remove or replace the watermarking module to defeat subsequent verification. To this end, we propose AquaLoRA as the first implementation under this scenario. Briefly, we merge watermark information into the U-Net of Stable Diffusion models via a watermark Low-Rank Adaptation (LoRA) module in a two-stage manner. For the watermark LoRA module, we devise a scaling matrix to achieve flexible message updates without retraining. To guarantee fidelity, we design Prior Preserving Fine-Tuning (PPFT) to ensure watermark learning with minimal impact on the model distribution, validated by proofs. Finally, we conduct extensive experiments and ablation studies to verify our design. Our code is available at github.com/Georgefwt/AquaLoRA.
https://proceedings.mlr.press/v235/ferber24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ferber24a/ferber24a.pdf
https://openreview.net/forum?id=DiyE6OOGBa
GenCO: Generating Diverse Designs with Combinatorial Constraints
https://proceedings.mlr.press/v235/ferber24a.html
Aaron M Ferber, Arman Zharmagambetov, Taoan Huang, Bistra Dilkina, Yuandong Tian
https://proceedings.mlr.press/v235/ferber24a.html
ICML 2024
Deep generative models like GAN and VAE have shown impressive results in generating unconstrained objects like images. However, many design settings arising in industrial design, material science, computer graphics and more require that the generated objects satisfy hard combinatorial constraints or meet objectives in addition to modeling a data distribution. To address this, we propose GenCO, a generative framework that guarantees constraint satisfaction throughout training by leveraging differentiable combinatorial solvers to enforce feasibility. GenCO imposes the generative loss on provably feasible solutions rather than intermediate soft solutions, meaning that the deep generative network can focus on ensuring the generated objects match the data distribution without having to also capture feasibility. This shift enables practitioners to enforce hard constraints on the generated outputs during end-to-end training, enabling assessments of their feasibility and introducing additional combinatorial loss components to deep generative training. We demonstrate the effectiveness of our approach on a variety of generative combinatorial tasks, including game level generation, map creation for path planning, and photonic device design, consistently demonstrating its capability to yield diverse, high-quality solutions that verifiably adhere to user-specified combinatorial properties.
https://proceedings.mlr.press/v235/ferchichi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ferchichi24a/ferchichi24a.pdf
https://openreview.net/forum?id=UZZaWUR0n4
Active Ranking and Matchmaking, with Perfect Matchings
https://proceedings.mlr.press/v235/ferchichi24a.html
Hafedh El Ferchichi, Matthieu Lerasle, Vianney Perchet
https://proceedings.mlr.press/v235/ferchichi24a.html
ICML 2024
We address the challenge of actively ranking a set of items/players with varying values/strengths. The comparison outcomes are random, with greater noise the closer the values are. A crucial requirement is that, at each iteration of the algorithm, all items must be compared once, i.e., each iteration is a perfect matching. Furthermore, we presume that comparing two players with closely matched strengths incurs no cost, whereas a unit cost is associated with comparing players whose strength difference is more substantial. Our secondary objective is to determine an optimal matching between players based on this cost function: we propose and analyze an algorithm that draws on concepts from both AKS sorting networks and bandit theory. Our algorithm achieves both objectives with high probability, and the total cost is optimal (up to logarithmic terms).
https://proceedings.mlr.press/v235/fernando24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fernando24a/fernando24a.pdf
https://openreview.net/forum?id=9ZxnPZGmPU
Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution
https://proceedings.mlr.press/v235/fernando24a.html
Chrisantha Fernando, Dylan Sunil Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
https://proceedings.mlr.press/v235/fernando24a.html
ICML 2024
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, evaluates them for fitness on a training set, and repeats this process over multiple generations to evolve task-prompts. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
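A stripped-down sketch of the self-referential evolution loop described above is given below; `call_llm` and `fitness` are hypothetical placeholders (a real system would query an LLM for both mutation and evaluation), so this only shows the control flow, not Promptbreeder itself.

```python
# A highly simplified sketch of a self-referential prompt-evolution loop.
# `call_llm` and `fitness` are hypothetical stand-ins for LLM calls and task evaluation.
import random

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would query an LLM here.
    return prompt + " (refined)"

def fitness(task_prompt: str, train_set) -> float:
    # Placeholder: score the task-prompt on a training set.
    return random.random()

def evolve(task_prompts, mutation_prompts, train_set, generations=5):
    population = list(zip(task_prompts, mutation_prompts))
    for _ in range(generations):
        scored = sorted(population, key=lambda tm: fitness(tm[0], train_set), reverse=True)
        survivors = scored[: max(1, len(scored) // 2)]
        children = []
        for task_p, mut_p in survivors:
            # Mutate the task-prompt using its mutation-prompt...
            new_task = call_llm(f"{mut_p}\nINSTRUCTION: {task_p}")
            # ...and also mutate the mutation-prompt itself (the self-referential step).
            new_mut = call_llm(f"Improve this mutation instruction: {mut_p}")
            children.append((new_task, new_mut))
        population = survivors + children
    return max(population, key=lambda tm: fitness(tm[0], train_set))

best = evolve(["Let's think step by step."], ["Rephrase to be more precise."], None)
print(best[0])
```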
https://proceedings.mlr.press/v235/ferry24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ferry24a/ferry24a.pdf
https://openreview.net/forum?id=cc72Vnfvoc
Trained Random Forests Completely Reveal your Dataset
https://proceedings.mlr.press/v235/ferry24a.html
Julien Ferry, Ricardo Fukasawa, Timothée Pascal, Thibaut Vidal
https://proceedings.mlr.press/v235/ferry24a.html
ICML 2024
We introduce an optimization-based reconstruction attack capable of completely or near-completely reconstructing a dataset utilized for training a random forest. Notably, our approach relies solely on information readily available in commonly used libraries such as scikit-learn. To achieve this, we formulate the reconstruction problem as a combinatorial problem under a maximum likelihood objective. We demonstrate that this problem is NP-hard, though solvable at scale using constraint programming - an approach rooted in constraint propagation and solution-domain reduction. Through an extensive computational investigation, we demonstrate that random forests trained without bootstrap aggregation but with feature randomization are susceptible to a complete reconstruction. This holds true even with a small number of trees. Even with bootstrap aggregation, the majority of the data can also be reconstructed. These findings underscore a critical vulnerability inherent in widely adopted ensemble methods, warranting attention and mitigation. Although the potential for such reconstruction attacks has been discussed in privacy research, our study provides clear empirical evidence of their practicability.
https://proceedings.mlr.press/v235/ferte24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ferte24a/ferte24a.pdf
https://openreview.net/forum?id=CY0lFwD4qx
Reservoir Computing for Short High-Dimensional Time Series: an Application to SARS-CoV-2 Hospitalization Forecast
https://proceedings.mlr.press/v235/ferte24a.html
Thomas Ferté, Dan Dutartre, Boris P Hejblum, Romain Griffier, Vianney Jouhet, Rodolphe Thiébaut, Pierrick Legrand, Xavier Hinaut
https://proceedings.mlr.press/v235/ferte24a.html
ICML 2024
In this work, we aimed at forecasting the number of SARS-CoV-2 hospitalized patients at 14 days to help anticipate the bed requirements of a large-scale hospital, using public data and electronic health records data. Previous attempts led to mixed performance in this high-dimensional setting; we introduce a novel approach to time series forecasting that provides an alternative to conventional methods for dealing with a large number of potential features of interest (409 predictors). We integrate Reservoir Computing (RC) with feature selection using a genetic algorithm (GA) to gather optimal non-linear combinations of inputs and improve prediction in a sample-efficient context. We illustrate that the RC-GA combination exhibits excellent performance in forecasting SARS-CoV-2 hospitalizations. This approach outperformed the use of RC alone and other conventional methods: LSTM, Transformers, Elastic-Net, XGBoost. Notably, this work marks the pioneering use of RC (along with GA) in the realm of short and high-dimensional time series, positioning it as a competitive and innovative approach in comparison to standard methods.
https://proceedings.mlr.press/v235/fey24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fey24a/fey24a.pdf
https://openreview.net/forum?id=BIMSHniyCP
Position: Relational Deep Learning - Graph Representation Learning on Relational Databases
https://proceedings.mlr.press/v235/fey24a.html
Matthias Fey, Weihua Hu, Kexin Huang, Jan Eric Lenssen, Rishabh Ranjan, Joshua Robinson, Rex Ying, Jiaxuan You, Jure Leskovec
https://proceedings.mlr.press/v235/fey24a.html
ICML 2024
Much of the world’s most valued data is stored in relational databases and data warehouses, where the data is organized into tables connected by primary-foreign key relations. However, building machine learning models using this data is both challenging and time consuming because no ML algorithm can directly learn from multiple connected tables. Current approaches can only learn from a single table, so data must first be manually joined and aggregated into this format, a laborious process known as feature engineering. Feature engineering is slow, error prone, and leads to suboptimal models. Here we introduce Relational Deep Learning (RDL), a blueprint for end-to-end learning on relational databases. The key is to represent relational databases as temporal, heterogeneous graphs, with a node for each row in each table, and edges specified by primary-foreign key links. Graph Neural Networks then learn representations that leverage all input data, without any manual feature engineering. We also introduce RelBench, a benchmark and testing suite, and demonstrate strong initial results. Overall, we define a new research area that generalizes graph machine learning and broadens its applicability.
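The graph construction described above can be illustrated with a few lines of pandas: one node per table row and one edge per primary-foreign key link. The table names, columns, and edge-type naming below are illustrative assumptions, not the RelBench API.

```python
# A minimal sketch (not the RelBench API) of turning two linked tables into the
# node/edge structure described above: one node per row, edges along
# primary-foreign key links. Table names and columns are illustrative.
import pandas as pd

users = pd.DataFrame({"user_id": [1, 2], "signup_time": ["2021-01-01", "2021-02-01"]})
orders = pd.DataFrame({"order_id": [10, 11, 12],
                       "user_id": [1, 1, 2],           # foreign key into users
                       "order_time": ["2021-03-01", "2021-03-05", "2021-04-01"]})

# One node type per table; node index = row position.
nodes = {"users": users, "orders": orders}

# Edges of type (orders -> users) follow the foreign key column.
user_pos = {uid: i for i, uid in enumerate(users["user_id"])}
edges = {("orders", "has_user", "users"):
         [(row_idx, user_pos[uid]) for row_idx, uid in enumerate(orders["user_id"])]}

print(edges)  # [(0, 0), (1, 0), (2, 1)]
# A GNN would then learn per-row embeddings by message passing over this
# heterogeneous (and, via the timestamp columns, temporal) graph.
```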
https://proceedings.mlr.press/v235/fiedler24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fiedler24a/fiedler24a.pdf
https://openreview.net/forum?id=LGDYsBslWi
On Statistical Learning Theory for Distributional Inputs
https://proceedings.mlr.press/v235/fiedler24a.html
Christian Fiedler, Pierre-François Massiani, Friedrich Solowjow, Sebastian Trimpe
https://proceedings.mlr.press/v235/fiedler24a.html
ICML 2024
Kernel-based statistical learning on distributional inputs appears in many relevant applications, from medical diagnostics to causal inference, and poses intriguing theoretical questions. While this learning scenario received considerable attention from the machine learning community recently, many gaps in the theory remain. In particular, most works consider only the distributional regression setting, and focus on the regularized least-squares algorithm for this problem. In this work, we start to fill these gaps. We prove two oracle inequalities for kernel machines in general distributional learning scenarios, as well as a generalization result based on algorithmic stability. Our main results are formulated in great generality, utilizing general Hilbertian embeddings, which makes them applicable to a wide array of approaches to distributional learning. Additionally, we specialize our results to the cases of kernel mean embeddings and of the recently introduced Hilbertian embeddings based on sliced Wasserstein distances, providing concrete instances of the general setup. Our results considerably enlarge the scope of theoretically grounded distributional learning, and provide many interesting avenues for future work.
https://proceedings.mlr.press/v235/finkelshtein24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/finkelshtein24a/finkelshtein24a.pdf
https://openreview.net/forum?id=ZQcqXCuoxD
Cooperative Graph Neural Networks
https://proceedings.mlr.press/v235/finkelshtein24a.html
Ben Finkelshtein, Xingyue Huang, Michael M. Bronstein, Ismail Ilkan Ceylan
https://proceedings.mlr.press/v235/finkelshtein24a.html
ICML 2024
Graph neural networks are popular architectures for graph machine learning, based on iterative computation of node representations of an input graph through a series of invariant transformations. A large class of graph neural networks follow a standard message-passing paradigm: at every layer, each node state is updated based on an aggregate of messages from its neighborhood. In this work, we propose a novel framework for training graph neural networks, where every node is viewed as a player that can choose to listen, broadcast, both listen and broadcast, or isolate. The standard message propagation scheme can then be viewed as a special case of this framework where every node listens and broadcasts to all neighbors. Our approach offers a more flexible and dynamic message-passing paradigm, where each node can determine its own strategy based on its state, effectively exploring the graph topology while learning. We provide a theoretical analysis of the new message-passing scheme, which is further supported by an extensive empirical analysis on synthetic and real-world datasets.
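A toy numpy sketch of the action-gated message passing described above is given below: a node receives messages only if it listens, and only from neighbors that broadcast. The random action policy, the update rule, and the weight matrices are placeholders; in the paper the per-node strategy is learned.

```python
# Toy action-gated message passing: edge j -> i is active iff i listens and j broadcasts.
# The action-selection policy is random here, standing in for a learned policy.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])          # adjacency of a small graph
H = rng.normal(size=(4, 8))           # node states
W = rng.normal(size=(8, 8)) * 0.1     # message/update weights

ACTIONS = ["listen", "broadcast", "both", "isolate"]
actions = rng.choice(ACTIONS, size=4)  # stand-in for a learned per-node policy

listen = np.isin(actions, ["listen", "both"]).astype(float)
broadcast = np.isin(actions, ["broadcast", "both"]).astype(float)

gate = listen[:, None] * broadcast[None, :] * A     # gated adjacency
deg = gate.sum(axis=1, keepdims=True).clip(min=1.0)
messages = gate @ H / deg             # mean over active incoming messages
H_next = np.tanh(H + messages @ W)    # placeholder node update
print(actions, H_next.shape)
```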
https://proceedings.mlr.press/v235/fischer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fischer24a/fischer24a.pdf
https://openreview.net/forum?id=xMJT4XW468
Critical feature learning in deep neural networks
https://proceedings.mlr.press/v235/fischer24a.html
Kirsten Fischer, Javed Lindner, David Dahmen, Zohar Ringel, Michael Krämer, Moritz Helias
https://proceedings.mlr.press/v235/fischer24a.html
ICML 2024
A key property of neural networks driving their success is their ability to learn features from data. Understanding feature learning from a theoretical viewpoint is an emerging field with many open questions. In this work we capture finite-width effects with a systematic theory of network kernels in deep non-linear neural networks. We show that the Bayesian prior of the network can be written in closed form as a superposition of Gaussian processes, whose kernels are distributed with a variance that depends inversely on the network width $N$. A large deviation approach, which is exact in the proportional limit for the number of data points $P=\alpha N\to\infty$, yields a pair of forward-backward equations for the maximum a posteriori kernels in all layers at once. We study their solutions perturbatively, to demonstrate how the backward propagation across layers aligns kernels with the target. An alternative field-theoretic formulation shows that kernel adaptation of the Bayesian posterior at finite-width results from fluctuations in the prior: larger fluctuations correspond to a more flexible network prior and thus enable stronger adaptation to data. We thus find a bridge between the classical edge-of-chaos NNGP theory and feature learning, exposing an intricate interplay between criticality, response functions, and feature scale.
https://proceedings.mlr.press/v235/fischer24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fischer24b/fischer24b.pdf
https://openreview.net/forum?id=3FKEtlX4aM
Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields
https://proceedings.mlr.press/v235/fischer24b.html
Tom Fischer, Pascal Peter, Joachim Weickert, Eddy Ilg
https://proceedings.mlr.press/v235/fischer24b.html
ICML 2024
Deep learning has revolutionized the field of computer vision by introducing large-scale neural networks with millions of parameters. Training these networks requires massive datasets and leads to opaque models that can fail to generalize. At the other extreme, models designed from partial differential equations (PDEs) embed specialized domain knowledge into mathematical equations and usually rely on few manually chosen hyperparameters. This makes them transparent by construction, and if designed and calibrated carefully, they can generalize well to unseen scenarios. In this paper, we show how to bring model- and data-driven approaches together by combining explicit PDE-based approaches with convolutional neural networks to obtain the best of both worlds. We illustrate a joint architecture for the task of inpainting optical flow fields and show that the combination of model- and data-driven modeling leads to an effective architecture. Our model outperforms both fully explicit and fully data-driven baselines in terms of reconstruction quality, robustness and amount of required training data. Averaging the endpoint error across different mask densities, our method outperforms the explicit baselines by 11-27%, the GAN baseline by 47% and the Probabilistic Diffusion baseline by 42%. With that, our method sets a new state of the art for inpainting of optical flow fields from random masks.
https://proceedings.mlr.press/v235/fisher24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fisher24a/fisher24a.pdf
https://openreview.net/forum?id=TUKOklS3gg
Inverse-Variance Weighting for Estimation of Heterogeneous Treatment Effects
https://proceedings.mlr.press/v235/fisher24a.html
Aaron Fisher
https://proceedings.mlr.press/v235/fisher24a.html
ICML 2024
Many methods for estimating conditional average treatment effects (CATEs) can be expressed as weighted pseudo-outcome regressions (PORs). Previous comparisons of POR techniques have paid careful attention to the choice of pseudo-outcome transformation. However, we argue that the dominant driver of performance is actually the choice of weights. For example, we point out that R-Learning implicitly performs a POR with inverse-variance weights (IVWs). In the CATE setting, IVWs mitigate the instability associated with inverse-propensity weights, and lead to convenient simplifications of bias terms. We demonstrate the superior performance of IVWs in simulations, and derive convergence rates for IVWs that are, to our knowledge, the fastest yet shown without assuming knowledge of the covariate distribution.
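Because the abstract itself frames R-Learning as a pseudo-outcome regression with inverse-variance weights, a compact scikit-learn sketch of that estimator is given below; the simulation, the choice of nuisance models, and the omission of cross-fitting are simplifying assumptions.

```python
# CATE estimation as a weighted pseudo-outcome regression with inverse-variance-style
# weights (R-Learner form). Data-generating process and models are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))
e = 1 / (1 + np.exp(-X[:, 0]))                  # true propensity
A = rng.binomial(1, e)
tau = 1.0 + X[:, 1]                             # true CATE
Y = X[:, 0] + A * tau + rng.normal(size=n)

# Nuisance estimates (cross-fitting omitted for brevity).
e_hat = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
m_hat = GradientBoostingRegressor().fit(X, Y).predict(X)

# Weighted pseudo-outcome regression:
resid_a = A - e_hat
pseudo = (Y - m_hat) / resid_a                  # pseudo-outcome
weights = resid_a ** 2                          # inverse-variance-style weights

cate_model = GradientBoostingRegressor().fit(X, pseudo, sample_weight=weights)
print("RMSE of CATE estimate:", np.sqrt(np.mean((cate_model.predict(X) - tau) ** 2)))
```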
https://proceedings.mlr.press/v235/fourati24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fourati24a/fourati24a.pdf
https://openreview.net/forum?id=HPQaMmABgK
Stochastic Q-learning for Large Discrete Action Spaces
https://proceedings.mlr.press/v235/fourati24a.html
Fares Fourati, Vaneet Aggarwal, Mohamed-Slim Alouini
https://proceedings.mlr.press/v235/fourati24a.html
ICML 2024
In complex environments with large discrete action spaces, effective decision-making is critical in reinforcement learning (RL). Despite the widespread use of value-based RL approaches like Q-learning, they come with a computational burden, necessitating the maximization of a value function over all actions in each iteration. This burden becomes particularly challenging when addressing large-scale problems and using deep neural networks as function approximators. In this paper, we present stochastic value-based RL approaches which, in each iteration, as opposed to optimizing over the entire set of $n$ actions, only consider a variable stochastic set of a sublinear number of actions, possibly as small as $\mathcal{O}(\log(n))$. The presented stochastic value-based RL methods include, among others, Stochastic Q-learning, StochDQN, and StochDDQN, all of which integrate this stochastic approach for both value-function updates and action selection. The theoretical convergence of Stochastic Q-learning is established, while an analysis of stochastic maximization is provided. Moreover, through empirical validation, we illustrate that the various proposed approaches outperform the baseline methods across diverse environments, including different control problems, achieving near-optimal average returns in significantly reduced time.
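A tabular toy sketch of the core idea, taking the max over a random sublinear subset of actions (plus the previously best action) instead of all actions, is shown below; the environment, rewards, and hyperparameters are placeholders, and the memory mechanism only approximates the paper's design.

```python
# Tabular Q-learning where each max is taken over a random O(log n) subset of actions.
# Environment dynamics and rewards are dummies, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 1000
k = max(2, int(np.ceil(np.log2(n_actions))))      # sublinear subset size
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def stoch_max(q_row, last_best=None):
    cand = rng.choice(n_actions, size=k, replace=False)
    if last_best is not None:                     # keep the previous argmax in the pool
        cand = np.append(cand, last_best)
    best = cand[np.argmax(q_row[cand])]
    return best, q_row[best]

last_best = np.zeros(n_states, dtype=int)
s = 0
for t in range(10_000):
    a, _ = stoch_max(Q[s], last_best[s])          # stochastic greedy action
    r = rng.normal(loc=(a % 7))                   # dummy reward signal
    s_next = rng.integers(n_states)               # dummy transition
    nb, nq = stoch_max(Q[s_next], last_best[s_next])
    Q[s, a] += alpha * (r + gamma * nq - Q[s, a]) # stochastic-max Bellman update
    last_best[s_next] = nb
    s = s_next
print(Q.max())
```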
https://proceedings.mlr.press/v235/fourati24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fourati24b/fourati24b.pdf
https://openreview.net/forum?id=lrFwPeDdEQ
Federated Combinatorial Multi-Agent Multi-Armed Bandits
https://proceedings.mlr.press/v235/fourati24b.html
Fares Fourati, Mohamed-Slim Alouini, Vaneet Aggarwal
https://proceedings.mlr.press/v235/fourati24b.html
ICML 2024
This paper introduces a federated learning framework tailored for online combinatorial optimization with bandit feedback. In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals. Our framework transforms any offline resilient single-agent $(\alpha-\epsilon)$-approximation algorithm—having a complexity of $\tilde{\mathcal{O}}\left(\frac{\psi}{\epsilon^\beta}\right)$, where the logarithm is omitted, for some function $\psi$ and constant $\beta$—into an online multi-agent algorithm with $m$ communicating agents and an $\alpha$-regret of no more than $\tilde{\mathcal{O}}\left(m^{-\frac{1}{3+\beta}} \psi^\frac{1}{3+\beta} T^\frac{2+\beta}{3+\beta}\right)$. Our approach not only eliminates the $\epsilon$ approximation error but also ensures sublinear growth with respect to the time horizon $T$ and demonstrates a linear speedup with an increasing number of communicating agents. Additionally, the algorithm is notably communication-efficient, requiring only a sublinear number of communication rounds, quantified as $\tilde{\mathcal{O}}\left(\psi T^\frac{\beta}{\beta+1}\right)$. Furthermore, the framework has been successfully applied to online stochastic submodular maximization using various offline algorithms, yielding the first results for both single-agent and multi-agent settings and recovering specialized single-agent theoretical guarantees. We empirically validate our approach to a stochastic data summarization problem, illustrating the effectiveness of the proposed framework, even in single-agent scenarios.
https://proceedings.mlr.press/v235/francazi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/francazi24a/francazi24a.pdf
https://openreview.net/forum?id=UZstTlLq1E
Initial Guessing Bias: How Untrained Networks Favor Some Classes
https://proceedings.mlr.press/v235/francazi24a.html
Emanuele Francazi, Aurelien Lucchi, Marco Baity-Jesi
https://proceedings.mlr.press/v235/francazi24a.html
ICML 2024
Understanding and controlling biasing effects in neural networks is crucial for ensuring accurate and fair model performance. In the context of classification problems, we provide a theoretical analysis demonstrating that the structure of a deep neural network (DNN) can condition the model to assign all predictions to the same class, even before the beginning of training, and in the absence of explicit biases. We prove that, besides dataset properties, the presence of this phenomenon, which we call Initial Guessing Bias (IGB), is influenced by model choices, including dataset preprocessing methods and architectural decisions such as activation functions, max-pooling layers, and network depth. Our analysis of IGB provides information for architecture selection and model initialization. We also highlight theoretical consequences, such as the breakdown of node-permutation symmetry, the violation of self-averaging, and the non-trivial effects that depth has on the phenomenon.
https://proceedings.mlr.press/v235/franceschi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/franceschi24a/franceschi24a.pdf
https://openreview.net/forum?id=37xFIeYgE0
Explaining Probabilistic Models with Distributional Values
https://proceedings.mlr.press/v235/franceschi24a.html
Luca Franceschi, Michele Donini, Cedric Archambeau, Matthias Seeger
https://proceedings.mlr.press/v235/franceschi24a.html
ICML 2024
A large branch of explainable machine learning is grounded in cooperative game theory. However, research indicates that game-theoretic explanations may mislead or be hard to interpret. We argue that often there is a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class). This paper addresses this gap for probabilistic models by generalising cooperative games and value operators. We introduce the distributional values, random variables that track changes in the model output (e.g. flipping of the predicted class) and derive their analytic expressions for games with Gaussian, Bernoulli and Categorical payoffs. We further establish several characterising properties, and show that our framework provides fine-grained and insightful explanations with case studies on vision and language models.
https://proceedings.mlr.press/v235/franco24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/franco24a/franco24a.pdf
https://openreview.net/forum?id=hKdJPMQvew
Hyperbolic Active Learning for Semantic Segmentation under Domain Shift
https://proceedings.mlr.press/v235/franco24a.html
Luca Franco, Paolo Mandica, Konstantinos Kallidromitis, Devin Guillory, Yu-Teng Li, Trevor Darrell, Fabio Galasso
https://proceedings.mlr.press/v235/franco24a.html
ICML 2024
We introduce a hyperbolic neural network approach to pixel-level active learning for semantic segmentation. Analysis of the data statistics leads to a novel interpretation of the hyperbolic radius as an indicator of data scarcity. In HALO (Hyperbolic Active Learning Optimization), for the first time, we propose the use of epistemic uncertainty as a data acquisition strategy, following the intuition of selecting data points that are the least known. The hyperbolic radius, complemented by the widely-adopted prediction entropy, effectively approximates epistemic uncertainty. We perform extensive experimental analysis based on two established synthetic-to-real benchmarks, i.e. GTAV $\rightarrow$ Cityscapes and SYNTHIA $\rightarrow$ Cityscapes. Additionally, we test HALO on Cityscapes $\rightarrow$ ACDC for domain adaptation under adverse weather conditions, and we benchmark both convolutional and attention-based backbones. HALO sets a new state-of-the-art in active learning for semantic segmentation under domain shift and it is the first active learning approach that surpasses the performance of supervised domain adaptation while using only a small portion of labels (i.e., 1%).
https://proceedings.mlr.press/v235/franks24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/franks24a/franks24a.pdf
https://openreview.net/forum?id=HTNgNt8CTJ
Weisfeiler-Leman at the margin: When more expressivity matters
https://proceedings.mlr.press/v235/franks24a.html
Billy Joe Franks, Christopher Morris, Ameya Velingker, Floris Geerts
https://proceedings.mlr.press/v235/franks24a.html
ICML 2024
The Weisfeiler–Leman algorithm (1-WL) is a well-studied heuristic for the graph isomorphism problem. Recently, the algorithm has played a prominent role in understanding the expressive power of message-passing graph neural networks (MPNNs) and being effective as a graph kernel. Despite its success, the 1-WL faces challenges in distinguishing non-isomorphic graphs, leading to the development of more expressive MPNN and kernel architectures. However, the relationship between enhanced expressivity and improved generalization performance remains unclear. Here, we show that an architecture’s expressivity offers limited insights into its generalization performance when viewed through graph isomorphism. Moreover, we focus on augmenting 1-WL and MPNNs with subgraph information and employ classical margin theory to investigate the conditions under which an architecture’s increased expressivity aligns with improved generalization performance. In addition, we introduce variations of expressive 1-WL-based kernel and MPNN architectures with provable generalization properties. Our empirical study confirms the validity of our theoretical findings.
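For readers unfamiliar with the heuristic, here is a compact, self-contained implementation of 1-WL color refinement, including the classic pair of non-isomorphic graphs (two triangles vs. a hexagon) that 1-WL cannot distinguish; it is a generic sketch, not code from the paper.

```python
# 1-WL color refinement: iteratively refine node colors by hashing each node's color
# together with the multiset of its neighbors' colors, then compare color histograms.
from collections import Counter

def wl_histogram(adj, rounds=3):
    colors = {v: 0 for v in adj}                       # uniform initial coloring
    for _ in range(rounds):
        colors = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                  for v in adj}                        # canonical nested-tuple signatures
    return Counter(colors.values())

# A pair 1-WL can separate: a triangle plus an edge vs. a path on 5 nodes.
triangle_plus_edge = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
path_5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(wl_histogram(triangle_plus_edge) == wl_histogram(path_5))   # False

# The classic failure case: two triangles vs. a hexagon (both 2-regular).
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(wl_histogram(two_triangles) == wl_histogram(hexagon))       # True: 1-WL cannot tell them apart
```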
https://proceedings.mlr.press/v235/frans24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/frans24a/frans24a.pdf
https://openreview.net/forum?id=a6wCNfIj8E
Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings
https://proceedings.mlr.press/v235/frans24a.html
Kevin Frans, Seohong Park, Pieter Abbeel, Sergey Levine
https://proceedings.mlr.press/v235/frans24a.html
ICML 2024
Can we pre-train a generalist agent from a large amount of unlabeled offline trajectories such that it can be immediately adapted to any new downstream tasks in a zero-shot manner? In this work, we present a functional reward encoding (FRE) as a general, scalable solution to this zero-shot RL problem. Our main idea is to learn functional representations of any arbitrary tasks by encoding their state-reward samples using a transformer-based variational auto-encoder. This functional encoding not only enables the pre-training of an agent from a wide diversity of general unsupervised reward functions, but also provides a way to solve any new downstream tasks in a zero-shot manner, given a small number of reward-annotated samples. We empirically show that FRE agents trained on diverse random unsupervised reward functions can generalize to solve novel tasks in a range of simulated robotic benchmarks, often outperforming previous zero-shot RL and offline RL methods.
https://proceedings.mlr.press/v235/frauen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/frauen24a/frauen24a.pdf
https://openreview.net/forum?id=poEPRuNvM3
Fair Off-Policy Learning from Observational Data
https://proceedings.mlr.press/v235/frauen24a.html
Dennis Frauen, Valentyn Melnychuk, Stefan Feuerriegel
https://proceedings.mlr.press/v235/frauen24a.html
ICML 2024
Algorithmic decision-making in practice must be fair for legal, ethical, and societal reasons. To achieve this, prior research has contributed various approaches that ensure fairness in machine learning predictions, while comparatively little effort has focused on fairness in decision-making, specifically off-policy learning. In this paper, we propose a novel framework for fair off-policy learning: we learn decision rules from observational data under different notions of fairness, where we explicitly assume that observational data were collected under a different – potentially discriminatory – behavioral policy. Importantly, our framework applies to different fairness notions for off-policy learning, where fairness is formalized based on actions or policy values. As our main contribution, we propose a neural network-based framework to learn optimal policies under different fairness notions. We further provide theoretical guarantees in the form of generalization bounds for the finite-sample version of our framework. We demonstrate the effectiveness of our framework through extensive numerical experiments using both simulated and real-world data. Altogether, our work enables algorithmic decision-making in a wide array of practical applications where fairness must be ensured.
https://proceedings.mlr.press/v235/frauenknecht24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/frauenknecht24a/frauenknecht24a.pdf
https://openreview.net/forum?id=N0ntTjTfHb
Trust the Model Where It Trusts Itself - Model-Based Actor-Critic with Uncertainty-Aware Rollout Adaption
https://proceedings.mlr.press/v235/frauenknecht24a.html
Bernd Frauenknecht, Artur Eisele, Devdutt Subhasish, Friedrich Solowjow, Sebastian Trimpe
https://proceedings.mlr.press/v235/frauenknecht24a.html
ICML 2024
Dyna-style model-based reinforcement learning (MBRL) combines model-free agents with predictive transition models through model-based rollouts. This combination raises a critical question: “When to trust your model?”; i.e., which rollout length results in the model providing useful data? Janner et al. (2019) address this question by gradually increasing rollout lengths throughout the training. While theoretically tempting, uniform model accuracy is a fallacy that collapses at the latest when extrapolating. Instead, we propose asking the question “Where to trust your model?”. Using inherent model uncertainty to consider local accuracy, we obtain the Model-Based Actor-Critic with Uncertainty-Aware Rollout Adaption (MACURA) algorithm. We propose an easy-to-tune rollout mechanism and demonstrate substantial improvements in data efficiency and performance compared to state-of-the-art deep MBRL methods on the MuJoCo benchmark.
https://proceedings.mlr.press/v235/friedbaum24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/friedbaum24a/friedbaum24a.pdf
https://openreview.net/forum?id=zkjGpZrIX3
Trustworthy Actionable Perturbations
https://proceedings.mlr.press/v235/friedbaum24a.html
Jesse Friedbaum, Sudarshan Adiga, Ravi Tandon
https://proceedings.mlr.press/v235/friedbaum24a.html
ICML 2024
Counterfactuals, or modified inputs that lead to a different outcome, are an important tool for understanding the logic used by machine learning classifiers and how to change an undesirable classification. Even if a counterfactual changes a classifier’s decision, however, it may not affect the true underlying class probabilities, i.e. the counterfactual may act like an adversarial attack and “fool” the classifier. We propose a new framework for creating modified inputs that change the true underlying probabilities in a beneficial way which we call Trustworthy Actionable Perturbations (TAP). This includes a novel verification procedure to ensure that TAP change the true class probabilities instead of acting adversarially. Our framework also includes new cost, reward, and goal definitions that are better suited to effectuating change in the real world. We present PAC-learnability results for our verification procedure and theoretically analyze our new method for measuring reward. We also develop a methodology for creating TAP and compare our results to those achieved by previous counterfactual methods.
https://proceedings.mlr.press/v235/friedman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/friedman24a/friedman24a.pdf
https://openreview.net/forum?id=YJWlUMW6YP
Interpretability Illusions in the Generalization of Simplified Models
https://proceedings.mlr.press/v235/friedman24a.html
Dan Friedman, Andrew Kyle Lampinen, Lucas Dixon, Danqi Chen, Asma Ghandeharioun
https://proceedings.mlr.press/v235/friedman24a.html
ICML 2024
A common method to study deep learning systems is to use simplified model representations—for example, using singular value decomposition to visualize the model’s hidden states in a lower dimensional space. This approach assumes that the results of these simplifications are faithful to the original model. Here, we illustrate an important caveat to this assumption: even if the simplified representations can accurately approximate the full model on the training set, they may fail to accurately capture the model’s behavior out of distribution. We illustrate this by training Transformer models on controlled datasets with systematic generalization splits, including the Dyck balanced-parenthesis languages and a code completion task. We simplify these models using tools like dimensionality reduction and clustering, and then explicitly test how these simplified proxies match the behavior of the original model. We find consistent generalization gaps: cases in which the simplified proxies are more faithful to the original model on the in-distribution evaluations and less faithful on various tests of systematic generalization. This includes cases where the original model generalizes systematically but the simplified proxies fail, and cases where the simplified proxies generalize better. Together, our results raise questions about the extent to which mechanistic interpretations derived using tools like SVD can reliably predict what a model will do in novel situations.
https://proceedings.mlr.press/v235/fu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24a/fu24a.pdf
https://openreview.net/forum?id=eDjvSFOkXw
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
https://proceedings.mlr.press/v235/fu24a.html
Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang
https://proceedings.mlr.press/v235/fu24a.html
ICML 2024
Autoregressive decoding of large language models (LLMs) is memory bandwidth bounded, resulting in high latency and significant waste of the parallel processing power of modern accelerators. Existing methods for accelerating LLM decoding often require a draft model (e.g., speculative decoding), which is nontrivial to obtain and unable to generalize. In this paper, we introduce Lookahead decoding, an exact, parallel decoding algorithm that accelerates LLM decoding without needing auxiliary models or data stores. It allows trading per-step log(FLOPs) to reduce the number of total decoding steps, is more parallelizable on single or multiple modern accelerators, and is compatible with concurrent memory-efficient attention (e.g., FlashAttention). Our implementation of Lookahead decoding can speed up autoregressive decoding by up to 1.8x on MT-bench and 4x with strong scaling on multiple GPUs in code completion tasks. Our code is available at https://github.com/hao-ai-lab/LookaheadDecoding
https://proceedings.mlr.press/v235/fu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24b/fu24b.pdf
https://openreview.net/forum?id=tFEOOH9eH0
A Touch, Vision, and Language Dataset for Multimodal Alignment
https://proceedings.mlr.press/v235/fu24b.html
Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, Ken Goldberg
https://proceedings.mlr.press/v235/fu24b.html
ICML 2024
Touch is an important sensing modality for humans, but it has not yet been incorporated into a multimodal generative language model. This is partially due to the difficulty of obtaining natural language labels for tactile data and the complexity of aligning tactile readings with both visual observations and language descriptions. As a step towards bridging that gap, this work introduces a new dataset of 44K in-the-wild vision-touch pairs, with English language labels annotated by humans (10%) and textual pseudo-labels from GPT-4V (90%). We use this dataset to train a vision-language-aligned tactile encoder for open-vocabulary classification and a touch-vision-language (TVL) model for text generation using the trained encoder. Results suggest that by incorporating touch, the TVL model improves (+29% classification accuracy) tactile-vision-language alignment over existing models trained on any pair of those modalities. Although only a small fraction of the dataset is human labeled, the TVL model demonstrates improved visual-tactile understanding over GPT-4V (+12%) and open-source vision-language models (+32%) on a new touch-vision understanding benchmark. Code, checkpoints and data are available on https://tactile-vlm.github.io.
https://proceedings.mlr.press/v235/fu24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24c/fu24c.pdf
https://openreview.net/forum?id=6OkvBGqW62
Hyperbolic Geometric Latent Diffusion Model for Graph Generation
https://proceedings.mlr.press/v235/fu24c.html
Xingcheng Fu, Yisen Gao, Yuecen Wei, Qingyun Sun, Hao Peng, Jianxin Li, Xianxian Li
https://proceedings.mlr.press/v235/fu24c.html
ICML 2024
Diffusion models have made significant contributions to computer vision, recently sparking growing interest in the community regarding their application to graph generation. Existing discrete graph diffusion models exhibit heightened computational complexity and diminished training efficiency. A preferable and natural way is to directly diffuse the graph within the latent space. However, because the non-Euclidean structure of graphs is not isotropic in the latent space, existing latent diffusion models find it difficult to capture and preserve the topological information of graphs. To address the above challenges, we propose a novel geometrically latent diffusion framework, HypDiff. Specifically, we first establish a geometrically latent space with interpretability measures based on hyperbolic geometry, to define anisotropic latent diffusion processes for graphs. Then, we propose a geometrically latent diffusion process that is constrained by both radial and angular geometric properties, thereby ensuring the preservation of the original topological properties in the generated graphs. Extensive experimental results demonstrate the superior effectiveness of HypDiff for graph generation with various topologies.
https://proceedings.mlr.press/v235/fu24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24d/fu24d.pdf
https://openreview.net/forum?id=TaAqeo7lUh
Data Engineering for Scaling Language Models to 128K Context
https://proceedings.mlr.press/v235/fu24d.html
Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim, Hao Peng
https://proceedings.mlr.press/v235/fu24d.html
ICML 2024
We study a continual pretraining recipe for scaling language models’ context lengths to 128K, with a focus on data engineering. We hypothesize that long context modeling, in particular the ability to utilize information at arbitrary input locations, is a capability that is mostly already acquired through large-scale pretraining, and that this capability can be readily extended to contexts substantially longer than seen during training (e.g., 4K to 128K) through lightweight continual pretraining on an appropriate data mixture. We investigate the quantity and quality of the data for continual pretraining: (1) for quantity, we show that 500 million to 5 billion tokens are enough to enable the model to retrieve information anywhere within the 128K context; (2) for quality, our results equally emphasize domain balance and length upsampling. Concretely, naïvely upsampling longer data from certain domains like books, a common practice of existing work, gives suboptimal performance; a balanced domain mixture is equally important. We demonstrate that continual pretraining of the full model on 1B-5B tokens of such data is an effective and affordable strategy for scaling the context length of language models to 128K. Our recipe outperforms strong open-source long-context models and closes the gap to frontier models like GPT-4 128K.
https://proceedings.mlr.press/v235/fu24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24e/fu24e.pdf
https://openreview.net/forum?id=awo5H10K6v
Language-guided Skill Learning with Temporal Variational Inference
https://proceedings.mlr.press/v235/fu24e.html
Haotian Fu, Pratyusha Sharma, Elias Stengel-Eskin, George Konidaris, Nicolas Le Roux, Marc-Alexandre Côté, Xingdi Yuan
https://proceedings.mlr.press/v235/fu24e.html
ICML 2024
We present an algorithm for skill discovery from expert demonstrations. The algorithm first utilizes Large Language Models (LLMs) to propose an initial segmentation of the trajectories. Following that, a hierarchical variational inference framework incorporates the LLM-generated segmentation information to discover reusable skills by merging trajectory segments. To further control the trade-off between compression and reusability, we introduce a novel auxiliary objective based on the Minimum Description Length principle that helps guide this skill discovery process. Our results demonstrate that agents equipped with our method are able to discover skills that help accelerate learning and outperform baseline skill learning approaches on new long-horizon tasks in BabyAI, a grid world navigation environment, as well as ALFRED, a household simulation environment.
https://proceedings.mlr.press/v235/fu24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24f/fu24f.pdf
https://openreview.net/forum?id=TqcZfMZjgM
PinNet: Pinpoint Instructive Information for Retrieval Augmented Code-to-Text Generation
https://proceedings.mlr.press/v235/fu24f.html
Han Fu, Jian Tan, Pinhan Zhang, Feifei Li, Jianling Sun
https://proceedings.mlr.press/v235/fu24f.html
ICML 2024
Automatically generating high-quality code descriptions greatly improves the readability and maintainability of a codebase. Recently, retrieval-augmented code-to-text generation has proven to be an effective solution, achieving state-of-the-art results on various benchmarks. It brings out the potential to leverage large collections of unlabeled code descriptions to further improve generation quality. Despite the promising performance, retrieval-augmented models, however, suffer from being misled by unhelpful retrieved references that contain irrelevant or even misleading information. To this end, we design PinNet, a new framework for code-to-text generation. PinNet relies on a discriminator to measure how well the retrievals match the semantics of the input code. Remarkably, the hidden representation of the reference before the output layer of the discriminator can be leveraged to significantly improve the code-to-text generation by modifying the attention weights. It essentially pays high attention to valuable information and suppresses misleading content. To effectively execute this idea, we also propose a novel contrastive learning method to quantify the semantic similarities between unlabeled references. Using extensive experiments on code summarization and SQL-to-text generation, we demonstrate that the proposed method can significantly outperform all of the baselines.
https://proceedings.mlr.press/v235/fu24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24g/fu24g.pdf
https://openreview.net/forum?id=4qsduFJDEB
Mean-field Underdamped Langevin Dynamics and its Spacetime Discretization
https://proceedings.mlr.press/v235/fu24g.html
Qiang Fu, Ashia Camage Wilson
https://proceedings.mlr.press/v235/fu24g.html
ICML 2024
We propose a new method called the N-particle underdamped Langevin algorithm for optimizing a special class of non-linear functionals defined over the space of probability measures. Examples of problems with this formulation include training mean-field neural networks, maximum mean discrepancy minimization and kernel Stein discrepancy minimization. Our algorithm is based on a novel spacetime discretization of the mean-field underdamped Langevin dynamics, for which we provide a new, fast mixing guarantee. In addition, we demonstrate that our algorithm converges globally in total variation distance, bridging the theoretical gap between the dynamics and its practical implementation.
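As background for the dynamics being discretized, below is a basic Euler-Maruyama sketch of single-particle underdamped Langevin dynamics targeting a standard Gaussian; the paper's N-particle, mean-field spacetime discretization and its mixing guarantees are substantially more refined than this illustration.

```python
# Euler-Maruyama discretization of underdamped Langevin dynamics for U(x) = x^2/2,
# i.e. targeting N(0, 1). Friction, step size, and horizon are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    return x            # U(x) = x^2 / 2, so the target is a standard Gaussian

gamma, h, steps, n_particles = 2.0, 0.01, 20_000, 1000
x = rng.normal(size=n_particles) * 3.0   # deliberately poor initialization
v = np.zeros(n_particles)

for _ in range(steps):
    x = x + h * v
    v = v + h * (-grad_U(x) - gamma * v) + np.sqrt(2.0 * gamma * h) * rng.normal(size=n_particles)

print("empirical mean/var:", x.mean(), x.var())   # should approach 0 and 1 (up to discretization bias)
```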
https://proceedings.mlr.press/v235/fu24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24h/fu24h.pdf
https://openreview.net/forum?id=2K87GFLYWz
Breaking through the learning plateaus of in-context learning in Transformer
https://proceedings.mlr.press/v235/fu24h.html
Jingwen Fu, Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng
https://proceedings.mlr.press/v235/fu24h.html
ICML 2024
In-context learning, i.e., learning from context examples, is an impressive ability of Transformer. Training Transformers to possess this in-context learning skill is computationally intensive due to the occurrence of learning plateaus, which are periods within the training process where there is minimal or no enhancement in the model’s in-context learning capability. To study the mechanism behind the learning plateaus, we conceptually separate a component within the model’s internal representation that is exclusively affected by the model’s weights. We call this the “weights component”, and the remainder is identified as the “context component”. By conducting meticulous and controlled experiments on synthetic tasks, we note that the persistence of learning plateaus correlates with compromised functionality of the weights component. Recognizing the impaired performance of the weights component as a fundamental behavior that drives learning plateaus, we have developed three strategies to expedite the learning of Transformers. The effectiveness of these strategies is further confirmed in natural language processing tasks. In conclusion, our research demonstrates the feasibility of cultivating a powerful in-context learning ability within AI systems in an eco-friendly manner.
https://proceedings.mlr.press/v235/fu24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24i/fu24i.pdf
https://openreview.net/forum?id=aw6L8sB2Ts
Towards Theoretical Understandings of Self-Consuming Generative Models
https://proceedings.mlr.press/v235/fu24i.html
Shi Fu, Sen Zhang, Yingjie Wang, Xinmei Tian, Dacheng Tao
https://proceedings.mlr.press/v235/fu24i.html
ICML 2024
This paper tackles the emerging challenge of training generative models within a self-consuming loop, wherein successive generations of models are recursively trained on mixtures of real and synthetic data from previous generations. We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models, including parametric and non-parametric models. Specifically, we derive bounds on the total variation (TV) distance between the synthetic data distributions produced by future models and the original real data distribution under various mixed training scenarios for diffusion models with a one-hidden-layer neural network score function. Our analysis demonstrates that this distance can be effectively controlled under the condition that mixed training dataset sizes or proportions of real data are large enough. Interestingly, we further unveil a phase transition induced by expanding synthetic data amounts, proving theoretically that while the TV distance exhibits an initial ascent, it declines beyond a threshold point. Finally, we present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
https://proceedings.mlr.press/v235/fu24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24j/fu24j.pdf
https://openreview.net/forum?id=BmPWtzL7Eq
FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning
https://proceedings.mlr.press/v235/fu24j.html
Yuwei Fu, Haichao Zhang, Di Wu, Wei Xu, Benoit Boulet
https://proceedings.mlr.press/v235/fu24j.html
ICML 2024
In this work, we investigate how to leverage pre-trained visual-language models (VLM) for online Reinforcement Learning (RL). In particular, we focus on sparse reward tasks with pre-defined textual task descriptions. We first identify the problem of reward misalignment when applying VLM as a reward in RL tasks. To address this issue, we introduce a lightweight fine-tuning method, named Fuzzy VLM reward-aided RL (FuRL), based on reward alignment and relay RL. Specifically, we enhance the performance of SAC/DrQ baseline agents on sparse reward tasks by fine-tuning VLM representations and using relay RL to avoid local minima. Extensive experiments on the Meta-world benchmark tasks demonstrate the efficacy of the proposed method. Code is available at: https://github.com/fuyw/FuRL.
https://proceedings.mlr.press/v235/fuchsgruber24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fuchsgruber24a/fuchsgruber24a.pdf
https://openreview.net/forum?id=BCEtumPYDt
Uncertainty for Active Learning on Graphs
https://proceedings.mlr.press/v235/fuchsgruber24a.html
Dominik Fuchsgruber, Tom Wollschläger, Bertrand Charpentier, Antonio Oroz, Stephan Günnemann
https://proceedings.mlr.press/v235/fuchsgruber24a.html
ICML 2024
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models by iteratively acquiring labels of data points with the highest uncertainty. While it has proven effective for independent data, its applicability to graphs remains under-explored. We propose the first extensive study of Uncertainty Sampling for node classification: (1) We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap to other Active Learning strategies. (2) We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries. We confirm our results on synthetic data and design an approximate approach that consistently outperforms other uncertainty estimators on real datasets. (3) Based on this analysis, we relate pitfalls in modeling uncertainty to existing methods. Our analysis enables and informs the development of principled uncertainty estimation on graphs.
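For concreteness, a minimal uncertainty-sampling loop with predictive-entropy acquisition. Graph structure and the paper's ground-truth Bayesian uncertainties are omitted; features and labels are synthetic, so this only illustrates the generic acquire-the-most-uncertain-node strategy.

```python
# Iteratively label the node whose predictive distribution has highest entropy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))                        # stand-in node features
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

labeled = list(rng.choice(500, 10, replace=False))    # small initial label set
for step in range(20):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    entropy[labeled] = -np.inf                        # never re-acquire labeled nodes
    labeled.append(int(entropy.argmax()))             # query the most uncertain node
print("accuracy after 20 acquisitions:", clf.score(X, y))
```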
https://proceedings.mlr.press/v235/fumagalli24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/fumagalli24a/fumagalli24a.pdf
https://openreview.net/forum?id=d5jXW2H4gg
KernelSHAP-IQ: Weighted Least Square Optimization for Shapley Interactions
https://proceedings.mlr.press/v235/fumagalli24a.html
Fabian Fumagalli, Maximilian Muschalik, Patrick Kolpaczki, Eyke Hüllermeier, Barbara Hammer
https://proceedings.mlr.press/v235/fumagalli24a.html
ICML 2024
The Shapley value (SV) is a prevalent approach for allocating credit to machine learning (ML) entities to understand black box ML models. Enriching such interpretations with higher-order interactions is inevitable for complex systems, where the Shapley Interaction Index (SII) is a direct axiomatic extension of the SV. While it is well-known that the SV yields an optimal approximation of any game via a weighted least square (WLS) objective, an extension of this result to SII has been a long-standing open problem, which even led to the proposal of an alternative index. In this work, we characterize higher-order SII as a solution to a WLS problem, which constructs an optimal approximation via SII and k-Shapley values (k-SII). We prove this representation for the SV and pairwise SII and give empirically validated conjectures for higher orders. As a result, we propose KernelSHAP-IQ, a direct extension of KernelSHAP for SII, and demonstrate state-of-the-art performance for feature interactions.
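As a reference point, the classical order-1 result the abstract builds on: Shapley values arise as the solution of a weighted least-squares problem with the Shapley kernel weights. The toy game below is solved exactly by enumeration; the paper's higher-order SII / k-SII characterization is not reproduced here.

```python
# Shapley values as the solution of a weighted least-squares regression.
import numpy as np
from itertools import combinations
from math import comb

M = 4
def value(S):                                  # toy cooperative game
    S = set(S)
    return len(S) ** 1.5 + (2.0 if {0, 1} <= S else 0.0)

rows, weights, targets = [], [], []
for s in range(M + 1):
    for S in combinations(range(M), s):
        row = np.zeros(M + 1)
        row[0] = 1.0                           # intercept (value of the empty coalition)
        for i in S:
            row[i + 1] = 1.0
        # Shapley kernel weight; empty/full coalitions get a huge weight to
        # (approximately) enforce the efficiency constraint.
        w = 1e6 if s in (0, M) else (M - 1) / (comb(M, s) * s * (M - s))
        rows.append(row); weights.append(w); targets.append(value(S))

A = np.array(rows); w = np.sqrt(np.array(weights)); b = np.array(targets)
beta, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
print("baseline:", round(beta[0], 3), "Shapley values:", np.round(beta[1:], 3))
```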
https://proceedings.mlr.press/v235/gabidolla24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gabidolla24a/gabidolla24a.pdf
https://openreview.net/forum?id=mXLcbRBA8v
Beyond the ROC Curve: Classification Trees Using Cost-Optimal Curves, with Application to Imbalanced Datasets
https://proceedings.mlr.press/v235/gabidolla24a.html
Magzhan Gabidolla, Arman Zharmagambetov, Miguel Á. Carreira-Perpiñán
https://proceedings.mlr.press/v235/gabidolla24a.html
ICML 2024
Important applications such as fraud or spam detection or churn prediction involve binary classification problems where the datasets are imbalanced and the cost of false positives greatly differs from the cost of false negatives. We focus on classification trees, in particular oblique trees, which subsume both the traditional axis-aligned trees and logistic regression, but are more accurate than both while providing interpretable models. Rather than using ROC curves, we advocate a loss based on minimizing the false negatives subject to a maximum false positive rate, which we prove to be equivalent to minimizing a weighted 0/1 loss. This yields a curve of classifiers that provably dominates the ROC curve, but is hard to optimize due to the 0/1 loss. We give the first algorithm that can iteratively update the tree parameters globally so that the weighted 0/1 loss decreases monotonically. Experiments on various datasets with class imbalance or class costs show this indeed dominates ROC-based classifiers and significantly improves over previous approaches to learn trees based on weighted purity criteria or over- or undersampling.
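A small illustration of the weighted 0/1 loss view: for each class weight lambda, choosing the operating point that minimizes lambda·FP + FN traces a curve of (FPR, FNR) trade-offs. The scorer here is a fixed synthetic one; in the paper the tree itself is re-optimized for each weight, which is what makes the resulting curve dominate the ROC curve.

```python
# Sweep the class weight and report the cost-optimal (FPR, FNR) operating points.
import numpy as np

rng = np.random.default_rng(0)
y = (rng.random(5000) < 0.05).astype(int)            # heavily imbalanced labels
score = y + 0.8 * rng.normal(size=5000)              # toy classifier scores

def counts(threshold):
    pred = score >= threshold
    fp = int((pred & (y == 0)).sum())
    fn = int((~pred & (y == 1)).sum())
    return fp, fn

thresholds = np.quantile(score, np.linspace(0.0, 1.0, 201))
for lam in [0.2, 1.0, 5.0, 25.0]:
    t_best = min(thresholds, key=lambda t: lam * counts(t)[0] + counts(t)[1])
    fp, fn = counts(t_best)
    print(f"lambda={lam:5.1f}  FPR={fp / (y == 0).sum():.3f}  FNR={fn / (y == 1).sum():.3f}")
```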
https://proceedings.mlr.press/v235/gabor24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gabor24a/gabor24a.pdf
https://openreview.net/forum?id=e0SKaKEEdr
Positive Concave Deep Equilibrium Models
https://proceedings.mlr.press/v235/gabor24a.html
Mateusz Gabor, Tomasz Piotrowski, Renato L. G. Cavalcante
https://proceedings.mlr.press/v235/gabor24a.html
ICML 2024
Deep equilibrium (DEQ) models are widely recognized as a memory efficient alternative to standard neural networks, achieving state-of-the-art performance in language modeling and computer vision tasks. These models solve a fixed point equation instead of explicitly computing the output, which sets them apart from standard neural networks. However, existing DEQ models often lack formal guarantees of the existence and uniqueness of the fixed point, and the convergence of the numerical scheme used for computing the fixed point is not formally established. As a result, DEQ models are potentially unstable in practice. To address these drawbacks, we introduce a novel class of DEQ models called positive concave deep equilibrium (pcDEQ) models. Our approach, which is based on nonlinear Perron-Frobenius theory, enforces nonnegative weights and activation functions that are concave on the positive orthant. By imposing these constraints, we can easily ensure the existence and uniqueness of the fixed point without relying on additional complex assumptions commonly found in the DEQ literature, such as those based on monotone operator theory in convex analysis. Furthermore, the fixed point can be computed with the standard fixed point algorithm, and we provide theoretical guarantees of its geometric convergence, which, in particular, simplifies the training process. Experiments demonstrate the competitiveness of our pcDEQ models against other implicit models.
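A toy positive concave DEQ layer: nonnegative weights with an activation that is concave on the positive orthant (here a square root), and the fixed point computed by plain fixed-point iteration. Sizes, scaling, and the activation choice are illustrative assumptions, not taken from the paper.

```python
# Fixed-point iteration z = sqrt(Wz + Ux + b) with nonnegative W, U and positive b.
import numpy as np

rng = np.random.default_rng(0)
d = 32
W = 0.05 * np.abs(rng.normal(size=(d, d)))     # nonnegative recurrent weights
U = np.abs(rng.normal(size=(d, d)))            # nonnegative input weights
b = np.full(d, 0.1)                            # positive bias keeps iterates in the positive orthant
x = np.abs(rng.normal(size=d))

z = np.ones(d)
for k in range(100):
    z_next = np.sqrt(W @ z + U @ x + b)        # monotone, concave update
    if np.linalg.norm(z_next - z) < 1e-10:
        break
    z = z_next
residual = np.linalg.norm(np.sqrt(W @ z + U @ x + b) - z)
print(f"stopped after {k} iterations, residual {residual:.2e}")
```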
https://proceedings.mlr.press/v235/gadetsky24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gadetsky24a/gadetsky24a.pdf
https://openreview.net/forum?id=RZHRnnGcEx
Let Go of Your Labels with Unsupervised Transfer
https://proceedings.mlr.press/v235/gadetsky24a.html
Artyom Gadetsky, Yulun Jiang, Maria Brbic
https://proceedings.mlr.press/v235/gadetsky24a.html
ICML 2024
Foundation vision-language models have enabled remarkable zero-shot transferability of the pre-trained representations to a wide range of downstream tasks. However, to solve a new task, zero-shot transfer still necessitates human guidance to define visual categories that appear in the data. Here, we show that fully unsupervised transfer emerges when searching for the labeling of a dataset that induces maximal margin classifiers in representation spaces of different foundation models. We present TURTLE, a fully unsupervised method that effectively employs this guiding principle to uncover the underlying labeling of a downstream dataset without any supervision and task-specific representation learning. We evaluate TURTLE on a diverse benchmark suite of 26 datasets and show that it achieves new state-of-the-art unsupervised performance. Furthermore, TURTLE, although being fully unsupervised, outperforms zero-shot transfer baselines on a wide range of datasets. In particular, TURTLE matches the average performance of CLIP zero-shot on 26 datasets by employing the same representation space, spanning a wide range of architectures and model sizes. By guiding the search for the underlying labeling using the representation spaces of two foundation models, TURTLE surpasses zero-shot transfer and unsupervised prompt tuning baselines, demonstrating the surprising power and effectiveness of unsupervised transfer.
https://proceedings.mlr.press/v235/gadot24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gadot24a/gadot24a.pdf
https://openreview.net/forum?id=UqoG0YRfQx
Bring Your Own (Non-Robust) Algorithm to Solve Robust MDPs by Estimating The Worst Kernel
https://proceedings.mlr.press/v235/gadot24a.html
Uri Gadot, Kaixin Wang, Navdeep Kumar, Kfir Yehuda Levy, Shie Mannor
https://proceedings.mlr.press/v235/gadot24a.html
ICML 2024
Robust Markov Decision Processes (RMDPs) provide a framework for sequential decision-making that is robust to perturbations on the transition kernel. However, current RMDP methods are often limited to small-scale problems, hindering their use in high-dimensional domains. To bridge this gap, we present EWoK, a novel online approach to solve RMDP that Estimates the Worst transition Kernel to learn robust policies. Unlike previous works that regularize the policy or value updates, EWoK achieves robustness by simulating the worst scenarios for the agent while retaining complete flexibility in the learning process. Notably, EWoK can be applied on top of any off-the-shelf non-robust RL algorithm, enabling easy scaling to high-dimensional domains. Our experiments, spanning from simple Cartpole to high-dimensional DeepMind Control Suite environments, demonstrate the effectiveness and applicability of the EWoK paradigm as a practical method for learning robust policies.
https://proceedings.mlr.press/v235/gala24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gala24a/gala24a.pdf
https://openreview.net/forum?id=KHymcy2xxF
Leverage Class-Specific Accuracy to Guide Data Generation for Improving Image Classification
https://proceedings.mlr.press/v235/gala24a.html
Jay Gala, Pengtao Xie
https://proceedings.mlr.press/v235/gala24a.html
ICML 2024
In many image classification applications, the number of labeled training images is limited, which leads to model overfitting. To mitigate the lack of training data, deep generative models have been leveraged to generate synthetic training data. However, existing methods generate data for individual classes based on how much training data they have without considering their actual data needs. To address this limitation, we propose needs-aware image generation, which automatically identifies the different data needs of individual classes based on their classification performance and divides a limited data generation budget into these classes according to their needs. We propose a multi-level optimization based framework which performs four learning stages in an end-to-end manner. Experiments on both imbalanced and balanced classification datasets demonstrate the effectiveness of our proposed method.
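A hedged sketch of the allocation step only: classes with lower validation accuracy receive a larger share of a fixed generation budget. The paper embeds this idea in an end-to-end multi-level optimization; the proportional rule and the numbers below are simplifications.

```python
# Split a synthetic-image budget across classes in proportion to their error.
import numpy as np

val_accuracy = np.array([0.95, 0.80, 0.55, 0.90, 0.40])   # per-class accuracy (toy numbers)
budget = 10_000                                            # total synthetic images to generate

need = 1.0 - val_accuracy                                  # higher error -> higher need
per_class = np.floor(budget * need / need.sum()).astype(int)
print({c: int(n) for c, n in enumerate(per_class)})
```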
https://proceedings.mlr.press/v235/gan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gan24a/gan24a.pdf
https://openreview.net/forum?id=f47ZK6gy3I
Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning
https://proceedings.mlr.press/v235/gan24a.html
Kai Gan, Tong Wei
https://proceedings.mlr.press/v235/gan24a.html
ICML 2024
Semi-supervised learning (SSL) has witnessed remarkable progress, resulting in the emergence of numerous method variations. However, practitioners often encounter challenges when attempting to deploy these methods due to their subpar performance. In this paper, we present a novel SSL approach named FineSSL that significantly addresses this limitation by adapting pre-trained foundation models. We identify the aggregated biases and cognitive deviation problems inherent in foundation models, and propose a simple yet effective solution by imposing balanced margin softmax and decoupled label smoothing. Through extensive experiments, we demonstrate that FineSSL sets a new state of the art for SSL on multiple benchmark datasets, reduces the training cost by over six times, and can seamlessly integrate various fine-tuning and modern SSL algorithms. The source code is available at https://github.com/Gank0078/FineSSL.
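One plausible form of a balanced-margin softmax, in the spirit of logit adjustment: per-class offsets derived from estimated class frequencies are added to the logits before cross-entropy, forcing larger margins for tail classes. FineSSL's exact margin design and its decoupled label smoothing are not reproduced here.

```python
# Logit-adjusted cross-entropy as one instantiation of a balanced margin.
import torch
import torch.nn.functional as F

def balanced_margin_ce(logits, targets, class_counts, tau=1.0):
    prior = class_counts / class_counts.sum()
    offsets = tau * torch.log(prior + 1e-12)      # head classes get larger offsets
    return F.cross_entropy(logits + offsets, targets)

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
counts = torch.tensor([500., 400., 300., 200., 100., 50., 20., 10., 5., 2.])
print(balanced_margin_ce(logits, targets, counts))
```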
https://proceedings.mlr.press/v235/gan24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gan24b/gan24b.pdf
https://openreview.net/forum?id=Cs0Xy6WETl
Reflective Policy Optimization
https://proceedings.mlr.press/v235/gan24b.html
Yaozhong Gan, Renye Yan, Zhe Wu, Junliang Xing
https://proceedings.mlr.press/v235/gan24b.html
ICML 2024
On-policy reinforcement learning methods, like Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), often demand extensive data per update, leading to sample inefficiency. This paper introduces Reflective Policy Optimization (RPO), a novel on-policy extension that amalgamates past and future state-action information for policy optimization. This approach empowers the agent for introspection, allowing modifications to its actions within the current state. Theoretical analysis confirms that policy performance is monotonically improved and contracts the solution space, consequently expediting the convergence procedure. Empirical results demonstrate RPO’s feasibility and efficacy in two reinforcement learning benchmarks, culminating in superior sample efficiency. The source code of this work is available at https://github.com/Edgargan/RPO.
https://proceedings.mlr.press/v235/ganapathi-subramanian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ganapathi-subramanian24a/ganapathi-subramanian24a.pdf
https://openreview.net/forum?id=6TCeizkLJV
Confidence Aware Inverse Constrained Reinforcement Learning
https://proceedings.mlr.press/v235/ganapathi-subramanian24a.html
Sriram Ganapathi Subramanian, Guiliang Liu, Mohammed Elmahgiubi, Kasra Rezaee, Pascal Poupart
https://proceedings.mlr.press/v235/ganapathi-subramanian24a.html
ICML 2024
In coming up with solutions to real-world problems, humans implicitly adhere to constraints that are too numerous and complex to be specified completely. However, reinforcement learning (RL) agents need these constraints to learn the correct optimal policy in these settings. The field of Inverse Constraint Reinforcement Learning (ICRL) deals with this problem and provides algorithms that aim to estimate the constraints from expert demonstrations collected offline. Practitioners prefer to know a measure of confidence in the estimated constraints, before deciding to use these constraints, which allows them to only use the constraints that satisfy a desired level of confidence. However, prior works do not allow users to provide the desired level of confidence for the inferred constraints. This work provides a principled ICRL method that can take a confidence level with a set of expert demonstrations and outputs a constraint that is at least as constraining as the true underlying constraint with the desired level of confidence. Further, unlike previous methods, this method allows a user to know if the number of expert trajectories is insufficient to learn a constraint with a desired level of confidence, and therefore collect more expert trajectories as required to simultaneously learn constraints with the desired level of confidence and a policy that achieves the desired level of performance.
https://proceedings.mlr.press/v235/gangrade24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gangrade24a/gangrade24a.pdf
https://openreview.net/forum?id=TfwGtfPkhV
Testing the Feasibility of Linear Programs with Bandit Feedback
https://proceedings.mlr.press/v235/gangrade24a.html
Aditya Gangrade, Aditya Gopalan, Venkatesh Saligrama, Clayton Scott
https://proceedings.mlr.press/v235/gangrade24a.html
ICML 2024
While the recent literature has seen a surge in the study of constrained bandit problems, all existing methods for these begin by assuming the feasibility of the underlying problem. We initiate the study of testing such feasibility assumptions, and in particular address the problem in the linear bandit setting, thus characterising the costs of feasibility testing for an unknown linear program using bandit feedback. Concretely, we test if $\exists x: Ax \ge 0$ for an unknown $A \in \mathbb{R}^{m \times d}$, by playing a sequence of actions $x_t\in \mathbb{R}^d$, and observing $Ax_t + \mathrm{noise}$ in response. By identifying the hypothesis as determining the sign of the value of a minimax game, we construct a novel test based on low-regret algorithms and a nonasymptotic law of iterated logarithms. We prove that this test is reliable, and adapts to the ‘signal level,’ $\Gamma,$ of any instance, with mean sample costs scaling as $\widetilde{O}(d^2/\Gamma^2)$. We complement this by a minimax lower bound of $\Omega(d/\Gamma^2)$ for sample costs of reliable tests, dominating prior asymptotic lower bounds by capturing the dependence on $d$, and thus elucidating a basic insight missing in the extant literature on such problems.
https://proceedings.mlr.press/v235/gao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24a/gao24a.pdf
https://openreview.net/forum?id=GentO2E4ID
A Doubly Recursive Stochastic Compositional Gradient Descent Method for Federated Multi-Level Compositional Optimization
https://proceedings.mlr.press/v235/gao24a.html
Hongchang Gao
https://proceedings.mlr.press/v235/gao24a.html
ICML 2024
Federated compositional optimization has been actively studied in the past few years. However, existing methods mainly focus on the two-level compositional optimization problem, which cannot be directly applied to the multi-level counterparts. Moreover, the convergence rate of existing federated two-level compositional optimization learning algorithms fails to achieve linear speedup with respect to the number of workers under heterogeneous settings. After identifying the reason for this failure, we developed a novel federated stochastic multi-level compositional optimization algorithm by introducing a novel Jacobian-vector product estimator. This innovation mitigates both the heterogeneity issue and the communication efficiency issue simultaneously. We then theoretically proved that our algorithm can achieve the level-independent and linear speedup convergence rate for nonconvex problems. To our knowledge, this is the first time that a federated learning algorithm can achieve such a favorable convergence rate for multi-level compositional problems. Moreover, experimental results confirm the efficacy of our algorithm.
https://proceedings.mlr.press/v235/gao24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24b/gao24b.pdf
https://openreview.net/forum?id=TJ6tVNt6Y4
Energy-based Backdoor Defense without Task-Specific Samples and Model Retraining
https://proceedings.mlr.press/v235/gao24b.html
Yudong Gao, Honglong Chen, Peng Sun, Zhe Li, Junjian Li, Huajie Shao
https://proceedings.mlr.press/v235/gao24b.html
ICML 2024
Backdoor defense is crucial to ensure the safety and robustness of machine learning models when under attack. However, most existing methods specialize in either the detection or removal of backdoors, but seldom both. While few works have addressed both, these methods rely on strong assumptions or entail significant overhead costs, such as the need for task-specific samples for detection and model retraining for removal. Hence, the key challenge is how to reduce overhead and relax unrealistic assumptions. In this work, we propose two Energy-Based BAckdoor defense methods, called EBBA and EBBA+, that can achieve both backdoored model detection and backdoor removal with low overhead. Our contributions are twofold: First, we offer theoretical analysis for our observation that a predefined target label is more likely to occur among the top results for various samples. Inspired by this, we develop an enhanced energy-based technique, called EBBA, to detect backdoored models without task-specific samples (i.e., samples from any tasks). Second, we theoretically analyze that after data corruption, the original clean label of a poisoned sample is more likely to be predicted as a top output by the model, a sharp contrast to clean samples. Accordingly, we extend EBBA to develop EBBA+, a new transferred energy approach to efficiently detect poisoned images and remove backdoors without model retraining. Extensive experiments on multiple benchmark datasets demonstrate the superior performance of our methods over baselines in both backdoor detection and removal. Notably, the proposed methods can effectively detect backdoored models and poisoned images as well as remove backdoors at the same time.
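For orientation, the standard free-energy score used by energy-based detectors, E(x) = -T·logsumexp(f(x)/T) over the logits. EBBA builds its detection statistics on signals of this kind; the concrete tests and the EBBA+ transferred-energy variant are described in the paper.

```python
# Free-energy score of a batch of model outputs.
import torch

def energy_score(logits, temperature=1.0):
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

logits = torch.randn(4, 10)       # stand-in model outputs for a batch of inputs
print(energy_score(logits))
```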
https://proceedings.mlr.press/v235/gao24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24c/gao24c.pdf
https://openreview.net/forum?id=po4NsL9KvX
An Intrinsic Vector Heat Network
https://proceedings.mlr.press/v235/gao24c.html
Alexander Gao, Maurice Chu, Mubbasir Kapadia, Ming Lin, Hsueh-Ti Derek Liu
https://proceedings.mlr.press/v235/gao24c.html
ICML 2024
Vector fields are widely used to represent and model flows for many science and engineering applications. This paper introduces a novel neural network architecture for learning tangent vector fields that are intrinsically defined on manifold surfaces embedded in 3D. Previous approaches to learning vector fields on surfaces treat vectors as multi-dimensional scalar fields, using traditional scalar-valued architectures to process channels individually, and thus fail to preserve fundamental intrinsic properties of the vector field. The core idea of this work is to introduce a trainable vector heat diffusion module to spatially propagate vector-valued feature data across the surface, which we incorporate into our proposed architecture that consists of vector-valued neurons. Our architecture is invariant to rigid motion of the input, isometric deformation, and choice of local tangent bases, and is robust to discretizations of the surface. We evaluate our Vector Heat Network on triangle meshes, and empirically validate its invariant properties. We also demonstrate the effectiveness of our method on the useful industrial application of quadrilateral mesh generation.
https://proceedings.mlr.press/v235/gao24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24d/gao24d.pdf
https://openreview.net/forum?id=pAyX8q1IIn
Stochastic Weakly Convex Optimization beyond Lipschitz Continuity
https://proceedings.mlr.press/v235/gao24d.html
Wenzhi Gao, Qi Deng
https://proceedings.mlr.press/v235/gao24d.html
ICML 2024
This paper considers stochastic weakly convex optimization without the standard Lipschitz continuity assumption. Based on new adaptive regularization (stepsize) strategies, we show that a wide class of stochastic algorithms, including the stochastic subgradient method, preserve the $\mathcal{O} ( 1 / \sqrt{K})$ convergence rate with constant failure rate. Our analyses rest on rather weak assumptions: the Lipschitz parameter can be either bounded by a general growth function of $\|x\|$ or locally estimated through independent random samples. Numerical experiments demonstrate the efficiency and robustness of our proposed stepsize policies.
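A hedged sketch of the adaptive-stepsize idea on a weakly convex, non-Lipschitz objective (robust phase retrieval, f(x) = E|⟨a, x⟩² − b|): the stepsize is scaled by a growth function of ‖x‖ that bounds the local Lipschitz constant. The specific schedule and constants are illustrative, not the paper's.

```python
# Stochastic subgradient method with a stepsize scaled by a growth function of ||x||.
import numpy as np

rng = np.random.default_rng(0)
d, K, eta0 = 20, 20000, 1.0
x_true = rng.normal(size=d)
A_eval = rng.normal(size=(2000, d))
b_eval = (A_eval @ x_true) ** 2

def objective(x):
    return np.mean(np.abs((A_eval @ x) ** 2 - b_eval))

x = x_true + 0.5 * rng.normal(size=d)                  # warm start near the signal
print("objective before:", round(objective(x), 3))
for k in range(K):
    a = rng.normal(size=d)
    r = (a @ x) ** 2 - (a @ x_true) ** 2
    g = np.sign(r) * 2.0 * (a @ x) * a                 # stochastic subgradient
    growth = 1.0 + np.linalg.norm(x) ** 2              # growth-function bound on the local Lipschitz constant
    x -= eta0 / (growth * np.sqrt(K)) * g
print("objective after:", round(objective(x), 3))
```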
https://proceedings.mlr.press/v235/gao24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24e/gao24e.pdf
https://openreview.net/forum?id=zxxSJAVQPc
A Graph is Worth $K$ Words: Euclideanizing Graph using Pure Transformer
https://proceedings.mlr.press/v235/gao24e.html
Zhangyang Gao, Daize Dong, Cheng Tan, Jun Xia, Bozhen Hu, Stan Z. Li
https://proceedings.mlr.press/v235/gao24e.html
ICML 2024
Can we model Non-Euclidean graphs as pure language or even Euclidean vectors while retaining their inherent information? The Non-Euclidean property has posed a long-term challenge in graph modeling. Despite recent efforts by graph neural networks and graph transformers to encode graphs as Euclidean vectors, recovering the original graph from these vectors remains a challenge. In this paper, we introduce GraphsGPT, featuring a Graph2Seq encoder that transforms Non-Euclidean graphs into learnable Graph Words in the Euclidean space, along with a GraphGPT decoder that reconstructs the original graph from Graph Words to ensure information equivalence. We pretrain GraphsGPT on $100$M molecules and yield some interesting findings: (1) The pretrained Graph2Seq excels in graph representation learning, achieving state-of-the-art results on $8/9$ graph classification and regression tasks. (2) The pretrained GraphGPT serves as a strong graph generator, demonstrated by its ability to perform both few-shot and conditional graph generation. (3) Graph2Seq+GraphGPT enables effective graph mixup in the Euclidean space, overcoming previously known Non-Euclidean challenges. (4) The edge-centric pretraining framework GraphsGPT demonstrates its efficacy in graph domain tasks, excelling in both representation and generation. Code is available at https://github.com/A4Bio/GraphsGPT.
https://proceedings.mlr.press/v235/gao24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24f/gao24f.pdf
https://openreview.net/forum?id=Y4wxCICbD0
Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback
https://proceedings.mlr.press/v235/gao24f.html
Songyang Gao, Qiming Ge, Wei Shen, Shihan Dou, Junjie Ye, Xiao Wang, Rui Zheng, Yicheng Zou, Zhi Chen, Hang Yan, Qi Zhang, Dahua Lin
https://proceedings.mlr.press/v235/gao24f.html
ICML 2024
The success of AI assistants based on Large Language Models (LLMs) hinges on Reinforcement Learning from Human Feedback (RLHF) to comprehend and align with user intentions. However, traditional alignment algorithms, such as PPO, are hampered by complex annotation and training requirements. This reliance limits the applicability of RLHF and hinders the development of professional assistants tailored to diverse human preferences. In this work, we introduce Linear Alignment, a novel algorithm that aligns language models with human preferences in one single inference step, eliminating the reliance on data annotation and model training. Linear alignment incorporates a new parameterization for policy optimization under divergence constraints, which enables the extraction of optimal policy in a closed-form manner and facilitates the direct estimation of the aligned response. Extensive experiments on both general and personalized preference datasets demonstrate that linear alignment significantly enhances the performance and efficiency of LLM alignment across diverse scenarios.
https://proceedings.mlr.press/v235/gao24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24g/gao24g.pdf
https://openreview.net/forum?id=qwKSTLbati
Multi-Agent Reinforcement Learning Meets Leaf Sequencing in Radiotherapy
https://proceedings.mlr.press/v235/gao24g.html
Riqiang Gao, Florin-Cristian Ghesu, Simon Arberet, Shahab Basiri, Esa Kuusela, Martin Kraus, Dorin Comaniciu, Ali Kamen
https://proceedings.mlr.press/v235/gao24g.html
ICML 2024
In contemporary radiotherapy planning (RTP), a key module, leaf sequencing, is predominantly addressed by optimization-based approaches. In this paper, we propose a novel deep reinforcement learning (DRL) model termed Reinforced Leaf Sequencer (RLS) in a multi-agent framework for leaf sequencing. The RLS model offers improvements to time-consuming iterative optimization steps via large-scale training and can control movement patterns through the design of reward mechanisms. We have conducted experiments on four datasets with four metrics and compared our model with a leading optimization sequencer. Our findings reveal that the proposed RLS model can achieve reduced fluence reconstruction errors, and potentially faster convergence when integrated in an optimization planner. Additionally, RLS has shown promising results in a full artificial intelligence RTP pipeline. We hope this pioneering multi-agent RL leaf sequencer can foster future research on machine learning for RTP.
https://proceedings.mlr.press/v235/gao24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24h/gao24h.pdf
https://openreview.net/forum?id=lcX5GbDIi8
DMTG: One-Shot Differentiable Multi-Task Grouping
https://proceedings.mlr.press/v235/gao24h.html
Yuan Gao, Shuguo Jiang, Moran Li, Jin-Gang Yu, Gui-Song Xia
https://proceedings.mlr.press/v235/gao24h.html
ICML 2024
We aim to address Multi-Task Learning (MTL) with a large number of tasks by Multi-Task Grouping (MTG). Given $N$ tasks, we propose to simultaneously identify the best task groups from $2^N$ candidates and train the model weights in one shot, with the high-order task-affinity fully exploited. This is distinct from the pioneering methods which sequentially identify the groups and train the model weights, where the group identification often relies on heuristics. As a result, our method not only improves the training efficiency, but also mitigates the objective bias introduced by the sequential procedures that potentially leads to a suboptimal solution. Specifically, we formulate MTG as a fully differentiable pruning problem on an adaptive network architecture determined by an unknown Categorical distribution. To categorize $N$ tasks into $K$ groups (represented by $K$ encoder branches), we initially set up $KN$ task heads, where each branch connects to all $N$ task heads to exploit the high-order task-affinity. Then, we gradually prune the $KN$ heads down to $N$ by learning a relaxed differentiable Categorical distribution, ensuring that each task is exclusively and uniquely categorized into only one branch. Extensive experiments on CelebA and Taskonomy datasets with detailed ablations show the promising performance and efficiency of our method. The codes are available at https://github.com/ethanygao/DMTG.
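The core differentiable grouping mechanism can be sketched with a relaxed categorical (Gumbel-softmax) assignment of N tasks to K branches. The pruning from K·N task heads down to N and the shared encoder architecture are not reproduced here.

```python
# Relaxed categorical task-to-branch assignment with straight-through discretization.
import torch
import torch.nn.functional as F

N_tasks, K_groups = 6, 2
assign_logits = torch.zeros(N_tasks, K_groups, requires_grad=True)   # learnable grouping parameters

def group_weights(temperature=1.0, hard=False):
    # One relaxed one-hot vector per task; hard=True gives a straight-through
    # discrete assignment while keeping gradients for the logits.
    return F.gumbel_softmax(assign_logits, tau=temperature, hard=hard, dim=-1)

print(group_weights(temperature=0.5))          # soft task-to-branch weights during training
print(group_weights(hard=True).argmax(-1))     # final exclusive group of each task
```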
https://proceedings.mlr.press/v235/gao24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24i/gao24i.pdf
https://openreview.net/forum?id=sSAEhcdB9N
Private Heterogeneous Federated Learning Without a Trusted Server Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex Losses
https://proceedings.mlr.press/v235/gao24i.html
Changyu Gao, Andrew Lowy, Xingyu Zhou, Stephen Wright
https://proceedings.mlr.press/v235/gao24i.html
ICML 2024
We revisit the problem of federated learning (FL) with private data from people who do not trust the server or other silos/clients. In this context, every silo (e.g. hospital) has data from several people (e.g. patients) and needs to protect the privacy of each person’s data (e.g. health records), even if the server and/or other silos try to uncover this data. Inter-Silo Record-Level Differential Privacy (ISRL-DP) prevents each silo’s data from being leaked, by requiring that silo $i$’s communications satisfy item-level differential privacy. Prior work (Lowy & Razaviyayn, 2023a) characterized the optimal excess risk bounds for ISRL-DP algorithms with homogeneous (i.i.d.) silo data and convex loss functions. However, two important questions were left open: 1) Can the same excess risk bounds be achieved with heterogeneous (non-i.i.d.) silo data? 2) Can the optimal risk bounds be achieved with fewer communication rounds? In this paper, we give positive answers to both questions. We provide novel ISRL-DP FL algorithms that achieve the optimal excess risk bounds in the presence of heterogeneous silo data. Moreover, our algorithms are more communication-efficient than the prior state-of-the-art. For smooth loss functions, our algorithm achieves the optimal excess risk bound and has communication complexity that matches the non-private lower bound. Additionally, our algorithms are more computationally efficient than the previous state-of-the-art.
https://proceedings.mlr.press/v235/gao24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24j/gao24j.pdf
https://openreview.net/forum?id=ecnpYYHjt9
Speech Self-Supervised Learning Using Diffusion Model Synthetic Data
https://proceedings.mlr.press/v235/gao24j.html
Heting Gao, Kaizhi Qian, Junrui Ni, Chuang Gan, Mark A. Hasegawa-Johnson, Shiyu Chang, Yang Zhang
https://proceedings.mlr.press/v235/gao24j.html
ICML 2024
While self-supervised learning (SSL) in speech has greatly reduced the reliance of speech processing systems on annotated corpora, the success of SSL still hinges on the availability of a large-scale unannotated corpus, which is still often impractical for many low-resource languages or under privacy concerns. Some existing work seeks to alleviate the problem by data augmentation, but most works are confined to introducing perturbations to real speech and do not introduce new variations in speech prosody, speakers, and speech content, which are important for SSL. Motivated by the recent finding that diffusion models have superior capabilities for modeling data distributions, we propose DiffS4L, a pretraining scheme that augments the limited unannotated data with synthetic data with different levels of variations, generated by a diffusion model trained on the limited unannotated data. Finally, an SSL model is pre-trained on the real and the synthetic speech. Our experiments show that DiffS4L can significantly improve the performance of SSL models, such as reducing the WER of the HuBERT pretrained model by 6.26 percentage points in the English ASR task. Notably, we find that the synthetic speech with all levels of variations, i.e. new prosody, new speakers, and even new content (despite the new content being mostly babble), accounts for significant performance improvement. The code is available at github.com/Hertin/DiffS4L.
https://proceedings.mlr.press/v235/gao24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24k/gao24k.pdf
https://openreview.net/forum?id=8WSNl2XA9r
Rethinking Specificity in SBDD: Leveraging Delta Score and Energy-Guided Diffusion
https://proceedings.mlr.press/v235/gao24k.html
Bowen Gao, Minsi Ren, Yuyan Ni, Yanwen Huang, Bo Qiang, Zhi-Ming Ma, Wei-Ying Ma, Yanyan Lan
https://proceedings.mlr.press/v235/gao24k.html
ICML 2024
In the field of Structure-based Drug Design (SBDD), deep learning-based generative models have achieved outstanding performance in terms of docking score. However, further study shows that existing molecular generative methods and docking scores both lack consideration of specificity, meaning that generated molecules bind to almost every protein pocket with high affinity. To address this, we introduce the Delta Score, a new metric for evaluating the specificity of molecular binding. To further incorporate this insight for generation, we develop an innovative energy-guided approach using contrastive learning, with active compounds as decoys, to direct generative models toward creating molecules with high specificity. Our empirical results show that this method not only enhances the delta score but also maintains or improves traditional docking scores, successfully bridging the gap between SBDD and real-world needs.
https://proceedings.mlr.press/v235/gao24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24l/gao24l.pdf
https://openreview.net/forum?id=1ySQI9LE4w
Non-convex Stochastic Composite Optimization with Polyak Momentum
https://proceedings.mlr.press/v235/gao24l.html
Yuan Gao, Anton Rodomanov, Sebastian U Stich
https://proceedings.mlr.press/v235/gao24l.html
ICML 2024
The stochastic proximal gradient method is a powerful generalization of the widely used stochastic gradient descent (SGD) method and has found numerous applications in Machine Learning. However, it is well known that this method fails to converge in non-convex settings where the stochastic noise is significant (i.e. when only small or bounded batch sizes are used). In this paper, we focus on the stochastic proximal gradient method with Polyak momentum. We prove this method attains an optimal convergence rate for non-convex composite optimization problems, regardless of batch size. Additionally, we rigorously analyze the variance reduction effect of the Polyak momentum in the composite optimization setting and we show the method also converges when the proximal step can only be solved inexactly. Finally, we provide numerical experiments to validate our theoretical results.
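A minimal instance of the method analyzed here: stochastic proximal gradient with Polyak (heavy-ball) momentum on a least-squares plus L1 composite problem, where the proximal step is soft-thresholding. Problem sizes and hyperparameters are illustrative.

```python
# Stochastic proximal gradient with heavy-ball momentum for lasso-type problems.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, eta, beta = 1000, 50, 0.1, 0.01, 0.9
A = rng.normal(size=(n, d))
x_true = np.zeros(d)
x_true[:5] = np.array([1.5, -2.0, 1.0, -1.2, 0.8])
y = A @ x_true + 0.1 * rng.normal(size=n)

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, x_prev = np.zeros(d), np.zeros(d)
for k in range(2000):
    idx = rng.choice(n, 32)                                          # small minibatch -> noisy gradient
    g = A[idx].T @ (A[idx] @ x - y[idx]) / len(idx)
    x_next = prox_l1(x - eta * g + beta * (x - x_prev), eta * lam)   # proximal step with momentum
    x_prev, x = x, x_next
print("recovered support:", np.flatnonzero(np.abs(x) > 0.05))
```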
https://proceedings.mlr.press/v235/gao24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24m/gao24m.pdf
https://openreview.net/forum?id=S9DV6ZP4eE
Adaptive-Gradient Policy Optimization: Enhancing Policy Learning in Non-Smooth Differentiable Simulations
https://proceedings.mlr.press/v235/gao24m.html
Feng Gao, Liangzhi Shi, Shenao Zhang, Zhaoran Wang, Yi Wu
https://proceedings.mlr.press/v235/gao24m.html
ICML 2024
Recent advancements in differentiable simulators highlight the potential of policy optimization using simulation gradients. Yet, these approaches are largely contingent on the continuity and smoothness of the simulation, which precludes the use of certain simulation engines, such as Mujoco. To tackle this challenge, we introduce the adaptive analytic gradient. This method views the Q function as a surrogate for future returns, consistent with the Bellman equation. By analyzing the variance of batched gradients, our method can autonomously opt for a more resilient Q function to compute the gradient when encountering rough simulation transitions. We also put forth the Adaptive-Gradient Policy Optimization (AGPO) algorithm, which leverages our proposed method for policy learning. On the theoretical side, we demonstrate AGPO’s convergence, emphasizing its stable performance under non-smooth dynamics due to low variance. On the empirical side, our results show that AGPO effectively mitigates the challenges posed by non-smoothness in policy learning through differentiable simulation.
https://proceedings.mlr.press/v235/gao24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24n/gao24n.pdf
https://openreview.net/forum?id=1DyruVvVaQ
Decoupling Learning and Decision-Making: Breaking the $\mathcal{O}(\sqrt{T})$ Barrier in Online Resource Allocation with First-Order Methods
https://proceedings.mlr.press/v235/gao24n.html
Wenzhi Gao, Chunlin Sun, Chenyu Xue, Yinyu Ye
https://proceedings.mlr.press/v235/gao24n.html
ICML 2024
Online linear programming plays an important role in both revenue management and resource allocation, and recent research has focused on developing efficient first-order online learning algorithms. Despite the empirical success of first-order methods, they typically achieve regret no better than $\mathcal{O}(\sqrt{T})$, which is suboptimal compared to the $\mathcal{O}(\log T)$ result guaranteed by the state-of-the-art linear programming (LP)-based online algorithms. This paper establishes several important facts about online linear programming, which unveils the challenge for first-order online algorithms to achieve beyond $\mathcal{O}(\sqrt{T})$ regret. To address this challenge, we introduce a new algorithmic framework which decouples learning from decision-making. For the first time, we show that first-order methods can achieve regret $\mathcal{O}(T^{1/3})$ with this new framework.
https://proceedings.mlr.press/v235/gao24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24o/gao24o.pdf
https://openreview.net/forum?id=XUOHKSsurt
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
https://proceedings.mlr.press/v235/gao24o.html
Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, Jia Li
https://proceedings.mlr.press/v235/gao24o.html
ICML 2024
Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices $A$ and $B$ to represent the weight change, i.e., $\Delta W=BA$. Despite LoRA’s progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by enjoying the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats $\Delta W$ as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we implement the inverse discrete Fourier transform to recover $\Delta W$. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification. For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA’s 33.5M. Our code is released at this link.
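A simplified sketch of the FourierFT parameterization: only a small set of spectral coefficients at fixed random frequency locations is trainable, and the dense update ΔW is recovered with an inverse 2-D FFT. Scaling constants and the integration into a specific base layer follow the paper's released code, not this sketch.

```python
# Sparse spectral coefficients -> dense weight update via inverse FFT.
import torch

d_out, d_in, n_spectral = 768, 768, 1000
rows = torch.randint(0, d_out, (n_spectral,))
cols = torch.randint(0, d_in, (n_spectral,))                   # fixed random frequency locations (not trained)
coeffs = torch.nn.Parameter(0.01 * torch.randn(n_spectral))    # the only trainable parameters

def delta_w(scale=1.0):
    spectrum = torch.zeros(d_out, d_in, dtype=torch.complex64)
    spectrum[rows, cols] = coeffs.to(torch.complex64)
    return scale * torch.fft.ifft2(spectrum).real              # dense weight update from a sparse spectrum

W_base = torch.randn(d_out, d_in)                              # frozen pre-trained weight
x = torch.randn(4, d_in)
out = x @ (W_base + delta_w()).T                               # adapted forward pass
print(out.shape, coeffs.numel(), "trainable parameters")
```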
https://proceedings.mlr.press/v235/gao24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24p/gao24p.pdf
https://openreview.net/forum?id=Zos5wsaB5r
Fast-Slow Test-Time Adaptation for Online Vision-and-Language Navigation
https://proceedings.mlr.press/v235/gao24p.html
Junyu Gao, Xuan Yao, Changsheng Xu
https://proceedings.mlr.press/v235/gao24p.html
ICML 2024
The ability to accurately comprehend natural language instructions and navigate to the target location is essential for an embodied agent. Such agents are typically required to execute user instructions in an online manner, leading us to explore the use of unlabeled test samples for effective online model adaptation. However, for online Vision-and-Language Navigation (VLN), due to the intrinsic nature of inter-sample online instruction execution and intra-sample multi-step action decision, frequent updates can result in drastic changes in model parameters, while occasional updates can make the model ill-equipped to handle dynamically changing environments. Therefore, we propose a Fast-Slow Test-Time Adaptation (FSTTA) approach for online VLN by performing joint decomposition-accumulation analysis for both gradients and parameters in a unified framework. Extensive experiments show that our method obtains impressive performance gains on four popular benchmarks. Code is available at https://github.com/Feliciaxyao/ICML2024-FSTTA.
https://proceedings.mlr.press/v235/gao24q.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24q/gao24q.pdf
https://openreview.net/forum?id=ihv6pWuILN
Causal Customer Churn Analysis with Low-rank Tensor Block Hazard Model
https://proceedings.mlr.press/v235/gao24q.html
Chenyin Gao, Zhiming Zhang, Shu Yang
https://proceedings.mlr.press/v235/gao24q.html
ICML 2024
This study introduces an innovative method for analyzing the impact of various interventions on customer churn, using the potential outcomes framework. We present a new causal model, the tensorized latent factor block hazard model, which incorporates tensor completion methods for a principled causal analysis of customer churn. A crucial element of our approach is the formulation of a 1-bit tensor completion for the parameter tensor. This captures hidden customer characteristics and temporal elements from churn records, effectively addressing the binary nature of churn data and its time-monotonic trends. Our model also uniquely categorizes interventions by their similar impacts, enhancing the precision and practicality of implementing customer retention strategies. For computational efficiency, we apply a projected gradient descent algorithm combined with spectral clustering. We lay down the theoretical groundwork for our model, including its non-asymptotic properties. The efficacy and superiority of our model are further validated through comprehensive experiments on both simulated and real-world applications.
https://proceedings.mlr.press/v235/gao24r.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24r/gao24r.pdf
https://openreview.net/forum?id=PPoQz8K4GZ
Prompt-based Visual Alignment for Zero-shot Policy Transfer
https://proceedings.mlr.press/v235/gao24r.html
Haihan Gao, Rui Zhang, Qi Yi, Hantao Yao, Haochen Li, Jiaming Guo, Shaohui Peng, Yunkai Gao, Qicheng Wang, Xing Hu, Yuanbo Wen, Zihao Zhang, Zidong Du, Ling Li, Qi Guo, Yunji Chen
https://proceedings.mlr.press/v235/gao24r.html
ICML 2024
Overfitting has become one of the main obstacles to applying reinforcement learning (RL). Existing methods do not provide explicit semantic constraints for the feature extractor, hindering the agent from learning a unified cross-domain representation and resulting in performance degradation on unseen domains. Besides, abundant data from multiple domains are needed. To address these issues, in this work, we propose prompt-based visual alignment (PVA), a robust framework to mitigate the detrimental domain bias in images for zero-shot policy transfer. Inspired by the fact that a Visual-Language Model (VLM) can serve as a bridge connecting the text space and the image space, we leverage the semantic information contained in a text sequence as an explicit constraint to train a visual aligner. Thus, the visual aligner can map images from multiple domains to a unified domain and achieve good generalization performance. To better depict semantic information, prompt tuning is applied to learn a sequence of learnable tokens. With explicit constraints of semantic information, PVA can learn a unified cross-domain representation under limited access to cross-domain data and achieves great zero-shot generalization ability in unseen domains. We verify PVA on a vision-based autonomous driving task with the CARLA simulator. Experiments show that the agent generalizes well to unseen domains under limited access to multi-domain data.
https://proceedings.mlr.press/v235/gao24s.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24s/gao24s.pdf
https://openreview.net/forum?id=Hjwx3H6Vci
Distribution Alignment Optimization through Neural Collapse for Long-tailed Classification
https://proceedings.mlr.press/v235/gao24s.html
Jintong Gao, He Zhao, Dan Dan Guo, Hongyuan Zha
https://proceedings.mlr.press/v235/gao24s.html
ICML 2024
A well-trained deep neural network on balanced datasets usually exhibits the Neural Collapse (NC) phenomenon, which is an informative indicator of the model achieving good performance. However, NC is usually hard to achieve for a model trained on long-tailed datasets, leading to deteriorated performance on test data. This work aims to induce the NC phenomenon in imbalanced learning from the perspective of distribution matching. By enforcing the distribution of last-layer representations to align with the ideal distribution of the ETF structure, we develop a Distribution Alignment Optimization (DisA) loss, a plug-and-play method that can be combined with most existing long-tailed methods; we further instantiate it for the cases of a fixed classifier and a learnable classifier. The extensive experiments show the effectiveness of DisA, providing a promising solution to the imbalanced issue. Our code is available at DisA.
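For reference, the simplex equiangular tight frame (ETF) that Neural Collapse converges to can be constructed explicitly: K equal-norm class directions with pairwise inner product -1/(K-1). DisA aligns the distribution of last-layer features with this structure; only the ETF target itself is built below.

```python
# Explicit simplex ETF construction used as the Neural Collapse target.
import numpy as np

def simplex_etf(K, d, seed=0):
    assert d >= K
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(d, K)))            # orthonormal basis, d x K
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    return U @ M                                            # columns are the ETF class means

E = simplex_etf(K=5, d=16)
print(np.round(E.T @ E, 3))   # 1 on the diagonal, -1/(K-1) = -0.25 off-diagonal
```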