| abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| https://proceedings.mlr.press/v235/mustafa24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/mustafa24a/mustafa24a.pdf | https://openreview.net/forum?id=Sjv5RcqfuH | GATE: How to Keep Out Intrusive Neighbors | https://proceedings.mlr.press/v235/mustafa24a.html | Nimrah Mustafa, Rebekka Burkholz | https://proceedings.mlr.press/v235/mustafa24a.html | ICML 2024 | Graph Attention Networks (GATs) are designed to provide flexible neighborhood aggregation that assigns weights to neighbors according to their importance. In practice, however, GATs are often unable to switch off task-irrelevant neighborhood aggregation, as we show experimentally and analytically. To address this challenge, we propose GATE, a GAT extension that holds three major advantages: i) It alleviates over-smoothing by addressing its root cause of unnecessary neighborhood aggregation. ii) Similarly to perceptrons, it benefits from higher depth as it can still utilize additional layers for (non-)linear feature transformations in case of (nearly) switched-off neighborhood aggregation. iii) By down-weighting connections to unrelated neighbors, it often outperforms GATs on real-world heterophilic datasets. To further validate our claims, we construct a synthetic test bed to analyze a model’s ability to utilize the appropriate amount of neighborhood aggregation, which could be of independent interest. |
| https://proceedings.mlr.press/v235/mutti24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/mutti24a/mutti24a.pdf | https://openreview.net/forum?id=LM7j0zrUZB | Test-Time Regret Minimization in Meta Reinforcement Learning | https://proceedings.mlr.press/v235/mutti24a.html | Mirco Mutti, Aviv Tamar | https://proceedings.mlr.press/v235/mutti24a.html | ICML 2024 | Meta reinforcement learning sets a distribution over a set of tasks on which the agent can train at will, then is asked to learn an optimal policy for any test task efficiently. In this paper, we consider a finite set of tasks modeled through Markov decision processes with various dynamics. We assume to have endured a long training phase, from which the set of tasks is perfectly recovered, and we focus on regret minimization against the optimal policy in the unknown test task. Under a separation condition that states the existence of a state-action pair revealing a task against another, Chen et al. (2022) show that $O(M^2 \log(H))$ regret can be achieved, where $M, H$ are the number of tasks in the set and test episodes, respectively. In our first contribution, we demonstrate that the latter rate is nearly optimal by developing a novel lower bound for test-time regret minimization under separation, showing that a linear dependence with $M$ is unavoidable. Then, we present a family of stronger yet reasonable assumptions beyond separation, which we call strong identifiability, enabling algorithms achieving fast rates $\log (H)$ and sublinear dependence with $M$ simultaneously. Our paper provides a new understanding of the statistical barriers of test-time regret minimization and when fast rates can be achieved. |
| https://proceedings.mlr.press/v235/muzellec24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/muzellec24a/muzellec24a.pdf | https://openreview.net/forum?id=eC1OOpOGZW | Saliency strikes back: How filtering out high frequencies improves white-box explanations | https://proceedings.mlr.press/v235/muzellec24a.html | Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, Rufin Vanrullen, Thomas Serre | https://proceedings.mlr.press/v235/muzellec24a.html | ICML 2024 | Attribution methods correspond to a class of explainability methods (XAI) that aim to assess how individual inputs contribute to a model’s decision-making process. We have identified a significant limitation in one type of attribution methods, known as "white-box" methods. Although highly efficient, as we will show, these methods rely on a gradient signal that is often contaminated by high-frequency artifacts. To overcome this limitation, we introduce a new approach called "FORGrad". This simple method effectively filters out these high-frequency artifacts using optimal cut-off frequencies tailored to the unique characteristics of each model architecture. Our findings show that FORGrad consistently enhances the performance of already existing white-box methods, enabling them to compete effectively with more accurate yet computationally demanding "black-box" methods. We anticipate that our research will foster broader adoption of simpler and more efficient white-box methods for explainability, offering a better balance between faithfulness and computational efficiency. |
| https://proceedings.mlr.press/v235/myers24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/myers24a/myers24a.pdf | https://openreview.net/forum?id=xQiYCmDrjp | Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making | https://proceedings.mlr.press/v235/myers24a.html | Vivek Myers, Chongyi Zheng, Anca Dragan, Sergey Levine, Benjamin Eysenbach | https://proceedings.mlr.press/v235/myers24a.html | ICML 2024 | Temporal distances lie at the heart of many algorithms for planning, control, and reinforcement learning that involve reaching goals, allowing one to estimate the transit time between two states. However, prior attempts to define such temporal distances in stochastic settings have been stymied by an important limitation: these prior approaches do not satisfy the triangle inequality. This is not merely a definitional concern, but translates to an inability to generalize and find shortest paths. In this paper, we build on prior work in contrastive learning and quasimetrics to show how successor features learned by contrastive learning (after a change of variables) form a temporal distance that does satisfy the triangle inequality, even in stochastic settings. Importantly, this temporal distance is computationally efficient to estimate, even in high-dimensional and stochastic settings. Experiments in controlled settings and benchmark suites demonstrate that an RL algorithm based on these new temporal distances exhibits combinatorial generalization (i.e., "stitching") and can sometimes learn more quickly than prior methods, including those based on quasimetrics. |
| https://proceedings.mlr.press/v235/na24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/na24a/na24a.pdf | https://openreview.net/forum?id=EsWJ5wd2ir | Diffusion Rejection Sampling | https://proceedings.mlr.press/v235/na24a.html | Byeonghu Na, Yeongmin Kim, Minsang Park, Donghyeok Shin, Wanmo Kang, Il-Chul Moon | https://proceedings.mlr.press/v235/na24a.html | ICML 2024 | Recent advances in powerful pre-trained diffusion models encourage the development of methods to improve the sampling performance under well-trained diffusion models. This paper introduces Diffusion Rejection Sampling (DiffRS), which uses a rejection sampling scheme that aligns the sampling transition kernels with the true ones at each timestep. The proposed method can be viewed as a mechanism that evaluates the quality of samples at each intermediate timestep and refines them with varying effort depending on the sample. Theoretical analysis shows that DiffRS can achieve a tighter bound on sampling error compared to pre-trained models. Empirical results demonstrate the state-of-the-art performance of DiffRS on the benchmark datasets and the effectiveness of DiffRS for fast diffusion samplers and large-scale text-to-image diffusion models. Our code is available at https://github.com/aailabkaist/DiffRS. |
| https://proceedings.mlr.press/v235/na24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/na24b/na24b.pdf | https://openreview.net/forum?id=gtYdvSGMYV | LAGMA: LAtent Goal-guided Multi-Agent Reinforcement Learning | https://proceedings.mlr.press/v235/na24b.html | Hyungho Na, Il-Chul Moon | https://proceedings.mlr.press/v235/na24b.html | ICML 2024 | In cooperative multi-agent reinforcement learning (MARL), agents collaborate to achieve common goals, such as defeating enemies and scoring a goal. However, learning goal-reaching paths toward such a semantic goal takes a considerable amount of time in complex tasks and the trained model often fails to find such paths. To address this, we present LAtent Goal-guided Multi-Agent reinforcement learning (LAGMA), which generates a goal-reaching trajectory in latent space and provides a latent goal-guided incentive to transitions toward this reference trajectory. LAGMA consists of three major components: (a) quantized latent space constructed via a modified VQ-VAE for efficient sample utilization, (b) goal-reaching trajectory generation via extended VQ codebook, and (c) latent goal-guided intrinsic reward generation to encourage transitions towards the sampled goal-reaching path. The proposed method is evaluated by StarCraft II with both dense and sparse reward settings and Google Research Football. Empirical results show further performance improvement over state-of-the-art baselines. |
| https://proceedings.mlr.press/v235/nabarro24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nabarro24a/nabarro24a.pdf | https://openreview.net/forum?id=6WYk5R86Wl | Learning in Deep Factor Graphs with Gaussian Belief Propagation | https://proceedings.mlr.press/v235/nabarro24a.html | Seth Nabarro, Mark Van Der Wilk, Andrew Davison | https://proceedings.mlr.press/v235/nabarro24a.html | ICML 2024 | We propose an approach to do learning in Gaussian factor graphs. We treat all relevant quantities (inputs, outputs, parameters, activations) as random variables in a graphical model, and view training and prediction as inference problems with different observed nodes. Our experiments show that these problems can be efficiently solved with belief propagation (BP), whose updates are inherently local, presenting exciting opportunities for distributed and asynchronous training. Our approach can be scaled to deep networks and provides a natural means to do continual learning: use the BP-estimated posterior of the current task as a prior for the next. On a video denoising task we demonstrate the benefit of learnable parameters over a classical factor graph approach and we show encouraging performance of deep factor graphs for continual image classification. |
| https://proceedings.mlr.press/v235/naderiparizi24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/naderiparizi24a/naderiparizi24a.pdf | https://openreview.net/forum?id=H8pMSJwRD5 | Don’t be so Negative! Score-based Generative Modeling with Oracle-assisted Guidance | https://proceedings.mlr.press/v235/naderiparizi24a.html | Saeid Naderiparizi, Xiaoxuan Liang, Setareh Cohan, Berend Zwartsenberg, Frank Wood | https://proceedings.mlr.press/v235/naderiparizi24a.html | ICML 2024 | Score-based diffusion models are a powerful class of generative models, widely utilized across diverse domains. Despite significant advancements in large-scale tasks such as text-to-image generation, their application to constrained domains has received considerably less attention. This work addresses model learning in a setting where, in addition to the training dataset, there further exists side-information in the form of an oracle that can label samples as being outside the support of the true data generating distribution. Specifically we develop a new denoising diffusion probabilistic modeling methodology, Gen-neG, that leverages this additional side-information. Gen-neG builds on classifier guidance in diffusion models to guide the generation process towards the positive support region indicated by the oracle. We empirically establish the utility of Gen-neG in applications including collision avoidance in self-driving simulators and safety-guarded human motion generation. |
| https://proceedings.mlr.press/v235/nadew24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nadew24a/nadew24a.pdf | https://openreview.net/forum?id=zgiT3uxvCF | Conditionally-Conjugate Gaussian Process Factor Analysis for Spike Count Data via Data Augmentation | https://proceedings.mlr.press/v235/nadew24a.html | Yididiya Y. Nadew, Xuhui Fan, Christopher John Quinn | https://proceedings.mlr.press/v235/nadew24a.html | ICML 2024 | Gaussian process factor analysis (GPFA) is a latent variable modeling technique commonly used to identify smooth, low-dimensional latent trajectories underlying high-dimensional neural recordings. Specifically, researchers model spiking rates as Gaussian observations, resulting in tractable inference. Recently, GPFA has been extended to model spike count data. However, due to the non-conjugacy of the likelihood, the inference becomes intractable. Prior works rely on either black-box inference techniques, numerical integration or polynomial approximations of the likelihood to handle intractability. To overcome this challenge, we propose a conditionally-conjugate Gaussian process factor analysis (ccGPFA) resulting in both analytically and computationally tractable inference for modeling neural activity from spike count data. In particular, we develop a novel data augmentation based method that renders the model conditionally conjugate. Consequently, our model enjoys the advantage of simple closed-form updates using a variational EM algorithm. Furthermore, due to its conditional conjugacy, we show our model can be readily scaled using sparse Gaussian Processes and accelerated inference via natural gradients. To validate our method, we empirically demonstrate its efficacy through experiments. |
| https://proceedings.mlr.press/v235/nadjahi24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nadjahi24a/nadjahi24a.pdf | https://openreview.net/forum?id=uWNUTRgBso | Slicing Mutual Information Generalization Bounds for Neural Networks | https://proceedings.mlr.press/v235/nadjahi24a.html | Kimia Nadjahi, Kristjan Greenewald, Rickard Brüel Gabrielsson, Justin Solomon | https://proceedings.mlr.press/v235/nadjahi24a.html | ICML 2024 | The ability of machine learning (ML) algorithms to generalize well to unseen data has been studied through the lens of information theory, by bounding the generalization error with the input-output mutual information (MI), i.e., the MI between the training data and the learned hypothesis. Yet, these bounds have limited practicality for modern ML applications (e.g., deep learning), due to the difficulty of evaluating MI in high dimensions. Motivated by recent findings on the compressibility of neural networks, we consider algorithms that operate by slicing the parameter space, i.e., trained on random lower-dimensional subspaces. We introduce new, tighter information-theoretic generalization bounds tailored for such algorithms, demonstrating that slicing improves generalization. Our bounds offer significant computational and statistical advantages over standard MI bounds, as they rely on scalable alternative measures of dependence, i.e., disintegrated mutual information and $k$-sliced mutual information. Then, we extend our analysis to algorithms whose parameters do not need to exactly lie on random subspaces, by leveraging rate-distortion theory. This strategy yields generalization bounds that incorporate a distortion term measuring model compressibility under slicing, thereby tightening existing bounds without compromising performance or requiring model compression. Building on this, we propose a regularization scheme enabling practitioners to control generalization through compressibility. Finally, we empirically validate our results and achieve the computation of non-vacuous information-theoretic generalization bounds for neural networks, a task that was previously out of reach. |
| https://proceedings.mlr.press/v235/nagalapatti24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nagalapatti24a/nagalapatti24a.pdf | https://openreview.net/forum?id=o5SVr80Rgg | PairNet: Training with Observed Pairs to Estimate Individual Treatment Effect | https://proceedings.mlr.press/v235/nagalapatti24a.html | Lokesh Nagalapatti, Pranava Singhal, Avishek Ghosh, Sunita Sarawagi | https://proceedings.mlr.press/v235/nagalapatti24a.html | ICML 2024 | Given a dataset of individuals each described by a covariate vector, a treatment, and an observed outcome on the treatment, the goal of the individual treatment effect (ITE) estimation task is to predict outcome changes resulting from a change in treatment. A fundamental challenge is that in the observational data, a covariate’s outcome is observed only under one treatment, whereas we need to infer the difference in outcomes under two different treatments. Several existing approaches address this issue through training with inferred pseudo-outcomes, but their success relies on the quality of these pseudo-outcomes. We propose PairNet, a novel ITE estimation training strategy that minimizes losses over pairs of examples based on their factual observed outcomes. Theoretical analysis for binary treatments reveals that PairNet is a consistent estimator of ITE risk, and achieves smaller generalization error than baseline models. Empirical comparison with thirteen existing methods across eight benchmarks, covering both discrete and continuous treatments, shows that PairNet achieves significantly lower ITE error compared to the baselines. Also, it is model-agnostic and easy to implement. |
| https://proceedings.mlr.press/v235/nagumo24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nagumo24a/nagumo24a.pdf | https://openreview.net/forum?id=PykISfqvet | Density Ratio Estimation with Doubly Strong Robustness | https://proceedings.mlr.press/v235/nagumo24a.html | Ryosuke Nagumo, Hironori Fujisawa | https://proceedings.mlr.press/v235/nagumo24a.html | ICML 2024 | We develop two density ratio estimation (DRE) methods with robustness to outliers. These are based on the divergence with a weight function to weaken the adverse effects of outliers. One is based on the Unnormalized Kullback-Leibler divergence, called Weighted DRE, and its optimization is a convex problem. The other is based on the γ-divergence, called γ-DRE, which improves a normalizing term problem of Weighted DRE. Its optimization is a DC (Difference of Convex functions) problem and needs more computation than a convex problem. These methods have doubly strong robustness, which means robustness to the heavy contamination of both the reference and target distributions. Numerical experiments show that our proposals are more robust than the previous methods. |
| https://proceedings.mlr.press/v235/nam24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nam24a/nam24a.pdf | https://openreview.net/forum?id=dQveBV9lZl | Solving Poisson Equations using Neural Walk-on-Spheres | https://proceedings.mlr.press/v235/nam24a.html | Hong Chul Nam, Julius Berner, Anima Anandkumar | https://proceedings.mlr.press/v235/nam24a.html | ICML 2024 | We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations. Leveraging stochastic representations and Walk-on-Spheres methods, we develop novel losses for neural networks based on the recursive solution of Poisson equations on spheres inside the domain. The resulting method is highly parallelizable and does not require spatial gradients for the loss. We provide a comprehensive comparison against competing methods based on PINNs, the Deep Ritz method, and (backward) stochastic differential equations. In several challenging, high-dimensional numerical examples, we demonstrate the superiority of NWoS in accuracy, speed, and computational costs. Compared to commonly used PINNs, our approach can reduce memory usage and errors by orders of magnitude. Furthermore, we apply NWoS to problems in PDE-constrained optimization and molecular dynamics to show its efficiency in practical applications. |
| https://proceedings.mlr.press/v235/narasimhan24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/narasimhan24a/narasimhan24a.pdf | https://openreview.net/forum?id=WpKDeixmFr | Time Weaver: A Conditional Time Series Generation Model | https://proceedings.mlr.press/v235/narasimhan24a.html | Sai Shankar Narasimhan, Shubhankar Agarwal, Oguzhan Akcin, Sujay Sanghavi, Sandeep P. Chinchali | https://proceedings.mlr.press/v235/narasimhan24a.html | ICML 2024 | Imagine generating a city’s electricity demand pattern based on weather, the presence of an electric vehicle, and location, which could be used for capacity planning during a winter freeze. Such real-world time series are often enriched with paired heterogeneous contextual metadata (e.g., weather and location). Current approaches to time series generation often ignore this paired metadata. Additionally, the heterogeneity in metadata poses several practical challenges in adapting existing conditional generation approaches from the image, audio, and video domains to the time series domain. To address this gap, we introduce TIME WEAVER, a novel diffusion-based model that leverages the heterogeneous metadata in the form of categorical, continuous, and even time-variant variables to significantly improve time series generation. Additionally, we show that naive extensions of standard evaluation metrics from the image to the time series domain are insufficient. These metrics do not penalize conditional generation approaches for their poor specificity in reproducing the metadata-specific features in the generated time series. Thus, we innovate a novel evaluation metric that accurately captures the specificity of conditional generation and the realism of the generated time series. We show that TIME WEAVER outperforms state-of-the-art benchmarks, such as Generative Adversarial Networks (GANs), by up to 30% in downstream classification tasks on real-world energy, medical, air quality, and traffic datasets. |
| https://proceedings.mlr.press/v235/nasiriany24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nasiriany24a/nasiriany24a.pdf | https://openreview.net/forum?id=051jaf8MQy | PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs | https://proceedings.mlr.press/v235/nasiriany24a.html | Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu, Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, Brian Ichter | https://proceedings.mlr.press/v235/nasiriany24a.html | ICML 2024 | Vision language models (VLMs) have shown impressive capabilities across a variety of tasks, from logical reasoning to visual understanding. This opens the door to richer interaction with the world, for example robotic control. However, VLMs produce only textual outputs, while robotic control and other spatial tasks require outputting continuous coordinates, actions, or trajectories. How can we enable VLMs to handle such settings without fine-tuning on task-specific data? In this paper, we propose a novel visual prompting approach for VLMs that we call Prompting with Iterative Visual Optimization (PIVOT), which casts tasks as iterative visual question answering. In each iteration, the image is annotated with a visual representation of proposals that the VLM can refer to (e.g., candidate robot actions, localizations, or trajectories). The VLM then selects the best ones for the task. These proposals are iteratively refined, allowing the VLM to eventually zero in on the best available answer. We investigate PIVOT on real-world robotic navigation, real-world manipulation from images, instruction following in simulation, and additional spatial inference tasks such as localization. We find, perhaps surprisingly, that our approach enables zero-shot control of robotic systems without any robot training data, navigation in a variety of environments, and other capabilities. Although current performance is far from perfect, our work highlights potentials and limitations of this new regime and shows a promising approach for Internet-Scale VLMs in robotic and spatial reasoning domains. |
| https://proceedings.mlr.press/v235/nauman24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nauman24a/nauman24a.pdf | https://openreview.net/forum?id=5vZzmCeTYu | Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning | https://proceedings.mlr.press/v235/nauman24a.html | Michal Nauman, Michał Bortkiewicz, Piotr Miłoś, Tomasz Trzcinski, Mateusz Ostaszewski, Marek Cygan | https://proceedings.mlr.press/v235/nauman24a.html | ICML 2024 | Recent advancements in off-policy Reinforcement Learning (RL) have significantly improved sample efficiency, primarily due to the incorporation of various forms of regularization that enable more gradient update steps than traditional agents. However, many of these techniques have been tested in limited settings, often on tasks from single simulation benchmarks and against well-known algorithms rather than a range of regularization approaches. This limits our understanding of the specific mechanisms driving RL improvements. To address this, we implemented over 60 different off-policy agents, each integrating established regularization techniques from recent state-of-the-art algorithms. We tested these agents across 14 diverse tasks from 2 simulation benchmarks, measuring training metrics related to overestimation, overfitting, and plasticity loss — issues that motivate the examined regularization techniques. Our findings reveal that while the effectiveness of a specific regularization setup varies with the task, certain combinations consistently demonstrate robust and superior performance. Notably, a simple Soft Actor-Critic agent, appropriately regularized, reliably finds a better-performing policy within the training regime, which previously was achieved mainly through model-based approaches. |
| https://proceedings.mlr.press/v235/naumann24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/naumann24a/naumann24a.pdf | https://openreview.net/forum?id=JJSj8UXqd4 | Box Facets and Cut Facets of Lifted Multicut Polytopes | https://proceedings.mlr.press/v235/naumann24a.html | Lucas Fabian Naumann, Jannik Irmai, Shengxian Zhao, Bjoern Andres | https://proceedings.mlr.press/v235/naumann24a.html | ICML 2024 | The lifted multicut problem has diverse applications in the field of computer vision. Exact algorithms based on linear programming require an understanding of lifted multicut polytopes. Despite recent progress, two fundamental questions about these polytopes have remained open: Which lower box inequalities define facets, and which cut inequalities define facets? In this article, we answer the first question by establishing conditions that are necessary, sufficient and efficiently decidable. Toward the second question, we show that deciding facet-definingness of cut inequalities is NP-hard. This completes the analysis of canonical facets of lifted multicut polytopes. |
| https://proceedings.mlr.press/v235/navon24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/navon24a/navon24a.pdf | https://openreview.net/forum?id=nBPnmk6EeO | Equivariant Deep Weight Space Alignment | https://proceedings.mlr.press/v235/navon24a.html | Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron | https://proceedings.mlr.press/v235/navon24a.html | ICML 2024 | Permutation symmetries of deep networks make basic operations like model merging and similarity estimation challenging. In many cases, aligning the weights of the networks, i.e., finding optimal permutations between their weights, is necessary. Unfortunately, weight alignment is an NP-hard problem. Prior research has mainly focused on solving relaxed versions of the alignment problem, leading to either time-consuming methods or sub-optimal solutions. To accelerate the alignment process and improve its quality, we propose a novel framework aimed at learning to solve the weight alignment problem, which we name Deep-Align. To that end, we first prove that weight alignment adheres to two fundamental symmetries and then, propose a deep architecture that respects these symmetries. Notably, our framework does not require any labeled data. We provide a theoretical analysis of our approach and evaluate Deep-Align on several types of network architectures and learning setups. Our experimental results indicate that a feed-forward pass with Deep-Align produces better or equivalent alignments compared to those produced by current optimization algorithms. Additionally, our alignments can be used as an effective initialization for other methods, leading to improved solutions with a significant speedup in convergence. |
| https://proceedings.mlr.press/v235/nawrot24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nawrot24a/nawrot24a.pdf | https://openreview.net/forum?id=tDRYrAkOB7 | Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference | https://proceedings.mlr.press/v235/nawrot24a.html | Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo Ponti | https://proceedings.mlr.press/v235/nawrot24a.html | ICML 2024 | Transformers have emerged as the backbone of large language models (LLMs). However, generation remains inefficient due to the need to store in memory a cache of key–value representations for past tokens, whose size scales linearly with the input sequence length and batch size. As a solution, we propose Dynamic Memory Compression (DMC), a method for on-line key–value cache compression at inference time. Most importantly, the model learns to apply different compression ratios in different heads and layers. We retrofit pre-trained LLMs such as Llama 2 (7B, 13B and 70B) into DMC Transformers, achieving up to $\sim 3.7 \times$ throughput increase during auto-regressive inference on an NVIDIA H100 GPU. DMC is applied via continued pre-training on a negligible percentage of the original data without adding any extra parameters. We find that DMC preserves the original downstream performance with up to 4$\times$ cache compression, outperforming up-trained grouped-query attention (GQA) and key–value eviction policies (H$_2$O, TOVA). GQA and DMC can be even combined to obtain compounded gains. As a result DMC fits longer contexts and larger batches within any given memory budget. We release the DMC code and models at https://github.com/NVIDIA/Megatron-LM/tree/DMC. |
| https://proceedings.mlr.press/v235/nazaret24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nazaret24a/nazaret24a.pdf | https://openreview.net/forum?id=JJZBZW28Gn | Stable Differentiable Causal Discovery | https://proceedings.mlr.press/v235/nazaret24a.html | Achille Nazaret, Justin Hong, Elham Azizi, David Blei | https://proceedings.mlr.press/v235/nazaret24a.html | ICML 2024 | Inferring causal relationships as directed acyclic graphs (DAGs) is an important but challenging problem. Differentiable Causal Discovery (DCD) is a promising approach to this problem, framing the search as a continuous optimization. But existing DCD methods are numerically unstable, with poor performance beyond tens of variables. In this paper, we propose Stable Differentiable Causal Discovery (SDCD), a new method that improves previous DCD methods in two ways: (1) It employs an alternative constraint for acyclicity; this constraint is more stable, both theoretically and empirically, and fast to compute. (2) It uses a training procedure tailored for sparse causal graphs, which are common in real-world scenarios. We first derive SDCD and prove its stability and correctness. We then evaluate it with both observational and interventional data and in both small-scale and large-scale settings. We find that SDCD outperforms existing methods in convergence speed and accuracy, and can scale to thousands of variables. |
| https://proceedings.mlr.press/v235/neekhara24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/neekhara24a/neekhara24a.pdf | https://openreview.net/forum?id=6kMMgmeM2U | SelfVC: Voice Conversion With Iterative Refinement using Self Transformations | https://proceedings.mlr.press/v235/neekhara24a.html | Paarth Neekhara, Shehzeen Samarah Hussain, Rafael Valle, Boris Ginsburg, Rishabh Ranjan, Shlomo Dubnov, Farinaz Koushanfar, Julian Mcauley | https://proceedings.mlr.press/v235/neekhara24a.html | ICML 2024 | We propose SelfVC, a training strategy to iteratively improve a voice conversion model with self-synthesized examples. Previous efforts on voice conversion focus on factorizing speech into explicitly disentangled representations that separately encode speaker characteristics and linguistic content. However, disentangling speech representations to capture such attributes using task-specific loss terms can lead to information loss. In this work, instead of explicitly disentangling attributes with loss terms, we present a framework to train a controllable voice conversion model on entangled speech representations derived from self-supervised learning (SSL) and speaker verification models. First, we develop techniques to derive prosodic information from the audio signal and SSL representations to train predictive submodules in the synthesis model. Next, we propose a training strategy to iteratively improve the synthesis model for voice conversion, by creating a challenging training objective using self-synthesized examples. We demonstrate that incorporating such self-synthesized examples during training improves the speaker similarity of generated speech as compared to a baseline voice conversion model trained solely on heuristically perturbed inputs. Our framework is trained without any text and achieves state-of-the-art results in zero-shot voice conversion on metrics evaluating naturalness, speaker similarity, and intelligibility of synthesized audio. |
| https://proceedings.mlr.press/v235/neklyudov24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/neklyudov24a/neklyudov24a.pdf | https://openreview.net/forum?id=wwItuHdus6 | A Computational Framework for Solving Wasserstein Lagrangian Flows | https://proceedings.mlr.press/v235/neklyudov24a.html | Kirill Neklyudov, Rob Brekelmans, Alexander Tong, Lazar Atanackovic, Qiang Liu, Alireza Makhzani | https://proceedings.mlr.press/v235/neklyudov24a.html | ICML 2024 | The dynamical formulation of the optimal transport can be extended through various choices of the underlying geometry (kinetic energy), and the regularization of density paths (potential energy). These combinations yield different variational problems (Lagrangians), encompassing many variations of the optimal transport problem such as the Schrödinger bridge, unbalanced optimal transport, and optimal transport with physical constraints, among others. In general, the optimal density path is unknown, and solving these variational problems can be computationally challenging. We propose a novel deep learning based framework approaching all of these problems from a unified perspective. Leveraging the dual formulation of the Lagrangians, our method does not require simulating or backpropagating through the trajectories of the learned dynamics, and does not need access to optimal couplings. We showcase the versatility of the proposed framework by outperforming previous approaches for the single-cell trajectory inference, where incorporating prior knowledge into the dynamics is crucial for correct predictions. |
| https://proceedings.mlr.press/v235/nelaturu24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nelaturu24a/nelaturu24a.pdf | https://openreview.net/forum?id=weixEb6Wjd | On The Fairness Impacts of Hardware Selection in Machine Learning | https://proceedings.mlr.press/v235/nelaturu24a.html | Sree Harsha Nelaturu, Nishaanth Kanna Ravichandran, Cuong Tran, Sara Hooker, Ferdinando Fioretto | https://proceedings.mlr.press/v235/nelaturu24a.html | ICML 2024 | In the machine learning ecosystem, hardware selection is often regarded as a mere utility, overshadowed by the spotlight on algorithms and data. This is especially relevant in contexts like ML-as-a-service platforms, where users often lack control over the hardware used for model deployment. This paper investigates the influence of hardware on the delicate balance between model performance and fairness. We demonstrate that hardware choices can exacerbate existing disparities, attributing these discrepancies to variations in gradient flows and loss surfaces across different demographic groups. Through both theoretical and empirical analysis, the paper not only identifies the underlying factors but also proposes an effective strategy for mitigating hardware-induced performance imbalances. |
| https://proceedings.mlr.press/v235/neu24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/neu24a/neu24a.pdf | https://openreview.net/forum?id=RPMTNGMq0O | Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization | https://proceedings.mlr.press/v235/neu24a.html | Gergely Neu, Nneka Okolo | https://proceedings.mlr.press/v235/neu24a.html | ICML 2024 | We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scales linearly with the size of the iterates (and is thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span. |
| https://proceedings.mlr.press/v235/ng24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/ng24a/ng24a.pdf | https://openreview.net/forum?id=ZdSe1qnuia | Score-Based Causal Discovery of Latent Variable Causal Models | https://proceedings.mlr.press/v235/ng24a.html | Ignavier Ng, Xinshuai Dong, Haoyue Dai, Biwei Huang, Peter Spirtes, Kun Zhang | https://proceedings.mlr.press/v235/ng24a.html | ICML 2024 | Identifying latent variables and the causal structure involving them is essential across various scientific fields. While many existing works fall under the category of constraint-based methods (with e.g. conditional independence or rank deficiency tests), they may face empirical challenges such as testing-order dependency, error propagation, and choosing an appropriate significance level. These issues can potentially be mitigated by properly designed score-based methods, such as Greedy Equivalence Search (GES) (Chickering, 2002) in the specific setting without latent variables. Yet, formulating score-based methods with latent variables is highly challenging. In this work, we develop score-based methods that are capable of identifying causal structures containing causally-related latent variables with identifiability guarantees. Specifically, we show that a properly formulated scoring function can achieve score equivalence and consistency for structure learning of latent variable causal models. We further provide a characterization of the degrees of freedom for the marginal over the observed variables under multiple structural assumptions considered in the literature, and accordingly develop both exact and continuous score-based methods. This offers a unified view of several existing constraint-based methods with different structural assumptions. Experimental results validate the effectiveness of the proposed methods. |
| https://proceedings.mlr.press/v235/ng24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/ng24b/ng24b.pdf | https://openreview.net/forum?id=JNN6QHhLHB | Measuring Stochastic Data Complexity with Boltzmann Influence Functions | https://proceedings.mlr.press/v235/ng24b.html | Nathan Hoyen Ng, Roger Baker Grosse, Marzyeh Ghassemi | https://proceedings.mlr.press/v235/ng24b.html | ICML 2024 | Estimating the uncertainty of a model’s prediction on a test point is a crucial part of ensuring reliability and calibration under distribution shifts. A minimum description length approach to this problem uses the predictive normalized maximum likelihood (pNML) distribution, which considers every possible label for a data point, and decreases confidence in a prediction if other labels are also consistent with the model and training data. In this work we propose IF-COMP, a scalable and efficient approximation of the pNML distribution that linearizes the model with a temperature-scaled Boltzmann influence function. IF-COMP can be used to produce well-calibrated predictions on test points as well as measure complexity in both labelled and unlabelled settings. We experimentally validate IF-COMP on uncertainty calibration, mislabel detection, and OOD detection tasks, where it consistently matches or beats strong baseline methods. |
| https://proceedings.mlr.press/v235/nguyen24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24a/nguyen24a.pdf | https://openreview.net/forum?id=e76GrGhIgf | Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts? | https://proceedings.mlr.press/v235/nguyen24a.html | Huy Nguyen, Pedram Akbarian, Nhat Ho | https://proceedings.mlr.press/v235/nguyen24a.html | ICML 2024 | Dense-to-sparse gating mixture of experts (MoE) has recently become an effective alternative to a well-known sparse MoE. Rather than fixing the number of activated experts as in the latter model, which could limit the investigation of potential experts, the former model utilizes the temperature to control the softmax weight distribution and the sparsity of the MoE during training in order to stabilize the expert specialization. Nevertheless, while there are previous attempts to theoretically comprehend the sparse MoE, a comprehensive analysis of the dense-to-sparse gating MoE has remained elusive. Therefore, we aim to explore the impacts of the dense-to-sparse gate on the maximum likelihood estimation under the Gaussian MoE in this paper. We demonstrate that due to interactions between the temperature and other model parameters via some partial differential equations, the convergence rates of parameter estimations are slower than any polynomial rates, and could be as slow as $\mathcal{O}(1/\log(n))$, where $n$ denotes the sample size. To address this issue, we propose using a novel activation dense-to-sparse gate, which routes the output of a linear layer to an activation function before delivering them to the softmax function. By imposing linearly independence conditions on the activation function and its derivatives, we show that the parameter estimation rates are significantly improved to polynomial rates. Finally, we conduct a simulation study to empirically validate our theoretical results. |
| https://proceedings.mlr.press/v235/nguyen24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24b/nguyen24b.pdf | https://openreview.net/forum?id=2Sl0lPF6ka | A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts | https://proceedings.mlr.press/v235/nguyen24b.html | Huy Nguyen, Pedram Akbarian, Trungtin Nguyen, Nhat Ho | https://proceedings.mlr.press/v235/nguyen24b.html | ICML 2024 | Mixture-of-experts (MoE) model incorporates the power of multiple submodels via gating functions to achieve greater performance in numerous regression and classification applications. From a theoretical perspective, while there have been previous attempts to comprehend the behavior of that model under the regression settings through the convergence analysis of maximum likelihood estimation in the Gaussian MoE model, such analysis under the setting of a classification problem has remained missing in the literature. We close this gap by establishing the convergence rates of density estimation and parameter estimation in the softmax gating multinomial logistic MoE model. Notably, when part of the expert parameters vanish, these rates are shown to be slower than polynomial rates owing to an inherent interaction between the softmax gating and expert functions via partial differential equations. To address this issue, we propose using a novel class of modified softmax gating functions which transform the input before delivering them to the gating functions. As a result, the previous interaction disappears and the parameter estimation rates are significantly improved. |
| https://proceedings.mlr.press/v235/nguyen24c.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24c/nguyen24c.pdf | https://openreview.net/forum?id=T0zR4mdSce | PARCv2: Physics-aware Recurrent Convolutional Neural Networks for Spatiotemporal Dynamics Modeling | https://proceedings.mlr.press/v235/nguyen24c.html | Phong C.H. Nguyen, Xinlun Cheng, Shahab Azarfar, Pradeep Seshadri, Yen T. Nguyen, Munho Kim, Sanghun Choi, H.S. Udaykumar, Stephen Baek | https://proceedings.mlr.press/v235/nguyen24c.html | ICML 2024 | Modeling unsteady, fast transient, and advection-dominated physics problems is a pressing challenge for physics-aware deep learning (PADL). The physics of complex systems is governed by large systems of partial differential equations (PDEs) and ancillary constitutive models with nonlinear structures, as well as evolving state fields exhibiting sharp gradients and rapidly deforming material interfaces. Here, we investigate an inductive bias approach that is versatile and generalizable to model generic nonlinear field evolution problems. Our study focuses on the recent physics-aware recurrent convolutions (PARC), which incorporates a differentiator-integrator architecture that inductively models the spatiotemporal dynamics of generic physical systems. We extend the capabilities of PARC to simulate unsteady, transient, and advection-dominant systems. The extended model, referred to as PARCv2, is equipped with differential operators to model advection-reaction-diffusion equations, as well as a hybrid integral solver for stable, long-time predictions. PARCv2 is tested on both standard benchmark problems in fluid dynamics, namely Burgers and Navier-Stokes equations, and then applied to more complex shock-induced reaction problems in energetic materials. We evaluate the behavior of PARCv2 in comparison to other physics-informed and learning bias models and demonstrate its potential to model unsteady and advection-dominant dynamics regimes. |
| https://proceedings.mlr.press/v235/nguyen24d.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24d/nguyen24d.pdf | https://openreview.net/forum?id=gbD9MAc9p0 | Quality-Weighted Vendi Scores And Their Application To Diverse Experimental Design | https://proceedings.mlr.press/v235/nguyen24d.html | Quan Nguyen, Adji Bousso Dieng | https://proceedings.mlr.press/v235/nguyen24d.html | ICML 2024 | Experimental design techniques such as active search and Bayesian optimization are widely used in the natural sciences for data collection and discovery. However, existing techniques tend to favor exploitation over exploration of the search space, which causes them to get stuck in local optima. This collapse problem prevents experimental design algorithms from yielding diverse high-quality data. In this paper, we extend the Vendi scores—a family of interpretable similarity-based diversity metrics—to account for quality. We then leverage these quality-weighted Vendi scores to tackle experimental design problems across various applications, including drug discovery, materials discovery, and reinforcement learning. We found that quality-weighted Vendi scores allow us to construct policies for experimental design that flexibly balance quality and diversity, and ultimately assemble rich and diverse sets of high-performing data points. Our algorithms led to a 70%–170% increase in the number of effective discoveries compared to baselines. |
| https://proceedings.mlr.press/v235/nguyen24e.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24e/nguyen24e.pdf | https://openreview.net/forum?id=d2E2i5rJ4x | Multiplicative Weights Update, Area Convexity and Random Coordinate Descent for Densest Subgraph Problems | https://proceedings.mlr.press/v235/nguyen24e.html | Ta Duy Nguyen, Alina Ene | https://proceedings.mlr.press/v235/nguyen24e.html | ICML 2024 | We study the densest subgraph problem and give algorithms via multiplicative weights update and area convexity that converge in $O\left(\frac{\log m}{\epsilon^{2}}\right)$ and $O\left(\frac{\log m}{\epsilon}\right)$ iterations, respectively, both with nearly-linear time per iteration. Compared with the work by Bahmani et al. (2014), our MWU algorithm uses a very different and much simpler procedure for recovering the dense subgraph from the fractional solution and does not employ a binary search. Compared with the work by Boob et al. (2019), our algorithm via area convexity improves the iteration complexity by a factor $\Delta$—the maximum degree in the graph, and matches the fastest theoretical runtime currently known via flows (Chekuri et al., 2022) in total time. Next, we study the dense subgraph decomposition problem and give the first practical iterative algorithm with linear convergence rate $O\left(mn\log\frac{1}{\epsilon}\right)$ via accelerated random coordinate descent. This significantly improves over $O\left(\frac{m\sqrt{mn\Delta}}{\epsilon}\right)$ time of the FISTA-based algorithm by Harb et al. (2022). In the high precision regime $\epsilon\ll\frac{1}{n}$ where we can even recover the exact solution, our algorithm has a total runtime of $O\left(mn\log n\right)$, matching the state of the art exact algorithm via parametric flows (Gallo et al., 1989). Empirically, we show that this algorithm is very practical and scales to very large graphs, and its performance is competitive with widely used methods that have significantly weaker theoretical guarantees. |
| https://proceedings.mlr.press/v235/nguyen24f.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24f/nguyen24f.pdf | https://openreview.net/forum?id=BO0jookxk8 | On Least Square Estimation in Softmax Gating Mixture of Experts | https://proceedings.mlr.press/v235/nguyen24f.html | Huy Nguyen, Nhat Ho, Alessandro Rinaldo | https://proceedings.mlr.press/v235/nguyen24f.html | ICML 2024 | Mixture of experts (MoE) model is a statistical machine learning design that aggregates multiple expert networks using a softmax gating function in order to form a more intricate and expressive model. Despite being commonly used in several applications owing to their scalability, the mathematical and statistical properties of MoE models are complex and difficult to analyze. As a result, previous theoretical works have primarily focused on probabilistic MoE models by imposing the impractical assumption that the data are generated from a Gaussian MoE model. In this work, we investigate the performance of the least squares estimators (LSE) under a deterministic MoE model where the data are sampled according to a regression model, a setting that has remained largely unexplored. We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions. We demonstrate that the rates for estimating strongly identifiable experts, namely the widely used feed forward networks with activation functions $\mathrm{sigmoid}(\cdot)$ and $\tanh(\cdot)$, are substantially faster than those of polynomial experts, which we show to exhibit a surprising slow estimation rate. Our findings have important practical implications for expert selection. |
| https://proceedings.mlr.press/v235/nguyen24g.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24g/nguyen24g.pdf | https://openreview.net/forum?id=qGEEso256L | Structure-Aware E(3)-Invariant Molecular Conformer Aggregation Networks | https://proceedings.mlr.press/v235/nguyen24g.html | Duy Minh Ho Nguyen, Nina Lukashina, Tai Nguyen, An Thai Le, Trungtin Nguyen, Nhat Ho, Jan Peters, Daniel Sonntag, Viktor Zaverkin, Mathias Niepert | https://proceedings.mlr.press/v235/nguyen24g.html | ICML 2024 | A molecule’s 2D representation consists of its atoms, their attributes, and the molecule’s covalent bonds. A 3D (geometric) representation of a molecule is called a conformer and consists of its atom types and Cartesian coordinates. Every conformer has a potential energy, and the lower this energy, the more likely it occurs in nature. Most existing machine learning methods for molecular property prediction consider either 2D molecular graphs or 3D conformer structure representations in isolation. Inspired by recent work on using ensembles of conformers in conjunction with 2D graph representations, we propose E(3)-invariant molecular conformer aggregation networks. The method integrates a molecule’s 2D representation with that of multiple of its conformers. Contrary to prior work, we propose a novel 2D–3D aggregation mechanism based on a differentiable solver for the Fused Gromov-Wasserstein Barycenter problem and the use of an efficient conformer generation method based on distance geometry. We show that the proposed aggregation mechanism is E(3) invariant and propose an efficient GPU implementation. Moreover, we demonstrate that the aggregation mechanism helps to significantly outperform state-of-the-art molecule property prediction methods on established datasets. |
| https://proceedings.mlr.press/v235/nguyen24h.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24h/nguyen24h.pdf | https://openreview.net/forum?id=FoRqdsN4IA | Generative Conditional Distributions by Neural (Entropic) Optimal Transport | https://proceedings.mlr.press/v235/nguyen24h.html | Bao Nguyen, Binh Nguyen, Hieu Trung Nguyen, Viet Anh Nguyen | https://proceedings.mlr.press/v235/nguyen24h.html | ICML 2024 | Learning conditional distributions is challenging because the desired outcome is not a single distribution but multiple distributions that correspond to multiple instances of the covariates. We introduce a novel neural entropic optimal transport method designed to effectively learn generative models of conditional distributions, particularly in scenarios characterized by limited sample sizes. Our method relies on the minimax training of two neural networks: a generative network parametrizing the inverse cumulative distribution functions of the conditional distributions and another network parametrizing the conditional Kantorovich potential. To prevent overfitting, we regularize the objective function by penalizing the Lipschitz constant of the network output. Our experiments on real-world datasets show the effectiveness of our algorithm compared to state-of-the-art conditional distribution learning techniques. Our implementation can be found at https://github.com/nguyenngocbaocmt02/GENTLE. |
| https://proceedings.mlr.press/v235/nguyen24i.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24i/nguyen24i.pdf | https://openreview.net/forum?id=SRzb3QDjdV | PIDformer: Transformer Meets Control Theory | https://proceedings.mlr.press/v235/nguyen24i.html | Tam Minh Nguyen, Cesar A Uribe, Tan Minh Nguyen, Richard Baraniuk | https://proceedings.mlr.press/v235/nguyen24i.html | ICML 2024 | In this work, we address two main shortcomings of transformer architectures: input corruption and rank collapse in their output representation. We unveil self-attention as an autonomous state-space model that inherently promotes smoothness in its solutions, leading to lower-rank outputs and diminished representation capacity. Moreover, the steady-state solution of the model is sensitive to input perturbations. We incorporate a Proportional-Integral-Derivative (PID) closed-loop feedback control system with a reference point into the model to improve robustness and representation capacity. This integration aims to preserve high-frequency details while bolstering model stability, rendering it more noise-resilient. The resulting controlled state-space model is theoretically proven robust and adept at addressing the rank collapse. Motivated by this control framework, we derive a novel class of transformers, PID-controlled Transformer (PIDformer), aimed at improving robustness and mitigating the rank-collapse issue inherent in softmax transformers. We empirically evaluate the model for advantages and robustness against baseline transformers across various practical tasks, including object classification, image segmentation, and language modeling. |
https://proceedings.mlr.press/v235/nguyen24j.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24j/nguyen24j.pdf
|
https://openreview.net/forum?id=MIRQ3L8vtn
|
Differentially private exact recovery for stochastic block models
|
https://proceedings.mlr.press/v235/nguyen24j.html
|
Dung Nguyen, Anil Kumar Vullikanti
|
https://proceedings.mlr.press/v235/nguyen24j.html
|
ICML 2024
|
Stochastic block models (SBMs) are among the most commonly studied network models for community detection algorithms. In the standard form of an SBM, the $n$ vertices (or nodes) of a graph are generally divided into multiple pre-determined communities (or clusters). Connections between pairs of vertices are generated randomly and independently with pre-defined probabilities, which depend on the communities containing the two nodes. A fundamental problem in SBMs is the recovery of the community structure, and sharp information-theoretic bounds are known for recoverability for many versions of SBMs. Our focus here is the recoverability problem in SBMs when the network is private. Under the edge differential privacy model, we derive conditions for exact recoverability in three different versions of SBMs, namely Asymmetric SBM (when communities have non-uniform sizes), General Structure SBM (with outliers), and Censored SBM (with edge features). Our private algorithms have polynomial running time w.r.t. the input graph’s size, and match the recovery thresholds of the non-private setting when $\epsilon\rightarrow\infty$. In contrast, the previous best results for recoverability in SBMs only hold for the symmetric case (equal-size communities), and run in quasi-polynomial time, or in polynomial time with recovery thresholds that are tight only up to constants relative to the non-private setting.
|
https://proceedings.mlr.press/v235/nguyen24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24k/nguyen24k.pdf
|
https://openreview.net/forum?id=eW0pZmziBH
|
Novel Spectral Algorithms for the Partial Credit Model
|
https://proceedings.mlr.press/v235/nguyen24k.html
|
Duc Nguyen, Anderson Ye Zhang
|
https://proceedings.mlr.press/v235/nguyen24k.html
|
ICML 2024
|
The Partial Credit Model (PCM) of Andrich (1978) and Masters (1982) is a fundamental model within the psychometric literature with wide-ranging modern applications. It models the integer-valued response that a subject gives to an item where there is a natural notion of monotonic progress between consecutive response values, such as partial scores on a test and customer ratings of a product. In this paper, we introduce a novel, time-efficient and accurate statistical spectral algorithm for inference under the PCM model. We complement our algorithmic contribution with in-depth non-asymptotic statistical analysis, the first of its kind in the literature. We show that the spectral algorithm enjoys the optimal error guarantee under three different metrics, all under reasonable sampling assumptions. We leverage the efficiency of the spectral algorithm to propose a novel EM-based algorithm for learning mixtures of PCMs. We perform comprehensive experiments on synthetic and real-life datasets covering education testing, recommendation systems, and financial investment applications. We show that the proposed spectral algorithm is competitive with previously introduced algorithms in terms of accuracy while being orders of magnitude faster.
|
https://proceedings.mlr.press/v235/nguyen24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen24l/nguyen24l.pdf
|
https://openreview.net/forum?id=XyxuhLtFA2
|
Sliced Wasserstein with Random-Path Projecting Directions
|
https://proceedings.mlr.press/v235/nguyen24l.html
|
Khai Nguyen, Shujian Zhang, Tam Le, Nhat Ho
|
https://proceedings.mlr.press/v235/nguyen24l.html
|
ICML 2024
|
Slicing distribution selection has been used as an effective technique to improve the performance of parameter estimators based on minimizing sliced Wasserstein distance in applications. Previous works either utilize expensive optimization to select the slicing distribution or use slicing distributions that require expensive sampling methods. In this work, we propose an optimization-free slicing distribution that provides a fast sampling for the Monte Carlo estimation of expectation. In particular, we introduce the random-path projecting direction (RPD) which is constructed by leveraging the normalized difference between two random vectors following the two input measures. From the RPD, we derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, i.e., the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW). We then discuss the topological, statistical, and computational properties of RPSW and IWRPSW. Finally, we showcase the favorable performance of RPSW and IWRPSW in gradient flow and the training of denoising diffusion generative models on images.
|
https://proceedings.mlr.press/v235/nguyen-tang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nguyen-tang24a/nguyen-tang24a.pdf
|
https://openreview.net/forum?id=dYDPcx78tm
|
On The Statistical Complexity of Offline Decision-Making
|
https://proceedings.mlr.press/v235/nguyen-tang24a.html
|
Thanh Nguyen-Tang, Raman Arora
|
https://proceedings.mlr.press/v235/nguyen-tang24a.html
|
ICML 2024
|
We study the statistical complexity of offline decision-making with function approximation, establishing (near) minimax-optimal rates for stochastic contextual bandits and Markov decision processes. The performance limits are captured by the pseudo-dimension of the (value) function class and a new characterization of the behavior policy that strictly subsumes all the previous notions of data coverage in the offline decision-making literature. In addition, we seek to understand the benefits of using offline data in online decision-making and show nearly minimax-optimal rates in a wide range of regimes.
|
https://proceedings.mlr.press/v235/ni24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ni24a/ni24a.pdf
|
https://openreview.net/forum?id=B1W712hMBi
|
NExT: Teaching Large Language Models to Reason about Code Execution
|
https://proceedings.mlr.press/v235/ni24a.html
|
Ansong Ni, Miltiadis Allamanis, Arman Cohan, Yinlin Deng, Kensen Shi, Charles Sutton, Pengcheng Yin
|
https://proceedings.mlr.press/v235/ni24a.html
|
ICML 2024
|
A fundamental skill among human developers is the ability to understand and reason about program execution. As an example, a programmer can mentally simulate code execution in natural language to debug and repair code (a.k.a. rubber duck debugging). However, large language models (LLMs) of code are typically trained on the surface textual form of programs and thus may lack a semantic understanding of how programs execute at run-time. To address this issue, we propose NExT, a method to teach LLMs to inspect the execution traces of programs (variable states of executed lines) and reason about their run-time behavior through chain-of-thought (CoT) rationales. Specifically, NExT uses self-training to bootstrap a synthetic training set of execution-aware rationales that lead to correct task solutions (e.g., fixed programs) without laborious manual annotation. Experiments on program repair tasks based on MBPP and HumanEval demonstrate that NExT improves the fix rate of a PaLM 2 model by 26.1% and 10.3% absolute, respectively, with significantly improved rationale quality as verified by automated metrics and human raters. Our model can also generalize to scenarios where program traces are absent at test-time.
|
https://proceedings.mlr.press/v235/ni24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ni24b/ni24b.pdf
|
https://openreview.net/forum?id=18f6iPn0zq
|
On the Nonlinearity of Layer Normalization
|
https://proceedings.mlr.press/v235/ni24b.html
|
Yunhao Ni, Yuxin Guo, Junlong Jia, Lei Huang
|
https://proceedings.mlr.press/v235/ni24b.html
|
ICML 2024
|
Layer normalization (LN) is a ubiquitous technique in deep learning, but our theoretical understanding of it remains elusive. This paper investigates a new theoretical direction for LN, concerning its nonlinearity and representation capacity. We investigate the representation capacity of a network with layerwise composition of linear and LN transformations, referred to as LN-Net. We theoretically show that, given $m$ samples with any label assignment, an LN-Net with only 3 neurons in each layer and $O(m)$ LN layers can correctly classify them. We further establish a lower bound on the VC dimension of an LN-Net. The nonlinearity of LN can be amplified by group partition, which we demonstrate theoretically under mild assumptions and support empirically with our experiments. Based on our analyses, we consider designing neural architectures that exploit and amplify the nonlinearity of LN, and their effectiveness is supported by our experiments.
|
https://proceedings.mlr.press/v235/ni24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ni24c/ni24c.pdf
|
https://openreview.net/forum?id=XGq30hC5MW
|
Risk-Sensitive Reward-Free Reinforcement Learning with CVaR
|
https://proceedings.mlr.press/v235/ni24c.html
|
Xinyi Ni, Guanlin Liu, Lifeng Lai
|
https://proceedings.mlr.press/v235/ni24c.html
|
ICML 2024
|
Exploration is a crucial phase in reinforcement learning (RL). The reward-free RL paradigm, as explored by Jin et al. (2020), offers an efficient method to design exploration algorithms for risk-neutral RL across various reward functions with a single exploration phase. However, as RL applications in safety-critical settings grow, there is an increasing need for risk-sensitive RL, which considers potential risks in decision-making. Yet, efficient exploration strategies for risk-sensitive RL remain underdeveloped. This study presents a novel risk-sensitive reward-free framework based on Conditional Value-at-Risk (CVaR), designed to effectively address CVaR RL for any given reward function through a single exploration phase. We introduce the CVaR-RF-UCRL algorithm, which is shown to be $(\epsilon,p)$-PAC, with a sample complexity upper bounded by $\tilde{\mathcal{O}}\left(\frac{S^2AH^4}{\epsilon^2\tau^2}\right)$ with $\tau$ being the risk tolerance parameter. We also prove an $\Omega\left(\frac{S^2AH^2}{\epsilon^2\tau}\right)$ lower bound for any CVaR-RF exploration algorithm, demonstrating the near-optimality of our algorithm. Additionally, we propose the planning algorithms: CVaR-VI and its more practical variant, CVaR-VI-DISC. The effectiveness and practicality of our CVaR reward-free approach are further validated through numerical experiments.
|
https://proceedings.mlr.press/v235/nichani24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nichani24a/nichani24a.pdf
|
https://openreview.net/forum?id=jNM4imlHZv
|
How Transformers Learn Causal Structure with Gradient Descent
|
https://proceedings.mlr.press/v235/nichani24a.html
|
Eshaan Nichani, Alex Damian, Jason D. Lee
|
https://proceedings.mlr.press/v235/nichani24a.html
|
ICML 2024
|
The incredible success of transformers on sequence modeling tasks can be largely attributed to the self-attention mechanism, which allows information to be transferred between different parts of a sequence. Self-attention allows transformers to encode causal structure which makes them particularly suitable for sequence modeling. However, the process by which transformers learn such causal structure via gradient-based training algorithms remains poorly understood. To better understand this process, we introduce an in-context learning task that requires learning latent causal structure. We prove that gradient descent on a simplified two-layer transformer learns to solve this task by encoding the latent causal graph in the first attention layer. The key insight of our proof is that the gradient of the attention matrix encodes the mutual information between tokens. As a consequence of the data processing inequality, the largest entries of this gradient correspond to edges in the latent causal graph. As a special case, when the sequences are generated from in-context Markov chains, we prove that transformers learn an induction head (Olsson et al., 2022). We confirm our theoretical findings by showing that transformers trained on our in-context learning task are able to recover a wide variety of causal structures.
|
https://proceedings.mlr.press/v235/nie24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nie24a/nie24a.pdf
|
https://openreview.net/forum?id=Wz4lgc8dsN
|
Online Cascade Learning for Efficient Inference over Streams
|
https://proceedings.mlr.press/v235/nie24a.html
|
Lunyiu Nie, Zhimin Ding, Erdong Hu, Christopher Jermaine, Swarat Chaudhuri
|
https://proceedings.mlr.press/v235/nie24a.html
|
ICML 2024
|
Large Language Models (LLMs) have a natural role in answering complex queries about data streams, but the high computational cost of LLM inference makes them infeasible in many such tasks. We propose online cascade learning, the first approach to address this challenge. The objective here is to learn a “cascade” of models, starting with lower-capacity models (such as logistic regression) and ending with a powerful LLM, along with a deferral policy that determines the model to be used on a given input. We formulate the task of learning cascades online as an imitation-learning problem, where smaller models are updated over time imitating the collected LLM demonstrations, and give a no-regret algorithm for the problem. Experimental results across four benchmarks show that our method parallels LLMs in accuracy while cutting down inference costs by as much as 90% with strong robustness against input distribution shifts, underscoring its efficacy and adaptability in stream processing. Our source code is available at https://github.com/flitternie/online_cascade_learning.
|
https://proceedings.mlr.press/v235/nie24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nie24b/nie24b.pdf
|
https://openreview.net/forum?id=dMOhgHNYAf
|
Compositional Text-to-Image Generation with Dense Blob Representations
|
https://proceedings.mlr.press/v235/nie24b.html
|
Weili Nie, Sifei Liu, Morteza Mardani, Chao Liu, Benjamin Eckart, Arash Vahdat
|
https://proceedings.mlr.press/v235/nie24b.html
|
ICML 2024
|
Existing text-to-image models struggle to follow complex text prompts, raising the need for extra grounding inputs for better controllability. In this work, we propose to decompose a scene into visual primitives - denoted as dense blob representations - that contain fine-grained details of the scene while being modular, human-interpretable, and easy-to-construct. Based on blob representations, we develop a blob-grounded text-to-image diffusion model, termed BlobGEN, for compositional generation. Particularly, we introduce a new masked cross-attention module to disentangle the fusion between blob representations and visual features. To leverage the compositionality of large language models (LLMs), we introduce a new in-context learning approach to generate blob representations from text prompts. Our extensive experiments show that BlobGEN achieves superior zero-shot generation quality and better layout-guided controllability on MS-COCO. When augmented by LLMs, our method exhibits superior numerical and spatial correctness on compositional image generation benchmarks.
|
https://proceedings.mlr.press/v235/niedoba24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/niedoba24a/niedoba24a.pdf
|
https://openreview.net/forum?id=hqNz4LDuhn
|
Nearest Neighbour Score Estimators for Diffusion Generative Models
|
https://proceedings.mlr.press/v235/niedoba24a.html
|
Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Scibior, Berend Zwartsenberg, Frank Wood
|
https://proceedings.mlr.press/v235/niedoba24a.html
|
ICML 2024
|
Score function estimation is the cornerstone of both training and sampling from diffusion generative models. Despite this fact, the most commonly used estimators are either biased neural network approximations or high variance Monte Carlo estimators based on the conditional score. We introduce a novel nearest neighbour score function estimator which utilizes multiple samples from the training set to dramatically decrease estimator variance. We leverage our low variance estimator in two compelling applications. Training consistency models with our estimator, we report a significant increase in both convergence speed and sample quality. In diffusion models, we show that our estimator can replace a learned network for probability-flow ODE integration, opening promising new avenues of future research. Code will be released upon paper acceptance.
|
https://proceedings.mlr.press/v235/nika24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nika24a/nika24a.pdf
|
https://openreview.net/forum?id=JQlEUfzhuA
|
Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences
|
https://proceedings.mlr.press/v235/nika24a.html
|
Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban, Georgios Tzannetos, Goran Radanovic, Adish Singla
|
https://proceedings.mlr.press/v235/nika24a.html
|
ICML 2024
|
In this paper, we take a step towards a deeper understanding of learning from human preferences by systematically comparing the paradigm of reinforcement learning from human feedback (RLHF) with the recently proposed paradigm of direct preference optimization (DPO). We focus our attention on the class of loglinear policy parametrization and linear reward functions. In order to compare the two paradigms, we first derive minimax statistical bounds on the suboptimality gap induced by both RLHF and DPO, assuming access to an oracle that exactly solves the optimization problems. We provide a detailed discussion on the relative comparison between the two paradigms, simultaneously taking into account the sample size, policy and reward class dimensions, and the regularization temperature. Moreover, we extend our analysis to the approximate optimization setting and derive exponentially decaying convergence rates for both RLHF and DPO. Next, we analyze the setting where the ground-truth reward is not realizable and find that, while RLHF incurs a constant additional error, DPO retains its asymptotically decaying gap by just tuning the temperature accordingly. Finally, we extend our comparison to the Markov decision process setting, where we generalize our results with exact optimization. To the best of our knowledge, we are the first to provide such a comparative analysis for RLHF and DPO.
|
https://proceedings.mlr.press/v235/nikdan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nikdan24a/nikdan24a.pdf
|
https://openreview.net/forum?id=FYvpxyS43U
|
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation
|
https://proceedings.mlr.press/v235/nikdan24a.html
|
Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević, Dan Alistarh
|
https://proceedings.mlr.press/v235/nikdan24a.html
|
ICML 2024
|
We investigate parameter-efficient fine-tuning (PEFT) methods that can provide good accuracy under limited computational and memory budgets in the context of large language models (LLMs). We present a new PEFT method called Robust Adaptation (RoSA) inspired by robust principal component analysis that jointly trains $\textit{low-rank}$ and highly sparse components on top of a set of fixed pretrained weights to efficiently approximate the performance of a full fine-tuning (FFT) solution. Across a series of challenging generative tasks such as grade-school math and SQL query generation, which require fine-tuning for good performance, we show that RoSA outperforms LoRA, pure sparse fine-tuning, and alternative hybrid methods at the same parameter budget, and can even recover the performance of FFT on some tasks. We provide system support for RoSA to complement the training algorithm, specifically in the form of sparse GPU kernels which enable memory- and computationally-efficient training, and show that it is also compatible with low-precision base weights, resulting in the first joint representation combining quantization, low-rank and sparse approximations. Our code is available at https://github.com/IST-DASLab/RoSA.
|
https://proceedings.mlr.press/v235/nilsson24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nilsson24a/nilsson24a.pdf
|
https://openreview.net/forum?id=321GwKMtxO
|
REMEDI: Corrective Transformations for Improved Neural Entropy Estimation
|
https://proceedings.mlr.press/v235/nilsson24a.html
|
Viktor Nilsson, Anirban Samaddar, Sandeep Madireddy, Pierre Nyquist
|
https://proceedings.mlr.press/v235/nilsson24a.html
|
ICML 2024
|
Information theoretic quantities play a central role in machine learning. The recent surge in the complexity of data and models has increased the demand for accurate estimation of these quantities. However, as the dimension grows the estimation presents significant challenges, with existing methods struggling already in relatively low dimensions. To address this issue, in this work, we introduce REMEDI for efficient and accurate estimation of differential entropy, a fundamental information theoretic quantity. The approach combines the minimization of the cross-entropy for simple, adaptive base models and the estimation of their deviation, in terms of the relative entropy, from the data density. Our approach demonstrates improvement across a broad spectrum of estimation tasks, encompassing entropy estimation on both synthetic and natural data. Further, we extend important theoretical consistency results to a more generalized setting required by our approach. We illustrate how the framework can be naturally extended to information theoretic supervised learning models, with a specific focus on the Information Bottleneck approach. It is demonstrated that the method delivers better accuracy compared to the existing methods in Information Bottleneck. In addition, we explore a natural connection between REMEDI and generative modeling using rejection sampling and Langevin dynamics.
|
https://proceedings.mlr.press/v235/nilsson24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nilsson24b/nilsson24b.pdf
|
https://openreview.net/forum?id=GqsRKEhelH
|
Indirectly Parameterized Concrete Autoencoders
|
https://proceedings.mlr.press/v235/nilsson24b.html
|
Alfred Nilsson, Klas Wijk, Sai Bharath Chandra Gutha, Erik Englesson, Alexandra Hotti, Carlo Saccardi, Oskar Kviman, Jens Lagergren, Ricardo Vinuesa Motilva, Hossein Azizpour
|
https://proceedings.mlr.press/v235/nilsson24b.html
|
ICML 2024
|
Feature selection is a crucial task in settings where data is high-dimensional or acquiring the full set of features is costly. Recent developments in neural network-based embedded feature selection show promising results across a wide range of applications. Concrete Autoencoders (CAEs), considered state-of-the-art in embedded feature selection, may struggle to achieve stable joint optimization, hurting their training time and generalization. In this work, we identify that this instability is correlated with the CAE learning duplicate selections. To remedy this, we propose a simple and effective improvement: Indirectly Parameterized CAEs (IP-CAEs). IP-CAEs learn an embedding and a mapping from it to the Gumbel-Softmax distributions’ parameters. Despite being simple to implement, IP-CAE exhibits significant and consistent improvements over CAE in both generalization and training time across several datasets for reconstruction and classification. Unlike CAE, IP-CAE effectively leverages non-linear relationships and does not require retraining the jointly optimized decoder. Furthermore, our approach is, in principle, generalizable to Gumbel-Softmax distributions beyond feature selection.
|
https://proceedings.mlr.press/v235/nishino24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nishino24a/nishino24a.pdf
|
https://openreview.net/forum?id=cbZTnjqIib
|
Understanding the Impact of Introducing Constraints at Inference Time on Generalization Error
|
https://proceedings.mlr.press/v235/nishino24a.html
|
Masaaki Nishino, Kengo Nakamura, Norihito Yasuda
|
https://proceedings.mlr.press/v235/nishino24a.html
|
ICML 2024
|
Since machine learning technologies are being used in various practical situations, models with merely low prediction errors might not be satisfactory; prediction errors occurring with a low probability might yield dangerous results in some applications. Therefore, there are attempts to achieve an ML model whose input-output pairs are guaranteed to satisfy given constraints. Among such attempts, many previous works chose the approach of modifying the outputs of an ML model at inference time to satisfy the constraints. Such a strategy is handy because we can control its output without expensive training or fine-tuning. However, it is unclear whether using constraints only at inference time degrades a model’s predictive performance. This paper analyses how the generalization error bounds change when we impose constraints only at inference time. Our main finding is that a class of loss functions preserves the relative generalization error, i.e., the difference in generalization error compared with the best model will not increase when constraints are imposed at inference time in multi-class classification. Some popular loss functions preserve the relative error, including the softmax cross-entropy loss. On the other hand, we also show that some loss functions do not preserve relative error when we use constraints. Our results suggest the importance of choosing a suitable loss function when constraints are used only at inference time.
|
https://proceedings.mlr.press/v235/nitsure24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nitsure24a/nitsure24a.pdf
|
https://openreview.net/forum?id=Mv8y13wfDm
|
Risk Aware Benchmarking of Large Language Models
|
https://proceedings.mlr.press/v235/nitsure24a.html
|
Apoorva Nitsure, Youssef Mroueh, Mattia Rigotti, Kristjan Greenewald, Brian Belgodere, Mikhail Yurochkin, Jiri Navratil, Igor Melnyk, Jarret Ross
|
https://proceedings.mlr.press/v235/nitsure24a.html
|
ICML 2024
|
We propose a distributional framework for benchmarking socio-technical risks of foundation models with quantified statistical significance. Our approach hinges on a new statistical relative testing based on first and second order stochastic dominance of real random variables. We show that the second order statistics in this test are linked to mean-risk models commonly used in econometrics and mathematical finance to balance risk and utility when choosing between alternatives. Using this framework, we formally develop a risk-aware approach for foundation model selection given guardrails quantified by specified metrics. Inspired by portfolio optimization and selection theory in mathematical finance, we define a metrics portfolio for each model as a means to aggregate a collection of metrics, and perform model selection based on the stochastic dominance of these portfolios. The statistical significance of our tests is backed theoretically by an asymptotic analysis via central limit theorems instantiated in practice via a bootstrap variance estimate. We use our framework to compare various large language models regarding risks related to drifting from instructions and outputting toxic content.
|
https://proceedings.mlr.press/v235/niu24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/niu24a/niu24a.pdf
|
https://openreview.net/forum?id=qz1Vx1v9iK
|
Test-Time Model Adaptation with Only Forward Passes
|
https://proceedings.mlr.press/v235/niu24a.html
|
Shuaicheng Niu, Chunyan Miao, Guohao Chen, Pengcheng Wu, Peilin Zhao
|
https://proceedings.mlr.press/v235/niu24a.html
|
ICML 2024
|
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts. However, in real-world scenarios, models are usually deployed on resource-limited devices, e.g., FPGAs, and are often quantized and hard-coded with non-modifiable parameters for acceleration. In light of this, existing methods are often infeasible since they heavily depend on computation-intensive backpropagation for model updating, which may not be supported. To address this, we propose a test-time Forward-Optimization Adaptation (FOA) method. In FOA, we seek to solely learn a newly added prompt (as the model’s input) via a derivative-free covariance matrix adaptation evolution strategy. To make this strategy work stably under our online unsupervised setting, we devise a novel fitness function by measuring test-training statistic discrepancy and model prediction entropy. Moreover, we design an activation shifting scheme that directly tunes the model activations for shifted test samples, making them align with the source training domain, thereby further enhancing adaptation performance. Without using any backpropagation or altering model weights, FOA, running on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT, while achieving an up to 24-fold memory reduction on ImageNet-C. The source code is available at: https://github.com/mr-eggplant/FOA.
|
https://proceedings.mlr.press/v235/niu24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/niu24b/niu24b.pdf
|
https://openreview.net/forum?id=VSwrXRqD9o
|
Latent Optimal Paths by Gumbel Propagation for Variational Bayesian Dynamic Programming
|
https://proceedings.mlr.press/v235/niu24b.html
|
Xinlei Niu, Christian Walder, Jing Zhang, Charles Patrick Martin
|
https://proceedings.mlr.press/v235/niu24b.html
|
ICML 2024
|
We propose the stochastic optimal path which solves the classical optimal path problem by a probability-softening solution. This unified approach transforms a wide range of dynamic programming (DP) problems into directed acyclic graphs in which all paths follow a Gibbs distribution. We show the equivalence of the Gibbs distribution to a message-passing algorithm by the properties of the Gumbel distribution and give all the ingredients required for variational Bayesian inference of a latent path, namely Bayesian dynamic programming (BDP). We demonstrate the usage of BDP in the latent space of variational autoencoders (VAEs) and propose the BDP-VAE which captures structured sparse optimal paths as latent variables. This enables end-to-end training for generative tasks in which models rely on unobserved structural information. Finally, we validate the behavior of our approach and showcase its applicability in two real-world applications: text-to-speech and singing voice synthesis. Our implementation code is available at https://github.com/XinleiNIU/LatentOptimalPathsBayesianDP.
|
https://proceedings.mlr.press/v235/niu24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/niu24c/niu24c.pdf
|
https://openreview.net/forum?id=G1igwiBBUj
|
GFlowNet Training by Policy Gradients
|
https://proceedings.mlr.press/v235/niu24c.html
|
Puhua Niu, Shili Wu, Mingzhou Fan, Xiaoning Qian
|
https://proceedings.mlr.press/v235/niu24c.html
|
ICML 2024
|
Generative Flow Networks (GFlowNets) have been shown to be effective at generating combinatorial objects with desired properties. Here, we propose a new GFlowNet training framework, with policy-dependent rewards, that bridges keeping the flow balance of GFlowNets and optimizing the expected accumulated reward in traditional Reinforcement Learning (RL). This enables the derivation of new policy-based GFlowNet training methods, in contrast to existing ones resembling value-based RL. It is known that the design of backward policies in GFlowNet training affects efficiency. We further develop a coupled training strategy that jointly solves GFlowNet forward policy training and backward policy design. Performance analysis is provided with a theoretical guarantee of our policy-based GFlowNet training. Experiments on both simulated and real-world datasets verify that our policy-based strategies provide advanced RL perspectives for robust gradient estimation to improve GFlowNet performance. Our code is available at: github.com/niupuhua1234/GFN-PG.
|
https://proceedings.mlr.press/v235/niu24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/niu24d/niu24d.pdf
|
https://openreview.net/forum?id=QRDfBIhrJq
|
Multi-Fidelity Residual Neural Processes for Scalable Surrogate Modeling
|
https://proceedings.mlr.press/v235/niu24d.html
|
Ruijia Niu, Dongxia Wu, Kai Kim, Yian Ma, Duncan Watson-Parris, Rose Yu
|
https://proceedings.mlr.press/v235/niu24d.html
|
ICML 2024
|
Multi-fidelity surrogate modeling aims to learn an accurate surrogate at the highest fidelity level by combining data from multiple sources. Traditional methods relying on Gaussian processes struggle to scale to high-dimensional data. Deep learning approaches utilize neural network based encoders and decoders to improve scalability. These approaches share encoded representations across fidelities without including corresponding decoder parameters. This hinders inference performance, especially in out-of-distribution scenarios when the highest fidelity data has limited domain coverage. To address these limitations, we propose Multi-fidelity Residual Neural Processes (MFRNP), a novel multi-fidelity surrogate modeling framework. MFRNP explicitly models the residual between the aggregated output from lower fidelities and ground truth at the highest fidelity. The aggregation introduces decoders into the information sharing step and optimizes lower fidelity decoders to accurately capture both in-fidelity and cross-fidelity information. We show that MFRNP significantly outperforms state-of-the-art in learning partial differential equations and a real-world climate modeling task. Our code is published at: https://github.com/Rose-STL-Lab/MFRNP
|
https://proceedings.mlr.press/v235/nori24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nori24a/nori24a.pdf
|
https://openreview.net/forum?id=jxvqvZLBuU
|
RNAFlow: RNA Structure & Sequence Design via Inverse Folding-Based Flow Matching
|
https://proceedings.mlr.press/v235/nori24a.html
|
Divya Nori, Wengong Jin
|
https://proceedings.mlr.press/v235/nori24a.html
|
ICML 2024
|
The growing significance of RNA engineering in diverse biological applications has spurred interest in developing AI methods for structure-based RNA design. While diffusion models have excelled in protein design, adapting them for RNA presents new challenges due to RNA’s conformational flexibility and the computational cost of fine-tuning large structure prediction models. To this end, we propose RNAFlow, a flow matching model for protein-conditioned RNA sequence-structure design. Its denoising network integrates an RNA inverse folding model and a pre-trained RosettaFold2NA network for generation of RNA sequences and structures. The integration of inverse folding in the structure denoising process allows us to simplify training by fixing the structure prediction network. We further enhance the inverse folding model by conditioning it on inferred conformational ensembles to model dynamic RNA conformations. Evaluation on protein-conditioned RNA structure and sequence generation tasks demonstrates RNAFlow’s advantage over existing RNA design methods.
|
https://proceedings.mlr.press/v235/nottingham24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nottingham24a/nottingham24a.pdf
|
https://openreview.net/forum?id=9laB7ytoMp
|
Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills
|
https://proceedings.mlr.press/v235/nottingham24a.html
|
Kolby Nottingham, Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Sameer Singh, Peter Clark, Roy Fox
|
https://proceedings.mlr.press/v235/nottingham24a.html
|
ICML 2024
|
Large language models (LLMs) have recently been used for sequential decision making in interactive environments. However, leveraging environment reward signals for continual LLM actor improvement is not straightforward. We propose Skill Set Optimization (SSO) for improving LLM actor performance through constructing and refining sets of transferable skills. SSO constructs skills by extracting common subtrajectories with high rewards and generating subgoals and instructions to represent each skill. These skills are provided to the LLM actor in-context to reinforce behaviors with high rewards. Then, SSO further refines the skill set by pruning skills that do not continue to result in high rewards. We evaluate our method in the classic videogame NetHack and the text environment ScienceWorld to demonstrate SSO’s ability to optimize a set of skills and perform in-context policy improvement. SSO outperforms baselines by 40% in our custom NetHack task and outperforms the previous state-of-the-art in ScienceWorld by 35%.
|
https://proceedings.mlr.press/v235/novack24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/novack24a/novack24a.pdf
|
https://openreview.net/forum?id=z5Ux2u6t7U
|
DITTO: Diffusion Inference-Time T-Optimization for Music Generation
|
https://proceedings.mlr.press/v235/novack24a.html
|
Zachary Novack, Julian Mcauley, Taylor Berg-Kirkpatrick, Nicholas J. Bryan
|
https://proceedings.mlr.press/v235/novack24a.html
|
ICML 2024
|
We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference-time via optimizing initial noise latents. Our method can be used to optimize through any differentiable feature matching loss to achieve a target (stylized) output and leverages gradient checkpointing for memory efficiency. We demonstrate a surprisingly wide-range of applications for music generation including inpainting, outpainting, and looping as well as intensity, melody, and musical structure control – all without ever fine-tuning the underlying model. When we compare our approach against related training, guidance, and optimization-based methods, we find DITTO achieves state-of-the-art performance on nearly all tasks, including outperforming comparable approaches on controllability, audio quality, and computational efficiency, thus opening the door for high-quality, flexible, training-free control of diffusion models. Sound examples can be found at https://ditto-music.github.io/web/.
|
https://proceedings.mlr.press/v235/novello24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/novello24a/novello24a.pdf
|
https://openreview.net/forum?id=RFhkcqRmTD
|
$f$-Divergence Based Classification: Beyond the Use of Cross-Entropy
|
https://proceedings.mlr.press/v235/novello24a.html
|
Nicola Novello, Andrea M Tonello
|
https://proceedings.mlr.press/v235/novello24a.html
|
ICML 2024
|
In deep learning, classification tasks are formalized as optimization problems often solved via the minimization of the cross-entropy. However, recent advancements in the design of objective functions allow the usage of the $f$-divergence to generalize the formulation of the optimization problem for classification. We adopt a Bayesian perspective and formulate the classification task as a maximum a posteriori probability problem. We propose a class of objective functions based on the variational representation of the $f$-divergence. Furthermore, driven by the challenge of improving the state-of-the-art approach, we propose a bottom-up method that leads us to the formulation of an objective function corresponding to a novel $f$-divergence referred to as shifted log (SL). We theoretically analyze the objective functions proposed and numerically test them in three application scenarios: toy examples, image datasets, and signal detection/decoding problems. The analyzed scenarios demonstrate the effectiveness of the proposed approach and that the SL divergence achieves the highest classification accuracy in almost all the considered cases.
|
https://proceedings.mlr.press/v235/nowak24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nowak24a/nowak24a.pdf
|
https://openreview.net/forum?id=svm53KQAtN
|
Sparser, Better, Deeper, Stronger: Improving Static Sparse Training with Exact Orthogonal Initialization
|
https://proceedings.mlr.press/v235/nowak24a.html
|
Aleksandra Nowak, Łukasz Gniecki, Filip Szatkowski, Jacek Tabor
|
https://proceedings.mlr.press/v235/nowak24a.html
|
ICML 2024
|
Static sparse training aims to train sparse models from scratch, achieving remarkable results in recent years. A key design choice is given by the sparse initialization, which determines the trainable sub-network through a binary mask. Existing methods mainly select such a mask based on a predefined dense initialization. Such an approach may not efficiently leverage the mask’s potential impact on the optimization. An alternative direction, inspired by research into dynamical isometry, is to introduce orthogonality in the sparse subnetwork, which helps in stabilizing the gradient signal. In this work, we propose Exact Orthogonal Initialization (EOI), a novel sparse orthogonal initialization scheme based on composing random Givens rotations. Contrary to other existing approaches, our method provides exact (not approximated) orthogonality and enables the creation of layers with arbitrary densities. We demonstrate the superior effectiveness and efficiency of EOI through experiments, consistently outperforming common sparse initialization techniques. Our method enables training highly sparse 1000-layer MLP and CNN networks without residual connections or normalization techniques, emphasizing the crucial role of weight initialization in static sparse training alongside sparse mask selection.
|
https://proceedings.mlr.press/v235/obando-ceron24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/obando-ceron24a/obando-ceron24a.pdf
|
https://openreview.net/forum?id=seo9V9QRZp
|
In value-based deep reinforcement learning, a pruned network is a good network
|
https://proceedings.mlr.press/v235/obando-ceron24a.html
|
Johan Samir Obando Ceron, Aaron Courville, Pablo Samuel Castro
|
https://proceedings.mlr.press/v235/obando-ceron24a.html
|
ICML 2024
|
Recent work has shown that deep reinforcement learning agents have difficulty in effectively using their network parameters. We leverage prior insights into the advantages of sparse training techniques and demonstrate that gradual magnitude pruning enables value-based agents to maximize parameter effectiveness. This results in networks that yield dramatic performance improvements over traditional networks, using only a small fraction of the full network parameters. Our code is publicly available, see Appendix A for details.
|
https://proceedings.mlr.press/v235/obando-ceron24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/obando-ceron24b/obando-ceron24b.pdf
|
https://openreview.net/forum?id=X9VMhfFxwn
|
Mixtures of Experts Unlock Parameter Scaling for Deep RL
|
https://proceedings.mlr.press/v235/obando-ceron24b.html
|
Johan Samir Obando Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Nicolaus Foerster, Gintare Karolina Dziugaite, Doina Precup, Pablo Samuel Castro
|
https://proceedings.mlr.press/v235/obando-ceron24b.html
|
ICML 2024
|
The recent rapid progress in (self) supervised learning models is in large part predicted by empirical scaling laws: a model’s performance scales proportionally to its size. Analogous scaling laws remain elusive for reinforcement learning domains, however, where increasing the parameter count of a model often hurts its final performance. In this paper, we demonstrate that incorporating Mixture-of-Expert (MoE) modules, and in particular Soft MoEs (Puigcerver et al., 2023), into value-based networks results in more parameter-scalable models, evidenced by substantial performance increases across a variety of training regimes and model sizes. This work thus provides strong empirical evidence towards developing scaling laws for reinforcement learning.
|
https://proceedings.mlr.press/v235/oh24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oh24a/oh24a.pdf
|
https://openreview.net/forum?id=iC8l9DI1ZX
|
On the Effectiveness of Supervision in Asymmetric Non-Contrastive Learning
|
https://proceedings.mlr.press/v235/oh24a.html
|
Jeongheon Oh, Kibok Lee
|
https://proceedings.mlr.press/v235/oh24a.html
|
ICML 2024
|
Supervised contrastive representation learning has been shown to be effective in various transfer learning scenarios. However, while asymmetric non-contrastive learning (ANCL) often outperforms its contrastive learning counterpart in self-supervised representation learning, the extension of ANCL to supervised scenarios is less explored. To bridge the gap, we study ANCL for supervised representation learning, coined SupSiam and SupBYOL, leveraging labels in ANCL to achieve better representations. The proposed supervised ANCL framework improves representation learning while avoiding collapse. Our analysis reveals that providing supervision to ANCL reduces intra-class variance, and the contribution of supervision should be adjusted to achieve the best performance. Experiments demonstrate the superiority of supervised ANCL across various datasets and tasks. The code is available at: https://github.com/JH-Oh-23/Sup-ANCL.
|
https://proceedings.mlr.press/v235/oh24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oh24b/oh24b.pdf
|
https://openreview.net/forum?id=kfpe7Dg23G
|
Sign Gradient Descent-based Neuronal Dynamics: ANN-to-SNN Conversion Beyond ReLU Network
|
https://proceedings.mlr.press/v235/oh24b.html
|
Hyunseok Oh, Youngki Lee
|
https://proceedings.mlr.press/v235/oh24b.html
|
ICML 2024
|
Spiking neural networks (SNNs) are studied in multidisciplinary domains to (i) enable orders-of-magnitude more energy-efficient AI inference, and (ii) computationally simulate neuroscientific mechanisms. The lack of a discrete theory obstructs the practical application of SNNs by limiting their performance and nonlinearity support. We present a new optimization-theoretic perspective on the discrete dynamics of spiking neurons. We prove that a discrete dynamical system of simple integrate-and-fire models approximates the subgradient method over unconstrained optimization problems. We practically extend our theory to introduce a novel sign gradient descent (signGD)-based neuronal dynamics that can (i) approximate diverse nonlinearities beyond ReLU, and (ii) advance ANN-to-SNN conversion performance in low time-steps. Experiments on large-scale datasets show that our technique (i) achieves state-of-the-art performance in ANN-to-SNN conversion, and (ii) is the first to convert new DNN architectures, e.g., ConvNext, MLP-Mixer, and ResMLP. We publicly share our source code at www.github.com/snuhcs/snn_signgd.
|
https://proceedings.mlr.press/v235/ohayon24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ohayon24a/ohayon24a.pdf
|
https://openreview.net/forum?id=jQA5iutPzd
|
The Perception-Robustness Tradeoff in Deterministic Image Restoration
|
https://proceedings.mlr.press/v235/ohayon24a.html
|
Guy Ohayon, Tomer Michaeli, Michael Elad
|
https://proceedings.mlr.press/v235/ohayon24a.html
|
ICML 2024
|
We study the behavior of deterministic methods for solving inverse problems in imaging. These methods are commonly designed to achieve two goals: (1) attaining high perceptual quality, and (2) generating reconstructions that are consistent with the measurements. We provide a rigorous proof that the better a predictor satisfies these two requirements, the larger its Lipschitz constant must be, regardless of the nature of the degradation involved. In particular, to approach perfect perceptual quality and perfect consistency, the Lipschitz constant of the model must grow to infinity. This implies that such methods are necessarily more susceptible to adversarial attacks. We demonstrate our theory on single image super-resolution algorithms, addressing both noisy and noiseless settings. We also show how this undesired behavior can be leveraged to explore the posterior distribution, thereby allowing the deterministic model to imitate stochastic methods.
|
https://proceedings.mlr.press/v235/oikarinen24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oikarinen24a/oikarinen24a.pdf
|
https://openreview.net/forum?id=WIbntm28cM
|
Linear Explanations for Individual Neurons
|
https://proceedings.mlr.press/v235/oikarinen24a.html
|
Tuomas Oikarinen, Tsui-Wei Weng
|
https://proceedings.mlr.press/v235/oikarinen24a.html
|
ICML 2024
|
In recent years many methods have been developed to understand the internal workings of neural networks, often by describing the function of individual neurons in the model. However, these methods typically only focus on explaining the very highest activations of a neuron. In this paper we show this is not sufficient, and that the highest activation range is only responsible for a very small percentage of the neuron’s causal effect. In addition, inputs causing lower activations are often very different and cannot be reliably predicted by only looking at high activations. We propose that neurons should instead be understood as a linear combination of concepts, and develop an efficient method for producing these linear explanations. In addition, we show how to automatically evaluate description quality using simulation, i.e., predicting neuron activations on unseen inputs in the vision setting.
|
https://proceedings.mlr.press/v235/oikonomidis24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oikonomidis24a/oikonomidis24a.pdf
|
https://openreview.net/forum?id=SUxarNgrUT
|
Adaptive Proximal Gradient Methods Are Universal Without Approximation
|
https://proceedings.mlr.press/v235/oikonomidis24a.html
|
Konstantinos Oikonomidis, Emanuel Laude, Puya Latafat, Andreas Themelis, Panagiotis Patrinos
|
https://proceedings.mlr.press/v235/oikonomidis24a.html
|
ICML 2024
|
We show that adaptive proximal gradient methods for convex problems are not restricted to traditional Lipschitzian assumptions. Our analysis reveals that a class of linesearch-free methods is still convergent under mere local Hölder gradient continuity, covering in particular continuously differentiable semi-algebraic functions. To mitigate the lack of local Lipschitz continuity, popular approaches revolve around $\varepsilon$-oracles and/or linesearch procedures. In contrast, we exploit plain Hölder inequalities not entailing any approximation, all while retaining the linesearch-free nature of adaptive schemes. Furthermore, we prove full sequence convergence without prior knowledge of local Hölder constants nor of the order of Hölder continuity. Numerical experiments make comparisons with baseline methods on diverse tasks from machine learning covering both the locally and the globally Hölder setting.
|
https://proceedings.mlr.press/v235/oko24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oko24a/oko24a.pdf
|
https://openreview.net/forum?id=pOgMluzEIH
|
SILVER: Single-loop variance reduction and application to federated learning
|
https://proceedings.mlr.press/v235/oko24a.html
|
Kazusato Oko, Shunta Akiyama, Denny Wu, Tomoya Murata, Taiji Suzuki
|
https://proceedings.mlr.press/v235/oko24a.html
|
ICML 2024
|
Most variance reduction methods require computing the full gradient multiple times, which is time-consuming and hence a bottleneck when applied to distributed optimization. We present a single-loop variance-reduced gradient estimator named SILVER (SIngle-Loop VariancE-Reduction) for the finite-sum non-convex optimization, which does not require multiple full gradients but nevertheless achieves the optimal gradient complexity. Notably, unlike existing methods, SILVER provably reaches second-order optimality, with exponential convergence in the Polyak-Łojasiewicz (PL) region, and achieves further speedup depending on the data heterogeneity. Owing to these advantages, SILVER serves as a new base method to design communication-efficient federated learning algorithms: we combine SILVER with local updates which gives the best communication rounds and number of communicated gradients across all range of Hessian heterogeneity, and, at the same time, guarantees second-order optimality and exponential convergence in the PL region.
|
https://proceedings.mlr.press/v235/oosterhuis24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oosterhuis24a/oosterhuis24a.pdf
|
https://openreview.net/forum?id=Msjovr9hUe
|
Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions
|
https://proceedings.mlr.press/v235/oosterhuis24a.html
|
Harrie Oosterhuis, Lijun Lyu, Avishek Anand
|
https://proceedings.mlr.press/v235/oosterhuis24a.html
|
ICML 2024
|
Local feature selection in machine learning provides instance-specific explanations by focusing on the most relevant features for each prediction, enhancing the interpretability of complex models. However, such methods tend to produce misleading explanations by encoding additional information in their selections. In this work, we characterize the problem of misleading selections by formalizing the concepts of label and feature leakage. We rigorously derive the necessary and sufficient conditions under which we can guarantee no leakage, and show that existing methods do not meet these conditions. Furthermore, we propose the first local feature selection method that is proven to have no leakage, called SUWR. Our experimental results indicate that SUWR is less prone to overfitting and combines state-of-the-art predictive performance with high feature-selection sparsity. Our generic and easily extendable formal approach provides a strong theoretical basis for future work on interpretability with reliable explanations.
|
https://proceedings.mlr.press/v235/opedal24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/opedal24a/opedal24a.pdf
|
https://openreview.net/forum?id=k1JXxbpIY6
|
Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners?
|
https://proceedings.mlr.press/v235/opedal24a.html
|
Andreas Opedal, Alessandro Stolfo, Haruki Shirakami, Ying Jiao, Ryan Cotterell, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan
|
https://proceedings.mlr.press/v235/opedal24a.html
|
ICML 2024
|
There is increasing interest in employing large language models (LLMs) as cognitive models. For such purposes, it is central to understand which properties of human cognition are well-modeled by LLMs, and which are not. In this work, we study the biases of LLMs in relation to those known in children when solving arithmetic word problems. Surveying the learning science literature, we posit that the problem-solving process can be split into three distinct steps: text comprehension, solution planning and solution execution. We construct tests for each one in order to understand whether current LLMs display the same cognitive biases as children in these steps. We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features. We find evidence that LLMs, with and without instruction-tuning, exhibit human-like biases in both the text-comprehension and the solution-planning steps of the solving process, but not in the final step, in which the arithmetic expressions are executed to obtain the answer.
|
https://proceedings.mlr.press/v235/orlova24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/orlova24a/orlova24a.pdf
|
https://openreview.net/forum?id=64MQCia06B
|
Deep Stochastic Mechanics
|
https://proceedings.mlr.press/v235/orlova24a.html
|
Elena Orlova, Aleksei Ustimenko, Ruoxi Jiang, Peter Y. Lu, Rebecca Willett
|
https://proceedings.mlr.press/v235/orlova24a.html
|
ICML 2024
|
This paper introduces a novel deep-learning-based approach for numerical simulation of a time-evolving Schrödinger equation inspired by stochastic mechanics and generative diffusion models. Unlike existing approaches, which exhibit computational complexity that scales exponentially in the problem dimension, our method allows us to adapt to the latent low-dimensional structure of the wave function by sampling from the Markovian diffusion. Depending on the latent dimension, our method may have far lower computational complexity in higher dimensions. Moreover, we propose novel equations for stochastic quantum mechanics, resulting in quadratic computational complexity with respect to the number of dimensions. Numerical simulations verify our theoretical findings and show a significant advantage of our method compared to other deep-learning-based approaches used for quantum mechanics.
|
https://proceedings.mlr.press/v235/ortega24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ortega24a/ortega24a.pdf
|
https://openreview.net/forum?id=1n3aC5rvdE
|
Variational Linearized Laplace Approximation for Bayesian Deep Learning
|
https://proceedings.mlr.press/v235/ortega24a.html
|
Luis A. Ortega, Simon Rodriguez Santana, Daniel Hernández-Lobato
|
https://proceedings.mlr.press/v235/ortega24a.html
|
ICML 2024
|
The Linearized Laplace Approximation (LLA) has been recently used to perform uncertainty estimation on the predictions of pre-trained deep neural networks (DNNs). However, its widespread application is hindered by significant computational costs, particularly in scenarios with a large number of training points or DNN parameters. Consequently, additional approximations of LLA, such as Kronecker-factored or diagonal approximate GGN matrices, are utilized, potentially compromising the model’s performance. To address these challenges, we propose a new method for approximating LLA using a variational sparse Gaussian Process (GP). Our method is based on the dual RKHS formulation of GPs and retains as the predictive mean the output of the original DNN. Furthermore, it allows for efficient stochastic optimization, which results in sub-linear training time in the size of the training dataset. Specifically, its training cost is independent of the number of training points. We compare our proposed method against accelerated LLA (ELLA), which relies on the Nyström approximation, as well as other LLA variants employing the sample-then-optimize principle. Experimental results, both on regression and classification datasets, show that our method outperforms these already existing efficient variants of LLA, both in terms of the quality of the predictive distribution and in terms of total computational time.
|
https://proceedings.mlr.press/v235/orvieto24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/orvieto24a/orvieto24a.pdf
|
https://openreview.net/forum?id=47ahBl70xb
|
Universality of Linear Recurrences Followed by Non-linear Projections: Finite-Width Guarantees and Benefits of Complex Eigenvalues
|
https://proceedings.mlr.press/v235/orvieto24a.html
|
Antonio Orvieto, Soham De, Caglar Gulcehre, Razvan Pascanu, Samuel L Smith
|
https://proceedings.mlr.press/v235/orvieto24a.html
|
ICML 2024
|
Deep neural networks based on linear RNNs interleaved with position-wise MLPs are gaining traction as competitive approaches for sequence modeling. Examples of such architectures include state-space models (SSMs) like S4, LRU, and Mamba: recently proposed models that achieve promising performance on text, genetics, and other data that require long-range reasoning. Despite experimental evidence highlighting these architectures’ effectiveness and computational efficiency, their expressive power remains relatively unexplored, especially in connection to specific choices crucial in practice - e.g., carefully designed initialization distribution and potential use of complex numbers. In this paper, we show that combining MLPs with either real or complex linear diagonal recurrences leads to arbitrarily precise approximation of regular causal sequence-to-sequence maps. At the heart of our proof, we rely on a separation of concerns: the linear RNN provides a lossless encoding of the input sequence, and the MLP performs non-linear processing on this encoding. While we show that real diagonal linear recurrences are enough to achieve universality in this architecture, we prove that employing complex eigenvalues near the unit disk - i.e., empirically the most successful strategy in S4 - greatly helps the RNN in storing information. We connect this finding with the vanishing gradient issue and provide experiments supporting our claims.
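The construction above is easy to picture with a small numerical sketch. The code below is a minimal illustration rather than the paper's implementation: it runs a diagonal linear recurrence with complex eigenvalues sampled near the unit disk and applies a position-wise non-linear readout to the resulting states. All sizes, the eigenvalue radius, and the toy MLP readout are illustrative assumptions.

```python
# Hypothetical NumPy sketch (not the authors' code): complex diagonal linear recurrence
# followed by a position-wise non-linear readout.
import numpy as np

def diagonal_linear_rnn(u, lam, B):
    """x_t = diag(lam) * x_{t-1} + B @ u_t, with a complex diagonal lam."""
    T, _ = u.shape
    d_state = lam.shape[0]
    x = np.zeros(d_state, dtype=complex)
    states = np.empty((T, d_state), dtype=complex)
    for t in range(T):
        x = lam * x + B @ u[t]          # element-wise recurrence: O(d_state) per step
        states[t] = x
    return states

rng = np.random.default_rng(0)
d_state, d_in, T = 64, 4, 128
# Eigenvalues sampled close to the unit circle (|lam| ~ 0.99), the empirically favored regime.
phase = rng.uniform(0, 2 * np.pi, d_state)
lam = 0.99 * np.exp(1j * phase)
B = rng.normal(size=(d_state, d_in)) + 1j * rng.normal(size=(d_state, d_in))
u = rng.normal(size=(T, d_in))

states = diagonal_linear_rnn(u, lam, B)
# Position-wise non-linear processing of the encoding (toy 1-hidden-layer MLP on [Re; Im]).
features = np.concatenate([states.real, states.imag], axis=-1)
W1 = rng.normal(size=(2 * d_state, 32))
W2 = rng.normal(size=(32, 1))
y = np.maximum(features @ W1, 0.0) @ W2   # shape (T, 1): a causal sequence-to-sequence map
print(y.shape)
```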
|
https://proceedings.mlr.press/v235/osa24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/osa24a/osa24a.pdf
|
https://openreview.net/forum?id=j6rG1ETRyu
|
Discovering Multiple Solutions from a Single Task in Offline Reinforcement Learning
|
https://proceedings.mlr.press/v235/osa24a.html
|
Takayuki Osa, Tatsuya Harada
|
https://proceedings.mlr.press/v235/osa24a.html
|
ICML 2024
|
Recent studies on online reinforcement learning (RL) have demonstrated the advantages of learning multiple behaviors from a single task, as in the case of few-shot adaptation to a new environment. Although this approach is expected to yield similar benefits in offline RL, appropriate methods for learning multiple solutions have not been fully investigated in previous studies. In this study, we therefore addressed the problem of finding multiple solutions from a single task in offline RL. We propose algorithms that can learn multiple solutions in offline RL, and empirically investigate their performance. Our experimental results show that the proposed algorithm learns multiple qualitatively and quantitatively distinctive solutions in offline RL.
|
https://proceedings.mlr.press/v235/ostapenko24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ostapenko24a/ostapenko24a.pdf
|
https://openreview.net/forum?id=0ZFWfeVsaD
|
Towards Modular LLMs by Building and Reusing a Library of LoRAs
|
https://proceedings.mlr.press/v235/ostapenko24a.html
|
Oleksiy Ostapenko, Zhan Su, Edoardo Ponti, Laurent Charlin, Nicolas Le Roux, Lucas Caccia, Alessandro Sordoni
|
https://proceedings.mlr.press/v235/ostapenko24a.html
|
ICML 2024
|
Given the increasing number of parameter-efficient adapters of large language models (LLMs), how can we reuse them to improve LLM performance on new tasks? We study how to best build a library of adapters given multi-task data and devise techniques for both zero-shot and supervised task generalization through routing in such a library. We benchmark existing approaches to build this library and introduce model-based clustering, $\texttt{MBC}$, a method that groups tasks based on the similarity of their adapter parameters, indirectly optimizing for transfer across the multi-task dataset. In order to reuse the library, we present a novel zero-shot routing mechanism, $\texttt{Arrow}$, which enables dynamic selection of the most relevant adapters for new inputs without the need for retraining. We experiment with several LLMs, such as Phi-2 and Mistral, on a wide array of held-out tasks, verifying that MBC-based adapters and Arrow routing lead to superior generalization to new tasks. Thus, we make steps towards creating modular, adaptable LLMs that can match or outperform traditional joint training.
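As a rough picture of how routing over a LoRA library could work, the sketch below scores a hidden state against one prototype direction per adapter and mixes the top-scoring low-rank updates. The prototype choice (top singular vector of the low-rank update) and the softmax mixing are assumptions for illustration, not the paper's exact Arrow procedure.

```python
# Hypothetical sketch (not the paper's implementation): zero-shot routing over a library
# of LoRA adapters via per-adapter prototype directions.
import numpy as np

def route_and_apply(h, adapters, k=2):
    """h: (d,) hidden state; adapters: dicts with 'A' (r,d), 'B' (d,r), 'proto' (d,)."""
    scores = np.array([abs(h @ a["proto"]) for a in adapters])    # relevance of each adapter
    top = np.argsort(scores)[-k:]
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()     # softmax over selected adapters
    delta = sum(w * (a["B"] @ (a["A"] @ h))
                for w, a in zip(weights, (adapters[i] for i in top)))
    return h + delta                                              # adapted hidden state

rng = np.random.default_rng(0)
d, r = 16, 4
adapters = []
for _ in range(8):
    A, B = rng.normal(size=(r, d)), rng.normal(size=(d, r))
    # Prototype: top right singular vector of the low-rank update B @ A (one plausible choice).
    _, _, Vt = np.linalg.svd(B @ A)
    adapters.append({"A": A, "B": B, "proto": Vt[0]})

print(route_and_apply(rng.normal(size=d), adapters).shape)
```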
|
https://proceedings.mlr.press/v235/ouasfi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ouasfi24a/ouasfi24a.pdf
|
https://openreview.net/forum?id=SLqdDWwibH
|
Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries
|
https://proceedings.mlr.press/v235/ouasfi24a.html
|
Amine Ouasfi, Adnane Boukhayma
|
https://proceedings.mlr.press/v235/ouasfi24a.html
|
ICML 2024
|
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities, encompassing a wide range from 3D shapes to images and audio. Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encoding intricate shape geometry. However, learning SDFs from sparse 3D point clouds in the absence of ground truth supervision remains a very challenging task. While recent methods rely on smoothness priors to regularize the learning, our method introduces a regularization term that leverages adversarial samples around the shape to improve the learned SDFs. Through extensive experiments and evaluations, we illustrate the efficacy of our proposed method, highlighting its capacity to improve SDF learning with respect to baselines and the state-of-the-art using synthetic and real data.
|
https://proceedings.mlr.press/v235/oulhaj24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/oulhaj24a/oulhaj24a.pdf
|
https://openreview.net/forum?id=QZ1DVzr6N9
|
Differentiable Mapper for Topological Optimization of Data Representation
|
https://proceedings.mlr.press/v235/oulhaj24a.html
|
Ziyad Oulhaj, Mathieu Carrière, Bertrand Michel
|
https://proceedings.mlr.press/v235/oulhaj24a.html
|
ICML 2024
|
Unsupervised data representation and visualization using tools from topology is an active and growing field of Topological Data Analysis (TDA) and data science. Its most prominent line of work is based on the so-called Mapper graph, which is a combinatorial graph whose topological structures (connected components, branches, loops) are in correspondence with those of the data itself. While highly generic and applicable, its use has been hampered so far by the manual tuning of its many parameters—among these, a crucial one is the so-called filter: it is a continuous function whose variations on the data set are the main ingredient for both building the Mapper representation and assessing the presence and sizes of its topological structures. However, while a few parameter tuning methods have already been investigated for the other Mapper parameters (i.e., resolution, gain, clustering), there is currently no method for tuning the filter itself. In this work, we build on a recently proposed optimization framework incorporating topology to provide the first filter optimization scheme for Mapper graphs. In order to achieve this, we propose a relaxed and more general version of the Mapper graph, whose convergence properties are investigated. Finally, we demonstrate the usefulness of our approach by optimizing Mapper graph representations on several datasets, and showcasing the superiority of the optimized representation over arbitrary ones.
|
https://proceedings.mlr.press/v235/ouyang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ouyang24a/ouyang24a.pdf
|
https://openreview.net/forum?id=7R3pzxTSlg
|
Structured Chemistry Reasoning with Large Language Models
|
https://proceedings.mlr.press/v235/ouyang24a.html
|
Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, Lianhui Qin
|
https://proceedings.mlr.press/v235/ouyang24a.html
|
ICML 2024
|
Large Language Models (LLMs) excel in diverse areas, yet struggle with complex scientific reasoning, especially in the field of chemistry. Different from the simple chemistry tasks (e.g., molecule classification) addressed in previous studies, complex chemistry problems require not only vast knowledge and precise calculation, but also compositional reasoning about rich dynamic interactions of different concepts (e.g., temperature changes). Our study shows that even advanced LLMs, like GPT-4, can fail easily in different ways. Interestingly, the errors often stem not from a lack of domain knowledge within the LLMs, but rather from the absence of an effective reasoning structure that guides the LLMs to elicit the right knowledge, incorporate the knowledge in step-by-step reasoning, and iteratively refine results for further improved quality. On this basis, we introduce StructChem, a simple yet effective prompting strategy that offers the desired guidance and substantially boosts the LLMs’ chemical reasoning capability. Testing across four chemistry areas—quantum chemistry, mechanics, physical chemistry, and kinetics—StructChem substantially enhances GPT-4’s performance, with up to 30% peak improvement. Our analysis also underscores the unique difficulties of precise grounded reasoning in science with LLMs, highlighting a need for more research in this area.
|
https://proceedings.mlr.press/v235/ozgul24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ozgul24a/ozgul24a.pdf
|
https://openreview.net/forum?id=sNjxqSnXFO
|
Stochastic Quantum Sampling for Non-Logconcave Distributions and Estimating Partition Functions
|
https://proceedings.mlr.press/v235/ozgul24a.html
|
Guneykan Ozgul, Xiantao Li, Mehrdad Mahdavi, Chunhao Wang
|
https://proceedings.mlr.press/v235/ozgul24a.html
|
ICML 2024
|
We present quantum algorithms for sampling from possibly non-logconcave probability distributions expressed as $\pi(x) \propto \exp(-\beta f(x))$ as well as quantum algorithms for estimating the partition function for such distributions. We also incorporate a stochastic gradient oracle that implements the quantum walk operators inexactly by only using mini-batch gradients when $f$ can be written as a finite sum. One challenge of quantizing the resulting Markov chains is that they do not satisfy the detailed balance condition in general. Consequently, the mixing time of the algorithm cannot be expressed in terms of the spectral gap of the transition density matrix, making the quantum algorithms nontrivial to analyze. We overcome these challenges by first building a reference reversible Markov chain that converges to the target distribution, then controlling the discrepancy between our algorithm’s output and the target distribution by using the reference Markov chain as a bridge to establish the total complexity. Our quantum algorithms exhibit polynomial speedups in terms of dimension or precision dependencies when compared to best-known classical algorithms under similar assumptions.
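For readers unfamiliar with the target family, the snippet below is a purely classical illustration of sampling from $\pi(x) \propto \exp(-\beta f(x))$ with a stochastic (mini-batch) gradient when $f$ is a finite sum; it uses stochastic gradient Langevin dynamics on a toy objective and has nothing to do with the quantum walk construction itself.

```python
# Classical illustration only (not the paper's quantum algorithm): SGLD targeting
# pi(x) ∝ exp(-beta * f(x)) where f is a finite sum over toy data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)     # toy finite-sum structure

def minibatch_grad(x, beta, batch=32):
    """Stochastic gradient of beta * f(x) with f(x) = mean_i (x - data_i)^2 / 2."""
    idx = rng.integers(0, len(data), size=batch)
    return beta * np.mean(x - data[idx])

def sgld(x0, beta=1.0, step=1e-2, iters=5000):
    x, samples = x0, []
    for _ in range(iters):
        x = x - step * minibatch_grad(x, beta) + np.sqrt(2 * step) * rng.normal()
        samples.append(x)
    return np.array(samples)

samples = sgld(0.0)
print(samples[1000:].mean())   # close to 2.0, the mean of the toy target after burn-in
```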
|
https://proceedings.mlr.press/v235/ozkara24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ozkara24a/ozkara24a.pdf
|
https://openreview.net/forum?id=tASXcrMekp
|
MADA: Meta-Adaptive Optimizers Through Hyper-Gradient Descent
|
https://proceedings.mlr.press/v235/ozkara24a.html
|
Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher
|
https://proceedings.mlr.press/v235/ozkara24a.html
|
ICML 2024
|
Following the introduction of Adam, several novel adaptive optimizers for deep learning have been proposed. These optimizers typically excel in some tasks but may not outperform Adam uniformly across all tasks. In this work, we introduce Meta-Adaptive Optimizers (MADA), a unified optimizer framework that can generalize several known optimizers and dynamically learn the most suitable one during training. The key idea in MADA is to parameterize the space of optimizers and dynamically search through it using hyper-gradient descent during training. We empirically compare MADA to other popular optimizers on vision and language tasks, and find that MADA consistently outperforms Adam and other popular optimizers, and is robust against sub-optimally tuned hyper-parameters. MADA achieves a greater validation performance improvement over Adam compared to other popular optimizers during GPT-2 training and fine-tuning. We also propose AVGrad, a modification of AMSGrad that replaces the maximum operator with averaging, which is more suitable for hyper-gradient optimization. Finally, we provide a convergence analysis to show that parameterized interpolations of optimizers can improve their error bounds (up to constants), hinting at an advantage for meta-optimizers.
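The AVGrad idea mentioned above is concrete enough for a toy comparison. The sketch below contrasts AMSGrad's running maximum of the second-moment estimate with an averaging variant; the exact averaging rule used by AVGrad and the surrounding MADA parameterization are not reproduced here, so treat the update as an assumption.

```python
# Minimal sketch: AMSGrad-style running max vs. an AVGrad-style running average of the
# second-moment estimate. The averaging form is an assumption for illustration only.
import numpy as np

def second_moment_step(v_hat, v_t, t, mode):
    if mode == "amsgrad":
        return np.maximum(v_hat, v_t)            # AMSGrad: element-wise running max
    elif mode == "avgrad":
        return v_hat + (v_t - v_hat) / t         # assumed averaging variant
    raise ValueError(mode)

rng = np.random.default_rng(0)
v_t, v_max, v_avg = np.zeros(3), np.zeros(3), np.zeros(3)
beta2 = 0.999
for t in range(1, 1001):
    g = rng.normal(size=3)
    v_t = beta2 * v_t + (1 - beta2) * g**2       # Adam-style second-moment EMA
    v_max = second_moment_step(v_max, v_t, t, "amsgrad")
    v_avg = second_moment_step(v_avg, v_t, t, "avgrad")
print(v_max, v_avg)   # the max tracker dominates the average tracker element-wise
```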
|
https://proceedings.mlr.press/v235/paissan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/paissan24a/paissan24a.pdf
|
https://openreview.net/forum?id=kAfYYg6PX8
|
Listenable Maps for Audio Classifiers
|
https://proceedings.mlr.press/v235/paissan24a.html
|
Francesco Paissan, Mirco Ravanelli, Cem Subakan
|
https://proceedings.mlr.press/v235/paissan24a.html
|
ICML 2024
|
Despite the impressive performance of deep learning models across diverse tasks, their complexity poses challenges for interpretation. This challenge is particularly evident for audio signals, where conveying interpretations becomes inherently difficult. To address this issue, we introduce Listenable Maps for Audio Classifiers (L-MAC), a posthoc interpretation method that generates faithful and listenable interpretations. L-MAC utilizes a decoder on top of a pretrained classifier to generate binary masks that highlight relevant portions of the input audio. We train the decoder with a loss function that maximizes the confidence of the classifier decision on the masked-in portion of the audio while minimizing the probability of model output for the masked-out portion. Quantitative evaluations on both in-domain and out-of-domain data demonstrate that L-MAC consistently produces more faithful interpretations than several gradient and masking-based methodologies. Furthermore, a user study confirms that, on average, users prefer the interpretations generated by the proposed technique.
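The kind of masking objective described above can be sketched as follows: reward classifier confidence on the masked-in audio and penalize confidence on the masked-out remainder. The classifier, the weighting term, and the exact loss form are illustrative assumptions, not the official L-MAC loss.

```python
# Hedged sketch of a mask-based interpretation loss (not the official L-MAC objective).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mask_loss(classifier, audio, mask, target_class, alpha=1.0):
    """mask in [0,1]^T selects the 'relevant' portion of the input signal."""
    p_in = softmax(classifier(audio * mask))          # prediction on masked-in part
    p_out = softmax(classifier(audio * (1 - mask)))   # prediction on masked-out part
    faithfulness = -np.log(p_in[target_class] + 1e-8) # keep the decision on the kept audio
    leakage = alpha * p_out[target_class]             # discourage evidence in the removed audio
    return faithfulness + leakage

# Toy usage with a linear "classifier" over a 1-D signal.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 100))
classifier = lambda x: W @ x
audio = rng.normal(size=100)
mask = (rng.uniform(size=100) > 0.5).astype(float)
print(mask_loss(classifier, audio, mask, target_class=2))
```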
|
https://proceedings.mlr.press/v235/pal24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pal24a/pal24a.pdf
|
https://openreview.net/forum?id=2W3KUAaZgO
|
Implicit Representations via Operator Learning
|
https://proceedings.mlr.press/v235/pal24a.html
|
Sourav Pal, Harshavardhan Adepu, Clinton Wang, Polina Golland, Vikas Singh
|
https://proceedings.mlr.press/v235/pal24a.html
|
ICML 2024
|
The idea of representing a signal as the weights of a neural network, called Implicit Neural Representations (INRs), has led to exciting implications for compression, view synthesis and 3D volumetric data understanding. One problem in this setting pertains to the use of INRs for downstream processing tasks. Despite some conceptual results, this remains challenging because the INR for a given image/signal often exists in isolation. What does the neighborhood around a given INR correspond to? Based on this question, we offer an operator theoretic reformulation of the INR model, which we call Operator INR (or O-INR). At a high level, instead of mapping positional encodings to a signal, O-INR maps one function space to another function space. A practical form of this general casting is obtained by appealing to Integral Transforms. The resultant model does not need multi-layer perceptrons (MLPs), used in most existing INR models – we show that convolutions are sufficient and offer benefits including numerically stable behavior. We show that O-INR can easily handle most problem settings in the literature, and offers a similar performance profile as baselines. These benefits come with minimal, if any, compromise. Our code is available at https://github.com/vsingh-group/oinr.
|
https://proceedings.mlr.press/v235/palmarini24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/palmarini24a/palmarini24a.pdf
|
https://openreview.net/forum?id=CbIRQgAYE4
|
Bayesian Program Learning by Decompiling Amortized Knowledge
|
https://proceedings.mlr.press/v235/palmarini24a.html
|
Alessandro B. Palmarini, Christopher G. Lucas, Siddharth N
|
https://proceedings.mlr.press/v235/palmarini24a.html
|
ICML 2024
|
DreamCoder is an inductive program synthesis system that, whilst solving problems, learns to simplify search in an iterative wake-sleep procedure. The cost of search is amortized by training a neural search policy, reducing search breadth and effectively "compiling" useful information to compose program solutions across tasks. Additionally, a library of program components is learnt to compress and express discovered solutions in fewer components, reducing search depth. We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components. This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth. We integrate our approach with DreamCoder and demonstrate faster domain proficiency with improved generalization on a range of domains, particularly when fewer example solutions are available.
|
https://proceedings.mlr.press/v235/palumbo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/palumbo24a/palumbo24a.pdf
|
https://openreview.net/forum?id=vn92qYjL1F
|
Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection
|
https://proceedings.mlr.press/v235/palumbo24a.html
|
Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha
|
https://proceedings.mlr.press/v235/palumbo24a.html
|
ICML 2024
|
Both transduction and rejection have emerged as important techniques for defending against adversarial perturbations. A recent work by Goldwasser et al. showed that rejection combined with transduction can give provable guarantees (for certain problems) that cannot be achieved otherwise. Nevertheless, under recent strong adversarial attacks (GMSA), Goldwasser et al.’s work was shown to have low performance in a practical deep-learning setting. In this paper, we take a step towards realizing the promise of transduction+rejection in more realistic scenarios. Our key observation is that a novel application of a reduction technique by Tramèr, which was until now only used to demonstrate the vulnerability of certain defenses, can be used to actually construct effective defenses. Theoretically, we show that a careful application of this technique in the transductive setting can give significantly improved sample-complexity for robust generalization. Our theory guides us to design a new transductive algorithm for learning a selective model; extensive experiments using state-of-the-art attacks (AutoAttack, GMSA) show that our approach provides significantly better robust accuracy (81.6% on CIFAR-10 and 57.9% on CIFAR-100 under $l_\infty$ with budget 8/255) than existing techniques. The implementation is available at https://github.com/nilspalumbo/transduction-rejection.
|
https://proceedings.mlr.press/v235/pan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24a/pan24a.pdf
|
https://openreview.net/forum?id=OXzkw7vFIO
|
Counterfactual Image Editing
|
https://proceedings.mlr.press/v235/pan24a.html
|
Yushu Pan, Elias Bareinboim
|
https://proceedings.mlr.press/v235/pan24a.html
|
ICML 2024
|
Counterfactual image editing is a challenging task within generative AI. The current literature on the topic focuses primarily on changing individual features while being silent about the causal relationships between features, which are present in the real world. In this paper, we first formalize this task through causal language, modeling the causal relationships between latent generative factors and images through a special type of causal model called augmented structural causal models (ASCMs). Second, we show two fundamental impossibility results: (1) counterfactual editing is impossible from i.i.d. image samples and their corresponding labels alone; (2) also, even when the causal relationships between latent generative factors and images are available, no guarantees regarding the output of the generative model can be provided. Third, we propose a relaxation over this hard problem aiming to approximate the non-identifiable target counterfactual distributions while still preserving features the users care about and that are causally consistent with the true generative model, which we call ctf-consistent estimators. Finally, we develop an efficient algorithm to generate counterfactual image samples leveraging neural causal models.
|
https://proceedings.mlr.press/v235/pan24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24b/pan24b.pdf
|
https://openreview.net/forum?id=3QM5SWfeov
|
MALIBO: Meta-learning for Likelihood-free Bayesian Optimization
|
https://proceedings.mlr.press/v235/pan24b.html
|
Jiarong Pan, Stefan Falkner, Felix Berkenkamp, Joaquin Vanschoren
|
https://proceedings.mlr.press/v235/pan24b.html
|
ICML 2024
|
Bayesian optimization (BO) is a popular method to optimize costly black-box functions, and meta-learning has emerged as a way to leverage knowledge from related tasks to optimize new tasks faster. However, existing meta-learning methods for BO rely on surrogate models that are not scalable or are sensitive to varying input scales and noise types across tasks. Moreover, they often overlook the uncertainty associated with task similarity, leading to unreliable task adaptation when a new task differs significantly or has not been sufficiently explored yet. We propose a novel meta-learning BO approach that bypasses the surrogate model and directly learns the utility of queries across tasks. It explicitly models task uncertainty and includes an auxiliary model to enable robust adaptation to new tasks. Extensive experiments show that our method achieves strong performance and outperforms multiple meta-learning BO methods across various benchmarks.
|
https://proceedings.mlr.press/v235/pan24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24c/pan24c.pdf
|
https://openreview.net/forum?id=qwQVV5R8Y7
|
$S^2$IP-LLM: Semantic Space Informed Prompt Learning with LLM for Time Series Forecasting
|
https://proceedings.mlr.press/v235/pan24c.html
|
Zijie Pan, Yushan Jiang, Sahil Garg, Anderson Schneider, Yuriy Nevmyvaka, Dongjin Song
|
https://proceedings.mlr.press/v235/pan24c.html
|
ICML 2024
|
Recently, there has been a growing interest in leveraging pre-trained large language models (LLMs) for various time series applications. However, the semantic space of LLMs, established through the pre-training, is still underexplored and may help yield more distinctive and informative representations to facilitate time series forecasting. To this end, we propose Semantic Space Informed Prompt learning with LLM ($S^2$IP-LLM) to align the pre-trained semantic space with time series embedding space and perform time series forecasting based on learned prompts from the joint space. We first design a tokenization module tailored for cross-modality alignment, which explicitly concatenates patches of decomposed time series components to create embeddings that effectively encode the temporal dynamics. Next, we leverage the pre-trained word token embeddings to derive semantic anchors and align selected anchors with time series embeddings by maximizing the cosine similarity in the joint space. This way, $S^2$IP-LLM can retrieve relevant semantic anchors as prompts to provide strong indicators (context) for time series that exhibit different temporal dynamics. With thorough empirical studies on multiple benchmark datasets, we demonstrate that the proposed $S^2$IP-LLM can achieve superior forecasting performance over state-of-the-art baselines. Furthermore, our ablation studies and visualizations verify the necessity of prompt learning informed by semantic space.
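The anchor-retrieval step lends itself to a small sketch: pick the pre-trained token embeddings most cosine-similar to a time-series patch embedding and use them as prompt context. Shapes and names below are illustrative; the real pipeline also includes the tokenization and decomposition modules described above.

```python
# Illustrative sketch of cosine-similarity anchor retrieval (names and shapes assumed).
import numpy as np

def top_k_anchors(ts_embedding, anchor_embeddings, k=4):
    a = anchor_embeddings / np.linalg.norm(anchor_embeddings, axis=1, keepdims=True)
    t = ts_embedding / np.linalg.norm(ts_embedding)
    cos = a @ t                                  # cosine similarity to every anchor
    return np.argsort(cos)[-k:][::-1]            # indices of the k closest anchors

rng = np.random.default_rng(0)
anchors = rng.normal(size=(1000, 64))            # e.g., a subset of word-token embeddings
patch_embedding = rng.normal(size=64)            # embedding of a decomposed time-series patch
print(top_k_anchors(patch_embedding, anchors))   # anchors to use as prompt context
```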
|
https://proceedings.mlr.press/v235/pan24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24d/pan24d.pdf
|
https://openreview.net/forum?id=EvHWlYTLWe
|
Feedback Loops With Language Models Drive In-Context Reward Hacking
|
https://proceedings.mlr.press/v235/pan24d.html
|
Alexander Pan, Erik Jones, Meena Jagadeesan, Jacob Steinhardt
|
https://proceedings.mlr.press/v235/pan24d.html
|
ICML 2024
|
Language models influence the external world: they query APIs that read and write to web pages, generate content that shapes human behavior, and run system commands as autonomous agents. These interactions form feedback loops: LLM outputs affect the world, which in turn affects subsequent LLM outputs. In this work, we show that feedback loops can cause in-context reward hacking (ICRH), where the LLM at test-time optimizes a (potentially implicit) objective but creates negative side effects in the process. For example, consider an LLM agent deployed to increase Twitter engagement; the LLM may retrieve its previous tweets into the context window and make them more controversial, increasing engagement but also toxicity. We identify and study two processes that lead to ICRH: output-refinement and policy-refinement. For these processes, evaluations on static datasets are insufficient—they miss the feedback effects and thus cannot capture the most harmful behavior. In response, we provide three recommendations for evaluation to capture more instances of ICRH. As AI development accelerates, the effects of feedback loops will proliferate, increasing the need to understand their role in shaping LLM behavior.
|
https://proceedings.mlr.press/v235/pan24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24e/pan24e.pdf
|
https://openreview.net/forum?id=DsVzHj7jcA
|
Stability and Generalization for Stochastic Recursive Momentum-based Algorithms for (Strongly-)Convex One to $K$-Level Stochastic Optimizations
|
https://proceedings.mlr.press/v235/pan24e.html
|
Xiaokang Pan, Xingyu Li, Jin Liu, Tao Sun, Kai Sun, Lixing Chen, Zhe Qu
|
https://proceedings.mlr.press/v235/pan24e.html
|
ICML 2024
|
STOchastic Recursive Momentum (STORM)-based algorithms have been widely developed to solve one to $K$-level ($K \geq 3$) stochastic optimization problems. Specifically, they use estimators to mitigate the biased gradient issue and achieve near-optimal convergence results. However, there is relatively little work on understanding their generalization performance, particularly evident during the transition from one to $K$-level optimization contexts. This paper provides a comprehensive generalization analysis of three representative STORM-based algorithms: STORM, COVER, and SVMR, for one, two, and $K$-level stochastic optimizations under both convex and strongly convex settings based on algorithmic stability. Firstly, we define stability for $K$-level optimizations and link it to generalization. Then, we detail the stability results for three prominent STORM-based algorithms. Finally, we derive their excess risk bounds by balancing stability results with optimization errors. Our theoretical results provide strong evidence to complete the picture of STORM-based algorithms: (1) Each estimator may decrease its stability due to the variance with respect to its estimation target. (2) Every additional level might escalate the generalization error, influenced by the stability and the variance between its cumulative stochastic gradient and the true gradient. (3) Increasing the batch size for the initial computation of estimators presents a favorable trade-off, enhancing the generalization performance.
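For context, the recursion at the heart of STORM-style estimators (Cutkosky & Orabona, 2019) can be sketched in a few lines; the step sizes, momentum schedule, and the extension to multi-level problems analyzed in the paper are simplified away here, so this is only a one-level reference picture.

```python
# Simplified one-level STORM sketch on a toy least-squares problem (schedules are illustrative).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))
b = rng.normal(size=200)

def stoch_grad(x, idx):
    """Mini-batch gradient estimate of 0.5 * ||A x - b||^2 / n."""
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

x = np.zeros(10)
d = stoch_grad(x, rng.integers(0, 200, 32))        # initialize the estimator with a mini-batch
for t in range(1, 500):
    idx = rng.integers(0, 200, 32)
    x_new = x - 0.01 * d
    # STORM recursion: fresh gradient plus a momentum-corrected bias term,
    # evaluated on the SAME mini-batch at the old and new iterates.
    a_t = min(1.0, 1.0 / t)
    d = stoch_grad(x_new, idx) + (1 - a_t) * (d - stoch_grad(x, idx))
    x = x_new
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative residual of the fit
```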
|
https://proceedings.mlr.press/v235/pan24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24f/pan24f.pdf
|
https://openreview.net/forum?id=hsHIxrnrMx
|
RMIB: Representation Matching Information Bottleneck for Matching Text Representations
|
https://proceedings.mlr.press/v235/pan24f.html
|
Haihui Pan, Zhifang Liao, Wenrui Xie, Kun Han
|
https://proceedings.mlr.press/v235/pan24f.html
|
ICML 2024
|
Recent studies have shown that the domain matching of text representations will help improve the generalization ability of asymmetrical domains text matching tasks. This requires that the distribution of text representations be as similar as possible, similar to matching with heterogeneous data domains, in order to make the data after feature extraction indistinguishable. However, how to match the distribution of text representations remains an open question, and the role of text representations distribution match is still unclear. In this work, we explicitly narrow the distribution of text representations by matching them with the same prior distribution. We theoretically prove that narrowing the distribution of text representations in asymmetrical domains text matching is equivalent to optimizing the information bottleneck (IB). However, while the interaction between text representations plays an important role in asymmetrical domains text matching, IB does not restrict this interaction. Therefore, we propose the adequacy of interaction and the incompleteness of a single text representation on the basis of IB and obtain the representation matching information bottleneck (RMIB). We theoretically prove that the constraints on text representations in RMIB are equivalent to maximizing the mutual information between text representations on the premise that the task information is given. On four text matching models and five text matching datasets, we verify that RMIB can improve the performance of asymmetrical domains text matching. Our experimental code is available at https://github.com/chenxingphh/rmib.
|
https://proceedings.mlr.press/v235/pan24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24g/pan24g.pdf
|
https://openreview.net/forum?id=t3SEfoTaYQ
|
Coprocessor Actor Critic: A Model-Based Reinforcement Learning Approach For Adaptive Brain Stimulation
|
https://proceedings.mlr.press/v235/pan24g.html
|
Michelle Pan, Mariah L Schrum, Vivek Myers, Erdem Biyik, Anca Dragan
|
https://proceedings.mlr.press/v235/pan24g.html
|
ICML 2024
|
Adaptive brain stimulation can treat neurological conditions such as Parkinson’s disease and post-stroke motor deficits by influencing abnormal neural activity. Because of patient heterogeneity, each patient requires a unique stimulation policy to achieve optimal neural responses. Model-free reinforcement learning (MFRL) holds promise in learning effective policies for a variety of similar control tasks, but is limited in domains like brain stimulation by a need for numerous costly environment interactions. In this work we introduce Coprocessor Actor Critic, a novel, model-based reinforcement learning (MBRL) approach for learning neural coprocessor policies for brain stimulation. Our key insight is that coprocessor policy learning is a combination of learning how to act optimally in the world and learning how to induce optimal actions in the world through stimulation of an injured brain. We show that our approach overcomes the limitations of traditional MFRL methods in terms of sample efficiency and task success and outperforms baseline MBRL approaches in a neurologically realistic model of an injured brain.
|
https://proceedings.mlr.press/v235/pan24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pan24h/pan24h.pdf
|
https://openreview.net/forum?id=U97MIrs35l
|
Auto-Encoding Morph-Tokens for Multimodal LLM
|
https://proceedings.mlr.press/v235/pan24h.html
|
Kaihang Pan, Siliang Tang, Juncheng Li, Zhaoyu Fan, Wei Chow, Shuicheng Yan, Tat-Seng Chua, Yueting Zhuang, Hanwang Zhang
|
https://proceedings.mlr.press/v235/pan24h.html
|
ICML 2024
|
For multimodal LLMs, the synergy of visual comprehension (textual output) and generation (visual output) presents an ongoing challenge. This is due to a conflicting objective: for comprehension, an MLLM needs to abstract the visuals; for generation, it needs to preserve the visuals as much as possible. Thus, the objective is a dilemma for visual-tokens. To resolve the conflict, we propose encoding images into morph-tokens to serve a dual purpose: for comprehension, they act as visual prompts instructing MLLM to generate texts; for generation, they take on a different, non-conflicting role as complete visual-tokens for image reconstruction, where the missing visual cues are recovered by the MLLM. Extensive experiments show that morph-tokens can achieve a new SOTA for multimodal comprehension and generation simultaneously. Our project is available at https://github.com/DCDmllm/MorphTokens.
|
https://proceedings.mlr.press/v235/panaganti24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/panaganti24a/panaganti24a.pdf
|
https://openreview.net/forum?id=Yug1IEkvcb
|
Model-Free Robust $φ$-Divergence Reinforcement Learning Using Both Offline and Online Data
|
https://proceedings.mlr.press/v235/panaganti24a.html
|
Kishan Panaganti, Adam Wierman, Eric Mazumdar
|
https://proceedings.mlr.press/v235/panaganti24a.html
|
ICML 2024
|
The robust $\phi$-regularized Markov Decision Process (RRMDP) framework focuses on designing control policies that are robust against parameter uncertainties due to mismatches between the simulator (nominal) model and real-world settings. This work makes two important contributions. First, we propose a model-free algorithm called Robust $\phi$-regularized fitted Q-iteration for learning an $\epsilon$-optimal robust policy that uses only the historical data collected by rolling out a behavior policy (with robust exploratory requirement) on the nominal model. To the best of our knowledge, we provide the first unified analysis for a class of $\phi$-divergences achieving robust optimal policies in high-dimensional systems with arbitrarily large state spaces and general function approximation. Second, we introduce the hybrid robust $\phi$-regularized reinforcement learning framework to learn an optimal robust policy using both historical data and online sampling. Towards this framework, we propose a model-free algorithm called Hybrid robust Total-variation-regularized Q-iteration. To the best of our knowledge, we provide the first improved out-of-data-distribution assumption in large-scale problems with arbitrarily large state spaces and general function approximation under the hybrid robust $\phi$-regularized reinforcement learning framework.
|
https://proceedings.mlr.press/v235/panda24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/panda24a/panda24a.pdf
|
https://openreview.net/forum?id=5kXNMDpUVF
|
A New Linear Scaling Rule for Private Adaptive Hyperparameter Optimization
|
https://proceedings.mlr.press/v235/panda24a.html
|
Ashwinee Panda, Xinyu Tang, Saeed Mahloujifar, Vikash Sehwag, Prateek Mittal
|
https://proceedings.mlr.press/v235/panda24a.html
|
ICML 2024
|
An open problem in differentially private deep learning is hyperparameter optimization (HPO). DP-SGD introduces new hyperparameters and complicates existing ones, forcing researchers to painstakingly tune hyperparameters with hundreds of trials, which in turn makes it impossible to account for the privacy cost of HPO without destroying the utility. We propose an adaptive HPO method that uses cheap trials (in terms of privacy cost and runtime) to estimate optimal hyperparameters and scales them up. We obtain state-of-the-art performance on 22 benchmark tasks, across computer vision and natural language processing, across pretraining and finetuning, across architectures and a wide range of $\varepsilon \in [0.01,8.0]$, all while accounting for the privacy cost of HPO.
|
https://proceedings.mlr.press/v235/pandey24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pandey24a/pandey24a.pdf
|
https://openreview.net/forum?id=nxzXTLByXO
|
BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback
|
https://proceedings.mlr.press/v235/pandey24a.html
|
Gaurav Pandey, Yatin Nandwani, Tahira Naseem, Mayank Mishra, Guangxuan Xu, Dinesh Raghu, Sachindra Joshi, Asim Munawar, Ramón Fernandez Astudillo
|
https://proceedings.mlr.press/v235/pandey24a.html
|
ICML 2024
|
Distribution matching methods for language model alignment such as Generation with Distributional Control (GDC) and Distributional Policy Gradient (DPG) have not received the same level of attention in reinforcement learning from human feedback (RLHF) as contrastive methods such as Sequence Likelihood Calibration (SLiC), Direct Preference Optimization (DPO) and its variants. We identify high variance of the gradient estimate as the primary reason for the lack of success of these methods and propose a self-normalized baseline to reduce the variance. We further generalize the target distribution in DPG, GDC and DPO by using Bayes’ rule to define the reward-conditioned posterior. The resulting approach, referred to as BRAIn - Bayesian Reward-conditioned Amortized Inference - acts as a bridge between distribution matching methods and DPO and significantly outperforms prior art in summarization and Anthropic HH tasks.
|
https://proceedings.mlr.press/v235/pang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pang24a/pang24a.pdf
|
https://openreview.net/forum?id=l7shXGuGBT
|
Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation
|
https://proceedings.mlr.press/v235/pang24a.html
|
Xianghe Pang, Shuo Tang, Rui Ye, Yuxin Xiong, Bolun Zhang, Yanfeng Wang, Siheng Chen
|
https://proceedings.mlr.press/v235/pang24a.html
|
ICML 2024
|
Aligning large language models (LLMs) with human values is imperative to mitigate potential adverse effects resulting from their misuse. Drawing from the sociological insight that acknowledging all parties’ concerns is a key factor in shaping human values, this paper proposes a novel direction to align LLMs by themselves: social scene simulation. To achieve this, we present MATRIX, a novel social scene simulator that emulates realistic scenes around a user’s input query, enabling the LLM to take social consequences into account before responding. MATRIX serves as a virtual rehearsal space, akin to a Monopolylogue, where the LLM performs diverse roles related to the query and practices by itself. To inject this alignment, we fine-tune the LLM with MATRIX-simulated data, ensuring adherence to human values without compromising inference speed. We theoretically show that the LLM with MATRIX outperforms existing methods under mild assumptions. Finally, extensive experiments validate that our method outperforms over 10 baselines across 4 benchmarks. As evidenced by 875 user ratings, our tuned 13B-size LLM exceeds GPT-4 in aligning with human values. See our project page at https://shuotang123.github.io/MATRIX.
|
https://proceedings.mlr.press/v235/panigrahi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/panigrahi24a/panigrahi24a.pdf
|
https://openreview.net/forum?id=JcxlFe2fGC
|
Trainable Transformer in Transformer
|
https://proceedings.mlr.press/v235/panigrahi24a.html
|
Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia, Sanjeev Arora
|
https://proceedings.mlr.press/v235/panigrahi24a.html
|
ICML 2024
|
Recent works attribute the capability of in-context learning (ICL) in large pre-trained language models to implicitly simulating and fine-tuning an internal model (e.g., linear or 2-layer MLP) during inference. However, such constructions require large memory overhead, which makes simulation of more sophisticated internal models intractable. In this work, we propose a new efficient construction, Transformer in Transformer (in short, TINT), that allows a transformer to simulate and fine-tune more complex models during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TINT model with less than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TINT accommodates many common transformer variants and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TINT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe that TINT for an OPT-125M model improves performance by 4–16% absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines. To facilitate further work, a modular and extensible codebase for TINT is included.
|
https://proceedings.mlr.press/v235/paolo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/paolo24a/paolo24a.pdf
|
https://openreview.net/forum?id=e5admkWKgV
|
Position: A Call for Embodied AI
|
https://proceedings.mlr.press/v235/paolo24a.html
|
Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl
|
https://proceedings.mlr.press/v235/paolo24a.html
|
ICML 2024
|
We propose Embodied AI (E-AI) as the next fundamental step in the pursuit of Artificial General Intelligence (AGI), juxtaposing it against current AI advancements, particularly Large Language Models (LLMs). We traverse the evolution of the embodiment concept across diverse fields (philosophy, psychology, neuroscience, and robotics) to highlight how E-AI distinguishes itself from the classical paradigm of static learning. By broadening the scope of E-AI, we introduce a theoretical framework based on cognitive architectures, emphasizing perception, action, memory, and learning as essential components of an embodied agent. This framework is aligned with Friston’s active inference principle, offering a comprehensive approach to E-AI development. Despite the progress made in the field of AI, substantial challenges, such as the formulation of a novel AI learning theory and the innovation of advanced hardware, persist. Our discussion lays down a foundational guideline for future E-AI research. Highlighting the importance of creating E-AI agents capable of seamless communication, collaboration, and coexistence with humans and other intelligent entities within real-world environments, we aim to steer the AI community towards addressing the multifaceted challenges and seizing the opportunities that lie ahead in the quest for AGI.
|
https://proceedings.mlr.press/v235/papadopoulos24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/papadopoulos24a/papadopoulos24a.pdf
|
https://openreview.net/forum?id=UpSe7ag34v
|
Arrows of Time for Large Language Models
|
https://proceedings.mlr.press/v235/papadopoulos24a.html
|
Vassilis Papadopoulos, Jérémie Wenger, Clément Hongler
|
https://proceedings.mlr.press/v235/papadopoulos24a.html
|
ICML 2024
|
We study the probabilistic modeling performed by Autoregressive Large Language Models (LLMs) through the angle of time directionality, addressing a question first raised in (Shannon, 1951). For large enough models, we empirically find a time asymmetry in their ability to learn natural language: a difference in the average log-perplexity when trying to predict the next token versus when trying to predict the previous one. This difference is at the same time subtle and very consistent across various modalities (language, model size, training time, ...). Theoretically, this is surprising: from an information-theoretic point of view, there should be no such difference. We provide a theoretical framework to explain how such an asymmetry can appear from sparsity and computational complexity considerations, and outline a number of perspectives opened by our results.
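The quantity being compared is simply the average next-token log-perplexity of a corpus scored left-to-right versus right-to-left. The toy code below illustrates that measurement with bigram models trained in each direction; it is only meant to make the definition concrete, not to reproduce the LLM-scale experiments.

```python
# Toy illustration of the forward/backward log-perplexity gap (not the paper's experiments).
import numpy as np
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

def bigram_logppl(tokens):
    """Average negative log-probability of next tokens under an add-one-smoothed bigram model."""
    vocab = sorted(set(tokens))
    uni = Counter(tokens[:-1])
    bi = Counter(zip(tokens[:-1], tokens[1:]))
    logps = []
    for prev, nxt in zip(tokens[:-1], tokens[1:]):
        p = (bi[(prev, nxt)] + 1) / (uni[prev] + len(vocab))   # add-one smoothing
        logps.append(np.log(p))
    return -np.mean(logps)

forward = bigram_logppl(corpus)        # predict the next token
backward = bigram_logppl(corpus[::-1]) # predict the previous token
print(forward, backward, forward - backward)   # the paper measures this gap for LLMs
```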
|