Each record below lists the following fields in order: abs (proceedings URL), Download PDF, OpenReview, title, url, authors, detail_url, tags, abstract.
https://proceedings.mlr.press/v235/papamarkou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/papamarkou24a/papamarkou24a.pdf
https://openreview.net/forum?id=Nl3RG5XWAt
Position: Topological Deep Learning is the New Frontier for Relational Learning
https://proceedings.mlr.press/v235/papamarkou24a.html
Theodore Papamarkou, Tolga Birdal, Michael M. Bronstein, Gunnar E. Carlsson, Justin Curry, Yue Gao, Mustafa Hajij, Roland Kwitt, Pietro Lio, Paolo Di Lorenzo, Vasileios Maroulas, Nina Miolane, Farzana Nasrin, Karthikeyan Natesan Ramamurthy, Bastian Rieck, Simone Scardapane, Michael T Schaub, Petar Veličković, Bei Wang, Yusu Wang, Guowei Wei, Ghada Zamzmi
https://proceedings.mlr.press/v235/papamarkou24a.html
ICML 2024
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models. This paper posits that TDL is the new frontier for relational learning. TDL may complement graph representation learning and geometric deep learning by incorporating topological concepts, and can thus provide a natural choice for various machine learning settings. To this end, this paper discusses open problems in TDL, ranging from practical benefits to theoretical foundations. For each problem, it outlines potential solutions and future research opportunities. At the same time, this paper serves as an invitation to the scientific community to actively participate in TDL research to unlock the potential of this emerging field.
https://proceedings.mlr.press/v235/papamarkou24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/papamarkou24b/papamarkou24b.pdf
https://openreview.net/forum?id=PrmxFWI1Fr
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI
https://proceedings.mlr.press/v235/papamarkou24b.html
Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
https://proceedings.mlr.press/v235/papamarkou24b.html
ICML 2024
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets. However, a broader perspective reveals a multitude of overlooked metrics, tasks, and data types, such as uncertainty, active and continual learning, and scientific data, that demand attention. Bayesian deep learning (BDL) constitutes a promising avenue, offering advantages across these diverse settings. This paper posits that BDL can elevate the capabilities of deep learning. It revisits the strengths of BDL, acknowledges existing challenges, and highlights some exciting research avenues aimed at addressing these obstacles. Looking ahead, the discussion focuses on possible ways to combine large-scale foundation models with BDL to unlock their full potential.
https://proceedings.mlr.press/v235/park24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24a/park24a.pdf
https://openreview.net/forum?id=yTz0u4B8ug
Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture
https://proceedings.mlr.press/v235/park24a.html
Sangjun Park, Jinyeong Bak
https://proceedings.mlr.press/v235/park24a.html
ICML 2024
Making neural networks remember over the long term has been a longstanding issue. Although several external memory techniques have been introduced, most focus on retaining recent information in the short term. Regardless of its importance, information tends to be fatefully forgotten over time. We present Memoria, a memory system for artificial neural networks, drawing inspiration from humans and applying various neuroscientific and psychological theories. The experimental results prove the effectiveness of Memoria in the diverse tasks of sorting, language modeling, and classification, surpassing conventional techniques. Engram analysis reveals that Memoria exhibits the primacy, recency, and temporal contiguity effects which are characteristics of human memory.
https://proceedings.mlr.press/v235/park24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24b/park24b.pdf
https://openreview.net/forum?id=cY9g0bwiZx
The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm
https://proceedings.mlr.press/v235/park24b.html
Giseung Park, Woohyeon Byeon, Seongmin Kim, Elad Havakuk, Amir Leshem, Youngchul Sung
https://proceedings.mlr.press/v235/park24b.html
ICML 2024
In this paper, we consider multi-objective reinforcement learning, which arises in many real-world problems with multiple optimization goals. We approach the problem with a max-min framework focusing on fairness among the multiple goals and develop a relevant theory and a practical model-free algorithm under the max-min framework. The developed theory provides a theoretical advance in multi-objective reinforcement learning, and the proposed algorithm demonstrates a notable performance improvement over existing baseline methods.
https://proceedings.mlr.press/v235/park24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24c/park24c.pdf
https://openreview.net/forum?id=UGpGkLzwpP
The Linear Representation Hypothesis and the Geometry of Large Language Models
https://proceedings.mlr.press/v235/park24c.html
Kiho Park, Yo Joong Choe, Victor Veitch
https://proceedings.mlr.press/v235/park24c.html
ICML 2024
Informally, the "linear representation hypothesis" is the idea that high-level concepts are represented linearly as directions in some representation space. In this paper, we address two closely related questions: What does "linear representation" actually mean? And, how do we make sense of geometric notions (e.g., cosine similarity and projection) in the representation space? To answer these, we use the language of counterfactuals to give two formalizations of linear representation, one in the output (word) representation space, and one in the input (context) space. We then prove that these connect to linear probing and model steering, respectively. To make sense of geometric notions, we use the formalization to identify a particular (non-Euclidean) inner product that respects language structure in a sense we make precise. Using this causal inner product, we show how to unify all notions of linear representation. In particular, this allows the construction of probes and steering vectors using counterfactual pairs. Experiments with LLaMA-2 demonstrate the existence of linear representations of concepts, the connection to interpretation and control, and the fundamental role of the choice of inner product.
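
A minimal sketch of the counterfactual-pair idea described above, using toy embeddings and the ordinary Euclidean inner product rather than the paper's causal inner product; all names and numbers are illustrative assumptions, not the authors' code:

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 64
    true_dir = rng.standard_normal(dim)                             # hidden concept direction
    base = rng.standard_normal((50, dim))                           # toy context embeddings
    pos = base + true_dir + 0.1 * rng.standard_normal((50, dim))    # concept "on"
    neg = base + 0.1 * rng.standard_normal((50, dim))               # concept "off"

    # Estimate the concept direction from counterfactual pairs.
    concept = (pos - neg).mean(axis=0)
    concept /= np.linalg.norm(concept)

    probe_score = neg[0] @ concept        # linear probing: project onto the direction
    steered = neg[0] + 2.0 * concept      # steering: push the representation along it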
https://proceedings.mlr.press/v235/park24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24d/park24d.pdf
https://openreview.net/forum?id=RLA4JTckXe
Mitigating Oversmoothing Through Reverse Process of GNNs for Heterophilic Graphs
https://proceedings.mlr.press/v235/park24d.html
Moonjeong Park, Jaeseung Heo, Dongwoo Kim
https://proceedings.mlr.press/v235/park24d.html
ICML 2024
Message passing in Graph Neural Networks (GNNs) resembles a diffusion process, leading to over-smoothing of learned representations when many layers are stacked. Hence, the reverse process of message passing can produce distinguishable node representations by inverting the forward message propagation. Such distinguishable representations can help us better classify neighboring nodes with different labels, as in heterophilic graphs. In this work, we apply the design principle of the reverse process to three variants of GNNs. Through experiments on heterophilic graph data, where adjacent nodes need to have different representations for successful classification, we show that the reverse process significantly improves prediction performance in many cases. Additional analysis reveals that the reverse mechanism can mitigate over-smoothing over hundreds of layers. Our code is available at https://github.com/ml-postech/reverse-gnn.
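
As a toy illustration of the design principle only (not the authors' architecture), one smoothing step of a diffusion-style propagation can be undone by solving the corresponding linear system; the graph, normalization, and alpha below are illustrative assumptions:

    import numpy as np

    def forward_step(X, A_hat, alpha=0.5):
        # One diffusion-style (smoothing) propagation step.
        return (1 - alpha) * X + alpha * (A_hat @ X)

    def reverse_step(X, A_hat, alpha=0.5):
        # Invert the smoothing step by solving the linear system.
        n = A_hat.shape[0]
        M = (1 - alpha) * np.eye(n) + alpha * A_hat
        return np.linalg.solve(M, X)

    A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
    A_loop = A + np.eye(3)                                      # add self-loops
    A_hat = A_loop / A_loop.sum(axis=1, keepdims=True)          # row-normalized adjacency
    X = np.random.randn(3, 4)
    assert np.allclose(reverse_step(forward_step(X, A_hat), A_hat), X)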
https://proceedings.mlr.press/v235/park24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24e/park24e.pdf
https://openreview.net/forum?id=u09gadH3BU
Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs
https://proceedings.mlr.press/v235/park24e.html
Yeonhong Park, Jake Hyun, Sanglyul Cho, Bonggeun Sim, Jae W. Lee
https://proceedings.mlr.press/v235/park24e.html
ICML 2024
Recently, considerable efforts have been directed towards compressing Large Language Models (LLMs), which showcase groundbreaking capabilities across diverse applications but entail significant deployment costs due to their large sizes. Meanwhile, much less attention has been given to mitigating the costs associated with deploying multiple LLMs of varying sizes despite its practical significance. Thus, this paper introduces any-precision LLM, extending the concept of any-precision DNN to LLMs. Addressing challenges in any-precision LLM, we propose a lightweight method for any-precision quantization of LLMs, leveraging a post-training quantization framework, and develop a specialized software engine for its efficient serving. As a result, our solution significantly reduces the high costs of deploying multiple, different-sized LLMs by overlaying LLMs quantized to varying bit-widths, such as 3, 4, ..., $n$ bits, into a memory footprint comparable to a single $n$-bit LLM. All the supported LLMs with varying bit-widths demonstrate state-of-the-art model quality and inference throughput, making any-precision LLM a compelling option for the deployment of multiple, different-sized LLMs.
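
A hedged sketch of the overlay idea only (not the authors' quantizer or serving engine): store one set of high-bit codes and derive lower-precision views by truncating low-order bits; the uniform quantizer and all names below are assumptions:

    import numpy as np

    def quantize_uniform(w, n_bits):
        # Uniformly quantize a float array to unsigned n-bit codes plus scale/offset.
        lo, hi = w.min(), w.max()
        levels = 2 ** n_bits - 1
        scale = (hi - lo) / levels if hi > lo else 1.0
        codes = np.round((w - lo) / scale).astype(np.uint8)
        return codes, scale, lo

    def dequantize_at(codes, scale, lo, n_bits, target_bits):
        # Reconstruct weights at target_bits <= n_bits by dropping low-order bits.
        shift = n_bits - target_bits
        truncated = (codes >> shift).astype(np.float32)
        return truncated * (scale * (2 ** shift)) + lo

    w = np.random.randn(4, 4).astype(np.float32)
    codes, scale, lo = quantize_uniform(w, n_bits=8)
    w4 = dequantize_at(codes, scale, lo, n_bits=8, target_bits=4)  # 4-bit view
    w8 = dequantize_at(codes, scale, lo, n_bits=8, target_bits=8)  # full 8-bit view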
https://proceedings.mlr.press/v235/park24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24f/park24f.pdf
https://openreview.net/forum?id=lgcFX4VFrM
Mean-field Chaos Diffusion Models
https://proceedings.mlr.press/v235/park24f.html
Sungwoo Park, Dongjun Kim, Ahmed Alaa
https://proceedings.mlr.press/v235/park24f.html
ICML 2024
In this paper, we introduce a new class of score-based generative models (SGMs) designed to handle high-cardinality data distributions by leveraging concepts from mean-field theory. We present mean-field chaos diffusion models (MF-CDMs), which address the curse of dimensionality inherent in high-cardinality data by utilizing the propagation of chaos property of interacting particles. By treating high-cardinality data as a large stochastic system of interacting particles, we develop a novel score-matching method for infinite-dimensional chaotic particle systems and propose an approximation scheme that employs a subdivision strategy for efficient training. Our theoretical and empirical results demonstrate the scalability and effectiveness of MF-CDMs for managing large high-cardinality data structures, such as 3D point clouds.
https://proceedings.mlr.press/v235/park24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24g/park24g.pdf
https://openreview.net/forum?id=LhNsSaAKub
Foundation Policies with Hilbert Representations
https://proceedings.mlr.press/v235/park24g.html
Seohong Park, Tobias Kreiman, Sergey Levine
https://proceedings.mlr.press/v235/park24g.html
ICML 2024
Unsupervised and self-supervised objectives, such as next token prediction, have enabled pre-training generalist models from large amounts of unlabeled data. In reinforcement learning (RL), however, finding a truly general and scalable unsupervised pre-training objective for generalist policies from offline data remains a major open question. While a number of methods have been proposed to enable generic self-supervised RL, based on principles such as goal-conditioned RL, behavioral cloning, and unsupervised skill learning, such methods remain limited in terms of either the diversity of the discovered behaviors, the need for high-quality demonstration data, or the lack of a clear adaptation mechanism for downstream tasks. In this work, we propose a novel unsupervised framework to pre-train generalist policies that capture diverse, optimal, long-horizon behaviors from unlabeled offline data such that they can be quickly adapted to any arbitrary new tasks in a zero-shot manner. Our key insight is to learn a structured representation that preserves the temporal structure of the underlying environment, and then to span this learned latent space with directional movements, which enables various zero-shot policy “prompting” schemes for downstream tasks. Through our experiments on simulated robotic locomotion and manipulation benchmarks, we show that our unsupervised policies can solve goal-conditioned and general RL tasks in a zero-shot fashion, even often outperforming prior methods designed specifically for each setting. Our code and videos are available at https://seohong.me/projects/hilp/
https://proceedings.mlr.press/v235/park24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24h/park24h.pdf
https://openreview.net/forum?id=zEqeNEuiJr
SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding
https://proceedings.mlr.press/v235/park24h.html
Chanho Park, Namyoon Lee
https://proceedings.mlr.press/v235/park24h.html
ICML 2024
Distributed learning is an effective approach to accelerate model training by using parallel computing power of multiple workers. However, substantial communication delays arise between workers and a parameter server due to the massive costs associated with communicating gradients. SignSGD with majority voting (signSGD-MV) is a simple yet effective optimizer that reduces communication costs through sign quantization, but its convergence rate significantly decreases when adversarial workers arbitrarily manipulate datasets or local gradient updates. In this paper, we consider a distributed learning problem where the workforce comprises a mixture of honest and adversarial workers. In this setting, we show that the convergence rate can remain invariant as long as the number of honest workers providing trustworthy local updates to the parameter server exceeds the number of adversarial workers. The key idea behind this counter-intuitive result is our novel aggregation method, signSGD with federated defense (signSGD-FD). Unlike traditional approaches, signSGD-FD utilizes the gradient information sent by adversarial workers with appropriate weights, obtained through gradient sign decoding. Experimental results demonstrate that signSGD-FD achieves superior convergence rates compared to traditional algorithms in various adversarial attack scenarios.
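
A minimal sketch of sign aggregation with majority voting, plus a weighted variant in the spirit of down-weighting suspected adversaries; the weights are user-supplied placeholders, and this is not the authors' signSGD-FD decoder:

    import numpy as np

    def signsgd_mv(worker_grads):
        # signSGD with majority voting: elementwise majority of gradient signs.
        signs = np.sign(np.stack(worker_grads))        # (num_workers, dim)
        return np.sign(signs.sum(axis=0))

    def signsgd_weighted(worker_grads, weights):
        # Weighted sign vote; small weights down-weight suspected adversarial workers.
        signs = np.sign(np.stack(worker_grads))
        return np.sign(weights @ signs)

    grads = [np.random.randn(10) for _ in range(5)]
    grads[0] = -grads[1]                               # a worker sending flipped signs
    update = signsgd_weighted(grads, weights=np.array([0.1, 1.0, 1.0, 1.0, 1.0]))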
https://proceedings.mlr.press/v235/park24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24i/park24i.pdf
https://openreview.net/forum?id=dslUyy1rN4
Position: Automatic Environment Shaping is the Next Frontier in RL
https://proceedings.mlr.press/v235/park24i.html
Younghyo Park, Gabriel B. Margolis, Pulkit Agrawal
https://proceedings.mlr.press/v235/park24i.html
ICML 2024
Many roboticists dream of presenting a robot with a task in the evening and returning the next morning to find the robot capable of solving the task. What is preventing us from achieving this? Sim-to-real reinforcement learning (RL) has achieved impressive performance on challenging robotics tasks, but requires substantial human effort to set up the task in a way that is amenable to RL. It’s our position that algorithmic improvements in policy optimization and other ideas should be guided towards resolving the primary bottleneck of shaping the training environment, i.e., designing observations, actions, rewards and simulation dynamics. In practice, most practitioners do not tune the RL algorithm itself but rather the environment parameters to obtain a desirable controller. We posit that scaling RL to diverse robotic tasks will only be achieved if the community focuses on automating environment shaping procedures.
https://proceedings.mlr.press/v235/park24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24j/park24j.pdf
https://openreview.net/forum?id=GbFluKMmtE
Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks
https://proceedings.mlr.press/v235/park24j.html
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
https://proceedings.mlr.press/v235/park24j.html
ICML 2024
State-space models (SSMs), such as Mamba (Gu & Dao, 2023), have been proposed as alternatives to Transformer networks in language modeling, incorporating gating, convolutions, and input-dependent token selection to mitigate the quadratic cost of multi-head attention. Although SSMs exhibit competitive performance, their in-context learning (ICL) capabilities, a remarkable emergent property of modern language models that enables task execution without parameter optimization, remain less explored compared to Transformers. In this study, we evaluate the ICL performance of SSMs, focusing on Mamba, against Transformer models across various tasks. Our results show that SSMs perform comparably to Transformers in standard regression ICL tasks, while outperforming them in tasks like sparse parity learning. However, SSMs fall short in tasks involving non-standard retrieval functionality. To address these limitations, we introduce a hybrid model, MambaFormer, that combines Mamba with attention blocks, surpassing individual models in tasks where they struggle independently. Our findings suggest that hybrid architectures offer promising avenues for enhancing ICL in language models.
https://proceedings.mlr.press/v235/park24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24k/park24k.pdf
https://openreview.net/forum?id=cj5HbaX14p
BOtied: Multi-objective Bayesian optimization with tied multivariate ranks
https://proceedings.mlr.press/v235/park24k.html
Ji Won Park, Natasa Tagasovska, Michael Maser, Stephen Ra, Kyunghyun Cho
https://proceedings.mlr.press/v235/park24k.html
ICML 2024
Many scientific and industrial applications require the joint optimization of multiple, potentially competing objectives. Multi-objective Bayesian optimization (MOBO) is a sample-efficient framework for identifying Pareto-optimal solutions. At the heart of MOBO is the acquisition function, which determines the next candidate to evaluate by navigating the best compromises among the objectives. Acquisition functions that rely on integrating over the objective space scale poorly to a large number of objectives. In this paper, we show a natural connection between the non-dominated solutions and the highest multivariate rank, which coincides with the extreme level line of the joint cumulative distribution function (CDF). Motivated by this link, we propose the CDF indicator, a Pareto-compliant metric for evaluating the quality of approximate Pareto sets, that can complement the popular hypervolume indicator. We then introduce an acquisition function based on the CDF indicator, called BOtied. BOtied can be implemented efficiently with copulas, a statistical tool for modeling complex, high-dimensional distributions. Our experiments on a variety of synthetic and real-world experiments demonstrate that BOtied outperforms state-of-the-art MOBO algorithms while being computationally efficient for many objectives.
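
For intuition about multivariate ranks, the toy sketch below scores candidate points by an empirical joint CDF, assuming all objectives are maximized; it is an illustration only, not the paper's copula-based BOtied acquisition:

    import numpy as np

    def empirical_cdf_scores(Y):
        # F(y_i) = fraction of points j with Y_j <= y_i in every objective.
        n = Y.shape[0]
        leq = (Y[None, :, :] <= Y[:, None, :]).all(axis=2)   # (n, n) componentwise <=
        return leq.sum(axis=1) / n

    Y = np.random.rand(100, 3)                   # 100 candidates, 3 objectives (maximized)
    scores = empirical_cdf_scores(Y)
    top = Y[scores >= np.quantile(scores, 0.9)]  # candidates with the highest multivariate rank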
https://proceedings.mlr.press/v235/parnichkun24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/parnichkun24a/parnichkun24a.pdf
https://openreview.net/forum?id=DwwI9L67B5
State-Free Inference of State-Space Models: The Transfer Function Approach
https://proceedings.mlr.press/v235/parnichkun24a.html
Rom Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T.H. Smith, Ramin Hasani, Mathias Lechner, Qi An, Christopher Re, Hajime Asama, Stefano Ermon, Taiji Suzuki, Michael Poli, Atsushi Yamashita
https://proceedings.mlr.press/v235/parnichkun24a.html
ICML 2024
We approach designing a state-space model for deep learning applications through its dual representation, the transfer function, and uncover a highly efficient sequence parallel inference algorithm that is state-free: unlike other proposed algorithms, state-free inference does not incur any significant memory or computational cost with an increase in state size. We achieve this using properties of the proposed frequency domain transfer function parametrization, which enables direct computation of its corresponding convolutional kernel’s spectrum via a single Fast Fourier Transform. Our experimental results across multiple sequence lengths and state sizes illustrate, on average, a 35% training speed improvement over S4 layers – parametrized in time-domain – on the Long Range Arena benchmark, while delivering state-of-the-art downstream performance over other attention-free approaches. Moreover, we report improved perplexity in language modeling over a long convolutional Hyena baseline, by simply introducing our transfer function parametrization. Our code is available at https://github.com/ruke1ire/RTF.
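
A minimal sketch of the frequency-domain view under illustrative coefficients: for a stable rational transfer function H(z) = b(z)/a(z), the kernel spectrum on the FFT grid comes directly from the coefficient vectors, and the convolution is applied in the frequency domain without materializing any state (exact up to truncation of the decayed impulse response); this is not the paper's trained parametrization:

    import numpy as np

    L = 1024
    b = np.array([1.0, 0.5, 0.25])             # numerator coefficients (illustrative)
    a = np.array([1.0, -0.9])                  # denominator coefficients, stable pole at 0.9
    H = np.fft.rfft(b, n=L) / np.fft.rfft(a, n=L)    # kernel spectrum on the FFT grid

    u = np.random.randn(L)                     # input sequence
    y = np.fft.irfft(np.fft.rfft(u, n=L) * H, n=L)   # state-free convolution via FFT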
https://proceedings.mlr.press/v235/patel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/patel24a/patel24a.pdf
https://openreview.net/forum?id=Wn4QwCrDvH
Variational Inference with Coverage Guarantees in Simulation-Based Inference
https://proceedings.mlr.press/v235/patel24a.html
Yash Patel, Declan Mcnamara, Jackson Loper, Jeffrey Regier, Ambuj Tewari
https://proceedings.mlr.press/v235/patel24a.html
ICML 2024
Amortized variational inference is an often employed framework in simulation-based inference that produces a posterior approximation that can be rapidly computed given any new observation. Unfortunately, there are few guarantees about the quality of these approximate posteriors. We propose Conformalized Amortized Neural Variational Inference (CANVI), a procedure that is scalable, easily implemented, and provides guaranteed marginal coverage. Given a collection of candidate amortized posterior approximators, CANVI constructs conformalized predictors based on each candidate, compares the predictors using a metric known as predictive efficiency, and returns the most efficient predictor. CANVI ensures that the resulting predictor constructs regions that contain the truth with a user-specified level of probability. CANVI is agnostic to design decisions in formulating the candidate approximators and only requires access to samples from the forward model, permitting its use in likelihood-free settings. We prove lower bounds on the predictive efficiency of the regions produced by CANVI and explore how the quality of a posterior approximation relates to the predictive efficiency of prediction regions based on that approximation. Finally, we demonstrate the accurate calibration and high predictive efficiency of CANVI on a suite of simulation-based inference benchmark tasks and an important scientific task: analyzing galaxy emission spectra.
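
A minimal split-conformal sketch around a stand-in amortized posterior (a toy 1D example with an assumed `posterior_mean` function; not the CANVI procedure itself):

    import numpy as np

    def conformal_radius(posterior_mean, x_cal, theta_cal, alpha=0.1):
        # Radius r such that |theta - posterior_mean(x)| <= r with prob >= 1 - alpha.
        scores = np.abs(theta_cal - posterior_mean(x_cal))        # nonconformity scores
        n = len(scores)
        k = int(np.ceil((n + 1) * (1 - alpha)))                   # conformal quantile index
        return np.sort(scores)[min(k, n) - 1]

    # Toy simulator: theta ~ N(0, 1), x = theta + noise; use x / 2 as a crude posterior mean.
    theta_cal = np.random.randn(500)
    x_cal = theta_cal + np.random.randn(500)
    r = conformal_radius(lambda x: x / 2.0, x_cal, theta_cal, alpha=0.1)
    x_new = 1.3
    region = (x_new / 2.0 - r, x_new / 2.0 + r)                   # marginal 90% region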
https://proceedings.mlr.press/v235/patel24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/patel24b/patel24b.pdf
https://openreview.net/forum?id=ClWdplZ12B
Towards Global Optimality for Practical Average Reward Reinforcement Learning without Mixing Time Oracles
https://proceedings.mlr.press/v235/patel24b.html
Bhrij Patel, Wesley A Suttle, Alec Koppel, Vaneet Aggarwal, Brian M. Sadler, Dinesh Manocha, Amrit Bedi
https://proceedings.mlr.press/v235/patel24b.html
ICML 2024
In the context of average-reward reinforcement learning, the requirement for oracle knowledge of the mixing time, a measure of the duration a Markov chain under a fixed policy needs to achieve its stationary distribution, poses a significant challenge for the global convergence of policy gradient methods. This requirement is particularly problematic due to the difficulty and expense of estimating mixing time in environments with large state spaces, leading to the necessity of impractically long trajectories for effective gradient estimation in practical applications. To address this limitation, we consider the Multi-level Actor-Critic (MAC) framework, which incorporates a Multi-level Monte-Carlo (MLMC) gradient estimator. With our approach, we effectively alleviate the dependency on mixing time knowledge, a first for global convergence in average-reward MDPs. Furthermore, our approach exhibits the tightest available dependence of $\mathcal{O}(\sqrt{\tau_{\mathrm{mix}}})$ known from prior work. With a 2D grid world goal-reaching navigation experiment, we demonstrate that MAC outperforms the existing state-of-the-art policy gradient-based method for average reward settings.
https://proceedings.mlr.press/v235/patil24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/patil24a/patil24a.pdf
https://openreview.net/forum?id=bvPYroQgc3
Optimal Ridge Regularization for Out-of-Distribution Prediction
https://proceedings.mlr.press/v235/patil24a.html
Pratik Patil, Jin-Hong Du, Ryan Tibshirani
https://proceedings.mlr.press/v235/patil24a.html
ICML 2024
We study the behavior of optimal ridge regularization and optimal ridge risk for out-of-distribution prediction, where the test distribution deviates arbitrarily from the train distribution. We establish general conditions that determine the sign of the optimal regularization level under covariate and regression shifts. These conditions capture the alignment between the covariance and signal structures in the train and test data and reveal stark differences compared to the in-distribution setting. For example, a negative regularization level can be optimal under covariate shift or regression shift, even when the training features are isotropic or the design is underparameterized. Furthermore, we prove that the optimally tuned risk is monotonic in the data aspect ratio, even in the out-of-distribution setting and when optimizing over negative regularization levels. In general, our results do not make any modeling assumptions for the train or the test distributions, except for moment bounds, and allow for arbitrary shifts and the widest possible range of (negative) regularization levels.
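
A minimal sketch of evaluating the ridge estimator over a grid that includes negative regularization levels, as the analysis above allows; the data and grid are illustrative:

    import numpy as np

    def ridge(X, y, lam):
        # Ridge solution (X^T X + lam I)^{-1} X^T y; lam may be negative.
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    y = X @ rng.standard_normal(20) + rng.standard_normal(200)
    for lam in [-5.0, 0.0, 1.0, 10.0]:
        beta_hat = ridge(X, y, lam)
        print(f"lam={lam:+.1f}  train MSE={np.mean((X @ beta_hat - y) ** 2):.3f}")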
https://proceedings.mlr.press/v235/patwari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/patwari24a/patwari24a.pdf
https://openreview.net/forum?id=4BWCecFEcQ
PerceptAnon: Exploring the Human Perception of Image Anonymization Beyond Pseudonymization for GDPR
https://proceedings.mlr.press/v235/patwari24a.html
Kartik Patwari, Chen-Nee Chuah, Lingjuan Lyu, Vivek Sharma
https://proceedings.mlr.press/v235/patwari24a.html
ICML 2024
Current image anonymization techniques largely focus on localized pseudonymization: they typically modify identifiable features like faces or full bodies and evaluate anonymity through metrics such as detection and re-identification rates. However, this approach often overlooks information present in the entire image post-anonymization that can compromise privacy, such as specific locations, objects/items, or unique attributes. Acknowledging the pivotal role of human judgment in anonymity, our study conducts a thorough analysis of perceptual anonymization, exploring its spectral nature and its critical implications for image privacy assessment, particularly in light of regulations such as the General Data Protection Regulation (GDPR). To facilitate this, we curated a dataset specifically tailored for assessing anonymized images. We introduce a learning-based metric, PerceptAnon, which is tuned to align with the human Perception of Anonymity. PerceptAnon evaluates both original-anonymized image pairs and solely anonymized images. Trained using human annotations, our metric encompasses both anonymized subjects and their contextual backgrounds, thus providing a comprehensive evaluation of privacy vulnerabilities. We envision this work as a milestone for understanding and assessing image anonymization, and establishing a foundation for future research. The codes and dataset are available at https://github.com/SonyResearch/gdpr_perceptanon.
https://proceedings.mlr.press/v235/pauls24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pauls24a/pauls24a.pdf
https://openreview.net/forum?id=ZzCY0fRver
Estimating Canopy Height at Scale
https://proceedings.mlr.press/v235/pauls24a.html
Jan Pauls, Max Zimmer, Una M. Kelly, Martin Schwartz, Sassan Saatchi, Philippe Ciais, Sebastian Pokutta, Martin Brandt, Fabian Gieseke
https://proceedings.mlr.press/v235/pauls24a.html
ICML 2024
We propose a framework for global-scale canopy height estimation based on satellite data. Our model leverages advanced data preprocessing techniques, resorts to a novel loss function designed to counter geolocation inaccuracies inherent in the ground-truth height measurements, and employs data from the Shuttle Radar Topography Mission to effectively filter out erroneous labels in mountainous regions, enhancing the reliability of our predictions in those areas. A comparison between predictions and ground-truth labels yields an MAE/RMSE of 2.43 / 4.73 (meters) overall and 4.45 / 6.72 (meters) for trees taller than five meters, which depicts a substantial improvement compared to existing global-scale products. The resulting height map as well as the underlying framework will facilitate and enhance ecological analyses at a global scale, including, but not limited to, large-scale forest and biomass monitoring.
https://proceedings.mlr.press/v235/paulus24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/paulus24a/paulus24a.pdf
https://openreview.net/forum?id=TfWKkSAziC
LPGD: A General Framework for Backpropagation through Embedded Optimization Layers
https://proceedings.mlr.press/v235/paulus24a.html
Anselm Paulus, Georg Martius, Vít Musil
https://proceedings.mlr.press/v235/paulus24a.html
ICML 2024
Embedding parameterized optimization problems as layers into machine learning architectures serves as a powerful inductive bias. Training such architectures with stochastic gradient descent requires care, as degenerate derivatives of the embedded optimization problem often render the gradients uninformative. We propose Lagrangian Proximal Gradient Descent (LPGD), a flexible framework for training architectures with embedded optimization layers that seamlessly integrates into automatic differentiation libraries. LPGD efficiently computes meaningful replacements of the degenerate optimization layer derivatives by re-running the forward solver oracle on a perturbed input. LPGD captures various previously proposed methods as special cases, while fostering deep links to traditional optimization methods. We theoretically analyze our method and demonstrate on historical and synthetic data that LPGD converges faster than gradient descent even in a differentiable setup.
https://proceedings.mlr.press/v235/pavse24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pavse24a/pavse24a.pdf
https://openreview.net/forum?id=64fdhmogiD
Learning to Stabilize Online Reinforcement Learning in Unbounded State Spaces
https://proceedings.mlr.press/v235/pavse24a.html
Brahma S Pavse, Matthew Zurek, Yudong Chen, Qiaomin Xie, Josiah P. Hanna
https://proceedings.mlr.press/v235/pavse24a.html
ICML 2024
In many reinforcement learning (RL) applications, we want policies that reach desired states and then keep the controlled system within an acceptable region around the desired states over an indefinite period of time. This latter objective is called stability and is especially important when the state space is unbounded, such that the states can be arbitrarily far from each other and the agent can drift far away from the desired states. For example, in stochastic queuing networks, where queues of waiting jobs can grow without bound, the desired state is all-zero queue lengths. Here, a stable policy ensures queue lengths are finite while an optimal policy minimizes queue lengths. Since an optimal policy is also stable, one would expect that RL algorithms would implicitly give us stable policies. However, in this work, we find that deep RL algorithms that directly minimize the distance to the desired state during online training often result in unstable policies, i.e., policies that drift far away from the desired state. We attribute this instability to poor credit-assignment for destabilizing actions. We then introduce an approach based on two ideas: 1) a Lyapunov-based cost-shaping technique and 2) state transformations to the unbounded state space. We conduct an empirical study on various queueing networks and traffic signal control problems and find that our approach performs competitively against strong baselines with knowledge of the transition dynamics. Our code is available here: https://github.com/Badger-RL/STOP
https://proceedings.mlr.press/v235/pawelczyk24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pawelczyk24a/pawelczyk24a.pdf
https://openreview.net/forum?id=GKcwle8XC9
In-Context Unlearning: Language Models as Few-Shot Unlearners
https://proceedings.mlr.press/v235/pawelczyk24a.html
Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju
https://proceedings.mlr.press/v235/pawelczyk24a.html
ICML 2024
Machine unlearning, the study of efficiently removing the impact of specific training instances on a model, has garnered increased attention in recent years due to regulatory guidelines such as the Right to be Forgotten. Achieving precise unlearning typically involves fully retraining the model and is computationally infeasible in case of very large models such as Large Language Models (LLMs). To this end, recent work has proposed several algorithms which approximate the removal of training data without retraining the model. These algorithms crucially rely on access to the model parameters in order to update them, an assumption that may not hold in practice due to computational constraints or having only query access to the LLMs. In this work, we propose a new class of unlearning methods for LLMs called “In-Context Unlearning.” This method unlearns instances from the model by simply providing specific kinds of inputs in context, without the need to update model parameters. To unlearn specific training instances, we present these instances to the LLMs at inference time along with labels that differ from their ground truth. Our experimental results demonstrate that in-context unlearning performs on par with, or in some cases outperforms other state-of-the-art methods that require access to model parameters, effectively removing the influence of specific instances on the model while preserving test accuracy.
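
A hedged sketch of the prompting idea only: place the instances to be forgotten in the context with labels that differ from their ground truth, followed by ordinary demonstrations and the query; the template and label set are assumptions, not the paper's exact prompt:

    def build_unlearning_prompt(forget_examples, context_examples, query):
        lines = []
        for text, true_label in forget_examples:
            flipped = "negative" if true_label == "positive" else "positive"
            lines.append(f"Review: {text}\nSentiment: {flipped}")      # flipped label
        for text, label in context_examples:
            lines.append(f"Review: {text}\nSentiment: {label}")        # normal demonstrations
        lines.append(f"Review: {query}\nSentiment:")
        return "\n\n".join(lines)

    prompt = build_unlearning_prompt(
        forget_examples=[("A wonderful, heartfelt film.", "positive")],
        context_examples=[("Dull and far too long.", "negative")],
        query="An uneven but charming debut.",
    )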
https://proceedings.mlr.press/v235/pearce-crump24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pearce-crump24a/pearce-crump24a.pdf
https://openreview.net/forum?id=vjkq5fwsj3
Graph Automorphism Group Equivariant Neural Networks
https://proceedings.mlr.press/v235/pearce-crump24a.html
Edward Pearce-Crump, William Knottenbelt
https://proceedings.mlr.press/v235/pearce-crump24a.html
ICML 2024
Permutation equivariant neural networks are typically used to learn from data that lives on a graph. However, for any graph $G$ that has $n$ vertices, using the symmetric group $S_n$ as its group of symmetries does not take into account the relations that exist between the vertices. Given that the actual group of symmetries is the automorphism group Aut$(G)$, we show how to construct neural networks that are equivariant to Aut$(G)$ by obtaining a full characterisation of the learnable, linear, Aut$(G)$-equivariant functions between layers that are some tensor power of $\mathbb{R}^{n}$. In particular, we find a spanning set of matrices for these layer functions in the standard basis of $\mathbb{R}^{n}$. This result has important consequences for learning from data whose group of symmetries is a finite group because a theorem by Frucht (1938) showed that any finite group is isomorphic to the automorphism group of a graph.
https://proceedings.mlr.press/v235/pei24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pei24a/pei24a.pdf
https://openreview.net/forum?id=1sRuv4cnuZ
Multi-Track Message Passing: Tackling Oversmoothing and Oversquashing in Graph Learning via Preventing Heterophily Mixing
https://proceedings.mlr.press/v235/pei24a.html
Hongbin Pei, Yu Li, Huiqi Deng, Jingxin Hai, Pinghui Wang, Jie Ma, Jing Tao, Yuheng Xiong, Xiaohong Guan
https://proceedings.mlr.press/v235/pei24a.html
ICML 2024
The advancement toward deeper graph neural networks is currently obscured by two inherent issues in message passing, oversmoothing and oversquashing. We identify the root cause of these issues as information loss due to heterophily mixing in aggregation, where messages of diverse category semantics are mixed. We propose a novel multi-track graph convolutional network to address oversmoothing and oversquashing effectively. Our basic idea is intuitive: if messages are separated and independently propagated according to their category semantics, heterophilic mixing can be prevented. Consequently, we present a novel multi-track message passing scheme capable of preventing heterophilic mixing, enhancing long-distance information flow, and improving the separation condition. Empirical validations show that our model achieved state-of-the-art performance on several graph datasets and effectively tackled oversmoothing and oversquashing, setting a new benchmark of $86.4\%$ accuracy on Cora.
https://proceedings.mlr.press/v235/pei24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pei24b/pei24b.pdf
https://openreview.net/forum?id=OLvgrLtv6J
Exploiting Code Symmetries for Learning Program Semantics
https://proceedings.mlr.press/v235/pei24b.html
Kexin Pei, Weichen Li, Qirui Jin, Shuyang Liu, Scott Geng, Lorenzo Cavallaro, Junfeng Yang, Suman Jana
https://proceedings.mlr.press/v235/pei24b.html
ICML 2024
This paper tackles the challenge of teaching code semantics to Large Language Models (LLMs) for program analysis by incorporating code symmetries into the model architecture. We introduce a group-theoretic framework that defines code symmetries as semantics-preserving transformations, where forming a code symmetry group enables precise and efficient reasoning of code semantics. Our solution, SymC, develops a novel variant of self-attention that is provably equivariant to code symmetries from the permutation group defined over the program dependence graph. SymC obtains superior performance on five program analysis tasks, outperforming state-of-the-art code models, including GPT-4, without any pre-training. Our results suggest that code LLMs that encode the code structural prior via the code symmetry group generalize better and faster.
https://proceedings.mlr.press/v235/pei24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pei24c/pei24c.pdf
https://openreview.net/forum?id=EEO4Iktfjp
Modeling Language Tokens as Functionals of Semantic Fields
https://proceedings.mlr.press/v235/pei24c.html
Zhengqi Pei, Anran Zhang, Shuhui Wang, Qingming Huang
https://proceedings.mlr.press/v235/pei24c.html
ICML 2024
Recent advances in natural language processing have relied heavily on using Transformer-based language models. However, Transformers often require large parameter sizes and model depth. Existing Transformer-free approaches using state-space models demonstrate superiority over Transformers, yet they still lack a neuro-biological connection to the human brain. This paper proposes ${\it LasF}$, representing ${\bf L}$anguage tokens ${\bf as}$ ${\bf F}$unctionals of semantic fields, to simulate the neuronal behaviors for better language modeling. The ${\it LasF}$ module is equivalent to a nonlinear approximator tailored for sequential data. By replacing the final layers of pre-trained language models with the ${\it LasF}$ module, we obtain ${\it LasF}$-based models. Experiments conducted for standard reading comprehension and question-answering tasks demonstrate that the ${\it LasF}$-based models consistently improve accuracy with fewer parameters. Besides, we use CommonsenseQA’s blind test set to evaluate a full-parameter tuned ${\it LasF}$-based model, which outperforms the prior best ensemble and single models by $0.4\%$ and $3.1\%$, respectively. Furthermore, our ${\it LasF}$-only language model trained from scratch outperforms existing parameter-efficient language models on standard datasets such as WikiText103 and PennTreebank.
https://proceedings.mlr.press/v235/pei24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pei24d/pei24d.pdf
https://openreview.net/forum?id=LTifAl5bKb
Data-free Neural Representation Compression with Riemannian Neural Dynamics
https://proceedings.mlr.press/v235/pei24d.html
Zhengqi Pei, Anran Zhang, Shuhui Wang, Xiangyang Ji, Qingming Huang
https://proceedings.mlr.press/v235/pei24d.html
ICML 2024
Neural models are equivalent to dynamic systems from a physics-inspired view, implying that computation on neural networks can be interpreted as the dynamical interactions between neurons. However, existing work models neuronal interaction as a weight-based linear transformation, and the nonlinearity comes from the nonlinear activation functions, which leads to limited nonlinearity and data-fitting ability of the whole neural model. Inspired by Riemannian geometry, we interpret neural structures by projecting neurons onto the Riemannian neuronal state space and model neuronal interaction with Riemannian metric (${\it RieM}$), which provides a more efficient neural representation with higher parameter efficiency. With ${\it RieM}$, we further design a novel data-free neural compression mechanism that does not require additional fine-tuning with real data. Using backbones like ResNet and Vision Transformer, we conduct extensive experiments on datasets such as MNIST, CIFAR-100, ImageNet-1k, and COCO object detection. Empirical results show that, under equal compression rates and computational complexity, models compressed with ${\it RieM}$ achieve superior inference accuracy compared to existing data-free compression methods.
https://proceedings.mlr.press/v235/pei24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pei24e/pei24e.pdf
https://openreview.net/forum?id=jKnW7r7de1
BetterV: Controlled Verilog Generation with Discriminative Guidance
https://proceedings.mlr.press/v235/pei24e.html
Zehua Pei, Huiling Zhen, Mingxuan Yuan, Yu Huang, Bei Yu
https://proceedings.mlr.press/v235/pei24e.html
ICML 2024
Due to the growing complexity of modern Integrated Circuits (ICs), there is a need for automated circuit design methods. Recent years have seen increasing research in hardware design language generation to facilitate the design process. In this work, we propose a Verilog generation framework, BetterV, which fine-tunes large language models (LLMs) on processed domain-specific datasets and incorporates generative discriminators for guidance on particular design demands. Verilog modules are collected, filtered, and processed from the internet to form a clean and abundant dataset. Instruct-tuning methods are specially designed to fine-tune the LLMs to understand knowledge about Verilog. Furthermore, data are augmented to enrich the training set and are also used to train a generative discriminator on particular downstream tasks, providing guidance for the LLMs to optimize Verilog implementation. BetterV has the ability to generate syntactically and functionally correct Verilog, outperforming GPT-4 on the VerilogEval benchmark. With the help of task-specific generative discriminators, BetterV achieves remarkable improvements on various electronic design automation (EDA) downstream tasks, including netlist node reduction for synthesis and verification runtime reduction with Boolean Satisfiability (SAT) solving.
https://proceedings.mlr.press/v235/peleg24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peleg24a/peleg24a.pdf
https://openreview.net/forum?id=znz261CQK7
Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks
https://proceedings.mlr.press/v235/peleg24a.html
Amit Peleg, Matthias Hein
https://proceedings.mlr.press/v235/peleg24a.html
ICML 2024
Neural networks typically generalize well when fitting the data perfectly, even though they are heavily overparameterized. Many factors have been pointed out as the reason for this phenomenon, including an implicit bias of stochastic gradient descent (SGD) and a possible simplicity bias arising from the neural network architecture. The goal of this paper is to disentangle the factors that influence generalization stemming from optimization and architectural choices by studying random and SGD-optimized networks that achieve zero training error. We experimentally show, in the low sample regime, that overparameterization in terms of increasing width is beneficial for generalization, and this benefit is due to the bias of SGD and not due to an architectural bias. In contrast, for increasing depth, overparameterization is detrimental for generalization, but random and SGD-optimized networks behave similarly, so this can be attributed to an architectural bias.
https://proceedings.mlr.press/v235/peng24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24a/peng24a.pdf
https://openreview.net/forum?id=91QmrfztSP
Knowledge Distillation with Auxiliary Variable
https://proceedings.mlr.press/v235/peng24a.html
Bo Peng, Zhen Fang, Guangquan Zhang, Jie Lu
https://proceedings.mlr.press/v235/peng24a.html
ICML 2024
Knowledge distillation (KD) provides an efficient framework for transferring knowledge from a teacher model to a student model by aligning their predictive distributions. The existing KD methods adopt the same strategy as the teacher to formulate the student’s predictive distribution. However, employing the same distribution-modeling strategy typically causes sub-optimal knowledge transfer due to the discrepancy in model capacity between teacher and student models. Designing student-friendly teachers contributes to alleviating the capacity discrepancy, while it requires either complicated or student-specific training schemes. To cast off this dilemma, we propose to introduce an auxiliary variable to promote the ability of the student to model predictive distribution. The auxiliary variable is defined to be related to target variables, which will boost the model prediction. Specifically, we reformulate the predictive distribution with the auxiliary variable, deriving a novel objective function of KD. Theoretically, we provide insights to explain why the proposed objective function can outperform the existing KD methods. Experimentally, we demonstrate that the proposed objective function can considerably and consistently outperform existing KD methods.
https://proceedings.mlr.press/v235/peng24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24b/peng24b.pdf
https://openreview.net/forum?id=KU9mn6deDR
UPAM: Unified Prompt Attack in Text-to-Image Generation Models Against Both Textual Filters and Visual Checkers
https://proceedings.mlr.press/v235/peng24b.html
Duo Peng, Qiuhong Ke, Jun Liu
https://proceedings.mlr.press/v235/peng24b.html
ICML 2024
Text-to-Image (T2I) models have raised security concerns due to their potential to generate inappropriate or harmful images. In this paper, we propose UPAM, a novel framework that investigates the robustness of T2I models from the attack perspective. Unlike most existing attack methods that focus on deceiving textual defenses, UPAM aims to deceive both textual and visual defenses in T2I models. UPAM enables gradient-based optimization, offering greater effectiveness and efficiency than previous methods. Given that T2I models might not return results due to defense mechanisms, we introduce a Sphere-Probing Learning (SPL) scheme to support gradient optimization even when no results are returned. Additionally, we devise a Semantic-Enhancing Learning (SEL) scheme to finetune UPAM for generating target-aligned images. Our framework also ensures attack stealthiness. Extensive experiments demonstrate UPAM’s effectiveness and efficiency.
https://proceedings.mlr.press/v235/peng24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24c/peng24c.pdf
https://openreview.net/forum?id=LWRI4uPG2X
eCeLLM: Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data
https://proceedings.mlr.press/v235/peng24c.html
Bo Peng, Xinyi Ling, Ziru Chen, Huan Sun, Xia Ning
https://proceedings.mlr.press/v235/peng24c.html
ICML 2024
With tremendous efforts on developing effective e-commerce models, conventional e-commerce models show limited success in generalist e-commerce modeling, and suffer from unsatisfactory performance on new users and new products – a typical out-of-domain generalization challenge. Meanwhile, large language models (LLMs) demonstrate outstanding performance in generalist modeling and out-of-domain generalizability in many fields. Toward fully unleashing their power for e-commerce, in this paper, we construct ECInstruct, the first open-sourced, large-scale, and high-quality benchmark instruction dataset for e-commerce. Leveraging ECInstruct, we develop eCeLLM, a series of e-commerce LLMs, by instruction-tuning general-purpose LLMs. Our comprehensive experiments and evaluation demonstrate that eCeLLM models substantially outperform baseline models, including the most advanced GPT-4, and the state-of-the-art task-specific models in in-domain evaluation. Moreover, eCeLLM exhibits excellent generalizability to out-of-domain settings, including unseen products and unseen instructions, highlighting its superiority as a generalist e-commerce model. Both the ECInstruct dataset and the eCeLLM models show great potential in empowering versatile and effective LLMs for e-commerce. ECInstruct and eCeLLM models are publicly accessible through this link.
https://proceedings.mlr.press/v235/peng24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24d/peng24d.pdf
https://openreview.net/forum?id=OgG0I5toZZ
Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input
https://proceedings.mlr.press/v235/peng24d.html
Andi Peng, Yuying Sun, Tianmin Shu, David Abel
https://proceedings.mlr.press/v235/peng24d.html
ICML 2024
Humans use context to specify preferences over behaviors, i.e., their reward functions. Yet, algorithms for inferring reward models from preference data do not take this social learning view into account. Inspired by pragmatic human communication, we study how to extract fine-grained data regarding why an example is preferred that is useful for learning an accurate reward model. We propose to enrich preference queries to ask both (1) which features of a given example are preferable in addition to (2) comparisons between objects. We derive an approach for learning from these feature-level preferences, both for cases where users specify which features are reward-relevant, and when users do not. We evaluate our approach on linear bandit settings in both visual and language-based domains. Results support the efficiency of our approach in quickly converging to accurate rewards with fewer comparisons than example-only labels. Finally, we validate the real-world applicability with a behavioral experiment on a mushroom foraging task. Our findings suggest that incorporating pragmatic feature preferences is a promising approach for more efficient user-aligned reward learning.
https://proceedings.mlr.press/v235/peng24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24e/peng24e.pdf
https://openreview.net/forum?id=rEZ24oJhbn
UPOCR: Towards Unified Pixel-Level OCR Interface
https://proceedings.mlr.press/v235/peng24e.html
Dezhi Peng, Zhenhua Yang, Jiaxin Zhang, Chongyu Liu, Yongxin Shi, Kai Ding, Fengjun Guo, Lianwen Jin
https://proceedings.mlr.press/v235/peng24e.html
ICML 2024
Existing optical character recognition (OCR) methods rely on task-specific designs with divergent paradigms, architectures, and training strategies, which significantly increases the complexity of research and maintenance and hinders the fast deployment in applications. To this end, we propose UPOCR, a simple-yet-effective generalist model for Unified Pixel-level OCR interface. Specifically, the UPOCR unifies the paradigm of diverse OCR tasks as image-to-image transformation and the architecture as a vision Transformer (ViT)-based encoder-decoder with learnable task prompts. The prompts push the general feature representations extracted by the encoder towards task-specific spaces, endowing the decoder with task awareness. Moreover, the model training is uniformly aimed at minimizing the discrepancy between the predicted and ground-truth images regardless of the inhomogeneity among tasks. Experiments are conducted on three pixel-level OCR tasks including text removal, text segmentation, and tampered text detection. Without bells and whistles, the experimental results showcase that the proposed method can simultaneously achieve state-of-the-art performance on three tasks with a unified single model, which provides valuable strategies and insights for future research on generalist OCR models. Code is available at https://github.com/shannanyinxiang/UPOCR.
https://proceedings.mlr.press/v235/peng24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24f/peng24f.pdf
https://openreview.net/forum?id=iLyUEPZ0fR
Block Acceleration Without Momentum: On Optimal Stepsizes of Block Gradient Descent for Least-Squares
https://proceedings.mlr.press/v235/peng24f.html
Liangzu Peng, Wotao Yin
https://proceedings.mlr.press/v235/peng24f.html
ICML 2024
Block coordinate descent is a powerful algorithmic template suitable for big data optimization. This template admits a lot of variants including block gradient descent (BGD), which performs gradient descent on a selected block of variables, while keeping other variables fixed. For a very long time, the stepsize for each block has tacitly been set to one divided by the block-wise Lipschitz smoothness constant, imitating the vanilla stepsize rule for gradient descent (GD). However, such a choice for BGD has not yet been able to theoretically justify its empirical superiority over GD, as existing convergence rates for BGD have worse constants than GD in the deterministic cases. To discover such theoretical justification, we set up a simple environment where we consider BGD applied to least-squares with two blocks of variables. Assuming the data matrix corresponding to each block is orthogonal, we find optimal stepsizes of BGD in closed form, which provably lead to asymptotic convergence rates twice as fast as GD with Polyak’s momentum; this means, under that orthogonality assumption, one can accelerate BGD by just tuning stepsizes and without adding any momentum. An application that satisfies this assumption is generalized alternating projection between two subspaces, and applying our stepsizes to it improves the prior convergence rate that was once claimed, slightly inaccurately, to be optimal. The main proof idea is to minimize, in stepsize variables, the spectral radius of a matrix that controls convergence rates.
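
A minimal two-block BGD sketch for least-squares using the conventional 1/L block stepsizes mentioned above (the paper derives different, optimal closed-form stepsizes under an orthogonality assumption); the data here is random and illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    A1, A2 = rng.standard_normal((100, 10)), rng.standard_normal((100, 10))
    b = rng.standard_normal(100)
    x1, x2 = np.zeros(10), np.zeros(10)

    L1 = np.linalg.norm(A1, 2) ** 2            # block-wise Lipschitz constants
    L2 = np.linalg.norm(A2, 2) ** 2
    for _ in range(200):
        r = A1 @ x1 + A2 @ x2 - b
        x1 = x1 - (1.0 / L1) * (A1.T @ r)      # update block 1 with block 2 fixed
        r = A1 @ x1 + A2 @ x2 - b
        x2 = x2 - (1.0 / L2) * (A2.T @ r)      # update block 2 with block 1 fixed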
https://proceedings.mlr.press/v235/peng24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24g/peng24g.pdf
https://openreview.net/forum?id=XecUTmB9yD
FedCal: Achieving Local and Global Calibration in Federated Learning via Aggregated Parameterized Scaler
https://proceedings.mlr.press/v235/peng24g.html
Hongyi Peng, Han Yu, Xiaoli Tang, Xiaoxiao Li
https://proceedings.mlr.press/v235/peng24g.html
ICML 2024
Federated learning (FL) enables collaborative machine learning across distributed data owners, but data heterogeneity poses a challenge for model calibration. While prior work focused on improving accuracy for non-iid data, calibration remains under-explored. This study reveals that existing FL aggregation approaches lead to sub-optimal calibration, and theoretical analysis shows that, despite constraining the variance in clients’ label distributions, the global calibration error is still asymptotically lower bounded. To address this, we propose a novel Federated Calibration (FedCal) approach, emphasizing both local and global calibration. It leverages client-specific scalers for local calibration to effectively correct output misalignment without sacrificing prediction accuracy. These scalers are then aggregated via weight averaging to generate a global scaler, minimizing the global calibration error. Extensive experiments demonstrate that FedCal significantly outperforms the best-performing baseline, reducing global calibration error by 47.66% on average.
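
A hedged sketch of per-client temperature scalers aggregated by sample-weighted averaging (illustrative grid search and toy logits; not the FedCal parameterization):

    import numpy as np

    def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
        # Pick the temperature minimizing negative log-likelihood on local data.
        def nll(T):
            z = logits / T
            logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
            return -logp[np.arange(len(labels)), labels].mean()
        return min(grid, key=nll)

    client_logits = [np.random.randn(200, 5) * 3 for _ in range(4)]
    client_labels = [np.random.randint(0, 5, 200) for _ in range(4)]
    local_T = [fit_temperature(l, y) for l, y in zip(client_logits, client_labels)]
    weights = np.array([len(y) for y in client_labels], dtype=float)
    global_T = float(weights @ np.array(local_T) / weights.sum())   # aggregated global scaler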
https://proceedings.mlr.press/v235/peng24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24h/peng24h.pdf
https://openreview.net/forum?id=DrE7jVF4VW
Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance
https://proceedings.mlr.press/v235/peng24h.html
Xinyu Peng, Ziyang Zheng, Wenrui Dai, Nuoqian Xiao, Chenglin Li, Junni Zou, Hongkai Xiong
https://proceedings.mlr.press/v235/peng24h.html
ICML 2024
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems. In this paper, we reveal that recent methods can be uniformly interpreted as employing a Gaussian approximation with hand-crafted isotropic covariance for the intractable denoising posterior to approximate the conditional posterior mean. Inspired by this finding, we propose to improve recent methods by using a more principled covariance determined by maximum likelihood estimation. To achieve posterior covariance optimization without retraining, we provide general plug-and-play solutions based on two approaches specifically designed for leveraging pre-trained models with and without reverse covariance. We further propose a scalable method for learning posterior covariance prediction based on a representation in an orthonormal basis. Experimental results demonstrate that the proposed methods significantly enhance reconstruction performance without requiring hyperparameter tuning.
https://proceedings.mlr.press/v235/pensia24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pensia24a/pensia24a.pdf
https://openreview.net/forum?id=WSi4IiMaCx
A Subquadratic Time Algorithm for Robust Sparse Mean Estimation
https://proceedings.mlr.press/v235/pensia24a.html
Ankit Pensia
https://proceedings.mlr.press/v235/pensia24a.html
ICML 2024
We study the algorithmic problem of sparse mean estimation in the presence of adversarial outliers. Specifically, the algorithm observes a corrupted set of samples from $\mathcal{N}(\mu,\mathbf{I}_d)$, where the unknown mean $\mu \in \mathbb{R}^d$ is constrained to be $k$-sparse. A series of prior works has developed efficient algorithms for robust sparse mean estimation with sample complexity $\mathrm{poly}(k,\log d, 1/\epsilon)$ and runtime $d^2 \mathrm{poly}(k,\log d,1/\epsilon)$, where $\epsilon$ is the fraction of contamination. In particular, the fastest runtime of existing algorithms is quadratic in the dimension, which can be prohibitive in high dimensions. This quadratic barrier in the runtime stems from the reliance of these algorithms on the sample covariance matrix, which is of size $d^2$. Our main contribution is an algorithm for robust sparse mean estimation which runs in subquadratic time using $\mathrm{poly}(k,\log d,1/\epsilon)$ samples. Our results build on algorithmic advances in detecting weak correlations, a generalized version of the light-bulb problem by Valiant (2015).
https://proceedings.mlr.press/v235/pentyala24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pentyala24a/pentyala24a.pdf
https://openreview.net/forum?id=ZXsNkm3bxu
CaPS: Collaborative and Private Synthetic Data Generation from Distributed Sources
https://proceedings.mlr.press/v235/pentyala24a.html
Sikha Pentyala, Mayana Pereira, Martine De Cock
https://proceedings.mlr.press/v235/pentyala24a.html
ICML 2024
Data is the lifeblood of the modern world, forming a fundamental part of AI, decision-making, and research advances. With increasing interest in data, governments have taken important steps towards a regulated data world, drastically impacting data sharing and data usability and resulting in massive amounts of data confined within the walls of organizations. While synthetic data generation (SDG) is an appealing solution to break down these walls and enable data sharing, the main drawback of existing solutions is the assumption of a trusted aggregator for generative model training. Given that many data holders may not want to, or be legally allowed to, entrust a central entity with their raw data, we propose a framework for collaborative and private generation of synthetic tabular data from distributed data holders. Our solution is general, applicable to any marginal-based SDG, and provides input privacy by replacing the trusted aggregator with secure multi-party computation (MPC) protocols and output privacy via differential privacy (DP). We demonstrate the applicability and scalability of our approach for the state-of-the-art select-measure-generate SDG algorithms MWEM+PGM and AIM.
https://proceedings.mlr.press/v235/peralez24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/peralez24a/peralez24a.pdf
https://openreview.net/forum?id=n3smZl8itR
Solving Hierarchical Information-Sharing Dec-POMDPs: An Extensive-Form Game Approach
https://proceedings.mlr.press/v235/peralez24a.html
Johan Peralez, Aurélien Delage, Olivier Buffet, Jilles Steeve Dibangoye
https://proceedings.mlr.press/v235/peralez24a.html
ICML 2024
A recent theory shows that a multi-player decentralized partially observable Markov decision process can be transformed into an equivalent single-player game, enabling the application of Bellman’s principle of optimality to solve the single-player game by breaking it down into single-stage subgames. However, this approach entangles the decision variables of all players at each single-stage subgame, resulting in backups with double-exponential complexity. This paper demonstrates how to disentangle these decision variables while maintaining optimality under hierarchical information sharing, a prominent management style in our society. To achieve this, we apply the principle of optimality to solve any single-stage subgame by breaking it down further into smaller subgames, enabling us to make decisions for a single player at a time. Our approach reveals that extensive-form games whose solutions solve a single-stage subgame always exist, significantly reducing time complexity. Our experimental results show that the algorithms leveraging these findings can scale up to much larger multi-player games without compromising optimality.
https://proceedings.mlr.press/v235/perdomo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/perdomo24a/perdomo24a.pdf
https://openreview.net/forum?id=oaACFfNbXl
The Relative Value of Prediction in Algorithmic Decision Making
https://proceedings.mlr.press/v235/perdomo24a.html
Juan Carlos Perdomo
https://proceedings.mlr.press/v235/perdomo24a.html
ICML 2024
Algorithmic predictions are increasingly used to inform the allocation of goods and interventions in the public sphere. In these domains, predictions serve as a means to an end: they provide stakeholders with insights into the likelihood of future events, with the aim of improving decision-making quality and enhancing social welfare. However, if maximizing welfare is the ultimate goal, prediction is only a small piece of the puzzle. There are various other policy levers a social planner might pursue in order to improve bottom-line outcomes, such as expanding access to available goods or increasing the effect sizes of interventions. Given this broad range of design decisions, a basic question to ask is: What is the relative value of prediction in algorithmic decision making? How do the improvements in welfare arising from better predictions compare to those of other policy levers? The goal of our work is to initiate the formal study of these questions. Our main results are theoretical in nature. We identify simple, sharp conditions determining the relative value of prediction vis-à-vis expanding access, within several statistical models that are popular amongst quantitative social scientists. Furthermore, we illustrate how these theoretical insights can guide the design of algorithmic decision making systems in practice.
https://proceedings.mlr.press/v235/permenter24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/permenter24a/permenter24a.pdf
https://openreview.net/forum?id=o2ND9v0CeK
Interpreting and Improving Diffusion Models from an Optimization Perspective
https://proceedings.mlr.press/v235/permenter24a.html
Frank Permenter, Chenyang Yuan
https://proceedings.mlr.press/v235/permenter24a.html
ICML 2024
Denoising is intuitively related to projection. Indeed, under the manifold hypothesis, adding random noise is approximately equivalent to orthogonal perturbation. Hence, learning to denoise is approximately learning to project. In this paper, we use this observation to interpret denoising diffusion models as approximate gradient descent applied to the Euclidean distance function. We then provide a straightforward convergence analysis of the DDIM sampler under simple assumptions on the projection error of the denoiser. Finally, we propose a new gradient-estimation sampler, generalizing DDIM using insights from our theoretical results. In as few as 5-10 function evaluations, our sampler achieves state-of-the-art FID scores on pretrained CIFAR-10 and CelebA models and can generate high quality samples on latent diffusion models.
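To make the denoiser-as-projection reading concrete, the toy NumPy sketch below runs a deterministic DDIM-style update in which an exact projection onto a circle stands in for the learned denoiser; the noise schedule and dimensions are arbitrary, and the paper's gradient-estimation sampler itself is not reproduced.

```python
import numpy as np

def project_to_circle(x, radius=1.0):
    # Toy stand-in for a learned denoiser: exact projection onto the data
    # manifold (here, a circle of the given radius).
    n = np.linalg.norm(x)
    return radius * x / n if n > 0 else np.array([radius, 0.0])

# Deterministic DDIM-style sampling read as descent on the distance to the
# manifold: estimate the clean point x0_hat (a projection) at every step and
# move the iterate toward it while shrinking the noise scale.
T = 50
alpha_bar = np.cos(np.linspace(0, np.pi / 2 - 0.05, T + 1)) ** 2  # ~1 down to ~0
x = np.random.default_rng(0).standard_normal(2) * 3.0             # start from noise

for t in range(T, 0, -1):
    a_t, a_prev = alpha_bar[t], alpha_bar[t - 1]
    x0_hat = project_to_circle(x / np.sqrt(a_t))                  # denoiser-as-projection
    eps_hat = (x - np.sqrt(a_t) * x0_hat) / np.sqrt(1 - a_t)
    x = np.sqrt(a_prev) * x0_hat + np.sqrt(1 - a_prev) * eps_hat  # DDIM update

print("final point:", x, "distance to manifold:", abs(np.linalg.norm(x) - 1.0))
```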
https://proceedings.mlr.press/v235/pervez24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pervez24a/pervez24a.pdf
https://openreview.net/forum?id=pLtuwhoQh7
Mechanistic Neural Networks for Scientific Machine Learning
https://proceedings.mlr.press/v235/pervez24a.html
Adeel Pervez, Francesco Locatello, Stratis Gavves
https://proceedings.mlr.press/v235/pervez24a.html
ICML 2024
This paper presents Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences. It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations, revealing the underlying dynamics of data and enhancing interpretability and efficiency in data modeling. Central to our approach is a novel Relaxed Linear Programming Solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs. This integrates well with neural networks and surpasses the limitations of traditional ODE solvers enabling scalable GPU parallel processing. Overall, Mechanistic Neural Networks demonstrate their versatility for scientific machine learning applications, adeptly managing tasks from equation discovery to dynamic systems modeling. We prove their comprehensive capabilities in analyzing and interpreting complex scientific data across various applications, showing significant performance against specialized state-of-the-art methods. Source code is available at https://github.com/alpz/mech-nn.
https://proceedings.mlr.press/v235/petrik24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/petrik24a/petrik24a.pdf
https://openreview.net/forum?id=mz55Ox0Igz
Bayesian Regret Minimization in Offline Bandits
https://proceedings.mlr.press/v235/petrik24a.html
Marek Petrik, Guy Tennenholtz, Mohammad Ghavamzadeh
https://proceedings.mlr.press/v235/petrik24a.html
ICML 2024
We study how to make decisions that minimize Bayesian regret in offline linear bandits. Prior work suggests that one must take actions with maximum lower confidence bound (LCB) on their reward. We argue that reliance on LCB is inherently flawed in this setting and propose a new algorithm that directly minimizes upper-bounds on the Bayesian regret using efficient conic optimization solvers. Our bounds build heavily on new connections to monetary risk measures. Proving a matching lower-bound, we show that our upper-bounds are tight, and by minimizing them we are guaranteed to outperform the LCB approach. Our numerical results on synthetic domains confirm that our approach is superior to maximizing LCB.
https://proceedings.mlr.press/v235/petrov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/petrov24a/petrov24a.pdf
https://openreview.net/forum?id=3mQ6ZKTSQl
Prompting a Pretrained Transformer Can Be a Universal Approximator
https://proceedings.mlr.press/v235/petrov24a.html
Aleksandar Petrov, Philip Torr, Adel Bibi
https://proceedings.mlr.press/v235/petrov24a.html
ICML 2024
Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. Formally, whether prompting and prefix-tuning a pretrained model can universally approximate sequence-to-sequence functions. This paper answers in the affirmative and demonstrates that much smaller pretrained models than previously thought can be universal approximators when prefixed. In fact, prefix-tuning a single attention head is sufficient to approximate any continuous function making the attention mechanism uniquely suited for universal approximation. Moreover, any sequence-to-sequence function can be approximated by prefixing a transformer with depth linear in the sequence length. Beyond these density-type results, we also offer Jackson-type bounds on the length of the prefix needed to approximate a function to a desired precision.
https://proceedings.mlr.press/v235/pezeshki24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pezeshki24a/pezeshki24a.pdf
https://openreview.net/forum?id=gPStP3FSY9
Discovering Environments with XRM
https://proceedings.mlr.press/v235/pezeshki24a.html
Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, David Lopez-Paz
https://proceedings.mlr.press/v235/pezeshki24a.html
ICML 2024
Environment annotations are essential for the success of many out-of-distribution (OOD) generalization methods. Unfortunately, these are costly to obtain and often limited by human annotators’ biases. To achieve robust generalization, it is essential to develop algorithms for automatic environment discovery within datasets. Current proposals, which divide examples based on their training error, suffer from one fundamental problem. These methods introduce hyper-parameters and early-stopping criteria, which require a validation set with human-annotated environments, the very information subject to discovery. In this paper, we propose Cross-Risk Minimization (XRM) to address this issue. XRM trains twin networks, each learning from one random half of the training data, while imitating confident held-out mistakes made by its sibling. XRM provides a recipe for hyper-parameter tuning, does not require early-stopping, and can discover environments for all training and validation data. Algorithms built on top of XRM environments achieve oracle worst-group-accuracy, addressing a long-standing challenge in OOD generalization.
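A simplified sketch of the twin-model environment-discovery step, assuming synthetic data with a spurious feature and plain logistic-regression twins; the label-flipping ("imitating confident held-out mistakes") component of full XRM is omitted, so this only illustrates how held-out disagreement can surface a minority environment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data with a spurious feature (column 0) that agrees
# with the label in most, but not all, examples.
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
spurious = np.where(rng.random(n) < 0.9, y, 1 - y)   # 90% correlated
core = y + 0.8 * rng.standard_normal(n)              # weakly informative feature
X = np.column_stack([spurious.astype(float), core])

# Twin models, each trained on one random half of the data.
idx = rng.permutation(n)
half_a, half_b = idx[: n // 2], idx[n // 2 :]
model_a = LogisticRegression().fit(X[half_a], y[half_a])
model_b = LogisticRegression().fit(X[half_b], y[half_b])

# Environment discovery (simplified): each example is judged by the twin that
# did NOT train on it; held-out mistakes flag minority-group examples.
pred = np.empty(n, dtype=int)
pred[half_a] = model_b.predict(X[half_a])
pred[half_b] = model_a.predict(X[half_b])
env = (pred != y).astype(int)   # env 1 ~ examples the held-out twin gets wrong
print("discovered minority environment size:", env.sum())
```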
https://proceedings.mlr.press/v235/pfrommer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pfrommer24a/pfrommer24a.pdf
https://openreview.net/forum?id=rK6AZem0hX
Transport of Algebraic Structure to Latent Embeddings
https://proceedings.mlr.press/v235/pfrommer24a.html
Samuel Pfrommer, Brendon G. Anderson, Somayeh Sojoudi
https://proceedings.mlr.press/v235/pfrommer24a.html
ICML 2024
Machine learning often aims to produce latent embeddings of inputs which lie in a larger, abstract mathematical space. For example, in the field of 3D modeling, subsets of Euclidean space can be embedded as vectors using implicit neural representations. Such subsets also have a natural algebraic structure including operations (e.g., union) and corresponding laws (e.g., associativity). How can we learn to "union" two sets using only their latent embeddings while respecting associativity? We propose a general procedure for parameterizing latent space operations that are provably consistent with the laws on the input space. This is achieved by learning a bijection from the latent space to a carefully designed mirrored algebra which is constructed on Euclidean space in accordance with desired laws. We evaluate these structural transport nets for a range of mirrored algebras against baselines that operate directly on the latent space. Our experiments provide strong evidence that respecting the underlying algebraic structure of the input space is key for learning accurate and self-consistent operations.
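A minimal sketch of the transport idea under strong simplifying assumptions: the learned bijection is replaced by a fixed invertible linear map, and the mirrored algebra's operation is an elementwise maximum (associative, commutative, idempotent), so the induced latent "union" inherits those laws by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d)) + d * np.eye(d)   # well-conditioned, invertible
W_inv = np.linalg.inv(W)

def latent_union(z1, z2):
    # Transport both embeddings to the "mirrored algebra", apply its native
    # operation (elementwise max), and transport the result back.
    return W_inv @ np.maximum(W @ z1, W @ z2)

z1, z2, z3 = rng.standard_normal((3, d))
lhs = latent_union(latent_union(z1, z2), z3)
rhs = latent_union(z1, latent_union(z2, z3))
print("associativity holds up to float error:", np.allclose(lhs, rhs))
```

In the paper the bijection is learned jointly with the embeddings; the fixed random linear map here only demonstrates why conjugating a law-abiding operation by any bijection preserves its laws.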
https://proceedings.mlr.press/v235/pham24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pham24a/pham24a.pdf
https://openreview.net/forum?id=6BYD121JFO
Neural NeRF Compression
https://proceedings.mlr.press/v235/pham24a.html
Tuan Pham, Stephan Mandt
https://proceedings.mlr.press/v235/pham24a.html
ICML 2024
Neural Radiance Fields (NeRFs) have emerged as powerful tools for capturing detailed 3D scenes through continuous volumetric representations. Recent NeRFs utilize feature grids to improve rendering quality and speed; however, these representations introduce significant storage overhead. This paper presents a novel method for efficiently compressing a grid-based NeRF model, addressing the storage overhead concern. Our approach is based on the non-linear transform coding paradigm, employing neural compression for compressing the model’s feature grids. Due to the lack of training data involving many i.i.d scenes, we design an encoder-free, end-to-end optimized approach for individual scenes, using lightweight decoders. To leverage the spatial inhomogeneity of the latent feature grids, we introduce an importance-weighted rate-distortion objective and a sparse entropy model employing a masking mechanism. Our experimental results validate that our proposed method surpasses existing works in terms of grid-based NeRF compression efficacy and reconstruction quality.
https://proceedings.mlr.press/v235/pham24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pham24b/pham24b.pdf
https://openreview.net/forum?id=jEoIkNkqyc
Cross-view Masked Diffusion Transformers for Person Image Synthesis
https://proceedings.mlr.press/v235/pham24b.html
Trung X. Pham, Kang Zhang, Chang D. Yoo
https://proceedings.mlr.press/v235/pham24b.html
ICML 2024
We present X-MDPT ($\underline{Cross}$-view $\underline{M}$asked $\underline{D}$iffusion $\underline{P}$rediction $\underline{T}$ransformers), a novel diffusion model designed for pose-guided human image generation. X-MDPT distinguishes itself by employing masked diffusion transformers that operate on latent patches, a departure from the commonly-used Unet structures in existing works. The model comprises three key modules: 1) a denoising diffusion Transformer, 2) an aggregation network that consolidates conditions into a single vector for the diffusion process, and 3) a mask cross-prediction module that enhances representation learning with semantic information from the reference image. X-MDPT demonstrates scalability, improving FID, SSIM, and LPIPS with larger models. Despite its simple design, our model outperforms state-of-the-art approaches on the DeepFashion dataset while exhibiting efficiency in terms of training parameters, training time, and inference speed. Our compact 33MB model achieves an FID of 7.42, surpassing a prior Unet latent diffusion approach (FID 8.07) using only $11\times$ fewer parameters. Our best model surpasses the pixel-based diffusion with $\frac{2}{3}$ of the parameters and achieves $5.43 \times$ faster inference. The code is available at https://github.com/trungpx/xmdpt.
https://proceedings.mlr.press/v235/phan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/phan24a/phan24a.pdf
https://openreview.net/forum?id=9yADTDHgGu
When is Transfer Learning Possible?
https://proceedings.mlr.press/v235/phan24a.html
My Phan, Kianté Brantley, Stephanie Milani, Soroush Mehri, Gokul Swamy, Geoffrey J. Gordon
https://proceedings.mlr.press/v235/phan24a.html
ICML 2024
We present a general framework for transfer learning that is flexible enough to capture transfer in supervised, reinforcement, and imitation learning. Our framework enables new insights into the fundamental question of when we can successfully transfer learned information across problems. We model the learner as interacting with a sequence of problem instances, or environments, each of which is generated from a common structural causal model (SCM) by choosing the SCM’s parameters from restricted sets. We derive a procedure that can propagate restrictions on SCM parameters through the SCM’s graph structure to other parameters that we are trying to learn. The propagated restrictions then enable more efficient learning (i.e., transfer). By analyzing the procedure, we are able to challenge widely-held beliefs about transfer learning. First, we show that having sparse changes across environments is neither necessary nor sufficient for transfer. Second, we show an example where the common heuristic of freezing a layer in a network causes poor transfer performance. We then use our procedure to select a more refined set of parameters to freeze, leading to successful transfer learning.
https://proceedings.mlr.press/v235/phan24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/phan24b/phan24b.pdf
https://openreview.net/forum?id=OYL91MHfuU
Controllable Prompt Tuning For Balancing Group Distributional Robustness
https://proceedings.mlr.press/v235/phan24b.html
Hoang Phan, Andrew Gordon Wilson, Qi Lei
https://proceedings.mlr.press/v235/phan24b.html
ICML 2024
Models trained on data composed of different groups or domains can suffer from severe performance degradation under distribution shifts. While recent methods have largely focused on optimizing the worst-group objective, this often comes at the expense of good performance on other groups. To address this problem, we introduce an optimization scheme to achieve good performance across groups and find a good solution for all without severely sacrificing performance on any of them. However, directly applying such optimization involves updating the parameters of the entire network, making it both computationally expensive and challenging. Thus, we introduce Controllable Prompt Tuning (CPT), which couples our approach with prompt-tuning techniques. On spurious correlation benchmarks, our procedures achieve state-of-the-art results across both transformer and non-transformer architectures, as well as unimodal and multimodal data, while requiring only $0.4\%$ tunable parameters.
https://proceedings.mlr.press/v235/phillips24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/phillips24a/phillips24a.pdf
https://openreview.net/forum?id=vMUnnS4OWC
Particle Denoising Diffusion Sampler
https://proceedings.mlr.press/v235/phillips24a.html
Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
https://proceedings.mlr.press/v235/phillips24a.html
ICML 2024
Denoising diffusion models have become ubiquitous for generative modeling. The core idea is to transport the data distribution to a Gaussian by using a diffusion. Approximate samples from the data distribution are then obtained by estimating the time-reversal of this diffusion using score matching ideas. We follow here a similar strategy to sample from unnormalized probability densities and compute their normalizing constants. However, the time-reversed diffusion is here simulated by using an original iterative particle scheme relying on a novel score matching loss. Contrary to standard denoising diffusion models, the resulting Particle Denoising Diffusion Sampler (PDDS) provides asymptotically consistent estimates under mild assumptions. We demonstrate PDDS on multimodal and high dimensional sampling tasks.
https://proceedings.mlr.press/v235/piao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/piao24a/piao24a.pdf
https://openreview.net/forum?id=Kqa5JakTjB
Federated Continual Learning via Prompt-based Dual Knowledge Transfer
https://proceedings.mlr.press/v235/piao24a.html
Hongming Piao, Yichen Wu, Dapeng Wu, Ying Wei
https://proceedings.mlr.press/v235/piao24a.html
ICML 2024
In Federated Continual Learning (FCL), the challenge lies in effectively facilitating knowledge transfer and enhancing the performance across various tasks on different clients. Current FCL methods predominantly focus on avoiding interference between tasks, thereby overlooking the potential for positive knowledge transfer across tasks learned by different clients at separate time intervals. To address this issue, we introduce a Prompt-based knowledge transfer FCL algorithm, called Powder, designed to effectively foster the transfer of knowledge encapsulated in prompts between various sequentially learned tasks and clients. Furthermore, we have devised a unique approach for prompt generation and aggregation, intending to alleviate privacy protection concerns and communication overhead, while still promoting knowledge transfer. Comprehensive experimental results demonstrate the superiority of our method in terms of reduction in communication costs, and enhancement of knowledge transfer. Code is available at https://github.com/piaohongming/Powder.
https://proceedings.mlr.press/v235/pieroth24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pieroth24a/pieroth24a.pdf
https://openreview.net/forum?id=lm04PyXoEl
Detecting Influence Structures in Multi-Agent Reinforcement Learning
https://proceedings.mlr.press/v235/pieroth24a.html
Fabian Raoul Pieroth, Katherine Fitch, Lenz Belzner
https://proceedings.mlr.press/v235/pieroth24a.html
ICML 2024
We consider the problem of quantifying the amount of influence one agent can exert on another in the setting of multi-agent reinforcement learning (MARL). As a step towards a unified approach to express agents’ interdependencies, we introduce the total and state influence measurement functions. Both of these are valid for all common MARL systems, such as the discounted reward setting. Additionally, we propose novel quantities, called the total impact measurement (TIM) and state impact measurement (SIM), that characterize one agent’s influence on another by the maximum impact it can have on the other agents’ expected returns and represent instances of impact measurement functions in the average reward setting. Furthermore, we provide approximation algorithms for TIM and SIM that simultaneously learn approximations of agents’ expected returns, together with error bounds, stability analyses under changes of the policies, and convergence guarantees. The approximation algorithm relies only on observing other agents’ actions and is, other than that, fully decentralized. Through empirical studies, we validate our approach’s effectiveness in identifying intricate influence structures in complex interactions. Our work appears to be the first study of determining influence structures in the multi-agent average reward setting with convergence guarantees.
https://proceedings.mlr.press/v235/pierquin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pierquin24a/pierquin24a.pdf
https://openreview.net/forum?id=VZsxhPpu9T
Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration via Shift Reduction Lemmas
https://proceedings.mlr.press/v235/pierquin24a.html
Clément Pierquin, Aurélien Bellet, Marc Tommasi, Matthieu Boussard
https://proceedings.mlr.press/v235/pierquin24a.html
ICML 2024
Pufferfish privacy is a flexible generalization of differential privacy that allows one to model arbitrary secrets and the adversary’s prior knowledge about the data. Unfortunately, designing general and tractable Pufferfish mechanisms that do not compromise utility is challenging. Furthermore, this framework does not provide the composition guarantees needed for a direct use in iterative machine learning algorithms. To mitigate these issues, we introduce a Rényi divergence-based variant of Pufferfish and show that it allows us to extend the applicability of the Pufferfish framework. We first generalize the Wasserstein mechanism to cover a wide range of noise distributions and introduce several ways to improve its utility. Finally, as an alternative to composition, we prove privacy amplification results for contractive noisy iterations and showcase the first use of Pufferfish in private convex optimization. A common ingredient underlying our results is the use and extension of shift reduction lemmas.
https://proceedings.mlr.press/v235/pinheiro24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pinheiro24a/pinheiro24a.pdf
https://openreview.net/forum?id=K3fEkECWgu
Structure-based drug design by denoising voxel grids
https://proceedings.mlr.press/v235/pinheiro24a.html
Pedro O. Pinheiro, Arian Rokkum Jamasb, Omar Mahmood, Vishnu Sresht, Saeed Saremi
https://proceedings.mlr.press/v235/pinheiro24a.html
ICML 2024
We present VoxBind, a new score-based generative model for 3D molecules conditioned on protein structures. Our approach represents molecules as 3D atomic density grids and leverages a 3D voxel-denoising network for learning and generation. We extend the neural empirical Bayes formalism (Saremi & Hyvärinen, 2019) to the conditional setting and generate structure-conditioned molecules with a two-step procedure: (i) sample noisy molecules from the Gaussian-smoothed conditional distribution with underdamped Langevin MCMC using the learned score function and (ii) estimate clean molecules from the noisy samples with single-step denoising. Compared to the current state of the art, our model is simpler to train, significantly faster to sample from, and achieves better results on extensive in silico benchmarks—the generated molecules are more diverse, exhibit fewer steric clashes, and bind with higher affinity to protein pockets.
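A toy sketch of the two-step sample-then-denoise procedure, assuming a Gaussian data distribution so that the smoothed score is available in closed form; the paper instead uses a learned, structure-conditioned voxel score and underdamped (rather than the overdamped used here) Langevin MCMC.

```python
import numpy as np

# Toy data distribution: x ~ N(mu, I). With smoothing noise of scale sigma,
# y = x + sigma * eps has density N(mu, (1 + sigma^2) I), whose score is known
# in closed form and stands in for the learned score network.
rng = np.random.default_rng(0)
d, sigma = 3, 1.0
mu = np.array([2.0, -1.0, 0.5])

def smoothed_score(y):
    return -(y - mu) / (1.0 + sigma ** 2)

# Step (i): (overdamped) Langevin MCMC on the smoothed density.
y = rng.standard_normal(d) * 3.0
step = 0.1
for _ in range(2000):
    y = y + step * smoothed_score(y) + np.sqrt(2 * step) * rng.standard_normal(d)

# Step (ii): single-step denoising via Tweedie's formula (posterior mean).
x_hat = y + sigma ** 2 * smoothed_score(y)
print("denoised sample:", np.round(x_hat, 2))
```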
https://proceedings.mlr.press/v235/pinto24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pinto24a/pinto24a.pdf
https://openreview.net/forum?id=qTX1vxzs8b
Extracting Training Data From Document-Based VQA Models
https://proceedings.mlr.press/v235/pinto24a.html
Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari
https://proceedings.mlr.press/v235/pinto24a.html
ICML 2024
Vision-Language Models (VLMs) have made remarkable progress in document-based Visual Question Answering (i.e., responding to queries about the contents of an input document provided as an image). In this work, we show these models can memorize responses for training samples and regurgitate them even when the relevant visual information has been removed. This includes Personal Identifiable Information (PII) repeated once in the training set, indicating these models could divulge memorised sensitive information and therefore pose a privacy risk. We quantitatively measure the extractability of information in controlled experiments and differentiate between cases where it arises from generalization capabilities or from memorization. We further investigate the factors that influence memorization across multiple state-of-the-art models and propose an effective heuristic countermeasure that empirically prevents the extractability of PII.
https://proceedings.mlr.press/v235/piran24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/piran24a/piran24a.pdf
https://openreview.net/forum?id=dV9B9qFeGi
Contrasting Multiple Representations with the Multi-Marginal Matching Gap
https://proceedings.mlr.press/v235/piran24a.html
Zoe Piran, Michal Klein, James Thornton, Marco Cuturi
https://proceedings.mlr.press/v235/piran24a.html
ICML 2024
Learning meaningful representations of complex objects that can be seen through multiple ($k\geq 3$) views or modalities is a core task in machine learning. Existing methods use losses originally intended for paired views, and extend them to $k$ views, either by instantiating $\tfrac12k(k-1)$ loss-pairs, or by using reduced embeddings, following a one vs. average-of-rest strategy. We propose the multi-marginal matching gap (M3G), a loss that borrows tools from multi-marginal optimal transport (MM-OT) theory to simultaneously incorporate all $k$ views. Given a batch of $n$ points, each seen as a $k$-tuple of views subsequently transformed into $k$ embeddings, our loss contrasts the cost of matching these $n$ ground-truth $k$-tuples with the MM-OT polymatching cost, which seeks $n$ optimally arranged $k$-tuples chosen within these $n\times k$ vectors. While the exponential complexity $O(n^k$) of the MM-OT problem may seem daunting, we show in experiments that a suitable generalization of the Sinkhorn algorithm for that problem can scale to, e.g., $k=3\sim 6$ views using mini-batches of size $64 \sim128$. Our experiments demonstrate improved performance over multiview extensions of pairwise losses, for both self-supervised and multimodal tasks.
https://proceedings.mlr.press/v235/piterbarg24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/piterbarg24a/piterbarg24a.pdf
https://openreview.net/forum?id=TJCUrzhbiH
diff History for Neural Language Agents
https://proceedings.mlr.press/v235/piterbarg24a.html
Ulyana Piterbarg, Lerrel Pinto, Rob Fergus
https://proceedings.mlr.press/v235/piterbarg24a.html
ICML 2024
Neural Language Models (LMs) offer an exciting solution for general-purpose embodied control. However, a key technical issue arises when using an LM-based controller: environment observations must be converted to text, which coupled with history, results in long and verbose textual prompts. As a result, prior work in LM agents is limited to restricted domains with small observation size as well as minimal needs for interaction history or instruction finetuning. In this paper, we introduce diff history, a simple and highly effective solution to these issues. By applying the Unix diff command on consecutive text observations in the interaction histories used to prompt LM policies, we can both abstract away redundant information and focus the content of textual inputs on the salient changes in the environment. On NetHack, an unsolved video game that requires long-horizon reasoning for decision-making, LMs tuned with diff history match state-of-the-art performance for neural agents while needing 1800X fewer training examples compared to prior work. Even on the simpler BabyAI-Text environment with concise text observations, we find that although diff history increases the length of prompts, the representation it provides offers a 25% improvement in the efficiency of low-sample instruction finetuning. Further, we show that diff history scales favorably across different finetuning dataset sizes. We open-source our code and data to https://diffhistory.github.io.
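The core preprocessing step is easy to mirror in Python: the sketch below applies a unified diff to two consecutive (hypothetical) text observations, so the prompt carries only what changed. The paper uses the Unix diff command on NetHack observations; difflib here is a stand-in.

```python
import difflib

# Two consecutive textual observations from a hypothetical text environment.
obs_t = """HP: 14/16  Gold: 25
You see a door to the north.
A kobold is nearby."""
obs_t1 = """HP: 12/16  Gold: 25
You see a door to the north.
The kobold hits you!"""

# diff history: prompt the LM with the change between observations rather
# than repeating the full observation every turn.
diff = "\n".join(
    difflib.unified_diff(
        obs_t.splitlines(), obs_t1.splitlines(), lineterm="", n=0
    )
)
print(diff)
```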
https://proceedings.mlr.press/v235/plaksin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/plaksin24a/plaksin24a.pdf
https://openreview.net/forum?id=UdXDUDxq11
Zero-Sum Positional Differential Games as a Framework for Robust Reinforcement Learning: Deep Q-Learning Approach
https://proceedings.mlr.press/v235/plaksin24a.html
Anton Plaksin, Vitaly Kalev
https://proceedings.mlr.press/v235/plaksin24a.html
ICML 2024
Robust Reinforcement Learning (RRL) is a promising Reinforcement Learning (RL) paradigm aimed at training models that are robust to uncertainty or disturbances, making them more efficient for real-world applications. Following this paradigm, uncertainty or disturbances are interpreted as actions of a second, adversarial agent, and thus the problem is reduced to seeking the agents’ policies robust to any opponent’s actions. This paper is the first to propose considering the RRL problems within the positional differential game theory, which helps us to obtain theoretically justified intuition to develop a centralized Q-learning approach. Namely, we prove that under Isaacs’s condition (sufficiently general for real-world dynamical systems), the same Q-function can be utilized as an approximate solution of both minimax and maximin Bellman equations. Based on these results, we present the Isaacs Deep Q-Network algorithms and demonstrate their superiority compared to other baseline RRL and Multi-Agent RL algorithms in various environments.
https://proceedings.mlr.press/v235/podkopaev24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/podkopaev24a/podkopaev24a.pdf
https://openreview.net/forum?id=lwWV4Zl3h1
Adaptive Conformal Inference by Betting
https://proceedings.mlr.press/v235/podkopaev24a.html
Aleksandr Podkopaev, Dong Xu, Kuang-Chih Lee
https://proceedings.mlr.press/v235/podkopaev24a.html
ICML 2024
Conformal prediction is a valuable tool for quantifying predictive uncertainty of machine learning models. However, its applicability relies on the assumption of data exchangeability, a condition which is often not met in real-world scenarios. In this paper, we consider the problem of adaptive conformal inference without any assumptions about the data generating process. Existing approaches for adaptive conformal inference are based on optimizing the pinball loss using variants of online gradient descent. A notable shortcoming of such approaches is in their explicit dependence on and sensitivity to the choice of the learning rates. In this paper, we propose a different approach for adaptive conformal inference that leverages parameter-free online convex optimization techniques. We prove that our method controls long-term miscoverage frequency at a nominal level and demonstrate its convincing empirical performance without any need of performing cumbersome parameter tuning.
https://proceedings.mlr.press/v235/poli24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/poli24a/poli24a.pdf
https://openreview.net/forum?id=GDp7Gyd9nf
Mechanistic Design and Scaling of Hybrid Architectures
https://proceedings.mlr.press/v235/poli24a.html
Michael Poli, Armin W Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Re, Ce Zhang, Stefano Massaroli
https://proceedings.mlr.press/v235/poli24a.html
ICML 2024
The development of deep learning architectures is a resource-demanding process, due to a vast design space, long prototyping times, and high compute costs associated with at-scale model training and evaluation. We set out to simplify this process by grounding it in an end-to-end mechanistic architecture design (MAD) pipeline, encompassing small-scale capability unit tests predictive of scaling laws. Through a suite of synthetic token manipulation tasks such as compression and recall, designed to probe capabilities, we identify and test new hybrid architectures constructed from a variety of computational primitives. We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis, training over 500 language models between 70M and 7B parameters. Surprisingly, we find MAD synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures via isolated proxy tasks. The new architectures found via MAD, based on simple ideas such as hybridization and sparsity, outperform state-of-the-art Transformer, convolutional, and recurrent architectures (Transformer++, Hyena, Mamba) in scaling, both at compute-optimal budgets and in overtrained regimes. Overall, these results provide evidence that performance on curated synthetic tasks can be predictive of scaling laws, and that an optimal architecture should leverage specialized layers via a hybrid topology.
https://proceedings.mlr.press/v235/pouplin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pouplin24a/pouplin24a.pdf
https://openreview.net/forum?id=L8nSGvoyvb
Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise
https://proceedings.mlr.press/v235/pouplin24a.html
Thomas Pouplin, Alan Jeffares, Nabeel Seedat, Mihaela Van Der Schaar
https://proceedings.mlr.press/v235/pouplin24a.html
ICML 2024
Constructing valid prediction intervals rather than point estimates is a well-established approach for uncertainty quantification in the regression setting. Models equipped with this capacity output an interval of values in which the ground truth target will fall with some prespecified probability. This is an essential requirement in many real-world applications where simple point predictions’ inability to convey the magnitude and frequency of errors renders them insufficient for high-stakes decisions. Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the (non-parametric) distribution of outputs. This method is simple, computationally inexpensive, interpretable, assumption-free, and effective. However, it does require that the specific quantiles being learned are chosen a priori. This results in (a) intervals that are arbitrarily symmetric around the median which is sub-optimal for realistic skewed distributions, or (b) learning an excessive number of intervals. In this work, we propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint whilst maintaining its strengths. We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities (e.g. mean width) whilst retaining the essential coverage guarantees of quantile regression.
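For context, the sketch below fits fixed, pre-chosen quantiles by minimizing the standard pinball loss on skewed data, i.e. the constrained setup the abstract says RQR relaxes; the RQR objective itself is not reproduced, and the grid-search fit is purely illustrative.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    # Standard quantile-regression loss for a fixed quantile level tau.
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Skewed noise: fixed, pre-chosen quantile pairs (e.g. 0.05/0.95) can give
# unnecessarily wide intervals here, which is the constraint RQR removes.
rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10_000)

for tau in (0.05, 0.95):
    grid = np.linspace(0, 10, 2001)
    losses = [pinball_loss(y, q, tau) for q in grid]
    q_hat = grid[int(np.argmin(losses))]
    print(f"tau={tau:.2f}  fitted ~ {q_hat:.3f}  analytic {-np.log(1 - tau):.3f}")
```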
https://proceedings.mlr.press/v235/poursoltani24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/poursoltani24a/poursoltani24a.pdf
https://openreview.net/forum?id=bNgAdyv7ZP
Robust Data-driven Prescriptiveness Optimization
https://proceedings.mlr.press/v235/poursoltani24a.html
Mehran Poursoltani, Erick Delage, Angelos Georghiou
https://proceedings.mlr.press/v235/poursoltani24a.html
ICML 2024
The abundance of data has led to the emergence of a variety of optimization techniques that attempt to leverage available side information to provide more anticipative decisions. The wide range of methods and contexts of application have motivated the design of a universal unitless measure of performance known as the coefficient of prescriptiveness. This coefficient was designed to quantify both the quality of contextual decisions compared to a reference one and the prescriptive power of side information. To identify policies that maximize the former in a data-driven context, this paper introduces a distributionally robust contextual optimization model where the coefficient of prescriptiveness substitutes for the classical empirical risk minimization objective. We present a bisection algorithm to solve this model, which relies on solving a series of linear programs when the distributional ambiguity set has an appropriate nested form and polyhedral structure. Studying a contextual shortest path problem, we evaluate the robustness of the resulting policies against alternative methods when the out-of-sample dataset is subject to varying amounts of distribution shift.
https://proceedings.mlr.press/v235/powell24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/powell24a/powell24a.pdf
https://openreview.net/forum?id=JYcbgiSh0L
Stochastic Optimization with Arbitrary Recurrent Data Sampling
https://proceedings.mlr.press/v235/powell24a.html
William Powell, Hanbaek Lyu
https://proceedings.mlr.press/v235/powell24a.html
ICML 2024
For obtaining optimal first-order convergence guarantees for stochastic optimization, it is necessary to use a recurrent data sampling algorithm that samples every data point with sufficient frequency. Most commonly used data sampling algorithms (e.g., i.i.d., MCMC, random reshuffling) are indeed recurrent under mild assumptions. In this work, we show that for a particular class of stochastic optimization algorithms, we do not need any further property (e.g., independence, exponential mixing, and reshuffling) beyond recurrence in data sampling to guarantee an optimal rate of first-order convergence. Namely, using regularized versions of Minimization by Incremental Surrogate Optimization (MISO), we show that for non-convex and possibly non-smooth objective functions with constraints, the expected optimality gap converges at an optimal rate $O(n^{-1/2})$ under general recurrent sampling schemes. Furthermore, the implied constant depends explicitly on the ‘speed of recurrence’, measured by the expected amount of time to visit a data point, either averaged (‘target time’) or supremized (‘hitting time’) over the starting locations. We discuss applications of our general framework to decentralized optimization and distributed non-negative matrix factorization.
https://proceedings.mlr.press/v235/prabhu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/prabhu24a/prabhu24a.pdf
https://openreview.net/forum?id=A0N39kgRZq
Learning Multiple Secrets in Mastermind
https://proceedings.mlr.press/v235/prabhu24a.html
Milind Prabhu, David Woodruff
https://proceedings.mlr.press/v235/prabhu24a.html
ICML 2024
In the Generalized Mastermind problem, there is an unknown subset $H$ of the hypercube $\{0,1\}^d$ containing $n$ points. The goal is to learn $H$ by making a few queries to an oracle which, given a point $q$ in $\{0,1\}^d$, returns the point in $H$ nearest to $q$. We give a two-round adaptive algorithm for this problem that learns $H$ while making at most $\exp(\widetilde{O}(\sqrt{d \log n}))$ queries. Furthermore, we show that any $r$-round adaptive randomized algorithm that learns $H$ with constant probability must make $\exp(\Omega(d^{3^{-(r-1)}}))$ queries even when the input has poly$(d)$ points; thus, any poly$(d)$ query algorithm must necessarily use $\Omega(\log \log d)$ rounds of adaptivity. We give optimal query complexity bounds for the variant of the problem where queries are allowed to be from $\{0,1,2\}^d$. We also study a continuous variant of the problem in which $H$ is a subset of unit vectors in $\mathbb{R}^d$ and one can query unit vectors in $\mathbb{R}^d$. For this setting, we give an $O(n^{\lfloor d/2 \rfloor})$ query deterministic algorithm to learn the hidden set of points.
https://proceedings.mlr.press/v235/prajwal24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/prajwal24a/prajwal24a.pdf
https://openreview.net/forum?id=kOczKjmYum
MusicFlow: Cascaded Flow Matching for Text Guided Music Generation
https://proceedings.mlr.press/v235/prajwal24a.html
K R Prajwal, Bowen Shi, Matthew Le, Apoorv Vyas, Andros Tjandra, Mahi Luthra, Baishan Guo, Huiyu Wang, Triantafyllos Afouras, David Kant, Wei-Ning Hsu
https://proceedings.mlr.press/v235/prajwal24a.html
ICML 2024
We introduce MusicFlow, a cascaded text-to-music generation model based on flow matching. Based on self-supervised representations to bridge between text descriptions and music audios, we construct two flow matching networks to model the conditional distribution of semantic and acoustic features. Additionally, we leverage masked prediction as the training objective, enabling the model to generalize to other tasks such as music infilling and continuation in a zero-shot manner. Experiments on MusicCaps reveal that the music generated by MusicFlow exhibits superior quality and text coherence despite being over $2\sim5$ times smaller and requiring $5$ times fewer iterative steps. Simultaneously, the model can perform other music generation tasks and achieves competitive performance in music infilling and continuation.
https://proceedings.mlr.press/v235/press24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/press24a/press24a.pdf
https://openreview.net/forum?id=0bGsVoumFL
The Entropy Enigma: Success and Failure of Entropy Minimization
https://proceedings.mlr.press/v235/press24a.html
Ori Press, Ravid Shwartz-Ziv, Yann Lecun, Matthias Bethge
https://proceedings.mlr.press/v235/press24a.html
ICML 2024
Entropy minimization (EM) is frequently used to increase the accuracy of classification models when they’re faced with new data at test time. EM is a self-supervised learning method that optimizes classifiers to assign even higher probabilities to their top predicted classes. In this paper, we analyze why EM works when adapting a model for a few steps and why it eventually fails after adapting for many steps. We show that, at first, EM causes the model to embed test images close to training images, thereby increasing model accuracy. After many steps of optimization, EM makes the model embed test images far away from the embeddings of training images, which results in a degradation of accuracy. Building upon our insights, we present a method for solving a practical problem: estimating a model’s accuracy on a given arbitrary dataset without having access to its labels. Our method estimates accuracy by looking at how the embeddings of input images change as the model is optimized to minimize entropy. Experiments on 23 challenging datasets show that our method sets the SoTA with a mean absolute error of 5.75%, an improvement of 29.62% over the previous SoTA on this task. Our code is available at: https://github.com/oripress/EntropyEnigma
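A minimal sketch of the entropy-minimization objective being analyzed, assuming a toy classifier and random unlabeled "test" data; practical EM methods typically adapt only normalization parameters, and the paper's embedding-drift accuracy estimator is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier adapted at test time by minimizing prediction entropy.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

unlabeled_test_batch = torch.randn(128, 32)  # stand-in for shifted test data

for step in range(10):  # a few steps can help; many steps can hurt
    logits = model(unlabeled_test_batch)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    print(f"step {step}: mean prediction entropy = {entropy.item():.3f}")
```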
https://proceedings.mlr.press/v235/prinster24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/prinster24a/prinster24a.pdf
https://openreview.net/forum?id=F3936hVwQa
Conformal Validity Guarantees Exist for Any Data Distribution (and How to Find Them)
https://proceedings.mlr.press/v235/prinster24a.html
Drew Prinster, Samuel Don Stanton, Anqi Liu, Suchi Saria
https://proceedings.mlr.press/v235/prinster24a.html
ICML 2024
As artificial intelligence (AI) / machine learning (ML) gain widespread adoption, practitioners are increasingly seeking means to quantify and control the risk these systems incur. This challenge is especially salient when such systems have autonomy to collect their own data, such as in black-box optimization and active learning, where their actions induce sequential feedback-loop shifts in the data distribution. Conformal prediction is a promising approach to uncertainty and risk quantification, but prior variants’ validity guarantees have assumed some form of “quasi-exchangeability” on the data distribution, thereby excluding many types of sequential shifts. In this paper we prove that conformal prediction can theoretically be extended to any joint data distribution, not just exchangeable or quasi-exchangeable ones. Although the most general case is exceedingly impractical to compute, for concrete practical applications we outline a procedure for deriving specific conformal algorithms for any data distribution, and we use this procedure to derive tractable algorithms for a series of AI/ML-agent-induced covariate shifts. We evaluate the proposed algorithms empirically on synthetic black-box optimization and active learning tasks.
https://proceedings.mlr.press/v235/prokhorov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/prokhorov24a/prokhorov24a.pdf
https://openreview.net/forum?id=XuQPA4D396
Autoencoding Conditional Neural Processes for Representation Learning
https://proceedings.mlr.press/v235/prokhorov24a.html
Victor Prokhorov, Ivan Titov, Siddharth N
https://proceedings.mlr.press/v235/prokhorov24a.html
ICML 2024
Conditional neural processes (CNPs) are a flexible and efficient family of models that learn to learn a stochastic process from data. They have seen particular application in contextual image completion - observing pixel values at some locations to predict a distribution over values at other unobserved locations. However, the choice of pixels in learning CNPs is typically either random or derived from a simple statistical measure (e.g. pixel variance). Here, we turn the problem on its head and ask: which pixels would a CNP like to observe - do they facilitate fitting better CNPs, and do such pixels tell us something meaningful about the underlying image? To this end we develop the Partial Pixel Space Variational Autoencoder (PPS-VAE), an amortised variational framework that casts CNP context as latent variables learnt simultaneously with the CNP. We evaluate PPS-VAE over a number of tasks across different visual data, and find that not only can it facilitate better-fit CNPs, but also that the spatial arrangement and values meaningfully characterise image information - evaluated through the lens of classification on both within and out-of-data distributions. Our model additionally allows for dynamic adaption of context-set size and the ability to scale-up to larger images, providing a promising avenue to explore learning meaningful and effective visual representations.
https://proceedings.mlr.press/v235/provodin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/provodin24a/provodin24a.pdf
https://openreview.net/forum?id=njpTpkvUbO
Efficient Exploration in Average-Reward Constrained Reinforcement Learning: Achieving Near-Optimal Regret With Posterior Sampling
https://proceedings.mlr.press/v235/provodin24a.html
Danil Provodin, Maurits Clemens Kaptein, Mykola Pechenizkiy
https://proceedings.mlr.press/v235/provodin24a.html
ICML 2024
We present a new algorithm based on posterior sampling for learning in Constrained Markov Decision Processes (CMDP) in the infinite-horizon undiscounted setting. The algorithm achieves near-optimal regret bounds while being advantageous empirically compared to the existing algorithms. Our main theoretical result is a Bayesian regret bound for each cost component of $\tilde{O} (DS\sqrt{AT})$ for any communicating CMDP with $S$ states, $A$ actions, and diameter $D$. This regret bound matches the lower bound in order of time horizon $T$ and is the best-known regret bound for communicating CMDPs achieved by a computationally tractable algorithm. Empirical results show that our posterior sampling algorithm outperforms the existing algorithms for constrained reinforcement learning.
https://proceedings.mlr.press/v235/psenka24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/psenka24a/psenka24a.pdf
https://openreview.net/forum?id=35ahHydjXo
Learning a Diffusion Model Policy from Rewards via Q-Score Matching
https://proceedings.mlr.press/v235/psenka24a.html
Michael Psenka, Alejandro Escontrela, Pieter Abbeel, Yi Ma
https://proceedings.mlr.press/v235/psenka24a.html
ICML 2024
Diffusion models have become a popular choice for representing actor policies in behavior cloning and offline reinforcement learning. This is due to their natural ability to optimize an expressive class of distributions over a continuous space. However, previous works fail to exploit the score-based structure of diffusion models, and instead utilize a simple behavior cloning term to train the actor, limiting their ability in the actor-critic setting. In this paper, we present a theoretical framework linking the structure of diffusion model policies to a learned Q-function, by linking the structure between the score of the policy to the action gradient of the Q-function. We focus on off-policy reinforcement learning and propose a new policy update method from this theory, which we denote Q-score matching. Notably, this algorithm only needs to differentiate through the denoising model rather than the entire diffusion model evaluation, and converged policies through Q-score matching are implicitly multi-modal and explorative in continuous domains. We conduct experiments in simulated environments to demonstrate the viability of our proposed method and compare to popular baselines. Source code is available from the project website: https://www.michaelpsenka.io/qsm/.
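A rough, heavily simplified sketch of the actor update direction described above: the policy's score network is regressed onto the action gradient of a critic. Noise levels, scaling, and the full diffusion-policy machinery of the actual Q-score matching algorithm are omitted, and all networks and data are placeholders.

```python
import torch
import torch.nn as nn

# Toy networks for a continuous-control problem (state_dim=4, action_dim=2).
state_dim, action_dim = 4, 2
score_net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                          nn.Linear(64, action_dim))   # policy score / denoiser
q_net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                      nn.Linear(64, 1))                 # critic
opt = torch.optim.Adam(score_net.parameters(), lr=3e-4)

states = torch.randn(256, state_dim)
actions = torch.randn(256, action_dim, requires_grad=True)

# Action gradient of the critic: the target the policy score is matched to.
q_values = q_net(torch.cat([states, actions], dim=1)).sum()
action_grad = torch.autograd.grad(q_values, actions)[0].detach()

# One Q-score-matching-style actor update: align the policy's score with the
# action gradient of Q (only the denoiser is differentiated through).
policy_score = score_net(torch.cat([states, actions.detach()], dim=1))
loss = ((policy_score - action_grad) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
print("actor loss:", loss.item())
```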
https://proceedings.mlr.press/v235/pu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pu24a/pu24a.pdf
https://openreview.net/forum?id=D5IRvFF1lN
Learning-Efficient Yet Generalizable Collaborative Filtering for Item Recommendation
https://proceedings.mlr.press/v235/pu24a.html
Yuanhao Pu, Xiaolong Chen, Xu Huang, Jin Chen, Defu Lian, Enhong Chen
https://proceedings.mlr.press/v235/pu24a.html
ICML 2024
The weighted squared loss is a common component in several Collaborative Filtering (CF) algorithms for item recommendation, including the representative implicit Alternating Least Squares (iALS). Despite its widespread use, this loss function lacks a clear connection to ranking objectives such as Discounted Cumulative Gain (DCG), posing a fundamental challenge in explaining the exceptional ranking performance observed in these algorithms. In this work, we make a breakthrough by establishing a connection between squared loss and ranking metrics through a Taylor expansion of the DCG-consistent surrogate loss—softmax loss. We also discover a new surrogate squared loss function, namely Ranking-Generalizable Squared (RG$^2$) loss, and conduct thorough theoretical analyses on the DCG-consistency of the proposed loss function. Later, we present an example of utilizing the RG$^2$ loss with Matrix Factorization (MF), coupled with a generalization upper bound and an ALS optimization algorithm that leverages closed-form solutions over all items. Experimental results over three public datasets demonstrate the effectiveness of the RG$^2$ loss, exhibiting ranking performance on par with, or even surpassing, the softmax loss while achieving faster convergence.
https://proceedings.mlr.press/v235/pu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pu24b/pu24b.pdf
https://openreview.net/forum?id=meItvvCO7X
Unsupervised Domain Adaptation for Anatomical Structure Detection in Ultrasound Images
https://proceedings.mlr.press/v235/pu24b.html
Bin Pu, Xingguo Lv, Jiewen Yang, He Guannan, Xingbo Dong, Yiqun Lin, Li Shengli, Tan Ying, Liu Fei, Ming Chen, Zhe Jin, Kenli Li, Xiaomeng Li
https://proceedings.mlr.press/v235/pu24b.html
ICML 2024
Models trained on ultrasound images from one institution typically experience a decline in effectiveness when transferred directly to other institutions. Moreover, unlike natural images, fetal ultrasound images contain dense and overlapping structures, making structure detection more challenging. To tackle this problem, we propose a new Unsupervised Domain Adaptation (UDA) method named ToMo-UDA for fetal structure detection, which consists of the Topology Knowledge Transfer (TKT) and Morphology Knowledge Transfer (MKT) modules. The TKT leverages prior knowledge of fetal medical anatomy as topological information, reconstructing and aligning anatomy features across source and target domains. The MKT then formulates a more consistent and independent morphological representation for each substructure of an organ. To evaluate the proposed ToMo-UDA for ultrasound fetal anatomical structure detection, we introduce FUSH$^2$, a new Fetal UltraSound benchmark comprising Heart and Head images collected from Two health centers, with 16 annotated regions. Our experiments show that utilizing topological and morphological anatomy information in ToMo-UDA can greatly improve organ structure detection. This expands the potential for structure detection tasks in medical image analysis.
https://proceedings.mlr.press/v235/pu24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pu24c/pu24c.pdf
https://openreview.net/forum?id=yj8h567Ia7
Amortizing Pragmatic Program Synthesis with Rankings
https://proceedings.mlr.press/v235/pu24c.html
Yewen Pu, Saujas Vaduguru, Priyan Vaithilingam, Elena Glassman, Daniel Fried
https://proceedings.mlr.press/v235/pu24c.html
ICML 2024
The Rational Speech Acts (RSA) framework has been used successfully to build pragmatic program synthesizers that return programs which, in addition to being logically consistent with user-generated examples, account for the fact that a user chooses their examples informatively. We present a general method of amortizing the slow, exact RSA synthesizer. Our method first compiles a communication dataset of partially ranked programs by querying the exact RSA synthesizer. It then distills a global ranking – a single, total ordering of all programs – to approximate the partial rankings from this dataset. This global ranking is then used at inference time to rank multiple logically consistent candidate programs generated from a fast, non-pragmatic synthesizer. Experiments on two program synthesis domains using our ranking method resulted in orders-of-magnitude speedups compared to the exact RSA synthesizer, while being more accurate than a non-pragmatic synthesizer. Finally, we prove that in the special case of synthesis from a single example, this approximation is exact.
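A toy sketch of the amortization recipe, with simple pairwise win counts standing in for the paper's distillation of a global ranking from partially ranked programs; the program names and the scoring rule are illustrative assumptions:

```python
# Illustrative sketch of amortizing a pragmatic synthesizer with a global
# ranking (win-count scores stand in for the paper's distillation step).
from collections import defaultdict

# Partial rankings collected by querying the exact (slow) RSA synthesizer:
# each entry is an ordered list, best program first.
partial_rankings = [
    ["prog_a", "prog_c", "prog_b"],
    ["prog_c", "prog_b"],
    ["prog_a", "prog_b"],
]

# Distill a single global score per program from pairwise wins.
wins = defaultdict(int)
for ranking in partial_rankings:
    for hi, better in enumerate(ranking):
        for worse in ranking[hi + 1:]:
            wins[better] += 1
            wins[worse] -= 1

def rank_candidates(candidates):
    """At inference time, order logically consistent candidates produced by a
    fast, non-pragmatic synthesizer using the distilled global score."""
    return sorted(candidates, key=lambda p: wins[p], reverse=True)

print(rank_candidates(["prog_b", "prog_a", "prog_c"]))
# -> ['prog_a', 'prog_c', 'prog_b'] under these toy rankings
```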
https://proceedings.mlr.press/v235/puigdemont24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/puigdemont24a/puigdemont24a.pdf
https://openreview.net/forum?id=k10805cgak
Learning to Remove Cuts in Integer Linear Programming
https://proceedings.mlr.press/v235/puigdemont24a.html
Pol Puigdemont, Stratis Skoulakis, Grigorios Chrysos, Volkan Cevher
https://proceedings.mlr.press/v235/puigdemont24a.html
ICML 2024
Cutting plane methods are a fundamental approach for solving integer linear programs (ILPs). In each iteration of such methods, additional linear constraints (cuts) are introduced to the constraint set with the aim of excluding the previous fractional optimal solution while not affecting the optimal integer solution. In this work, we explore a novel approach within cutting plane methods: instead of only adding new cuts, we also consider the removal of previous cuts introduced at any of the preceding iterations of the method under a learnable parametric criterion. We demonstrate that in fundamental combinatorial optimization settings such cut removal policies can lead to significant improvements over both human-based and machine learning-guided cut addition policies even when implemented with simple models.
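A hedged sketch of what a learnable parametric removal criterion could look like: score each active cut from a few hand-picked features with a linear model and drop the lowest-scoring cuts each iteration. The feature set and the linear scorer are assumptions, not the paper's parameterization:

```python
# Illustrative sketch of a learnable cut-removal criterion (feature set and
# linear scorer are assumptions, not the paper's exact model).
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=3)  # learnable parameters of the removal policy

def cut_features(cut, x_lp):
    """Simple per-cut features: violation at the current LP point, age, dual."""
    a, b = cut["a"], cut["b"]            # cut: a @ x <= b
    violation = float(a @ x_lp - b)      # > 0 means the cut is currently binding
    return np.array([violation, cut["age"], cut["dual"]])

def remove_cuts(cuts, x_lp, k_remove=2):
    """Score cuts with the parametric criterion and drop the k lowest."""
    scores = [theta @ cut_features(c, x_lp) for c in cuts]
    order = np.argsort(scores)           # ascending: lowest score first
    keep = set(order[k_remove:])
    return [c for i, c in enumerate(cuts) if i in keep]

cuts = [{"a": rng.normal(size=4), "b": 1.0, "age": i, "dual": 0.1 * i}
        for i in range(5)]
x_lp = rng.normal(size=4)
print(len(remove_cuts(cuts, x_lp)))      # 3 cuts kept
```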
https://proceedings.mlr.press/v235/pushkin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pushkin24a/pushkin24a.pdf
https://openreview.net/forum?id=Xeh8171Fce
On the Minimal Degree Bias in Generalization on the Unseen for non-Boolean Functions
https://proceedings.mlr.press/v235/pushkin24a.html
Denys Pushkin, Raphaël Berthier, Emmanuel Abbe
https://proceedings.mlr.press/v235/pushkin24a.html
ICML 2024
We investigate the out-of-domain generalization of random feature (RF) models and Transformers. We first prove that in the ‘generalization on the unseen (GOTU)’ setting, where training data is fully seen in some part of the domain but testing is made on another part, and for RF models in the small feature regime, the convergence takes place to interpolators of minimal degree as in the Boolean case (Abbe et al., 2023). We then consider the sparse target regime and explain how this regime relates to the small feature regime, but with a different regularization term that can alter the picture in the non-Boolean case. We show two different outcomes for the sparse regime with q-ary data tokens: (1) if the data is embedded with roots of unities, then a min-degree interpolator is learned like in the Boolean case for RF models, (2) if the data is not embedded as such, e.g., simply as integers, then RF models and Transformers may not learn minimal degree interpolators. This shows that the Boolean setting and its roots of unities generalization are special cases where the minimal degree interpolator offers a rare characterization of how learning takes place. For more general integer and real-valued settings, a more nuanced picture remains to be fully characterized.
https://proceedings.mlr.press/v235/pyakurel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/pyakurel24a/pyakurel24a.pdf
https://openreview.net/forum?id=KfN76nAcOO
Hierarchical Novelty Detection via Fine-Grained Evidence Allocation
https://proceedings.mlr.press/v235/pyakurel24a.html
Spandan Pyakurel, Qi Yu
https://proceedings.mlr.press/v235/pyakurel24a.html
ICML 2024
By leveraging a hierarchical structure of known classes, Hierarchical Novelty Detection (HND) offers fine-grained detection results that pair detected novel samples with their closest (known) parent class in the hierarchy. Prior knowledge on the parent class provides valuable insights to better understand these novel samples. However, traditional novelty detection methods try to separate novel samples from all known classes using uncertainty or distance based metrics so they are incapable of locating the closest known parent class. Since the novel class is also part of the hierarchy, the model can more easily get confused between samples from known classes and those from novel ones. To achieve effective HND, we propose to augment the known (leaf-level) classes with a set of novel classes, each of which is associated with one parent (i.e., non-leaf) class in the original hierarchy. Such a structure allows us to perform novel fine-grained evidence allocation to differentiate known and novel classes guided by a uniquely designed loss function. Our thorough theoretical analysis shows that fine-grained evidence allocation creates an evidence margin to more precisely separate known and novel classes. Extensive experiments conducted on real-world hierarchical datasets demonstrate the proposed model outperforms the strongest baselines and achieves the best HND performance.
https://proceedings.mlr.press/v235/qi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qi24a/qi24a.pdf
https://openreview.net/forum?id=jr0W36wOBx
Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration
https://proceedings.mlr.press/v235/qi24a.html
Shi-Ang Qi, Yakun Yu, Russell Greiner
https://proceedings.mlr.press/v235/qi24a.html
ICML 2024
Discrimination and calibration represent two important properties of survival analysis, with the former assessing the model’s ability to accurately rank subjects and the latter evaluating the alignment of predicted outcomes with actual events. Given their distinct nature, it is hard for survival models to optimize both simultaneously, especially as many previous results have found that improving calibration tends to diminish discrimination performance. This paper introduces a novel approach utilizing conformal regression that can improve a model’s calibration without degrading discrimination. We provide theoretical guarantees for the above claim, and rigorously validate the efficiency of our approach across 11 real-world datasets, showcasing its practical applicability and robustness in diverse scenarios.
https://proceedings.mlr.press/v235/qian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qian24a/qian24a.pdf
https://openreview.net/forum?id=e3geukCBw6
Momentor: Advancing Video Large Language Model with Fine-Grained Temporal Reasoning
https://proceedings.mlr.press/v235/qian24a.html
Long Qian, Juncheng Li, Yu Wu, Yaobo Ye, Hao Fei, Tat-Seng Chua, Yueting Zhuang, Siliang Tang
https://proceedings.mlr.press/v235/qian24a.html
ICML 2024
Large Language Models (LLMs) demonstrate remarkable proficiency in comprehending and handling text-based tasks. Many efforts are being made to transfer these attributes to the video modality; the resulting models are termed Video-LLMs. However, existing Video-LLMs can only capture the coarse-grained semantics and are unable to effectively handle tasks related to comprehension or localization of specific video segments. In light of these challenges, we propose Momentor, a Video-LLM capable of accomplishing fine-grained temporal understanding tasks. To support the training of Momentor, we design an automatic data generation engine to construct Moment-10M, a large-scale video instruction dataset with segment-level instruction data. We train Momentor on Moment-10M, enabling it to perform segment-level reasoning and localization. Zero-shot evaluations on several tasks demonstrate that Momentor excels in fine-grained temporally grounded comprehension and localization.
https://proceedings.mlr.press/v235/qian24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qian24b/qian24b.pdf
https://openreview.net/forum?id=G0z4bCNmkG
ByMI: Byzantine Machine Identification with False Discovery Rate Control
https://proceedings.mlr.press/v235/qian24b.html
Chengde Qian, Mengyuan Wang, Haojie Ren, Changliang Zou
https://proceedings.mlr.press/v235/qian24b.html
ICML 2024
Various robust estimation methods or algorithms have been proposed to hedge against Byzantine failures in distributed learning. However, there is a lack of systematic approaches to provide theoretical guarantees of significance in detecting those Byzantine machines. In this paper, we develop a general detection procedure, ByMI, via error rate control to address this issue, which is applicable to many robust learning problems. The key idea is to apply the sample-splitting strategy on each worker machine to construct a score statistic integrated with a general robust estimation and then to utilize the symmetry property of those scores to derive a data-driven threshold. The proposed method is dimension insensitive and p-value free with the help of the symmetry property and can achieve false discovery rate control under mild conditions. Numerical experiments on both synthetic and real data validate the theoretical results and demonstrate the effectiveness of our proposed method on Byzantine machine identification.
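A generic symmetry-based threshold of the kind the abstract describes, shown as an illustrative stand-in (not necessarily ByMI's exact rule): under the null, scores are roughly symmetric about zero, so negative scores estimate the number of false positives among positive ones, yielding a data-driven cutoff:

```python
# Illustrative symmetry-based threshold for false discovery rate control
# (a generic construction consistent with the abstract, not ByMI's exact rule).
import numpy as np

def symmetry_threshold(scores, q=0.1):
    """Pick the smallest t with estimated FDP <= q, where the count of scores
    <= -t proxies the number of false positives among scores >= t."""
    candidates = np.sort(np.abs(scores[scores != 0]))
    for t in candidates:
        fdp_hat = (1 + np.sum(scores <= -t)) / max(np.sum(scores >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target level

rng = np.random.default_rng(1)
null_scores = rng.normal(0, 1, size=90)   # honest machines: symmetric scores
byz_scores = rng.normal(4, 1, size=10)    # Byzantine machines: shifted scores
scores = np.concatenate([null_scores, byz_scores])
t = symmetry_threshold(scores, q=0.1)
print("flagged machines:", np.where(scores >= t)[0])
```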
https://proceedings.mlr.press/v235/qian24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qian24c/qian24c.pdf
https://openreview.net/forum?id=KNedb3bQ4h
Efficient Non-stationary Online Learning by Wavelets with Applications to Online Distribution Shift Adaptation
https://proceedings.mlr.press/v235/qian24c.html
Yu-Yang Qian, Peng Zhao, Yu-Jie Zhang, Masashi Sugiyama, Zhi-Hua Zhou
https://proceedings.mlr.press/v235/qian24c.html
ICML 2024
Dynamic regret minimization offers a principled way for non-stationary online learning, where the algorithm’s performance is evaluated against changing comparators. Prevailing methods often employ a two-layer online ensemble, consisting of a group of base learners with different configurations and a meta learner that combines their outputs. Given the evident computational overhead associated with two-layer algorithms, this paper investigates how to attain optimal dynamic regret without deploying a model ensemble. To this end, we introduce the notion of underlying dynamic regret, a specific form of the general dynamic regret that can encompass many applications of interest. We show that almost optimal dynamic regret can be obtained using a single-layer model alone. This is achieved by an adaptive restart equipped with wavelet detection, wherein a novel streaming wavelet operator is introduced to online update the wavelet coefficients via a carefully designed binary indexed tree. We apply our method to the online label shift adaptation problem, leading to new algorithms with optimal dynamic regret and significantly improved computation/storage efficiency compared to prior arts. Extensive experiments validate our proposal.
https://proceedings.mlr.press/v235/qiao24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiao24a/qiao24a.pdf
https://openreview.net/forum?id=eP3vsbB5wW
Ensemble Pruning for Out-of-distribution Generalization
https://proceedings.mlr.press/v235/qiao24a.html
Fengchun Qiao, Xi Peng
https://proceedings.mlr.press/v235/qiao24a.html
ICML 2024
Ensemble of deep neural networks has achieved great success in hedging against single-model failure under distribution shift. However, existing techniques suffer from producing redundant models, limiting predictive diversity and yielding compromised generalization performance. Existing ensemble pruning methods can only guarantee predictive diversity for in-distribution data, which may not transfer well to out-of-distribution (OoD) data. To address this gap, we propose a principled optimization framework for ensemble pruning under distribution shifts. Since the annotations of test data are not available, we explore relationships between prediction distributions of the models, encapsulated in a topology graph. By incorporating this topology into a combinatorial optimization framework, complementary models with high predictive diversity are selected with theoretical guarantees. Our approach is model-agnostic and can be applied on top of a broad spectrum of off-the-shelf ensembling methods for improved generalization performance. Experiments on common benchmarks demonstrate the superiority of our approach in both multi- and single-source OoD generalization. The source codes are publicly available at: https://github.com/joffery/TEP.
https://proceedings.mlr.press/v235/qiao24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiao24b/qiao24b.pdf
https://openreview.net/forum?id=pPNMhdYMaz
Near-Optimal Reinforcement Learning with Self-Play under Adaptivity Constraints
https://proceedings.mlr.press/v235/qiao24b.html
Dan Qiao, Yu-Xiang Wang
https://proceedings.mlr.press/v235/qiao24b.html
ICML 2024
We study the problem of multi-agent reinforcement learning (MARL) with adaptivity constraints — a new problem motivated by real-world applications where deployments of new policies are costly and the number of policy updates must be minimized. For two-player zero-sum Markov Games, we design a (policy) elimination based algorithm that achieves a regret of $\widetilde{O}(\sqrt{H^3 S^2 ABK})$, while the batch complexity is only $O(H+\log\log K)$. In the above, $S$ denotes the number of states, $A,B$ are the number of actions for the two players respectively, $H$ is the horizon and $K$ is the number of episodes. Furthermore, we prove a batch complexity lower bound $\Omega(\frac{H}{\log_{A}K}+\log\log K)$ for all algorithms with $\widetilde{O}(\sqrt{K})$ regret bound, which matches our upper bound up to logarithmic factors. As a byproduct, our techniques naturally extend to learning bandit games and reward-free MARL within near optimal batch complexity. To the best of our knowledge, these are the first line of results towards understanding MARL with low adaptivity.
https://proceedings.mlr.press/v235/qiao24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiao24c/qiao24c.pdf
https://openreview.net/forum?id=ssFMq35UUY
ULAREF: A Unified Label Refinement Framework for Learning with Inaccurate Supervision
https://proceedings.mlr.press/v235/qiao24c.html
Congyu Qiao, Ning Xu, Yihao Hu, Xin Geng
https://proceedings.mlr.press/v235/qiao24c.html
ICML 2024
Learning with inaccurate supervision is often encountered in weakly supervised learning, and researchers have invested a considerable amount of time and effort in designing specialized algorithms for different forms of annotations in inaccurate supervision. In fact, different forms of these annotations share the fundamental characteristic that they all still incorporate some portion of correct labeling information. This commonality can serve as a lever, enabling the creation of a cohesive framework designed to tackle the challenges associated with various forms of annotations in learning with inaccurate supervision. In this paper, we propose a unified label refinement framework named ULAREF, i.e., a Unified LAbel REfinement Framework for learning with inaccurate supervision, which is capable of leveraging label refinement to handle inaccurate supervision. Specifically, our framework trains the predictive model with refined labels through global detection of reliability and local enhancement using an enhanced model fine-tuned by a proposed consistency loss. Also, we theoretically justify that the enhanced model in local enhancement can achieve higher accuracy than the predictive model on the detected unreliable set under mild assumptions.
https://proceedings.mlr.press/v235/qin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qin24a/qin24a.pdf
https://openreview.net/forum?id=cit0hg4sEz
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes
https://proceedings.mlr.press/v235/qin24a.html
Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng
https://proceedings.mlr.press/v235/qin24a.html
ICML 2024
Pre-trained large language models (LLMs) need fine-tuning to improve their responsiveness to natural language instructions. Federated learning offers a way to fine-tune LLMs using the abundant data on end devices without compromising data privacy. Most existing federated fine-tuning methods for LLMs rely on parameter-efficient fine-tuning techniques, which may not reach the performance height possible with full-parameter tuning. However, federated full-parameter tuning of LLMs is a non-trivial problem due to the immense communication cost. This work introduces FedKSeed that employs zeroth-order optimization with a finite set of random seeds. It significantly reduces transmission requirements between the server and clients to just a few random seeds and scalar gradients, amounting to only a few thousand bytes, making federated full-parameter tuning of billion-sized LLMs possible on devices. Building on it, we develop a strategy enabling probability-differentiated seed sampling, prioritizing perturbations with greater impact on model accuracy. Experiments across six scenarios with various LLMs, datasets and data partitions demonstrate that our approach outperforms existing federated LLM fine-tuning methods in both communication efficiency and zero-shot generalization.
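A minimal sketch of the seed-plus-scalar communication pattern described above, using a two-point zeroth-order estimator as an assumed stand-in for the exact FedKSeed recipe; clients send only (seed, scalar) pairs and the server regenerates the perturbations from the seeds:

```python
# Minimal sketch of seed-based zeroth-order federated updates; communication is
# only (seed, scalar) pairs. The two-point estimator and the aggregation rule
# are illustrative assumptions, not the exact FedKSeed algorithm.
import numpy as np

DIM, EPS, LR = 10_000, 1e-3, 0.1
w = np.zeros(DIM)                        # shared model (server and clients)

def loss(w, data):                       # placeholder client objective
    return 0.5 * np.sum((w - data) ** 2)

def client_step(w, data, seed):
    """Return only a scalar projected gradient for the seed's perturbation."""
    z = np.random.default_rng(seed).standard_normal(DIM)
    g = (loss(w + EPS * z, data) - loss(w - EPS * z, data)) / (2 * EPS)
    return g                             # a few bytes instead of DIM floats

def server_aggregate(w, seed_scalar_pairs):
    """Rebuild the update from (seed, scalar) pairs by regenerating each z."""
    for seed, g in seed_scalar_pairs:
        z = np.random.default_rng(seed).standard_normal(DIM)
        w = w - LR * g * z / len(seed_scalar_pairs)
    return w

data = [np.full(DIM, 1.0), np.full(DIM, 2.0)]      # two clients' local data
for round_id in range(5):
    seeds = [round_id * 10 + k for k in range(3)]  # finite candidate seed pool
    pairs = [(s, client_step(w, d, s)) for d in data for s in seeds]
    w = server_aggregate(w, pairs)
print("mean weight after 5 rounds:", w.mean())
```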
https://proceedings.mlr.press/v235/qin24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qin24b/qin24b.pdf
https://openreview.net/forum?id=jQ92egz5Ym
Accurate LoRA-Finetuning Quantization of LLMs via Information Retention
https://proceedings.mlr.press/v235/qin24b.html
Haotong Qin, Xudong Ma, Xingyu Zheng, Xiaoyang Li, Yang Zhang, Shouda Liu, Jie Luo, Xianglong Liu, Michele Magno
https://proceedings.mlr.press/v235/qin24b.html
ICML 2024
The LoRA-finetuning quantization of LLMs has been extensively studied to obtain accurate yet compact LLMs for deployment on resource-constrained hardware. However, existing methods cause the quantized LLM to severely degrade and even fail to benefit from the finetuning of LoRA. This paper proposes a novel IR-QLoRA for pushing quantized LLMs with LoRA to be highly accurate through information retention. The proposed IR-QLoRA mainly relies on two technologies derived from the perspective of unified information: (1) statistics-based Information Calibration Quantization allows the quantized parameters of LLM to retain original information accurately; (2) finetuning-based Information Elastic Connection enables LoRA to utilize elastic representation transformations with diverse information. Comprehensive experiments show that IR-QLoRA can significantly improve accuracy across LLaMA and LLaMA2 families under 2-4 bit-widths, e.g., 4-bit LLaMA-7B achieves 1.4% improvement on MMLU compared with the state-of-the-art methods. The significant performance gain requires only a tiny 0.31% additional time consumption, revealing the satisfactory efficiency of our IR-QLoRA. We highlight that IR-QLoRA enjoys excellent versatility, compatible with various frameworks (e.g., NormalFloat and Integer quantization) and brings general accuracy gains. The code is available at https://github.com/htqin/ir-qlora.
https://proceedings.mlr.press/v235/qin24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qin24c/qin24c.pdf
https://openreview.net/forum?id=Lwm6TiUP4X
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention
https://proceedings.mlr.press/v235/qin24c.html
Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong
https://proceedings.mlr.press/v235/qin24c.html
ICML 2024
We present Lightning Attention, the first linear attention implementation that maintains a constant training speed for various sequence lengths under fixed memory consumption. Due to the issue with cumulative summation operations (cumsum), previous linear attention implementations cannot achieve their theoretical advantage in a causal setting. However, this issue can be effectively solved by utilizing different attention calculation strategies to compute the different parts of attention. Specifically, we split the attention calculation into intra-blocks and inter-blocks and use conventional attention computation for intra-blocks and linear attention kernel tricks for inter-blocks. This eliminates the need for cumsum in the linear attention calculation. Furthermore, a tiling technique is adopted through both forward and backward procedures to take full advantage of the GPU hardware. To enhance accuracy while preserving efficacy, we introduce TransNormerLLM (TNL), a new architecture that is tailored to our lightning attention. We conduct rigorous testing on standard and self-collected datasets with varying model sizes and sequence lengths. TNL is notably more efficient than other language models. In addition, benchmark results indicate that TNL performs on par with state-of-the-art LLMs utilizing conventional transformer structures. The source code is released at github.com/OpenNLPLab/TransnormerLLM.
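A sketch of the intra-/inter-block split for causal linear attention described above; normalization, the TNL architecture, and the GPU tiling are omitted, so this illustrates the decomposition rather than the released kernel:

```python
# Sketch of block-wise causal linear attention: conventional masked attention
# inside each block, a running key-value state across blocks.
import torch

def blocked_linear_attention(q, k, v, block=64):
    """q, k, v: (batch, seq, dim); returns (batch, seq, dim)."""
    b, n, d = q.shape
    kv_state = torch.zeros(b, d, d, dtype=q.dtype, device=q.device)
    out = torch.empty_like(v)
    for start in range(0, n, block):
        end = min(start + block, n)
        qb, kb, vb = q[:, start:end], k[:, start:end], v[:, start:end]
        # Inter-block part: contributions of all previous blocks via the state.
        inter = qb @ kv_state                                  # (b, blk, d)
        # Intra-block part: causal attention restricted to the current block.
        scores = qb @ kb.transpose(-1, -2)                     # (b, blk, blk)
        mask = torch.tril(torch.ones(end - start, end - start,
                                     dtype=torch.bool, device=q.device))
        intra = scores.masked_fill(~mask, 0.0) @ vb
        out[:, start:end] = inter + intra
        # Update the running state with this block's keys and values.
        kv_state = kv_state + kb.transpose(-1, -2) @ vb        # (b, d, d)
    return out

q, k, v = (torch.randn(2, 256, 32) for _ in range(3))
print(blocked_linear_attention(q, k, v).shape)  # torch.Size([2, 256, 32])
```

The running state replaces the cumsum over positions, which is what allows constant per-token cost as sequence length grows.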
https://proceedings.mlr.press/v235/qin24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qin24d/qin24d.pdf
https://openreview.net/forum?id=ks8qSwkkuZ
Feasible Reachable Policy Iteration
https://proceedings.mlr.press/v235/qin24d.html
Shentao Qin, Yujie Yang, Yao Mu, Jie Li, Wenjun Zou, Jingliang Duan, Shengbo Eben Li
https://proceedings.mlr.press/v235/qin24d.html
ICML 2024
Goal-reaching tasks with safety constraints are common control problems in the real world, such as intelligent driving and robot manipulation. The difficulty of this kind of problem comes from exploration termination caused by safety constraints and sparse rewards caused by goals. Existing safe RL avoids unsafe exploration by restricting the search space to a feasible region, the essence of which is pruning of the search space. However, much exploration within the feasible region remains ineffective because the goals are ignored. Our approach considers both safety and goals; the policy space pruning is achieved by a feasible reachable function, which describes whether there is a policy that makes the agent safely reach the goals within a finite time horizon. This function naturally satisfies a self-consistency condition and a risky Bellman equation, which can be solved by fixed-point iteration. On this basis, we propose feasible reachable policy iteration (FRPI), which is divided into three steps: policy evaluation, region expansion, and policy improvement. In the region expansion step, information about the agent's ability to reach the goals accelerates convergence of the feasible region and simultaneously identifies a smaller feasible reachable region. Experimental results verify that the proposed FR function improves convergence speed with better or comparable performance without sacrificing safety, and identifies a smaller policy space with higher sample efficiency.
https://proceedings.mlr.press/v235/qiu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24a/qiu24a.pdf
https://openreview.net/forum?id=2PVjIQdq7N
Efficient PAC Learnability of Dynamical Systems Over Multilayer Networks
https://proceedings.mlr.press/v235/qiu24a.html
Zirou Qiu, Abhijin Adiga, Madhav Marathe, S. S. Ravi, Daniel Rosenkrantz, Richard Stearns, Anil Kumar Vullikanti
https://proceedings.mlr.press/v235/qiu24a.html
ICML 2024
Networked dynamical systems are widely used as formal models of real-world cascading phenomena, such as the spread of diseases and information. Prior research has addressed the problem of learning the behavior of an unknown dynamical system when the underlying network has a single layer. In this work, we study the learnability of dynamical systems over multilayer networks, which are more realistic and challenging. First, we present an efficient PAC learning algorithm with provable guarantees to show that the learner only requires a small number of training examples to infer an unknown system. We further provide a tight analysis of the Natarajan dimension, which measures the model complexity. Asymptotically, our bound on the Natarajan dimension is tight for almost all multilayer graphs. The techniques and insights from our work provide the theoretical foundations for future investigations of learning problems for multilayer dynamical systems.
https://proceedings.mlr.press/v235/qiu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24b/qiu24b.pdf
https://openreview.net/forum?id=WC14xZIaC2
Learning High-Order Relationships of Brain Regions
https://proceedings.mlr.press/v235/qiu24b.html
Weikang Qiu, Huangrui Chu, Selena Wang, Haolan Zuo, Xiaoxiao Li, Yize Zhao, Rex Ying
https://proceedings.mlr.press/v235/qiu24b.html
ICML 2024
Discovering reliable and informative relationships among brain regions from functional magnetic resonance imaging (fMRI) signals is essential in phenotypic predictions in neuroscience. Most of the current methods fail to accurately characterize those interactions because they only focus on pairwise connections and overlook the high-order relationships of brain regions. We propose that these high-order relationships should be maximally informative and minimally redundant (MIMR). However, identifying such high-order relationships is challenging and under-explored due to the exponential search space and the absence of a tractable objective. In response to this gap, we propose a novel method named HyBRiD, which aims to extract MIMR high-order relationships from fMRI data. HyBRiD employs a Constructor to identify hyperedge structures, and a Weighter to compute a weight for each hyperedge, which avoids searching in exponential space. HyBRiD achieves the MIMR objective through an innovative information bottleneck framework named multi-head drop-bottleneck with theoretical guarantees. Our comprehensive experiments demonstrate the effectiveness of our model. Our model outperforms the state-of-the-art predictive model by an average of 11.2%, regarding the quality of hyperedges measured by CPM, a standard protocol for studying brain connections.
https://proceedings.mlr.press/v235/qiu24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24c/qiu24c.pdf
https://openreview.net/forum?id=YWuSLBkfOw
To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO
https://proceedings.mlr.press/v235/qiu24c.html
Zi-Hao Qiu, Siqi Guo, Mao Xu, Tuo Zhao, Lijun Zhang, Tianbao Yang
https://proceedings.mlr.press/v235/qiu24c.html
ICML 2024
The temperature parameter plays a profound role during training and/or inference with large foundation models (LFMs) such as large language models (LLMs) and CLIP models. Particularly, it adjusts the logits in the softmax function in LLMs, which is crucial for next token generation, and it scales the similarities in the contrastive loss for training CLIP models. A significant question remains: “Is it viable to learn a neural network to predict a personalized temperature of any input data for enhancing LFMs?” In this paper, we present a principled framework for learning a small yet generalizable temperature prediction network (TempNet) to improve LFMs. Our solution is composed of a novel learning framework with robust losses underpinned by constrained distributionally robust optimization (DRO), and a properly designed TempNet with theoretical inspiration. TempNet can be trained together with a large foundation model from scratch or learned separately given a pretrained foundation model. It is not only useful for predicting personalized temperature to promote the training of LFMs but also generalizable and transferable to new tasks. Our experiments on LLMs and CLIP models demonstrate that TempNet greatly improves the performance of existing solutions or models.
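A minimal sketch of a per-input temperature network applied to softmax logits; the architecture and the softplus parameterization are assumptions, not necessarily TempNet's design or its DRO-based training losses:

```python
# Minimal sketch of a per-input temperature network applied to logits
# (the architecture and the softplus parameterization are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TempNet(nn.Module):
    def __init__(self, feat_dim, hidden=32, t_min=0.01):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.t_min = t_min

    def forward(self, features):
        # Softplus keeps the temperature positive; t_min avoids collapse.
        return F.softplus(self.net(features)) + self.t_min   # (batch, 1)

feat_dim, n_classes = 16, 10
encoder_out = torch.randn(8, feat_dim)          # features from the base model
logits = torch.randn(8, n_classes)              # its class/similarity logits
labels = torch.randint(0, n_classes, (8,))

temp_net = TempNet(feat_dim)
tau = temp_net(encoder_out)                     # personalized temperatures
loss = F.cross_entropy(logits / tau, labels)    # temperature-scaled softmax
loss.backward()                                 # trains TempNet end to end
print(tau.squeeze(-1))
```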
https://proceedings.mlr.press/v235/qiu24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24d/qiu24d.pdf
https://openreview.net/forum?id=XtDJaSe8jE
Transferring Knowledge From Large Foundation Models to Small Downstream Models
https://proceedings.mlr.press/v235/qiu24d.html
Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Bernie Wang, Andrew Gordon Wilson
https://proceedings.mlr.press/v235/qiu24d.html
ICML 2024
How do we transfer the relevant knowledge from ever larger foundation models into small, task-specific downstream models that can run at much lower costs? Standard transfer learning using pre-trained weights as the initialization transfers limited information and commits us to often massive pre-trained architectures. This procedure also precludes combining multiple pre-trained models that learn complementary information. To address these shortcomings, we introduce Adaptive Feature Transfer (AFT). Instead of transferring weights, AFT operates purely on features, thereby decoupling the choice of the pre-trained model from the smaller downstream model. Rather than indiscriminately compressing all pre-trained features, AFT adaptively transfers pre-trained features that are most useful for performing the downstream task, using a simple regularization that adds minimal overhead. Across multiple vision, language, and multi-modal datasets, AFT achieves significantly better downstream performance compared to alternatives with a similar computational cost. Furthermore, AFT reliably translates improvement in pre-trained models into improvement in downstream performance, even if the downstream model is over $50\times$ smaller, and can effectively transfer complementary information learned by multiple pre-trained models.
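A hedged sketch of feature-level transfer from a frozen pre-trained model to a small downstream model, with a learned linear projection and an L2 penalty standing in for AFT's adaptive regularizer:

```python
# Sketch of feature-level transfer from a frozen pre-trained model to a small
# downstream model (the linear projection and L2 penalty are illustrative
# assumptions, not the exact AFT regularizer).
import torch
import torch.nn as nn
import torch.nn.functional as F

pre_dim, down_dim, n_classes = 256, 64, 10

pretrained = nn.Linear(32, pre_dim)              # stands in for a frozen LFM
for p in pretrained.parameters():
    p.requires_grad_(False)

backbone = nn.Linear(32, down_dim)               # small downstream model
head = nn.Linear(down_dim, n_classes)
proj = nn.Linear(pre_dim, down_dim, bias=False)  # learned feature projection
params = (list(backbone.parameters()) + list(head.parameters())
          + list(proj.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(16, 32)
y = torch.randint(0, n_classes, (16,))

feats = backbone(x)
with torch.no_grad():
    pre_feats = pretrained(x)                    # features, not weights
task_loss = F.cross_entropy(head(feats), y)
transfer_loss = F.mse_loss(feats, proj(pre_feats))  # align with useful features
loss = task_loss + 0.1 * transfer_loss
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Operating on features rather than weights is what decouples the downstream architecture from the (possibly much larger, or multiple) pre-trained models.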
https://proceedings.mlr.press/v235/qiu24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24e/qiu24e.pdf
https://openreview.net/forum?id=0tuwdgBiSN
Complexity Matters: Feature Learning in the Presence of Spurious Correlations
https://proceedings.mlr.press/v235/qiu24e.html
Guanwen Qiu, Da Kuang, Surbhi Goel
https://proceedings.mlr.press/v235/qiu24e.html
ICML 2024
Existing research often posits spurious features as easier to learn than core features in neural network optimization, but the impact of their relative simplicity remains under-explored. Moreover, studies mainly focus on end performance rather than the dynamics of feature learning. In this paper, we propose a theoretical framework and an associated synthetic dataset grounded in boolean function analysis. This setup allows for fine-grained control over the relative complexity (compared to core features) and correlation strength (with respect to the label) of spurious features to study the dynamics of feature learning under spurious correlations. Our findings uncover several interesting phenomena: (1) stronger spurious correlations or simpler spurious features slow down the learning rate of the core features, (2) two distinct subnetworks are formed to learn core and spurious features separately, (3) learning phases of spurious and core features are not always separable, (4) spurious features are not forgotten even after core features are fully learned. We demonstrate that our findings justify the success of retraining the last layer to remove spurious correlations and also identify limitations of popular debiasing algorithms that exploit the early learning of spurious features. We support our empirical findings with theoretical analyses for the case of learning XOR features with a one-hidden-layer ReLU network.
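A sketch of a synthetic boolean dataset in the spirit described above, with a parity core feature of tunable complexity and a spurious bit of tunable label correlation; the exact construction in the paper may differ:

```python
# Sketch of a boolean dataset with a core parity feature of tunable complexity
# and a spurious bit of tunable label correlation (details are assumptions;
# the paper's dataset may differ).
import numpy as np

def make_dataset(n=1000, n_bits=10, core_bits=3, spurious_corr=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n, n_bits))
    # Core feature: parity (XOR) over the first `core_bits` coordinates;
    # more bits means a higher-degree, harder-to-learn core feature.
    y = x[:, :core_bits].sum(axis=1) % 2
    # Spurious feature: one extra bit that equals the label with probability
    # `spurious_corr`, i.e. an easy-to-learn but unreliable shortcut.
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, y, 1 - y)
    return np.column_stack([x, spurious]), y

X, y = make_dataset()
corr = np.corrcoef(X[:, -1], y)[0, 1]
print(X.shape, f"spurious-label correlation: {corr:.2f}")
```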
https://proceedings.mlr.press/v235/qiu24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24f/qiu24f.pdf
https://openreview.net/forum?id=ExHTFXEhc9
Compute Better Spent: Replacing Dense Layers with Structured Matrices
https://proceedings.mlr.press/v235/qiu24f.html
Shikai Qiu, Andres Potapczynski, Marc Anton Finzi, Micah Goldblum, Andrew Gordon Wilson
https://proceedings.mlr.press/v235/qiu24f.html
ICML 2024
Dense linear layers are the dominant computational bottleneck in foundation models. Identifying more efficient alternatives to dense matrices has enormous potential for building more compute-efficient models, as exemplified by the success of convolutional networks in the image domain. In this work, we systematically explore structured matrices as replacements for dense matrices. We show that different structures often require drastically different initialization scales and learning rates, which are crucial to performance, especially as models scale. Using insights from the Maximal Update Parameterization, we determine the optimal scaling for initialization and learning rates of these unconventional layers. Finally, we measure the scaling laws of different structures to compare how quickly their performance improves with compute. We propose a novel matrix family containing Monarch matrices, the Block Tensor-Train (BTT), which we show performs better than dense matrices for the same compute on multiple tasks. On CIFAR-10/100 with augmentation, BTT achieves exponentially lower training loss than dense when training MLPs and ViTs. BTT matches dense ViT-S/32 performance on ImageNet-1k with 3.8 times less compute and is more efficient than dense for training small GPT-2 language models.
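As a hedged illustration of replacing dense layers with structured matrices, the sketch below uses a Monarch-style product of two block-diagonal maps with a permutation in between; it is one member of the structured family discussed above, not the paper's BTT layer:

```python
# Sketch of a structured replacement for a dense layer: two block-diagonal
# maps with a permutation in between (illustrative, not the BTT layer).
import torch
import torch.nn as nn

class BlockStructuredLinear(nn.Module):
    """Product of two block-diagonal maps with a permutation in between."""
    def __init__(self, dim, n_blocks=4):
        super().__init__()
        assert dim % n_blocks == 0
        blk = dim // n_blocks
        # Two sets of small dense blocks replace one dim x dim matrix:
        # 2 * dim * blk parameters instead of dim * dim.
        self.w1 = nn.Parameter(torch.randn(n_blocks, blk, blk) / blk ** 0.5)
        self.w2 = nn.Parameter(torch.randn(n_blocks, blk, blk) / blk ** 0.5)
        self.n_blocks, self.blk = n_blocks, blk

    def forward(self, x):                                  # x: (batch, dim)
        b = x.shape[0]
        h = x.view(b, self.n_blocks, self.blk)
        h = torch.einsum("bnd,nde->bne", h, self.w1)       # block-diagonal map 1
        h = h.transpose(1, 2).contiguous()                 # permute across blocks
        h = h.reshape(b, self.n_blocks, self.blk)
        h = torch.einsum("bnd,nde->bne", h, self.w2)       # block-diagonal map 2
        return h.reshape(b, -1)

layer = BlockStructuredLinear(dim=64, n_blocks=4)
print(layer(torch.randn(8, 64)).shape,
      sum(p.numel() for p in layer.parameters()))
# torch.Size([8, 64]) 2048  (a dense 64x64 weight matrix would use 4096)
```

As the abstract notes, such layers typically need their own initialization scale and learning rate rather than inheriting the dense layer's settings.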
https://proceedings.mlr.press/v235/qiu24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qiu24g/qiu24g.pdf
https://openreview.net/forum?id=vG7YpsJT74
Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization
https://proceedings.mlr.press/v235/qiu24g.html
Ruizhong Qiu, Hanghang Tong
https://proceedings.mlr.press/v235/qiu24g.html
ICML 2024
We study nonconvex zeroth-order optimization (ZOO) in a high-dimensional space $\mathbb R^d$ for functions with approximately $s$-sparse gradients. To reduce the dependence on the dimensionality $d$ in the query complexity, high-dimensional ZOO methods seek to leverage gradient sparsity to design gradient estimators. The previous best method needs $O\big(s\log\frac ds\big)$ queries per step to achieve $O\big(\frac1T\big)$ rate of convergence w.r.t. the number T of steps. In this paper, we propose Gradient Compressed Sensing (GraCe), a query-efficient and accurate estimator for sparse gradients that uses only $O\big(s\log\log\frac ds\big)$ queries per step and still achieves $O\big(\frac1T\big)$ rate of convergence. To our best knowledge, we are the first to achieve a double-logarithmic dependence on $d$ in the query complexity under weaker assumptions. Our proposed GraCe generalizes the Indyk–Price–Woodruff (IPW) algorithm in compressed sensing from linear measurements to nonlinear functions. Furthermore, since the IPW algorithm is purely theoretical due to its impractically large constant, we improve the IPW algorithm via our dependent random partition technique together with our corresponding novel analysis and successfully reduce the constant by a factor of nearly $4300$. Our GraCe is not only theoretically query-efficient but also achieves strong empirical performance. We benchmark our GraCe against $12$ existing ZOO methods with $10000$-dimensional functions and demonstrate that GraCe significantly outperforms existing methods. Our code is publicly available at https://github.com/q-rz/ICML24-GraCe.
https://proceedings.mlr.press/v235/qu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qu24a/qu24a.pdf
https://openreview.net/forum?id=KaAQu5rNU1
MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space
https://proceedings.mlr.press/v235/qu24a.html
Yanru Qu, Keyue Qiu, Yuxuan Song, Jingjing Gong, Jiawei Han, Mingyue Zheng, Hao Zhou, Wei-Ying Ma
https://proceedings.mlr.press/v235/qu24a.html
ICML 2024
Generative models for structure-based drug design (SBDD) have shown promising results in recent years. Existing works mainly focus on how to generate molecules with higher binding affinity, ignoring the feasibility prerequisites for generated 3D poses and resulting in false positives. We conduct thorough studies on key factors of ill-conformational problems when applying autoregressive methods and diffusion to SBDD, including mode collapse and hybrid continuous-discrete space. In this paper, we introduce MolCRAFT, the first SBDD model that operates in the continuous parameter space, together with a novel noise reduced sampling strategy. Empirical results show that our model consistently achieves superior performance in binding affinity with more stable 3D structure, demonstrating our ability to accurately model interatomic interactions. To our best knowledge, MolCRAFT is the first to achieve reference-level Vina Scores (-6.59 kcal/mol) with comparable molecular size, outperforming other strong baselines by a wide margin (-0.84 kcal/mol). Code is available at https://github.com/AlgoMole/MolCRAFT.
https://proceedings.mlr.press/v235/qu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/qu24b/qu24b.pdf
https://openreview.net/forum?id=Uz4Qr40Y3C
Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations
https://proceedings.mlr.press/v235/qu24b.html
Helen Qu, Sang Michael Xie
https://proceedings.mlr.press/v235/qu24b.html
ICML 2024
Models trained on a labeled source domain often generalize poorly when deployed on an out-of-distribution (OOD) target domain. In the domain adaptation setting where unlabeled target data is available, self-supervised pretraining (e.g., contrastive learning or masked autoencoding) is a promising method to mitigate this performance drop. Pretraining depends on generic data augmentations (e.g., cropping or masking) to learn representations that generalize across domains, which may not work for all distribution shifts. In this paper, we show on real-world tasks that standard fine-tuning after pretraining does not consistently improve OOD error over simply training from scratch on labeled source data. To better leverage pretraining for distribution shifts, we propose the Connect Later framework, which fine-tunes the model with targeted augmentations designed with knowledge of the shift. Intuitively, pretraining learns good representations within the source and target domains, while fine-tuning with targeted augmentations improves generalization across domains. Connect Later achieves state-of-the-art OOD accuracy while maintaining comparable or better in-distribution accuracy on 4 real-world tasks in wildlife identification (iWildCam-WILDS), tumor detection (Camelyon17-WILDS), and astronomy (AstroClassification, Redshifts).
https://proceedings.mlr.press/v235/quan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/quan24a/quan24a.pdf
https://openreview.net/forum?id=Ax90jQPbgF
Learning Constraints from Offline Demonstrations via Superior Distribution Correction Estimation
https://proceedings.mlr.press/v235/quan24a.html
Guorui Quan, Zhiqiang Xu, Guiliang Liu
https://proceedings.mlr.press/v235/quan24a.html
ICML 2024
An effective approach for learning both safety constraints and control policies is Inverse Constrained Reinforcement Learning (ICRL). Previous ICRL algorithms commonly employ an online learning framework that permits unlimited sampling from an interactive environment. This setting, however, is infeasible in many realistic applications where data collection is dangerous and expensive. To address this challenge, we propose Inverse Constrained Superior Distribution Correction Estimation (ICSDICE) as an offline ICRL solver. ICSDICE extracts feasible constraints from superior distributions, thereby highlighting policies with expert-exceeding rewards maximization ability. To estimate these distributions, ICSDICE solves a regularized dual optimization problem for safe control by exploiting the observed reward signals and expert preferences. Striving for transferable constraints and unbiased estimations, ICSDICE actively encourages sparsity and incorporates a discounting effect within the learned and observed distributions. Empirical studies show that ICSDICE outperforms other baselines by accurately recovering the constraints and adapting to high-dimensional environments. The code is available at https://github.com/quangr/ICSDICE.
https://proceedings.mlr.press/v235/quang24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/quang24a/quang24a.pdf
https://openreview.net/forum?id=NeO2hoSexj
Augmenting Decision with Hypothesis in Reinforcement Learning
https://proceedings.mlr.press/v235/quang24a.html
Nguyen Minh Quang, Hady W. Lauw
https://proceedings.mlr.press/v235/quang24a.html
ICML 2024
Value-based reinforcement learning is the current state of the art due to its high sampling efficiency. However, our study shows that it suffers from low exploitation in the early training period and from sensitivity to bias. To address these issues, we propose to augment the decision-making process with a hypothesis, a weak form of environment description. Our approach relies on prompting the learning agent with accurate hypotheses, and designing a ready-to-adapt policy through incremental learning. We propose the ALH algorithm, with detailed analyses on a typical learning scheme and a diverse set of Mujoco benchmarks. Our algorithm produces a significant improvement over value-based learning algorithms and other strong baselines. Our code is available at Github URL.