Each record below lists the following fields in order: abs (proceedings URL), Download PDF, OpenReview, title, url, authors, detail_url, tags, abstract.
https://proceedings.mlr.press/v235/lim24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lim24a/lim24a.pdf
https://openreview.net/forum?id=acTLXagzqd
Graph Geometry-Preserving Autoencoders
https://proceedings.mlr.press/v235/lim24a.html
Jungbin Lim, Jihwan Kim, Yonghyeon Lee, Cheongjae Jang, Frank C. Park
https://proceedings.mlr.press/v235/lim24a.html
ICML 2024
When using an autoencoder to learn the low-dimensional manifold of high-dimensional data, it is crucial to find the latent representations that preserve the geometry of the data manifold. However, most existing studies assume a Euclidean nature for the high-dimensional data space, which is arbitrary and often does not precisely reflect the underlying semantic or domain-specific attributes of the data. In this paper, we propose a novel autoencoder regularization framework based on the premise that the geometry of the data manifold can often be better captured with a well-designed similarity graph associated with data points. Given such a graph, we utilize a Riemannian geometric distortion measure as a regularizer to preserve the geometry derived from the graph Laplacian and make it suitable for larger-scale autoencoder training. Through extensive experiments, we show that our method outperforms existing state-of-the-art geometry-preserving and graph-based autoencoders with respect to learning accurate latent structures that preserve the graph geometry, and is particularly effective in learning dynamics in the latent space. Code is available at https://github.com/JungbinLim/GGAE-public.
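A minimal sketch of the general idea described above — regularizing an autoencoder's latent codes with geometry derived from a similarity graph. It uses the simple Dirichlet energy tr(Zᵀ L Z) of the latents rather than the paper's Riemannian distortion measure, and all class/parameter names are illustrative, not the GGAE code.

```python
# Hedged sketch, not the authors' GGAE implementation: an autoencoder loss with a
# graph-Laplacian smoothness penalty on the latent codes. The paper uses a Riemannian
# geometric distortion measure; the Dirichlet energy below is only a stand-in.
import torch
import torch.nn as nn

class GraphRegularizedAE(nn.Module):
    def __init__(self, dim_in, dim_latent, lam=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(), nn.Linear(128, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(), nn.Linear(128, dim_in))
        self.lam = lam  # regularization weight (hypothetical hyperparameter)

    def loss(self, x, laplacian):
        # laplacian: (batch, batch) graph Laplacian built from a similarity graph over x
        z = self.encoder(x)
        recon = self.decoder(z)
        recon_loss = ((recon - x) ** 2).mean()
        dirichlet = torch.trace(z.T @ laplacian @ z) / x.shape[0]  # penalize latents that vary across graph edges
        return recon_loss + self.lam * dirichlet
```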
https://proceedings.mlr.press/v235/lim24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lim24b/lim24b.pdf
https://openreview.net/forum?id=ngjmcfowtc
Momentum Particle Maximum Likelihood
https://proceedings.mlr.press/v235/lim24b.html
Jen Ning Lim, Juan Kuntz, Samuel Power, Adam Michael Johansen
https://proceedings.mlr.press/v235/lim24b.html
ICML 2024
Maximum likelihood estimation (MLE) of latent variable models is often recast as the minimization of a free energy functional over an extended space of parameters and probability distributions. This perspective was recently combined with insights from optimal transport to obtain novel particle-based algorithms for fitting latent variable models to data. Drawing inspiration from prior works which interpret ‘momentum-enriched’ optimization algorithms as discretizations of ordinary differential equations, we propose an analogous dynamical-systems-inspired approach to minimizing the free energy functional. The result is a dynamical system that blends elements of Nesterov’s Accelerated Gradient method, the underdamped Langevin diffusion, and particle methods. Under suitable assumptions, we prove that the continuous-time system minimizes the functional. By discretizing the system, we obtain a practical algorithm for MLE in latent variable models. The algorithm outperforms existing particle methods in numerical experiments and compares favourably with other MLE algorithms.
https://proceedings.mlr.press/v235/lin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24a/lin24a.pdf
https://openreview.net/forum?id=q14AbM4kdv
An Effective Dynamic Gradient Calibration Method for Continual Learning
https://proceedings.mlr.press/v235/lin24a.html
Weichen Lin, Jiaxiang Chen, Ruomin Huang, Hu Ding
https://proceedings.mlr.press/v235/lin24a.html
ICML 2024
Continual learning (CL) is a fundamental topic in machine learning, where the goal is to train a model with continuously incoming data and tasks. Due to the memory limit, we cannot store all the historical data, and therefore confront the “catastrophic forgetting” problem, i.e., the performance on previous tasks can substantially decrease because their information is missing in later training stages. Though a number of elegant methods have been proposed, the catastrophic forgetting phenomenon still cannot be well avoided in practice. In this paper, we study the problem from the gradient perspective, where our aim is to develop an effective algorithm to calibrate the gradient in each updating step of the model; namely, our goal is to guide the model to be updated in the right direction when a large amount of historical data is unavailable. Our idea is partly inspired by the seminal stochastic variance reduction methods (e.g., SVRG and SAGA) for reducing the variance of gradient estimation in stochastic gradient descent algorithms. Another benefit is that our approach can be used as a general tool, which can be incorporated into several existing popular CL methods to achieve better performance. We also conduct a set of experiments on several benchmark datasets to evaluate the performance in practice.
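A hedged sketch of the SVRG-style correction the abstract cites as inspiration. This illustrates the variance-reduction idea itself, not the paper's continual-learning algorithm; the function and argument names are hypothetical.

```python
# Hedged sketch of the SVRG idea that motivates gradient calibration: correct a noisy
# minibatch gradient with gradients computed at a stored reference (snapshot) point.
import numpy as np

def svrg_calibrated_gradient(grad_batch, grad_batch_at_ref, grad_full_at_ref):
    """grad_batch:        minibatch gradient at the current parameters
       grad_batch_at_ref: gradient of the same minibatch at the reference parameters
       grad_full_at_ref:  full (or memory-buffer) gradient at the reference parameters"""
    return grad_batch - grad_batch_at_ref + grad_full_at_ref

# The calibrated direction keeps the expectation of the minibatch gradient while reducing
# its variance -- the property that makes it attractive when most historical data is gone.
g = svrg_calibrated_gradient(np.array([0.9, -0.2]), np.array([1.1, 0.0]), np.array([1.0, 0.1]))
```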
https://proceedings.mlr.press/v235/lin24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24b/lin24b.pdf
https://openreview.net/forum?id=VRv8KjJNuj
Equivariant Diffusion for Crystal Structure Prediction
https://proceedings.mlr.press/v235/lin24b.html
Peijia Lin, Pin Chen, Rui Jiao, Qing Mo, Cen Jianhuan, Wenbing Huang, Yang Liu, Dan Huang, Yutong Lu
https://proceedings.mlr.press/v235/lin24b.html
ICML 2024
In addressing the challenge of Crystal Structure Prediction (CSP), symmetry-aware deep learning models, particularly diffusion models, have been extensively studied, which treat CSP as a conditional generation task. However, ensuring permutation, rotation, and periodic translation equivariance during the diffusion process remains incompletely addressed. In this work, we propose EquiCSP, a novel equivariant diffusion-based generative model. We not only address the overlooked issue of lattice permutation equivariance in existing models, but also develop a unique noising algorithm that rigorously maintains periodic translation equivariance throughout both training and inference processes. Our experiments indicate that EquiCSP significantly surpasses existing models in terms of generating accurate structures and demonstrates faster convergence during the training process.
https://proceedings.mlr.press/v235/lin24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24c/lin24c.pdf
https://openreview.net/forum?id=J5VB1h3Aed
Revisiting the Role of Language Priors in Vision-Language Models
https://proceedings.mlr.press/v235/lin24c.html
Zhiqiu Lin, Xinyue Chen, Deepak Pathak, Pengchuan Zhang, Deva Ramanan
https://proceedings.mlr.press/v235/lin24c.html
ICML 2024
Vision-language models (VLMs) are impactful in part because they can be applied to a variety of visual understanding tasks in a zero-shot fashion, without any fine-tuning. We study $\textit{generative VLMs}$ that are trained for next-word generation given an image. We explore their zero-shot performance on the illustrative task of image-text retrieval across nine popular vision-language benchmarks. Our first observation is that they can be repurposed for discriminative tasks (such as image-text retrieval) by simply computing the match score of generating a particular text string given an image. We call this probabilistic score the Visual Generative Pre-Training Score (VisualGPTScore). While the VisualGPTScore produces near-perfect accuracy on some retrieval benchmarks, it yields poor accuracy on others. We analyze this behavior through a probabilistic lens, pointing out that some benchmarks inadvertently capture unnatural language distributions by creating adversarial but unlikely text captions. In fact, we demonstrate that even a "blind" language model that ignores any image evidence can sometimes outperform all prior art, reminiscent of similar challenges faced by the visual-question answering (VQA) community many years ago. We derive a probabilistic post-processing scheme that controls for the amount of linguistic bias in generative VLMs at test time without having to retrain or fine-tune the model. We show that the VisualGPTScore, when appropriately debiased, is a strong zero-shot baseline for vision-language understanding, oftentimes producing state-of-the-art accuracy.
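A hedged sketch of the scoring-and-debiasing scheme sketched in the abstract: rank captions by the generative log-likelihood of the text given the image, optionally discounted by a language-prior term. The function name and the exponent alpha are illustrative assumptions, not the paper's API or exact formula.

```python
# Hedged sketch: a VisualGPTScore-style retrieval score with a language-prior correction.
# log_p_text_given_image and log_p_text would come from a generative VLM and a "blind"
# language model respectively; alpha is a hypothetical debiasing weight.
def debiased_visual_gpt_score(log_p_text_given_image, log_p_text, alpha=1.0):
    # alpha = 0 keeps the raw generative score; alpha = 1 fully discounts the language prior.
    return log_p_text_given_image - alpha * log_p_text

# Image-text retrieval then ranks candidate captions for an image by this debiased score,
# with no retraining or fine-tuning of the underlying VLM.
```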
https://proceedings.mlr.press/v235/lin24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24d/lin24d.pdf
https://openreview.net/forum?id=XoencoHWy7
Non-confusing Generation of Customized Concepts in Diffusion Models
https://proceedings.mlr.press/v235/lin24d.html
Wang Lin, Jingyuan Chen, Jiaxin Shi, Yichen Zhu, Chen Liang, Junzhong Miao, Tao Jin, Zhou Zhao, Fei Wu, Shuicheng Yan, Hanwang Zhang
https://proceedings.mlr.press/v235/lin24d.html
ICML 2024
We tackle the common challenge of inter-concept visual confusion in compositional concept generation using text-guided diffusion models (TGDMs). It becomes even more pronounced in the generation of customized concepts, due to the scarcity of user-provided concept visual examples. By revisiting the two major stages leading to the success of TGDMs—1) contrastive image-language pre-training (CLIP) for the text encoder that encodes visual semantics, and 2) training the TGDM that decodes the textual embeddings into pixels—we point out that existing customized generation methods only focus on fine-tuning the second stage while overlooking the first one. To this end, we propose a simple yet effective solution called CLIF: contrastive image-language fine-tuning. Specifically, given a few samples of customized concepts, we obtain non-confusing textual embeddings of a concept by fine-tuning CLIP via contrasting a concept and the over-segmented visual regions of other concepts. Experimental results demonstrate the effectiveness of CLIF in preventing the confusion of multi-customized concept generation. Project page: https://clif-official.github.io/clif.
https://proceedings.mlr.press/v235/lin24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24e/lin24e.pdf
https://openreview.net/forum?id=vuMD71R20q
Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective
https://proceedings.mlr.press/v235/lin24e.html
Wu Lin, Felix Dangel, Runa Eschenhagen, Juhan Bae, Richard E. Turner, Alireza Makhzani
https://proceedings.mlr.press/v235/lin24e.html
ICML 2024
Adaptive gradient optimizers like Adam(W) are the default training algorithms for many deep learning architectures, such as transformers. Their diagonal preconditioner is based on the gradient outer product which is incorporated into the parameter update via a square root. While these methods are often motivated as approximate second-order methods, the square root represents a fundamental difference. In this work, we investigate how the behavior of adaptive methods changes when we remove the root, i.e. strengthen their second-order motivation. Surprisingly, we find that such square-root-free adaptive methods close the generalization gap to SGD on convolutional architectures, while maintaining their root-based counterpart’s performance on transformers. The second-order perspective also has practical benefits for the development of non-diagonal adaptive methods through the concept of preconditioner invariance. In contrast to root-based methods like Shampoo, the root-free counterparts do not require numerically unstable matrix root decompositions and inversions, thus work well in half precision. Our findings provide new insights into the development of adaptive methods and raise important questions regarding the currently overlooked role of adaptivity for their success.
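A hedged sketch contrasting a root-based diagonal update with its square-root-free counterpart, to make the distinction discussed above concrete. This is a schematic (no bias correction, no weight decay), not the paper's exact method.

```python
# Hedged sketch: Adam-like diagonal preconditioning with and without the square root.
import numpy as np

def adaptive_step(param, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, use_root=True):
    m = beta1 * m + (1 - beta1) * grad          # first moment
    v = beta2 * v + (1 - beta2) * grad ** 2     # diagonal of the gradient outer product
    # Removing the root treats v as a diagonal curvature estimate, strengthening the
    # second-order interpretation that the paper investigates.
    precond = np.sqrt(v) + eps if use_root else v + eps
    return param - lr * m / precond, m, v
```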
https://proceedings.mlr.press/v235/lin24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24f/lin24f.pdf
https://openreview.net/forum?id=Y2wRKE0Qor
Structured Inverse-Free Natural Gradient Descent: Memory-Efficient & Numerically-Stable KFAC
https://proceedings.mlr.press/v235/lin24f.html
Wu Lin, Felix Dangel, Runa Eschenhagen, Kirill Neklyudov, Agustinus Kristiadi, Richard E. Turner, Alireza Makhzani
https://proceedings.mlr.press/v235/lin24f.html
ICML 2024
Second-order methods such as KFAC can be useful for neural net training. However, they are often memory-inefficient since their preconditioning Kronecker factors are dense, and numerically unstable in low precision as they require matrix inversion or decomposition. These limitations render such methods unpopular for modern mixed-precision training. We address them by (i) formulating an inverse-free KFAC update and (ii) imposing structures in the Kronecker factors, resulting in structured inverse-free natural gradient descent (SINGD). On modern neural networks, we show that SINGD is memory-efficient and numerically robust, in contrast to KFAC, and often outperforms AdamW even in half precision. Our work closes a gap between first- and second-order methods in modern low-precision training.
https://proceedings.mlr.press/v235/lin24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24g/lin24g.pdf
https://openreview.net/forum?id=7dP6Yq9Uwv
Learning to Model the World With Language
https://proceedings.mlr.press/v235/lin24g.html
Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca Dragan
https://proceedings.mlr.press/v235/lin24g.html
ICML 2024
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world. While current agents can learn to execute simple language instructions, we aim to build agents that leverage diverse language—language like "this button turns on the TV" or "I put the bowls away"—that conveys general knowledge, describes the state of the world, provides interactive feedback, and more. Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future: what they will observe, how the world will behave, and which situations will be rewarded. This perspective unifies language understanding with future prediction as a powerful self-supervised learning objective. We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations, and learns to act from imagined model rollouts. While current methods that learn language-conditioned policies degrade in performance with more diverse types of language, we show that Dynalang learns to leverage environment descriptions, game rules, and instructions to excel on tasks ranging from game-playing to navigating photorealistic home scans. Finally, we show that our method enables additional capabilities due to learning a generative model: Dynalang can be pretrained on text-only data, enabling learning from offline datasets, and generate language grounded in an environment.
https://proceedings.mlr.press/v235/lin24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24h/lin24h.pdf
https://openreview.net/forum?id=nvfZgdHtHc
Robustness of Deep Learning for Accelerated MRI: Benefits of Diverse Training Data
https://proceedings.mlr.press/v235/lin24h.html
Kang Lin, Reinhard Heckel
https://proceedings.mlr.press/v235/lin24h.html
ICML 2024
Deep-learning-based methods for image reconstruction are state-of-the-art for a variety of imaging tasks. However, neural networks often perform worse if the training data differs significantly from the data they are applied to. For example, a model trained for accelerated magnetic resonance imaging (MRI) on one scanner performs worse on another scanner. In this work, we investigate the impact of the training data on a model’s performance and robustness for accelerated MRI. We find that models trained on the combination of various data distributions, such as those obtained from different MRI scanners and anatomies, exhibit robustness equal to or superior to models trained on the best single distribution for a specific target distribution. Thus, training on such diverse data tends to improve robustness. Furthermore, training on such a diverse dataset does not compromise in-distribution performance, i.e., a model trained on diverse data yields in-distribution performance at least as good as models trained on the more narrow individual distributions. Our results suggest that training a model for imaging on a variety of distributions tends to yield a more effective and robust model than maintaining separate models for individual distributions.
https://proceedings.mlr.press/v235/lin24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24i/lin24i.pdf
https://openreview.net/forum?id=guFsTBXsov
Equivariance via Minimal Frame Averaging for More Symmetries and Efficiency
https://proceedings.mlr.press/v235/lin24i.html
Yuchao Lin, Jacob Helwig, Shurui Gui, Shuiwang Ji
https://proceedings.mlr.press/v235/lin24i.html
ICML 2024
We consider achieving equivariance in machine learning systems via frame averaging. Current frame averaging methods involve a costly sum over large frames or rely on sampling-based approaches that only yield approximate equivariance. Here, we propose Minimal Frame Averaging (MFA), a mathematical framework for constructing provably minimal frames that are exactly equivariant. The general foundations of MFA also allow us to extend frame averaging to more groups than previously considered, including the Lorentz group for describing symmetries in space-time, and the unitary group for complex-valued domains. Results demonstrate the efficiency and effectiveness of encoding symmetries via MFA across a diverse range of tasks, including $n$-body simulation, top tagging in collider physics, and relaxed energy prediction. Our code is available at https://github.com/divelab/MFA.
https://proceedings.mlr.press/v235/lin24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24j/lin24j.pdf
https://openreview.net/forum?id=Bq2THeNXRr
Selecting Large Language Model to Fine-tune via Rectified Scaling Law
https://proceedings.mlr.press/v235/lin24j.html
Haowei Lin, Baizhou Huang, Haotian Ye, Qinyu Chen, Zihao Wang, Sujian Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang
https://proceedings.mlr.press/v235/lin24j.html
ICML 2024
The ever-growing ecosystem of LLMs has posed a challenge in selecting the most appropriate pre-trained model to fine-tune amidst a sea of options. Given constrained resources, fine-tuning all models and making selections afterward is unrealistic. In this work, we formulate this resource-constrained selection task into predicting fine-tuning performance and illustrate its natural connection with Scaling Law. Unlike pre-training, we find that the fine-tuning scaling curve includes not just the well-known "power phase" but also the previously unobserved "pre-power phase". We also explain why existing Scaling Law fails to capture this phase transition phenomenon both theoretically and empirically. To address this, we introduce the concept of "pre-learned data size" into our Rectified Scaling Law, which overcomes theoretical limitations and fits experimental results much better. By leveraging our law, we propose a novel LLM selection algorithm that selects the near-optimal model with hundreds of times less resource consumption, while other methods may provide negatively correlated selection. The project page is available at rectified-scaling-law.github.io.
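A hedged sketch of how a scaling curve with a "pre-learned data size" term could be fit in practice. The functional form L(D) = B / (D_l + D)^beta + E is our reading of the abstract and is an assumption; the paper's exact Rectified Scaling Law and fitting procedure should be taken from the paper itself, and the numbers below are toy values.

```python
# Hedged sketch: fitting a fine-tuning scaling curve of the assumed form
#   L(D) = B / (D_l + D)**beta + E,
# where D_l plays the role of the "pre-learned data size". Toy data, illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def rectified_law(D, B, D_l, beta, E):
    return B / (D_l + D) ** beta + E

D = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # fine-tuning set sizes (toy)
L = np.array([2.9, 2.8, 2.4, 1.9, 1.6])   # observed losses (toy)
params, _ = curve_fit(rectified_law, D, L, p0=[10.0, 1e3, 0.3, 1.0], maxfev=10000)
# The fitted curve can then be extrapolated to rank candidate models by predicted
# fine-tuning performance at the full data budget.
```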
https://proceedings.mlr.press/v235/lin24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24k/lin24k.pdf
https://openreview.net/forum?id=eVGpdivOnQ
Graph-enhanced Large Language Models in Asynchronous Plan Reasoning
https://proceedings.mlr.press/v235/lin24k.html
Fangru Lin, Emanuele La Malfa, Valentin Hofmann, Elle Michelle Yang, Anthony G. Cohn, Janet B. Pierrehumbert
https://proceedings.mlr.press/v235/lin24k.html
ICML 2024
Planning is a fundamental property of human intelligence. Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs. Can large language models (LLMs) succeed at this task? Here, we present the first large-scale study investigating this question. We find that a representative set of closed and open-source LLMs, including GPT-4 and LLaMA-2, behave poorly when not supplied with illustrations about the task-solving process in our benchmark AsyncHow. We propose a novel technique called Plan Like a Graph (PLaG) that combines graphs with natural language prompts and achieves state-of-the-art results. We show that although PLaG can boost model performance, LLMs still suffer from drastic degradation when task complexity increases, highlighting the limits of utilizing LLMs for simulating digital devices. We see our study as an exciting step towards using LLMs as efficient autonomous agents. Our code and data are available at https://github.com/fangru-lin/graph-llm-asynchow-plan.
https://proceedings.mlr.press/v235/lin24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24l/lin24l.pdf
https://openreview.net/forum?id=ElVHUWyL3n
Dual Operating Modes of In-Context Learning
https://proceedings.mlr.press/v235/lin24l.html
Ziqian Lin, Kangwook Lee
https://proceedings.mlr.press/v235/lin24l.html
ICML 2024
In-context learning (ICL) exhibits dual operating modes: task learning, i.e., acquiring a new skill from in-context samples, and task retrieval, i.e., locating and activating a relevant pretrained skill. Recent theoretical work proposes various mathematical models to analyze ICL, but they cannot fully explain the duality. In this work, we analyze a generalized probabilistic model for pretraining data, obtaining a quantitative understanding of the two operating modes of ICL. Leveraging our analysis, we provide the first explanation of an unexplained phenomenon observed with real-world large language models (LLMs). Under some settings, the ICL risk initially increases and then decreases with more in-context examples. Our analysis offers a plausible explanation for this "early ascent" phenomenon: a limited number of in-context samples may lead to the retrieval of an incorrect skill, thereby increasing the risk, which will eventually diminish as task learning takes effect with more in-context samples. We also analyze ICL with biased labels, e.g., zero-shot ICL, where in-context examples are assigned random labels, and predict the bounded efficacy of such approaches. We corroborate our analysis and predictions with extensive experiments with Transformers and LLMs.
https://proceedings.mlr.press/v235/lin24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24m/lin24m.pdf
https://openreview.net/forum?id=KpUdNe9lsr
HGAP: Boosting Permutation Invariant and Permutation Equivariant in Multi-Agent Reinforcement Learning via Graph Attention Network
https://proceedings.mlr.press/v235/lin24m.html
Bor-Jiun Lin, Chun-Yi Lee
https://proceedings.mlr.press/v235/lin24m.html
ICML 2024
Graph representation has gained widespread application across various machine learning domains, attributed to its ability to discern correlations among input nodes. In the realm of Multi-agent Reinforcement Learning (MARL), agents are tasked with observing other entities within their environment to determine their behavior. Conventional MARL methodologies often suffer from training difficulties if Permutation Invariant (PI) and Permutation Equivariant (PE) properties are not considered during training. The adoption of graph representation offers a solution to these challenges by conceptualizing observed entities as a graph. In this context, we introduce the Hyper Graphical Attention Policy (HGAP) Network, which employs a graph attention mechanism to fulfill the PI and PE properties, while also understanding inter-entity interactions for decision-making. HGAP is assessed across various MARL benchmarks to confirm its effectiveness and efficiency. In addition, a series of ablation studies are provided to demonstrate its adaptability, transferability, and the capability to alleviate the complexities introduced by the POMDP constraint.
https://proceedings.mlr.press/v235/lin24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24n/lin24n.pdf
https://openreview.net/forum?id=54NSHO0lFe
SparseTSF: Modeling Long-term Time Series Forecasting with *1k* Parameters
https://proceedings.mlr.press/v235/lin24n.html
Shengsheng Lin, Weiwei Lin, Wentai Wu, Haojun Chen, Junjie Yang
https://proceedings.mlr.press/v235/lin24n.html
ICML 2024
This paper introduces SparseTSF, a novel, extremely lightweight model for Long-term Time Series Forecasting (LTSF), designed to address the challenges of modeling complex temporal dependencies over extended horizons with minimal computational resources. At the heart of SparseTSF lies the Cross-Period Sparse Forecasting technique, which simplifies the forecasting task by decoupling the periodicity and trend in time series data. This technique involves downsampling the original sequences to focus on cross-period trend prediction, effectively extracting periodic features while minimizing the model’s complexity and parameter count. Based on this technique, the SparseTSF model uses fewer than 1k parameters to achieve competitive or superior performance compared to state-of-the-art models. Furthermore, SparseTSF showcases remarkable generalization capabilities, making it well-suited for scenarios with limited computational resources, small samples, or low-quality data. The code is publicly available at this repository: https://github.com/lss-1138/SparseTSF.
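A hedged sketch of cross-period forecasting as we read it from the abstract: reshape the series by its period so each column is a fixed phase, then predict future cycles from past cycles with one small shared linear map. Shapes and the omission of trend/normalization details are simplifications; this is not the official SparseTSF code.

```python
# Hedged sketch: downsample a periodic series into per-phase subsequences and apply a
# single linear map, shared across phases, along the cycle dimension.
import numpy as np

def cross_period_forecast(history, w, W):
    """history: (L,) with L divisible by the period w
       W:       (n_out, L // w) shared map over cycles -> forecasts n_out * w future steps"""
    cycles = history.reshape(-1, w)       # (n_in cycles, w phases)
    future_cycles = W @ cycles            # (n_out cycles, w phases), one prediction per phase
    return future_cycles.reshape(-1)

rng = np.random.default_rng(0)
hist = np.sin(np.arange(96) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(96)
W = rng.standard_normal((2, 96 // 24)) * 0.1   # only 8 parameters for a 48-step forecast
pred = cross_period_forecast(hist, 24, W)
# The tiny parameter count of W illustrates how a sub-1k-parameter forecaster is possible.
```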
https://proceedings.mlr.press/v235/lin24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24o/lin24o.pdf
https://openreview.net/forum?id=uog14iBFLA
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
https://proceedings.mlr.press/v235/lin24o.html
Jiabin Lin, Shana Moothedath, Namrata Vaswani
https://proceedings.mlr.press/v235/lin24o.html
ICML 2024
We study how representation learning can improve the learning efficiency of contextual bandit problems. We study the setting where we play T linear contextual bandits of dimension d simultaneously, and these T bandit tasks collectively share a common linear representation with a dimensionality of r ≪ d. We present a new algorithm based on alternating projected gradient descent (GD) and a minimization estimator to recover a low-rank feature matrix. We obtain constructive provable guarantees for our estimator that provide a lower bound on the required sample complexity and an upper bound on the iteration complexity (total number of iterations needed to achieve a certain error level). Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove the regret bound of our algorithm. We present experiments comparing the performance of our algorithm against benchmark algorithms.
https://proceedings.mlr.press/v235/lin24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24p/lin24p.pdf
https://openreview.net/forum?id=mGsF8Q0fGZ
On Hypothesis Transfer Learning of Functional Linear Models
https://proceedings.mlr.press/v235/lin24p.html
Haotian Lin, Matthew Reimherr
https://proceedings.mlr.press/v235/lin24p.html
ICML 2024
We study transfer learning (TL) for functional linear regression (FLR) under the Reproducing Kernel Hilbert Space (RKHS) framework, observing that the TL techniques used in existing high-dimensional linear regression are not compatible with the truncation-based FLR methods, as functional data are intrinsically infinite-dimensional and generated by smooth underlying processes. We measure the similarity across tasks using RKHS distance, allowing the type of information being transferred to be tied to the properties of the imposed RKHS. Building on the hypothesis offset transfer learning paradigm, two algorithms are proposed: one conducts the transfer when positive sources are known, while the other leverages aggregation techniques to achieve robust transfer without prior information about the sources. We establish asymptotic lower bounds for this learning problem and show the proposed algorithms enjoy a matching upper bound. These analyses provide statistical insights into factors that contribute to the dynamics of the transfer. We also extend the results to functional generalized linear models. The effectiveness of the proposed algorithms is demonstrated via extensive synthetic data as well as real-world data applications.
https://proceedings.mlr.press/v235/lin24q.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24q/lin24q.pdf
https://openreview.net/forum?id=v0VUsQI5yw
Smoothness Adaptive Hypothesis Transfer Learning
https://proceedings.mlr.press/v235/lin24q.html
Haotian Lin, Matthew Reimherr
https://proceedings.mlr.press/v235/lin24q.html
ICML 2024
Many existing two-phase kernel-based hypothesis transfer learning algorithms employ the same kernel regularization across phases and rely on the known smoothness of functions to obtain optimality. Therefore, they fail to adapt to the varying and unknown smoothness between the target/source and their offset. This paper introduces Smoothness Adaptive Transfer Learning (SATL), a two-phase kernel ridge regression (KRR)-based algorithm to address these limitations. We first demonstrate that employing a misspecified fixed bandwidth Gaussian kernel in target-only KRR learning can achieve minimax optimality when the true function resides in Sobolev spaces. Leveraging this result, SATL enables the estimators to provably and universally adapt to the varying and unknown Sobolev smoothness of the source and offset functions. We derive the minimax lower bound of the learning problem in excess risk and show that SATL achieves a matching upper bound up to logarithmic factors. The optimal statistical rate reveals the factors influencing the transfer dynamics and efficacy, including the source sample size and the relative strength between domains. The theoretical findings and the effectiveness of SATL are confirmed by several experiments.
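A hedged sketch of the two-phase kernel-ridge offset-transfer paradigm that SATL builds on: fit a source model, then fit a second KRR to the target residuals (the "offset") and add the two. SATL's actual contribution (Gaussian-kernel bandwidth choices that adapt to unknown Sobolev smoothness) is not reproduced; kernels and penalties below are placeholders.

```python
# Hedged sketch of generic two-phase hypothesis-offset transfer with kernel ridge regression.
from sklearn.kernel_ridge import KernelRidge

def two_phase_offset_transfer(X_src, y_src, X_tgt, y_tgt, X_test):
    # Phase 1: fit on the (large) source dataset.
    source_model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(X_src, y_src)
    # Phase 2: fit a second KRR to the target residuals, i.e. the offset between domains.
    offset = y_tgt - source_model.predict(X_tgt)
    offset_model = KernelRidge(kernel="rbf", alpha=1e-1, gamma=1.0).fit(X_tgt, offset)
    # Final predictor = source hypothesis + learned offset.
    return source_model.predict(X_test) + offset_model.predict(X_test)
```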
https://proceedings.mlr.press/v235/lin24r.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24r/lin24r.pdf
https://openreview.net/forum?id=RLENZ8pNnn
Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers
https://proceedings.mlr.press/v235/lin24r.html
Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low
https://proceedings.mlr.press/v235/lin24r.html
ICML 2024
Large language models (LLMs) have shown remarkable instruction-following capabilities and achieved impressive performances in various applications. However, the performances of LLMs depend heavily on the instructions given to them, which are typically manually tuned with substantial human efforts. Recent work has used the query-efficient Bayesian optimization (BO) algorithm to automatically optimize the instructions given to black-box LLMs. However, BO usually falls short when optimizing highly sophisticated (e.g., high-dimensional) objective functions, such as the functions mapping an instruction to the performance of an LLM. This is mainly due to the limited expressive power of the Gaussian process (GP) which is used by BO as a surrogate to model the objective function. Meanwhile, it has been repeatedly shown that neural networks (NNs), especially pre-trained transformers, possess strong expressive power and can model highly complex functions. So, we adopt a neural bandit algorithm which replaces the GP in BO by an NN surrogate to optimize instructions for black-box LLMs. More importantly, the neural bandit algorithm allows us to naturally couple the NN surrogate with the hidden representation learned by a pre-trained transformer (i.e., an open-source LLM), which significantly boosts its performance. These motivate us to propose our INSTruction optimization usIng Neural bandits Coupled with Transformers (INSTINCT) algorithm. We perform instruction optimization for ChatGPT and use extensive experiments to show that INSTINCT consistently outperforms baselines in different tasks, e.g., various instruction induction tasks and the task of improving zero-shot chain-of-thought instructions. Our code is available at https://github.com/xqlin98/INSTINCT.
https://proceedings.mlr.press/v235/lin24s.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24s/lin24s.pdf
https://openreview.net/forum?id=6pHP51F55x
GeoAB: Towards Realistic Antibody Design and Reliable Affinity Maturation
https://proceedings.mlr.press/v235/lin24s.html
Haitao Lin, Lirong Wu, Yufei Huang, Yunfan Liu, Odin Zhang, Yuanqing Zhou, Rui Sun, Stan Z. Li
https://proceedings.mlr.press/v235/lin24s.html
ICML 2024
An increasing number of works on antibody design are emerging to generate sequences and structures in Complementarity Determining Regions (CDRs), but problems still exist. We focus on two of them: (i) authenticity of the generated structure and (ii) rationality of the affinity maturation, and propose GeoAB as a solution. Specifically, GeoAB-Designer generates CDR structures with realistic internal geometries, composed of a generative geometry initializer (Geo-Initializer) and a position refiner (Geo-Refiner); GeoAB-Optimizer achieves affinity maturation by accurately predicting both the mutation effects and structures of mutant antibodies with the same network architecture as Geo-Refiner. Experiments show that GeoAB achieves state-of-the-art performance in CDR co-design and mutation effect predictions, and fulfills the discussed tasks effectively.
https://proceedings.mlr.press/v235/lin24t.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24t/lin24t.pdf
https://openreview.net/forum?id=mbBehLOAqR
Distributionally Robust Data Valuation
https://proceedings.mlr.press/v235/lin24t.html
Xiaoqiang Lin, Xinyi Xu, Zhaoxuan Wu, See-Kiong Ng, Bryan Kian Hsiang Low
https://proceedings.mlr.press/v235/lin24t.html
ICML 2024
Data valuation quantifies the contribution of each data point to the performance of a machine learning model. Existing works typically define the value of data by its improvement of the validation performance of the trained model. However, this approach can be impractical to apply in collaborative machine learning and data marketplaces since it is difficult for the parties/buyers to agree on a common validation dataset or determine the exact validation distribution a priori. To address this, we propose a distributionally robust data valuation approach to perform data valuation without known/fixed validation distributions. Our approach defines the value of data by its improvement of the distributionally robust generalization error (DRGE), thus providing a worst-case performance guarantee without a known/fixed validation distribution. However, since computing DRGE directly is infeasible, we propose using model deviation as a proxy for the marginal improvement of DRGE (for kernel regression and neural networks) to compute data values. Furthermore, we identify a notion of uniqueness where low uniqueness characterizes low-value data. We empirically demonstrate that our approach outperforms existing data valuation approaches in data selection and data removal tasks on real-world datasets (e.g., housing price prediction, diabetes hospitalization prediction).
https://proceedings.mlr.press/v235/lin24u.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24u/lin24u.pdf
https://openreview.net/forum?id=VaZVZQSgTP
A Single-Loop Robust Policy Gradient Method for Robust Markov Decision Processes
https://proceedings.mlr.press/v235/lin24u.html
Zhenwei Lin, Chenyu Xue, Qi Deng, Yinyu Ye
https://proceedings.mlr.press/v235/lin24u.html
ICML 2024
Robust Markov Decision Processes (RMDPs) have recently been recognized as a valuable and promising approach to discovering a policy with creditable performance, particularly in the presence of a dynamic environment and estimation errors in the transition matrix due to limited data. Despite extensive exploration of dynamic programming algorithms for solving RMDPs, there has been a notable upswing in interest in developing efficient algorithms using the policy gradient method. In this paper, we propose the first single-loop robust policy gradient (SRPG) method with the global optimality guarantee for solving RMDPs through its minimax formulation. Moreover, we complement the convergence analysis of the nonconvex-nonconcave min-max optimization problem with the objective function’s gradient dominance property, which is not explored in the prior literature. Numerical experiments validate the efficacy of SRPG, demonstrating its faster and more robust convergence behavior compared to its nested-loop counterpart.
https://proceedings.mlr.press/v235/lin24v.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24v/lin24v.pdf
https://openreview.net/forum?id=m8lCi7rG4u
Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency
https://proceedings.mlr.press/v235/lin24v.html
Runqi Lin, Chaojian Yu, Bo Han, Hang Su, Tongliang Liu
https://proceedings.mlr.press/v235/lin24v.html
ICML 2024
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT), manifesting as highly distorted deep neural networks (DNNs) that are vulnerable to multi-step adversarial attacks. However, the underlying factors that lead to the distortion of decision boundaries remain unclear. In this work, we delve into the specific changes within different DNN layers and discover that during CO, the former layers are more susceptible, experiencing earlier and greater distortion, while the latter layers show relative insensitivity. Our analysis further reveals that this increased sensitivity in former layers stems from the formation of $\textit{pseudo-robust shortcuts}$, which alone can impeccably defend against single-step adversarial attacks but bypass genuine-robust learning, resulting in distorted decision boundaries. Eliminating these shortcuts can partially restore robustness in DNNs from the CO state, thereby verifying that dependence on them triggers the occurrence of CO. This understanding motivates us to implement adaptive weight perturbations across different layers to hinder the generation of $\textit{pseudo-robust shortcuts}$, consequently mitigating CO. Extensive experiments demonstrate that our proposed method, $\textbf{L}$ayer-$\textbf{A}$ware Adversarial Weight $\textbf{P}$erturbation (LAP), can effectively prevent CO and further enhance robustness.
https://proceedings.mlr.press/v235/lin24w.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24w/lin24w.pdf
https://openreview.net/forum?id=0NacraIYrA
Autonomous Sparse Mean-CVaR Portfolio Optimization
https://proceedings.mlr.press/v235/lin24w.html
Yizun Lin, Yangyu Zhang, Zhao-Rong Lai, Cheng Li
https://proceedings.mlr.press/v235/lin24w.html
ICML 2024
The $\ell_0$-constrained mean-CVaR model poses a significant challenge due to its NP-hard nature, typically tackled through combinatorial methods characterized by high computational demands. From a markedly different perspective, we propose an innovative autonomous sparse mean-CVaR portfolio model, capable of approximating the original $\ell_0$-constrained mean-CVaR model with arbitrary accuracy. The core idea is to convert the $\ell_0$ constraint into an indicator function and subsequently handle it through a tailed approximation. We then propose a proximal alternating linearized minimization algorithm, coupled with a nested fixed-point proximity algorithm (both convergent), to iteratively solve the model. Autonomy in sparsity refers to retaining a significant portion of assets within the selected asset pool during adjustments in pool size. Consequently, our framework offers a theoretically guaranteed approximation of the $\ell_0$-constrained mean-CVaR model, improving computational efficiency while providing a robust asset selection scheme.
https://proceedings.mlr.press/v235/lin24x.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24x/lin24x.pdf
https://openreview.net/forum?id=xJUhgvM2u8
Graph Neural Stochastic Diffusion for Estimating Uncertainty in Node Classification
https://proceedings.mlr.press/v235/lin24x.html
Xixun Lin, Wenxiao Zhang, Fengzhao Shi, Chuan Zhou, Lixin Zou, Xiangyu Zhao, Dawei Yin, Shirui Pan, Yanan Cao
https://proceedings.mlr.press/v235/lin24x.html
ICML 2024
Graph neural networks (GNNs) have advanced the state of the art in various domains. Despite their remarkable success, the uncertainty estimation of GNN predictions remains under-explored, which limits their practical application, especially in risk-sensitive areas. Current works suffer from either intractable posteriors or inflexible prior specifications, leading to sub-optimal empirical results. In this paper, we present graph neural stochastic diffusion (GNSD), a novel framework for estimating predictive uncertainty on graphs by establishing theoretical connections between GNNs and stochastic partial differential equations. GNSD represents a GNN-based parameterization of the proposed graph stochastic diffusion equation which includes a $Q$-Wiener process to model the stochastic evolution of node representations. GNSD introduces a drift network to guarantee accurate prediction and a stochastic forcing network to model the propagation of epistemic uncertainty among nodes. Extensive experiments are conducted on multiple detection tasks, demonstrating that GNSD yields superior performance over existing strong approaches.
https://proceedings.mlr.press/v235/lin24y.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24y/lin24y.pdf
https://openreview.net/forum?id=m4dO5L6eCp
Smooth Tchebycheff Scalarization for Multi-Objective Optimization
https://proceedings.mlr.press/v235/lin24y.html
Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Fei Liu, Zhenkun Wang, Qingfu Zhang
https://proceedings.mlr.press/v235/lin24y.html
ICML 2024
Multi-objective optimization problems can be found in many real-world applications, where the objectives often conflict with each other and cannot be optimized by a single solution. In the past few decades, numerous methods have been proposed to find Pareto solutions that represent optimal trade-offs among the objectives for a given problem. However, these existing methods could have high computational complexity or may not have good theoretical properties for solving a general differentiable multi-objective optimization problem. In this work, by leveraging the smooth optimization technique, we propose a lightweight and efficient smooth Tchebycheff scalarization approach for gradient-based multi-objective optimization. It has good theoretical properties for finding all Pareto solutions with valid trade-off preferences, while enjoying significantly lower computational complexity compared to other methods. Experimental results on various real-world application problems fully demonstrate the effectiveness of our proposed method.
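A hedged sketch of a smooth Tchebycheff scalarization: replace the max in the weighted Tchebycheff function with a log-sum-exp smooth maximum. We believe this matches the spirit of the paper's scalarization, but the exact formulation (reference-point handling, smoothing schedule) should be taken from the paper itself.

```python
# Hedged sketch: smooth-max (log-sum-exp) version of the weighted Tchebycheff scalarization.
import numpy as np

def smooth_tchebycheff(f_vals, weights, ideal_point, mu=0.1):
    """f_vals, weights, ideal_point: (m,) arrays for m objectives; mu > 0 is the smoothing level.
       As mu -> 0 this approaches max_i weights[i] * (f_vals[i] - ideal_point[i])."""
    scaled = np.asarray(weights) * (np.asarray(f_vals) - np.asarray(ideal_point)) / mu
    return mu * np.log(np.sum(np.exp(scaled)))

# Because this surrogate is smooth, it can be minimized with ordinary gradient-based
# optimizers while still targeting Tchebycheff-style trade-offs among the objectives.
```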
https://proceedings.mlr.press/v235/lin24z.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24z/lin24z.pdf
https://openreview.net/forum?id=70jplnkLMe
PPFLOW: Target-Aware Peptide Design with Torsional Flow Matching
https://proceedings.mlr.press/v235/lin24z.html
Haitao Lin, Odin Zhang, Huifeng Zhao, Dejun Jiang, Lirong Wu, Zicheng Liu, Yufei Huang, Stan Z. Li
https://proceedings.mlr.press/v235/lin24z.html
ICML 2024
Therapeutic peptides have proven to have great pharmaceutical value and potential in recent decades. However, methods of AI-assisted peptide drug discovery are not fully explored. To fill the gap, we propose a target-aware peptide design method called PPFlow, based on conditional flow matching on torus manifolds, to model the internal geometries of torsion angles for the peptide structure design. Besides, we establish a protein-peptide binding dataset named PPBench2024 to fill the void of massive data for the task of structure-based peptide drug design and to allow the training of deep learning methods. Extensive experiments show that PPFlow reaches state-of-the-art performance in tasks of peptide drug generation and optimization in comparison with baseline models, and can be generalized to other tasks including docking and side-chain packing.
https://proceedings.mlr.press/v235/lin24aa.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24aa/lin24aa.pdf
https://openreview.net/forum?id=1bJLl4fY6i
Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie Algebras
https://proceedings.mlr.press/v235/lin24aa.html
Tzu-Yuan Lin, Minghan Zhu, Maani Ghaffari
https://proceedings.mlr.press/v235/lin24aa.html
ICML 2024
This paper proposes an equivariant neural network that takes data in any finite-dimensional semi-simple Lie algebra as input. The corresponding group acts on the Lie algebra as adjoint operations, making our proposed network adjoint-equivariant. Our framework generalizes the Vector Neurons, a simple $\mathrm{SO}(3)$-equivariant network, from 3-D Euclidean space to Lie algebra spaces, building upon the invariance property of the Killing form. Furthermore, we propose novel Lie bracket layers and geometric channel mixing layers that extend the modeling capacity. Experiments are conducted for the $\mathfrak{so}(3)$, $\mathfrak{sl}(3)$, and $\mathfrak{sp}(4)$ Lie algebras on various tasks, including fitting equivariant and invariant functions, learning system dynamics, point cloud registration, and homography-based shape classification. Our proposed equivariant network shows wide applicability and competitive performance in various domains.
https://proceedings.mlr.press/v235/lin24ab.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lin24ab/lin24ab.pdf
https://openreview.net/forum?id=jh7FDDwDBf
Plug-in Performative Optimization
https://proceedings.mlr.press/v235/lin24ab.html
Licong Lin, Tijana Zrnic
https://proceedings.mlr.press/v235/lin24ab.html
ICML 2024
When predictions are performative, the choice of which predictor to deploy influences the distribution of future observations. The overarching goal in learning under performativity is to find a predictor that has low performative risk, that is, good performance on its induced distribution. One family of solutions for optimizing the performative risk, including bandits and other derivative-free methods, is agnostic to any structure in the performative feedback, leading to exceedingly slow convergence rates. A complementary family of solutions makes use of explicit models for the feedback, such as best-response models in strategic classification, enabling faster rates. However, these rates critically rely on the feedback model being correct. In this work we study a general protocol for making use of possibly misspecified models in performative prediction, called plug-in performative optimization. We show this solution can be far superior to model-agnostic strategies, as long as the misspecification is not too extreme. Our results support the hypothesis that models, even if misspecified, can indeed help with learning in performative settings.
https://proceedings.mlr.press/v235/lindauer24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lindauer24a/lindauer24a.pdf
https://openreview.net/forum?id=wELbEYgnmo
Position: A Call to Action for a Human-Centered AutoML Paradigm
https://proceedings.mlr.press/v235/lindauer24a.html
Marius Lindauer, Florian Karl, Anne Klier, Julia Moosbauer, Alexander Tornede, Andreas C Mueller, Frank Hutter, Matthias Feurer, Bernd Bischl
https://proceedings.mlr.press/v235/lindauer24a.html
ICML 2024
Automated machine learning (AutoML) was formed around the fundamental objectives of automatically and efficiently configuring machine learning (ML) workflows, aiding the research of new ML algorithms, and contributing to the democratization of ML by making it accessible to a broader audience. Over the past decade, commendable achievements in AutoML have primarily focused on optimizing predictive performance. This focused progress, while substantial, raises questions about how well AutoML has met its broader, original goals. In this position paper, we argue that a key to unlocking AutoML’s full potential lies in addressing the currently underexplored aspect of user interaction with AutoML systems, including their diverse roles, expectations, and expertise. We envision a more human-centered approach in future AutoML research, promoting the collaborative design of ML systems that tightly integrates the complementary strengths of human expertise and AutoML methodologies.
https://proceedings.mlr.press/v235/ling24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ling24a/ling24a.pdf
https://openreview.net/forum?id=ddjRdm3wUW
Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures
https://proceedings.mlr.press/v235/ling24a.html
Zenan Ling, Longbo Li, Zhanbo Feng, Yixuan Zhang, Feng Zhou, Robert C Qiu, Zhenyu Liao
https://proceedings.mlr.press/v235/ling24a.html
ICML 2024
Deep equilibrium models (DEQs), as typical implicit neural networks, have demonstrated remarkable success on various tasks. There is, however, a lack of theoretical understanding of the connections and differences between implicit DEQs and explicit neural network models. In this paper, leveraging recent advances in random matrix theory (RMT), we perform an in-depth analysis on the eigenspectra of the conjugate kernel (CK) and neural tangent kernel (NTK) matrices for implicit DEQs, when the input data are drawn from a high-dimensional Gaussian mixture. We prove that, in this setting, the spectral behavior of these Implicit-CKs and NTKs depends on the DEQ activation function and initial weight variances, but only via a system of four nonlinear equations. As a direct consequence of this theoretical result, we demonstrate that a shallow explicit network can be carefully designed to produce the same CK or NTK as a given DEQ. Although derived here for Gaussian mixture data, empirical results show the proposed theory and design principles also apply to popular real-world datasets.
https://proceedings.mlr.press/v235/lingsch24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lingsch24a/lingsch24a.pdf
https://openreview.net/forum?id=aVqqoFAavs
Beyond Regular Grids: Fourier-Based Neural Operators on Arbitrary Domains
https://proceedings.mlr.press/v235/lingsch24a.html
Levi E. Lingsch, Mike Yan Michelis, Emmanuel De Bezenac, Sirani M. Perera, Robert K. Katzschmann, Siddhartha Mishra
https://proceedings.mlr.press/v235/lingsch24a.html
ICML 2024
The computational efficiency of many neural operators, widely used for learning solutions of PDEs, relies on the fast Fourier transform (FFT) for performing spectral computations. As the FFT is limited to equispaced (rectangular) grids, this limits the efficiency of such neural operators when applied to problems where the input and output functions need to be processed on general non-equispaced point distributions. Leveraging the observation that a limited set of Fourier (Spectral) modes suffice to provide the required expressivity of a neural operator, we propose a simple method, based on the efficient direct evaluation of the underlying spectral transformation, to extend neural operators to arbitrary domains. An efficient implementation of such direct spectral evaluations is coupled with existing neural operator models to allow the processing of data on arbitrary non-equispaced distributions of points. With extensive empirical evaluation, we demonstrate that the proposed method allows us to extend neural operators to arbitrary point distributions with significant gains in training speed over baselines, while retaining or improving the accuracy of Fourier neural operators (FNOs) and related neural operators.
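A hedged sketch of the core idea as we read it: when points are not on a regular grid, evaluate a truncated Fourier transform by a direct matrix product with the basis exp(-i k x) instead of an FFT. This is the generic non-uniform discrete Fourier transform written naively, not the authors' optimized implementation.

```python
# Hedged sketch: direct (non-FFT) evaluation of a truncated Fourier transform at arbitrary points.
import numpy as np

def direct_fourier_modes(x, values, n_modes):
    """x: (N,) arbitrary points in [0, 1); values: (N,) function samples at x.
       Returns the first n_modes Fourier coefficients via direct evaluation, O(N * n_modes)."""
    k = np.arange(n_modes)                           # retained spectral modes
    basis = np.exp(-2j * np.pi * np.outer(k, x))     # (n_modes, N) basis matrix
    return basis @ values / len(x)

def direct_inverse(x, coeffs):
    """Evaluate the truncated Fourier series back on the same arbitrary points."""
    k = np.arange(len(coeffs))
    return np.real(np.exp(2j * np.pi * np.outer(x, k)) @ coeffs)
```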
https://proceedings.mlr.press/v235/liu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24a/liu24a.pdf
https://openreview.net/forum?id=BIbjwcrg0V
Scaling Tractable Probabilistic Circuits: A Systems Perspective
https://proceedings.mlr.press/v235/liu24a.html
Anji Liu, Kareem Ahmed, Guy Van Den Broeck
https://proceedings.mlr.press/v235/liu24a.html
ICML 2024
Probabilistic Circuits (PCs) are a general framework for tractable deep generative models, which support exact and efficient probabilistic inference on their learned distributions. Recent modeling and training advancements have enabled their application to complex real-world tasks. However, the time and memory inefficiency of existing PC implementations hinders further scaling up. This paper proposes PyJuice, a general GPU implementation design for PCs that improves prior art in several regards. Specifically, PyJuice is 1-2 orders of magnitude faster than existing systems (including very recent ones) at training large-scale PCs. Moreover, PyJuice consumes 2-5x less GPU memory, which enables us to train larger models. At the core of our system is a compilation process that converts a PC into a compact representation amenable to efficient block-based parallelization, which significantly reduces IO and makes it possible to leverage Tensor Cores available in modern GPUs. Empirically, PyJuice can be used to improve state-of-the-art PCs trained on image (e.g., ImageNet32) and language (e.g., WikiText, CommonGen) datasets. We further establish a new set of baselines on natural image and language datasets by benchmarking existing PC structures but with much larger sizes and more training epochs, with the hope of incentivizing future research. Code is available at https://github.com/Tractables/pyjuice.
https://proceedings.mlr.press/v235/liu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24b/liu24b.pdf
https://openreview.net/forum?id=U1uKihiG39
Causal Bandits: The Pareto Optimal Frontier of Adaptivity, a Reduction to Linear Bandits, and Limitations around Unknown Marginals
https://proceedings.mlr.press/v235/liu24b.html
Ziyi Liu, Idan Attias, Daniel M. Roy
https://proceedings.mlr.press/v235/liu24b.html
ICML 2024
In this work, we investigate the problem of adapting to the presence or absence of causal structure in multi-armed bandit problems. In addition to the usual reward signal, we assume the learner has access to additional variables, observed in each round after acting. When these variables $d$-separate the action from the reward, existing work in causal bandits demonstrates that one can achieve strictly better (minimax) rates of regret (Lu et al., 2020). Our goal is to adapt to this favorable “conditionally benign” structure, if it is present in the environment, while simultaneously recovering worst-case minimax regret, if it is not. Notably, the learner has no prior knowledge of whether the favorable structure holds. In this paper, we establish the Pareto optimal frontier of adaptive rates. We prove upper and matching lower bounds on the possible trade-offs in the performance of learning in conditionally benign and arbitrary environments, resolving an open question raised by Bilodeau et al. (2022). Furthermore, we are the first to obtain instance-dependent bounds for causal bandits, by reducing the problem to the linear bandit setting. Finally, we examine the common assumption that the marginal distributions of the post-action contexts are known and show that a nontrivial estimate is necessary for better-than-worst-case minimax rates.
https://proceedings.mlr.press/v235/liu24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24c/liu24c.pdf
https://openreview.net/forum?id=HOoVTsPPn7
Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty
https://proceedings.mlr.press/v235/liu24c.html
Kaizhao Liu, Jose Blanchet, Lexing Ying, Yiping Lu
https://proceedings.mlr.press/v235/liu24c.html
ICML 2024
Bootstrap is a popular methodology for simulating input uncertainty. However, it can be computationally expensive when the number of samples is large. We propose a new approach called Orthogonal Bootstrap that reduces the number of required Monte Carlo replications. We decompose the target being simulated into two parts: the non-orthogonal part, which has a closed-form result known as the Infinitesimal Jackknife, and the orthogonal part, which is easier to simulate. We theoretically and numerically show that Orthogonal Bootstrap significantly reduces the computational cost of Bootstrap while improving empirical accuracy and maintaining the same width of the constructed interval.
https://proceedings.mlr.press/v235/liu24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24d/liu24d.pdf
https://openreview.net/forum?id=DYN66IJCI9
Graph Distillation with Eigenbasis Matching
https://proceedings.mlr.press/v235/liu24d.html
Yang Liu, Deyu Bo, Chuan Shi
https://proceedings.mlr.press/v235/liu24d.html
ICML 2024
The increasing amount of graph data places requirements on the efficient training of graph neural networks (GNNs). The emerging graph distillation (GD) tackles this challenge by distilling a small synthetic graph to replace the real large graph, ensuring GNNs trained on real and synthetic graphs exhibit comparable performance. However, existing methods rely on GNN-related information as supervision, including gradients, representations, and trajectories, which have two limitations. First, GNNs can affect the spectrum (i.e., eigenvalues) of the real graph, causing spectrum bias in the synthetic graph. Second, the variety of GNN architectures leads to the creation of different synthetic graphs, requiring traversal to obtain optimal performance. To tackle these issues, we propose Graph Distillation with Eigenbasis Matching (GDEM), which aligns the eigenbasis and node features of real and synthetic graphs. Meanwhile, it directly replicates the spectrum of the real graph and thus prevents the influence of GNNs. Moreover, we design a discrimination constraint to balance the effectiveness and generalization of GDEM. Theoretically, the synthetic graphs distilled by GDEM are restricted spectral approximations of the real graphs. Extensive experiments demonstrate that GDEM outperforms state-of-the-art GD methods with powerful cross-architecture generalization ability and significant distillation efficiency. Our code is available at https://github.com/liuyang-tian/GDEM.
https://proceedings.mlr.press/v235/liu24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24e/liu24e.pdf
https://openreview.net/forum?id=7emOSb5UfX
Adaptive Text Watermark for Large Language Models
https://proceedings.mlr.press/v235/liu24e.html
Yepeng Liu, Yuheng Bu
https://proceedings.mlr.press/v235/liu24e.html
ICML 2024
The advancement of Large Language Models (LLMs) has led to increasing concerns about the misuse of AI-generated text, and watermarking LLM-generated text has emerged as a potential solution. However, it is challenging to generate high-quality watermarked text while maintaining robustness, security, and the ability to detect watermarks without prior knowledge of the prompt and model. This paper proposes an adaptive text watermarking strategy to address such a challenge. To improve the text quality and maintain robustness, we adaptively add watermarking to token distributions with high entropy measured by an auxiliary model and keep the low-entropy token distributions untouched. For the sake of security and to further minimize the watermark’s impact on text quality, instead of using a fixed green/red list generated from a random secret key, which can be vulnerable to decryption and forgery, we adaptively scale up the output logits based on the semantic embedding of previously generated text using a well designed semantic mapping model. Our experiments involving various LLMs demonstrate that our approach achieves comparable robustness performance to existing watermark methods. Additionally, the text generated by our method has perplexity comparable to that of un-watermarked LLMs while maintaining sufficient security.
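A minimal sketch of the two ingredients described above; the entropy threshold, boost strength, and the projection `W` (standing in for the learned semantic mapping model) are hypothetical, and this is an illustration of the idea rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def watermark_logits(logits, sem_emb, W, entropy_thresh=2.0, delta=2.0):
    """Perturb only high-entropy next-token distributions, and derive the boost
    pattern from a semantic embedding of the text generated so far instead of a
    fixed green/red list. Shapes: logits (vocab,), sem_emb (d,), W (d, vocab)."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    if entropy < entropy_thresh:          # low-entropy step: leave untouched
        return logits
    scores = torch.tanh(sem_emb @ W)      # one score per vocabulary item
    return logits + delta * scores.clamp_min(0.0)
```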
https://proceedings.mlr.press/v235/liu24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24f/liu24f.pdf
https://openreview.net/forum?id=QvABoVGdRp
Enhancing Adversarial Robustness in SNNs with Sparse Gradients
https://proceedings.mlr.press/v235/liu24f.html
Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu
https://proceedings.mlr.press/v235/liu24f.html
ICML 2024
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures, offering potential advantages over Artificial Neural Networks (ANNs) in terms of energy efficiency and interpretability. Nonetheless, similar to ANNs, the robustness of SNNs remains a challenge, especially when facing adversarial attacks. Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks. In this paper, we propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization. We observe that SNNs exhibit greater resilience to random perturbations compared to adversarial perturbations, even at larger scales. Motivated by this, we aim to narrow the gap between SNNs under adversarial and random perturbations, thereby improving their overall robustness. To achieve this, we theoretically prove that this performance gap is upper bounded by the gradient sparsity of the probability associated with the true label concerning the input image, laying the groundwork for a practical strategy to train robust SNNs by regularizing the gradient sparsity. We validate the effectiveness of our approach through extensive experiments on both image-based and event-based datasets. The results demonstrate notable improvements in the robustness of SNNs. Our work highlights the importance of gradient sparsity in SNNs and its role in enhancing robustness.
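The training objective this analysis suggests can be sketched as a task loss plus an L1 penalty on the input gradient of the true-class probability; the sketch below is generic PyTorch (SNN-specific surrogate-gradient details are omitted) and the weight `lam` is an assumed hyperparameter, not the authors' setting.

```python
import torch
import torch.nn.functional as F

def loss_with_gradient_sparsity(model, images, labels, lam=1e-3):
    """Cross-entropy plus an L1 (sparsity) penalty on d p_true / d input,
    the quantity whose sparsity the paper relates to the gap between
    adversarial and random robustness."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    p_true = F.softmax(logits, dim=1).gather(1, labels[:, None]).squeeze(1)
    grad = torch.autograd.grad(p_true.sum(), images, create_graph=True)[0]
    return F.cross_entropy(logits, labels) + lam * grad.abs().mean()
```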
https://proceedings.mlr.press/v235/liu24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24g/liu24g.pdf
https://openreview.net/forum?id=o4HF3N6CZR
ReLU Network with Width $d+\mathcal{O}(1)$ Can Achieve Optimal Approximation Rate
https://proceedings.mlr.press/v235/liu24g.html
Chenghao Liu, Minghua Chen
https://proceedings.mlr.press/v235/liu24g.html
ICML 2024
The prevalent employment of narrow neural networks, characterized by their minimal parameter count per layer, has led to a surge in research exploring their potential as universal function approximators. A notable result in this field states that networks with just a width of $d+1$ can approximate any continuous function for input dimension $d$ arbitrarily well. However, the optimal approximation rate for these narrowest networks, i.e., the optimal relation between the count of tunable parameters and the approximation error, remained unclear. In this paper, we address this gap by proving that ReLU networks with width $d+1$ can achieve the optimal approximation rate for continuous functions over the domain $[0,1]^d$ under $L^p$ norm for $p\in[1,\infty)$. We further show that for the uniform norm, a width of $d+11$ is sufficient. We also extend the results to narrow feed-forward networks with various activations, confirming their capability to approximate at the optimal rate. This work adds to the understanding of universal approximation of narrow networks.
https://proceedings.mlr.press/v235/liu24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24h/liu24h.pdf
https://openreview.net/forum?id=ICvWruTEDH
Graph Adversarial Diffusion Convolution
https://proceedings.mlr.press/v235/liu24h.html
Songtao Liu, Jinghui Chen, Tianfan Fu, Lu Lin, Marinka Zitnik, Dinghao Wu
https://proceedings.mlr.press/v235/liu24h.html
ICML 2024
This paper introduces a min-max optimization formulation for the Graph Signal Denoising (GSD) problem. In this formulation, we first maximize the second term of GSD by introducing perturbations to the graph structure based on Laplacian distance and then minimize the overall loss of the GSD. By solving the min-max optimization problem, we derive a new variant of the Graph Diffusion Convolution (GDC) architecture, called Graph Adversarial Diffusion Convolution (GADC). GADC differs from GDC by incorporating an additional term that enhances robustness against adversarial attacks on the graph structure and noise in node features. Moreover, GADC improves the performance of GDC on heterophilic graphs. Extensive experiments demonstrate the effectiveness of GADC across various datasets. Code is available at https://github.com/SongtaoLiu0823/GADC.
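For context, graph signal denoising is commonly written as the smoothness-regularized objective below; on one plausible reading of the abstract (not necessarily the paper's exact notation), the min-max formulation perturbs the Laplacian in the smoothness term within a Laplacian-distance budget before minimizing over the denoised signal:

$$
\min_{F}\ \|F - X\|_F^2 + \lambda\, \operatorname{tr}\!\left(F^{\top} L F\right)
\quad\longrightarrow\quad
\min_{F}\ \max_{L':\, d(L', L)\le \epsilon}\ \|F - X\|_F^2 + \lambda\, \operatorname{tr}\!\left(F^{\top} L' F\right),
$$

where $X$ denotes the noisy node features, $L$ the graph Laplacian, and $d(\cdot,\cdot)$ a distance between Laplacians.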
https://proceedings.mlr.press/v235/liu24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24i/liu24i.pdf
https://openreview.net/forum?id=LLdeUPOUXk
Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers
https://proceedings.mlr.press/v235/liu24i.html
Yuxing Liu, Lesi Chen, Luo Luo
https://proceedings.mlr.press/v235/liu24i.html
ICML 2024
This paper studies the decentralized optimization problem, where the local objective on each node is an average of a finite set of convex functions and the global function is strongly convex. We propose an efficient stochastic variance-reduced first-order method that allows the different nodes to establish their stochastic local gradient estimators with different mini-batch sizes per iteration. We prove that the upper bound on the computation time of the proposed method depends on the global condition number, which is sharper than previous results that only depend on the local condition numbers. Compared with state-of-the-art methods, we also show that our method requires fewer local incremental first-order oracle calls and comparable communication cost. We further perform numerical experiments to validate the advantage of our method.
https://proceedings.mlr.press/v235/liu24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24j/liu24j.pdf
https://openreview.net/forum?id=PxHmxoFOgI
Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization
https://proceedings.mlr.press/v235/liu24j.html
Zhuanghua Liu, Cheng Chen, Luo Luo, Bryan Kian Hsiang Low
https://proceedings.mlr.press/v235/liu24j.html
ICML 2024
This paper studies the problem of solving nonconvex nonsmooth optimization over a closed convex set. Most previous works tackle such problems by transforming the constrained problem into an unconstrained problem that can be solved by the techniques developed in the unconstrained setting. However, they only provide asymptotic convergence analysis for their methods. In this work, we provide the non-asymptotic analysis for solving constrained nonconvex nonsmooth optimization. We first generalize classical gradient mapping and the Frank–Wolfe gap in the nonsmooth setting. Then we introduce novel notions of approximate stationarity concerning such generalized quantities. We also propose several stochastic zeroth-order algorithms for the problem, along with their non-asymptotic convergence guarantees of obtaining the proposed approximate stationarity. Finally, we conduct numerical experiments that demonstrate the effectiveness of our algorithms.
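For reference, the classical smooth-case quantities that the paper generalizes to the nonsmooth setting are the projected gradient mapping and the Frank–Wolfe gap over the constraint set $\mathcal{C}$ (standard definitions, shown here only as background):

$$
\mathcal{G}_{\eta}(x) \;=\; \frac{1}{\eta}\Big(x - \Pi_{\mathcal{C}}\big(x - \eta \nabla f(x)\big)\Big),
\qquad
\operatorname{gap}(x) \;=\; \max_{y \in \mathcal{C}}\ \big\langle \nabla f(x),\, x - y \big\rangle,
$$

and a point is approximately stationary when these quantities are small; the paper's generalized notions replace $\nabla f$ with a suitable nonsmooth surrogate.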
https://proceedings.mlr.press/v235/liu24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24k/liu24k.pdf
https://openreview.net/forum?id=MUXTt9Yr4T
Unifying Image Processing as Visual Prompting Question Answering
https://proceedings.mlr.press/v235/liu24k.html
Yihao Liu, Xiangyu Chen, Xianzheng Ma, Xintao Wang, Jiantao Zhou, Yu Qiao, Chao Dong
https://proceedings.mlr.press/v235/liu24k.html
ICML 2024
Image processing is a fundamental task in computer vision, which aims at enhancing image quality and extracting essential features for subsequent vision applications. Traditionally, task-specific models are developed for individual tasks and designing such models requires distinct expertise. Building upon the success of large language models (LLMs) in natural language processing (NLP), there is a similar trend in computer vision, which focuses on developing large-scale models through pretraining and in-context learning. This paradigm shift reduces the reliance on task-specific models, yielding a powerful unified model to deal with various tasks. However, these advances have predominantly concentrated on high-level vision tasks, with less attention paid to low-level vision tasks. To address this issue, we propose a universal model for general image processing that covers image restoration, image enhancement, image feature extraction tasks, etc. Our proposed framework, named PromptGIP, unifies these diverse image processing tasks within a universal framework. Inspired by NLP question answering (QA) techniques, we employ a visual prompting question answering paradigm. Specifically, we treat the input-output image pair as a structured question-answer sentence, thereby reprogramming the image processing task as a prompting QA problem. PromptGIP can undertake diverse cross-domain tasks using provided visual prompts, eliminating the need for task-specific finetuning. Capable of handling up to 15 different image processing tasks, PromptGIP represents a versatile and adaptive approach to general image processing. While PromptGIP has demonstrated a certain degree of out-of-domain task generalization capability, further research is expected to fully explore its more powerful emergent generalization. Codes will be available at https://github.com/lyh-18/PromptGIP.
https://proceedings.mlr.press/v235/liu24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24l/liu24l.pdf
https://openreview.net/forum?id=SERrqPDvoY
ESNet: Evolution and Succession Network for High-Resolution Salient Object Detection
https://proceedings.mlr.press/v235/liu24l.html
Hongyu Liu, Runmin Cong, Hua Li, Qianqian Xu, Qingming Huang, Wei Zhang
https://proceedings.mlr.press/v235/liu24l.html
ICML 2024
Preserving details and avoiding high computational costs are the two main challenges for the High-Resolution Salient Object Detection (HRSOD) task. In this paper, we propose a two-stage HRSOD model from the perspective of evolution and succession, including an evolution stage with Low-resolution Location Model (LrLM) and a succession stage with High-resolution Refinement Model (HrRM). The evolution stage achieves detail-preserving salient objects localization on the low-resolution image through the evolution mechanisms on supervision and feature; the succession stage utilizes the shallow high-resolution features to complement and enhance the features inherited from the first stage in a lightweight manner and generate the final high-resolution saliency prediction. Besides, a new metric named Boundary-Detail-aware Mean Absolute Error (${MAE}_{{BD}}$) is designed to evaluate the ability to detect details in high-resolution scenes. Extensive experiments on five datasets demonstrate that our network achieves superior performance at real-time speed (49 FPS) compared to state-of-the-art methods.
https://proceedings.mlr.press/v235/liu24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24m/liu24m.pdf
https://openreview.net/forum?id=2xLyc5TkFl
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks
https://proceedings.mlr.press/v235/liu24m.html
Ziquan Liu, Yufei Cui, Yan Yan, Yi Xu, Xiangyang Ji, Xue Liu, Antoni B. Chan
https://proceedings.mlr.press/v235/liu24m.html
ICML 2024
In safety-critical applications such as medical imaging and autonomous driving, where decisions have profound implications for patient health and road safety, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making. With extensive research focused on enhancing adversarial robustness through various forms of adversarial training (AT), a notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models. To address this gap, this study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks within the adversarial defense community. It is first unveiled that existing CP methods do not produce informative prediction sets under the commonly used $l_{\infty}$-norm bounded attack if the model is not adversarially trained, which underpins the importance of adversarial training for CP. Our paper next demonstrates that the prediction set size (PSS) of CP using adversarially trained models with AT variants is often worse than using standard AT, inspiring us to research into CP-efficient AT for improved PSS. We propose to optimize a Beta-weighting loss with an entropy minimization regularizer during AT to improve CP-efficiency, where the Beta-weighting loss is shown to be an upper bound of PSS at the population level by our theoretical analysis. Moreover, our empirical study on four image classification datasets across three popular AT baselines validates the effectiveness of the proposed Uncertainty-Reducing AT (AT-UR).
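As background (not the authors' code), a minimal split-conformal construction with the common one-minus-true-class-probability score looks like the sketch below; the paper studies how the size of such prediction sets behaves under adversarial attacks and different AT schemes.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with score s(x, y) = 1 - p_y(x).
    cal_probs: (n, K) softmax outputs on calibration data; cal_labels: (n,);
    test_probs: (m, K). Returns a boolean (m, K) mask of prediction sets."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)   # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return test_probs >= 1.0 - q
```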
https://proceedings.mlr.press/v235/liu24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24n/liu24n.pdf
https://openreview.net/forum?id=oLfq1KKneW
Preference Optimization for Molecule Synthesis with Conditional Residual Energy-based Models
https://proceedings.mlr.press/v235/liu24n.html
Songtao Liu, Hanjun Dai, Yue Zhao, Peng Liu
https://proceedings.mlr.press/v235/liu24n.html
ICML 2024
Molecule synthesis through machine learning is one of the fundamental problems in drug discovery. Current data-driven strategies employ one-step retrosynthesis models and search algorithms to predict synthetic routes in a top-down manner. Despite their effective performance, these strategies face limitations in synthetic route generation due to a greedy selection of the next molecule set without any lookahead. Furthermore, existing strategies cannot control the generation of synthetic routes based on possible criteria such as material costs, yields, and step count. In this work, we propose a general and principled framework via conditional residual energy-based models (EBMs), which focuses on the quality of the entire synthetic route based on the specified criteria. By incorporating an additional energy-based function into our probabilistic model, our proposed algorithm can enhance the quality of the most probable synthetic routes (with higher probabilities) generated by various strategies in a plug-and-play fashion. Extensive experiments demonstrate that our framework can consistently boost performance across various strategies and outperforms previous state-of-the-art top-1 accuracy by a margin of 2.5%. Code is available at https://github.com/SongtaoLiu0823/CREBM.
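On a natural reading of the abstract (shown only as the standard residual-EBM form, not the paper's exact parameterization), candidate routes $\tau$ produced by any base strategy are reweighted as

$$
p_{\theta}(\tau \mid x) \;\propto\; p_{\text{base}}(\tau \mid x)\,\exp\!\big(-E_{\phi}(\tau, x)\big),
$$

so the energy $E_{\phi}$, trained on route-level criteria such as cost, yield, or step count, promotes or demotes the routes that the base search already ranks highly, in a plug-and-play fashion.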
https://proceedings.mlr.press/v235/liu24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24o/liu24o.pdf
https://openreview.net/forum?id=0urN0PnNDj
PEARL: Zero-shot Cross-task Preference Alignment and Robust Reward Learning for Robotic Manipulation
https://proceedings.mlr.press/v235/liu24o.html
Runze Liu, Yali Du, Fengshuo Bai, Jiafei Lyu, Xiu Li
https://proceedings.mlr.press/v235/liu24o.html
ICML 2024
In preference-based Reinforcement Learning (RL), obtaining a large number of preference labels is both time-consuming and costly. Furthermore, the queried human preferences cannot be utilized for new tasks. In this paper, we propose Zero-shot Cross-task Preference Alignment and Robust Reward Learning (PEARL), which learns policies from cross-task preference transfer without any human labels of the target task. Our contributions include two novel components that facilitate the transfer and learning process. The first is Cross-task Preference Alignment (CPA), which transfers the preferences between tasks via optimal transport. The key idea of CPA is to use Gromov-Wasserstein distance to align the trajectories between tasks, and the solved optimal transport matrix serves as the correspondence between trajectories. The target task preferences are computed as the weighted sum of source task preference labels with the correspondence as weights. Moreover, to ensure robust learning from these transferred labels, we introduce Robust Reward Learning (RRL), which considers both reward mean and uncertainty by modeling rewards as Gaussian distributions. Empirical results on robotic manipulation tasks from Meta-World and Robomimic demonstrate that our method is capable of transferring preference labels across tasks accurately and then learning well-behaved policies. Notably, our approach significantly exceeds existing methods when there are few human preferences. The code and videos of our method are available at: https://sites.google.com/view/pearl-preference.
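The label-transfer step described above reduces to a weighted average once the transport plan is in hand; a minimal sketch (assuming a plan `T` obtained from some Gromov-Wasserstein solver, e.g. the POT library) is:

```python
import numpy as np

def transfer_preferences(T, source_prefs):
    """Given a transport plan T of shape (n_target, n_source) aligning target
    trajectories with source trajectories, and source preference labels in
    [0, 1], return target labels as the transport-weighted sum (CPA-style)."""
    weights = T / T.sum(axis=1, keepdims=True)   # row-normalize the plan
    return weights @ source_prefs
```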
https://proceedings.mlr.press/v235/liu24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24p/liu24p.pdf
https://openreview.net/forum?id=ToHkAg936Y
Harnessing the Power of Neural Operators with Automatically Encoded Conservation Laws
https://proceedings.mlr.press/v235/liu24p.html
Ning Liu, Yiming Fan, Xianyi Zeng, Milan Klöwer, Lu Zhang, Yue Yu
https://proceedings.mlr.press/v235/liu24p.html
ICML 2024
Neural operators (NOs) have emerged as effective tools for modeling complex physical systems in scientific machine learning. In NOs, a central characteristic is to learn the governing physical laws directly from data. In contrast to other machine learning applications, partial knowledge is often known a priori about the physical system at hand whereby quantities such as mass, energy and momentum are exactly conserved. Currently, NOs have to learn these conservation laws from data and can only approximately satisfy them due to finite training data and random noise. In this work, we introduce conservation law-encoded neural operators (clawNOs), a suite of NOs that endow inference with automatic satisfaction of such conservation laws. ClawNOs are built with a divergence-free prediction of the solution field, with which the continuity equation is automatically guaranteed. As a consequence, clawNOs are compliant with the most fundamental and ubiquitous conservation laws essential for correct physical consistency. As demonstrations, we consider a wide variety of scientific applications ranging from constitutive modeling of material deformation, incompressible fluid dynamics, to atmospheric simulation. ClawNOs significantly outperform the state-of-the-art NOs in learning efficacy, especially in small-data regimes.
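A small numerical illustration of the construction described above, in 2D and with a hypothetical potential standing in for a network output: taking the rotated gradient of a scalar potential yields a velocity field that is divergence-free by construction, so the continuity equation holds automatically.

```python
import numpy as np

h = 1.0 / 128
y, x = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
psi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)   # stand-in for a predicted potential

u = np.gradient(psi, h, axis=0)        # u  =  d(psi)/dy
v = -np.gradient(psi, h, axis=1)       # v  = -d(psi)/dx
div = np.gradient(u, h, axis=1) + np.gradient(v, h, axis=0)
print(np.abs(div).max())               # ~0 up to finite-difference error
```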
https://proceedings.mlr.press/v235/liu24q.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24q/liu24q.pdf
https://openreview.net/forum?id=tdomF3PW6A
Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss
https://proceedings.mlr.press/v235/liu24q.html
Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
https://proceedings.mlr.press/v235/liu24q.html
ICML 2024
Machine learning models are susceptible to membership inference attacks (MIAs), which aim to infer whether a sample is in the training set. Existing work utilizes gradient ascent to enlarge the loss variance of training data, alleviating the privacy risk. However, optimizing toward a reverse direction may cause the model parameters to oscillate near local minima, leading to instability and suboptimal performance. In this work, we propose a novel method – Convex Concave Loss (CCL), which enables a high variance of training loss distribution by gradient descent. Our method is motivated by the theoretical analysis that convex losses tend to decrease the loss variance during training. Thus, our key idea behind CCL is to reduce the convexity of loss functions with a concave term. Trained with CCL, neural networks produce losses with high variance for training data, reinforcing the defense against MIAs. Extensive experiments demonstrate the superiority of CCL, achieving a state-of-the-art balance in the privacy-utility trade-off.
https://proceedings.mlr.press/v235/liu24r.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24r/liu24r.pdf
https://openreview.net/forum?id=n8g6WMxt09
Decoding-time Realignment of Language Models
https://proceedings.mlr.press/v235/liu24r.html
Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares-López, Jessica Hoffmann, Lucas Dixon, Michal Valko, Mathieu Blondel
https://proceedings.mlr.press/v235/liu24r.html
ICML 2024
Aligning language models with human preferences is crucial for reducing errors and biases in these models. Alignment techniques, such as reinforcement learning from human feedback (RLHF), are typically cast as optimizing a tradeoff between human preference rewards and a proximity regularization term that encourages staying close to the unaligned model. Selecting an appropriate level of regularization is critical: insufficient regularization can lead to reduced model capabilities due to reward hacking, whereas excessive regularization hinders alignment. Traditional methods for finding the optimal regularization level require retraining multiple models with varying regularization strengths. This process, however, is resource-intensive, especially for large models. To address this challenge, we propose decoding-time realignment (DeRa), a simple method to explore and evaluate different regularization strengths in aligned models without retraining. DeRa enables control over the degree of alignment, allowing users to smoothly transition between unaligned and aligned models. It also enhances the efficiency of hyperparameter tuning by enabling the identification of effective regularization strengths using a validation dataset.
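As one reading of the mechanism (an interpretation of the abstract, not the paper's released code), decoding-time realignment blends the per-token logits of the aligned and unaligned models; the sketch assumes HuggingFace-style models exposing `.logits`.

```python
import torch

@torch.no_grad()
def dera_next_token_logits(aligned_model, ref_model, input_ids, lam=0.5):
    """lam = 0 recovers the unaligned (reference) model, lam = 1 the aligned
    model; intermediate values emulate different regularization strengths
    without retraining."""
    z_aligned = aligned_model(input_ids).logits[:, -1, :]
    z_ref = ref_model(input_ids).logits[:, -1, :]
    return lam * z_aligned + (1.0 - lam) * z_ref
```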
https://proceedings.mlr.press/v235/liu24s.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24s/liu24s.pdf
https://openreview.net/forum?id=8296yUBoXr
DIDI: Diffusion-Guided Diversity for Offline Behavioral Generation
https://proceedings.mlr.press/v235/liu24s.html
Jinxin Liu, Xinghong Guo, Zifeng Zhuang, Donglin Wang
https://proceedings.mlr.press/v235/liu24s.html
ICML 2024
In this paper, we propose a novel approach called DIffusion-guided DIversity (DIDI) for offline behavioral generation. The goal of DIDI is to learn a diverse set of skills from a mixture of label-free offline data. We achieve this by leveraging diffusion probabilistic models as priors to guide the learning process and regularize the policy. By optimizing a joint objective that incorporates diversity and diffusion-guided regularization, we encourage the emergence of diverse behaviors while maintaining the similarity to the offline data. Experimental results in four decision-making domains (Push, Kitchen, Humanoid, and D4RL tasks) show that DIDI is effective in discovering diverse and discriminative skills. We also introduce skill stitching and skill interpolation, which highlight the generalist nature of the learned skill space. Further, by incorporating an extrinsic reward function, DIDI enables reward-guided behavior generation, facilitating the learning of diverse and optimal behaviors from sub-optimal data.
https://proceedings.mlr.press/v235/liu24t.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24t/liu24t.pdf
https://openreview.net/forum?id=uRz9GZN17X
Bidirectional Reciprocative Information Communication for Few-Shot Semantic Segmentation
https://proceedings.mlr.press/v235/liu24t.html
Yuanwei Liu, Junwei Han, Xiwen Yao, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer, Nian Liu, Fahad Shahbaz Khan
https://proceedings.mlr.press/v235/liu24t.html
ICML 2024
Existing few-shot semantic segmentation methods typically rely on a one-way flow of category information from support to query, ignoring the impact of intra-class diversity. To address this, drawing inspiration from cybernetics, we introduce a Query Feedback Branch (QFB) to propagate query information back to support, generating a query-related support prototype that is more aligned with the query. Subsequently, a Query Amplifier Branch (QAB) is employed to amplify target objects in the query using the acquired support prototype. To further improve the model, we propose a Query Rectification Module (QRM), which utilizes the prediction disparity in the query before and after support activation to identify challenging positive and negative samples from ambiguous regions for query self-rectification. Furthermore, we integrate the QFB, QAB, and QRM into a feedback and rectification layer and incorporate it into an iterative pipeline. This configuration enables the progressive enhancement of bidirectional reciprocative flow of category information between query and support, effectively providing query-adaptive support information and addressing the intra-class diversity problem. Extensive experiments conducted on both PASCAL-5i and COCO-20i datasets validate the effectiveness of our approach. The code is available at https://github.com/LIUYUANWEI98/IFRNet .
https://proceedings.mlr.press/v235/liu24u.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24u/liu24u.pdf
https://openreview.net/forum?id=CtyLla0DU8
Unlock the Cognitive Generalization of Deep Reinforcement Learning via Granular Ball Representation
https://proceedings.mlr.press/v235/liu24u.html
Jiashun Liu, Jianye Hao, Yi Ma, Shuyin Xia
https://proceedings.mlr.press/v235/liu24u.html
ICML 2024
The policies learned by humans in simple scenarios can be deployed in complex scenarios with the same task logic through limited feature alignment training, a process referred to as cognitive generalization or systematic generalization. Thus, a plausible conjecture is that unlocking cognitive generalization in DRL could enable effective generalization of policies from simple to complex scenarios through reward-agnostic fine-tuning. This would eliminate the need for designing reward functions in complex scenarios, thus reducing environment-building costs. In this paper, we propose a general framework to enhance the cognitive generalization ability of standard DRL methods. Our framework builds a cognitive latent space in a simple scenario, then segments the latent space to cluster samples with similar environmental influences into the same subregion. During fine-tuning in the complex scenario, the policy uses the cognitive latent space to align each new sample with same-subregion samples collected from the simple scenario and approximates the rewards and Q values of the new samples for policy updates. Based on this framework, we propose Granular Ball Reinforcement Learning (GBRL), a practical algorithm built on a Variational Autoencoder (VAE) and Granular Ball Representation. GBRL achieves effective policy generalization on various difficult scenarios with the same task logic.
https://proceedings.mlr.press/v235/liu24v.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24v/liu24v.pdf
https://openreview.net/forum?id=RtCmp5F9lN
PAPM: A Physics-aware Proxy Model for Process Systems
https://proceedings.mlr.press/v235/liu24v.html
Pengwei Liu, Zhongkai Hao, Xingyu Ren, Hangjie Yuan, Jiayang Ren, Dong Ni
https://proceedings.mlr.press/v235/liu24v.html
ICML 2024
In the context of proxy modeling for process systems, traditional data-driven deep learning approaches frequently encounter significant challenges, such as substantial training costs induced by large amounts of data, and limited generalization capabilities. As a promising alternative, physics-aware models incorporate partial physics knowledge to ameliorate these challenges. Although demonstrating efficacy, they fall short in terms of exploration depth and universality. To address these shortcomings, we introduce a physics-aware proxy model (PAPM) that fully incorporates partial prior physics of process systems, which includes multiple input conditions and the general form of conservation relations, resulting in better out-of-sample generalization. Additionally, PAPM contains a holistic temporal-spatial stepping module for flexible adaptation across various process systems. Through systematic comparisons with state-of-the-art pure data-driven and physics-aware models across five two-dimensional benchmarks in nine generalization tasks, PAPM notably achieves an average performance improvement of 6.7%, while requiring fewer FLOPs, and just 1% of the parameters compared to the prior leading method.
https://proceedings.mlr.press/v235/liu24w.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24w/liu24w.pdf
https://openreview.net/forum?id=dhrNfAJAH6
ELTA: An Enhancer against Long-Tail for Aesthetics-oriented Models
https://proceedings.mlr.press/v235/liu24w.html
Limin Liu, Shuai He, Anlong Ming, Rui Xie, Huadong Ma
https://proceedings.mlr.press/v235/liu24w.html
ICML 2024
Real-world datasets often exhibit long-tailed distributions, compromising the generalization and fairness of learning-based models. This issue is particularly pronounced in Image Aesthetics Assessment (IAA) tasks, where such imbalance is difficult to mitigate due to a severe distribution mismatch between features and labels, as well as the great sensitivity of aesthetics to image variations. To address these issues, we propose an Enhancer against Long-Tail for Aesthetics-oriented models (ELTA). ELTA first utilizes a dedicated mixup technique to enhance minority feature representation in high-level space while preserving their intrinsic aesthetic qualities. Next, it aligns features and labels through a similarity consistency approach, effectively alleviating the distribution mismatch. Finally, ELTA adopts a specific strategy to refine the output distribution, thereby enhancing the quality of pseudo-labels. Experiments on four representative datasets (AVA, AADB, TAD66K, and PARA) show that our proposed ELTA achieves state-of-the-art performance by effectively mitigating the long-tailed issue in IAA datasets. Moreover, ELTA is designed with plug-and-play capabilities for seamless integration with existing methods. To our knowledge, this is the first contribution in the IAA community addressing the long-tail issue. All resources are available here.
https://proceedings.mlr.press/v235/liu24x.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24x/liu24x.pdf
https://openreview.net/forum?id=l7vQQi0I2d
On the Feasibility of Single-Pass Full-Capacity Learning in Linear Threshold Neurons with Binary Input Vectors
https://proceedings.mlr.press/v235/liu24x.html
Ruipeng Liu, Borui He, Naveed Tahir, Garrett Ethan Katz
https://proceedings.mlr.press/v235/liu24x.html
ICML 2024
Known learning rules tend to fall near one of two extremes: single-pass associative learning with low complexity and capacity, and multi-pass iterative learning with high complexity and capacity. In this work we investigate the mathematical feasibility of learning rules that are both single-pass and achieve the theoretical upper bound on capacity. We consider a fairly broad family of learning rules we call “span rules,” which include known rules such as Hebbian learning, perceptron learning, and backpropagation as special cases. To our knowledge, previous work has not determined whether single-pass, full-capacity span rules exist, even in the most fundamental case of a linear threshold neuron with binary input vectors, which is the focus of this study. We derive a necessary condition for the existence of such learning rules, which takes the form of a linear program, and show that the linear program is infeasible. This establishes an impossibility result that span rules can not be both single-pass and full-capacity.
https://proceedings.mlr.press/v235/liu24y.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24y/liu24y.pdf
https://openreview.net/forum?id=BPQHXwVNvl
Online Speculative Decoding
https://proceedings.mlr.press/v235/liu24y.html
Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang
https://proceedings.mlr.press/v235/liu24y.html
ICML 2024
Speculative decoding is a pivotal technique to accelerate the inference of large language models (LLMs) by employing a smaller draft model to predict the target model’s outputs. However, its efficacy can be limited due to the low predictive accuracy of the draft model, particularly when faced with diverse text inputs and a significant capability gap between the draft and target models. We introduce online speculative decoding to address this challenge. The main idea is to continuously update the (multiple) draft model(s) on observed user query data. Adapting to query distribution mitigates the shifts between the training distribution of the draft model and the query distribution, enabling the draft model to more accurately predict the target model’s outputs. We develop a prototype of online speculative decoding based on knowledge distillation and evaluate it using both synthetic and real query data. The results show a substantial increase in the token acceptance rate by 0.1 to 0.65, bringing 1.42x to 2.17x latency reduction. Our code is available at https://github.com/LiuXiaoxuanPKU/OSD.
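For background, the standard speculative-sampling acceptance test that such systems build on (the paper's contribution is to keep distilling the draft model from observed queries so this test succeeds more often) can be sketched as:

```python
import numpy as np

def accept_draft_token(p_target, p_draft, token, rng):
    """Accept the drafted `token` with probability min(1, p_target/p_draft);
    on rejection, resample from the residual distribution so the overall
    output matches the target model's distribution exactly."""
    if rng.random() < min(1.0, p_target[token] / p_draft[token]):
        return token
    residual = np.clip(p_target - p_draft, 0.0, None)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual)
```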
https://proceedings.mlr.press/v235/liu24z.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24z/liu24z.pdf
https://openreview.net/forum?id=15MpDbv3IQ
Tuning-free Estimation and Inference of Cumulative Distribution Function under Local Differential Privacy
https://proceedings.mlr.press/v235/liu24z.html
Yi Liu, Qirui Hu, Linglong Kong
https://proceedings.mlr.press/v235/liu24z.html
ICML 2024
We introduce a novel algorithm for estimating Cumulative Distribution Function (CDF) values under Local Differential Privacy (LDP) by exploiting an unexpected connection between LDP and the current status problem, a classical survival data problem in statistics. This connection leads to the development of tools for constrained isotonic estimation based on binary queries. Through mathematical proofs and extensive numerical testing, we demonstrate that our method achieves uniform and $L_2$ error bounds when estimating the entire CDF curve. By employing increasingly dense grids, the error bound can be improved, exhibiting an asymptotic normal distribution of the proposed estimator. Theoretically, we show that the error bound smoothly changes as the number of grids increases relative to the sample size $n$. Computationally, we demonstrate that our constrained isotonic estimator can be efficiently computed deterministically, eliminating the need for hyperparameters or random optimization.
https://proceedings.mlr.press/v235/liu24aa.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24aa/liu24aa.pdf
https://openreview.net/forum?id=hZ0fWhgVch
Referee Can Play: An Alternative Approach to Conditional Generation via Model Inversion
https://proceedings.mlr.press/v235/liu24aa.html
Xuantong Liu, Tianyang Hu, Wenjia Wang, Kenji Kawaguchi, Yuan Yao
https://proceedings.mlr.press/v235/liu24aa.html
ICML 2024
As a dominant force in text-to-image generation tasks, Diffusion Probabilistic Models (DPMs) face a critical challenge in controllability, struggling to adhere strictly to complex, multi-faceted instructions. In this work, we aim to address this alignment challenge for conditional generation tasks. First, we provide an alternative view of state-of-the-art DPMs as a way of inverting advanced Vision-Language Models (VLMs). With this formulation, we naturally propose a training-free approach that bypasses the conventional sampling process associated with DPMs. By directly optimizing images with the supervision of discriminative VLMs, the proposed method can potentially achieve a better text-image alignment. As proof of concept, we demonstrate the pipeline with the pre-trained BLIP-2 model and identify several key designs for improved image generation. To further enhance the image fidelity, a Score Distillation Sampling module of Stable Diffusion is incorporated. By carefully balancing the two components during optimization, our method can produce high-quality images with near state-of-the-art performance on T2I-Compbench. The code is available at https://github.com/Pepper-lll/VLMinv.
https://proceedings.mlr.press/v235/liu24ab.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ab/liu24ab.pdf
https://openreview.net/forum?id=MGkeWJxQVl
Reason for Future, Act for Now: A Principled Architecture for Autonomous LLM Agents
https://proceedings.mlr.press/v235/liu24ab.html
Zhihan Liu, Hao Hu, Shenao Zhang, Hongyi Guo, Shuqi Ke, Boyi Liu, Zhaoran Wang
https://proceedings.mlr.press/v235/liu24ab.html
ICML 2024
Large language models (LLMs) demonstrate impressive reasoning abilities, but translating reasoning into actions in the real world remains challenging. In particular, it is unclear how to complete a given task provably within a minimum number of interactions with the external environment, e.g., through an internal mechanism of reasoning. To this end, we propose the first framework with provable regret guarantees to orchestrate reasoning and acting, which we call reason for future, act for now (RAFA). Specifically, we design a prompt template for reasoning that learns from the memory buffer and plans a future trajectory over a long horizon (reason for future). At each step, the LLM agent takes the initial action of the planned trajectory (act for now), stores the collected feedback in the memory buffer, and reinvokes the reasoning routine to replan the future trajectory from the new state. The key idea is to cast reasoning in LLMs as learning and planning in Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt LLMs with the memory buffer to estimate the unknown environment (learning) and generate an optimal trajectory for multiple future steps that maximize a value function (planning). The learning and planning subroutines are performed in an in-context manner to emulate the actor-critic update for MDPs. Our theoretical analysis establishes a $\sqrt{T}$ regret, while our experimental validation demonstrates superior empirical performance.
https://proceedings.mlr.press/v235/liu24ac.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ac/liu24ac.pdf
https://openreview.net/forum?id=cxiqxDnrCx
Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation
https://proceedings.mlr.press/v235/liu24ac.html
Pei Liu, Luping Ji
https://proceedings.mlr.press/v235/liu24ac.html
ICML 2024
Uncertainty estimation (UE), as an effective means of quantifying predictive uncertainty, is crucial for safe and reliable decision-making, especially in high-risk scenarios. Existing UE schemes usually assume that there are completely-labeled samples to support fully-supervised learning. In practice, however, many UE tasks often have no sufficiently-labeled data to use, such as the Multiple Instance Learning (MIL) with only weak instance annotations. To bridge this gap, this paper, for the first time, addresses the weakly-supervised issue of Multi-Instance UE (MIUE) and proposes a new baseline scheme, Multi-Instance Residual Evidential Learning (MIREL). Particularly, at the fine-grained instance UE with only weak supervision, we derive a multi-instance residual operator through the Fundamental Theorem of Symmetric Functions. On this operator derivation, we further propose MIREL to jointly model the high-order predictive distribution at bag and instance levels for MIUE. Extensive experiments empirically demonstrate that our MIREL not only could often make existing MIL networks perform better in MIUE, but also could surpass representative UE methods by large margins, especially in instance-level UE tasks. Our source code is available at https://github.com/liupei101/MIREL.
https://proceedings.mlr.press/v235/liu24ad.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ad/liu24ad.pdf
https://openreview.net/forum?id=DBlkjCDg2i
StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization
https://proceedings.mlr.press/v235/liu24ad.html
Songhua Liu, Xin Jin, Xingyi Yang, Jingwen Ye, Xinchao Wang
https://proceedings.mlr.press/v235/liu24ad.html
ICML 2024
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain, making it a highly ambitious and challenging task. State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data and thus increase robustness. Nevertheless, they have largely overlooked the underlying coherence between the augmented domains, which in turn leads to inferior results in real-world scenarios. In this paper, we propose a simple yet effective scheme, termed as StyDeSty, to explicitly account for the alignment of the source and pseudo domains in the process of data augmentation, enabling them to interact with each other in a self-consistent manner and further giving rise to a latent domain with strong generalization power. The heart of StyDeSty lies in the interaction between a stylization module for generating novel stylized samples using the source domain, and a destylization module for transferring stylized and source samples to a latent domain to learn content-invariant features. The stylization and destylization modules work adversarially and reinforce each other. During inference, the destylization module transforms the input sample with an arbitrary style shift to the latent domain, in which the downstream tasks are carried out. Specifically, the location of the destylization layer within the backbone network is determined by a dedicated neural architecture search (NAS) strategy. We evaluate StyDeSty on multiple benchmarks and demonstrate that it yields encouraging results, outperforming the state of the art by up to 13.44% on classification accuracy. Codes are available https://github.com/Huage001/StyDeSty.
https://proceedings.mlr.press/v235/liu24ae.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ae/liu24ae.pdf
https://openreview.net/forum?id=SMUXPVKUBg
Time-Series Forecasting for Out-of-Distribution Generalization Using Invariant Learning
https://proceedings.mlr.press/v235/liu24ae.html
Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong, Zhiyuan Zhao, Chao Zhang, B. Aditya Prakash
https://proceedings.mlr.press/v235/liu24ae.html
ICML 2024
Time-series forecasting (TSF) finds broad applications in real-world scenarios. Due to the dynamic nature of time-series data, it is crucial for TSF models to preserve out-of-distribution (OOD) generalization abilities, as training and test sets represent historical and future data respectively. In this paper, we aim to alleviate the inherent OOD problem in TSF via invariant learning. We identify fundamental challenges of invariant learning for TSF. First, the target variables in TSF may not be sufficiently determined by the input due to unobserved core variables in TSF, breaking the fundamental assumption of invariant learning. Second, time-series datasets lack adequate environment labels, while existing environmental inference methods are not suitable for TSF. To address these challenges, we propose FOIL, a model-agnostic framework that endows time-series forecasting for out-of-distribution generalization via invariant learning. Specifically, FOIL employs a novel surrogate loss to mitigate the impact of unobserved variables. Further, FOIL implements joint optimization by alternately inferring environments effectively with a multi-head network while preserving the temporal adjacency structure and learning invariant representations across inferred environments for OOD generalized TSF. Extensive experiments demonstrate that the proposed FOIL significantly and consistently improves the performance of various TSF models, achieving gains of up to 85%.
https://proceedings.mlr.press/v235/liu24af.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24af/liu24af.pdf
https://openreview.net/forum?id=Mfk6ZbD6eY
Stereo Risk: A Continuous Modeling Approach to Stereo Matching
https://proceedings.mlr.press/v235/liu24af.html
Ce Liu, Suryansh Kumar, Shuhang Gu, Radu Timofte, Yao Yao, Luc Van Gool
https://proceedings.mlr.press/v235/liu24af.html
ICML 2024
We introduce Stereo Risk, a new deep-learning approach to solve the classical stereo-matching problem in computer vision. As it is well-known that stereo matching boils down to a per-pixel disparity estimation problem, the popular state-of-the-art stereo-matching approaches widely rely on regressing the scene disparity values, yet via discretization of scene disparity values. Such discretization often fails to capture the nuanced, continuous nature of scene depth. Stereo Risk departs from the conventional discretization approach by formulating the scene disparity as an optimal solution to a continuous risk minimization problem, hence the name "stereo risk". We demonstrate that $L^1$ minimization of the proposed continuous risk function enhances stereo-matching performance for deep networks, particularly for disparities with multi-modal probability distributions. Furthermore, to enable the end-to-end network training of the non-differentiable $L^1$ risk optimization, we exploited the implicit function theorem, ensuring a fully differentiable network. A comprehensive analysis demonstrates our method’s theoretical soundness and superior performance over the state-of-the-art methods across various benchmark datasets, including KITTI 2012, KITTI 2015, ETH3D, SceneFlow, and Middlebury 2014.
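On this description (a plausible rendering of the risk, not the paper's exact notation), the per-pixel disparity is the minimizer of a continuous $L^1$ risk under the network's predicted disparity distribution $p$, i.e. a distributional median rather than the usual soft-argmax expectation:

$$
d^{*} \;=\; \underset{d \in \mathbb{R}}{\arg\min}\ \ \mathbb{E}_{d' \sim p}\big[\,|d - d'|\,\big],
$$

with the implicit function theorem used to differentiate through this non-smooth minimization so the network can be trained end to end.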
https://proceedings.mlr.press/v235/liu24ag.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ag/liu24ag.pdf
https://openreview.net/forum?id=qmUbSAgz08
Multi-Source Conformal Inference Under Distribution Shift
https://proceedings.mlr.press/v235/liu24ag.html
Yi Liu, Alexander Levis, Sharon-Lise Normand, Larry Han
https://proceedings.mlr.press/v235/liu24ag.html
ICML 2024
Recent years have experienced increasing utilization of complex machine learning models across multiple sources of data to inform more generalizable decision-making. However, distribution shifts across data sources and privacy concerns related to sharing individual-level data, coupled with a lack of uncertainty quantification from machine learning predictions, make it challenging to achieve valid inferences in multi-source environments. In this paper, we consider the problem of obtaining distribution-free prediction intervals for a target population, leveraging multiple potentially biased data sources. We derive the efficient influence functions for the quantiles of unobserved outcomes in the target and source populations, and show that one can incorporate machine learning prediction algorithms in the estimation of nuisance functions while still achieving parametric rates of convergence to nominal coverage probabilities. Moreover, when conditional outcome invariance is violated, we propose a data-adaptive strategy to upweight informative data sources for efficiency gain and downweight non-informative data sources for bias reduction. We highlight the robustness and efficiency of our proposals for a variety of conformal scores and data-generating mechanisms via extensive synthetic experiments. Hospital length of stay prediction intervals for pediatric patients undergoing a high-risk cardiac surgical procedure between 2016-2022 in the U.S. illustrate the utility of our methodology.
https://proceedings.mlr.press/v235/liu24ah.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ah/liu24ah.pdf
https://openreview.net/forum?id=WjNzXeiOSL
From Generalization Analysis to Optimization Designs for State Space Models
https://proceedings.mlr.press/v235/liu24ah.html
Fusheng Liu, Qianxiao Li
https://proceedings.mlr.press/v235/liu24ah.html
ICML 2024
A State Space Model (SSM) is a foundation model in time series analysis, which has recently been shown to be an alternative to transformers in sequence modeling. In this paper, we theoretically study the generalization of SSMs and propose improvements to training algorithms based on the generalization results. Specifically, we give a data-dependent generalization bound for SSMs, showing an interplay between the SSM parameters and the temporal dependencies of the training sequences. Leveraging the generalization bound, we (1) set up a scaling rule for model initialization based on the proposed generalization measure, which significantly improves the robustness of the output value scales of SSMs to different temporal patterns in the sequence data; (2) introduce a new regularization method for training SSMs to enhance the generalization performance. Numerical experiments are conducted to validate our results.
https://proceedings.mlr.press/v235/liu24ai.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ai/liu24ai.pdf
https://openreview.net/forum?id=FV3kY9FBW6
Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning
https://proceedings.mlr.press/v235/liu24ai.html
Tenglong Liu, Yang Li, Yixing Lan, Hao Gao, Wei Pan, Xin Xu
https://proceedings.mlr.press/v235/liu24ai.html
ICML 2024
In offline reinforcement learning, the challenge of out-of-distribution (OOD) actions is pronounced. To address this, existing methods often constrain the learned policy through policy regularization. However, these methods often suffer from the issue of unnecessary conservativeness, hampering policy improvement. This occurs due to the indiscriminate use of all actions from the behavior policy that generates the offline dataset as constraints. The problem becomes particularly noticeable when the quality of the dataset is suboptimal. Thus, we propose Adaptive Advantage-guided Policy Regularization (A2PR), obtaining high-advantage actions from an augmented behavior policy combined with a VAE to guide the learned policy. A2PR can select high-advantage actions that differ from those present in the dataset, while still effectively maintaining conservatism from OOD actions. This is achieved by harnessing the VAE capacity to generate samples matching the distribution of the data points. We theoretically prove that the improvement of the behavior policy is guaranteed. Besides, it effectively mitigates value overestimation with a bounded performance gap. Empirically, we conduct a series of experiments on the D4RL benchmark, where A2PR demonstrates state-of-the-art performance. Furthermore, experimental results on additional suboptimal mixed datasets reveal that A2PR exhibits superior performance. Code is available at https://github.com/ltlhuuu/A2PR.
https://proceedings.mlr.press/v235/liu24aj.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24aj/liu24aj.pdf
https://openreview.net/forum?id=Aj18fUB6Th
Two-timescale Derivative Free Optimization for Performative Prediction with Markovian Data
https://proceedings.mlr.press/v235/liu24aj.html
Haitong Liu, Qiang Li, Hoi To Wai
https://proceedings.mlr.press/v235/liu24aj.html
ICML 2024
This paper studies the performative prediction problem where a learner aims to minimize the expected loss with a decision-dependent data distribution. Such setting is motivated when outcomes can be affected by the prediction model, e.g., in strategic classification. We consider a state-dependent setting where the data distribution evolves according to an underlying controlled Markov chain. We focus on stochastic derivative free optimization (DFO) where the learner is given access to a loss function evaluation oracle with the above Markovian data. We propose a two-timescale DFO($\lambda$) algorithm that features (i) a sample accumulation mechanism that utilizes every observed sample to estimate the overall gradient of performative risk, and (ii) a two-timescale diminishing step size that balances the rates of DFO updates and bias reduction. Under a general non-convex optimization setting, we show that DFO($\lambda$) requires ${\cal O}( 1 /\epsilon^3)$ samples (up to a log factor) to attain a near-stationary solution with expected squared gradient norm less than $\epsilon > 0$. Numerical experiments verify our analysis.
https://proceedings.mlr.press/v235/liu24ak.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ak/liu24ak.pdf
https://openreview.net/forum?id=TRrXkVdhwi
Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences
https://proceedings.mlr.press/v235/liu24ak.html
Zicheng Liu, Siyuan Li, Li Wang, Zedong Wang, Yunfan Liu, Stan Z. Li
https://proceedings.mlr.press/v235/liu24ak.html
ICML 2024
To mitigate the computational complexity of the self-attention mechanism on long sequences, linear attention utilizes computation tricks to achieve linear complexity, while state space models (SSMs) popularize a favourable practice of using a non-data-dependent memory pattern, i.e., emphasizing the near and neglecting the distant, to process sequences. Recent studies have shown the benefits of combining the two. However, the efficiency of linear attention remains only at the theoretical level in a causal setting, and SSMs require various designed constraints to operate effectively on specific data. Therefore, in order to unveil the true power of the hybrid design, the following two issues need to be addressed: (1) hardware-efficient implementation of linear attention and (2) stabilization of SSMs. To achieve this, we leverage the ideas of tiling and hierarchy to propose CHELA (short-long Convolutions with Hardware-Efficient Linear Attention), which replaces SSMs with short-long convolutions and implements linear attention in a divide-and-conquer manner. This approach enjoys the global abstraction and data-dependent selection of stable SSMs and linear attention while maintaining real linear complexity. Our comprehensive experiments on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method.
https://proceedings.mlr.press/v235/liu24al.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24al/liu24al.pdf
https://openreview.net/forum?id=3Pq6uI1MTE
Differentiable Combinatorial Scheduling at Scale
https://proceedings.mlr.press/v235/liu24al.html
Mingju Liu, Yingjie Li, Jiaqi Yin, Zhiru Zhang, Cunxi Yu
https://proceedings.mlr.press/v235/liu24al.html
ICML 2024
This paper addresses the complex issue of resource-constrained scheduling, an NP-hard problem that spans critical areas including chip design and high-performance computing. Traditional scheduling methods often stumble over scalability and applicability challenges. We propose a novel approach using a differentiable combinatorial scheduling framework, utilizing the Gumbel-Softmax differentiable sampling technique. This technique allows for a fully differentiable formulation of linear programming (LP) based scheduling, extending its application to a broader range of LP formulations. To encode inequality constraints for scheduling tasks, we introduce a constrained Gumbel Trick, which adeptly encodes arbitrary inequality constraints. Consequently, our method facilitates efficient and scalable scheduling via gradient descent without the need for training data. Comparative evaluations on both synthetic and real-world benchmarks highlight our capability to significantly improve the optimization efficiency of scheduling, surpassing state-of-the-art solutions offered by commercial and open-source solvers such as CPLEX, Gurobi, and CP-SAT in the majority of the designs.
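For context, the basic (unconstrained) Gumbel-Softmax relaxation underlying such differentiable discrete choices is sketched below; the constrained Gumbel Trick introduced in the paper is not reproduced here.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a relaxed (differentiable) one-hot sample from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-scaled
    softmax. As tau -> 0 the sample approaches a hard one-hot vector, so
    discrete scheduling decisions can be optimized with gradient descent.
    """
    rng = rng or np.random.default_rng()
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel_noise) / tau
    y = y - y.max()                        # numerical stability
    e = np.exp(y)
    return e / e.sum()

# Example: a relaxed choice among 4 candidate time slots for one operation
slot_probs = gumbel_softmax(np.array([2.0, 0.5, 0.1, -1.0]), tau=0.5)
```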
https://proceedings.mlr.press/v235/liu24am.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24am/liu24am.pdf
https://openreview.net/forum?id=z7zHsNFXHc
Characterizing ResNet’s Universal Approximation Capability
https://proceedings.mlr.press/v235/liu24am.html
Chenghao Liu, Enming Liang, Minghua Chen
https://proceedings.mlr.press/v235/liu24am.html
ICML 2024
Since its debut in 2016, ResNet has become arguably the most favorable architecture in deep neural network (DNN) design. It effectively addresses the gradient vanishing/exploding issue in DNN training, allowing engineers to fully unleash DNN’s potential in tackling challenging problems in various domains. Despite its practical success, an essential theoretical question remains largely open: how well/best can ResNet approximate functions? In this paper, we answer this question for several important function classes, including polynomials and smooth functions. In particular, we show that ResNet with constant width can approximate a Lipschitz continuous function with Lipschitz constant $\mu$ using $\mathcal{O}(c(d)(\varepsilon/\mu)^{-d/2})$ tunable weights, where $c(d)$ is a constant depending on the input dimension $d$ and $\varepsilon>0$ is the target approximation error. Further, we extend such a result to Lebesgue-integrable functions with the upper bound characterized by the modulus of continuity. These results indicate a factor of $d$ reduction in the number of tunable weights compared with the classical results for ReLU networks. Our results are also order-optimal in $\varepsilon$, thus achieving the optimal approximation rate, as they match a generalized lower bound derived in this paper. This work adds to the theoretical justifications for ResNet’s stellar practical performance.
https://proceedings.mlr.press/v235/liu24an.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24an/liu24an.pdf
https://openreview.net/forum?id=ULKvSqmSgA
Convergence of Online Learning Algorithm for a Mixture of Multiple Linear Regressions
https://proceedings.mlr.press/v235/liu24an.html
Yujing Liu, Zhixin Liu, Lei Guo
https://proceedings.mlr.press/v235/liu24an.html
ICML 2024
This paper considers the parameter learning and data clustering problem for the mixture of linear regressions (MLR) with multiple sub-models and arbitrary mixing weights. To deal with the data streaming case, we propose an online learning algorithm to estimate the unknown parameters. By utilizing Ljung’s ODE method, we establish the almost sure convergence results of this MLR problem without the traditional i.i.d. assumption on the input data for the first time. Based on the convergence property and using the classical stochastic Lyapunov function method, we also obtain the convergence rate analysis of the proposed algorithm for the first time. In addition, the data clustering can asymptotically achieve the same performance as in the case with known parameters. Future work will consider how to relax the asymptotically stationary and ergodic assumption on the input data, and how to design algorithms with global convergence performance for the MLR problem.
https://proceedings.mlr.press/v235/liu24ao.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ao/liu24ao.pdf
https://openreview.net/forum?id=hunSEjeCPE
Energy-Guided Diffusion Sampling for Offline-to-Online Reinforcement Learning
https://proceedings.mlr.press/v235/liu24ao.html
Xu-Hui Liu, Tian-Shuo Liu, Shengyi Jiang, Ruifeng Chen, Zhilong Zhang, Xinwei Chen, Yang Yu
https://proceedings.mlr.press/v235/liu24ao.html
ICML 2024
Combining offline and online reinforcement learning (RL) techniques is indeed crucial for achieving efficient and safe learning where data acquisition is expensive. Existing methods replay offline data directly in the online phase, resulting in a significant challenge of data distribution shift and subsequently causing inefficiency in online fine-tuning. To address this issue, we introduce an innovative approach, Energy-guided DIffusion Sampling (EDIS), which utilizes a diffusion model to extract prior knowledge from the offline dataset and employs energy functions to distill this knowledge for enhanced data generation in the online phase. The theoretical analysis demonstrates that EDIS exhibits reduced suboptimality compared to solely utilizing online data or directly reusing offline data. EDIS is a plug-in approach and can be combined with existing methods in offline-to-online RL setting. By implementing EDIS to off-the-shelf methods Cal-QL and IQL, we observe a notable 20% average improvement in empirical performance on MuJoCo, AntMaze, and Adroit environments. Code is available at https://github.com/liuxhym/EDIS.
https://proceedings.mlr.press/v235/liu24ap.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ap/liu24ap.pdf
https://openreview.net/forum?id=rZD9hV0Bc4
Moreau Envelope for Nonconvex Bi-Level Optimization: A Single-Loop and Hessian-Free Solution Strategy
https://proceedings.mlr.press/v235/liu24ap.html
Risheng Liu, Zhu Liu, Wei Yao, Shangzhi Zeng, Jin Zhang
https://proceedings.mlr.press/v235/liu24ap.html
ICML 2024
This work focuses on addressing two major challenges in the context of large-scale nonconvex Bi-Level Optimization (BLO) problems, which are increasingly applied in machine learning due to their ability to model nested structures. These challenges involve ensuring computational efficiency and providing theoretical guarantees. While recent advances in scalable BLO algorithms have primarily relied on lower-level convexity simplification, our work specifically tackles large-scale BLO problems involving nonconvexity in both the upper and lower levels. We simultaneously address computational and theoretical challenges by introducing an innovative single-loop gradient-based algorithm, utilizing the Moreau envelope-based reformulation, and providing non-asymptotic convergence analysis for general nonconvex BLO problems. Notably, our algorithm relies solely on first-order gradient information, enhancing its practicality and efficiency, especially for large-scale BLO learning tasks. We validate our approach’s effectiveness through experiments on various synthetic problems, two typical hyper-parameter learning tasks, and a real-world neural architecture search application, collectively demonstrating its superior performance.
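For reference, Moreau-envelope-based reformulations of this kind typically start from the standard Moreau envelope of a function $g$ with smoothing parameter $\gamma > 0$, $M_\gamma g(\theta) = \min_{y} \{\, g(y) + \tfrac{1}{2\gamma}\|y-\theta\|^2 \,\}$, a smoothed surrogate whose (sub)gradients can be handled with first-order, Hessian-free updates. This is the textbook definition only; the paper's exact reformulated objective is not stated in the abstract.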
https://proceedings.mlr.press/v235/liu24aq.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24aq/liu24aq.pdf
https://openreview.net/forum?id=jzHmElqpPe
Position: Foundation Agents as the Paradigm Shift for Decision Making
https://proceedings.mlr.press/v235/liu24aq.html
Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao, Junge Zhang
https://proceedings.mlr.press/v235/liu24aq.html
ICML 2024
Decision making demands intricate interplay between perception, memory, and reasoning to discern optimal policies. Conventional approaches to decision making face challenges related to low sample efficiency and poor generalization. In contrast, foundation models in language and vision have showcased rapid adaptation to diverse new tasks. Therefore, we advocate for the construction of foundation agents as a transformative shift in the learning paradigm of agents. This proposal is underpinned by the formulation of foundation agents with their fundamental characteristics and challenges motivated by the success of large language models (LLMs). Moreover, we specify the roadmap of foundation agents from large interactive data collection or generation, to self-supervised pretraining and adaptation, and knowledge and value alignment with LLMs. Lastly, we pinpoint critical research questions derived from the formulation and delineate trends for foundation agents supported by real-world use cases, addressing both technical and theoretical aspects to propel the field towards a more comprehensive and impactful future.
https://proceedings.mlr.press/v235/liu24ar.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ar/liu24ar.pdf
https://openreview.net/forum?id=LmzsgSDkWs
Learning with Partial-Label and Unlabeled Data: A Uniform Treatment for Supervision Redundancy and Insufficiency
https://proceedings.mlr.press/v235/liu24ar.html
Yangfan Liu, Jiaqi Lv, Xin Geng, Ning Xu
https://proceedings.mlr.press/v235/liu24ar.html
ICML 2024
One major challenge in weakly supervised learning is learning from inexact supervision, ranging from partial labels (PLs) with redundant information to the extreme of unlabeled data with insufficient information. While recent work has made significant strides in specific inexact supervision contexts, supervision forms typically coexist in complex combinations. This is exemplified in semi-supervised partial label learning, where PLs act as the exclusive supervision in a semi-supervised setting. Current strategies addressing combined inexact scenarios are usually composite, which can lead to incremental solutions that essentially replicate existing methods. In this paper, we propose a novel approach to uniformly tackle both label redundancy and insufficiency, derived from a mutual information-based perspective. We design a label channel that facilitates dynamic label exchange within the candidate label sets, which identifies potential true labels and filters out likely incorrect ones, thereby minimizing error accumulation. Experimental results demonstrate the superiority of our method over existing state-of-the-art PL and semi-supervised learning approaches by directly integrating them. Furthermore, our extended experiments on partial-complementary label learning underscore the flexibility of our uniform treatment in managing diverse supervision scenarios.
https://proceedings.mlr.press/v235/liu24as.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24as/liu24as.pdf
https://openreview.net/forum?id=sjJZHPV9Id
Revisiting Context Aggregation for Image Matting
https://proceedings.mlr.press/v235/liu24as.html
Qinglin Liu, Xiaoqian Lv, Quanling Meng, Zonglin Li, Xiangyuan Lan, Shuo Yang, Shengping Zhang, Liqiang Nie
https://proceedings.mlr.press/v235/liu24as.html
ICML 2024
Traditional studies emphasize the significance of context information in improving matting performance. Consequently, deep learning-based matting methods delve into designing pooling or affinity-based context aggregation modules to achieve superior results. However, these modules cannot well handle the context scale shift caused by the difference in image size during training and inference, resulting in matting performance degradation. In this paper, we revisit the context aggregation mechanisms of matting networks and find that a basic encoder-decoder network without any context aggregation modules can actually learn more universal context aggregation, thereby achieving higher matting performance compared to existing methods. Building on this insight, we present AEMatter, a matting network that is straightforward yet very effective. AEMatter adopts a Hybrid-Transformer backbone with appearance-enhanced axis-wise learning (AEAL) blocks to build a basic network with strong context aggregation learning capability. Furthermore, AEMatter leverages a large image training strategy to assist the network in learning context aggregation from data. Extensive experiments on five popular matting datasets demonstrate that the proposed AEMatter outperforms state-of-the-art matting methods by a large margin. The source code is available at https://github.com/aipixel/AEMatter.
https://proceedings.mlr.press/v235/liu24at.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24at/liu24at.pdf
https://openreview.net/forum?id=uqWfZ23O9g
Amortized Equation Discovery in Hybrid Dynamical Systems
https://proceedings.mlr.press/v235/liu24at.html
Yongtuo Liu, Sara Magliacane, Miltiadis Kofinas, Stratis Gavves
https://proceedings.mlr.press/v235/liu24at.html
ICML 2024
Hybrid dynamical systems are prevalent in science and engineering to express complex systems with continuous and discrete states. To learn the laws of such systems, all previous methods for equation discovery in hybrid systems follow a two-stage paradigm, i.e. they first group time series into small cluster fragments and then discover equations in each fragment separately through methods for non-hybrid systems. Although effective, performance is then limited because these methods ignore the commonalities in the shared dynamics of fragments that are driven by the same equations. Besides, the two-stage paradigm breaks the interdependence between categorizing and representing dynamics that jointly form hybrid systems. In this paper, we reformulate the problem and propose an end-to-end learning framework, i.e. Amortized Equation Discovery (AMORE), to jointly categorize modes and discover equations characterizing the motion dynamics of each mode by all segments of the mode. Experiments on four hybrid and six non-hybrid systems demonstrate the superior performance of our method against previous methods on equation discovery, segmentation, and forecasting.
https://proceedings.mlr.press/v235/liu24au.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24au/liu24au.pdf
https://openreview.net/forum?id=EncFNR3hxM
KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning
https://proceedings.mlr.press/v235/liu24au.html
Junnan Liu, Qianren Mao, Weifeng Jiang, Jianxin Li
https://proceedings.mlr.press/v235/liu24au.html
ICML 2024
Knowledge graph reasoning plays a vital role in various applications and has garnered considerable attention. Recently, path-based methods have achieved impressive performance. However, they may face limitations stemming from constraints in message-passing neural networks, such as missing paths and information over-squashing. In this paper, we revisit the application of transformers for knowledge graph reasoning to address the constraints faced by path-based methods and propose a novel method KnowFormer. KnowFormer utilizes a transformer architecture to perform reasoning on knowledge graphs from the message-passing perspective, rather than reasoning by textual information like previous pretrained language model based methods. Specifically, we define the attention computation based on the query prototype of knowledge graph reasoning, facilitating convenient construction and efficient optimization. To incorporate structural information into the self-attention mechanism, we introduce structure-aware modules to calculate query, key, and value respectively. Additionally, we present an efficient attention computation method for better scalability. Experimental results demonstrate the superior performance of KnowFormer compared to prominent baseline methods on both transductive and inductive benchmarks.
https://proceedings.mlr.press/v235/liu24av.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24av/liu24av.pdf
https://openreview.net/forum?id=CtgJUQxmEo
Floating Anchor Diffusion Model for Multi-motif Scaffolding
https://proceedings.mlr.press/v235/liu24av.html
Ke Liu, Weian Mao, Shuaike Shen, Xiaoran Jiao, Zheng Sun, Hao Chen, Chunhua Shen
https://proceedings.mlr.press/v235/liu24av.html
ICML 2024
Motif scaffolding seeks to design scaffold structures for constructing proteins with functions derived from the desired motif, which is crucial for the design of vaccines and enzymes. Previous works approach the problem by inpainting or conditional generation. Both of them can only scaffold motifs with fixed positions, and conditional generation cannot guarantee the presence of motifs. However, prior knowledge of the relative motif positions in a protein is not readily available, and constructing a single protein with multiple functions is more general and significant because of the synergies between functions. We propose a Floating Anchor Diffusion (FADiff) model. FADiff allows motifs to float rigidly and independently in the process of diffusion, which guarantees the presence of motifs and automates the motif position design. Our experiments demonstrate the efficacy of FADiff with high success rates and designable novel scaffolds. To the best of our knowledge, FADiff is the first work to tackle the challenge of scaffolding multiple motifs without relying on expertise about relative motif positions in the protein. Code is available at https://github.com/aim-uofa/FADiff.
https://proceedings.mlr.press/v235/liu24aw.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24aw/liu24aw.pdf
https://openreview.net/forum?id=dHXKCyaIkp
Deep Functional Factor Models: Forecasting High-Dimensional Functional Time Series via Bayesian Nonparametric Factorization
https://proceedings.mlr.press/v235/liu24aw.html
Yirui Liu, Xinghao Qiao, Yulong Pei, Liying Wang
https://proceedings.mlr.press/v235/liu24aw.html
ICML 2024
This paper introduces the Deep Functional Factor Model (DF2M), a Bayesian nonparametric model designed for analysis of high-dimensional functional time series. DF2M is built upon the Indian Buffet Process and the multi-task Gaussian Process, incorporating a deep kernel function that captures non-Markovian and nonlinear temporal dynamics. Unlike many black-box deep learning models, DF2M offers an explainable approach to utilizing neural networks by constructing a factor model and integrating deep neural networks within the kernel function. Additionally, we develop a computationally efficient variational inference algorithm to infer DF2M. Empirical results from four real-world datasets demonstrate that DF2M provides better explainability and superior predictive accuracy compared to conventional deep learning models for high-dimensional functional time series.
https://proceedings.mlr.press/v235/liu24ax.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ax/liu24ax.pdf
https://openreview.net/forum?id=eQaOb4r6YC
Fast Decision Boundary based Out-of-Distribution Detector
https://proceedings.mlr.press/v235/liu24ax.html
Litian Liu, Yao Qin
https://proceedings.mlr.press/v235/liu24ax.html
ICML 2024
Efficient and effective Out-of-Distribution (OOD) detection is essential for the safe deployment of AI systems. Existing feature space methods, while effective, often incur significant computational overhead due to their reliance on auxiliary models built from training features. In this paper, we propose a computationally-efficient OOD detector without using auxiliary models while still leveraging the rich information embedded in the feature space. Specifically, we detect OOD samples based on their feature distances to decision boundaries. To minimize computational cost, we introduce an efficient closed-form estimation, analytically proven to tightly lower bound the distance. Based on our estimation, we discover that In-Distribution (ID) features tend to be further from decision boundaries than OOD features. Additionally, ID and OOD samples are better separated when compared at equal deviation levels from the mean of training features. By regularizing the distances to decision boundaries based on feature deviation from the mean, we develop a hyperparameter-free, auxiliary model-free OOD detector. Our method matches or surpasses the effectiveness of state-of-the-art methods in extensive experiments while incurring negligible overhead in inference latency. Overall, our approach significantly improves the efficiency-effectiveness trade-off in OOD detection. Code is available at: https://github.com/litianliu/fDBD-OOD.
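A sketch of the kind of closed-form decision-boundary distance described here is given below. It assumes a linear classification head over penultimate features; the distance formula for a linear boundary is exact, but the aggregation and regularization shown are simplified guesses at the paper's recipe.

```python
import numpy as np

def fdbd_style_score(z, W, b, train_mean):
    """Hedged sketch of a decision-boundary-distance OOD score.

    For a penultimate feature z and a linear head (W: classes x d, b: classes),
    the distance from z to the boundary between the predicted class c and class k
    is |(w_c - w_k) @ z + (b_c - b_k)| / ||w_c - w_k||. Distances are then
    regularized by the feature's deviation from the training-feature mean; the
    exact aggregation in the paper may differ from the simple mean used here.
    """
    logits = W @ z + b
    c = int(np.argmax(logits))
    dists = []
    for k in range(W.shape[0]):
        if k == c:
            continue
        diff_w = W[c] - W[k]
        dists.append(abs(diff_w @ z + (b[c] - b[k])) / np.linalg.norm(diff_w))
    deviation = np.linalg.norm(z - train_mean) + 1e-8
    return float(np.mean(dists) / deviation)   # larger score -> more ID-like
```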
https://proceedings.mlr.press/v235/liu24ay.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ay/liu24ay.pdf
https://openreview.net/forum?id=pPnkpvBeZN
Class-Imbalanced Graph Learning without Class Rebalancing
https://proceedings.mlr.press/v235/liu24ay.html
Zhining Liu, Ruizhong Qiu, Zhichen Zeng, Hyunsik Yoo, David Zhou, Zhe Xu, Yada Zhu, Kommy Weldemariam, Jingrui He, Hanghang Tong
https://proceedings.mlr.press/v235/liu24ay.html
ICML 2024
Class imbalance is prevalent in real-world node classification tasks and poses great challenges for graph learning models. Most existing studies are rooted in a class-rebalancing (CR) perspective and address class imbalance with class-wise reweighting or resampling. In this work, we approach the root cause of class-imbalance bias from a topological perspective. Specifically, we theoretically reveal two fundamental phenomena in the graph topology that greatly exacerbate the predictive bias stemming from class imbalance. On this basis, we devise a lightweight topological augmentation framework BAT to mitigate the class-imbalance bias without class rebalancing. Being orthogonal to CR, BAT can function as an efficient plug-and-play module that can be seamlessly combined with and significantly boost existing CR techniques. Systematic experiments on real-world imbalanced graph learning tasks show that BAT can deliver up to 46.27% performance gain and up to 72.74% bias reduction over existing techniques. Code, examples, and documentation are available at https://github.com/ZhiningLiu1998/BAT.
https://proceedings.mlr.press/v235/liu24az.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24az/liu24az.pdf
https://openreview.net/forum?id=XmLNDlQuzO
Generative Marginalization Models
https://proceedings.mlr.press/v235/liu24az.html
Sulin Liu, Peter Ramadge, Ryan P Adams
https://proceedings.mlr.press/v235/liu24az.html
ICML 2024
We introduce marginalization models (MAMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling by explicitly modeling all induced marginal distributions. Marginalization models enable fast approximation of arbitrary marginal probabilities with a single forward pass of the neural network, which overcomes a major limitation of arbitrary marginal inference models, such as any-order autoregressive models. MAMs also address the scalability bottleneck encountered in training any-order generative models for high-dimensional problems under the context of energy-based training, where the goal is to match the learned distribution to a given desired probability (specified by an unnormalized log-probability function such as energy or reward function). We propose scalable methods for learning the marginals, grounded in the concept of "marginalization self-consistency". We demonstrate the effectiveness of the proposed model on a variety of discrete data distributions, including images, text, physical systems, and molecules, for maximum likelihood and energy-based training settings. MAMs achieve orders of magnitude speedup in evaluating the marginal probabilities on both settings. For energy-based training tasks, MAMs enable any-order generative modeling of high-dimensional problems beyond the scale of previous methods. Code is available at github.com/PrincetonLIPS/MaM.
https://proceedings.mlr.press/v235/liu24ba.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ba/liu24ba.pdf
https://openreview.net/forum?id=LIQYhV45D4
Federated Representation Learning in the Under-Parameterized Regime
https://proceedings.mlr.press/v235/liu24ba.html
Renpu Liu, Cong Shen, Jing Yang
https://proceedings.mlr.press/v235/liu24ba.html
ICML 2024
Federated representation learning (FRL) is a popular personalized federated learning (FL) framework where clients work together to train a common representation while retaining their personalized heads. Existing studies, however, largely focus on the over-parameterized regime. In this paper, we make the initial efforts to investigate FRL in the under-parameterized regime, where the FL model is insufficient to express the variations in all ground-truth models. We propose a novel FRL algorithm FLUTE, and theoretically characterize its sample complexity and convergence rate for linear models in the under-parameterized regime. To the best of our knowledge, this is the first FRL algorithm with provable performance guarantees in this regime. FLUTE features a data-independent random initialization and a carefully designed objective function that aids the distillation of subspace spanned by the global optimal representation from the misaligned local representations. On the technical side, we bridge low-rank matrix approximation techniques with the FL analysis, which may be of broad interest. We also extend FLUTE beyond linear representations. Experimental results demonstrate that FLUTE outperforms state-of-the-art FRL solutions in both synthetic and real-world tasks.
https://proceedings.mlr.press/v235/liu24bb.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bb/liu24bb.pdf
https://openreview.net/forum?id=685vj0lC9z
How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?
https://proceedings.mlr.press/v235/liu24bb.html
Ryan Liu, Theodore Sumers, Ishita Dasgupta, Thomas L. Griffiths
https://proceedings.mlr.press/v235/liu24bb.html
ICML 2024
In day-to-day communication, people often approximate the truth — for example, rounding the time or omitting details — in order to be maximally helpful to the listener. How do large language models (LLMs) handle such nuanced trade-offs? To address this question, we use psychological models and experiments designed to characterize human behavior to analyze LLMs. We test a range of LLMs and explore how optimization for human preferences or inference-time reasoning affects these trade-offs. We find that reinforcement learning from human feedback improves both honesty and helpfulness, while chain-of-thought prompting skews LLMs towards helpfulness over honesty. Finally, GPT-4 Turbo demonstrates human-like response patterns including sensitivity to the conversational framing and listener’s decision context. Our findings reveal the conversational values internalized by LLMs and suggest that even these abstract values can, to a degree, be steered by zero-shot prompting.
https://proceedings.mlr.press/v235/liu24bc.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bc/liu24bc.pdf
https://openreview.net/forum?id=l1YbS3qkdk
Causal Discovery via Conditional Independence Testing with Proxy Variables
https://proceedings.mlr.press/v235/liu24bc.html
Mingzhou Liu, Xinwei Sun, Yu Qiao, Yizhou Wang
https://proceedings.mlr.press/v235/liu24bc.html
ICML 2024
Distinguishing causal connections from correlations is important in many scenarios. However, the presence of unobserved variables, such as latent confounders, can introduce bias in conditional independence testing commonly employed in constraint-based causal discovery for identifying causal relations. To address this issue, existing methods introduced proxy variables to adjust for the bias caused by unobservedness. However, these methods were either limited to categorical variables or relied on strong parametric assumptions for identification. In this paper, we propose a novel hypothesis-testing procedure that can effectively examine the existence of the causal relationship over continuous variables, without any parametric constraint. Our procedure is based on discretization, which under completeness conditions, is able to asymptotically establish a linear equation whose coefficient vector is identifiable under the causal null hypothesis. Based on this, we introduce our test statistic and demonstrate its asymptotic level and power. We validate the effectiveness of our procedure using both synthetic and real-world data.
https://proceedings.mlr.press/v235/liu24bd.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bd/liu24bd.pdf
https://openreview.net/forum?id=wdezvnc9EG
Perfect Alignment May be Poisonous to Graph Contrastive Learning
https://proceedings.mlr.press/v235/liu24bd.html
Jingyu Liu, Huayi Tang, Yong Liu
https://proceedings.mlr.press/v235/liu24bd.html
ICML 2024
Graph Contrastive Learning (GCL) aims to learn node representations by aligning positive pairs and separating negative ones. However, few researchers have focused on the underlying principles behind the specific augmentations used in graph-based learning: what kind of augmentation helps downstream performance, how contrastive learning actually influences downstream tasks, and why the magnitude of augmentation matters so much. This paper seeks to address these questions by establishing a connection between augmentation and downstream performance. Our findings reveal that GCL contributes to downstream tasks mainly by separating different classes rather than gathering nodes of the same class. Thus, perfect alignment and augmentation overlap, which make all intra-class samples identical, cannot fully explain the success of contrastive learning. Therefore, in order to understand how augmentation aids the contrastive learning process, we conduct further investigations into generalization, finding that perfect alignment, which makes positive pairs identical, can help the contrastive loss but is poisonous to generalization; as a result, perfect alignment may not lead to the best downstream performance, so specifically designed augmentation is needed to achieve an appropriate level of alignment and improve downstream accuracy. We further analyse the results using information theory and graph spectral theory and propose two simple but effective methods to verify the theories. The two methods can be easily applied to various GCL algorithms, and extensive experiments are conducted to demonstrate their effectiveness. The code is available at https://github.com/somebodyhh1/GRACEIS.
https://proceedings.mlr.press/v235/liu24be.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24be/liu24be.pdf
https://openreview.net/forum?id=F3Ds71Xgo1
Entropy-Reinforced Planning with Large Language Models for Drug Discovery
https://proceedings.mlr.press/v235/liu24be.html
Xuefeng Liu, Chih-Chan Tien, Peng Ding, Songhao Jiang, Rick L. Stevens
https://proceedings.mlr.press/v235/liu24be.html
ICML 2024
The objective of drug discovery is to identify chemical compounds that possess specific pharmaceutical properties toward a binding target. Existing large language models (LLMs) can achieve high token matching scores in terms of likelihood for molecule generation. However, relying solely on LLM decoding often results in the generation of molecules that are either invalid due to a single misused token, or suboptimal due to unbalanced exploration and exploitation as a consequence of the LLM’s prior experience. Here we propose ERP, Entropy-Reinforced Planning for Transformer Decoding, which employs an entropy-reinforced planning algorithm to enhance the Transformer decoding process and strike a balance between exploitation and exploration. ERP aims to achieve improvements in multiple properties compared to direct sampling from the Transformer. We evaluated ERP on the SARS-CoV-2 virus (3CLPro) and human cancer cell target protein (RTCB) benchmarks and demonstrated that, in both benchmarks, ERP consistently outperforms the current state-of-the-art algorithm by 1-5 percent and baselines by 5-10 percent. Moreover, such improvement is robust across Transformer models trained with different objectives. Finally, to further illustrate the capabilities of ERP, we tested our algorithm on three code generation benchmarks and outperformed the current state-of-the-art approach as well. Our code is publicly available at: https://github.com/xuefeng-cs/ERP.
https://proceedings.mlr.press/v235/liu24bf.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bf/liu24bf.pdf
https://openreview.net/forum?id=2hWd4CVhXz
New Sample Complexity Bounds for Sample Average Approximation in Heavy-Tailed Stochastic Programming
https://proceedings.mlr.press/v235/liu24bf.html
Hongcheng Liu, Jindong Tong
https://proceedings.mlr.press/v235/liu24bf.html
ICML 2024
This paper studies sample average approximation (SAA) and its simple regularized variation in solving convex or strongly convex stochastic programming problems. Under heavy-tailed assumptions and comparable regularity conditions as in the typical SAA literature, we show — perhaps for the first time — that the sample complexity can be completely free from any complexity measure (e.g., logarithm of the covering number) of the feasible region. As a result, our new bounds can be more advantageous than the state-of-the-art in terms of the dependence on the problem dimensionality.
https://proceedings.mlr.press/v235/liu24bg.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bg/liu24bg.pdf
https://openreview.net/forum?id=ZvJ2lQQKjz
Zero-Shot ECG Classification with Multimodal Learning and Test-time Clinical Knowledge Enhancement
https://proceedings.mlr.press/v235/liu24bg.html
Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, Rossella Arcucci
https://proceedings.mlr.press/v235/liu24bg.html
ICML 2024
Electrocardiograms (ECGs) are non-invasive diagnostic tools crucial for detecting cardiac arrhythmic diseases in clinical practice. While ECG Self-supervised Learning (eSSL) methods show promise in representation learning from unannotated ECG data, they often overlook the clinical knowledge that can be found in reports. This oversight and the requirement for annotated samples for downstream tasks limit eSSL’s versatility. In this work, we address these issues with the Multimodal ECG Representation Learning (MERL) framework. Through multimodal learning on ECG records and associated reports, MERL is capable of performing zero-shot ECG classification with text prompts, eliminating the need for training data in downstream tasks. At test time, we propose the Clinical Knowledge Enhanced Prompt Engineering (CKEPE) approach, which uses Large Language Models (LLMs) to exploit external expert-verified clinical knowledge databases, generating more descriptive prompts and reducing hallucinations in LLM-generated content to boost zero-shot classification. Based on MERL, we perform the first benchmark across six public ECG datasets, showing the superior performance of MERL compared against eSSL methods. Notably, MERL achieves an average AUC score of 75.2% in zero-shot classification (without training data), 3.2% higher than linear probed eSSL methods with 10% annotated training data, averaged across all six datasets.
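Zero-shot classification of this kind generally reduces to embedding similarity between the signal and class-describing prompts. The sketch below shows that generic CLIP-style step only; the ECG and text encoders, and the CKEPE prompt construction, are assumed and not shown.

```python
import numpy as np

def zero_shot_classify(ecg_embedding, prompt_embeddings, class_names):
    """Generic zero-shot classification by cosine similarity (CLIP-style sketch).

    `ecg_embedding` is one ECG record's embedding (shape (d,)) and
    `prompt_embeddings[i]` is the embedding of a text prompt describing class i
    (shape (num_classes, d)); both encoders are assumed, not implemented here.
    The predicted class maximizes cosine similarity.
    """
    def normalize(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

    sims = normalize(prompt_embeddings) @ normalize(ecg_embedding)
    return class_names[int(np.argmax(sims))], sims
```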
https://proceedings.mlr.press/v235/liu24bh.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bh/liu24bh.pdf
https://openreview.net/forum?id=igRjCCAz2a
Unified Generation, Reconstruction, and Representation: Generalized Diffusion with Adaptive Latent Encoding-Decoding
https://proceedings.mlr.press/v235/liu24bh.html
Guangyi Liu, Yu Wang, Zeyu Feng, Qiyu Wu, Liping Tang, Yuan Gao, Zhen Li, Shuguang Cui, Julian Mcauley, Zichao Yang, Eric P. Xing, Zhiting Hu
https://proceedings.mlr.press/v235/liu24bh.html
ICML 2024
The vast applications of deep generative models are anchored in three core capabilities—generating new instances, reconstructing inputs, and learning compact representations—across various data types, such as discrete text/protein sequences and continuous images. Existing model families, like variational autoencoders (VAEs), generative adversarial networks (GANs), autoregressive models, and (latent) diffusion models, generally excel in specific capabilities and data types but fall short in others. We introduce Generalized Encoding-Decoding Diffusion Probabilistic Models (EDDPMs) which integrate the core capabilities for broad applicability and enhanced performance. EDDPMs generalize the Gaussian noising-denoising in standard diffusion by introducing parameterized encoding-decoding. Crucially, EDDPMs are compatible with the well-established diffusion model objective and training recipes, allowing effective learning of the encoder-decoder parameters jointly with diffusion. By choosing appropriate encoder/decoder (e.g., large language models), EDDPMs naturally apply to different data types. Extensive experiments on text, proteins, and images demonstrate the flexibility to handle diverse data and tasks and the strong improvement over various existing models. Code is available at https://github.com/guangyliu/EDDPM .
https://proceedings.mlr.press/v235/liu24bi.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bi/liu24bi.pdf
https://openreview.net/forum?id=bULHOW1RXM
Differentiable Model Scaling using Differentiable Topk
https://proceedings.mlr.press/v235/liu24bi.html
Kai Liu, Ruohui Wang, Jianfei Gao, Kai Chen
https://proceedings.mlr.press/v235/liu24bi.html
ICML 2024
Over the past few years, as large language models have ushered in an era of intelligence emergence, there has been an intensified focus on scaling networks. Although Neural Architecture Search (NAS) methods have been proposed to automate this process, they suffer from low search efficiency. This study introduces Differentiable Model Scaling (DMS), increasing the efficiency for searching optimal width and depth in networks. DMS can model both width and depth in a direct and fully differentiable way, making it easy to optimize. We have evaluated our DMS across diverse tasks, ranging from vision tasks to NLP tasks and various network architectures, including CNNs and Transformers. Results consistently indicate that our DMS can find improved structures and outperforms state-of-the-art NAS methods. Specifically, for image classification on ImageNet, our DMS improves the top-1 accuracy of EfficientNet-B0 and Deit-Tiny by 1.4% and 0.6%, respectively, and outperforms the state-of-the-art zero-shot NAS method, ZiCo, by 1.3% while requiring only 0.4 GPU days for searching. For object detection on COCO, DMS improves the mAP of Yolo-v8-n by 2.0%. For language modeling, our pruned Llama-7B outperforms the prior method with lower perplexity and higher zero-shot classification accuracy. Our code is available at https://github.com/LKJacky/Differentiable-Model-Scaling.
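One common fully differentiable relaxation of a top-k selection, of the kind such width/depth search builds on, uses a learnable threshold and a sigmoid; this is a generic illustration and not necessarily the operator used in DMS.

```python
import torch

def soft_topk_mask(scores, threshold, temperature=0.05):
    """A common differentiable relaxation of a top-k mask (illustrative only).

    Elements whose importance score exceeds a learnable threshold receive a mask
    value near 1, others near 0; lowering the temperature sharpens the mask, so
    structure sizes can be optimized with ordinary gradients.
    """
    return torch.sigmoid((scores - threshold) / temperature)

# Example: mask over 8 channel-importance scores with a learnable threshold
scores = torch.tensor([0.9, 0.2, 0.7, 0.1, 0.8, 0.05, 0.6, 0.3], requires_grad=True)
threshold = torch.tensor(0.5, requires_grad=True)
mask = soft_topk_mask(scores, threshold)
```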
https://proceedings.mlr.press/v235/liu24bj.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bj/liu24bj.pdf
https://openreview.net/forum?id=VxI0gInNlh
Symmetric Matrix Completion with ReLU Sampling
https://proceedings.mlr.press/v235/liu24bj.html
Huikang Liu, Peng Wang, Longxiu Huang, Qing Qu, Laura Balzano
https://proceedings.mlr.press/v235/liu24bj.html
ICML 2024
We study the problem of symmetric positive semi-definite low-rank matrix completion (MC) with deterministic entry-dependent sampling. In particular, we consider rectified linear unit (ReLU) sampling, where only positive entries are observed, as well as a generalization to threshold-based sampling. We first empirically demonstrate that the landscape of this MC problem is not globally benign: Gradient descent (GD) with random initialization will generally converge to stationary points that are not globally optimal. Nevertheless, we prove that when the matrix factor with a small rank satisfies mild assumptions, the nonconvex objective function is geodesically strongly convex on the quotient manifold in a neighborhood of a planted low-rank matrix. Moreover, we show that our assumptions are satisfied by a matrix factor with i.i.d. Gaussian entries. Finally, we develop a tailor-designed initialization for GD to solve our studied formulation, which empirically always achieves convergence to the global minima. We also conduct extensive experiments and compare MC methods, investigating convergence and completion performance with respect to initialization, noise level, dimension, and rank.
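A minimal version of the nonconvex formulation studied here can be written down directly: fit a low-rank factor by gradient descent on the squared error over the observed (positive) entries. The sketch below uses a plain random initialization rather than the tailored initialization proposed in the paper.

```python
import numpy as np

def relu_sampled_completion(M_obs, rank, steps=2000, lr=0.01, seed=0):
    """Gradient descent for symmetric PSD completion under ReLU sampling (sketch).

    Only the positive entries of the ground-truth matrix are observed
    (mask = M_obs > 0). We fit M ~ U U^T by minimizing the squared error on the
    observed entries.
    """
    rng = np.random.default_rng(seed)
    n = M_obs.shape[0]
    mask = (M_obs > 0).astype(float)
    U = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        R = mask * (U @ U.T - M_obs)           # residual on observed entries only
        grad = 2.0 * (R + R.T) @ U / mask.sum()
        U -= lr * grad
    return U @ U.T
```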
https://proceedings.mlr.press/v235/liu24bk.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bk/liu24bk.pdf
https://openreview.net/forum?id=OERwuPzHdh
DNA-SE: Towards Deep Neural-Nets Assisted Semiparametric Estimation
https://proceedings.mlr.press/v235/liu24bk.html
Qinshuo Liu, Zixin Wang, Xi’An Li, Xinyao Ji, Lei Zhang, Lin Liu, Zhonghua Liu
https://proceedings.mlr.press/v235/liu24bk.html
ICML 2024
Semiparametric statistics play a pivotal role in a wide range of domains, including but not limited to missing data, causal inference, and transfer learning, to name a few. In many settings, semiparametric theory leads to (nearly) statistically optimal procedures that yet involve numerically solving Fredholm integral equations of the second kind. Traditional numerical methods, such as polynomial or spline approximations, are difficult to scale to multi-dimensional problems. Alternatively, statisticians may choose to approximate the original integral equations by ones with closed-form solutions, resulting in computationally more efficient, but statistically suboptimal or even incorrect procedures. To bridge this gap, we propose a novel framework by formulating the semiparametric estimation problem as a bi-level optimization problem; and then we propose a scalable algorithm called Deep Neural-Nets Assisted Semiparametric Estimation ($\mathsf{DNA\mbox{-}SE}$) by leveraging the universal approximation property of Deep Neural-Nets (DNN) to streamline semiparametric procedures. Through extensive numerical experiments and a real data analysis, we demonstrate the numerical and statistical advantages of $\mathsf{DNA\mbox{-}SE}$ over traditional methods. To the best of our knowledge, we are the first to bring DNN into semiparametric statistics as a numerical solver of integral equations in our proposed general framework.
https://proceedings.mlr.press/v235/liu24bl.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bl/liu24bl.pdf
https://openreview.net/forum?id=t6dBpwkbea
TimeX++: Learning Time-Series Explanations with Information Bottleneck
https://proceedings.mlr.press/v235/liu24bl.html
Zichuan Liu, Tianchun Wang, Jimeng Shi, Xu Zheng, Zhuomin Chen, Lei Song, Wenqian Dong, Jayantha Obeysekera, Farhad Shirani, Dongsheng Luo
https://proceedings.mlr.press/v235/liu24bl.html
ICML 2024
Explaining deep learning models operating on time series data is crucial in various applications of interest which require interpretable and transparent insights from time series signals. In this work, we investigate this problem from an information theoretic perspective and show that most existing measures of explainability may suffer from trivial solutions and distributional shift issues. To address these issues, we introduce a simple yet practical objective function for time series explainable learning. The design of the objective function builds upon the principle of information bottleneck (IB), and modifies the IB objective function to avoid trivial solutions and distributional shift issues. We further present TimeX++, a novel explanation framework that leverages a parametric network to produce explanation-embedded instances that are both in-distributed and label-preserving. We evaluate TimeX++ on both synthetic and real-world datasets comparing its performance against leading baselines, and validate its practical efficacy through case studies in a real-world environmental application. Quantitative and qualitative evaluations show that TimeX++ outperforms baselines across all datasets, demonstrating a substantial improvement in explanation quality for time series data. The source code is available at https://github.com/zichuan-liu/TimeXplusplus.
https://proceedings.mlr.press/v235/liu24bm.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bm/liu24bm.pdf
https://openreview.net/forum?id=NsHxeSCtgr
LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models
https://proceedings.mlr.press/v235/liu24bm.html
Tianci Liu, Haoyu Wang, Shiyang Wang, Yu Cheng, Jing Gao
https://proceedings.mlr.press/v235/liu24bm.html
ICML 2024
Large language models (LLMs) have achieved impressive performance on various natural language generation tasks. Nonetheless, they suffer from generating negative and harmful contents that are biased against certain demographic groups (e.g., female), raising severe fairness concerns. As remedies, prior works intervened in the generation by removing attitude or demographic information, inevitably degrading the generation quality and resulting in notable fairness-fluency trade-offs. However, it is still under-explored to what extent the fluency has to be affected in order to achieve a desired level of fairness. In this work, we conduct the first formal study from an information-theoretic perspective. We show that previous approaches are excessive for debiasing and propose LIDAO, a general framework that provably debiases a (L)LM at better fluency. We further robustify LIDAO in adversarial scenarios, where a carefully-crafted prompt may stimulate LLMs exhibiting instruction-following abilities to generate texts whose fairness issues appear only when the prompt is also taken into account. Experiments on three LMs ranging from 0.7B to 7B parameters demonstrate the superiority of our method.
https://proceedings.mlr.press/v235/liu24bn.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bn/liu24bn.pdf
https://openreview.net/forum?id=3d5CIRG1n2
DoRA: Weight-Decomposed Low-Rank Adaptation
https://proceedings.mlr.press/v235/liu24bn.html
Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen
https://proceedings.mlr.press/v235/liu24bn.html
ICML 2024
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because of avoiding additional inference costs. However, there still often exists an accuracy gap between these methods and full fine-tuning (FT). In this work, we first introduce a novel weight decomposition analysis to investigate the inherent differences between FT and LoRA. Aiming to resemble the learning capacity of FT from the findings, we propose Weight-Decomposed Low-Rank Adaptation (DoRA). DoRA decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, specifically employing LoRA for directional updates to efficiently minimize the number of trainable parameters. By employing DoRA, we enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead. DoRA consistently outperforms LoRA on fine-tuning LLaMA, LLaVA, and VL-BART on various downstream tasks, such as commonsense reasoning, visual instruction tuning, and image/video-text understanding. The code is available at https://github.com/NVlabs/DoRA.
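The weight decomposition described in this abstract can be sketched compactly as below. It reflects the commonly cited DoRA formulation (per-column magnitude times a column-normalized direction with a LoRA update), but normalization conventions and initialization details should be taken as assumptions, not the authors' exact code.

```python
import torch

def dora_weight(W0, A, B, m):
    """Hedged sketch of a DoRA-style adapted weight.

    The pre-trained weight W0 (out x in) is adapted by a LoRA update B @ A, then
    re-decomposed into a direction (column-normalized matrix) and a learnable
    magnitude vector m (one entry per column).
    """
    V = W0 + B @ A                              # LoRA-updated direction candidate
    col_norm = V.norm(dim=0, keepdim=True)      # per-column L2 norm
    return m * (V / col_norm)                   # rescale direction by learned magnitude

# Minimal usage with illustrative shapes (out=64, in=128, rank=4)
W0 = torch.randn(64, 128)
A = torch.randn(4, 128) * 0.01
B = torch.zeros(64, 4)                          # LoRA-style zero init, so B @ A = 0 at start
m = W0.norm(dim=0, keepdim=True).clone()        # magnitude initialized from W0's column norms
W_adapted = dora_weight(W0, A, B, m)
```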
https://proceedings.mlr.press/v235/liu24bo.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bo/liu24bo.pdf
https://openreview.net/forum?id=klKk9ETAyU
High-Probability Bound for Non-Smooth Non-Convex Stochastic Optimization with Heavy Tails
https://proceedings.mlr.press/v235/liu24bo.html
Langqi Liu, Yibo Wang, Lijun Zhang
https://proceedings.mlr.press/v235/liu24bo.html
ICML 2024
Recently, Cutkosky et al. introduced the online-to-non-convex framework, which utilizes online learning methods to solve non-smooth non-convex optimization problems, and achieves an $\mathcal{O}(\epsilon^{-3}\delta^{-1})$ gradient complexity for finding $(\delta,\epsilon)$-stationary points. However, their results rely on the bounded variance assumption of stochastic gradients and only hold in expectation. To address these limitations, we investigate the case where stochastic gradients obey heavy-tailed distributions with finite $\mathfrak{p}$-th moments for some $\mathfrak{p}\in(1,2]$, and propose a novel algorithm which is able to identify a $(\delta,\epsilon)$-stationary point with high probability, after consuming $\tilde{\mathcal{O}}(\epsilon^{-\frac{2\mathfrak{p}-1}{\mathfrak{p}-1}}\delta^{-1})$ stochastic gradients. The key idea is to first incorporate the gradient clipping technique into the online-to-non-convex framework to produce a sequence of points whose averaged gradient norms are no greater than $\epsilon$. Then, we propose a validation method to select one $(\delta,\epsilon)$-stationary point among the candidates. When gradient distributions have bounded variance, i.e., $\mathfrak{p}=2$, our result turns into $\tilde{\mathcal{O}}(\epsilon^{-3}\delta^{-1})$, which improves the existing $\tilde{\mathcal{O}}(\epsilon^{-4}\delta^{-1})$ high-probability bound. When the objective is smooth, our algorithm can also find an $\epsilon$-stationary point with $\tilde{\mathcal{O}}(\epsilon^{-\frac{3\mathfrak{p}-2}{\mathfrak{p}-1}})$ gradient queries.
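The gradient clipping step referred to above is the standard norm-based clipping shown below; the clipping threshold schedule and how the clipped gradients enter the online-to-non-convex updates follow the paper and are not reproduced here.

```python
import numpy as np

def clip_gradient(g, tau):
    """Clip a stochastic gradient to Euclidean norm at most tau.

    Under heavy-tailed noise, clipping keeps every update bounded, which is the
    key robustification enabling high-probability guarantees.
    """
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

# Example: a heavy-tailed gradient sample clipped to norm 1.0
g_clipped = clip_gradient(np.array([5.0, -12.0, 3.0]), tau=1.0)
```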