abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v235/li24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24k/li24k.pdf
|
https://openreview.net/forum?id=DbMm8pmoAP
|
Evolving Subnetwork Training for Large Language Models
|
https://proceedings.mlr.press/v235/li24k.html
|
Hanqi Li, Lu Chen, Da Ma, Zijian Wu, Su Zhu, Kai Yu
|
https://proceedings.mlr.press/v235/li24k.html
|
ICML 2024
|
Large language models have ushered in a new era of artificial intelligence research. However, their substantial training costs hinder further development and widespread adoption. In this paper, inspired by the redundancy in the parameters of large language models, we propose a novel training paradigm: Evolving Subnetwork Training (EST). EST samples subnetworks from the layers of the large language model and from commonly used modules within each layer, Multi-Head Attention (MHA) and Multi-Layer Perceptron (MLP). By gradually increasing the size of the subnetworks during training, EST reduces the cost of training. We apply EST to train the GPT2 and TinyLlama models, saving 26.7% of training FLOPs for GPT2 and 25.0% for TinyLlama without an increase in loss on the pre-training dataset. Moreover, EST leads to performance improvements on downstream tasks, indicating that it benefits generalization. Additionally, we provide intuitive theoretical studies based on training dynamics and Dropout theory to support the feasibility of EST.
|
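The entry above describes growing randomly sampled subnetworks (layers, attention heads, MLP width) over the course of training. Below is a minimal sketch of such a schedule; the function names, the linear growth curve, and the GPT2-like default sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a subnetwork-sampling schedule: the fraction of active
# layers/heads/MLP width grows from a small value to 1.0 over training, and a
# random subnetwork of that size is drawn at each step.
import random

def active_fraction(step, total_steps, start_frac=0.5):
    """Linearly grow the subnetwork size from start_frac to 1.0."""
    return min(1.0, start_frac + (1.0 - start_frac) * step / total_steps)

def sample_subnetwork(step, total_steps, n_layers=12, n_heads=12, mlp_dim=3072):
    frac = active_fraction(step, total_steps)
    layers = sorted(random.sample(range(n_layers), max(1, round(frac * n_layers))))
    heads = sorted(random.sample(range(n_heads), max(1, round(frac * n_heads))))
    mlp_width = max(1, round(frac * mlp_dim))
    return {"layers": layers, "heads": heads, "mlp_width": mlp_width}

# Early in training only about half of the model is active per step.
print(sample_subnetwork(step=0, total_steps=10_000))
print(sample_subnetwork(step=10_000, total_steps=10_000))  # full model
```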
https://proceedings.mlr.press/v235/li24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24l/li24l.pdf
|
https://openreview.net/forum?id=f49AkFT5jf
|
Data Poisoning Attacks against Conformal Prediction
|
https://proceedings.mlr.press/v235/li24l.html
|
Yangyi Li, Aobo Chen, Wei Qian, Chenxu Zhao, Divya Lidder, Mengdi Huai
|
https://proceedings.mlr.press/v235/li24l.html
|
ICML 2024
|
Efficient and theoretically sound uncertainty quantification is crucial for building trust in deep learning models. This has spurred a growing interest in conformal prediction (CP), a powerful technique that provides a model-agnostic and distribution-free method for obtaining conformal prediction sets with theoretical guarantees. However, the vulnerability of such CP methods to dedicated data poisoning attacks has not been studied previously. To bridge this gap, in this paper we propose, for the first time, a new class of black-box data poisoning attacks against CP, where the adversary aims to cause desired manipulations of specific examples’ prediction uncertainty results (instead of misclassifications). Additionally, we design novel optimization frameworks for our proposed attacks. Further, we conduct extensive experiments to validate the effectiveness of our attacks in various settings (e.g., the full and split CP settings). Notably, our extensive experiments show that our attacks are more effective at manipulating uncertainty results than traditional poisoning attacks that aim to induce misclassifications, and that existing defenses against conventional attacks are ineffective against our proposed attacks.
|
https://proceedings.mlr.press/v235/li24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24m/li24m.pdf
|
https://openreview.net/forum?id=hlvKd7Vdxm
|
ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking
|
https://proceedings.mlr.press/v235/li24m.html
|
Wenshuo Li, Xinghao Chen, Han Shu, Yehui Tang, Yunhe Wang
|
https://proceedings.mlr.press/v235/li24m.html
|
ICML 2024
|
Large language models (LLMs) have recently attracted significant attention in the field of artificial intelligence. However, the training process of these models poses significant challenges in terms of computational and storage capacities; thus, compressing checkpoints has become an urgent problem. In this paper, we propose a novel Extreme Checkpoint Compression (ExCP) framework, which significantly reduces the required storage of training checkpoints while achieving nearly lossless performance. We first calculate the residuals of adjacent checkpoints to obtain the essential but sparse information for a higher compression ratio. To further excavate the redundant parameters in checkpoints, we then propose a weight-momentum joint shrinking method that utilizes another important source of information from model optimization, i.e., momentum. In particular, we exploit the information of both the model and the optimizer to discard as many parameters as possible while preserving critical information to ensure optimal performance. Furthermore, we utilize non-uniform quantization to further compress the storage of checkpoints. We extensively evaluate our proposed ExCP framework on several models ranging from 410M to 7B parameters and demonstrate significant storage reduction while maintaining strong performance. For instance, we achieve approximately $70\times$ compression for the Pythia-410M model, with the final performance being as accurate as the original model on various downstream tasks. Code will be available at https://github.com/Gaffey/ExCP.
|
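As a rough, self-contained illustration of the pipeline sketched in the entry above (checkpoint residuals, weight-momentum joint shrinking, non-uniform quantization), the snippet below prunes residual entries whose update and momentum are both small and quantizes the rest with a quantile codebook. The keep ratio, the max-based importance score, and the quantile quantizer are assumptions made for illustration, not ExCP's actual rules.

```python
import numpy as np

def shrink_and_quantize(prev_w, curr_w, momentum, keep_ratio=0.1, n_levels=16):
    residual = curr_w - prev_w
    # Joint criterion: keep an entry if either its update or its momentum is
    # large in magnitude (simplified stand-in for weight-momentum joint shrinking).
    score = np.maximum(np.abs(residual), np.abs(momentum))
    threshold = np.quantile(score, 1.0 - keep_ratio)
    mask = score >= threshold
    kept = residual[mask]
    # Non-uniform quantization: codebook taken from quantiles of the kept values.
    codebook = np.quantile(kept, np.linspace(0, 1, n_levels))
    codes = np.abs(kept[:, None] - codebook[None, :]).argmin(axis=1).astype(np.uint8)
    return mask, codes, codebook

def reconstruct(prev_w, mask, codes, codebook):
    residual = np.zeros_like(prev_w)
    residual[mask] = codebook[codes]
    return prev_w + residual

rng = np.random.default_rng(0)
prev_w = rng.normal(size=10_000)
curr_w = prev_w + 0.01 * rng.normal(size=10_000)
momentum = 0.01 * rng.normal(size=10_000)
mask, codes, codebook = shrink_and_quantize(prev_w, curr_w, momentum)
approx = reconstruct(prev_w, mask, codes, codebook)
print("reconstruction error:", float(np.abs(approx - curr_w).mean()))
```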
https://proceedings.mlr.press/v235/li24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24n/li24n.pdf
|
https://openreview.net/forum?id=BajM6YzKvm
|
Two-sided Competing Matching Recommendation Markets With Quota and Complementary Preferences Constraints
|
https://proceedings.mlr.press/v235/li24n.html
|
Yuantong Li, Guang Cheng, Xiaowu Dai
|
https://proceedings.mlr.press/v235/li24n.html
|
ICML 2024
|
In this paper, we propose a new recommendation algorithm for addressing the problem of two-sided online matching markets with complementary preferences and quota constraints, where agents’ preferences are unknown a priori and must be learned from data. The presence of mixed quota and complementary preferences constraints can lead to instability in the matching process, making this problem challenging to solve. To overcome this challenge, we formulate the problem as a bandit learning framework and propose the Multi-agent Multi-type Thompson Sampling (MMTS) algorithm. The algorithm combines the strengths of Thompson Sampling for exploration with a new double matching technique to provide a stable matching outcome. Our theoretical analysis demonstrates the effectiveness of MMTS as it can achieve stability and has a total $\widetilde{\mathcal{O}}(Q{\sqrt{K_{\max}T}})$-Bayesian regret with high probability, which exhibits linearity with respect to the total firm’s quota $Q$, the square root of the maximum size of available type workers $\sqrt{K_{\max}}$ and time horizon $T$. In addition, simulation studies also demonstrate MMTS’ effectiveness in various settings. We provide code used in our experiments https://github.com/Likelyt/Double-Matching.
|
https://proceedings.mlr.press/v235/li24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24o/li24o.pdf
|
https://openreview.net/forum?id=5tPB5VXo87
|
Full-Atom Peptide Design based on Multi-modal Flow Matching
|
https://proceedings.mlr.press/v235/li24o.html
|
Jiahan Li, Chaoran Cheng, Zuofan Wu, Ruihan Guo, Shitong Luo, Zhizhou Ren, Jian Peng, Jianzhu Ma
|
https://proceedings.mlr.press/v235/li24o.html
|
ICML 2024
|
Peptides, short chains of amino acid residues, play a vital role in numerous biological processes by interacting with other target molecules, offering substantial potential in drug discovery. In this work, we present PepFlow, the first multi-modal deep generative model grounded in the flow-matching framework for the design of full-atom peptides that target specific protein receptors. Drawing inspiration from the crucial roles of residue backbone orientations and side-chain dynamics in protein-peptide interactions, we characterize the peptide structure using rigid backbone frames within the $\mathrm{SE}(3)$ manifold and side-chain angles on high-dimensional tori. Furthermore, we represent discrete residue types in the peptide sequence as categorical distributions on the probability simplex. By learning the joint distributions of each modality using derived flows and vector fields on corresponding manifolds, our method excels in the fine-grained design of full-atom peptides. Harnessing the multi-modal paradigm, our approach adeptly tackles various tasks such as fix-backbone sequence design and side-chain packing through partial sampling. Through meticulously crafted experiments, we demonstrate that PepFlow exhibits superior performance in comprehensive benchmarks, highlighting its significant potential in computational peptide design and analysis.
|
https://proceedings.mlr.press/v235/li24p.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24p/li24p.pdf
|
https://openreview.net/forum?id=xbQqhojHTg
|
Positive and Unlabeled Learning with Controlled Probability Boundary Fence
|
https://proceedings.mlr.press/v235/li24p.html
|
Changchun Li, Yuanchao Dai, Lei Feng, Ximing Li, Bing Wang, Jihong Ouyang
|
https://proceedings.mlr.press/v235/li24p.html
|
ICML 2024
|
Positive and Unlabeled (PU) learning refers to a special case of binary classification; technically, it aims to induce a binary classifier from a few labeled positive training instances and a large number of unlabeled instances. In this paper, we derive a theorem indicating that the probability boundary of the asymmetric disambiguation-free expected risk of PU learning is controlled by its asymmetric penalty, and we further empirically evaluate this theorem. Inspired by the theorem and its empirical evaluations, we propose an easy-to-implement two-stage PU learning method, namely Positive and Unlabeled Learning with Controlled Probability Boundary Fence (PULCPBF). In the first stage, we train a set of weak binary classifiers concerning different probability boundaries by minimizing the asymmetric disambiguation-free empirical risks with specific asymmetric penalty values. We can interpret these induced weak binary classifiers as a probability boundary fence. For each unlabeled instance, we can use the predictions to locate its class posterior probability and generate a stochastic label. In the second stage, we train a strong binary classifier over the labeled positive training instances and all unlabeled instances with stochastic labels in a self-training manner. Extensive empirical results demonstrate that PULCPBF achieves competitive performance compared with existing PU learning baselines.
|
https://proceedings.mlr.press/v235/li24q.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24q/li24q.pdf
|
https://openreview.net/forum?id=FQQ4476dT2
|
FightLadder: A Benchmark for Competitive Multi-Agent Reinforcement Learning
|
https://proceedings.mlr.press/v235/li24q.html
|
Wenzhe Li, Zihan Ding, Seth Karten, Chi Jin
|
https://proceedings.mlr.press/v235/li24q.html
|
ICML 2024
|
Recent advances in reinforcement learning (RL) heavily rely on a variety of well-designed benchmarks, which provide environmental platforms and consistent criteria to evaluate existing and novel algorithms. Specifically, in multi-agent RL (MARL), a plethora of benchmarks based on cooperative games have spurred the development of algorithms that improve the scalability of cooperative multi-agent systems. However, for the competitive setting, a lightweight and open-sourced benchmark with challenging gaming dynamics and visual inputs has not yet been established. In this work, we present FightLadder, a real-time fighting game platform, to empower competitive MARL research. Along with the platform, we provide implementations of state-of-the-art MARL algorithms for competitive games, as well as a set of evaluation metrics to characterize the performance and exploitability of agents. We demonstrate the feasibility of this platform by training a general agent that consistently defeats 12 built-in characters in single-player mode, and expose the difficulty of training a non-exploitable agent without human knowledge and demonstrations in two-player mode. FightLadder provides meticulously designed environments to address critical challenges in competitive MARL research, aiming to catalyze a new era of discovery and advancement in the field. Videos and code at https://sites.google.com/view/fightladder/home.
|
https://proceedings.mlr.press/v235/li24r.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24r/li24r.pdf
|
https://openreview.net/forum?id=L1W9ZWPq9E
|
Debiased Distribution Compression
|
https://proceedings.mlr.press/v235/li24r.html
|
Lingxiao Li, Raaz Dwivedi, Lester Mackey
|
https://proceedings.mlr.press/v235/li24r.html
|
ICML 2024
|
Modern compression methods can summarize a target distribution $\mathbb{P}$ more succinctly than i.i.d. sampling but require access to a low-bias input sequence like a Markov chain converging quickly to $\mathbb{P}$. We introduce a new suite of compression methods suitable for compression with biased input sequences. Given $n$ points targeting the wrong distribution and quadratic time, Stein kernel thinning (SKT) returns $\sqrt{n}$ equal-weighted points with $\widetilde{O}(n^{-1/2})$ maximum mean discrepancy (MMD) to $\mathbb{P}$. For larger-scale compression tasks, low-rank SKT achieves the same feat in sub-quadratic time using an adaptive low-rank debiasing procedure that may be of independent interest. For downstream tasks that support simplex or constant-preserving weights, Stein recombination and Stein Cholesky achieve even greater parsimony, matching the guarantees of SKT with as few as $\text{poly-log}(n)$ weighted points. Underlying these advances are new guarantees for the quality of simplex-weighted coresets, the spectral decay of kernel matrices, and the covering numbers of Stein kernel Hilbert spaces. In our experiments, our techniques provide succinct and accurate posterior summaries while overcoming biases due to burn-in, approximate Markov chain Monte Carlo, and tempering.
|
https://proceedings.mlr.press/v235/li24s.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24s/li24s.pdf
|
https://openreview.net/forum?id=Nm6jYZsBum
|
Improving Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning
|
https://proceedings.mlr.press/v235/li24s.html
|
Wei Li, Hehe Fan, Yongkang Wong, Yi Yang, Mohan Kankanhalli
|
https://proceedings.mlr.press/v235/li24s.html
|
ICML 2024
|
Previous efforts using frozen Large Language Models (LLMs) for visual understanding, via image captioning or image-text retrieval tasks, face challenges when dealing with complex multimodal scenarios. In order to enhance the capabilities of Multimodal Large Language Models (MLLM) in comprehending the context of vision and language, we introduce Multimodal Composition Learning (MCL) for the purpose of mapping or aligning the vision and language input. In particular, we introduce two tasks: Multimodal-Context Captioning (MC-Cap) and Multimodal-Context Retrieval (MC-Ret) to guide a frozen LLM in comprehending the vision and language context. These specialized tasks are crafted to improve the LLM’s capacity for efficient processing and utilization of multimodal inputs, thereby enhancing its proficiency in generating more accurate text or visual representations. Extensive experiments on both retrieval tasks (i.e., zero-shot composed image retrieval, visual storytelling image retrieval and visual dialog image retrieval) and text generation tasks (i.e., visual question answering) demonstrate the effectiveness of the proposed method. The code is available at: https://github.com/dhg-wei/MCL.
|
https://proceedings.mlr.press/v235/li24t.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24t/li24t.pdf
|
https://openreview.net/forum?id=pA2Q5Wfspp
|
RL-CFR: Improving Action Abstraction for Imperfect Information Extensive-Form Games with Reinforcement Learning
|
https://proceedings.mlr.press/v235/li24t.html
|
Boning Li, Zhixuan Fang, Longbo Huang
|
https://proceedings.mlr.press/v235/li24t.html
|
ICML 2024
|
Effective action abstraction is crucial in tackling challenges associated with large action spaces in Imperfect Information Extensive-Form Games (IIEFGs). However, due to the vast state space and computational complexity in IIEFGs, existing methods often rely on fixed abstractions, resulting in sub-optimal performance. In response, we introduce RL-CFR, a novel reinforcement learning (RL) approach for dynamic action abstraction. RL-CFR builds upon our innovative Markov Decision Process (MDP) formulation, with states corresponding to public information and actions represented as feature vectors indicating specific action abstractions. The reward is defined as the expected payoff difference between the selected and default action abstractions. RL-CFR constructs a game tree with RL-guided action abstractions and utilizes counterfactual regret minimization (CFR) for strategy derivation. Impressively, it can be trained from scratch, achieving higher expected payoff without increased CFR solving time. In experiments on Heads-up No-limit Texas Hold’em, RL-CFR outperforms ReBeL’s replication and Slumbot, demonstrating significant win-rate margins of $64\pm 11$ and $84\pm 17$ mbb/hand, respectively.
|
https://proceedings.mlr.press/v235/li24u.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24u/li24u.pdf
|
https://openreview.net/forum?id=FvLd8Gr7xq
|
Vague Prototype-Oriented Diffusion Model for Multi-Class Anomaly Detection
|
https://proceedings.mlr.press/v235/li24u.html
|
Yuxin Li, Yaoxuan Feng, Bo Chen, Wenchao Chen, Yubiao Wang, Xinyue Hu, Baolin Sun, Chunhui Qu, Mingyuan Zhou
|
https://proceedings.mlr.press/v235/li24u.html
|
ICML 2024
|
Multi-class unsupervised anomaly detection aims to create a unified model for identifying anomalies in objects from multiple classes when only normal data is available. In such a challenging setting, widely used reconstruction-based networks persistently grapple with the "identical shortcut" problem, wherein the infiltration of abnormal information from the condition biases the output towards an anomalous distribution. In response to this critical challenge, we introduce a Vague Prototype-Oriented Diffusion Model (VPDM) that extracts only fundamental information from the condition to prevent the occurrence of the "identical shortcut" problem from the input layer. This model leverages prototypes that contain only vague information about the target as the initial condition. Subsequently, a novel conditional diffusion model is introduced to incrementally enhance details based on vague conditions. Finally, a Vague Prototype-Oriented Optimal Transport (VPOT) method is proposed to provide more accurate information about conditions. All these components are seamlessly integrated into a unified optimization objective. The effectiveness of our approach is demonstrated across diverse datasets, including the MVTec, VisA, and MPDD benchmarks, achieving state-of-the-art results.
|
https://proceedings.mlr.press/v235/li24v.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24v/li24v.pdf
|
https://openreview.net/forum?id=B5906M4Wnd
|
Automated Statistical Model Discovery with Language Models
|
https://proceedings.mlr.press/v235/li24v.html
|
Michael Y. Li, Emily Fox, Noah Goodman
|
https://proceedings.mlr.press/v235/li24v.html
|
ICML 2024
|
Statistical model discovery is a challenging search over a vast space of models subject to domain-specific constraints. Efficiently searching over this space requires expertise in modeling and the problem domain. Motivated by the domain knowledge and programming capabilities of large language models (LMs), we introduce a method for language model driven automated statistical model discovery. We cast our automated procedure within the principled framework of Box’s Loop: the LM iterates between proposing statistical models represented as probabilistic programs, acting as a modeler, and critiquing those models, acting as a domain expert. By leveraging LMs, we do not have to define a domain-specific language of models or design a handcrafted search procedure, which are key restrictions of previous systems. We evaluate our method in three settings in probabilistic modeling: searching within a restricted space of models, searching over an open-ended space, and improving expert models under natural language constraints (e.g., this model should be interpretable to an ecologist). Our method identifies models on par with human expert designed models and extends classic models in interpretable ways. Our results highlight the promise of LM-driven model discovery.
|
https://proceedings.mlr.press/v235/li24w.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24w/li24w.pdf
|
https://openreview.net/forum?id=LZkhKZvhHs
|
Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity
|
https://proceedings.mlr.press/v235/li24w.html
|
Xudong Li, Timin Gao, Runze Hu, Yan Zhang, Shengchuan Zhang, Xiawu Zheng, Jingyuan Zheng, Yunhang Shen, Ke Li, Yutao Liu, Pingyang Dai, Rongrong Ji
|
https://proceedings.mlr.press/v235/li24w.html
|
ICML 2024
|
The current state-of-the-art No-Reference Image Quality Assessment (NR-IQA) methods typically rely on feature extraction from upstream semantic backbone networks, assuming that all extracted features are relevant. However, we make a key observation that not all features are beneficial, and some may even be harmful, necessitating careful selection. Empirically, we find that many image pairs with small feature spatial distances can have vastly different quality scores, indicating that the extracted features may contain quality-irrelevant noise. To address this issue, we propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) that employs an adversarial perspective to remove harmful semantic noise features from the upstream task. Specifically, QFM-IQM enhances the ability to distinguish semantic noise by matching image pairs with similar quality scores but varying semantic features as adversarial semantic noise, and adaptively adjusts the upstream task’s features by reducing sensitivity to adversarial noise perturbations. Furthermore, we utilize a distillation framework to expand the dataset and improve the model’s generalization ability. Extensive experiments conducted on eight standard IQA datasets demonstrate the effectiveness of our proposed QFM-IQM.
|
https://proceedings.mlr.press/v235/li24x.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24x/li24x.pdf
|
https://openreview.net/forum?id=xIRKB5nRJl
|
Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning
|
https://proceedings.mlr.press/v235/li24x.html
|
Jiachen Li, Qiaozi Gao, Michael Johnston, Xiaofeng Gao, Xuehai He, Hangjie Shi, Suhaila Shakiah, Reza Ghanadan, William Yang Wang
|
https://proceedings.mlr.press/v235/li24x.html
|
ICML 2024
|
Prompt-based learning has been demonstrated as a compelling paradigm contributing to the tremendous success of large language models (LLMs). Inspired by their success in language tasks, existing research has leveraged LLMs in embodied instruction following and task planning. In this work, we tackle the problem of training a robot to understand multimodal prompts, interleaving vision signals with text descriptions. This type of task poses a major challenge to robots’ capability to understand the interconnection and complementarity between vision and language signals. We introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts from multi-task expert trajectories. Our method consists of a two-stage training pipeline that performs inverse dynamics pretraining and multi-task finetuning. To facilitate multimodal understanding, we design our multimodal prompt encoder by augmenting a pretrained LM with a residual connection to the visual input and model the dependencies among action dimensions. Empirically, we evaluate the efficacy of our method on the VIMA-BENCH and establish a new state-of-the-art (10% improvement in success rate). Moreover, we demonstrate that our model exhibits remarkable in-context learning ability.
|
https://proceedings.mlr.press/v235/li24y.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24y/li24y.pdf
|
https://openreview.net/forum?id=Xgrey8uQhr
|
Graph Structure Extrapolation for Out-of-Distribution Generalization
|
https://proceedings.mlr.press/v235/li24y.html
|
Xiner Li, Shurui Gui, Youzhi Luo, Shuiwang Ji
|
https://proceedings.mlr.press/v235/li24y.html
|
ICML 2024
|
Out-of-distribution (OOD) generalization deals with the prevalent learning scenario where test distribution shifts from training distribution. With rising application demands and inherent complexity, graph OOD problems call for specialized solutions. While data-centric methods exhibit performance enhancements on many generic machine learning tasks, there is a notable absence of data augmentation methods tailored for graph OOD generalization. In this work, we propose to achieve graph OOD generalization with the novel design of non-Euclidean-space linear extrapolation. The proposed augmentation strategy extrapolates structure spaces to generate OOD graph data. Our design tailors OOD samples for specific shifts without corrupting underlying causal mechanisms. Theoretical analysis and empirical results evidence the effectiveness of our method in solving target shifts, showing substantial and constant improvements across various graph OOD tasks.
|
https://proceedings.mlr.press/v235/li24z.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24z/li24z.pdf
|
https://openreview.net/forum?id=XobPpcN4yZ
|
Value-Evolutionary-Based Reinforcement Learning
|
https://proceedings.mlr.press/v235/li24z.html
|
Pengyi Li, Jianye Hao, Hongyao Tang, Yan Zheng, Fazl Barez
|
https://proceedings.mlr.press/v235/li24z.html
|
ICML 2024
|
Combining Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) for policy search has been proven to improve RL performance. However, previous works largely overlook value-based RL in favor of merging EAs with policy-based RL. This paper introduces Value-Evolutionary-Based Reinforcement Learning (VEB-RL) that focuses on the integration of EAs with value-based RL. The framework maintains a population of value functions instead of policies and leverages negative Temporal Difference error as the fitness metric for evolution. The metric is more sample-efficient for population evaluation than cumulative rewards and is closely associated with the accuracy of the value function approximation. Additionally, VEB-RL enables elites of the population to interact with the environment to offer high-quality samples for RL optimization, whereas the RL value function participates in the population’s evolution in each generation. Experiments on MinAtar and Atari demonstrate the superiority of VEB-RL in significantly improving DQN, Rainbow, and SPR. Our code is available on https://github.com/yeshenpy/VEB-RL.
|
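To make the fitness measure described in the entry above concrete, here is a toy sketch that scores each member of a population of tabular Q-functions by its negative TD error on a batch of transitions. The tabular setting, the squared-error form, and all names are simplifying assumptions, not the exact VEB-RL recipe.

```python
import numpy as np

def td_fitness(Q, transitions, gamma=0.99):
    """Q: (n_states, n_actions) table; transitions: list of (s, a, r, s_next, done)."""
    errors = []
    for s, a, r, s_next, done in transitions:
        target = r if done else r + gamma * Q[s_next].max()
        errors.append(target - Q[s, a])
    return -float(np.mean(np.square(errors)))   # higher fitness = lower TD error

rng = np.random.default_rng(0)
population = [rng.normal(size=(5, 2)) for _ in range(4)]        # 4 candidate Q-tables
batch = [(rng.integers(5), rng.integers(2), rng.normal(), rng.integers(5), False)
         for _ in range(32)]
best = max(population, key=lambda Q: td_fitness(Q, batch))       # elite interacts with the env
print("best fitness:", td_fitness(best, batch))
```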
https://proceedings.mlr.press/v235/li24aa.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24aa/li24aa.pdf
|
https://openreview.net/forum?id=JSYN891WnB
|
Image Clustering with External Guidance
|
https://proceedings.mlr.press/v235/li24aa.html
|
Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Jianping Fan, Xi Peng
|
https://proceedings.mlr.press/v235/li24aa.html
|
ICML 2024
|
The core of clustering lies in incorporating prior knowledge to construct supervision signals. From classic k-means based on data compactness to recent contrastive clustering guided by self-supervision, the evolution of clustering methods intrinsically corresponds to the progression of supervision signals. At present, substantial efforts have been devoted to mining internal supervision signals from data. Nevertheless, the abundant external knowledge such as semantic descriptions, which naturally conduces to clustering, is regrettably overlooked. In this work, we propose leveraging external knowledge as a new supervision signal to guide clustering. To implement and validate our idea, we design an externally guided clustering method (Text-Aided Clustering, TAC), which leverages the textual semantics of WordNet to facilitate image clustering. Specifically, TAC first selects and retrieves WordNet nouns that best distinguish images to enhance the feature discriminability. Then, TAC collaborates text and image modalities by mutually distilling cross-modal neighborhood information. Experiments demonstrate that TAC achieves state-of-the-art performance on five widely used and three more challenging image clustering benchmarks, including the full ImageNet-1K dataset. The code can be accessed at https://github.com/XLearning-SCU/2024-ICML-TAC.
|
https://proceedings.mlr.press/v235/li24ab.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ab/li24ab.pdf
|
https://openreview.net/forum?id=gjoUXwuZdy
|
VisionGraph: Leveraging Large Multimodal Models for Graph Theory Problems in Visual Context
|
https://proceedings.mlr.press/v235/li24ab.html
|
Yunxin Li, Baotian Hu, Haoyuan Shi, Wei Wang, Longyue Wang, Min Zhang
|
https://proceedings.mlr.press/v235/li24ab.html
|
ICML 2024
|
Large Multimodal Models (LMMs) have achieved impressive success in visual reasoning, particularly in visual mathematics. However, problem-solving capabilities in graph theory remain less explored for LMMs, despite being a crucial aspect of mathematical reasoning that requires an accurate understanding of graphical structures and multi-step reasoning on visual graphs. To step forward in this direction, we are the first to design a benchmark named VisionGraph, used to explore the capabilities of advanced LMMs in solving multimodal graph theory problems. It encompasses eight complex graph problem tasks, from connectivity to shortest path problems. Subsequently, we present a Description-Program-Reasoning (DPR) chain to enhance the logical accuracy of reasoning processes through graphical structure description generation and algorithm-aware multi-step reasoning. Our extensive study shows that 1) GPT-4V outperforms Gemini Pro in multi-step graph reasoning; 2) All LMMs exhibit inferior perception accuracy for graphical structures, whether in zero/few-shot settings or with supervised fine-tuning (SFT), which further affects problem-solving performance; 3) DPR significantly improves the multi-step graph reasoning capabilities of LMMs and the GPT-4V (DPR) agent achieves SOTA performance.
|
https://proceedings.mlr.press/v235/li24ac.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ac/li24ac.pdf
|
https://openreview.net/forum?id=MRYS3Zb4iV
|
Integrating Global Context Contrast and Local Sensitivity for Blind Image Quality Assessment
|
https://proceedings.mlr.press/v235/li24ac.html
|
Xudong Li, Runze Hu, Jingyuan Zheng, Yan Zhang, Shengchuan Zhang, Xiawu Zheng, Ke Li, Yunhang Shen, Yutao Liu, Pingyang Dai, Rongrong Ji
|
https://proceedings.mlr.press/v235/li24ac.html
|
ICML 2024
|
Blind Image Quality Assessment (BIQA) mirrors the subjective quality judgments made by human observers. Generally, humans favor comparing relative qualities over predicting absolute qualities directly. However, current BIQA models focus on mining the "local" context, i.e., the relationship between the information within individual images and the absolute quality of the image, ignoring the "global" context of the relative quality contrast among different images in the training data. In this paper, we present the Perceptual Context and Sensitivity BIQA (CSIQA), a novel contrastive learning paradigm that seamlessly integrates "global" and "local" perspectives into BIQA. Specifically, CSIQA comprises two primary components: 1) a Quality Context Contrastive Learning module, which is equipped with different contrastive learning strategies to effectively capture potential quality correlations in the global context of the dataset; and 2) a Quality-aware Mask Attention module, which employs a random mask to ensure consistency with local visual sensitivity, thereby improving the model’s perception of local distortions. Extensive experiments on eight standard BIQA datasets demonstrate superior performance compared with state-of-the-art BIQA methods.
|
https://proceedings.mlr.press/v235/li24ad.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ad/li24ad.pdf
|
https://openreview.net/forum?id=KB6slOUQP9
|
Accelerating Convergence of Score-Based Diffusion Models, Provably
|
https://proceedings.mlr.press/v235/li24ad.html
|
Gen Li, Yu Huang, Timofey Efimov, Yuting Wei, Yuejie Chi, Yuxin Chen
|
https://proceedings.mlr.press/v235/li24ad.html
|
ICML 2024
|
Score-based diffusion models, while achieving remarkable empirical performance, often suffer from low sampling speed, due to extensive function evaluations needed during the sampling phase. Despite a flurry of recent activities towards speeding up diffusion generative modeling in practice, theoretical underpinnings for acceleration techniques remain severely limited. In this paper, we design novel training-free algorithms to accelerate popular deterministic (i.e., DDIM) and stochastic (i.e., DDPM) samplers. Our accelerated deterministic sampler converges at a rate $O(\frac{1}{{T}^2})$ with $T$ the number of steps, improving upon the $O(\frac{1}{T})$ rate for the DDIM sampler; and our accelerated stochastic sampler converges at a rate $O(\frac{1}{T})$, outperforming the rate $O(\frac{1}{\sqrt{T}})$ for the DDPM sampler. The design of our algorithms leverages insights from higher-order approximation, and shares similar intuitions as popular high-order ODE solvers like the DPM-Solver-2. Our theory accommodates $\ell_2$-accurate score estimates, and does not require log-concavity or smoothness on the target distribution.
|
https://proceedings.mlr.press/v235/li24ae.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ae/li24ae.pdf
|
https://openreview.net/forum?id=gxOQEMRbRa
|
Q-Probe: A Lightweight Approach to Reward Maximization for Language Models
|
https://proceedings.mlr.press/v235/li24ae.html
|
Kenneth Li, Samy Jelassi, Hugh Zhang, Sham M. Kakade, Martin Wattenberg, David Brandfonbrener
|
https://proceedings.mlr.press/v235/li24ae.html
|
ICML 2024
|
We present an approach called Q-probing to adapt a pre-trained language model to maximize a task-specific reward function. At a high level, Q-probing sits between heavier approaches such as finetuning and lighter approaches such as few shot prompting, but can also be combined with either. The idea is to learn a simple linear function on a model’s embedding space that can be used to reweight candidate completions. We theoretically show that this sampling procedure is equivalent to a KL-constrained maximization of the Q-probe as the number of samples increases. To train the Q-probes we consider either reward modeling or a class of novel direct policy learning objectives based on importance-weighted policy gradients. With this technique, we see gains in domains with ground-truth rewards (code generation) as well as implicit rewards defined by preference data, even outperforming finetuning in data-limited regimes. Moreover, a Q-probe can be trained on top of an API since it only assumes access to sampling and embeddings. Code: https://github.com/likenneth/q_probe.
|
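A minimal sketch of the reweighting step described in the entry above: draw k candidate completions, score each with a linear probe on its embedding, and sample one with probability given by a softmax of the probe scores. The function names, the temperature beta, and the random placeholder embeddings are assumptions for illustration; this is not the released q_probe API.

```python
import numpy as np

def q_probe_select(candidates, embeddings, w, b=0.0, beta=0.1, rng=None):
    """candidates: list[str]; embeddings: (k, d) array; w: (d,) probe weights."""
    rng = rng or np.random.default_rng()
    scores = embeddings @ w + b                # linear probe value per candidate
    logits = scores / beta
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy usage with random vectors standing in for model embeddings.
rng = np.random.default_rng(0)
cands = ["completion A", "completion B", "completion C"]
embs = rng.normal(size=(3, 8))
w = rng.normal(size=8)
print(q_probe_select(cands, embs, w, rng=rng))
```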
https://proceedings.mlr.press/v235/li24af.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24af/li24af.pdf
|
https://openreview.net/forum?id=BTkaKA74mS
|
Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines
|
https://proceedings.mlr.press/v235/li24af.html
|
Yuchen Li, Alexandre Kirchmeyer, Aashay Mehta, Yilong Qin, Boris Dadachev, Kishore Papineni, Sanjiv Kumar, Andrej Risteski
|
https://proceedings.mlr.press/v235/li24af.html
|
ICML 2024
|
Autoregressive language models are the currently dominant paradigm for text generation; however, they have some fundamental limitations that cannot be remedied by scale, for example inherently sequential and unidirectional generation. While alternate classes of models have been explored, we have limited mathematical understanding of their fundamental power and limitations. In this paper we focus on Generative Masked Language Models (GMLMs), a non-autoregressive paradigm in which we train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model. These models empirically strike a promising speed-quality trade-off as each step can typically be parallelized by decoding the entire sequence in parallel. We develop a mathematical framework for analyzing and improving such models which sheds light on questions of sample complexity and inference speed and quality. Empirically, we adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality compared with autoregressive models. We run careful ablation experiments to give recommendations on key design choices, and make fine-grained observations on the common error modes in connection with our theory. Our mathematical analyses and empirical observations characterize both the potential and the limitations of this approach, and can be applied to future works on improving the understanding and performance of GMLMs. We release the code for our experiments.
|
https://proceedings.mlr.press/v235/li24ag.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ag/li24ag.pdf
|
https://openreview.net/forum?id=JymXv7mkrQ
|
Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models
|
https://proceedings.mlr.press/v235/li24ag.html
|
Jinhao Li, Haopeng Li, Sarah Monazam Erfani, Lei Feng, James Bailey, Feng Liu
|
https://proceedings.mlr.press/v235/li24ag.html
|
ICML 2024
|
It has recently been discovered that using a pre-trained vision-language model (VLM), e.g., CLIP, to align a whole query image with several finer text descriptions generated by a large language model can significantly enhance zero-shot performance. However, in this paper, we empirically find that the finer descriptions tend to align more effectively with local areas of the query image rather than the whole image, and then we theoretically validate this finding. Thus, we present a method called weighted visual-text cross alignment (WCA). This method begins with a localized visual prompting technique, designed to identify local visual areas within the query image. The local visual areas are then cross-aligned with the finer descriptions by creating a similarity matrix using the pre-trained VLM. To determine how well a query image aligns with each category, we develop a score function based on the weighted similarities in this matrix. Extensive experiments demonstrate that our method significantly improves zero-shot performance across various datasets, achieving results that are even comparable to few-shot learning methods.
|
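A toy sketch of the scoring step suggested by the entry above: cross-align local image crops with finer text descriptions via a cosine-similarity matrix and aggregate it into one score per category. The particular weighting scheme below (softmax over each crop's and description's best match) is an assumption, not necessarily the weighting used by WCA.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def category_score(crop_embs, desc_embs):
    """crop_embs: (M, d) local visual areas; desc_embs: (N, d) finer descriptions."""
    crop_embs = crop_embs / np.linalg.norm(crop_embs, axis=1, keepdims=True)
    desc_embs = desc_embs / np.linalg.norm(desc_embs, axis=1, keepdims=True)
    sim = crop_embs @ desc_embs.T                  # (M, N) similarity matrix
    w_crop = softmax(sim.max(axis=1))              # weight crops by their best match
    w_desc = softmax(sim.max(axis=0))              # weight descriptions likewise
    return float(w_crop @ sim @ w_desc)            # weighted aggregate score

rng = np.random.default_rng(0)
crops = rng.normal(size=(5, 16))
# One set of descriptions per candidate category; predict the best-scoring one.
categories = {c: rng.normal(size=(4, 16)) for c in ["cat", "dog"]}
print(max(categories, key=lambda c: category_score(crops, categories[c])))
```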
https://proceedings.mlr.press/v235/li24ah.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ah/li24ah.pdf
|
https://openreview.net/forum?id=Sv4u9PtvT5
|
Generalizing Knowledge Graph Embedding with Universal Orthogonal Parameterization
|
https://proceedings.mlr.press/v235/li24ah.html
|
Rui Li, Chaozhuo Li, Yanming Shen, Zeyu Zhang, Xu Chen
|
https://proceedings.mlr.press/v235/li24ah.html
|
ICML 2024
|
Recent advances in knowledge graph embedding (KGE) rely on Euclidean/hyperbolic orthogonal relation transformations to model intrinsic logical patterns and topological structures. However, existing approaches are confined to rigid relational orthogonalization with restricted dimension and homogeneous geometry, leading to deficient modeling capability. In this work, we move beyond these approaches in terms of both dimension and geometry by introducing a powerful framework named GoldE, which features a universal orthogonal parameterization based on a generalized form of Householder reflection. Such parameterization can naturally achieve dimensional extension and geometric unification with theoretical guarantees, enabling our framework to simultaneously capture crucial logical patterns and inherent topological heterogeneity of knowledge graphs. Empirically, GoldE achieves state-of-the-art performance on three standard benchmarks. Codes are available at https://github.com/xxrep/GoldE.
|
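For readers unfamiliar with the building block named in the entry above, the snippet below shows the classic Householder reflection, H(v) = I - 2vv^T/||v||^2, and how a product of such reflections yields an orthogonal matrix that can parameterize a relation transformation from unconstrained vectors. It illustrates only the standard construction, not GoldE's generalized form.

```python
import numpy as np

def householder(v):
    """Orthogonal reflection H(v) = I - 2 v v^T / ||v||^2."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def orthogonal_from_vectors(vs):
    """Compose k Householder reflections into one orthogonal matrix."""
    Q = np.eye(vs.shape[1])
    for v in vs:
        Q = householder(v) @ Q
    return Q

rng = np.random.default_rng(0)
Q = orthogonal_from_vectors(rng.normal(size=(3, 4)))   # k=3 reflections in R^4
print(np.allclose(Q.T @ Q, np.eye(4)))                 # True: Q is orthogonal
```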
https://proceedings.mlr.press/v235/li24ai.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ai/li24ai.pdf
|
https://openreview.net/forum?id=y8NevOhrnW
|
Neural Collapse in Multi-label Learning with Pick-all-label Loss
|
https://proceedings.mlr.press/v235/li24ai.html
|
Pengyu Li, Xiao Li, Yutong Wang, Qing Qu
|
https://proceedings.mlr.press/v235/li24ai.html
|
ICML 2024
|
We study deep neural networks for the multi-label classification (MLab) task through the lens of neural collapse (NC). Previous works have been restricted to the multi-class classification setting and discovered a prevalent NC phenomenon comprising the following properties for the last-layer features: (i) the variability of features within every class collapses to zero, (ii) the set of feature means forms an equi-angular tight frame (ETF), and (iii) the last-layer classifiers collapse to the feature means up to some scaling. We generalize the study to multi-label learning, and prove for the first time that a generalized NC phenomenon holds with the "pick-all-label" formulation, which we term MLab NC. While the ETF geometry remains consistent for features with a single label, multi-label scenarios introduce a unique combinatorial aspect we term the "tag-wise average" property, where the means of features with multiple labels are the scaled averages of means for single-label instances. Theoretically, under proper assumptions on the features, we establish that the only global optimizer of the pick-all-label cross-entropy loss satisfies the multi-label NC. In practice, we demonstrate that our findings can lead to better test performance with more efficient training techniques for MLab learning.
|
https://proceedings.mlr.press/v235/li24aj.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24aj/li24aj.pdf
|
https://openreview.net/forum?id=2FKzbEE24s
|
A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing
|
https://proceedings.mlr.press/v235/li24aj.html
|
Chengrui Li, Weihan Li, Yule Wang, Anqi Wu
|
https://proceedings.mlr.press/v235/li24aj.html
|
ICML 2024
|
The partially observable generalized linear model (POGLM) is a powerful tool for understanding neural connectivities under the assumption of existing hidden neurons. With spike trains recorded only from visible neurons, existing works use variational inference (VI) to learn the POGLM, while also highlighting the difficulty of learning this latent variable model. There are two main issues: (1) the sampled Poisson hidden spike count hinders the use of the pathwise gradient estimator in VI; and (2) the existing design of the variational model is neither expressive nor time-efficient, which further affects the performance. For (1), we propose a new differentiable POGLM, which enables the pathwise gradient estimator and performs better than the score-function gradient estimator used in existing works. For (2), we propose the forward-backward message-passing sampling scheme for the variational model. Comprehensive experiments show that our differentiable POGLMs with our forward-backward message passing produce better performance on one synthetic and two real-world datasets. Furthermore, our new method yields more interpretable parameters, underscoring its significance in neuroscience.
|
https://proceedings.mlr.press/v235/li24ak.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ak/li24ak.pdf
|
https://openreview.net/forum?id=us6zMORsMe
|
Multi-Region Markovian Gaussian Process: An Efficient Method to Discover Directional Communications Across Multiple Brain Regions
|
https://proceedings.mlr.press/v235/li24ak.html
|
Weihan Li, Chengrui Li, Yule Wang, Anqi Wu
|
https://proceedings.mlr.press/v235/li24ak.html
|
ICML 2024
|
Studying the complex interactions between different brain regions is crucial in neuroscience. Various statistical methods have explored the latent communication across multiple brain regions. Two main categories are the Gaussian Process (GP) and Linear Dynamical System (LDS), each with unique strengths. The GP-based approach effectively discovers latent variables with frequency bands and communication directions. Conversely, the LDS-based approach is computationally efficient but lacks powerful expressiveness in latent representation. In this study, we merge both methodologies by creating an LDS mirroring a multi-output GP, termed Multi-Region Markovian Gaussian Process (MRM-GP). Our work establishes a connection between an LDS and a multi-output GP that explicitly models frequencies and phase delays within the latent space of neural recordings. Consequently, the model achieves a linear inference cost over time points and provides an interpretable low-dimensional representation, revealing communication directions across brain regions and separating oscillatory communications into different frequency bands.
|
https://proceedings.mlr.press/v235/li24al.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24al/li24al.pdf
|
https://openreview.net/forum?id=kUj9b2CezT
|
A Generative Approach for Treatment Effect Estimation under Collider Bias: From an Out-of-Distribution Perspective
|
https://proceedings.mlr.press/v235/li24al.html
|
Baohong Li, Haoxuan Li, Anpeng Wu, Minqin Zhu, Shiyuan Peng, Qingyu Cao, Kun Kuang
|
https://proceedings.mlr.press/v235/li24al.html
|
ICML 2024
|
Resulting from non-random sample selection caused by both the treatment and outcome, collider bias poses a unique challenge to treatment effect estimation using observational data whose distribution differs from that of the target population. In this paper, we rethink collider bias from an out-of-distribution (OOD) perspective, considering that the entire data space of the target population consists of two different environments: The observational data selected from the target population belongs to a seen environment labeled with $S=1$ and the missing unselected data belongs to another unseen environment labeled with $S=0$. Based on this OOD formulation, we utilize small-scale representative data from the entire data space with no environmental labels and propose a novel method, i.e., Coupled Counterfactual Generative Adversarial Model (C$^2$GAM), to simultaneously generate the missing $S=0$ samples in observational data and the missing $S$ labels in the small-scale representative data. With the help of C$^2$GAM, collider bias can be addressed by combining the generated $S=0$ samples and the observational data to estimate treatment effects. Extensive experiments on synthetic and real-world data demonstrate that plugging C$^2$GAM into existing treatment effect estimators achieves significant performance improvements.
|
https://proceedings.mlr.press/v235/li24am.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24am/li24am.pdf
|
https://openreview.net/forum?id=ycXo4tQIpN
|
Learning Shadow Variable Representation for Treatment Effect Estimation under Collider Bias
|
https://proceedings.mlr.press/v235/li24am.html
|
Baohong Li, Haoxuan Li, Ruoxuan Xiong, Anpeng Wu, Fei Wu, Kun Kuang
|
https://proceedings.mlr.press/v235/li24am.html
|
ICML 2024
|
One of the significant challenges in treatment effect estimation is collider bias, a specific form of sample selection bias induced by the common causes of both the treatment and outcome. Identifying treatment effects under collider bias requires well-defined shadow variables in observational data, which are assumed to be related to the outcome and independent of the sample selection mechanism, conditional on the other observed variables. However, finding a valid shadow variable is not an easy task in real-world scenarios and requires domain-specific knowledge from experts. Therefore, in this paper, we propose a novel method that can automatically learn shadow-variable representations from observational data without prior knowledge. To ensure the learned representations satisfy the assumptions of the shadow variable, we introduce a tester to perform hypothesis testing in the representation learning process. We iteratively generate representations and test whether they satisfy the shadow-variable assumptions until they pass the test. With the help of the learned shadow-variable representations, we propose a novel treatment effect estimator to address collider bias. Experiments show that the proposed methods outperform existing treatment effect estimation methods under collider bias and prove their potential application value.
|
https://proceedings.mlr.press/v235/li24an.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24an/li24an.pdf
|
https://openreview.net/forum?id=U841CrDUx9
|
Configurable Mirror Descent: Towards a Unification of Decision Making
|
https://proceedings.mlr.press/v235/li24an.html
|
Pengdeng Li, Shuxin Li, Chang Yang, Xinrun Wang, Shuyue Hu, Xiao Huang, Hau Chan, Bo An
|
https://proceedings.mlr.press/v235/li24an.html
|
ICML 2024
|
Decision-making problems, categorized as single-agent, e.g., Atari, cooperative multi-agent, e.g., Hanabi, competitive multi-agent, e.g., Hold’em poker, and mixed cooperative and competitive, e.g., football, are ubiquitous in the real world. Although various methods have been proposed to address the specific decision-making categories, these methods typically evolve independently and cannot generalize to other categories. Therefore, a fundamental question for decision-making is: Can we develop a single algorithm to tackle ALL categories of decision-making problems? There are several main challenges to address this question: i) different decision-making categories involve different numbers of agents and different relationships between agents, ii) different categories have different solution concepts and evaluation measures, and iii) there lacks a comprehensive benchmark covering all the categories. This work presents a preliminary attempt to address the question with three main contributions. i) We propose the generalized mirror descent (GMD), a generalization of MD variants, which considers multiple historical policies and works with a broader class of Bregman divergences. ii) We propose the configurable mirror descent (CMD) where a meta-controller is introduced to dynamically adjust the hyper-parameters in GMD conditional on the evaluation measures. iii) We construct the GameBench with 15 academic-friendly games across different decision-making categories. Extensive experiments demonstrate that CMD achieves empirically competitive or better outcomes compared to baselines while providing the capability of exploring diverse dimensions of decision making.
|
https://proceedings.mlr.press/v235/li24ao.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ao/li24ao.pdf
|
https://openreview.net/forum?id=O4nXWHPl6g
|
Enhancing Class-Imbalanced Learning with Pre-Trained Guidance through Class-Conditional Knowledge Distillation
|
https://proceedings.mlr.press/v235/li24ao.html
|
Lan Li, Xin-Chun Li, Han-Jia Ye, De-Chuan Zhan
|
https://proceedings.mlr.press/v235/li24ao.html
|
ICML 2024
|
In class-imbalanced learning, the scarcity of information about minority classes presents challenges in obtaining generalizable features for these classes. Leveraging large-scale pre-trained models with powerful generalization capabilities as teacher models can help fill this information gap. Traditional knowledge distillation transfers the label distribution $p(\boldsymbol{y}|\boldsymbol{x})$ predicted by the teacher model to the student model. However, this method falls short on imbalanced data as it fails to capture the class-conditional probability distribution $p(\boldsymbol{x}|\boldsymbol{y})$ from the teacher model, which is crucial for enhancing generalization. To overcome this, we propose Class-Conditional Knowledge Distillation (CCKD), a novel approach that enables learning of the teacher model’s class-conditional probability distribution during the distillation process. Additionally, we introduce Augmented CCKD (ACCKD), which involves distillation on a constructed class-balanced dataset (formed through data mixing) and feature imitation on the entire dataset to further facilitate the learning of features. Experimental results on various imbalanced datasets demonstrate an average accuracy improvement of 7.4% using our method.
|
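For context on the baseline that the entry above contrasts with, here is the standard temperature-scaled knowledge-distillation loss that transfers the teacher's label distribution p(y|x) to the student. CCKD's class-conditional transfer is not reproduced here, and the temperature value is an arbitrary choice.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by T^2."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    return float(T * T * kl.mean())

rng = np.random.default_rng(0)
print(kd_loss(rng.normal(size=(8, 10)), rng.normal(size=(8, 10))))
```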
https://proceedings.mlr.press/v235/li24ap.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ap/li24ap.pdf
|
https://openreview.net/forum?id=IejxxE9DO2
|
A Neural-Guided Dynamic Symbolic Network for Exploring Mathematical Expressions from Data
|
https://proceedings.mlr.press/v235/li24ap.html
|
Wenqiang Li, Weijun Li, Lina Yu, Min Wu, Linjun Sun, Jingyi Liu, Yanjie Li, Shu Wei, Deng Yusong, Meilan Hao
|
https://proceedings.mlr.press/v235/li24ap.html
|
ICML 2024
|
Symbolic regression (SR) is a powerful technique for discovering the underlying mathematical expressions from observed data. Inspired by the success of deep learning, recent deep generative SR methods have shown promising results. However, these methods face difficulties in processing high-dimensional problems and learning constants due to the large search space, and they do not scale well to unseen problems. In this work, we propose DySymNet, a novel neural-guided Dynamic Symbolic Network for SR. Instead of searching for expressions within a large search space, we explore symbolic networks with various structures, guided by reinforcement learning, and optimize them to identify expressions that better fit the data. Based on extensive numerical experiments on low-dimensional public standard benchmarks and the well-known SRBench with more variables, DySymNet shows clear superiority over several representative baseline models. Open-source code is available at https://github.com/AILWQ/DySymNet.
|
https://proceedings.mlr.press/v235/li24aq.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24aq/li24aq.pdf
|
https://openreview.net/forum?id=WUdq1WFUPr
|
Cascade-CLIP: Cascaded Vision-Language Embeddings Alignment for Zero-Shot Semantic Segmentation
|
https://proceedings.mlr.press/v235/li24aq.html
|
Yunheng Li, Zhong-Yu Li, Quan-Sheng Zeng, Qibin Hou, Ming-Ming Cheng
|
https://proceedings.mlr.press/v235/li24aq.html
|
ICML 2024
|
Pre-trained vision-language models, e.g., CLIP, have been successfully applied to zero-shot semantic segmentation. Existing CLIP-based approaches primarily utilize visual features from the last layer to align with text embeddings, while they neglect the crucial information in intermediate layers that contain rich object details. However, we find that directly aggregating the multi-level visual features weakens the zero-shot ability for novel classes. The large differences between the visual features from different layers make these features hard to align well with the text embeddings. We resolve this problem by introducing a series of independent decoders to align the multi-level visual features with the text embeddings in a cascaded way, forming a novel but simple framework named Cascade-CLIP. Our Cascade-CLIP is flexible and can be easily applied to existing zero-shot semantic segmentation methods. Experimental results show that our simple Cascade-CLIP achieves superior zero-shot performance on segmentation benchmarks, like COCO-Stuff, Pascal-VOC, and Pascal-Context. Our code is available at https://github.com/HVision-NKU/Cascade-CLIP.
|
https://proceedings.mlr.press/v235/li24ar.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ar/li24ar.pdf
|
https://openreview.net/forum?id=vKtomqlSxm
|
Chain of Code: Reasoning with a Language Model-Augmented Code Emulator
|
https://proceedings.mlr.press/v235/li24ar.html
|
Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, Brian Ichter
|
https://proceedings.mlr.press/v235/li24ar.html
|
ICML 2024
|
Code provides a general syntactic structure to build complex programs and perform precise computations when paired with a code interpreter – we hypothesize that language models (LMs) can leverage code-writing to improve Chain of Thought reasoning not only for logic and arithmetic tasks, but also for semantic ones (and in particular, those that are a mix of both). For example, consider prompting an LM to write code that counts the number of times it detects sarcasm in an essay: the LM may struggle to write an implementation for "detect_sarcasm(string)" that can be executed by the interpreter (handling the edge cases would be insurmountable). However, LMs may still produce a valid solution if they not only write code, but also selectively "emulate" the interpreter by generating the expected output of "detect_sarcasm(string)". In this work, we propose Chain of Code (CoC), a simple yet surprisingly effective extension that improves LM code-driven reasoning. The key idea is to encourage LMs to format semantic sub-tasks in a program as flexible pseudocode so that the interpreter can explicitly catch undefined behaviors and hand them off to be simulated with an LM (as an "LMulator"). Experiments demonstrate that Chain of Code outperforms Chain of Thought and other baselines across a variety of benchmarks; on BIG-Bench Hard, Chain of Code achieves 84%, a gain of 12% over Chain of Thought. In a nutshell, CoC broadens the scope of reasoning questions that LMs can answer by "thinking in code".
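A minimal sketch of the interpreter-with-LM-fallback loop described above: lines the interpreter can execute are run directly, and calls it cannot resolve are handed to a language model to emulate. `query_lm` is a hypothetical stub standing in for a real LM call, and the whole snippet is an illustration of the idea rather than the authors' implementation.

```python
# Toy "LMulator": execute what Python can, delegate undefined calls to an LM.
def query_lm(prompt: str) -> str:
    # Placeholder: a real system would query a language model here.
    return "2" if "sarcasm" in prompt else "0"

def run_chain_of_code(program_lines, state=None):
    state = dict(state or {})
    for line in program_lines:
        try:
            exec(line, {}, state)                      # run executable lines
        except NameError:
            # Undefined behavior: ask the LM what value this line produces.
            var, _, expr = line.partition("=")
            state[var.strip()] = eval(query_lm(f"Evaluate: {expr.strip()}"), {}, {})
    return state

program = [
    "essay = 'Oh great, another meeting. I love sarcasm.'",
    "count = detect_sarcasm(essay)",   # undefined -> handed to the LM stub
    "answer = count",
]
print(run_chain_of_code(program)["answer"])
```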
|
https://proceedings.mlr.press/v235/li24as.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24as/li24as.pdf
|
https://openreview.net/forum?id=4byOXWrJay
|
Preventing Model Collapse in Gaussian Process Latent Variable Models
|
https://proceedings.mlr.press/v235/li24as.html
|
Ying Li, Zhidi Lin, Feng Yin, Michael Minyi Zhang
|
https://proceedings.mlr.press/v235/li24as.html
|
ICML 2024
|
Gaussian process latent variable models (GPLVMs) are a versatile family of unsupervised learning models commonly used for dimensionality reduction. However, common challenges in modeling data with GPLVMs include inadequate kernel flexibility and improper selection of the projection noise, leading to a type of model collapse characterized by vague latent representations that do not reflect the underlying data structure. This paper addresses these issues by, first, theoretically examining the impact of projection variance on model collapse through the lens of a linear GPLVM. Second, we tackle model collapse due to inadequate kernel flexibility by integrating the spectral mixture (SM) kernel and a differentiable random Fourier feature (RFF) kernel approximation, which ensures computational scalability and efficiency through off-the-shelf automatic differentiation tools for learning the kernel hyperparameters, projection variance, and latent representations within the variational inference framework. The proposed GPLVM, named advisedRFLVM, is evaluated across diverse datasets and consistently outperforms various salient competing models, including state-of-the-art variational autoencoders (VAEs) and other GPLVM variants, in terms of informative latent representations and missing data imputation.
|
https://proceedings.mlr.press/v235/li24at.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24at/li24at.pdf
|
https://openreview.net/forum?id=SfcB4cVvPz
|
A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks
|
https://proceedings.mlr.press/v235/li24at.html
|
Boqi Li, Weiwei Liu
|
https://proceedings.mlr.press/v235/li24at.html
|
ICML 2024
|
The rising threat of backdoor poisoning attacks (BPAs) on Deep Neural Networks (DNNs) has become a significant concern in recent years. In such attacks, the adversaries strategically target a specific class and generate a poisoned training set. The neural network (NN), well-trained on the poisoned training set, is able to predict any input with the trigger pattern as the targeted label, while maintaining accurate outputs for clean inputs. However, why BPAs work remains underexplored. To fill this gap, we employ a dirty-label attack and conduct a detailed analysis of BPAs in a two-layer convolutional neural network. We provide theoretical insights and results on the effectiveness of BPAs. Our experimental results on two real-world datasets validate our theoretical findings.
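To make the threat model concrete, a minimal sketch of how a dirty-label backdoor poisoning sample is typically constructed (a trigger patch stamped onto the image and the label flipped to the adversary's target class). This illustrates the generic attack setup analyzed in such work, not the paper's specific theoretical construction; shapes and patch placement are illustrative assumptions.

```python
# Construct a single dirty-label backdoor poisoning example.
import numpy as np

def poison_example(image: np.ndarray, target_label: int,
                   patch_size: int = 3, patch_value: float = 1.0):
    """image: (H, W, C) float array in [0, 1]. Returns (poisoned_image, label)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = patch_value  # trigger in the corner
    return poisoned, target_label                          # dirty (flipped) label

clean = np.random.rand(32, 32, 3)
poisoned_img, poisoned_label = poison_example(clean, target_label=0)
print(poisoned_img[-3:, -3:, 0])                           # the trigger patch
```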
|
https://proceedings.mlr.press/v235/li24au.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24au/li24au.pdf
|
https://openreview.net/forum?id=JHRvP84SQ5
|
Concentration Inequalities for General Functions of Heavy-Tailed Random Variables
|
https://proceedings.mlr.press/v235/li24au.html
|
Shaojie Li, Yong Liu
|
https://proceedings.mlr.press/v235/li24au.html
|
ICML 2024
|
Concentration inequalities play an essential role in the study of machine learning and high dimensional statistics. In this paper, we obtain unbounded analogues of the popular bounded difference inequality for functions of independent random variables with heavy-tailed distributions. The main results provide a general framework applicable to all heavy-tailed distributions with finite variance. To illustrate the strength of our results, we present applications to sub-exponential tails, sub-Weibull tails, and heavier polynomially decaying tails. Applied to some standard problems in statistical learning theory (vector valued concentration, Rademacher complexity, and algorithmic stability), we show that these inequalities allow an extension of existing results to heavy-tailed distributions up to finite variance.
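For context, the classical bounded difference (McDiarmid) inequality that this line of work generalizes to heavy-tailed settings, stated here only as the standard reference point and not as a result of the paper:

```latex
% If f has bounded differences, i.e. for every coordinate k and all arguments
%   |f(x_1,...,x_k,...,x_n) - f(x_1,...,x_k',...,x_n)| <= c_k,
% and X_1,...,X_n are independent, then for all t > 0:
\[
  \mathbb{P}\bigl( f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \ge t \bigr)
  \;\le\; \exp\!\left( - \frac{2 t^{2}}{\sum_{k=1}^{n} c_k^{2}} \right).
\]
```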
|
https://proceedings.mlr.press/v235/li24av.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24av/li24av.pdf
|
https://openreview.net/forum?id=OrVl8R13Wy
|
Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once
|
https://proceedings.mlr.press/v235/li24av.html
|
Zhangheng Li, Shiwei Liu, Tianlong Chen, Ajay Kumar Jaiswal, Zhenyu Zhang, Dilin Wang, Raghuraman Krishnamoorthi, Shiyu Chang, Zhangyang Wang
|
https://proceedings.mlr.press/v235/li24av.html
|
ICML 2024
|
Sparse Neural Networks (SNNs) have received voluminous attention for mitigating the explosion in computational costs and memory footprints of modern deep neural networks. Despite their popularity, most state-of-the-art training approaches seek to find a single high-quality sparse subnetwork with a preset sparsity pattern and ratio, making them inadequate to accommodate platform and resource variability. Recently proposed approaches attempt to jointly train multiple subnetworks (which we term "sparse co-training") with a fixed sparsity pattern, to allow switching sparsity ratios subject to resource requirements. In this work, we take one more step forward and expand the scope of sparse co-training to cover diverse sparsity patterns and multiple sparsity ratios at once. We introduce Sparse Cocktail, the first sparse co-training framework that co-trains a suite of sparsity patterns simultaneously, each loaded with multiple sparsity ratios, which facilitates a harmonious switch across various sparsity patterns and ratios at inference depending on hardware availability. More specifically, Sparse Cocktail alternately trains subnetworks generated from different sparsity patterns with a gradual increase in sparsity ratios across patterns, and relies on a unified mask generation process and Dense Pivot Co-training to ensure that the subnetworks of different patterns orchestrate their shared parameters without canceling each other’s performance. Experimental results on image classification, object detection, and instance segmentation illustrate the favorable effectiveness and flexibility of Sparse Cocktail, pointing to a promising direction for sparse co-training. Codes will be released.
|
https://proceedings.mlr.press/v235/li24aw.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24aw/li24aw.pdf
|
https://openreview.net/forum?id=zCmMkWK4Ly
|
Individual Contributions as Intrinsic Exploration Scaffolds for Multi-agent Reinforcement Learning
|
https://proceedings.mlr.press/v235/li24aw.html
|
Xinran Li, Zifan Liu, Shibo Chen, Jun Zhang
|
https://proceedings.mlr.press/v235/li24aw.html
|
ICML 2024
|
In multi-agent reinforcement learning (MARL), effective exploration is critical, especially in sparse reward environments. Although introducing global intrinsic rewards can foster exploration in such settings, it often complicates credit assignment among agents. To address this difficulty, we propose Individual Contributions as intrinsic Exploration Scaffolds (ICES), a novel approach to motivate exploration by assessing each agent’s contribution from a global view. In particular, ICES constructs exploration scaffolds with Bayesian surprise, leveraging global transition information during centralized training. These scaffolds, used only in training, help to guide individual agents towards actions that significantly impact the global latent state transitions. Additionally, ICES separates exploration policies from exploitation policies, enabling the former to utilize privileged global information during training. Extensive experiments on cooperative benchmark tasks with sparse rewards, including Google Research Football (GRF) and StarCraft Multi-agent Challenge (SMAC), demonstrate that ICES exhibits superior exploration capabilities compared with baselines. The code is publicly available at https://github.com/LXXXXR/ICES.
|
https://proceedings.mlr.press/v235/li24ax.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ax/li24ax.pdf
|
https://openreview.net/forum?id=eaNLvrP8n1
|
Learning Adaptive and View-Invariant Vision Transformer for Real-Time UAV Tracking
|
https://proceedings.mlr.press/v235/li24ax.html
|
Yongxin Li, Mengyuan Liu, You Wu, Xucheng Wang, Xiangyang Yang, Shuiwang Li
|
https://proceedings.mlr.press/v235/li24ax.html
|
ICML 2024
|
Harnessing transformer-based models, visual tracking has made substantial strides. However, the sluggish performance of current trackers limits their practicality on devices with constrained computational capabilities, especially for real-time unmanned aerial vehicle (UAV) tracking. Addressing this challenge, we introduce AVTrack, an adaptive computation framework tailored to selectively activate transformer blocks for real-time UAV tracking in this work. Our novel Activation Module (AM) dynamically optimizes ViT architecture, selectively engaging relevant components and enhancing inference efficiency without compromising much tracking performance. Moreover, we bolster the effectiveness of ViTs, particularly in addressing challenges arising from extreme changes in viewing angles commonly encountered in UAV tracking, by learning view-invariant representations through mutual information maximization. Extensive experiments on five tracking benchmarks affirm the effectiveness and versatility of our approach, positioning it as a state-of-the-art solution in visual tracking. Code is released at: https://github.com/wuyou3474/AVTrack.
|
https://proceedings.mlr.press/v235/li24ay.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ay/li24ay.pdf
|
https://openreview.net/forum?id=1N7pjXKkx8
|
PID: Prompt-Independent Data Protection Against Latent Diffusion Models
|
https://proceedings.mlr.press/v235/li24ay.html
|
Ang Li, Yichuan Mo, Mingjie Li, Yisen Wang
|
https://proceedings.mlr.press/v235/li24ay.html
|
ICML 2024
|
The few-shot fine-tuning of Latent Diffusion Models (LDMs) has enabled them to grasp new concepts from a limited number of images. However, given the vast amount of personal images accessible online, this capability raises critical concerns about civil privacy. While several previous defense methods have been developed to prevent such misuse of LDMs, they typically assume that the textual prompts used by data protectors exactly match those employed by data exploiters. In this paper, we first empirically demonstrate that breaking this assumption, i.e., in cases where discrepancies exist between the textual conditions used by protectors and exploiters, can substantially reduce the effectiveness of these defenses. Furthermore, considering the visual encoder’s independence from textual prompts, we delve into the visual encoder and thoroughly investigate how manipulating the visual encoder affects the few-shot fine-tuning process of LDMs. Drawing on these insights, we propose a simple yet effective method called Prompt-Independent Defense (PID) to safeguard privacy against LDMs. We show that PID can act as a strong privacy shield on its own while requiring significantly less computational power. We believe our studies, along with the comprehensive understanding and new defense method, provide a notable advance toward reliable data protection against LDMs.
|
https://proceedings.mlr.press/v235/li24az.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24az/li24az.pdf
|
https://openreview.net/forum?id=9BWRs6XF8P
|
A Contextual Combinatorial Bandit Approach to Negotiation
|
https://proceedings.mlr.press/v235/li24az.html
|
Yexin Li, Zhancun Mu, Siyuan Qi
|
https://proceedings.mlr.press/v235/li24az.html
|
ICML 2024
|
Learning effective negotiation strategies poses two key challenges: the exploration-exploitation dilemma and dealing with large action spaces. However, there is an absence of learning-based approaches that effectively address these challenges in negotiation. This paper introduces a comprehensive formulation to tackle various negotiation problems. Our approach leverages contextual combinatorial multi-armed bandits, with the bandits resolving the exploration-exploitation dilemma and the combinatorial nature handling large action spaces. Building upon this formulation, we introduce NegUCB, a novel method that also handles common issues such as partial observations and complex reward functions in negotiation. NegUCB is contextual and tailored for full-bandit feedback without constraints on the reward functions. Under mild assumptions, it ensures a sub-linear regret upper bound. Experiments conducted on three negotiation tasks demonstrate the superiority of our approach.
|
https://proceedings.mlr.press/v235/li24ba.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ba/li24ba.pdf
|
https://openreview.net/forum?id=MlzUD5CKvZ
|
Improving Prototypical Visual Explanations with Reward Reweighing, Reselection, and Retraining
|
https://proceedings.mlr.press/v235/li24ba.html
|
Aaron Jiaxun Li, Robin Netzorg, Zhihan Cheng, Zhuoqin Zhang, Bin Yu
|
https://proceedings.mlr.press/v235/li24ba.html
|
ICML 2024
|
In recent years, work has gone into developing deep interpretable methods for image classification that clearly attributes a model’s output to specific features of the data. One such of these methods is the Prototypical Part Network (ProtoPNet), which attempts to classify images based on meaningful parts of the input. While this architecture is able to produce visually interpretable classifications, it often learns to classify based on parts of the image that are not semantically meaningful. To address this problem, we propose the Reward Reweighing, Reselecting, and Retraining (R3) post-processing framework, which performs three additional corrective updates to a pretrained ProtoPNet in an offline and efficient manner. The first two steps involve learning a reward model based on collected human feedback and then aligning the prototypes with human preferences. The final step is retraining, which realigns the base features and the classifier layer of the original model with the updated prototypes. We find that our R3 framework consistently improves both the interpretability and the predictive accuracy of ProtoPNet and its variants.
|
https://proceedings.mlr.press/v235/li24bb.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bb/li24bb.pdf
|
https://openreview.net/forum?id=DKKg5EFAFr
|
Evaluating Quantized Large Language Models
|
https://proceedings.mlr.press/v235/li24bb.html
|
Shiyao Li, Xuefei Ning, Luning Wang, Tengxuan Liu, Xiangsheng Shi, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
|
https://proceedings.mlr.press/v235/li24bb.html
|
ICML 2024
|
Post-training quantization (PTQ) has emerged as a promising technique to reduce the cost of large language models (LLMs). Specifically, PTQ can effectively mitigate memory consumption and reduce computational overhead in LLMs. To meet the requirements of both high efficiency and performance across diverse scenarios, a comprehensive evaluation of quantized LLMs is essential to guide the selection of quantization methods. This paper presents a thorough evaluation of these factors by evaluating the effect of PTQ on Weight, Activation, and KV Cache on 11 model families, including OPT, LLaMA2, Falcon, Bloomz, Mistral, ChatGLM, Vicuna, LongChat, StableLM, Gemma, and Mamba, with parameters ranging from 125M to 180B. The evaluation encompasses five types of tasks: basic NLP, emergent ability, trustworthiness, dialogue, and long-context tasks. Moreover, we also evaluate the state-of-the-art (SOTA) quantization methods to demonstrate their applicability. Based on the extensive experiments, we systematically summarize the effect of quantization, provide recommendations to apply quantization techniques, and point out future directions. The code can be found in https://github.com/thu-nics/qllm-eval.
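As a concrete reference for what PTQ does in its simplest form, here is a minimal sketch of symmetric round-to-nearest weight quantization with a per-tensor scale. It is not any specific method evaluated in the paper; function names and the bit width are illustrative choices.

```python
# Simplest post-training quantization: symmetric round-to-nearest per tensor.
import numpy as np

def quantize_rtn(w: np.ndarray, n_bits: int = 8):
    """Return integer codes and the scale needed to dequantize them."""
    qmax = 2 ** (n_bits - 1) - 1                     # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax + 1e-12
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_rtn(w, n_bits=8)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```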
|
https://proceedings.mlr.press/v235/li24bc.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bc/li24bc.pdf
|
https://openreview.net/forum?id=xlr6AUDuJz
|
The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning
|
https://proceedings.mlr.press/v235/li24bc.html
|
Nathaniel Li, Alexander Pan, Anjali Gopal, Summer Yue, Daniel Berrios, Alice Gatti, Justin D. Li, Ann-Kathrin Dombrowski, Shashwat Goel, Gabriel Mukobi, Nathan Helm-Burger, Rassin Lababidi, Lennart Justen, Andrew Bo Liu, Michael Chen, Isabelle Barrass, Oliver Zhang, Xiaoyuan Zhu, Rishub Tamirisa, Bhrugu Bharathi, Ariel Herbert-Voss, Cort B Breuer, Andy Zou, Mantas Mazeika, Zifan Wang, Palash Oswal, Weiran Lin, Adam Alfred Hunt, Justin Tienken-Harder, Kevin Y. Shih, Kemper Talley, John Guan, Ian Steneker, David Campbell, Brad Jokubaitis, Steven Basart, Stephen Fitz, Ponnurangam Kumaraguru, Kallol Krishna Karmakar, Uday Tupakula, Vijay Varadharajan, Yan Shoshitaishvili, Jimmy Ba, Kevin M. Esvelt, Alexandr Wang, Dan Hendrycks
|
https://proceedings.mlr.press/v235/li24bc.html
|
ICML 2024
|
The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private and restricted to a narrow range of malicious use scenarios, which limits further research into reducing malicious use. To fill these gaps, we release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. To guide progress on unlearning, we develop RMU, a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use from LLMs. We release our benchmark and code publicly at https://wmdp.ai.
|
https://proceedings.mlr.press/v235/li24bd.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bd/li24bd.pdf
|
https://openreview.net/forum?id=qIOSNyPPwB
|
Graph Neural Network Explanations are Fragile
|
https://proceedings.mlr.press/v235/li24bd.html
|
Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang
|
https://proceedings.mlr.press/v235/li24bd.html
|
ICML 2024
|
Explainable Graph Neural Network (GNN) has emerged recently to foster the trust of using GNNs. Existing GNN explainers are developed from various perspectives to enhance the explanation performance. We take the first step to study GNN explainers under adversarial attack: we find that an adversary slightly perturbing the graph structure can ensure that the GNN model still makes correct predictions, while the GNN explainer yields a drastically different explanation on the perturbed graph. Specifically, we first formulate the attack problem under a practical threat model (i.e., the adversary has limited knowledge about the GNN explainer and a restricted perturbation budget). We then design two methods (i.e., one is loss-based and the other is deduction-based) to realize the attack. We evaluate our attacks on various GNN explainers and the results show these explainers are fragile.
|
https://proceedings.mlr.press/v235/li24be.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24be/li24be.pdf
|
https://openreview.net/forum?id=39UqOkTjFn
|
When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions
|
https://proceedings.mlr.press/v235/li24be.html
|
Zhening Li, Gabriel Poesia, Armando Solar-Lezama
|
https://proceedings.mlr.press/v235/li24be.html
|
ICML 2024
|
Skills are temporal abstractions that are intended to improve reinforcement learning (RL) performance through hierarchical RL. Despite our intuition about the properties of an environment that make skills useful, a precise characterization has been absent. We provide the first such characterization, focusing on the utility of deterministic skills in deterministic sparse-reward environments with finite action spaces. We show theoretically and empirically that RL performance gain from skills is worse in environments where solutions to states are less compressible. Additional theoretical results suggest that skills benefit exploration more than they benefit learning from existing experience, and that using unexpressive skills such as macroactions may worsen RL performance. We hope our findings can guide research on automatic skill discovery and help RL practitioners better decide when and how to use skills.
|
https://proceedings.mlr.press/v235/li24bf.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bf/li24bf.pdf
|
https://openreview.net/forum?id=phGHQOKmaU
|
DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching
|
https://proceedings.mlr.press/v235/li24bf.html
|
Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang
|
https://proceedings.mlr.press/v235/li24bf.html
|
ICML 2024
|
In offline reinforcement learning (RL), the performance of the learned policy highly depends on the quality of offline datasets. However, in many cases the offline dataset contains very limited optimal trajectories. This poses a challenge for offline RL algorithms, as agents must acquire the ability to transition to high-reward regions. To address this issue, we introduce Diffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data augmentation pipeline that systematically generates stitching transitions between trajectories. DiffStitch effectively connects low-reward trajectories with high-reward trajectories, forming globally optimal trajectories and thereby mitigating the challenges faced by offline RL algorithms in learning trajectory stitching. Empirical experiments conducted on D4RL datasets demonstrate the effectiveness of our pipeline across RL methodologies. Notably, DiffStitch demonstrates substantial enhancements in the performance of one-step methods (IQL), imitation learning methods (TD3+BC), and trajectory optimization methods (DT). Our code is publicly available at https://github.com/guangheli12/DiffStitch
|
https://proceedings.mlr.press/v235/li24bg.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bg/li24bg.pdf
|
https://openreview.net/forum?id=1QmFKwVwwI
|
Privacy Preserving Adaptive Experiment Design
|
https://proceedings.mlr.press/v235/li24bg.html
|
Jiachun Li, Kaining Shi, David Simchi-Levi
|
https://proceedings.mlr.press/v235/li24bg.html
|
ICML 2024
|
Adaptive experiments are widely adopted to estimate the conditional average treatment effect (CATE) in clinical trials and many other scenarios. While the primary goal of the experiment is to maximize estimation accuracy, due to the imperative of social welfare it is also crucial to provide treatment with superior outcomes to patients, which is measured by regret in the contextual bandit framework. Furthermore, privacy concerns arise in clinical scenarios containing sensitive data such as patients’ health records. Therefore, it is essential for the treatment allocation mechanism to incorporate robust privacy protection measures. In this paper, we investigate the tradeoff between loss of social welfare and statistical power of CATE estimation in contextual bandit experiments. We establish matching upper and lower bounds for the multi-objective optimization problem, and then adopt the concept of Pareto optimality to mathematically characterize the optimality condition. Furthermore, we propose differentially private algorithms that still match the lower bound, showing that privacy is "almost free". Additionally, we derive the asymptotic normality of the estimator, which is essential in statistical inference and hypothesis testing.
|
https://proceedings.mlr.press/v235/li24bh.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bh/li24bh.pdf
|
https://openreview.net/forum?id=nB6ERIud2y
|
Combining Experimental and Historical Data for Policy Evaluation
|
https://proceedings.mlr.press/v235/li24bh.html
|
Ting Li, Chengchun Shi, Qianglin Wen, Yang Sui, Yongli Qin, Chunbo Lai, Hongtu Zhu
|
https://proceedings.mlr.press/v235/li24bh.html
|
ICML 2024
|
This paper studies policy evaluation with multiple data sources, especially in scenarios that involve one experimental dataset with two arms, complemented by a historical dataset generated under a single control arm. We propose novel data integration methods that linearly integrate base policy value estimators constructed based on the experimental and historical data, with weights optimized to minimize the mean square error (MSE) of the resulting combined estimator. We further apply the pessimistic principle to obtain more robust estimators, and extend these developments to sequential decision making. Theoretically, we establish non-asymptotic error bounds for the MSEs of our proposed estimators, and derive their oracle, efficiency and robustness properties across a broad spectrum of reward shift scenarios. Numerical experiments and real-data-based analyses from a ridesharing company demonstrate the superior performance of the proposed estimators.
|
https://proceedings.mlr.press/v235/li24bi.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bi/li24bi.pdf
|
https://openreview.net/forum?id=mhI5nc5QwX
|
LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models
|
https://proceedings.mlr.press/v235/li24bi.html
|
Guangyan Li, Yongqiang Tang, Wensheng Zhang
|
https://proceedings.mlr.press/v235/li24bi.html
|
ICML 2024
|
Large language models (LLMs) show excellent performance in difficult tasks, but they often require massive memory and computational resources. How to reduce the parameter scale of LLMs has become a research hotspot. In this study, we make the important observation that the multi-head self-attention (MHA) sub-layer of the Transformer exhibits noticeable low-rank structure, while the feed-forward network (FFN) sub-layer does not. In this regard, we design a novel structured compression method, LoRAP, which organically combines Low-Rank matrix approximation And structured Pruning. For the MHA sub-layer, we propose an input-activation-weighted singular value decomposition method and allocate different parameter amounts to each weight matrix based on the differences in the matrices’ low-rank properties. For the FFN sub-layer, we propose a gradient-free structured channel pruning method and preserve the least important 1% of parameters, which actually play a vital role in model performance. Extensive evaluations on zero-shot perplexity and zero-shot task classification indicate that our proposal is superior to previous structured compression rivals under multiple compression ratios. Our code will be released soon.
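A minimal sketch of the plain truncated-SVD low-rank factorization that underlies the "low-rank matrix approximation" component described above; it omits LoRAP's input-activation weighting and per-matrix rank allocation, and the shapes and rank are illustrative assumptions.

```python
# Truncated SVD: factor a weight matrix W into two thin factors A @ B.
import numpy as np

def low_rank_factors(w: np.ndarray, rank: int):
    """Factor W (d_out x d_in) into A (d_out x r) and B (r x d_in)."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]          # absorb singular values into A
    b = vt[:rank, :]
    return a, b

w = np.random.randn(64, 64)
a, b = low_rank_factors(w, rank=8)
print("relative error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
```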
|
https://proceedings.mlr.press/v235/li24bj.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bj/li24bj.pdf
|
https://openreview.net/forum?id=7E4c2gyP0R
|
DiffFPR: Diffusion Prior for Oversampled Fourier Phase Retrieval
|
https://proceedings.mlr.press/v235/li24bj.html
|
Ji Li, Chao Wang
|
https://proceedings.mlr.press/v235/li24bj.html
|
ICML 2024
|
This paper tackles the challenging Fourier phase retrieval problem, for which absolute uniqueness of the solution does not hold. The existence of equivalent solutions (a.k.a. trivial ambiguities) hinders successful recovery, especially for multi-channel color images. Traditional iterative engines, such as Relaxed Averaged Alternating Reflections (RAAR), can be applied to reconstruct the image channel-wise. However, due to the relative uniqueness of the solution, the restoration is not automatically aligned with the accurate orientation for each channel, resulting in a reconstructed image that deviates significantly from the true solution manifold. To address this issue, a diffusion model, serving as a strong prior on the color image, is integrated into the iterative engine by penalizing the mismatch between the image channels. The combination of the traditional iterative engine and the diffusion model provides an effective solution to oversampled Fourier phase retrieval. The resulting algorithm, DiffFPR, is validated by experiments. The code is available at https://github.com/Chilie/DiffFPR.
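For intuition, here is a minimal sketch of the classical error-reduction iteration for oversampled Fourier phase retrieval (a simpler hand-crafted iterative engine than RAAR), alternating between the Fourier-magnitude constraint and a support/nonnegativity constraint. Its tendency to stagnate at ambiguous solutions is exactly the failure mode that stronger priors, such as the paper's diffusion prior, are meant to address. This is a generic textbook baseline, not DiffFPR.

```python
# Error-reduction (Gerchberg-Saxton / Fienup style) phase retrieval sketch.
import numpy as np

def error_reduction(magnitude, support, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support          # random initial guess
    for _ in range(n_iter):
        f = np.fft.fft2(x)
        f = magnitude * np.exp(1j * np.angle(f))        # enforce Fourier magnitude
        x = np.real(np.fft.ifft2(f))
        x = np.clip(x, 0, None) * support               # enforce support & nonnegativity
    return x                                            # may stagnate at ambiguous solutions

# Toy example: a small nonnegative image inside a zero-padded (oversampled) frame.
img = np.zeros((32, 32))
img[12:20, 12:20] = np.random.default_rng(1).random((8, 8))
support = np.zeros((32, 32))
support[12:20, 12:20] = 1.0
rec = error_reduction(np.abs(np.fft.fft2(img)), support)
print("relative error:", np.linalg.norm(rec - img) / np.linalg.norm(img))
```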
|
https://proceedings.mlr.press/v235/li24bk.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bk/li24bk.pdf
|
https://openreview.net/forum?id=eDtty9ZCvt
|
Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning
|
https://proceedings.mlr.press/v235/li24bk.html
|
Depeng Li, Tianqi Wang, Junwei Chen, Wei Dai, Zhigang Zeng
|
https://proceedings.mlr.press/v235/li24bk.html
|
ICML 2024
|
Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones. In this paper, we propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL. In each training session, it introduces a supervisory mechanism to guide network expansion whose growth size is compactly commensurate with the intrinsic complexity of a newly arriving task. This constructs a near-minimal network while allowing the model to expand its capacity when it cannot sufficiently accommodate new classes. At inference time, it automatically reactivates the required neural units to retrieve knowledge and leaves the remaining ones inactivated to prevent interference. We name our model AutoActivator, which is effective and scalable. To gain insights into the neural unit dynamics, we theoretically analyze the model’s convergence property via a universal approximation theorem on learning sequential mappings, which is under-explored in the CIL community. Experiments show that our method achieves strong CIL performance in rehearsal-free and minimal-expansion settings with different backbones.
|
https://proceedings.mlr.press/v235/li24bl.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bl/li24bl.pdf
|
https://openreview.net/forum?id=5sgkNtexs2
|
Compress Clean Signal from Noisy Raw Image: A Self-Supervised Approach
|
https://proceedings.mlr.press/v235/li24bl.html
|
Zhihao Li, Yufei Wang, Alex Kot, Bihan Wen
|
https://proceedings.mlr.press/v235/li24bl.html
|
ICML 2024
|
Raw images offer unique advantages in many low-level visual tasks due to their unprocessed nature. However, this unprocessed state accentuates noise, making raw images challenging to compress effectively. Current compression methods often overlook the ubiquitous noise in raw space, leading to increased bitrates and reduced quality. In this paper, we propose a novel raw image compression scheme that selectively compresses the noise-free component of the input, while discarding its real noise using a self-supervised approach. By excluding noise from the bitstream, both the coding efficiency and reconstruction quality are significantly enhanced. We curate a full-day dataset of raw images with calibrated noise parameters and reference images to evaluate the performance of models under a wide range of input signal-to-noise ratios. Experimental results demonstrate that our method surpasses existing compression techniques, achieving a more advantageous rate-distortion balance with improvements ranging from +2 to +10 dB and yielding a bit saving of 2 to 50 times. The code will be released upon paper acceptance.
|
https://proceedings.mlr.press/v235/li24bm.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bm/li24bm.pdf
|
https://openreview.net/forum?id=BOunbuapcv
|
VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling
|
https://proceedings.mlr.press/v235/li24bm.html
|
Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Jiangbin Zheng, Yufei Huang, Stan Z. Li
|
https://proceedings.mlr.press/v235/li24bm.html
|
ICML 2024
|
Similar to natural language models, pre-trained genome language models are proposed to capture the underlying intricacies within genomes with unsupervised sequence modeling. They have become essential tools for researchers and practitioners in biology. However, the hand-crafted tokenization policies used in these models may not encode the most discriminative patterns from the limited vocabulary of genomic data. In this paper, we introduce VQDNA, a general-purpose framework that renovates genome tokenization from the perspective of genome vocabulary learning. By leveraging vector-quantized codebook as learnable vocabulary, VQDNA can adaptively tokenize genomes into pattern-aware embeddings in an end-to-end manner. To further push its limits, we propose Hierarchical Residual Quantization (HRQ), where varying scales of codebooks are designed in a hierarchy to enrich the genome vocabulary in a coarse-to-fine manner. Extensive experiments on 32 genome datasets demonstrate VQDNA’s superiority and favorable parameter efficiency compared to existing genome language models. Notably, empirical analysis of SARS-CoV-2 mutations reveals the fine-grained pattern awareness and biological significance of learned HRQ vocabulary, highlighting its untapped potential for broader applications in genomics.
|
https://proceedings.mlr.press/v235/li24bn.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bn/li24bn.pdf
|
https://openreview.net/forum?id=I4HTPws9P6
|
How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?
|
https://proceedings.mlr.press/v235/li24bn.html
|
Hongkang Li, Meng Wang, Songtao Lu, Xiaodong Cui, Pin-Yu Chen
|
https://proceedings.mlr.press/v235/li24bn.html
|
ICML 2024
|
Transformer-based large language models have displayed impressive in-context learning (ICL) capabilities, where a pre-trained model can handle new tasks without fine-tuning by simply augmenting the query with some input-output examples from that task. Despite the empirical success, the mechanics of how to train a Transformer to achieve ICL and the corresponding ICL capacity remain largely elusive due to the technical challenges of analyzing the nonconvex training problems resulting from the nonlinear self-attention and nonlinear activation in Transformers. To the best of our knowledge, this paper provides the first theoretical analysis of the training dynamics of Transformers with nonlinear self-attention and nonlinear MLP, together with the ICL generalization capability of the resulting model. Focusing on a group of binary classification tasks, we train Transformers using data from a subset of these tasks and quantify the impact of various factors on the ICL generalization performance on the remaining unseen tasks, with and without data distribution shifts. We also analyze how different components in the learned Transformers contribute to the ICL performance. Furthermore, we provide the first theoretical analysis of how model pruning affects ICL performance and prove that proper magnitude-based pruning can have a minimal impact on ICL while reducing inference costs. These theoretical findings are justified through numerical experiments.
|
https://proceedings.mlr.press/v235/li24bo.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bo/li24bo.pdf
|
https://openreview.net/forum?id=mJhXlsZzzE
|
What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding
|
https://proceedings.mlr.press/v235/li24bo.html
|
Hongkang Li, Meng Wang, Tengfei Ma, Sijia Liu, Zaixi Zhang, Pin-Yu Chen
|
https://proceedings.mlr.press/v235/li24bo.html
|
ICML 2024
|
Graph Transformers, which incorporate self-attention and positional encoding, have recently emerged as a powerful architecture for various graph learning tasks. Despite their impressive performance, the complex non-convex interactions across layers and the recursive graph structure have made it challenging to establish a theoretical foundation for learning and generalization. This study introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised node classification, comprising a self-attention layer with relative positional encoding and a two-layer perceptron. Focusing on a graph data model with discriminative nodes that determine node labels and non-discriminative nodes that are class-irrelevant, we characterize the sample complexity required to achieve a desirable generalization error by training with stochastic gradient descent (SGD). This paper provides a quantitative characterization of the sample complexity and the number of iterations for convergence, dependent on the fraction of discriminative nodes, the dominant patterns, and the initial model errors. Furthermore, we demonstrate that self-attention and positional encoding enhance generalization by making the attention map sparse and promoting the core neighborhood during training, which explains the superior feature representation of Graph Transformers. Our theoretical results are supported by empirical experiments on synthetic and real-world benchmarks.
|
https://proceedings.mlr.press/v235/li24bp.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bp/li24bp.pdf
|
https://openreview.net/forum?id=kAFevjEYsz
|
OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift
|
https://proceedings.mlr.press/v235/li24bp.html
|
Lin Li, Yifei Wang, Chawin Sitawarin, Michael W. Spratling
|
https://proceedings.mlr.press/v235/li24bp.html
|
ICML 2024
|
Existing works have made great progress in improving adversarial robustness, but typically test their method only on data from the same distribution as the training data, i.e. in-distribution (ID) testing. As a result, it is unclear how such robustness generalizes under input distribution shifts, i.e. out-of-distribution (OOD) testing. This omission is concerning as such distribution shifts are unavoidable when methods are deployed in the wild. To address this issue we propose a benchmark named OODRobustBench to comprehensively assess OOD adversarial robustness using 23 dataset-wise shifts (i.e. naturalistic shifts in input distribution) and 6 threat-wise shifts (i.e., unforeseen adversarial threat models). OODRobustBench is used to assess 706 robust models using 60.7K adversarial evaluations. This large-scale analysis shows that: 1) adversarial robustness suffers from a severe OOD generalization issue; 2) ID robustness correlates strongly with OOD robustness in a positive linear way. The latter enables the prediction of OOD robustness from ID robustness. We then predict and verify that existing methods are unlikely to achieve high OOD robustness. Novel methods are therefore required to achieve OOD robustness beyond our prediction. To facilitate the development of these methods, we investigate a wide range of techniques and identify several promising directions. Code and models are available at: https://github.com/OODRobustBench/OODRobustBench.
|
https://proceedings.mlr.press/v235/li24bq.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bq/li24bq.pdf
|
https://openreview.net/forum?id=VfWrXJtLSL
|
Improved Bounds for Pure Private Agnostic Learning: Item-Level and User-Level Privacy
|
https://proceedings.mlr.press/v235/li24bq.html
|
Bo Li, Wei Wang, Peng Ye
|
https://proceedings.mlr.press/v235/li24bq.html
|
ICML 2024
|
Machine Learning has made remarkable progress in a wide range of fields. In many scenarios, learning is performed on datasets involving sensitive information, in which privacy protection is essential for learning algorithms. In this work, we study pure private learning in the agnostic model – a framework reflecting the learning process in practice. We examine the number of users required under item-level (where each user contributes one example) and user-level (where each user contributes multiple examples) privacy and derive several improved upper bounds. For item-level privacy, our algorithm achieves a near optimal bound for general concept classes. We extend this to the user-level setting, rendering a tighter upper bound than the one proved by Ghazi et al. (2023). Lastly, we consider the problem of learning thresholds under user-level privacy and present an algorithm with a nearly tight user complexity.
|
https://proceedings.mlr.press/v235/li24br.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24br/li24br.pdf
|
https://openreview.net/forum?id=OS0szhkPmF
|
Disentangled Graph Self-supervised Learning for Out-of-Distribution Generalization
|
https://proceedings.mlr.press/v235/li24br.html
|
Haoyang Li, Xin Wang, Zeyang Zhang, Haibo Chen, Ziwei Zhang, Wenwu Zhu
|
https://proceedings.mlr.press/v235/li24br.html
|
ICML 2024
|
Graph out-of-distribution (OOD) generalization, aiming to generalize graph neural networks (GNNs) under distribution shifts between training and testing environments, has attracted ever-increasing attention recently. However, existing literature heavily relies on sufficient task-dependent graph labels, which are often scarce or even unavailable, limiting their applications in real-world scenarios. In this paper, we study the self-supervised graph OOD generalization problem, i.e., learning GNNs capable of achieving relatively stable performance under distribution shifts without graph labels. However, the problem remains largely unexplored, with the critical challenge that the invariant and variant information are highly entangled in graphs. To solve this problem, we propose an OOD generalized disentangled graph contrastive learning model (OOD-GCL), which is capable of learning disentangled graph-level representations with self-supervision that can handle distribution shifts between training and testing graph data. Specifically, we first introduce a disentangled graph encoder to map each input graph into a factorized graph representation. Then we propose a tailored disentangled invariant self-supervised learning module that maximizes the predictive ability of the representations while ensuring that the representations from channels other than a given one are invariant to the environments partitioned by the corresponding latent factor, thereby excluding that factor’s information from those channels and achieving disentanglement. Finally, the disentangled graph representations are fed into a linear predictor and fine-tuned for the downstream tasks. We provide comprehensive theoretical analyses to show that our model can learn disentangled graph representations and achieve OOD generalization. Extensive experiments on real-world datasets demonstrate the superiority of our model against state-of-the-art baselines under distribution shifts for graph classification tasks.
|
https://proceedings.mlr.press/v235/li24bs.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bs/li24bs.pdf
|
https://openreview.net/forum?id=wlOaG9g0uq
|
The Good, The Bad, and Why: Unveiling Emotions in Generative AI
|
https://proceedings.mlr.press/v235/li24bs.html
|
Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
|
https://proceedings.mlr.press/v235/li24bs.html
|
ICML 2024
|
Emotion significantly impacts our daily behaviors and interactions. While recent generative AI models, such as large language models, have shown impressive performance in various tasks, it remains unclear whether they truly comprehend emotions and why. This paper aims to address this gap by incorporating psychological theories to gain a holistic understanding of emotions in generative AI models. Specifically, we propose three approaches: 1) EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI model performance, and 3) EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it. More importantly, EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain. Our work heralds a novel avenue for exploring psychology to enhance our understanding of generative AI models, thus boosting the research and development of human-AI collaboration and mitigating potential risks.
|
https://proceedings.mlr.press/v235/li24bt.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bt/li24bt.pdf
|
https://openreview.net/forum?id=1NdN7eXyb4
|
EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty
|
https://proceedings.mlr.press/v235/li24bt.html
|
Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang
|
https://proceedings.mlr.press/v235/li24bt.html
|
ICML 2024
|
Autoregressive decoding makes the inference of Large Language Models (LLMs) time-consuming. In this paper, we reconsider speculative sampling and derive two key observations. Firstly, autoregression at the feature (second-to-top-layer) level is more straightforward than at the token level. Secondly, the inherent uncertainty in feature (second-to-top-layer) level autoregression constrains its performance. Based on these insights, we introduce EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency), a simple yet highly efficient speculative sampling framework. By incorporating a token sequence advanced by one time step, EAGLE effectively resolves the uncertainty, enabling precise second-to-top-layer feature prediction with minimal overhead. We conducted comprehensive evaluations of EAGLE, including all models from the Vicuna and LLaMA2-Chat series, the MoE model Mixtral 8x7B Instruct, and tasks in dialogue, code generation, mathematical reasoning, and instruction following. For LLaMA2-Chat 70B, EAGLE achieved a latency speedup ratio of 2.7x-3.5x, doubled throughput, while maintaining the distribution of the generated text.
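For background, a minimal sketch of the generic speculative-sampling acceptance rule (accept a draft token with probability min(1, p/q), otherwise resample from the residual distribution) that frameworks like EAGLE build on. It does not implement EAGLE's feature-level drafting; `p` and `q` are toy categorical distributions standing in for the target and draft models' next-token probabilities.

```python
# Generic speculative-sampling accept/reject step; the accepted samples are
# distributed according to the target distribution p despite being proposed by q.
import numpy as np

def speculative_step(p, q, rng):
    """p: target next-token distribution, q: draft distribution (both sum to 1)."""
    x = rng.choice(len(q), p=q)                   # token proposed by the draft model
    if rng.random() < min(1.0, p[x] / q[x]):
        return x                                  # accept
    residual = np.clip(p - q, 0.0, None)
    residual /= residual.sum()
    return rng.choice(len(p), p=residual)         # reject: resample from residual

rng = np.random.default_rng(0)
p = np.array([0.6, 0.3, 0.1])
q = np.array([0.3, 0.5, 0.2])
samples = [speculative_step(p, q, rng) for _ in range(5000)]
print(np.bincount(samples, minlength=3) / len(samples))   # empirically close to p
```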
|
https://proceedings.mlr.press/v235/li24bu.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bu/li24bu.pdf
|
https://openreview.net/forum?id=YRWdiaupCr
|
Two-Stage Shadow Inclusion Estimation: An IV Approach for Causal Inference under Latent Confounding and Collider Bias
|
https://proceedings.mlr.press/v235/li24bu.html
|
Baohong Li, Anpeng Wu, Ruoxuan Xiong, Kun Kuang
|
https://proceedings.mlr.press/v235/li24bu.html
|
ICML 2024
|
Latent confounding bias and collider bias are two key challenges of causal inference in observational studies. Latent confounding bias occurs when failing to control the unmeasured covariates that are common causes of treatments and outcomes, which can be addressed by using the Instrumental Variable (IV) approach. Collider bias comes from non-random sample selection caused by both treatments and outcomes, which can be addressed by using a different type of instruments, i.e., shadow variables. However, in most scenarios, these two biases simultaneously exist in observational data, and the previous methods focusing on either one are inadequate. To the best of our knowledge, no approach has been developed for causal inference when both biases exist. In this paper, we propose a novel IV approach, Two-Stage Shadow Inclusion (2SSI), which can simultaneously address latent confounding bias and collider bias by utilizing the residual of the treatment as a shadow variable. Extensive experimental results on benchmark synthetic datasets and a real-world dataset show that 2SSI achieves noticeable performance improvement when both biases exist compared to existing methods.
|
https://proceedings.mlr.press/v235/li24bv.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bv/li24bv.pdf
|
https://openreview.net/forum?id=VoMPNYTZud
|
Towards Realistic Model Selection for Semi-supervised Learning
|
https://proceedings.mlr.press/v235/li24bv.html
|
Muyang Li, Xiaobo Xia, Runze Wu, Fengming Huang, Jun Yu, Bo Han, Tongliang Liu
|
https://proceedings.mlr.press/v235/li24bv.html
|
ICML 2024
|
Semi-supervised Learning (SSL) has shown remarkable success in applications with limited supervision. However, due to the scarcity of labels in the training process, SSL algorithms are known to be impaired by the lack of proper model selection, as splitting a validation set will further reduce the limited labeled data, and the size of the validation set could be too small to provide a reliable indication to the generalization error. Therefore, we seek alternatives that do not rely on validation data to probe the generalization performance of SSL models. Specifically, we find that the distinct margin distribution in SSL can be effectively utilized in conjunction with the model’s spectral complexity, to provide a non-vacuous indication of the generalization error. Built upon this, we propose a novel model selection method, specifically tailored for SSL, known as Spectral-normalized Labeled-margin Minimization (SLAM). We prove that the model selected by SLAM has upper-bounded differences w.r.t. the best model within the search space. In addition, comprehensive experiments showcase that SLAM can achieve significant improvements compared to its counterparts, verifying its efficacy from both theoretical and empirical standpoints.
|
https://proceedings.mlr.press/v235/li24bw.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bw/li24bw.pdf
|
https://openreview.net/forum?id=vye4OgLaTy
|
FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction
|
https://proceedings.mlr.press/v235/li24bw.html
|
Zhonghang Li, Lianghao Xia, Yong Xu, Chao Huang
|
https://proceedings.mlr.press/v235/li24bw.html
|
ICML 2024
|
The objective of traffic prediction is to accurately forecast and analyze the dynamics of transportation patterns, considering both space and time. However, the presence of distribution shift poses a significant challenge in this field, as existing models struggle to generalize well when faced with test data that significantly differs from the training distribution. To tackle this issue, this paper introduces FlashST, a simple and universal spatio-temporal prompt-tuning framework that adapts pre-trained models to the specific characteristics of diverse downstream datasets, improving generalization in diverse traffic prediction scenarios. Specifically, the FlashST framework employs a lightweight spatio-temporal prompt network for in-context learning, capturing spatio-temporal invariant knowledge and facilitating effective adaptation to diverse scenarios. Additionally, we incorporate a distribution mapping mechanism to align the data distributions of pre-training and downstream data, facilitating effective knowledge transfer in spatio-temporal forecasting. Empirical evaluations demonstrate the effectiveness of our FlashST across different spatio-temporal prediction tasks using diverse urban datasets. Code is available at https://github.com/HKUDS/FlashST.
|
https://proceedings.mlr.press/v235/li24bx.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bx/li24bx.pdf
|
https://openreview.net/forum?id=4HCi7JGCZk
|
Size-invariance Matters: Rethinking Metrics and Losses for Imbalanced Multi-object Salient Object Detection
|
https://proceedings.mlr.press/v235/li24bx.html
|
Feiran Li, Qianqian Xu, Shilong Bao, Zhiyong Yang, Runmin Cong, Xiaochun Cao, Qingming Huang
|
https://proceedings.mlr.press/v235/li24bx.html
|
ICML 2024
|
This paper explores the size-invariance of evaluation metrics in Salient Object Detection (SOD), especially when multiple targets of diverse sizes co-exist in the same image. We observe that current metrics are size-sensitive, where larger objects are focused, and smaller ones tend to be ignored. We argue that the evaluation should be size-invariant because bias based on size is unjustified without additional semantic information. In pursuit of this, we propose a generic approach that evaluates each salient object separately and then combines the results, effectively alleviating the imbalance. We further develop an optimization framework tailored to this goal, achieving considerable improvements in detecting objects of different sizes. Theoretically, we provide evidence supporting the validity of our new metrics and present the generalization analysis of SOD. Extensive experiments demonstrate the effectiveness of our method.
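A minimal sketch of the "evaluate each salient object separately, then average" idea described above, using per-object IoU as a stand-in metric so that small objects count as much as large ones. The paper's actual size-invariant metrics and optimization framework are defined differently; the function name and thresholds here are illustrative assumptions.

```python
# Per-object evaluation: score each ground-truth object, then average.
import numpy as np
from scipy import ndimage

def size_invariant_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: binary masks. Average IoU computed per ground-truth object.
    A fuller implementation would restrict pred to a window around each object."""
    labels, num_objects = ndimage.label(gt)        # connected components of GT
    if num_objects == 0:
        return float(pred.sum() == 0)
    scores = []
    for k in range(1, num_objects + 1):
        obj = labels == k
        inter = np.logical_and(pred, obj).sum()
        union = np.logical_or(pred, obj).sum()
        scores.append(inter / union if union else 1.0)
    return float(np.mean(scores))                  # each object counts equally

gt = np.zeros((20, 20), bool); gt[1:3, 1:3] = True; gt[8:18, 8:18] = True
pred = np.zeros((20, 20), bool); pred[8:18, 8:18] = True   # misses the small object
print(size_invariant_iou(pred, gt))                         # 0.5, not ~0.96
```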
|
https://proceedings.mlr.press/v235/li24by.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24by/li24by.pdf
|
https://openreview.net/forum?id=OF7e0w1uon
|
Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent
|
https://proceedings.mlr.press/v235/li24by.html
|
Yingru Li, Jiawei Xu, Lei Han, Zhi-Quan Luo
|
https://proceedings.mlr.press/v235/li24by.html
|
ICML 2024
|
We propose HyperAgent, a reinforcement learning (RL) algorithm based on the hypermodel framework for exploration in RL. HyperAgent allows for the efficient incremental approximation of posteriors associated with an optimal action-value function ($Q^\star$) without the need for conjugacy and follows the greedy policies w.r.t. these approximate posterior samples. We demonstrate that HyperAgent offers robust performance in large-scale deep RL benchmarks. It can solve Deep Sea hard exploration problems with episodes that optimally scale with problem size and exhibits significant efficiency gains in the Atari suite. Implementing HyperAgent requires minimal code addition to well-established deep RL frameworks like DQN. We theoretically prove that, under tabular assumptions, HyperAgent achieves logarithmic per-step computational complexity while attaining sublinear regret, matching the best known randomized tabular RL algorithm.
|
https://proceedings.mlr.press/v235/li24bz.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bz/li24bz.pdf
|
https://openreview.net/forum?id=eMQyb1tvvc
|
Towards efficient deep spiking neural networks construction with spiking activity based pruning
|
https://proceedings.mlr.press/v235/li24bz.html
|
Yaxin Li, Qi Xu, Jiangrong Shen, Hongming Xu, Long Chen, Gang Pan
|
https://proceedings.mlr.press/v235/li24bz.html
|
ICML 2024
|
The emergence of deep, large-scale spiking neural networks (SNNs) that exhibit high performance across diverse complex datasets has created a need for compressing network models: such networks contain a significant number of redundant structural units, and compression allows their low-power-consumption and biological-interpretability advantages to be leveraged more effectively. Currently, most model compression techniques for SNNs are based on unstructured pruning of individual connections, which requires specific hardware support. Hence, we propose a structured pruning approach based on the activity levels of convolutional kernels, named the Spiking Channel Activity-based (SCA) network pruning framework. Inspired by synaptic plasticity mechanisms, our method dynamically adjusts the network’s structure by pruning and regenerating convolutional kernels during training, enhancing the model’s adaptation to the current target task. While maintaining model performance, this approach refines the network architecture, ultimately reducing computational load and accelerating the inference process. This indicates that structured dynamic sparse learning methods can better facilitate the application of deep SNNs in low-power and high-efficiency scenarios.
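A minimal sketch of activity-based structured channel pruning: rank convolutional output channels by their average spiking activity and zero out the least active kernels. It mirrors the idea described above but not the exact SCA pruning-and-regeneration schedule; the tensor shapes, ratio, and recorded-spike format are illustrative assumptions.

```python
# Structured pruning of whole convolutional kernels by spiking activity.
import torch

def prune_channels_by_activity(conv: torch.nn.Conv2d,
                               spikes: torch.Tensor,
                               prune_ratio: float = 0.25) -> torch.Tensor:
    """spikes: recorded binary outputs of this layer, shape (T, B, C, H, W)."""
    activity = spikes.float().mean(dim=(0, 1, 3, 4))        # firing rate per channel
    n_prune = int(prune_ratio * activity.numel())
    idx = torch.argsort(activity)[:n_prune]                 # least active channels
    mask = torch.ones_like(activity)
    mask[idx] = 0.0
    with torch.no_grad():
        conv.weight.mul_(mask.view(-1, 1, 1, 1))            # zero whole kernels
        if conv.bias is not None:
            conv.bias.mul_(mask)
    return mask

conv = torch.nn.Conv2d(16, 32, kernel_size=3, padding=1)
fake_spikes = (torch.rand(4, 2, 32, 8, 8) > 0.8).float()    # T=4 timesteps, B=2
mask = prune_channels_by_activity(conv, fake_spikes, prune_ratio=0.25)
print("channels kept:", int(mask.sum().item()), "/", mask.numel())
```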
|
https://proceedings.mlr.press/v235/li24ca.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ca/li24ca.pdf
|
https://openreview.net/forum?id=x2zxPwCkAZ
|
FedBAT: Communication-Efficient Federated Learning via Learnable Binarization
|
https://proceedings.mlr.press/v235/li24ca.html
|
Shiwei Li, Wenchao Xu, Haozhao Wang, Xing Tang, Yining Qi, Shijie Xu, Weihong Luo, Yuhua Li, Xiuqiang He, Ruixuan Li
|
https://proceedings.mlr.press/v235/li24ca.html
|
ICML 2024
|
Federated learning is a promising distributed machine learning paradigm that can effectively exploit large-scale data without exposing users’ privacy. However, it may incur significant communication overhead, thereby potentially impairing the training efficiency. To address this challenge, numerous studies suggest binarizing the model updates. Nonetheless, traditional methods usually binarize model updates in a post-training manner, resulting in significant approximation errors and consequent degradation in model accuracy. To this end, we propose Federated Binarization-Aware Training (FedBAT), a novel framework that directly learns binary model updates during the local training process, thus inherently reducing the approximation errors. FedBAT incorporates an innovative binarization operator, along with meticulously designed derivatives to facilitate efficient learning. In addition, we establish theoretical guarantees regarding the convergence of FedBAT. Extensive experiments are conducted on four popular datasets. The results show that FedBAT significantly accelerates the convergence and exceeds the accuracy of baselines by up to 9%, even surpassing that of FedAvg in some cases.
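As a rough illustration of binarization-aware training (the actual FedBAT operator and its carefully designed derivatives differ in detail), one can binarize each local update with a sign-and-scale operator and pass gradients through with a straight-through estimator. The function names below are hypothetical.

```python
# Minimal sketch: binarize a model update to sign(u) * alpha during local training,
# with a straight-through gradient so the binarization is learned rather than post-hoc.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, update):
        alpha = update.abs().mean()            # one scale per tensor
        return torch.sign(update) * alpha      # binary (+/- alpha) update

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                     # straight-through estimator

def binarized_update(local_param, global_param):
    # Only the sign pattern and one scalar per tensor need to be communicated.
    return BinarizeSTE.apply(local_param - global_param)
```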
|
https://proceedings.mlr.press/v235/li24cb.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cb/li24cb.pdf
|
https://openreview.net/forum?id=CpI37NA7MO
|
Beyond Point Prediction: Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process
|
https://proceedings.mlr.press/v235/li24cb.html
|
Zichong Li, Qunzhi Xu, Zhenghao Xu, Yajun Mei, Tuo Zhao, Hongyuan Zha
|
https://proceedings.mlr.press/v235/li24cb.html
|
ICML 2024
|
Spatio-temporal point processes (STPPs) are potent mathematical tools for modeling and predicting events with both temporal and spatial features. Despite their versatility, most existing methods for learning STPPs either assume a restricted form of the spatio-temporal distribution, or suffer from inaccurate approximations of the intractable integral in the likelihood training objective. These issues typically arise from the normalization term of the probability density function. Moreover, existing works only provide point prediction for events without quantifying their uncertainty, such as confidence intervals for the event’s arrival time and confidence regions for the event’s location, which is crucial given the considerable randomness of the data. To tackle these challenges, we introduce SMASH: a Score MAtching-based pSeudolikeliHood estimator for learning marked STPPs. Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score-matching and predicts confidence intervals/regions for event time and location by generating samples through a score-based sampling algorithm. The superior performance of our proposed framework is demonstrated through extensive experiments on both point and confidence interval/region prediction of events.
|
https://proceedings.mlr.press/v235/li24cc.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cc/li24cc.pdf
|
https://openreview.net/forum?id=iqAyWVLUEO
|
Statistical Properties of Robust Satisficing
|
https://proceedings.mlr.press/v235/li24cc.html
|
Zhiyi Li, Yunbei Xu, Ruohan Zhan
|
https://proceedings.mlr.press/v235/li24cc.html
|
ICML 2024
|
The Robust Satisficing (RS) model is an emerging approach to robust optimization, offering streamlined procedures and robust generalization across various applications. However, the statistical theory of RS remains unexplored in the literature. This paper fills in the gap by comprehensively analyzing the theoretical properties of the RS model. Notably, the RS structure offers a more straightforward path to deriving statistical guarantees compared to the seminal Distributionally Robust Optimization (DRO), resulting in a richer set of results. In particular, we establish two-sided confidence intervals for the optimal loss without the need to solve a minimax optimization problem explicitly. We further provide finite-sample generalization error bounds for the RS optimizer. Importantly, our results extend to scenarios involving distribution shifts, where discrepancies exist between the sampling and target distributions. Our numerical experiments show that the RS model consistently outperforms the baseline empirical risk minimization in small-sample regimes and under distribution shifts. Furthermore, compared to the DRO model, the RS model exhibits lower sensitivity to hyperparameter tuning, highlighting its practicability for robustness considerations.
|
https://proceedings.mlr.press/v235/li24cd.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cd/li24cd.pdf
|
https://openreview.net/forum?id=Stn8hXkpe6
|
ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models
|
https://proceedings.mlr.press/v235/li24cd.html
|
Ziniu Li, Tian Xu, Yushun Zhang, Zhihang Lin, Yang Yu, Ruoyu Sun, Zhi-Quan Luo
|
https://proceedings.mlr.press/v235/li24cd.html
|
ICML 2024
|
Reinforcement Learning from Human Feedback (RLHF) is key to aligning Large Language Models (LLMs), and it is typically paired with the Proximal Policy Optimization (PPO) algorithm. While PPO is a powerful method designed for general reinforcement learning tasks, it is overly sophisticated for LLMs, leading to laborious hyper-parameter tuning and significant computation burdens. To make RLHF efficient, we present ReMax, which leverages three properties of RLHF: fast simulation, deterministic transitions, and trajectory-level rewards. These properties are not exploited in PPO, making it less suitable for RLHF. Building on the renowned REINFORCE algorithm, ReMax does not require training an additional value model as in PPO and is further enhanced with a new variance reduction technique. ReMax offers several benefits over PPO: it is simpler to implement, eliminates more than four hyper-parameters in PPO, reduces GPU memory usage, and shortens training time. ReMax saves about 46% of GPU memory relative to PPO when training a 7B model and enables training on A800-80GB GPUs without the memory-saving offloading technique needed by PPO. Applying ReMax to a Mistral-7B model resulted in a 94.78% win rate on the AlpacaEval leaderboard and a 7.739 score on MT-bench, setting a new SOTA for open-source 7B models. These results show the effectiveness of ReMax while addressing the limitations of PPO in LLMs.
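Based on the abstract's description (REINFORCE with a new variance reduction technique and no value model), a plausible minimal sketch of such a gradient estimate uses the reward of a baseline response (e.g., a greedy decoding of the same prompt) in place of a learned value function; treat the exact form below as an assumption rather than the paper's definition.

```python
# Sketch of a REINFORCE-style RLHF loss with a per-prompt baseline and no value network.
import torch

def remax_style_loss(logprob_sampled, reward_sampled, reward_baseline):
    # logprob_sampled: summed token log-probs of the sampled response (requires grad).
    # reward_sampled / reward_baseline: reward-model scores for the sampled and
    # baseline (e.g., greedy-decoded) responses for the same prompt.
    advantage = (reward_sampled - reward_baseline).detach()
    return -(advantage * logprob_sampled).mean()
```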
|
https://proceedings.mlr.press/v235/li24ce.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ce/li24ce.pdf
|
https://openreview.net/forum?id=2cXzNDe614
|
PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming
|
https://proceedings.mlr.press/v235/li24ce.html
|
Bingheng Li, Linxin Yang, Yupeng Chen, Senmiao Wang, Haitao Mao, Qian Chen, Yao Ma, Akang Wang, Tian Ding, Jiliang Tang, Ruoyu Sun
|
https://proceedings.mlr.press/v235/li24ce.html
|
ICML 2024
|
Solving large-scale linear programming (LP) problems is an important task in various areas such as communication networks, power systems, finance and logistics. Recently, two distinct approaches have emerged to expedite LP solving: (i) First-order methods (FOMs); (ii) Learning to optimize (L2O). In this work, we propose an FOM-unrolled neural network (NN) called PDHG-Net together with a two-stage L2O method for solving large-scale LP problems. The new architecture PDHG-Net is designed by unrolling the recently emerged PDHG method into a neural network, combined with channel-expansion techniques borrowed from graph neural networks. We prove that the proposed PDHG-Net can recover the PDHG algorithm and thus can approximate optimal solutions of LP instances with a polynomial number of neurons. We propose a two-stage inference approach: first use PDHG-Net to generate an approximate solution, and then apply the PDHG algorithm to further improve the solution. Experiments show that our approach can significantly accelerate LP solving, achieving up to a 3$\times$ speedup compared to FOMs for large-scale LP problems.
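For reference, the classical PDHG iteration that gets unrolled looks roughly as follows, written here for an equality-form LP (min c^T x s.t. Ax = b, x >= 0); the step sizes and problem form are illustrative choices, not the paper's exact setup.

```python
# Plain PDHG for an equality-form LP: alternate a projected primal step with an
# extrapolated dual step. Unrolled L2O approaches turn iterations like these into layers.
import numpy as np

def pdhg_lp(c, A, b, tau=1e-2, sigma=1e-2, iters=1000):
    # Convergence requires roughly tau * sigma * ||A||^2 <= 1.
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)   # primal step + projection onto x >= 0
        y = y + sigma * (b - A @ (2 * x_new - x))          # dual ascent with extrapolation
        x = x_new
    return x, y
```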
|
https://proceedings.mlr.press/v235/li24cf.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cf/li24cf.pdf
|
https://openreview.net/forum?id=FM61SQzF3N
|
IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation
|
https://proceedings.mlr.press/v235/li24cf.html
|
Kai Li, Runxuan Yang, Fuchun Sun, Xiaolin Hu
|
https://proceedings.mlr.press/v235/li24cf.html
|
ICML 2024
|
Recent research has made significant progress in designing fusion modules for audio-visual speech separation. However, they predominantly focus on multi-modal fusion at a single temporal scale of auditory and visual features without employing selective attention mechanisms, which is in sharp contrast with the brain. To address this, we propose a novel model called the intra- and inter-attention network (IIANet), which leverages the attention mechanism for efficient audio-visual feature fusion. IIANet consists of two types of attention blocks: intra-attention (IntraA) and inter-attention (InterA) blocks, where the InterA blocks are distributed at the top, middle and bottom of IIANet. Heavily inspired by the way the human brain selectively focuses on relevant content at various temporal scales, these blocks maintain the ability to learn modality-specific features and enable the extraction of different semantics from audio-visual features. Comprehensive experiments on three standard audio-visual separation benchmarks (LRS2, LRS3, and VoxCeleb2) demonstrate the effectiveness of IIANet, outperforming previous state-of-the-art methods while maintaining comparable inference time. In particular, the fast version of IIANet (IIANet-fast) has only 7% of CTCNet’s MACs and is 40% faster than CTCNet on CPUs while achieving better separation quality, showing the great potential of attention mechanisms for efficient and effective multimodal fusion.
|
https://proceedings.mlr.press/v235/li24cg.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cg/li24cg.pdf
|
https://openreview.net/forum?id=0e8SEDSpNT
|
KernelWarehouse: Rethinking the Design of Dynamic Convolution
|
https://proceedings.mlr.press/v235/li24cg.html
|
Chao Li, Anbang Yao
|
https://proceedings.mlr.press/v235/li24cg.html
|
ICML 2024
|
Dynamic convolution learns a linear mixture of $n$ static kernels weighted with their input-dependent attentions, demonstrating superior performance to normal convolution. However, it increases the number of convolutional parameters by $n$ times, and thus is not parameter efficient. As a result, no prior work has explored the setting $n > 100$ (an order of magnitude larger than the typical setting $n < 10$) to push the performance boundary of dynamic convolution while retaining parameter efficiency. To fill this gap, in this paper, we propose KernelWarehouse, a more general form of dynamic convolution, which redefines the basic concepts of “kernels”, “assembling kernels” and “attention function” through the lens of exploiting convolutional parameter dependencies within the same layer and across neighboring layers of a ConvNet. We validate the effectiveness of KernelWarehouse on the ImageNet and MS-COCO datasets using various ConvNet architectures. Intriguingly, KernelWarehouse is also applicable to Vision Transformers, and it can even reduce the model size of a backbone while improving the model accuracy. For instance, KernelWarehouse ($n = 4$) achieves 5.61%|3.90%|4.38% absolute top-1 accuracy gains on the ResNet18|MobileNetV2|DeiT-Tiny backbones, and KernelWarehouse ($n = 1/4$) with a 65.10% model size reduction still achieves a 2.29% gain on the ResNet18 backbone. The code and models are available at https://github.com/OSVAI/KernelWarehouse.
|
https://proceedings.mlr.press/v235/li24ch.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ch/li24ch.pdf
|
https://openreview.net/forum?id=WWo9G5zyh0
|
GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model
|
https://proceedings.mlr.press/v235/li24ch.html
|
Ling Li, Yu Ye, Bingchuan Jiang, Wei Zeng
|
https://proceedings.mlr.press/v235/li24ch.html
|
ICML 2024
|
This work tackles the problem of geo-localization with a new paradigm using a large vision-language model (LVLM) augmented with human inference knowledge. A primary challenge here is the scarcity of data for training the LVLM - existing street-view datasets often contain numerous low-quality images lacking visual clues, and lack any reasoning inference. To address the data-quality issue, we devise a CLIP-based network to quantify the degree of street-view images being locatable, leading to the creation of a new dataset comprising highly locatable street views. To enhance reasoning inference, we integrate external knowledge obtained from real geo-localization games, tapping into valuable human inference capabilities. The data are utilized to train GeoReasoner, which undergoes fine-tuning through dedicated reasoning and location-tuning stages. Qualitative and quantitative evaluations illustrate that GeoReasoner outperforms counterpart LVLMs by more than 25% at country-level and 38% at city-level geo-localization tasks, and surpasses StreetCLIP performance while requiring fewer training resources. The data and code are available at https://github.com/lingli1996/GeoReasoner.
|
https://proceedings.mlr.press/v235/li24ci.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ci/li24ci.pdf
|
https://openreview.net/forum?id=n2kq2EOHFE
|
Learning the Uncertainty Sets of Linear Control Systems via Set Membership: A Non-asymptotic Analysis
|
https://proceedings.mlr.press/v235/li24ci.html
|
Yingying Li, Jing Yu, Lauren Conger, Taylan Kargin, Adam Wierman
|
https://proceedings.mlr.press/v235/li24ci.html
|
ICML 2024
|
This paper studies uncertainty set estimation for unknown linear systems. Uncertainty sets are crucial for the quality of robust control since they directly influence the conservativeness of the control design. Departing from the confidence region analysis of least squares estimation, this paper focuses on set membership estimation (SME). Though good numerical performances have attracted applications of SME in the control literature, the non-asymptotic convergence rate of SME for linear systems remains an open question. This paper provides the first convergence rate bounds for SME and discusses variations of SME under relaxed assumptions. We also provide numerical results demonstrating SME’s practical promise.
|
https://proceedings.mlr.press/v235/li24cj.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cj/li24cj.pdf
|
https://openreview.net/forum?id=jklD0TV5Hw
|
Seesaw: Compensating for Nonlinear Reduction with Linear Computations for Private Inference
|
https://proceedings.mlr.press/v235/li24cj.html
|
Fabing Li, Yuanhao Zhai, Shuangyu Cai, Mingyu Gao
|
https://proceedings.mlr.press/v235/li24cj.html
|
ICML 2024
|
With increasingly serious data privacy concerns and strict regulations, privacy-preserving machine learning (PPML) has emerged to securely execute machine learning tasks without violating privacy. Unfortunately, the computational cost to securely execute nonlinear computations in PPML remains significant, calling for new model architecture designs with fewer nonlinear operations. We propose Seesaw, a novel neural architecture search method tailored for PPML. Seesaw exploits a previously unexplored opportunity to leverage more linear computations and nonlinear result reuse, in order to compensate for the accuracy loss due to nonlinear reduction. It incorporates specifically designed pruning and search strategies, not only to efficiently handle the much larger design space of both linear and nonlinear operators, but also to achieve a better balance between the model accuracy and the online/offline execution latencies. Compared to the state-of-the-art design for image classification on ImageNet, Seesaw achieves 1.68$\times$ lower online latency and 1.55$\times$ lower total online + offline latency at 71% iso-accuracy, or 3.65% higher accuracy at iso-latency of 190 seconds, while using much simpler and faster search and training methods.
|
https://proceedings.mlr.press/v235/li24ck.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24ck/li24ck.pdf
|
https://openreview.net/forum?id=WCwxFM7n5S
|
Agnostic Interactive Imitation Learning: New Theory and Practical Algorithms
|
https://proceedings.mlr.press/v235/li24ck.html
|
Yichen Li, Chicheng Zhang
|
https://proceedings.mlr.press/v235/li24ck.html
|
ICML 2024
|
We study interactive imitation learning, where a learner interactively queries a demonstrating expert for action annotations, aiming to learn a policy that has performance competitive with the expert, using as few annotations as possible. We focus on the general agnostic setting where the expert demonstration policy may not be contained in the policy class used by the learner. We propose a new oracle-efficient algorithm MFTPL-P (abbreviation for Mixed Follow the Perturbed Leader with Poisson perturbations) with provable finite-sample guarantees, under the assumption that the learner is given access to samples from some “explorative” distribution over states. Our guarantees hold for any policy class, which is considerably broader than prior state of the art. We further propose Bootstrap-DAgger, a more practical variant that does not require additional sample access.
|
https://proceedings.mlr.press/v235/li24cl.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cl/li24cl.pdf
|
https://openreview.net/forum?id=pgI9inG2Ny
|
Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error
|
https://proceedings.mlr.press/v235/li24cl.html
|
Haoran Li, Zicheng Zhang, Wang Luo, Congying Han, Yudong Hu, Tiande Guo, Shichen Liao
|
https://proceedings.mlr.press/v235/li24cl.html
|
ICML 2024
|
Establishing robust policies is essential to counter attacks or disturbances affecting deep reinforcement learning (DRL) agents. Recent studies explore state-adversarial robustness and suggest the potential lack of an optimal robust policy (ORP), posing challenges in setting strict robustness constraints. This work further investigates ORP: At first, we introduce a consistency assumption of policy (CAP) stating that optimal actions in the Markov decision process remain consistent with minor perturbations, supported by empirical and theoretical evidence. Building upon CAP, we crucially prove the existence of a deterministic and stationary ORP that aligns with the Bellman optimal policy. Furthermore, we illustrate the necessity of $L^{\infty}$-norm when minimizing Bellman error to attain ORP. This finding clarifies the vulnerability of prior DRL algorithms that target the Bellman optimal policy with $L^{1}$-norm and motivates us to train a Consistent Adversarial Robust Deep Q-Network (CAR-DQN) by minimizing a surrogate of Bellman Infinity-error. The top-tier performance of CAR-DQN across various benchmarks validates its practical effectiveness and reinforces the soundness of our theoretical analysis.
|
https://proceedings.mlr.press/v235/li24cm.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cm/li24cm.pdf
|
https://openreview.net/forum?id=7rTbqkKvA6
|
Predicting and Interpreting Energy Barriers of Metallic Glasses with Graph Neural Networks
|
https://proceedings.mlr.press/v235/li24cm.html
|
Haoyu Li, Shichang Zhang, Longwen Tang, Mathieu Bauchy, Yizhou Sun
|
https://proceedings.mlr.press/v235/li24cm.html
|
ICML 2024
|
Metallic Glasses (MGs) are widely used materials that are stronger than steel while being shapeable as plastic. While understanding the structure-property relationship of MGs remains a challenge in materials science, studying their energy barriers (EBs) as an intermediary step shows promise. In this work, we utilize Graph Neural Networks (GNNs) to model MGs and study EBs. We contribute a new dataset for EB prediction and a novel Symmetrized GNN (SymGNN) model that is E(3)-invariant in expectation. SymGNN handles invariance by aggregating over orthogonal transformations of the graph structure. When applied to EB prediction, SymGNN is more accurate than molecular dynamics (MD) local-sampling methods and other machine-learning models. Compared to precise MD simulations, SymGNN reduces the inference time on new MGs from roughly 41 days to less than one second. We apply explanation algorithms to reveal the relationship between structures and EBs. The structures that we identify through explanations match the medium-range order (MRO) hypothesis and possess unique topological properties. Our work enables effective prediction and interpretation of MG EBs, bolstering material science research.
|
https://proceedings.mlr.press/v235/li24cn.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cn/li24cn.pdf
|
https://openreview.net/forum?id=E4qjDAdVte
|
From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems
|
https://proceedings.mlr.press/v235/li24cn.html
|
Xin Li, Jingdong Zhang, Qunxi Zhu, Chengli Zhao, Xue Zhang, Xiaojun Duan, Wei Lin
|
https://proceedings.mlr.press/v235/li24cn.html
|
ICML 2024
|
Modeling complex systems using standard neural ordinary differential equations (NODEs) often faces some essential challenges, including high computational costs and susceptibility to local optima. To address these challenges, we propose a simulation-free framework, called Fourier NODEs (FNODEs), that effectively trains NODEs by directly matching the target vector field based on Fourier analysis. Specifically, we employ the Fourier analysis to estimate temporal and potential high-order spatial gradients from noisy observational data. We then incorporate the estimated spatial gradients as additional inputs to a neural network. Furthermore, we utilize the estimated temporal gradient as the optimization objective for the output of the neural network. Later, the trained neural network generates more data points through an ODE solver without participating in the computational graph, facilitating more accurate estimations of gradients based on Fourier analysis. These two steps form a positive feedback loop, enabling accurate dynamics modeling in our framework. Consequently, our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness. Finally, we demonstrate the superior performance of our framework using a number of representative complex systems.
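The gradient-matching ingredient can be illustrated with standard spectral differentiation: estimate temporal derivatives of a uniformly sampled (implicitly periodic) trajectory in Fourier space, then fit a network by regressing its output onto those estimates. This is only a sketch of that one ingredient, not the full FNODE framework.

```python
# Spectral estimate of dx/dt for a uniformly sampled (assumed periodic) signal;
# a vector field f_theta(x) can then be trained by matching f_theta(x_t) to these estimates.
import numpy as np

def fourier_time_derivative(x, dt):
    # x: array of shape (T,); returns an estimate of dx/dt at the sample points.
    T = len(x)
    freqs = np.fft.fftfreq(T, d=dt)                 # frequencies in cycles per unit time
    return np.real(np.fft.ifft(2j * np.pi * freqs * np.fft.fft(x)))
```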
|
https://proceedings.mlr.press/v235/li24co.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24co/li24co.pdf
|
https://openreview.net/forum?id=l9ga3iQuHt
|
Feel-Good Thompson Sampling for Contextual Dueling Bandits
|
https://proceedings.mlr.press/v235/li24co.html
|
Xuheng Li, Heyang Zhao, Quanquan Gu
|
https://proceedings.mlr.press/v235/li24co.html
|
ICML 2024
|
The contextual dueling bandit problem, in which a learner compares two options based on context and receives feedback indicating which was preferred, extends the classic dueling bandit setting by incorporating contextual information for decision-making and preference learning. Several algorithms based on the upper confidence bound (UCB) have been proposed for linear contextual dueling bandits. However, no algorithm based on posterior sampling has been developed in this setting, despite the empirical success observed in traditional contextual bandits. In this paper, we propose a Thompson sampling algorithm, named FGTS.CDB, for linear contextual dueling bandits. At the core of our algorithm is a new Feel-Good exploration term specifically tailored for dueling bandits. This term leverages the independence of the two selected arms, thereby avoiding a cross term in the analysis. We show that our algorithm achieves nearly minimax-optimal regret, i.e., $\tilde{\mathcal{O}}(d\sqrt T)$, where $d$ is the model dimension and $T$ is the time horizon. Finally, we evaluate our algorithm on synthetic data and observe that FGTS.CDB outperforms existing algorithms by a large margin.
|
https://proceedings.mlr.press/v235/li24cp.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cp/li24cp.pdf
|
https://openreview.net/forum?id=75Hes6Zse4
|
EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search
|
https://proceedings.mlr.press/v235/li24cp.html
|
Pengyi Li, Yan Zheng, Hongyao Tang, Xian Fu, Jianye Hao
|
https://proceedings.mlr.press/v235/li24cp.html
|
ICML 2024
|
Both Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) have demonstrated powerful capabilities in policy search with different principles. A promising direction is to combine the respective strengths of both for efficient policy optimization. To this end, many works have proposed various mechanisms to integrate EAs and RL. However, it is still unclear which of these mechanisms are complementary and can be fully combined. In this paper, we revisit different mechanisms from five perspectives: 1) Interaction Mode, 2) Individual Architecture, 3) EAs and operators, 4) Impact of EA on RL, and 5) Fitness Surrogate and Usage. We evaluate the effectiveness of each mechanism and experimentally analyze the reasons for the more effective mechanisms. Using the most effective mechanisms, we develop EvoRainbow and EvoRainbow-Exp, which outperform strong baselines and provide state-of-the-art performance across various tasks with distinct characteristics. To promote community development, we release the code on https://github.com/yeshenpy/EvoRainbow.
|
https://proceedings.mlr.press/v235/li24cq.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cq/li24cq.pdf
|
https://openreview.net/forum?id=Ln3moCobjO
|
Relaxing the Accurate Imputation Assumption in Doubly Robust Learning for Debiased Collaborative Filtering
|
https://proceedings.mlr.press/v235/li24cq.html
|
Haoxuan Li, Chunyuan Zheng, Shuyi Wang, Kunhan Wu, Eric Wang, Peng Wu, Zhi Geng, Xu Chen, Xiao-Hua Zhou
|
https://proceedings.mlr.press/v235/li24cq.html
|
ICML 2024
|
Recommender systems aim to recommend items or information that may interest users based on their behaviors and preferences. However, there may be sampling selection bias in the data collection process, i.e., the collected data is not representative of the target population. Many debiasing methods have been developed based on pseudo-labelings. Nevertheless, the validity of these methods relies heavily on accurate pseudo-labelings (i.e., the imputed labels), which is difficult to satisfy in practice. In this paper, we theoretically propose several novel doubly robust estimators that are unbiased when either (a) the pseudo-labelings deviate from the true labels with an arbitrary user-specific inductive bias, item-specific inductive bias, or a combination of both, or (b) the learned propensities are accurate. We further propose a propensity reconstruction learning approach that adaptively updates the constraint weights using an attention mechanism and effectively controls the variance. Extensive experiments show that our approach outperforms the state-of-the-art on one semi-synthetic and three real-world datasets.
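For context, the classical doubly robust risk estimator that this line of work builds on combines imputed errors with inverse-propensity-weighted corrections on observed entries; the paper's contribution is new estimators that remain unbiased under biased imputations. The sketch below shows only the classical form.

```python
# Classical doubly robust risk for debiased collaborative filtering (not the paper's
# new estimators): imputed error everywhere, IPS-corrected on observed entries.
import numpy as np

def doubly_robust_risk(err_observed, err_imputed, observed_mask, propensity):
    # All arrays have shape (num_users, num_items); err_observed is only
    # meaningful where observed_mask == 1.
    correction = observed_mask * (err_observed - err_imputed) / np.clip(propensity, 1e-3, 1.0)
    return float(np.mean(err_imputed + correction))
```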
|
https://proceedings.mlr.press/v235/li24cr.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cr/li24cr.pdf
|
https://openreview.net/forum?id=1sesUtOIH5
|
DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning
|
https://proceedings.mlr.press/v235/li24cr.html
|
Jianxiong Li, Jinliang Zheng, Yinan Zheng, Liyuan Mao, Xiao Hu, Sijie Cheng, Haoyi Niu, Jihao Liu, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Xianyuan Zhan
|
https://proceedings.mlr.press/v235/li24cr.html
|
ICML 2024
|
Multimodal pretraining is an effective strategy for the trinity of goals of representation learning in autonomous robots: $1)$ extracting both local and global task progressions; $2)$ enforcing temporal consistency of visual representation; $3)$ capturing trajectory-level language grounding. Most existing methods approach these via separate objectives, which often reach sub-optimal solutions. In this paper, we propose a universal unified objective that can simultaneously extract meaningful task progression information from image sequences and seamlessly align them with language instructions. We discover that via implicit preferences, where a visual trajectory inherently aligns better with its corresponding language instruction than mismatched pairs, the popular Bradley-Terry model can transform into representation learning through proper reward reparameterizations. The resulting framework, DecisionNCE, mirrors an InfoNCE-style objective but is distinctively tailored for decision-making tasks, providing an embodied representation learning framework that elegantly extracts both local and global task progression features, with temporal consistency enforced through implicit time contrastive learning, while ensuring trajectory-level instruction grounding via multimodal joint encoding. Evaluation on both simulated and real robots demonstrates that DecisionNCE effectively facilitates diverse downstream policy learning tasks, offering a versatile solution for unified representation and reward learning. Project Page: https://2toinf.github.io/DecisionNCE/
|
https://proceedings.mlr.press/v235/li24cs.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cs/li24cs.pdf
|
https://openreview.net/forum?id=6yQ5mIYxjj
|
Algorithmic Stability Unleashed: Generalization Bounds with Unbounded Losses
|
https://proceedings.mlr.press/v235/li24cs.html
|
Shaojie Li, Bowei Zhu, Yong Liu
|
https://proceedings.mlr.press/v235/li24cs.html
|
ICML 2024
|
One of the central problems of statistical learning theory is quantifying the generalization ability of learning algorithms within a probabilistic framework. Algorithmic stability is a powerful tool for deriving generalization bounds, however, it typically builds on a critical assumption that losses are bounded. In this paper, we relax this condition to unbounded loss functions with subweibull diameter. This gives new generalization bounds for algorithmic stability and also includes existing results of subgaussian and subexponential diameters as specific cases. Furthermore, we provide a refined stability analysis by developing generalization bounds which can be $\sqrt{n}$-times faster than the previous results, where $n$ is the sample size. Our main technical contribution is general concentration inequalities for subweibull random variables, which may be of independent interest.
|
https://proceedings.mlr.press/v235/lian24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lian24a/lian24a.pdf
|
https://openreview.net/forum?id=5ILo43JIzg
|
Kepler codebook
|
https://proceedings.mlr.press/v235/lian24a.html
|
Junrong Lian, Ziyue Dong, Pengxu Wei, Wei Ke, Chang Liu, Qixiang Ye, Xiangyang Ji, Liang Lin
|
https://proceedings.mlr.press/v235/lian24a.html
|
ICML 2024
|
A codebook designed for learning discrete distributions in latent space has demonstrated state-of-the-art results on generation tasks. This inspires us to explore which codebook distribution is better. Following the spirit of Kepler’s Conjecture, we cast codebook training as solving the sphere packing problem and derive a Kepler codebook with a compact and structured distribution for image representations. Furthermore, we implement Kepler codebook training by simply employing this derived distribution as regularization and using the codebook partition method. We conduct extensive experiments to evaluate our trained codebook for image reconstruction and generation on natural and human face datasets, respectively, achieving significant performance improvements. Moreover, our Kepler codebook demonstrates superior performance when evaluated across datasets and even when reconstructing images at different resolutions. Our trained models and source code will be publicly released.
|
https://proceedings.mlr.press/v235/lian24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lian24b/lian24b.pdf
|
https://openreview.net/forum?id=HGSIpeNNfM
|
Receptive Fields As Experts in Convolutional Neural Architectures
|
https://proceedings.mlr.press/v235/lian24b.html
|
Dongze Lian, Weihao Yu, Xinchao Wang
|
https://proceedings.mlr.press/v235/lian24b.html
|
ICML 2024
|
The size of spatial receptive fields, from the early 3$\times$3 convolutions in VGGNet to the recent 7$\times$7 convolutions in ConvNeXt, has always played a critical role in architecture design. In this paper, we propose a Mixture of Receptive Fields (MoRF) instead of using a single receptive field. MoRF contains the combinations of multiple receptive fields with different sizes, e.g., convolutions with different kernel sizes, which can be regarded as experts. Such an approach serves two functions: one is to select the appropriate receptive field according to the input, and the other is to expand the network capacity. Furthermore, we also introduce two types of routing mechanisms, hard routing and soft routing to automatically select the appropriate receptive field experts. In the inference stage, the selected receptive field experts are merged via re-parameterization to maintain a similar inference speed compared to the single receptive field. To demonstrate the effectiveness of MoRF, we integrate the MoRF concept into multiple architectures, e.g., ResNet and ConvNeXt. Extensive experiments show that our approach outperforms the baselines in image classification, object detection, and segmentation tasks without significantly increasing the inference time.
|
https://proceedings.mlr.press/v235/lian24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lian24c/lian24c.pdf
|
https://openreview.net/forum?id=snhurpZt63
|
Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset
|
https://proceedings.mlr.press/v235/lian24c.html
|
Shijie Lian, Ziyi Zhang, Hua Li, Wenjie Li, Laurence Tianruo Yang, Sam Kwong, Runmin Cong
|
https://proceedings.mlr.press/v235/lian24c.html
|
ICML 2024
|
With the breakthrough of large models, the Segment Anything Model (SAM) and its extensions have been applied to diverse computer vision tasks. Underwater salient instance segmentation is a foundational and vital step for various underwater vision tasks, but it often suffers from low segmentation accuracy due to complex underwater conditions and the limited adaptability of existing models. Moreover, the lack of large-scale datasets with pixel-level salient instance annotations has impeded the development of machine learning techniques in this field. To address these issues, we construct the first large-scale underwater salient instance segmentation dataset (USIS10K), which contains 10,632 underwater images with pixel-level annotations in 7 categories from various underwater scenes. Then, we propose an Underwater Salient Instance Segmentation architecture based on the Segment Anything Model (USIS-SAM) specifically for the underwater domain. We devise an Underwater Adaptive Visual Transformer (UA-ViT) encoder to incorporate underwater domain visual prompts into the segmentation network. We further design an out-of-the-box underwater Salient Feature Prompter Generator (SFPG) to automatically generate salient prompters instead of explicitly providing foreground points or boxes as prompts in SAM. Comprehensive experimental results show that our USIS-SAM method achieves superior performance on the USIS10K dataset compared to state-of-the-art methods. Datasets and codes are released on https://github.com/LiamLian0727/USIS10K.
|
https://proceedings.mlr.press/v235/liang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24a/liang24a.pdf
|
https://openreview.net/forum?id=0rV7VIrcjX
|
Graph External Attention Enhanced Transformer
|
https://proceedings.mlr.press/v235/liang24a.html
|
Jianqing Liang, Min Chen, Jiye Liang
|
https://proceedings.mlr.press/v235/liang24a.html
|
ICML 2024
|
The Transformer architecture has recently gained considerable attention in the field of graph representation learning, as it naturally overcomes several limitations of Graph Neural Networks (GNNs) with customized attention mechanisms or positional and structural encodings. Despite making some progress, existing works tend to overlook external information of graphs, specifically the correlation between graphs. Intuitively, graphs with similar structures should have similar representations. Therefore, we propose Graph External Attention (GEA) — a novel attention mechanism that leverages multiple external node/edge key-value units to capture inter-graph correlations implicitly. On this basis, we design an effective architecture called Graph External Attention Enhanced Transformer (GEAET), which integrates local structure and global interaction information for more comprehensive graph representations. Extensive experiments on benchmark datasets demonstrate that GEAET achieves state-of-the-art empirical performance. The source code is available for reproducibility at: https://github.com/icm1018/GEAET.
|
https://proceedings.mlr.press/v235/liang24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24b/liang24b.pdf
|
https://openreview.net/forum?id=bX3J7ho18S
|
Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews
|
https://proceedings.mlr.press/v235/liang24b.html
|
Weixin Liang, Zachary Izzo, Yaohui Zhang, Haley Lepp, Hancheng Cao, Xuandong Zhao, Lingjiao Chen, Haotian Ye, Sheng Liu, Zhi Huang, Daniel Mcfarland, James Y. Zou
|
https://proceedings.mlr.press/v235/liang24b.html
|
ICML 2024
|
We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM). Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM-use at the corpus level. We apply this approach to a case study of scientific peer review in AI conferences that took place after the release of ChatGPT: ICLR 2024, NeurIPS 2023, CoRL 2023 and EMNLP 2023. Our results suggest that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates. The circumstances in which generated text occurs offer insight into user behavior: the estimated fraction of LLM-generated text is higher in reviews which report lower confidence, were submitted close to the deadline, and from reviewers who are less likely to respond to author rebuttals. We also observe corpus-level trends in generated text which may be too subtle to detect at the individual level, and discuss the implications of such trends on peer review. We call for future interdisciplinary work to examine how LLM use is changing our information and knowledge practices.
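The corpus-level estimate can be pictured as fitting a mixture weight by maximum likelihood: model aggregate token counts as a blend of a human-written and an AI-generated reference distribution, and pick the blend fraction with the highest likelihood. The snippet is a simplified reading of that idea; the reference distributions and the grid search are assumptions.

```python
# Grid-search MLE of the fraction alpha of AI-modified text in a corpus, assuming
# token counts follow (1 - alpha) * P_human + alpha * P_ai over a shared vocabulary.
import numpy as np

def estimate_ai_fraction(counts, p_human, p_ai, grid=np.linspace(0.0, 1.0, 101)):
    # counts: (V,) aggregate token counts; p_human, p_ai: (V,) reference probabilities.
    log_liks = [np.sum(counts * np.log((1 - a) * p_human + a * p_ai + 1e-12)) for a in grid]
    return float(grid[int(np.argmax(log_liks))])
```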
|
https://proceedings.mlr.press/v235/liang24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24c/liang24c.pdf
|
https://openreview.net/forum?id=dGDFZM018a
|
Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs
|
https://proceedings.mlr.press/v235/liang24c.html
|
Langzhang Liang, Sunwoo Kim, Kijung Shin, Zenglin Xu, Shirui Pan, Yuan Qi
|
https://proceedings.mlr.press/v235/liang24c.html
|
ICML 2024
|
Graph Neural Networks (GNNs) have gained significant attention as a powerful modeling and inference method, especially for homophilic graph-structured data. To empower GNNs in heterophilic graphs, where adjacent nodes exhibit dissimilar labels or features, Signed Message Passing (SMP) has been widely adopted. However, there is a lack of theoretical and empirical analysis regarding the limitations of SMP. In this work, we unveil the potential pitfalls of SMP and their remedies. We first identify two limitations of SMP: undesirable representation update for multi-hop neighbors and vulnerability against oversmoothing issues. To overcome these challenges, we propose a novel message-passing function called Multiset to Multiset GNN (M2M-GNN). Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the limitations of SMP, yielding superior performance in comparison.
|
https://proceedings.mlr.press/v235/liang24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24d/liang24d.pdf
|
https://openreview.net/forum?id=3B6vmW2L80
|
Single-Trajectory Distributionally Robust Reinforcement Learning
|
https://proceedings.mlr.press/v235/liang24d.html
|
Zhipeng Liang, Xiaoteng Ma, Jose Blanchet, Jun Yang, Jiheng Zhang, Zhengyuan Zhou
|
https://proceedings.mlr.press/v235/liang24d.html
|
ICML 2024
|
To mitigate the limitation that the classical reinforcement learning (RL) framework heavily relies on identical training and test environments, Distributionally Robust RL (DRRL) has been proposed to enhance performance across a range of environments, possibly including unknown test environments. As a price for the robustness gain, DRRL involves optimizing over a set of distributions, which is inherently more challenging than optimizing over a fixed distribution in the non-robust case. Existing DRRL algorithms are either model-based or fail to learn from a single sample trajectory. In this paper, we design the first fully model-free DRRL algorithm, called distributionally robust Q-learning with single trajectory (DRQ). We delicately design a multi-timescale framework to fully utilize each incrementally arriving sample and directly learn the optimal distributionally robust policy without modeling the environment, so the algorithm can be trained along a single trajectory in a model-free fashion. Despite the algorithm’s complexity, we provide asymptotic convergence guarantees by generalizing classical stochastic approximation tools. Comprehensive experimental results demonstrate the superior robustness and sample complexity of our proposed algorithm, compared to non-robust methods and other robust RL algorithms.
|
https://proceedings.mlr.press/v235/liang24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24e/liang24e.pdf
|
https://openreview.net/forum?id=XxCfToC9pJ
|
Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization
|
https://proceedings.mlr.press/v235/liang24e.html
|
Jian Liang, Lijun Sheng, Zhengbo Wang, Ran He, Tieniu Tan
|
https://proceedings.mlr.press/v235/liang24e.html
|
ICML 2024
|
The emergence of vision-language models, such as CLIP, has spurred a significant research effort towards their application for downstream supervised learning tasks. Although some previous studies have explored the unsupervised fine-tuning of CLIP, they often rely on prior knowledge in the form of class names associated with ground truth labels. This paper explores a realistic unsupervised fine-tuning scenario, considering the presence of out-of-distribution samples from unknown classes within the unlabeled data. In particular, we focus on simultaneously enhancing out-of-distribution detection and the recognition of instances associated with known classes. To tackle this problem, we present a simple, efficient, and effective approach called Universal Entropy Optimization (UEO). UEO leverages sample-level confidence to approximately minimize the conditional entropy of confident instances and maximize the marginal entropy of less confident instances. Apart from optimizing the textual prompt, UEO incorporates optimization of channel-wise affine transformations within the visual branch of CLIP. Extensive experiments across 15 domains and 4 different types of prior knowledge validate the effectiveness of UEO compared to baseline methods. The code is at https://github.com/tim-learn/UEO.
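One way to picture the objective (the weighting details here are guesses, not the paper's exact formulation): minimize a confidence-weighted conditional entropy so confident predictions sharpen, while maximizing the entropy of a marginal prediction formed from less-confident samples.

```python
# Sketch of a universal-entropy-style loss: confident samples get sharpened,
# the marginal distribution over less-confident samples gets flattened.
import torch
import torch.nn.functional as F

def ueo_style_loss(logits, confidence, eps=1e-8):
    probs = F.softmax(logits, dim=1)
    cond_entropy = -(probs * (probs + eps).log()).sum(dim=1)     # per-sample entropy
    w = confidence / (confidence.sum() + eps)
    sharpen = (w * cond_entropy).sum()                           # minimize for confident samples
    w_inv = 1.0 - confidence
    w_inv = w_inv / (w_inv.sum() + eps)
    marginal = (w_inv.unsqueeze(1) * probs).sum(dim=0)           # weighted marginal prediction
    flatten = (marginal * (marginal + eps).log()).sum()          # negative entropy of the marginal
    return sharpen + flatten
```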
|
https://proceedings.mlr.press/v235/liang24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24f/liang24f.pdf
|
https://openreview.net/forum?id=jnps5YwNlU
|
Efficient Precision and Recall Metrics for Assessing Generative Models using Hubness-aware Sampling
|
https://proceedings.mlr.press/v235/liang24f.html
|
Yuanbang Liang, Jing Wu, Yu-Kun Lai, Yipeng Qin
|
https://proceedings.mlr.press/v235/liang24f.html
|
ICML 2024
|
Despite impressive results, deep generative models require massive datasets for training, and as dataset size increases, effective evaluation metrics like precision and recall (P&R) become computationally infeasible on commodity hardware. In this paper, we address this challenge by proposing efficient P&R (eP&R) metrics that give almost identical results as the original P&R but with much lower computational costs. Specifically, we identify two redundancies in the original P&R: i) redundancy in ratio computation and ii) redundancy in manifold inside/outside identification. We find both can be effectively removed via hubness-aware sampling, which extracts representative elements from synthetic/real image samples based on their hubness values, i.e., the number of times a sample becomes a k-nearest neighbor to others in the feature space. Thanks to the insensitivity of hubness-aware sampling to exact k-nearest neighbor (k-NN) results, we further improve the efficiency of our eP&R metrics by using approximate k-NN methods. Extensive experiments show that our eP&R matches the original P&R but is far more efficient in time and space. Our code is available at: https://github.com/Byronliang8/Hubness_Precision_Recall
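Hubness-aware sampling itself is simple to sketch: count how often each sample appears in other samples' k-nearest-neighbor lists and keep the highest-count ("hub") samples as representatives before computing P&R. The parameter values and scikit-learn backend below are illustrative choices.

```python
# Score each point by how often it appears in others' k-NN lists, then keep
# the top-scoring points as representatives of the sample set.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hubness_select(features, k=5, n_keep=1000):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)                    # column 0 is each point itself
    counts = np.bincount(idx[:, 1:].ravel(), minlength=len(features))
    keep = np.argsort(-counts)[:n_keep]
    return features[keep]
```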
|
https://proceedings.mlr.press/v235/liang24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liang24g/liang24g.pdf
|
https://openreview.net/forum?id=l5lgbVR6BP
|
Scalable Multiple Kernel Clustering: Learning Clustering Structure from Expectation
|
https://proceedings.mlr.press/v235/liang24g.html
|
Weixuan Liang, En Zhu, Shengju Yu, Huiying Xu, Xinzhong Zhu, Xinwang Liu
|
https://proceedings.mlr.press/v235/liang24g.html
|
ICML 2024
|
In this paper, we derive an upper bound of the difference between a kernel matrix and its expectation under a mild assumption. Specifically, we assume that the true distribution of the training data is an unknown isotropic Gaussian distribution. When the kernel function is a Gaussian kernel, and the mean of each cluster is sufficiently separated, we find that the expectation of a kernel matrix can be close to a rank-$k$ matrix, where $k$ is the cluster number. Moreover, we prove that the normalized kernel matrix of the training set deviates (w.r.t. Frobenius norm) from its expectation in the order of $\widetilde{\mathcal{O}}(1/\sqrt{d})$, where $d$ is the dimension of samples. Based on the above theoretical results, we propose a novel multiple kernel clustering framework which attempts to learn the information of the expectation kernel matrices. First, we aim to minimize the distance between each base kernel and a rank-$k$ matrix, which is a proxy of the expectation kernel. Then, we fuse these rank-$k$ matrices into a consensus rank-$k$ matrix to find the clustering structure. Using an anchor-based method, the proposed framework is flexible with the sizes of input kernel matrices and able to handle large-scale datasets. We also provide the approximation guarantee by deriving two non-asymptotic bounds for the consensus kernel and clustering indicator matrices. Finally, we conduct extensive experiments to verify the clustering performance of the proposed method and the correctness of the proposed theoretical results.
|
https://proceedings.mlr.press/v235/liao24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liao24a/liao24a.pdf
|
https://openreview.net/forum?id=8dX4YnosqG
|
On the Error-Propagation of Inexact Hotelling’s Deflation for Principal Component Analysis
|
https://proceedings.mlr.press/v235/liao24a.html
|
Fangshuo Liao, Junhyung Lyle Kim, Cruz Barnum, Anastasios Kyrillidis
|
https://proceedings.mlr.press/v235/liao24a.html
|
ICML 2024
|
Principal Component Analysis (PCA) aims to find subspaces spanned by the so-called principal components that best represent the variance in the dataset. The deflation method is a popular meta-algorithm that sequentially finds individual principal components, starting from the most important ones and working towards the less important ones. However, as deflation proceeds, numerical errors from the imprecise estimation of principal components propagate due to its sequential nature. This paper mathematically characterizes the error propagation of the inexact Hotelling’s deflation method. We consider two scenarios: $i)$ when the sub-routine for finding the leading eigenvector is abstract and can represent various algorithms; and $ii)$ when power iteration is used as the sub-routine. In the latter case, the additional directional information from power iteration allows us to obtain a tighter error bound than the sub-routine agnostic case. For both scenarios, we explicitly characterize how the errors progress and affect subsequent principal component estimations.
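The procedure under analysis is the textbook one: power iteration for the leading eigenvector followed by Hotelling's deflation, repeated k times. Every inexact eigenvector perturbs the deflated matrix that later components are extracted from, which is exactly the error propagation being characterized.

```python
# Inexact Hotelling's deflation with power iteration as the sub-routine: errors in
# early components change the deflated matrix seen by later components.
import numpy as np

def deflation_pca(A, k, power_iters=50, seed=0):
    # A: symmetric PSD matrix (e.g., a sample covariance); returns k approximate components.
    rng = np.random.default_rng(seed)
    A = A.copy().astype(float)
    components = []
    for _ in range(k):
        v = rng.standard_normal(A.shape[0])
        for _ in range(power_iters):                 # (inexact) leading eigenvector
            v = A @ v
            v /= np.linalg.norm(v)
        lam = v @ A @ v                              # Rayleigh quotient estimate
        components.append(v)
        A -= lam * np.outer(v, v)                    # Hotelling's deflation
    return np.array(components)
```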
|
https://proceedings.mlr.press/v235/liao24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liao24b/liao24b.pdf
|
https://openreview.net/forum?id=PApqOVbHYF
|
Bootstrapping Fisher Market Equilibrium and First-Price Pacing Equilibrium
|
https://proceedings.mlr.press/v235/liao24b.html
|
Luofeng Liao, Christian Kroer
|
https://proceedings.mlr.press/v235/liao24b.html
|
ICML 2024
|
Linear Fisher market (LFM) is an equilibrium model for fair and efficient resource allocation, and first-price pacing equilibrium (FPPE) is a model for budget-management in first-price auctions. One thing they have in common is that both have a corresponding Eisenberg-Gale convex program characterization. In this paper, we introduce and devise several statistically valid bootstrap inference procedures for LFM and FPPE. The most challenging part is to bootstrap general FPPE, which reduces to bootstrapping constrained M-estimators, a largely unexplored problem. We are able to devise a bootstrap procedure for FPPE with structures by using the powerful tool of epi-convergence theory. Experiments with synthetic and semi-real data verify our theory.
|
https://proceedings.mlr.press/v235/lien24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lien24a/lien24a.pdf
|
https://openreview.net/forum?id=nSGnx8lNJ6
|
Enhancing Value Function Estimation through First-Order State-Action Dynamics in Offline Reinforcement Learning
|
https://proceedings.mlr.press/v235/lien24a.html
|
Yun-Hsuan Lien, Ping-Chun Hsieh, Tzu-Mao Li, Yu-Shuen Wang
|
https://proceedings.mlr.press/v235/lien24a.html
|
ICML 2024
|
In offline reinforcement learning (RL), updating the value function with the discrete-time Bellman Equation often encounters challenges due to the limited scope of available data. This limitation stems from the Bellman Equation, which cannot accurately predict the value of unvisited states. To address this issue, we have introduced an innovative solution that bridges the continuous- and discrete-time RL methods, capitalizing on their advantages. Our method uses a discrete-time RL algorithm to derive the value function from a dataset while ensuring that the function’s first derivative aligns with the local characteristics of states and actions, as defined by the Hamilton-Jacobi-Bellman equation in continuous RL. We provide practical algorithms for both deterministic policy gradient methods and stochastic policy gradient methods. Experiments on the D4RL dataset show that incorporating the first-order information significantly improves policy performance for offline RL problems.
|