Dataset fields (one record per paper below): abs (proceedings page URL), Download PDF (PDF URL), OpenReview (forum URL), title, url, authors, detail_url, tags (single class: ICML 2024), abstract.
https://proceedings.mlr.press/v235/liu24bp.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bp/liu24bp.pdf
https://openreview.net/forum?id=pAdI75JG3G
Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond
https://proceedings.mlr.press/v235/liu24bp.html
Xutong Liu, Siwei Wang, Jinhang Zuo, Han Zhong, Xuchuang Wang, Zhiyong Wang, Shuai Li, Mohammad Hajiesmaili, John C.S. Lui, Wei Chen
https://proceedings.mlr.press/v235/liu24bp.html
ICML 2024
We introduce a novel framework of combinatorial multi-armed bandits (CMAB) with multivariant and probabilistically triggering arms (CMAB-MT), where the outcome of each arm is a $d$-dimensional multivariant random variable and the feedback follows a general arm triggering process. Compared with existing CMAB works, CMAB-MT not only enhances the modeling power but also allows improved results by leveraging distinct statistical properties for multivariant random variables. For CMAB-MT, we propose a general 1-norm multivariant and triggering probability-modulated smoothness condition, and an optimistic CUCB-MT algorithm built upon this condition. Our framework can include many important problems as applications, such as episodic reinforcement learning (RL) and probabilistic maximum coverage for goods distribution, all of which meet the above smoothness condition and achieve matching or improved regret bounds compared to existing works. Through our new framework, we build the first connection between the episodic RL and CMAB literature, by offering a new angle to solve the episodic RL through the lens of CMAB, which may encourage more interactions between these two important directions.
https://proceedings.mlr.press/v235/liu24bq.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bq/liu24bq.pdf
https://openreview.net/forum?id=sHswzNWUW2
Language-Driven Cross-Modal Classifier for Zero-Shot Multi-Label Image Recognition
https://proceedings.mlr.press/v235/liu24bq.html
Yicheng Liu, Jie Wen, Chengliang Liu, Xiaozhao Fang, Zuoyong Li, Yong Xu, Zheng Zhang
https://proceedings.mlr.press/v235/liu24bq.html
ICML 2024
Large-scale pre-trained vision-language models (e.g., CLIP) have shown powerful zero-shot transfer capabilities in image recognition tasks. Recent approaches typically employ supervised fine-tuning methods to adapt CLIP for zero-shot multi-label image recognition tasks. However, obtaining sufficient multi-label annotated image data for training is challenging and not scalable. In this paper, we propose a new language-driven framework for zero-shot multi-label recognition that eliminates the need for annotated images during training. Leveraging the aligned CLIP multi-modal embedding space, our method utilizes language data generated by LLMs to train a cross-modal classifier, which is subsequently transferred to the visual modality. During inference, directly applying the classifier to visual inputs may limit performance due to the modality gap. To address this issue, we introduce a cross-modal mapping method that maps image embeddings to the language modality while retaining crucial visual information. Comprehensive experiments demonstrate that our method outperforms other zero-shot multi-label recognition methods and achieves competitive results compared to few-shot methods.
https://proceedings.mlr.press/v235/liu24br.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24br/liu24br.pdf
https://openreview.net/forum?id=NgaYcefBnZ
Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications
https://proceedings.mlr.press/v235/liu24br.html
Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui
https://proceedings.mlr.press/v235/liu24br.html
ICML 2024
Machine learning algorithms minimizing average risk are susceptible to distributional shifts. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case risk within an uncertainty set. However, DRO suffers from over-pessimism, leading to low-confidence predictions, poor parameter estimations as well as poor generalization. In this work, we conduct a theoretical analysis of a probable root cause of over-pessimism: excessive focus on noisy samples. To alleviate the impact of noise, we incorporate data geometry into calibration terms in DRO, resulting in our novel Geometry-Calibrated DRO (GCDRO) for regression. We establish the connection between our risk objective and the Helmholtz free energy in statistical physics, and this free-energy-based risk can extend to standard DRO methods. Leveraging gradient flow in Wasserstein space, we develop an approximate minimax optimization algorithm with a bounded error ratio and elucidate how our approach mitigates noisy sample effects. Comprehensive experiments confirm GCDRO’s superiority over conventional DRO methods.
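As background for the free-energy connection mentioned in the abstract, the standard dual of KL-constrained DRO already takes a log-sum-exp (soft-max) form; identifying the loss with a negative energy and the dual multiplier $\beta$ with a temperature, the soft risk $\beta\log\mathbb{E}_{P}[e^{\ell/\beta}]$ is, up to sign, a Helmholtz free energy. This is a textbook identity included for orientation, not GCDRO's calibrated objective, which additionally incorporates geometry-based calibration terms:

$$
\sup_{Q:\,\mathrm{KL}(Q\Vert P)\le\rho}\ \mathbb{E}_{Q}\big[\ell(\theta;Z)\big]
\;=\;\inf_{\beta>0}\Big\{\beta\rho+\beta\log\mathbb{E}_{P}\big[e^{\ell(\theta;Z)/\beta}\big]\Big\}.
$$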
https://proceedings.mlr.press/v235/liu24bs.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bs/liu24bs.pdf
https://openreview.net/forum?id=BwAkaxqiLB
Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model
https://proceedings.mlr.press/v235/liu24bs.html
Fei Liu, Tong Xialiang, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, Qingfu Zhang
https://proceedings.mlr.press/v235/liu24bs.html
ICML 2024
Heuristics are widely used for dealing with complex search and optimization problems. However, the manual design of heuristics is often very labour-intensive and requires rich working experience and knowledge. This paper proposes Evolution of Heuristics (EoH), a novel evolutionary paradigm that leverages both Large Language Models (LLMs) and Evolutionary Computation (EC) methods for Automatic Heuristic Design (AHD). EoH represents the ideas of heuristics in natural language, termed thoughts. They are then translated into executable codes by LLMs. The evolution of both thoughts and codes in an evolutionary search framework makes it very effective and efficient for generating high-performance heuristics. Experiments on three widely studied combinatorial optimization benchmark problems demonstrate that EoH outperforms commonly used handcrafted heuristics and other recent AHD methods including FunSearch. In particular, the heuristic produced by EoH with a low computational budget (in terms of the number of queries to LLMs) significantly outperforms widely-used human hand-crafted baseline algorithms for the online bin packing problem.
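Below is a minimal, runnable sketch of the evolutionary loop the abstract describes, with the LLM replaced by a stub that samples from a small pool of hand-written bin-packing scoring heuristics. The pool, the fitness function, and all names (`llm_propose`, `fitness`, `POOL`) are illustrative stand-ins, not the paper's actual prompts or evolutionary operators.

```python
import random

# A toy stand-in for the LLM: it "proposes" scoring heuristics for online bin
# packing by sampling from a small pool of hand-written candidates. In EoH the
# thoughts and their code are produced by prompting an actual LLM.
POOL = {
    "prefer tight fits": lambda cap, item: -(cap - item),
    "prefer loose fits": lambda cap, item: cap - item,
    "prefer near-full bins": lambda cap, item: -cap,
}

def llm_propose(rng):
    thought = rng.choice(list(POOL))
    return thought, POOL[thought]

def fitness(heuristic, items, rng, bin_size=1.0, trials=20):
    """Negative average number of bins used by a policy that places each item
    into the feasible bin with the highest heuristic score."""
    used = 0
    for _ in range(trials):
        bins = []  # remaining capacities of open bins
        for item in rng.sample(items, len(items)):
            feasible = [c for c in range(len(bins)) if bins[c] >= item]
            if feasible:
                best = max(feasible, key=lambda c: heuristic(bins[c], item))
                bins[best] -= item
            else:
                bins.append(bin_size - item)
        used += len(bins)
    return -used / trials

rng = random.Random(0)
items = [rng.uniform(0.1, 0.7) for _ in range(30)]
population = [llm_propose(rng) for _ in range(4)]      # initial (thought, code) pairs
for _ in range(5):                                     # evolutionary generations
    population.append(llm_propose(rng))                # "LLM" proposes a new candidate
    population.sort(key=lambda th: fitness(th[1], items, rng), reverse=True)
    population = population[:4]                        # keep the fittest heuristics
print(population[0][0])                                # best surviving thought
```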
https://proceedings.mlr.press/v235/liu24bt.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bt/liu24bt.pdf
https://openreview.net/forum?id=IuvpVcGUOB
Correlation-Induced Label Prior for Semi-Supervised Multi-Label Learning
https://proceedings.mlr.press/v235/liu24bt.html
Biao Liu, Ning Xu, Xiangyu Fang, Xin Geng
https://proceedings.mlr.press/v235/liu24bt.html
ICML 2024
Semi-supervised multi-label learning (SSMLL) aims to address the challenge of limited labeled data availability in multi-label learning (MLL) by leveraging unlabeled data to improve the model’s performance. Due to the difficulty of estimating reliable label correlations from minimal multi-labeled data, previous SSMLL methods fail to unleash the power of the correlations among multiple labels to improve the performance of the predictive model in SSMLL. To deal with this problem, we propose a novel SSMLL method named PCLP, in which a correlation-induced label prior is inferred to enhance pseudo-labeling instead of directly estimating the correlations among labels. Specifically, we construct the correlated label prior probability distribution using a structural causal model (SCM), constraining the correlations of generated pseudo-labels to conform to the prior, which can be integrated into a variational label enhancement framework optimized by both labeled and unlabeled instances in a unified manner. Theoretically, we demonstrate the accuracy of the generated pseudo-labels and guarantee the learning consistency of the proposed method. Comprehensive experiments on several benchmark datasets have validated the superiority of the proposed method.
https://proceedings.mlr.press/v235/liu24bu.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bu/liu24bu.pdf
https://openreview.net/forum?id=dmHHVcHFdM
Causality Based Front-door Defense Against Backdoor Attack on Language Models
https://proceedings.mlr.press/v235/liu24bu.html
Yiran Liu, Xiaoang Xu, Zhiyi Hou, Yang Yu
https://proceedings.mlr.press/v235/liu24bu.html
ICML 2024
We have developed a new framework based on the theory of causal inference to protect language models against backdoor attacks. Backdoor attackers can poison language models with different types of triggers, such as words, sentences, grammar, and style, enabling them to selectively modify the decision-making of the victim model. However, existing defense approaches are only effective when the backdoor attack form meets specific assumptions, making it difficult to counter diverse backdoor attacks. We propose a new defense framework, Front-door Adjustment for Backdoor Elimination (FABE), based on causal reasoning that does not rely on assumptions about the form of triggers. This method effectively differentiates between spurious and legitimate associations by creating a 'front door' that maps out the actual causal relationships. The term 'front door' refers to a text that retains the semantic equivalence of the initial input, which is generated by an additional, fine-tuned language model, denoted as the defense model. Our defense experiments against various attack methods at the token, sentence, and syntactic levels reduced the attack success rate from 93.63% to 15.12%, improving the defense effect by 2.91 times compared to the best baseline result of 66.61%, achieving state-of-the-art results. Through ablation studies, we analyzed the effect of each module in FABE, demonstrating the importance of complying with the front-door criterion and front-door adjustment formula, which also explains why previous methods failed. Our code to reproduce the experiments is available at: https://github.com/lyr17/Frontdoor-Adjustment-Backdoor-Elimination.
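The front-door adjustment formula invoked in the abstract is Pearl's standard identity; with treatment $x$, mediator $m$ (here, the defense model's semantically equivalent rewriting of the input), and outcome $y$, it reads

$$
P\big(y \mid \mathrm{do}(x)\big)
\;=\;\sum_{m} P(m \mid x)\sum_{x'} P\big(y \mid x', m\big)\,P(x').
$$

How FABE operationalizes each factor is described in the paper itself; the equation is included only as the general criterion the abstract refers to.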
https://proceedings.mlr.press/v235/liu24bv.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bv/liu24bv.pdf
https://openreview.net/forum?id=5ap1MmUqO6
Partial Multi-View Multi-Label Classification via Semantic Invariance Learning and Prototype Modeling
https://proceedings.mlr.press/v235/liu24bv.html
Chengliang Liu, Gehui Xu, Jie Wen, Yabo Liu, Chao Huang, Yong Xu
https://proceedings.mlr.press/v235/liu24bv.html
ICML 2024
The difficulty of partial multi-view multi-label learning lies in coupling the consensus of multi-view data with the task relevance of multi-label classification, under the condition where partial views and labels are unavailable. In this paper, we seek to compress cross-view representation to maximize the proportion of shared information to better predict semantic tags. To achieve this, we establish a model consistent with the information bottleneck theory for learning cross-view shared representation, minimizing non-shared information while maintaining feature validity to help increase the purity of task-relevant information. Furthermore, we model multi-label prototype instances in the latent space and learn label correlations in a data-driven manner. Our method outperforms existing state-of-the-art methods on multiple public datasets while exhibiting good compatibility with both partial and complete data. Finally, we experimentally reveal the importance of condensing shared information under the premise of information balancing, in the process of multi-view information encoding and compression.
https://proceedings.mlr.press/v235/liu24bw.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bw/liu24bw.pdf
https://openreview.net/forum?id=PudBRuNa8r
Building Socially-Equitable Public Models
https://proceedings.mlr.press/v235/liu24bw.html
Yejia Liu, Jianyi Yang, Pengfei Li, Tongxin Li, Shaolei Ren
https://proceedings.mlr.press/v235/liu24bw.html
ICML 2024
Public models offer predictions to a variety of downstream tasks and have played a crucial role in various AI applications, showcasing their proficiency in accurate predictions. However, the exclusive emphasis on prediction accuracy may not align with the diverse end objectives of downstream agents. Recognizing the public model’s predictions as a service, we advocate for integrating the objectives of downstream agents into the optimization process. Concretely, to address performance disparities and foster fairness among heterogeneous agents in training, we propose a novel Equitable Objective. This objective, coupled with a policy gradient algorithm, is crafted to train the public model to produce a more equitable/uniform performance distribution across downstream agents, each with their unique concerns. Both theoretical analysis and empirical case studies have proven the effectiveness of our method in advancing performance equity across diverse downstream agents utilizing the public model for their decision-making. Codes and datasets are released at https://github.com/Ren-Research/Socially-Equitable-Public-Models.
https://proceedings.mlr.press/v235/liu24bx.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bx/liu24bx.pdf
https://openreview.net/forum?id=dJTChKgv3a
In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
https://proceedings.mlr.press/v235/liu24bx.html
Sheng Liu, Haotian Ye, Lei Xing, James Y. Zou
https://proceedings.mlr.press/v235/liu24bx.html
ICML 2024
Large language models (LLMs) demonstrate emergent in-context learning capabilities, where they adapt to new tasks based on example demonstrations. However, in-context learning has seen limited effectiveness in many settings, is difficult to quantitatively control and takes up context window space. To overcome these limitations, we propose an alternative approach that recasts in-context learning as in-context vectors (ICV). Using ICV has two steps. We first use a forward pass on demonstration examples to create the in-context vector from the latent embedding of the LLM. This vector captures essential information about the intended task. On a new query, instead of adding demonstrations to the prompt, we shift the latent states of the LLM using the ICV. The ICV approach has several benefits: 1) it enables the LLM to more effectively follow the demonstration examples; 2) it’s easy to control by adjusting the magnitude of the ICV; 3) it reduces the length of the prompt by removing the in-context demonstrations; 4) ICV is computationally much more efficient than fine-tuning. We demonstrate that ICV achieves better performance compared to standard in-context learning and fine-tuning on diverse tasks including safety, style transfer, role-playing and formatting. Moreover, we show that we can flexibly teach LLM to simultaneously follow different types of instructions by simple vector arithmetics on the corresponding ICVs.
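A minimal sketch of latent-space steering in the spirit of the abstract, assuming toy NumPy arrays in place of real LLM hidden states. The simple mean-difference construction of the vector and the names `steer`, `icv`, and `alpha` are illustrative assumptions, not the paper's exact ICV recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-layer hidden states of an LLM (layers x hidden_dim).
# In practice these would come from forward passes on the demonstration
# inputs and their desired outputs; here they are random placeholders.
n_layers, d = 4, 8
h_inputs  = rng.normal(size=(5, n_layers, d))   # latents of demo inputs
h_targets = rng.normal(size=(5, n_layers, d))   # latents of demo targets

# A simplified in-context vector: mean latent shift from input to target.
# (The paper's construction may differ, e.g. using a principal direction.)
icv = (h_targets - h_inputs).mean(axis=0)       # (n_layers, d)

def steer(hidden_states, icv, alpha=0.1):
    """Shift per-layer hidden states of a new query by the scaled ICV."""
    return hidden_states + alpha * icv

h_query = rng.normal(size=(n_layers, d))
h_steered = steer(h_query, icv)
print(h_steered.shape)                          # (4, 8)
```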
https://proceedings.mlr.press/v235/liu24by.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24by/liu24by.pdf
https://openreview.net/forum?id=jvVWPtJYbc
Minimizing $f$-Divergences by Interpolating Velocity Fields
https://proceedings.mlr.press/v235/liu24by.html
Song Liu, Jiahao Yu, Jack Simons, Mingxuan Yi, Mark Beaumont
https://proceedings.mlr.press/v235/liu24by.html
ICML 2024
Many machine learning problems can be seen as approximating a target distribution using a particle distribution by minimizing their statistical discrepancy. Wasserstein Gradient Flow can move particles along a path that minimizes the $f$-divergence between the target and particle distributions. To move particles, we need to calculate the corresponding velocity fields derived from a density ratio function between these two distributions. Previous works estimated such density ratio functions and then differentiated the estimated ratios. These approaches may suffer from overfitting, leading to a less accurate estimate of the velocity fields. Inspired by non-parametric curve fitting, we directly estimate these velocity fields using interpolation techniques. We prove that our estimators are consistent under mild conditions. We validate their effectiveness using novel applications on domain adaptation and missing data imputation. The code for reproducing our results can be found at https://github.com/anewgithubname/gradest2.
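For context, the Wasserstein gradient flow of an $f$-divergence moves particles with a velocity field obtained by differentiating $f'$ of the density ratio. Under one common convention, with particle density $q_t$ and target $p$ (the paper may define the ratio in the opposite order),

$$
v_t(x) \;=\; -\,\nabla_x\, f'\!\big(r_t(x)\big),
\qquad r_t(x) \;=\; \frac{q_t(x)}{p(x)},
$$

so for the KL divergence ($f(t)=t\log t$) this reduces to $v_t(x) = -\nabla_x \log r_t(x)$. The paper's contribution is to estimate such velocity fields directly by interpolation rather than by differentiating an estimated ratio.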
https://proceedings.mlr.press/v235/liu24bz.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24bz/liu24bz.pdf
https://openreview.net/forum?id=L057s2Rq8O
KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache
https://proceedings.mlr.press/v235/liu24bz.html
Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, Xia Hu
https://proceedings.mlr.press/v235/liu24bz.html
ICML 2024
Efficiently serving large language models (LLMs) requires batching many requests together to reduce the cost per request. Yet, the key-value (KV) cache, which stores attention keys and values to avoid re-computations, significantly increases memory demands and becomes the new bottleneck in speed and memory usage. This memory demand increases with larger batch sizes and longer context lengths. Additionally, the inference speed is limited by the size of KV cache, as the GPU’s SRAM must load the entire KV cache from the main GPU memory for each token generated, causing the computational core to be idle during this process. A straightforward and effective solution to reduce KV cache size is quantization, which decreases the total bytes taken by KV cache. However, there is a lack of in-depth studies that explore the element distribution of KV cache to understand the hardness and limitation of KV cache quantization. To fill the gap, we conducted a comprehensive study on the element distribution in KV cache of popular LLMs. Our findings indicate that the key cache should be quantized per-channel, i.e., group elements along the channel dimension and quantize them together. In contrast, the value cache should be quantized per-token. From this analysis, we developed a tuning-free 2bit KV cache quantization algorithm, named KIVI. With the hardware-friendly implementation, KIVI can enable Llama (Llama-2), Falcon, and Mistral models to maintain almost the same quality while using 2.6$\times$ less peak memory usage (including the model weight). This reduction in memory usage enables up to 4x larger batch size, bringing $2.35 \times \sim 3.47 \times$ throughput on real LLM inference workload.
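A minimal NumPy sketch of the per-channel vs. per-token asymmetric quantization split described above, assuming key/value matrices shaped (tokens, channels); the grouping, residual handling, and fused kernels of the actual KIVI implementation are omitted.

```python
import numpy as np

def asym_quant(x, axis, n_bits=2):
    """Asymmetric min-max quantization with statistics taken along `axis`
    (a generic sketch, not KIVI's exact grouped implementation)."""
    levels = 2 ** n_bits - 1
    mn = x.min(axis=axis, keepdims=True)
    mx = x.max(axis=axis, keepdims=True)
    scale = (mx - mn) / levels
    scale = np.where(scale == 0, 1.0, scale)    # guard against constant slices
    q = np.clip(np.round((x - mn) / scale), 0, levels)
    return q.astype(np.uint8), scale, mn

def dequant(q, scale, mn):
    return q * scale + mn

rng = np.random.default_rng(0)
keys   = rng.normal(size=(128, 64))   # (tokens, channels)
values = rng.normal(size=(128, 64))

# KIVI's finding: quantize keys per-channel (stats over the token axis)
# and values per-token (stats over the channel axis).
kq, ks, km = asym_quant(keys,   axis=0)   # per-channel key quantization
vq, vs, vm = asym_quant(values, axis=1)   # per-token value quantization

print(np.abs(dequant(kq, ks, km) - keys).mean(),
      np.abs(dequant(vq, vs, vm) - values).mean())
```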
https://proceedings.mlr.press/v235/liu24ca.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ca/liu24ca.pdf
https://openreview.net/forum?id=kpDd2HCBka
Efficient Policy Evaluation with Offline Data Informed Behavior Policy Design
https://proceedings.mlr.press/v235/liu24ca.html
Shuze Liu, Shangtong Zhang
https://proceedings.mlr.press/v235/liu24ca.html
ICML 2024
Most reinforcement learning practitioners evaluate their policies with online Monte Carlo estimators for either hyperparameter tuning or testing different algorithmic design choices, where the policy is repeatedly executed in the environment to get the average outcome. Such massive interactions with the environment are prohibitive in many scenarios. In this paper, we propose novel methods that improve the data efficiency of online Monte Carlo estimators while maintaining their unbiasedness. We first propose a tailored closed-form behavior policy that provably reduces the variance of an online Monte Carlo estimator. We then design efficient algorithms to learn this closed-form behavior policy from previously collected offline data. Theoretical analysis is provided to characterize how the behavior policy learning error affects the amount of reduced variance. Compared with previous works, our method achieves better empirical performance in a broader set of environments, with fewer requirements for offline data.
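As background for the variance-reduction claim, the standard trajectory-wise importance-sampling Monte Carlo estimator remains unbiased for any behavior policy $\mu$ that covers the target policy $\pi$; the paper derives a closed-form behavior policy that provably reduces this estimator's variance and shows how to learn it from offline data:

$$
\hat{J}(\pi)\;=\;\frac{1}{n}\sum_{i=1}^{n}
\Bigg(\prod_{t=0}^{T-1}\frac{\pi\big(A_t^{(i)}\mid S_t^{(i)}\big)}{\mu\big(A_t^{(i)}\mid S_t^{(i)}\big)}\Bigg)
\sum_{t=0}^{T-1}\gamma^{t}R_{t+1}^{(i)},
\qquad
\mathbb{E}_{\mu}\big[\hat{J}(\pi)\big]=J(\pi).
$$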
https://proceedings.mlr.press/v235/liu24cb.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24cb/liu24cb.pdf
https://openreview.net/forum?id=bYRYb7DMNo
Timer: Generative Pre-trained Transformers Are Large Time Series Models
https://proceedings.mlr.press/v235/liu24cb.html
Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, Mingsheng Long
https://proceedings.mlr.press/v235/liu24cb.html
ICML 2024
Deep learning has contributed remarkably to the advancement of time series analysis. Still, deep models can encounter performance bottlenecks in real-world data-scarce scenarios, which can be concealed due to the performance saturation with small models on current benchmarks. Meanwhile, large models have demonstrated great powers in these scenarios through large-scale pre-training. Continuous progress has been achieved with the emergence of large language models, exhibiting unprecedented abilities such as few-shot generalization, scalability, and task generality, which are however absent in small deep models. To change the status quo of training scenario-specific small models from scratch, this paper aims at the early development of large time series models (LTSM). During pre-training, we curate large-scale datasets with up to 1 billion time points, unify heterogeneous time series into single-series sequence (S3) format, and develop the GPT-style architecture toward LTSMs. To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task. The outcome of this study is a Time Series Transformer (Timer), which is generative pre-trained by next token prediction and adapted to various downstream tasks with promising capabilities as an LTSM. Code and datasets are available at: https://github.com/thuml/Large-Time-Series-Model.
https://proceedings.mlr.press/v235/liu24cc.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24cc/liu24cc.pdf
https://openreview.net/forum?id=tDMlQkJRhZ
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
https://proceedings.mlr.press/v235/liu24cc.html
Dongyang Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin, Shitian Zhao, Shijie Geng, Ziyi Lin, Peng Jin, Kaipeng Zhang, Wenqi Shao, Chao Xu, Conghui He, Junjun He, Hao Shao, Pan Lu, Yu Qiao, Hongsheng Li, Peng Gao
https://proceedings.mlr.press/v235/liu24cc.html
ICML 2024
We propose SPHINX-X, an extensive Multi-modality Large Language Model (MLLM) series developed upon SPHINX. To improve the architecture and training efficiency, we modify the SPHINX framework by removing redundant visual encoders, bypassing fully-padded sub-images with skip tokens, and simplifying multi-stage training into a one-stage all-in-one paradigm. To fully unleash the potential of MLLMs, we assemble a comprehensive multi-domain and multi-modal dataset covering publicly available resources in language, vision, and vision-language tasks. We further enrich this collection with our curated OCR intensive and Set-of-Mark datasets, extending the diversity and generality. By training over different base LLMs including TinyLlama-1.1B, InternLM2-7B, LLaMA2-13B, and Mixtral-8$\times$7B, we obtain a spectrum of MLLMs that vary in parameter size and multilingual capabilities. Comprehensive benchmarking reveals a strong correlation between the multi-modal performance with the data and parameter scales. Code and models are released at https://github.com/Alpha-VLLM/LLaMA2-Accessory.
https://proceedings.mlr.press/v235/liu24cd.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24cd/liu24cd.pdf
https://openreview.net/forum?id=KI3JKFKciG
DFD: Distilling the Feature Disparity Differently for Detectors
https://proceedings.mlr.press/v235/liu24cd.html
Kang Liu, Yingyi Zhang, Jingyun Zhang, Jinmin Li, Jun Wang, Shaoming Wang, Chun Yuan, Rizen Guo
https://proceedings.mlr.press/v235/liu24cd.html
ICML 2024
Knowledge distillation is a widely adopted model compression technique that has been successfully applied to object detection. In feature distillation, it is common practice for the student model to imitate the feature responses of the teacher model, with the underlying objective of improving its own abilities by reducing the disparity with the teacher. However, it is crucial to recognize that the disparities between the student and teacher are inconsistent, highlighting their varying abilities. In this paper, we explore the inconsistency in the disparity between teacher and student feature maps and analyze their impact on the efficiency of the distillation. We find that regions with varying degrees of difference should be treated separately, with different distillation constraints applied accordingly. We introduce our distillation method called Disparity Feature Distillation (DFD). The core idea behind DFD is to apply different treatments to regions with varying learning difficulties, simultaneously incorporating leniency and strictness. It enables the student to better assimilate the teacher’s knowledge. Through extensive experiments, we demonstrate the effectiveness of our proposed DFD in achieving significant improvements. For instance, when applied to detectors based on ResNet50 such as RetinaNet, FasterRCNN, and RepPoints, our method enhances their mAP from 37.4%, 38.4%, 38.6% to 41.7%, 42.4%, 42.7%, respectively. Our approach also demonstrates substantial improvements on YOLO and ViT-based models. The code is available at https://github.com/luckin99/DFD.
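One generic way to "treat regions with varying learning difficulties differently", sketched below with NumPy: split the feature map by the magnitude of the teacher-student disparity and weight the two groups differently. The threshold `tau` and the weights `w_easy`/`w_hard` are illustrative assumptions; DFD's actual region partitioning and constraints may differ.

```python
import numpy as np

def disparity_distill_loss(f_teacher, f_student, tau=1.0,
                           w_easy=1.0, w_hard=0.5):
    """Apply different distillation weights to low- and high-disparity regions
    of a feature map (a generic interpretation, not the paper's exact scheme)."""
    disparity = np.abs(f_teacher - f_student)
    hard = disparity > tau                      # regions the student struggles with
    sq = (f_teacher - f_student) ** 2
    loss = w_easy * sq[~hard].sum() + w_hard * sq[hard].sum()
    return loss / f_teacher.size

rng = np.random.default_rng(0)
ft = rng.normal(size=(256, 7, 7))   # teacher feature map (C, H, W)
fs = rng.normal(size=(256, 7, 7))   # student feature map
print(disparity_distill_loss(ft, fs))
```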
https://proceedings.mlr.press/v235/liu24ce.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ce/liu24ce.pdf
https://openreview.net/forum?id=EIGbXbxcUQ
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
https://proceedings.mlr.press/v235/liu24ce.html
Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, Vikas Chandra
https://proceedings.mlr.press/v235/liu24ce.html
ICML 2024
This paper addresses the growing need for efficient large language models (LLMs) on mobile devices, driven by increasing cloud costs and latency concerns. We focus on designing top-quality LLMs with fewer than a billion parameters, a practical choice for mobile deployment. Contrary to the prevailing belief emphasizing the pivotal role of data and parameter quantity in determining model quality, our investigation underscores the significance of model architecture for sub-billion scale LLMs. Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network denoted as MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over preceding 125M/350M state-of-the-art models. Additionally, we propose an immediate block-wise weight-sharing approach with no increase in model size and only marginal latency overhead. The resultant models, denoted as MobileLLM-LS, demonstrate a further accuracy enhancement of 0.7%/0.8% over MobileLLM 125M/350M. Moreover, the MobileLLM model family shows significant improvements compared to previous sub-billion models on chat benchmarks, and achieves correctness close to LLaMA-v2 7B in API calling tasks, highlighting the capability of small models for common on-device use cases.
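A toy sketch of the immediate block-wise weight-sharing idea: each block is executed twice in a row, so effective depth doubles while the parameter count stays fixed and the block's weights stay resident in cache. The `ToyBlock` class is a stand-in for a real transformer block, not MobileLLM's architecture.

```python
import numpy as np

class ToyBlock:
    """Stand-in for a transformer block: one weight matrix plus a nonlinearity."""
    def __init__(self, d, rng):
        self.w = rng.normal(scale=d ** -0.5, size=(d, d))

    def __call__(self, x):
        return x + np.tanh(x @ self.w)          # residual update

def forward_shared(x, blocks, repeats=2):
    """Immediate block-wise weight sharing: each block is applied `repeats`
    times in a row, doubling effective depth without adding parameters."""
    for block in blocks:
        for _ in range(repeats):
            x = block(x)
    return x

rng = np.random.default_rng(0)
blocks = [ToyBlock(16, rng) for _ in range(4)]   # 4 unique blocks
x = rng.normal(size=(2, 16))
y = forward_shared(x, blocks)                    # behaves like an 8-layer stack
print(y.shape)                                   # (2, 16)
```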
https://proceedings.mlr.press/v235/liu24cf.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24cf/liu24cf.pdf
https://openreview.net/forum?id=rk4kmL8aOY
Reducing Item Discrepancy via Differentially Private Robust Embedding Alignment for Privacy-Preserving Cross Domain Recommendation
https://proceedings.mlr.press/v235/liu24cf.html
Weiming Liu, Xiaolin Zheng, Chaochao Chen, Jiahe Xu, Xinting Liao, Fan Wang, Yanchao Tan, Yew-Soon Ong
https://proceedings.mlr.press/v235/liu24cf.html
ICML 2024
Cross-Domain Recommendation (CDR) has become increasingly appealing by leveraging useful information to tackle the data sparsity problem across domains. Most of the latest CDR models assume that domain-shareable user-item information (e.g., ratings and reviews on overlapped users or items) is accessible across domains. However, these assumptions become impractical due to strict data privacy protection policies. In this paper, we propose the Reducing Item Discrepancy (RidCDR) model for solving the Privacy-Preserving Cross-Domain Recommendation (PPCDR) problem. Specifically, we aim to enhance model performance on both source and target domains without overlapped users and items while protecting data privacy. We propose a private-robust embedding alignment module in RidCDR for knowledge sharing across domains while privately avoiding negative transfer. Our empirical study on Amazon and Douban datasets demonstrates that RidCDR significantly outperforms the state-of-the-art models under the PPCDR setting without overlapped users and items.
https://proceedings.mlr.press/v235/liu24cg.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24cg/liu24cg.pdf
https://openreview.net/forum?id=Xdy9bjwHDu
On the Last-Iterate Convergence of Shuffling Gradient Methods
https://proceedings.mlr.press/v235/liu24cg.html
Zijian Liu, Zhengyuan Zhou
https://proceedings.mlr.press/v235/liu24cg.html
ICML 2024
Shuffling gradient methods are widely used in modern machine learning tasks and include three popular implementations: Random Reshuffle (RR), Shuffle Once (SO), and Incremental Gradient (IG). Compared to the empirical success, the theoretical guarantee of shuffling gradient methods was not well-understood for a long time. Until recently, the convergence rates had just been established for the average iterate for convex functions and the last iterate for strongly convex problems (using squared distance as the metric). However, when using the function value gap as the convergence criterion, existing theories cannot interpret the good performance of the last iterate in different settings (e.g., constrained optimization). To bridge this gap between practice and theory, we prove the first last-iterate convergence rates for shuffling gradient methods with respect to the objective value even without strong convexity. Our new results either (nearly) match the existing last-iterate lower bounds or are as fast as the previous best upper bounds for the average iterate.
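For concreteness, the three shuffling schemes differ only in how the component order is drawn, as in the runnable sketch below (a toy quadratic problem; all names are illustrative).

```python
import numpy as np

def shuffling_sgd(grads, x0, lr, epochs, scheme="RR", seed=0):
    """Run SGD over component gradients with one of three orderings:
    RR (reshuffle every epoch), SO (shuffle once), IG (fixed incremental order)."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = x0
    perm = rng.permutation(n) if scheme == "SO" else np.arange(n)
    for _ in range(epochs):
        if scheme == "RR":
            perm = rng.permutation(n)
        for i in perm:
            x = x - lr * grads[i](x)
    return x

# Toy problem: f(x) = mean_i (x - a_i)^2 / 2, with component gradients x - a_i.
a = np.array([1.0, 2.0, 3.0, 4.0])
grads = [lambda x, ai=ai: x - ai for ai in a]
for scheme in ("RR", "SO", "IG"):
    print(scheme, shuffling_sgd(grads, x0=0.0, lr=0.1, epochs=50, scheme=scheme))
```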
https://proceedings.mlr.press/v235/liu24ch.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ch/liu24ch.pdf
https://openreview.net/forum?id=akyElNlUVA
FedLMT: Tackling System Heterogeneity of Federated Learning via Low-Rank Model Training with Theoretical Guarantees
https://proceedings.mlr.press/v235/liu24ch.html
Jiahao Liu, Yipeng Zhou, Di Wu, Miao Hu, Mohsen Guizani, Quan Z. Sheng
https://proceedings.mlr.press/v235/liu24ch.html
ICML 2024
Federated learning (FL) is an emerging machine learning paradigm for preserving data privacy. However, diverse client hardware often has varying computation resources. Such system heterogeneity limits the participation of resource-constrained clients in FL, and hence degrades the global model accuracy. To enable heterogeneous clients to participate in and contribute to FL training, previous works tackle this problem by assigning customized sub-models to individual clients with model pruning, distillation, or low-rank based techniques. Unfortunately, the global model trained by these methods still encounters performance degradation due to heterogeneous sub-model aggregation. Besides, most methods are heuristic-based and lack convergence analysis. In this work, we propose the FedLMT framework to bridge the performance gap, by assigning clients with a homogeneous pre-factorized low-rank model to substantially reduce resource consumption without conducting heterogeneous aggregation. We theoretically prove that the convergence of the low-rank model can guarantee the convergence of the original full model. To further meet clients’ personalized resource needs, we extend FedLMT to pFedLMT, by separating model parameters into common and custom ones. Finally, extensive experiments are conducted to verify our theoretical analysis and show that FedLMT and pFedLMT outperform other baselines with much less communication and computation costs.
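A minimal sketch of the pre-factorized low-rank layer that every client would train homogeneously, assuming NumPy and a plain linear map; the exact factorization and the common/custom parameter split of pFedLMT are beyond this sketch.

```python
import numpy as np

class LowRankLinear:
    """Pre-factorized linear layer: the full-rank W is never materialized;
    every client trains the same homogeneous factors U (d_out x r) and
    V (r x d_in), shrinking both compute and communication."""
    def __init__(self, d_in, d_out, rank, rng):
        self.u = rng.normal(scale=d_out ** -0.5, size=(d_out, rank))
        self.v = rng.normal(scale=d_in ** -0.5, size=(rank, d_in))

    def __call__(self, x):                      # x: (batch, d_in)
        return x @ self.v.T @ self.u.T          # equivalent to x @ (U V).T

rng = np.random.default_rng(0)
layer = LowRankLinear(d_in=512, d_out=256, rank=16, rng=rng)
x = rng.normal(size=(4, 512))
print(layer(x).shape)                           # (4, 256)

full_params = 512 * 256
low_rank_params = 16 * (512 + 256)
print(low_rank_params / full_params)            # ~0.09 of the full parameter count
```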
https://proceedings.mlr.press/v235/liu24ci.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24ci/liu24ci.pdf
https://openreview.net/forum?id=ttnbM598vZ
Pairwise Alignment Improves Graph Domain Adaptation
https://proceedings.mlr.press/v235/liu24ci.html
Shikun Liu, Deyu Zou, Han Zhao, Pan Li
https://proceedings.mlr.press/v235/liu24ci.html
ICML 2024
Graph-based methods, pivotal for label inference over interconnected objects in many real-world applications, often encounter generalization challenges, if the graph used for model training differs significantly from the graph used for testing. This work delves into Graph Domain Adaptation (GDA) to address the unique complexities of distribution shifts over graph data, where interconnected data points experience shifts in features, labels, and in particular, connecting patterns. We propose a novel, theoretically principled method, Pairwise Alignment (Pair-Align) to counter graph structure shift by mitigating conditional structure shift (CSS) and label shift (LS). Pair-Align uses edge weights to recalibrate the influence among neighboring nodes to handle CSS and adjusts the classification loss with label weights to handle LS. Our method demonstrates superior performance in real-world applications, including node classification with region shift in social networks, and the pileup mitigation task in particle colliding experiments. For the first application, we also curate the largest dataset by far for GDA studies. Our method shows strong performance in synthetic and other existing benchmark datasets.
https://proceedings.mlr.press/v235/liu-schiaffini24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu-schiaffini24a/liu-schiaffini24a.pdf
https://openreview.net/forum?id=vl9GB3fbht
Neural Operators with Localized Integral and Differential Kernels
https://proceedings.mlr.press/v235/liu-schiaffini24a.html
Miguel Liu-Schiaffini, Julius Berner, Boris Bonev, Thorsten Kurth, Kamyar Azizzadenesheli, Anima Anandkumar
https://proceedings.mlr.press/v235/liu-schiaffini24a.html
ICML 2024
Neural operators learn mappings between function spaces, which is practical for learning solution operators of PDEs and other scientific modeling applications. Among them, the Fourier neural operator (FNO) is a popular architecture that performs global convolutions in the Fourier space. However, such global operations are often prone to over-smoothing and may fail to capture local details. In contrast, convolutional neural networks (CNN) can capture local features but are limited to training and inference at a single resolution. In this work, we present a principled approach to operator learning that can capture local features under two frameworks by learning differential operators and integral operators with locally supported kernels. Specifically, inspired by stencil methods, we prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs. To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions. Both these approaches preserve the properties of operator learning and, hence, the ability to predict at any resolution. Adding our layers to FNOs significantly improves their performance, reducing the relative L2-error by 34-72% in our experiments, which include a turbulent 2D Navier-Stokes and the spherical shallow water equations.
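The scaling argument for differential kernels can be seen in one dimension: a fixed [1, -2, 1] stencil, scaled by $1/h^2$, converges to the second derivative as the grid is refined. The sketch below only illustrates this classical fact, not the paper's layer construction.

```python
import numpy as np

def second_derivative_stencil(f, h):
    """A [1, -2, 1] convolution kernel scaled by 1/h^2 approximates d^2/dx^2;
    refining the grid (h -> 0) turns the scaled local kernel into the
    differential operator, mirroring the scaling argument in the paper."""
    kernel = np.array([1.0, -2.0, 1.0]) / h ** 2
    return np.convolve(f, kernel, mode="valid")

for n in (32, 128, 512):
    x = np.linspace(0.0, 2 * np.pi, n)
    h = x[1] - x[0]
    approx = second_derivative_stencil(np.sin(x), h)
    exact = -np.sin(x)[1:-1]
    print(n, np.abs(approx - exact).max())      # error shrinks like h^2
```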
https://proceedings.mlr.press/v235/lizaire24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lizaire24a/lizaire24a.pdf
https://openreview.net/forum?id=EsSSDjwFra
A Tensor Decomposition Perspective on Second-order RNNs
https://proceedings.mlr.press/v235/lizaire24a.html
Maude Lizaire, Michael Rizvi-Martel, Marawan Gamal, Guillaume Rabusseau
https://proceedings.mlr.press/v235/lizaire24a.html
ICML 2024
Second-order Recurrent Neural Networks (2RNNs) extend RNNs by leveraging second-order interactions for sequence modelling. These models are provably more expressive than their first-order counterparts and have connections to well-studied models from formal language theory. However, their large parameter tensor makes computations intractable. To circumvent this issue, one approach known as MIRNN consists in limiting the type of interactions used by the model. Another is to leverage tensor decomposition to diminish the parameter count. In this work, we study the model resulting from parameterizing 2RNNs using the CP decomposition, which we call CPRNN. Intuitively, the rank of the decomposition should reduce expressivity. We analyze how rank and hidden size affect model capacity and show the relationships between RNNs, 2RNNs, MIRNNs, and CPRNNs based on these parameters. We support these results empirically with experiments on the Penn Treebank dataset which demonstrate that, with a fixed parameter budget, CPRNNs outperform RNNs, 2RNNs, and MIRNNs with the right choice of rank and hidden size.
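A minimal NumPy sketch of a CP-parameterized second-order recurrence: the rank-$R$ factors replace the full $d_h \times d_h \times d_x$ weight tensor, which is never materialized. The factor names, scales, and the absence of bias or first-order terms are simplifying assumptions of this sketch.

```python
import numpy as np

def cprnn_cell(h, x, A, B, C):
    """CP-parameterized bilinear update: with T[i,j,k] = sum_r C[i,r]*A[j,r]*B[k,r],
    this computes h_new[i] = tanh(sum_{j,k} T[i,j,k] * h[j] * x[k]) without
    ever forming T, at cost O(rank * (d_h + d_x)) per output."""
    return np.tanh(C @ ((A.T @ h) * (B.T @ x)))

rng = np.random.default_rng(0)
d_h, d_x, rank = 16, 8, 4
A = rng.normal(scale=0.3, size=(d_h, rank))   # factor acting on the previous hidden state
B = rng.normal(scale=0.3, size=(d_x, rank))   # factor acting on the current input
C = rng.normal(scale=0.3, size=(d_h, rank))   # factor mapping rank space back to hidden size
h = rng.normal(scale=0.1, size=d_h)           # nonzero start (no bias term in this sketch)
for x in rng.normal(size=(5, d_x)):           # unroll a short input sequence
    h = cprnn_cell(h, x, A, B, C)
print(h.shape)                                 # (16,)
```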
https://proceedings.mlr.press/v235/loeschcke24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/loeschcke24a/loeschcke24a.pdf
https://openreview.net/forum?id=lGZUvfP2ZF
Coarse-To-Fine Tensor Trains for Compact Visual Representations
https://proceedings.mlr.press/v235/loeschcke24a.html
Sebastian Bugge Loeschcke, Dan Wang, Christian Munklinde Leth-Espensen, Serge Belongie, Michael Kastoryano, Sagie Benaim
https://proceedings.mlr.press/v235/loeschcke24a.html
ICML 2024
The ability to learn compact, high-quality, and easy-to-optimize representations for visual data is paramount to many applications such as novel view synthesis and 3D reconstruction. Recent work has shown substantial success in using tensor networks to design such compact and high-quality representations. However, the ability to optimize tensor-based representations, and in particular, the highly compact tensor train representation, is still lacking. This has prevented practitioners from deploying the full potential of tensor networks for visual data. To this end, we propose Prolongation Upsampling Tensor Train (PuTT), a novel method for learning tensor train representations in a coarse-to-fine manner. Our method involves the prolonging or 'upsampling' of a learned tensor train representation, creating a sequence of 'coarse-to-fine' tensor trains that are incrementally refined. We evaluate our representation along three axes: (1) compression, (2) denoising capability, and (3) image completion capability. To assess these axes, we consider the tasks of image fitting, 3D fitting, and novel view synthesis, where our method shows an improved performance compared to state-of-the-art tensor-based methods.
https://proceedings.mlr.press/v235/loffredo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/loffredo24a/loffredo24a.pdf
https://openreview.net/forum?id=rHylzxK3HU
Restoring balance: principled under/oversampling of data for optimal classification
https://proceedings.mlr.press/v235/loffredo24a.html
Emanuele Loffredo, Mauro Pastore, Simona Cocco, Remi Monasson
https://proceedings.mlr.press/v235/loffredo24a.html
ICML 2024
Class imbalance in real-world data poses a common bottleneck for machine learning tasks, since achieving good generalization on under-represented examples is often challenging. Mitigation strategies, such as under or oversampling the data depending on their abundances, are routinely proposed and tested empirically, but how they should adapt to the data statistics remains poorly understood. In this work, we determine exact analytical expressions of the generalization curves in the high-dimensional regime for linear classifiers (Support Vector Machines). We also provide a sharp prediction of the effects of under/oversampling strategies depending on class imbalance, first and second moments of the data, and the metrics of performance considered. We show that mixed strategies involving under and oversampling of data lead to performance improvement. Through numerical experiments, we show the relevance of our theoretical predictions on real datasets, on deeper architectures and with sampling strategies based on unsupervised probabilistic models.
https://proceedings.mlr.press/v235/loftus24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/loftus24a/loftus24a.pdf
https://openreview.net/forum?id=dBMLtuKH01
Position: The Causal Revolution Needs Scientific Pragmatism
https://proceedings.mlr.press/v235/loftus24a.html
Joshua R. Loftus
https://proceedings.mlr.press/v235/loftus24a.html
ICML 2024
Causal models and methods have great promise, but their progress has been stalled. Proposals using causality get squeezed between two opposing worldviews. Scientific perfectionism–an insistence on only using “correct” models–slows the adoption of causal methods in knowledge generating applications. Pushing in the opposite direction, the academic discipline of computer science prefers algorithms with no or few assumptions, and technologies based on automation and scalability are often selected for economic and business applications. We argue that these system-centric inductive biases should be replaced with a human-centric philosophy we refer to as scientific pragmatism. The machine learning community must strike the right balance to make space for the causal revolution to prosper.
https://proceedings.mlr.press/v235/long24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/long24a/long24a.pdf
https://openreview.net/forum?id=da7MMwICjC
Reparameterized Importance Sampling for Robust Variational Bayesian Neural Networks
https://proceedings.mlr.press/v235/long24a.html
Yunfei Long, Zilin Tian, Liguo Zhang, Huosheng Xu
https://proceedings.mlr.press/v235/long24a.html
ICML 2024
Mean-field variational inference (MFVI) methods provide computationally cheap approximations to the posterior of Bayesian Neural Networks (BNNs) when compared to alternatives like MCMC. However, applying MFVI to BNNs encounters limitations due to the Monte Carlo sampling problem. This problem stems from two main issues. First, most samples do not accurately represent the most probable weights. Second, random sampling from variational distributions introduces high variance in gradient estimates, which can hinder the optimization process, leading to slow convergence or even failure. In this paper, we introduce a novel sampling method called Reparameterized Importance Sampling (RIS) to estimate the first moment in neural networks, reducing variance during feed-forward propagation. We begin by analyzing the generalized form of the optimal proposal distribution and presenting an inexpensive approximation. Next, we describe the sampling process from the proposal distribution as a transformation that combines exogenous randomness with the variational parameters. Our experimental results demonstrate the effectiveness of the proposed RIS method in three critical aspects: improved convergence, enhanced predictive performance, and successful uncertainty estimation for out-of-distribution data.
https://proceedings.mlr.press/v235/longpre24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/longpre24a/longpre24a.pdf
https://openreview.net/forum?id=dLojMSgSFW
Position: A Safe Harbor for AI Evaluation and Red Teaming
https://proceedings.mlr.press/v235/longpre24a.html
Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng Xin Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Alex Pentland, Arvind Narayanan, Percy Liang, Peter Henderson
https://proceedings.mlr.press/v235/longpre24a.html
ICML 2024
Independent evaluation and red teaming are critical for identifying the risks posed by generative AI systems. However, the terms of service and enforcement strategies used by prominent AI companies to deter model misuse create disincentives for good-faith safety evaluations. This causes some researchers to fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal. Although some companies offer researcher access programs, they are an inadequate substitute for independent research access, as they have limited community representation, receive inadequate funding, and lack independence from corporate incentives. We propose that major generative AI developers commit to providing a legal and technical safe harbor, protecting public interest safety research and removing the threat of account suspensions or legal reprisal. These proposals emerged from our collective experience conducting safety, privacy, and trustworthiness research on generative AI systems, where norms and incentives could be better aligned with public interests, without exacerbating model misuse. We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
https://proceedings.mlr.press/v235/longpre24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/longpre24b/longpre24b.pdf
https://openreview.net/forum?id=3hSTecKy1b
Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?
https://proceedings.mlr.press/v235/longpre24b.html
Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Katy Ilonka Gero, Alex Pentland, Jad Kabbara
https://proceedings.mlr.press/v235/longpre24b.html
ICML 2024
New capabilities in foundation models are owed in large part to massive, widely-sourced, and under-documented training data collections. Existing practices in data collection have led to challenges in tracing authenticity, verifying consent, preserving privacy, addressing representation and bias, respecting copyright, and overall developing ethical and trustworthy foundation models. In response, regulation is emphasizing the need for training data transparency to understand foundation models’ limitations. Based on a large-scale analysis of the foundation model training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible foundation model development practices. We examine the current shortcomings of common tools for tracing data authenticity, consent, and documentation, and outline how policymakers, developers, and data creators can facilitate responsible foundation model development by adopting universal data provenance standards.
https://proceedings.mlr.press/v235/lonnqvist24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lonnqvist24a/lonnqvist24a.pdf
https://openreview.net/forum?id=tSjyKR8WIf
Latent Noise Segmentation: How Neural Noise Leads to the Emergence of Segmentation and Grouping
https://proceedings.mlr.press/v235/lonnqvist24a.html
Ben Lonnqvist, Zhengqing Wu, Michael Herzog
https://proceedings.mlr.press/v235/lonnqvist24a.html
ICML 2024
Humans are able to segment images effortlessly without supervision using perceptual grouping. Here, we propose a counter-intuitive computational approach to solving unsupervised perceptual grouping and segmentation: that they arise because of neural noise, rather than in spite of it. We (1) mathematically demonstrate that under realistic assumptions, neural noise can be used to separate objects from each other; (2) that adding noise in a DNN enables the network to segment images even though it was never trained on any segmentation labels; and (3) that segmenting objects using noise results in segmentation performance that aligns with the perceptual grouping phenomena observed in humans, and is sample-efficient. We introduce the Good Gestalt (GG) datasets — six datasets designed to specifically test perceptual grouping, and show that our DNN models reproduce many important phenomena in human perception, such as illusory contours, closure, continuity, proximity, and occlusion. Finally, we (4) show that our model improves performance on our GG datasets compared to other tested unsupervised models by $24.9$%. Together, our results suggest a novel unsupervised segmentation method requiring few assumptions, a new explanation for the formation of perceptual grouping, and a novel potential benefit of neural noise.
https://proceedings.mlr.press/v235/loo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/loo24a/loo24a.pdf
https://openreview.net/forum?id=0FWPKHMCSc
Large Scale Dataset Distillation with Domain Shift
https://proceedings.mlr.press/v235/loo24a.html
Noel Loo, Alaa Maalouf, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
https://proceedings.mlr.press/v235/loo24a.html
ICML 2024
Dataset Distillation seeks to summarize a large dataset by generating a reduced set of synthetic samples. While there has been much success at distilling small datasets such as CIFAR-10 on smaller neural architectures, Dataset Distillation methods fail to scale to larger high-resolution datasets and architectures. In this work, we introduce Dataset Distillation with Domain Shift (D3S), a scalable distillation algorithm, made by reframing the dataset distillation problem as a domain shift one. In doing so, we derive a universal bound on the distillation loss, and provide a method for efficiently approximately optimizing it. We achieve state-of-the-art results on Tiny-ImageNet, ImageNet-1k, and ImageNet-21K over a variety of recently proposed baselines, including high cross-architecture generalization. Additionally, our ablation studies provide lessons on the importance of validation-time hyperparameters on distillation performance, motivating the need for standardization.
https://proceedings.mlr.press/v235/lopardo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lopardo24a/lopardo24a.pdf
https://openreview.net/forum?id=wnkC5T11Z9
Attention Meets Post-hoc Interpretability: A Mathematical Perspective
https://proceedings.mlr.press/v235/lopardo24a.html
Gianluigi Lopardo, Frederic Precioso, Damien Garreau
https://proceedings.mlr.press/v235/lopardo24a.html
ICML 2024
Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights on the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.
https://proceedings.mlr.press/v235/lotfi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lotfi24a/lotfi24a.pdf
https://openreview.net/forum?id=6Kg9p8URlj
Non-Vacuous Generalization Bounds for Large Language Models
https://proceedings.mlr.press/v235/lotfi24a.html
Sanae Lotfi, Marc Anton Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson
https://proceedings.mlr.press/v235/lotfi24a.html
ICML 2024
Modern language models can contain billions of parameters, raising the question of whether they can generalize beyond the training data or simply parrot their training corpora. We provide the first non-vacuous generalization bounds for pretrained large language models (LLMs), indicating that language models are capable of discovering regularities that generalize to unseen data. In particular, we derive a compression bound that is valid for the unbounded log-likelihood loss using prediction smoothing, and we extend the bound to handle subsampling, making bound computation 900 times faster on massive datasets. To achieve the extreme level of compression required for non-vacuous bounds, we devise SubLoRA, a simple low-dimensional nonlinear parameterization that leads to non-vacuous generalization bounds for very large models with up to 849 million parameters. Finally, we use our bounds to understand LLM generalization and find that larger models have better generalization bounds and are more compressible than smaller models.
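One common form of prediction smoothing that makes the token-level log-loss bounded (a prerequisite for compression-style bounds) is mixing the model's predictive distribution with a uniform distribution over a vocabulary of size $V$; the paper's exact construction and constants may differ:

$$
p^{\alpha}_{\theta}(y \mid x) \;=\; (1-\alpha)\,p_{\theta}(y \mid x) + \frac{\alpha}{V},
\qquad
-\log p^{\alpha}_{\theta}(y \mid x) \;\le\; \log\frac{V}{\alpha}.
$$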
https://proceedings.mlr.press/v235/lou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lou24a/lou24a.pdf
https://openreview.net/forum?id=CNicRIVIPA
Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution
https://proceedings.mlr.press/v235/lou24a.html
Aaron Lou, Chenlin Meng, Stefano Ermon
https://proceedings.mlr.press/v235/lou24a.html
ICML 2024
Despite their groundbreaking performance for many generative modeling tasks, diffusion models have fallen short on discrete data domains such as natural language. Crucially, standard diffusion models rely on the well-established theory of score matching, but efforts to generalize this to discrete structures have not yielded the same empirical gains. In this work, we bridge this gap by proposing score entropy, a novel loss that naturally extends score matching to discrete spaces, integrates seamlessly to build discrete diffusion models, and significantly boosts performance. Experimentally, we test our Score Entropy Discrete Diffusion models (SEDD) on standard language modeling tasks. For comparable model sizes, SEDD beats existing language diffusion paradigms (reducing perplexity by $25$-$75$%) and is competitive with autoregressive models, in particular outperforming GPT-2. Furthermore, compared to autoregressive models, SEDD generates faithful text without requiring distribution annealing techniques like temperature scaling (around $6$-$8\times$ better generative perplexity than un-annealed GPT-2), can trade compute and quality (similar quality with $32\times$ fewer network evaluations), and enables controllable infilling (matching nucleus sampling quality while enabling other strategies besides left to right prompting).
https://proceedings.mlr.press/v235/lowy24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lowy24a/lowy24a.pdf
https://openreview.net/forum?id=NFEJQn7vX0
Optimal Differentially Private Model Training with Public Data
https://proceedings.mlr.press/v235/lowy24a.html
Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn
https://proceedings.mlr.press/v235/lowy24a.html
ICML 2024
Differential privacy (DP) ensures that training a machine learning model does not leak private data. In practice, we may have access to auxiliary public data that is free of privacy concerns. In this work, we assume access to a given amount of public data and settle the following fundamental open questions: 1. What is the optimal (worst-case) error of a DP model trained over a private data set while having access to side public data? 2. How can we harness public data to improve DP model training in practice? We consider these questions in both the local and central models of pure and approximate DP. To answer the first question, we prove tight (up to log factors) lower and upper bounds that characterize the optimal error rates of three fundamental problems: mean estimation, empirical risk minimization, and stochastic convex optimization. We show that the optimal error rates can be attained (up to log factors) by either discarding private data and training a public model, or treating public data like it is private and using an optimal DP algorithm. To address the second question, we develop novel algorithms that are "even more optimal" (i.e. better constants) than the asymptotically optimal approaches described above. For local DP mean estimation, our algorithm is optimal including constants. Empirically, our algorithms show benefits over the state-of-the-art.
https://proceedings.mlr.press/v235/lowy24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lowy24b/lowy24b.pdf
https://openreview.net/forum?id=XoSF46Pc2e
How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization
https://proceedings.mlr.press/v235/lowy24b.html
Andrew Lowy, Jonathan Ullman, Stephen Wright
https://proceedings.mlr.press/v235/lowy24b.html
ICML 2024
We provide a simple and flexible framework for designing differentially private algorithms to find approximate stationary points of non-convex loss functions. Our framework is based on using a private approximate risk minimizer to "warm start" another private algorithm for finding stationary points. We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions. First, we obtain improved rates for finding stationary points of smooth non-convex empirical loss functions. Second, we specialize to quasar-convex functions, which generalize star-convex functions and arise in learning dynamical systems and training some neural nets. We achieve the optimal rate for this class. Third, we give an optimal algorithm for finding stationary points of functions satisfying the Kurdyka-Lojasiewicz (KL) condition. For example, over-parameterized neural networks often satisfy this condition. Fourth, we provide new state-of-the-art rates for stationary points of non-convex population loss functions. Fifth, we obtain improved rates for non-convex generalized linear models. A modification of our algorithm achieves nearly the same rates for second-order stationary points of functions with Lipschitz Hessian, improving over the previous state-of-the-art for each of the above problems.
https://proceedings.mlr.press/v235/lu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24a/lu24a.pdf
https://openreview.net/forum?id=Lc1HlMo77m
Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models
https://proceedings.mlr.press/v235/lu24a.html
Zhihe Lu, Jiawang Bai, Xin Li, Zeyu Xiao, Xinchao Wang
https://proceedings.mlr.press/v235/lu24a.html
ICML 2024
Fine-tuning pre-trained vision-language models (VLMs), e.g., CLIP, for the open-world generalization has gained increasing popularity due to its practical value. However, performance advancements are limited when relying solely on intricate algorithmic designs for a single model, even one exhibiting strong performance, e.g., CLIP-ViT-B/16. This paper, for the first time, explores the collaborative potential of leveraging much weaker VLMs to enhance the generalization of a robust single model. The affirmative findings motivate us to address the generalization problem from a novel perspective, i.e., ensemble of pre-trained VLMs. We introduce three customized ensemble strategies, each tailored to one specific scenario. Firstly, we introduce the zero-shot ensemble, automatically adjusting the logits of different models based on their confidence when only pre-trained VLMs are available. Furthermore, for scenarios with extra few-shot samples, we propose the training-free and tuning ensemble, offering flexibility based on the availability of computing resources. The code is available at https://github.com/zhiheLu/Ensemble_VLM.git.
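A minimal sketch of the zero-shot ensemble idea: confidence-weighted averaging of per-model class logits. The confidence measure (mean max softmax probability) and the weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def zero_shot_ensemble(logits_per_model):
    """Average class logits from several VLMs, weighting each model by its own confidence.
    Confidence here is the mean max softmax probability -- an illustrative choice."""
    weighted, weights = [], []
    for logits in logits_per_model:                      # logits: (batch, num_classes)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        conf = probs.max(axis=1).mean()                  # scalar confidence per model
        weighted.append(conf * logits)
        weights.append(conf)
    return sum(weighted) / sum(weights)

rng = np.random.default_rng(0)
strong = rng.normal(0, 3.0, size=(4, 10))   # e.g. a strong VLM's logits
weak = rng.normal(0, 1.0, size=(4, 10))     # a much weaker VLM
print(zero_shot_ensemble([strong, weak]).argmax(axis=1))
```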
https://proceedings.mlr.press/v235/lu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24b/lu24b.pdf
https://openreview.net/forum?id=maVIKlGqr7
HumanTOMATO: Text-aligned Whole-body Motion Generation
https://proceedings.mlr.press/v235/lu24b.html
Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang, Lei Zhang, Heung-Yeung Shum
https://proceedings.mlr.press/v235/lu24b.html
ICML 2024
This work targets a novel text-driven whole-body motion generation task, which takes a given textual description as input and aims at generating high-quality, diverse, and coherent facial expressions, hand gestures, and body motions simultaneously. Previous works on text-driven motion generation tasks mainly have two limitations: they ignore the key role of fine-grained hand and face controlling in vivid whole-body motion generation, and lack a good alignment between text and motion. To address such limitations, we propose a Text-aligned whOle-body Motion generATiOn framework, named HumanTOMATO, which is the first attempt to our knowledge towards applicable holistic motion generation in this research area. To tackle this challenging task, our solution includes two key designs: (1) a Holistic Hierarchical VQ-VAE (aka H${}^{2}$VQ) and a Hierarchical-GPT for fine-grained body and hand motion reconstruction and generation with two structured codebooks; and (2) a pre-trained text-motion-alignment model to help generated motion align with the input textual description explicitly. Comprehensive experiments verify that our model has significant advantages in both the quality of generated motions and their alignment with text.
https://proceedings.mlr.press/v235/lu24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24c/lu24c.pdf
https://openreview.net/forum?id=WPfYVdJHPk
Position: Exploring the Robustness of Pipeline-Parallelism-Based Decentralized Training
https://proceedings.mlr.press/v235/lu24c.html
Lin Lu, Chenxi Dai, Wangcheng Tao, Binhang Yuan, Yanan Sun, Pan Zhou
https://proceedings.mlr.press/v235/lu24c.html
ICML 2024
Modern machine learning applications increasingly demand greater computational resources for training large models. Decentralized training has emerged as an effective means to democratize this technology. However, the potential threats associated with this approach remain inadequately discussed, posing a hurdle to the development of decentralized training infrastructures. This paper aims to initiate discussion towards this end by exploring the robustness of decentralized training from three primary perspectives. Firstly, we articulate our position on establishing robust decentralized training by outlining potential threats and the corresponding countermeasures. Secondly, we illustrate a nascent poisoning attack targeting decentralized training frameworks, easily executable by malicious stages. To mitigate this security threat and ensure efficient training, we propose a robust training framework, integrating a 100% detection strategy and efficient training mechanisms. Finally, we demonstrate the severity of the proposed attack and the effectiveness of our robust training framework. This position paper emphasizes the urgency of exploring the robustness of decentralized training and proposes a feasible solution. The code is available at https://github.com/dcx001016/pipeline_attack.
https://proceedings.mlr.press/v235/lu24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24d/lu24d.pdf
https://openreview.net/forum?id=1lDAGDe0UR
CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables
https://proceedings.mlr.press/v235/lu24d.html
Jiecheng Lu, Xu Han, Yan Sun, Shihao Yang
https://proceedings.mlr.press/v235/lu24d.html
ICML 2024
For Multivariate Time Series Forecasting (MTSF), recent deep learning applications show that univariate models frequently outperform multivariate ones. To address the deficiency in multivariate models, we introduce a method to Construct Auxiliary Time Series (CATS) that functions like a 2D temporal-contextual attention mechanism, which generates Auxiliary Time Series (ATS) from Original Time Series (OTS) to effectively represent and incorporate inter-series relationships for forecasting. Key principles of ATS—continuity, sparsity, and variability—are identified and implemented through different modules. Even with a basic 2-layer MLP as the core predictor, CATS achieves state-of-the-art performance, significantly reducing complexity and parameters compared to previous multivariate models, marking it as an efficient and transferable MTSF solution.
https://proceedings.mlr.press/v235/lu24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24e/lu24e.pdf
https://openreview.net/forum?id=mUSPhG4uDW
WebLINX: Real-World Website Navigation with Multi-Turn Dialogue
https://proceedings.mlr.press/v235/lu24e.html
Xing Han Lu, Zdeněk Kasner, Siva Reddy
https://proceedings.mlr.press/v235/lu24e.html
ICML 2024
We propose the problem of conversational web navigation, where a digital agent controls a web browser and follows user instructions to solve real-world tasks in a multi-turn dialogue fashion. To support this problem, we introduce WEBLINX - a large-scale benchmark of 100K interactions across 2300 expert demonstrations of conversational web navigation. Our benchmark covers a broad range of patterns on over 150 real-world websites and can be used to train and evaluate agents in diverse scenarios. Due to the magnitude of information present, Large Language Models (LLMs) cannot process entire web pages in real-time. To solve this bottleneck, we design a retrieval-inspired model that efficiently prunes HTML pages by ranking relevant elements. We use the selected elements, along with screenshots and action history, to assess a variety of models for their ability to replicate human behavior when navigating the web. Our experiments span from small text-only to proprietary multimodal LLMs. We find that smaller finetuned decoders surpass not only the best zero-shot LLMs (including GPT-4V) but also larger finetuned multimodal models which were explicitly pretrained on screenshots. However, all finetuned models struggle to generalize to unseen websites. Our findings highlight the need for large multimodal models that can generalize to novel settings.
https://proceedings.mlr.press/v235/lu24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24f/lu24f.pdf
https://openreview.net/forum?id=9HdQr68Zyl
Open-Domain Text Evaluation via Contrastive Distribution Methods
https://proceedings.mlr.press/v235/lu24f.html
Sidi Lu, Hongyi Liu, Asli Celikyilmaz, Tianlu Wang, Nanyun Peng
https://proceedings.mlr.press/v235/lu24f.html
ICML 2024
Recent advancements in open-domain text generation, driven by the power of large pre-trained language models (LLMs), have demonstrated remarkable performance. However, assessing these models’ generation quality remains a challenge. In this paper, we introduce a novel method for evaluating open-domain text generation called Contrastive Distribution Methods (CDM). Leveraging the connection between increasing model parameters and enhanced LLM performance, CDM creates a mapping from the contrast of two probabilistic distributions – one known to be superior to the other – to quality measures. We investigate CDM for open-domain text generation evaluation under two paradigms: 1) Generative CDM, which harnesses the contrast of two language models’ distributions to generate synthetic examples for training discriminator-based metrics; 2) Discriminative CDM, which directly uses distribution disparities between two language models for evaluation. Our experiments on coherence evaluation for multi-turn dialogue and commonsense evaluation for controllable generation demonstrate that CDM correlates better with human judgment than existing automatic evaluation metrics, highlighting the strong performance and generalizability of our approach.
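A toy sketch of the discriminative idea: score a candidate text by the disparity between a stronger and a weaker language model's log-likelihoods. The per-token log-probabilities and the subtraction-based scoring rule below are placeholders, not the paper's exact metric.

```python
import numpy as np

def sequence_logprob(token_logprobs):
    """Sum of per-token log-probabilities assigned to a candidate text by one LM."""
    return float(np.sum(token_logprobs))

def discriminative_cdm_score(logprobs_strong, logprobs_weak):
    """Contrast two models' likelihoods: text that the stronger model prefers much more
    than the weaker one is scored higher (an illustrative scoring rule)."""
    return sequence_logprob(logprobs_strong) - sequence_logprob(logprobs_weak)

# Dummy per-token log-probs standing in for two LMs scoring the same candidate response.
strong_lm = np.log([0.4, 0.3, 0.5, 0.35])
weak_lm = np.log([0.2, 0.1, 0.3, 0.15])
print(discriminative_cdm_score(strong_lm, weak_lm))
```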
https://proceedings.mlr.press/v235/lu24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24g/lu24g.pdf
https://openreview.net/forum?id=HO0g6cHVZx
EiG-Search: Generating Edge-Induced Subgraphs for GNN Explanation in Linear Time
https://proceedings.mlr.press/v235/lu24g.html
Shengyao Lu, Bang Liu, Keith G Mills, Jiao He, Di Niu
https://proceedings.mlr.press/v235/lu24g.html
ICML 2024
Understanding and explaining the predictions of Graph Neural Networks (GNNs) is crucial for enhancing their safety and trustworthiness. Subgraph-level explanations are gaining attention for their intuitive appeal. However, most existing subgraph-level explainers face efficiency challenges in explaining GNNs due to complex search processes. The key challenge is to find a balance between intuitiveness and efficiency while ensuring transparency. Additionally, these explainers usually induce subgraphs by nodes, which may introduce less-intuitive disconnected nodes in the subgraph-level explanations or omit many important subgraph structures. In this paper, we reveal that inducing subgraph explanations by edges is more comprehensive than other subgraph inducing techniques. We also emphasize the need to determine the subgraph explanation size for each data instance, as different data instances may involve different important substructures. Building upon these considerations, we introduce a training-free approach, named EiG-Search. We employ an efficient linear-time search algorithm over the edge-induced subgraphs, where the edges are ranked by an enhanced gradient-based importance. We conduct extensive experiments on a total of seven datasets, demonstrating its superior performance and efficiency both quantitatively and qualitatively over the leading baselines.
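A rough sketch of the linear-time search loop described in the abstract: rank edges by an importance score, then sweep over edge-induced prefix subgraphs and keep the best-scoring one, so the explanation size is chosen per instance. The importance values and the scoring function below are dummy placeholders, not the paper's gradient-based saliency or fidelity measure.

```python
import numpy as np

def eig_search_sketch(edges, edge_importance, subgraph_score):
    """Rank edges by importance, then linearly sweep prefix subgraphs E[:1], E[:2], ...
    and return the edge-induced subgraph with the best score."""
    order = np.argsort(-np.asarray(edge_importance))
    ranked = [edges[i] for i in order]
    best, best_score = None, -np.inf
    for k in range(1, len(ranked) + 1):
        s = subgraph_score(ranked[:k])
        if s > best_score:
            best, best_score = ranked[:k], s
    return best, best_score

# Toy example: 5 edges, hypothetical importance scores, and a dummy scoring function
# standing in for the GNN's prediction fidelity on the induced subgraph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
importance = [0.9, 0.1, 0.7, 0.2, 0.05]
score = lambda sub: len(sub) * 0.5 - 0.1 * len(sub) ** 2   # placeholder trade-off curve
print(eig_search_sketch(edges, importance, score))
```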
https://proceedings.mlr.press/v235/lu24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24h/lu24h.pdf
https://openreview.net/forum?id=SyY7ScNpGL
Rethinking Transformers in Solving POMDPs
https://proceedings.mlr.press/v235/lu24h.html
Chenhao Lu, Ruizhe Shi, Yuyao Liu, Kaizhe Hu, Simon Shaolei Du, Huazhe Xu
https://proceedings.mlr.press/v235/lu24h.html
ICML 2024
Sequential decision-making algorithms such as reinforcement learning (RL) in real-world scenarios inevitably face environments with partial observability. This paper scrutinizes the effectiveness of a popular architecture, namely Transformers, in Partially Observable Markov Decision Processes (POMDPs) and reveals its theoretical limitations. We establish that regular languages, which Transformers struggle to model, are reducible to POMDPs. This poses a significant challenge for Transformers in learning POMDP-specific inductive biases, due to their lack of inherent recurrence found in other models like RNNs. This paper casts doubt on the prevalent belief in Transformers as sequence models for RL and proposes to introduce a point-wise recurrent structure. The Deep Linear Recurrent Unit (LRU) emerges as a well-suited alternative for Partially Observable RL, with empirical results highlighting the sub-optimal performance of the Transformer and considerable strength of LRU.
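For readers unfamiliar with the recurrent alternative mentioned above, here is a minimal, real-valued stand-in for a linear recurrent unit step. The actual LRU uses a complex-valued, stability-parameterized diagonal recurrence, so the shapes and parameterization below are simplifications.

```python
import numpy as np

def lru_scan(x, lam, B, C):
    """Element-wise linear recurrence h_t = lam * h_{t-1} + B x_t, output y_t = C h_t.
    Unlike attention, the hidden state carries information forward across time steps."""
    T = x.shape[0]
    h = np.zeros(lam.shape[0])
    ys = []
    for t in range(T):
        h = lam * h + B @ x[t]        # diagonal recurrence over the hidden state
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 4))                       # a length-16 observation sequence
lam = np.full(8, 0.95)                             # decay rates, |lam| < 1 for stability
B, C = rng.normal(size=(8, 4)) * 0.1, rng.normal(size=(3, 8)) * 0.1
print(lru_scan(x, lam, B, C).shape)                # (16, 3)
```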
https://proceedings.mlr.press/v235/lu24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24i/lu24i.pdf
https://openreview.net/forum?id=lsavZkUjFZ
CauDiTS: Causal Disentangled Domain Adaptation of Multivariate Time Series
https://proceedings.mlr.press/v235/lu24i.html
Junxin Lu, Shiliang Sun
https://proceedings.mlr.press/v235/lu24i.html
ICML 2024
Unsupervised domain adaptation of multivariate time series aims to train a model to adapt its classification ability from a labeled source domain to an unlabeled target domain, where there are differences in the distribution between domains. Existing methods extract domain-invariant features directly via a shared feature extractor, neglecting the exploration of the underlying causal patterns, which undermines their reliability, especially in complex multivariate dynamic systems. To address this problem, we propose CauDiTS, an innovative framework for unsupervised domain adaptation of multivariate time series. CauDiTS adopts an adaptive rationale disentangler to disentangle domain-common causal rationales and domain-specific correlations from variable interrelationships. The stability of causal rationales across domains is vital for filtering domain-specific perturbations and facilitating the extraction of domain-invariant representations. Moreover, we promote the cross-domain consistency of intra-class causal rationales employing the learning strategies of causal prototype consistency and domain-intervention causality invariance. CauDiTS is evaluated on four benchmark datasets, demonstrating its effectiveness and outperforming state-of-the-art methods.
https://proceedings.mlr.press/v235/lu24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24j/lu24j.pdf
https://openreview.net/forum?id=181hXof7ho
NeWRF: A Deep Learning Framework for Wireless Radiation Field Reconstruction and Channel Prediction
https://proceedings.mlr.press/v235/lu24j.html
Haofan Lu, Christopher Vattheuer, Baharan Mirzasoleiman, Omid Abari
https://proceedings.mlr.press/v235/lu24j.html
ICML 2024
We present NeWRF, a novel deep-learning-based framework for predicting wireless channels. Wireless channel prediction is a long-standing problem in the wireless community and is a key technology for improving the coverage of wireless network deployments. Today, a wireless deployment is evaluated by a site survey which is a cumbersome process requiring an experienced engineer to perform extensive channel measurements. To reduce the cost of site surveys, we develop NeWRF, which is based on recent advances in Neural Radiance Fields (NeRF). NeWRF trains a neural network model with a sparse set of channel measurements, and predicts the wireless channel accurately at any location in the site. We introduce a series of techniques that integrate wireless propagation properties into the NeRF framework to account for the fundamental differences between the behavior of light and wireless signals. We conduct extensive evaluations of our framework and show that our approach can accurately predict channels at unvisited locations with significantly lower measurement density than prior state-of-the-art.
https://proceedings.mlr.press/v235/lu24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24k/lu24k.pdf
https://openreview.net/forum?id=jZVen2JguY
FiT: Flexible Vision Transformer for Diffusion Model
https://proceedings.mlr.press/v235/lu24k.html
Zeyu Lu, Zidong Wang, Di Huang, Chengyue Wu, Xihui Liu, Wanli Ouyang, Lei Bai
https://proceedings.mlr.press/v235/lu24k.html
ICML 2024
Existing diffusion models, such as Diffusion Transformers, often face challenges when processing image resolutions outside of their trained domain. To overcome this limitation, we present the Flexible Vision Transformer (FiT), a transformer architecture specifically designed for generating images with unrestricted resolutions and aspect ratios. Unlike traditional methods that perceive images as static-resolution grids, FiT conceptualizes images as sequences of dynamically-sized tokens. This perspective enables a flexible training strategy that effortlessly adapts to diverse aspect ratios during both training and inference phases, thus promoting resolution generalization and eliminating biases induced by image cropping. Enhanced by a meticulously adjusted network structure and the integration of training-free extrapolation techniques, FiT exhibits remarkable flexibility in resolution extrapolation generation. Comprehensive experiments demonstrate the exceptional performance of FiT across a broad range of resolutions. Repository available at https://github.com/whlzy/FiT.
https://proceedings.mlr.press/v235/lu24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24l/lu24l.pdf
https://openreview.net/forum?id=pz4B2kHVKo
Probabilistic Routing for Graph-Based Approximate Nearest Neighbor Search
https://proceedings.mlr.press/v235/lu24l.html
Kejing Lu, Chuan Xiao, Yoshiharu Ishikawa
https://proceedings.mlr.press/v235/lu24l.html
ICML 2024
Approximate nearest neighbor search (ANNS) in high-dimensional spaces is a pivotal challenge in the field of machine learning. In recent years, graph-based methods have emerged as the superior approach to ANNS, establishing a new state of the art. Although various optimizations for graph-based ANNS have been introduced, they predominantly rely on heuristic methods that lack formal theoretical backing. This paper aims to enhance routing within graph-based ANNS by introducing a method that offers a probabilistic guarantee when exploring a node’s neighbors in the graph. We formulate the problem as probabilistic routing and develop two baseline strategies by incorporating locality-sensitive techniques. Subsequently, we introduce PEOs, a novel approach that efficiently identifies which neighbors in the graph should be considered for exact distance computation, thus significantly improving efficiency in practice. Our experiments demonstrate that equipping PEOs can increase throughput on a commonly utilized graph index (HNSW) by a factor of 1.6 to 2.5, and its efficiency consistently outperforms the leading-edge routing technique by 1.1 to 1.4 times. The code and datasets used for our evaluations are publicly accessible at https://github.com/ICML2024-code/PEOs.
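A simplified sketch of the routing decision: use a cheap estimate of each neighbor's distance to the query and spend an exact distance computation only on neighbors that plausibly improve the current best. The random-projection estimator and the acceptance rule are stand-ins, not the PEOs test itself.

```python
import numpy as np

def probabilistic_route(query, neighbors, best_dist, proj, threshold=0.2):
    """Return indices of neighbors worth an exact distance computation.
    A low-dimensional random projection gives a cheap distance estimate; neighbors whose
    estimated distance is not clearly worse than best_dist are kept (illustrative rule)."""
    q_p = proj @ query
    keep = []
    for i, v in enumerate(neighbors):
        est = np.linalg.norm(proj @ v - q_p)           # cheap estimate of ||v - q||
        if est < best_dist * (1.0 + threshold):        # plausibly improving -> verify exactly
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
d, m = 128, 20
query = rng.normal(size=d)
neighbors = rng.normal(size=(m, d))
proj = rng.normal(size=(16, d)) / np.sqrt(16)          # Johnson-Lindenstrauss-style projection
best_dist = np.linalg.norm(neighbors[0] - query)
print(probabilistic_route(query, neighbors, best_dist, proj))
```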
https://proceedings.mlr.press/v235/lu24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24m/lu24m.pdf
https://openreview.net/forum?id=q5Bg858Hef
Disguised Copyright Infringement of Latent Diffusion Models
https://proceedings.mlr.press/v235/lu24m.html
Yiwei Lu, Matthew Y. R. Yang, Zuoqiu Liu, Gautam Kamath, Yaoliang Yu
https://proceedings.mlr.press/v235/lu24m.html
ICML 2024
Copyright infringement may occur when a generative model produces samples substantially similar to some copyrighted data that it had access to during the training phase. The notion of access usually refers to including copyrighted samples directly in the training dataset, which one may inspect to identify an infringement. We argue that such visual auditing largely overlooks a concealed copyright infringement, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training Latent Diffusion Models on it. Such disguises only require indirect access to the copyrighted material and cannot be visually distinguished, thus easily circumventing the current auditing tools. In this paper, we provide a better understanding of such disguised copyright infringement by uncovering the disguises generation algorithm, the revelation of the disguises, and importantly, how to detect them to augment the existing toolbox. Additionally, we introduce a broader notion of acknowledgment for comprehending such indirect access.
https://proceedings.mlr.press/v235/lu24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24n/lu24n.pdf
https://openreview.net/forum?id=0HUInAsdoo
OxyGenerator: Reconstructing Global Ocean Deoxygenation Over a Century with Deep Learning
https://proceedings.mlr.press/v235/lu24n.html
Bin Lu, Ze Zhao, Luyu Han, Xiaoying Gan, Yuntao Zhou, Lei Zhou, Luoyi Fu, Xinbing Wang, Chenghu Zhou, Jing Zhang
https://proceedings.mlr.press/v235/lu24n.html
ICML 2024
Accurately reconstructing the global ocean deoxygenation over a century is crucial for assessing and protecting marine ecosystems. Existing expert-dominated numerical simulations fail to catch up with the dynamic variation caused by global warming and human activities. Besides, due to the high-cost data collection, the historical observations are severely sparse, posing a major challenge for precise reconstruction. In this work, we propose OxyGenerator, the first deep learning based model, to reconstruct the global ocean deoxygenation from 1920 to 2023. Specifically, to address the heterogeneity across large temporal and spatial scales, we propose zoning-varying graph message-passing to capture the complex oceanographic correlations between missing values and sparse observations. Additionally, to further calibrate the uncertainty, we incorporate inductive bias from dissolved oxygen (DO) variations and chemical effects. Compared with in-situ DO observations, OxyGenerator significantly outperforms CMIP6 numerical simulations, reducing MAPE by 38.77%, demonstrating a promising potential to understand the “breathless ocean” in a data-driven manner.
https://proceedings.mlr.press/v235/lu24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24o/lu24o.pdf
https://openreview.net/forum?id=pvg1OdUtDQ
DiNADO: Norm-Disentangled Neurally-Decomposed Oracles for Controlling Language Models
https://proceedings.mlr.press/v235/lu24o.html
Sidi Lu, Wenbo Zhao, Chenyang Tao, Arpit Gupta, Shanchan Wu, Tagyoung Chung, Nanyun Peng
https://proceedings.mlr.press/v235/lu24o.html
ICML 2024
NeurAlly-Decomposed Oracle (NADO) is a powerful approach for controllable generation with large language models. It is designed to avoid catastrophic forgetting while achieving guaranteed convergence to an entropy-maximized closed-form optimal solution with reasonable modeling capacity. Despite the success, several challenges arise when applying NADO to a wide range of scenarios. Vanilla NADO suffers from gradient vanishing for low-probability control signals and is highly reliant on a regularization to satisfy the stochastic version of Bellman equation. In addition, the vanilla implementation of NADO introduces a few additional transformer layers, suffering from a limited capacity especially compared to other finetune-based model adaptation methods like LoRA. In this paper, we propose an improved version of the NADO algorithm, namely DiNADO (norm-Disentangled NeurAlly-Decomposed Oracles), which improves the performance of the NADO algorithm through disentangling the step-wise global norm over the approximated oracle $R$-value for all potential next-tokens, allowing DiNADO to be combined with finetuning methods like LoRA. We discuss in depth how DiNADO achieves better capacity, stability and flexibility with both empirical and theoretical results. Experiments on formality control in machine translation and the lexically constrained generation task CommonGen demonstrate the significance of the improvements.
https://proceedings.mlr.press/v235/lu24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lu24p/lu24p.pdf
https://openreview.net/forum?id=9Rroj9GIOQ
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models
https://proceedings.mlr.press/v235/lu24p.html
Xudong Lu, Aojun Zhou, Yuhui Xu, Renrui Zhang, Peng Gao, Hongsheng Li
https://proceedings.mlr.press/v235/lu24p.html
ICML 2024
Large Language Models (LLMs) have become pivotal in advancing the field of artificial intelligence, yet their immense sizes pose significant challenges for both fine-tuning and deployment. Current post-training pruning methods, while reducing the sizes of LLMs, often fail to maintain their original performance. To address these challenges, this paper introduces SPP, a Sparsity-Preserved Parameter-efficient fine-tuning method. Different from existing post-training pruning approaches that struggle with performance retention, SPP proposes to employ lightweight learnable column and row matrices to optimize sparse LLM weights, keeping the structure and sparsity of pruned pre-trained models intact. By element-wise multiplication and residual addition, SPP ensures the consistency of model sparsity pattern and ratio during both training and weight-merging processes. We demonstrate the effectiveness of SPP by applying it to the LLaMA and LLaMA-2 model families with recent post-training pruning methods. Our results show that SPP significantly enhances the performance of models with different sparsity patterns (i.e. unstructured and N:M sparsity), especially for those with high sparsity ratios (e.g. 75%), making it a promising solution for the efficient fine-tuning of sparse LLMs. Code will be made available at https://github.com/Lucky-Lance/SPP.
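A minimal sketch of the sparsity-preserving idea as described in the abstract: learnable column and row vectors are merged into the pruned weights via element-wise multiplication and residual addition, with the pruning mask re-applied so the sparsity pattern stays intact. The exact parameterization below is an assumption for illustration, not SPP's implementation.

```python
import numpy as np

def spp_style_update(W_pruned, mask, col, row):
    """Merge lightweight column/row factors into a pruned weight matrix while keeping
    its sparsity pattern intact (illustrative form: residual add of an element-wise product)."""
    delta = W_pruned * np.outer(col, row)     # element-wise modulation of the surviving weights
    return (W_pruned + delta) * mask          # re-apply the mask: pruned entries stay exactly zero

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
mask = (rng.random((8, 8)) > 0.75).astype(W.dtype)     # ~75% unstructured sparsity
W_pruned = W * mask
col, row = 0.01 * rng.normal(size=8), 0.01 * rng.normal(size=8)
W_merged = spp_style_update(W_pruned, mask, col, row)
assert np.array_equal(W_merged == 0, W_pruned == 0)    # sparsity pattern preserved
```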
https://proceedings.mlr.press/v235/ludziejewski24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ludziejewski24a/ludziejewski24a.pdf
https://openreview.net/forum?id=yoqdlynCRs
Scaling Laws for Fine-Grained Mixture of Experts
https://proceedings.mlr.press/v235/ludziejewski24a.html
Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kamil Ciebiera, Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur
https://proceedings.mlr.press/v235/ludziejewski24a.html
ICML 2024
Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, highlighting certain arbitrary assumptions present in the existing literature. In particular, we introduce a new hyperparameter, granularity, the modification of which allows for the optimal adjustment of the size of experts. Subsequently, we present scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Using these scaling laws, we derive the optimal training configuration for a given computational budget. Furthermore, in contrast with previous works, we demonstrate that the gap in efficiency between dense and MoE models grows as we scale up the model size and training budget.
https://proceedings.mlr.press/v235/luo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24a/luo24a.pdf
https://openreview.net/forum?id=scFlbJQdm1
Projecting Molecules into Synthesizable Chemical Spaces
https://proceedings.mlr.press/v235/luo24a.html
Shitong Luo, Wenhao Gao, Zuofan Wu, Jian Peng, Connor W. Coley, Jianzhu Ma
https://proceedings.mlr.press/v235/luo24a.html
ICML 2024
Discovering new drug molecules is a pivotal yet challenging process due to the near-infinitely large chemical space and notorious demands on time and resources. Numerous generative models have recently been introduced to accelerate the drug discovery process, but their progression to experimental validation remains limited, largely due to a lack of consideration for synthetic accessibility in practical settings. In this work, we introduce a novel framework that is capable of generating new chemical structures while ensuring synthetic accessibility. Specifically, we introduce a postfix notation of synthetic pathways to represent molecules in chemical space. Then, we design a transformer-based model to translate molecular graphs into postfix notations of synthesis. We highlight the model’s ability to: (a) perform bottom-up synthesis planning more accurately, (b) generate structurally similar, synthesizable analogs for unsynthesizable molecules proposed by generative models with their properties preserved, and (c) explore the local synthesizable chemical space around hit molecules.
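A toy sketch of what a postfix (stack-based) encoding of a synthesis route looks like: building-block tokens are pushed, reaction tokens consume their reactants from the stack and push the product. The molecule and reaction names here are made up for illustration.

```python
def eval_postfix_pathway(tokens, reactions):
    """Evaluate a postfix-encoded synthesis route with a stack:
    building-block tokens are pushed, reaction tokens pop their reactants and push the product."""
    stack = []
    for tok in tokens:
        if tok in reactions:
            arity, combine = reactions[tok]
            reactants = [stack.pop() for _ in range(arity)]
            stack.append(combine(reactants))
        else:
            stack.append(tok)                      # a purchasable building block
    assert len(stack) == 1, "a valid route reduces to a single product"
    return stack[0]

# Hypothetical two-step route: couple A and B, then deprotect the result.
reactions = {
    "<couple>": (2, lambda r: f"couple({r[1]},{r[0]})"),
    "<deprotect>": (1, lambda r: f"deprotect({r[0]})"),
}
print(eval_postfix_pathway(["A", "B", "<couple>", "<deprotect>"], reactions))
# -> deprotect(couple(A,B))
```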
https://proceedings.mlr.press/v235/luo24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24b/luo24b.pdf
https://openreview.net/forum?id=jrE7geZekq
PGODE: Towards High-quality System Dynamics Modeling
https://proceedings.mlr.press/v235/luo24b.html
Xiao Luo, Yiyang Gu, Huiyu Jiang, Hang Zhou, Jinsheng Huang, Wei Ju, Zhiping Xiao, Ming Zhang, Yizhou Sun
https://proceedings.mlr.press/v235/luo24b.html
ICML 2024
This paper studies the problem of modeling multi-agent dynamical systems, where agents could interact mutually to influence their behaviors. Recent research predominantly uses geometric graphs to depict these mutual interactions, which are then captured by powerful graph neural networks (GNNs). However, predicting interacting dynamics in challenging scenarios such as out-of-distribution shift and complicated underlying rules remains unsolved. In this paper, we propose a new approach named Prototypical Graph ODE (PGODE) to address the problem. The core of PGODE is to incorporate prototype decomposition from contextual knowledge into a continuous graph ODE framework. Specifically, PGODE employs representation disentanglement and system parameters to extract both object-level and system-level contexts from historical trajectories, which allows us to explicitly model their independent influence and thus enhances the generalization capability under system changes. Then, we integrate these disentangled latent representations into a graph ODE model, which determines a combination of various interacting prototypes for enhanced model expressivity. The entire model is optimized using an end-to-end variational inference framework to maximize the likelihood. Extensive experiments in both in-distribution and out-of-distribution settings validate the superiority of PGODE compared to various baselines.
https://proceedings.mlr.press/v235/luo24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24c/luo24c.pdf
https://openreview.net/forum?id=Yn8xnK90mS
Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear Regression
https://proceedings.mlr.press/v235/luo24c.html
Zhankun Luo, Abolfazl Hashemi
https://proceedings.mlr.press/v235/luo24c.html
ICML 2024
We study the trajectory of iterations and the convergence rates of the Expectation-Maximization (EM) algorithm for two-component Mixed Linear Regression (2MLR). The fundamental goal of MLR is to learn the regression models from unlabeled observations. The EM algorithm finds extensive applications in solving the mixture of linear regressions. Recent results have established the super-linear convergence of EM for 2MLR in the noiseless and high SNR settings under some assumptions and its global convergence rate with random initialization has been affirmed. However, the exponent of convergence has not been theoretically estimated and the geometric properties of the trajectory of EM iterations are not well-understood. In this paper, first, using Bessel functions we provide explicit closed-form expressions for the EM updates under all SNR regimes. Then, in the noiseless setting, we completely characterize the behavior of EM iterations by deriving a recurrence relation at the population level and notably show that all the iterations lie on a certain cycloid. Based on this new trajectory-based analysis, we exhibit the theoretical estimate for the exponent of super-linear convergence and further improve the statistical error bound at the finite-sample level. Our analysis provides a new framework for studying the behavior of EM for Mixed Linear Regression.
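For context, a generic finite-sample EM loop for balanced two-component mixed linear regression is sketched below (Gaussian responsibilities in the E-step, weighted least squares in the M-step). This is not the paper's closed-form population-level update or its Bessel-function expressions; the noise level and initialization are arbitrary.

```python
import numpy as np

def weighted_ls(X, y, w):
    """Responsibility-weighted least squares with a tiny ridge for numerical stability."""
    A = X.T @ (w[:, None] * X) + 1e-8 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ (w * y))

def em_2mlr(X, y, beta1, beta2, sigma=0.5, iters=50):
    """Generic EM for a balanced two-component mixed linear regression."""
    for _ in range(iters):
        r1 = -0.5 * ((y - X @ beta1) / sigma) ** 2
        r2 = -0.5 * ((y - X @ beta2) / sigma) ** 2
        w = 1.0 / (1.0 + np.exp(np.clip(r2 - r1, -50, 50)))   # P(component 1 | x, y)
        beta1, beta2 = weighted_ls(X, y, w), weighted_ls(X, y, 1.0 - w)
    return beta1, beta2

rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.normal(size=(n, d))
true1, true2 = np.array([1.0, -2.0, 0.5]), np.array([-1.0, 2.0, -0.5])
z = rng.random(n) < 0.5
y = np.where(z, X @ true1, X @ true2) + 0.1 * rng.normal(size=n)
print(em_2mlr(X, y, rng.normal(size=d), rng.normal(size=d)))
```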
https://proceedings.mlr.press/v235/luo24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24d/luo24d.pdf
https://openreview.net/forum?id=R83VIZtHXA
OMPO: A Unified Framework for RL under Policy and Dynamics Shifts
https://proceedings.mlr.press/v235/luo24d.html
Yu Luo, Tianying Ji, Fuchun Sun, Jianwei Zhang, Huazhe Xu, Xianyuan Zhan
https://proceedings.mlr.press/v235/luo24d.html
ICML 2024
Training reinforcement learning policies using environment interaction data collected from varying policies or dynamics presents a fundamental challenge. Existing works often overlook the distribution discrepancies induced by policy or dynamics shifts, or rely on specialized algorithms with task priors, thus often resulting in suboptimal policy performances and high learning variances. In this paper, we identify a unified strategy for online RL policy learning under diverse settings of policy and dynamics shifts: transition occupancy matching. In light of this, we introduce a surrogate policy learning objective by considering the transition occupancy discrepancies and then cast it into a tractable min-max optimization problem through dual reformulation. Our method, dubbed Occupancy-Matching Policy Optimization (OMPO), features a specialized actor-critic structure equipped with a distribution discriminator and a small-size local buffer. We conduct extensive experiments based on the OpenAI Gym, Meta-World, and Panda Robots environments, encompassing policy shifts under stationary and non-stationary dynamics, as well as domain adaptation. The results demonstrate that OMPO outperforms the specialized baselines from different categories in all settings. We also find that OMPO exhibits particularly strong performance when combined with domain randomization, highlighting its potential in RL-based robotics applications.
https://proceedings.mlr.press/v235/luo24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24e/luo24e.pdf
https://openreview.net/forum?id=7joG3i2pUR
Offline-Boosted Actor-Critic: Adaptively Blending Optimal Historical Behaviors in Deep Off-Policy RL
https://proceedings.mlr.press/v235/luo24e.html
Yu Luo, Tianying Ji, Fuchun Sun, Jianwei Zhang, Huazhe Xu, Xianyuan Zhan
https://proceedings.mlr.press/v235/luo24e.html
ICML 2024
Off-policy reinforcement learning (RL) has achieved notable success in tackling many complex real-world tasks, by leveraging previously collected data for policy learning. However, most existing off-policy RL algorithms fail to maximally exploit the information in the replay buffer, limiting sample efficiency and policy performance. In this work, we discover that concurrently training an offline RL policy based on the shared online replay buffer can sometimes outperform the original online learning policy, though the occurrence of such performance gains remains uncertain. This motivates a new possibility of harnessing the emergent outperforming offline optimal policy to improve online policy learning. Based on this insight, we present Offline-Boosted Actor-Critic (OBAC), a model-free online RL framework that elegantly identifies the outperforming offline policy through value comparison, and uses it as an adaptive constraint to guarantee stronger policy learning performance. Our experiments demonstrate that OBAC outperforms other popular model-free RL baselines and rivals advanced model-based RL methods in terms of sample efficiency and asymptotic performance across 53 tasks spanning 6 task suites.
https://proceedings.mlr.press/v235/luo24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24f/luo24f.pdf
https://openreview.net/forum?id=xtKWwB6lzT
Position: Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination
https://proceedings.mlr.press/v235/luo24f.html
Zhiyao Luo, Yangchen Pan, Peter Watkinson, Tingting Zhu
https://proceedings.mlr.press/v235/luo24f.html
ICML 2024
In the rapidly changing healthcare landscape, the implementation of offline reinforcement learning (RL) in dynamic treatment regimes (DTRs) presents a mix of unprecedented opportunities and challenges. This position paper offers a critical examination of the current status of offline RL in the context of DTRs. We argue for a reassessment of applying RL in DTRs, citing concerns such as inconsistent and potentially inconclusive evaluation metrics, the absence of naive and supervised learning baselines, and the diverse choice of RL formulation in existing research. Through a case study with more than 17,000 evaluation experiments using a publicly available Sepsis dataset, we demonstrate that the performance of RL algorithms can significantly vary with changes in evaluation metrics and Markov Decision Process (MDP) formulations. Surprisingly, it is observed that in some instances, RL algorithms can be surpassed by random baselines subjected to policy evaluation methods and reward design. This calls for more careful policy evaluation and algorithm development in future DTR works. Additionally, we discuss potential enhancements toward more reliable development of RL-based dynamic treatment regimes and invite further discussion within the community. Code is available at https://github.com/GilesLuo/ReassessDTR.
https://proceedings.mlr.press/v235/luo24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24g/luo24g.pdf
https://openreview.net/forum?id=LhAuVPWq6q
Hierarchical Neural Operator Transformer with Learnable Frequency-aware Loss Prior for Arbitrary-scale Super-resolution
https://proceedings.mlr.press/v235/luo24g.html
Xihaier Luo, Xiaoning Qian, Byung-Jun Yoon
https://proceedings.mlr.press/v235/luo24g.html
ICML 2024
In this work, we present an arbitrary-scale super-resolution (SR) method to enhance the resolution of scientific data, which often involves complex challenges such as continuity, multi-scale physics, and the intricacies of high-frequency signals. Grounded in operator learning, the proposed method is resolution-invariant. The core of our model is a hierarchical neural operator that leverages a Galerkin-type self-attention mechanism, enabling efficient learning of mappings between function spaces. Sinc filters are used to facilitate the information transfer across different levels in the hierarchy, thereby ensuring representation equivalence in the proposed neural operator. Additionally, we introduce a learnable prior structure that is derived from the spectral resizing of the input data. This loss prior is model-agnostic and is designed to dynamically adjust the weighting of pixel contributions, thereby balancing gradients effectively across the model. We conduct extensive experiments on diverse datasets from different domains and demonstrate consistent improvements compared to strong baselines, which consist of various state-of-the-art SR methods.
https://proceedings.mlr.press/v235/luo24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24h/luo24h.pdf
https://openreview.net/forum?id=Qb68Rs0p9f
Potential Based Diffusion Motion Planning
https://proceedings.mlr.press/v235/luo24h.html
Yunhao Luo, Chen Sun, Joshua B. Tenenbaum, Yilun Du
https://proceedings.mlr.press/v235/luo24h.html
ICML 2024
Effective motion planning in high dimensional spaces is a long-standing open problem in robotics. One class of traditional motion planning algorithms corresponds to potential-based motion planning. An advantage of potential based motion planning is composability – different motion constraints can easily be combined by adding corresponding potentials. However, constructing motion paths from potentials requires solving a global optimization across the configuration-space potential landscape, which is often prone to local minima. We propose a new approach towards learning potential based motion planning, where we train a neural network to capture and learn easily optimizable potentials over motion planning trajectories. We illustrate the effectiveness of this approach, significantly outperforming both classical and recent learned motion planning approaches and avoiding issues with local minima. We further illustrate its inherent composability, enabling us to generalize to a multitude of different motion constraints. Project website at https://energy-based-model.github.io/potential-motion-plan.
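The composability property mentioned above can be seen in a hand-written sketch: goal-attraction and obstacle-repulsion potentials are simply summed, and a path follows the negative gradient of the composed potential. The paper learns such potentials with a network and avoids the local-minimum issue; the potentials, obstacle, and step sizes here are illustrative only.

```python
import numpy as np

def goal_potential(q, goal):
    return 0.5 * np.sum((q - goal) ** 2)

def obstacle_potential(q, center, radius, strength=2.0):
    d = np.linalg.norm(q - center)
    return strength * np.exp(-(d - radius))        # soft repulsion that decays with clearance

def composed_grad(q, goal, obstacles, eps=1e-4):
    """Numerical gradient of the summed potential -- composing constraints is just addition."""
    def U(p):
        return goal_potential(p, goal) + sum(obstacle_potential(p, c, r) for c, r in obstacles)
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q); dq[i] = eps
        g[i] = (U(q + dq) - U(q - dq)) / (2 * eps)
    return g

q, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [(np.array([2.5, 2.4]), 0.5)]
path = [q.copy()]
for _ in range(300):                                # plain gradient descent on the composed potential
    q = q - 0.05 * composed_grad(q, goal, obstacles)
    path.append(q.copy())
print(path[-1])                                     # ends near the goal unless trapped in a local minimum
```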
https://proceedings.mlr.press/v235/luo24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24i/luo24i.pdf
https://openreview.net/forum?id=qMG3OK7Xcg
Cluster-Aware Similarity Diffusion for Instance Retrieval
https://proceedings.mlr.press/v235/luo24i.html
Jifei Luo, Hantao Yao, Changsheng Xu
https://proceedings.mlr.press/v235/luo24i.html
ICML 2024
Diffusion-based re-ranking is a common method used for retrieving instances by performing similarity propagation in a nearest neighbor graph. However, existing techniques that construct the affinity graph based on pairwise instances can lead to the propagation of misinformation from outliers and other manifolds, resulting in inaccurate results. To overcome this issue, we propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval. The primary concept of CAS is to conduct similarity diffusion within local clusters, which can reduce the influence from other manifolds explicitly. To obtain a symmetrical and smooth similarity matrix, our Bidirectional Similarity Diffusion strategy introduces an inverse constraint term to the optimization objective of local cluster diffusion. Additionally, we have optimized a Neighbor-guided Similarity Smoothing approach to ensure similarity consistency among the local neighbors of each instance. Evaluations in instance retrieval and object re-identification validate the effectiveness of the proposed CAS; our code is publicly available.
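A simplified sketch of diffusion restricted to one local cluster: a standard ranking iteration f ← α S f + (1 − α) y run on the affinity matrix of the cluster members only, so points from other manifolds cannot inject misinformation. The bidirectional constraint and neighbor-guided smoothing from the paper are omitted; the cluster assignment is assumed given.

```python
import numpy as np

def cluster_diffusion(features, cluster_idx, query_idx, alpha=0.9, iters=30):
    """Diffuse similarities only within one local cluster (a simplified version of the idea)."""
    F = features[cluster_idx]
    S = np.maximum(F @ F.T, 0.0)                       # nonnegative cosine affinities (L2-normalized features)
    np.fill_diagonal(S, 0.0)
    S = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)   # row-normalize
    y = np.zeros(len(cluster_idx))
    y[list(cluster_idx).index(query_idx)] = 1.0        # seed vector at the query
    f = y.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y            # standard diffusion/ranking iteration
    return dict(zip(cluster_idx, f))                   # diffused relevance scores per cluster member

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(cluster_diffusion(feats, cluster_idx=[3, 7, 11, 42, 58], query_idx=7))
```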
https://proceedings.mlr.press/v235/luo24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/luo24j/luo24j.pdf
https://openreview.net/forum?id=0P3kaNluGj
End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations
https://proceedings.mlr.press/v235/luo24j.html
Lirui Luo, Guoxi Zhang, Hongming Xu, Yaodong Yang, Cong Fang, Qing Li
https://proceedings.mlr.press/v235/luo24j.html
ICML 2024
Neuro-symbolic reinforcement learning (NS-RL) has emerged as a promising paradigm for explainable decision-making, characterized by the interpretability of symbolic policies. NS-RL entails structured state representations for tasks with visual observations, but previous methods cannot refine the structured states with rewards due to a lack of efficiency. Accessibility also remains an issue, as extensive domain knowledge is required to interpret symbolic policies. In this paper, we present a neuro-symbolic framework for jointly learning structured states and symbolic policies, whose key idea is to distill the vision foundation model into an efficient perception module and refine it during policy learning. Moreover, we design a pipeline to prompt GPT-4 to generate textual explanations for the learned policies and decisions, significantly reducing users’ cognitive load to understand the symbolic policies. We verify the efficacy of our approach on nine Atari tasks and present GPT-generated explanations for policies and decisions.
https://proceedings.mlr.press/v235/lv24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lv24a/lv24a.pdf
https://openreview.net/forum?id=eJFQROkaj0
RoboMP$^2$: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models
https://proceedings.mlr.press/v235/lv24a.html
Qi Lv, Hao Li, Xiang Deng, Rui Shao, Michael Y Wang, Liqiang Nie
https://proceedings.mlr.press/v235/lv24a.html
ICML 2024
Multimodal Large Language Models (MLLMs) have shown impressive reasoning abilities and general intelligence in various domains. This inspires researchers to train end-to-end MLLMs or utilize large models to generate policies with human-selected prompts for embodied agents. However, these methods exhibit limited generalization capabilities on unseen tasks or scenarios, and overlook the multimodal environment information which is critical for robots to make decisions. In this paper, we introduce a novel Robotic Multimodal Perception-Planning (RoboMP$^2$) framework for robotic manipulation which consists of a Goal-Conditioned Multimodal Preceptor (GCMP) and a Retrieval-Augmented Multimodal Planner (RAMP). Specifically, GCMP captures environment states by employing a tailored MLLM for embodied agents with the abilities of semantic reasoning and localization. RAMP utilizes a coarse-to-fine retrieval method to find the $k$ most-relevant policies as in-context demonstrations to enhance the planner. Extensive experiments demonstrate the superiority of RoboMP$^2$ on both the VIMA benchmark and real-world tasks, with around 10% improvement over the baselines.
https://proceedings.mlr.press/v235/lv24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lv24b/lv24b.pdf
https://openreview.net/forum?id=6PTiCmGcNx
Contamination-Resilient Anomaly Detection via Adversarial Learning on Partially-Observed Normal and Anomalous Data
https://proceedings.mlr.press/v235/lv24b.html
Wenxi Lv, Qinliang Su, Hai Wan, Hongteng Xu, Wenchao Xu
https://proceedings.mlr.press/v235/lv24b.html
ICML 2024
Many existing anomaly detection methods assume the availability of a large-scale normal dataset. But for many applications, limited by resources, removing all anomalous samples from a large unlabeled dataset is unrealistic, resulting in contaminated datasets. To detect anomalies accurately under such scenarios, from the probabilistic perspective, the key question becomes how to learn the normal-data distribution from a contaminated dataset. To this end, we propose to collect two additional small datasets that are comprised of partially-observed normal and anomaly samples, and then use them to help learn the distribution under an adversarial learning scheme. We prove that under some mild conditions, the proposed method is able to learn the correct normal-data distribution. Then, we consider the overfitting issue caused by the small size of the two additional datasets, and a correctness-guaranteed flipping mechanism is further developed to alleviate it. Theoretical results under incompletely observed anomaly types are also presented. Extensive experimental results demonstrate that our method outperforms representative baselines when detecting anomalies under contaminated datasets.
https://proceedings.mlr.press/v235/lv24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lv24c/lv24c.pdf
https://openreview.net/forum?id=JCG0KTPVYy
Coarse-to-Fine Highlighting: Reducing Knowledge Hallucination in Large Language Models
https://proceedings.mlr.press/v235/lv24c.html
Qitan Lv, Jie Wang, Hanzhu Chen, Bin Li, Yongdong Zhang, Feng Wu
https://proceedings.mlr.press/v235/lv24c.html
ICML 2024
Generation of plausible but incorrect factual information, often termed hallucination, has attracted significant research interest. Retrieval-augmented language model (RALM)—which enhances models with up-to-date knowledge—emerges as a promising method to reduce hallucination. However, existing RALMs may instead exacerbate hallucination when retrieving lengthy contexts. To address this challenge, we propose COFT, a novel COarse-to-Fine highlighTing method to focus on different granularity-level key texts, thereby avoiding getting lost in lengthy contexts. Specifically, COFT consists of three components: recaller, scorer, and selector. First, recaller applies a knowledge graph to extract potential key entities in a given context. Second, scorer measures the importance of each entity by calculating its contextual weight. Finally, selector selects high contextual weight entities with a dynamic threshold algorithm and highlights the corresponding paragraphs, sentences, or words in a coarse-to-fine manner. Extensive experiments on the knowledge hallucination benchmark demonstrate the effectiveness of COFT, leading to a superior performance of over 30% in the F1 score metric. Moreover, COFT also exhibits remarkable versatility across various long-form tasks, such as reading comprehension and question answering.
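A toy sketch of the recaller/scorer/selector pipeline with deliberately simple stand-ins: a word dictionary instead of a knowledge graph, term frequency as the contextual weight, and the mean weight as the dynamic threshold. None of these choices reflect the paper's actual components; they only show how the three stages chain together.

```python
import re

def recaller(context, knowledge_entities):
    """Recall candidate key entities present in the context (a dictionary stands in for a KG)."""
    words = set(re.findall(r"[A-Za-z][A-Za-z-]+", context.lower()))
    return [e for e in knowledge_entities if e in words]

def scorer(context, entities):
    """Score each entity by a simple contextual weight (term frequency here, for illustration)."""
    tokens = re.findall(r"[A-Za-z][A-Za-z-]+", context.lower())
    return {e: tokens.count(e) / len(tokens) for e in entities}

def selector(scores, highlight_fn, context):
    """Keep entities above a dynamic threshold (the mean weight) and highlight their words."""
    if not scores:
        return context
    threshold = sum(scores.values()) / len(scores)
    keep = {e for e, s in scores.items() if s >= threshold}
    return highlight_fn(context, keep)

def highlight_words(context, entities):
    return re.sub(r"[A-Za-z][A-Za-z-]+",
                  lambda m: f"**{m.group(0)}**" if m.group(0).lower() in entities else m.group(0),
                  context)

ctx = "The Eiffel Tower in Paris was completed in 1889 and Paris hosts it today."
kg = {"paris", "eiffel", "tower", "london"}
print(selector(scorer(ctx, recaller(ctx, kg)), highlight_words, ctx))
```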
https://proceedings.mlr.press/v235/lv24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lv24d/lv24d.pdf
https://openreview.net/forum?id=SkI6u81AkI
Efficient and Effective Time-Series Forecasting with Spiking Neural Networks
https://proceedings.mlr.press/v235/lv24d.html
Changze Lv, Yansen Wang, Dongqi Han, Xiaoqing Zheng, Xuanjing Huang, Dongsheng Li
https://proceedings.mlr.press/v235/lv24d.html
ICML 2024
Spiking neural networks (SNNs), inspired by the spiking behavior of biological neurons, provide a unique pathway for capturing the intricacies of temporal data. However, applying SNNs to time-series forecasting is challenging due to difficulties in effective temporal alignment, complexities in encoding processes, and the absence of standardized guidelines for model selection. In this paper, we propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information. Through a series of experiments, we demonstrate that our proposed SNN-based approaches achieve comparable or superior results to traditional time-series forecasting methods on diverse benchmarks with much less energy consumption. Furthermore, we conduct detailed analysis experiments to assess the SNN’s capacity to capture temporal dependencies within time-series data, offering valuable insights into its nuanced strengths and effectiveness in modeling the intricate dynamics of temporal data. Our study contributes to the expanding field of SNNs and offers a promising alternative for time-series forecasting tasks, presenting a pathway for the development of more biologically inspired and temporally aware forecasting models. Our code is available at https://github.com/microsoft/SeqSNN.
https://proceedings.mlr.press/v235/lyu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lyu24a/lyu24a.pdf
https://openreview.net/forum?id=3uPSQmjXzd
Cross-Domain Policy Adaptation by Capturing Representation Mismatch
https://proceedings.mlr.press/v235/lyu24a.html
Jiafei Lyu, Chenjia Bai, Jing-Wen Yang, Zongqing Lu, Xiu Li
https://proceedings.mlr.press/v235/lyu24a.html
ICML 2024
It is vital to learn effective policies that can be transferred to different domains with dynamics discrepancies in reinforcement learning (RL). In this paper, we consider dynamics adaptation settings where there exists a dynamics mismatch between the source domain and the target domain, and one can get access to sufficient source domain data, while only having limited interactions with the target domain. Existing methods address this problem by learning domain classifiers, performing data filtering from a value discrepancy perspective, etc. Instead, we tackle this challenge from a decoupled representation learning perspective. We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain, which we show can be a signal of dynamics mismatch. We also show that representation deviation upper bounds the performance difference of a given policy in the source domain and target domain, which motivates us to adopt representation deviation as a reward penalty. The produced representations are not involved in either policy or value function, but only serve as a reward penalizer. We conduct extensive experiments on environments with kinematic and morphology mismatch, and the results show that our method exhibits strong performance on many tasks. Our code is publicly available at https://github.com/dmksjfl/PAR.
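A minimal sketch of using representation deviation as a reward penalty: a target-domain-trained encoder and latent dynamics model are evaluated on a source-domain transition, and the prediction error is subtracted from the reward. The toy models and the penalty coefficient below are assumptions, not the paper's architecture or objective.

```python
import numpy as np

def penalized_reward(r, s, a, s_next, encode, predict_next, beta=1.0):
    """Source-domain reward minus a penalty for how much a target-trained dynamics model
    deviates on this transition (a sketch of the reward-penalty idea, not the exact PAR loss)."""
    deviation = np.linalg.norm(predict_next(encode(s), a) - encode(s_next))
    return r - beta * deviation

# Toy stand-ins for representation and latent-dynamics models trained on target-domain data only.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 4)) * 0.5
W_dyn = rng.normal(size=(8, 8 + 2)) * 0.5
encode = lambda s: np.tanh(W_enc @ s)
predict_next = lambda z, a: np.tanh(W_dyn @ np.concatenate([z, a]))

s, a, s_next, r = rng.normal(size=4), rng.normal(size=2), rng.normal(size=4), 1.0
print(penalized_reward(r, s, a, s_next, encode, predict_next, beta=0.1))
```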
https://proceedings.mlr.press/v235/lyu24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/lyu24b/lyu24b.pdf
https://openreview.net/forum?id=ZPiEIhQpos
Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models
https://proceedings.mlr.press/v235/lyu24b.html
Junlong Lyu, Zhitang Chen, Shoubo Feng
https://proceedings.mlr.press/v235/lyu24b.html
ICML 2024
We provide the first convergence guarantee for the Consistency Models (CMs), a newly emerging type of one-step generative models that is capable of generating comparable samples to those sampled from state-of-the-art Diffusion Models. Our main result is that, under the basic assumptions on score-matching errors, consistency errors, and smoothness of the data distribution, CMs can efficiently generate samples in one step with small $W_2$ error to any real data distribution. Our results (1) hold for $L^2$-accurate assumptions on both score and consistency functions (rather than $L^\infty$-accurate assumptions); (2) do not require strong assumptions on the data distribution such as log-Sobolev conditions; (3) scale polynomially in all parameters; and (4) match the state-of-the-art convergence guarantee for score-based generative models. We also show that the Multi-step Consistency Sampling procedure can further reduce the error compared to one-step sampling, which supports the original statement from Song Yang’s work. Our result can be generalized to arbitrary bounded data distributions that may be supported on some low-dimensional sub-manifolds. Our results further imply TV error guarantees when making some Langevin-based modifications to the output distributions.
https://proceedings.mlr.press/v235/ma24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24a/ma24a.pdf
https://openreview.net/forum?id=1zFkjbTgwC
Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation
https://proceedings.mlr.press/v235/ma24a.html
Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao
https://proceedings.mlr.press/v235/ma24a.html
ICML 2024
With the increasingly powerful performances and enormous scales of pretrained models, promoting parameter efficiency in fine-tuning has become a crucial need for effective and efficient adaptation to various downstream tasks. One representative line of fine-tuning methods is Orthogonal Fine-tuning (OFT), which rigorously preserves the angular distances within the parameter space to preserve the pretrained knowledge. Despite the empirical effectiveness, OFT still suffers from low parameter efficiency at $\mathcal{O}(d^2)$ and limited capability of downstream adaptation. Inspired by Givens rotation, in this paper, we propose quasi-Givens Orthogonal Fine-Tuning (qGOFT) to address the problems. We first use $\mathcal{O}(d)$ Givens rotations to accomplish arbitrary orthogonal transformation in $SO(d)$ with provable equivalence, reducing parameter complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d)$. Then we introduce flexible norm and relative angular adjustments under soft orthogonality regularization to enhance the adaptation capability of downstream semantic deviations. Extensive experiments on various tasks and pretrained models validate the effectiveness of our methods.
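A small sketch of why Givens rotations are parameter-efficient: each rotation acts on a single coordinate plane and adds one angle, and composing them keeps the transform orthogonal. The particular rotation schedule below (one sweep of adjacent planes) is illustrative, not the paper's $\mathcal{O}(d)$ construction.

```python
import numpy as np

def givens(d, i, j, theta):
    """A Givens rotation: identity except for a 2x2 rotation in the (i, j) plane."""
    G = np.eye(d)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
    return G

def compose_rotations(d, plane_angles):
    """Compose (i, j, theta) Givens rotations; each adds a single parameter,
    so k rotations cost O(k) parameters instead of the O(d^2) of a dense orthogonal matrix."""
    R = np.eye(d)
    for i, j, theta in plane_angles:
        R = givens(d, i, j, theta) @ R
    return R

d = 6
angles = [(i, i + 1, 0.1 * (i + 1)) for i in range(d - 1)]     # one sweep of adjacent planes
R = compose_rotations(d, angles)
print(np.allclose(R @ R.T, np.eye(d)))                          # True: still orthogonal
```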
https://proceedings.mlr.press/v235/ma24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24b/ma24b.pdf
https://openreview.net/forum?id=WsM4TVsZpJ
Rethinking Decision Transformer via Hierarchical Reinforcement Learning
https://proceedings.mlr.press/v235/ma24b.html
Yi Ma, Jianye Hao, Hebin Liang, Chenjun Xiao
https://proceedings.mlr.press/v235/ma24b.html
ICML 2024
Decision Transformer (DT) is an innovative algorithm leveraging recent advances in the transformer architecture for reinforcement learning (RL). However, a notable limitation of DT is its reliance on recalling trajectories from datasets, losing the capability to seamlessly stitch sub-optimal trajectories together. In this work we introduce a general sequence modeling framework for studying sequential decision making through the lens of Hierarchical RL. At decision time, a high-level policy first proposes an ideal prompt for the current state, and a low-level policy subsequently generates an action conditioned on the given prompt. We show that DT emerges as a special case of this framework with certain choices of high-level and low-level policies, and discuss the potential failure of these choices. Inspired by these observations, we study how to jointly optimize the high-level and low-level policies to enable the stitching ability, which further leads to the development of new offline RL algorithms. Our empirical results clearly show that the proposed algorithms significantly surpass DT on several control and navigation benchmarks. We hope our contributions can inspire the integration of transformer architectures within the field of RL.
https://proceedings.mlr.press/v235/ma24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24c/ma24c.pdf
https://openreview.net/forum?id=wlBtHP8KqS
Better Locally Private Sparse Estimation Given Multiple Samples Per User
https://proceedings.mlr.press/v235/ma24c.html
Yuheng Ma, Ke Jia, Hanfang Yang
https://proceedings.mlr.press/v235/ma24c.html
ICML 2024
Previous studies yielded discouraging results for item-level locally differentially private linear regression under the $s$-sparsity assumption, where the minimax rate for $nm$ samples is $\mathcal{O}(sd/(nm\varepsilon^2))$. This can be challenging for high-dimensional data, where the dimension $d$ is extremely large. In this work, we investigate user-level locally differentially private sparse linear regression. We show that with $n$ users each contributing $m$ samples, the linear dependency on the dimension $d$ can be eliminated, yielding an error upper bound of $\mathcal{O}(s/(nm\varepsilon^2))$. We propose a framework that first selects candidate variables and then conducts estimation in the narrowed low-dimensional space, which is extendable to general sparse estimation problems with tight error bounds. Experiments on both synthetic and real datasets demonstrate the superiority of the proposed methods. Both the theoretical and empirical results suggest that, with the same number of samples, locally private sparse estimation is better conducted when multiple samples per user are available.
https://proceedings.mlr.press/v235/ma24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24d/ma24d.pdf
https://openreview.net/forum?id=lmiurzioja
Learning Modality Knowledge Alignment for Cross-Modality Transfer
https://proceedings.mlr.press/v235/ma24d.html
Wenxuan Ma, Shuang Li, Lincan Cai, Jingxuan Kang
https://proceedings.mlr.press/v235/ma24d.html
ICML 2024
Cross-modality transfer aims to leverage large pretrained models to complete tasks that may not belong to the modality of the pretraining data. Existing works achieve some success in extending classical finetuning to cross-modal scenarios, yet we still lack understanding of how the modality gap influences the transfer. In this work, a series of experiments focusing on the source representation quality during transfer are conducted, revealing that a larger modality gap is associated with less knowledge reuse, i.e., ineffective transfer. We then formalize the gap as the knowledge misalignment between modalities using the conditional distribution $P(Y|X)$. To address this problem, we present Modality kNowledge Alignment (MoNA), a meta-learning approach that learns a target data transformation to reduce the modality knowledge discrepancy ahead of the transfer. Experiments show that the approach significantly improves upon cross-modal finetuning methods, and most importantly leads to better reuse of source modality knowledge.
https://proceedings.mlr.press/v235/ma24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24e/ma24e.pdf
https://openreview.net/forum?id=2zLt2Odckx
Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients
https://proceedings.mlr.press/v235/ma24e.html
Mengmeng Ma, Tang Li, Xi Peng
https://proceedings.mlr.press/v235/ma24e.html
ICML 2024
Federated Learning is widely employed to tackle distributed sensitive data. Existing methods primarily focus on addressing in-federation data heterogeneity. However, we observed that they suffer from significant performance degradation when applied to unseen clients for out-of-federation (OOF) generalization. The recent attempts to address generalization to unseen clients generally struggle to scale up to large-scale distributed settings due to high communication or computation costs. Moreover, methods that scale well often demonstrate poor generalization capability. To achieve OOF-resiliency in a scalable manner, we propose Topology-aware Federated Learning (TFL) that leverages client topology - a graph representing client relationships - to effectively train robust models against OOF data. We formulate a novel optimization problem for TFL, consisting of two key modules: Client Topology Learning, which infers the client relationships in a privacy-preserving manner, and Learning on Client Topology, which leverages the learned topology to identify influential clients and harness this information into the FL optimization process to efficiently build robust models. Empirical evaluation on a variety of real-world datasets verifies TFL’s superior OOF robustness and scalability.
https://proceedings.mlr.press/v235/ma24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24f/ma24f.pdf
https://openreview.net/forum?id=Uh5XN9d2J4
Outlier-aware Slicing for Post-Training Quantization in Vision Transformer
https://proceedings.mlr.press/v235/ma24f.html
Yuexiao Ma, Huixia Li, Xiawu Zheng, Feng Ling, Xuefeng Xiao, Rui Wang, Shilei Wen, Fei Chao, Rongrong Ji
https://proceedings.mlr.press/v235/ma24f.html
ICML 2024
Post-Training Quantization (PTQ) is a vital technique for network compression and acceleration, gaining prominence as model sizes increase. This paper addresses a critical challenge in PTQ: the severe impact of outliers on the accuracy of quantized transformer architectures. Specifically, we introduce the concept of ‘reconstruction granularity’ as a novel solution to this issue, which has been overlooked in previous works. Our work provides theoretical insights into the role of reconstruction granularity in mitigating the outlier problem in transformer models. This theoretical framework is supported by empirical analysis, demonstrating that varying reconstruction granularities significantly influence quantization performance. Our findings indicate that different architectural designs necessitate distinct optimal reconstruction granularities. For instance, the multi-stage Swin Transformer architecture benefits from finer granularity, a deviation from the trends observed in ViT and DeiT models. We further develop an algorithm for determining the optimal reconstruction granularity for various ViT models, achieving state-of-the-art (SOTA) performance in PTQ. For example, applying our method to $4$-bit quantization, the Swin-Base model achieves a Top-1 accuracy of 82.24% on the ImageNet classification task. This result surpasses RepQ-ViT by 3.92% (82.24% vs. 78.32%). Similarly, our approach elevates ViT-Small to a Top-1 accuracy of 80.50%, outperforming NoisyQuant by 3.64% (80.50% vs. 76.86%). Codes are available in Supplementary Materials.
https://proceedings.mlr.press/v235/ma24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24g/ma24g.pdf
https://openreview.net/forum?id=AYbXN9poJl
X-Oscar: A Progressive Framework for High-quality Text-guided 3D Animatable Avatar Generation
https://proceedings.mlr.press/v235/ma24g.html
Yiwei Ma, Zhekai Lin, Jiayi Ji, Yijun Fan, Xiaoshuai Sun, Rongrong Ji
https://proceedings.mlr.press/v235/ma24g.html
ICML 2024
Recent advancements in automatic 3D avatar generation guided by text have made significant progress. However, existing methods have limitations such as oversaturation and low-quality output. To address these challenges, we propose X-Oscar, a progressive framework for generating high-quality animatable avatars from text prompts. It follows a sequential "Geometry→Texture→Animation" paradigm, simplifying optimization through step-by-step generation. To tackle oversaturation, we introduce Adaptive Variational Parameter (AVP), representing avatars as an adaptive distribution during training. Additionally, we present Avatar-aware Score Distillation Sampling (ASDS), a novel technique that incorporates avatar-aware noise into rendered images for improved generation quality during optimization. Extensive evaluations confirm the superiority of X-Oscar over existing text-to-3D and text-to-avatar approaches. Our anonymous project page: https://anonymous1440.github.io/.
https://proceedings.mlr.press/v235/ma24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24h/ma24h.pdf
https://openreview.net/forum?id=K9NTPRvVRI
Neighboring Perturbations of Knowledge Editing on Large Language Models
https://proceedings.mlr.press/v235/ma24h.html
Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang, Jia-Chen Gu
https://proceedings.mlr.press/v235/ma24h.html
ICML 2024
Despite their exceptional capabilities, large language models (LLMs) are prone to generating unintended text due to false or outdated knowledge. Given the resource-intensive nature of retraining LLMs, there has been a notable increase in the development of knowledge editing. However, current approaches and evaluations rarely explore the perturbation of editing on neighboring knowledge. This paper studies whether updating new knowledge to LLMs perturbs the neighboring knowledge encapsulated within them. Specifically, we seek to figure out whether appending a new answer to the answer list of a factual question leads to catastrophic forgetting of the original correct answers in this list, as well as unintentional inclusion of incorrect answers. A metric of additivity is introduced and a benchmark dubbed Perturbation Evaluation of Appending Knowledge (PEAK) is constructed to evaluate the degree of perturbation to neighboring knowledge when appending new knowledge. Besides, a plug-and-play framework termed Appending via Preservation and Prevention (APP) is proposed to mitigate the neighboring perturbation by maintaining the integrity of the answer list. Experiments demonstrate the effectiveness of APP coupled with four editing methods on three LLMs.
https://proceedings.mlr.press/v235/ma24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24i/ma24i.pdf
https://openreview.net/forum?id=Uoved2xD81
Do Transformer World Models Give Better Policy Gradients?
https://proceedings.mlr.press/v235/ma24i.html
Michel Ma, Tianwei Ni, Clement Gehring, Pierluca D’Oro, Pierre-Luc Bacon
https://proceedings.mlr.press/v235/ma24i.html
ICML 2024
A natural approach for reinforcement learning is to predict future rewards by unrolling a neural network world model, and to backpropagate through the resulting computational graph to learn a control policy. However, this method often becomes impractical for long horizons, since typical world models induce hard-to-optimize loss landscapes. Transformers are known to efficiently propagate gradients over long horizons: could they be the solution to this problem? Surprisingly, we show that commonly-used transformer world models produce circuitous gradient paths, which can be detrimental to long-range policy gradients. To tackle this challenge, we propose a class of world models called Action-conditioned World Models (AWMs), designed to provide more direct routes for gradient propagation. We integrate such AWMs into a policy gradient framework that underscores the relationship between network architectures and the policy gradient updates they inherently represent. We demonstrate that AWMs can generate optimization landscapes that are easier to navigate even when compared to those from the simulator itself. This property allows transformer AWMs to produce better policies than competitive baselines in realistic long-horizon tasks.
https://proceedings.mlr.press/v235/ma24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24j/ma24j.pdf
https://openreview.net/forum?id=HUJK9dFOW6
Differentiable Distributionally Robust Optimization Layers
https://proceedings.mlr.press/v235/ma24j.html
Xutao Ma, Chao Ning, Wenli Du
https://proceedings.mlr.press/v235/ma24j.html
ICML 2024
In recent years, there has been a growing research interest in decision-focused learning, which embeds optimization problems as a layer in learning pipelines and demonstrates superior performance compared to the prediction-focused approach. However, for distributionally robust optimization (DRO), a popular paradigm for decision-making under uncertainty, it is still unknown how to embed it as a layer, i.e., how to differentiate decisions with respect to an ambiguity set. In this paper, we develop such differentiable DRO layers for generic mixed-integer DRO problems with parameterized second-order conic ambiguity sets and discuss their extension to Wasserstein ambiguity sets. To differentiate the mixed-integer decisions, we propose a novel dual-view methodology by handling continuous and discrete parts of decisions via different principles. Specifically, we construct a differentiable energy-based surrogate to implement the dual-view methodology and use importance sampling to estimate its gradient. We further prove that such a surrogate enjoys asymptotic convergence under regularization. As an application of the proposed differentiable DRO layers, we develop a novel decision-focused learning pipeline for contextual distributionally robust decision-making tasks and compare it with the prediction-focused approach in experiments.
https://proceedings.mlr.press/v235/ma24k.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24k/ma24k.pdf
https://openreview.net/forum?id=KgfGxXbjjE
CKGConv: General Graph Convolution with Continuous Kernels
https://proceedings.mlr.press/v235/ma24k.html
Liheng Ma, Soumyasundar Pal, Yitian Zhang, Jiaming Zhou, Yingxue Zhang, Mark Coates
https://proceedings.mlr.press/v235/ma24k.html
ICML 2024
The existing definitions of graph convolution, either from spatial or spectral perspectives, are inflexible and not unified. Defining a general convolution operator in the graph domain is challenging due to the lack of canonical coordinates, the presence of irregular structures, and the properties of graph symmetries. In this work, we propose a novel and general graph convolution framework by parameterizing the kernels as continuous functions of pseudo-coordinates derived via graph positional encoding. We name this Continuous Kernel Graph Convolution (CKGConv). Theoretically, we demonstrate that CKGConv is flexible and expressive. CKGConv encompasses many existing graph convolutions, and exhibits a stronger expressiveness, as powerful as graph transformers in terms of distinguishing non-isomorphic graphs. Empirically, we show that CKGConv-based Networks outperform existing graph convolutional networks and perform comparably to the best graph transformers across a variety of graph datasets. The code and models are publicly available at https://github.com/networkslab/CKGConv.
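A minimal sketch of a continuous-kernel graph convolution in the spirit described above: kernel weights are produced by an MLP over pseudo-coordinates obtained as differences of node positional encodings. The class and argument names are illustrative, and the dense-adjacency mean aggregation is a simplification rather than the exact CKGConv design.

```python
import torch
import torch.nn as nn

class ContinuousKernelGraphConv(nn.Module):
    """Graph convolution whose kernel is a continuous function of pseudo-coordinates (sketch)."""
    def __init__(self, pe_dim, feat_dim, hidden=64):
        super().__init__()
        # maps a pseudo-coordinate to one kernel weight per feature channel
        self.kernel = nn.Sequential(nn.Linear(pe_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, feat_dim))

    def forward(self, x, pos, adj):
        # x: (N, F) node features, pos: (N, P) positional encodings, adj: (N, N) 0/1 adjacency
        rel = pos.unsqueeze(1) - pos.unsqueeze(0)        # (N, N, P) pseudo-coordinates
        w = self.kernel(rel) * adj.unsqueeze(-1)         # (N, N, F) kernel values on edges
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # simple mean aggregation
        return (w * x.unsqueeze(0)).sum(dim=1) / deg

# toy usage
N, P, F = 5, 8, 16
conv = ContinuousKernelGraphConv(pe_dim=P, feat_dim=F)
x, pos = torch.randn(N, F), torch.randn(N, P)
adj = (torch.rand(N, N) > 0.5).float()
out = conv(x, pos, adj)   # (N, F)
```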
https://proceedings.mlr.press/v235/ma24l.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24l/ma24l.pdf
https://openreview.net/forum?id=a3XFF0PGLU
Reward Shaping for Reinforcement Learning with An Assistant Reward Agent
https://proceedings.mlr.press/v235/ma24l.html
Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo, Di Fu, Tze-Yun Leong
https://proceedings.mlr.press/v235/ma24l.html
ICML 2024
Reward shaping is a promising approach to tackle the sparse-reward challenge of reinforcement learning by reconstructing more informative and dense rewards. This paper introduces a novel dual-agent reward shaping framework, composed of two synergistic agents: a policy agent to learn the optimal behavior and a reward agent to generate auxiliary reward signals. The proposed method operates as a self-learning approach, without reliance on expert knowledge or hand-crafted functions. By restructuring the rewards to capture future-oriented information, our framework effectively enhances the sample efficiency and convergence stability. Furthermore, the auxiliary reward signals facilitate the exploration of the environment in the early stage and the exploitation of the policy agent in the late stage, achieving a self-adaptive balance. We evaluate our framework on continuous control tasks with sparse and delayed rewards, demonstrating its robustness and superiority over existing methods.
https://proceedings.mlr.press/v235/ma24m.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24m/ma24m.pdf
https://openreview.net/forum?id=hz8cFsdz7P
LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery
https://proceedings.mlr.press/v235/ma24m.html
Pingchuan Ma, Tsun-Hsuan Wang, Minghao Guo, Zhiqing Sun, Joshua B. Tenenbaum, Daniela Rus, Chuang Gan, Wojciech Matusik
https://proceedings.mlr.press/v235/ma24m.html
ICML 2024
Large Language Models have recently gained significant attention in scientific discovery for their extensive knowledge and advanced reasoning capabilities. However, they encounter challenges in effectively simulating observational feedback and grounding it with language to propel advancements in physical scientific discovery. Conversely, human scientists undertake scientific discovery by formulating hypotheses, conducting experiments, and revising theories through observational analysis. Inspired by this, we propose to enhance the knowledge-driven, abstract reasoning abilities of LLMs with the computational strength of simulations. We introduce Scientific Generative Agent (SGA), a bilevel optimization framework: LLMs act as knowledgeable and versatile thinkers, proposing scientific hypotheses and reasoning about discrete components, such as physics equations or molecule structures; meanwhile, simulations function as experimental platforms, providing observational feedback and optimizing via differentiability for continuous parts, such as physical parameters. We conduct extensive experiments to demonstrate our framework’s efficacy in constitutive law discovery and molecular design, unveiling novel solutions that differ from conventional human expectations yet remain coherent upon analysis.
https://proceedings.mlr.press/v235/ma24n.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24n/ma24n.pdf
https://openreview.net/forum?id=CBcNl5Eo32
Fast Peer Adaptation with Context-aware Exploration
https://proceedings.mlr.press/v235/ma24n.html
Long Ma, Yuanfei Wang, Fangwei Zhong, Song-Chun Zhu, Yizhou Wang
https://proceedings.mlr.press/v235/ma24n.html
ICML 2024
Fast adapting to unknown peers (partners or opponents) with different strategies is a key challenge in multi-agent games. To do so, it is crucial for the agent to probe and identify the peer’s strategy efficiently, as this is the prerequisite for carrying out the best response in adaptation. However, exploring the strategies of unknown peers is difficult, especially when the games are partially observable and have a long horizon. In this paper, we propose a peer identification reward, which rewards the learning agent based on how well it can identify the behavior pattern of the peer over the historical context, such as the observation over multiple episodes. This reward motivates the agent to learn a context-aware policy for effective exploration and fast adaptation, i.e., to actively seek and collect informative feedback from peers when uncertain about their policies and to exploit the context to perform the best response when confident. We evaluate our method on diverse testbeds that involve competitive (Kuhn Poker), cooperative (PO-Overcooked), or mixed (Predator-Prey-W) games with peer agents. We demonstrate that our method induces more active exploration behavior, achieving faster adaptation and better outcomes than existing methods.
https://proceedings.mlr.press/v235/ma24o.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24o/ma24o.pdf
https://openreview.net/forum?id=x0yIaw2fgk
HarmonyDream: Task Harmonization Inside World Models
https://proceedings.mlr.press/v235/ma24o.html
Haoyu Ma, Jialong Wu, Ningya Feng, Chenjun Xiao, Dong Li, Jianye Hao, Jianmin Wang, Mingsheng Long
https://proceedings.mlr.press/v235/ma24o.html
ICML 2024
Model-based reinforcement learning (MBRL) holds the promise of sample-efficient learning by utilizing a world model, which models how the environment works and typically encompasses components for two tasks: observation modeling and reward modeling. In this paper, through a dedicated empirical investigation, we gain a deeper understanding of the role each task plays in world models and uncover the overlooked potential of sample-efficient MBRL by mitigating the domination of either observation or reward modeling. Our key insight is that while prevalent approaches of explicit MBRL attempt to restore abundant details of the environment via observation models, it is difficult due to the environment’s complexity and limited model capacity. On the other hand, reward models, while dominating implicit MBRL and adept at learning compact task-centric dynamics, are inadequate for sample-efficient learning without richer learning signals. Motivated by these insights and discoveries, we propose a simple yet effective approach, HarmonyDream, which automatically adjusts loss coefficients to maintain task harmonization, i.e. a dynamic equilibrium between the two tasks in world model learning. Our experiments show that the base MBRL method equipped with HarmonyDream gains 10%-69% absolute performance boosts on visual robotic tasks and sets a new state-of-the-art result on the Atari 100K benchmark. Code is available at https://github.com/thuml/HarmonyDream.
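The abstract above describes automatically adjusted loss coefficients for balancing observation and reward modeling; a plausible minimal sketch is an uncertainty-weighting-style harmonizer with learnable log-scales, shown below. This is an assumption about the general mechanism, not the exact coefficient rule used in HarmonyDream, and the name `LossHarmonizer` is hypothetical.

```python
import torch
import torch.nn as nn

class LossHarmonizer(nn.Module):
    """Learnable coefficients that balance several world-model losses (illustrative).

    Each loss L_i is scaled by exp(-s_i), and the +s_i term discourages the trivial
    solution of shrinking every coefficient toward zero.
    """
    def __init__(self, num_losses=2):
        super().__init__()
        self.log_scales = nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):
        losses = torch.stack(losses)  # list of scalar losses -> 1-D tensor
        return (torch.exp(-self.log_scales) * losses + self.log_scales).sum()

# usage inside a world-model training step (obs_loss, rew_loss are scalar tensors):
# harmonizer = LossHarmonizer(num_losses=2)
# total = harmonizer([obs_loss, rew_loss]); total.backward()
```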
https://proceedings.mlr.press/v235/ma24p.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24p/ma24p.pdf
https://openreview.net/forum?id=4lghifYrSU
High-dimensional Linear Bandits with Knapsacks
https://proceedings.mlr.press/v235/ma24p.html
Wanteng Ma, Dong Xia, Jiashuo Jiang
https://proceedings.mlr.press/v235/ma24p.html
ICML 2024
We study the contextual bandits with knapsacks (CBwK) problem under the high-dimensional setting where the dimension of the feature is large. We investigate how to exploit the sparsity structure to achieve improved regret for the CBwK problem. To this end, we first develop an online variant of the hard thresholding algorithm that performs optimal sparse estimation. We further combine our online estimator with a primal-dual framework, where we assign a dual variable to each knapsack constraint and utilize an online learning algorithm to update the dual variable, thereby controlling the consumption of the knapsack capacity. We show that this integrated approach allows us to achieve a sublinear regret that depends logarithmically on the feature dimension, thus improving the polynomial dependency established in the previous literature. We also apply our framework to the high-dimensional contextual bandit problem without the knapsack constraint and achieve optimal regret in both the data-poor regime and the data-rich regime.
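A toy sketch of the hard-thresholding idea mentioned above (a gradient step on the squared loss followed by keeping the $s$ largest-magnitude coordinates). It ignores the knapsack constraints and the primal-dual machinery of the paper, and the function names, step size, and toy data are arbitrary.

```python
import numpy as np

def hard_threshold(theta, s):
    """Keep the s largest-magnitude coordinates, zero out the rest."""
    out = np.zeros_like(theta)
    keep = np.argsort(np.abs(theta))[-s:]
    out[keep] = theta[keep]
    return out

def online_iht(features, targets, s, lr=0.002):
    """Online hard-thresholding estimator for s-sparse linear models (illustrative only)."""
    theta = np.zeros(features.shape[1])
    for x_t, y_t in zip(features, targets):
        grad = (x_t @ theta - y_t) * x_t       # gradient of the per-sample squared loss
        theta = hard_threshold(theta - lr * grad, s)
    return theta

# toy example: d = 200 features, only 5 of them informative
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))
beta = np.zeros(200); beta[:5] = 1.0
y = X @ beta + 0.1 * rng.normal(size=5000)
theta_hat = online_iht(X, y, s=5)
print(np.flatnonzero(theta_hat))   # estimated support (the true support is {0,...,4})
```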
https://proceedings.mlr.press/v235/ma24q.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24q/ma24q.pdf
https://openreview.net/forum?id=jWHU4b7Yk6
SyCoCa: Symmetrizing Contrastive Captioners with Attentive Masking for Multimodal Alignment
https://proceedings.mlr.press/v235/ma24q.html
Ziping Ma, Furong Xu, Jian Liu, Ming Yang, Qingpei Guo
https://proceedings.mlr.press/v235/ma24q.html
ICML 2024
Multimodal alignment between language and vision is a fundamental topic in current vision-language model research. Contrastive Captioners (CoCa), as a representative method, integrates Contrastive Language-Image Pretraining (CLIP) and Image Captioning (IC) into a unified framework, achieving impressive results. CLIP imposes bidirectional constraints on global representations of entire images and sentences. Although IC conducts unidirectional image-to-text generation on local representations, it lacks any constraint on local text-to-image reconstruction, which limits the ability to understand images at a fine-grained level when aligned with texts. To achieve multimodal alignment from both global and local perspectives, this paper proposes Symmetrizing Contrastive Captioners (SyCoCa), which introduces bidirectional interactions on images and texts across the global and local representation levels. Specifically, we expand a Text-Guided Masked Image Modeling (TG-MIM) head based on the ITC and IC heads. The improved SyCoCa further leverages textual cues to reconstruct contextual images and visual cues to predict textual contents. When implementing bidirectional local interactions, the local contents of images tend to be cluttered or unrelated to their textual descriptions. Thus, we employ an attentive masking strategy to select effective image patches for interaction. Extensive experiments on five vision-language tasks, including image-text retrieval, image captioning, visual question answering, and zero-shot/finetuned image classification, validate the effectiveness of our proposed method.
https://proceedings.mlr.press/v235/ma24r.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24r/ma24r.pdf
https://openreview.net/forum?id=2pYTCy4GUV
The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling
https://proceedings.mlr.press/v235/ma24r.html
Jiajun Ma, Shuchen Xue, Tianyang Hu, Wenjia Wang, Zhaoqiang Liu, Zhenguo Li, Zhi-Ming Ma, Kenji Kawaguchi
https://proceedings.mlr.press/v235/ma24r.html
ICML 2024
With the incorporation of the UNet architecture, diffusion probabilistic models have become a dominant force in image generation tasks. One key design in UNet is the skip connections between the encoder and decoder blocks. Although skip connections have been shown to improve training stability and model performance, we point out that such shortcuts can be a limiting factor for the complexity of the transformation. As the sampling steps decrease, the generation process and the role of the UNet get closer to the push-forward transformations from Gaussian distribution to the target, posing a challenge for the network’s complexity. To address this challenge, we propose Skip-Tuning, a simple yet surprisingly effective training-free tuning method on the skip connections. For instance, our method can achieve 100% FID improvement for pretrained EDM on ImageNet 64 with only 19 NFEs (1.75), breaking the limit of ODE samplers regardless of sampling steps. Surprisingly, the improvement persists when we increase the number of sampling steps and can even surpass the best result from EDM-2 (1.58) with only 39 NFEs (1.57). Comprehensive exploratory experiments are conducted to shed light on the surprising effectiveness of our Skip-Tuning. We observe that while Skip-Tuning increases the score-matching losses in the pixel space, the losses in the feature space are reduced, particularly at intermediate noise levels, which coincide with the most effective range accounting for image quality improvement.
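Skip-Tuning is described above as a training-free tuning of the UNet skip connections; one simple reading is to down-weight the encoder skip branch before it is merged into the decoder, as sketched below. The single global coefficient `rho` and the function name are illustrative assumptions, not the paper's exact per-level scheme.

```python
import torch

def concat_with_scaled_skip(decoder_feat, skip_feat, rho=0.7):
    """Decoder-side merge where the encoder skip branch is scaled by a constant rho < 1.

    In a standard UNet this would replace torch.cat([decoder_feat, skip_feat], dim=1).
    """
    return torch.cat([decoder_feat, rho * skip_feat], dim=1)

# illustrative shapes: (batch, channels, H, W)
dec = torch.randn(2, 64, 32, 32)
skip = torch.randn(2, 64, 32, 32)
merged = concat_with_scaled_skip(dec, skip, rho=0.7)   # (2, 128, 32, 32)
```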
https://proceedings.mlr.press/v235/ma24s.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24s/ma24s.pdf
https://openreview.net/forum?id=1WWpIEFdlk
Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder
https://proceedings.mlr.press/v235/ma24s.html
Yiyang Ma, Wenhan Yang, Jiaying Liu
https://proceedings.mlr.press/v235/ma24s.html
ICML 2024
The images produced by diffusion models can attain excellent perceptual quality. However, it is challenging for diffusion models to guarantee low distortion, hence the integration of diffusion models and image compression models still needs more comprehensive exploration. This paper presents a diffusion-based image compression method that employs a privileged end-to-end decoder model as a correction, which achieves better perceptual quality while keeping the distortion bounded to an extent. We build a diffusion model and design a novel paradigm that combines the diffusion model and an end-to-end decoder, where the latter is responsible for transmitting the privileged information extracted at the encoder side. Specifically, we theoretically analyze the reconstruction process of the diffusion models at the encoder side with the original images being visible. Based on the analysis, we introduce an end-to-end convolutional decoder to provide a better approximation of the score function $\nabla_{\mathbf{x}_t}\log p(\mathbf{x}_t)$ at the encoder side and effectively transmit the combination. Experiments demonstrate the superiority of our method in both distortion and perception compared with previous perceptual compression methods.
https://proceedings.mlr.press/v235/ma24t.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ma24t/ma24t.pdf
https://openreview.net/forum?id=SPygKwms0X
A Provable Decision Rule for Out-of-Distribution Detection
https://proceedings.mlr.press/v235/ma24t.html
Xinsong Ma, Xin Zou, Weiwei Liu
https://proceedings.mlr.press/v235/ma24t.html
ICML 2024
The out-of-distribution (OOD) detection task plays a key role in reliable and safety-critical applications. Existing research mainly focuses on designing or training powerful score functions but overlooks the decision rule based on the proposed score function. Different from previous work, this paper aims to design a decision rule with rigorous theoretical guarantees and good empirical performance. Specifically, we provide a new insight for the OOD detection task from a hypothesis testing perspective and propose a novel generalized Benjamini-Hochberg (g-BH) procedure with empirical p-values to solve the testing problem. Theoretically, the g-BH procedure controls the false discovery rate (FDR) at a pre-specified level. Furthermore, we derive an upper bound on the expectation of the false positive rate (FPR) for the g-BH procedure based on the tailed generalized Gaussian distribution family, indicating that the FPR of the g-BH procedure converges to zero in probability. Finally, extensive experimental results verify the superiority of the g-BH procedure over the traditional threshold-based decision rule on several OOD detection benchmarks.
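For reference, a sketch of the classical Benjamini-Hochberg step-up procedure applied to empirical p-values computed from in-distribution calibration scores. This is the standard BH baseline rather than the paper's generalized (g-BH) variant, and the score-orientation convention (lower score = more OOD-like) is an assumption.

```python
import numpy as np

def empirical_pvalues(test_scores, calib_scores):
    """Empirical p-values from held-out in-distribution calibration scores.

    Convention: lower score = more OOD-like, so the p-value is the (smoothed)
    fraction of calibration scores that are <= the test score.
    """
    calib = np.sort(np.asarray(calib_scores))
    ranks = np.searchsorted(calib, np.asarray(test_scores), side="right")
    return (1.0 + ranks) / (1.0 + len(calib))

def benjamini_hochberg(pvals, alpha=0.05):
    """Standard BH step-up procedure; returns a boolean mask of rejected (flagged-OOD) inputs."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# usage with scores from any OOD score function (higher = more in-distribution):
# flags = benjamini_hochberg(empirical_pvalues(test_scores, id_calibration_scores), alpha=0.05)
```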
https://proceedings.mlr.press/v235/machiraju24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/machiraju24a/machiraju24a.pdf
https://openreview.net/forum?id=PjVqEErDgK
Prospector Heads: Generalized Feature Attribution for Large Models & Data
https://proceedings.mlr.press/v235/machiraju24a.html
Gautam Machiraju, Alexander Derry, Arjun D Desai, Neel Guha, Amir-Hossein Karimi, James Zou, Russ B Altman, Christopher Re, Parag Mallick
https://proceedings.mlr.press/v235/machiraju24a.html
ICML 2024
Feature attribution, the ability to localize regions of the input data that are relevant for classification, is an important capability for ML models in scientific and biomedical domains. Current methods for feature attribution, which rely on "explaining" the predictions of end-to-end classifiers, suffer from imprecise feature localization and are inadequate for use with small sample sizes and high-dimensional datasets due to computational challenges. We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods that can be applied to any encoder and any data modality. Prospector heads generalize across modalities through experiments on sequences (text), images (pathology), and graphs (protein structures), outperforming baseline attribution methods by up to 26.3 points in mean localization AUPRC. We also demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data. Through their high performance, flexibility, and generalizability, prospectors provide a framework for improving trust and transparency for ML models in complex domains.
https://proceedings.mlr.press/v235/madhu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/madhu24a/madhu24a.pdf
https://openreview.net/forum?id=wmljUnbjy6
Unsupervised Parameter-free Simplicial Representation Learning with Scattering Transforms
https://proceedings.mlr.press/v235/madhu24a.html
Hiren Madhu, Sravanthi Gurugubelli, Sundeep Prabhakar Chepuri
https://proceedings.mlr.press/v235/madhu24a.html
ICML 2024
Simplicial neural network models are becoming popular for processing and analyzing higher-order graph data, but they suffer from high training complexity and dependence on task-specific labels. To address these challenges, we propose simplicial scattering networks (SSNs), a parameter-free model inspired by scattering transforms designed to extract task-agnostic features from simplicial complex data without labels in a principled manner. Specifically, we propose a simplicial scattering transform based on random walk matrices for various adjacencies underlying a simplicial complex. We then use the simplicial scattering transform to construct a deep filter bank network that captures high-frequency information at multiple scales. The proposed simplicial scattering transform possesses properties such as permutation invariance, robustness to perturbations, and expressivity. We theoretically prove that including higher-order information improves the robustness of SSNs to perturbations. Empirical evaluations demonstrate that SSNs outperform existing simplicial or graph neural models in many tasks like node classification, simplicial closure, graph classification, trajectory prediction, and simplex prediction while being computationally efficient.
https://proceedings.mlr.press/v235/madsen24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/madsen24a/madsen24a.pdf
https://openreview.net/forum?id=tw1PwpuAuN
Faithfulness Measurable Masked Language Models
https://proceedings.mlr.press/v235/madsen24a.html
Andreas Madsen, Siva Reddy, Sarath Chandar
https://proceedings.mlr.press/v235/madsen24a.html
ICML 2024
A common approach to explaining NLP models is to use importance measures that express which tokens are important for a prediction. Unfortunately, such explanations are often wrong despite being persuasive. Therefore, it is essential to measure their faithfulness. One such metric is that, if tokens are truly important, then masking them should result in worse model performance. However, token masking introduces out-of-distribution issues, and existing solutions that address this are computationally expensive and employ proxy models. Furthermore, other metrics are very limited in scope. This work proposes an inherently faithfulness-measurable model that addresses these challenges. This is achieved using a novel fine-tuning method that incorporates masking, such that masking tokens become in-distribution by design. This differs from existing approaches, which are completely model-agnostic but are inapplicable in practice. We demonstrate the generality of our approach by applying it to 16 different datasets and validate it using statistical in-distribution tests. The faithfulness is then measured with 9 different importance measures. Because masking is in-distribution, importance measures that themselves use masking become consistently more faithful. Additionally, because the model makes faithfulness cheap to measure, we can optimize explanations towards maximal faithfulness; thus, our model becomes indirectly inherently explainable.
https://proceedings.mlr.press/v235/maene24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/maene24a/maene24a.pdf
https://openreview.net/forum?id=vxPmrxKe0J
On the Hardness of Probabilistic Neurosymbolic Learning
https://proceedings.mlr.press/v235/maene24a.html
Jaron Maene, Vincent Derkinderen, Luc De Raedt
https://proceedings.mlr.press/v235/maene24a.html
ICML 2024
The limitations of purely neural learning have sparked an interest in probabilistic neurosymbolic models, which combine neural networks with probabilistic logical reasoning. As these neurosymbolic models are trained with gradient descent, we study the complexity of differentiating probabilistic reasoning. We prove that although approximating these gradients is intractable in general, it becomes tractable during training. Furthermore, we introduce WeightME, an unbiased gradient estimator based on model sampling. Under mild assumptions, WeightME approximates the gradient with probabilistic guarantees using a logarithmic number of calls to a SAT solver. Lastly, we evaluate the necessity of these guarantees on the gradient. Our experiments indicate that the existing biased approximations indeed struggle to optimize even when exact solving is still feasible.
https://proceedings.mlr.press/v235/mahankali24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mahankali24a/mahankali24a.pdf
https://openreview.net/forum?id=Y9qzwNlKVU
Random Latent Exploration for Deep Reinforcement Learning
https://proceedings.mlr.press/v235/mahankali24a.html
Srinath V. Mahankali, Zhang-Wei Hong, Ayush Sekhari, Alexander Rakhlin, Pulkit Agrawal
https://proceedings.mlr.press/v235/mahankali24a.html
ICML 2024
The ability to efficiently explore high-dimensional state spaces is essential for the practical success of deep Reinforcement Learning (RL). This paper introduces a new exploration technique called Random Latent Exploration (RLE) that combines the strengths of exploration bonuses and randomized value functions (two popular approaches for effective exploration in deep RL). RLE leverages the idea of perturbing rewards by adding structured random rewards to the original task rewards in certain (random) states of the environment, to encourage the agent to explore the environment during training. RLE is straightforward to implement and performs well in practice. To demonstrate the practical effectiveness of RLE, we evaluate it on the challenging Atari and IsaacGym benchmarks and show that RLE exhibits higher overall scores across all the tasks than other approaches, including action-noise and randomized value function exploration.
https://proceedings.mlr.press/v235/mahlau24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mahlau24a/mahlau24a.pdf
https://openreview.net/forum?id=SoqxSnEUi1
Mastering Zero-Shot Interactions in Cooperative and Competitive Simultaneous Games
https://proceedings.mlr.press/v235/mahlau24a.html
Yannik Mahlau, Frederik Schubert, Bodo Rosenhahn
https://proceedings.mlr.press/v235/mahlau24a.html
ICML 2024
The combination of self-play and planning has achieved great successes in sequential games, for instance in Chess and Go. However, adapting algorithms such as AlphaZero to simultaneous games poses a new challenge. In these games, missing information about concurrent actions of other agents is a limiting factor as they may select different Nash equilibria or do not play optimally at all. Thus, it is vital to model the behavior of the other agents when interacting with them in simultaneous games. To this end, we propose Albatross: AlphaZero for Learning Bounded-rational Agents and Temperature-based Response Optimization using Simulated Self-play. Albatross learns to play the novel equilibrium concept of a Smooth Best Response Logit Equilibrium (SBRLE), which enables cooperation and competition with agents of any playing strength. We perform an extensive evaluation of Albatross on a set of cooperative and competitive simultaneous perfect-information games. In contrast to AlphaZero, Albatross is able to exploit weak agents in the competitive game of Battlesnake. Additionally, it yields an improvement of 37.6% compared to previous state of the art in the cooperative Overcooked benchmark.
https://proceedings.mlr.press/v235/mai24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/mai24a/mai24a.pdf
https://openreview.net/forum?id=bZ4fzw1iz7
Split-and-Denoise: Protect large language model inference with local differential privacy
https://proceedings.mlr.press/v235/mai24a.html
Peihua Mai, Ran Yan, Zhe Huang, Youjia Yang, Yan Pang
https://proceedings.mlr.press/v235/mai24a.html
ICML 2024
Large Language Models (LLMs) excel in natural language understanding by capturing hidden semantics in vector space. This process enriches the value of text embeddings for various downstream tasks, thereby fostering the Embedding-as-a-Service (EaaS) business model. However, the risk of privacy leakage due to direct text transmission to servers remains a critical concern. To address this, we introduce Split-N-Denoise (SnD), a private inference framework that splits the model to execute the token embedding layer on the client side at minimal computational cost. This allows the client to introduce noise prior to transmitting the embeddings to the server, and subsequently receive and denoise the perturbed output embeddings for downstream tasks. Our approach is designed for the inference stage of LLMs and requires no modifications to the model parameters. Extensive experiments demonstrate SnD’s effectiveness in optimizing the privacy-utility tradeoff across various LLM architectures and diverse downstream tasks. The results reveal an improvement in performance under the same privacy budget of over 10% on average compared to the baselines, offering clients a privacy-preserving solution for local privacy protection.
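A sketch of the client-side split-and-noise step described above, assuming norm clipping and Laplace noise; the mechanism, clipping bound, and privacy parameters are illustrative assumptions, and the server-side computation and client-side denoising of the returned output embeddings are omitted.

```python
import torch

def client_perturb_embeddings(token_embeddings, epsilon=8.0, sensitivity=1.0):
    """Client side: clip token embeddings and add Laplace noise before sending them out (sketch)."""
    norms = token_embeddings.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    clipped = token_embeddings * (sensitivity / norms).clamp(max=1.0)  # bound per-token norm
    scale = sensitivity / epsilon                                      # illustrative calibration
    noise = torch.distributions.Laplace(0.0, scale).sample(clipped.shape)
    return clipped + noise

# usage: emb = embedding_layer(input_ids); noisy = client_perturb_embeddings(emb)
# the noisy embeddings (not the raw text) are what the server receives
```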
https://proceedings.mlr.press/v235/maia-polo24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/maia-polo24a/maia-polo24a.pdf
https://openreview.net/forum?id=qAml3FpfhG
tinyBenchmarks: evaluating LLMs with fewer examples
https://proceedings.mlr.press/v235/maia-polo24a.html
Felipe Maia Polo, Lucas Weber, Leshem Choshen, Yuekai Sun, Gongjun Xu, Mikhail Yurochkin
https://proceedings.mlr.press/v235/maia-polo24a.html
ICML 2024
The versatility of large language models (LLMs) led to the creation of diverse benchmarks that thoroughly test a variety of language models’ abilities. These benchmarks consist of tens of thousands of examples making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.
https://proceedings.mlr.press/v235/majee24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/majee24a/majee24a.pdf
https://openreview.net/forum?id=G8zDeKOp0R
SCoRe: Submodular Combinatorial Representation Learning
https://proceedings.mlr.press/v235/majee24a.html
Anay Majee, Suraj Nandkishor Kothawade, Krishnateja Killamsetty, Rishabh K Iyer
https://proceedings.mlr.press/v235/majee24a.html
ICML 2024
In this paper we introduce the SCoRe (Submodular Combinatorial Representation Learning) framework, a novel approach in representation learning that addresses inter-class bias and intra-class variance. SCoRe provides a new combinatorial viewpoint to representation learning, by introducing a family of loss functions based on set-based submodular information measures. We develop two novel combinatorial formulations for loss functions, using the Total Information and Total Correlation, that naturally minimize intra-class variance and inter-class bias. Several commonly used metric/contrastive learning loss functions like supervised contrastive loss, orthogonal projection loss, and N-pairs loss, are all instances of SCoRe, thereby underlining the versatility and applicability of SCoRe in a broad spectrum of learning scenarios. Novel objectives in SCoRe naturally model class-imbalance with up to 7.6% improvement in classification on CIFAR-10-LT, CIFAR-100-LT, MedMNIST, 2.1% on ImageNet-LT, and 19.4% in object detection on IDD and LVIS (v1.0), demonstrating its effectiveness over existing approaches.
https://proceedings.mlr.press/v235/majumder24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/majumder24a/majumder24a.pdf
https://openreview.net/forum?id=5SpjhZNXtt
Position: Data-driven Discovery with Large Generative Models
https://proceedings.mlr.press/v235/majumder24a.html
Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, Peter Clark
https://proceedings.mlr.press/v235/majumder24a.html
ICML 2024
With the accumulation of data at an unprecedented rate, its potential to fuel scientific discovery is growing exponentially. This position paper urges the Machine Learning (ML) community to exploit the capabilities of large generative models (LGMs) to develop automated systems for end-to-end data-driven discovery—a paradigm encompassing the search and verification of hypotheses purely from a set of provided datasets, without the need for additional data collection or physical experiments. We first outline several desiderata for an ideal data-driven discovery system. Then, through DataVoyager, a proof-of-concept utilizing GPT-4, we demonstrate how LGMs fulfill several of these desiderata—a feat previously unattainable—while also highlighting important limitations in the current system that open up opportunities for novel ML research. We contend that achieving accurate, reliable, and robust end-to-end discovery systems solely through the current capabilities of LGMs is challenging. We instead advocate for fail-proof tool integration, along with active user moderation through feedback mechanisms, to foster data-driven scientific discoveries with efficiency and reproducibility.
https://proceedings.mlr.press/v235/makkuva24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/makkuva24a/makkuva24a.pdf
https://openreview.net/forum?id=sDjszMb2Ir
LASER: Linear Compression in Wireless Distributed Optimization
https://proceedings.mlr.press/v235/makkuva24a.html
Ashok Vardhan Makkuva, Marco Bondaschi, Thijs Vogels, Martin Jaggi, Hyeji Kim, Michael Gastpar
https://proceedings.mlr.press/v235/makkuva24a.html
ICML 2024
Data-parallel SGD is the de facto algorithm for distributed optimization, especially for large scale machine learning. Despite its merits, the communication bottleneck is one of its persistent issues. Most compression schemes to alleviate this either assume noiseless communication links, or fail to achieve good performance on practical tasks. In this paper, we close this gap and introduce LASER: LineAr CompreSsion in WirEless DistRibuted Optimization. LASER capitalizes on the inherent low-rank structure of gradients and transmits them efficiently over the noisy channels. Whilst enjoying theoretical guarantees similar to those of the classical SGD, LASER shows consistent gains over baselines on a variety of practical benchmarks. In particular, it outperforms the state-of-the-art compression schemes on challenging computer vision and GPT language modeling tasks. On the latter, we obtain 50-64% improvement in perplexity over our baselines for noisy channels.
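A sketch of low-rank gradient compression in the spirit of power-iteration methods (e.g., PowerSGD); this is not the exact LASER scheme or its wireless channel model, and the rank, iteration count, and function names are arbitrary choices for illustration.

```python
import torch

def lowrank_factors(grad, rank=4, power_iters=1):
    """Rank-r factorization of a 2-D gradient via power iteration (sketch).

    Returns P (m x r) and Q (n x r) with grad approximately equal to P @ Q.T;
    transmitting the two factors is far cheaper than sending the full m x n gradient.
    """
    m, n = grad.shape
    Q = torch.randn(n, rank)
    for _ in range(power_iters):
        P = grad @ Q
        P, _ = torch.linalg.qr(P)      # orthonormalize the left factor
        Q = grad.T @ P
    return P, Q

def reconstruct(P, Q):
    return P @ Q.T

# usage: for each 2-D weight gradient W.grad, send lowrank_factors(W.grad) over the channel
# and aggregate the reconstructions on the receiver side.
```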
https://proceedings.mlr.press/v235/malach24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/malach24a/malach24a.pdf
https://openreview.net/forum?id=i56plqPpEa
Auto-Regressive Next-Token Predictors are Universal Learners
https://proceedings.mlr.press/v235/malach24a.html
Eran Malach
https://proceedings.mlr.press/v235/malach24a.html
ICML 2024
Large language models display remarkable capabilities in logical and mathematical reasoning, allowing them to solve complex tasks. Interestingly, these abilities emerge in networks trained on the simple task of next-token prediction. In this work, we present a theoretical framework for studying auto-regressive next-token predictors. We demonstrate that even simple models such as linear next-token predictors, trained on Chain-of-Thought (CoT) data, can approximate any function efficiently computed by a Turing machine. We introduce a new complexity measure—length complexity—which measures the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity. Finally, we show experimentally that simple next-token predictors, such as linear networks and shallow Multi-Layer Perceptrons (MLPs), display non-trivial performance on text generation and arithmetic tasks. Our results demonstrate that the power of today’s LLMs can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture.