abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v235/zhang24ci.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24ci/zhang24ci.pdf | https://openreview.net/forum?id=BvBdYSIkpb | Uncertainty-Aware Reward-Free Exploration with General Function Approximation | https://proceedings.mlr.press/v235/zhang24ci.html | Junkai Zhang, Weitong Zhang, Dongruo Zhou, Quanquan Gu | https://proceedings.mlr.press/v235/zhang24ci.html | ICML 2024 | Mastering multiple tasks through exploration and learning in an environment poses a significant challenge in reinforcement learning (RL). Unsupervised RL has been introduced to address this challenge by training policies with intrinsic rewards rather than extrinsic rewards. However, current intrinsic reward designs and unsupervised RL algorithms often overlook the heterogeneous nature of collected samples, thereby diminishing their sample efficiency. To overcome this limitation, in this paper, we propose a reward-free RL algorithm called GFA-RFE. The key idea behind our algorithm is an uncertainty-aware intrinsic reward for exploring the environment and an uncertainty-weighted learning process to handle heterogeneous uncertainty in different samples. Theoretically, we show that in order to find an $\epsilon$-optimal policy, GFA-RFE needs to collect $\tilde{O} (H^2 \log N_{\mathcal{F}} (\epsilon) \text{dim} (\mathcal{F}) / \epsilon^2 )$ number of episodes, where $\mathcal{F}$ is the value function class with covering number $N_{\mathcal{F}} (\epsilon)$ and generalized eluder dimension $\text{dim} (\mathcal{F})$. Such a result outperforms all existing reward-free RL algorithms. We further implement and evaluate GFA-RFE across various domains and tasks in the DeepMind Control Suite. Experiment results show that GFA-RFE outperforms or is comparable to the performance of state-of-the-art unsupervised RL algorithms. |
https://proceedings.mlr.press/v235/zhang24cj.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cj/zhang24cj.pdf | https://openreview.net/forum?id=Ljhrv1Wmbr | Feature Contamination: Neural Networks Learn Uncorrelated Features and Fail to Generalize | https://proceedings.mlr.press/v235/zhang24cj.html | Tianren Zhang, Chujie Zhao, Guanyu Chen, Yizhou Jiang, Feng Chen | https://proceedings.mlr.press/v235/zhang24cj.html | ICML 2024 | Learning representations that generalize under distribution shifts is critical for building robust machine learning models. However, despite significant efforts in recent years, algorithmic advances in this direction have been limited. In this work, we seek to understand the fundamental difficulty of out-of-distribution generalization with deep neural networks. We first empirically show that perhaps surprisingly, even allowing a neural network to explicitly fit the representations obtained from a teacher network that can generalize out-of-distribution is insufficient for the generalization of the student network. Then, by a theoretical study of two-layer ReLU networks optimized by stochastic gradient descent (SGD) under a structured feature model, we identify a fundamental yet unexplored feature learning proclivity of neural networks, feature contamination: neural networks can learn uncorrelated features together with predictive features, resulting in generalization failure under distribution shifts. Notably, this mechanism essentially differs from the prevailing narrative in the literature that attributes the generalization failure to spurious correlations. Overall, our results offer new insights into the non-linear feature learning dynamics of neural networks and highlight the necessity of considering inductive biases in out-of-distribution generalization. |
https://proceedings.mlr.press/v235/zhang24ck.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24ck/zhang24ck.pdf | https://openreview.net/forum?id=kmugaw9Kfq | On the Expressive Power of Spectral Invariant Graph Neural Networks | https://proceedings.mlr.press/v235/zhang24ck.html | Bohang Zhang, Lingxiao Zhao, Haggai Maron | https://proceedings.mlr.press/v235/zhang24ck.html | ICML 2024 | Incorporating spectral information to enhance Graph Neural Networks (GNNs) has shown promising results but raises a fundamental challenge due to the inherent ambiguity of eigenvectors. Various architectures have been proposed to address this ambiguity, referred to as spectral invariant architectures. Notable examples include GNNs and Graph Transformers that use spectral distances, spectral projection matrices, or other invariant spectral features. However, the potential expressive power of these spectral invariant architectures remains largely unclear. The goal of this work is to gain a deep theoretical understanding of the expressive power obtainable when using spectral features. We first introduce a novel message-passing framework for designing spectral invariant GNNs, called Eigenspace Projection GNN (EPNN). Our comprehensive analysis shows that EPNN essentially unifies all prior spectral invariant architectures, in that they are either strictly less expressive or equivalent to EPNN. A fine-grained expressiveness hierarchy among different architectures is also established. On the other hand, we present a surprising result that EPNN itself is bounded by a recently proposed class of Subgraph GNNs, implying that all these spectral invariant architectures are strictly less expressive than 3-WL. Finally, we demonstrate that these spectral features offer no additional advantage when combined with more expressive GNNs. |
https://proceedings.mlr.press/v235/zhang24cl.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cl/zhang24cl.pdf | https://openreview.net/forum?id=S6a6gHvMWx | Position: Social Environment Design Should be Further Developed for AI-based Policy-Making | https://proceedings.mlr.press/v235/zhang24cl.html | Edwin Zhang, Sadie Zhao, Tonghan Wang, Safwan Hossain, Henry Gasztowtt, Stephan Zheng, David C. Parkes, Milind Tambe, Yiling Chen | https://proceedings.mlr.press/v235/zhang24cl.html | ICML 2024 | Artificial Intelligence (AI) holds promise as a technology that can be used to improve government and economic policy-making. This paper proposes a new research agenda towards this end by introducing Social Environment Design, a general framework for the use of AI in automated policy-making that connects with the Reinforcement Learning, EconCS, and Computational Social Choice communities. The framework seeks to capture general economic environments, includes voting on policy objectives, and gives a direction for the systematic analysis of government and economic policy through AI simulation. We highlight key open problems for future research in AI-based policy-making. By solving these challenges, we hope to achieve various social welfare objectives, thereby promoting more ethical and responsible decision making. |
https://proceedings.mlr.press/v235/zhang24cm.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cm/zhang24cm.pdf | https://openreview.net/forum?id=d1P6GtRzuV | Neural Jump-Diffusion Temporal Point Processes | https://proceedings.mlr.press/v235/zhang24cm.html | Shuai Zhang, Chuan Zhou, Yang Aron Liu, Peng Zhang, Xixun Lin, Zhi-Ming Ma | https://proceedings.mlr.press/v235/zhang24cm.html | ICML 2024 | We present a novel perspective on temporal point processes (TPPs) by reformulating their intensity processes as solutions to stochastic differential equations (SDEs). In particular, we first prove the equivalent SDE formulations of several classical TPPs, including Poisson processes, Hawkes processes, and self-correcting processes. Based on these proofs, we introduce a unified TPP framework called Neural Jump-Diffusion Temporal Point Process (NJDTPP), whose intensity process is governed by a neural jump-diffusion SDE (NJDSDE) where the drift, diffusion, and jump coefficient functions are parameterized by neural networks. Compared to previous works, NJDTPP exhibits model flexibility in capturing intensity dynamics without relying on any specific functional form, and provides theoretical guarantees regarding the existence and uniqueness of the solution to the proposed NJDSDE. Experiments on both synthetic and real-world datasets demonstrate that NJDTPP is capable of capturing the dynamics of intensity processes in different scenarios and significantly outperforms the state-of-the-art TPP models in prediction tasks. |
https://proceedings.mlr.press/v235/zhang24cn.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cn/zhang24cn.pdf | https://openreview.net/forum?id=HsliOqZkc0 | The Emergence of Reproducibility and Consistency in Diffusion Models | https://proceedings.mlr.press/v235/zhang24cn.html | Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, Qing Qu | https://proceedings.mlr.press/v235/zhang24cn.html | ICML 2024 | In this work, we investigate an intriguing and prevalent phenomenon of diffusion models which we term "consistent model reproducibility": given the same starting noise input and a deterministic sampler, different diffusion models often yield remarkably similar outputs. We confirm this phenomenon through comprehensive experiments, implying that different diffusion models consistently reach the same data distribution and score function regardless of diffusion model frameworks, model architectures, or training procedures. More strikingly, our further investigation implies that diffusion models are learning distinct distributions influenced by the training data size. This is evident in two distinct training regimes: (i) the "memorization regime," where the diffusion model overfits to the training data distribution, and (ii) the "generalization regime," where the model learns the underlying data distribution. Our study also finds that this valuable property generalizes to many variants of diffusion models, including those for conditional generation and solving inverse problems. Lastly, we discuss how our findings connect to existing research and highlight the practical implications of our discoveries. |
https://proceedings.mlr.press/v235/zhang24co.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24co/zhang24co.pdf | https://openreview.net/forum?id=st2BTty53v | Transferable Facial Privacy Protection against Blind Face Restoration via Domain-Consistent Adversarial Obfuscation | https://proceedings.mlr.press/v235/zhang24co.html | Kui Zhang, Hang Zhou, Jie Zhang, Wenbo Zhou, Weiming Zhang, Nenghai Yu | https://proceedings.mlr.press/v235/zhang24co.html | ICML 2024 | With the rise of social media and the proliferation of facial recognition surveillance, concerns surrounding privacy have escalated significantly. While numerous studies have concentrated on safeguarding users against unauthorized face recognition, a new and often overlooked issue has emerged due to advances in facial restoration techniques: traditional methods of facial obfuscation may no longer provide a secure shield, as they can potentially expose anonymous information to human perception. Our empirical study shows that blind face restoration (BFR) models can restore obfuscated faces with high probability by simply retraining them on obfuscated (e.g., pixelated) faces. To address this, we propose a transferable adversarial obfuscation method for privacy protection against BFR models. Specifically, we observed a common characteristic among BFR models, namely, their capability to approximate an inverse mapping of a transformation from a high-quality image domain to a low-quality image domain. Leveraging this shared model attribute, we have developed a domain-consistent adversarial method for generating obfuscated images. In essence, our method is designed to minimize overfitting to surrogate models during the perturbation generation process, thereby enhancing the generalization of adversarial obfuscated facial images. Extensive experiments on various BFR models demonstrate the effectiveness and transferability of the proposed method. |
https://proceedings.mlr.press/v235/zhang24cp.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cp/zhang24cp.pdf | https://openreview.net/forum?id=4boDu42RtE | Fast Text-to-3D-Aware Face Generation and Manipulation via Direct Cross-modal Mapping and Geometric Regularization | https://proceedings.mlr.press/v235/zhang24cp.html | Jinlu Zhang, Yiyi Zhou, Qiancheng Zheng, Xiaoxiong Du, Gen Luo, Jun Peng, Xiaoshuai Sun, Rongrong Ji | https://proceedings.mlr.press/v235/zhang24cp.html | ICML 2024 | Text-to-3D-aware face (T3D Face) generation and manipulation is an emerging research hot spot in machine learning, which still suffers from low efficiency and poor quality. In this paper, we propose an End-to-End Efficient and Effective network for fast and accurate T3D face generation and manipulation, termed $E^3$-FaceNet. Different from existing complex generation paradigms, $E^3$-FaceNet resorts to a direct mapping from text instructions to 3D-aware visual space. We introduce a novel Style Code Enhancer to enhance cross-modal semantic alignment, alongside an innovative Geometric Regularization objective to maintain consistency across multi-view generations. Extensive experiments on three benchmark datasets demonstrate that $E^3$-FaceNet can not only achieve picture-like 3D face generation and manipulation, but also improve inference speed by orders of magnitude. For instance, compared with Latent3D, $E^3$-FaceNet speeds up the five-view generations by almost 470 times, while still exceeding it in generation quality. Our code is released at https://github.com/Aria-Zhangjl/E3-FaceNet. |
https://proceedings.mlr.press/v235/zhang24cq.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhang24cq/zhang24cq.pdf | https://openreview.net/forum?id=CDnv4vg02f | Accelerating Iterative Retrieval-augmented Language Model Serving with Speculation | https://proceedings.mlr.press/v235/zhang24cq.html | Zhihao Zhang, Alan Zhu, Lijie Yang, Yihua Xu, Lanting Li, Phitchaya Mangpo Phothilimthana, Zhihao Jia | https://proceedings.mlr.press/v235/zhang24cq.html | ICML 2024 | This paper introduces RaLMSpec, a framework that accelerates iterative retrieval-augmented language model (RaLM) serving with speculative retrieval and batched verification. RaLMSpec further introduces several important systems optimizations, including prefetching, an optimal speculation stride scheduler, and asynchronous verification. The combination of these techniques allows RaLMSpec to significantly outperform existing systems. For document-level iterative RaLM serving, evaluation over three LLMs on four QA datasets shows that RaLMSpec improves over existing approaches by $1.75$-$2.39\times$, $1.04$-$1.39\times$, and $1.31$-$1.77\times$ when the retriever is an exact dense retriever, approximate dense retriever, and sparse retriever respectively. For token-level iterative RaLM (KNN-LM) serving, RaLMSpec is up to $7.59\times$ and $2.45\times$ faster than existing methods for exact dense and approximate dense retrievers, respectively. |
https://proceedings.mlr.press/v235/zhao24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24a/zhao24a.pdf | https://openreview.net/forum?id=jsKr6RVDDs | Position: Measure Dataset Diversity, Don’t Just Claim It | https://proceedings.mlr.press/v235/zhao24a.html | Dora Zhao, Jerone Andrews, Orestis Papakyriakopoulos, Alice Xiang | https://proceedings.mlr.press/v235/zhao24a.html | ICML 2024 | Machine learning (ML) datasets, often perceived as neutral, inherently encapsulate abstract and disputed social constructs. Dataset curators frequently employ value-laden terms such as diversity, bias, and quality to characterize datasets. Despite their prevalence, these terms lack clear definitions and validation. Our research explores the implications of this issue by analyzing "diversity" across 135 image and text datasets. Drawing from social sciences, we apply principles from measurement theory to identify considerations and offer recommendations for conceptualizing, operationalizing, and evaluating diversity in datasets. Our findings have broader implications for ML research, advocating for a more nuanced and precise approach to handling value-laden properties in dataset construction. |
https://proceedings.mlr.press/v235/zhao24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24b/zhao24b.pdf | https://openreview.net/forum?id=0AZAjkXhit | Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning | https://proceedings.mlr.press/v235/zhao24b.html | Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion | https://proceedings.mlr.press/v235/zhao24b.html | ICML 2024 | There is a consensus that instruction fine-tuning of LLMs requires high-quality data, but what is it? LIMA (NeurIPS 2023) and AlpaGasus (ICLR 2024) are state-of-the-art methods for selecting such high-quality examples, either via manual curation or using GPT-3.5-Turbo as a quality scorer. We show that the extremely simple baseline of selecting the 1,000 instructions with the longest responses—which intuitively contain more learnable information and are harder to overfit—from standard datasets can consistently outperform these sophisticated methods according to GPT-4 and PaLM-2 as judges, while remaining competitive on the Open LLM benchmarks that test factual knowledge. We demonstrate this for several LLMs (Llama-2-7B, Llama-2-13B, Mistral-7B-v0.1) and datasets (Alpaca-52k, Evol-Instruct-70k). In addition, a lightweight refinement of such long instructions can further improve the abilities of the fine-tuned LLMs, and allows us to obtain competitive results on MT-Bench and the 2nd highest-ranked Llama-2-7B-based model on AlpacaEval 2.0, while training on only 1,000 examples and no extra preference data. We also conduct a thorough analysis of our models to ensure that their enhanced performance is not simply due to GPT-4’s preference for longer responses. Overall, our findings suggest that fine-tuning on the longest responses should be the default baseline for any work on instruction fine-tuning. We provide our code in this GitHub repository. |
https://proceedings.mlr.press/v235/zhao24c.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24c/zhao24c.pdf | https://openreview.net/forum?id=frA0NNBS1n | Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo | https://proceedings.mlr.press/v235/zhao24c.html | Stephen Zhao, Rob Brekelmans, Alireza Makhzani, Roger Baker Grosse | https://proceedings.mlr.press/v235/zhao24c.html | ICML 2024 | Numerous capability and safety techniques of Large Language Models (LLMs), including RLHF, automated red-teaming, prompt engineering, and infilling, can be cast as sampling from an unnormalized target distribution defined by a given reward or potential function over the full sequence. In this work, we leverage the rich toolkit of Sequential Monte Carlo (SMC) for these probabilistic inference problems. In particular, we use learned twist functions to estimate the expected future value of the potential at each timestep, which enables us to focus inference-time computation on promising partial sequences. We propose a novel contrastive method for learning the twist functions, and establish connections with the rich literature of soft reinforcement learning. As a complementary application of our twisted SMC framework, we present methods for evaluating the accuracy of language model inference techniques using novel bidirectional SMC bounds on the log partition function. These bounds can be used to estimate the KL divergence between the inference and target distributions in both directions. We apply our inference evaluation techniques to show that twisted SMC is effective for sampling undesirable outputs from a pretrained model (a useful component of harmlessness training and automated red-teaming), generating reviews with varied sentiment, and performing infilling tasks. |
https://proceedings.mlr.press/v235/zhao24d.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24d/zhao24d.pdf | https://openreview.net/forum?id=eqY64Z1rsT | Image Fusion via Vision-Language Model | https://proceedings.mlr.press/v235/zhao24d.html | Zixiang Zhao, Lilun Deng, Haowen Bai, Yukun Cui, Zhipeng Zhang, Yulun Zhang, Haotong Qin, Dongdong Chen, Jiangshe Zhang, Peng Wang, Luc Van Gool | https://proceedings.mlr.press/v235/zhao24d.html | ICML 2024 | Image fusion integrates essential information from multiple images into a single composite, enhancing structures and textures while refining imperfections. Existing methods predominantly focus on pixel-level and semantic visual features for recognition, but often overlook the deeper text-level semantic information beyond vision. Therefore, we introduce a novel fusion paradigm named image Fusion via vIsion-Language Model (FILM), for the first time, utilizing explicit textual information from source images to guide the fusion process. Specifically, FILM generates semantic prompts from images and inputs them into ChatGPT for comprehensive textual descriptions. These descriptions are fused within the textual domain and guide the visual information fusion, enhancing feature extraction and contextual understanding, directed by textual semantic information via cross-attention. FILM has shown promising results in four image fusion tasks: infrared-visible, medical, multi-exposure, and multi-focus image fusion. We also propose a vision-language dataset containing ChatGPT-generated paragraph descriptions for the eight image fusion datasets across four fusion tasks, facilitating future research in vision-language model-based image fusion. Code and dataset are available at https://github.com/Zhaozixiang1228/IF-FILM. |
https://proceedings.mlr.press/v235/zhao24e.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24e/zhao24e.pdf | https://openreview.net/forum?id=RYmmgedVjR | Learning and Forgetting Unsafe Examples in Large Language Models | https://proceedings.mlr.press/v235/zhao24e.html | Jiachen Zhao, Zhun Deng, David Madras, James Zou, Mengye Ren | https://proceedings.mlr.press/v235/zhao24e.html | ICML 2024 | As the number of large language models (LLMs) released to the public grows, there is a pressing need to understand the safety implications associated with these models learning from third-party custom finetuning data. We explore the behavior of LLMs finetuned on noisy custom data containing unsafe content, represented by datasets that contain biases, toxicity, and harmfulness, finding that while aligned LLMs can readily learn this unsafe content, they also tend to forget it more significantly than other examples when subsequently finetuned on safer content. Drawing inspiration from the discrepancies in forgetting, we introduce the "ForgetFilter" algorithm, which filters unsafe data based on how strong the model’s forgetting signal is for that data. We demonstrate that the ForgetFilter algorithm ensures safety in customized finetuning without compromising downstream task performance, unlike sequential safety finetuning. ForgetFilter outperforms alternative strategies like replay and moral self-correction in curbing LLMs’ ability to assimilate unsafe content during custom finetuning, e.g. 75% lower than not applying any safety measures and 62% lower than using self-correction in toxicity score. |
https://proceedings.mlr.press/v235/zhao24f.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24f/zhao24f.pdf | https://openreview.net/forum?id=oBP8vXFJNQ | VideoPrism: A Foundational Visual Encoder for Video Understanding | https://proceedings.mlr.press/v235/zhao24f.html | Long Zhao, Nitesh Bharadwaj Gundavarapu, Liangzhe Yuan, Hao Zhou, Shen Yan, Jennifer J. Sun, Luke Friedman, Rui Qian, Tobias Weyand, Yue Zhao, Rachel Hornung, Florian Schroff, Ming-Hsuan Yang, David A Ross, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Ting Liu, Boqing Gong | https://proceedings.mlr.press/v235/zhao24f.html | ICML 2024 | We introduce VideoPrism, a general-purpose video encoder that tackles diverse video understanding tasks with a single frozen model. We pretrain VideoPrism on a heterogeneous corpus containing 36M high-quality video-caption pairs and 582M video clips with noisy parallel text (e.g., ASR transcripts). The pretraining approach improves upon masked autoencoding by global-local distillation of semantic video embeddings and a token shuffling scheme, enabling VideoPrism to focus primarily on the video modality while leveraging the invaluable text associated with videos. We extensively test VideoPrism on four broad groups of video understanding tasks, from web video question answering to CV for science, achieving state-of-the-art performance on 31 out of 33 video understanding benchmarks. |
https://proceedings.mlr.press/v235/zhao24g.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24g/zhao24g.pdf | https://openreview.net/forum?id=sb81Xl50JG | APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference | https://proceedings.mlr.press/v235/zhao24g.html | Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao | https://proceedings.mlr.press/v235/zhao24g.html | ICML 2024 | Fine-tuning and inference with large language models (LMs) are generally known to be expensive. Parameter-efficient fine-tuning over pretrained LMs reduces training memory by updating a small number of LM parameters but does not improve inference efficiency. Structured pruning improves LM inference efficiency by removing consistent parameter blocks, yet often increases training memory and time. To improve both training and inference efficiency, we introduce APT, which adaptively prunes and tunes parameters for the LMs. At the early stage of fine-tuning, APT dynamically adds salient tuning parameters for fast and accurate convergence while discarding unimportant parameters for efficiency. Compared to baselines, our experiments show that APT maintains up to 98% task performance when pruning RoBERTa and T5 models with 40% parameters left while keeping 86.4% of LLaMA models’ performance with 70% parameters remaining. Furthermore, APT speeds up LMs’ fine-tuning by up to 8$\times$ and reduces large LMs’ training memory footprint by up to 70%. Our code and models are publicly available at https://github.com/ROIM1998/APT. |
https://proceedings.mlr.press/v235/zhao24h.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24h/zhao24h.pdf | https://openreview.net/forum?id=pSnhA7Em1P | Subgoal-based Demonstration Learning for Formal Theorem Proving | https://proceedings.mlr.press/v235/zhao24h.html | Xueliang Zhao, Wenda Li, Lingpeng Kong | https://proceedings.mlr.press/v235/zhao24h.html | ICML 2024 | Large language models (LLMs) present a promising pathway for advancing the domain of formal theorem proving. In this paper, we aim to improve the performance of LLMs in formal theorem proving by thoroughly examining the structure and organization of demonstrative in-context examples. We introduce a subgoal-based demonstration learning framework, specifically designed to enhance the efficiency of proof search in LLMs. First, drawing upon the insights of subgoal learning from reinforcement learning and robotics, we propose the construction of distinct subgoals for each demonstration example and refine these subgoals in accordance with the pertinent theories of subgoal learning. Second, we build upon recent advances in diffusion models to predict the optimal organization, simultaneously addressing two intricate issues that persist within the domain of demonstration organization: subset selection and order determination. Our integration of subgoal-based learning has notably increased proof accuracy from 38.9% to 44.1% on the miniF2F benchmark. Furthermore, the adoption of diffusion models for demonstration organization can lead to an additional enhancement in accuracy to 45.5%, or a $5\times$ improvement in sampling efficiency compared to previously established methods. |
https://proceedings.mlr.press/v235/zhao24i.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24i/zhao24i.pdf | https://openreview.net/forum?id=Ss3h1ixJAU | Absolute Policy Optimization: Enhancing Lower Probability Bound of Performance with High Confidence | https://proceedings.mlr.press/v235/zhao24i.html | Weiye Zhao, Feihan Li, Yifan Sun, Rui Chen, Tianhao Wei, Changliu Liu | https://proceedings.mlr.press/v235/zhao24i.html | ICML 2024 | In recent years, trust region on-policy reinforcement learning has achieved impressive results in addressing complex control tasks and gaming scenarios. However, contemporary state-of-the-art algorithms within this category primarily emphasize improvement in expected performance, lacking control over worst-case performance outcomes. To address this limitation, we introduce a novel objective function, optimizing which leads to guaranteed monotonic improvement in the lower probability bound of performance with high confidence. Building upon this groundbreaking theoretical advancement, we further introduce a practical solution called Absolute Policy Optimization (APO). Our experiments demonstrate the effectiveness of our approach across challenging continuous control benchmark tasks and extend its applicability to mastering Atari games. Our findings reveal that APO as well as its efficient variation Proximal Absolute Policy Optimization (PAPO) significantly outperforms state-of-the-art policy gradient algorithms, resulting in substantial improvements in worst-case performance, as well as expected performance. |
https://proceedings.mlr.press/v235/zhao24j.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24j/zhao24j.pdf | https://openreview.net/forum?id=mWV8NeU79e | Spider: A Unified Framework for Context-dependent Concept Segmentation | https://proceedings.mlr.press/v235/zhao24j.html | Xiaoqi Zhao, Youwei Pang, Wei Ji, Baicheng Sheng, Jiaming Zuo, Lihe Zhang, Huchuan Lu | https://proceedings.mlr.press/v235/zhao24j.html | ICML 2024 | Different from context-independent (CI) concepts such as human, car, and airplane, context-dependent (CD) concepts require higher visual understanding ability, such as camouflaged object and medical lesion. Despite the rapid advance of many CD understanding tasks in their respective branches, the isolated evolution leads to their limited cross-domain generalisation and repetitive technique innovation. Since there is a strong coupling relationship between foreground and background context in CD tasks, existing methods must train separate models in their focused domains. This restricts their real-world CD concept understanding towards artificial general intelligence (AGI). We propose a unified model with a single set of parameters, Spider, which only needs to be trained once. With the help of the proposed concept filter driven by the image-mask group prompt, Spider is able to understand and distinguish diverse strong context-dependent concepts to accurately capture the Prompter’s intention. Without bells and whistles, Spider significantly outperforms the state-of-the-art specialized models in 8 different context-dependent segmentation tasks, including 4 natural scenes (salient, camouflaged, and transparent objects and shadow) and 4 medical lesions (COVID-19, polyp, breast, and skin lesion with color colonoscopy, CT, ultrasound, and dermoscopy modalities). Besides, Spider shows obvious advantages in continual learning. It can easily complete the training of new tasks by fine-tuning fewer than 1% of its parameters and brings a tolerable performance degradation of less than 5% for all old tasks. The source code will be publicly available at https://github.com/Xiaoqi-Zhao-DLUT/Spider-UniCDSeg. |
https://proceedings.mlr.press/v235/zhao24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24k/zhao24k.pdf
|
https://openreview.net/forum?id=tmUorldOWN
|
Rethinking Adversarial Robustness in the Context of the Right to be Forgotten
|
https://proceedings.mlr.press/v235/zhao24k.html
|
Chenxu Zhao, Wei Qian, Yangyi Li, Aobo Chen, Mengdi Huai
|
https://proceedings.mlr.press/v235/zhao24k.html
|
ICML 2024
|
The past few years have seen an intense research interest in the practical needs of the "right to be forgotten", which has motivated researchers to develop machine unlearning methods to unlearn a fraction of training data and its lineage. While existing machine unlearning methods prioritize the protection of individuals’ private data, they overlook investigating the unlearned models’ susceptibility to adversarial attacks and security breaches. In this work, we uncover a novel security vulnerability of machine unlearning based on the insight that adversarial vulnerabilities can be bolstered, especially for adversarially robust models. To exploit this observed vulnerability, we propose a novel attack called Adversarial Unlearning Attack (AdvUA), which aims to generate a small fraction of malicious unlearning requests during the unlearning process. AdvUA causes a significant reduction of adversarial robustness in the unlearned model compared to the original model, providing an entirely new capability for adversaries that is infeasible in conventional machine learning pipelines. Notably, we also show that AdvUA can effectively enhance model stealing attacks by extracting additional decision boundary information, further emphasizing the breadth and significance of our research. We also provide a theoretical analysis and a computational complexity study of AdvUA. Extensive numerical studies are performed to demonstrate the effectiveness and efficiency of the proposed attack.
|
https://proceedings.mlr.press/v235/zhao24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24l/zhao24l.pdf
|
https://openreview.net/forum?id=50vc4HBuKU
|
Quantum Implicit Neural Representations
|
https://proceedings.mlr.press/v235/zhao24l.html
|
Jiaming Zhao, Wenbo Qiao, Peng Zhang, Hui Gao
|
https://proceedings.mlr.press/v235/zhao24l.html
|
ICML 2024
|
Implicit neural representations have emerged as a powerful paradigm to represent signals such as images and sounds. This approach aims to utilize neural networks to parameterize the implicit function of the signal. However, when representing implicit functions, traditional neural networks such as ReLU-based multilayer perceptrons face challenges in accurately modeling high-frequency components of signals. Recent research has begun to explore the use of Fourier Neural Networks (FNNs) to overcome this limitation. In this paper, we propose the Quantum Implicit Representation Network (QIREN), a novel quantum generalization of FNNs. Furthermore, through theoretical analysis, we demonstrate that QIREN possesses a quantum advantage over classical FNNs. Lastly, we conduct experiments in signal representation, image super-resolution, and image generation tasks to show the superior performance of QIREN compared to state-of-the-art (SOTA) models. Our work not only incorporates quantum advantages into implicit neural representations but also uncovers a promising application direction for Quantum Neural Networks.
|
https://proceedings.mlr.press/v235/zhao24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24m/zhao24m.pdf
|
https://openreview.net/forum?id=6dKUu2EkZy
|
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective
|
https://proceedings.mlr.press/v235/zhao24m.html
|
Lei Zhao, Mengdi Wang, Yu Bai
|
https://proceedings.mlr.press/v235/zhao24m.html
|
ICML 2024
|
Inverse Reinforcement Learning (IRL)—the problem of learning reward functions from demonstrations of an expert policy—plays a critical role in developing intelligent systems. While widely used in applications, theoretical understandings of IRL present unique challenges and remain less developed compared with standard RL. For example, it remains open how to do IRL efficiently in standard offline settings with pre-collected data, where states are obtained from a behavior policy (which could be the expert policy itself), and actions are sampled from the expert policy. This paper provides the first line of results for efficient IRL in vanilla offline and online settings using polynomial samples and runtime. Our algorithms and analyses seamlessly adapt the pessimism principle commonly used in offline RL, and achieve IRL guarantees in stronger metrics than considered in existing work. We provide lower bounds showing that our sample complexities are nearly optimal. As an application, we also show that the learned rewards can transfer to another target MDP with suitable guarantees when the target MDP satisfies certain similarity assumptions with the original (source) MDP.
|
https://proceedings.mlr.press/v235/zhao24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24n/zhao24n.pdf
|
https://openreview.net/forum?id=A54CXWn9VB
|
A Statistical Theory of Regularization-Based Continual Learning
|
https://proceedings.mlr.press/v235/zhao24n.html
|
Xuyang Zhao, Huiyuan Wang, Weiran Huang, Wei Lin
|
https://proceedings.mlr.press/v235/zhao24n.html
|
ICML 2024
|
We provide a statistical analysis of regularization-based continual learning on a sequence of linear regression tasks, with emphasis on how different regularization terms affect the model performance. We first derive the convergence rate for the oracle estimator obtained as if all data were available simultaneously. Next, we consider a family of generalized $\ell_2$-regularization algorithms indexed by matrix-valued hyperparameters, which includes the minimum norm estimator and continual ridge regression as special cases. As more tasks are introduced, we derive an iterative update formula for the estimation error of generalized $\ell_2$-regularized estimators, from which we determine the hyperparameters resulting in the optimal algorithm. Interestingly, the choice of hyperparameters can effectively balance the trade-off between forward and backward knowledge transfer and adjust for data heterogeneity. Moreover, the estimation error of the optimal algorithm is derived explicitly, which is of the same order as that of the oracle estimator. In contrast, our lower bounds for the minimum norm estimator and continual ridge regression show their suboptimality. A byproduct of our theoretical analysis is the equivalence between early stopping and generalized $\ell_2$-regularization in continual learning, which may be of independent interest. Finally, we conduct experiments to complement our theory.
|
https://proceedings.mlr.press/v235/zhao24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24o/zhao24o.pdf
|
https://openreview.net/forum?id=MEZydkOr3l
|
Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning
|
https://proceedings.mlr.press/v235/zhao24o.html
|
Zhe Zhao, Pengkun Wang, Haibin Wen, Wei Xu, Lai Song, Qingfu Zhang, Yang Wang
|
https://proceedings.mlr.press/v235/zhao24o.html
|
ICML 2024
|
Real-world data generally follows a long-tailed distribution, which prevents traditional high-performance training strategies from achieving their usual effectiveness. Various insights have been proposed to alleviate the challenges of this distribution. However, some observations indicate that models trained on long-tailed distributions always show a trade-off between the performance of head and tail classes. For a profound understanding of the trade-off, we first theoretically analyze the trade-off problem in long-tailed learning and creatively transform it into a multi-objective optimization (MOO) problem. Motivated by these analyses, we propose the idea of strategy fusion for MOO long-tailed learning and point out the potential conflict problem. We further design a Multi-Objective Optimization based Strategy Fusion (MOOSF), which effectively resolves conflicts and achieves an efficient fusion of heterogeneous strategies. Comprehensive experiments on mainstream datasets show that even the simplest strategy fusion can outperform complex long-tailed strategies. More importantly, it provides a new perspective for generalized long-tailed learning. The code is available in the accompanying supplementary materials.
|
https://proceedings.mlr.press/v235/zhao24p.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24p/zhao24p.pdf
|
https://openreview.net/forum?id=WT4X3QYopC
|
Gradient-based Visual Explanation for Transformer-based CLIP
|
https://proceedings.mlr.press/v235/zhao24p.html
|
Chenyang Zhao, Kun Wang, Xingyu Zeng, Rui Zhao, Antoni B. Chan
|
https://proceedings.mlr.press/v235/zhao24p.html
|
ICML 2024
|
Significant progress has been achieved on the improvement and downstream usages of the Contrastive Language-Image Pre-training (CLIP) vision-language model, while less attention is paid to the interpretation of CLIP. We propose a Gradient-based visual Explanation method for CLIP (Grad-ECLIP), which interprets the matching result of CLIP for a specific input image-text pair. By decomposing the architecture of the encoder and discovering the relationship between the matching similarity and intermediate spatial features, Grad-ECLIP produces effective heat maps that show the influence of image regions or words on the CLIP results. Different from previous Transformer interpretation methods that focus on the utilization of self-attention maps, which are typically extremely sparse in CLIP, we produce high-quality visual explanations by applying channel and spatial weights on token features. Qualitative and quantitative evaluations verify the superiority of Grad-ECLIP compared with the state-of-the-art methods. A series of analyses is conducted based on our visual explanation results, from which we explore the working mechanism of image-text matching, and the strengths and limitations of CLIP in attribution identification. Codes are available here: https://github.com/Cyang-Zhao/Grad-Eclip.
|
https://proceedings.mlr.press/v235/zhao24q.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24q/zhao24q.pdf
|
https://openreview.net/forum?id=wGtzp4ZT1n
|
CompeteAI: Understanding the Competition Dynamics of Large Language Model-based Agents
|
https://proceedings.mlr.press/v235/zhao24q.html
|
Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie
|
https://proceedings.mlr.press/v235/zhao24q.html
|
ICML 2024
|
Large language models (LLMs) have been widely used as agents to complete different tasks, such as personal assistance or event planning. Although most of the work has focused on cooperation and collaboration between agents, little work explores competition, another important mechanism that promotes the development of society and economy. In this paper, we seek to examine the competition dynamics in LLM-based agents. We first propose a general framework for studying the competition between agents. Then, we implement a practical competitive environment using GPT-4 to simulate a virtual town with two types of agents, including restaurant agents and customer agents. Specifically, the restaurant agents compete with each other to attract more customers, where competition encourages them to transform, such as cultivating new operating strategies. Simulation experiments reveal several interesting findings at the micro and macro levels, which align well with existing market and sociological theories. We hope that the framework and environment can be a promising testbed to study the competition that fosters understanding of society. Code is available at: https://github.com/microsoft/competeai.
|
https://proceedings.mlr.press/v235/zhao24r.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24r/zhao24r.pdf
|
https://openreview.net/forum?id=1SiEfsCecd
|
Defense against Backdoor Attack on Pre-trained Language Models via Head Pruning and Attention Normalization
|
https://proceedings.mlr.press/v235/zhao24r.html
|
Xingyi Zhao, Depeng Xu, Shuhan Yuan
|
https://proceedings.mlr.press/v235/zhao24r.html
|
ICML 2024
|
Pre-trained language models (PLMs) are commonly used for various downstream natural language processing tasks via fine-tuning. However, recent studies have demonstrated that PLMs are vulnerable to backdoor attacks, which can mislabel poisoned samples to target outputs even after a vanilla fine-tuning process. The key challenge for defending against the backdoored PLMs is that end users who adopt the PLMs for their downstream tasks usually do not have any knowledge about the attacking strategies, such as triggers. To tackle this challenge, in this work, we propose a backdoor mitigation approach, PURE, via head pruning and normalization of attention weights. The idea is to prune the attention heads that are potentially affected by poisoned texts with only clean texts on hand and then further normalize the weights of remaining attention heads to mitigate the backdoor impacts. We conduct experiments to defend against various backdoor attacks on the classification task. The experimental results show the effectiveness of PURE in lowering the attack success rate without sacrificing the performance on clean texts.
|
https://proceedings.mlr.press/v235/zhao24s.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24s/zhao24s.pdf
|
https://openreview.net/forum?id=hYHsrKDiX7
|
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
|
https://proceedings.mlr.press/v235/zhao24s.html
|
Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian
|
https://proceedings.mlr.press/v235/zhao24s.html
|
ICML 2024
|
Training Large Language Models (LLMs) presents significant memory challenges, predominantly due to the growing size of weights and optimizer states. Common memory-reduction approaches, such as low-rank adaptation (LoRA), add a trainable low-rank matrix to the frozen pre-trained weight in each layer, reducing trainable parameters and optimizer states. However, such approaches typically underperform training with full-rank weights in both pre-training and fine-tuning stages, since they limit the parameter search to a low-rank subspace and alter the training dynamics and, further, may require a full-rank warm start. In this work, we propose Gradient Low-Rank Projection (GaLore), a training strategy that allows full-parameter learning but is more memory-efficient than common low-rank adaptation methods such as LoRA. Our approach reduces memory usage by up to 65.5% in optimizer states while maintaining both efficiency and performance for pre-training on LLaMA 1B and 7B architectures with the C4 dataset with up to 19.7B tokens, and on fine-tuning RoBERTa on GLUE tasks. Our 8-bit GaLore further reduces optimizer memory by up to 82.5% and total training memory by 63.3%, compared to a BF16 baseline. Notably, we demonstrate, for the first time, the feasibility of pre-training a 7B model on consumer GPUs with 24GB memory (e.g., NVIDIA RTX 4090) without model parallel, checkpointing, or offloading strategies.
|
https://proceedings.mlr.press/v235/zhao24t.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24t/zhao24t.pdf
|
https://openreview.net/forum?id=60vC1FY0dZ
|
When Will Gradient Regularization Be Harmful?
|
https://proceedings.mlr.press/v235/zhao24t.html
|
Yang Zhao, Hao Zhang, Xiuyuan Hu
|
https://proceedings.mlr.press/v235/zhao24t.html
|
ICML 2024
|
Gradient regularization (GR), which aims to penalize the gradient norm atop the loss function, has shown promising results in training modern over-parameterized deep neural networks. However, can we trust this powerful technique? This paper reveals that GR can cause performance degeneration in adaptive optimization scenarios, particularly with learning rate warmup. Our empirical and theoretical analyses suggest this is due to GR inducing instability and divergence in the gradient statistics of adaptive optimizers at the initial training stage. Inspired by the warmup heuristic, we propose three GR warmup strategies, each relaxing the regularization effect to a certain extent during the warmup course to ensure the accurate and stable accumulation of gradients. With experiments on the Vision Transformer family, we confirm that the three GR warmup strategies can effectively circumvent these issues, thereby largely improving model performance. Meanwhile, we note that scalable models tend to rely more on GR warmup, where performance can be improved by up to 3% on Cifar10 compared to baseline GR. Code is available at https://github.com/zhaoyang-0204/gnp.
|
https://proceedings.mlr.press/v235/zhao24u.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24u/zhao24u.pdf
|
https://openreview.net/forum?id=GcZjpKA37R
|
LangCell: Language-Cell Pre-training for Cell Identity Understanding
|
https://proceedings.mlr.press/v235/zhao24u.html
|
Suyuan Zhao, Jiahuan Zhang, Yushuai Wu, Yizhen Luo, Zaiqing Nie
|
https://proceedings.mlr.press/v235/zhao24u.html
|
ICML 2024
|
Cell identity encompasses various semantic aspects of a cell, including cell type, pathway information, disease information, and more, which are essential for biologists to gain insights into its biological characteristics. Understanding cell identity from transcriptomic data, such as annotating cell types, has become an important task in bioinformatics. As these semantic aspects are determined by human experts, it is impossible for AI models to effectively carry out cell identity understanding tasks without the supervision signals provided by single-cell and label pairs. The single-cell pre-trained language models (PLMs) currently used for this task are trained only on a single modality, transcriptomics data, and lack an understanding of cell identity knowledge. As a result, they have to be fine-tuned for downstream tasks and struggle when lacking labeled data with the desired semantic labels. To address this issue, we propose an innovative solution by constructing a unified representation of single-cell data and natural language during the pre-training phase, allowing the model to directly incorporate insights related to cell identity. More specifically, we introduce LangCell, the first Language-Cell pre-training framework. LangCell utilizes texts enriched with cell identity information to gain a profound comprehension of cross-modal knowledge. Results from experiments conducted on different benchmarks show that LangCell is the only single-cell PLM that can work effectively in zero-shot cell identity understanding scenarios, and also significantly outperforms existing models in few-shot and fine-tuning cell identity understanding scenarios.
|
https://proceedings.mlr.press/v235/zhao24v.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhao24v/zhao24v.pdf
|
https://openreview.net/forum?id=MWTicAxmRP
|
Optimistic Multi-Agent Policy Gradient
|
https://proceedings.mlr.press/v235/zhao24v.html
|
Wenshuai Zhao, Yi Zhao, Zhiyuan Li, Juho Kannala, Joni Pajarinen
|
https://proceedings.mlr.press/v235/zhao24v.html
|
ICML 2024
|
Relative overgeneralization (RO) occurs in cooperative multi-agent learning tasks when agents converge towards a suboptimal joint policy due to overfitting to suboptimal behaviors of other agents. No methods have been proposed for addressing RO in multi-agent policy gradient (MAPG) methods although these methods produce state-of-the-art results. To address this gap, we propose a general, yet simple, framework to enable optimistic updates in MAPG methods that alleviate the RO problem. Our approach involves clipping the advantage to eliminate negative values, thereby facilitating optimistic updates in MAPG. The optimism prevents individual agents from quickly converging to a local optimum. Additionally, we provide a formal analysis to show that the proposed method retains optimality at a fixed point. In extensive evaluations on a diverse set of tasks including the Multi-agent MuJoCo and Overcooked benchmarks, our method outperforms strong baselines on 13 out of 19 tested tasks and matches the performance on the rest.
|
https://proceedings.mlr.press/v235/zhdanov24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhdanov24a/zhdanov24a.pdf
|
https://openreview.net/forum?id=XTglHJjzQI
|
Clifford-Steerable Convolutional Neural Networks
|
https://proceedings.mlr.press/v235/zhdanov24a.html
|
Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré
|
https://proceedings.mlr.press/v235/zhdanov24a.html
|
ICML 2024
|
We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of ${\operatorname{E}}(p, q)$-equivariant CNNs. CS-CNNs process multivector fields on pseudo-Euclidean spaces $\mathbb{R}^{p,q}$. They specialize, for instance, to ${\operatorname{E}}(3)$-equivariance on $\mathbb{R}^3$ and Poincaré-equivariance on Minkowski spacetime $\mathbb{R}^{1,3}$. Our approach is based on an implicit parametrization of ${\operatorname{O}}(p,q)$-steerable kernels via Clifford group equivariant neural networks. We significantly and consistently outperform baseline methods on fluid dynamics as well as relativistic electrodynamics forecasting tasks.
|
https://proceedings.mlr.press/v235/zhen24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhen24a/zhen24a.pdf
|
https://openreview.net/forum?id=EZcFK8HupF
|
3D-VLA: A 3D Vision-Language-Action Generative World Model
|
https://proceedings.mlr.press/v235/zhen24a.html
|
Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, Chuang Gan
|
https://proceedings.mlr.press/v235/zhen24a.html
|
ICML 2024
|
Recent vision-language-action (VLA) models rely on 2D inputs, lacking integration with the broader realm of the 3D physical world. Furthermore, they perform action prediction by learning a direct mapping from perception to action, neglecting the vast dynamics of the world and the relations between actions and dynamics. In contrast, human beings are endowed with world models that depict imagination about future scenarios to plan actions accordingly. To this end, we propose 3D-VLA, introducing a new family of embodied foundation models that seamlessly link 3D perception, reasoning, and action through a generative world model. Specifically, 3D-VLA is built on top of a 3D-based large language model (LLM), and a set of action tokens is introduced to engage with the embodied environment. Furthermore, to inject generation abilities into the model, we train the embodied diffusion models and align them into the LLM for predicting the goal image and point cloud. To train our 3D-VLA, we curate a large-scale 3D embodied instruction dataset by extracting vast 3D-related information from existing robotics datasets. Our experiments on held-in datasets demonstrate that 3D-VLA significantly improves the reasoning, multimodal generation, and planning capabilities in embodied environments, showcasing its potential in real-world applications.
|
https://proceedings.mlr.press/v235/zheng24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24a/zheng24a.pdf
|
https://openreview.net/forum?id=99UFZV2VpU
|
How Does Goal Relabeling Improve Sample Efficiency?
|
https://proceedings.mlr.press/v235/zheng24a.html
|
Sirui Zheng, Chenjia Bai, Zhuoran Yang, Zhaoran Wang
|
https://proceedings.mlr.press/v235/zheng24a.html
|
ICML 2024
|
Hindsight experience replay and goal relabeling are successful in reinforcement learning (RL) since they enable agents to learn from failures. Despite their successes, we lack a theoretical understanding, such as (i) why hindsight experience replay improves sample efficiency and (ii) how to design a relabeling method that achieves sample efficiency. To this end, we construct an example to show the information-theoretical improvement in sample efficiency achieved by goal relabeling. Our example reveals that goal relabeling can enhance sample efficiency and exploit the rich information in observations through better hypothesis elimination. Based on these insights, we develop an RL algorithm called GOALIVE. To analyze the sample complexity of GOALIVE, we introduce a complexity measure, the goal-conditioned Bellman-Eluder (GOAL-BE) dimension, which characterizes the sample complexity of goal-conditioned RL problems. Compared to the Bellman-Eluder dimension, the goal-conditioned version offers an exponential improvement in the best case. To the best of our knowledge, our work provides the first characterization of the theoretical improvement in sample efficiency achieved by goal relabeling.
|
https://proceedings.mlr.press/v235/zheng24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24b/zheng24b.pdf
|
https://openreview.net/forum?id=p225Od0aYt
|
PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control
|
https://proceedings.mlr.press/v235/zheng24b.html
|
Ruijie Zheng, Ching-An Cheng, Hal Daumé III, Furong Huang, Andrey Kolobov
|
https://proceedings.mlr.press/v235/zheng24b.html
|
ICML 2024
|
Temporal action abstractions, along with belief state representations, are a powerful knowledge sharing mechanism for sequential decision making. In this work, we propose a novel view that treats inducing temporal action abstractions as a sequence compression problem. To do so, we bring a subtle but critical component of LLM training pipelines – input tokenization via byte pair encoding (BPE) – to bear on the seemingly distant task of learning skills of variable time span in continuous control domains. We introduce an approach called Primitive Sequence Encoding (PRISE) that combines continuous action quantization with BPE to learn powerful action abstractions. We empirically show that high-level skills discovered by PRISE from a multitask set of robotic manipulation demonstrations significantly boost the learning performance of behavior cloning on downstream tasks.
|
https://proceedings.mlr.press/v235/zheng24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24c/zheng24c.pdf
|
https://openreview.net/forum?id=k1J2GbamLi
|
Exploiting Negative Samples: A Catalyst for Cohort Discovery in Healthcare Analytics
|
https://proceedings.mlr.press/v235/zheng24c.html
|
Kaiping Zheng, Horng-Ruey Chua, Melanie Herschel, H. V. Jagadish, Beng Chin Ooi, James Wei Luen Yip
|
https://proceedings.mlr.press/v235/zheng24c.html
|
ICML 2024
|
In healthcare analytics, addressing binary diagnosis or prognosis tasks presents unique challenges due to the inherent asymmetry between positive and negative samples. While positive samples, indicating patients with a disease, are defined based on stringent medical criteria, negative samples are defined in an open-ended manner and remain underexplored in prior research. To bridge this gap, we propose an innovative approach to facilitate cohort discovery within negative samples, leveraging a Shapley-based exploration of interrelationships between these samples, which holds promise for uncovering valuable insights concerning the studied disease, and related comorbidity and complications. We quantify each sample’s contribution using data Shapley values, subsequently constructing the Negative Sample Shapley Field to model the distribution of all negative samples. Next, we transform this field through manifold learning, preserving the essential data structure information while imposing an isotropy constraint in data Shapley values. Within this transformed space, we pinpoint cohorts of medical interest via density-based clustering. We empirically evaluate the effectiveness of our approach on the real-world electronic medical records from National University Hospital in Singapore, yielding clinically valuable insights aligned with existing knowledge, and benefiting medical research and clinical decision-making.
|
https://proceedings.mlr.press/v235/zheng24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24d/zheng24d.pdf
|
https://openreview.net/forum?id=DCmahCZJYb
|
Constrained Exploration via Reflected Replica Exchange Stochastic Gradient Langevin Dynamics
|
https://proceedings.mlr.press/v235/zheng24d.html
|
Haoyang Zheng, Hengrong Du, Qi Feng, Wei Deng, Guang Lin
|
https://proceedings.mlr.press/v235/zheng24d.html
|
ICML 2024
|
Replica exchange stochastic gradient Langevin dynamics (reSGLD) is an effective sampler for non-convex learning in large-scale datasets. However, the simulation may encounter stagnation issues when the high-temperature chain delves too deeply into the distribution tails. To tackle this issue, we propose reflected reSGLD (r2SGLD): an algorithm tailored for constrained non-convex exploration by utilizing reflection steps within a bounded domain. Theoretically, we observe that reducing the diameter of the domain enhances mixing rates, exhibiting a quadratic behavior. Empirically, we test its performance through extensive experiments, including identifying dynamical systems with physical constraints, simulations of constrained multi-modal distributions, and image classification tasks. The theoretical and empirical findings highlight the crucial role of constrained exploration in improving the simulation efficiency.
|
https://proceedings.mlr.press/v235/zheng24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24e/zheng24e.pdf
|
https://openreview.net/forum?id=piecKJ2DlB
|
GPT-4V(ision) is a Generalist Web Agent, if Grounded
|
https://proceedings.mlr.press/v235/zheng24e.html
|
Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, Yu Su
|
https://proceedings.mlr.press/v235/zheng24e.html
|
ICML 2024
|
The recent development of large multimodal models (LMMs), especially GPT-4V(ision) and Gemini, has been quickly expanding the capability boundaries of multimodal models beyond traditional tasks like image captioning and visual question answering. In this work, we explore the potential of LMMs like GPT-4V as a generalist web agent that can follow natural language instructions to complete tasks on any given website. We propose SEEACT, a generalist web agent that harnesses the power of LMMs for integrated visual understanding and acting on the web. We evaluate SEEACT on the recent MIND2WEB benchmark. In addition to standard offline evaluation on cached websites, we enable a new online evaluation setting by developing a tool that allows running web agents on live websites. We show that GPT-4V presents great potential for web agents—it can successfully complete 51.1% of the tasks on live websites if we manually ground its textual plans into actions on the websites. This substantially outperforms text-only LLMs like GPT-4 or smaller models (FLAN-T5 and BLIP-2) specifically fine-tuned for web agents. However, grounding still remains a major challenge. Existing LMM grounding strategies like set-of-mark prompting turn out not to be effective for web agents, and the best grounding strategy we develop in this paper leverages both the HTML structure and visuals. Yet, there is still a substantial gap with oracle grounding, leaving ample room for further improvement. All code, data, and evaluation tools are available at https://github.com/OSU-NLP-Group/SeeAct.
|
https://proceedings.mlr.press/v235/zheng24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24f/zheng24f.pdf
|
https://openreview.net/forum?id=eOtjMYdGLt
|
Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale
|
https://proceedings.mlr.press/v235/zheng24f.html
|
Candi Zheng, Yuan Lan
|
https://proceedings.mlr.press/v235/zheng24f.html
|
ICML 2024
|
Popular guidance for denoising diffusion probabilistic models (DDPMs) linearly combines distinct conditional models together to provide enhanced control over samples. However, this approach overlooks nonlinear effects that become significant when the guidance scale is large. To address this issue, we propose characteristic guidance, a guidance method that provides first-principle non-linear correction for classifier-free guidance. Such correction forces the guided DDPMs to respect the Fokker-Planck (FP) equation of the diffusion process, in a way that is training-free and compatible with existing sampling methods. Experiments show that characteristic guidance enhances semantic characteristics of prompts and mitigates irregularities in image generation, proving effective in diverse applications ranging from simulating magnet phase transitions to latent space sampling.
|
https://proceedings.mlr.press/v235/zheng24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24g/zheng24g.pdf
|
https://openreview.net/forum?id=KSNl7VgeVr
|
Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss
|
https://proceedings.mlr.press/v235/zheng24g.html
|
Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Shuang Ma, Hal Daumé Iii, Huazhe Xu, John Langford, Praveen Palanisamy, Kalyan Shankar Basu, Furong Huang
|
https://proceedings.mlr.press/v235/zheng24g.html
|
ICML 2024
|
We present Premier-TACO, a multitask feature representation learning approach designed to improve few-shot policy learning efficiency in sequential decision-making tasks. Premier-TACO leverages a subset of multitask offline datasets for pretraining a general feature representation, which captures critical environmental dynamics and is fine-tuned using minimal expert demonstrations. It advances the temporal action contrastive learning (TACO) objective, known for state-of-the-art results in visual control tasks, by incorporating a novel negative example sampling strategy. This strategy is crucial in significantly boosting TACO’s computational efficiency, making large-scale multitask offline pretraining feasible. Our extensive empirical evaluation in a diverse set of continuous control benchmarks including DeepMind Control Suite, MetaWorld, and LIBERO demonstrates Premier-TACO’s effectiveness in pretraining visual representations, significantly enhancing few-shot imitation learning of novel tasks.
|
https://proceedings.mlr.press/v235/zheng24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24h/zheng24h.pdf
|
https://openreview.net/forum?id=283cGgWfM2
|
ESM All-Atom: Multi-Scale Protein Language Model for Unified Molecular Modeling
|
https://proceedings.mlr.press/v235/zheng24h.html
|
Kangjie Zheng, Siyu Long, Tianyu Lu, Junwei Yang, Xinyu Dai, Ming Zhang, Zaiqing Nie, Wei-Ying Ma, Hao Zhou
|
https://proceedings.mlr.press/v235/zheng24h.html
|
ICML 2024
|
Protein language models have demonstrated significant potential in the field of protein engineering. However, current protein language models primarily operate at the residue scale, which limits their ability to provide information at the atom level. This limitation prevents us from fully exploiting the capabilities of protein language models for applications involving both proteins and small molecules. In this paper, we propose ESM-AA (ESM All-Atom), a novel approach that enables atom-scale and residue-scale unified molecular modeling. ESM-AA achieves this by pre-training on multi-scale code-switch protein sequences and utilizing a multi-scale position encoding to capture relationships among residues and atoms. Experimental results indicate that ESM-AA surpasses previous methods in protein-molecule tasks, demonstrating the full utilization of protein language models. Further investigations reveal that through unified molecular modeling, ESM-AA not only gains molecular knowledge but also retains its understanding of proteins.
|
https://proceedings.mlr.press/v235/zheng24i.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24i/zheng24i.pdf
|
https://openreview.net/forum?id=kao5hRX9YA
|
BAT: Learning to Reason about Spatial Sounds with Large Language Models
|
https://proceedings.mlr.press/v235/zheng24i.html
|
Zhisheng Zheng, Puyuan Peng, Ziyang Ma, Xie Chen, Eunsol Choi, David Harwath
|
https://proceedings.mlr.press/v235/zheng24i.html
|
ICML 2024
|
Spatial sound reasoning is a fundamental human skill, enabling us to navigate and interpret our surroundings based on sound. In this paper we present BAT, which combines the spatial sound perception ability of a binaural acoustic scene analysis model with the natural language reasoning capabilities of a large language model (LLM) to replicate this innate ability. To address the lack of existing datasets of in-the-wild spatial sounds, we synthesized a binaural audio dataset using AudioSet and SoundSpaces 2.0. Next, we developed SpatialSoundQA, a spatial sound-based question-answering dataset, offering a range of QA tasks that train BAT in various aspects of spatial sound perception and reasoning. The acoustic front end encoder of BAT is a novel spatial audio encoder named Spatial Audio Spectrogram Transformer, or Spatial-AST, which by itself achieves strong performance across sound event detection, spatial localization, and distance estimation. By integrating Spatial-AST with the LLaMA-2 7B model, BAT transcends standard Sound Event Localization and Detection (SELD) tasks, enabling the model to reason about the relationships between the sounds in its environment. Our experiments demonstrate BAT’s superior performance on both spatial sound perception and reasoning, showcasing the immense potential of LLMs in navigating and interpreting complex spatial audio environments.
|
https://proceedings.mlr.press/v235/zheng24j.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24j/zheng24j.pdf
|
https://openreview.net/forum?id=efzkSbpyRw
|
Conformal Predictions under Markovian Data
|
https://proceedings.mlr.press/v235/zheng24j.html
|
Frédéric Zheng, Alexandre Proutiere
|
https://proceedings.mlr.press/v235/zheng24j.html
|
ICML 2024
|
We study the split Conformal Prediction method when applied to Markovian data. We quantify the gap in terms of coverage induced by the correlations in the data (compared to exchangeable data). This gap strongly depends on the mixing properties of the underlying Markov chain, and we prove that it typically scales as $\sqrt{t_\mathrm{mix}\ln(n)/n}$ (where $t_\mathrm{mix}$ is the mixing time of the chain). We also derive upper bounds on the impact of the correlations on the size of the prediction set. Finally, we present $K$-split CP, a method that consists of thinning the calibration dataset and that adapts to the mixing properties of the chain. Its coverage gap is reduced to $t_\mathrm{mix}/(n\ln(n))$ without really affecting the size of the prediction set. We test our algorithms on synthetic and real-world datasets.
|
https://proceedings.mlr.press/v235/zheng24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24k/zheng24k.pdf
|
https://openreview.net/forum?id=5PQhu8flSO
|
Detecting and Identifying Selection Structure in Sequential Data
|
https://proceedings.mlr.press/v235/zheng24k.html
|
Yujia Zheng, Zeyu Tang, Yiwen Qiu, Bernhard Schölkopf, Kun Zhang
|
https://proceedings.mlr.press/v235/zheng24k.html
|
ICML 2024
|
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences. Since this selection process often distorts statistical analysis, previous work primarily views it as a bias to be corrected and proposes various methods to mitigate its effect. However, while controlling this bias is crucial, selection also offers an opportunity to provide a deeper insight into the hidden generation process, as it is a fundamental mechanism underlying what we observe. In particular, overlooking selection in sequential data can lead to an incomplete or overcomplicated inductive bias in modeling, such as assuming a universal autoregressive structure for all dependencies. Therefore, rather than merely viewing it as a bias, we explore the causal structure of selection in sequential data to delve deeper into the complete causal process. Specifically, we show that selection structure is identifiable without any parametric assumptions or interventional experiments. Moreover, even in cases where selection variables coexist with latent confounders, we still establish the nonparametric identifiability under appropriate structural conditions. Meanwhile, we also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies. The framework has been validated empirically on both synthetic data and real-world music.
|
https://proceedings.mlr.press/v235/zheng24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24l/zheng24l.pdf
|
https://openreview.net/forum?id=tVwzR1myUp
|
ContPhy: Continuum Physical Concept Learning and Reasoning from Videos
|
https://proceedings.mlr.press/v235/zheng24l.html
|
Zhicheng Zheng, Xin Yan, Zhenfang Chen, Jingzhou Wang, Qin Zhi Eddie Lim, Joshua B. Tenenbaum, Chuang Gan
|
https://proceedings.mlr.press/v235/zheng24l.html
|
ICML 2024
|
We introduce the Continuum Physical Dataset (ContPhy), a novel benchmark for assessing machine physical commonsense. ContPhy complements existing physical reasoning benchmarks by encompassing the inference of diverse physical properties, such as mass and density, across various scenarios and predicting corresponding dynamics. We evaluated a range of AI models and found that they still struggle to achieve satisfactory performance on ContPhy, which shows that current AI models still lack physical commonsense for the continuum, especially soft-bodies, and illustrates the value of the proposed dataset. We also introduce an oracle model (ContPRO) that marries the particle-based physical dynamic models with the recent large language models, which enjoy the advantages of both models, precise dynamic predictions, and interpretable reasoning. ContPhy aims to spur progress in perception and reasoning within diverse physical settings, narrowing the divide between human and machine intelligence in understanding the physical world.
|
https://proceedings.mlr.press/v235/zheng24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24m/zheng24m.pdf
|
https://openreview.net/forum?id=ar174skI9u
|
DPN: Decoupling Partition and Navigation for Neural Solvers of Min-max Vehicle Routing Problems
|
https://proceedings.mlr.press/v235/zheng24m.html
|
Zhi Zheng, Shunyu Yao, Zhenkun Wang, Tong Xialiang, Mingxuan Yuan, Ke Tang
|
https://proceedings.mlr.press/v235/zheng24m.html
|
ICML 2024
|
The min-max vehicle routing problem (min-max VRP) traverses all given customers by assigning several routes and aims to minimize the length of the longest route. Recently, reinforcement learning (RL)-based sequential planning methods have exhibited advantages in solving efficiency and optimality. However, these methods fail to exploit the problem-specific properties in learning representations, resulting in less effective features for decoding optimal routes. This paper considers the sequential planning process of min-max VRPs as two coupled optimization tasks: customer partition for different routes and customer navigation in each route (i.e., partition and navigation). To effectively process min-max VRP instances, we present a novel attention-based Partition-and-Navigation encoder (P&N Encoder) that learns distinct embeddings for partition and navigation. Furthermore, we utilize an inherent symmetry in decoding routes and develop an effective agent-permutation-symmetric (APS) loss function. Experimental results demonstrate that the proposed Decoupling-Partition-Navigation (DPN) method significantly surpasses existing learning-based methods in both single-depot and multi-depot min-max VRPs. Our code is available at
|
https://proceedings.mlr.press/v235/zheng24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24n/zheng24n.pdf
|
https://openreview.net/forum?id=ugxGpOEkox
|
On Prompt-Driven Safeguarding for Large Language Models
|
https://proceedings.mlr.press/v235/zheng24n.html
|
Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, Nanyun Peng
|
https://proceedings.mlr.press/v235/zheng24n.html
|
ICML 2024
|
Prepending model inputs with safety prompts is a common practice for safeguarding large language models (LLMs) against queries with harmful intents. However, the underlying working mechanisms of safety prompts have not been unraveled yet, restricting the possibility of automatically optimizing them to improve LLM safety. In this work, we investigate how LLMs’ behavior (i.e., complying with or refusing user queries) is affected by safety prompts from the perspective of model representation. We find that in the representation space, the input queries are typically moved by safety prompts in a "higher-refusal" direction, in which models become more prone to refusing to provide assistance, even when the queries are harmless. On the other hand, LLMs are naturally capable of distinguishing harmful and harmless queries without safety prompts. Inspired by these findings, we propose a method for safety prompt optimization, namely DRO (Directed Representation Optimization). Treating a safety prompt as continuous, trainable embeddings, DRO learns to move the queries’ representations along or opposite the refusal direction, depending on their harmfulness. Experiments with eight LLMs on out-of-domain and jailbreak benchmarks demonstrate that DRO remarkably improves the safeguarding performance of human-crafted safety prompts, without compromising the models’ general performance.
|
https://proceedings.mlr.press/v235/zheng24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24o/zheng24o.pdf
|
https://openreview.net/forum?id=bV9yT24t9B
|
Self-Infilling Code Generation
|
https://proceedings.mlr.press/v235/zheng24o.html
|
Lin Zheng, Jianbo Yuan, Zhi Zhang, Hongxia Yang, Lingpeng Kong
|
https://proceedings.mlr.press/v235/zheng24o.html
|
ICML 2024
|
In this work, we introduce self-infilling code generation, a general framework that incorporates infilling operations into auto-regressive decoding. Our approach capitalizes on the observation that recent infilling-capable code language models can perform self-infilling: whereas conventional infilling is designed to fill in the middle based on a predefined prefix and suffix, self-infilling sequentially generates both such surrounding context and the infilled content. We utilize self-infilling to introduce novel interruption and looping mechanisms in conventional decoding, evolving it into a non-monotonic process. Interruptions allow for postponing the generation of specific code until a definitive suffix is established, enhancing control during decoding. Meanwhile, the looping mechanism, which leverages the complementary nature of self-infilling and left-to-right decoding, can iteratively update and synchronize each piece of generation cyclically. Extensive experiments across a variety of code generation benchmarks demonstrate that decoding with self-infilling not only improves the output quality but also regularizes the overall generation, which effectively mitigates potential degeneration and scaffolds code to be more consistent with intended functionality.
|
https://proceedings.mlr.press/v235/zheng24p.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zheng24p/zheng24p.pdf
|
https://openreview.net/forum?id=aksdU1KOpT
|
Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning
|
https://proceedings.mlr.press/v235/zheng24p.html
|
Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan
|
https://proceedings.mlr.press/v235/zheng24p.html
|
ICML 2024
|
Class-Incremental Learning (CIL) seeks to learn new concepts without forgetting previously learned knowledge. To achieve this, rehearsal-based methods keep a replay memory consisting of a small number of trained samples from previous tasks. However, recent studies show that rehearsal-based methods are prone to overfitting on rehearsal samples, resulting in poor generalization on previous tasks. Since the generalization error is bounded by the margin on the training dataset, in this paper, we study the generalization by all-layer margin on deep neural networks to alleviate catastrophic forgetting. Specifically, we show that the average margin of the rehearsal samples is smaller during incremental learning. To acquire a larger margin and thus better generalization on rehearsal samples, we propose Multi-layer Rehearsal Feature Augmentation (MRFA) in rehearsal training to optimize the all-layer margin on rehearsal samples. The proposed method augments the features of rehearsal samples at each layer by a gradient ascent step of the current model with respect to the feature. With such augmentations on layer features, the margin on rehearsal samples is larger, and rehearsal samples are able to provide more information for refining the decision boundary during incremental learning, thus alleviating catastrophic forgetting. Extensive experiments show the effectiveness of MRFA on various CIL scenarios.
|
https://proceedings.mlr.press/v235/zhong24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhong24a/zhong24a.pdf
|
https://openreview.net/forum?id=jKUWlgra9b
|
ERQ: Error Reduction for Post-Training Quantization of Vision Transformers
|
https://proceedings.mlr.press/v235/zhong24a.html
|
Yunshan Zhong, Jiawei Hu, You Huang, Yuxin Zhang, Rongrong Ji
|
https://proceedings.mlr.press/v235/zhong24a.html
|
ICML 2024
|
Post-training quantization (PTQ) for vision transformers (ViTs) has garnered significant attention due to its efficiency in compressing models. However, existing methods typically overlook the intricate interdependence between quantized weight and activation, leading to considerable quantization error. In this paper, we propose ERQ, a two-step PTQ approach meticulously crafted to sequentially reduce the quantization error arising from activation and weight quantization. ERQ first introduces Activation quantization error reduction (Aqer) that strategically formulates the minimization of activation quantization error as a Ridge Regression problem, tackling it by updating weights with full-precision. Subsequently, ERQ introduces Weight quantization error reduction (Wqer) that adopts an iterative approach to mitigate the quantization error induced by weight quantization. In each iteration, an empirically derived, efficient proxy is employed to refine the rounding directions of quantized weights, coupled with a Ridge Regression solver to curtail weight quantization error. Experimental results attest to the effectiveness of our approach. Notably, ERQ surpasses the state-of-the-art GPTQ by 22.36% in accuracy for W3A4 ViT-S.
|
https://proceedings.mlr.press/v235/zhong24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhong24b/zhong24b.pdf
|
https://openreview.net/forum?id=eFvoL7BOny
|
Provably Efficient Exploration in Quantum Reinforcement Learning with Logarithmic Worst-Case Regret
|
https://proceedings.mlr.press/v235/zhong24b.html
|
Han Zhong, Jiachen Hu, Yecheng Xue, Tongyang Li, Liwei Wang
|
https://proceedings.mlr.press/v235/zhong24b.html
|
ICML 2024
|
While quantum reinforcement learning (RL) has attracted a surge of attention recently, its theoretical understanding is limited. In particular, it remains elusive how to design provably efficient quantum RL algorithms that can address the exploration-exploitation trade-off. To this end, we propose a novel UCRL-style algorithm that takes advantage of quantum computing for tabular Markov decision processes (MDPs) with $S$ states, $A$ actions, and horizon $H$, and establish an $\mathcal{O}(\mathrm{poly}(S, A, H, \log T))$ worst-case regret for it, where $T$ is the number of episodes. Furthermore, we extend our results to quantum RL with linear function approximation, which is capable of handling problems with large state spaces. Specifically, we develop a quantum algorithm based on value target regression (VTR) for linear mixture MDPs with $d$-dimensional linear representation and prove that it enjoys $\mathcal{O}(\mathrm{poly}(d, H, \log T))$ regret. Our algorithms are variants of UCRL/UCRL-VTR algorithms in classical RL, which also leverage a novel combination of lazy updating mechanisms and quantum estimation subroutines. This is the key to breaking the $\Omega(\sqrt{T})$-regret barrier in classical RL. To the best of our knowledge, this is the first work studying the online exploration in quantum RL with provable logarithmic worst-case regret.
|
https://proceedings.mlr.press/v235/zhong24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhong24c/zhong24c.pdf
|
https://openreview.net/forum?id=2bUFIsg2f5
|
Towards Efficient Training and Evaluation of Robust Models against $l_0$ Bounded Adversarial Perturbations
|
https://proceedings.mlr.press/v235/zhong24c.html
|
Xuyang Zhong, Yixiao Huang, Chen Liu
|
https://proceedings.mlr.press/v235/zhong24c.html
|
ICML 2024
|
This work studies sparse adversarial perturbations bounded by $l_0$ norm. We propose a white-box PGD-like attack method named sparse-PGD to effectively and efficiently generate such perturbations. Furthermore, we combine sparse-PGD with a black-box attack to comprehensively and more reliably evaluate the models’ robustness against $l_0$ bounded adversarial perturbations. Moreover, the efficiency of sparse-PGD enables us to conduct adversarial training to build robust models against sparse perturbations. Extensive experiments demonstrate that our proposed attack algorithm exhibits strong performance in different scenarios. More importantly, compared with other robust models, our adversarially trained model demonstrates state-of-the-art robustness against various sparse attacks. Codes are available at https://github.com/CityU-MLO/sPGD.
|
https://proceedings.mlr.press/v235/zhong24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhong24d/zhong24d.pdf
|
https://openreview.net/forum?id=rIc9adYbH2
|
GNNs Also Deserve Editing, and They Need It More Than Once
|
https://proceedings.mlr.press/v235/zhong24d.html
|
Shaochen Zhong, Duy Le, Zirui Liu, Zhimeng Jiang, Andrew Ye, Jiamu Zhang, Jiayi Yuan, Kaixiong Zhou, Zhaozhuo Xu, Jing Ma, Shuai Xu, Vipin Chaudhary, Xia Hu
|
https://proceedings.mlr.press/v235/zhong24d.html
|
ICML 2024
|
Suppose a self-driving car is crashing into pedestrians, or a chatbot is instructing its users to conduct criminal wrongdoing; the stakeholders of such products will undoubtedly want to patch these catastrophic errors as soon as possible. To address such concerns, Model Editing: the study of efficiently patching model behaviors without significantly altering their general performance, has seen considerable activity, with hundreds of editing techniques developed in various domains such as CV and NLP. However, the graph learning community has objectively fallen behind with only a few Graph Neural Network-compatible — and just one GNN-specific — model editing methods available, all of which are limited in their practical scope. We argue that the impracticality of these methods lies in their lack of Sequential Editing Robustness: the ability to edit multiple errors sequentially, which mirrors how errors are discovered and addressed in the real world, and that they therefore fall short in effectiveness. In this paper, we delve into the specific reasons behind the difficulty of editing GNNs in succession and observe the root cause to be model overfitting. We subsequently propose a simple yet effective solution — SEED-GNN — by leveraging overfit-prevention techniques in a GNN-specific context to derive the first and only GNN model editing method that scales practically. Additionally, we formally frame the task paradigm of GNN editing and hope to inspire future research in this crucial but currently overlooked field. Please refer to our GitHub repository for code and checkpoints.
|
https://proceedings.mlr.press/v235/zhong24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhong24e/zhong24e.pdf
|
https://openreview.net/forum?id=gKPkipJ3gm
|
Causal-IQA: Towards the Generalization of Image Quality Assessment Based on Causal Inference
|
https://proceedings.mlr.press/v235/zhong24e.html
|
Yan Zhong, Xingyu Wu, Li Zhang, Chenxi Yang, Tingting Jiang
|
https://proceedings.mlr.press/v235/zhong24e.html
|
ICML 2024
|
Due to the high cost of Image Quality Assessment (IQA) datasets, achieving robust generalization remains challenging for prevalent deep learning-based IQA methods. To address this, this paper proposes a novel end-to-end blind IQA method: Causal-IQA. Specifically, we first analyze the causal mechanisms in IQA tasks and construct a causal graph to understand the interplay and confounding effects between distortion types, image contents, and subjective human ratings. Then, through shifting the focus from correlations to causality, Causal-IQA aims to improve the estimation accuracy of image quality scores by mitigating the confounding effects using a causality-based optimization strategy. This optimization strategy is implemented on the sample subsets constructed by a Counterfactual Division process based on the Backdoor Criterion. Extensive experiments illustrate the superiority of Causal-IQA.
|
https://proceedings.mlr.press/v235/zhou24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24a/zhou24a.pdf
|
https://openreview.net/forum?id=FG5hjRBtpm
|
Jacobian Regularizer-based Neural Granger Causality
|
https://proceedings.mlr.press/v235/zhou24a.html
|
Wanqi Zhou, Shuanghao Bai, Shujian Yu, Qibin Zhao, Badong Chen
|
https://proceedings.mlr.press/v235/zhou24a.html
|
ICML 2024
|
With the advancement of neural networks, diverse methods for neural Granger causality have emerged, which demonstrate proficiency in handling complex data and nonlinear relationships. However, the existing framework of neural Granger causality has several limitations. It requires the construction of separate predictive models for each target variable, and the relationship depends on the sparsity of the weights of the first layer, resulting in challenges in effectively modeling complex relationships between variables as well as unsatisfactory estimation accuracy of Granger causality. Moreover, most of them cannot grasp full-time Granger causality. To address these drawbacks, we propose a Jacobian Regularizer-based Neural Granger Causality (JRNGC) approach, a straightforward yet highly effective method for learning multivariate summary Granger causality and full-time Granger causality by constructing a single model for all target variables. Specifically, our method eliminates the sparsity constraints of weights by leveraging an input-output Jacobian matrix regularizer, which can be subsequently represented as the weighted causal matrix in the post-hoc analysis. Extensive experiments show that our proposed approach achieves competitive performance with the state-of-the-art methods for learning summary Granger causality and full-time Granger causality while maintaining lower model complexity and high scalability.
|
https://proceedings.mlr.press/v235/zhou24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24b/zhou24b.pdf
|
https://openreview.net/forum?id=ElNxZ40tBJ
|
Differentially Private Worst-group Risk Minimization
|
https://proceedings.mlr.press/v235/zhou24b.html
|
Xinyu Zhou, Raef Bassily
|
https://proceedings.mlr.press/v235/zhou24b.html
|
ICML 2024
|
We initiate a systematic study of worst-group risk minimization under $(\epsilon, \delta)$-differential privacy (DP). The goal is to privately find a model that approximately minimizes the maximal risk across $p$ sub-populations (groups) with different distributions, where each group distribution is accessed via a sample oracle. We first present a new algorithm that achieves excess worst-group population risk of $\tilde{O}(\frac{p\sqrt{d}}{K\epsilon} + \sqrt{\frac{p}{K}})$, where $K$ is the total number of samples drawn from all groups and $d$ is the problem dimension. Our rate is nearly optimal when each distribution is observed via a fixed-size dataset of size $K/p$. Our result is based on a new stability-based analysis for the generalization error. In particular, we show that $\Delta$-uniform argument stability implies $\tilde{O}(\Delta + \frac{1}{\sqrt{n}})$ generalization error w.r.t. the worst-group risk, where $n$ is the number of samples drawn from each sample oracle. Next, we propose an algorithmic framework for worst-group population risk minimization using any DP online convex optimization algorithm as a subroutine. Hence, we give another excess risk bound of $\tilde{O}\left( \sqrt{\frac{d^{1/2}}{\epsilon K}} +\sqrt{\frac{p}{K\epsilon^2}} + \sqrt{\frac{p}{K}} \right)$. Assuming the typical setting of $\epsilon=\Theta(1)$, this bound is more favorable than our first bound in a certain range of $p$ as a function of $K$ and $d$. Finally, we study differentially private worst-group empirical risk minimization in the offline setting, where each group distribution is observed by a fixed-size dataset. We present a new algorithm with nearly optimal excess risk of $\tilde{O}(\frac{p\sqrt{d}}{K\epsilon})$.
|
https://proceedings.mlr.press/v235/zhou24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24c/zhou24c.pdf
|
https://openreview.net/forum?id=lsQnneYa8p
|
MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts
|
https://proceedings.mlr.press/v235/zhou24c.html
|
Jianan Zhou, Zhiguang Cao, Yaoxin Wu, Wen Song, Yining Ma, Jie Zhang, Xu Chi
|
https://proceedings.mlr.press/v235/zhou24c.html
|
ICML 2024
|
Learning to solve vehicle routing problems (VRPs) has garnered much attention. However, most neural solvers are only structured and trained independently on a specific problem, making them less generic and practical. In this paper, we aim to develop a unified neural solver that can cope with a range of VRP variants simultaneously. Specifically, we propose a multi-task vehicle routing solver with mixture-of-experts (MVMoE), which greatly enhances the model capacity without a proportional increase in computation. We further develop a hierarchical gating mechanism for the MVMoE, delivering a good trade-off between empirical performance and computational complexity. Experimentally, our method significantly promotes zero-shot generalization performance on 10 unseen VRP variants, and showcases decent results on the few-shot setting and real-world benchmark instances. We further conduct extensive studies on the effect of MoE configurations in solving VRPs, and observe the superiority of hierarchical gating when facing out-of-distribution data. The source code is available at: https://github.com/RoyalSkye/Routing-MVMoE.
|
https://proceedings.mlr.press/v235/zhou24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24d/zhou24d.pdf
|
https://openreview.net/forum?id=teHPKqjX8q
|
MD tree: a model-diagnostic tree grown on loss landscape
|
https://proceedings.mlr.press/v235/zhou24d.html
|
Yefan Zhou, Jianlong Chen, Qinxue Cao, Konstantin Schürholt, Yaoqing Yang
|
https://proceedings.mlr.press/v235/zhou24d.html
|
ICML 2024
|
This paper considers "model diagnosis", which we formulate as a classification problem. Given a pre-trained neural network (NN), the goal is to predict the source of failure from a set of failure modes (such as a wrong hyperparameter, inadequate model size, and insufficient data) without knowing the training configuration of the pre-trained NN. The conventional diagnosis approach uses training and validation errors to determine whether the model is underfitting or overfitting. However, we show that rich information about NN performance is encoded in the optimization loss landscape, which provides more actionable insights than validation-based measurements. Therefore, we propose a diagnosis method called MD tree based on loss landscape metrics and experimentally demonstrate its advantage over classical validation-based approaches. We verify the effectiveness of MD tree in multiple practical scenarios: (1) use several models trained on one dataset to diagnose a model trained on another dataset, essentially a few-shot dataset transfer problem; (2) use small models (or models trained with small data) to diagnose big models (or models trained with big data), essentially a scale transfer problem. In a dataset transfer task, MD tree achieves an accuracy of 87.7%, outperforming validation-based approaches by 14.88%. Our code is available at https://github.com/YefanZhou/ModelDiagnosis.
|
https://proceedings.mlr.press/v235/zhou24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24e/zhou24e.pdf
|
https://openreview.net/forum?id=qg6AlnpEQH
|
On the Emergence of Cross-Task Linearity in Pretraining-Finetuning Paradigm
|
https://proceedings.mlr.press/v235/zhou24e.html
|
Zhanpeng Zhou, Zijun Chen, Yilan Chen, Bo Zhang, Junchi Yan
|
https://proceedings.mlr.press/v235/zhou24e.html
|
ICML 2024
|
The pretraining-finetuning paradigm has become the prevailing trend in modern deep learning. In this work, we discover an intriguing linear phenomenon in models that are initialized from a common pretrained checkpoint and finetuned on different tasks, termed as Cross-Task Linearity (CTL). Specifically, we show that if we linearly interpolate the weights of two finetuned models, the features in the weight-interpolated model are often approximately equal to the linear interpolation of features in two finetuned models at each layer. We provide comprehensive empirical evidence supporting that CTL consistently occurs for finetuned models that start from the same pretrained checkpoint. We conjecture that in the pretraining-finetuning paradigm, neural networks approximately function as linear maps, mapping from the parameter space to the feature space. Based on this viewpoint, our study unveils novel insights into explaining model merging/editing, particularly by translating operations from the parameter space to the feature space. Furthermore, we delve deeper into the root cause for the emergence of CTL, highlighting the role of pretraining.
|
https://proceedings.mlr.press/v235/zhou24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24f/zhou24f.pdf
|
https://openreview.net/forum?id=kHjOmAUfVe
|
RoboDreamer: Learning Compositional World Models for Robot Imagination
|
https://proceedings.mlr.press/v235/zhou24f.html
|
Siyuan Zhou, Yilun Du, Jiaben Chen, Yandong Li, Dit-Yan Yeung, Chuang Gan
|
https://proceedings.mlr.press/v235/zhou24f.html
|
ICML 2024
|
Text-to-video models have demonstrated substantial potential in robotic decision-making, enabling the imagination of realistic plans of future actions as well as accurate environment simulation. However, one major issue in such models is generalization – models are limited to synthesizing videos subject to language instructions similar to those seen at training time. This is heavily limiting in decision-making, where we seek a powerful world model to synthesize plans of unseen combinations of objects and actions in order to solve previously unseen tasks in new environments. To resolve this issue, we introduce RoboDreamer, an innovative approach for learning a compositional world model by factorizing the video generation. We leverage the natural compositionality of language to parse instructions into a set of lower-level primitives, which we condition a set of models on to generate videos. We illustrate how this factorization naturally enables compositional generalization, by allowing us to formulate a new natural language instruction as a combination of previously seen components. We further show how such a factorization enables us to add additional multimodal goals, allowing us to specify a video we wish to generate given both natural language instructions and a goal image. Our approach can successfully synthesize video plans on unseen goals in the RT-X, enables successful robot execution in simulation, and substantially outperforms monolithic baseline approaches to video generation.
|
https://proceedings.mlr.press/v235/zhou24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24g/zhou24g.pdf
|
https://openreview.net/forum?id=IC9UZ8lm25
|
MultiMax: Sparse and Multi-Modal Attention Learning
|
https://proceedings.mlr.press/v235/zhou24g.html
|
Yuxuan Zhou, Mario Fritz, Margret Keuper
|
https://proceedings.mlr.press/v235/zhou24g.html
|
ICML 2024
|
SoftMax is a ubiquitous ingredient of modern machine learning algorithms. It maps an input vector onto a probability simplex and reweights the input by concentrating the probability mass at large entries. Yet, as a smooth approximation to the Argmax function, a significant amount of probability mass is distributed to other, residual entries, leading to poor interpretability and noise. Although sparsity can be achieved by a family of SoftMax variants, they often require an alternative loss function and do not preserve multi-modality. We show that this trade-off between multi-modality and sparsity limits the expressivity of SoftMax as well as its variants. We provide a solution to this tension between objectives by proposing a piece-wise differentiable function, termed MultiMax, which adaptively modulates the output distribution according to input entry range. Through comprehensive analysis and evaluation, we show that MultiMax successfully produces a distribution that suppresses irrelevant entries while preserving multi-modality, with benefits in image classification, language modeling and machine translation.
|
https://proceedings.mlr.press/v235/zhou24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24h/zhou24h.pdf
|
https://openreview.net/forum?id=18rzx2PXKm
|
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning
|
https://proceedings.mlr.press/v235/zhou24h.html
|
Tianchen Zhou, Fnu Hairi, Haibo Yang, Jia Liu, Tian Tong, Fan Yang, Michinari Momma, Yan Gao
|
https://proceedings.mlr.press/v235/zhou24h.html
|
ICML 2024
|
Reinforcement learning with multiple, potentially conflicting objectives is pervasive in real-world applications, yet this problem remains theoretically under-explored. This paper tackles the multi-objective reinforcement learning (MORL) problem and introduces an innovative actor-critic algorithm named MOAC which finds a policy by iteratively making trade-offs among conflicting reward signals. Notably, we provide the first analysis of finite-time Pareto-stationary convergence and corresponding sample complexity in both discounted and average reward settings. Our approach has two salient features: (a) MOAC mitigates the cumulative estimation bias resulting from finding an optimal common gradient descent direction out of stochastic samples. This enables provable convergence rate and sample complexity guarantees independent of the number of objectives; (b) With proper momentum coefficient, MOAC initializes the weights of individual policy gradients using samples from the environment, instead of manual initialization. This enhances the practicality and robustness of our algorithm. Finally, experiments conducted on a real-world dataset validate the effectiveness of our proposed method.
|
https://proceedings.mlr.press/v235/zhou24i.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24i/zhou24i.pdf
|
https://openreview.net/forum?id=8ERo4jph0A
|
Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data
|
https://proceedings.mlr.press/v235/zhou24i.html
|
Yujun Zhou, Yufei Han, Haomin Zhuang, Hongyan Bao, Xiangliang Zhang
|
https://proceedings.mlr.press/v235/zhou24i.html
|
ICML 2024
|
Research on adversarial robustness has predominantly focused on continuous inputs, leaving categorical inputs, especially tabular attributes, less examined. To address this challenge, our work aims to evaluate and enhance the robustness of classification over categorical attributes against adversarial perturbations through efficient attack-free approaches. We propose a robustness evaluation metric named Integrated Gradient-Smoothed Gradient (IGSG). It is designed to evaluate the attributional sensitivity of each feature and the decision boundary of the classifier, two aspects that significantly influence adversarial risk, according to our theoretical analysis. Leveraging this metric, we develop an IGSG-based regularization to reduce adversarial risk by suppressing the sensitivity of categorical attributes. We conduct extensive empirical studies over categorical datasets of various application domains. The results affirm the efficacy of both IGSG and IGSG-based regularization. Notably, IGSG-based regularization surpasses the state-of-the-art robust training methods by a margin of approximately 0.4% to 12.2% on average in terms of adversarial accuracy, especially on high-dimension datasets. The code is available at https://github.com/YujunZhou/IGSG.
|
https://proceedings.mlr.press/v235/zhou24j.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24j/zhou24j.pdf
|
https://openreview.net/forum?id=CD2xl1L5es
|
Pedestrian Attribute Recognition as Label-balanced Multi-label Learning
|
https://proceedings.mlr.press/v235/zhou24j.html
|
Yibo Zhou, Hai-Miao Hu, Yirong Xiang, Xiaokang Zhang, Haotian Wu
|
https://proceedings.mlr.press/v235/zhou24j.html
|
ICML 2024
|
Rooted in the scarcity of most attributes, realistic pedestrian attribute datasets exhibit an unduly skewed data distribution, which gives rise to two types of model failures: (1) label imbalance: model predictions lean greatly towards the side of majority labels; (2) semantics imbalance: the model is easily overfitted on the under-represented attributes due to their insufficient semantic diversity. To render perfect label balancing, we propose a novel framework that successfully decouples label-balanced data re-sampling from the curse of attributes co-occurrence, i.e., we equalize the sampling prior of an attribute while not biasing that of the co-occurred others. To diversify the attributes semantics and mitigate the feature noise, we propose a Bayesian feature augmentation method to introduce true in-distribution novelty. Handling both imbalances jointly, our work achieves best accuracy on various popular benchmarks, and importantly, with minimal computational budget.
|
https://proceedings.mlr.press/v235/zhou24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24k/zhou24k.pdf
|
https://openreview.net/forum?id=XKxuTZRCXq
|
Bounding the Excess Risk for Linear Models Trained on Marginal-Preserving, Differentially-Private, Synthetic Data
|
https://proceedings.mlr.press/v235/zhou24k.html
|
Yvonne Zhou, Mingyu Liang, Ivan Brugere, Danial Dervovic, Antigoni Polychroniadou, Min Wu, Dana Dachman-Soled
|
https://proceedings.mlr.press/v235/zhou24k.html
|
ICML 2024
|
The growing use of machine learning (ML) has raised concerns that an ML model may reveal private information about an individual who has contributed to the training dataset. To prevent leakage of sensitive data, we consider using differentially-private (DP) synthetic training data instead of real training data to train an ML model. A key desirable property of synthetic data is its ability to preserve the low-order marginals of the original distribution. Our main contribution comprises novel upper and lower bounds on the excess empirical risk of linear models trained on such synthetic data, for continuous and Lipschitz loss functions. We perform extensive experimentation alongside our theoretical results.
|
https://proceedings.mlr.press/v235/zhou24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24l/zhou24l.pdf
|
https://openreview.net/forum?id=nbpwNmXTTw
|
Conformalized Adaptive Forecasting of Heterogeneous Trajectories
|
https://proceedings.mlr.press/v235/zhou24l.html
|
Yanfei Zhou, Lars Lindemann, Matteo Sesia
|
https://proceedings.mlr.press/v235/zhou24l.html
|
ICML 2024
|
This paper presents a new conformal method for generating simultaneous forecasting bands guaranteed to cover the entire path of a new random trajectory with sufficiently high probability. Prompted by the need for dependable uncertainty estimates in motion planning applications where the behavior of diverse objects may be more or less unpredictable, we blend different techniques from online conformal prediction of single and multiple time series, as well as ideas for addressing heteroscedasticity in regression. This solution is both principled, providing precise finite-sample guarantees, and effective, often leading to more informative predictions than prior methods.
|
https://proceedings.mlr.press/v235/zhou24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24m/zhou24m.pdf
|
https://openreview.net/forum?id=bmeUeCUMHA
|
Sequential Kernel Goodness-of-fit Testing
|
https://proceedings.mlr.press/v235/zhou24m.html
|
Zhengyu Zhou, Weiwei Liu
|
https://proceedings.mlr.press/v235/zhou24m.html
|
ICML 2024
|
Goodness-of-fit testing, a classical statistical tool, has been extensively explored in the batch setting, where the sample size is predetermined. However, practitioners often prefer methods that adapt to the complexity of a problem rather than fixing the sample size beforehand. Classical batch tests are generally unsuitable for streaming data, as valid inference after data peeking requires multiple testing corrections, resulting in reduced statistical power. To address this issue, we delve into the design of consistent sequential goodness-of-fit tests. Following the principle of testing by betting, we reframe this task as selecting a sequence of payoff functions that maximize the wealth of a fictitious bettor, betting against the null in a repeated game. We conduct experiments to demonstrate the adaptability of our sequential test across varying difficulty levels of problems while maintaining control over type-I errors.
|
https://proceedings.mlr.press/v235/zhou24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24n/zhou24n.pdf
|
https://openreview.net/forum?id=pBTLGM9uWx
|
RAUCA: A Novel Physical Adversarial Attack on Vehicle Detectors via Robust and Accurate Camouflage Generation
|
https://proceedings.mlr.press/v235/zhou24n.html
|
Jiawei Zhou, Linye Lyu, Daojing He, Yu Li
|
https://proceedings.mlr.press/v235/zhou24n.html
|
ICML 2024
|
Adversarial camouflage is a widely used physical attack against vehicle detectors for its superiority in multi-view attack performance. One promising approach involves using differentiable neural renderers to facilitate adversarial camouflage optimization through gradient back-propagation. However, existing methods often struggle to capture environmental characteristics during the rendering process or produce adversarial textures that can precisely map to the target vehicle, resulting in suboptimal attack performance. Moreover, these approaches neglect diverse weather conditions, reducing the efficacy of generated camouflage across varying weather scenarios. To tackle these challenges, we propose a robust and accurate camouflage generation method, namely RAUCA. The core of RAUCA is a novel neural rendering component, Neural Renderer Plus (NRP), which can accurately project vehicle textures and render images with environmental characteristics such as lighting and weather. In addition, we integrate a multi-weather dataset for camouflage generation, leveraging the NRP to enhance the attack robustness. Experimental results on six popular object detectors show that RAUCA consistently outperforms existing methods in both simulation and real-world settings.
|
https://proceedings.mlr.press/v235/zhou24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24o/zhou24o.pdf
|
https://openreview.net/forum?id=Htw0bSgjXE
|
CurBench: Curriculum Learning Benchmark
|
https://proceedings.mlr.press/v235/zhou24o.html
|
Yuwei Zhou, Zirui Pan, Xin Wang, Hong Chen, Haoyang Li, Yanwen Huang, Zhixiao Xiong, Fangzhou Xiong, Peiyang Xu, Shengnan Liu, Wenwu Zhu
|
https://proceedings.mlr.press/v235/zhou24o.html
|
ICML 2024
|
Curriculum learning is a training paradigm where machine learning models are trained in a meaningful order, inspired by the way humans learn curricula. Due to its capability to improve model generalization and convergence, curriculum learning has gained considerable attention and has been widely applied to various research domains. Nevertheless, as new curriculum learning methods continue to emerge, it remains an open issue to benchmark them fairly. Therefore, we develop CurBench, the first benchmark that supports systematic evaluations for curriculum learning. Specifically, it consists of 15 datasets spanning 3 research domains: computer vision, natural language processing, and graph machine learning, along with 3 settings: standard, noise, and imbalance. To facilitate a comprehensive comparison, we establish the evaluation from 2 dimensions: performance and complexity. CurBench also provides a unified toolkit that plugs automatic curricula into general machine learning processes, enabling the implementation of 15 core curriculum learning methods. On the basis of this benchmark, we conduct comparative experiments and make empirical analyses of existing methods. CurBench is open-source and publicly available at https://github.com/THUMNLab/CurBench.
|
https://proceedings.mlr.press/v235/zhou24p.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24p/zhou24p.pdf
|
https://openreview.net/forum?id=zL9q2JD1dC
|
GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting
|
https://proceedings.mlr.press/v235/zhou24p.html
|
Xiaoyu Zhou, Xingjian Ran, Yajiao Xiong, Jinlin He, Zhiwei Lin, Yongtao Wang, Deqing Sun, Ming-Hsuan Yang
|
https://proceedings.mlr.press/v235/zhou24p.html
|
ICML 2024
|
We present GALA3D, generative 3D GAussians with LAyout-guided control, for effective compositional text-to-3D generation. We first utilize large language models (LLMs) to generate the initial layout and introduce a layout-guided 3D Gaussian representation for 3D content generation with adaptive geometric constraints. We then propose an instance-scene compositional optimization mechanism with conditioned diffusion to collaboratively generate realistic 3D scenes with consistent geometry, texture, scale, and accurate interactions among multiple objects while simultaneously adjusting the coarse layout priors extracted from the LLMs to align with the generated scene. Experiments show that GALA3D is a user-friendly, end-to-end framework for state-of-the-art scene-level 3D content generation and controllable editing while ensuring the high fidelity of object-level entities within the scene. The source codes and models will be available at gala3d.github.io.
|
https://proceedings.mlr.press/v235/zhou24q.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24q/zhou24q.pdf
|
https://openreview.net/forum?id=ytz2naZoDB
|
Stabilizing Policy Gradients for Stochastic Differential Equations via Consistency with Perturbation Process
|
https://proceedings.mlr.press/v235/zhou24q.html
|
Xiangxin Zhou, Liang Wang, Yichi Zhou
|
https://proceedings.mlr.press/v235/zhou24q.html
|
ICML 2024
|
To generate samples with high rewards, we focus on optimizing stochastic differential equations (SDEs) parameterized by deep neural networks, advanced generative models with high expressiveness, using policy gradient, the leading algorithm in reinforcement learning. Nevertheless, when applying policy gradients to SDEs, since the policy gradient is estimated on a finite set of trajectories, it can be ill-defined, and the policy behavior in data-scarce regions may be uncontrolled. This challenge compromises the stability of policy gradients and negatively impacts sample complexity. To address these issues, we propose constraining the SDE to be consistent with its associated perturbation process. Since the perturbation process covers the entire space and is easy to sample, we can mitigate the aforementioned problems. Our framework offers a general approach allowing for a versatile selection of policy gradient methods to effectively and efficiently train SDEs. We evaluate our algorithm on the task of structure-based drug design and optimize the binding affinity of generated ligand molecules. Our method achieves the best Vina score (-9.07) on the CrossDocked2020 dataset.
|
https://proceedings.mlr.press/v235/zhou24r.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24r/zhou24r.pdf
|
https://openreview.net/forum?id=njwv9BsGHF
|
Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models
|
https://proceedings.mlr.press/v235/zhou24r.html
|
Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang
|
https://proceedings.mlr.press/v235/zhou24r.html
|
ICML 2024
|
While language models (LMs) have shown potential across a range of decision-making tasks, their reliance on simple acting processes limits their broad deployment as autonomous agents. In this paper, we introduce Language Agent Tree Search (LATS) – the first general framework that synergizes the capabilities of LMs in reasoning, acting, and planning. By leveraging the in-context learning ability of LMs, we integrate Monte Carlo Tree Search into LATS to enable LMs as agents, along with LM-powered value functions and self-reflections for proficient exploration and enhanced decision-making. A key feature of our approach is the incorporation of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that surpasses the constraints of existing techniques. Our experimental evaluation across diverse domains, including programming, interactive question-answering (QA), web navigation, and math, validates the effectiveness and generality of LATS in decision-making while maintaining competitive or improved reasoning performance. Notably, LATS achieves state-of-the-art pass@1 accuracy (92.7%) for programming on HumanEval with GPT-4 and demonstrates gradient-free performance (average score of 75.9) comparable to gradient-based fine-tuning for web navigation on WebShop with GPT-3.5. Code can be found at https://github.com/lapisrocks/LanguageAgentTreeSearch
|
https://proceedings.mlr.press/v235/zhou24s.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24s/zhou24s.pdf
|
https://openreview.net/forum?id=MoTUdh9ZCc
|
DeCoOp: Robust Prompt Tuning with Out-of-Distribution Detection
|
https://proceedings.mlr.press/v235/zhou24s.html
|
Zhi Zhou, Ming Yang, Jiang-Xin Shi, Lan-Zhe Guo, Yu-Feng Li
|
https://proceedings.mlr.press/v235/zhou24s.html
|
ICML 2024
|
Vision-language models (VLMs), such as CLIP, have demonstrated impressive zero-shot capabilities for various downstream tasks. Their performance can be further enhanced through few-shot prompt tuning methods. However, current studies evaluate the performance of learned prompts separately on base and new classes. This evaluation lacks practicality for real-world applications since downstream tasks cannot determine whether the data belongs to base or new classes in advance. In this paper, we explore a problem setting called Open-world Prompt Tuning (OPT), which involves tuning prompts on base classes and evaluating on a combination of base and new classes. By introducing the Decomposed Prompt Tuning framework (DePT), we theoretically demonstrate that OPT can be solved by incorporating out-of-distribution detection into prompt tuning, thereby enhancing the base-to-new discriminability. Based on DePT, we present a novel prompt tuning approach, namely, Decomposed Context Optimization (DeCoOp), which introduces new-class detectors and sub-classifiers to further enhance the base-class and new-class discriminability. Experimental results on 11 benchmark datasets validate the effectiveness of DePT and demonstrate that DeCoOp outperforms current state-of-the-art methods, providing a significant 2% average accuracy improvement.
|
https://proceedings.mlr.press/v235/zhou24t.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24t/zhou24t.pdf
|
https://openreview.net/forum?id=b6rA0kAHT1
|
ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL
|
https://proceedings.mlr.press/v235/zhou24t.html
|
Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar
|
https://proceedings.mlr.press/v235/zhou24t.html
|
ICML 2024
|
Large language models (LLMs) have the potential to tackle sequential decision-making problems due to their generalist capabilities. Instead of optimizing “myopic” surrogate objectives such as human preferences within a single turn, in such problems, we wish to directly optimize long-term objectives, such as user satisfaction over an entire dialogue with an LLM or delayed success metrics in web navigation. Multi-turn reinforcement learning (RL) provides an appealing approach to directly optimize long-term objectives, but how can we design effective and efficient multi-turn RL algorithms for LLMs? In this work, we propose an algorithmic framework to multi-turn RL for LLMs that preserves the flexibility of token-by-token RL used in single-turn RL problems, while still accommodating long horizons and delayed rewards more effectively. Our framework, the Actor-Critic Framework with a Hierarchical Structure (ArCHer), combines a high-level off-policy RL algorithm that trains a value function with a low-level RL algorithm that trains a token-by-token policy. While ArCHer can be instantiated with multiple RL algorithms, a particularly convenient instantiation is to use temporal difference (TD) learning at the high level and on-policy token-level policy gradient at the low level. Empirically, we show that ArCHer significantly improves efficiency and performance of multi-turn LLM tasks, attaining sample efficiency boosts of about 100x over prior on-policy methods and converging to a much better performance than other off-policy methods.
|
https://proceedings.mlr.press/v235/zhou24u.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24u/zhou24u.pdf
|
https://openreview.net/forum?id=7C4EQqtb02
|
Graphon Mean Field Games with a Representative Player: Analysis and Learning Algorithm
|
https://proceedings.mlr.press/v235/zhou24u.html
|
Fuzhong Zhou, Chenyu Zhang, Xu Chen, Xuan Di
|
https://proceedings.mlr.press/v235/zhou24u.html
|
ICML 2024
|
We propose a discrete time graphon game formulation on continuous state and action spaces using a representative player to study stochastic games with heterogeneous interaction among agents. This formulation admits both conceptual and mathematical advantages, compared to a widely adopted formulation using a continuum of players. We prove the existence and uniqueness of the graphon equilibrium with mild assumptions, and show that this equilibrium can be used to construct an approximate solution for the finite player game, which is challenging to analyze and solve due to the curse of dimensionality. An online oracle-free learning algorithm is developed to solve the equilibrium numerically, and sample complexity analysis is provided for its convergence.
|
https://proceedings.mlr.press/v235/zhou24v.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24v/zhou24v.pdf
|
https://openreview.net/forum?id=rSfzchjIYu
|
Effective Federated Graph Matching
|
https://proceedings.mlr.press/v235/zhou24v.html
|
Yang Zhou, Zijie Zhang, Zeru Zhang, Lingjuan Lyu, Wei-Shinn Ku
|
https://proceedings.mlr.press/v235/zhou24v.html
|
ICML 2024
|
Graph matching in the setting of federated learning is still an open problem. This paper proposes an unsupervised federated graph matching algorithm, UFGM, for inferring matched node pairs on different graphs across clients while maintaining privacy requirements, by leveraging graphlet theory and trust region optimization. First, the nodes’ graphlet features are captured to generate pseudo matched node pairs on different graphs across clients as pseudo training data, tackling the dilemma of unsupervised graph matching in the federated setting while leveraging the strength of supervised graph matching. An approximate graphlet enumeration method is proposed to sample a small number of graphlets and capture nodes’ graphlet features. Theoretical analysis is conducted to demonstrate that the approximate method is able to maintain the quality of graphlet estimation while reducing its expensive cost. Second, we propose a separate trust region algorithm for pseudo supervised federated graph matching while maintaining the privacy constraints. In order to avoid the expensive cost of the second-order Hessian computation in the trust region algorithm, we propose two weak quasi-Newton conditions to construct a positive definite scalar matrix as the Hessian approximation with only first-order gradients. We theoretically derive the error introduced by the separate trust region due to the Hessian approximation and conduct the convergence analysis of the approximation method.
|
https://proceedings.mlr.press/v235/zhou24w.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24w/zhou24w.pdf
|
https://openreview.net/forum?id=NQ6KDfSDFK
|
Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters
|
https://proceedings.mlr.press/v235/zhou24w.html
|
Yuhang Zhou, Zihua Zhao, Siyuan Du, Haolin Li, Jiangchao Yao, Ya Zhang, Yanfeng Wang
|
https://proceedings.mlr.press/v235/zhou24w.html
|
ICML 2024
|
Training a unified model to take multiple targets into account is a trend towards artificial general intelligence. However, how to efficiently mitigate the training conflicts among heterogeneous data collected from different domains or tasks remains under-explored. In this study, we explore leveraging Mixture of Low-rank Adapters (MoLA) to mitigate conflicts in heterogeneous data training, which requires jointly training the multiple low-rank adapters and their shared backbone. Specifically, we introduce two variants of MoLA, namely, MoLA-Grad and MoLA-Router, to respectively handle the target-aware and target-agnostic scenarios during inference. The former uses task identifiers to assign personalized low-rank adapters to each task, disentangling task-specific knowledge towards their adapters, thereby mitigating heterogeneity conflicts. The latter uses a novel Task-wise Decorrelation (TwD) loss to intervene the router to learn oriented weight combinations of adapters to homogeneous tasks, achieving similar effects. We conduct comprehensive experiments to verify the superiority of MoLA over previous state-of-the-art methods and present an in-depth analysis of its working mechanism. Source code is available at: https://github.com/MediaBrain-SJTU/MoLA
|
https://proceedings.mlr.press/v235/zhou24x.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhou24x/zhou24x.pdf
|
https://openreview.net/forum?id=QhqQJqe0Wq
|
Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation
|
https://proceedings.mlr.press/v235/zhou24x.html
|
Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang
|
https://proceedings.mlr.press/v235/zhou24x.html
|
ICML 2024
|
We introduce Score identity Distillation (SiD), an innovative data-free method that distills the generative capabilities of pretrained diffusion models into a single-step generator. SiD not only facilitates an exponentially fast reduction in Fréchet inception distance (FID) during distillation but also approaches or even exceeds the FID performance of the original teacher diffusion models. By reformulating forward diffusion processes as semi-implicit distributions, we leverage three score-related identities to create an innovative loss mechanism. This mechanism achieves rapid FID reduction by training the generator using its own synthesized images, eliminating the need for real data or reverse-diffusion-based generation, all accomplished within significantly shortened generation time. Upon evaluation across four benchmark datasets, the SiD algorithm demonstrates high iteration efficiency during distillation and surpasses competing distillation approaches, whether they are one-step or few-step, data-free, or dependent on training data, in terms of generation quality. This achievement not only redefines the benchmarks for efficiency and effectiveness in diffusion distillation but also in the broader field of diffusion-based generation. The PyTorch implementation is available at https://github.com/mingyuanzhou/SiD.
|
https://proceedings.mlr.press/v235/zhu24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24a/zhu24a.pdf
|
https://openreview.net/forum?id=5ToHnqYxjB
|
Iterative Search Attribution for Deep Neural Networks
|
https://proceedings.mlr.press/v235/zhu24a.html
|
Zhiyu Zhu, Huaming Chen, Xinyi Wang, Jiayu Zhang, Zhibo Jin, Jason Xue, Jun Shen
|
https://proceedings.mlr.press/v235/zhu24a.html
|
ICML 2024
|
Deep neural networks (DNNs) have achieved state-of-the-art performance across various applications. However, ensuring the reliability and trustworthiness of DNNs requires enhanced interpretability of model inputs and outputs. As an effective means of Explainable Artificial Intelligence (XAI) research, the interpretability of existing attribution algorithms varies depending on the choice of reference point, the quality of adversarial samples, or the applicability of gradient constraints in specific tasks. To thoroughly explore the attribution integration paths, in this paper, inspired by the iterative generation of high-quality samples in the diffusion model, we propose an Iterative Search Attribution (ISA) method. To enhance attribution accuracy, ISA distinguishes the importance of samples during gradient ascent and descent, while clipping the relatively unimportant features in the model. Specifically, we introduce a scale parameter during the iterative process to ensure the features in the next iteration are always more significant than those in the current iteration. Comprehensive experimental results show that our method has superior interpretability in image recognition tasks compared with state-of-the-art baselines. Our code is available at: https://github.com/LMBTough/ISA
|
https://proceedings.mlr.press/v235/zhu24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24b/zhu24b.pdf
|
https://openreview.net/forum?id=ofXRBPtol3
|
Generative Active Learning for Long-tailed Instance Segmentation
|
https://proceedings.mlr.press/v235/zhu24b.html
|
Muzhi Zhu, Chengxiang Fan, Hao Chen, Yang Liu, Weian Mao, Xiaogang Xu, Chunhua Shen
|
https://proceedings.mlr.press/v235/zhu24b.html
|
ICML 2024
|
Recently, large-scale language-image generative models have gained widespread attention and many works have utilized generated data from these models to further enhance the performance of perception tasks. However, not all generated data can positively impact downstream models, and these methods do not thoroughly explore how to better select and utilize generated data. On the other hand, there is still a lack of research oriented towards active learning on generated data. In this paper, we explore how to perform active learning specifically for generated data in the long-tailed instance segmentation task. Subsequently, we propose BSGAL, a new algorithm that estimates the contribution of the current batch-generated data based on gradient cache. BSGAL is meticulously designed to cater for unlimited generated data and complex downstream segmentation tasks. BSGAL outperforms the baseline approach and effectively improves the performance of long-tailed segmentation.
|
https://proceedings.mlr.press/v235/zhu24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24c/zhu24c.pdf
|
https://openreview.net/forum?id=txRZBD8tBV
|
Asymmetry in Low-Rank Adapters of Foundation Models
|
https://proceedings.mlr.press/v235/zhu24c.html
|
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez De Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon
|
https://proceedings.mlr.press/v235/zhu24c.html
|
ICML 2024
|
Parameter-efficient fine-tuning optimizes large, pre-trained foundation models by updating a subset of parameters; in this class, Low-Rank Adaptation (LoRA) is particularly effective. Inspired by an effort to investigate the different roles of LoRA matrices during fine-tuning, this paper characterizes and leverages unexpected asymmetry in the importance of low-rank adapter matrices. Specifically, when updating the parameter matrices of a neural network by adding a product $BA$, we observe that the $B$ and $A$ matrices have distinct functions: $A$ extracts features from the input, while $B$ uses these features to create the desired output. Based on this observation, we demonstrate that fine-tuning $B$ is inherently more effective than fine-tuning $A$, and that a random untrained $A$ should perform nearly as well as a fine-tuned one. Using an information-theoretic lens, we also bound the generalization of low-rank adapters, showing that the parameter savings of exclusively training $B$ improves the bound. We support our conclusions with experiments on RoBERTa, BART-Large, LLaMA-2, and ViTs. The code and data are available at https://github.com/Jiacheng-Zhu-AIML/AsymmetryLoRA
|
https://proceedings.mlr.press/v235/zhu24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24d/zhu24d.pdf
|
https://openreview.net/forum?id=aXD94eATtT
|
Improving Open-Ended Text Generation via Adaptive Decoding
|
https://proceedings.mlr.press/v235/zhu24d.html
|
Wenhong Zhu, Hongkun Hao, Zhiwei He, Yiming Ai, Rui Wang
|
https://proceedings.mlr.press/v235/zhu24d.html
|
ICML 2024
|
Current language models decode text token by token according to probabilistic distribution, and determining the appropriate candidates for the next token is crucial to ensure generation quality. This study introduces adaptive decoding, a mechanism that dynamically empowers language models to ascertain a sensible candidate set during generation. Specifically, we introduce an entropy-based metric called confidence and conceptualize determining the optimal candidate set as a confidence-increasing process. The rationality of including a token in the candidate set is assessed by leveraging the increment of confidence. Experimental results reveal that our method balances diversity and coherence well. The human evaluation shows that our method can generate human-preferred text. Additionally, our method can potentially improve the reasoning ability of language models.
|
https://proceedings.mlr.press/v235/zhu24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24e/zhu24e.pdf
|
https://openreview.net/forum?id=WXg6MJo1FH
|
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
|
https://proceedings.mlr.press/v235/zhu24e.html
|
Banghua Zhu, Michael Jordan, Jiantao Jiao
|
https://proceedings.mlr.press/v235/zhu24e.html
|
ICML 2024
|
Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique that aligns language models closely with human-centric values. The initial phase of RLHF involves learning human values using a reward model from ranking data. It is observed that the performance of the reward model degrades after one epoch of training, and optimizing too much against the learned reward model eventually hinders the true objective. This paper analyzes potential reasons behind the issues, and designs an improved reward learning algorithm termed ’Iterative Data Smoothing’ (IDS). The core idea is that during each training epoch, we not only update the model with the data, but also update the data using the model, replacing hard labels with soft labels. Our empirical findings highlight the superior performance of this approach over the traditional methods.
|
https://proceedings.mlr.press/v235/zhu24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24f/zhu24f.pdf
|
https://openreview.net/forum?id=YbHCqn4qF4
|
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
|
https://proceedings.mlr.press/v235/zhu24f.html
|
Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang
|
https://proceedings.mlr.press/v235/zhu24f.html
|
ICML 2024
|
Recently, state space models (SSMs) with efficient hardware-aware designs, i.e., the Mamba deep learning model, have shown great potential for long sequence modeling. Meanwhile, building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8x faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248x1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images and it has great potential to be the next-generation backbone for vision foundation models.
|
https://proceedings.mlr.press/v235/zhu24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24g/zhu24g.pdf
|
https://openreview.net/forum?id=2ulUrcOZ64
|
Switched Flow Matching: Eliminating Singularities via Switching ODEs
|
https://proceedings.mlr.press/v235/zhu24g.html
|
Qunxi Zhu, Wei Lin
|
https://proceedings.mlr.press/v235/zhu24g.html
|
ICML 2024
|
Continuous-time generative models, such as Flow Matching (FM), construct probability paths to transport between one distribution and another through the simulation-free learning of the neural ordinary differential equations (ODEs). During inference, however, the learned model often requires multiple neural network evaluations to accurately integrate the flow, resulting in a slow sampling speed. We attribute this to the inherent (joint) heterogeneity of source and/or target distributions, namely the singularity problem, which poses challenges for training the neural ODEs effectively. To address this issue, we propose a more general framework, termed Switched FM (SFM), that eliminates singularities via switching ODEs, as opposed to using a uniform ODE in FM. Importantly, we theoretically show that FM cannot transport between two simple distributions due to the existence and uniqueness of initial value problems of ODEs, while these limitations can be well tackled by SFM. From an orthogonal perspective, our framework can seamlessly integrate with the existing advanced techniques, such as minibatch optimal transport, to further enhance the straightness of the flow, yielding a more efficient sampling process with reduced costs. We demonstrate the effectiveness of the newly proposed SFM through several numerical examples.
|
https://proceedings.mlr.press/v235/zhu24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24h/zhu24h.pdf
|
https://openreview.net/forum?id=jJmGl01S4l
|
Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning
|
https://proceedings.mlr.press/v235/zhu24h.html
|
Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, Mikhail Belkin
|
https://proceedings.mlr.press/v235/zhu24h.html
|
ICML 2024
|
In this paper, we first present an explanation regarding the common occurrence of spikes in the training loss when neural networks are trained with stochastic gradient descent (SGD). We provide evidence that the spikes in the training loss of SGD are "catapults", an optimization phenomenon originally observed in GD with large learning rates in Lewkowycz et al. (2020). We empirically show that these catapults occur in a low-dimensional subspace spanned by the top eigenvectors of the tangent kernel, for both GD and SGD. Second, we posit an explanation for how catapults lead to better generalization by demonstrating that catapults increase feature learning by increasing alignment with the Average Gradient Outer Product (AGOP) of the true predictor. Furthermore, we demonstrate that a smaller batch size in SGD induces a larger number of catapults, thereby improving AGOP alignment and test performance.
|
https://proceedings.mlr.press/v235/zhu24i.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24i/zhu24i.pdf
|
https://openreview.net/forum?id=C0sGIO2MZN
|
Toward Availability Attacks in 3D Point Clouds
|
https://proceedings.mlr.press/v235/zhu24i.html
|
Yifan Zhu, Yibo Miao, Yinpeng Dong, Xiao-Shan Gao
|
https://proceedings.mlr.press/v235/zhu24i.html
|
ICML 2024
|
Despite the great progress of 3D vision, data privacy and security issues in 3D deep learning are not explored systematically. In the domain of 2D images, many availability attacks have been proposed to prevent data from being illicitly learned by unauthorized deep models. However, unlike images represented on a fixed dimensional grid, point clouds are characterized as unordered and unstructured sets, posing a significant challenge in designing an effective availability attack for 3D deep learning. In this paper, we theoretically show that extending 2D availability attacks directly to 3D point clouds under distance regularization is susceptible to the degeneracy, rendering the generated poisons weaker or even ineffective. This is because in bi-level optimization, introducing regularization term can result in update directions out of control. To address this issue, we propose a novel Feature Collision Error-Minimization (FC-EM) method, which creates additional shortcuts in the feature space, inducing different update directions to prevent the degeneracy of bi-level optimization. Moreover, we provide a theoretical analysis that demonstrates the effectiveness of the FC-EM attack. Extensive experiments on typical point cloud datasets, 3D intracranial aneurysm medical dataset, and 3D face dataset verify the superiority and practicality of our approach.
|
https://proceedings.mlr.press/v235/zhu24j.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24j/zhu24j.pdf
|
https://openreview.net/forum?id=1YsQI04KaN
|
Antibody Design Using a Score-based Diffusion Model Guided by Evolutionary, Physical and Geometric Constraints
|
https://proceedings.mlr.press/v235/zhu24j.html
|
Tian Zhu, Milong Ren, Haicang Zhang
|
https://proceedings.mlr.press/v235/zhu24j.html
|
ICML 2024
|
Antibodies are central proteins in adaptive immune responses, responsible for protecting against viruses and other pathogens. Rational antibody design has proven effective in the diagnosis and treatment of various diseases like cancers and virus infections. While recent diffusion-based generative models show promise in designing antigen-specific antibodies, the primary challenge lies in the scarcity of labeled antibody-antigen complex data and binding affinity data. We present AbX, a new score-based diffusion generative model guided by evolutionary, physical, and geometric constraints for antibody design. These constraints serve to narrow the search space and provide priors for plausible antibody sequences and structures. Specifically, we leverage a pre-trained protein language model as priors for evolutionary plausible antibodies and introduce additional training objectives for geometric and physical constraints like van der Waals forces. Furthermore, as far as we know, AbX is the first score-based diffusion model with continuous timesteps for antibody design, jointly modeling the discrete sequence space and the $\mathrm{SE}(3)$ structure space. Evaluated on two independent testing sets, we show that AbX outperforms other published methods, achieving higher accuracy in sequence and structure generation and enhanced antibody-antigen binding affinity. Ablation studies highlight the clear contributions of the introduced constraints to antibody design.
|
https://proceedings.mlr.press/v235/zhu24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24k/zhu24k.pdf
|
https://openreview.net/forum?id=Mz1lcJPymz
|
Online Learning in Betting Markets: Profit versus Prediction
|
https://proceedings.mlr.press/v235/zhu24k.html
|
Haiqing Zhu, Alexander Soen, Yun Kuen Cheung, Lexing Xie
|
https://proceedings.mlr.press/v235/zhu24k.html
|
ICML 2024
|
We examine two types of binary betting markets, whose primary goal is for profit (such as sports gambling) or to gain information (such as prediction markets). We articulate the interplay between belief and price-setting to analyse both types of markets, and show that the goals of maximising bookmaker profit and eliciting information are fundamentally incompatible. A key insight is that profit hinges on the deviation between (the distribution of) bettor and true beliefs, and that heavier tails in bettor belief distribution implies higher profit. Our algorithmic contribution is to introduce online learning methods for price-setting. Traditionally, bookmakers update their prices rather infrequently; we present two algorithms that guide price updates upon seeing each bet, assuming very little of bettor belief distributions. The online pricing algorithm achieves stochastic regret of $\mathcal{O}(\sqrt{T})$ against the worst local maximum, or $\mathcal{O}(\sqrt{T \log T})$ with high probability against the global maximum under fair odds. More broadly, the inherent tradeoff between profit and information-seeking in binary betting may inspire new understandings of large-scale multi-agent behaviour.
|
https://proceedings.mlr.press/v235/zhu24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24l/zhu24l.pdf
|
https://openreview.net/forum?id=piujJIF3zs
|
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
|
https://proceedings.mlr.press/v235/zhu24l.html
|
Didi Zhu, Zhongyisun Sun, Zexi Li, Tao Shen, Ke Yan, Shouhong Ding, Chao Wu, Kun Kuang
|
https://proceedings.mlr.press/v235/zhu24l.html
|
ICML 2024
|
Catastrophic forgetting emerges as a critical challenge when fine-tuning multi-modal large language models (MLLMs), where improving performance on unseen tasks often leads to a significant performance drop on the original tasks. This paper presents a comprehensive analysis of catastrophic forgetting in MLLMs and introduces a post-training adjustment method called Model Tailor. Our method primarily preserves the pre-trained parameters while replacing a small number ($\leq$ 10%) of fine-tuned parameters, maintaining $\sim$ 99% effectiveness on original tasks versus pre-training, and achieving $\sim$ 97% on new tasks compared to standard fine-tuning. Specifically, we derive a sparse mask to identify the model patch, based on a fusion strategy that integrates salience and sensitivity analysis. Subsequently, a compensation mechanism is introduced to decorate the patch, enhancing the model’s performance on both target and original tasks. Additionally, our method is adaptable to multi-task scenarios. Through extensive experiments on InstructBLIP and LLaVA-1.5 in both image captioning and visual question answering tasks, our approach demonstrates significant task adaptability while preserving inherent pre-trained capabilities.
|
https://proceedings.mlr.press/v235/zhu24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24m/zhu24m.pdf
|
https://openreview.net/forum?id=DwTgy1hXXo
|
Dynamic Evaluation of Large Language Models by Meta Probing Agents
|
https://proceedings.mlr.press/v235/zhu24m.html
|
Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie
|
https://proceedings.mlr.press/v235/zhu24m.html
|
ICML 2024
|
Evaluation of large language models (LLMs) has raised great concerns in the community due to the issue of data contamination. Existing work designed evaluation protocols using well-defined algorithms for specific tasks, which cannot be easily extended to diverse scenarios. Moreover, current evaluation benchmarks can only provide the overall benchmark results and cannot support a fine-grained and multifaceted analysis of LLMs’ abilities. In this paper, we propose meta probing agents (MPA), a general dynamic evaluation protocol inspired by psychometrics to evaluate LLMs. MPA designs the probing and judging agents to automatically transform an original evaluation problem into a new one following psychometric theory on three basic cognitive abilities: language understanding, problem solving, and domain knowledge. These basic abilities are also dynamically configurable, allowing multifaceted analysis. We conducted extensive evaluations using MPA and found that most LLMs achieve poorer performance, indicating room for improvement. Our multifaceted analysis demonstrated the strong correlation between the basic abilities and an implicit Matthew effect on model size, i.e., larger models possess stronger correlations of the abilities. MPA can also be used as a data augmentation approach to enhance LLMs. Code is available at: https://github.com/microsoft/promptbench.
|
https://proceedings.mlr.press/v235/zhu24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24n/zhu24n.pdf
|
https://openreview.net/forum?id=xFDJBzPhci
|
CRoFT: Robust Fine-Tuning with Concurrent Optimization for OOD Generalization and Open-Set OOD Detection
|
https://proceedings.mlr.press/v235/zhu24n.html
|
Lin Zhu, Yifeng Yang, Qinying Gu, Xinbing Wang, Chenghu Zhou, Nanyang Ye
|
https://proceedings.mlr.press/v235/zhu24n.html
|
ICML 2024
|
Recent vision-language pre-trained models (VL-PTMs) have shown remarkable success in open-vocabulary tasks. However, downstream use cases often involve further fine-tuning of VL-PTMs, which may distort their general knowledge and impair their ability to handle distribution shifts. In real-world scenarios, machine learning systems inevitably encounter both covariate shifts (e.g., changes in image styles) and semantic shifts (e.g., test-time unseen classes). This highlights the importance of enhancing out-of-distribution (OOD) generalization on covariate shifts and simultaneously detecting semantic-shifted unseen classes. Thus a critical but underexplored question arises: How to improve VL-PTMs’ generalization ability to closed-set OOD data, while effectively detecting open-set unseen classes during fine-tuning? In this paper, we propose a novel objective function of OOD detection that also serves to improve OOD generalization. We show that minimizing the gradient magnitude of energy scores on training data leads to domain-consistent Hessians of classification loss, a strong indicator for OOD generalization revealed by theoretical analysis. Based on this finding, we have developed a unified fine-tuning framework that allows for concurrent optimization of both tasks. Extensive experiments have demonstrated the superiority of our method. The code is available at https://github.com/LinLLLL/CRoFT.
|
https://proceedings.mlr.press/v235/zhu24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24o/zhu24o.pdf
|
https://openreview.net/forum?id=asJTE8EBjg
|
Language Models Represent Beliefs of Self and Others
|
https://proceedings.mlr.press/v235/zhu24o.html
|
Wentao Zhu, Zhining Zhang, Yizhou Wang
|
https://proceedings.mlr.press/v235/zhu24o.html
|
ICML 2024
|
Understanding and attributing mental states, known as Theory of Mind (ToM), emerges as a fundamental capability for human social reasoning. While Large Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms underlying these capabilities remain elusive. In this study, we discover that it is possible to linearly decode the belief status from the perspectives of various agents through neural activations of language models, indicating the existence of internal representations of self and others’ beliefs. By manipulating these representations, we observe dramatic changes in the models’ ToM performance, underscoring their pivotal role in the social reasoning process. Additionally, our findings extend to diverse social reasoning tasks that involve different causal inference patterns, suggesting the potential generalizability of these representations.
|
https://proceedings.mlr.press/v235/zhuang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhuang24a/zhuang24a.pdf
|
https://openreview.net/forum?id=H5FDHzrWe2
|
Stealthy Imitation: Reward-guided Environment-free Policy Stealing
|
https://proceedings.mlr.press/v235/zhuang24a.html
|
Zhixiong Zhuang, Maria-Irina Nicolae, Mario Fritz
|
https://proceedings.mlr.press/v235/zhuang24a.html
|
ICML 2024
|
Deep reinforcement learning policies, which are integral to modern control systems, represent valuable intellectual property. The development of these policies demands considerable resources, such as domain expertise, simulation fidelity, and real-world validation. These policies are potentially vulnerable to model stealing attacks, which aim to replicate their functionality using only black-box access. In this paper, we propose Stealthy Imitation, the first attack designed to steal policies without access to the environment or knowledge of the input range. This setup has not been considered by previous model stealing methods. Lacking access to the victim’s input state distribution, Stealthy Imitation fits a reward model that allows approximating it. We show that the victim policy is harder to imitate when the distribution of the attack queries matches that of the victim. We evaluate our approach across diverse, high-dimensional control tasks and consistently outperform prior data-free approaches adapted for policy stealing. Lastly, we propose a countermeasure that significantly diminishes the effectiveness of the attack.
|
https://proceedings.mlr.press/v235/zhuang24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhuang24b/zhuang24b.pdf
|
https://openreview.net/forum?id=mBc8Pestd5
|
Reinformer: Max-Return Sequence Modeling for Offline RL
|
https://proceedings.mlr.press/v235/zhuang24b.html
|
Zifeng Zhuang, Dengyun Peng, Jinxin Liu, Ziqi Zhang, Donglin Wang
|
https://proceedings.mlr.press/v235/zhuang24b.html
|
ICML 2024
|
As a data-driven paradigm, offline reinforcement learning (RL) has been formulated as sequence modeling that conditions on the hindsight information including returns, goal or future trajectory. Although promising, this supervised paradigm overlooks the core objective of RL that maximizes the return. This oversight directly leads to a lack of trajectory stitching capability, which affects the sequence model learning from sub-optimal data. In this work, we introduce the concept of max-return sequence modeling which integrates the goal of maximizing returns into existing sequence models. We propose Reinforced Transformer (Reinformer), indicating the sequence model is reinforced by the RL objective. Reinformer additionally incorporates the objective of maximizing returns in the training phase, aiming to predict the maximum future return within the distribution. During inference, this in-distribution maximum return will guide the selection of optimal actions. Empirically, Reinformer is competitive with classical RL methods on the D4RL benchmark and outperforms state-of-the-art sequence models, particularly in trajectory stitching ability. Code is public at https://github.com/Dragon-Zhuang/Reinformer.
|
https://proceedings.mlr.press/v235/zhuang24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhuang24c/zhuang24c.pdf
|
https://openreview.net/forum?id=ATRnM8PyQX
|
COALA: A Practical and Vision-Centric Federated Learning Platform
|
https://proceedings.mlr.press/v235/zhuang24c.html
|
Weiming Zhuang, Jian Xu, Chen Chen, Jingtao Li, Lingjuan Lyu
|
https://proceedings.mlr.press/v235/zhuang24c.html
|
ICML 2024
|
We present COALA, a vision-centric Federated Learning (FL) platform, and a suite of benchmarks for practical FL scenarios, which we categorize as task, data, and model levels. At the task level, COALA extends support from simple classification to 15 computer vision tasks, including object detection, segmentation, pose estimation, and more. It also facilitates federated multiple-task learning, allowing clients to train on multiple tasks simultaneously. At the data level, COALA goes beyond supervised FL to benchmark both semi-supervised FL and unsupervised FL. It also benchmarks feature distribution shifts other than commonly considered label distribution shifts. In addition to dealing with static data, it supports federated continual learning for continuously changing data in real-world scenarios. At the model level, COALA benchmarks FL with split models and different models in different clients. The COALA platform offers three degrees of customization for these practical FL scenarios, including configuration customization, components customization, and workflow customization. We conduct systematic benchmarking experiments for the practical FL scenarios and highlight potential opportunities for further advancements in FL.
|
https://proceedings.mlr.press/v235/zhuge24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhuge24a/zhuge24a.pdf
|
https://openreview.net/forum?id=uTC9AFXIhg
|
GPTSwarm: Language Agents as Optimizable Graphs
|
https://proceedings.mlr.press/v235/zhuge24a.html
|
Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, Jürgen Schmidhuber
|
https://proceedings.mlr.press/v235/zhuge24a.html
|
ICML 2024
|
Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs), yielding many disparate code bases. We unify these approaches by describing LLM-based agents as computational graphs. The nodes implement functions to process multimodal data or query LLMs, and the edges describe the information flow between operations. Graphs can be recursively combined into larger composite graphs representing hierarchies of inter-agent collaboration (where edges connect operations of different agents). Our novel automatic graph optimizers (1) refine node-level LLM prompts (node optimization) and (2) improve agent orchestration by changing graph connectivity (edge optimization). Experiments demonstrate that our framework can be used to efficiently develop, integrate, and automatically improve various LLM agents. Our code is public.
|
https://proceedings.mlr.press/v235/zhuge24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhuge24b/zhuge24b.pdf
|
https://openreview.net/forum?id=yL6hljtjW4
|
Towards Efficient Spiking Transformer: a Token Sparsification Framework for Training and Inference Acceleration
|
https://proceedings.mlr.press/v235/zhuge24b.html
|
Zhengyang Zhuge, Peisong Wang, Xingting Yao, Jian Cheng
|
https://proceedings.mlr.press/v235/zhuge24b.html
|
ICML 2024
|
Nowadays, Spiking Transformers have exhibited remarkable performance close to Artificial Neural Networks (ANNs), while enjoying the inherent energy-efficiency of Spiking Neural Networks (SNNs). However, training Spiking Transformers on GPUs is considerably more time-consuming compared to their ANN counterparts, despite the energy-efficient inference through neuromorphic computation. In this paper, we investigate the token sparsification technique for efficient training of Spiking Transformers and find conventional methods suffer from noticeable performance degradation. We analyze the issue and propose our Sparsification with Timestep-wise Anchor Token and dual Alignments (STATA). Timestep-wise Anchor Token enables precise identification of important tokens across timesteps based on standardized criteria. Additionally, dual Alignments incorporate both Intra and Inter Alignment of the attention maps, fostering the learning of inferior attention. Extensive experiments show the effectiveness of STATA thoroughly, which demonstrates up to $\sim$1.53$\times$ training speedup and $\sim$48% energy reduction with comparable performance on various datasets and architectures.
|
https://proceedings.mlr.press/v235/ziemann24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ziemann24a/ziemann24a.pdf
|
https://openreview.net/forum?id=DHtF8Y6PqS
|
Sharp Rates in Dependent Learning Theory: Avoiding Sample Size Deflation for the Square Loss
|
https://proceedings.mlr.press/v235/ziemann24a.html
|
Ingvar Ziemann, Stephen Tu, George J. Pappas, Nikolai Matni
|
https://proceedings.mlr.press/v235/ziemann24a.html
|
ICML 2024
|
In this work, we study statistical learning with dependent data and square loss in a hypothesis class with tail decay in Orlicz space: $\mathscr{F}\subset L_{\Psi_p}$. Our inquiry is motivated by the search for a sharp noise interaction term, or variance proxy, in learning with dependent (e.g. $\beta$-mixing) data. Typical non-asymptotic results exhibit variance proxies that are deflated multiplicatively in the mixing time of the underlying covariates process. We show that whenever the topologies of $L^2$ and $\Psi_p$ are comparable on our hypothesis class $\mathscr{F}$, the empirical risk minimizer achieves a rate that only depends on the complexity of the class and second order statistics in its leading term. We refer to this as a near mixing-free rate, since direct dependence on mixing is relegated to an additive higher order term. Our approach, reliant on mixed tail generic chaining, allows us to obtain sharp, instance-optimal rates. Examples satisfying our framework include sub-Gaussian linear regression and bounded smoothness classes.
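The "near mixing-free" claim above can be written schematically (our notation, not the paper's exact statement): the excess risk of the empirical risk minimizer $\hat f$ splits into a mixing-free leading term and an additive higher-order term,

$$
\mathbb{E}\,\big\|\hat f - f_\star\big\|_{L^2}^2 \;\lesssim\; \underbrace{\frac{\mathrm{comp}(\mathscr{F})}{n}}_{\text{leading, mixing-free}} \;+\; \underbrace{\mathrm{h.o.t.}\big(n,\,\tau_{\mathrm{mix}}\big)}_{\text{additive, higher order}},
$$

where $\mathrm{comp}(\mathscr{F})$ captures the class complexity and second-order statistics, and the mixing time $\tau_{\mathrm{mix}}$ enters only the higher-order term rather than deflating the leading rate multiplicatively.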
|
https://proceedings.mlr.press/v235/zimerman24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/zimerman24a/zimerman24a.pdf
|
https://openreview.net/forum?id=9HPoJ6ulgV
|
Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption
|
https://proceedings.mlr.press/v235/zimerman24a.html
|
Itamar Zimerman, Moran Baruch, Nir Drucker, Gilad Ezov, Omri Soceanu, Lior Wolf
|
https://proceedings.mlr.press/v235/zimerman24a.html
|
ICML 2024
|
Designing privacy-preserving DL solutions is a major challenge within the AI community. Homomorphic Encryption (HE) has emerged as one of the most promising approaches in this realm, enabling the decoupling of knowledge between a model owner and a data owner. Despite extensive research and application of this technology, primarily in CNNs, applying HE on transformer models has been challenging because of the difficulties in converting these models into a polynomial form. We break new ground by introducing the first polynomial transformer, providing the first demonstration of secure inference over HE with full transformers. This includes a transformer architecture tailored for HE, alongside a novel method for converting operators to their polynomial equivalent. This innovation enables us to perform secure inference on LMs and ViTs with several datasets and tasks. Our techniques yield results comparable to traditional models, bridging the performance gap with transformers of similar scale and underscoring the viability of HE for state-of-the-art applications. Finally, we assess the stability of our models and conduct a series of ablations to quantify the contribution of each model component. Our code is publicly available.
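The core difficulty named above, replacing non-polynomial operators with polynomial surrogates, can be illustrated with a least-squares polynomial fit to GELU over a bounded input range (a generic sketch; the paper's actual conversion method and approximation targets may differ):

```python
import math
import numpy as np

def gelu(x):
    # exact GELU via the Gaussian CDF
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# Fit a degree-8 polynomial surrogate over the interval the encrypted
# inputs are assumed to live in; HE schemes evaluate only +, * (hence
# polynomials), so the non-polynomial activation must be replaced.
xs = np.linspace(-4.0, 4.0, 2001)
ys = np.array([gelu(x) for x in xs])
coeffs = np.polyfit(xs, ys, deg=8)   # least-squares coefficients
poly_gelu = np.poly1d(coeffs)        # polynomial usable under HE

max_err = float(np.max(np.abs(poly_gelu(xs) - ys)))
```

The degree trades accuracy against multiplicative depth of the encrypted circuit, which is the central design tension in HE-friendly architectures.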
|