abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v235/huang24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24l/huang24l.pdf
|
https://openreview.net/forum?id=QLOvxGwbIM
|
Bayesian Power Steering: An Effective Approach for Domain Adaptation of Diffusion Models
|
https://proceedings.mlr.press/v235/huang24l.html
|
Ding Huang, Ting Li, Jian Huang
|
https://proceedings.mlr.press/v235/huang24l.html
|
ICML 2024
|
We propose a Bayesian framework for fine-tuning large diffusion models with a novel network structure called Bayesian Power Steering (BPS). We clarify the meaning behind adaptation from a large probability space to a small probability space and explore the task of fine-tuning pre-trained models using learnable modules from a Bayesian perspective. BPS extracts task-specific knowledge from a pre-trained model’s learned prior distribution. It efficiently leverages large diffusion models, differentially intervening on different hidden features with a head-heavy and foot-light configuration. Experiments highlight the superiority of BPS over contemporary methods across a range of tasks, even with a limited amount of data. Notably, BPS attains an FID score of 10.49 under the sketch condition on the COCO17 dataset.
|
https://proceedings.mlr.press/v235/huang24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24m/huang24m.pdf
|
https://openreview.net/forum?id=7RHFdkAkVY
|
AttNS: Attention-Inspired Numerical Solving For Limited Data Scenarios
|
https://proceedings.mlr.press/v235/huang24m.html
|
Zhongzhan Huang, Mingfu Liang, Shanshan Zhong, Liang Lin
|
https://proceedings.mlr.press/v235/huang24m.html
|
ICML 2024
|
We propose the attention-inspired numerical solver (AttNS), a concise method that addresses the generalization and robustness issues faced by AI-hybrid numerical solvers when solving differential equations with limited data. AttNS is inspired by the effectiveness of attention modules in Residual Neural Networks (ResNet) in enhancing model generalization and robustness for conventional deep learning tasks. Drawing from the dynamical system perspective of ResNet, we seamlessly incorporate attention mechanisms into the design of numerical methods tailored to the characteristics of solving differential equations. Our results on benchmarks, ranging from high-dimensional problems to chaotic systems, show that AttNS consistently enhances various numerical solvers without any intricate model crafting. Finally, we analyze AttNS experimentally and theoretically, demonstrating its ability to achieve strong generalization and robustness while ensuring the convergence of the solver. This includes requiring less data than other advanced methods to achieve comparable generalization errors and better preventing numerical explosion when solving differential equations.
|
https://proceedings.mlr.press/v235/huang24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24n/huang24n.pdf
|
https://openreview.net/forum?id=yY6N89IlHa
|
CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks
|
https://proceedings.mlr.press/v235/huang24n.html
|
Yulong Huang, Xiaopeng Lin, Hongwei Ren, Haotian Fu, Yue Zhou, Zunchang Liu, Biao Pan, Bojun Cheng
|
https://proceedings.mlr.press/v235/huang24n.html
|
ICML 2024
|
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Compared to conventional deep Artificial Neural Networks (ANNs), SNNs exhibit superior efficiency and capability to process temporal information. However, it remains a challenge to train SNNs due to their non-differentiable spiking mechanism. The surrogate gradient method is commonly used to train SNNs, but it often comes with an accuracy disadvantage over ANN counterparts. We link the degraded accuracy to the vanishing of gradients along the temporal dimension through an analytical and experimental study of the training process of Leaky Integrate-and-Fire (LIF) Neuron-based SNNs. Moreover, we propose the Complementary Leaky Integrate-and-Fire (CLIF) Neuron. CLIF creates extra paths to facilitate backpropagation in computing the temporal gradient while keeping binary output. CLIF is hyperparameter-free and features broad applicability. Extensive experiments on a variety of datasets demonstrate CLIF’s clear performance advantage over other neuron models. Furthermore, CLIF’s performance even slightly surpasses that of ANNs with identical network structure and training conditions. The code is available at https://github.com/HuuYuLong/Complementary-LIF.
|
https://proceedings.mlr.press/v235/huang24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24o/huang24o.pdf
|
https://openreview.net/forum?id=LwOfVWgEzS
|
Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning
|
https://proceedings.mlr.press/v235/huang24o.html
|
Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu
|
https://proceedings.mlr.press/v235/huang24o.html
|
ICML 2024
|
Although pre-trained models such as Contrastive Language-Image Pre-Training (CLIP) show impressive generalization results, their robustness is still limited under Out-of-Distribution (OOD) scenarios. Instead of undesirably leveraging human annotation as commonly done, it is possible to leverage the visual understanding power of Multi-modal Large Language Models (MLLMs). However, MLLMs struggle with vision problems due to task incompatibility, thus hindering their effectiveness. In this paper, we propose to effectively leverage MLLMs via Machine Vision Therapy which aims to rectify erroneous predictions of specific vision models. By supervising vision models using MLLM predictions, visual robustness can be boosted in a nearly unsupervised manner. Moreover, we propose a Denoising In-Context Learning (DICL) strategy to solve the incompatibility issue. Concretely, by examining the noise probability of each example through a transition matrix, we construct an instruction containing a correct exemplar and a probable erroneous one, which enables MLLMs to detect and rectify the incorrect predictions of vision models. Under mild assumptions, we theoretically show that our DICL method is guaranteed to find the ground truth. Through extensive experiments on various OOD datasets, our method demonstrates powerful capabilities for enhancing visual robustness under many OOD scenarios.
|
https://proceedings.mlr.press/v235/huang24p.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24p/huang24p.pdf
|
https://openreview.net/forum?id=disVlUOH4b
|
Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning
|
https://proceedings.mlr.press/v235/huang24p.html
|
Yizhe Huang, Anji Liu, Fanqi Kong, Yaodong Yang, Song-Chun Zhu, Xue Feng
|
https://proceedings.mlr.press/v235/huang24p.html
|
ICML 2024
|
Despite the recent successes of multi-agent reinforcement learning (MARL) algorithms, efficiently adapting to co-players in mixed-motive environments remains a significant challenge. One feasible approach is to hierarchically model co-players’ behavior based on inferring their characteristics. However, these methods often encounter difficulties in efficient reasoning and utilization of inferred information. To address these issues, we propose Hierarchical Opponent modeling and Planning (HOP), a novel multi-agent decision-making algorithm that enables few-shot adaptation to unseen policies in mixed-motive environments. HOP is hierarchically composed of two modules: an opponent modeling module that infers others’ goals and learns corresponding goal-conditioned policies, and a planning module that employs Monte Carlo Tree Search (MCTS) to identify the best response. Our approach improves efficiency by updating beliefs about others’ goals both across and within episodes and by using information from the opponent modeling module to guide planning. Experimental results demonstrate that in mixed-motive environments, HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios. Furthermore, the emergence of social intelligence during our experiments underscores the potential of our approach in complex multi-agent environments.
|
https://proceedings.mlr.press/v235/huang24q.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24q/huang24q.pdf
|
https://openreview.net/forum?id=qOl2WWOqFg
|
BiLLM: Pushing the Limit of Post-Training Quantization for LLMs
|
https://proceedings.mlr.press/v235/huang24q.html
|
Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, Xiaojuan Qi
|
https://proceedings.mlr.press/v235/huang24q.html
|
ICML 2024
|
Pretrained large language models (LLMs) exhibit exceptional general language processing capabilities but come with significant demands on memory and computational resources. As a powerful compression technology, binarization can reduce model weights to a mere 1 bit, drastically lowering the expensive computation and memory requirements. However, existing quantization techniques fall short of maintaining LLM performance under ultra-low bit-widths. In response to this challenge, we present BiLLM, a groundbreaking 1-bit post-training quantization scheme tailored for pretrained LLMs. Based on the weight distribution of LLMs, BiLLM first identifies and structurally selects salient weights, and minimizes the compression loss through an effective binary residual approximation strategy. Moreover, considering the bell-shaped distribution of the non-salient weights, we propose an optimal splitting search to group and binarize them accurately. BiLLM, for the first time, achieves high-accuracy inference (e.g., 8.41 perplexity on LLaMA2-70B) with only 1.08-bit weights across various LLM families and evaluation metrics, outperforming SOTA quantization methods for LLMs by significant margins. Moreover, BiLLM enables the binarization of a 7-billion-parameter LLM within 0.5 hours on a single GPU, demonstrating satisfactory time efficiency. Our code is available at https://github.com/Aaronhuang-778/BiLLM.
|
https://proceedings.mlr.press/v235/huang24r.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24r/huang24r.pdf
|
https://openreview.net/forum?id=Nxz3CDtGXp
|
An Empirical Examination of Balancing Strategy for Counterfactual Estimation on Time Series
|
https://proceedings.mlr.press/v235/huang24r.html
|
Qiang Huang, Chuizheng Meng, Defu Cao, Biwei Huang, Yi Chang, Yan Liu
|
https://proceedings.mlr.press/v235/huang24r.html
|
ICML 2024
|
Counterfactual estimation from observations represents a critical endeavor in numerous application fields, such as healthcare and finance, with the primary challenge being the mitigation of treatment bias. The balancing strategy aimed at reducing covariate disparities between different treatment groups serves as a universal solution. However, when it comes to time series data, the effectiveness of balancing strategies remains an open question, and a thorough analysis of their robustness and applicability is still lacking. This paper revisits counterfactual estimation in the temporal setting and provides a brief overview of recent advancements in balancing strategies. More importantly, we conduct a critical empirical examination of the effectiveness of balancing strategies within the realm of temporal counterfactual estimation in various settings on multiple datasets. Our findings could be of significant interest to researchers and practitioners and call for a reexamination of the balancing strategy in time series settings.
|
https://proceedings.mlr.press/v235/huang24s.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24s/huang24s.pdf
|
https://openreview.net/forum?id=JGL39NaARS
|
MFTN: A Multi-scale Feature Transfer Network Based on IMatchFormer for Hyperspectral Image Super-Resolution
|
https://proceedings.mlr.press/v235/huang24s.html
|
Shuying Huang, Mingyang Ren, Yong Yang, Xiaozheng Wang, Yingzhi Wei
|
https://proceedings.mlr.press/v235/huang24s.html
|
ICML 2024
|
Hyperspectral image super-resolution (HISR) aims to fuse a low-resolution hyperspectral image (LR-HSI) with a high-resolution multispectral image (HR-MSI) to obtain a high-resolution hyperspectral image (HR-HSI). Because some existing HISR methods ignore the significant feature difference between LR-HSI and HR-MSI, the reconstructed HR-HSI typically exhibits spectral distortion and blurring of spatial texture. To solve this issue, we propose a multi-scale feature transfer network (MFTN) for HISR. Firstly, three multi-scale feature extractors are constructed to extract features of different scales from the input images. Then, a multi-scale feature transfer module (MFTM) consisting of three improved feature matching Transformers (IMatchFormers) is designed to learn the detail features of different scales from HR-MSI by establishing the cross-model feature correlation between LR-HSI and degraded HR-MSI. Finally, a multi-scale dynamic aggregation module (MDAM) containing three spectral aware aggregation modules (SAAMs) is constructed to reconstruct the final HR-HSI by gradually aggregating features of different scales. Extensive experimental results on three commonly used datasets demonstrate that the proposed model achieves better performance compared to state-of-the-art (SOTA) methods.
|
https://proceedings.mlr.press/v235/huang24t.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24t/huang24t.pdf
|
https://openreview.net/forum?id=dcwUGaK9sQ
|
On Which Nodes Does GCN Fail? Enhancing GCN From the Node Perspective
|
https://proceedings.mlr.press/v235/huang24t.html
|
Jincheng Huang, Jialie Shen, Xiaoshuang Shi, Xiaofeng Zhu
|
https://proceedings.mlr.press/v235/huang24t.html
|
ICML 2024
|
The label smoothness assumption is at the core of Graph Convolutional Networks (GCNs): nodes in a local region have similar labels. Thus, GCN performs local feature smoothing operation to adhere to this assumption. However, there exist some nodes whose labels obtained by feature smoothing conflict with the label smoothness assumption. We find that the label smoothness assumption and the process of feature smoothing are both problematic on these nodes, and call these nodes out of GCN’s control (OOC nodes). In this paper, first, we design the corresponding algorithm to locate the OOC nodes, then we summarize the characteristics of OOC nodes that affect their representation learning, and based on their characteristics, we present DaGCN, an efficient framework that can facilitate the OOC nodes. Extensive experiments verify the superiority of the proposed method and demonstrate that current advanced GCNs are improvements specifically on OOC nodes; the remaining nodes under GCN’s control (UC nodes) are already optimally represented by vanilla GCN on most datasets.
|
https://proceedings.mlr.press/v235/huang24u.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24u/huang24u.pdf
|
https://openreview.net/forum?id=k2axqNsVVO
|
Self-Driven Entropy Aggregation for Byzantine-Robust Heterogeneous Federated Learning
|
https://proceedings.mlr.press/v235/huang24u.html
|
Wenke Huang, Zekun Shi, Mang Ye, He Li, Bo Du
|
https://proceedings.mlr.press/v235/huang24u.html
|
ICML 2024
|
Federated learning presents massive potential for privacy-friendly collaboration. However, the performance of federated learning is deeply affected by Byzantine attacks, where malicious clients deliberately upload crafted vicious updates. While various robust aggregations have been proposed to defend against such attacks, they are subject to certain assumptions: homogeneous private data and related proxy datasets. To address these limitations, we propose Self-Driven Entropy Aggregation (SDEA), which leverages a random public dataset to conduct Byzantine-robust aggregation in heterogeneous federated learning. For Byzantine attackers, we observe that benign clients typically present more confident (sharper) predictions than malicious ones on the public dataset. Thus, we highlight benign clients by introducing a learnable aggregation weight to minimize the instance-prediction entropy of the global model on the random public dataset. Besides, we reveal that the inherent data heterogeneity in federated learning brings heterogeneous sharpness: clients are optimized under distinct distributions and thus present diverse predictive preferences. The learnable aggregation weight blindly allocates high attention to the limited clients with sharper predictions, resulting in a biased global model. To alleviate this problem, we encourage the global model to offer diverse predictions via batch-prediction entropy maximization and conduct clustering to equally divide honest weights to accommodate different tendencies. This enables SDEA to detect Byzantine attackers in heterogeneous federated learning. Empirical results demonstrate the effectiveness of SDEA.
|
https://proceedings.mlr.press/v235/huang24v.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24v/huang24v.pdf
|
https://openreview.net/forum?id=mNzkumTSVL
|
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
|
https://proceedings.mlr.press/v235/huang24v.html
|
Chun-Yin Huang, Kartik Srinivas, Xin Zhang, Xiaoxiao Li
|
https://proceedings.mlr.press/v235/huang24v.html
|
ICML 2024
|
Conventional Federated Learning (FL) involves collaborative training of a global model while maintaining user data privacy. One of its branches, decentralized FL, is a serverless network that allows clients to own and optimize different local models separately, which saves management and communication resources. Despite the promising advancements in decentralized FL, it may reduce model generalizability due to the lack of a global model. In this scenario, managing data and model heterogeneity among clients becomes a crucial problem, which poses a unique challenge that must be overcome: How can every client’s local model learn generalizable representations in a decentralized manner? To address this challenge, we propose a novel decentralized FL technique by introducing Synthetic Anchors, dubbed DeSA. Based on the theory of domain adaptation and Knowledge Distillation (KD), we theoretically and empirically show that synthesizing global anchors based on the raw data distribution facilitates mutual knowledge transfer. We further design two effective regularization terms for local training: 1) a REG loss that regularizes the distribution of the client’s latent embedding with the anchors and 2) a KD loss that enables clients to learn from others. Through extensive experiments on diverse client data distributions, we showcase the effectiveness of DeSA in enhancing both inter- and intra-domain accuracy of each client. The implementation of DeSA can be found at: https://github.com/ubc-tea/DESA
|
https://proceedings.mlr.press/v235/huang24w.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24w/huang24w.pdf
|
https://openreview.net/forum?id=gSMUjrkRRk
|
Quasi-Monte Carlo Features for Kernel Approximation
|
https://proceedings.mlr.press/v235/huang24w.html
|
Zhen Huang, Jiajin Sun, Yian Huang
|
https://proceedings.mlr.press/v235/huang24w.html
|
ICML 2024
|
Random features (Rahimi & Recht, 2007), based on the Monte Carlo (MC) method, are one of the most popular approximation techniques to accelerate kernel methods. We show that for a class of kernels, including Gaussian kernels, quasi-Monte Carlo (QMC) methods can be used in place of MC to improve the approximation error from $O_P(1/\sqrt{M})$ to $O(1/M)$ (up to logarithmic factors), for estimating both the kernel function itself and the associated integral operator, where $M$ is the number of features being used. Furthermore, we demonstrate the advantage of QMC features in the case of kernel ridge regression, where, theoretically, fewer random features suffice to guarantee the same convergence rate of the excess risk. In practice, the QMC kernel approximation approach is easily implementable and shows superior performance, as supported by the empirical evidence provided in the paper.
|
https://proceedings.mlr.press/v235/huang24x.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24x/huang24x.pdf
|
https://openreview.net/forum?id=bWUU0LwwMp
|
Position: TrustLLM: Trustworthiness in Large Language Models
|
https://proceedings.mlr.press/v235/huang24x.html
|
Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Yang Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
|
https://proceedings.mlr.press/v235/huang24x.html
|
ICML 2024
|
Large language models (LLMs) have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions, including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. Our findings first show that, in general, trustworthiness and capability (i.e., functional effectiveness) are positively related. Secondly, our observations reveal that proprietary LLMs generally outperform most open-source counterparts in terms of trustworthiness, raising concerns about the potential risks of widely accessible open-source LLMs. However, a few open-source LLMs come very close to proprietary ones, suggesting that open-source models can achieve high levels of trustworthiness without additional mechanisms like moderators, offering valuable insights for developers in this field. Thirdly, it is important to note that some LLMs may be overly calibrated towards exhibiting trustworthiness, to the extent that they compromise their utility by mistakenly treating benign prompts as harmful and consequently not responding. Beyond these observations, we have uncovered key insights into the multifaceted trustworthiness of LLMs. We emphasize the importance of ensuring transparency not only in the models themselves but also in the technologies that underpin trustworthiness. We advocate that establishing an AI alliance among industry, academia, and the open-source community to foster collaboration is imperative to advance the trustworthiness of LLMs.
|
https://proceedings.mlr.press/v235/huang24y.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24y/huang24y.pdf
|
https://openreview.net/forum?id=1Fs1LvjYQW
|
MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation
|
https://proceedings.mlr.press/v235/huang24y.html
|
Qian Huang, Jian Vora, Percy Liang, Jure Leskovec
|
https://proceedings.mlr.press/v235/huang24y.html
|
ICML 2024
|
A central aspect of machine learning research is experimentation, the process of designing and running experiments, analyzing the results, and iterating towards some positive outcome (e.g., improving accuracy). Could agents driven by powerful language models perform machine learning experimentation effectively? To answer this question, we introduce MLAgentBench, a suite of 13 tasks ranging from improving model performance on CIFAR-10 to recent research problems like BabyLM. For each task, an agent can perform actions like reading/writing files, executing code, and inspecting outputs. We then construct an agent that can perform ML experimentation based on the ReAct framework. We benchmark agents based on Claude v1.0, Claude v2.1, Claude v3 Opus, GPT-4, GPT-4-turbo, Gemini-Pro, and Mixtral and find that a Claude v3 Opus agent is the best in terms of success rate. It can build compelling ML models over many tasks in MLAgentBench with a 37.5% average success rate. Our agents also display highly interpretable plans and actions. However, the success rates vary considerably; they span from 100% on well-established older datasets to as low as 0% on recent Kaggle challenges created potentially after the underlying LM was trained. Finally, we identify several key challenges for LM-based agents, such as long-term planning and reducing hallucination.
|
https://proceedings.mlr.press/v235/huang24z.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24z/huang24z.pdf
|
https://openreview.net/forum?id=Z2LH6Va7L2
|
How Universal Polynomial Bases Enhance Spectral Graph Neural Networks: Heterophily, Over-smoothing, and Over-squashing
|
https://proceedings.mlr.press/v235/huang24z.html
|
Keke Huang, Yu Guang Wang, Ming Li, Pietro Lio
|
https://proceedings.mlr.press/v235/huang24z.html
|
ICML 2024
|
Spectral Graph Neural Networks (GNNs), alternatively known as graph filters, have gained increasing prevalence for heterophily graphs. Optimal graph filters rely on Laplacian eigendecomposition for the Fourier transform. In an attempt to avert prohibitive computations, numerous polynomial filters have been proposed. However, the polynomials in the majority of these filters are predefined and remain fixed across different graphs, failing to accommodate varying degrees of heterophily. Addressing this gap, we demystify the intrinsic correlation between the spectral property of desired polynomial bases and the heterophily degrees via thorough theoretical analyses. Subsequently, we develop a novel adaptive heterophily basis wherein the basis vectors mutually form angles reflecting the heterophily degree of the graph. We integrate this heterophily basis with the homophily basis to construct a universal polynomial basis, UniBasis, from which we devise a polynomial-filter-based graph neural network, UniFilter. It optimizes the convolution and propagation in GNNs, thus effectively limiting over-smoothing and alleviating over-squashing. Our extensive experiments, conducted on datasets with a diverse range of heterophily, support not only the superiority of UniBasis in universality but also its proficiency in graph explanation.
|
https://proceedings.mlr.press/v235/huang24aa.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24aa/huang24aa.pdf
|
https://openreview.net/forum?id=b3pYoZfcoo
|
Conformal Prediction for Deep Classifier via Label Ranking
|
https://proceedings.mlr.press/v235/huang24aa.html
|
Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei
|
https://proceedings.mlr.press/v235/huang24aa.html
|
ICML 2024
|
Conformal prediction is a statistical framework that generates prediction sets containing ground-truth labels with a desired coverage guarantee. The predicted probabilities produced by machine learning models are generally miscalibrated, leading to large prediction sets in conformal prediction. To address this issue, we propose a novel algorithm named $\textit{Sorted Adaptive Prediction Sets}$ (SAPS), which discards all the probability values except for the maximum softmax probability. The key idea behind SAPS is to minimize the dependence of the non-conformity score on the probability values while retaining the uncertainty information. In this manner, SAPS can produce compact prediction sets and communicate instance-wise uncertainty. Extensive experiments validate that SAPS not only reduces the size of prediction sets but also broadly enhances the conditional coverage rate of prediction sets.
|
https://proceedings.mlr.press/v235/huang24ab.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ab/huang24ab.pdf
|
https://openreview.net/forum?id=eejhD9FCP3
|
Interaction-based Retrieval-augmented Diffusion Models for Protein-specific 3D Molecule Generation
|
https://proceedings.mlr.press/v235/huang24ab.html
|
Zhilin Huang, Ling Yang, Xiangxin Zhou, Chujun Qin, Yijie Yu, Xiawu Zheng, Zikun Zhou, Wentao Zhang, Yu Wang, Wenming Yang
|
https://proceedings.mlr.press/v235/huang24ab.html
|
ICML 2024
|
Generating ligand molecules that bind to specific protein targets via generative models holds substantial promise for advancing structure-based drug design. Existing methods generate molecules from scratch without reference or template ligands, which poses challenges in model optimization and may yield suboptimal outcomes. To address this problem, we propose an innovative interaction-based retrieval-augmented diffusion model named IRDiff to facilitate target-aware molecule generation. IRDiff leverages a curated set of ligand references, i.e., those with desired properties such as high binding affinity, to steer the diffusion model towards synthesizing ligands that satisfy design criteria. Specifically, we utilize a protein-molecule interaction network (PMINet), which is pretrained with binding affinity signals to: (i) retrieve target-aware ligand molecules with high binding affinity to serve as references, and (ii) incorporate essential protein-ligand binding structures for steering molecular diffusion generation with two effective augmentation mechanisms, i.e., retrieval augmentation and self augmentation. Empirical studies on CrossDocked2020 dataset show IRDiff can generate molecules with more realistic 3D structures and achieve state-of-the-art binding affinities towards the protein targets, while maintaining proper molecular properties. The codes and models are available at https://github.com/YangLing0818/IRDiff
|
https://proceedings.mlr.press/v235/huang24ac.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ac/huang24ac.pdf
|
https://openreview.net/forum?id=0NdU4y9dWC
|
Enhancing Size Generalization in Graph Neural Networks through Disentangled Representation Learning
|
https://proceedings.mlr.press/v235/huang24ac.html
|
Zheng Huang, Qihui Yang, Dawei Zhou, Yujun Yan
|
https://proceedings.mlr.press/v235/huang24ac.html
|
ICML 2024
|
Although most graph neural networks (GNNs) can operate on graphs of any size, their classification performance often declines on graphs larger than those encountered during training. Existing methods insufficiently address the removal of size information from graph representations, resulting in sub-optimal performance and reliance on backbone models. In response, we propose DISGEN, a novel and model-agnostic framework designed to disentangle size factors from graph representations. DISGEN employs size- and task-invariant augmentations and introduces a decoupling loss that minimizes shared information in hidden representations, with theoretical guarantees for its effectiveness. Our empirical results show that DISGEN outperforms the state-of-the-art models by up to 6% on real-world datasets, underscoring its effectiveness in enhancing the size generalizability of GNNs. Our codes are available at: https://github.com/GraphmindDartmouth/DISGEN.
|
https://proceedings.mlr.press/v235/huang24ad.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ad/huang24ad.pdf
|
https://openreview.net/forum?id=OnkA4zaEU9
|
Triadic-OCD: Asynchronous Online Change Detection with Provable Robustness, Optimality, and Convergence
|
https://proceedings.mlr.press/v235/huang24ad.html
|
Yancheng Huang, Kai Yang, Zelin Zhu, Leian Chen
|
https://proceedings.mlr.press/v235/huang24ad.html
|
ICML 2024
|
The primary goal of online change detection (OCD) is to promptly identify changes in the data stream. The OCD problem finds a wide variety of applications in diverse areas, e.g., security detection in smart grids and intrusion detection in communication networks. Prior research usually assumes precise knowledge of the system parameters. Nevertheless, this presumption often proves unattainable in practical scenarios due to factors such as estimation errors, system updates, etc. This paper makes the first attempt to develop a triadic-OCD framework with certifiable robustness, provable optimality, and guaranteed convergence. In addition, the proposed triadic-OCD algorithm can be realized in a fully asynchronous distributed manner, removing the need to transmit the data to a single server. This asynchronous mechanism could also mitigate the straggler issue faced by traditional synchronous algorithms. Moreover, the non-asymptotic convergence property of Triadic-OCD is theoretically analyzed, and its iteration complexity to achieve an $\epsilon$-optimal point is derived. Extensive experiments have been conducted to elucidate the effectiveness of the proposed method.
|
https://proceedings.mlr.press/v235/huang24ae.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ae/huang24ae.pdf
|
https://openreview.net/forum?id=V4qV08Vk6S
|
An Embodied Generalist Agent in 3D World
|
https://proceedings.mlr.press/v235/huang24ae.html
|
Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, Siyuan Huang
|
https://proceedings.mlr.press/v235/huang24ae.html
|
ICML 2024
|
Leveraging massive knowledge from large language models (LLMs), recent machine learning models show notable successes in general-purpose task solving in diverse domains such as computer vision and robotics. However, several significant challenges remain: (i) most of these models rely on 2D images yet exhibit a limited capacity for 3D input; (ii) these models rarely explore the tasks inherently defined in the 3D world, e.g., 3D grounding, embodied reasoning and acting. We argue these limitations significantly hinder current models from performing real-world tasks and approaching general intelligence. To this end, we introduce LEO, an embodied multi-modal generalist agent that excels in perceiving, grounding, reasoning, planning, and acting in the 3D world. LEO is trained with a unified task interface, model architecture, and objective in two stages: (i) 3D vision-language (VL) alignment and (ii) 3D vision-language-action (VLA) instruction tuning. We collect large-scale datasets comprising diverse object-level and scene-level tasks, which require considerable understanding of and interaction with the 3D world. Moreover, we meticulously design an LLM-assisted pipeline to produce high-quality 3D VL data. Through extensive experiments, we demonstrate LEO’s remarkable proficiency across a wide spectrum of tasks, including 3D captioning, question answering, embodied reasoning, navigation and manipulation. Our ablative studies and scaling analyses further provide valuable insights for developing future embodied generalist agents. Code and data are available on the project page.
|
https://proceedings.mlr.press/v235/huang24af.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24af/huang24af.pdf
|
https://openreview.net/forum?id=wilej5VnqL
|
InterLUDE: Interactions between Labeled and Unlabeled Data to Enhance Semi-Supervised Learning
|
https://proceedings.mlr.press/v235/huang24af.html
|
Zhe Huang, Xiaowei Yu, Dajiang Zhu, Michael C Hughes
|
https://proceedings.mlr.press/v235/huang24af.html
|
ICML 2024
|
Semi-supervised learning (SSL) seeks to enhance task performance by training on both labeled and unlabeled data. Mainstream SSL image classification methods mostly optimize a loss that additively combines a supervised classification objective with a regularization term derived solely from unlabeled data. This formulation often neglects the potential for interaction between labeled and unlabeled images. In this paper, we introduce InterLUDE, a new approach to enhance SSL made of two parts that each benefit from labeled-unlabeled interaction. The first part, embedding fusion, interpolates between labeled and unlabeled embeddings to improve representation learning. The second part is a new loss, grounded in the principle of consistency regularization, that aims to minimize discrepancies in the model’s predictions between labeled versus unlabeled inputs. Experiments on standard closed-set SSL benchmarks and a medical SSL task with an uncurated unlabeled set show clear benefits to our approach. On the STL-10 dataset with only 40 labels, InterLUDE achieves 3.2% error rate, while the best previous method reports 6.3%.
|
https://proceedings.mlr.press/v235/huang24ag.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ag/huang24ag.pdf
|
https://openreview.net/forum?id=QRjTDhCIO8
|
Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion Bridge
|
https://proceedings.mlr.press/v235/huang24ag.html
|
Yufei Huang, Odin Zhang, Lirong Wu, Cheng Tan, Haitao Lin, Zhangyang Gao, Siyuan Li, Stan Z. Li
|
https://proceedings.mlr.press/v235/huang24ag.html
|
ICML 2024
|
Accurate prediction of protein-ligand binding structures, a task known as molecular docking, is crucial for drug design but remains challenging. While deep learning has shown promise, existing methods often depend on holo-protein structures (docked, and not accessible in realistic tasks) or neglect pocket sidechain conformations, leading to limited practical utility and unrealistic conformation predictions. To fill these gaps, we introduce an under-explored task named flexible docking, which predicts the poses of the ligand and pocket sidechains simultaneously, and we introduce Re-Dock, a novel diffusion bridge generative model extended to geometric manifolds. Specifically, we propose an energy-to-geometry mapping inspired by the Newton-Euler equation to co-model the binding energy and conformations, reflecting the energy-constrained docking generative process. Comprehensive experiments on designed benchmark datasets, including apo-dock and cross-dock, demonstrate our model’s superior effectiveness and efficiency over current methods.
|
https://proceedings.mlr.press/v235/huang24ah.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ah/huang24ah.pdf
|
https://openreview.net/forum?id=Dwc0RwiNI5
|
Faster Adaptive Decentralized Learning Algorithms
|
https://proceedings.mlr.press/v235/huang24ah.html
|
Feihu Huang, Jianyu Zhao
|
https://proceedings.mlr.press/v235/huang24ah.html
|
ICML 2024
|
Decentralized learning has recently received increasing attention in machine learning due to its advantages in implementation simplicity, system robustness, and data privacy. Meanwhile, adaptive gradient methods show superior performance in many machine learning tasks such as training neural networks. Although some works focus on studying decentralized optimization algorithms with adaptive learning rates, these adaptive decentralized algorithms still suffer from high sample complexity. To fill these gaps, we propose a class of faster adaptive decentralized algorithms (i.e., AdaMDOS and AdaMDOF) for distributed nonconvex stochastic and finite-sum optimization, respectively. Moreover, we provide a solid convergence analysis framework for our methods. In particular, we prove that our AdaMDOS obtains a near-optimal sample complexity of $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary solution of nonconvex stochastic optimization. Meanwhile, our AdaMDOF obtains a near-optimal sample complexity of $O(\sqrt{n}\epsilon^{-2})$ for finding an $\epsilon$-stationary solution of nonconvex finite-sum optimization, where $n$ denotes the sample size. To the best of our knowledge, our AdaMDOF algorithm is the first adaptive decentralized algorithm for nonconvex finite-sum optimization. Experimental results demonstrate the efficiency of our algorithms.
|
https://proceedings.mlr.press/v235/huang24ai.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24ai/huang24ai.pdf
|
https://openreview.net/forum?id=D9EfAkQCzh
|
Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework
|
https://proceedings.mlr.press/v235/huang24ai.html
|
Haonan Huang, Guoxu Zhou, Yanghang Zheng, Yuning Qiu, Andong Wang, Qibin Zhao
|
https://proceedings.mlr.press/v235/huang24ai.html
|
ICML 2024
|
Deep Multi-view Clustering (DMVC) stands out as a widely adopted technique aiming at enhanced clustering performance by leveraging diverse data sources. However, the critical issue of vulnerability to adversarial attacks is unexplored due to the lack of well-defined attack objectives. To fill this crucial gap, this paper is the first work to investigate the possibility of adversarial attacks on DMVC models. Specifically, we introduce an adversarial attack based on Generative Adversarial Networks (GANs) that aims to maximally change the complementarity and consistency of multiple views, thus leading to wrong clustering. Building upon this adversarial context, in the realm of defense, we propose a novel Adversarially Robust Deep Multi-View Clustering method by leveraging adversarial training. Based on an analysis from an information-theoretic perspective, we design an Attack Mitigator that provides a foundation to guarantee the adversarial robustness of our DMVC models. Experiments conducted on multi-view datasets confirm that our attack framework effectively reduces the clustering performance of the target model. Furthermore, our proposed adversarially robust method is also demonstrated to be an effective defense against such attacks. This work is a pioneer in exploring adversarial threats and advancing both theoretical understanding and practical strategies for robust multi-view clustering. Code is available at https://github.com/libertyhhn/AR-DMVC.
|
https://proceedings.mlr.press/v235/huang24aj.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huang24aj/huang24aj.pdf
|
https://openreview.net/forum?id=7gEcbhMqKU
|
Faster Sampling via Stochastic Gradient Proximal Sampler
|
https://proceedings.mlr.press/v235/huang24aj.html
|
Xunpeng Huang, Difan Zou, Hanze Dong, Yian Ma, Tong Zhang
|
https://proceedings.mlr.press/v235/huang24aj.html
|
ICML 2024
|
Stochastic gradients have been widely integrated into Langevin-based methods to improve their scalability and efficiency in solving large-scale sampling problems. However, the proximal sampler, which exhibits much faster convergence than Langevin-based algorithms in the deterministic setting (Lee et al., 2021), has yet to be explored in its stochastic variants. In this paper, we study the Stochastic Proximal Samplers (SPS) for sampling from non-log-concave distributions. We first establish a general framework for implementing stochastic proximal samplers and establish the convergence theory accordingly. We show that the convergence to the target distribution can be guaranteed as long as the second moment of the algorithm trajectory is bounded and restricted Gaussian oracles can be well approximated. We then provide two implementable variants based on Stochastic gradient Langevin dynamics (SGLD) and Metropolis-adjusted Langevin algorithm (MALA), giving rise to SPS-SGLD and SPS-MALA. We further show that SPS-SGLD and SPS-MALA can achieve $\epsilon$-sampling error in total variation (TV) distance within $\tilde{\mathcal{O}}(d\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d^{1/2}\epsilon^{-2})$ gradient complexities, which outperform the best-known result by at least an $\tilde{\mathcal{O}}(d^{1/3})$ factor. This enhancement in performance is corroborated by our empirical studies on synthetic data with various dimensions, demonstrating the efficiency of our proposed algorithm.
|
https://proceedings.mlr.press/v235/hughes24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hughes24a/hughes24a.pdf
|
https://openreview.net/forum?id=Bc4vZ2CX7E
|
Position: Open-Endedness is Essential for Artificial Superhuman Intelligence
|
https://proceedings.mlr.press/v235/hughes24a.html
|
Edward Hughes, Michael D Dennis, Jack Parker-Holder, Feryal Behbahani, Aditi Mavalankar, Yuge Shi, Tom Schaul, Tim Rocktäschel
|
https://proceedings.mlr.press/v235/hughes24a.html
|
ICML 2024
|
In recent years there has been a tremendous surge in the general capabilities of AI systems, mainly fuelled by training foundation models on internet-scale data. Nevertheless, the creation of open-ended, ever self-improving AI remains elusive. In this position paper, we argue that the ingredients are now in place to achieve open-endedness in AI systems with respect to a human observer. Furthermore, we claim that such open-endedness is an essential property of any artificial superhuman intelligence (ASI). We begin by providing a concrete formal definition of open-endedness through the lens of novelty and learnability. We then illustrate a path towards ASI via open-ended systems built on top of foundation models, capable of making novel, human-relevant discoveries. We conclude by examining the safety implications of generally-capable open-ended AI. We expect that open-ended foundation models will prove to be an increasingly fertile and safety-critical area of research in the near future.
|
https://proceedings.mlr.press/v235/huh24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huh24a/huh24a.pdf
|
https://openreview.net/forum?id=BH8TYy0r6u
|
Position: The Platonic Representation Hypothesis
|
https://proceedings.mlr.press/v235/huh24a.html
|
Minyoung Huh, Brian Cheung, Tongzhou Wang, Phillip Isola
|
https://proceedings.mlr.press/v235/huh24a.html
|
ICML 2024
|
We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned. Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way. We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato’s concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis.
|
https://proceedings.mlr.press/v235/huh24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huh24b/huh24b.pdf
|
https://openreview.net/forum?id=QQkK6YH0Th
|
Nash Incentive-compatible Online Mechanism Learning via Weakly Differentially Private Online Learning
|
https://proceedings.mlr.press/v235/huh24b.html
|
Joon Suk Huh, Kirthevasan Kandasamy
|
https://proceedings.mlr.press/v235/huh24b.html
|
ICML 2024
|
We study a multi-round mechanism design problem, where we interact with a set of agents over a sequence of rounds. We wish to design an incentive-compatible (IC) online learning scheme to maximize an application-specific objective within a given class of mechanisms, without prior knowledge of the agents’ type distributions. Even if each mechanism in this class is IC in a single round, if an algorithm naively chooses from this class on each round, the entire learning process may not be IC against non-myopic buyers who appear over multiple rounds. On each round, our method randomly chooses between the recommendation of a weakly differentially private online learning algorithm (e.g., Hedge), and a commitment mechanism which penalizes non-truthful behavior. Our method is IC and achieves $O(T^{\frac{1+h}{2}})$ regret for the application-specific objective in an adversarial setting, where $h$ quantifies the long-sightedness of the agents. When compared to prior work, our approach is conceptually simpler, it applies to general mechanism design problems (beyond auctions), and its regret scales gracefully with the size of the mechanism class.
|
https://proceedings.mlr.press/v235/hui24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hui24a/hui24a.pdf
|
https://openreview.net/forum?id=8l1KYguM4w
|
Make-A-Shape: a Ten-Million-scale 3D Shape Model
|
https://proceedings.mlr.press/v235/hui24a.html
|
Ka-Hei Hui, Aditya Sanghi, Arianna Rampini, Kamal Rahimi Malekshan, Zhengzhe Liu, Hooman Shayani, Chi-Wing Fu
|
https://proceedings.mlr.press/v235/hui24a.html
|
ICML 2024
|
The progression in large-scale 3D generative models has been impeded by significant resource requirements for training and challenges like inefficient representations. This paper introduces Make-A-Shape, a novel 3D generative model trained on a vast scale, using 10 million publicly-available shapes. We first innovate the wavelet-tree representation to encode high-resolution SDF shapes with minimal loss, leveraging our newly-proposed subband coefficient filtering scheme. We then design a subband coefficient packing scheme to facilitate diffusion-based generation and a subband adaptive training strategy for effective training on the large-scale dataset. Our generative framework is versatile, capable of conditioning on various input modalities such as images, point clouds, and voxels, enabling a variety of downstream applications, e.g., unconditional generation, completion, and conditional generation. Our approach clearly surpasses the existing baselines in delivering high-quality results and can efficiently generate shapes within two seconds for most conditions.
|
https://proceedings.mlr.press/v235/huijben24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huijben24a/huijben24a.pdf
|
https://openreview.net/forum?id=NBAc36V00H
|
Residual Quantization with Implicit Neural Codebooks
|
https://proceedings.mlr.press/v235/huijben24a.html
|
Iris A.M. Huijben, Matthijs Douze, Matthew J. Muckley, Ruud Van Sloun, Jakob Verbeek
|
https://proceedings.mlr.press/v235/huijben24a.html
|
ICML 2024
|
Vector quantization is a fundamental operation for data compression and vector search. To obtain high accuracy, multi-codebook methods represent each vector using codewords across several codebooks. Residual quantization (RQ) is one such method, which iteratively quantizes the error of the previous step. While the error distribution is dependent on previously-selected codewords, this dependency is not accounted for in conventional RQ as it uses a fixed codebook per quantization step. In this paper, we propose QINCo, a neural RQ variant that constructs specialized codebooks per step that depend on the approximation of the vector from previous steps. Experiments show that QINCo outperforms state-of-the-art methods by a large margin on several datasets and code sizes. For example, QINCo achieves better nearest-neighbor search accuracy using 12-byte codes than the state-of-the-art UNQ using 16 bytes on the BigANN1M and Deep1M datasets.
|
https://proceedings.mlr.press/v235/huix24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huix24a/huix24a.pdf
|
https://openreview.net/forum?id=hnqlgwcRxb
|
Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians
|
https://proceedings.mlr.press/v235/huix24a.html
|
Tom Huix, Anna Korba, Alain Oliviero Durmus, Eric Moulines
|
https://proceedings.mlr.press/v235/huix24a.html
|
ICML 2024
|
Variational inference (VI) is a popular approach in Bayesian inference that looks for the best approximation of the posterior distribution within a parametric family, minimizing a loss that is (typically) the reverse Kullback-Leibler (KL) divergence. Despite its empirical success, the theoretical properties of VI have only recently received attention, and are restricted to the Gaussian case. This research paper aims to contribute to the theoretical study of VI in the non-Gaussian case by investigating the setting of Mixtures of Gaussians with fixed covariance. In this view, VI over this specific family can be cast as the minimization of a mollified relative entropy, i.e., the KL divergence between the convolution (with respect to a Gaussian kernel) of an atomic measure supported on Diracs, whose support corresponds to the locations of the Gaussian components, and the target distribution. Hence, solving variational inference is equivalent to optimizing the positions of the Diracs (the particles), which can be done through gradient descent and takes the form of an interacting particle system. We study two sources of error in variational inference in this context. The first is an optimization result: a descent lemma establishing that the algorithm decreases the objective at each iteration. The second is an approximation error that upper bounds the mollified relative entropy between an optimal finite mixture and the target distribution.
|
https://proceedings.mlr.press/v235/humayun24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/humayun24a/humayun24a.pdf
|
https://openreview.net/forum?id=zMue490KMr
|
Deep Networks Always Grok and Here is Why
|
https://proceedings.mlr.press/v235/humayun24a.html
|
Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk
|
https://proceedings.mlr.press/v235/humayun24a.html
|
ICML 2024
|
Grokking, or delayed generalization, is a phenomenon where generalization in a deep neural network (DNN) occurs long after achieving near zero training error. Previous studies have reported the occurrence of grokking in specific controlled settings, such as DNNs initialized with large-norm parameters or transformers trained on algorithmic datasets. We demonstrate that grokking is actually much more widespread and materializes in a wide range of practical settings, such as training of a convolutional neural network (CNN) on CIFAR10 or a Resnet on Imagenette. We introduce the new concept of delayed robustness, whereby a DNN groks adversarial examples and becomes robust, long after interpolation and/or generalization. We develop an analytical explanation for the emergence of both delayed generalization and delayed robustness based on the local complexity of a DNN’s input-output mapping. Our local complexity measures the density of so-called “linear regions’’ (aka, spline partition regions) that tile the DNN input space and serves as a utile progress measure for training. We provide the first evidence that, for classification problems, the linear regions undergo a phase transition during training whereafter they migrate away from the training samples (making the DNN mapping smoother there) and towards the decision boundary (making the DNN mapping less smooth there). Grokking occurs post phase transition as a robust partition of the input space thanks to the linearization of the DNN mapping around the training points. Web: https://bit.ly/grok-adversarial.
|
https://proceedings.mlr.press/v235/huo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/huo24a/huo24a.pdf
|
https://openreview.net/forum?id=AqBz54aFyj
|
Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models
|
https://proceedings.mlr.press/v235/huo24a.html
|
Mingjia Huo, Sai Ashish Somayajula, Youwei Liang, Ruisi Zhang, Farinaz Koushanfar, Pengtao Xie
|
https://proceedings.mlr.press/v235/huo24a.html
|
ICML 2024
|
Large language models generate high-quality responses with potential misinformation, underscoring the need for regulation by distinguishing AI-generated and human-written texts. Watermarking is pivotal in this context, which involves embedding hidden markers in texts during the LLM inference phase, which is imperceptible to humans. Achieving both the detectability of inserted watermarks and the semantic quality of generated texts is challenging. While current watermarking algorithms have made promising progress in this direction, there remains significant scope for improvement. To address these challenges, we introduce a novel multi-objective optimization (MOO) approach for watermarking that utilizes lightweight networks to generate token-specific watermarking logits and splitting ratios. By leveraging MOO to optimize for both detection and semantic objective functions, our method simultaneously achieves detectability and semantic integrity. Experimental results show that our method outperforms current watermarking techniques in enhancing the detectability of texts generated by LLMs while maintaining their semantic coherence. Our code is available at https://github.com/mignonjia/TS_watermark.
|
https://proceedings.mlr.press/v235/hussain24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hussain24a/hussain24a.pdf
|
https://openreview.net/forum?id=iPFuWc1TV2
|
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers
|
https://proceedings.mlr.press/v235/hussain24a.html
|
Md Shamim Hussain, Mohammed J Zaki, Dharmashankar Subramanian
|
https://proceedings.mlr.press/v235/hussain24a.html
|
ICML 2024
|
Graph transformers typically lack third-order interactions, limiting their geometric understanding which is crucial for tasks like molecular geometry prediction. We propose the Triplet Graph Transformer (TGT) that enables direct communication between pairs within a 3-tuple of nodes via novel triplet attention and aggregation mechanisms. TGT is applied to molecular property prediction by first predicting interatomic distances from 2D graphs and then using these distances for downstream tasks. A novel three-stage training procedure and stochastic inference further improve training efficiency and model performance. Our model achieves new state-of-the-art (SOTA) results on open challenge benchmarks PCQM4Mv2 and OC20 IS2RE. We also obtain SOTA results on QM9, MOLPCBA, and LIT-PCBA molecular property prediction benchmarks via transfer learning. We also demonstrate the generality of TGT with SOTA results on the traveling salesman problem (TSP).
|
https://proceedings.mlr.press/v235/hvarfner24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hvarfner24a/hvarfner24a.pdf
|
https://openreview.net/forum?id=OfT8MgIqHT
|
Vanilla Bayesian Optimization Performs Great in High Dimensions
|
https://proceedings.mlr.press/v235/hvarfner24a.html
|
Carl Hvarfner, Erik Orm Hellsten, Luigi Nardi
|
https://proceedings.mlr.press/v235/hvarfner24a.html
|
ICML 2024
|
High-dimensional optimization problems have long been considered the Achilles’ heel of Bayesian optimization algorithms. Spurred by the curse of dimensionality, a large collection of algorithms aim to make BO more performant in this setting, commonly by imposing various simplifying assumptions on the objective, thereby decreasing its presumed complexity. In this paper, we identify the degeneracies that make vanilla BO poorly suited to high-dimensional tasks, and further show how existing algorithms address these degeneracies through the lens of model complexity. Motivated by the model complexity measure, we derive an enhancement to the prior assumptions that are typical of the vanilla BO algorithm, which reduces the complexity to manageable levels without imposing structural restrictions on the objective. Our modification - a simple scaling of the Gaussian process lengthscale prior in the dimensionality - reveals that standard BO works drastically better than previously thought in high dimensions. Our insights are supplemented by substantial out-performance of existing state-of-the-art on multiple commonly considered real-world high-dimensional tasks.
|
https://proceedings.mlr.press/v235/hwang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hwang24a/hwang24a.pdf
|
https://openreview.net/forum?id=6D0nyemiWk
|
On Positivity Condition for Causal Inference
|
https://proceedings.mlr.press/v235/hwang24a.html
|
Inwoo Hwang, Yesong Choe, Yeahoon Kwon, Sanghack Lee
|
https://proceedings.mlr.press/v235/hwang24a.html
|
ICML 2024
|
Identifying and estimating a causal effect is a fundamental task when researchers want to infer a causal effect using an observational study without experiments. A conventional assumption is the strict positivity of the given distribution, or the so-called positivity (or overlap) condition under the unconfoundedness assumption, namely that the probabilities of treatments are positive. However, there exist many environments where the observational data neither exhibits strict positivity nor satisfies the unconfoundedness assumption. Against this background, we examine the graphical counterpart of the conventional positivity condition so as to license the use of an identification formula without strict positivity. In particular, we explore various approaches, including post-hoc analysis, do-calculus, $Q$-decomposition, and algorithmic approaches, to yield a positivity condition for an identification formula, and relate them to provide a comprehensive view. We further discuss the design of a positivity-aware identification algorithm based on the theoretical characterization of identification formulas.
|
https://proceedings.mlr.press/v235/hwang24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hwang24b/hwang24b.pdf
|
https://openreview.net/forum?id=mrd4e8ZJjm
|
Fine-Grained Causal Dynamics Learning with Quantization for Improving Robustness in Reinforcement Learning
|
https://proceedings.mlr.press/v235/hwang24b.html
|
Inwoo Hwang, Yunhyeok Kwak, Suhyung Choi, Byoung-Tak Zhang, Sanghack Lee
|
https://proceedings.mlr.press/v235/hwang24b.html
|
ICML 2024
|
Causal dynamics learning has recently emerged as a promising approach to enhancing robustness in reinforcement learning (RL). Typically, the goal is to build a dynamics model that makes predictions based on the causal relationships among the entities. Despite the fact that causal connections often manifest only under certain contexts, existing approaches overlook such fine-grained relationships and lack a detailed understanding of the dynamics. In this work, we propose a novel dynamics model that infers fine-grained causal structures and employs them for prediction, leading to improved robustness in RL. The key idea is to jointly learn the dynamics model with a discrete latent variable that quantizes the state-action space into subgroups. This leads to recognizing meaningful context that displays sparse dependencies, where causal structures are learned for each subgroup throughout the training. Experimental results demonstrate the robustness of our method to unseen states and locally spurious correlations in downstream tasks where fine-grained causal reasoning is crucial. We further illustrate the effectiveness of our subgroup-based approach with quantization in discovering fine-grained causal relationships compared to prior methods.
|
https://proceedings.mlr.press/v235/hwang24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hwang24c/hwang24c.pdf
|
https://openreview.net/forum?id=CuiRGtVI55
|
Adapting Pretrained ViTs with Convolution Injector for Visuo-Motor Control
|
https://proceedings.mlr.press/v235/hwang24c.html
|
Dongyoon Hwang, Byungkun Lee, Hojoon Lee, Hyunseung Kim, Jaegul Choo
|
https://proceedings.mlr.press/v235/hwang24c.html
|
ICML 2024
|
Vision Transformers (ViT), when paired with large-scale pretraining, have shown remarkable performance across various computer vision tasks, primarily due to their weak inductive bias. However, while such weak inductive bias aids in pretraining scalability, this may hinder the effective adaptation of ViTs for visuo-motor control tasks as a result of the absence of control-centric inductive biases. Such absent inductive biases include spatial locality and translation equivariance bias which convolutions naturally offer. To this end, we introduce Convolution Injector (CoIn), an add-on module that injects convolutions which are rich in locality and equivariance biases into a pretrained ViT for effective adaptation in visuo-motor control. We evaluate CoIn with three distinct types of pretrained ViTs (CLIP, MVP, VC-1) across 12 varied control tasks within three separate domains (Adroit, MetaWorld, DMC), and demonstrate that CoIn consistently enhances control task performance across all experimented environments and models, validating the effectiveness of providing pretrained ViTs with control-centric biases.
|
https://proceedings.mlr.press/v235/hwang24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/hwang24d/hwang24d.pdf
|
https://openreview.net/forum?id=nn5OPHom8t
|
EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens
|
https://proceedings.mlr.press/v235/hwang24d.html
|
Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang
|
https://proceedings.mlr.press/v235/hwang24d.html
|
ICML 2024
|
Masked Video Autoencoder (MVA) approaches have demonstrated their potential by significantly outperforming previous video representation learning methods. However, due to random masking strategies, they waste an excessive amount of computation and memory (e.g., over 16 nodes with 128 NVIDIA A100 GPUs) in predicting uninformative tokens/frames. To resolve this issue, we exploit the unequal information density among the patches in videos and propose EVEREST, a surprisingly efficient MVA approach for video representation learning that finds tokens containing rich motion features and discards uninformative ones during both pre-training and fine-tuning. We further present an information-intensive frame selection strategy that allows the model to focus on informative and causal frames with minimal redundancy. Our method significantly reduces the computation and memory requirements of MVA, enabling the pre-training and fine-tuning on a single machine with 8 GPUs while achieving comparable performance to computation- and memory-heavy baselines on multiple benchmarks and the uncurated Ego4D dataset. We hope that our work contributes to reducing the barrier to further research on video understanding.
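A hypothetical sketch of the token-selection step (not the released EVEREST code; the motion score below is a simple stand-in of my own): rank patch tokens by how much they change between consecutive frames and keep only the top fraction:

```python
# Hypothetical sketch of motion-based token selection (not the official
# EVEREST code): keep the patch tokens whose embeddings change most between
# consecutive frames and drop the rest before masked-autoencoder training.
import numpy as np

def select_informative_tokens(tokens, keep_ratio=0.5):
    """tokens: (T, N, D) array of patch embeddings for T frames, N patches.
    Returns a boolean mask of shape (T-1, N) marking tokens to keep."""
    motion = np.linalg.norm(tokens[1:] - tokens[:-1], axis=-1)  # (T-1, N)
    k = int(keep_ratio * motion.shape[1])
    keep = np.zeros_like(motion, dtype=bool)
    # Keep the k most-changing patches for every frame transition.
    top_idx = np.argpartition(-motion, k - 1, axis=1)[:, :k]
    np.put_along_axis(keep, top_idx, True, axis=1)
    return keep

tokens = np.random.randn(8, 196, 768)  # 8 frames, 14x14 patches, ViT-B dim
mask = select_informative_tokens(tokens, keep_ratio=0.25)
print("kept tokens per frame transition:", mask.sum(axis=1))
```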
|
https://proceedings.mlr.press/v235/igel24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/igel24a/igel24a.pdf
|
https://openreview.net/forum?id=m8t1yzfBsJ
|
Smooth Min-Max Monotonic Networks
|
https://proceedings.mlr.press/v235/igel24a.html
|
Christian Igel
|
https://proceedings.mlr.press/v235/igel24a.html
|
ICML 2024
|
Monotonicity constraints are powerful regularizers in statistical modelling. They can support fairness in computer-aided decision making and increase plausibility in data-driven scientific models. The seminal min-max (MM) neural network architecture ensures monotonicity, but often gets stuck in undesired local optima during training because of partial derivatives being zero when computing extrema. We propose a simple modification of the MM network using strictly-increasing smooth minimum and maximum functions that alleviates this problem. The resulting smooth min-max (SMM) network module inherits the asymptotic approximation properties from the MM architecture. It can be used within larger deep learning systems trained end-to-end. The SMM module is conceptually simple and computationally less demanding than state-of-the-art neural networks for monotonic modelling. Our experiments show that this does not come with a loss in generalization performance compared to alternative neural and non-neural approaches.
|
https://proceedings.mlr.press/v235/ilbert24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ilbert24a/ilbert24a.pdf
|
https://openreview.net/forum?id=8kLzL5QBh2
|
SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention
|
https://proceedings.mlr.press/v235/ilbert24a.html
|
Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, Ievgen Redko
|
https://proceedings.mlr.press/v235/ilbert24a.html
|
ICML 2024
|
Transformer-based architectures achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting. To better understand this phenomenon, we start by studying a toy linear forecasting problem for which we show that transformers are incapable of converging to their true solution despite their high expressive power. We further identify the attention of transformers as being responsible for this low generalization capacity. Building upon this insight, we propose a shallow lightweight transformer model that successfully escapes bad local minima when optimized with sharpness-aware optimization. We empirically demonstrate that this result extends to all commonly used real-world multivariate time series datasets. In particular, SAMformer surpasses current state-of-the-art methods and is on par with the biggest foundation model MOIRAI while having significantly fewer parameters. The code is available at https://github.com/romilbert/samformer.
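For reference, a compact sketch of one sharpness-aware minimization (SAM) step on a generic PyTorch model; this is the standard two-pass SAM recipe only, not SAMformer's architecture or its channel-wise attention:

```python
# Compact sketch of one sharpness-aware minimization (SAM) step for a generic
# PyTorch model. This is the standard two-pass SAM recipe, not SAMformer's
# architecture or its channel-wise attention.
import torch
import torch.nn.functional as F

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    # First pass: gradient at the current weights.
    base_opt.zero_grad()
    loss_fn(model, batch).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Ascent step: move to the worst-case nearby weights.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            e = rho * p.grad / grad_norm if p.grad is not None else None
            if e is not None:
                p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed weights, then undo the perturbation.
    base_opt.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    return loss.item()

model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = (torch.randn(32, 8), torch.randn(32, 1))
loss_fn = lambda m, b: F.mse_loss(m(b[0]), b[1])
print("loss at perturbed weights:", sam_step(model, loss_fn, batch, opt))
```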
|
https://proceedings.mlr.press/v235/ildiz24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ildiz24a/ildiz24a.pdf
|
https://openreview.net/forum?id=72oT4mPLUb
|
From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers
|
https://proceedings.mlr.press/v235/ildiz24a.html
|
Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li, Ankit Singh Rawat, Samet Oymak
|
https://proceedings.mlr.press/v235/ildiz24a.html
|
ICML 2024
|
Modern language models rely on the transformer architecture and attention mechanism to perform language understanding and text generation. In this work, we study learning a 1-layer self-attention model from a set of prompts and the associated outputs sampled from the model. We first establish a formal link between the self-attention mechanism and Markov models under suitable conditions: Inputting a prompt to the self-attention model samples the output token according to a context-conditioned Markov chain (CCMC). CCMC is obtained by weighing the transition matrix of a standard Markov chain according to the sufficient statistics of the prompt/context. Building on this formalism, we develop identifiability/coverage conditions for the data distribution that guarantee consistent estimation of the latent model under a teacher-student setting and establish sample complexity guarantees under IID data. Finally, we study the problem of learning from a single output trajectory generated in response to an initial prompt. We characterize a winner-takes-all phenomenon where the generative process of self-attention evolves to sampling from a small set of winner tokens that dominate the context window. This provides a mathematical explanation to the tendency of modern LLMs to generate repetitive text.
|
https://proceedings.mlr.press/v235/im24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/im24a/im24a.pdf
|
https://openreview.net/forum?id=Hy88Jp0kQT
|
Understanding the Learning Dynamics of Alignment with Human Feedback
|
https://proceedings.mlr.press/v235/im24a.html
|
Shawn Im, Yixuan Li
|
https://proceedings.mlr.press/v235/im24a.html
|
ICML 2024
|
Aligning large language models (LLMs) with human intentions has become a critical task for safely deploying models in real-world systems. While existing alignment approaches have seen empirical success, theoretically understanding how these methods affect model behavior remains an open question. Our work provides an initial attempt to theoretically analyze the learning dynamics of human preference alignment. We formally show how the distribution of preference datasets influences the rate of model updates and provide rigorous guarantees on the training accuracy. Our theory also reveals an intricate phenomenon where the optimization is prone to prioritizing certain behaviors with higher preference distinguishability. We empirically validate our findings on contemporary LLMs and alignment tasks, reinforcing our theoretical insights and shedding light on considerations for future alignment approaches. Disclaimer: This paper contains potentially offensive text; reader discretion is advised.
|
https://proceedings.mlr.press/v235/ingebrand24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ingebrand24a/ingebrand24a.pdf
|
https://openreview.net/forum?id=tHBLwSYnLf
|
Zero-Shot Reinforcement Learning via Function Encoders
|
https://proceedings.mlr.press/v235/ingebrand24a.html
|
Tyler Ingebrand, Amy Zhang, Ufuk Topcu
|
https://proceedings.mlr.press/v235/ingebrand24a.html
|
ICML 2024
|
Although reinforcement learning (RL) can solve many challenging sequential decision making problems, achieving zero-shot transfer across related tasks remains a challenge. The difficulty lies in finding a good representation for the current task so that the agent understands how it relates to previously seen tasks. To achieve zero-shot transfer, we introduce the function encoder, a representation learning algorithm which represents a function as a weighted combination of learned, non-linear basis functions. By using a function encoder to represent the reward function or the transition function, the agent has information on how the current task relates to previously seen tasks via a coherent vector representation. Thus, the agent is able to achieve transfer between related tasks at run time with no additional training. We demonstrate state-of-the-art data efficiency, asymptotic performance, and training stability in three RL fields by augmenting basic RL algorithms with a function encoder task representation.
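A hedged sketch of the representation idea with random features standing in for the learned non-linear basis functions (the basis, shapes, and toy reward below are assumptions for illustration): a task is encoded by the least-squares coefficients of its reward samples on the basis outputs:

```python
# Hedged sketch of the function-encoder idea with random features standing in
# for learned non-linear basis functions: a task's reward function is
# represented by least-squares coefficients over the basis outputs.
import numpy as np

rng = np.random.default_rng(0)
n_basis, state_dim = 32, 4
W = rng.normal(size=(n_basis, state_dim))

def basis(states):
    """Evaluate all basis functions on a batch of states: (n, n_basis)."""
    return np.tanh(states @ W.T)

def encode_task(states, rewards):
    """Coefficient vector representing this task's reward function."""
    coeffs, *_ = np.linalg.lstsq(basis(states), rewards, rcond=None)
    return coeffs

# A toy task: reward depends non-linearly on the state.
states = rng.normal(size=(256, state_dim))
rewards = np.sin(states[:, 0]) + 0.5 * states[:, 1]
c = encode_task(states, rewards)
print("task representation shape:", c.shape)
print("reconstruction error:", np.mean((basis(states) @ c - rewards) ** 2))
```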
|
https://proceedings.mlr.press/v235/iollo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/iollo24a/iollo24a.pdf
|
https://openreview.net/forum?id=rGCvMARXkG
|
PASOA- PArticle baSed Bayesian Optimal Adaptive design
|
https://proceedings.mlr.press/v235/iollo24a.html
|
Jacopo Iollo, Christophe Heinkelé, Pierre Alliez, Florence Forbes
|
https://proceedings.mlr.press/v235/iollo24a.html
|
ICML 2024
|
We propose a new procedure named PASOA, for Bayesian experimental design, that performs sequential design optimization by simultaneously providing accurate estimates of successive posterior distributions for parameter inference. The sequential design process is carried out via a contrastive estimation principle, using stochastic optimization and Sequential Monte Carlo (SMC) samplers to maximise the Expected Information Gain (EIG). As larger information gains are obtained for larger distances between successive posterior distributions, this EIG objective may worsen classical SMC performance. To handle this issue, tempering is proposed to obtain both a large information gain and accurate SMC sampling, which we show is crucial for performance. This novel combination of stochastic optimization and tempered SMC allows us to jointly handle design optimization and parameter inference. We provide a proof that the obtained optimal design estimators benefit from some consistency property. Numerical experiments confirm the potential of the approach, which outperforms other recent existing procedures.
|
https://proceedings.mlr.press/v235/iqbal24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/iqbal24a/iqbal24a.pdf
|
https://openreview.net/forum?id=p1kDNFs62o
|
Nesting Particle Filters for Experimental Design in Dynamical Systems
|
https://proceedings.mlr.press/v235/iqbal24a.html
|
Sahel Iqbal, Adrien Corenflos, Simo Särkkä, Hany Abdulsamad
|
https://proceedings.mlr.press/v235/iqbal24a.html
|
ICML 2024
|
In this paper, we propose a novel approach to Bayesian experimental design for non-exchangeable data that formulates it as risk-sensitive policy optimization. We develop the Inside-Out SMC$^2$ algorithm, a nested sequential Monte Carlo technique to infer optimal designs, and embed it into a particle Markov chain Monte Carlo framework to perform gradient-based policy amortization. Our approach is distinct from other amortized experimental design techniques, as it does not rely on contrastive estimators. Numerical validation on a set of dynamical systems showcases the efficacy of our method in comparison to other state-of-the-art strategies.
|
https://proceedings.mlr.press/v235/j-thiagarajan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/j-thiagarajan24a/j-thiagarajan24a.pdf
|
https://openreview.net/forum?id=5353dJE9Ek
|
PAGER: Accurate Failure Characterization in Deep Regression Models
|
https://proceedings.mlr.press/v235/j-thiagarajan24a.html
|
Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Puja Trivedi, Rushil Anirudh
|
https://proceedings.mlr.press/v235/j-thiagarajan24a.html
|
ICML 2024
|
Safe deployment of AI models requires proactive detection of failures to prevent costly errors. To this end, we study the important problem of detecting failures in deep regression models. Existing approaches rely on epistemic uncertainty estimates or inconsistency w.r.t the training data to identify failure. Interestingly, we find that while uncertainties are necessary they are insufficient to accurately characterize failure in practice. Hence, we introduce PAGER (Principled Analysis of Generalization Errors in Regressors), a framework to systematically detect and characterize failures in deep regressors. Built upon the principle of anchored training in deep models, PAGER unifies both epistemic uncertainty and complementary manifold non-conformity scores to accurately organize samples into different risk regimes.
|
https://proceedings.mlr.press/v235/jacobsen24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jacobsen24a/jacobsen24a.pdf
|
https://openreview.net/forum?id=9iRGs3wBTy
|
Online Linear Regression in Dynamic Environments via Discounting
|
https://proceedings.mlr.press/v235/jacobsen24a.html
|
Andrew Jacobsen, Ashok Cutkosky
|
https://proceedings.mlr.press/v235/jacobsen24a.html
|
ICML 2024
|
We develop algorithms for online linear regression which achieve optimal static and dynamic regret guarantees even in the complete absence of prior knowledge. We present a novel analysis showing that a discounted variant of the Vovk-Azoury-Warmuth forecaster achieves dynamic regret of the form $R_{T}(\vec{u})\le O\Big(d\log(T)\vee \sqrt{dP_{T}^{\gamma}(\vec{u})T}\Big)$, where $P_{T}^{\gamma}(\vec{u})$ is a measure of variability of the comparator sequence, and show that the discount factor achieving this result can be learned on-the-fly. We show that this result is optimal by providing a matching lower bound. We also extend our results to strongly-adaptive guarantees which hold over every sub-interval $[a,b]\subseteq[1,T]$ simultaneously.
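A minimal sketch of a discounted Vovk-Azoury-Warmuth forecaster, assuming the simplest form of discounting (past outer products and feature-target sums decay by a fixed factor $\gamma$ each round); the paper instead learns the discount factor on the fly:

```python
# Minimal sketch of a discounted Vovk-Azoury-Warmuth forecaster for online
# linear regression. The discount gamma is fixed here; the paper learns it
# on the fly.
import numpy as np

def discounted_vaw(xs, ys, gamma=0.99, reg=1.0):
    """xs: (T, d) features, ys: (T,) targets. Returns online predictions."""
    d = xs.shape[1]
    S = np.zeros((d, d))       # discounted sum of feature outer products
    b = np.zeros(d)            # discounted sum of y_s * x_s
    preds = []
    for x, y in zip(xs, ys):
        S = gamma * S + np.outer(x, x)   # current x enters before predicting
        preds.append(x @ np.linalg.solve(S + reg * np.eye(d), b))
        b = gamma * b + y * x            # true y revealed after the prediction
    return np.array(preds)

rng = np.random.default_rng(1)
T, d = 500, 5
xs = rng.normal(size=(T, d))
w = rng.normal(size=d)
ys = xs @ w + 0.1 * rng.normal(size=T)
print("mean squared error:", np.mean((discounted_vaw(xs, ys) - ys) ** 2))
```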
|
https://proceedings.mlr.press/v235/jagadish24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jagadish24a/jagadish24a.pdf
|
https://openreview.net/forum?id=oTmQmaNkGn
|
Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks
|
https://proceedings.mlr.press/v235/jagadish24a.html
|
Akshay Kumar Jagadish, Julian Coda-Forno, Mirko Thalmann, Eric Schulz, Marcel Binz
|
https://proceedings.mlr.press/v235/jagadish24a.html
|
ICML 2024
|
Ecological rationality refers to the notion that humans are rational agents adapted to their environment. However, testing this theory remains challenging due to two reasons: the difficulty in defining what tasks are ecologically valid and building rational models for these tasks. In this work, we demonstrate that large language models can generate cognitive tasks, specifically category learning tasks, that match the statistics of real-world tasks, thereby addressing the first challenge. We tackle the second challenge by deriving rational agents adapted to these tasks using the framework of meta-learning, leading to a class of models called ecologically rational meta-learned inference (ERMI). ERMI quantitatively explains human data better than seven other cognitive models in two different experiments. It additionally matches human behavior on a qualitative level: (1) it finds the same tasks difficult that humans find difficult, (2) it becomes more reliant on an exemplar-based strategy for assigning categories with learning, and (3) it generalizes to unseen stimuli in a human-like way. Furthermore, we show that ERMI’s ecologically valid priors allow it to achieve state-of-the-art performance on the OpenML-CC18 classification benchmark.
|
https://proceedings.mlr.press/v235/jain24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jain24a/jain24a.pdf
|
https://openreview.net/forum?id=44qxX6Ty6F
|
Position: Scarce Resource Allocations That Rely On Machine Learning Should Be Randomized
|
https://proceedings.mlr.press/v235/jain24a.html
|
Shomik Jain, Kathleen Creel, Ashia Camage Wilson
|
https://proceedings.mlr.press/v235/jain24a.html
|
ICML 2024
|
Contrary to traditional deterministic notions of algorithmic fairness, this paper argues that fairly allocating scarce resources using machine learning often requires randomness. We address why, when, and how to randomize by offering a set of stochastic procedures that more adequately account for all of the claims individuals have to allocations of social goods or opportunities and effectively balances their interests.
|
https://proceedings.mlr.press/v235/jain24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jain24b/jain24b.pdf
|
https://openreview.net/forum?id=3JhmHCVPa8
|
Learning to Reach Goals via Diffusion
|
https://proceedings.mlr.press/v235/jain24b.html
|
Vineet Jain, Siamak Ravanbakhsh
|
https://proceedings.mlr.press/v235/jain24b.html
|
ICML 2024
|
We present a novel perspective on goal-conditioned reinforcement learning by framing it within the context of denoising diffusion models. Analogous to the diffusion process, where Gaussian noise is used to create random trajectories that walk away from the data manifold, we construct trajectories that move away from potential goal states. We then learn a goal-conditioned policy to reverse these deviations, analogous to the score function. This approach, which we call Merlin, can reach specified goals from arbitrary initial states without learning a separate value function. In contrast to recent works utilizing diffusion models in offline RL, Merlin stands out as the first method to perform diffusion in the state space, requiring only one "denoising" iteration per environment step. We experimentally validate our approach in various offline goal-reaching tasks, demonstrating substantial performance enhancements compared to state-of-the-art methods while improving computational efficiency over other diffusion-based RL methods by an order of magnitude. Our results suggest that this perspective on diffusion for RL is a simple and scalable approach for sequential decision making.
|
https://proceedings.mlr.press/v235/jain24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jain24c/jain24c.pdf
|
https://openreview.net/forum?id=kXHgEYFyf3
|
R2E: Turning any Github Repository into a Programming Agent Environment
|
https://proceedings.mlr.press/v235/jain24c.html
|
Naman Jain, Manish Shetty, Tianjun Zhang, King Han, Koushik Sen, Ion Stoica
|
https://proceedings.mlr.press/v235/jain24c.html
|
ICML 2024
|
While Large Language Models’ (LLMs) coding capabilities have advanced rapidly, corresponding evaluation benchmarks on real-world programming setups are yet to catch up. Building a scalable and interactive testbed for evaluating general-purpose AI coding agents for real-world code has been challenging, particularly due to a lack of high-quality test suites available. In this paper, we present Repository to Environment (R2E), a framework that can turn any GitHub repository into a test environment to evaluate the performance of code-generating systems, both static and interactive. R2E is powered by a synergistic combination of program analysis and LLMs to construct equivalence test harnesses for any GitHub function. We instantiate our framework to build the first large-scale benchmark, R2E-Eval1, for building realistic environments for AI coding assistants. Our results demonstrate that even when SOTA models cannot generate correct solutions with advanced prompting techniques, they can effectively use environment feedback, highlighting the need to move from static functional coding to an interactive programming paradigm. We hope that our framework (and the instantiated benchmark) can motivate research directions by providing web-scale open-ended coding environments. R2E code is available at https://r2e.dev/
|
https://proceedings.mlr.press/v235/jamshidi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jamshidi24a/jamshidi24a.pdf
|
https://openreview.net/forum?id=oSOZ31ISBV
|
On the sample complexity of conditional independence testing with Von Mises estimator with application to causal discovery
|
https://proceedings.mlr.press/v235/jamshidi24a.html
|
Fateme Jamshidi, Luca Ganassali, Negar Kiyavash
|
https://proceedings.mlr.press/v235/jamshidi24a.html
|
ICML 2024
|
Motivated by conditional independence testing, an essential step in constraint-based causal discovery algorithms, we study the nonparametric Von Mises estimator for the entropy of multivariate distributions built on a kernel density estimator. We establish an exponential concentration inequality for this estimator. We design a test for conditional independence (CI) based on our estimator, called VM-CI, which achieves optimal parametric rates under smoothness assumptions. Leveraging the exponential concentration, we prove a tight upper bound for the overall error of VM-CI. This, in turn, allows us to characterize the sample complexity of any constraint-based causal discovery algorithm that uses VM-CI for CI tests. To the best of our knowledge, this is the first sample complexity guarantee for causal discovery for non-linear models and non-Gaussian continuous variables. Furthermore, we empirically show that VM-CI outperforms other popular CI tests in terms of either time, sample complexity, or both. This enhancement significantly improves the performance in structure learning as well.
|
https://proceedings.mlr.press/v235/jang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jang24a/jang24a.pdf
|
https://openreview.net/forum?id=TtSFg4s3F0
|
Rethinking DP-SGD in Discrete Domain: Exploring Logistic Distribution in the Realm of signSGD
|
https://proceedings.mlr.press/v235/jang24a.html
|
Jonggyu Jang, Seongjin Hwang, Hyun Jong Yang
|
https://proceedings.mlr.press/v235/jang24a.html
|
ICML 2024
|
Deep neural networks (DNNs) have a risk of remembering sensitive data from their training datasets, inadvertently leading to substantial information leakage through privacy attacks like membership inference attacks. DP-SGD is a simple but effective defense method, incorporating Gaussian noise into gradient updates to safeguard sensitive information. With the prevalence of large neural networks, DP-signSGD, a variant of DP-SGD, has emerged, aiming to curtail memory usage while maintaining security. However, it is noteworthy that most DP-signSGD algorithms default to Gaussian noise, suitable only for DP-SGD, with scant discussion of its appropriateness for signSGD. Our study delves into an intriguing question: "Can we find a more efficient substitute for Gaussian noise to secure privacy in DP-signSGD?" We propose an answer with a Logistic mechanism, which conforms to signSGD principles and is interestingly evolved from an exponential mechanism. In this paper, we provide both theoretical and experimental evidence showing that our method surpasses DP-signSGD.
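A hypothetical sketch of the mechanism being discussed (the clipping threshold and noise scale below are placeholders, not the paper's privacy calibration): clip per-example gradients, add per-coordinate Logistic noise in place of Gaussian noise, and take the sign:

```python
# Hypothetical sketch of a signSGD step with per-coordinate Logistic noise
# (noise scale is a placeholder, not the paper's privacy calibration).
import numpy as np

rng = np.random.default_rng(0)

def noisy_sign_grad(per_example_grads, clip=1.0, noise_scale=1.0):
    """per_example_grads: (batch, dim). Clip each example, sum, add Logistic
    noise, and return the sign vector used for the update."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / norms)
    g = clipped.sum(axis=0)
    g += rng.logistic(loc=0.0, scale=noise_scale, size=g.shape)
    return np.sign(g)

def sign_sgd_step(w, per_example_grads, lr=0.01):
    return w - lr * noisy_sign_grad(per_example_grads)

w = np.zeros(10)
grads = rng.normal(size=(32, 10))
print(sign_sgd_step(w, grads))
```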
|
https://proceedings.mlr.press/v235/jang24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jang24b/jang24b.pdf
|
https://openreview.net/forum?id=lwTshcWlmB
|
Degeneration-free Policy Optimization: RL Fine-Tuning for Language Models without Degeneration
|
https://proceedings.mlr.press/v235/jang24b.html
|
Youngsoo Jang, Geon-Hyeong Kim, Byoungjip Kim, Yu Jin Kim, Honglak Lee, Moontae Lee
|
https://proceedings.mlr.press/v235/jang24b.html
|
ICML 2024
|
As the pre-training objectives (e.g., next token prediction) of language models (LMs) are inherently not aligned with task scores, optimizing LMs to achieve higher downstream task scores is essential. One of the promising approaches is to fine-tune LMs through reinforcement learning (RL). However, conventional RL methods based on PPO and a penalty of KL divergence are vulnerable to text degeneration where LMs do not generate natural texts anymore after RL fine-tuning. To address this problem, we provide Degeneration-free Policy Optimization (DfPO) that can fine-tune LMs to generate texts that achieve improved downstream task scores, while preserving the ability to generate natural texts. To achieve this, we introduce KL-masking which masks out the actions that potentially cause deviation from the reference policy when its likelihood is increased or decreased. Then, we devise truncated advantage functions for separately performing likelihood maximization and minimization to improve the task performance. In the experiments, we provide the results of DfPO and baseline algorithms on various generative NLP tasks including text continuation, text detoxification, and commonsense generation. Our experiments demonstrate that DfPO successfully improves the downstream task scores while preserving the ability to generate natural texts, without requiring additional hyperparameter search.
|
https://proceedings.mlr.press/v235/jang24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jang24c/jang24c.pdf
|
https://openreview.net/forum?id=rI6lxIX0uX
|
Visual Representation Learning with Stochastic Frame Prediction
|
https://proceedings.mlr.press/v235/jang24c.html
|
Huiwon Jang, Dongyoung Kim, Junsu Kim, Jinwoo Shin, Pieter Abbeel, Younggyo Seo
|
https://proceedings.mlr.press/v235/jang24c.html
|
ICML 2024
|
Self-supervised learning of image representations by predicting future frames is a promising direction but still remains a challenge. This is because of the under-determined nature of frame prediction; multiple potential futures can arise from a single current frame. To tackle this challenge, in this paper, we revisit the idea of stochastic video generation that learns to capture uncertainty in frame prediction and explore its effectiveness for representation learning. Specifically, we design a framework that trains a stochastic frame prediction model to learn temporal information between frames. Moreover, to learn dense information within each frame, we introduce an auxiliary masked image modeling objective along with a shared decoder architecture. We find this architecture allows for combining both objectives in a synergistic and compute-efficient manner. We demonstrate the effectiveness of our framework on a variety of tasks from video label propagation and vision-based robot learning domains, such as video segmentation, pose tracking, vision-based robotic locomotion, and manipulation tasks. Code is available on the project webpage: https://sites.google.com/view/2024rsp.
|
https://proceedings.mlr.press/v235/jang24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jang24d/jang24d.pdf
|
https://openreview.net/forum?id=s1sdx6vNsU
|
LoRA Training in the NTK Regime has No Spurious Local Minima
|
https://proceedings.mlr.press/v235/jang24d.html
|
Uijeong Jang, Jason D. Lee, Ernest K. Ryu
|
https://proceedings.mlr.press/v235/jang24d.html
|
ICML 2024
|
Low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLM), but our theoretical understanding of LoRA has been limited. In this work, we theoretically analyze LoRA fine-tuning in the neural tangent kernel (NTK) regime with $N$ data points, showing: (i) full fine-tuning (without LoRA) admits a low-rank solution of rank $r\lesssim \sqrt{N}$; (ii) using LoRA with rank $r\gtrsim \sqrt{N}$ eliminates spurious local minima, allowing gradient descent to find the low-rank solutions; (iii) the low-rank solution found using LoRA generalizes well.
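For concreteness, a standard minimal LoRA adapter in PyTorch (a generic sketch of the parametrization analyzed, not code from the paper): a frozen base weight plus a trainable rank-$r$ update $BA$ scaled by $\alpha/r$:

```python
# Minimal generic LoRA layer in PyTorch: frozen base weight plus a trainable
# rank-r update B @ A. This is a standard sketch, not code from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)        # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # update starts at zero
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(128, 64, rank=8)
out = layer(torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 64])
```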
|
https://proceedings.mlr.press/v235/jang24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jang24e/jang24e.pdf
|
https://openreview.net/forum?id=SAEUO7847g
|
Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits
|
https://proceedings.mlr.press/v235/jang24e.html
|
Kyoungseok Jang, Chicheng Zhang, Kwang-Sung Jun
|
https://proceedings.mlr.press/v235/jang24e.html
|
ICML 2024
|
We study low-rank matrix trace regression and the related problem of low-rank matrix bandits. Assuming access to the distribution of the covariates, we propose a novel low-rank matrix estimation method called LowPopArt and provide its recovery guarantee that depends on a novel quantity denoted by $B(Q)$ that characterizes the hardness of the problem, where $Q$ is the covariance matrix of the measurement distribution. We show that our method can provide tighter recovery guarantees than classical nuclear norm penalized least squares (Koltchinskii et al., 2011) in several problems. To perform an efficient estimation with a limited number of measurements from an arbitrarily given measurement set $\mathcal{A}$, we also propose a novel experimental design criterion that minimizes $B(Q)$ with computational efficiency. We leverage our novel estimator and design of experiments to derive two low-rank linear bandit algorithms for general arm sets that enjoy improved regret upper bounds. This improves over previous works on low-rank bandits, which make somewhat restrictive assumptions that the arm set is the unit ball or that an efficient exploration distribution is given. To our knowledge, our experimental design criterion is the first one tailored to low-rank matrix estimation beyond the naive reduction to linear regression, which can be of independent interest.
|
https://proceedings.mlr.press/v235/jaquier24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jaquier24a/jaquier24a.pdf
|
https://openreview.net/forum?id=ndVXXmxSC5
|
Bringing Motion Taxonomies to Continuous Domains via GPLVM on Hyperbolic manifolds
|
https://proceedings.mlr.press/v235/jaquier24a.html
|
Noémie Jaquier, Leonel Rozo, Miguel González-Duque, Viacheslav Borovitskiy, Tamim Asfour
|
https://proceedings.mlr.press/v235/jaquier24a.html
|
ICML 2024
|
Human motion taxonomies serve as high-level hierarchical abstractions that classify how humans move and interact with their environment. They have proven useful to analyse grasps, manipulation skills, and whole-body support poses. Despite substantial efforts devoted to design their hierarchy and underlying categories, their use remains limited. This may be attributed to the lack of computational models that fill the gap between the discrete hierarchical structure of the taxonomy and the high-dimensional heterogeneous data associated to its categories. To overcome this problem, we propose to model taxonomy data via hyperbolic embeddings that capture the associated hierarchical structure. We achieve this by formulating a novel Gaussian process hyperbolic latent variable model that incorporates the taxonomy structure through graph-based priors on the latent space and distance-preserving back constraints. We validate our model on three different human motion taxonomies to learn hyperbolic embeddings that faithfully preserve the original graph structure. We show that our model properly encodes unseen data from existing or new taxonomy categories, and outperforms its Euclidean and VAE-based counterparts. Finally, through proof-of-concept experiments, we show that our model may be used to generate realistic trajectories between the learned embeddings.
|
https://proceedings.mlr.press/v235/javanmard24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/javanmard24a/javanmard24a.pdf
|
https://openreview.net/forum?id=qawwyKqOkj
|
PriorBoost: An Adaptive Algorithm for Learning from Aggregate Responses
|
https://proceedings.mlr.press/v235/javanmard24a.html
|
Adel Javanmard, Matthew Fahrbach, Vahab Mirrokni
|
https://proceedings.mlr.press/v235/javanmard24a.html
|
ICML 2024
|
This work studies algorithms for learning from aggregate responses. We focus on the construction of aggregation sets (called bags in the literature) for event-level loss functions. We prove for linear regression and generalized linear models (GLMs) that the optimal bagging problem reduces to one-dimensional size-constrained $k$-means clustering. Further, we theoretically quantify the advantage of using curated bags over random bags. We then propose the $\texttt{PriorBoost}$ algorithm, which adaptively forms bags of samples that are increasingly homogeneous with respect to (unobserved) individual responses to improve model quality. We study label differential privacy for aggregate learning, and we also provide extensive experiments showing that $\texttt{PriorBoost}$ regularly achieves optimal model quality for event-level predictions, in stark contrast to non-adaptive algorithms.
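A simplified, hedged sketch of the bagging step this reduction suggests: with one-dimensional prior predictions and equal-size bags, clustering contiguous values amounts to sorting the predictions and slicing them into consecutive groups (PriorBoost itself is adaptive and more general than this):

```python
# Simplified, hedged sketch of forming bags from 1D prior predictions: sort the
# predictions and group consecutive items into equal-size bags, so each bag is
# homogeneous in the (predicted) response. The paper's adaptive PriorBoost
# procedure is more general than this.
import numpy as np

def make_bags(prior_predictions, bag_size):
    order = np.argsort(prior_predictions)
    n_full = (len(order) // bag_size) * bag_size
    return np.split(order[:n_full], n_full // bag_size)  # lists of sample indices

preds = np.random.rand(1000)
bags = make_bags(preds, bag_size=10)
print(len(bags), "bags; first bag indices:", bags[0])
```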
|
https://proceedings.mlr.press/v235/jedra24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jedra24a/jedra24a.pdf
|
https://openreview.net/forum?id=Dk0RBrqiyk
|
Low-Rank Bandits via Tight Two-to-Infinity Singular Subspace Recovery
|
https://proceedings.mlr.press/v235/jedra24a.html
|
Yassir Jedra, William Réveillard, Stefan Stojanovic, Alexandre Proutiere
|
https://proceedings.mlr.press/v235/jedra24a.html
|
ICML 2024
|
We study contextual bandits with low-rank structure where, in each round, if the (context, arm) pair $(i,j)\in [m]\times [n]$ is selected, the learner observes a noisy sample of the $(i,j)$-th entry of an unknown low-rank reward matrix. Successive contexts are generated randomly in an i.i.d. manner and are revealed to the learner. For such bandits, we present efficient algorithms for policy evaluation, best policy identification and regret minimization. For policy evaluation and best policy identification, we show that our algorithms are nearly minimax optimal. For instance, the number of samples required to return an $\varepsilon$-optimal policy with probability at least $1-\delta$ typically scales as $\frac{m+n}{\varepsilon^2}\log(1/\delta)$. Our regret minimization algorithm enjoys minimax guarantees typically scaling as $r^{5/4}(m+n)^{3/4}\sqrt{T}$, which improves over existing algorithms. All the proposed algorithms consist of two phases: they first leverage spectral methods to estimate the left and right singular subspaces of the low-rank reward matrix. We show that these estimates enjoy tight error guarantees in the two-to-infinity norm. This in turn allows us to reformulate our problems as a misspecified linear bandit problem with dimension roughly $r(m+n)$ and misspecification controlled by the subspace recovery error, as well as to design the second phase of our algorithms efficiently.
|
https://proceedings.mlr.press/v235/jeeveswaran24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jeeveswaran24a/jeeveswaran24a.pdf
|
https://openreview.net/forum?id=1AAlMSo7Js
|
Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method
|
https://proceedings.mlr.press/v235/jeeveswaran24a.html
|
Kishaan Jeeveswaran, Elahe Arani, Bahram Zonooz
|
https://proceedings.mlr.press/v235/jeeveswaran24a.html
|
ICML 2024
|
Domain incremental learning (DIL) poses a significant challenge in real-world scenarios, as models need to be sequentially trained on diverse domains over time, all the while avoiding catastrophic forgetting. Mitigating representation drift, which refers to the phenomenon of learned representations undergoing changes as the model adapts to new tasks, can help alleviate catastrophic forgetting. In this study, we propose a novel DIL method named DARE, featuring a three-stage training process: Divergence, Adaptation, and REfinement. This process gradually adapts the representations associated with new tasks into the feature space spanned by samples from previous tasks, simultaneously integrating task-specific decision boundaries. Additionally, we introduce a novel strategy for buffer sampling and demonstrate the effectiveness of our proposed method, combined with this sampling strategy, in reducing representation drift within the feature encoder. This contribution effectively alleviates catastrophic forgetting across multiple DIL benchmarks. Furthermore, our approach prevents sudden representation drift at task boundaries, resulting in a well-calibrated DIL model that maintains the performance on previous tasks.
|
https://proceedings.mlr.press/v235/jelassi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jelassi24a/jelassi24a.pdf
|
https://openreview.net/forum?id=duRRoGeoQT
|
Repeat After Me: Transformers are Better than State Space Models at Copying
|
https://proceedings.mlr.press/v235/jelassi24a.html
|
Samy Jelassi, David Brandfonbrener, Sham M. Kakade, Eran Malach
|
https://proceedings.mlr.press/v235/jelassi24a.html
|
ICML 2024
|
Transformers are the dominant architecture for sequence modeling, but there is growing interest in models that use a fixed-size latent state that does not depend on the sequence length, which we refer to as ”generalized state space models” (GSSMs). In this paper we show that while GSSMs are promising in terms of inference-time efficiency, they are limited compared to transformer models on tasks that require copying from the input context. We start with a theoretical analysis of the simple task of string copying and prove that a two layer transformer can copy strings of exponential length while GSSMs are fundamentally limited by their fixed-size latent state. Empirically, we find that transformers outperform GSSMs in terms of efficiency and generalization on synthetic tasks that require copying the context. Finally, we evaluate pretrained large language models and find that transformer models dramatically outperform state space models at copying and retrieving information from context. Taken together, these results suggest a fundamental gap between transformers and GSSMs on tasks of practical interest.
|
https://proceedings.mlr.press/v235/jeon24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jeon24a/jeon24a.pdf
|
https://openreview.net/forum?id=NQn2tYLv5I
|
An Information-Theoretic Analysis of In-Context Learning
|
https://proceedings.mlr.press/v235/jeon24a.html
|
Hong Jun Jeon, Jason D. Lee, Qi Lei, Benjamin Van Roy
|
https://proceedings.mlr.press/v235/jeon24a.html
|
ICML 2024
|
Previous theoretical results pertaining to meta-learning on sequences build on contrived and convoluted mixing time assumptions. We introduce new information-theoretic tools that lead to a concise yet general decomposition of error for a Bayes optimal predictor into two components: meta-learning error and intra-task error. These tools unify analyses across many meta-learning challenges. To illustrate, we apply them to establish new results about in-context learning with transformers and corroborate existing results in a simple linear setting. Our theoretical results characterize how error decays in both the number of training sequences and sequence lengths. Our results are very general; for example, they avoid contrived mixing time assumptions made by all prior results that establish decay of error with sequence length.
|
https://proceedings.mlr.press/v235/jessica24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jessica24a/jessica24a.pdf
|
https://openreview.net/forum?id=WzD4a5ufN8
|
Finite Volume Features, Global Geometry Representations, and Residual Training for Deep Learning-based CFD Simulation
|
https://proceedings.mlr.press/v235/jessica24a.html
|
Loh Sher En Jessica, Naheed Anjum Arafat, Wei Xian Lim, Wai Lee Chan, Adams Wai-Kin Kong
|
https://proceedings.mlr.press/v235/jessica24a.html
|
ICML 2024
|
Computational fluid dynamics (CFD) simulation is an irreplaceable modelling step in many engineering designs, but it is often computationally expensive. Some graph neural network (GNN)-based CFD methods have been proposed. However, the current methods inherit the weakness of traditional numerical simulators, as well as ignore the cell characteristics in the mesh used in the finite volume method, a common method in practical CFD applications. Specifically, the input nodes in these GNN methods have very limited information about any object immersed in the simulation domain and its surrounding environment. Also, the cell characteristics of the mesh such as cell volume, face surface area, and face centroid are not included in the message-passing operations in the GNN methods. To address these weaknesses, this work proposes two novel geometric representations: Shortest Vector (SV) and Directional Integrated Distance (DID). Extracted from the mesh, the SV and DID provide global geometry perspective to each input node, thus removing the need to collect this information through message-passing. This work also introduces the use of Finite Volume Features (FVF) in the graph convolutions as node and edge attributes, enabling its message-passing operations to adjust to different nodes. Finally, this work is the first to demonstrate how residual training, with the availability of low-resolution data, can be adopted to improve the flow field prediction accuracy. Experimental results on two datasets with five different state-of-the-art GNN methods for CFD indicate that SV, DID, FVF and residual training can effectively reduce the predictive error of current GNN-based methods by as much as 41%. Our codes and datasets are available at https://github.com/toggled/FvFGeo.
|
https://proceedings.mlr.press/v235/jesson24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jesson24a/jesson24a.pdf
|
https://openreview.net/forum?id=3umNqxjFad
|
ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
|
https://proceedings.mlr.press/v235/jesson24a.html
|
Andrew Jesson, Chris Lu, Gunshi Gupta, Nicolas Beltran-Velez, Angelos Filos, Jakob Nicolaus Foerster, Yarin Gal
|
https://proceedings.mlr.press/v235/jesson24a.html
|
ICML 2024
|
This paper proposes a step toward approximate Bayesian inference in on-policy actor-critic deep reinforcement learning. It is implemented through three changes to the Asynchronous Advantage Actor-Critic (A3C) algorithm: (1) applying a ReLU function to advantage estimates, (2) spectral normalization of actor-critic weights, and (3) incorporating dropout as a Bayesian approximation. We prove under standard assumptions that restricting policy updates to positive advantages optimizes for value by maximizing a lower bound on the value function plus an additive term. We show that the additive term is bounded proportional to the Lipschitz constant of the value function, which offers theoretical grounding for spectral normalization of critic weights. Finally, our application of dropout corresponds to approximate Bayesian inference over both the actor and critic parameters, which enables adaptive state-aware exploration around the modes of the actor via Thompson sampling. We demonstrate significant improvements for median and interquartile mean metrics over A3C, PPO, SAC, and TD3 on the MuJoCo continuous control benchmark and improvement over PPO in the challenging ProcGen generalization benchmark.
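A hedged sketch of the first of the three changes, the ReLU on advantage estimates, inside a generic policy-gradient loss (spectral normalization and dropout, the other two changes, are not shown):

```python
# Sketch of restricting policy updates to positive advantages by passing the
# advantage estimates through a ReLU before the policy-gradient loss.
# Spectral normalization of the critic and dropout are not shown here.
import torch
import torch.nn.functional as F

def positive_advantage_policy_loss(log_probs, advantages):
    """log_probs: log pi(a_t|s_t) for the taken actions; advantages: A(s_t, a_t)."""
    positive_adv = F.relu(advantages).detach()   # zero out negative advantages
    return -(log_probs * positive_adv).mean()

log_probs = torch.randn(64, requires_grad=True)
advantages = torch.randn(64)
loss = positive_advantage_policy_loss(log_probs, advantages)
loss.backward()
print(float(loss))
```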
|
https://proceedings.mlr.press/v235/ji24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ji24a/ji24a.pdf
|
https://openreview.net/forum?id=szRHR9XGrY
|
Advancing Dynamic Sparse Training by Exploring Optimization Opportunities
|
https://proceedings.mlr.press/v235/ji24a.html
|
Jie Ji, Gen Li, Lu Yin, Minghai Qin, Geng Yuan, Linke Guo, Shiwei Liu, Xiaolong Ma
|
https://proceedings.mlr.press/v235/ji24a.html
|
ICML 2024
|
Dynamic Sparse Training (DST) is an effective approach for addressing the substantial training resource requirements posed by the ever-increasing size of the Deep Neural Networks (DNNs). Characterized by its dynamic "train-prune-grow” schedule during training, DST implicitly develops a bi-level structure for training the weights while discovering a subnetwork topology. However, such a structure is consistently overlooked by the current DST algorithms for further optimization opportunities, and these algorithms, on the other hand, solely optimize the weights while determining masks heuristically. In this paper, we extensively study DST algorithms and argue that the training scheme of DST naturally forms a bi-level problem in which the updating of weight and mask is interdependent. Based on this observation, we introduce a novel efficient training framework called BiDST, which for the first time, introduces bi-level optimization methodology into dynamic sparse training domain. Unlike traditional partial-heuristic DST schemes, which suffer from sub-optimal search efficiency for masks and miss the opportunity to fully explore the topological space of neural networks, BiDST excels at discovering excellent sparse patterns by optimizing mask and weight simultaneously, resulting in maximum 2.62% higher accuracy, 2.1$\times$ faster execution speed, and 25$\times$ reduced overhead. Code available at https://github.com/jjsrf/BiDST-ICML2024.
|
https://proceedings.mlr.press/v235/ji24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ji24b/ji24b.pdf
|
https://openreview.net/forum?id=1puvYh729M
|
ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization
|
https://proceedings.mlr.press/v235/ji24b.html
|
Tianying Ji, Yongyuan Liang, Yan Zeng, Yu Luo, Guowei Xu, Jiawei Guo, Ruijie Zheng, Furong Huang, Fuchun Sun, Huazhe Xu
|
https://proceedings.mlr.press/v235/ji24b.html
|
ICML 2024
|
The varying significance of distinct primitive behaviors during the policy learning process has been overlooked by prior model-free RL algorithms. Leveraging this insight, we explore the causal relationship between different action dimensions and rewards to evaluate the significance of various primitive behaviors during training. We introduce a causality-aware entropy term that effectively identifies and prioritizes actions with high potential impacts for efficient exploration. Furthermore, to prevent excessive focus on specific primitive behaviors, we analyze the gradient dormancy phenomenon and introduce a dormancy-guided reset mechanism to further enhance the efficacy of our method. Our proposed algorithm, ACE: Off-policy Actor-critic with Causality-aware Entropy regularization, demonstrates a substantial performance advantage across 29 diverse continuous control tasks spanning 7 domains compared to model-free RL baselines, which underscores the effectiveness, versatility, and efficient sample efficiency of our approach. Benchmark results and videos are available at https://ace-rl.github.io/.
|
https://proceedings.mlr.press/v235/ji24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ji24c/ji24c.pdf
|
https://openreview.net/forum?id=66k81s33p3
|
Towards Efficient Exact Optimization of Language Model Alignment
|
https://proceedings.mlr.press/v235/ji24c.html
|
Haozhe Ji, Cheng Lu, Yilin Niu, Pei Ke, Hongning Wang, Jun Zhu, Jie Tang, Minlie Huang
|
https://proceedings.mlr.press/v235/ji24c.html
|
ICML 2024
|
The alignment of language models with human preferences is vital for their application in real-world tasks. The problem is formulated as optimizing the model’s policy to maximize the expected reward that reflects human preferences with minimal deviation from the initial policy. While considered as a straightforward solution, reinforcement learning (RL) suffers from high variance in policy updates, which impedes efficient policy improvement. Recently, direct preference optimization (DPO) was proposed to directly optimize the policy from preference data. However, we show that DPO derived based on the optimal solution of the problem leads to a compromised mean-seeking approximation of the optimal solution in practice. In this paper, we propose efficient exact optimization (EXO) of the alignment objective. EXO is guaranteed to optimize in the same direction as RL algorithms asymptotically for arbitrary policy parametrization. This leads to the same mode-seeking solution, while enables efficient optimization by circumventing the complexities of RL. We also compare our method to DPO with both theoretical and empirical analyses, and further demonstrate the advantages of our method over existing approaches on realistic human preference data. Code is available at https://github.com/haozheji/exact-optimization.
|
https://proceedings.mlr.press/v235/ji24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ji24d/ji24d.pdf
|
https://openreview.net/forum?id=9Tq4L3Go9f
|
Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic
|
https://proceedings.mlr.press/v235/ji24d.html
|
Tianying Ji, Yu Luo, Fuchun Sun, Xianyuan Zhan, Jianwei Zhang, Huazhe Xu
|
https://proceedings.mlr.press/v235/ji24d.html
|
ICML 2024
|
Learning high-quality $Q$-value functions plays a key role in the success of many modern off-policy deep reinforcement learning (RL) algorithms. Previous works primarily focus on addressing the value overestimation issue, an outcome of adopting function approximators and off-policy learning. Deviating from the common viewpoint, we observe that $Q$-values are often underestimated in the latter stage of the RL training process, potentially hindering policy learning and reducing sample efficiency. We find that such a long-neglected phenomenon is often related to the use of inferior actions from the current policy in Bellman updates as compared to the more optimal action samples in the replay buffer. We propose the Blended Exploitation and Exploration (BEE) operator, a simple yet effective approach that updates $Q$-value using both historical best-performing actions and the current policy. Based on BEE, the resulting practical algorithm BAC outperforms state-of-the-art methods in over 50 continuous control tasks and achieves strong performance in failure-prone scenarios and real-world robot tasks. Benchmark results and videos are available at https://jity16.github.io/BEE/.
|
https://proceedings.mlr.press/v235/ji24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ji24e/ji24e.pdf
|
https://openreview.net/forum?id=VWCpm39peL
|
Discrete Latent Perspective Learning for Segmentation and Detection
|
https://proceedings.mlr.press/v235/ji24e.html
|
Deyi Ji, Feng Zhao, Lanyun Zhu, Wenwei Jin, Hongtao Lu, Jieping Ye
|
https://proceedings.mlr.press/v235/ji24e.html
|
ICML 2024
|
In this paper, we address the challenge of Perspective-Invariant Learning in machine learning and computer vision, which involves enabling a network to understand images from varying perspectives to achieve consistent semantic interpretation. While standard approaches rely on the labor-intensive collection of multi-view images or limited data augmentation techniques, we propose a novel framework, Discrete Latent Perspective Learning (DLPL), for latent multi-perspective fusion learning using conventional single-view images. DLPL comprises three main modules: Perspective Discrete Decomposition (PDD), Perspective Homography Transformation (PHT), and Perspective Invariant Attention (PIA), which work together to discretize visual features, transform perspectives, and fuse multi-perspective semantic information, respectively. DLPL is a universal perspective learning framework applicable to a variety of scenarios and vision tasks. Extensive experiments demonstrate that DLPL significantly enhances the network’s capacity to depict images across diverse scenarios (daily photos, UAV, auto-driving) and tasks (detection, segmentation).
|
https://proceedings.mlr.press/v235/jia24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jia24a/jia24a.pdf
|
https://openreview.net/forum?id=vGHOFeUQi8
|
Simulation-Based Inference with Quantile Regression
|
https://proceedings.mlr.press/v235/jia24a.html
|
He Jia
|
https://proceedings.mlr.press/v235/jia24a.html
|
ICML 2024
|
We present Neural Quantile Estimation (NQE), a novel Simulation-Based Inference (SBI) method based on conditional quantile regression. NQE autoregressively learns individual one dimensional quantiles for each posterior dimension, conditioned on the data and previous posterior dimensions. Posterior samples are obtained by interpolating the predicted quantiles using monotonic cubic Hermite spline, with specific treatment for the tail behavior and multi-modal distributions. We introduce an alternative definition for the Bayesian credible region using the local Cumulative Density Function (CDF), offering substantially faster evaluation than the traditional Highest Posterior Density Region (HPDR). In case of limited simulation budget and/or known model misspecification, a post-processing calibration step can be integrated into NQE to ensure the unbiasedness of the posterior estimation with negligible additional computational cost. We demonstrate that NQE achieves state-of-the-art performance on a variety of benchmark problems.
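The quantile-interpolation step lends itself to a short sketch: given predicted quantiles for one posterior dimension, a monotone cubic Hermite (PCHIP) interpolant of the quantile function can be sampled by inverse-transform sampling. This omits NQE's tail treatment and multi-modal handling, and the quantile levels below are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.stats import norm

def sample_from_quantiles(quantile_levels, predicted_quantiles, n_samples=1000, rng=None):
    """Inverse-transform sampling through a monotone cubic Hermite interpolant.

    quantile_levels:     increasing levels in (0, 1), e.g. 0.05 ... 0.95.
    predicted_quantiles: the network's quantile estimates at those levels
                         (assumed increasing, for one posterior dimension).
    """
    rng = np.random.default_rng() if rng is None else rng
    inverse_cdf = PchipInterpolator(quantile_levels, predicted_quantiles)
    # Restricting u to the covered levels ignores the tails, which NQE treats separately.
    u = rng.uniform(quantile_levels[0], quantile_levels[-1], size=n_samples)
    return inverse_cdf(u)

# Toy check with a standard-normal posterior dimension.
taus = np.linspace(0.05, 0.95, 19)
samples = sample_from_quantiles(taus, norm.ppf(taus))
```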
|
https://proceedings.mlr.press/v235/jia24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jia24b/jia24b.pdf
|
https://openreview.net/forum?id=Zsz9Pdfvtg
|
GeminiFusion: Efficient Pixel-wise Multimodal Fusion for Vision Transformer
|
https://proceedings.mlr.press/v235/jia24b.html
|
Ding Jia, Jianyuan Guo, Kai Han, Han Wu, Chao Zhang, Chang Xu, Xinghao Chen
|
https://proceedings.mlr.press/v235/jia24b.html
|
ICML 2024
|
Cross-modal transformers have demonstrated superiority in various vision tasks by effectively integrating different modalities. This paper first critiques prior token exchange methods, which replace less informative tokens with inter-modal features, and demonstrates that exchange-based methods underperform cross-attention mechanisms, while the computational demand of the latter inevitably restricts its use with longer sequences. To surmount the computational challenges, we propose GeminiFusion, a pixel-wise fusion approach that capitalizes on aligned cross-modal representations. GeminiFusion elegantly combines intra-modal and inter-modal attentions, dynamically integrating complementary information across modalities. We employ a layer-adaptive noise to adaptively control their interplay on a per-layer basis, thereby achieving a harmonized fusion process. Notably, GeminiFusion maintains linear complexity with respect to the number of input tokens, ensuring this multimodal framework operates with efficiency comparable to unimodal networks. Comprehensive evaluations across multimodal image-to-image translation, $3$D object detection and arbitrary-modal semantic segmentation tasks, including RGB, depth, LiDAR, event data, etc., demonstrate the superior performance of our GeminiFusion against leading-edge techniques. The PyTorch code is available here.
|
https://proceedings.mlr.press/v235/jia24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jia24c/jia24c.pdf
|
https://openreview.net/forum?id=9xUpLGAOy9
|
Chain-of-Thought Predictive Control
|
https://proceedings.mlr.press/v235/jia24c.html
|
Zhiwei Jia, Vineet Thumuluri, Fangchen Liu, Linghao Chen, Zhiao Huang, Hao Su
|
https://proceedings.mlr.press/v235/jia24c.html
|
ICML 2024
|
We study generalizable policy learning from demonstrations for complex low-level control (e.g., contact-rich object manipulations). We propose a novel hierarchical imitation learning method that utilizes sub-optimal demos. Firstly, we propose an observation space-agnostic approach that efficiently discovers the multi-step subskill decomposition of the demos in an unsupervised manner. By grouping temporally close and functionally similar actions into subskill-level demo segments, the observations at the segment boundaries constitute a chain of planning steps for the task, which we refer to as the chain-of-thought (CoT). Next, we propose a Transformer-based design that effectively learns to predict the CoT as the subskill-level guidance. We couple action and subskill predictions via learnable prompt tokens and a hybrid masking strategy, which enable dynamically updated guidance at test time and improve feature representation of the trajectory for generalizable policy learning. Our method, Chain-of-Thought Predictive Control (CoTPC), consistently surpasses existing strong baselines on various challenging low-level manipulation tasks with sub-optimal demos. See project page at https://sites.google.com/view/cotpc.
|
https://proceedings.mlr.press/v235/jiale24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiale24a/jiale24a.pdf
|
https://openreview.net/forum?id=qY63FnLuJ1
|
Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains
|
https://proceedings.mlr.press/v235/jiale24a.html
|
Zhao Jiale, Wanru Zhuang, Jia Song, Yaqi Li, Shuqi Lu
|
https://proceedings.mlr.press/v235/jiale24a.html
|
ICML 2024
|
In recent years, there has been a surge in the development of 3D structure-based pre-trained protein models, representing a significant advancement over pre-trained protein language models in various downstream tasks. However, most existing structure-based pre-trained models primarily focus on the residue level, i.e., alpha carbon atoms, while ignoring other atoms like side chain atoms. We argue that modeling proteins at both residue and atom levels is important since the side chain atoms can also be crucial for numerous downstream tasks, for example, molecular docking. Nevertheless, we find that naively combining residue and atom information during pre-training typically fails. We identify a key reason: the information leakage caused by the inclusion of atom structure in the input renders residue-level pre-training tasks trivial and results in insufficiently expressive residue representations. To address this issue, we introduce a span mask pre-training strategy on 3D protein chains to learn meaningful representations of both residues and atoms. This leads to a simple yet effective approach to learning protein representation suitable for diverse downstream tasks. Extensive experimental results on binding site prediction and function prediction tasks demonstrate that our proposed pre-training approach significantly outperforms other methods. Our code will be made public.
|
https://proceedings.mlr.press/v235/jiang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24a/jiang24a.pdf
|
https://openreview.net/forum?id=elF0QoBSFV
|
NDOT: Neuronal Dynamics-based Online Training for Spiking Neural Networks
|
https://proceedings.mlr.press/v235/jiang24a.html
|
Haiyan Jiang, Giulia De Masi, Huan Xiong, Bin Gu
|
https://proceedings.mlr.press/v235/jiang24a.html
|
ICML 2024
|
Spiking Neural Networks (SNNs) are attracting great attention for their energy-efficient and fast-inference properties in neuromorphic computing. However, the efficient training of deep SNNs poses challenges in gradient calculation due to the non-differentiability of their binary spike-generating activation functions. The widely used surrogate gradient (SG) method, combined with back-propagation through time (BPTT), has shown considerable effectiveness. Yet, BPTT’s process of unfolding and back-propagating along the computation graph requires storing intermediate information at all time-steps, resulting in huge memory consumption and failing to meet online requirements. In this work, we propose Neuronal Dynamics-based Online Training (NDOT) for SNNs, which uses the neuronal dynamics-based temporal dependency/sensitivity in gradient computation. NDOT enables forward-in-time learning by decomposing the full gradient into temporal and spatial gradients. To illustrate the intuition behind NDOT, we employ the Follow-the-Regularized-Leader (FTRL) algorithm. FTRL explicitly utilizes historical information and addresses limitations in instantaneous loss. Our proposed NDOT method accurately captures temporal dependencies through neuronal dynamics, functioning similarly to FTRL’s explicit use of historical information. Experiments on CIFAR-10, CIFAR-100, and CIFAR10-DVS demonstrate the superior performance of our NDOT method on large-scale static and neuromorphic datasets within a small number of time steps. The codes are available at https://github.com/HaiyanJiang/SNN-NDOT.
|
https://proceedings.mlr.press/v235/jiang24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24b/jiang24b.pdf
|
https://openreview.net/forum?id=Wnni3cu39x
|
Conditional Common Entropy for Instrumental Variable Testing and Partial Identification
|
https://proceedings.mlr.press/v235/jiang24b.html
|
Ziwei Jiang, Murat Kocaoglu
|
https://proceedings.mlr.press/v235/jiang24b.html
|
ICML 2024
|
Instrumental variables (IVs) are widely used for estimating causal effects. There are two main challenges when using instrumental variables. First, without additional assumptions such as linearity, the causal effect may still not be identifiable even when an IV is used. Second, when selecting an IV, the validity of the selected IV is typically not testable since the causal graph is not identifiable from observational data. In this paper, we propose a method for bounding the causal effect with instrumental variables under weak confounding. In addition, we present a novel criterion to falsify the IV with side information about the confounder. We demonstrate the utility of the proposed method with simulated and real-world datasets.
|
https://proceedings.mlr.press/v235/jiang24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24c/jiang24c.pdf
|
https://openreview.net/forum?id=07fSWltF6M
|
ProtoGate: Prototype-based Neural Networks with Global-to-local Feature Selection for Tabular Biomedical Data
|
https://proceedings.mlr.press/v235/jiang24c.html
|
Xiangjian Jiang, Andrei Margeloiu, Nikola Simidjievski, Mateja Jamnik
|
https://proceedings.mlr.press/v235/jiang24c.html
|
ICML 2024
|
Tabular biomedical data poses challenges in machine learning because it is often high-dimensional and typically low-sample-size (HDLSS). Previous research has attempted to address these challenges via local feature selection, but existing approaches often fail to achieve optimal performance due to their limitation in identifying globally important features and their susceptibility to the co-adaptation problem. In this paper, we propose ProtoGate, a prototype-based neural model for feature selection on HDLSS data. ProtoGate first selects instance-wise features via adaptively balancing global and local feature selection. Furthermore, ProtoGate employs a non-parametric prototype-based prediction mechanism to tackle the co-adaptation problem, ensuring the feature selection results and predictions are consistent with underlying data clusters. We conduct comprehensive experiments to evaluate the performance and interpretability of ProtoGate on synthetic and real-world datasets. The results show that ProtoGate generally outperforms state-of-the-art methods in prediction accuracy by a clear margin while providing high-fidelity feature selection and explainable predictions. Code is available at https://github.com/SilenceX12138/ProtoGate.
|
https://proceedings.mlr.press/v235/jiang24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24d/jiang24d.pdf
|
https://openreview.net/forum?id=otuTw4Mghk
|
On the Origins of Linear Representations in Large Language Models
|
https://proceedings.mlr.press/v235/jiang24d.html
|
Yibo Jiang, Goutham Rajendran, Pradeep Kumar Ravikumar, Bryon Aragam, Victor Veitch
|
https://proceedings.mlr.press/v235/jiang24d.html
|
ICML 2024
|
An array of recent works have argued that high-level semantic concepts are encoded "linearly" in the representation space of large language models. In this work, we study the origins of such linear representations. To that end, we introduce a latent variable model to abstract and formalize the concept dynamics of the next token prediction. We use this formalism to prove that linearity arises as a consequence of the loss function and the implicit bias of gradient descent. The theory is further substantiated empirically via experiments.
|
https://proceedings.mlr.press/v235/jiang24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24e/jiang24e.pdf
|
https://openreview.net/forum?id=JD03zxWZzs
|
Federated Optimization with Doubly Regularized Drift Correction
|
https://proceedings.mlr.press/v235/jiang24e.html
|
Xiaowen Jiang, Anton Rodomanov, Sebastian U Stich
|
https://proceedings.mlr.press/v235/jiang24e.html
|
ICML 2024
|
Federated learning is a distributed optimization paradigm that allows training machine learning models across decentralized devices while keeping the data localized. The standard method, FedAvg, suffers from client drift which can hamper performance and increase communication costs over centralized methods. Previous works proposed various strategies to mitigate drift, yet none have shown consistently improved communication-computation trade-offs over vanilla gradient descent across all standard function classes. In this work, we revisit DANE, an established method in distributed optimization. We show that (i) DANE can achieve the desired communication reduction under Hessian similarity constraints. Furthermore, (ii) we present an extension, DANE+, which supports arbitrary inexact local solvers and has more freedom to choose how to aggregate the local updates. We propose (iii) a novel method, FedRed, which has improved local computational complexity and retains the same communication complexity compared to DANE/DANE+. This is achieved by doubly regularized drift correction.
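As a rough reference point for the drift-correction idea, the sketch below approximately solves a DANE-style regularized local subproblem with plain gradient descent; the solver, step sizes, and the use of full gradients are simplifications for illustration, not the paper's FedRed/DANE+ updates.

```python
import numpy as np

def dane_local_step(x_t, grad_local, grad_global, local_grad_fn,
                    mu=1.0, lr=0.1, n_steps=50):
    """Approximately solve the drift-corrected local subproblem

        min_x  f_i(x) - <grad f_i(x_t) - grad f(x_t), x> + (mu/2) ||x - x_t||^2

    by gradient descent. `local_grad_fn(x)` returns grad f_i(x); `grad_local` and
    `grad_global` are the local and global gradients at the current iterate x_t.
    """
    correction = grad_local - grad_global  # removes client drift to first order
    x = x_t.copy()
    for _ in range(n_steps):
        g = local_grad_fn(x) - correction + mu * (x - x_t)
        x -= lr * g
    return x
```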
|
https://proceedings.mlr.press/v235/jiang24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24f/jiang24f.pdf
|
https://openreview.net/forum?id=9ANyvRtFGa
|
HexGen: Generative Inference of Large Language Model over Heterogeneous Environment
|
https://proceedings.mlr.press/v235/jiang24f.html
|
Youhe Jiang, Ran Yan, Xiaozhe Yao, Yang Zhou, Beidi Chen, Binhang Yuan
|
https://proceedings.mlr.press/v235/jiang24f.html
|
ICML 2024
|
Serving generative inference of large language models is a crucial component of contemporary AI applications. In this paper, our focus lies in deploying such services in a heterogeneous and cross-datacenter setting to mitigate the substantial inference costs typically associated with a single centralized datacenter. Towards this end, we propose HexGen, a flexible distributed inference engine that uniquely supports the asymmetric partition of generative inference computations over both tensor model parallelism and pipeline parallelism, which allows for effective deployment across diverse GPUs interconnected by a fully heterogeneous network. We further propose a sophisticated scheduling algorithm grounded in constrained optimization that can adaptively assign asymmetric inference computation across the GPUs to fulfill inference requests while maintaining acceptable latency levels. We conduct an extensive empirical study to evaluate the efficiency of HexGen by serving the state-of-the-art Llama-2 (70B) model. The experimental results suggest that HexGen can achieve up to $2.3\times$ lower latency deadlines or tolerate up to $4\times$ higher traffic request rates compared with the homogeneous baseline given the same budget.
|
https://proceedings.mlr.press/v235/jiang24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24g/jiang24g.pdf
|
https://openreview.net/forum?id=36jWuAmGRC
|
Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization
|
https://proceedings.mlr.press/v235/jiang24g.html
|
Wei Jiang, Sifan Yang, Wenhao Yang, Yibo Wang, Yuanyu Wan, Lijun Zhang
|
https://proceedings.mlr.press/v235/jiang24g.html
|
ICML 2024
|
This paper investigates projection-free algorithms for stochastic constrained multi-level optimization. In this context, the objective function is a nested composition of several smooth functions, and the decision set is closed and convex. Existing projection-free algorithms for solving this problem suffer from two limitations: 1) they solely focus on the gradient mapping criterion and fail to match the optimal sample complexities in unconstrained settings; 2) their analysis is exclusively applicable to non-convex functions, without considering convex and strongly convex objectives. To address these issues, we introduce novel projection-free variance reduction algorithms and analyze their complexities under different criteria. For gradient mapping, our complexities improve existing results and match the optimal rates for unconstrained problems. For the widely-used Frank-Wolfe gap criterion, we provide theoretical guarantees that align with those for single-level problems. Additionally, by using a stage-wise adaptation, we further obtain complexities for convex and strongly convex functions. Finally, numerical experiments on different tasks demonstrate the effectiveness of our methods.
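For readers unfamiliar with the Frank-Wolfe gap criterion mentioned above, the snippet below computes it for a generic constraint set via a linear minimization oracle; the oracle shown (an l1-ball vertex solver) is just one illustrative choice.

```python
import numpy as np

def l1_ball_lmo(grad, radius=1.0):
    """Linear minimization oracle for the l1 ball: argmin_{||v||_1 <= r} <grad, v>."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def frank_wolfe_gap(grad, x, lmo=l1_ball_lmo):
    """Frank-Wolfe gap g(x) = max_{v in C} <grad f(x), x - v>; zero iff x is stationary."""
    v = lmo(grad)
    return float(grad @ (x - v))
```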
|
https://proceedings.mlr.press/v235/jiang24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24h/jiang24h.pdf
|
https://openreview.net/forum?id=v7I5FtL2pV
|
Tabular Insights, Visual Impacts: Transferring Expertise from Tables to Images
|
https://proceedings.mlr.press/v235/jiang24h.html
|
Jun-Peng Jiang, Han-Jia Ye, Leye Wang, Yang Yang, Yuan Jiang, De-Chuan Zhan
|
https://proceedings.mlr.press/v235/jiang24h.html
|
ICML 2024
|
Transferring knowledge across diverse data modalities is receiving increasing attention in machine learning. This paper tackles the task of leveraging expert-derived, yet expensive, tabular data to enhance image-based predictions when tabular data is unavailable during inference. The primary challenges stem from the inherent complexity of accurately mapping diverse tabular data to visual contexts, coupled with the necessity to devise distinct strategies for numerical and categorical tabular attributes. We propose CHannel tAbulaR alignment with optiMal tranSport (Charms), which establishes an alignment between image channels and tabular attributes, enabling selective knowledge transfer that is pertinent to visual features. Specifically, Charms measures similarity distributions across modalities to effectively differentiate and transfer relevant tabular features, with a focus on morphological characteristics, enhancing the capabilities of visual classifiers. By maximizing the mutual information between image channels and tabular features, knowledge from both numerical and categorical tabular attributes is extracted. Experimental results demonstrate that Charms not only enhances the performance of image classifiers but also improves their interpretability by effectively utilizing tabular knowledge.
|
https://proceedings.mlr.press/v235/jiang24i.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiang24i/jiang24i.pdf
|
https://openreview.net/forum?id=D4B7kkB89m
|
Generalized Neural Collapse for a Large Number of Classes
|
https://proceedings.mlr.press/v235/jiang24i.html
|
Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin G. Mixon, Chong You, Zhihui Zhu
|
https://proceedings.mlr.press/v235/jiang24i.html
|
ICML 2024
|
Neural collapse provides an elegant mathematical characterization of learned last layer representations (a.k.a. features) and classifier weights in deep classification models. Such results not only provide insights but also motivate new techniques for improving practical deep models. However, most of the existing empirical and theoretical studies in neural collapse focus on the case that the number of classes is small relative to the dimension of the feature space. This paper extends neural collapse to cases where the number of classes is much larger than the dimension of the feature space, which broadly occur in language models, retrieval systems, and face recognition applications. We show that the features and classifier exhibit a generalized neural collapse phenomenon, where the minimum one-vs-rest margin is maximized. We provide an empirical study to verify the occurrence of generalized neural collapse in practical deep neural networks. Moreover, we provide a theoretical study to show that the generalized neural collapse provably occurs under the unconstrained feature model with a spherical constraint, under certain technical conditions on the feature dimension and number of classes.
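To make the margin quantity concrete, the snippet below computes the minimum one-vs-rest margin of a linear classifier over a set of features; this is just the standard margin definition, not the paper's analysis of when it is maximized.

```python
import numpy as np

def min_one_vs_rest_margin(features, labels, classifier_weights):
    """Minimum one-vs-rest margin over a dataset.

    features:           (n, d) last-layer features.
    labels:             (n,) integer class labels.
    classifier_weights: (num_classes, d) linear classifier weights.
    """
    logits = features @ classifier_weights.T                  # (n, num_classes)
    true = logits[np.arange(len(labels)), labels]
    rivals = logits.copy()
    rivals[np.arange(len(labels)), labels] = -np.inf          # mask the true class
    return float((true - rivals.max(axis=1)).min())
```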
|
https://proceedings.mlr.press/v235/jiawei24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jiawei24a/jiawei24a.pdf
|
https://openreview.net/forum?id=ENNGAY5uKC
|
SuDA: Support-based Domain Adaptation for Sim2Real Hinge Joint Tracking with Flexible Sensors
|
https://proceedings.mlr.press/v235/jiawei24a.html
|
Fang Jiawei, Haishan Song, Chengxu Zuo, Xiaoxia Gao, Xiaowei Chen, Shihui Guo, Yipeng Qin
|
https://proceedings.mlr.press/v235/jiawei24a.html
|
ICML 2024
|
Flexible sensors hold promise for human motion capture (MoCap), offering advantages such as wearability, privacy preservation, and minimal constraints on natural movement. However, existing flexible sensor-based MoCap methods rely on deep learning and necessitate large and diverse labeled datasets for training. These data typically need to be collected in MoCap studios with specialized equipment and substantial manual labor, making them difficult and expensive to obtain at scale. Thanks to the high linearity of flexible sensors, we address this challenge by proposing a novel Sim2Real solution for hinge joint tracking based on domain adaptation, eliminating the need for labeled data yet achieving comparable accuracy to supervised learning. Our solution relies on a novel Support-based Domain Adaptation method, namely SuDA, which aligns the supports of the predictive functions rather than the instance-dependent distributions between the source and target domains. Extensive experimental results demonstrate the effectiveness of our method and its superiority over state-of-the-art distribution-based domain adaptation methods in our task.
|
https://proceedings.mlr.press/v235/jie24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jie24a/jie24a.pdf
|
https://openreview.net/forum?id=FHkavpr5Ze
|
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
|
https://proceedings.mlr.press/v235/jie24a.html
|
Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang
|
https://proceedings.mlr.press/v235/jie24a.html
|
ICML 2024
|
Current solutions for efficiently constructing large vision-language (VL) models follow a two-step paradigm: projecting the output of pre-trained vision encoders to the input space of pre-trained language models as visual prompts; and then transferring the models to downstream VL tasks via end-to-end parameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits inefficiency since it significantly increases the input length of the language models. In this paper, in contrast to integrating visual prompts into inputs, we regard visual prompts as additional knowledge that facilitates language models in addressing tasks associated with visual information. Motivated by the finding that Feed-Forward Network (FFN) of language models acts as "key-value memory", we introduce a novel approach termed memory-space visual prompting (MemVP), wherein visual prompts are concatenated with the weights of FFN for visual knowledge injection. Experimental results across various VL tasks and language models reveal that MemVP significantly reduces the training time and inference latency of the finetuned VL models and surpasses the performance of previous PEFT methods.
|
https://proceedings.mlr.press/v235/jin24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24a/jin24a.pdf
|
https://openreview.net/forum?id=zRrzSLwNHQ
|
Homomorphism Counts for Graph Neural Networks: All About That Basis
|
https://proceedings.mlr.press/v235/jin24a.html
|
Emily Jin, Michael M. Bronstein, Ismail Ilkan Ceylan, Matthias Lanzinger
|
https://proceedings.mlr.press/v235/jin24a.html
|
ICML 2024
|
A large body of work has investigated the properties of graph neural networks and identified several limitations, particularly pertaining to their expressive power. Their inability to count certain patterns (e.g., cycles) in a graph lies at the heart of such limitations, since many functions to be learned rely on the ability to count such patterns. Two prominent paradigms aim to address this limitation by enriching the graph features with subgraph or homomorphism pattern counts. In this work, we show that both of these approaches are sub-optimal in a certain sense and argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the “basis” of the target pattern. This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity compared to existing approaches. We prove a series of theoretical results on node-level and graph-level motif parameters and empirically validate them on standard benchmark datasets.
|
https://proceedings.mlr.press/v235/jin24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24b/jin24b.pdf
|
https://openreview.net/forum?id=nkOMLBIiI7
|
LLM Maybe LongLM: SelfExtend LLM Context Window Without Tuning
|
https://proceedings.mlr.press/v235/jin24b.html
|
Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
|
https://proceedings.mlr.press/v235/jin24b.html
|
ICML 2024
|
It is well known that LLMs cannot generalize well to long contexts whose lengths are larger than the training sequence length. This poses challenges when employing LLMs for processing long input sequences during inference. In this work, we argue that LLMs themselves have inherent capabilities to handle long contexts without fine-tuning. To achieve this goal, we propose SelfExtend to extend the context window of LLMs by constructing bi-level attention information: the grouped attention and the neighbor attention. The grouped attention captures the dependencies among tokens that are far apart, while neighbor attention captures dependencies among adjacent tokens within a specified range. The two-level attentions are computed based on the original model’s self-attention mechanism during inference. With minor code modification, our SelfExtend can effortlessly extend existing LLMs’ context window without any fine-tuning. We conduct comprehensive experiments on multiple benchmarks and the results show that our SelfExtend can effectively extend existing LLMs’ context window length.
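A hedged sketch of the bi-level position idea: within a neighbor window, tokens keep their exact relative distance, while distances beyond it are coarsened by floor division so that far-apart tokens reuse position ids seen during training. The offsets used to stitch the two regions together are a simplification, not the published scheme.

```python
import numpy as np

def bilevel_relative_positions(seq_len, neighbor_window=512, group_size=8):
    """Relative-position matrix combining neighbor and grouped attention.

    Entries with distance <= neighbor_window keep their exact value (neighbor
    attention); larger distances are floor-divided by group_size (grouped
    attention). Negative entries correspond to future tokens, which a causal
    mask would discard anyway.
    """
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    dist = q - k
    grouped = neighbor_window + (dist - neighbor_window) // group_size
    return np.where(dist <= neighbor_window, dist, grouped)
```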
|
https://proceedings.mlr.press/v235/jin24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24c/jin24c.pdf
|
https://openreview.net/forum?id=5cm2jGct2W
|
On the Maximal Local Disparity of Fairness-Aware Classifiers
|
https://proceedings.mlr.press/v235/jin24c.html
|
Jinqiu Jin, Haoxuan Li, Fuli Feng
|
https://proceedings.mlr.press/v235/jin24c.html
|
ICML 2024
|
Fairness has become a crucial aspect in the development of trustworthy machine learning algorithms. Current fairness metrics to measure the violation of demographic parity have the following drawbacks: (i) the average difference of model predictions on two groups cannot reflect their distribution disparity, and (ii) the overall calculation along all possible predictions conceals the extreme local disparity at or around certain predictions. In this work, we propose a novel fairness metric called Maximal Cumulative ratio Disparity along varying Predictions’ neighborhood (MCDP), for measuring the maximal local disparity of fairness-aware classifiers. To accurately and efficiently calculate the MCDP, we develop both a provably exact algorithm and an approximate algorithm that greatly reduces the computational complexity with low estimation error. We further propose a bi-level optimization algorithm using a differentiable approximation of the MCDP for improving the algorithmic fairness. Extensive experiments on both tabular and image datasets validate that our fair training algorithm can achieve superior fairness-accuracy trade-offs.
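As a crude brute-force proxy for "local" disparity, the snippet below scans prediction values and reports the largest gap in the two groups' prediction mass around each value. The paper's MCDP is defined via a cumulative ratio over a varying neighborhood and comes with exact and approximate algorithms, so this only conveys the flavor of the quantity under an assumed neighborhood definition.

```python
import numpy as np

def max_local_cdf_disparity(scores, groups, half_width=0.05, grid_size=200):
    """Largest gap between two groups' prediction mass in a sliding neighborhood.

    scores: (n,) model predictions in [0, 1]; groups: (n,) binary group labels.
    `half_width` is the (assumed) neighborhood radius around each prediction value.
    """
    s0, s1 = scores[groups == 0], scores[groups == 1]
    worst = 0.0
    for t in np.linspace(0.0, 1.0, grid_size):
        lo, hi = t - half_width, t + half_width
        p0 = np.mean((s0 >= lo) & (s0 <= hi))
        p1 = np.mean((s1 >= lo) & (s1 <= hi))
        worst = max(worst, abs(p0 - p1))
    return worst
```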
|
https://proceedings.mlr.press/v235/jin24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24d/jin24d.pdf
|
https://openreview.net/forum?id=bzNwexOPWm
|
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
|
https://proceedings.mlr.press/v235/jin24d.html
|
Xisen Jin, Xiang Ren
|
https://proceedings.mlr.press/v235/jin24d.html
|
ICML 2024
|
Language models deployed in the wild make errors. However, simply updating the model with the corrected error instances causes catastrophic forgetting—the updated model makes errors on instances learned during the instruction tuning or upstream training phase. Randomly replaying upstream data yields unsatisfactory performance and often comes with high variance and poor controllability. To this end, we try to forecast upstream examples that will be forgotten due to a model update for improved controllability of the replay process and interpretability. We train forecasting models given a collection of online learned examples and corresponding forgotten upstream pre-training examples. We propose a partially interpretable forecasting model based on the observation that changes in pre-softmax logit scores of pretraining examples resemble those of online learned examples, which performs decently on BART but fails on T5 models. We further show that a black-box classifier based on inner products of example representations achieves better forecasting performance over a series of setups. Finally, we show that we reduce forgetting of upstream pretraining examples by replaying examples that are forecasted to be forgotten, demonstrating the practical utility of forecasting example forgetting.
|
https://proceedings.mlr.press/v235/jin24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24e/jin24e.pdf
|
https://openreview.net/forum?id=8PTx4CpNoT
|
Emergent Representations of Program Semantics in Language Models Trained on Programs
|
https://proceedings.mlr.press/v235/jin24e.html
|
Charles Jin, Martin Rinard
|
https://proceedings.mlr.press/v235/jin24e.html
|
ICML 2024
|
We present evidence that language models (LMs) of code can learn to represent the formal semantics of programs, despite being trained only to perform next-token prediction. Specifically, we train a Transformer model on a synthetic corpus of programs written in a domain-specific language for navigating 2D grid world environments. Each program in the corpus is preceded by a (partial) specification in the form of several input-output grid world states. Despite providing no further inductive biases, we find that a probing classifier is able to extract increasingly accurate representations of the unobserved, intermediate grid world states from the LM hidden states over the course of training, suggesting the LM acquires an emergent ability to interpret programs in the formal sense. We also develop a novel interventional baseline that enables us to disambiguate what is represented by the LM as opposed to learned by the probe. We anticipate that this technique may be generally applicable to a broad range of semantic probing experiments. In summary, this paper does not propose any new techniques for training LMs of code, but develops an experimental framework for and provides insights into the acquisition and representation of formal semantics in statistical models of code.
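A generic probing-classifier sketch in the spirit of the setup above: a linear probe is fit from hidden states to a discrete label encoding the unobserved intermediate state. The data loading, label encoding, and train/test split are placeholders, and the paper's interventional baseline is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_accuracy(hidden_states, state_labels, test_fraction=0.2, seed=0):
    """Fit a linear probe from LM hidden states to intermediate-state labels.

    hidden_states: (n, d) hidden activations collected at program positions.
    state_labels:  (n,) discrete labels for the unobserved grid-world state
                   (e.g., agent facing direction) at those positions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(state_labels))
    cut = int(len(idx) * (1 - test_fraction))
    train, test = idx[:cut], idx[cut:]
    probe = LogisticRegression(max_iter=1000).fit(hidden_states[train], state_labels[train])
    return probe.score(hidden_states[test], state_labels[test])
```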
|
https://proceedings.mlr.press/v235/jin24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24f/jin24f.pdf
|
https://openreview.net/forum?id=S9lk6dk4LL
|
Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization
|
https://proceedings.mlr.press/v235/jin24f.html
|
Yang Jin, Zhicheng Sun, Kun Xu, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, Kun Gai, Yadong Mu
|
https://proceedings.mlr.press/v235/jin24f.html
|
ICML 2024
|
In light of recent advances in multimodal Large Language Models (LLMs), there is increasing attention to scaling them from image-text data to more informative real-world videos. Compared to static images, video poses unique challenges for effective large-scale pre-training due to the modeling of its spatiotemporal dynamics. In this paper, we address such limitations in video-language pre-training with an efficient video decomposition that represents each video as keyframes and temporal motions. These are then adapted to an LLM using well-designed tokenizers that discretize visual and temporal information as a few tokens, thus enabling unified generative pre-training of videos, images, and text. At inference, the generated tokens from the LLM are carefully recovered to the original continuous pixel space to create various video content. Our proposed framework is both capable of comprehending and generating image and video content, as demonstrated by its competitive performance across 13 multimodal benchmarks in image and video understanding and generation. Our code and models are available at https://video-lavit.github.io.
|
https://proceedings.mlr.press/v235/jin24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24g/jin24g.pdf
|
https://openreview.net/forum?id=F3x6uYILgL
|
An Image is Worth Multiple Words: Discovering Object Level Concepts using Multi-Concept Prompt Learning
|
https://proceedings.mlr.press/v235/jin24g.html
|
Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Alexander Teare
|
https://proceedings.mlr.press/v235/jin24g.html
|
ICML 2024
|
Textual Inversion, a prompt learning method, learns a single text embedding for a new "word" to represent image style and appearance, allowing it to be integrated into natural language sentences to generate novel synthesised images. However, identifying multiple unknown object-level concepts within one scene remains a complex challenge. While recent methods have resorted to cropping or masking individual images to learn multiple concepts, these techniques often require prior knowledge of new concepts and are labour-intensive. To address this challenge, we introduce Multi-Concept Prompt Learning (MCPL), where multiple unknown "words" are simultaneously learned from a single sentence-image pair, without any imagery annotations. To enhance the accuracy of word-concept correlation and refine attention mask boundaries, we propose three regularisation techniques: Attention Masking, Prompts Contrastive Loss, and Bind Adjective. Extensive quantitative comparisons with both real-world categories and biomedical images demonstrate that our method can learn new semantically disentangled concepts. Our approach emphasises learning solely from textual embeddings, using less than 10% of the storage space compared to others. The project page, code, and data are available at https://astrazeneca.github.io/mcpl.github.io.
|
https://proceedings.mlr.press/v235/jin24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24h/jin24h.pdf
|
https://openreview.net/forum?id=sYeioWoF9u
|
Language Models as Semantic Indexers
|
https://proceedings.mlr.press/v235/jin24h.html
|
Bowen Jin, Hansi Zeng, Guoyin Wang, Xiusi Chen, Tianxin Wei, Ruirui Li, Zhengyang Wang, Zheng Li, Yang Li, Hanqing Lu, Suhang Wang, Jiawei Han, Xianfeng Tang
|
https://proceedings.mlr.press/v235/jin24h.html
|
ICML 2024
|
Semantic identifier (ID) is an important concept in information retrieval that aims to preserve the semantics of objects such as documents and items inside their IDs. Previous studies typically adopt a two-stage pipeline to learn semantic IDs by first procuring embeddings using off-the-shelf text encoders and then deriving IDs based on the embeddings. However, each step introduces potential information loss, and there is usually an inherent mismatch between the distribution of embeddings within the latent space produced by text encoders and the anticipated distribution required for semantic indexing. It is non-trivial to design a method that can learn the document’s semantic representations and its hierarchical structure simultaneously, given that semantic IDs are discrete and sequentially structured, and the semantic supervision is deficient. In this paper, we introduce LMIndexer, a self-supervised framework to learn semantic IDs with a generative language model. We tackle the challenge of sequential discrete ID by introducing a semantic indexer capable of generating neural sequential discrete representations with progressive training and contrastive learning. In response to the semantic supervision deficiency, we propose to train the model with a self-supervised document reconstruction objective. We show the high quality of the learned IDs and demonstrate their effectiveness on three tasks including recommendation, product search, and document retrieval on five datasets from various domains. Code is available at https://github.com/PeterGriffinJin/LMIndexer.
|
https://proceedings.mlr.press/v235/jin24i.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24i/jin24i.pdf
|
https://openreview.net/forum?id=iroZNDxFJZ
|
Position: What Can Large Language Models Tell Us about Time Series Analysis
|
https://proceedings.mlr.press/v235/jin24i.html
|
Ming Jin, Yifan Zhang, Wei Chen, Kexin Zhang, Yuxuan Liang, Bin Yang, Jindong Wang, Shirui Pan, Qingsong Wen
|
https://proceedings.mlr.press/v235/jin24i.html
|
ICML 2024
|
Time series analysis is essential for comprehending the complexities inherent in various real-world systems and applications. Although large language models (LLMs) have recently made significant strides, the development of artificial general intelligence (AGI) equipped with time series analysis capabilities remains in its nascent phase. Most existing time series models heavily rely on domain knowledge and extensive model tuning, predominantly focusing on prediction tasks. In this paper, we argue that current LLMs have the potential to revolutionize time series analysis, thereby promoting efficient decision-making and advancing towards a more universal form of time series analytical intelligence. Such advancement could unlock a wide range of possibilities, including time series modality switching and question answering. We encourage researchers and practitioners to recognize the potential of LLMs in advancing time series analysis and emphasize the need for trust in these related efforts. Furthermore, we detail the seamless integration of time series analysis with existing LLM technologies and outline promising avenues for future research.
|
https://proceedings.mlr.press/v235/jing24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jing24a/jing24a.pdf
|
https://openreview.net/forum?id=rs8Sh2UASt
|
AlphaFold Meets Flow Matching for Generating Protein Ensembles
|
https://proceedings.mlr.press/v235/jing24a.html
|
Bowen Jing, Bonnie Berger, Tommi Jaakkola
|
https://proceedings.mlr.press/v235/jing24a.html
|
ICML 2024
|
The biological functions of proteins often depend on dynamic structural ensembles. In this work, we develop a flow-based generative modeling approach for learning and sampling the conformational landscapes of proteins. We repurpose highly accurate single-state predictors such as AlphaFold and ESMFold and fine-tune them under a custom flow matching framework to obtain sequence-conditioned generative models of protein structure called AlphaFlow and ESMFlow. When trained and evaluated on the PDB, our method provides a superior combination of precision and diversity compared to AlphaFold with MSA subsampling. When further trained on ensembles from all-atom MD, our method accurately captures conformational flexibility, positional distributions, and higher-order ensemble observables for unseen proteins. Moreover, our method can diversify a static PDB structure with faster wall-clock convergence to certain equilibrium properties than replicate MD trajectories, demonstrating its potential as a proxy for expensive physics-based simulations. Code is available at https://github.com/bjing2016/alphaflow.
|
https://proceedings.mlr.press/v235/jing24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jing24b/jing24b.pdf
|
https://openreview.net/forum?id=0nMzOmkBHC
|
FedSC: Provable Federated Self-supervised Learning with Spectral Contrastive Objective over Non-i.i.d. Data
|
https://proceedings.mlr.press/v235/jing24b.html
|
Shusen Jing, Anlan Yu, Shuai Zhang, Songyang Zhang
|
https://proceedings.mlr.press/v235/jing24b.html
|
ICML 2024
|
Recent efforts have been made to integrate self-supervised learning (SSL) with the framework of federated learning (FL). One unique challenge of federated self-supervised learning (FedSSL) is that the global objective of FedSSL usually does not equal the weighted sum of local SSL objectives. Consequently, conventional approaches, such as federated averaging (FedAvg), fail to precisely minimize the FedSSL global objective, often resulting in suboptimal performance, especially when data is non-i.i.d. To fill this gap, we propose a provable FedSSL algorithm, named FedSC, based on the spectral contrastive objective. In FedSC, clients share correlation matrices of data representations in addition to model weights periodically, which enables inter-client contrast of data samples in addition to intra-client contrast and contraction, resulting in improved quality of data representations. Differential privacy (DP) protection is deployed to control the additional privacy leakage on local datasets when correlation matrices are shared. We provide a theoretical analysis of convergence and extra privacy leakage, and conduct numerical experiments to justify the effectiveness of our proposed algorithm.
|
https://proceedings.mlr.press/v235/jinnai24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jinnai24a/jinnai24a.pdf
|
https://openreview.net/forum?id=qDUaH9xHVV
|
Model-Based Minimum Bayes Risk Decoding for Text Generation
|
https://proceedings.mlr.press/v235/jinnai24a.html
|
Yuu Jinnai, Tetsuro Morimura, Ukyo Honda, Kaito Ariu, Kenshi Abe
|
https://proceedings.mlr.press/v235/jinnai24a.html
|
ICML 2024
|
Minimum Bayes Risk (MBR) decoding has been shown to be a powerful alternative to beam search decoding in a variety of text generation tasks. MBR decoding selects a hypothesis from a pool of hypotheses that has the least expected risk under a probability model according to a given utility function. Since it is impractical to compute the expected risk exactly over all possible hypotheses, two approximations are commonly used in MBR. First, it integrates over a sampled set of hypotheses rather than over all possible hypotheses. Second, it estimates the probability of each hypothesis using a Monte Carlo estimator. While the first approximation is necessary to make it computationally feasible, the second is not essential since we typically have access to the model probability at inference time. We propose model-based MBR (MBMBR), a variant of MBR that uses the model probability itself as the estimate of the probability distribution instead of the Monte Carlo estimate. We show analytically and empirically that the model-based estimate is more promising than the Monte Carlo estimate in text generation tasks. Our experiments show that MBMBR outperforms MBR in several text generation tasks, both with encoder-decoder models and with language models.
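A compact sketch of the selection rule: standard sampling-based MBR weights hypotheses uniformly (the Monte Carlo estimate), while a model-based variant in the spirit of MBMBR weights them by the model's renormalized probabilities. The utility function and the normalization choice here are placeholders, not the paper's exact estimator.

```python
def mbr_select(hypotheses, utility, probs=None):
    """Select a hypothesis by (model-based) Minimum Bayes Risk.

    hypotheses: list of sampled output strings.
    utility:    utility(h, r) -> float, e.g. a sentence-level BLEU or BERTScore.
    probs:      optional model probabilities p(r | x) for each hypothesis; if given,
                the expectation is taken under the (renormalized) model distribution
                instead of the uniform Monte Carlo weights of standard MBR.
    """
    if probs is None:
        weights = [1.0 / len(hypotheses)] * len(hypotheses)
    else:
        z = sum(probs)
        weights = [p / z for p in probs]
    scores = [
        sum(w * utility(h, r) for r, w in zip(hypotheses, weights))
        for h in hypotheses
    ]
    return hypotheses[max(range(len(scores)), key=scores.__getitem__)]

# Toy usage with an exact-match utility: the repeated hypothesis wins.
hyps = ["the cat sat", "the cat sat", "a dog ran"]
best = mbr_select(hyps, utility=lambda h, r: float(h == r))
```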
|
https://proceedings.mlr.press/v235/jo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jo24a/jo24a.pdf
|
https://openreview.net/forum?id=60HydCpCMZ
|
Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes
|
https://proceedings.mlr.press/v235/jo24a.html
|
Jaehyeong Jo, Sung Ju Hwang
|
https://proceedings.mlr.press/v235/jo24a.html
|
ICML 2024
|
Learning the distribution of data on Riemannian manifolds is crucial for modeling data from non-Euclidean space, which is required by many applications in diverse scientific fields. Yet, existing generative models on manifolds suffer from expensive divergence computation or rely on approximations of the heat kernel. These limitations restrict their applicability to simple geometries and hinder scalability to high dimensions. In this work, we introduce the Riemannian Diffusion Mixture, a principled framework for building a generative diffusion process on manifolds. Instead of following the denoising approach of previous diffusion models, we construct a diffusion process using a mixture of bridge processes derived on general manifolds without requiring heat kernel estimations. We develop a geometric understanding of the mixture process, deriving the drift as a weighted mean of tangent directions to the data points that guides the process toward the data distribution. We further propose a scalable training objective for learning the mixture process that readily applies to general manifolds. Our method achieves superior performance on diverse manifolds with a dramatically reduced number of in-training simulation steps for general manifolds.
|
https://proceedings.mlr.press/v235/jo24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jo24b/jo24b.pdf
|
https://openreview.net/forum?id=cZTFxktg23
|
Graph Generation with Diffusion Mixture
|
https://proceedings.mlr.press/v235/jo24b.html
|
Jaehyeong Jo, Dongki Kim, Sung Ju Hwang
|
https://proceedings.mlr.press/v235/jo24b.html
|
ICML 2024
|
Generation of graphs is a major challenge for real-world tasks that require understanding the complex nature of their non-Euclidean structures. Although diffusion models have achieved notable success in graph generation recently, they are ill-suited for modeling the topological properties of graphs since learning to denoise the noisy samples does not explicitly learn the graph structures to be generated. To tackle this limitation, we propose a generative framework that models the topology of graphs by explicitly learning the final graph structures of the diffusion process. Specifically, we design the generative process as a mixture of endpoint-conditioned diffusion processes that is driven toward the predicted graph, resulting in rapid convergence. We further introduce a simple parameterization of the mixture process and develop an objective for learning the final graph structure, which enables maximum likelihood training. Through extensive experimental validation on general graph and 2D/3D molecule generation tasks, we show that our method outperforms previous generative models, generating graphs with correct topology with both continuous (e.g. 3D coordinates) and discrete (e.g. atom types) features. Our code is available at https://github.com/harryjo97/GruM.
|