abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract
---|---|---|---|---|---|---|---|---
https://proceedings.mlr.press/v235/johnson24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/johnson24a/johnson24a.pdf
|
https://openreview.net/forum?id=AVEc9LvSlO
|
Experts Don’t Cheat: Learning What You Don’t Know By Predicting Pairs
|
https://proceedings.mlr.press/v235/johnson24a.html
|
Daniel D. Johnson, Daniel Tarlow, David Duvenaud, Chris J. Maddison
|
https://proceedings.mlr.press/v235/johnson24a.html
|
ICML 2024
|
Identifying how much a model $\hat{p}_{Y|X}^{\theta}$ knows about the stochastic real-world process $p_{Y|X}$ it was trained on is important to ensure it avoids producing incorrect or "hallucinated" answers or taking unsafe actions. But this is difficult for generative models because probabilistic predictions do not distinguish between per-response noise (aleatoric uncertainty) and lack of knowledge about the process (epistemic uncertainty), and existing epistemic uncertainty quantification techniques tend to be overconfident when the model underfits. We propose a general strategy for teaching a model to both approximate $p_{Y|X}$ and also estimate the remaining gaps between $\hat{p}_{Y|X}^{\theta}$ and $p_{Y|X}$: train it to predict pairs of independent responses drawn from the true conditional distribution, allow it to "cheat" by observing one response while predicting the other, then measure how much it cheats. Remarkably, we prove that being good at cheating (i.e. cheating whenever it improves your prediction) is equivalent to being second-order calibrated, a principled extension of ordinary calibration that allows us to construct provably-correct frequentist confidence intervals for $p_{Y|X}$ and detect incorrect responses with high probability. We demonstrate empirically that our approach accurately estimates how much models don’t know across ambiguous image classification, (synthetic) language modeling, and partially-observable navigation tasks, outperforming existing techniques.
|
https://proceedings.mlr.press/v235/jones24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jones24a/jones24a.pdf
|
https://openreview.net/forum?id=ttaTyweIr1
|
Learning to Infer Generative Template Programs for Visual Concepts
|
https://proceedings.mlr.press/v235/jones24a.html
|
R. Kenny Jones, Siddhartha Chaudhuri, Daniel Ritchie
|
https://proceedings.mlr.press/v235/jones24a.html
|
ICML 2024
|
People grasp flexible visual concepts from a few examples. We explore a neurosymbolic system that learns how to infer programs that capture visual concepts in a domain-general fashion. We introduce Template Programs: programmatic expressions from a domain-specific language that specify structural and parametric patterns common to an input concept. Our framework supports multiple concept-related tasks, including few-shot generation and co-segmentation through parsing. We develop a learning paradigm that allows us to train networks that infer Template Programs directly from visual datasets that contain concept groupings. We run experiments across multiple visual domains: 2D layouts, Omniglot characters, and 3D shapes. We find that our method outperforms task-specific alternatives, and performs competitively against domain-specific approaches for the limited domains where they exist.
|
https://proceedings.mlr.press/v235/jonnarth24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jonnarth24a/jonnarth24a.pdf
|
https://openreview.net/forum?id=nCZYRBK1J4
|
Learning Coverage Paths in Unknown Environments with Deep Reinforcement Learning
|
https://proceedings.mlr.press/v235/jonnarth24a.html
|
Arvi Jonnarth, Jie Zhao, Michael Felsberg
|
https://proceedings.mlr.press/v235/jonnarth24a.html
|
ICML 2024
|
Coverage path planning (CPP) is the problem of finding a path that covers the entire free space of a confined area, with applications ranging from robotic lawn mowing to search-and-rescue. When the environment is unknown, the path needs to be planned online while mapping the environment, which cannot be addressed by offline planning methods that do not allow for a flexible path space. We investigate how suitable reinforcement learning is for this challenging problem, and analyze the involved components required to efficiently learn coverage paths, such as action space, input feature representation, neural network architecture, and reward function. We propose a computationally feasible egocentric map representation based on frontiers, and a novel reward term based on total variation to promote complete coverage. Through extensive experiments, we show that our approach surpasses the performance of both previous RL-based approaches and highly specialized methods across multiple CPP variations.
|
https://proceedings.mlr.press/v235/joo24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/joo24a/joo24a.pdf
|
https://openreview.net/forum?id=5WEIVj98Ju
|
IW-GAE: Importance weighted group accuracy estimation for improved calibration and model selection in unsupervised domain adaptation
|
https://proceedings.mlr.press/v235/joo24a.html
|
Taejong Joo, Diego Klabjan
|
https://proceedings.mlr.press/v235/joo24a.html
|
ICML 2024
|
Distribution shifts pose significant challenges for model calibration and model selection tasks in the unsupervised domain adaptation problem—a scenario where the goal is to perform well in a distribution-shifted domain without labels. In this work, we tackle difficulties coming from distribution shifts by developing a novel importance weighted group accuracy estimator. Specifically, we present a new perspective on addressing the model calibration and model selection tasks by estimating the group accuracy. Then, we formulate an optimization problem for finding an importance weight that leads to an accurate group accuracy estimation with theoretical analyses. Our extensive experiments show that our approach improves state-of-the-art performance by 22% in the model calibration task and 14% in the model selection task.
|
https://proceedings.mlr.press/v235/jordahn24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jordahn24a/jordahn24a.pdf
|
https://openreview.net/forum?id=F2Tegvyqlo
|
Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks
|
https://proceedings.mlr.press/v235/jordahn24a.html
|
Mikkel Jordahn, Pablo M. Olmos
|
https://proceedings.mlr.press/v235/jordahn24a.html
|
ICML 2024
|
Deep Neural Networks (DNN) have shown great promise in many classification applications, yet are widely known to have poorly calibrated predictions when they are over-parametrized. Improving DNN calibration without compromising model accuracy is of extreme importance and interest in safety-critical applications such as in the health-care sector. In this work, we show that decoupling the training of feature extraction layers and classification layers in over-parametrized DNN architectures such as Wide Residual Networks (WRN) and Vision Transformers (ViT) significantly improves model calibration whilst retaining accuracy, and at a low training cost. In addition, we show that placing a Gaussian prior on the last hidden layer outputs of a DNN, and training the model variationally in the classification training stage, even further improves calibration. We illustrate that these methods improve calibration across ViT and WRN architectures for several image classification benchmark datasets.
|
https://proceedings.mlr.press/v235/jordan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jordan24a/jordan24a.pdf
|
https://openreview.net/forum?id=Xe7n2ZqpBP
|
Position: Benchmarking is Limited in Reinforcement Learning Research
|
https://proceedings.mlr.press/v235/jordan24a.html
|
Scott M. Jordan, Adam White, Bruno Castro Da Silva, Martha White, Philip S. Thomas
|
https://proceedings.mlr.press/v235/jordan24a.html
|
ICML 2024
|
Novel reinforcement learning algorithms, or improvements on existing ones, are commonly justified by evaluating their performance on benchmark environments and are compared to an ever-changing set of standard algorithms. However, despite numerous calls for improvements, experimental practices continue to produce misleading or unsupported claims. One reason for the ongoing substandard practices is that conducting rigorous benchmarking experiments requires substantial computational time. This work investigates the sources of increased computation costs in rigorous experiment designs. We show that conducting rigorous performance benchmarks will likely have computational costs that are often prohibitive. As a result, we argue for using an additional experimentation paradigm to overcome the limitations of benchmarking.
|
https://proceedings.mlr.press/v235/jovanovic24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jovanovic24a/jovanovic24a.pdf
|
https://openreview.net/forum?id=Wp054bnPq9
|
Watermark Stealing in Large Language Models
|
https://proceedings.mlr.press/v235/jovanovic24a.html
|
Nikola Jovanović, Robin Staab, Martin Vechev
|
https://proceedings.mlr.press/v235/jovanovic24a.html
|
ICML 2024
|
LLM watermarking has attracted attention as a promising way to detect AI-generated content, with some works suggesting that current schemes may already be fit for deployment. In this work we dispute this claim, identifying watermark stealing (WS) as a fundamental vulnerability of these schemes. We show that querying the API of the watermarked LLM to approximately reverse-engineer a watermark enables practical spoofing attacks, as hypothesized in prior work, but also greatly boosts scrubbing attacks, an effect that was previously unnoticed. We are the first to propose an automated WS algorithm and use it in the first comprehensive study of spoofing and scrubbing in realistic settings. We show that for under $50 an attacker can both spoof and scrub state-of-the-art schemes previously considered safe, with an average success rate of over 80%. Our findings challenge common beliefs about LLM watermarking, stressing the need for more robust schemes. We make all our code and additional examples available at https://watermark-stealing.org.
|
https://proceedings.mlr.press/v235/ju24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ju24a/ju24a.pdf
|
https://openreview.net/forum?id=M5ne8enLcr
|
Hypergraph-enhanced Dual Semi-supervised Graph Classification
|
https://proceedings.mlr.press/v235/ju24a.html
|
Wei Ju, Zhengyang Mao, Siyu Yi, Yifang Qin, Yiyang Gu, Zhiping Xiao, Yifan Wang, Xiao Luo, Ming Zhang
|
https://proceedings.mlr.press/v235/ju24a.html
|
ICML 2024
|
In this paper, we study semi-supervised graph classification, which aims at accurately predicting the categories of graphs in scenarios with limited labeled graphs and abundant unlabeled graphs. Despite the promising capability of graph neural networks (GNNs), they typically require a large number of costly labeled graphs, while a wealth of unlabeled graphs fail to be effectively utilized. Moreover, GNNs are inherently limited to encoding local neighborhood information using message-passing mechanisms, thus lacking the ability to model higher-order dependencies among nodes. To tackle these challenges, we propose a Hypergraph-Enhanced DuAL framework named HEAL for semi-supervised graph classification, which captures graph semantics from the perspective of the hypergraph and the line graph, respectively. Specifically, to better explore the higher-order relationships among nodes, we design a hypergraph structure learning module to adaptively learn complex node dependencies beyond pairwise relations. Meanwhile, based on the learned hypergraph, we introduce a line graph to capture the interaction between hyperedges, thereby better mining the underlying semantic structures. Finally, we develop a relational consistency learning scheme to facilitate knowledge transfer between the two branches and provide better mutual guidance. Extensive experiments on real-world graph datasets verify the effectiveness of the proposed method against existing state-of-the-art methods.
|
https://proceedings.mlr.press/v235/ju24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ju24b/ju24b.pdf
|
https://openreview.net/forum?id=dVhrnjZJad
|
NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models
|
https://proceedings.mlr.press/v235/ju24b.html
|
Zeqian Ju, Yuancheng Wang, Kai Shen, Xu Tan, Detai Xin, Dongchao Yang, Eric Liu, Yichong Leng, Kaitao Song, Siliang Tang, Zhizheng Wu, Tao Qin, Xiangyang Li, Wei Ye, Shikun Zhang, Jiang Bian, Lei He, Jinyu Li, Sheng Zhao
|
https://proceedings.mlr.press/v235/ju24b.html
|
ICML 2024
|
While recent large-scale text-to-speech (TTS) models have achieved significant progress, they still fall short in speech quality, similarity, and prosody. Considering that speech intricately encompasses various attributes (e.g., content, prosody, timbre, and acoustic details) that pose significant challenges for generation, a natural idea is to factorize speech into individual subspaces representing different attributes and generate them individually. Motivated by this, we propose a TTS system with novel factorized diffusion models to generate natural speech in a zero-shot way. Specifically, 1) we design a neural codec with factorized vector quantization (FVQ) to disentangle the speech waveform into subspaces of content, prosody, timbre, and acoustic details; 2) we propose a factorized diffusion model, which generates attributes in each subspace following its corresponding prompt. With this factorization design, our method can effectively and efficiently model intricate speech with disentangled subspaces in a divide-and-conquer way. Experimental results show that our method outperforms the state-of-the-art TTS systems on quality, similarity, prosody, and intelligibility.
|
https://proceedings.mlr.press/v235/juergens24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/juergens24a/juergens24a.pdf
|
https://openreview.net/forum?id=mxjB0LIgpT
|
Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?
|
https://proceedings.mlr.press/v235/juergens24a.html
|
Mira Juergens, Nis Meinert, Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
|
https://proceedings.mlr.press/v235/juergens24a.html
|
ICML 2024
|
Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty. Bayesian methods are commonly used to quantify both aleatoric and epistemic uncertainty, but alternative approaches, such as evidential deep learning methods, have become popular in recent years. The latter group of methods in essence extends empirical risk minimization (ERM) to predict second-order probability distributions over outcomes, from which measures of epistemic (and aleatoric) uncertainty can be extracted. This paper presents novel theoretical insights into evidential deep learning, highlighting the difficulties in optimizing second-order loss functions and interpreting the resulting epistemic uncertainty measures. With a systematic setup that covers a wide range of approaches for classification, regression and counts, it provides novel insights into issues of identifiability and convergence in second-order loss minimization, and the relative (rather than absolute) nature of epistemic uncertainty measures.
|
https://proceedings.mlr.press/v235/jun24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jun24a/jun24a.pdf
|
https://openreview.net/forum?id=W8hBNk1FhQ
|
Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization
|
https://proceedings.mlr.press/v235/jun24a.html
|
Kwang-Sung Jun, Jungtaek Kim
|
https://proceedings.mlr.press/v235/jun24a.html
|
ICML 2024
|
Adapting to a priori unknown noise level is a very important but challenging problem in sequential decision-making as efficient exploration typically requires knowledge of the noise level, which is often loosely specified. We report significant progress in addressing this issue in linear bandits in two respects. First, we propose a novel confidence set that is ’semi-adaptive’ to the unknown sub-Gaussian parameter $\sigma_*^2$ in the sense that the (normalized) confidence width scales with $\sqrt{d\sigma_*^2 + \sigma_0^2}$ where $d$ is the dimension and $\sigma_0^2$ is the specified sub-Gaussian parameter (known) that can be much larger than $\sigma_*^2$. This is a significant improvement over $\sqrt{d\sigma_0^2}$ of the standard confidence set of Abbasi-Yadkori et al. (2011), especially when $d$ is large. We show that this leads to an improved regret bound in linear bandits. Second, for bounded rewards, we propose a novel variance-adaptive confidence set that has a much improved numerical performance upon prior art. We then apply this confidence set to develop, as we claim, the first practical variance-adaptive linear bandit algorithm via an optimistic approach, which is enabled by our novel regret analysis technique. Both of our confidence sets rely critically on ‘regret equality’ from online learning. Our empirical evaluation in Bayesian optimization tasks shows that our algorithms demonstrate better or comparable performance compared to existing methods.
|
https://proceedings.mlr.press/v235/jung24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jung24a/jung24a.pdf
|
https://openreview.net/forum?id=9zdTOOgutk
|
Unsupervised Episode Generation for Graph Meta-learning
|
https://proceedings.mlr.press/v235/jung24a.html
|
Jihyeong Jung, Sangwoo Seo, Sungwon Kim, Chanyoung Park
|
https://proceedings.mlr.press/v235/jung24a.html
|
ICML 2024
|
We propose an Unsupervised Episode Generation method called Neighbors as Queries (NaQ) to solve the Few-Shot Node-Classification (FSNC) task by unsupervised Graph Meta-learning. Doing so enables full utilization of the information of all nodes in a graph, which is not possible in current supervised meta-learning methods for FSNC due to the label-scarcity problem. In addition, unlike unsupervised Graph Contrastive Learning (GCL) methods that overlook the downstream task to be solved at the training phase, resulting in vulnerability to class imbalance of a graph, we adopt the episodic learning framework that allows the model to be aware of the downstream task format, i.e., FSNC. The proposed NaQ is a simple but effective unsupervised episode generation method that randomly samples nodes from a graph to make a support set, followed by similarity-based sampling of nodes to make the corresponding query set. Since NaQ is model-agnostic, any existing supervised graph meta-learning methods can be trained in an unsupervised manner without sacrificing much of their performance, and sometimes even improving it. Extensive experimental results demonstrate the effectiveness of our proposed unsupervised episode generation method for graph meta-learning towards the FSNC task. Our code is available at: https://github.com/JhngJng/NaQ-PyTorch.
|
https://proceedings.mlr.press/v235/jung24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/jung24b/jung24b.pdf
|
https://openreview.net/forum?id=mU7FfQT6VE
|
PruNeRF: Segment-Centric Dataset Pruning via 3D Spatial Consistency
|
https://proceedings.mlr.press/v235/jung24b.html
|
Yeonsung Jung, Heecheol Yun, Joonhyung Park, Jin-Hwa Kim, Eunho Yang
|
https://proceedings.mlr.press/v235/jung24b.html
|
ICML 2024
|
Neural Radiance Fields (NeRF) have shown remarkable performance in learning 3D scenes. However, NeRF exhibits vulnerability when confronted with distractors in the training images, i.e., unexpected objects present only within specific views, such as moving entities like pedestrians or birds. Excluding distractors during dataset construction is a straightforward solution, but without prior knowledge of their types and quantities, it becomes prohibitively expensive. In this paper, we propose PruNeRF, a segment-centric dataset pruning framework via 3D spatial consistency that effectively identifies and prunes distractors. We first examine existing metrics for measuring pixel-wise distraction and introduce Influence Functions for more accurate measurements. Then, we assess 3D spatial consistency using a depth-based reprojection technique to obtain 3D-aware distraction. Furthermore, we incorporate segmentation for pixel-to-segment refinement, enabling more precise identification. Our experiments on benchmark datasets demonstrate that PruNeRF consistently outperforms state-of-the-art methods in robustness against distractors.
|
https://proceedings.mlr.press/v235/kabra24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kabra24a/kabra24a.pdf
|
https://openreview.net/forum?id=5Pcl5qOOfL
|
Leveraging VLM-Based Pipelines to Annotate 3D Objects
|
https://proceedings.mlr.press/v235/kabra24a.html
|
Rishabh Kabra, Loic Matthey, Alexander Lerchner, Niloy Mitra
|
https://proceedings.mlr.press/v235/kabra24a.html
|
ICML 2024
|
Pretrained vision language models (VLMs) present an opportunity to caption unlabeled 3D objects at scale. The leading approach to summarize VLM descriptions from different views of an object (Luo et al., 2023) relies on a language model (GPT4) to produce the final output. This text-based aggregation is susceptible to hallucinations as it merges potentially contradictory descriptions. We propose an alternative algorithm to marginalize over factors such as the viewpoint that affect the VLM’s response. Instead of merging text-only responses, we utilize the VLM’s joint image-text likelihoods. We show our probabilistic aggregation is not only more reliable and efficient, but sets the SoTA on inferring object types with respect to human-verified labels. The aggregated annotations are also useful for conditional inference; they improve downstream predictions (e.g., of object material) when the object’s type is specified as an auxiliary text-based input. Such auxiliary inputs allow ablating the contribution of visual reasoning over visionless reasoning in an unsupervised setting. With these supervised and unsupervised evaluations, we show how a VLM-based pipeline can be leveraged to produce reliable annotations for 764K objects from the Objaverse dataset.
|
https://proceedings.mlr.press/v235/kacham24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kacham24a/kacham24a.pdf
|
https://openreview.net/forum?id=ghYrfdJfjK
|
PolySketchFormer: Fast Transformers via Sketching Polynomial Kernels
|
https://proceedings.mlr.press/v235/kacham24a.html
|
Praneeth Kacham, Vahab Mirrokni, Peilin Zhong
|
https://proceedings.mlr.press/v235/kacham24a.html
|
ICML 2024
|
The quadratic time and memory complexity inherent to self-attention mechanisms, with respect to sequence length, presents a critical computational bottleneck in the training and deployment of large-scale Transformer-based language models. Recent theoretical results indicate the intractability of sub-quadratic softmax attention approximation under reasonable complexity assumptions. This paper addresses this challenge by first demonstrating that polynomial attention with high degree can effectively replace softmax without sacrificing model quality. Next, we develop polynomial sketching techniques from numerical linear algebra to achieve linear-time polynomial attention with approximation guarantees. Crucially, our approach achieves this speedup without requiring the sparsification of attention matrices. We also present a block-based algorithm to apply causal masking efficiently. Combining these techniques, we provide PolySketchFormer, a practical linear-time Transformer architecture for language modeling that offers provable guarantees. We validate PolySketchFormer empirically by training language models capable of handling long contexts. These experiments utilize both synthetic and real-world datasets (PG19, Wikipedia and C4) on Google Cloud TPUs. For context lengths of 32k and GPT-2 style models, our model achieves a 2x speedup in training compared to FlashAttention in its fastest configuration, with no observed degradation in quality across our experiments.
|
https://proceedings.mlr.press/v235/kadlecova24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kadlecova24a/kadlecova24a.pdf
|
https://openreview.net/forum?id=EhPpZV6KLk
|
Surprisingly Strong Performance Prediction with Neural Graph Features
|
https://proceedings.mlr.press/v235/kadlecova24a.html
|
Gabriela Kadlecová, Jovita Lukasik, Martin Pilát, Petra Vidnerová, Mahmoud Safari, Roman Neruda, Frank Hutter
|
https://proceedings.mlr.press/v235/kadlecova24a.html
|
ICML 2024
|
Performance prediction has been a key part of the neural architecture search (NAS) process, making it possible to speed up NAS algorithms by avoiding resource-consuming network training. Although many performance predictors correlate well with ground truth performance, they require training data in the form of trained networks. Recently, zero-cost proxies have been proposed as an efficient method to estimate network performance without any training. However, they are still poorly understood, exhibit biases with network properties, and their performance is limited. Inspired by the drawbacks of zero-cost proxies, we propose neural graph features (GRAF), simple-to-compute properties of architectural graphs. GRAF offers fast and interpretable performance prediction while outperforming zero-cost proxies and other common encodings. In combination with other zero-cost proxies, GRAF outperforms most existing performance predictors at a fraction of the cost.
|
https://proceedings.mlr.press/v235/kai24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kai24a/kai24a.pdf
|
https://openreview.net/forum?id=Ry4RAzdOWl
|
EvTexture: Event-driven Texture Enhancement for Video Super-Resolution
|
https://proceedings.mlr.press/v235/kai24a.html
|
Dachun Kai, Jiayao Lu, Yueyi Zhang, Xiaoyan Sun
|
https://proceedings.mlr.press/v235/kai24a.html
|
ICML 2024
|
Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. It has recently been used in video super-resolution (VSR) to enhance flow estimation and temporal alignment. Rather than using events for motion learning, we propose in this paper the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages high-frequency details of events to better recover texture regions in VSR. In our EvTexture, a new texture enhancement branch is presented. We further introduce an iterative texture enhancement module to progressively explore the high-temporal-resolution event information for texture restoration. This allows for gradual refinement of texture regions across multiple iterations, leading to more accurate and rich high-resolution details. Experimental results show that our EvTexture achieves state-of-the-art performance on four datasets. For the Vid4 dataset with rich textures, our method achieves up to a 4.67 dB gain compared with recent event-based methods. Code: https://github.com/DachunKai/EvTexture.
|
https://proceedings.mlr.press/v235/kaissis24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kaissis24a/kaissis24a.pdf
|
https://openreview.net/forum?id=iQTElQbAqo
|
Beyond the Calibration Point: Mechanism Comparison in Differential Privacy
|
https://proceedings.mlr.press/v235/kaissis24a.html
|
Georgios Kaissis, Stefan Kolek, Borja Balle, Jamie Hayes, Daniel Rueckert
|
https://proceedings.mlr.press/v235/kaissis24a.html
|
ICML 2024
|
In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair. This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$, and potentially introduces privacy vulnerabilities which can remain undetected. This motivates the need for robust, rigorous methods for comparing DP guarantees in such cases. Here, we introduce the $\Delta$-divergence between mechanisms which quantifies the worst-case excess privacy vulnerability of choosing one mechanism over another in terms of $(\varepsilon, \delta)$, $f$-DP and in terms of a newly presented Bayesian interpretation. Moreover, as a generalisation of the Blackwell theorem, it is endowed with strong decision-theoretic foundations. Through application examples, we show that our techniques can facilitate informed decision-making and reveal gaps in the current understanding of privacy risks, as current practices in DP-SGD often result in choosing mechanisms with high excess privacy vulnerabilities.
|
https://proceedings.mlr.press/v235/kalavasis24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kalavasis24a/kalavasis24a.pdf
|
https://openreview.net/forum?id=CKCzfU9YKE
|
Replicable Learning of Large-Margin Halfspaces
|
https://proceedings.mlr.press/v235/kalavasis24a.html
|
Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas, Felix Zhou
|
https://proceedings.mlr.press/v235/kalavasis24a.html
|
ICML 2024
|
We provide an efficient replicable algorithm for the problem of learning large-margin halfspaces. Our results improve upon the algorithms provided by Impagliazzo, Lei, Pitassi, and Sorrell (STOC, 2022). We design the first dimension-independent replicable algorithm for this task which runs in polynomial time, is proper, and has strictly improved sample complexity compared to the one achieved by Impagliazzo et al. (STOC, 2022) with respect to all the relevant parameters. Moreover, our algorithm has sample complexity that is optimal with respect to the accuracy parameter $\epsilon$. Departing from the requirement of polynomial time algorithms, using the DP-to-Replicability reduction of Bun et al. (STOC 2023), we show how to obtain a replicable algorithm for large-margin halfspaces with improved sample complexity with respect to the margin parameter $\tau$, but running time doubly exponential in $1/\tau^2$ and worse sample complexity dependence on $\epsilon$ than our previous algorithm. We then design an improved algorithm with better sample complexity than both of our previous algorithms and running time exponential in $1/\tau^{2}.$
|
https://proceedings.mlr.press/v235/kalluri24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kalluri24a/kalluri24a.pdf
|
https://openreview.net/forum?id=sFN49CfklF
|
Tell, Don’t Show: Language Guidance Eases Transfer Across Domains in Images and Videos
|
https://proceedings.mlr.press/v235/kalluri24a.html
|
Tarun Kalluri, Bodhisattwa Prasad Majumder, Manmohan Chandraker
|
https://proceedings.mlr.press/v235/kalluri24a.html
|
ICML 2024
|
We introduce LaGTran, a novel framework that utilizes text supervision to guide robust transfer of discriminative knowledge from labeled source to unlabeled target data with domain gaps. While unsupervised adaptation methods have been established to address this problem, they show limitations in handling challenging domain shifts due to their exclusive operation within the pixel-space. Motivated by our observation that semantically richer text modality has more favorable transfer properties, we devise a transfer mechanism to use a source-trained text-classifier to generate predictions on the target text descriptions, and utilize these predictions as supervision for the corresponding images. Our approach driven by language guidance is surprisingly easy and simple, yet significantly outperforms all prior approaches on challenging datasets like GeoNet and DomainNet, validating its extreme effectiveness. To further extend the scope of our study beyond images, we introduce a new benchmark called Ego2Exo to study ego-exo transfer in videos and find that our language-aided approach LaGTran yields significant gains in this highly challenging and non-trivial transfer setting. Code, models, and proposed datasets are publicly available at https://tarun005.github.io/lagtran/.
|
https://proceedings.mlr.press/v235/kambhampati24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kambhampati24a/kambhampati24a.pdf
|
https://openreview.net/forum?id=Th8JPEmH4z
|
Position: LLMs Can’t Plan, But Can Help Planning in LLM-Modulo Frameworks
|
https://proceedings.mlr.press/v235/kambhampati24a.html
|
Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Mudit Verma, Kaya Stechly, Siddhant Bhambri, Lucas Paul Saldyt, Anil B Murthy
|
https://proceedings.mlr.press/v235/kambhampati24a.html
|
ICML 2024
|
We argue that auto-regressive LLMs cannot, by themselves, do planning or self-verification (which is after all a form of reasoning), and shed some light on the reasons for misunderstandings in the literature. We will also argue that LLMs should be viewed as universal approximate knowledge sources that have much more meaningful roles to play in planning/reasoning tasks beyond simple front-end/back-end format translators. We present a vision of LLM-Modulo Frameworks that combine the strengths of LLMs with external model-based verifiers in a tighter bi-directional interaction regime. We will show how the models driving the external verifiers themselves can be acquired with the help of LLMs. We will also argue that rather than simply pipelining LLMs and symbolic components, this LLM-Modulo Framework provides a better neuro-symbolic approach that offers tighter integration between LLMs and symbolic components, and allows extending the scope of model-based planning/reasoning regimes towards more flexible knowledge, problem and preference specifications.
|
https://proceedings.mlr.press/v235/kamkari24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kamkari24a/kamkari24a.pdf
|
https://openreview.net/forum?id=EVMzCKLpdD
|
A Geometric Explanation of the Likelihood OOD Detection Paradox
|
https://proceedings.mlr.press/v235/kamkari24a.html
|
Hamidreza Kamkari, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini, Rahul Krishnan, Gabriel Loaiza-Ganem
|
https://proceedings.mlr.press/v235/kamkari24a.html
|
ICML 2024
|
Likelihood-based deep generative models (DGMs) commonly exhibit a puzzling behaviour: when trained on a relatively complex dataset, they assign higher likelihood values to out-of-distribution (OOD) data from simpler sources. Adding to the mystery, OOD samples are never generated by these DGMs despite having higher likelihoods. This two-pronged paradox has yet to be conclusively explained, making likelihood-based OOD detection unreliable. Our primary observation is that high-likelihood regions will not be generated if they contain minimal probability mass. We demonstrate how this seeming contradiction of large densities yet low probability mass can occur around data confined to low-dimensional manifolds. We also show that this scenario can be identified through local intrinsic dimension (LID) estimation, and propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM. Our method can be applied to normalizing flows and score-based diffusion models, and obtains results which match or surpass state-of-the-art OOD detection benchmarks using the same DGM backbones. Our code is available at our GitHub repository.
|
https://proceedings.mlr.press/v235/kanamori24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kanamori24a/kanamori24a.pdf
|
https://openreview.net/forum?id=blGpu9aGs6
|
Learning Decision Trees and Forests with Algorithmic Recourse
|
https://proceedings.mlr.press/v235/kanamori24a.html
|
Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, Yuichi Ike
|
https://proceedings.mlr.press/v235/kanamori24a.html
|
ICML 2024
|
This paper proposes a new algorithm for learning accurate tree-based models while ensuring the existence of recourse actions. Algorithmic Recourse (AR) aims to provide a recourse action for altering the undesired prediction result given by a model. Typical AR methods provide a reasonable action by solving an optimization task of minimizing the required effort among executable actions. In practice, however, such actions do not always exist for models optimized only for predictive performance. To alleviate this issue, we formulate the task of learning an accurate classification tree under the constraint of ensuring the existence of reasonable actions for as many instances as possible. Then, we propose an efficient top-down greedy algorithm by leveraging adversarial training techniques. We also show that our proposed algorithm can be applied to random forests, a popular framework for learning tree ensembles. Experimental results demonstrate that our method successfully provides reasonable actions for more instances than the baselines without significantly degrading accuracy and computational efficiency.
|
https://proceedings.mlr.press/v235/kang24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kang24a/kang24a.pdf
|
https://openreview.net/forum?id=FMa4c5NhOe
|
C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models
|
https://proceedings.mlr.press/v235/kang24a.html
|
Mintong Kang, Nezihe Merve Gürel, Ning Yu, Dawn Song, Bo Li
|
https://proceedings.mlr.press/v235/kang24a.html
|
ICML 2024
|
Despite the impressive capabilities of large language models (LLMs) across diverse applications, they still suffer from trustworthiness issues, such as hallucinations and misalignments. Retrieval-augmented language models (RAG) have been proposed to enhance the credibility of generations by grounding them in external knowledge, but the theoretical understanding of their generation risks remains unexplored. In this paper, we answer: 1) whether RAG can indeed lead to low generation risks, 2) how to provide provable guarantees on the generation risks of RAG and vanilla LLMs, and 3) what sufficient conditions enable RAG models to reduce generation risks. We propose C-RAG, the first framework to certify generation risks for RAG models. Specifically, we provide conformal risk analysis for RAG models and certify an upper confidence bound of generation risks, which we refer to as conformal generation risk. We also provide theoretical guarantees on conformal generation risks for general bounded risk functions under test distribution shifts. We prove that RAG achieves a lower conformal generation risk than that of a single LLM when the quality of the retrieval model and transformer is non-trivial. Our extensive empirical results demonstrate the soundness and tightness of our conformal generation risk guarantees across four widely-used NLP datasets on four state-of-the-art retrieval models.
|
https://proceedings.mlr.press/v235/kang24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kang24b/kang24b.pdf
|
https://openreview.net/forum?id=PSQ5Z920M8
|
Think Before You Act: Decision Transformers with Working Memory
|
https://proceedings.mlr.press/v235/kang24b.html
|
Jikun Kang, Romain Laroche, Xingdi Yuan, Adam Trischler, Xue Liu, Jie Fu
|
https://proceedings.mlr.press/v235/kang24b.html
|
ICML 2024
|
Decision Transformer-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and computation. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout training. As a result, training on a new task may deteriorate the model’s performance on previous tasks. In contrast to LLMs’ implicit memory mechanism, the human brain utilizes distributed memory storage, which helps manage and organize multiple skills efficiently, mitigating the forgetting phenomenon. Inspired by this, we propose a working memory module to store, blend, and retrieve information for different downstream tasks. Evaluation results show that the proposed method improves training efficiency and generalization in Atari games and Meta-World object manipulation tasks. Moreover, we demonstrate that memory fine-tuning further enhances the adaptability of the proposed architecture.
|
https://proceedings.mlr.press/v235/kang24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kang24c/kang24c.pdf
|
https://openreview.net/forum?id=4axAQHwBOE
|
Certifiably Byzantine-Robust Federated Conformal Prediction
|
https://proceedings.mlr.press/v235/kang24c.html
|
Mintong Kang, Zhen Lin, Jimeng Sun, Cao Xiao, Bo Li
|
https://proceedings.mlr.press/v235/kang24c.html
|
ICML 2024
|
Conformal prediction has shown impressive capacity in constructing statistically rigorous prediction sets for machine learning models with exchangeable data samples. The siloed datasets, coupled with the escalating privacy concerns related to local data sharing, have inspired recent innovations extending conformal prediction into federated environments with distributed data samples. However, this framework for distributed uncertainty quantification is susceptible to Byzantine failures. A minor subset of malicious clients can significantly compromise the practicality of coverage guarantees. To address this vulnerability, we introduce a novel framework, Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients capable of reporting arbitrary statistics in the conformal calibration process. We theoretically provide the conformal coverage bound of Rob-FCP in the Byzantine setting and show that the coverage of Rob-FCP is asymptotically close to the desired coverage level. We also propose a malicious client number estimator to tackle a more challenging setting where the number of malicious clients is unknown to the defender and theoretically show its effectiveness. We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks on five standard benchmark and real-world healthcare datasets.
|
https://proceedings.mlr.press/v235/kanoh24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kanoh24a/kanoh24a.pdf
|
https://openreview.net/forum?id=YxmcEfcgp3
|
Neural Tangent Kernels for Axis-Aligned Tree Ensembles
|
https://proceedings.mlr.press/v235/kanoh24a.html
|
Ryuichi Kanoh, Mahito Sugiyama
|
https://proceedings.mlr.press/v235/kanoh24a.html
|
ICML 2024
|
While axis-aligned rules are known to induce an important inductive bias in machine learning models such as typical hard decision tree ensembles, theoretical understanding of their learning behavior remains largely lacking due to the discrete nature of rules. To address this issue, we impose the axis-aligned constraint on soft trees, which relax the splitting process of decision trees and are trained using a gradient method, and present their Neural Tangent Kernel (NTK), which enables us to analytically describe the training behavior. We study two cases: imposing the axis-aligned constraint throughout the entire training process, and only at the initial state. Moreover, we extend the NTK framework to handle various tree architectures simultaneously, and prove that any axis-aligned non-oblivious tree ensemble can be transformed into axis-aligned oblivious tree ensembles with the same NTK. One can search for a suitable tree architecture via Multiple Kernel Learning (MKL), and our numerical experiments show a variety of suitable features depending on the type of constraints. Our NTK analysis highlights both the theoretical and practical impacts of the axis-aligned constraint in tree ensemble learning.
|
https://proceedings.mlr.press/v235/kapoor24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kapoor24a/kapoor24a.pdf
|
https://openreview.net/forum?id=jRX6yCxFhx
|
Position: On the Societal Impact of Open Foundation Models
|
https://proceedings.mlr.press/v235/kapoor24a.html
|
Sayash Kapoor, Rishi Bommasani, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Peter Cihon, Aspen K Hopkins, Kevin Bankston, Stella Biderman, Miranda Bogen, Rumman Chowdhury, Alex Engler, Peter Henderson, Yacine Jernite, Seth Lazar, Stefano Maffulli, Alondra Nelson, Joelle Pineau, Aviya Skowron, Dawn Song, Victor Storchan, Daniel Zhang, Daniel E. Ho, Percy Liang, Arvind Narayanan
|
https://proceedings.mlr.press/v235/kapoor24a.html
|
ICML 2024
|
Foundation models are powerful technologies: how they are released publicly directly shapes their societal impact. In this position paper, we focus on open foundation models, defined here as those with broadly available model weights (e.g., Llama 3, Stable Diffusion XL). We identify five distinctive properties (e.g., greater customizability, poor monitoring) that mediate their benefits and risks. Open foundation models present significant benefits, with some caveats, that span innovation, competition, the distribution of decision-making power, and transparency. To understand their risks of misuse, we design a risk assessment framework for analyzing their marginal risk. Across several misuse vectors (e.g., cyberattacks, bioweapons), we find that current research is insufficient to effectively characterize the marginal risk of open foundation models relative to pre-existing technologies. The framework helps explain why the marginal risk is low in some cases, clarifies disagreements about misuse risks by revealing that past work has focused on different subsets of the framework with different assumptions, and articulates a way forward for more constructive debate. Overall, our work helps support a more grounded assessment of the societal impact of open foundation models by outlining what research is needed to empirically validate their theoretical benefits and risks.
|
https://proceedings.mlr.press/v235/karakulev24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karakulev24a/karakulev24a.pdf
|
https://openreview.net/forum?id=v6eaD7Wekw
|
Adaptive Robust Learning using Latent Bernoulli Variables
|
https://proceedings.mlr.press/v235/karakulev24a.html
|
Aleksandr Karakulev, Dave Zachariah, Prashant Singh
|
https://proceedings.mlr.press/v235/karakulev24a.html
|
ICML 2024
|
We present an adaptive approach for robust learning from corrupted training sets. We identify corrupted and non-corrupted samples with latent Bernoulli variables and thus formulate the learning problem as maximization of the likelihood where latent variables are marginalized. The resulting problem is solved via variational inference, using an efficient Expectation-Maximization based method. The proposed approach improves over the state-of-the-art by automatically inferring the corruption level, while adding minimal computational overhead. We demonstrate our robust learning method and its parameter-free nature on a wide variety of machine learning tasks including online learning and deep learning where it adapts to different levels of noise and maintains high prediction accuracy.
|
https://proceedings.mlr.press/v235/karamcheti24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karamcheti24a/karamcheti24a.pdf
|
https://openreview.net/forum?id=6FXtu8clyp
|
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models
|
https://proceedings.mlr.press/v235/karamcheti24a.html
|
Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, Dorsa Sadigh
|
https://proceedings.mlr.press/v235/karamcheti24a.html
|
ICML 2024
|
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning; adoption that has fueled a wealth of new models such as LLaVa, InstructBLIP, and PaLI-3. Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored, making it challenging to understand what factors account for model performance – a challenge further complicated by the lack of objective, consistent evaluations. To address these gaps, we first compile a suite of standardized evaluations spanning visual question answering, object localization, and challenge sets that probe properties such as hallucination; evaluations that provide fine-grained insight into VLM capabilities. Second, we rigorously investigate VLMs along key design axes, including pretrained visual representations and training from base vs. instruct-tuned language models, amongst others. We couple our analysis with three resource contributions: (1) a unified framework for evaluating VLMs, (2) optimized, flexible training code, and (3) checkpoints for all models, including a family of VLMs at the 7-13B scale that strictly outperform InstructBLIP and LLaVa v1.5, the state-of-the-art in open VLMs.
|
https://proceedings.mlr.press/v235/karchmer24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karchmer24a/karchmer24a.pdf
|
https://openreview.net/forum?id=8Z2xWhuT6R
|
On Stronger Computational Separations Between Multimodal and Unimodal Machine Learning
|
https://proceedings.mlr.press/v235/karchmer24a.html
|
Ari Karchmer
|
https://proceedings.mlr.press/v235/karchmer24a.html
|
ICML 2024
|
Recently, multimodal machine learning has enjoyed huge empirical success (e.g. GPT-4). Motivated to develop theoretical justification for this empirical success, Lu (NeurIPS ’23, ALT ’24) introduces a theory of multimodal learning, and considers possible separations between theoretical models of multimodal and unimodal learning. In particular, Lu (ALT ’24) shows a computational separation, which is relevant to worst-case instances of the learning task. In this paper, we give a stronger average-case computational separation, where for "typical" instances of the learning task, unimodal learning is computationally hard, but multimodal learning is easy. We then question how "natural" the average-case separation is. Would it be encountered in practice? To this end, we prove that under basic conditions, any given computational separation between average-case unimodal and multimodal learning tasks implies a corresponding cryptographic key agreement protocol. We suggest to interpret this as evidence that very strong computational advantages of multimodal learning may arise infrequently in practice, since they exist only for the "pathological" case of inherently cryptographic distributions. However, this does not apply to possible (super-polynomial) statistical advantages.
|
https://proceedings.mlr.press/v235/karczewski24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karczewski24a/karczewski24a.pdf
|
https://openreview.net/forum?id=Yqj3DzIC79
|
On the Generalization of Equivariant Graph Neural Networks
|
https://proceedings.mlr.press/v235/karczewski24a.html
|
Rafal Karczewski, Amauri H Souza, Vikas Garg
|
https://proceedings.mlr.press/v235/karczewski24a.html
|
ICML 2024
|
$E(n)$-Equivariant Graph Neural Networks (EGNNs) are among the most widely used and successful models for representation learning on geometric graphs (e.g., 3D molecules). However, while the expressivity of EGNNs has been explored in terms of geometric variants of the Weisfeiler-Leman isomorphism test, characterizing their generalization capability remains open. In this work, we establish the first generalization bound for EGNNs. Our bound depicts a dependence on the weighted sum of logarithms of the spectral norms of the weight matrices (EGNN parameters). In addition, our main result reveals interesting novel insights: $i$) the spectral norms of the initial layers may impact generalization more than the final ones; $ii$) $\varepsilon$-normalization is beneficial to generalization — confirming prior empirical evidence. We leverage these insights to introduce a spectral norm regularizer tailored to EGNNs. Experiments on real-world datasets substantiate our analysis, demonstrating a high correlation between theoretical and empirical generalization gaps and the effectiveness of the proposed regularization scheme.
|
https://proceedings.mlr.press/v235/kargin24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kargin24a/kargin24a.pdf
|
https://openreview.net/forum?id=h3SGdpI4Ta
|
Infinite-Horizon Distributionally Robust Regret-Optimal Control
|
https://proceedings.mlr.press/v235/kargin24a.html
|
Taylan Kargin, Joudi Hajar, Vikrant Malik, Babak Hassibi
|
https://proceedings.mlr.press/v235/kargin24a.html
|
ICML 2024
|
We study the infinite-horizon distributionally robust (DR) control of linear systems with quadratic costs, where disturbances have unknown, possibly time-correlated distribution within a Wasserstein-2 ambiguity set. We aim to minimize the worst-case expected regret—the excess cost of a causal policy compared to a non-causal one with access to future disturbance. Though the optimal policy lacks a finite-order state-space realization (i.e., it is non-rational), it can be characterized by a finite-dimensional parameter. Leveraging this, we develop an efficient frequency-domain algorithm to compute this optimal control policy and present a convex optimization method to construct a near-optimal state-space controller that approximates the optimal non-rational controller in the $\mathit{H}_\infty$-norm. This approach avoids solving a computationally expensive semi-definite program (SDP) that scales with the time horizon in the finite-horizon setting.
|
https://proceedings.mlr.press/v235/karimi-mamaghan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karimi-mamaghan24a/karimi-mamaghan24a.pdf
|
https://openreview.net/forum?id=bqgtkBDkNs
|
Challenges and Considerations in the Evaluation of Bayesian Causal Discovery
|
https://proceedings.mlr.press/v235/karimi-mamaghan24a.html
|
Amir Mohammad Karimi Mamaghan, Panagiotis Tigas, Karl Henrik Johansson, Yarin Gal, Yashas Annadani, Stefan Bauer
|
https://proceedings.mlr.press/v235/karimi-mamaghan24a.html
|
ICML 2024
|
Representing uncertainty in causal discovery is a crucial component for experimental design, and more broadly, for safe and reliable causal decision making. Bayesian Causal Discovery (BCD) offers a principled approach to encapsulating this uncertainty. Unlike non-Bayesian causal discovery, which relies on a single estimated causal graph and model parameters for assessment, evaluating BCD presents challenges due to the nature of its inferred quantity – the posterior distribution. As a result, the research community has proposed various metrics to assess the quality of the approximate posterior. However, there is, to date, no consensus on the most suitable metric(s) for evaluation. In this work, we reexamine this question by dissecting various metrics and understanding their limitations. Through extensive empirical evaluation, we find that many existing metrics fail to exhibit a strong correlation with the quality of approximation to the true posterior, especially in scenarios with low sample sizes where BCD is most desirable. We highlight the suitability (or lack thereof) of these metrics under two distinct factors: the identifiability of the underlying causal model and the quantity of available data. Both factors affect the entropy of the true posterior, indicating that the current metrics are less fitting in settings of higher entropy. Our findings underline the importance of a more nuanced evaluation of new methods by taking into account the nature of the true posterior, as well as guide and motivate the development of new evaluation procedures for this challenge.
|
https://proceedings.mlr.press/v235/kariyappa24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kariyappa24a/kariyappa24a.pdf
|
https://openreview.net/forum?id=hRX1o7FBhT
|
Progressive Inference: Explaining Decoder-Only Sequence Classification Models Using Intermediate Predictions
|
https://proceedings.mlr.press/v235/kariyappa24a.html
|
Sanjay Kariyappa, Freddy Lecue, Saumitra Mishra, Christopher Pond, Daniele Magazzeni, Manuela Veloso
|
https://proceedings.mlr.press/v235/kariyappa24a.html
|
ICML 2024
|
This paper proposes Progressive Inference, a framework to explain the predictions of decoder-only transformer models trained to perform sequence classification tasks. Our work is based on the insight that the classification head of a decoder-only model can be used to make intermediate predictions by evaluating it at different points in the input sequence. Due to the masked attention mechanism used in decoder-only models, these intermediate predictions only depend on the tokens seen before the inference point, allowing us to obtain the model’s prediction on a masked input sub-sequence, with negligible computational overhead. We develop two methods to provide sub-sequence-level attributions using this core insight. First, we propose Single Pass-Progressive Inference (SP-PI) to compute attributions by simply taking the difference between intermediate predictions. Second, we exploit a connection with Kernel SHAP to develop Multi Pass-Progressive Inference (MP-PI); this uses intermediate predictions from multiple masked versions of the input to compute higher-quality attributions that approximate SHAP values. We perform studies on several text classification datasets to demonstrate that our proposal provides better explanations compared to prior work, both in the single-pass and multi-pass settings.
|
https://proceedings.mlr.press/v235/karl24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karl24a/karl24a.pdf
|
https://openreview.net/forum?id=3RXAiU7sss
|
Position: Embracing Negative Results in Machine Learning
|
https://proceedings.mlr.press/v235/karl24a.html
|
Florian Karl, Malte Kemeter, Gabriel Dax, Paulina Sierak
|
https://proceedings.mlr.press/v235/karl24a.html
|
ICML 2024
|
Publications proposing novel machine learning methods are often primarily rated by exhibited predictive performance on selected problems. In this position paper we argue that predictive performance alone is not a good indicator for the worth of a publication. Using it as such even fosters problems like inefficiencies of the machine learning research community as a whole and setting wrong incentives for researchers. We therefore put out a call for the publication of “negative” results, which can help alleviate some of these problems and improve the scientific output of the machine learning research community. To substantiate our position, we present the advantages of publishing negative results and provide concrete measures for the community to move towards a paradigm where their publication is normalized.
|
https://proceedings.mlr.press/v235/karuvally24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/karuvally24a/karuvally24a.pdf
|
https://openreview.net/forum?id=NCjlFw1Ab0
|
Hidden Traveling Waves bind Working Memory Variables in Recurrent Neural Networks
|
https://proceedings.mlr.press/v235/karuvally24a.html
|
Arjun Karuvally, Terrence Sejnowski, Hava T Siegelmann
|
https://proceedings.mlr.press/v235/karuvally24a.html
|
ICML 2024
|
Traveling waves are a fundamental phenomenon in the brain, playing a crucial role in short-term information storage. In this study, we leverage the concept of traveling wave dynamics within a neural lattice to formulate a theoretical model of neural working memory in Recurrent Neural Networks (RNNs), study its properties, and examine its real-world implications in AI. The proposed model diverges from traditional approaches, which assume information storage in static, register-like locations updated by interference. Instead, the model stores data as waves that are updated by the waves’ boundary conditions. We rigorously examine the model’s capabilities in representing and learning state histories, which are vital for learning history-dependent dynamical systems. The findings reveal that the model reliably stores external information and enhances the learning process by addressing the diminishing gradient problem of RNNs. To understand the model’s real-world applicability, we explore two cases: a linear boundary condition and a non-linear, self-attention-driven boundary condition. The experiments reveal that the linear scenario is effectively learned by RNNs through backpropagation when modeling history-dependent dynamical systems. Conversely, the non-linear scenario parallels an attention-only transformer. Collectively, our findings suggest the broader relevance of traveling waves in AI and their potential in advancing neural network architectures.
|
https://proceedings.mlr.press/v235/kato24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kato24a/kato24a.pdf
|
https://openreview.net/forum?id=K6HpbvkrwO
|
Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choice
|
https://proceedings.mlr.press/v235/kato24a.html
|
Masahiro Kato, Akihiro Oga, Wataru Komatsubara, Ryo Inokuchi
|
https://proceedings.mlr.press/v235/kato24a.html
|
ICML 2024
|
This study designs an adaptive experiment for efficiently estimating average treatment effects (ATEs). In each round of our adaptive experiment, an experimenter sequentially samples an experimental unit, assigns a treatment, and observes the corresponding outcome immediately. At the end of the experiment, the experimenter estimates an ATE using the gathered samples. The objective is to estimate the ATE with a smaller asymptotic variance. Existing studies have designed experiments that adaptively optimize the propensity score (treatment-assignment probability). As a generalization of such an approach, we propose optimizing the covariate density as well as the propensity score. First, we derive the efficient covariate density and propensity score that minimize the semiparametric efficiency bound and find that optimizing both covariate density and propensity score minimizes the semiparametric efficiency bound more effectively than optimizing only the propensity score. Next, we design an adaptive experiment using the efficient covariate density and propensity score sequentially estimated during the experiment. Lastly, we propose an ATE estimator whose asymptotic variance aligns with the minimized semiparametric efficiency bound.
|
https://proceedings.mlr.press/v235/kaufman24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kaufman24a/kaufman24a.pdf
|
https://openreview.net/forum?id=geajNKab7g
|
First-Order Manifold Data Augmentation for Regression Learning
|
https://proceedings.mlr.press/v235/kaufman24a.html
|
Ilya Kaufman, Omri Azencot
|
https://proceedings.mlr.press/v235/kaufman24a.html
|
ICML 2024
|
Data augmentation (DA) methods tailored to specific domains generate synthetic samples by applying transformations that are appropriate for the characteristics of the underlying data domain, such as rotations on images and time warping on time series data. In contrast, domain-independent approaches, e.g. mixup, are applicable to various data modalities, and as such they are general and versatile. While regularizing classification tasks via DA is a well-explored research topic, the effect of DA on regression problems received less attention. To bridge this gap, we study the problem of domain-independent augmentation for regression, and we introduce FOMA: a new data-driven domain-independent data augmentation method. Essentially, our approach samples new examples from the tangent planes of the train distribution. Augmenting data in this way aligns with the network tendency towards capturing the dominant features of its input signals. We evaluate FOMA on in-distribution generalization and out-of-distribution robustness benchmarks, and we show that it improves the generalization of several neural architectures. We also find that strong baselines based on mixup are less effective in comparison to our approach. Our code is publicly available at https://github.com/azencot-group/FOMA
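To convey the tangent-plane idea in the abstract above, the hedged sketch below estimates local tangent directions of a joint (X, y) batch with an SVD and perturbs samples within that span. It is only an illustration of sampling near the data manifold; the exact FOMA procedure differs, and the choices of `k` and `scale` are assumptions.

```python
import numpy as np

def tangent_plane_augment(X, y, k=4, scale=0.1, rng=None):
    """Illustrative tangent-plane augmentation for regression (not exact FOMA).

    Estimate the leading local directions of a joint (X, y) batch via SVD and
    perturb samples within that span, so augmented points stay near the data.
    """
    rng = np.random.default_rng() if rng is None else rng
    Z = np.hstack([X, y.reshape(len(y), -1)])          # treat inputs and targets jointly
    Zc = Z - Z.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    tangent = Vt[:k]                                   # leading directions span the tangent plane
    noise = rng.normal(scale=scale, size=(len(Z), k)) @ tangent
    Z_aug = Z + noise
    d = X.shape[1]
    return Z_aug[:, :d], Z_aug[:, d:]                  # augmented (X, y)
```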
|
https://proceedings.mlr.press/v235/kaushik24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kaushik24a/kaushik24a.pdf
|
https://openreview.net/forum?id=s4EYBJ30WY
|
Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance
|
https://proceedings.mlr.press/v235/kaushik24a.html
|
Chiraag Kaushik, Ran Liu, Chi-Heng Lin, Amrit Khera, Matthew Y Jin, Wenrui Ma, Vidya Muthukumar, Eva L Dyer
|
https://proceedings.mlr.press/v235/kaushik24a.html
|
ICML 2024
|
Classification models are expected to perform equally well for different classes, yet in practice, there are often large gaps in their performance. This issue of class bias is widely studied in cases of datasets with sample imbalance, but is relatively overlooked in balanced datasets. In this work, we introduce the concept of spectral imbalance in features as a potential source for class disparities and study the connections between spectral imbalance and class bias in both theory and practice. To build the connection between spectral imbalance and class gap, we develop a theoretical framework for studying class disparities and derive exact expressions for the per-class error in a high-dimensional mixture model setting. We then study this phenomenon in 11 different state-of-the-art pre-trained encoders, and show how our proposed framework can be used to compare the quality of encoders, as well as evaluate and combine data augmentation strategies to mitigate the issue. Our work sheds light on the class-dependent effects of learning, and provides new insights into how state-of-the-art pre-trained features may have unknown biases that can be diagnosed through their spectra.
|
https://proceedings.mlr.press/v235/kazadi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kazadi24a/kazadi24a.pdf
|
https://openreview.net/forum?id=kIHIA6Lr0B
|
Pluvial Flood Emulation with Hydraulics-informed Message Passing
|
https://proceedings.mlr.press/v235/kazadi24a.html
|
Arnold Kazadi, James Doss-Gollin, Arlei Lopes Da Silva
|
https://proceedings.mlr.press/v235/kazadi24a.html
|
ICML 2024
|
Machine Learning (ML) has emerged as a promising alternative to numerical methods for physics-based simulation due to its flexibility and efficiency. Flood modeling is a key case study for ML-based simulation due to its relevance as a tool for supporting preventive and emergency measures to mitigate flood risks. However, the complexity of the topography or domain (ground elevation) and the sparsity of the time-evolving precipitations (external forcing) can be challenging for most existing ML approaches for simulating flooding processes in space and time. Another critical challenge is incorporating physics domain knowledge (hydraulics) into these data-driven models. This paper addresses these challenges by introducing a hydraulics-informed graph neural network for flood simulation. Given a (geographical) region and precipitation data, our model predicts water depths in an auto-regressive fashion. We propose a message-passing framework inspired by the conservation of momentum and mass expressed in the shallow-water equations, which describe the physical process of a flooding event. Empirical results on a dataset covering 9 regions and 7 historical precipitation events demonstrate that our model outperforms the best baseline, and can capture the propagation of water flow more effectively, especially at the very early stage of the flooding event when the amount of water in the domain is scarce. Differently from some of the most recent methods for ML-based simulation, which tend to work well only when the domain is a smooth surface (e.g., flat terrain), we show that our solution achieves accurate results for real ground elevation data.
|
https://proceedings.mlr.press/v235/ke24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ke24a/ke24a.pdf
|
https://openreview.net/forum?id=9PQnc6EWdL
|
Accelerating Convergence in Bayesian Few-Shot Classification
|
https://proceedings.mlr.press/v235/ke24a.html
|
Tianjun Ke, Haoqun Cao, Feng Zhou
|
https://proceedings.mlr.press/v235/ke24a.html
|
ICML 2024
|
Bayesian few-shot classification has been a focal point in the field of few-shot learning. This paper seamlessly integrates mirror descent-based variational inference into Gaussian process-based few-shot classification, addressing the challenge of non-conjugate inference. By leveraging non-Euclidean geometry, mirror descent achieves accelerated convergence by providing the steepest descent direction along the corresponding manifold. It also exhibits the parameterization invariance property concerning the variational distribution. Experimental results demonstrate competitive classification accuracy, improved uncertainty quantification, and faster convergence compared to baseline models. Additionally, we investigate the impact of hyperparameters and components. Code is publicly available at https://github.com/keanson/MD-BSFC.
|
https://proceedings.mlr.press/v235/ke24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ke24b/ke24b.pdf
|
https://openreview.net/forum?id=dqdctVbSfs
|
An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks
|
https://proceedings.mlr.press/v235/ke24b.html
|
Zhifa Ke, Zaiwen Wen, Junyu Zhang
|
https://proceedings.mlr.press/v235/ke24b.html
|
ICML 2024
|
Temporal difference (TD) learning algorithms with neural network function parameterization have well-established empirical success in many practical large-scale reinforcement learning tasks. However, theoretical understanding of these algorithms remains challenging due to the nonlinearity of the action-value approximation. In this paper, we develop an improved non-asymptotic analysis of the neural TD method with a general $L$-layer neural network. New proof techniques are developed and an improved new $\tilde{\mathcal{O}}(\epsilon^{-1})$ sample complexity is derived. To our best knowledge, this is the first finite-time analysis of neural TD that achieves an $\tilde{\mathcal{O}}(\epsilon^{-1})$ complexity under the Markovian sampling, as opposed to the best known $\tilde{\mathcal{O}}(\epsilon^{-2})$ complexity in the existing literature.
|
https://proceedings.mlr.press/v235/ke24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ke24c/ke24c.pdf
|
https://openreview.net/forum?id=M3uv4qDKOL
|
DUPLEX: Dual GAT for Complex Embedding of Directed Graphs
|
https://proceedings.mlr.press/v235/ke24c.html
|
Zhaoru Ke, Hang Yu, Jianguo Li, Haipeng Zhang
|
https://proceedings.mlr.press/v235/ke24c.html
|
ICML 2024
|
Current directed graph embedding methods build upon undirected techniques but often inadequately capture directed edge information, leading to challenges such as: (1) Suboptimal representations for nodes with low in/out-degrees, due to the insufficient neighbor interactions; (2) Limited inductive ability for representing new nodes post-training; (3) Narrow generalizability, as training is overly coupled with specific tasks. In response, we propose DUPLEX, an inductive framework for complex embeddings of directed graphs. It (1) leverages Hermitian adjacency matrix decomposition for comprehensive neighbor integration, (2) employs a dual GAT encoder for directional neighbor modeling, and (3) features two parameter-free decoders to decouple training from particular tasks. DUPLEX outperforms state-of-the-art models, especially for nodes with sparse connectivity, and demonstrates robust inductive capability and adaptability across various tasks. The code will be available upon publication.
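For readers unfamiliar with Hermitian adjacency matrices, the sketch below builds one common variant for a directed graph: reciprocal edges map to 1, and a one-way edge maps to +i and -i. This is background for the kind of matrix DUPLEX decomposes, not the paper's code.

```python
import numpy as np

def hermitian_adjacency(n, edges):
    """Build one common Hermitian adjacency matrix for a directed graph.

    Reciprocal edges get weight 1; a one-way edge u->v gets +i at (u, v)
    and -i at (v, u), so H equals its conjugate transpose.
    """
    H = np.zeros((n, n), dtype=np.complex128)
    edge_set = set(edges)
    for u, v in edge_set:
        if (v, u) in edge_set:
            H[u, v] = H[v, u] = 1.0
        else:
            H[u, v] = 1j
            H[v, u] = -1j
    return H

# Example: one reciprocal pair (0<->1) and one one-way edge (1->2).
H = hermitian_adjacency(3, [(0, 1), (1, 0), (1, 2)])
assert np.allclose(H, H.conj().T)
```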
|
https://proceedings.mlr.press/v235/kedia24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kedia24a/kedia24a.pdf
|
https://openreview.net/forum?id=30waYPIZUA
|
Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models
|
https://proceedings.mlr.press/v235/kedia24a.html
|
Akhil Kedia, Mohd Abbas Zaidi, Sushil Khyalia, Jungho Jung, Harshith Goka, Haejun Lee
|
https://proceedings.mlr.press/v235/kedia24a.html
|
ICML 2024
|
In spite of their huge success, transformer models remain difficult to scale in depth. In this work, we develop a unified signal propagation theory and provide formulae that govern the moments of the forward and backward signal through the transformer model. Our framework can be used to understand and mitigate vanishing/exploding gradients, rank collapse, and instability associated with high attention scores. We also propose DeepScaleLM, an initialization and scaling scheme that conserves unit output/gradient moments throughout the model, enabling the training of very deep models with 1000 layers. We find that transformer models could be much deeper - our deep models with fewer parameters outperform shallow models in Language Modeling, Speech Translation, and Image Classification, across encoder-only, decoder-only and encoder-decoder variants, for both Pre-LN and Post-LN transformers, for multiple datasets and model sizes. These improvements also translate into improved performance on downstream Question Answering tasks and improved robustness for Image Classification.
|
https://proceedings.mlr.press/v235/kerger24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kerger24a/kerger24a.pdf
|
https://openreview.net/forum?id=aPhwhueqjR
|
A Universal Transfer Theorem for Convex Optimization Algorithms Using Inexact First-order Oracles
|
https://proceedings.mlr.press/v235/kerger24a.html
|
Phillip Kerger, Marco Molinaro, Hongyi Jiang, Amitabh Basu
|
https://proceedings.mlr.press/v235/kerger24a.html
|
ICML 2024
|
Given any algorithm for convex optimization that uses exact first-order information (i.e., function values and subgradients), we show how to use such an algorithm to solve the problem with access to inexact first-order information. This is done in a “black-box” manner without knowledge of the internal workings of the algorithm. This complements previous work that considers the performance of specific algorithms like (accelerated) gradient descent with inexact information. In particular, our results apply to a wider range of algorithms beyond variants of gradient descent, e.g., projection-free methods, cutting-plane methods, or any other first-order methods formulated in the future. Further, they also apply to algorithms that handle structured nonconvexities like mixed-integer decision variables.
|
https://proceedings.mlr.press/v235/keswani24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/keswani24a/keswani24a.pdf
|
https://openreview.net/forum?id=XSsoggg8pz
|
Fair Classification with Partial Feedback: An Exploration-Based Data Collection Approach
|
https://proceedings.mlr.press/v235/keswani24a.html
|
Vijay Keswani, Anay Mehrotra, L. Elisa Celis
|
https://proceedings.mlr.press/v235/keswani24a.html
|
ICML 2024
|
In many predictive contexts (e.g., credit lending), true outcomes are only observed for samples that were positively classified in the past. These past observations, in turn, form training datasets for classifiers that make future predictions. However, such training datasets lack information about the outcomes of samples that were (incorrectly) negatively classified in the past and can lead to erroneous classifiers. We present an approach that trains a classifier using available data and comes with a family of exploration strategies to collect outcome data about subpopulations that otherwise would have been ignored. For any exploration strategy, the approach comes with guarantees that (1) all sub-populations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier. The right exploration strategy is context-dependent; it can be chosen to improve learning guarantees and encode context-specific group fairness properties. Evaluation on real-world datasets shows that this approach consistently boosts the quality of collected outcome data and improves the fraction of true positives for all groups, with only a small reduction in predictive utility.
|
https://proceedings.mlr.press/v235/khalafi24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/khalafi24a/khalafi24a.pdf
|
https://openreview.net/forum?id=61JD8wp4Id
|
Neural Tangent Kernels Motivate Cross-Covariance Graphs in Neural Networks
|
https://proceedings.mlr.press/v235/khalafi24a.html
|
Shervin Khalafi, Saurabh Sihag, Alejandro Ribeiro
|
https://proceedings.mlr.press/v235/khalafi24a.html
|
ICML 2024
|
Neural tangent kernels (NTKs) provide a theoretical regime to analyze the learning and generalization behavior of over-parametrized neural networks. For a supervised learning task, the association between the eigenvectors of the NTK and given data (a concept referred to as alignment in this paper) can govern the rate of convergence of gradient descent, as well as generalization to unseen data. Building upon this concept and leveraging the structure of NTKs for graph neural networks (GNNs), we theoretically investigate NTKs and alignment, where our analysis reveals that optimizing the alignment translates to optimizing the graph representation or the graph shift operator (GSO) in a GNN. Our results further establish theoretical guarantees on the optimality of the alignment for a two-layer GNN and these guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data. The theoretical insights drawn from the analysis of NTKs are validated by our experiments focused on a multi-variate time series prediction task for a publicly available dataset. Specifically, they demonstrate that GNN-based learning models that operate on the cross-covariance matrix indeed outperform those that operate on the covariance matrix estimated from only the input data.
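A minimal sketch of the practical takeaway, namely operating a GNN on a cross-covariance graph: estimate the cross-covariance between input and output signals and use its symmetrized version as the graph shift operator. The centering and symmetrization steps are illustrative assumptions.

```python
import numpy as np

def cross_covariance_gso(X, Y):
    """Estimate a graph shift operator from input/output cross-covariance.

    X, Y: arrays of shape (n_samples, n_nodes) holding input and output
    graph signals. Centering and symmetrization are illustrative choices.
    """
    Xc = X - X.mean(axis=0, keepdims=True)
    Yc = Y - Y.mean(axis=0, keepdims=True)
    C = Xc.T @ Yc / (X.shape[0] - 1)   # (n_nodes, n_nodes) cross-covariance
    return 0.5 * (C + C.T)             # symmetric GSO to plug into a GNN
```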
|
https://proceedings.mlr.press/v235/khaled24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/khaled24a/khaled24a.pdf
|
https://openreview.net/forum?id=A6fmX9QCEa
|
Tuning-Free Stochastic Optimization
|
https://proceedings.mlr.press/v235/khaled24a.html
|
Ahmed Khaled, Chi Jin
|
https://proceedings.mlr.press/v235/khaled24a.html
|
ICML 2024
|
Large-scale machine learning problems make the cost of hyperparameter tuning ever more prohibitive. This creates a need for algorithms that can tune themselves on-the-fly. We formalize the notion of “tuning-free” algorithms that can match the performance of optimally-tuned optimization algorithms up to polylogarithmic factors given only loose hints on the relevant problem parameters. We consider in particular algorithms that can match optimally-tuned Stochastic Gradient Descent (SGD). When the domain of optimization is bounded, we show tuning-free matching of SGD is possible and achieved by several existing algorithms. We prove that for the task of minimizing a convex and smooth or Lipschitz function over an unbounded domain, tuning-free optimization is impossible. We discuss conditions under which tuning-free optimization is possible even over unbounded domains. In particular, we show that the recently proposed DoG and DoWG algorithms are tuning-free when the noise distribution is sufficiently well-behaved. For the task of finding a stationary point of a smooth and potentially nonconvex function, we give a variant of SGD that matches the best-known high-probability convergence rate for tuned SGD at only an additional polylogarithmic cost. However, we also give an impossibility result that shows no algorithm can hope to match the optimal expected convergence rate for tuned SGD with high probability.
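As a concrete example of the kind of tuning-free method discussed, the sketch below implements a DoG-style step size: the maximum distance travelled from the initial iterate divided by the accumulated gradient norm. The exact constants and the initial-movement hint `r_eps` are simplified assumptions, not the precise algorithms analyzed in the paper.

```python
import numpy as np

def dog_sgd(grad_fn, x0, steps=1000, r_eps=1e-6):
    """DoG-style parameter-free SGD (sketch).

    Step size = (max distance travelled from x0) / sqrt(sum of squared
    gradient norms). `r_eps` is a small initial-movement hint.
    """
    x = x0.astype(float).copy()
    max_dist = r_eps
    grad_norm_sq = 1e-12
    for _ in range(steps):
        g = grad_fn(x)
        grad_norm_sq += float(np.dot(g, g))
        max_dist = max(max_dist, float(np.linalg.norm(x - x0)))
        eta = max_dist / np.sqrt(grad_norm_sq)   # no learning-rate tuning needed
        x = x - eta * g
    return x
```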
|
https://proceedings.mlr.press/v235/khan24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/khan24a/khan24a.pdf
|
https://openreview.net/forum?id=iLCZtl7FTa
|
Debating with More Persuasive LLMs Leads to More Truthful Answers
|
https://proceedings.mlr.press/v235/khan24a.html
|
Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R. Bowman, Tim Rocktäschel, Ethan Perez
|
https://proceedings.mlr.press/v235/khan24a.html
|
ICML 2024
|
Common methods for aligning large language models (LLMs) with desired behaviour heavily rely on human-labelled data. However, as models grow increasingly sophisticated, they will surpass human expertise, and the role of human evaluation will evolve into non-experts overseeing experts. In anticipation of this, we ask: can weaker models assess the correctness of stronger models? We investigate this question in an analogous setting, where stronger models (experts) possess the necessary information to answer questions and weaker models (non-experts) lack this information. The method we evaluate is debate, where two LLM experts each argue for a different answer, and a non-expert selects the answer. We find that debate consistently helps both non-expert models and humans answer questions, achieving 76% and 88% accuracy respectively (naive baselines obtain 48% and 60%). Furthermore, optimising expert debaters for persuasiveness in an unsupervised manner improves non-expert ability to identify the truth in debates. Our results provide encouraging empirical evidence for the viability of aligning models with debate in the absence of ground truth.
|
https://proceedings.mlr.press/v235/khan24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/khan24b/khan24b.pdf
|
https://openreview.net/forum?id=oiY7yhyi6W
|
Off-policy Evaluation Beyond Overlap: Sharp Partial Identification Under Smoothness
|
https://proceedings.mlr.press/v235/khan24b.html
|
Samir Khan, Martin Saveski, Johan Ugander
|
https://proceedings.mlr.press/v235/khan24b.html
|
ICML 2024
|
Off-policy evaluation, and the complementary problem of policy learning, use historical data collected under a logging policy to estimate and/or optimize the value of a target policy. Methods for these tasks typically assume overlap between the target and logging policy, enabling solutions based on importance weighting and/or imputation. Absent such an overlap assumption, existing work either relies on a well-specified model or optimizes needlessly conservative bounds. In this work, we develop methods for no-overlap policy evaluation without a well-specified model, relying instead on non-parametric assumptions on the expected outcome, with a particular focus on Lipschitz smoothness. Under such assumptions we are able to provide sharp bounds on the off-policy value, along with optimal estimators of those bounds. For Lipschitz smoothness, we construct a pair of linear programs that upper and lower bound the contribution of the no-overlap region to the off-policy value. We show that these programs have a concise closed-form solution, and that their solutions converge under the Lipschitz assumption to the sharp partial identification bounds at a minimax optimal rate, up to log factors. We demonstrate the effectiveness of our methods on two semi-synthetic examples, and obtain informative and valid bounds that are tighter than those possible without smoothness assumptions.
|
https://proceedings.mlr.press/v235/khona24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/khona24a/khona24a.pdf
|
https://openreview.net/forum?id=8VEGkphQaK
|
Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model
|
https://proceedings.mlr.press/v235/khona24a.html
|
Mikail Khona, Maya Okawa, Jan Hula, Rahul Ramesh, Kento Nishi, Robert P. Dick, Ekdeep Singh Lubana, Hidenori Tanaka
|
https://proceedings.mlr.press/v235/khona24a.html
|
ICML 2024
|
Stepwise inference protocols, such as scratchpads and chain-of-thought, help language models solve complex problems by decomposing them into a sequence of simpler subproblems. To unravel the underlying mechanisms of stepwise inference we propose to study autoregressive Transformer models on a synthetic task that embodies the multi-step nature of problems where stepwise inference is generally most useful. Specifically, we define a graph navigation problem wherein a model is tasked with traversing a path from a start to a goal node on the graph. We find we can empirically reproduce and analyze several phenomena observed at scale: (i) the stepwise inference reasoning gap, the cause of which we find in the structure of the training data; (ii) a diversity-accuracy trade-off in model generations as sampling temperature varies; (iii) a simplicity bias in the model’s output; and (iv) compositional generalization and a primacy bias with in-context exemplars. Overall, our work introduces a grounded, synthetic framework for studying stepwise inference and offers mechanistic hypotheses that can lay the foundation for a deeper understanding of this phenomenon.
|
https://proceedings.mlr.press/v235/kim24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24a/kim24a.pdf
|
https://openreview.net/forum?id=TzqmqZS0nj
|
Can Machines Learn the True Probabilities?
|
https://proceedings.mlr.press/v235/kim24a.html
|
Jinsook Kim
|
https://proceedings.mlr.press/v235/kim24a.html
|
ICML 2024
|
When there exists uncertainty, AI machines are designed to make decisions so as to reach the best expected outcomes. Expectations are based on true facts about the objective environment the machines interact with, and those facts can be encoded into AI models in the form of true objective probability functions. Accordingly, AI models involve probabilistic machine learning in which the probabilities should be objectively interpreted. We prove under some basic assumptions when machines can learn the true objective probabilities, if any, and when machines cannot learn them.
|
https://proceedings.mlr.press/v235/kim24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24b/kim24b.pdf
|
https://openreview.net/forum?id=TvoG41N1Y3
|
Gaussian Plane-Wave Neural Operator for Electron Density Estimation
|
https://proceedings.mlr.press/v235/kim24b.html
|
Seongsu Kim, Sungsoo Ahn
|
https://proceedings.mlr.press/v235/kim24b.html
|
ICML 2024
|
This work studies machine learning for electron density prediction, which is fundamental for understanding chemical systems and density functional theory (DFT) simulations. To this end, we introduce the Gaussian plane-wave neural operator (GPWNO), which operates in the infinite-dimensional functional space using the plane-wave and Gaussian-type orbital bases, widely recognized in the context of DFT. In particular, both high- and low-frequency components of the density can be effectively represented due to the complementary nature of the two bases. Extensive experiments on QM9, MD, and material project datasets demonstrate GPWNO’s superior performance over ten baselines.
|
https://proceedings.mlr.press/v235/kim24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24c/kim24c.pdf
|
https://openreview.net/forum?id=uDoy7AGvEC
|
LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging
|
https://proceedings.mlr.press/v235/kim24c.html
|
Jinuk Kim, Marwa El Halabi, Mingi Ji, Hyun Oh Song
|
https://proceedings.mlr.press/v235/kim24c.html
|
ICML 2024
|
Recent works show that reducing the number of layers in a convolutional neural network can enhance efficiency while maintaining the performance of the network. Existing depth compression methods remove redundant non-linear activation functions and merge the consecutive convolution layers into a single layer. However, these methods suffer from a critical drawback; the kernel size of the merged layers becomes larger, significantly undermining the latency reduction gained from reducing the depth of the network. We show that this problem can be addressed by jointly pruning convolution layers and activation functions. To this end, we propose LayerMerge, a novel depth compression method that selects which activation layers and convolution layers to remove, to achieve a desired inference speed-up while minimizing performance loss. Since the corresponding selection problem involves an exponential search space, we formulate a novel surrogate optimization problem and efficiently solve it via dynamic programming. Empirical results demonstrate that our method consistently outperforms existing depth compression and layer pruning methods on various network architectures, both on image classification and generation tasks. We release the code at https://github.com/snu-mllab/LayerMerge.
|
https://proceedings.mlr.press/v235/kim24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24d/kim24d.pdf
|
https://openreview.net/forum?id=9kArQnKLDp
|
CARTE: Pretraining and Transfer for Tabular Learning
|
https://proceedings.mlr.press/v235/kim24d.html
|
Myung Jun Kim, Leo Grinsztajn, Gael Varoquaux
|
https://proceedings.mlr.press/v235/kim24d.html
|
ICML 2024
|
Pretrained deep-learning models are the go-to solution for images or text. However, for tabular data the standard is still to train tree-based models. Indeed, transfer learning on tables hits the challenge of data integration: finding correspondences, correspondences in the entries (entity matching) where different words may denote the same entity, correspondences across columns (schema matching), which may come in different orders, names... We propose a neural architecture that does not need such correspondences. As a result, we can pretrain it on background data that has not been matched. The architecture –CARTE for Context Aware Representation of Table Entries– uses a graph representation of tabular (or relational) data to process tables with different columns, string embedding of entries and columns names to model an open vocabulary, and a graph-attentional network to contextualize entries with column names and neighboring entries. An extensive benchmark shows that CARTE facilitates learning, outperforming a solid set of baselines including the best tree-based models. CARTE also enables joint learning across tables with unmatched columns, enhancing a small table with bigger ones. CARTE opens the door to large pretrained models for tabular data.
|
https://proceedings.mlr.press/v235/kim24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24e/kim24e.pdf
|
https://openreview.net/forum?id=vQmVmMN5ft
|
Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning
|
https://proceedings.mlr.press/v235/kim24e.html
|
Do-Yeon Kim, Dong-Jun Han, Jun Seo, Jaekyun Moon
|
https://proceedings.mlr.press/v235/kim24e.html
|
ICML 2024
|
Handling the substantial communication burden in federated learning (FL) still remains a significant challenge. Although recent studies have attempted to compress the local gradients to address this issue, they typically perform compression only within the original parameter space, which may potentially limit the fundamental compression rate of the gradient. In this paper, instead of restricting our scope to a fixed traditional space, we consider an alternative space that provides an improved compressibility of the gradient. To this end, we utilize the structures of input activation and output gradient in designing our mapping function to a new space, which enables lossless gradient sparsification, i.e., mapping the gradient to our new space induces a greater number of near-zero elements without any loss of information. In light of this attribute, employing sparsification-based compressors in our new space allows for more aggressive compression with minimal information loss than the baselines. More surprisingly, our model even reaches higher accuracies than the full gradient uploading strategy in some cases, an extra benefit for utilizing the new space. We also theoretically confirm that our approach does not alter the existing, best known convergence rate of FL thanks to the orthogonal transformation properties of our mapping.
|
https://proceedings.mlr.press/v235/kim24f.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24f/kim24f.pdf
|
https://openreview.net/forum?id=0jpbpFia8m
|
SqueezeLLM: Dense-and-Sparse Quantization
|
https://proceedings.mlr.press/v235/kim24f.html
|
Sehoon Kim, Coleman Richard Charles Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W. Mahoney, Kurt Keutzer
|
https://proceedings.mlr.press/v235/kim24f.html
|
ICML 2024
|
Generative Large Language Models (LLMs) have demonstrated remarkable results for a wide range of tasks. However, deploying these models for inference has been a significant challenge due to their unprecedented resource requirements. This has forced existing deployment frameworks to use multi-GPU inference pipelines, which are often complex and costly, or to use smaller and less performant models. In this work, we demonstrate that the main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, specifically for single batch inference. While quantization has emerged as a promising solution by representing weights with reduced precision, previous efforts have often resulted in notable performance degradation. To address this, we introduce SqueezeLLM, a post-training quantization framework that not only enables lossless compression to ultra-low precisions of up to 3-bit, but also achieves higher quantization performance under the same memory constraint. Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format. When applied to the LLaMA models, our 3-bit quantization significantly reduces the perplexity gap from the FP16 baseline by up to 2.1x as compared to the state-of-the-art methods with the same memory requirement. Furthermore, when deployed on an A6000 GPU, our quantized models achieve up to 2.3x speedup compared to the baseline. Our code is available at https://github.com/SqueezeAILab/SqueezeLLM.
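The sketch below illustrates the Dense-and-Sparse decomposition on a single weight matrix: a small fraction of large-magnitude outliers is kept in a sparse full-precision matrix, and the remaining dense part is mapped to a small non-uniform codebook. Plain k-means stands in for the paper's sensitivity-based (second-order) clustering, and the threshold rule is an illustrative assumption.

```python
import numpy as np
from scipy import sparse
from sklearn.cluster import KMeans

def dense_and_sparse_quantize(W, outlier_quantile=0.999, n_levels=8):
    """Sketch of a Dense-and-Sparse split with non-uniform quantization.

    Large-magnitude outliers are stored exactly in a sparse matrix; the
    remaining dense weights are mapped to a small learned codebook.
    """
    thresh = np.quantile(np.abs(W), outlier_quantile)
    outlier_mask = np.abs(W) >= thresh
    sparse_part = sparse.csr_matrix(np.where(outlier_mask, W, 0.0))
    dense_part = np.where(outlier_mask, 0.0, W)

    # Non-uniform codebook for the dense remainder (k-means as a stand-in).
    km = KMeans(n_clusters=n_levels, n_init=10).fit(dense_part.reshape(-1, 1))
    dense_q = km.cluster_centers_[km.labels_].reshape(W.shape)
    return dense_q, sparse_part   # reconstruct with dense_q + sparse_part
```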
|
https://proceedings.mlr.press/v235/kim24g.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24g/kim24g.pdf
|
https://openreview.net/forum?id=WPt9HRmMrG
|
Active Label Correction for Semantic Segmentation with Foundation Models
|
https://proceedings.mlr.press/v235/kim24g.html
|
Hoyoung Kim, Sehyun Hwang, Suha Kwak, Jungseul Ok
|
https://proceedings.mlr.press/v235/kim24g.html
|
ICML 2024
|
Training and validating models for semantic segmentation require datasets with pixel-wise annotations, which are notoriously labor-intensive. Although useful priors such as foundation models or crowdsourced datasets are available, they are error-prone. We hence propose an effective framework of active label correction (ALC) based on a correction-query design that rectifies the pseudo labels of pixels, which, according to our theoretical analysis and a user study, is more annotator-friendly than the standard query that asks annotators to classify a pixel directly. Specifically, leveraging foundation models providing useful zero-shot predictions on pseudo labels and superpixels, our method comprises two key techniques: (i) an annotator-friendly design of correction query with the pseudo labels, and (ii) an acquisition function that looks ahead to label expansions based on the superpixels. Experimental results on PASCAL, Cityscapes, and Kvasir-SEG datasets demonstrate the effectiveness of our ALC framework, outperforming prior methods for active semantic segmentation and label correction. Notably, utilizing our method, we obtained a revised version of PASCAL by rectifying errors in 2.6 million pixels in the PASCAL dataset.
|
https://proceedings.mlr.press/v235/kim24h.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24h/kim24h.pdf
|
https://openreview.net/forum?id=R8nbccD7kv
|
ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models
|
https://proceedings.mlr.press/v235/kim24h.html
|
Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim
|
https://proceedings.mlr.press/v235/kim24h.html
|
ICML 2024
|
The unsupervised outlier detection (UOD) problem refers to the task of identifying inliers given training data that contain both inliers and outliers, without any labeled information distinguishing the two. It has been widely recognized that using fully-trained likelihood-based deep generative models (DGMs) often results in poor performance in distinguishing inliers from outliers. In this study, we claim that the likelihood itself could serve as powerful evidence for identifying inliers in UOD tasks, provided that DGMs are carefully under-fitted. Our approach begins with a novel observation called the inlier-memorization (IM) effect: when training a deep generative model with data including outliers, the model initially memorizes inliers before outliers. Based on this finding, we develop a new method called outlier detection via the IM effect (ODIM). Remarkably, ODIM requires only a few updates, making it computationally efficient, at least tens of times faster than other deep-learning-based algorithms. Also, ODIM filters out outliers excellently, regardless of the data type, including tabular, image, and text data. To validate the superiority and efficiency of our method, we provide extensive empirical analyses on close to 60 datasets.
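A minimal sketch of the inlier-memorization recipe, assuming a VAE-like model exposing a per-sample `elbo` method (a hypothetical interface): train for only a handful of updates, then use the resulting likelihood surrogate as the inlier score.

```python
import torch

def odim_scores(model, loader, n_updates=50, lr=1e-3, device="cpu"):
    """Sketch of ODIM-style scoring via deliberate under-fitting.

    `model.elbo(x)` is assumed to return a per-sample ELBO, and `loader`
    is assumed to yield batches of input tensors.
    """
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    data_iter = iter(loader)
    for _ in range(n_updates):          # only a few updates: under-fit on purpose
        try:
            x = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            x = next(data_iter)
        loss = -model.elbo(x.to(device)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    scores = []
    with torch.no_grad():
        for x in loader:
            scores.append(model.elbo(x.to(device)))   # higher => more inlier-like
    return torch.cat(scores)
```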
|
https://proceedings.mlr.press/v235/kim24i.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24i/kim24i.pdf
|
https://openreview.net/forum?id=j6QZy90B93
|
Hybrid Neural Representations for Spherical Data
|
https://proceedings.mlr.press/v235/kim24i.html
|
Hyomin Kim, Yunhui Jang, Jaeho Lee, Sungsoo Ahn
|
https://proceedings.mlr.press/v235/kim24i.html
|
ICML 2024
|
In this paper, we study hybrid neural representations for spherical data, a domain of increasing relevance in scientific research. In particular, our work focuses on weather and climate data as well as cosmic microwave background (CMB) data. Although previous studies have delved into coordinate-based neural representations for spherical signals, they often fail to capture the intricate details of highly nonlinear signals. To address this limitation, we introduce a novel approach named Hybrid Neural Representations for Spherical data (HNeR-S). Our main idea is to use spherical feature-grids to obtain positional features which are combined with a multi-layer perceptron to predict the target signal. We consider feature-grids with equirectangular and hierarchical equal area isolatitude pixelization structures that align with weather data and CMB data, respectively. We extensively verify the effectiveness of our HNeR-S for regression, super-resolution, temporal interpolation, and compression tasks.
|
https://proceedings.mlr.press/v235/kim24j.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24j/kim24j.pdf
|
https://openreview.net/forum?id=7tyAO5tUF8
|
Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs
|
https://proceedings.mlr.press/v235/kim24j.html
|
Mingyu Kim, Kim Jun-Seong, Se-Young Yun, Jin-Hwa Kim
|
https://proceedings.mlr.press/v235/kim24j.html
|
ICML 2024
|
The multi-plane representation has been highlighted for its fast training and inference across static and dynamic neural radiance fields. This approach constructs relevant features via projection onto learnable grids and interpolating adjacent vertices. However, it has limitations in capturing low-frequency details and tends to overuse parameters for low-frequency features due to its bias toward fine details, despite its multi-resolution concept. This phenomenon leads to instability and inefficiency when training poses are sparse. In this work, we propose a method that synergistically integrates multi-plane representation with a coordinate-based MLP network known for strong bias toward low-frequency signals. The coordinate-based network is responsible for capturing low-frequency details, while the multi-plane representation focuses on capturing fine-grained details. We demonstrate that using residual connections between them seamlessly preserves their own inherent properties. Additionally, the proposed progressive training scheme accelerates the disentanglement of these two features. We demonstrate empirically that our proposed method not only outperforms baseline models for both static and dynamic NeRFs with sparse inputs, but also achieves comparable results with fewer parameters.
|
https://proceedings.mlr.press/v235/kim24k.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24k/kim24k.pdf
|
https://openreview.net/forum?id=aECamk9izk
|
Learning to Explore for Stochastic Gradient MCMC
|
https://proceedings.mlr.press/v235/kim24k.html
|
Seunghyun Kim, Seohyeon Jung, Seonghyeon Kim, Juho Lee
|
https://proceedings.mlr.press/v235/kim24k.html
|
ICML 2024
|
Bayesian Neural Networks (BNNs) with high-dimensional parameters pose a challenge for posterior inference due to the multi-modality of the posterior distributions. Stochastic Gradient Markov Chain Monte Carlo (SGMCMC) with cyclical learning rate scheduling is a promising solution, but it requires a large number of sampling steps to explore high-dimensional multi-modal posteriors, making it computationally expensive. In this paper, we propose a meta-learning strategy to build SGMCMC which can efficiently explore the multi-modal target distributions. Our algorithm allows the learned SGMCMC to quickly explore the high-density region of the posterior landscape. Also, we show that this exploration property is transferable to various tasks, even ones unseen during the meta-training stage. Using popular image classification benchmarks and a variety of downstream tasks, we demonstrate that our method significantly improves the sampling efficiency, achieving better performance than vanilla SGMCMC without incurring significant computational overhead.
|
https://proceedings.mlr.press/v235/kim24l.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24l/kim24l.pdf
|
https://openreview.net/forum?id=CbbTF6tDhW
|
Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization
|
https://proceedings.mlr.press/v235/kim24l.html
|
Nayeong Kim, Juwon Kang, Sungsoo Ahn, Jungseul Ok, Suha Kwak
|
https://proceedings.mlr.press/v235/kim24l.html
|
ICML 2024
|
We study the problem of training an unbiased and accurate model given a dataset with multiple biases. This problem is challenging since the multiple biases cause multiple undesirable shortcuts during training, and even worse, mitigating one may exacerbate the other. We propose a novel training method to tackle this challenge. Our method first groups training data so that different groups induce different shortcuts, and then optimizes a linear combination of group-wise losses while adjusting their weights dynamically to alleviate conflicts between the groups in performance; this approach, rooted in multi-objective optimization theory, encourages convergence to the minimax Pareto solution. We also present a new benchmark with multiple biases, dubbed MultiCelebA, for evaluating debiased training methods under realistic and challenging scenarios. Our method achieved the best performance on three datasets with multiple biases, and also showed superior performance on conventional single-bias datasets.
|
https://proceedings.mlr.press/v235/kim24m.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24m/kim24m.pdf
|
https://openreview.net/forum?id=nUVForc3VP
|
Double-Step Alternating Extragradient with Increasing Timescale Separation for Finding Local Minimax Points: Provable Improvements
|
https://proceedings.mlr.press/v235/kim24m.html
|
Kyuwon Kim, Donghwan Kim
|
https://proceedings.mlr.press/v235/kim24m.html
|
ICML 2024
|
In nonconvex-nonconcave minimax optimization, two-timescale gradient methods have shown their potential to find local minimax (optimal) points, provided that the timescale separation between the min and the max player is sufficiently large. However, existing two-timescale variants of gradient descent ascent and extragradient methods face two shortcomings, especially when we search for non-strict local minimax points that are prevalent in the modern overparameterized setting. Specifically, (1) these methods can be unstable at some non-strict local minimax points even with sufficiently large timescale separation, and (2) computing a proper amount of timescale separation is infeasible in practice. To remedy these two issues, we propose to incorporate two simple but provably effective schemes, double-step alternating update and increasing timescale separation, into the two-timescale extragradient method, respectively. Under mild conditions, we show that the proposed methods converge to non-strict local minimax points to which all existing two-timescale methods fail to converge.
|
https://proceedings.mlr.press/v235/kim24n.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24n/kim24n.pdf
|
https://openreview.net/forum?id=tTq3qMkJ8w
|
Scene Graph Generation Strategy with Co-occurrence Knowledge and Learnable Term Frequency
|
https://proceedings.mlr.press/v235/kim24n.html
|
Hyeongjin Kim, Sangwon Kim, Dasom Ahn, Jong Taek Lee, Byoung Chul Ko
|
https://proceedings.mlr.press/v235/kim24n.html
|
ICML 2024
|
Scene graph generation (SGG) is an important task in image understanding because it represents the relationships between objects in an image as a graph structure, making it possible to understand the semantic relationships between objects intuitively. Previous SGG studies used message-passing neural networks (MPNNs) to update features, which can effectively reflect information about surrounding objects. However, these studies have failed to reflect the co-occurrence of objects during scene graph generation. In addition, they only addressed the long-tail problem of the training dataset from the perspectives of sampling and learning methods. To address these two problems, we propose CooK, which reflects the Co-occurrence Knowledge between objects, and the learnable term frequency-inverse document frequency (TF-$l$-IDF) to solve the long-tail problem. We applied the proposed model to the SGG benchmark dataset, and the results showed a performance improvement of up to 3.8% compared with existing state-of-the-art models in the SGGen subtask. The results also indicate that the proposed method generalizes well, showing uniform performance improvements across all MPNN models.
|
https://proceedings.mlr.press/v235/kim24o.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24o/kim24o.pdf
|
https://openreview.net/forum?id=cmD5E6ami4
|
Symmetric Replay Training: Enhancing Sample Efficiency in Deep Reinforcement Learning for Combinatorial Optimization
|
https://proceedings.mlr.press/v235/kim24o.html
|
Hyeonah Kim, Minsu Kim, Sungsoo Ahn, Jinkyoo Park
|
https://proceedings.mlr.press/v235/kim24o.html
|
ICML 2024
|
Deep reinforcement learning (DRL) has significantly advanced the field of combinatorial optimization (CO). However, its practicality is hindered by the necessity for a large number of reward evaluations, especially in scenarios involving computationally intensive function assessments. To enhance the sample efficiency, we propose a simple but effective method, called symmetric replay training (SRT), which can be easily integrated into various DRL methods. Our method leverages high-reward samples to encourage exploration of the under-explored symmetric regions without additional online interactions. Through replay training, the policy is trained to maximize the likelihood of the symmetric trajectories of discovered high-reward samples. Experimental results demonstrate the consistent improvement of our method in sample efficiency across diverse DRL methods applied to real-world tasks, such as molecular optimization and hardware design.
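The replay step might look roughly like the sketch below: take high-reward solutions from a buffer, map them to symmetric trajectories with a problem-specific `symmetry_fn`, and train the policy to imitate them. The buffer and policy interfaces here are assumptions, not the authors' code.

```python
import torch

def symmetric_replay_step(policy, optimizer, replay_buffer, symmetry_fn, k=16):
    """One symmetric replay training step (sketch).

    `replay_buffer.top_k(k)` returning high-reward solutions,
    `symmetry_fn(solution)` producing a symmetric trajectory, and
    `policy.log_prob(traj)` are assumed interfaces for illustration.
    """
    solutions = replay_buffer.top_k(k)                 # no new reward evaluations
    sym_trajs = [symmetry_fn(s) for s in solutions]    # symmetric variants
    log_probs = torch.stack([policy.log_prob(t) for t in sym_trajs])
    loss = -log_probs.mean()                           # imitate symmetric trajectories
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```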
|
https://proceedings.mlr.press/v235/kim24p.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24p/kim24p.pdf
|
https://openreview.net/forum?id=J4HJUF70qm
|
Clustered Federated Learning via Gradient-based Partitioning
|
https://proceedings.mlr.press/v235/kim24p.html
|
Heasung Kim, Hyeji Kim, Gustavo De Veciana
|
https://proceedings.mlr.press/v235/kim24p.html
|
ICML 2024
|
Clustered Federated Learning (CFL) is a promising distributed learning framework that addresses data heterogeneity issues across multiple clients by grouping clients and providing a shared generalized model for each group. However, under privacy-preserving federated learning protocols where there is no direct sharing of clients’ local datasets, existing approaches often fail to find optimal client groupings resulting in sub-optimal performance. In this paper, we propose a novel CFL algorithm that achieves robust clustering and learning performance. Conceptually, our algorithm groups clients that exhibit similarity in their model updates by periodically accumulating and clustering the gradients that clients compute for various models. The proposed algorithm is shown to achieve a near-optimal error rate for stochastic convergence to optimal models under mild conditions. We present a detailed analysis of the algorithm along with an evaluation on several CFL benchmarks demonstrating that it outperforms existing approaches in terms of convergence speed, clustering accuracy, and task performance.
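Conceptually, the clustering step could be sketched as below: flatten and accumulate each client's model updates, normalize them, and group clients whose update directions are similar. Cosine normalization and k-means are illustrative choices; the paper's algorithm clusters gradients computed for multiple models over time.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_clients_by_gradients(client_grads, n_clusters):
    """Group clients whose accumulated gradients point in similar directions.

    client_grads: array of shape (num_clients, d) with flattened, accumulated
    model updates per client. Returns a cluster id per client; each cluster
    then trains its own shared model.
    """
    norms = np.linalg.norm(client_grads, axis=1, keepdims=True) + 1e-12
    directions = client_grads / norms                 # compare directions, not magnitudes
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(directions)
```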
|
https://proceedings.mlr.press/v235/kim24q.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24q/kim24q.pdf
|
https://openreview.net/forum?id=yDXnXJE1RK
|
Variational Partial Group Convolutions for Input-Aware Partial Equivariance of Rotations and Color-Shifts
|
https://proceedings.mlr.press/v235/kim24q.html
|
Hyunsu Kim, Yegon Kim, Hongseok Yang, Juho Lee
|
https://proceedings.mlr.press/v235/kim24q.html
|
ICML 2024
|
Group Equivariant CNNs (G-CNNs) have shown promising efficacy in various tasks, owing to their ability to capture hierarchical features in an equivariant manner. However, their equivariance is fixed to the symmetry of the whole group, limiting adaptability to diverse partial symmetries in real-world datasets, such as limited rotation symmetry of handwritten digit images and limited color-shift symmetry of flower images. Recent efforts address this limitation, one example being Partial G-CNN which restricts the output group space of convolution layers to break full equivariance. However, such an approach still fails to adjust equivariance levels across data. In this paper, we propose a novel approach, Variational Partial G-CNN (VP G-CNN), to capture varying levels of partial equivariance specific to each data instance. VP G-CNN redesigns the distribution of the output group elements to be conditioned on input data, leveraging variational inference to avoid overfitting. This enables the model to adjust its equivariance levels according to the needs of individual data points. Additionally, we address training instability inherent in discrete group equivariance models by redesigning the reparametrizable distribution. We demonstrate the effectiveness of VP G-CNN on both toy and real-world datasets, including MNIST67-180, CIFAR10, ColorMNIST, and Flowers102. Our results show robust performance, even in uncertainty metrics.
|
https://proceedings.mlr.press/v235/kim24r.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24r/kim24r.pdf
|
https://openreview.net/forum?id=z373OXJXWU
|
Demystifying SGD with Doubly Stochastic Gradients
|
https://proceedings.mlr.press/v235/kim24r.html
|
Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner
|
https://proceedings.mlr.press/v235/kim24r.html
|
ICML 2024
|
Optimization objectives in the form of a sum of intractable expectations are rising in importance (e.g., diffusion models, variational autoencoders, and many more), a setting also known as "finite sum with infinite data." For these problems, a popular strategy is to employ SGD with doubly stochastic gradients (doubly SGD): the expectations are estimated using the gradient estimator of each component, while the sum is estimated by subsampling over these estimators. Despite its popularity, little is known about the convergence properties of doubly SGD, except under strong assumptions such as bounded variance. In this work, we establish the convergence of doubly SGD with independent minibatching and random reshuffling under general conditions, which encompasses dependent component gradient estimators. In particular, for dependent estimators, our analysis allows a fine-grained analysis of the effect of correlations. As a result, under a per-iteration computational budget of $b \times m$, where $b$ is the minibatch size and $m$ is the number of Monte Carlo samples, our analysis suggests where one should invest most of the budget in general. Furthermore, we prove that random reshuffling (RR) improves the complexity dependence on the subsampling noise.
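The doubly stochastic estimator itself is easy to state in code: subsample `b` of the `n` components, estimate each chosen component's expected gradient with `m` Monte Carlo draws, and rescale. The sketch below assumes each `grad_estimators[i]` returns one unbiased draw of component i's gradient (a hypothetical interface).

```python
import numpy as np

def doubly_stochastic_grad(grad_estimators, rng, b=8, m=4):
    """One doubly stochastic gradient for a sum of n expectations (sketch).

    Subsample b components, average m Monte Carlo draws per component, and
    rescale so the estimate is unbiased for the full sum.
    """
    n = len(grad_estimators)
    idx = rng.choice(n, size=b, replace=False)     # minibatch over components
    per_component = [
        np.mean([grad_estimators[i](rng) for _ in range(m)], axis=0)
        for i in idx
    ]
    return (n / b) * np.sum(per_component, axis=0)
```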
|
https://proceedings.mlr.press/v235/kim24s.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24s/kim24s.pdf
|
https://openreview.net/forum?id=GUEsK9xJny
|
Learning to Scale Logits for Temperature-Conditional GFlowNets
|
https://proceedings.mlr.press/v235/kim24s.html
|
Minsu Kim, Joohwan Ko, Taeyoung Yun, Dinghuai Zhang, Ling Pan, Woo Chang Kim, Jinkyoo Park, Emmanuel Bengio, Yoshua Bengio
|
https://proceedings.mlr.press/v235/kim24s.html
|
ICML 2024
|
GFlowNets are probabilistic models that sequentially generate compositional structures through a stochastic policy. Among GFlowNets, temperature-conditional GFlowNets can introduce temperature-based controllability for exploration and exploitation. We propose Logit-scaling GFlowNets (Logit-GFN), a novel architectural design that greatly accelerates the training of temperature-conditional GFlowNets. It is based on the idea that previously proposed approaches introduced numerical challenges in the deep network training, since different temperatures may give rise to very different gradient profiles as well as magnitudes of the policy’s logits. We find that the challenge is greatly reduced if a learned function of the temperature is used to scale the policy’s logits directly. Also, using Logit-GFN, GFlowNets can be improved by having better generalization capabilities in offline learning and mode discovery capabilities in online learning, which is empirically verified in various biological and chemical tasks. Our code is available at https://github.com/dbsxodud-11/logit-gfn
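The core architectural change can be sketched in a few lines: a small learned function of the temperature produces a positive scalar that multiplies the policy's logits. The hidden size and the softplus output here are illustrative assumptions, not the exact Logit-GFN configuration.

```python
import torch
import torch.nn as nn

class LogitScaledPolicy(nn.Module):
    """Temperature-conditional policy with learned logit scaling (sketch).

    A small MLP maps the temperature to a positive scalar that rescales
    the policy logits directly.
    """
    def __init__(self, backbone, temp_hidden=16):
        super().__init__()
        self.backbone = backbone                         # state -> action logits
        self.temp_mlp = nn.Sequential(
            nn.Linear(1, temp_hidden), nn.ReLU(),
            nn.Linear(temp_hidden, 1), nn.Softplus(),    # positive scale
        )

    def forward(self, state, temperature):
        logits = self.backbone(state)                    # (batch, num_actions)
        scale = self.temp_mlp(temperature.view(-1, 1))   # (batch, 1)
        return logits * scale
```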
|
https://proceedings.mlr.press/v235/kim24t.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24t/kim24t.pdf
|
https://openreview.net/forum?id=Mw8kNVfdMs
|
Attribute Based Interpretable Evaluation Metrics for Generative Models
|
https://proceedings.mlr.press/v235/kim24t.html
|
Dongkyun Kim, Mingi Kwon, Youngjung Uh
|
https://proceedings.mlr.press/v235/kim24t.html
|
ICML 2024
|
When the training dataset comprises a 1:1 proportion of dogs to cats, a generative model that produces 1:1 dogs and cats better resembles the training species distribution than another model with 3:1 dogs and cats. Can we capture this phenomenon using existing metrics? Unfortunately, we cannot, because these metrics do not provide any interpretability beyond “diversity”. In this context, we propose a new evaluation protocol that measures the divergence of a set of generated images from the training set regarding the distribution of attribute strengths as follows. Single-attribute Divergence (SaD) reveals the attributes that are generated excessively or insufficiently by measuring the divergence of PDFs of individual attributes. Paired-attribute Divergence (PaD) reveals such pairs of attributes by measuring the divergence of joint PDFs of pairs of attributes. For measuring the attribute strengths of an image, we propose Heterogeneous CLIPScore (HCS), which measures the cosine similarity between image and text vectors with heterogeneous initial points. With SaD and PaD, we reveal the following about existing generative models. ProjectedGAN generates implausible attribute relationships, such as a baby with a beard, even though it achieves competitive scores on existing metrics. Diffusion models struggle to capture diverse colors in the datasets. Larger sampling timesteps of the latent diffusion model generate more minor objects, including earrings and necklaces. Stable Diffusion v1.5 better captures the attributes than v2.1. Our metrics lay a foundation for explainable evaluations of generative models.
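To make the single-attribute case concrete, the sketch below compares the distribution of one attribute's strength in a generated set against the training set with a histogram KL divergence. The 20-bin histogram and plain KL are illustrative assumptions; the paper's CLIP-based attribute scoring (HCS) and exact divergence are not reproduced here.

```python
# Minimal sketch of a single-attribute divergence: histogram KL between the training
# and generated distributions of one attribute's strength (assumed scoring; HCS omitted).
import numpy as np

def attribute_divergence(train_scores, gen_scores, bins=20, eps=1e-8):
    lo = min(train_scores.min(), gen_scores.min())
    hi = max(train_scores.max(), gen_scores.max())
    p, _ = np.histogram(train_scores, bins=bins, range=(lo, hi))
    q, _ = np.histogram(gen_scores, bins=bins, range=(lo, hi))
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5000)     # e.g. "smiling" strengths in the training set
gen_ok = rng.normal(0.0, 1.0, size=5000)    # a model matching the attribute distribution
gen_off = rng.normal(0.8, 1.0, size=5000)   # a model over-generating the attribute
print(attribute_divergence(train, gen_ok), attribute_divergence(train, gen_off))
```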
|
https://proceedings.mlr.press/v235/kim24u.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24u/kim24u.pdf
|
https://openreview.net/forum?id=OiI12sNbgD
|
Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning
|
https://proceedings.mlr.press/v235/kim24u.html
|
Donghu Kim, Hojoon Lee, Kyungmin Lee, Dongyoon Hwang, Jaegul Choo
|
https://proceedings.mlr.press/v235/kim24u.html
|
ICML 2024
|
Recently, various pre-training methods have been introduced in vision-based Reinforcement Learning (RL). However, their generalization ability remains unclear due to evaluations being limited to in-distribution environments and non-unified experimental setups. To address this, we introduce the Atari Pre-training Benchmark (Atari-PB), which pre-trains a ResNet-50 model on 10 million transitions from 50 Atari games and evaluates it across diverse environment distributions. Our experiments show that pre-training objectives focused on learning task-agnostic features (e.g., identifying objects and understanding temporal dynamics) enhance generalization across different environments. In contrast, objectives focused on learning task-specific knowledge (e.g., identifying agents and fitting reward functions) improve performance in environments similar to the pre-training dataset but not in varied ones. We release our code, datasets, and model checkpoints at https://github.com/dojeon-ai/Atari-PB.
|
https://proceedings.mlr.press/v235/kim24v.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24v/kim24v.pdf
|
https://openreview.net/forum?id=8nd1yBRCDl
|
EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning
|
https://proceedings.mlr.press/v235/kim24v.html
|
Jongsuk Kim, Hyeongkeun Lee, Kyeongha Rho, Junmo Kim, Joon Son Chung
|
https://proceedings.mlr.press/v235/kim24v.html
|
ICML 2024
|
Recent advancements in self-supervised audio-visual representation learning have demonstrated its potential to capture rich and comprehensive representations. However, despite the advantages of data augmentation verified in many learning methods, audio-visual learning has struggled to fully harness these benefits, as augmentations can easily disrupt the correspondence between input pairs. To address this limitation, we introduce EquiAV, a novel framework that leverages equivariance for audio-visual contrastive learning. Our approach begins with extending equivariance to audio-visual learning, facilitated by a shared attention-based transformation predictor. It enables the aggregation of features from diverse augmentations into a representative embedding, providing robust supervision. Notably, this is achieved with minimal computational overhead. Extensive ablation studies and qualitative results verify the effectiveness of our method. EquiAV outperforms previous works across various audio-visual benchmarks. The code is available on https://github.com/JongSuk1/EquiAV
|
https://proceedings.mlr.press/v235/kim24w.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24w/kim24w.pdf
|
https://openreview.net/forum?id=uLpyWQPyF9
|
Scaling Beyond the GPU Memory Limit for Large Mixture-of-Experts Model Training
|
https://proceedings.mlr.press/v235/kim24w.html
|
Yechan Kim, Hwijoon Lim, Dongsu Han
|
https://proceedings.mlr.press/v235/kim24w.html
|
ICML 2024
|
Mixture-of-Experts (MoE) is a powerful technique for enhancing the performance of neural networks while decoupling computational complexity from the number of parameters. However, scaling the number of experts still requires adding more GPUs. In addition, the imbalance in token load across experts causes unnecessary computation or straggler problems. We present ES-MoE, a novel method for efficiently scaling MoE training. It offloads expert parameters to host memory and leverages pipelined expert processing to overlap GPU-CPU communication with GPU computation. It dynamically balances token loads across GPUs, improving computational efficiency. ES-MoE accelerates MoE training on a limited number of GPUs without degradation in model performance. We validate our approach on GPT-based MoE models, demonstrating 67$\times$ better scalability and up to 17.5$\times$ better throughput over existing frameworks.
|
https://proceedings.mlr.press/v235/kim24x.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24x/kim24x.pdf
|
https://openreview.net/forum?id=24zMewdzyJ
|
Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient
|
https://proceedings.mlr.press/v235/kim24x.html
|
Ju-Hyun Kim, Seungki Min
|
https://proceedings.mlr.press/v235/kim24x.html
|
ICML 2024
|
This paper addresses a policy optimization task with the conditional value-at-risk (CVaR) objective. We introduce the predictive CVaR policy gradient, a novel approach that seamlessly integrates risk-neutral policy gradient algorithms with minimal modifications. Our method incorporates a reweighting strategy in gradient calculation – individual cost terms are reweighted in proportion to their predicted contribution to the objective. These weights can be easily estimated through a separate learning procedure. We provide theoretical and empirical analyses, demonstrating the validity and effectiveness of our proposed method.
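A minimal sketch of the reweighting idea appears below: trajectories whose total cost lies in the worst $\alpha$-tail receive weight $1/\alpha$ and the rest receive zero, so the weighted average of costs recovers a CVaR estimate. Using the empirical value-at-risk as the threshold is a simplification; the paper instead predicts these contributions with a separately learned model.

```python
# Minimal sketch of CVaR-style reweighting of per-trajectory cost terms.
# The empirical VaR threshold stands in for the paper's learned predictive weights.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.2
costs = rng.gamma(shape=2.0, scale=1.0, size=1000)   # per-trajectory total costs
var = np.quantile(costs, 1 - alpha)                  # value-at-risk threshold
weights = (costs >= var).astype(float) / alpha       # tail trajectories get weight 1/alpha

# In a policy-gradient update, each trajectory's gradient term is scaled by its weight.
print(float(weights.mean()))                         # ~1.0 by construction
print(float((weights * costs).mean()))               # ~ CVaR_alpha estimate of the cost
```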
|
https://proceedings.mlr.press/v235/kim24y.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24y/kim24y.pdf
|
https://openreview.net/forum?id=uQ2FUoFjnF
|
An LLM Compiler for Parallel Function Calling
|
https://proceedings.mlr.press/v235/kim24y.html
|
Sehoon Kim, Suhong Moon, Ryan Tabrizi, Nicholas Lee, Michael W. Mahoney, Kurt Keutzer, Amir Gholami
|
https://proceedings.mlr.press/v235/kim24y.html
|
ICML 2024
|
The reasoning capabilities of recent LLMs enable them to execute external function calls to overcome their inherent limitations, such as knowledge cutoffs, poor arithmetic skills, or lack of access to private data. This development has allowed LLMs to select and coordinate multiple functions based on the context to tackle more complex problems. However, current methods for function calling often require sequential reasoning and acting for each function, which can result in high latency, cost, and sometimes inaccurate behavior. To address this, we introduce LLMCompiler, which executes functions in parallel to efficiently orchestrate multiple function calls. Drawing inspiration from the principles of classical compilers, LLMCompiler enables parallel function calling with three components: (i) a Function Calling Planner, formulating execution plans for function calling; (ii) a Task Fetching Unit, dispatching function calling tasks; and (iii) an Executor, executing these tasks in parallel. LLMCompiler automatically generates an optimized orchestration for the function calls and can be used with both open-source and closed-source models. We have benchmarked LLMCompiler on a range of tasks with different patterns of function calling. We observe consistent latency speedup of up to $3.7 \times$, cost savings of up to $6.7 \times$, and accuracy improvement of up to $\sim 9\%$ compared to ReAct. Our code is available at https://github.com/SqueezeAILab/LLMCompiler.
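The sketch below illustrates the planner/executor split in miniature: a plan is a small DAG of tool calls, and every task whose dependencies have completed is dispatched to a thread pool, so independent calls run in parallel. The toy tools, the dictionary plan format, and the static arguments are assumptions for illustration; in LLMCompiler the plan is produced by an LLM and task arguments may depend on earlier results.

```python
# Minimal sketch of dispatching a DAG of tool calls in parallel.
# Tools, plan format, and static arguments are illustrative assumptions.
import concurrent.futures as cf

TOOLS = {
    "search": lambda q: f"results({q})",
    "math": lambda expr: str(eval(expr)),
}

# task id -> (tool name, argument, list of dependency task ids)
plan = {
    "t1": ("search", "population of France", []),
    "t2": ("search", "population of Germany", []),   # independent of t1 -> runs in parallel
    "t3": ("math", "1 + 1", ["t1", "t2"]),           # placeholder aggregation step
}

def execute(plan):
    done, pending = {}, dict(plan)
    with cf.ThreadPoolExecutor() as pool:
        while pending:
            ready = {k: v for k, v in pending.items() if all(d in done for d in v[2])}
            futures = {pool.submit(TOOLS[tool], arg): k for k, (tool, arg, _) in ready.items()}
            for fut in cf.as_completed(futures):
                done[futures[fut]] = fut.result()
                del pending[futures[fut]]
    return done

print(execute(plan))
```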
|
https://proceedings.mlr.press/v235/kim24z.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24z/kim24z.pdf
|
https://openreview.net/forum?id=8KeD4mEh3j
|
Data-Efficient Molecular Generation with Hierarchical Textual Inversion
|
https://proceedings.mlr.press/v235/kim24z.html
|
Seojin Kim, Jaehyun Nam, Sihyun Yu, Younghoon Shin, Jinwoo Shin
|
https://proceedings.mlr.press/v235/kim24z.html
|
ICML 2024
|
Developing an effective molecular generation framework even with a limited number of molecules is often important for its practical deployment, e.g., drug discovery, since acquiring task-related molecular data requires expensive and time-consuming experimental costs. To tackle this issue, we introduce Hierarchical Textual Inversion for Molecular Generation (HI-Mol), a novel data-efficient molecular generation method. HI-Mol is inspired by the importance of hierarchical information, e.g., both coarse- and fine-grained features, in understanding the molecule distribution. We propose to use multi-level embeddings to reflect such hierarchical features based on the adoption of the recent textual inversion technique in the visual domain, which achieves data-efficient image generation. Compared to the conventional textual inversion method in the image domain using a single-level token embedding, our multi-level token embeddings allow the model to effectively learn the underlying low-shot molecule distribution. We then generate molecules based on the interpolation of the multi-level token embeddings. Extensive experiments demonstrate the superiority of HI-Mol with notable data-efficiency. For instance, on QM9, HI-Mol outperforms the prior state-of-the-art method with 50x less training data. We also show the effectiveness of molecules generated by HI-Mol in low-shot molecular property prediction. Code is available at https://github.com/Seojin-Kim/HI-Mol.
|
https://proceedings.mlr.press/v235/kim24aa.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24aa/kim24aa.pdf
|
https://openreview.net/forum?id=xSizvCoI79
|
Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning
|
https://proceedings.mlr.press/v235/kim24aa.html
|
Dongkwan Kim, Alice Oh
|
https://proceedings.mlr.press/v235/kim24aa.html
|
ICML 2024
|
Subgraph representation learning has emerged as an important problem, but it is by default approached with specialized graph neural networks on a large global graph. These models demand extensive memory and computational resources and struggle to model the hierarchical structures of subgraphs. In this paper, we propose Subgraph-To-Node (S2N) translation, a novel formulation for learning representations of subgraphs. Specifically, given a set of subgraphs in the global graph, we construct a new graph by coarsely transforming subgraphs into nodes. Supported by both theoretical and empirical evidence, S2N not only significantly reduces memory and computational costs compared to state-of-the-art models but also outperforms them by capturing both local and global structures of the subgraph. By leveraging graph coarsening methods, our method outperforms baselines even in a data-scarce setting with insufficient subgraphs. Our experiments on eight benchmarks demonstrate that fine-tuned models with S2N translation can process 183–711 times more subgraph samples than state-of-the-art models at a better or similar performance level.
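The core translation is easy to picture with a toy example: each subgraph becomes one node of a much smaller graph, and two subgraph-nodes are linked when the original subgraphs share vertices. Using the overlap size as the edge weight is an illustrative choice, not necessarily the paper's exact construction.

```python
# Minimal sketch of Subgraph-To-Node translation: subgraphs become nodes, and
# overlapping subgraphs are connected (overlap size as edge weight is an assumption).
from itertools import combinations

subgraphs = {
    "s1": {1, 2, 3},
    "s2": {3, 4, 5},
    "s3": {6, 7},
    "s4": {5, 6},
}

def subgraphs_to_nodes(subgraphs):
    nodes = list(subgraphs)
    edges = {}
    for a, b in combinations(nodes, 2):
        overlap = len(subgraphs[a] & subgraphs[b])
        if overlap > 0:
            edges[(a, b)] = overlap
    return nodes, edges

print(subgraphs_to_nodes(subgraphs))
# -> (['s1', 's2', 's3', 's4'], {('s1', 's2'): 1, ('s2', 's4'): 1, ('s3', 's4'): 1})
```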
|
https://proceedings.mlr.press/v235/kim24ab.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ab/kim24ab.pdf
|
https://openreview.net/forum?id=apxON2uH4N
|
Privacy-Preserving Embedding via Look-up Table Evaluation with Fully Homomorphic Encryption
|
https://proceedings.mlr.press/v235/kim24ab.html
|
Jae-Yun Kim, Saerom Park, Joohee Lee, Jung Hee Cheon
|
https://proceedings.mlr.press/v235/kim24ab.html
|
ICML 2024
|
In privacy-preserving machine learning (PPML), homomorphic encryption (HE) has emerged as a significant primitive, allowing the use of machine learning (ML) models while protecting the confidentiality of input data. Although extensive research has been conducted on implementing PPML with HE by developing the efficient construction of private counterparts to ML models, the efficient HE implementation of embedding layers for token inputs such as words remains inadequately addressed. Thus, our study proposes an efficient algorithm for privacy-preserving embedding via look-up table evaluation with HE (HELUT) by developing an encrypted indicator function (EIF) that assures high precision with the use of the approximate HE scheme (CKKS). Based on the proposed EIF, we propose the CodedHELUT algorithm to facilitate an encrypted embedding layer for the first time. CodedHELUT leverages coded inputs to improve overall efficiency and optimize memory usage. Our comprehensive empirical analysis encompasses both synthetic tables and real-world large-scale word embedding models. The CodedHELUT algorithm achieves an amortized evaluation time of 0.018-0.242s for GloVe6B50d, 0.104-1.298s for GloVe42B300d, and 0.262-3.283s for GPT-2 and BERT embedding layers, while maintaining high precision (16 bits).
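A plaintext analogue of look-up-table evaluation via an indicator function is shown below: the embedding of token x is recovered as the sum over the table of indicator(x = k) times row k. In the paper this computation is carried out under CKKS on encrypted inputs with an approximate encrypted indicator; the homomorphic machinery is omitted here entirely.

```python
# Plaintext analogue of LUT evaluation with an indicator function; the encrypted
# indicator (EIF) and CKKS operations from the paper are deliberately omitted.
import numpy as np

vocab_size, emb_dim = 8, 4
rng = np.random.default_rng(0)
table = rng.normal(size=(vocab_size, emb_dim))   # the embedding table

def indicator(x, k):
    """Plaintext stand-in for the encrypted indicator function."""
    return 1.0 if x == k else 0.0

def lut_embedding(x):
    return sum(indicator(x, k) * table[k] for k in range(vocab_size))

print(np.allclose(lut_embedding(3), table[3]))   # True
```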
|
https://proceedings.mlr.press/v235/kim24ac.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ac/kim24ac.pdf
|
https://openreview.net/forum?id=Zn44XGFGam
|
Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time
|
https://proceedings.mlr.press/v235/kim24ac.html
|
Sungyoon Kim, Mert Pilanci
|
https://proceedings.mlr.press/v235/kim24ac.html
|
ICML 2024
|
In this paper, we study the optimality gap between two-layer ReLU networks regularized with weight decay and their convex relaxations. We show that when the training data is random, the relative optimality gap between the original problem and its relaxation can be bounded by a factor of $O(\sqrt{\log n})$, where $n$ is the number of training samples. A simple application leads to a tractable polynomial-time algorithm that is guaranteed to solve the original non-convex problem up to a logarithmic factor. Moreover, under mild assumptions, we show that local gradient methods converge to a point with low training loss with high probability. Our result is an exponential improvement compared to existing results and sheds new light on understanding why local gradient methods work well.
|
https://proceedings.mlr.press/v235/kim24ad.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ad/kim24ad.pdf
|
https://openreview.net/forum?id=LbEB39lZqp
|
USTAD: Unified Single-model Training Achieving Diverse Scores for Information Retrieval
|
https://proceedings.mlr.press/v235/kim24ad.html
|
Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer, Wittawat Jitkrittum, Veeranjaneyulu Sadhanala, Sadeep Jayasumana, Aditya Krishna Menon, Rob Fergus, Sanjiv Kumar
|
https://proceedings.mlr.press/v235/kim24ad.html
|
ICML 2024
|
Modern information retrieval (IR) systems consist of multiple stages like retrieval and ranking, with Transformer-based models achieving state-of-the-art performance at each stage. In this paper, we challenge the tradition of using separate models for different stages and ask if a single Transformer encoder can provide the relevance scores needed at each stage. We present USTAD – a new unified approach to train a single network that can provide powerful ranking scores as a cross-encoder (CE) model as well as factorized embeddings for large-scale retrieval as a dual-encoder (DE) model. Empirically, we find a single USTAD model to be competitive with separate ranking CE and retrieval DE models. Furthermore, USTAD combines well with a novel embedding matching-based distillation, significantly improving CE to DE distillation. It further motivates novel asymmetric architectures for student models to ensure better embedding alignment between the student and the teacher while keeping the online inference cost small. On standard benchmarks like MSMARCO, we demonstrate that USTAD with our proposed distillation method leads to asymmetric students with only 1/10th of the trainable parameters while retaining 95-97% of the teacher performance.
|
https://proceedings.mlr.press/v235/kim24ae.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ae/kim24ae.pdf
|
https://openreview.net/forum?id=QZd3rvlP76
|
Polynomial-based Self-Attention for Table Representation Learning
|
https://proceedings.mlr.press/v235/kim24ae.html
|
Jayoung Kim, Yehjin Shin, Jeongwhan Choi, Hyowon Wi, Noseong Park
|
https://proceedings.mlr.press/v235/kim24ae.html
|
ICML 2024
|
Structured data, which constitutes a significant portion of existing data types, has been a long-standing research topic in the field of machine learning. Various representation learning methods for tabular data have been proposed, ranging from encoder-decoder structures to Transformers. Among these, Transformer-based methods have achieved state-of-the-art performance not only in tabular data but also in various other fields, including computer vision and natural language processing. However, recent studies have revealed that self-attention, a key component of Transformers, can lead to an oversmoothing issue. We show that Transformers for tabular data also face this problem. To tackle the problem, we suggest a novel self-attention layer for tabular data, leveraging matrix polynomials. This proposed layer serves as a replacement for the original self-attention layer, contributing to the improvement of model scalability. In our experiments with three representative table learning models equipped with our proposed layer, we illustrate that the layer effectively mitigates the oversmoothing problem and enhances the representation performance of the existing methods, outperforming the state-of-the-art table representation methods.
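A minimal sketch of a matrix-polynomial attention layer follows: instead of propagating values with the attention matrix $A$ once, the layer mixes powers of $A$ with learnable coefficients, $\sum_k c_k A^k V$. The degree, the coefficient values, and the plain scaled-dot-product $A$ are illustrative; the paper's exact parameterisation is not reproduced.

```python
# Minimal sketch of polynomial self-attention: output = sum_k c_k * A^k @ V.
# Degree and coefficients are illustrative assumptions.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def polynomial_attention(Q, K, V, coeffs):
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    out = np.zeros_like(V)
    A_power = np.eye(A.shape[0])
    for c in coeffs:                 # coeffs[k] multiplies A^k (k = 0, 1, 2, ...)
        out += c * (A_power @ V)
        A_power = A_power @ A
    return out

rng = np.random.default_rng(0)
n, d = 6, 4                          # table rows and hidden size
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(polynomial_attention(Q, K, V, coeffs=[0.5, 1.0, -0.3]).shape)   # (6, 4)
```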
|
https://proceedings.mlr.press/v235/kim24af.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24af/kim24af.pdf
|
https://openreview.net/forum?id=xm2lU7tteQ
|
Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape
|
https://proceedings.mlr.press/v235/kim24af.html
|
Juno Kim, Taiji Suzuki
|
https://proceedings.mlr.press/v235/kim24af.html
|
ICML 2024
|
Large language models based on the Transformer architecture have demonstrated impressive capabilities to learn in context. However, existing theoretical studies on how this phenomenon arises are limited to the dynamics of a single layer of attention trained on linear regression tasks. In this paper, we study the optimization of a Transformer consisting of a fully connected layer followed by a linear attention layer. The MLP acts as a common nonlinear representation or feature map, greatly enhancing the power of in-context learning. We prove in the mean-field and two-timescale limit that the infinite-dimensional loss landscape for the distribution of parameters, while highly nonconvex, becomes quite benign. We also analyze the second-order stability of mean-field dynamics and show that Wasserstein gradient flow almost always avoids saddle points. Furthermore, we establish novel methods for obtaining concrete improvement rates both away from and near critical points. This represents the first saddle point analysis of mean-field dynamics in general and the techniques are of independent interest.
|
https://proceedings.mlr.press/v235/kim24ag.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ag/kim24ag.pdf
|
https://openreview.net/forum?id=hFEgae0od4
|
Discovering Features with Synergistic Interactions in Multiple Views
|
https://proceedings.mlr.press/v235/kim24ag.html
|
Chohee Kim, Mihaela Van Der Schaar, Changhee Lee
|
https://proceedings.mlr.press/v235/kim24ag.html
|
ICML 2024
|
Discovering features with synergistic interactions in multi-view data, i.e., features that provide more information gain when considered together than when considered separately, is particularly valuable. This fosters a more comprehensive understanding of the target outcome from diverse perspectives (views). However, despite the increasing opportunities presented by multi-view data, surprisingly little attention has been paid to uncovering these crucial interactions. To address this gap, we formally define the problem of selecting synergistic and non-synergistic feature subsets in multi-view data, leveraging an information-theoretic concept known as interaction information. To this end, we introduce a novel deep learning-based feature selection method that identifies different interactions across multiple views, employing a Bernoulli relaxation technique to solve this intractable subset searching problem. Experiments on synthetic, semi-synthetic, and real-world multi-view datasets demonstrate that our model discovers relevant feature subsets with synergistic and non-synergistic interactions, achieving remarkable similarity to the ground truth. Furthermore, we corroborate the discovered features with supporting medical and scientific literature, underscoring our method’s utility in elucidating complex dependencies and interactions in multi-view data.
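For reference, the quantity the selection criterion builds on can be written as follows; this is the standard definition of interaction information under one common sign convention (positive values indicating synergy), with $X_1$, $X_2$ denoting features from two views and $Y$ the target.

```latex
% Interaction information between features X_1, X_2 from two views and a target Y;
% under this sign convention a positive value indicates synergy.
\[
  I(X_1; X_2; Y) \;=\; I(X_1, X_2; Y) \;-\; I(X_1; Y) \;-\; I(X_2; Y)
\]
```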
|
https://proceedings.mlr.press/v235/kim24ah.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ah/kim24ah.pdf
|
https://openreview.net/forum?id=8AeuhCgRRv
|
An Infinite-Width Analysis on the Jacobian-Regularised Training of a Neural Network
|
https://proceedings.mlr.press/v235/kim24ah.html
|
Taeyoung Kim, Hongseok Yang
|
https://proceedings.mlr.press/v235/kim24ah.html
|
ICML 2024
|
The recent theoretical analysis of deep neural networks in their infinite-width limits has deepened our understanding of initialisation, feature learning, and training of those networks, and brought new practical techniques for finding appropriate hyperparameters, learning network weights, and performing inference. In this paper, we broaden this line of research by showing that this infinite-width analysis can be extended to the Jacobian of a deep neural network. We show that a multilayer perceptron (MLP) and its Jacobian at initialisation jointly converge to a Gaussian process (GP) as the widths of the MLP’s hidden layers go to infinity and characterise this GP. We also prove that in the infinite-width limit, the evolution of the MLP under the so-called robust training (i.e., training with a regulariser on the Jacobian) is described by a linear first-order ordinary differential equation that is determined by a variant of the Neural Tangent Kernel. We experimentally show the relevance of our theoretical claims to wide finite networks, and empirically analyse the properties of kernel regression solution to obtain an insight into Jacobian regularisation.
|
https://proceedings.mlr.press/v235/kim24ai.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ai/kim24ai.pdf
|
https://openreview.net/forum?id=WUi1AqhKn5
|
One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning
|
https://proceedings.mlr.press/v235/kim24ai.html
|
Doyoung Kim, Susik Yoon, Dongmin Park, Youngjun Lee, Hwanjun Song, Jihwan Bang, Jae-Gil Lee
|
https://proceedings.mlr.press/v235/kim24ai.html
|
ICML 2024
|
In real-world continual learning (CL) scenarios, tasks often exhibit intricate and unpredictable semantic shifts, posing challenges for fixed prompt management strategies, which are tailored to handle only semantic shifts of uniform degree (i.e., uniformly mild or uniformly abrupt). To address this limitation, we propose AdaPromptCL, an adaptive prompting approach that effectively accommodates semantic shifts of varying degree, where mild and abrupt shifts are mixed. AdaPromptCL employs an assign-and-refine semantic grouping mechanism that dynamically manages prompt groups in accordance with the semantic similarity between tasks, enhancing the quality of grouping through continuous refinement. Our experimental results demonstrate that AdaPromptCL outperforms existing prompting methods by up to 21.3%, especially on benchmark datasets with diverse semantic shifts between tasks.
|
https://proceedings.mlr.press/v235/kim24aj.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24aj/kim24aj.pdf
|
https://openreview.net/forum?id=2dEH0u8w0b
|
Do Topological Characteristics Help in Knowledge Distillation?
|
https://proceedings.mlr.press/v235/kim24aj.html
|
Jungeun Kim, Junwon You, Dongjin Lee, Ha Young Kim, Jae-Hun Jung
|
https://proceedings.mlr.press/v235/kim24aj.html
|
ICML 2024
|
Knowledge distillation (KD) aims to transfer knowledge from larger (teacher) to smaller (student) networks. Previous studies focus on point-to-point or pairwise relationships in embedding features as knowledge and struggle to efficiently transfer relationships of complex latent spaces. To tackle this issue, we propose a novel KD method called TopKD, which considers the global topology of the latent spaces. We define global topology knowledge using the persistence diagram (PD) that captures comprehensive geometric structures such as shape of distribution, multiscale structure and connectivity, and the topology distillation loss for teaching this knowledge. To make the PD transferable within reasonable computational time, we employ approximated persistence images of PDs. Through experiments, we support the benefits of using global topology as knowledge and demonstrate the potential of TopKD. Code is available at https://github.com/jekim5418/TopKD
|
https://proceedings.mlr.press/v235/kim24ak.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kim24ak/kim24ak.pdf
|
https://openreview.net/forum?id=Olix9pk6nV
|
A Unified Linear Programming Framework for Offline Reward Learning from Human Demonstrations and Feedback
|
https://proceedings.mlr.press/v235/kim24ak.html
|
Kihyun Kim, Jiawei Zhang, Asuman E. Ozdaglar, Pablo Parrilo
|
https://proceedings.mlr.press/v235/kim24ak.html
|
ICML 2024
|
Inverse Reinforcement Learning (IRL) and Reinforcement Learning from Human Feedback (RLHF) are pivotal methodologies in reward learning, which involve inferring and shaping the underlying reward function of sequential decision-making problems based on observed human demonstrations and feedback. Most prior work in reward learning has relied on prior knowledge or assumptions about decision or preference models, potentially leading to robustness issues. In response, this paper introduces a novel linear programming (LP) framework tailored for offline reward learning. Utilizing pre-collected trajectories without online exploration, this framework estimates a feasible reward set from the primal-dual optimality conditions of a suitably designed LP, and offers an optimality guarantee with provable sample efficiency. Our LP framework also enables aligning the reward functions with human feedback, such as pairwise trajectory comparison data, while maintaining computational tractability and sample efficiency. We demonstrate that our framework potentially achieves better performance compared to the conventional maximum likelihood estimation (MLE) approach through analytical examples and numerical experiments.
|
https://proceedings.mlr.press/v235/kirschstein24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kirschstein24a/kirschstein24a.pdf
|
https://openreview.net/forum?id=QE6iC9s6vU
|
The Merit of River Network Topology for Neural Flood Forecasting
|
https://proceedings.mlr.press/v235/kirschstein24a.html
|
Nikolas Kirschstein, Yixuan Sun
|
https://proceedings.mlr.press/v235/kirschstein24a.html
|
ICML 2024
|
Climate change exacerbates riverine floods, which occur with higher frequency and intensity than ever. The much-needed forecasting systems typically rely on accurate river discharge predictions. To this end, the SOTA data-driven approaches treat forecasting at spatially distributed gauge stations as isolated problems, even within the same river network. However, incorporating the known topology of the river network into the prediction model has the potential to leverage the adjacency relationship between gauges. Thus, we model river discharge for a network of gauging stations with GNNs and compare the forecasting performance achieved by different adjacency definitions. Our results show that the model fails to benefit from the river network topology information, both on the entire network and small subgraphs. The learned edge weights correlate with neither of the static definitions and exhibit no regular pattern. Furthermore, the GNNs struggle to predict sudden, narrow discharge spikes. Our work hints at a more general underlying phenomenon of neural prediction not always benefitting from graphical structure and may inspire a systematic study of the conditions under which this happens.
|
https://proceedings.mlr.press/v235/kitouni24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kitouni24a/kitouni24a.pdf
|
https://openreview.net/forum?id=XMlUlY7ONf
|
From Neurons to Neutrons: A Case Study in Interpretability
|
https://proceedings.mlr.press/v235/kitouni24a.html
|
Ouail Kitouni, Niklas Nolte, Vı́ctor Samuel Pérez-Dı́az, Sokratis Trifinopoulos, Mike Williams
|
https://proceedings.mlr.press/v235/kitouni24a.html
|
ICML 2024
|
Mechanistic Interpretability (MI) proposes a path toward fully understanding how neural networks make their predictions. Prior work demonstrates that even when trained to perform simple arithmetic, models can implement a variety of algorithms (sometimes concurrently) depending on initialization and hyperparameters. Does this mean neuron-level interpretability techniques have limited applicability? Here, we argue that high-dimensional neural networks can learn useful low-dimensional representations of the data they were trained on, going beyond simply making good predictions: Such representations can be understood with the MI lens and provide insights that are surprisingly faithful to human-derived domain knowledge. This indicates that such approaches to interpretability can be useful for deriving a new understanding of a problem from models trained to solve it. As a case study, we extract nuclear physics concepts by studying models trained to reproduce nuclear data.
|
https://proceedings.mlr.press/v235/kiyani24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kiyani24a/kiyani24a.pdf
|
https://openreview.net/forum?id=YPbcUBcTAk
|
Conformal Prediction with Learned Features
|
https://proceedings.mlr.press/v235/kiyani24a.html
|
Shayan Kiyani, George J. Pappas, Hamed Hassani
|
https://proceedings.mlr.press/v235/kiyani24a.html
|
ICML 2024
|
In this paper, we focus on the problem of conformal prediction with conditional guarantees. Prior work has shown that it is impossible to construct nontrivial prediction sets with full conditional coverage guarantees. A wealth of research has considered relaxations of full conditional guarantees, relying on some predefined uncertainty structures. Departing from this line of thinking, we propose Partition Learning Conformal Prediction (PLCP), a framework to improve conditional validity of prediction sets through learning uncertainty-guided features from the calibration data. We implement PLCP efficiently with alternating gradient descent, utilizing off-the-shelf machine learning models. We further analyze PLCP theoretically and provide conditional guarantees for infinite and finite sample sizes. Finally, our experimental results over four real-world and synthetic datasets show the superior performance of PLCP compared to state-of-the-art methods in terms of coverage and length in both classification and regression scenarios.
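The sketch below illustrates the partition-conditional idea in its simplest form: calibration points are assigned to groups and each group gets its own conformal quantile, widening intervals where the data are noisier. The fixed two-group split stands in for the uncertainty-guided features PLCP learns; the split-conformal quantile rule itself is standard.

```python
# Minimal sketch of group-conditional split conformal prediction; the fixed partition
# below is a stand-in for the learned features in PLCP.
import numpy as np

rng = np.random.default_rng(0)
n_cal, alpha = 2000, 0.1
x = rng.uniform(-2, 2, size=n_cal)
y = x + rng.normal(scale=np.where(x > 0, 1.0, 0.2))   # heteroscedastic noise
pred = x                                               # point predictor with the correct mean
scores = np.abs(y - pred)                              # nonconformity scores
group = (x > 0).astype(int)                            # stand-in for a learned partition

def group_quantile(g):
    s = np.sort(scores[group == g])
    k = int(np.ceil((len(s) + 1) * (1 - alpha))) - 1
    return float(s[min(k, len(s) - 1)])

print({g: group_quantile(g) for g in (0, 1)})          # the noisier group gets a wider interval
```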
|
https://proceedings.mlr.press/v235/klarner24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/klarner24a/klarner24a.pdf
|
https://openreview.net/forum?id=8NfHmzo0Op
|
Context-Guided Diffusion for Out-of-Distribution Molecular and Protein Design
|
https://proceedings.mlr.press/v235/klarner24a.html
|
Leo Klarner, Tim G. J. Rudner, Garrett M Morris, Charlotte Deane, Yee Whye Teh
|
https://proceedings.mlr.press/v235/klarner24a.html
|
ICML 2024
|
Generative models have the potential to accelerate key steps in the discovery of novel molecular therapeutics and materials. Diffusion models have recently emerged as a powerful approach, excelling at unconditional sample generation and, with data-driven guidance, conditional generation within their training domain. Reliably sampling from high-value regions beyond the training data, however, remains an open challenge—with current methods predominantly focusing on modifying the diffusion process itself. In this paper, we develop context-guided diffusion (CGD), a simple plug-and-play method that leverages unlabeled data and smoothness constraints to improve the out-of-distribution generalization of guided diffusion models. We demonstrate that this approach leads to substantial performance gains across various settings, including continuous, discrete, and graph-structured diffusion processes with applications across drug discovery, materials science, and protein design.
|
https://proceedings.mlr.press/v235/kleine-buening24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kleine-buening24a/kleine-buening24a.pdf
|
https://openreview.net/forum?id=Ar0dsOMStE
|
Environment Design for Inverse Reinforcement Learning
|
https://proceedings.mlr.press/v235/kleine-buening24a.html
|
Thomas Kleine Buening, Victor Villin, Christos Dimitrakakis
|
https://proceedings.mlr.press/v235/kleine-buening24a.html
|
ICML 2024
|
Learning a reward function from demonstrations suffers from low sample-efficiency. Even with abundant data, current inverse reinforcement learning methods that focus on learning from a single environment can fail to handle slight changes in the environment dynamics. We tackle these challenges through adaptive environment design. In our framework, the learner repeatedly interacts with the expert, with the former selecting environments to identify the reward function as quickly as possible from the expert’s demonstrations in said environments. This results in improvements in both sample-efficiency and robustness, as we show experimentally, for both exact and approximate inference.
|
https://proceedings.mlr.press/v235/ko24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ko24a/ko24a.pdf
|
https://openreview.net/forum?id=MmZJ3kJXjX
|
What Would Gauss Say About Representations? Probing Pretrained Image Models using Synthetic Gaussian Benchmarks
|
https://proceedings.mlr.press/v235/ko24a.html
|
Ching-Yun Ko, Pin-Yu Chen, Payel Das, Jeet Mohapatra, Luca Daniel
|
https://proceedings.mlr.press/v235/ko24a.html
|
ICML 2024
|
Recent years have witnessed a paradigm shift in deep learning from task-centric model design to task-agnostic representation learning and task-specific fine-tuning. Pretrained model representations are commonly evaluated extensively across various real-world tasks and used as a foundation for different downstream tasks. This paper proposes a solution for assessing the quality of representations in a task-agnostic way. To circumvent the need for real-world data in evaluation, we explore the use of synthetic binary classification tasks with Gaussian mixtures to probe pretrained models and compare the robustness-accuracy performance on pretrained representations with an idealized reference. Our approach offers a holistic evaluation, revealing intrinsic model capabilities and reducing the dependency on real-life data for model evaluation. Evaluated with various pretrained image models, the experimental results confirm that our task-agnostic evaluation correlates with actual linear probing performance on downstream tasks and can also guide parameter choice in robust linear probing to achieve a better robustness-accuracy trade-off.
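A minimal sketch of the probing protocol follows: draw a synthetic two-class Gaussian mixture, push it through a frozen representation function, and fit a linear probe on the features. The random-ReLU encoder and least-squares probe are stand-ins for a pretrained image model and the robust linear probing studied in the paper.

```python
# Minimal sketch: probe a frozen feature map with a synthetic Gaussian-mixture
# binary classification task; the random-ReLU encoder is a stand-in for a pretrained model.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n = 32, 64, 2000
W_frozen = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)
encode = lambda x: np.maximum(x @ W_frozen, 0.0)

mu = rng.normal(size=d_in)
labels = rng.integers(2, size=n)
x = rng.normal(size=(n, d_in)) + np.where(labels[:, None] == 1, mu, -mu)

feats = np.hstack([encode(x), np.ones((n, 1))])                 # features + bias
w, *_ = np.linalg.lstsq(feats, 2.0 * labels - 1.0, rcond=None)  # least-squares linear probe
print(float(((feats @ w > 0).astype(int) == labels).mean()))    # probe accuracy
```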
|
https://proceedings.mlr.press/v235/ko24b.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ko24b/ko24b.pdf
|
https://openreview.net/forum?id=OVn8FpeBpG
|
Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes
|
https://proceedings.mlr.press/v235/ko24b.html
|
Hyunouk Ko, Xiaoming Huo
|
https://proceedings.mlr.press/v235/ko24b.html
|
ICML 2024
|
In this paper, we prove the universal consistency of wide and deep ReLU neural network classifiers. We also give sufficient conditions for a class of probability measures for which classifiers based on neural networks achieve minimax optimal rates of convergence. The result applies to a wide range of known function classes. In particular, while most previous works impose explicit smoothness assumptions on the regression function, our framework encompasses more general settings. The proposed neural networks are minimizers of the $0$-$1$ loss that exhibit a benign overfitting behavior.
|
https://proceedings.mlr.press/v235/ko24c.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ko24c/ko24c.pdf
|
https://openreview.net/forum?id=lsHZNNoC7r
|
DistiLLM: Towards Streamlined Distillation for Large Language Models
|
https://proceedings.mlr.press/v235/ko24c.html
|
Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun
|
https://proceedings.mlr.press/v235/ko24c.html
|
ICML 2024
|
Knowledge distillation (KD) is widely used for compressing a teacher model into a smaller student model, reducing its inference cost and memory footprint while preserving model capabilities. However, current KD methods for auto-regressive sequence models (e.g., large language models) suffer from the lack of a standardized objective function. Moreover, the recent use of student-generated outputs to address training-inference mismatches has significantly escalated computational costs. To tackle these issues, we introduce DistiLLM, a more effective and efficient KD framework for auto-regressive language models. DistiLLM comprises two components: (1) a novel skew Kullback-Leibler divergence loss, whose theoretical properties we unveil and leverage, and (2) an adaptive off-policy approach designed to improve the efficiency of utilizing student-generated outputs. Extensive experiments, including instruction-following tasks, demonstrate the effectiveness of DistiLLM in building high-performing student models while achieving up to a 4.3$\times$ speedup compared to recent KD methods.
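To make the first component concrete, the sketch below computes an $\alpha$-skew KL divergence of the form KL(p || $\alpha$p + (1-$\alpha$)q), which stays finite even where q places near-zero mass. The mixing direction and the value of $\alpha$ are stated as one common convention and may differ from the exact losses used between teacher and student in the paper.

```python
# Minimal sketch of an alpha-skew KL divergence, KL(p || alpha*p + (1-alpha)*q).
# The mixing direction and alpha are assumptions; see the paper for the exact losses.
import numpy as np

def kl(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def skew_kl(p, q, alpha=0.1):
    mix = alpha * np.asarray(p, float) + (1 - alpha) * np.asarray(q, float)
    return kl(p, mix)

teacher = [0.7, 0.2, 0.1, 0.0]   # toy next-token distributions
student = [0.4, 0.3, 0.2, 0.1]
print(kl(teacher, student), skew_kl(teacher, student, alpha=0.1))
```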
|
https://proceedings.mlr.press/v235/ko24d.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ko24d/ko24d.pdf
|
https://openreview.net/forum?id=0miAQ1qHiw
|
Provably Scalable Black-Box Variational Inference with Structured Variational Families
|
https://proceedings.mlr.press/v235/ko24d.html
|
Joohwan Ko, Kyurae Kim, Woo Chang Kim, Jacob R. Gardner
|
https://proceedings.mlr.press/v235/ko24d.html
|
ICML 2024
|
Variational families with full-rank covariance approximations are known not to work well in black-box variational inference (BBVI), both empirically and theoretically. In fact, recent computational complexity results for BBVI have established that full-rank variational families scale poorly with the dimensionality of the problem compared to e.g. mean-field families. This is particularly critical to hierarchical Bayesian models with local variables; their dimensionality increases with the size of the datasets. Consequently, one gets an iteration complexity with an explicit $\mathcal{O}(N^2)$ dependence on the dataset size $N$. In this paper, we explore a theoretical middle ground between mean-field variational families and full-rank families: structured variational families. We rigorously prove that certain scale matrix structures can achieve a better iteration complexity of $\mathcal{O}\left(N\right)$, implying better scaling with respect to $N$. We empirically verify our theoretical results on large-scale hierarchical models.
|
https://proceedings.mlr.press/v235/ko24e.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ko24e/ko24e.pdf
|
https://openreview.net/forum?id=rMV86cAOh6
|
Stochastic Conditional Diffusion Models for Robust Semantic Image Synthesis
|
https://proceedings.mlr.press/v235/ko24e.html
|
Juyeon Ko, Inho Kong, Dogyun Park, Hyunwoo J. Kim
|
https://proceedings.mlr.press/v235/ko24e.html
|
ICML 2024
|
Semantic image synthesis (SIS) is a task to generate realistic images corresponding to semantic maps (labels). However, in real-world applications, SIS often encounters noisy user inputs. To address this, we propose Stochastic Conditional Diffusion Model (SCDM), which is a robust conditional diffusion model that features novel forward and generation processes tailored for SIS with noisy labels. It enhances robustness by stochastically perturbing the semantic label maps through Label Diffusion, which diffuses the labels with discrete diffusion. Through the diffusion of labels, the noisy and clean semantic maps become similar as the timestep increases, eventually becoming identical at $t=T$. This facilitates the generation of an image close to a clean image, enabling robust generation. Furthermore, we propose a class-wise noise schedule to differentially diffuse the labels depending on the class. We demonstrate that the proposed method generates high-quality samples through extensive experiments and analyses on benchmark datasets, including a novel experimental setup simulating human errors during real-world applications. Code is available at https://github.com/mlvlab/SCDM.
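The label-diffusion step can be pictured with a toy discrete corruption process: at timestep t, each pixel's class is resampled uniformly with a probability that grows to one at t = T, so noisy and clean label maps become indistinguishable at the final step. The linear schedule and uniform transition kernel below are illustrative assumptions, not the class-wise schedule proposed in the paper.

```python
# Minimal sketch of diffusing a discrete semantic label map; the linear schedule and
# uniform transition kernel are illustrative (the paper uses a class-wise schedule).
import numpy as np

rng = np.random.default_rng(0)
num_classes, T = 4, 10
clean = rng.integers(num_classes, size=(8, 8))       # toy semantic label map

def diffuse_labels(labels, t):
    beta_t = t / T                                   # corruption probability at step t
    resample = rng.random(labels.shape) < beta_t
    random_labels = rng.integers(num_classes, size=labels.shape)
    return np.where(resample, random_labels, labels)

for t in (0, 5, 10):
    noisy = diffuse_labels(clean, t)
    print(t, float((noisy == clean).mean()))         # agreement with the clean map shrinks
```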
|
https://proceedings.mlr.press/v235/kogler24a.html
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/kogler24a/kogler24a.pdf
|
https://openreview.net/forum?id=1HDrfUahXv
|
Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth
|
https://proceedings.mlr.press/v235/kogler24a.html
|
Kevin Kögler, Aleksandr Shevchenko, Hamed Hassani, Marco Mondelli
|
https://proceedings.mlr.press/v235/kogler24a.html
|
ICML 2024
|
Autoencoders are a prominent model in many empirical branches of machine learning and lossy data compression. However, basic theoretical questions remain unanswered even in a shallow two-layer setting. In particular, to what degree does a shallow autoencoder capture the structure of the underlying data distribution? For the prototypical case of the 1-bit compression of sparse Gaussian data, we prove that gradient descent converges to a solution that completely disregards the sparse structure of the input. Namely, the performance of the algorithm is the same as if it was compressing a Gaussian source – with no sparsity. For general data distributions, we give evidence of a phase transition phenomenon in the shape of the gradient descent minimizer, as a function of the data sparsity: below the critical sparsity level, the minimizer is a rotation taken uniformly at random (just like in the compression of non-sparse data); above the critical sparsity, the minimizer is the identity (up to a permutation). Finally, by exploiting a connection with approximate message passing algorithms, we show how to improve upon Gaussian performance for the compression of sparse data: adding a denoising function to a shallow architecture already reduces the loss provably, and a suitable multi-layer decoder leads to a further improvement. We validate our findings on image datasets, such as CIFAR-10 and MNIST.
|