Title: Viewing Transformers Through the Lens of Long Convolutions Layers
Authors: Itamar Zimerman, Lior Wolf
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zimerman24b.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zimerman24b/zimerman24b.pdf
OpenReview: https://openreview.net/forum?id=nOyj26YdIQ
Despite their dominance in modern DL and, especially, NLP domains, transformer architectures exhibit sub-optimal performance on long-range tasks compared to recent layers that are specifically designed for this purpose. In this work, drawing inspiration from key attributes of long-range layers, such as state-space layers, linear RNN layers, and global convolution layers, we demonstrate that minimal modifications to the transformer architecture can significantly enhance performance on the Long Range Arena (LRA) benchmark, thus narrowing the gap with these specialized layers. We identify that two key principles for long-range tasks are (i) incorporating an inductive bias towards smoothness, and (ii) locality. As we show, integrating these ideas into the attention mechanism improves results with a negligible amount of additional computation and without any additional trainable parameters. Our theory and experiments also shed light on the reasons for the inferior performance of transformers on long-range tasks and identify critical properties that are essential for successfully capturing long-range dependencies.
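The two principles can be sketched without the paper's exact operators: subtract a parameter-free locality penalty from the attention logits and smooth the resulting attention weights along the key axis. The function below is an illustrative assumption (the ALiBi-style distance penalty, the kernel size, and all names are ours), meant only to show how such a change stays free of extra trainable parameters.

```python
import torch
import torch.nn.functional as F

def smoothed_local_attention(q, k, v, smooth_kernel=3, locality_strength=0.02):
    """Illustrative sketch: scaled dot-product attention with (i) a locality
    bias penalizing distant positions and (ii) smoothing of attention weights
    along the key axis. q, k, v: (batch, seq_len, dim). No trainable parameters
    are added; this is not the paper's exact mechanism."""
    b, n, d = q.shape
    scores = q @ k.transpose(-2, -1) / d ** 0.5                 # (b, n, n)

    # (i) locality: penalize attention proportionally to token distance
    pos = torch.arange(n, device=q.device)
    dist = (pos[None, :] - pos[:, None]).abs().float()          # (n, n)
    scores = scores - locality_strength * dist

    attn = scores.softmax(dim=-1)

    # (ii) smoothness: average-pool the attention weights over neighboring keys
    attn = F.avg_pool1d(attn.reshape(b * n, 1, n), kernel_size=smooth_kernel,
                        stride=1, padding=smooth_kernel // 2).reshape(b, n, n)
    attn = attn / attn.sum(dim=-1, keepdim=True)                # re-normalize rows

    return attn @ v
```

Both additions are parameter-free and cost only a bias addition and one pooling pass, consistent with the abstract's claim of negligible extra computation.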
Title: Emergence of In-Context Reinforcement Learning from Noise Distillation
Authors: Ilya Zisman, Vladislav Kurenkov, Alexander Nikulin, Viacheslav Sinii, Sergey Kolesnikov
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zisman24a.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zisman24a/zisman24a.pdf
OpenReview: https://openreview.net/forum?id=Y8KsHT1kTV
Recently, extensive studies in Reinforcement Learning have been carried out on the ability of transformers to adapt in-context to various environments and tasks. Current in-context RL methods are limited by their strict requirements for data, which needs to be generated by RL agents or labeled with actions from an optimal policy. In order to address this prevalent problem, we propose AD$^\varepsilon$, a new data acquisition approach that enables in-context Reinforcement Learning from a noise-induced curriculum. We show that it is viable to construct a synthetic noise injection curriculum which helps to obtain learning histories. Moreover, we experimentally demonstrate that it is possible to alleviate the need for generation using optimal policies, with in-context RL still able to outperform the best suboptimal policy in a learning dataset by a 2x margin.
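A rough way to picture the noise-induced curriculum: roll out a fixed (possibly suboptimal) demonstrator while replacing its actions with random ones at a probability that decays over episodes, so the concatenated trajectories resemble an agent improving over time. The sketch below assumes a classic Gym-style environment and a `demonstrator(obs)` callable, both stand-ins; the actual AD$^\varepsilon$ schedule and data format are not reproduced.

```python
import numpy as np

def synth_learning_history(env, demonstrator, episodes=100, seed=0):
    # Noise-injection curriculum sketch: the chance of replacing the
    # demonstrator's action with a random one decays linearly to zero,
    # so later episodes look more competent than earlier ones.
    rng = np.random.default_rng(seed)
    history = []
    for ep in range(episodes):
        eps = 1.0 - ep / max(episodes - 1, 1)        # noise level for this episode
        obs = env.reset()
        done = False
        while not done:
            if rng.random() < eps:
                action = env.action_space.sample()   # injected noise
            else:
                action = demonstrator(obs)           # demonstrator's action
            obs_next, reward, done, _ = env.step(action)
            history.append((obs, action, reward))
            obs = obs_next
    return history   # transitions ordered as a synthetic "learning" history
```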
Title: Symmetry Induces Structure and Constraint of Learning
Authors: Liu Ziyin
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/ziyin24a.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/ziyin24a/ziyin24a.pdf
OpenReview: https://openreview.net/forum?id=7AF0AMI4AE
Due to common architecture designs, symmetries exist extensively in contemporary neural networks. In this work, we unveil the importance of the loss function symmetries in affecting, if not deciding, the learning behavior of machine learning models. We prove that every mirror-reflection symmetry, with reflection surface $O$, in the loss function leads to the emergence of a constraint on the model parameters $\theta$: $O^T\theta = 0$. This constraint is satisfied when either the weight decay or the gradient noise is large. Common instances of mirror symmetries in deep learning include rescaling, rotation, and permutation symmetry. As direct corollaries, we show that rescaling symmetry leads to sparsity, rotation symmetry leads to low rankness, and permutation symmetry leads to homogeneous ensembling. Then, we show that the theoretical framework can explain intriguing phenomena, such as the loss of plasticity and various collapse phenomena in neural networks, and suggest how symmetries can be used to design an elegant algorithm to enforce hard constraints in a differentiable way.
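As a worked instance of this statement (our own example, using only the quantities named above): take two weights $u, w$ with the rescaling symmetry $\ell(u, w) = \ell(\lambda u, \lambda^{-1} w)$ for all $\lambda \neq 0$. Setting $\lambda = -1$ yields a mirror-reflection symmetry, and the induced constraint is exactly the sparsity listed as a corollary.

```latex
\[
\ell(u, w) \;=\; \ell(-u, -w)
\quad\Longleftrightarrow\quad
\ell(\theta) \;=\; \ell\!\big((I - 2OO^{\top})\,\theta\big),
\qquad
\theta = (u, w, \dots)^{\top},\;\;
O = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \vdots & \vdots \end{pmatrix},
\]
\[
O^{\top}\theta = 0 \;\iff\; u = w = 0.
\]
```

That is, when the weight decay or gradient noise is large enough to enforce the constraint, the weight pair is pruned to zero: rescaling symmetry induces sparsity.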
Title: Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models
Authors: Yongshuo Zong, Ondrej Bohdal, Tingyang Yu, Yongxin Yang, Timothy Hospedales
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zong24a.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zong24a/zong24a.pdf
OpenReview: https://openreview.net/forum?id=bWZKvF0g7G
Current vision large language models (VLLMs) exhibit remarkable capabilities yet are prone to generate harmful content and are vulnerable to even the simplest jailbreaking attacks. Our initial analysis finds that this is due to the presence of harmful data during vision-language instruction fine-tuning, and that VLLM fine-tuning can cause forgetting of safety alignment previously learned by the underpinning LLM. To address this issue, we first curate a vision-language safe instruction-following dataset VLGuard covering various harmful categories. Our experiments demonstrate that integrating this dataset into standard vision-language fine-tuning, or using it for post-hoc fine-tuning, effectively aligns VLLMs for safety. This alignment is achieved with minimal impact on, or even enhancement of, the models’ helpfulness. The versatility of our safety fine-tuning dataset makes it a valuable resource for safety-testing existing VLLMs, training new models, or safeguarding pre-trained VLLMs. Empirical results demonstrate that fine-tuned VLLMs effectively reject unsafe instructions and substantially reduce the success rates of several black-box adversarial attacks, which approach zero in many cases. The code and dataset will be open-sourced.
Title: Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations
Authors: Yongshuo Zong, Tingyang Yu, Ruchika Chavhan, Bingchen Zhao, Timothy Hospedales
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zong24b.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zong24b/zong24b.pdf
OpenReview: https://openreview.net/forum?id=IUijgjJgWO
Large language and vision-language models are rapidly being deployed in practice thanks to their impressive capabilities in instruction following, in-context learning, and so on. This raises an urgent need to carefully analyse their robustness so that stakeholders can understand if and when such models are trustworthy enough to be relied upon in any given application. In this paper, we highlight a specific vulnerability in popular models, namely permutation sensitivity in multiple-choice question answering (MCQA). Specifically, we show empirically that popular models are vulnerable to adversarial permutation in answer sets for multiple-choice prompting, which is surprising as models should ideally be as invariant to prompt permutation as humans are. These vulnerabilities persist across various model sizes, and exist in very recent language and vision-language models. Code to reproduce all experiments is provided in supplementary materials.
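The probe itself is straightforward: enumerate orderings of the answer options and check whether the model keeps selecting the same option content. A minimal harness is sketched below; `model_answer(prompt)` is a hypothetical stand-in for any language or vision-language model call that returns the chosen letter, and the prompt template is ours, not the paper's.

```python
from itertools import permutations

def is_permutation_robust(question, options, correct_idx, model_answer):
    """Checks whether an MCQA prediction survives every reordering of the
    answer options; an adversarial permutation exists if some ordering makes
    the model stop picking the correct option's content."""
    letters = "ABCDEFGH"[:len(options)]
    for perm in permutations(range(len(options))):
        prompt = question + "\n" + "\n".join(
            f"{letters[i]}. {options[j]}" for i, j in enumerate(perm)
        ) + "\nAnswer with the letter only."
        picked = model_answer(prompt)
        if picked not in letters:
            return False, perm                            # unparsable answer counts as failure
        picked_original = perm[letters.index(picked)]     # map letter back to option content
        if picked_original != correct_idx:
            return False, perm                            # this ordering fools the model
    return True, None
```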
Title: Leveraging Attractor Dynamics in Spatial Navigation for Better Language Parsing
Authors: Xiaolong Zou, Xingxing Cao, Xiaojiao Yang, Bo Hong
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zou24a.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zou24a/zou24a.pdf
OpenReview: https://openreview.net/forum?id=eapFRURALQ
Increasing experimental evidence suggests that the human hippocampus, evolutionarily shaped by spatial navigation tasks, also plays an important role in language comprehension, indicating a shared computational mechanism for both functions. However, the specific relationship between the hippocampal formation’s computational mechanism in spatial navigation and its role in language processing remains elusive. To investigate this question, we develop a prefrontal-hippocampal-entorhinal model (called PHE-trinity) that features two key aspects: 1) the use of a modular continuous attractor neural network to represent syntactic structure, akin to the grid network in the entorhinal cortex; 2) the creation of two separate input streams, mirroring the factorized structure-content representation found in the hippocampal formation. We evaluate our model on language command parsing tasks, specifically using the SCAN dataset. Our findings include: 1) attractor dynamics can facilitate systematic generalization and efficient learning from limited data; 2) through visualization and reverse engineering, we unravel a potential dynamic mechanism by which the grid network represents syntactic structure. Our research takes an initial step in uncovering the dynamic mechanism shared by spatial navigation and language information processing.
Title: Hybrid$^2$ Neural ODE Causal Modeling and an Application to Glycemic Response
Authors: Bob Junyi Zou, Matthew E Levine, Dessi P. Zaharieva, Ramesh Johari, Emily Fox
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zou24b.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zou24b/zou24b.pdf
OpenReview: https://openreview.net/forum?id=GHZVjmaGQM
Hybrid models composing mechanistic ODE-based dynamics with flexible and expressive neural network components have grown rapidly in popularity, especially in scientific domains where such ODE-based modeling offers important interpretability and validated causal grounding (e.g., for counterfactual reasoning). The incorporation of mechanistic models also provides an inductive bias that purely blackbox modeling approaches lack, which is critical when learning from small datasets or partially observed, complex systems. Unfortunately, as the hybrid models become more flexible, the causal grounding provided by the mechanistic model can quickly be lost. We address this problem by leveraging another common source of domain knowledge: the ranking of treatment effects for a set of interventions, even if the precise treatment effects are unknown. We encode this information in a causal loss that we combine with the standard predictive loss to arrive at a hybrid loss that biases our learning towards causally valid hybrid models. We demonstrate our ability to achieve a win-win of state-of-the-art predictive performance and causal validity in the challenging task of modeling glucose dynamics post-exercise in individuals with type 1 diabetes.
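One way to read the hybrid loss is as a standard predictive loss plus a ranking penalty on predicted treatment effects whose only supervision is the known ordering of interventions. The sketch below uses a pairwise margin penalty of our own choosing; the names and the exact form of the causal loss are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(pred_traj, true_traj, pred_effects, effect_order, alpha=1.0, margin=0.0):
    """pred_effects: predicted effects for a set of interventions;
    effect_order: intervention indices sorted from smallest to largest true
    effect (only the ranking is assumed known). Illustrative sketch only."""
    predictive = F.mse_loss(pred_traj, true_traj)
    ordered = pred_effects[effect_order]        # reorder predictions by true rank
    causal = pred_effects.new_zeros(())
    for i in range(len(ordered) - 1):
        for j in range(i + 1, len(ordered)):
            # penalize whenever a higher-ranked effect is not predicted larger
            causal = causal + torch.relu(margin + ordered[i] - ordered[j])
    return predictive + alpha * causal
```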
Title: Compositional Few-Shot Class-Incremental Learning
Authors: Yixiong Zou, Shanghang Zhang, Haichen Zhou, Yuhua Li, Ruixuan Li
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zou24c.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zou24c/zou24c.pdf
OpenReview: https://openreview.net/forum?id=t4908PyZxs
Few-shot class-incremental learning (FSCIL) is proposed to continually learn from novel classes with only a few samples after the (pre-)training on base classes with sufficient data. However, this remains a challenge. In contrast, humans can easily recognize novel classes with a few samples. Cognitive science demonstrates that an important component of such human capability is compositional learning. This involves identifying visual primitives from learned knowledge and then composing new concepts using these transferred primitives, making incremental learning both effective and interpretable. To imitate human compositional learning, we propose a cognitive-inspired method for the FSCIL task. We define and build a compositional model based on set similarities, and then equip it with a primitive composition module and a primitive reuse module. In the primitive composition module, we propose to utilize the Centered Kernel Alignment (CKA) similarity to approximate the similarity between primitive sets, allowing the training and evaluation based on primitive compositions. In the primitive reuse module, we enhance primitive reusability by classifying inputs based on primitives replaced with the closest primitives from other classes. Experiments on three datasets validate our method, showing it outperforms current state-of-the-art methods with improved interpretability. Our code is available at https://github.com/Zoilsen/Comp-FSCIL.
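For reference, the linear form of Centered Kernel Alignment used to compare representations is compact (Kornblith et al., 2019); the sketch below computes that standard linear CKA between two feature matrices and is not the paper's full primitive-set comparison.

```python
import torch

def linear_cka(X, Y):
    """Linear CKA between features X: (n, d1) and Y: (n, d2) computed on the
    same n examples; returns a similarity in [0, 1]."""
    X = X - X.mean(dim=0, keepdim=True)          # center each feature dimension
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (Y.T @ X).norm(p="fro") ** 2
    return hsic / ((X.T @ X).norm(p="fro") * (Y.T @ Y).norm(p="fro"))

# e.g. linear_cka(torch.randn(100, 64), torch.randn(100, 32))
```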
Title: BiE: Bi-Exponent Block Floating-Point for Large Language Models Quantization
Authors: Lancheng Zou, Wenqian Zhao, Shuo Yin, Chen Bai, Qi Sun, Bei Yu
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zou24d.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zou24d/zou24d.pdf
OpenReview: https://openreview.net/forum?id=DbyHDYslM7
Nowadays, Large Language Models (LLMs) mostly possess billions of parameters, bringing significant challenges to hardware platforms. Although quantization is an efficient approach to reduce computation and memory overhead for inference optimization, we stress the challenge that mainstream low-bit quantization approaches still suffer from either outliers in the data distribution or a lack of hardware efficiency. We also find that low-bit data formats have further untapped expressiveness for covering the atypical distribution of language data. In this paper, we propose a novel numerical representation, Bi-Exponent Block Floating Point (BiE), and a new quantization flow. BiE quantization shows accuracy superiority and hardware friendliness on various models and benchmarks.
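For context, plain block floating-point shares one power-of-two exponent across a block of values and keeps only low-bit mantissas per value; per the abstract, BiE refines this baseline with a bi-exponent scheme, which the sketch below does not implement. Block size, mantissa width, and the exponent rule here are illustrative choices.

```python
import numpy as np

def block_floating_point(x, block_size=16, mantissa_bits=4):
    # Baseline block floating-point: each block shares one power-of-two scale,
    # chosen so the signed mantissa range covers the block's largest magnitude;
    # individual values keep only their rounded mantissas.
    flat = np.asarray(x, dtype=np.float64).ravel()
    out = np.empty_like(flat)
    levels = 2 ** (mantissa_bits - 1)                    # e.g. 8 for 4-bit signed
    for start in range(0, flat.size, block_size):
        block = flat[start:start + block_size]
        max_abs = np.max(np.abs(block))
        if max_abs == 0.0:
            out[start:start + block_size] = 0.0
            continue
        shared_exp = int(np.ceil(np.log2(max_abs / (levels - 1))))
        step = 2.0 ** shared_exp                         # shared power-of-two scale
        out[start:start + block_size] = np.clip(
            np.round(block / step), -levels, levels - 1) * step
    return out.reshape(np.shape(x))
```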
Title: Active Statistical Inference
Authors: Tijana Zrnic, Emmanuel Candes
Tags: ICML 2024
URL: https://proceedings.mlr.press/v235/zrnic24a.html
PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/zrnic24a/zrnic24a.pdf
OpenReview: https://openreview.net/forum?id=GKMcCtWC7H
Inspired by the concept of active learning, we propose active inference—a methodology for statistical inference with machine-learning-assisted data collection. Assuming a budget on the number of labels that can be collected, the methodology uses a machine learning model to identify which data points would be most beneficial to label, thus effectively utilizing the budget. It operates on a simple yet powerful intuition: prioritize the collection of labels for data points where the model exhibits uncertainty, and rely on the model’s predictions where it is confident. Active inference constructs valid confidence intervals and hypothesis tests while leveraging any black-box machine learning model and handling any data distribution. The key point is that it achieves the same level of accuracy with far fewer samples than existing baselines relying on non-adaptively-collected data. This means that for the same number of collected samples, active inference enables smaller confidence intervals and more powerful tests. We evaluate active inference on datasets from public opinion research, census analysis, and proteomics.
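The abstract's intuition (collect labels where the model is uncertain, rely on its predictions elsewhere) can be illustrated with a small mean-estimation sketch: sample labels with probability proportional to uncertainty and correct the predictions with an inverse-probability weight so the estimate stays unbiased. This is only an illustration of the idea; the paper's estimators and confidence intervals are not reproduced, and `label_fn` is a hypothetical stand-in for collecting a label.

```python
import numpy as np

def active_mean_estimate(preds, uncertainty, label_fn, budget, seed=0):
    """preds, uncertainty: length-n arrays of model predictions and uncertainty
    scores; label_fn(i) returns the true label of point i. Returns an unbiased
    estimate of the mean label using roughly `budget` collected labels."""
    rng = np.random.default_rng(seed)
    n = len(preds)
    # label-collection probabilities proportional to uncertainty, scaled to budget
    pi = np.clip(uncertainty / uncertainty.sum() * budget, 1e-6, 1.0)
    labeled = rng.random(n) < pi                 # points we actually pay to label
    correction = np.zeros(n)
    for i in np.flatnonzero(labeled):
        correction[i] = (label_fn(i) - preds[i]) / pi[i]
    # E[correction_i] = y_i - preds_i, so the estimator targets the true mean
    return preds.mean() + correction.mean()
```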