title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---|
GDL-DS: A Benchmark for Geometric Deep Learning under Distribution Shifts | Geometric deep learning (GDL) has gained significant attention in various
scientific fields, chiefly for its proficiency in modeling data with intricate
geometric structures. Yet, very few works have delved into its capability of
tackling the distribution shift problem, a prevalent challenge in many relevant
applications. To bridge this gap, we propose GDL-DS, a comprehensive benchmark
designed for evaluating the performance of GDL models in scenarios with
distribution shifts. Our evaluation datasets cover diverse scientific domains
from particle physics and materials science to biochemistry, and encapsulate a
broad spectrum of distribution shifts including conditional, covariate, and
concept shifts. Furthermore, we study three levels of information access from
the out-of-distribution (OOD) testing data, including no OOD information, only
OOD features without labels, and OOD features with a few labels. Overall, our
benchmark results in 30 different experiment settings, and evaluates 3 GDL
backbones and 11 learning algorithms in each setting. A thorough analysis of
the evaluation results is provided to offer insights for GDL researchers and
domain practitioners who intend to use GDL in their applications. | [
"Deyu Zou",
"Shikun Liu",
"Siqi Miao",
"Victor Fung",
"Shiyu Chang",
"Pan Li"
] | 2023-10-12 19:27:43 | http://arxiv.org/abs/2310.08677v1 | http://arxiv.org/pdf/2310.08677v1 | 2310.08677v1 |
Machine Learning Who to Nudge: Causal vs Predictive Targeting in a Field Experiment on Student Financial Aid Renewal | In many settings, interventions may be more effective for some individuals
than others, so that targeting interventions may be beneficial. We analyze the
value of targeting in the context of a large-scale field experiment with over
53,000 college students, where the goal was to use "nudges" to encourage
students to renew their financial-aid applications before a non-binding
deadline. We begin with baseline approaches to targeting. First, we target
based on a causal forest that estimates heterogeneous treatment effects and
then assigns students to treatment according to those estimated to have the
highest treatment effects. Next, we evaluate two alternative targeting
policies, one targeting students with low predicted probability of renewing
financial aid in the absence of the treatment, the other targeting those with
high probability. The predicted baseline outcome is not the ideal criterion for
targeting, nor is it a priori clear whether to prioritize low, high, or
intermediate predicted probability. Nonetheless, targeting on low baseline
outcomes is common in practice, for example because the relationship between
individual characteristics and treatment effects is often difficult or
impossible to estimate with historical data. We propose hybrid approaches that
incorporate the strengths of both predictive approaches (accurate estimation)
and causal approaches (correct criterion); we show that targeting intermediate
baseline outcomes is most effective, while targeting based on low baseline
outcomes is detrimental. In one year of the experiment, nudging all students
improved early filing by an average of 6.4 percentage points over a baseline
average of 37% filing, and we estimate that targeting half of the students
using our preferred policy attains around 75% of this benefit. | [
"Susan Athey",
"Niall Keleher",
"Jann Spiess"
] | 2023-10-12 19:08:45 | http://arxiv.org/abs/2310.08672v1 | http://arxiv.org/pdf/2310.08672v1 | 2310.08672v1 |
Every Parameter Matters: Ensuring the Convergence of Federated Learning with Dynamic Heterogeneous Models Reduction | Cross-device Federated Learning (FL) faces significant challenges where
low-end clients that could potentially make unique contributions are excluded
from training large models due to their resource bottlenecks. Recent research
efforts have focused on model-heterogeneous FL, by extracting reduced-size
models from the global model and applying them to local clients accordingly.
Despite the empirical success, general theoretical guarantees of convergence
for this method remain an open question. In this paper, we present a unifying
framework for heterogeneous FL algorithms with online model extraction and
provide a general convergence analysis. In particular, we prove that under
certain sufficient conditions and for both IID and non-IID data, these
algorithms converge to a stationary point of standard FL for general smooth
cost functions. Moreover, we illuminate two key factors impacting its
convergence: model-extraction noise and minimum coverage index, advocating a
joint design of local model extraction for efficient heterogeneous FL. | [
"Hanhan Zhou",
"Tian Lan",
"Guru Venkataramani",
"Wenbo Ding"
] | 2023-10-12 19:07:58 | http://arxiv.org/abs/2310.08670v1 | http://arxiv.org/pdf/2310.08670v1 | 2310.08670v1 |
Counting and Algorithmic Generalization with Transformers | Algorithmic generalization in machine learning refers to the ability to learn
the underlying algorithm that generates data in a way that generalizes
out-of-distribution. This is generally considered a difficult task for most
machine learning algorithms. Here, we analyze algorithmic generalization when
counting is required, either implicitly or explicitly. We show that standard
Transformers are based on architectural decisions that hinder
out-of-distribution performance for such tasks. In particular, we discuss the
consequences of using layer normalization and of normalizing the attention
weights via softmax. With ablation of the problematic operations, we
demonstrate that a modified transformer can exhibit good algorithmic
generalization performance on counting while using a very lightweight
architecture. | [
"Simon Ouellette",
"Rolf Pfister",
"Hansueli Jud"
] | 2023-10-12 18:39:24 | http://arxiv.org/abs/2310.08661v1 | http://arxiv.org/pdf/2310.08661v1 | 2310.08661v1 |
Learning RL-Policies for Joint Beamforming Without Exploration: A Batch Constrained Off-Policy Approach | In this project, we consider the problem of network parameter optimization
for rate maximization. We frame this as a joint optimization problem of power
control, beamforming, and interference cancellation. We consider the setting
where multiple Base Stations (BSs) are communicating with multiple user
equipments (UEs). Because of the exponential computational complexity of brute
force search, we instead solve this non-convex optimization problem using deep
reinforcement learning (RL) techniques. Modern communication systems are
notoriously difficult to model exactly, which limits the use of RL-based
algorithms, since interaction with the environment is needed for the agent to
explore and learn efficiently. Further, it is ill-advised to deploy such
algorithms in the real world for exploration and learning because of the high
cost of failure. In contrast to previously proposed RL-based solutions,
such as deep Q-network (DQN) based control, we propose taking an offline,
model-based approach. We specifically consider discrete batch-constrained deep
Q-learning (BCQ) and show that performance similar to DQN can be achieved with
only a fraction of the data and without the need for exploration. This results
in maximizing sample efficiency and minimizing risk in the deployment of a new
algorithm to commercial networks. We provide the entire resource of the
project, including code and data, at the following link:
https://github.com/Heasung-Kim/safe-rl-deployment-for-5g. | [
"Heasung Kim",
"Sravan Ankireddy"
] | 2023-10-12 18:36:36 | http://arxiv.org/abs/2310.08660v1 | http://arxiv.org/pdf/2310.08660v1 | 2310.08660v1 |
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models | Quantization is an indispensable technique for serving Large Language Models
(LLMs) and has recently found its way into LoRA fine-tuning. In this work we
focus on the scenario where quantization and LoRA fine-tuning are applied
together on a pre-trained model. In such cases, it is common to observe a
consistent gap in downstream-task performance between full fine-tuning and the
quantization plus LoRA fine-tuning approach. In response, we propose LoftQ
(LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that
simultaneously quantizes an LLM and finds a proper low-rank initialization for
LoRA fine-tuning. Such an initialization alleviates the discrepancy between the
quantized and full-precision model and significantly improves the
generalization in downstream tasks. We evaluate our method on natural language
understanding, question answering, summarization, and natural language
generation tasks. Experiments show that our method is highly effective and
outperforms existing quantization methods, especially in the challenging 2-bit
and 2/4-bit mixed precision regimes. We will release our code. | [
"Yixiao Li",
"Yifan Yu",
"Chen Liang",
"Pengcheng He",
"Nikos Karampatziakis",
"Weizhu Chen",
"Tuo Zhao"
] | 2023-10-12 18:34:08 | http://arxiv.org/abs/2310.08659v3 | http://arxiv.org/pdf/2310.08659v3 | 2310.08659v3 |
SplitBeam: Effective and Efficient Beamforming in Wi-Fi Networks Through Split Computing | Modern IEEE 802.11 (Wi-Fi) networks extensively rely on multiple-input
multiple-output (MIMO) to significantly improve throughput. To correctly
beamform MIMO transmissions, the access point needs to frequently acquire a
beamforming matrix (BM) from each connected station. However, the size of the
matrix grows with the number of antennas and subcarriers, resulting in an
increasing amount of airtime overhead and computational load at the station.
Conventional approaches come with either excessive computational load or loss
of beamforming precision. For this reason, we propose SplitBeam, a new
framework where we train a split deep neural network (DNN) to directly output
the BM given the channel state information (CSI) matrix as input. We formulate
and solve a bottleneck optimization problem (BOP) to keep computation, airtime
overhead, and bit error rate (BER) below application requirements. We perform
extensive experimental CSI collection with off-the-shelf Wi-Fi devices in two
distinct environments and compare the performance of SplitBeam with the
standard IEEE 802.11 algorithm for BM feedback and the state-of-the-art
DNN-based approach LB-SciFi. Our experimental results show that SplitBeam
reduces the beamforming feedback size and computational complexity by
respectively up to 81% and 84% while maintaining BER within about 10^-3 of
existing approaches. We also implement the SplitBeam DNNs on FPGA hardware to
estimate the end-to-end BM reporting delay, and show that the latter is less
than 10 milliseconds in the most complex scenario, which is the target channel
sounding frequency in realistic multi-user MIMO scenarios. | [
"Niloofar Bahadori",
"Yoshitomo Matsubara",
"Marco Levorato",
"Francesco Restuccia"
] | 2023-10-12 18:29:21 | http://arxiv.org/abs/2310.08656v1 | http://arxiv.org/pdf/2310.08656v1 | 2310.08656v1 |
Analyzing Textual Data for Fatality Classification in Afghanistan's Armed Conflicts: A BERT Approach | Afghanistan has witnessed many armed conflicts throughout history, especially
in the past 20 years; these events have had a significant impact on human
lives, including military and civilians, with potential fatalities. In this
research, we aim to leverage state-of-the-art machine learning techniques to
classify the outcomes of Afghanistan's armed conflicts as either fatal or
non-fatal based on their textual descriptions provided by the Armed Conflict
Location & Event Data Project (ACLED) dataset. The dataset contains
comprehensive descriptions of armed conflicts in Afghanistan that took place
from August 2021 to March 2023. The proposed approach leverages the power of
BERT (Bidirectional Encoder Representations from Transformers), a cutting-edge
language representation model in natural language processing. The classifier
utilizes the raw textual description of an event to estimate the likelihood of
the event resulting in a fatality. The model achieved impressive performance on
the test set with an accuracy of 98.8%, recall of 98.05%, precision of 99.6%,
and an F1 score of 98.82%. These results highlight the model's robustness and
indicate its potential impact in various areas such as resource allocation,
policymaking, and humanitarian aid efforts in Afghanistan. This work
demonstrates a machine learning-based text classification approach using the
ACLED dataset to accurately classify fatality in Afghanistan's armed
conflicts, achieving
robust performance with the BERT model and paving the way for future endeavors
in predicting event severity in Afghanistan. | [
"Hikmatullah Mohammadi",
"Ziaullah Momand",
"Parwin Habibi",
"Nazifa Ramaki",
"Bibi Storay Fazli",
"Sayed Zobair Rohany",
"Iqbal Samsoor"
] | 2023-10-12 18:26:23 | http://arxiv.org/abs/2310.08653v1 | http://arxiv.org/pdf/2310.08653v1 | 2310.08653v1 |
Electrical Grid Anomaly Detection via Tensor Decomposition | Supervisory Control and Data Acquisition (SCADA) systems often serve as the
nervous system for substations within power grids. These systems facilitate
real-time monitoring, data acquisition, control of equipment, and ensure smooth
and efficient operation of the substation and its connected devices. Previous
work has shown that dimensionality reduction-based approaches, such as
Principal Component Analysis (PCA), can be used for accurate identification of
anomalies in SCADA systems. While not specifically applied to SCADA,
non-negative matrix factorization (NMF) has shown strong results at detecting
anomalies in wireless sensor networks. These unsupervised approaches model the
normal or expected behavior and detect the unseen types of attacks or anomalies
by identifying the events that deviate from the expected behavior. These
approaches, however, do not model the complex and multi-dimensional
interactions that are naturally present in SCADA systems. In contrast,
non-negative tensor decomposition is a powerful unsupervised machine learning
(ML) method that can model the complex and multi-faceted activity details of
SCADA events. In this work, we are the first to apply the tensor decomposition
method
Canonical Polyadic Alternating Poisson Regression (CP-APR) with a probabilistic
framework, which has previously shown state-of-the-art anomaly detection
results on cyber network data, to identify anomalies in SCADA systems. We
showcase that the use of statistical behavior analysis of SCADA communication
with tensor decomposition improves the specificity and accuracy of identifying
anomalies in electrical grid systems. In our experiments, we model real-world
SCADA system data collected from the electrical grid operated by Los Alamos
National Laboratory (LANL) which provides transmission and distribution service
through a partnership with Los Alamos County, and detect synthetically
generated anomalies. | [
"Alexander Most",
"Maksim Eren",
"Nigel Lawrence",
"Boian Alexandrov"
] | 2023-10-12 18:23:06 | http://arxiv.org/abs/2310.08650v1 | http://arxiv.org/pdf/2310.08650v1 | 2310.08650v1 |
Time-vectorized numerical integration for systems of ODEs | Stiff systems of ordinary differential equations (ODEs) and sparse training
data are common in scientific problems. This paper describes efficient,
implicit, vectorized methods for integrating stiff systems of ordinary
differential equations through time and calculating parameter gradients with
the adjoint method. The main innovation is to vectorize the problem both over
the number of independent time series and over a batch or "chunk" of
sequential time steps, effectively vectorizing the assembly of the implicit
system of ODEs. The block-bidiagonal structure of the linearized implicit
system for the backward Euler method allows for further vectorization using
parallel cyclic reduction (PCR). Vectorizing over both axes of the input data
provides a higher bandwidth of calculations to the computing device, allowing
even problems with comparatively sparse data to fully utilize modern GPUs,
achieving speedups of greater than 100x compared to standard, sequential time
integration. We demonstrate the advantages of implicit, vectorized time
integration with several example problems, drawn from both analytical stiff and
non-stiff ODE models as well as neural ODE models. We also describe and provide
a freely available open-source implementation of the methods developed here. | [
"Mark C. Messner",
"Tianchen Hu",
"Tianju Chen"
] | 2023-10-12 18:21:02 | http://arxiv.org/abs/2310.08649v1 | http://arxiv.org/pdf/2310.08649v1 | 2310.08649v1 |
Defect Analysis of 3D Printed Cylinder Object Using Transfer Learning Approaches | Additive manufacturing (AM) is gaining attention across various industries
like healthcare, aerospace, and automotive. However, identifying defects early
in the AM process, which can reduce production costs and improve productivity,
remains a key challenge. This study explored the effectiveness of machine
learning (ML)
approaches, specifically transfer learning (TL) models, for defect detection in
3D-printed cylinders. Images of cylinders were analyzed using models including
VGG16, VGG19, ResNet50, ResNet101, InceptionResNetV2, and MobileNetV2.
Performance was compared across two datasets using accuracy, precision, recall,
and F1-score metrics. In the first study, VGG16, InceptionResNetV2, and
MobileNetV2 achieved perfect scores. In contrast, ResNet50 had the lowest
performance, with an average F1-score of 0.32. Similarly, in the second study,
MobileNetV2 correctly classified all instances, while ResNet50 struggled with
more false positives and fewer true positives, resulting in an F1-score of
0.75. Overall, the findings suggest certain TL models like MobileNetV2 can
deliver high accuracy for AM defect classification, although performance varies
across algorithms. The results provide insights into model optimization and
integration needs for reliable automated defect analysis during 3D printing. By
identifying the top-performing TL techniques, this study aims to enhance AM
product quality through robust image-based monitoring and inspection. | [
"Md Manjurul Ahsan",
"Shivakumar Raman",
"Zahed Siddique"
] | 2023-10-12 18:10:36 | http://arxiv.org/abs/2310.08645v1 | http://arxiv.org/pdf/2310.08645v1 | 2310.08645v1 |
A Mass-Conserving-Perceptron for Machine Learning-Based Modeling of Geoscientific Systems | Although decades of effort have been devoted to building Physical-Conceptual
(PC) models for predicting the time-series evolution of geoscientific systems,
recent work shows that Machine Learning (ML) based Gated Recurrent Neural
Network technology can be used to develop models that are much more accurate.
However, the difficulty of extracting physical understanding from ML-based
models complicates their utility for enhancing scientific knowledge regarding
system structure and function. Here, we propose a physically-interpretable Mass
Conserving Perceptron (MCP) as a way to bridge the gap between PC-based and
ML-based modeling approaches. The MCP exploits the inherent isomorphism between
the directed graph structures underlying both PC models and GRNNs to explicitly
represent the mass-conserving nature of physical processes while enabling the
functional nature of such processes to be directly learned (in an interpretable
manner) from available data using off-the-shelf ML technology. As a proof of
concept, we investigate the functional expressivity (capacity) of the MCP,
explore its ability to parsimoniously represent the rainfall-runoff (RR)
dynamics of the Leaf River Basin, and demonstrate its utility for scientific
hypothesis testing. To conclude, we discuss extensions of the concept to enable
ML-based physical-conceptual representation of the coupled nature of
mass-energy-information flows through geoscientific systems. | [
"Yuan-Heng Wang",
"Hoshin V. Gupta"
] | 2023-10-12 18:09:33 | http://arxiv.org/abs/2310.08644v1 | http://arxiv.org/pdf/2310.08644v1 | 2310.08644v1 |
Octopus: Embodied Vision-Language Programmer from Environmental Feedback | Large vision-language models (VLMs) have achieved substantial progress in
multimodal perception and reasoning. Furthermore, seamlessly integrating them
into an embodied agent signifies a crucial stride towards the creation of
autonomous and context-aware systems capable of formulating plans and executing
commands with precision. In this paper, we introduce Octopus, a novel VLM
designed to proficiently decipher an agent's vision and textual task objectives
and to formulate intricate action sequences and generate executable code. Our
design allows the agent to adeptly handle a wide spectrum of tasks, ranging
from mundane daily chores in simulators to sophisticated interactions in
complex video games. Octopus is trained by leveraging GPT-4 to control an
explorative agent to generate training data, i.e., action blueprints and the
corresponding executable code, within our experimental environment called
OctoVerse. We also collect feedback that enables an enhanced training scheme,
Reinforcement Learning with Environmental Feedback (RLEF). Through a series of
experiments, we illuminate Octopus's functionality and present compelling
results, showing that the proposed RLEF refines the agent's
decision-making. By open-sourcing our model architecture, simulator, and
dataset, we aspire to ignite further innovation and foster collaborative
applications within the broader embodied AI community. | [
"Jingkang Yang",
"Yuhao Dong",
"Shuai Liu",
"Bo Li",
"Ziyue Wang",
"Chencheng Jiang",
"Haoran Tan",
"Jiamu Kang",
"Yuanhan Zhang",
"Kaiyang Zhou",
"Ziwei Liu"
] | 2023-10-12 17:59:58 | http://arxiv.org/abs/2310.08588v1 | http://arxiv.org/pdf/2310.08588v1 | 2310.08588v1 |
Tree-Planner: Efficient Close-loop Task Planning with Large Language Models | This paper studies closed-loop task planning, which refers to the process of
generating a sequence of skills (a plan) to accomplish a specific goal while
adapting the plan based on real-time observations. Recently, prompting Large
Language Models (LLMs) to generate actions iteratively has become a prevalent
paradigm due to its superior performance and user-friendliness. However, this
paradigm is plagued by two inefficiencies: high token consumption and redundant
error correction, both of which hinder its scalability for large-scale testing
and applications. To address these issues, we propose Tree-Planner, which
reframes task planning with LLMs into three distinct phases: plan sampling,
action tree construction, and grounded deciding. Tree-Planner starts by using
an LLM to sample a set of potential plans before execution, then aggregating
them to form an action tree. Finally, the LLM performs a
top-down decision-making process on the tree, taking into account real-time
environmental information. Experiments show that Tree-Planner achieves
state-of-the-art performance while maintaining high efficiency. By decomposing
LLM queries into a single plan-sampling call and multiple grounded-deciding
calls, a considerable part of the prompt is less likely to be repeatedly
consumed. As a result, token consumption is reduced by 92.2% compared to the
previously best-performing model. Additionally, by enabling backtracking on the
action tree as needed, the correction process becomes more flexible, leading to
a 40.5% decrease in error corrections. Project page:
https://tree-planner.github.io/ | [
"Mengkang Hu",
"Yao Mu",
"Xinmiao Yu",
"Mingyu Ding",
"Shiguang Wu",
"Wenqi Shao",
"Qiguang Chen",
"Bin Wang",
"Yu Qiao",
"Ping Luo"
] | 2023-10-12 17:59:50 | http://arxiv.org/abs/2310.08582v1 | http://arxiv.org/pdf/2310.08582v1 | 2310.08582v1 |
Visual Data-Type Understanding does not emerge from Scaling Vision-Language Models | Recent advances in the development of vision-language models (VLMs) are
yielding remarkable success in recognizing visual semantic content, including
impressive instances of compositional image understanding. Here, we introduce
the novel task of Visual Data-Type Identification, a basic perceptual skill
with implications for data curation (e.g., noisy data-removal from large
datasets, domain-specific retrieval) and autonomous vision (e.g.,
distinguishing changing weather conditions from camera lens staining). We
develop two datasets consisting of animal images altered across a diverse set
of 27 visual data-types, spanning four broad categories. An extensive zero-shot
evaluation of 39 VLMs, ranging from 100M to 80B parameters, shows a nuanced
performance landscape. While VLMs are reasonably good at identifying certain
stylistic data-types, such as cartoons and sketches, they struggle
with simpler data-types arising from basic manipulations like image rotations
or additive noise. Our findings reveal that (i) model scaling alone yields
marginal gains for contrastively-trained models like CLIP, and (ii) there is a
pronounced drop in performance for the largest auto-regressively trained VLMs
like OpenFlamingo. This finding points to a blind spot in current frontier
VLMs: they excel in recognizing semantic content but fail to acquire an
understanding of visual data-types through scaling. By analyzing the
pre-training distributions of these models and incorporating data-type
information into the captions during fine-tuning, we achieve a significant
enhancement in performance. By exploring this previously uncharted task, we aim
to set the stage for further advancing VLMs to equip them with visual data-type
understanding. Code and datasets are released at
https://github.com/bethgelab/DataTypeIdentification. | [
"Vishaal Udandarao",
"Max F. Burg",
"Samuel Albanie",
"Matthias Bethge"
] | 2023-10-12 17:59:30 | http://arxiv.org/abs/2310.08577v2 | http://arxiv.org/pdf/2310.08577v2 | 2310.08577v2 |
Learning to Act from Actionless Videos through Dense Correspondences | In this work, we present an approach to construct a video-based robot policy
capable of reliably executing diverse tasks across different robots and
environments from few video demonstrations without using any action
annotations. Our method leverages images as a task-agnostic representation,
encoding both the state and action information, and text as a general
representation for specifying robot goals. By synthesizing videos that
"hallucinate" the robot executing actions, in combination with dense
correspondences between frames, our approach can infer the closed-form actions
to execute in an environment without the need for any explicit action labels.
This unique capability allows us to train the policy solely based on RGB videos
and deploy learned policies to various robotic tasks. We demonstrate the
efficacy of our approach in learning policies on table-top manipulation and
navigation tasks. Additionally, we contribute an open-source framework for
efficient video modeling, enabling the training of high-fidelity policy models
with four GPUs within a single day. | [
"Po-Chen Ko",
"Jiayuan Mao",
"Yilun Du",
"Shao-Hua Sun",
"Joshua B. Tenenbaum"
] | 2023-10-12 17:59:23 | http://arxiv.org/abs/2310.08576v1 | http://arxiv.org/pdf/2310.08576v1 | 2310.08576v1 |
Jigsaw: Supporting Designers in Prototyping Multimodal Applications by Assembling AI Foundation Models | Recent advancements in AI foundation models have made it possible for them to
be utilized off-the-shelf for creative tasks, including ideating design
concepts or generating visual prototypes. However, integrating these models
into the creative process can be challenging as they often exist as standalone
applications tailored to specific tasks. To address this challenge, we
introduce Jigsaw, a prototype system that employs puzzle pieces as metaphors to
represent foundation models. Jigsaw allows designers to combine different
foundation model capabilities across various modalities by assembling
compatible puzzle pieces. To inform the design of Jigsaw, we interviewed ten
designers and distilled design goals. In a user study, we showed that Jigsaw
enhanced designers' understanding of available foundation model capabilities,
provided guidance on combining capabilities across different modalities and
tasks, and served as a canvas to support design exploration, prototyping, and
documentation. | [
"David Chuan-En Lin",
"Nikolas Martelaro"
] | 2023-10-12 17:57:57 | http://arxiv.org/abs/2310.08574v1 | http://arxiv.org/pdf/2310.08574v1 | 2310.08574v1 |
Bucks for Buckets (B4B): Active Defenses Against Stealing Encoders | Machine Learning as a Service (MLaaS) APIs provide ready-to-use and
high-utility encoders that generate vector representations for given inputs.
Since these encoders are very costly to train, they become lucrative targets
for model stealing attacks during which an adversary leverages query access to
the API to replicate the encoder locally at a fraction of the original training
costs. We propose Bucks for Buckets (B4B), the first active defense that
prevents stealing while the attack is happening without degrading
representation quality for legitimate API users. Our defense relies on the
observation that the representations returned to adversaries who try to steal
the encoder's functionality cover a significantly larger fraction of the
embedding space than representations of legitimate users who utilize the
encoder to solve a particular downstream task. B4B leverages this to adaptively
adjust the utility of the returned representations according to a user's
coverage of the embedding space. To prevent adaptive adversaries from eluding
our defense by simply creating multiple user accounts (sybils), B4B also
individually transforms each user's representations. This prevents the
adversary from directly aggregating representations over multiple accounts to
create their stolen encoder copy. Our active defense opens a new path towards
securely sharing and democratizing encoders over public APIs. | [
"Jan Dubiński",
"Stanisław Pawlak",
"Franziska Boenisch",
"Tomasz Trzciński",
"Adam Dziedzic"
] | 2023-10-12 17:56:53 | http://arxiv.org/abs/2310.08571v1 | http://arxiv.org/pdf/2310.08571v1 | 2310.08571v1 |
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining | Large transformer models pretrained on offline reinforcement learning
datasets have demonstrated remarkable in-context reinforcement learning (ICRL)
capabilities, where they can make good decisions when prompted with interaction
trajectories from unseen environments. However, when and how transformers can
be trained to perform ICRL have not been theoretically well-understood. In
particular, it is unclear which reinforcement-learning algorithms transformers
can perform in context, and how distribution mismatch in offline training data
affects the learned algorithms. This paper provides a theoretical framework
that analyzes supervised pretraining for ICRL. This includes two recently
proposed training methods -- algorithm distillation and decision-pretrained
transformers. First, assuming model realizability, we prove the
supervised-pretrained transformer will imitate the conditional expectation of
the expert algorithm given the observed trajectory. The generalization error
will scale with model capacity and a distribution divergence factor between the
expert and offline algorithms. Second, we show transformers with ReLU attention
can efficiently approximate near-optimal online reinforcement learning
algorithms like LinUCB and Thompson sampling for stochastic linear bandits, and
UCB-VI for tabular Markov decision processes. This provides the first
quantitative analysis of the ICRL capabilities of transformers pretrained from
offline trajectories. | [
"Licong Lin",
"Yu Bai",
"Song Mei"
] | 2023-10-12 17:55:02 | http://arxiv.org/abs/2310.08566v1 | http://arxiv.org/pdf/2310.08566v1 | 2310.08566v1 |
Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate Exploration Bias | It is desirable for policies to optimistically explore new states and
behaviors during online reinforcement learning (RL) or fine-tuning, especially
when prior offline data does not provide enough state coverage. However,
exploration bonuses can bias the learned policy, and our experiments find that
naive, yet standard use of such bonuses can fail to recover a performant
policy. Concurrently, pessimistic training in offline RL has enabled recovery
of performant policies from static datasets. Can we leverage offline RL to
recover better policies from online interaction? We make a simple observation
that a policy can be trained from scratch on all interaction data with
pessimistic objectives, thereby decoupling the policies used for data
collection and for evaluation. Specifically, we propose offline retraining, a
policy extraction step at the end of online fine-tuning in our
Offline-to-Online-to-Offline (OOO) framework for reinforcement learning (RL).
An optimistic (exploration) policy is used to interact with the environment,
and a separate pessimistic (exploitation) policy is trained on all the observed
data for evaluation. Such decoupling can reduce any bias from online
interaction (intrinsic rewards, primacy bias) in the evaluation policy, and can
allow more exploratory behaviors during online interaction which in turn can
generate better data for exploitation. OOO is complementary to several
offline-to-online RL and online RL methods, and improves their average
performance by 14% to 26% in our fine-tuning experiments, achieves
state-of-the-art performance on several environments in the D4RL benchmarks,
and improves online RL performance by 165% on two OpenAI gym environments.
Further, OOO can enable fine-tuning from incomplete offline datasets where
prior methods can fail to recover a performant policy. Implementation:
https://github.com/MaxSobolMark/OOO | [
"Max Sobol Mark",
"Archit Sharma",
"Fahim Tajwar",
"Rafael Rafailov",
"Sergey Levine",
"Chelsea Finn"
] | 2023-10-12 17:50:09 | http://arxiv.org/abs/2310.08558v1 | http://arxiv.org/pdf/2310.08558v1 | 2310.08558v1 |
Cross-Episodic Curriculum for Transformer Agents | We present a new algorithm, Cross-Episodic Curriculum (CEC), to boost the
learning efficiency and generalization of Transformer agents. Central to CEC is
the placement of cross-episodic experiences into a Transformer's context, which
forms the basis of a curriculum. By sequentially structuring online learning
trials and mixed-quality demonstrations, CEC constructs curricula that
encapsulate learning progression and proficiency increase across episodes. Such
synergy combined with the potent pattern recognition capabilities of
Transformer models delivers a powerful cross-episodic attention mechanism. The
effectiveness of CEC is demonstrated under two representative scenarios: one
involving multi-task reinforcement learning with discrete control, such as in
DeepMind Lab, where the curriculum captures the learning progression in both
individual and progressively complex settings; and the other involving
imitation learning with mixed-quality data for continuous control, as seen in
RoboMimic, where the curriculum captures the improvement in demonstrators'
expertise. In all instances, policies resulting from CEC exhibit superior
performance and strong generalization. Code is open-sourced at
https://cec-agent.github.io/ to facilitate research on Transformer agent
learning. | [
"Lucy Xiaoyang Shi",
"Yunfan Jiang",
"Jake Grigsby",
"Linxi \"Jim\" Fan",
"Yuke Zhu"
] | 2023-10-12 17:45:05 | http://arxiv.org/abs/2310.08549v1 | http://arxiv.org/pdf/2310.08549v1 | 2310.08549v1 |
Stronger Coreset Bounds for Kernel Density Estimators via Chaining | We apply the discrepancy method and a chaining approach to give improved
bounds on the coreset complexity of a wide class of kernel functions. Our
results give randomized polynomial time algorithms to produce coresets of size
$O\big(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log\log \frac{1}{\varepsilon}}\big)$
for the Gaussian and Laplacian kernels in the case that the data set is
uniformly bounded, an improvement that was not possible with previous
techniques. We also obtain coresets of size
$O\big(\frac{1}{\varepsilon}\sqrt{\log\log \frac{1}{\varepsilon}}\big)$ for the
Laplacian kernel for $d$ constant. Finally, we give the best known bounds of
$O\big(\frac{\sqrt{d}}{\varepsilon}\sqrt{\log(2\max\{1,\alpha\})}\big)$ on the
coreset complexity of the exponential, Hellinger, and JS Kernels, where
$1/\alpha$ is the bandwidth parameter of the kernel. | [
"Rainie Bozzai",
"Thomas Rothvoss"
] | 2023-10-12 17:44:59 | http://arxiv.org/abs/2310.08548v1 | http://arxiv.org/pdf/2310.08548v1 | 2310.08548v1 |
Do pretrained Transformers Really Learn In-context by Gradient Descent? | Is In-Context Learning (ICL) implicitly equivalent to Gradient Descent (GD)?
Several recent works draw analogies between the dynamics of GD and the emergent
behavior of ICL in large language models. However, these works make assumptions
far from the realistic natural language setting in which language models are
trained. Such discrepancies between theory and practice, therefore, necessitate
further investigation to validate their applicability.
We start by highlighting the weaknesses in prior works that construct
Transformer weights to simulate gradient descent. Their experiments with
training Transformers on the ICL objective, inconsistencies in the order
sensitivity of ICL and GD, sparsity of the constructed weights, and sensitivity
to parameter changes are some examples of a mismatch from the real-world
setting.
Furthermore, we probe and compare the ICL vs. GD hypothesis in a natural
setting. We conduct comprehensive empirical analyses on language models
pretrained on natural data (LLaMa-7B). Our comparisons on various performance
metrics highlight the inconsistent behavior of ICL and GD as a function of
various factors such as datasets, models, and number of demonstrations. We
observe that ICL and GD adapt the output distribution of language models
differently. These results indicate that the equivalence between ICL and GD
remains an open hypothesis that requires nuanced consideration and calls for
further study. | [
"Lingfeng Shen",
"Aayush Mishra",
"Daniel Khashabi"
] | 2023-10-12 17:32:09 | http://arxiv.org/abs/2310.08540v1 | http://arxiv.org/pdf/2310.08540v1 | 2310.08540v1 |
Divorce Prediction with Machine Learning: Insights and LIME Interpretability | Divorce is one of the most common social issues in developed countries like
the United States, where almost 50% of recent marriages end in involuntary
divorce or separation. While people vary to different extents, and even over
time, an incident like divorce may not interrupt an individual's daily
activities; still, divorce has a severe effect on the individual's mental
health and personal life. Within the scope of this research, divorce
prediction was carried out by evaluating a dataset named the 'divorce
predictor dataset' to correctly classify married and divorced people using six
different machine learning algorithms: Logistic
Regression (LR), Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN),
Classification and Regression Trees (CART), Gaussian Naïve Bayes (NB), and
Support Vector Machines (SVM). Preliminary computational results show that
algorithms such as SVM, KNN, and LDA can perform this task with an accuracy of
98.57%. This work's additional novel contribution is the detailed and
comprehensive explanation of prediction probabilities using Local Interpretable
Model-Agnostic Explanations (LIME). Utilizing LIME to analyze test results
illustrates the possibility of differentiating between divorced and married
couples. Finally, we have developed a divorce predictor app considering ten
most important features that potentially affect couples in making decisions in
their divorce, such tools can be used by any one in order to identify their
relationship condition. | [
"Md Manjurul Ahsan"
] | 2023-10-12 17:05:51 | http://arxiv.org/abs/2310.08620v1 | http://arxiv.org/pdf/2310.08620v1 | 2310.08620v1 |
Unsupervised Learning of Object-Centric Embeddings for Cell Instance Segmentation in Microscopy Images | Segmentation of objects in microscopy images is required for many biomedical
applications. We introduce object-centric embeddings (OCEs), which embed image
patches such that the spatial offsets between patches cropped from the same
object are preserved. Those learnt embeddings can be used to delineate
individual objects and thus obtain instance segmentations. Here, we show
theoretically that, under assumptions commonly found in microscopy images, OCEs
can be learnt through a self-supervised task that predicts the spatial offset
between image patches. Together, this forms an unsupervised cell instance
segmentation method which we evaluate on nine diverse large-scale microscopy
datasets. Segmentations obtained with our method lead to substantially improved
results, compared to state-of-the-art baselines on six out of nine datasets,
and perform on par on the remaining three datasets. If ground-truth annotations
are available, our method serves as an excellent starting point for supervised
training, reducing the required amount of ground-truth needed by one order of
magnitude, thus substantially increasing the practical applicability of our
method. Source code is available at https://github.com/funkelab/cellulus. | [
"Steffen Wolf",
"Manan Lalit",
"Henry Westmacott",
"Katie McDole",
"Jan Funke"
] | 2023-10-12 16:59:50 | http://arxiv.org/abs/2310.08501v1 | http://arxiv.org/pdf/2310.08501v1 | 2310.08501v1 |
Impact of time and note duration tokenizations on deep learning symbolic music modeling | Symbolic music is widely used in various deep learning tasks, including
generation, transcription, synthesis, and Music Information Retrieval (MIR). It
is mostly employed with discrete models like Transformers, which require music
to be tokenized, i.e., formatted into sequences of distinct elements called
tokens. Tokenization can be performed in different ways. As Transformers can
struggle with reasoning but more easily capture explicit information, it is
important to study how the way information is represented for such models
impacts their performance. In this work, we analyze the common tokenization
methods and experiment with time and note duration representations. We compare
the performance of these two impactful criteria on several tasks, including
composer and emotion classification, music generation, and sequence
representation learning. We demonstrate that explicit information leads to
better results depending on the task. | [
"Nathan Fradet",
"Nicolas Gutowski",
"Fabien Chhel",
"Jean-Pierre Briot"
] | 2023-10-12 16:56:37 | http://arxiv.org/abs/2310.08497v1 | http://arxiv.org/pdf/2310.08497v1 | 2310.08497v1 |
Characterizing climate pathways using feature importance on echo state networks | The 2022 National Defense Strategy of the United States listed climate change
as a serious threat to national security. Climate intervention methods, such as
stratospheric aerosol injection, have been proposed as mitigation strategies,
but the downstream effects of such actions on a complex climate system are not
well understood. The development of algorithmic techniques for quantifying
relationships between source and impact variables related to a climate event
(i.e., a climate pathway) would help inform policy decisions. Data-driven deep
learning models have become powerful tools for modeling highly nonlinear
relationships and may provide a route to characterize climate variable
relationships. In this paper, we explore the use of an echo state network (ESN)
for characterizing climate pathways. ESNs are a computationally efficient
neural network variation designed for temporal data, and recent work proposes
ESNs as a useful tool for forecasting spatio-temporal climate data. Like other
neural networks, ESNs are non-interpretable black-box models, which poses a
hurdle for understanding variable relationships. We address this issue by
developing feature importance methods for ESNs in the context of
spatio-temporal data to quantify variable relationships captured by the model.
We conduct a simulation study to assess and compare the feature importance
techniques, and we demonstrate the approach on reanalysis climate data. In the
climate application, we select a time period that includes the 1991 volcanic
eruption of Mount Pinatubo. This event was a significant stratospheric aerosol
injection, which we use as a proxy for an artificial stratospheric aerosol
injection. Using the proposed approach, we are able to characterize
relationships between pathway variables associated with this event. | [
"Katherine Goode",
"Daniel Ries",
"Kellie McClernon"
] | 2023-10-12 16:55:04 | http://arxiv.org/abs/2310.08495v1 | http://arxiv.org/pdf/2310.08495v1 | 2310.08495v1 |
Prometheus: Inducing Fine-grained Evaluation Capability in Language Models | Recently, using a powerful proprietary Large Language Model (LLM) (e.g.,
GPT-4) as an evaluator for long-form responses has become the de facto
standard. However, for practitioners with large-scale evaluation tasks and
custom criteria in consideration (e.g., child-readability), using proprietary
LLMs as an evaluator is unreliable due to the closed-source nature,
uncontrolled versioning, and prohibitive costs. In this work, we propose
Prometheus, a fully open-source LLM that is on par with GPT-4's evaluation
capabilities when the appropriate reference materials (reference answer, score
rubric) are accompanied. We first construct the Feedback Collection, a new
dataset that consists of 1K fine-grained score rubrics, 20K instructions, and
100K responses and language feedback generated by GPT-4. Using the Feedback
Collection, we train Prometheus, a 13B evaluator LLM that can assess any given
long-form text based on a customized score rubric provided by the user.
Experimental results show that Prometheus scores a Pearson correlation of 0.897
with human evaluators when evaluating with 45 customized score rubrics, which
is on par with GPT-4 (0.882), and greatly outperforms ChatGPT (0.392).
Furthermore, measuring correlation with GPT-4 with 1222 customized score
rubrics across four benchmarks (MT Bench, Vicuna Bench, Feedback Bench, Flask
Eval) shows similar trends, bolstering Prometheus's capability as an evaluator
LLM. Lastly, Prometheus achieves the highest accuracy on two human preference
benchmarks (HHH Alignment & MT Bench Human Judgment) compared to open-sourced
reward models explicitly trained on human preference datasets, highlighting its
potential as a universal reward model. We open-source our code, dataset, and
model at https://github.com/kaistAI/Prometheus. | [
"Seungone Kim",
"Jamin Shin",
"Yejin Cho",
"Joel Jang",
"Shayne Longpre",
"Hwaran Lee",
"Sangdoo Yun",
"Seongjin Shin",
"Sungdong Kim",
"James Thorne",
"Minjoon Seo"
] | 2023-10-12 16:50:08 | http://arxiv.org/abs/2310.08491v1 | http://arxiv.org/pdf/2310.08491v1 | 2310.08491v1 |
Can We Edit Multimodal Large Language Models? | In this paper, we focus on editing Multimodal Large Language Models (MLLMs).
Compared to editing single-modal LLMs, multimodal model editing is more
challenging, which demands a higher level of scrutiny and careful consideration
in the editing process. To facilitate research in this area, we construct a new
benchmark, dubbed MMEdit, for editing multimodal LLMs and establishing a suite
of innovative metrics for evaluation. We conduct comprehensive experiments
involving various model editing baselines and analyze the impact of editing
different components of multimodal LLMs. Empirically, we notice that previous
baselines can edit multimodal LLMs to some extent, but the effect
is still barely satisfactory, indicating the potential difficulty of this task.
We hope that our work can provide the NLP community with insights. Code and
dataset are available in https://github.com/zjunlp/EasyEdit. | [
"Siyuan Cheng",
"Bozhong Tian",
"Qingbin Liu",
"Xi Chen",
"Yongheng Wang",
"Huajun Chen",
"Ningyu Zhang"
] | 2023-10-12 16:32:44 | http://arxiv.org/abs/2310.08475v2 | http://arxiv.org/pdf/2310.08475v2 | 2310.08475v2 |
Strategies and impact of learning curve estimation for CNN-based image classification | Learning curves are a measure for how the performance of machine learning
models improves given a certain volume of training data. Over a wide variety of
applications and models it was observed that learning curves follow -- to a
large extent -- a power law behavior. This makes the performance of different
models for a given task somewhat predictable and opens the opportunity to
reduce the training time for practitioners, who are exploring the space of
possible models and hyperparameters for the problem at hand. By estimating the
learning curve of a model from training on small subsets of data, only the
best models need to be considered for training on the full dataset. However,
how to choose subset sizes and how often to sample models on them to obtain
estimates has not been researched. Given that the goal is to reduce overall
training time, strategies are needed that sample the performance in a
time-efficient way and yet lead to accurate learning curve estimates. In this
paper, we formulate a framework for these strategies and propose several of
them. Further, we evaluate the strategies on simulated learning curves and in
experiments with
popular datasets and models for image classification tasks. | [
"Laura Didyk",
"Brayden Yarish",
"Michael A. Beck",
"Christopher P. Bidinosti",
"Christopher J. Henry"
] | 2023-10-12 16:28:25 | http://arxiv.org/abs/2310.08470v1 | http://arxiv.org/pdf/2310.08470v1 | 2310.08470v1 |
DistillSpec: Improving Speculative Decoding via Knowledge Distillation | Speculative decoding (SD) accelerates large language model inference by
employing a faster draft model for generating multiple tokens, which are then
verified in parallel by the larger target model, resulting in the text
generated according to the target model distribution. However, identifying a
compact draft model that is well-aligned with the target model is challenging.
To tackle this issue, we propose DistillSpec that uses knowledge distillation
to better align the draft model with the target model, before applying SD.
DistillSpec makes two key design choices, which we demonstrate via systematic
study to be crucial to improving the draft and target alignment: utilizing
on-policy data generation from the draft model, and tailoring the divergence
function to the task and decoding strategy. Notably, DistillSpec yields
impressive 10 - 45% speedups over standard SD on a range of standard
benchmarks, using both greedy and non-greedy sampling. Furthermore, we combine
DistillSpec with lossy SD to achieve fine-grained control over the latency vs.
task performance trade-off. Finally, in practical scenarios with models of
varying sizes, first using distillation to boost the performance of the target
model and then applying DistillSpec to train a well-aligned draft model can
reduce decoding latency by 6-10x with minimal performance drop, compared to
standard decoding without distillation. | [
"Yongchao Zhou",
"Kaifeng Lyu",
"Ankit Singh Rawat",
"Aditya Krishna Menon",
"Afshin Rostamizadeh",
"Sanjiv Kumar",
"Jean-François Kagy",
"Rishabh Agarwal"
] | 2023-10-12 16:21:04 | http://arxiv.org/abs/2310.08461v1 | http://arxiv.org/pdf/2310.08461v1 | 2310.08461v1 |
A Survey of Heterogeneous Transfer Learning | The application of transfer learning, an approach utilizing knowledge from a
source domain to enhance model performance in a target domain, has seen a
tremendous rise in recent years, underpinning many real-world scenarios. The
key to its success lies in the shared common knowledge between the domains, a
prerequisite in most transfer learning methodologies. These methods typically
presuppose identical feature spaces and label spaces in both domains, known as
homogeneous transfer learning, which, however, is not always a practical
assumption. Oftentimes, the source and target domains vary in feature spaces,
data distributions, and label spaces, making it challenging or costly to secure
source domain data with identical feature and label spaces as the target
domain. Arbitrary elimination of these differences is not always feasible or
optimal. Thus, heterogeneous transfer learning, acknowledging and dealing with
such disparities, has emerged as a promising approach for a variety of tasks.
Despite the existence of a survey in 2017 on this topic, the fast-paced
advances post-2017 necessitate an updated, in-depth review. We therefore
present a comprehensive survey of recent developments in heterogeneous transfer
learning methods, offering a systematic guide for future research. Our paper
reviews methodologies for diverse learning scenarios, discusses the limitations
of current studies, and covers various application contexts, including Natural
Language Processing, Computer Vision, Multimodality, and Biomedicine, to foster
a deeper understanding and spur future research. | [
"Runxue Bao",
"Yiming Sun",
"Yuhe Gao",
"Jindong Wang",
"Qiang Yang",
"Haifeng Chen",
"Zhi-Hong Mao",
"Ye Ye"
] | 2023-10-12 16:19:58 | http://arxiv.org/abs/2310.08459v2 | http://arxiv.org/pdf/2310.08459v2 | 2310.08459v2 |
Towards Robust Multi-Modal Reasoning via Model Selection | The reasoning capabilities of LLM (Large Language Model) are widely
acknowledged in recent research, inspiring studies on tool learning and
autonomous agents. LLM serves as the "brain" of agent, orchestrating multiple
tools for collaborative multi-step task solving. Unlike methods invoking tools
like calculators or weather APIs for straightforward tasks, multi-modal agents
excel by integrating diverse AI models for complex challenges. However, current
multi-modal agents neglect the significance of model selection: they primarily
focus on the planning and execution phases, and will only invoke predefined
task-specific models for each subtask, making the execution fragile. Meanwhile,
other traditional model selection methods are either incompatible with or
suboptimal for multi-modal agent scenarios, because they ignore the
dependencies among subtasks that arise in multi-step reasoning.
To this end, we identify the key challenges therein and propose the
$\textit{M}^3$ framework as a plug-in with negligible runtime overhead at
test-time. This framework improves model selection and bolsters the robustness
of multi-modal agents in multi-step reasoning. In the absence of suitable
benchmarks, we create MS-GQA, a new dataset specifically designed to
investigate the model selection challenge in multi-modal agents. Our
experiments reveal that our framework enables dynamic model selection,
considering both user inputs and subtask dependencies, thereby robustifying the
overall reasoning process. Our code and benchmark:
https://github.com/LINs-lab/M3. | [
"Xiangyan Liu",
"Rongxue Li",
"Wei Ji",
"Tao Lin"
] | 2023-10-12 16:06:18 | http://arxiv.org/abs/2310.08446v1 | http://arxiv.org/pdf/2310.08446v1 | 2310.08446v1 |
Neural Sampling in Hierarchical Exponential-family Energy-based Models | Bayesian brain theory suggests that the brain employs generative models to
understand the external world. The sampling-based perspective posits that the
brain infers the posterior distribution through samples of stochastic neuronal
responses. Additionally, the brain continually updates its generative model to
approach the true distribution of the external world. In this study, we
introduce the Hierarchical Exponential-family Energy-based (HEE) model, which
captures the dynamics of inference and learning. In the HEE model, we decompose
the partition function into individual layers and leverage a group of neurons
with shorter time constants to sample the gradient of the decomposed
normalization term. This allows our model to estimate the partition function
and perform inference simultaneously, circumventing the negative phase
encountered in conventional energy-based models (EBMs). As a result, the
learning process is localized both in time and space, and the model is easy to
converge. To match the brain's rapid computation, we demonstrate that neural
adaptation can serve as a momentum term, significantly accelerating the
inference process. On natural image datasets, our model exhibits
representations akin to those observed in the biological visual system.
Furthermore, for the machine learning community, our model can generate
observations through joint or marginal generation. We show that marginal
generation outperforms joint generation and achieves performance on par with
other EBMs. | [
"Xingsi Dong",
"Si Wu"
] | 2023-10-12 15:56:02 | http://arxiv.org/abs/2310.08431v2 | http://arxiv.org/pdf/2310.08431v2 | 2310.08431v2 |
Differentially Private Non-convex Learning for Multi-layer Neural Networks | This paper focuses on the problem of Differentially Private Stochastic
Optimization for (multi-layer) fully connected neural networks with a single
output node. In the first part, we examine cases with no hidden nodes,
specifically focusing on Generalized Linear Models (GLMs). We investigate the
well-specified model, where the random noise has zero mean and the link
function is both bounded and Lipschitz continuous. We propose several
algorithms and our analysis demonstrates the feasibility of achieving an excess
population risk that remains invariant to the data dimension. We also delve
into the scenario involving the ReLU link function, and our findings mirror
those of the bounded link function. We conclude this section by contrasting
well-specified and misspecified models, using ReLU regression as a
representative example.
In the second part of the paper, we extend our ideas to two-layer neural
networks with sigmoid or ReLU activation functions in the well-specified model.
In the third part, we study the theoretical guarantees of DP-SGD in Abadi et
al. (2016) for fully connected multi-layer neural networks. By utilizing recent
advances in Neural Tangent Kernel theory, we provide the first bound on the
excess population risk when both the sample size and the width of the network are
sufficiently large. Additionally, we discuss the role of some parameters in
DP-SGD regarding their utility, both theoretically and empirically. | [
"Hanpu Shen",
"Cheng-Long Wang",
"Zihang Xiang",
"Yiming Ying",
"Di Wang"
] | 2023-10-12 15:48:14 | http://arxiv.org/abs/2310.08425v1 | http://arxiv.org/pdf/2310.08425v1 | 2310.08425v1 |
Jailbreaking Black Box Large Language Models in Twenty Queries | There is growing interest in ensuring that large language models (LLMs) align
with human values. However, the alignment of such models is vulnerable to
adversarial jailbreaks, which coax LLMs into overriding their safety
guardrails. The identification of these vulnerabilities is therefore
instrumental in understanding inherent weaknesses and preventing future misuse.
To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an
algorithm that generates semantic jailbreaks with only black-box access to an
LLM. PAIR -- which is inspired by social engineering attacks -- uses an
attacker LLM to automatically generate jailbreaks for a separate targeted LLM
without human intervention. In this way, the attacker LLM iteratively queries
the target LLM to update and refine a candidate jailbreak. Empirically, PAIR
often requires fewer than twenty queries to produce a jailbreak, which is
orders of magnitude more efficient than existing algorithms. PAIR also achieves
competitive jailbreaking success rates and transferability on open and
closed-source LLMs, including GPT-3.5/4, Vicuna, and PaLM-2. | [
"Patrick Chao",
"Alexander Robey",
"Edgar Dobriban",
"Hamed Hassani",
"George J. Pappas",
"Eric Wong"
] | 2023-10-12 15:38:28 | http://arxiv.org/abs/2310.08419v2 | http://arxiv.org/pdf/2310.08419v2 | 2310.08419v2 |
Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization | Despite recent advances, evaluating how well large language models (LLMs)
follow user instructions remains an open problem. While evaluation methods of
language models have seen a rise in prompt-based approaches, limited work on
the correctness of these methods has been conducted. In this work, we perform a
meta-evaluation of a variety of metrics to quantify how accurately they measure
the instruction-following abilities of LLMs. Our investigation is performed on
grounded query-based summarization by collecting a new short-form, real-world
dataset riSum, containing 300 document-instruction pairs with 3 answers each.
All 900 answers are rated by 3 human annotators. Using riSum, we analyze the
agreement between evaluation methods and human judgment. Finally, we propose
new LLM-based reference-free evaluation methods that improve upon established
baselines and perform on par with costly reference-based metrics that require
high-quality summaries. | [
"Ondrej Skopek",
"Rahul Aralikatte",
"Sian Gooding",
"Victor Carbune"
] | 2023-10-12 15:07:11 | http://arxiv.org/abs/2310.08394v2 | http://arxiv.org/pdf/2310.08394v2 | 2310.08394v2 |
Introducing a Deep Neural Network-based Model Predictive Control Framework for Rapid Controller Implementation | Model Predictive Control (MPC) provides an optimal control solution based on
a cost function while allowing for the implementation of process constraints.
As a model-based optimal control technique, the performance of MPC strongly
depends on the model used where a trade-off between model computation time and
prediction performance exists. One solution is the integration of MPC with a
machine learning (ML) based process model, which is quick to evaluate online.
This work presents the experimental implementation of a deep neural network
(DNN) based nonlinear MPC for Homogeneous Charge Compression Ignition (HCCI)
combustion control. The DNN model consists of a Long Short-Term Memory (LSTM)
network surrounded by fully connected layers; it was trained using
experimental engine data and showed acceptable prediction performance, with
under 5% error for all outputs. Using this model, the MPC is designed to track
the Indicated Mean Effective Pressure (IMEP) and combustion phasing
trajectories, while minimizing several parameters. Using the acados software
package to enable the real-time implementation of the MPC on an ARM Cortex A72,
the optimization calculations are completed within 1.4 ms. The external A72
processor is integrated with the prototyping engine controller using a UDP
connection allowing for rapid experimental deployment of the NMPC. The IMEP
trajectory following of the developed controller was excellent, with a
root-mean-square error of 0.133 bar, in addition to observing process
constraints. | [
"David C. Gordon",
"Alexander Winkler",
"Julian Bedei",
"Patrick Schaber",
"Jakob Andert",
"Charles R. Koch"
] | 2023-10-12 15:03:50 | http://arxiv.org/abs/2310.08392v1 | http://arxiv.org/pdf/2310.08392v1 | 2310.08392v1 |
How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? | Transformers pretrained on diverse tasks exhibit remarkable in-context
learning (ICL) capabilities, enabling them to solve unseen tasks solely based
on input contexts without adjusting model parameters. In this paper, we study
ICL in one of its simplest setups: pretraining a linearly parameterized
single-layer linear attention model for linear regression with a Gaussian
prior. We establish a statistical task complexity bound for the attention model
pretraining, showing that effective pretraining only requires a small number of
independent tasks. Furthermore, we prove that the pretrained model closely
matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by
achieving nearly Bayes optimal risk on unseen tasks under a fixed context
length. These theoretical findings complement prior experimental research and
shed light on the statistical foundations of ICL. | [
"Jingfeng Wu",
"Difan Zou",
"Zixiang Chen",
"Vladimir Braverman",
"Quanquan Gu",
"Peter L. Bartlett"
] | 2023-10-12 15:01:43 | http://arxiv.org/abs/2310.08391v1 | http://arxiv.org/pdf/2310.08391v1 | 2310.08391v1 |
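For reference, the "optimally tuned ridge regression" that the record above identifies as the Bayes optimal algorithm has a standard closed form; the notation below ($X$, $y$, noise variance $\sigma^2$, prior variance $\tau^2$) is our own, not the paper's.

```latex
% Bayes-optimal predictor for y = Xw + eps with prior w ~ N(0, tau^2 I)
% and noise eps ~ N(0, sigma^2 I): ridge regression with lambda = sigma^2 / tau^2.
\hat{w}_{\mathrm{ridge}}
  = \Bigl( X^\top X + \tfrac{\sigma^2}{\tau^2} I \Bigr)^{-1} X^\top y,
\qquad
\hat{y}_{\mathrm{query}} = x_{\mathrm{query}}^\top \hat{w}_{\mathrm{ridge}}.
```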
MeanAP-Guided Reinforced Active Learning for Object Detection | Active learning presents a promising avenue for training high-performance
models with minimal labeled data, achieved by judiciously selecting the most
informative instances to label and incorporating them into the task learner.
Despite notable advancements in active learning for image recognition, metrics
devised or learned to gauge the information gain of data, crucial for query
strategy design, do not consistently align with task model performance metrics,
such as Mean Average Precision (MeanAP) in object detection tasks. This paper
introduces MeanAP-Guided Reinforced Active Learning for Object Detection
(MAGRAL), a novel approach that directly utilizes the MeanAP metric of the task
model to devise a sampling strategy employing a reinforcement learning-based
sampling agent. Built upon an LSTM architecture, the agent efficiently explores
and selects subsequent training instances, and optimizes the process through
policy gradient, with MeanAP serving as the reward. Recognizing the time-intensive
nature of MeanAP computation at each step, we propose fast look-up tables to
expedite agent training. We assess MAGRAL's efficacy across popular benchmarks,
PASCAL VOC and MS COCO, utilizing different backbone architectures. Empirical
findings substantiate MAGRAL's superiority over recent state-of-the-art
methods, showcasing substantial performance gains. MAGRAL establishes a robust
baseline for reinforced active object detection, signifying its potential in
advancing the field. | [
"Zhixuan Liang",
"Xingyu Zeng",
"Rui Zhao",
"Ping Luo"
] | 2023-10-12 14:59:22 | http://arxiv.org/abs/2310.08387v1 | http://arxiv.org/pdf/2310.08387v1 | 2310.08387v1 |
AutoVP: An Automated Visual Prompting Framework and Benchmark | Visual prompting (VP) is an emerging parameter-efficient fine-tuning approach
to adapting pre-trained vision models to solve various downstream
image-classification tasks. However, there has hitherto been little systematic
study of the design space of VP and no clear benchmark for evaluating its
performance. To bridge this gap, we propose AutoVP, an end-to-end expandable
framework for automating VP design choices, along with 12 downstream
image-classification tasks that can serve as a holistic VP-performance
benchmark. Our design space covers 1) the joint optimization of the prompts; 2)
the selection of pre-trained models, including image classifiers and text-image
encoders; and 3) model output mapping strategies, including nonparametric and
trainable label mapping. Our extensive experimental results show that AutoVP
outperforms the best-known current VP methods by a substantial margin, having
up to 6.7% improvement in accuracy; and attains a maximum performance increase
of 27.5% compared to the linear-probing (LP) baseline. AutoVP thus makes a two-fold
contribution: serving both as an efficient tool for hyperparameter tuning on VP
design choices, and as a comprehensive benchmark that can reasonably be
expected to accelerate VP's development. The source code is available at
https://github.com/IBM/AutoVP. | [
"Hsi-Ai Tsao",
"Lei Hsiung",
"Pin-Yu Chen",
"Sijia Liu",
"Tsung-Yi Ho"
] | 2023-10-12 14:55:31 | http://arxiv.org/abs/2310.08381v1 | http://arxiv.org/pdf/2310.08381v1 | 2310.08381v1 |
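As background for the record above, the basic mechanism behind visual prompting can be sketched as a trainable border added around a shrunken input image before it is fed to a frozen classifier. The following is an illustrative padding-style prompt under our own naming and sizing assumptions, not AutoVP's actual implementation.

```python
import torch
import torch.nn as nn

class PaddingPrompt(nn.Module):
    """Trainable border ("visual prompt") added around a resized input image.

    Illustrative sketch of padding-style visual prompting; class name,
    pad width, and image size are our assumptions, not AutoVP's code.
    """
    def __init__(self, image_size: int = 224, pad: int = 30):
        super().__init__()
        self.pad = pad
        self.inner = image_size - 2 * pad
        # One trainable value per border pixel and channel.
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))
        mask = torch.ones(1, 1, image_size, image_size)
        mask[:, :, pad:-pad, pad:-pad] = 0.0  # zero inside, one on the border
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shrink the task image into the centre, then add the learned border.
        x = nn.functional.interpolate(x, size=(self.inner, self.inner), mode="bilinear")
        x = nn.functional.pad(x, [self.pad] * 4)
        return x + self.prompt * self.mask

# Usage: prompted = PaddingPrompt()(images); logits = frozen_model(prompted)
```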
MCU: A Task-centric Framework for Open-ended Agent Evaluation in Minecraft | To pursue the goal of creating an open-ended agent in Minecraft, an
open-ended game environment with unlimited possibilities, this paper introduces
a task-centric framework named MCU for Minecraft agent evaluation. The MCU
framework leverages the concept of atom tasks as fundamental building blocks,
enabling the generation of diverse or even arbitrary tasks. Within the MCU
framework, each task is measured with six distinct difficulty scores (time
consumption, operational effort, planning complexity, intricacy, creativity,
novelty). These scores offer a multi-dimensional assessment of a task from
different angles, and thus can reveal an agent's capability on specific facets.
The difficulty scores also serve as the feature of each task, which creates a
meaningful task space and unveils the relationship between tasks. For efficient
evaluation of Minecraft agents employing the MCU framework, we maintain a
unified benchmark, namely SkillForge, which comprises representative tasks with
diverse categories and difficulty distribution. We also provide convenient
filters for users to select tasks to assess specific capabilities of agents. We
show that MCU has the high expressivity to cover all tasks used in recent
literature on Minecraft agents, and underscores the need for advancements in
areas such as creativity, precise control, and out-of-distribution
generalization under the goal of open-ended Minecraft agent development. | [
"Haowei Lin",
"Zihao Wang",
"Jianzhu Ma",
"Yitao Liang"
] | 2023-10-12 14:38:25 | http://arxiv.org/abs/2310.08367v1 | http://arxiv.org/pdf/2310.08367v1 | 2310.08367v1 |
Towards Demystifying the Generalization Behaviors When Neural Collapse Emerges | Neural Collapse (NC) is a well-known phenomenon of deep neural networks in
the terminal phase of training (TPT). It is characterized by the collapse of
features and classifier into a symmetrical structure, known as simplex
equiangular tight frame (ETF). While there have been extensive studies on
optimization characteristics showing the global optimality of neural collapse,
little research has been done on the generalization behaviors during the
occurrence of NC. Particularly, the important phenomenon of generalization
improvement during TPT has remained an empirical observation lacking a
rigorous theoretical explanation. In this paper, we establish the
connection between the minimization of CE and a multi-class SVM during TPT, and
then derive a multi-class margin generalization bound, which provides a
theoretical explanation for why continuing training can still lead to accuracy
improvement on the test set, even after the training accuracy has reached 100%.
Additionally, our further theoretical results indicate that different alignment
between labels and features in a simplex ETF can result in varying degrees of
generalization improvement, despite all models reaching NC and demonstrating
similar optimization performance on the training set. We refer to this newly
discovered property as "non-conservative generalization". In experiments, we
also provide empirical observations to verify the indications suggested by our
theoretical results. | [
"Peifeng Gao",
"Qianqian Xu",
"Yibo Yang",
"Peisong Wen",
"Huiyang Shao",
"Zhiyong Yang",
"Bernard Ghanem",
"Qingming Huang"
] | 2023-10-12 14:29:02 | http://arxiv.org/abs/2310.08358v1 | http://arxiv.org/pdf/2310.08358v1 | 2310.08358v1 |
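For readers of the record above, the simplex equiangular tight frame (ETF) is the standard structure from the neural-collapse literature: for $K$ classes, the matrix of class means $M = [m_1, \dots, m_K]$ takes the form below (our notation), where $U$ is any partial orthogonal matrix with $U^\top U = I_K$.

```latex
% Simplex ETF: K unit-norm class means with maximal pairwise separation.
M = \sqrt{\tfrac{K}{K-1}}\, U \Bigl( I_K - \tfrac{1}{K} \mathbf{1}_K \mathbf{1}_K^\top \Bigr),
\qquad
m_i^\top m_j =
\begin{cases}
  1, & i = j, \\
  -\tfrac{1}{K-1}, & i \neq j.
\end{cases}
```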
LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios | Building agents based on tree-search planning capabilities with learned
models has achieved remarkable success in classic decision-making problems,
such as Go and Atari. However, it has been deemed challenging or even
infeasible to extend Monte Carlo Tree Search (MCTS) based algorithms to diverse
real-world applications, especially when these environments involve complex
action spaces and significant simulation costs, or inherent stochasticity. In
this work, we introduce LightZero, the first unified benchmark for deploying
MCTS/MuZero in general sequential decision scenarios. Specifically, we
summarize the most critical challenges in designing a general MCTS-style
decision-making solver, then decompose the tightly-coupled algorithm and system
design of tree-search RL methods into distinct sub-modules. By incorporating
more appropriate exploration and optimization strategies, we can significantly
enhance these sub-modules and construct powerful LightZero agents to tackle
tasks across a wide range of domains, such as board games, Atari, MuJoCo,
MiniGrid and GoBigger. Detailed benchmark results reveal the significant
potential of such methods in building scalable and efficient decision
intelligence. The code is available as part of OpenDILab at
https://github.com/opendilab/LightZero. | [
"Yazhe Niu",
"Yuan Pu",
"Zhenjie Yang",
"Xueyan Li",
"Tong Zhou",
"Jiyuan Ren",
"Shuai Hu",
"Hongsheng Li",
"Yu Liu"
] | 2023-10-12 14:18:09 | http://arxiv.org/abs/2310.08348v1 | http://arxiv.org/pdf/2310.08348v1 | 2310.08348v1 |
A Generic Software Framework for Distributed Topological Analysis Pipelines | This system paper presents a software framework for the support of
topological analysis pipelines in a distributed-memory model. While several
recent papers introduced topology-based approaches for distributed-memory
environments, these reported experiments obtained with tailored,
mono-algorithm implementations. In contrast, we describe in this paper a
general-purpose, generic framework for topological analysis pipelines, i.e. a
sequence of topological algorithms interacting together, possibly on distinct
numbers of processes. Specifically, we instantiated our framework with the MPI
model, within the Topology ToolKit (TTK). While developing this framework, we
faced several algorithmic and software engineering challenges, which we
document in this paper. We provide a taxonomy for the distributed-memory
topological algorithms supported by TTK, depending on their communication needs
and provide examples of hybrid MPI+thread parallelizations. Detailed
performance analyses show that parallel efficiencies range from $20\%$ to
$80\%$ (depending on the algorithms), and that the MPI-specific preconditioning
introduced by our framework induces a negligible computation time overhead. We
illustrate the new distributed-memory capabilities of TTK with an example of an
advanced analysis pipeline, combining multiple algorithms, run on the largest
publicly available dataset we have found (120 billion vertices) on a standard
cluster with 64 nodes (for a total of 1,536 cores). Finally, we provide a
roadmap for the completion of TTK's MPI extension, along with generic
recommendations for each algorithm communication category. | [
"Eve Le Guillou",
"Michael Will",
"Pierre Guillou",
"Jonas Lukasczyk",
"Pierre Fortin",
"Christoph Garth",
"Julien Tierny"
] | 2023-10-12 13:57:32 | http://arxiv.org/abs/2310.08339v1 | http://arxiv.org/pdf/2310.08339v1 | 2310.08339v1 |
Neural Diffusion Models | Diffusion models have shown remarkable performance on many generative tasks.
Despite recent success, most diffusion models are restricted in that they only
allow linear transformation of the data distribution. In contrast, a broader
family of transformations can potentially help train generative distributions
more efficiently, simplifying the reverse process and closing the gap between
the true negative log-likelihood and the variational approximation. In this
paper, we present Neural Diffusion Models (NDMs), a generalization of
conventional diffusion models that enables defining and learning time-dependent
non-linear transformations of data. We show how to optimise NDMs using a
variational bound in a simulation-free setting. Moreover, we derive a
time-continuous formulation of NDMs, which allows fast and reliable inference
using off-the-shelf numerical ODE and SDE solvers. Finally, we demonstrate the
utility of NDMs with learnable transformations through experiments on standard
image generation benchmarks, including CIFAR-10, downsampled versions of
ImageNet and CelebA-HQ. NDMs outperform conventional diffusion models in terms
of likelihood and produce high-quality samples. | [
"Grigory Bartosh",
"Dmitry Vetrov",
"Christian A. Naesseth"
] | 2023-10-12 13:54:55 | http://arxiv.org/abs/2310.08337v1 | http://arxiv.org/pdf/2310.08337v1 | 2310.08337v1 |
Impact of multi-armed bandit strategies on deep recurrent reinforcement learning | Incomplete knowledge of the environment leads an agent to make decisions
under uncertainty. One of the major dilemmas in Reinforcement Learning (RL),
where an autonomous agent has to balance two contrasting needs in making its
decisions, is: exploiting the current knowledge of the environment to maximize
the cumulative reward as well as exploring actions that allow improving the
knowledge of the environment, hopefully leading to higher reward values
(exploration-exploitation trade-off). Concurrently, another relevant issue
regards the full observability of the states, which may not be assumed in all
applications, such as when only 2D images are available as input to an RL
approach used for finding the optimal action within a 3D simulation
environment. In this work, we address these issues by deploying and testing
several techniques to balance exploration and exploitation trade-off on
partially observable systems for predicting steering wheels in autonomous
driving scenario. More precisely, the final aim is to investigate the effects
of using both stochastic and deterministic multi-armed bandit strategies
coupled with a Deep Recurrent Q-Network. Additionally, we adapted and evaluated
the impact of an innovative method to improve the learning phase of the
underlying Convolutional Recurrent Neural Network. We aim to show that adaptive
stochastic methods for exploration better approximate the trade-off between
exploration and exploitation as, in general, Softmax and Max-Boltzmann
strategies are able to outperform epsilon-greedy techniques. | [
"Valentina Zangirolami",
"Matteo Borrotti"
] | 2023-10-12 13:45:33 | http://arxiv.org/abs/2310.08331v1 | http://arxiv.org/pdf/2310.08331v1 | 2310.08331v1 |
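The exploration strategies compared in the record above differ only in how estimated Q-values are turned into an action choice; a minimal, self-contained sketch of epsilon-greedy and Softmax/Boltzmann selection follows (our own illustrative code, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values: np.ndarray, epsilon: float = 0.1) -> int:
    """With probability epsilon pick a uniformly random action, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_boltzmann(q_values: np.ndarray, temperature: float = 1.0) -> int:
    """Sample an action from a Boltzmann distribution over Q-values.

    High temperature -> near-uniform exploration; low -> near-greedy exploitation.
    """
    logits = q_values / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(q_values), p=probs))

# Example: three actions with estimated values
q = np.array([0.2, 0.5, 0.1])
print(epsilon_greedy(q), softmax_boltzmann(q, temperature=0.5))
```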
Defending Our Privacy With Backdoors | The proliferation of large AI models trained on uncurated, often sensitive
web-scraped data has raised significant privacy concerns. One of the concerns
is that adversaries can extract information about the training data using
privacy attacks. Unfortunately, the task of removing specific information from
the models without sacrificing performance is not straightforward and has
proven to be challenging. We propose a rather easy yet effective defense based
on backdoor attacks to remove private information such as names of individuals
from models, and focus in this work on text encoders. Specifically, through
strategic insertion of backdoors, we align the embeddings of sensitive phrases
with those of neutral terms: "a person" instead of the person's name. Our
empirical results demonstrate the effectiveness of our backdoor-based defense
on CLIP by assessing its performance using a specialized privacy attack for
zero-shot classifiers. Our approach provides not only a new "dual-use"
perspective on backdoor attacks, but also presents a promising avenue to
enhance the privacy of individuals within models trained on uncurated
web-scraped data. | [
"Dominik Hintersdorf",
"Lukas Struppek",
"Daniel Neider",
"Kristian Kersting"
] | 2023-10-12 13:33:04 | http://arxiv.org/abs/2310.08320v1 | http://arxiv.org/pdf/2310.08320v1 | 2310.08320v1 |
GePSAn: Generative Procedure Step Anticipation in Cooking Videos | We study the problem of future step anticipation in procedural videos. Given
a video of an ongoing procedural activity, we predict a plausible next
procedure step described in rich natural language. While most previous work
focuses on the problem of data scarcity in procedural video datasets, another
core challenge of future anticipation is how to account for multiple plausible
future realizations in natural settings. This problem has been largely
overlooked in previous work. To address this challenge, we frame future step
prediction as modelling the distribution of all possible candidates for the
next step. Specifically, we design a generative model that takes a series of
video clips as input, and generates multiple plausible and diverse candidates
(in natural language) for the next step. Following previous work, we side-step
the video annotation scarcity by pretraining our model on a large text-based
corpus of procedural activities, and then transfer the model to the video
domain. Our experiments, both in textual and video domains, show that our model
captures diversity in the next step prediction and generates multiple plausible
future predictions. Moreover, our model establishes new state-of-the-art
results on YouCookII, where it outperforms existing baselines on the next step
anticipation. Finally, we also show that our model can successfully transfer
from text to the video domain zero-shot, i.e., without fine-tuning or adaptation,
and produces good-quality future step predictions from video. | [
"Mohamed Ashraf Abdelsalam",
"Samrudhdhi B. Rangrej",
"Isma Hadji",
"Nikita Dvornik",
"Konstantinos G. Derpanis",
"Afsaneh Fazly"
] | 2023-10-12 13:20:17 | http://arxiv.org/abs/2310.08312v1 | http://arxiv.org/pdf/2310.08312v1 | 2310.08312v1 |
CHIP: Contrastive Hierarchical Image Pretraining | Few-shot object classification is the task of classifying objects in an image
with limited number of examples as supervision. We propose a one-shot/few-shot
classification model that can classify an object of any unseen class into a
relatively general category in a hierarchically based classification. Our
model uses a three-level hierarchical contrastive loss based ResNet152
classifier that classifies an object based on features extracted from its
image embedding, which is not used during the training phase. For our experimentation, we have
used a subset of the ImageNet (ILSVRC-12) dataset that contains only the animal
classes for training our model and created our own dataset of unseen classes
for evaluating our trained model. Our model provides satisfactory results in
classifying unknown objects into a generic category, which is discussed later
in greater detail. | [
"Arpit Mittal",
"Harshil Jhaveri",
"Swapnil Mallick",
"Abhishek Ajmera"
] | 2023-10-12 13:11:38 | http://arxiv.org/abs/2310.08304v1 | http://arxiv.org/pdf/2310.08304v1 | 2310.08304v1 |
A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors | The distribution of the weights of modern deep neural networks (DNNs) -
crucial for uncertainty quantification and robustness - is an eminently complex
object due to its extremely high dimensionality. This paper proposes one of the
first large-scale explorations of the posterior distribution of deep Bayesian
Neural Networks (BNNs), expanding its study to real-world vision tasks and
architectures. Specifically, we investigate the optimal approach for
approximating the posterior, analyze the connection between posterior quality
and uncertainty quantification, delve into the impact of modes on the
posterior, and explore methods for visualizing the posterior. Moreover, we
uncover weight-space symmetries as a critical aspect for understanding the
posterior. To this end, we develop an in-depth assessment of the impact of
both permutation and scaling symmetries that tend to obfuscate the Bayesian
posterior. While the first type of transformation is known for duplicating
modes, we explore the relationship between the latter and L2 regularization,
challenging previous misconceptions. Finally, to help the community improve our
understanding of the Bayesian posterior, we will shortly release the first
large-scale checkpoint dataset, including thousands of real-world models and
our codes. | [
"Olivier Laurent",
"Emanuel Aldea",
"Gianni Franchi"
] | 2023-10-12 12:45:13 | http://arxiv.org/abs/2310.08287v1 | http://arxiv.org/pdf/2310.08287v1 | 2310.08287v1 |
Data driven modeling of self-similar dynamics | Multiscale modeling of complex systems is crucial for understanding their
intricacies. Data-driven multiscale modeling has emerged as a promising
approach to tackle challenges associated with complex systems. On the other
hand, self-similarity is prevalent in complex systems, hinting that large-scale
complex systems can be modeled at a reduced cost. In this paper, we introduce a
multiscale neural network framework that incorporates self-similarity as prior
knowledge, facilitating the modeling of self-similar dynamical systems. For
deterministic dynamics, our framework can discern whether the dynamics are
self-similar. For uncertain dynamics, it can compare and determine which
parameter set is closer to self-similarity. The framework allows us to extract
scale-invariant kernels from the dynamics for modeling at any scale. Moreover,
our method can identify the power law exponents in self-similar systems.
Preliminary tests on the Ising model yielded critical exponents consistent with
theoretical expectations, providing valuable insights for addressing critical
phase transitions in non-equilibrium systems. | [
"Ruyi Tao",
"Ningning Tao",
"Yizhuang You",
"Jiang Zhang"
] | 2023-10-12 12:39:08 | http://arxiv.org/abs/2310.08282v1 | http://arxiv.org/pdf/2310.08282v1 | 2310.08282v1 |
Lag-Llama: Towards Foundation Models for Time Series Forecasting | Aiming to build foundation models for time-series forecasting and study their
scaling behavior, we present here our work-in-progress on Lag-Llama, a
general-purpose univariate probabilistic time-series forecasting model trained
on a large collection of time-series data. The model shows good zero-shot
prediction capabilities on unseen "out-of-distribution" time-series datasets,
outperforming supervised baselines. We use smoothly broken power-laws to fit
and predict model scaling behavior. The open source code is made available at
https://github.com/kashif/pytorch-transformer-ts. | [
"Kashif Rasul",
"Arjun Ashok",
"Andrew Robert Williams",
"Arian Khorasani",
"George Adamopoulos",
"Rishika Bhagwatkar",
"Marin Biloš",
"Hena Ghonia",
"Nadhir Vincent Hassen",
"Anderson Schneider",
"Sahil Garg",
"Alexandre Drouin",
"Nicolas Chapados",
"Yuriy Nevmyvaka",
"Irina Rish"
] | 2023-10-12 12:29:32 | http://arxiv.org/abs/2310.08278v1 | http://arxiv.org/pdf/2310.08278v1 | 2310.08278v1 |
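As the model's name suggests, lagged values of the target series serve as covariates; the sketch below shows one generic way to build such lag features for a univariate series. The lag set and function name are our illustrative choices, not taken from the paper.

```python
import numpy as np

def make_lag_features(series: np.ndarray, lags=(1, 7, 30)) -> tuple[np.ndarray, np.ndarray]:
    """Build (X, y) pairs where each row of X holds past values at the given lags.

    Illustrative preprocessing for lag-based forecasters; the lag set is arbitrary here.
    """
    max_lag = max(lags)
    rows = [
        [series[t - lag] for lag in lags]   # lagged covariates for time t
        for t in range(max_lag, len(series))
    ]
    X = np.asarray(rows)
    y = series[max_lag:]                    # one-step-ahead targets
    return X, y

# Usage on a toy series
X, y = make_lag_features(np.sin(np.arange(100) / 5.0))
print(X.shape, y.shape)  # (70, 3) (70,)
```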
Invisible Threats: Backdoor Attack in OCR Systems | Optical Character Recognition (OCR) is a widely used tool to extract text
from scanned documents. Today, the state-of-the-art is achieved by exploiting
deep neural networks. However, the cost of this performance is paid at the
price of system vulnerability. For instance, in backdoor attacks, attackers
compromise the training phase by inserting a backdoor in the victim's model
that will be activated at testing time by specific patterns while leaving the
overall model performance intact. This work proposes a backdoor attack for OCR
resulting in the injection of non-readable characters from malicious input
images. This simple but effective attack exposes a weakness of state-of-the-art
OCR, making the extracted text correct to human eyes but simultaneously
unusable for the NLP application that uses OCR as a preprocessing step.
Experimental results show that the attacked models successfully output
non-readable characters for around 90% of the poisoned instances without
harming their performance for the remaining instances. | [
"Mauro Conti",
"Nicola Farronato",
"Stefanos Koffas",
"Luca Pajola",
"Stjepan Picek"
] | 2023-10-12 12:05:51 | http://arxiv.org/abs/2310.08259v1 | http://arxiv.org/pdf/2310.08259v1 | 2310.08259v1 |
Impact of Co-occurrence on Factual Knowledge of Large Language Models | Large language models (LLMs) often make factually incorrect responses despite
their success in various applications. In this paper, we hypothesize that
relying heavily on simple co-occurrence statistics of the pre-training corpora
is one of the main factors that cause factual errors. Our results reveal that
LLMs are vulnerable to the co-occurrence bias, defined as preferring frequently
co-occurred words over the correct answer. Consequently, LLMs struggle to
recall facts whose subject and object rarely co-occur in the pre-training
dataset although they are seen during finetuning. We show that co-occurrence
bias remains despite scaling up model sizes or finetuning. Therefore, we
suggest finetuning on a debiased dataset to mitigate the bias by filtering out
biased samples whose subject-object co-occurrence count is high. Although
debiased finetuning allows LLMs to memorize rare facts in the training set, it
is not effective in recalling rare facts unseen during finetuning. Further
research in mitigation will help build reliable language models by preventing
potential errors. The code is available at
\url{https://github.com/CheongWoong/impact_of_cooccurrence}. | [
"Cheongwoong Kang",
"Jaesik Choi"
] | 2023-10-12 12:01:32 | http://arxiv.org/abs/2310.08256v1 | http://arxiv.org/pdf/2310.08256v1 | 2310.08256v1 |
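The debiased-finetuning recipe described above amounts to dropping training facts whose subject and object co-occur too frequently in the pre-training corpus; a minimal filtering sketch follows, where the threshold and the shape of the count table are our assumptions, not the paper's.

```python
def debias_finetuning_set(facts, cooccurrence_counts, threshold=1000):
    """Keep only facts whose subject-object pre-training co-occurrence is low.

    facts: iterable of (subject, relation, object) triples.
    cooccurrence_counts: dict mapping (subject, object) -> corpus count.
    threshold: illustrative cut-off; the paper's actual criterion may differ.
    """
    return [
        (s, r, o)
        for (s, r, o) in facts
        if cooccurrence_counts.get((s, o), 0) <= threshold
    ]

# Usage: a frequent pair is filtered out, a rare one is kept
facts = [("Paris", "capital_of", "France"), ("X", "born_in", "Y")]
counts = {("Paris", "France"): 50_000, ("X", "Y"): 12}
print(debias_finetuning_set(facts, counts))
```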
MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning | Recently, Meta-Black-Box Optimization with Reinforcement Learning
(MetaBBO-RL) has showcased the power of leveraging RL at the meta-level to
mitigate manual fine-tuning of low-level black-box optimizers. However, this
field is hindered by the lack of a unified benchmark. To fill this gap, we
introduce MetaBox, the first benchmark platform expressly tailored for
developing and evaluating MetaBBO-RL methods. MetaBox offers a flexible
algorithmic template that allows users to effortlessly implement their unique
designs within the platform. Moreover, it provides a broad spectrum of over 300
problem instances, collected from synthetic to realistic scenarios, and an
extensive library of 19 baseline methods, including both traditional black-box
optimizers and recent MetaBBO-RL methods. Besides, MetaBox introduces three
standardized performance metrics, enabling a more thorough assessment of the
methods. In a bid to illustrate the utility of MetaBox for facilitating
rigorous evaluation and in-depth analysis, we carry out a wide-ranging
benchmarking study on existing MetaBBO-RL methods. Our MetaBox is open-source
and accessible at: https://github.com/GMC-DRL/MetaBox. | [
"Zeyuan Ma",
"Hongshu Guo",
"Jiacheng Chen",
"Zhenrui Li",
"Guojun Peng",
"Yue-Jiao Gong",
"Yining Ma",
"Zhiguang Cao"
] | 2023-10-12 11:55:17 | http://arxiv.org/abs/2310.08252v1 | http://arxiv.org/pdf/2310.08252v1 | 2310.08252v1 |
Towards a Unified Analysis of Kernel-based Methods Under Covariate Shift | Covariate shift occurs prevalently in practice, where the input distributions
of the source and target data are substantially different. Despite its
practical importance in various learning problems, most of the existing methods
only focus on some specific learning tasks and are not well validated
theoretically and numerically. To tackle this problem, we propose a unified
analysis of general nonparametric methods in a reproducing kernel Hilbert space
(RKHS) under covariate shift. Our theoretical results are established for a
general loss belonging to a rich loss function family, which includes many
commonly used methods as special cases, such as mean regression, quantile
regression, likelihood-based classification, and margin-based classification.
Two types of covariate shift problems are the focus of this paper and the sharp
convergence rates are established for a general loss function to provide a
unified theoretical analysis, which concurs with the optimal results in
literature where the squared loss is used. Extensive numerical studies on
synthetic and real examples confirm our theoretical findings and further
illustrate the effectiveness of our proposed method. | [
"Xingdong Feng",
"Xin He",
"Caixing Wang",
"Chao Wang",
"Jingnan Zhang"
] | 2023-10-12 11:33:15 | http://arxiv.org/abs/2310.08237v2 | http://arxiv.org/pdf/2310.08237v2 | 2310.08237v2 |
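For context, the canonical kernel-based correction under covariate shift reweights the source loss by the density ratio between target and source inputs; in our notation, with RKHS $\mathcal{H}$, regularization $\lambda$, and a loss $L$ from the considered family, it reads as below (an illustrative baseline, not the paper's estimator).

```latex
% Importance-weighted kernel estimator under covariate shift (illustrative).
\hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{H}}
  \frac{1}{n} \sum_{i=1}^{n} w(x_i)\, L\bigl(y_i, f(x_i)\bigr)
  + \lambda \lVert f \rVert_{\mathcal{H}}^{2},
\qquad
w(x) = \frac{p_{\mathrm{target}}(x)}{p_{\mathrm{source}}(x)}.
```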
GROOT: Learning to Follow Instructions by Watching Gameplay Videos | We study the problem of building a controller that can follow open-ended
instructions in open-world environments. We propose to follow reference videos
as instructions, which offer expressive goal specifications while eliminating
the need for expensive text-gameplay annotations. A new learning framework is
derived to allow learning such instruction-following controllers from gameplay
videos while producing a video instruction encoder that induces a structured
goal space. We implement our agent GROOT in a simple yet effective
encoder-decoder architecture based on causal transformers. We evaluate GROOT
against open-world counterparts and human players on a proposed Minecraft
SkillForge benchmark. The Elo ratings clearly show that GROOT is closing the
human-machine gap as well as exhibiting a 70% winning rate over the best
generalist agent baseline. Qualitative analysis of the induced goal space
further demonstrates some interesting emergent properties, including the goal
composition and complex gameplay behavior synthesis. Code and video can be
found on the website https://craftjarvis-groot.github.io. | [
"Shaofei Cai",
"Bowei Zhang",
"Zihao Wang",
"Xiaojian Ma",
"Anji Liu",
"Yitao Liang"
] | 2023-10-12 11:31:01 | http://arxiv.org/abs/2310.08235v1 | http://arxiv.org/pdf/2310.08235v1 | 2310.08235v1 |
Emergence of Latent Binary Encoding in Deep Neural Network Classifiers | We observe the emergence of binary encoding within the latent space of
deep-neural-network classifiers. Such binary encoding is induced by introducing
a linear penultimate layer, which is equipped during training with a loss
function that grows as $\exp(\vec{x}^2)$, where $\vec{x}$ are the coordinates
in the latent space. The phenomenon we describe represents a specific instance
of a well-documented occurrence known as \textit{neural collapse}, which arises
in the terminal phase of training and entails the collapse of latent class
means to the vertices of a simplex equiangular tight frame (ETF). We show that
binary encoding accelerates convergence toward the simplex ETF and enhances
classification accuracy. | [
"Luigi Sbailò",
"Luca Ghiringhelli"
] | 2023-10-12 11:16:57 | http://arxiv.org/abs/2310.08224v1 | http://arxiv.org/pdf/2310.08224v1 | 2310.08224v1 |
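The mechanism in the record above, a loss on the linear penultimate layer growing as $\exp(\vec{x}^2)$ in the latent coordinates, can be added to a standard classification objective as sketched below; the elementwise form, the mean reduction, and the penalty weight are our assumptions.

```python
import torch
import torch.nn.functional as F

def binary_encoding_loss(logits, latents, targets, penalty_weight=1e-3):
    """Cross-entropy plus a penalty growing as exp(x^2) in each latent coordinate.

    latents: activations of the linear penultimate layer, shape (batch, dim).
    The elementwise application and penalty_weight are our assumptions; per
    the record, training with such a term induces binary-like latent codes.
    """
    ce = F.cross_entropy(logits, targets)
    penalty = torch.exp(latents.pow(2)).mean()
    return ce + penalty_weight * penalty
```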
SimCKP: Simple Contrastive Learning of Keyphrase Representations | Keyphrase generation (KG) aims to generate a set of summarizing words or
phrases given a source document, while keyphrase extraction (KE) aims to
identify them from the text. Because the search space is much smaller in KE, it
is often combined with KG to predict keyphrases that may or may not exist in
the corresponding document. However, current unified approaches adopt sequence
labeling and maximization-based generation that primarily operate at a token
level, falling short in observing and scoring keyphrases as a whole. In this
work, we propose SimCKP, a simple contrastive learning framework that consists
of two stages: 1) An extractor-generator that extracts keyphrases by learning
context-aware phrase-level representations in a contrastive manner while also
generating keyphrases that do not appear in the document; 2) A reranker that
adapts scores for each generated phrase by likewise aligning their
representations with the corresponding document. Experimental results on
multiple benchmark datasets demonstrate the effectiveness of our proposed
approach, which outperforms the state-of-the-art models by a significant
margin. | [
"Minseok Choi",
"Chaeheon Gwak",
"Seho Kim",
"Si Hyeong Kim",
"Jaegul Choo"
] | 2023-10-12 11:11:54 | http://arxiv.org/abs/2310.08221v1 | http://arxiv.org/pdf/2310.08221v1 | 2310.08221v1 |
TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion | Continual learning (CL) has remained a persistent challenge for deep neural
networks due to catastrophic forgetting (CF) of previously learned tasks.
Several techniques such as weight regularization, experience rehearsal, and
parameter isolation have been proposed to alleviate CF. Despite their relative
success, these research directions have predominantly remained orthogonal and
suffer from several shortcomings, while missing out on the advantages of
competing strategies. On the contrary, the brain continually learns,
accommodates, and transfers knowledge across tasks by simultaneously leveraging
several neurophysiological processes, including neurogenesis, active
forgetting, neuromodulation, metaplasticity, experience rehearsal, and
context-dependent gating, rarely resulting in CF. Inspired by how the brain
exploits multiple mechanisms concurrently, we propose TriRE, a novel CL
paradigm that encompasses retaining the most prominent neurons for each task,
revising and solidifying the extracted knowledge of current and past tasks, and
actively promoting less active neurons for subsequent tasks through rewinding
and relearning. Across CL settings, TriRE significantly reduces task
interference and surpasses different CL approaches considered in isolation. | [
"Preetha Vijayan",
"Prashant Bhat",
"Elahe Arani",
"Bahram Zonooz"
] | 2023-10-12 11:05:34 | http://arxiv.org/abs/2310.08217v1 | http://arxiv.org/pdf/2310.08217v1 | 2310.08217v1 |
Trustworthy Machine Learning | As machine learning technology gets applied to actual products and solutions,
new challenges have emerged. Models unexpectedly fail to generalize to small
changes in the distribution, tend to be confident on novel data they have never
seen, or cannot communicate the rationale behind their decisions effectively
with the end users. Collectively, we face a trustworthiness issue with the
current machine learning technology. This textbook on Trustworthy Machine
Learning (TML) covers a theoretical and technical background of four key topics
in TML: Out-of-Distribution Generalization, Explainability, Uncertainty
Quantification, and Evaluation of Trustworthiness. We discuss important
classical and contemporary research papers of the aforementioned fields and
uncover and connect their underlying intuitions. The book evolved from the
homonymous course at the University of T\"ubingen, first offered in the Winter
Semester of 2022/23. It is meant to be a stand-alone product accompanied by
code snippets and various pointers to further sources on topics of TML. The
dedicated website of the book is https://trustworthyml.io/. | [
"Bálint Mucsányi",
"Michael Kirchhof",
"Elisa Nguyen",
"Alexander Rubinstein",
"Seong Joon Oh"
] | 2023-10-12 11:04:17 | http://arxiv.org/abs/2310.08215v1 | http://arxiv.org/pdf/2310.08215v1 | 2310.08215v1 |
Conformal inference for regression on Riemannian Manifolds | Regression on manifolds, and, more broadly, statistics on manifolds, has
garnered significant importance in recent years due to the vast number of
applications for this type of data. Circular data is a classic example, but so
is data in the space of covariance matrices, data on the Grassmannian manifold
obtained as a result of principal component analysis, among many others. In
this work we investigate prediction sets for regression scenarios when the
response variable, denoted by $Y$, resides in a manifold, and the covariable,
denoted by $X$, lies in Euclidean space. This extends the concepts delineated in
[Lei and Wasserman, 2014] to this novel context. Aligning with traditional
principles in conformal inference, these prediction sets are distribution-free,
indicating that no specific assumptions are imposed on the joint distribution
of $(X, Y)$, and they maintain a non-parametric character. We prove the
asymptotic almost sure convergence of the empirical version of these regions on
the manifold to their population counterparts. The efficiency of this method is
shown through a comprehensive simulation study and an analysis involving
real-world data. | [
"Alejandro Cholaquidis",
"Fabrice Gamboa",
"Leonardo Moreno"
] | 2023-10-12 10:56:25 | http://arxiv.org/abs/2310.08209v1 | http://arxiv.org/pdf/2310.08209v1 | 2310.08209v1 |
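For readers less familiar with the conformal machinery being extended to manifolds above, the Euclidean split-conformal construction works as in the sketch below; on a manifold, the absolute residual would be replaced by a geodesic distance, yielding a geodesic ball rather than an interval. The code is a generic illustration, not the paper's method.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Distribution-free prediction interval via split conformal inference.

    model: any fitted regressor exposing a .predict method (an assumption of
    this sketch). Returns an interval covering y_new with probability at
    least 1 - alpha under exchangeability.
    """
    residuals = np.abs(y_cal - model.predict(X_cal))       # calibration scores
    n = len(residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)   # finite-sample correction
    q = np.quantile(residuals, level)
    pred = model.predict(np.atleast_2d(x_new))[0]
    return pred - q, pred + q
```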
Lifelong Audio-video Masked Autoencoder with Forget-robust Localized Alignments | We present a lifelong audio-video masked autoencoder that continually learns
the multimodal representations from a video stream containing audio-video
pairs, while its distribution continually shifts over time. Specifically, we
propose two novel ideas to tackle the problem: (1) Localized Alignment: We
introduce a small trainable multimodal encoder that predicts the audio and
video tokens that are well-aligned with each other. This allows the model to
learn only the highly correlated audiovisual patches with accurate multimodal
relationships. (2) Forget-robust multimodal patch selection: We compare the
relative importance of each audio-video patch between the current and past data
pair to mitigate unintended drift of the previously learned audio-video
representations. Our proposed method, FLAVA (Forget-robust Localized
Audio-Video Alignment), therefore, captures the complex relationships between
the audio and video modalities during training on a sequence of pre-training
tasks while alleviating the forgetting of learned audiovisual correlations. Our
experiments validate that FLAVA outperforms the state-of-the-art continual
learning methods on several benchmark datasets under continual audio-video
representation learning scenarios. | [
"Jaewoo Lee",
"Jaehong Yoon",
"Wonjae Kim",
"Yunji Kim",
"Sung Ju Hwang"
] | 2023-10-12 10:50:21 | http://arxiv.org/abs/2310.08204v1 | http://arxiv.org/pdf/2310.08204v1 | 2310.08204v1 |
Beyond Traditional DoE: Deep Reinforcement Learning for Optimizing Experiments in Model Identification of Battery Dynamics | Model identification of battery dynamics is a central problem in energy
research; many energy management systems and design processes rely on accurate
battery models for efficiency optimization. The standard methodology for
battery modelling is traditional design of experiments (DoE), where the battery
dynamics are excited with many different current profiles and the measured
outputs are used to estimate the system dynamics. However, although it is
possible to obtain useful models with the traditional approach, the process is
time consuming and expensive because of the need to sweep many different
current-profile configurations. In the present work, a novel DoE approach is
developed based on deep reinforcement learning, which alters the configuration
of the experiments on the fly based on the statistics of past experiments.
Instead of sticking to a library of predefined current profiles, the proposed
approach modifies the current profiles dynamically by updating the output space
covered by past measurements, hence only the current profiles that are
informative for future experiments are applied. Simulations and real
experiments are used to show that the proposed approach gives models that are
as accurate as those obtained with traditional DoE while using 85\% fewer
resources. | [
"Gokhan Budan",
"Francesca Damiani",
"Can Kurtulus",
"N. Kemal Ure"
] | 2023-10-12 10:44:47 | http://arxiv.org/abs/2310.08198v1 | http://arxiv.org/pdf/2310.08198v1 | 2310.08198v1 |
Learn From Model Beyond Fine-Tuning: A Survey | Foundation models (FM) have demonstrated remarkable performance across a wide
range of tasks (especially in the fields of natural language processing and
computer vision), primarily attributed to their ability to comprehend
instructions and access extensive, high-quality data. This not only showcases
their current effectiveness but also sets a promising trajectory towards the
development of artificial general intelligence. Unfortunately, due to multiple
constraints, the raw data used to train large models are often
inaccessible, so the use of end-to-end models for downstream tasks has become a
new research trend, which we call Learn From Model (LFM) in this article. LFM
focuses on the research, modification, and design of FM based on the model
interface, so as to better understand the model structure and weights (in a
black box environment), and to generalize the model to downstream tasks. The
study of LFM techniques can be broadly categorized into five major areas: model
tuning, model distillation, model reuse, meta learning and model editing. Each
category encompasses a repertoire of methods and strategies that aim to enhance
the capabilities and performance of FM. This paper gives a comprehensive review
of the current methods based on FM from the perspective of LFM, in order to
help readers better understand the current research status and ideas. To
conclude, we summarize the survey by highlighting several critical areas for
future exploration and addressing open issues that require further attention
from the research community. The relevant papers we investigated in this
article can be accessed at
<https://github.com/ruthless-man/Awesome-Learn-from-Model>. | [
"Hongling Zheng",
"Li Shen",
"Anke Tang",
"Yong Luo",
"Han Hu",
"Bo Du",
"Dacheng Tao"
] | 2023-10-12 10:20:36 | http://arxiv.org/abs/2310.08184v1 | http://arxiv.org/pdf/2310.08184v1 | 2310.08184v1 |
XIMAGENET-12: An Explainable AI Benchmark Dataset for Model Robustness Evaluation | The lack of standardized robustness metrics and the widespread reliance on
numerous unrelated benchmark datasets for testing have created a gap between
academically validated robust models and their often problematic practical
adoption. To address this, we introduce XIMAGENET-12, an explainable benchmark
dataset with over 200K images and 15,600 manual semantic annotations. Covering
12 categories from ImageNet to represent objects commonly encountered in
practical life and simulating six diverse scenarios, including overexposure,
blurring, color changing, etc., we further propose a novel robustness criterion
that extends beyond assessing model generalization ability. This benchmark
dataset, along with related code, is available at
https://sites.google.com/view/ximagenet-12/home. Researchers and practitioners
can leverage this resource to evaluate the robustness of their visual models
under challenging conditions and ultimately benefit from the demands of
practical computer vision systems. | [
"Qiang Li",
"Dan Zhang",
"Shengzhao Lei",
"Xun Zhao",
"Shuyan Li",
"Porawit Kamnoedboon",
"WeiWei Li"
] | 2023-10-12 10:17:40 | http://arxiv.org/abs/2310.08182v1 | http://arxiv.org/pdf/2310.08182v1 | 2310.08182v1 |
Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization | Evaluating the adversarial robustness of machine learning models using
gradient-based attacks is challenging. In this work, we show that
hyperparameter optimization can improve fast minimum-norm attacks by automating
the selection of the loss function, the optimizer and the step-size scheduler,
along with the corresponding hyperparameters. Our extensive evaluation
involving several robust models demonstrates the improved efficacy of fast
minimum-norm attacks when combined with hyperparameter optimization. We release
our open-source code at https://github.com/pralab/HO-FMN. | [
"Giuseppe Floris",
"Raffaele Mura",
"Luca Scionis",
"Giorgio Piras",
"Maura Pintor",
"Ambra Demontis",
"Battista Biggio"
] | 2023-10-12 10:03:25 | http://arxiv.org/abs/2310.08177v1 | http://arxiv.org/pdf/2310.08177v1 | 2310.08177v1 |
Infinite Width Graph Neural Networks for Node Regression/ Classification | This work analyzes Graph Neural Networks, a generalization of Fully-Connected
Deep Neural Nets on graph-structured data, when their width, that is, the number
of nodes in each fully-connected layer, increases to infinity. Infinite Width
Neural Networks are connecting Deep Learning to Gaussian Processes and Kernels,
both Machine Learning Frameworks with long traditions and extensive theoretical
foundations. Gaussian Processes and Kernels have far fewer hyperparameters than
Neural Networks and can be used for uncertainty estimation, making them more
user-friendly for applications. This work extends the increasing amount of
research connecting Gaussian Processes and Kernels to Neural Networks. The
Kernel and Gaussian Process closed forms are derived for a variety of
architectures, namely the standard Graph Neural Network, the Graph Neural
Network with Skip-Concatenate Connections and the Graph Attention Neural
Network. All architectures are evaluated on a variety of datasets on the task
of transductive Node Regression and Classification. Additionally, a Spectral
Sparsification method known as Effective Resistance is used to improve runtime
and memory requirements. Extending the setting to inductive graph learning
tasks (Graph Regression/ Classification) is straightforward and is briefly
discussed in Section 3.5. | [
"Yunus Cobanoglu"
] | 2023-10-12 10:01:39 | http://arxiv.org/abs/2310.08176v1 | http://arxiv.org/pdf/2310.08176v1 | 2310.08176v1 |
COVID-19 Detection Using Swin Transformer Approach from Computed Tomography Images | The accurate and efficient diagnosis of COVID-19 is of paramount importance,
particularly in the context of large-scale medical imaging datasets. In this
preprint paper, we propose a novel approach for COVID-19 diagnosis using CT
images that leverages the power of Swin Transformer models, state-of-the-art
solutions in computer vision tasks. Our method includes a systematic approach
for patient-level predictions, where individual CT slices are classified as
COVID-19 or non-COVID, and the patient's overall diagnosis is determined
through majority voting. The application of the Swin Transformer in this
context results in patient-level predictions that demonstrate exceptional
diagnostic accuracy. In terms of evaluation metrics, our approach consistently
outperforms the baseline, as well as numerous competing methods, showcasing its
effectiveness in COVID-19 diagnosis. The macro F1 score achieved by our model
exceeds the baseline and offers a robust solution for accurate diagnosis. | [
"Kenan Morani"
] | 2023-10-12 09:37:56 | http://arxiv.org/abs/2310.08165v1 | http://arxiv.org/pdf/2310.08165v1 | 2310.08165v1 |
Interpreting Reward Models in RLHF-Tuned Language Models Using Sparse Autoencoders | Large language models (LLMs) aligned to human preferences via reinforcement
learning from human feedback (RLHF) underpin many commercial applications.
However, how RLHF impacts LLM internals remains opaque. We propose a novel
method to interpret learned reward functions in RLHF-tuned LLMs using sparse
autoencoders. Our approach trains autoencoder sets on activations from a base
LLM and its RLHF-tuned version. By comparing autoencoder hidden spaces, we
identify unique features that reflect the accuracy of the learned reward model.
To quantify this, we construct a scenario where the tuned LLM learns
token-reward mappings to maximize reward. This is the first application of
sparse autoencoders for interpreting learned rewards and broadly inspecting
reward learning in LLMs. Our method provides an abstract approximation of
reward integrity. This presents a promising technique for ensuring alignment
between specified objectives and model behaviors. | [
"Luke Marks",
"Amir Abdullah",
"Luna Mendez",
"Rauno Arike",
"Philip Torr",
"Fazl Barez"
] | 2023-10-12 09:36:03 | http://arxiv.org/abs/2310.08164v1 | http://arxiv.org/pdf/2310.08164v1 | 2310.08164v1 |
On Extreme Value Asymptotics of Projected Sample Covariances in High Dimensions with Applications in Finance and Convolutional Networks | Maximum-type statistics of certain functions of the sample covariance matrix
of high-dimensional vector time series are studied to statistically confirm or
reject the null hypothesis that a data set has been collected under normal
conditions. The approach generalizes the case of the maximal deviation of the
sample autocovariance function from its assumed values. Within a linear time
series framework it is shown that Gumbel-type extreme value asymptotics holds
true. As applications, we discuss long-only minimal-variance portfolio
optimization and subportfolio analysis with respect to idiosyncratic risks, ETF
index tracking by sparse tracking portfolios, convolutional deep learners for
image analysis and the analysis of array-of-sensors data. | [
"Ansgar Steland"
] | 2023-10-12 09:17:46 | http://arxiv.org/abs/2310.08150v1 | http://arxiv.org/pdf/2310.08150v1 | 2310.08150v1 |
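The Gumbel-type limit asserted in the record above is the classical extreme-value form: with suitable model-dependent centering $b_n$ and scaling $a_n$ (denoted generically here, not taken from the paper), the maximum-type statistic $M_n$ satisfies:

```latex
% Classical Gumbel extreme-value limit for a maximum-type statistic M_n.
\lim_{n \to \infty} P\bigl( a_n ( M_n - b_n ) \le x \bigr)
  = \exp\!\bigl( -e^{-x} \bigr), \qquad x \in \mathbb{R}.
```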
Open-Set Knowledge-Based Visual Question Answering with Inference Paths | Given an image and an associated textual question, the purpose of
Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct
answer to the question with the aid of external knowledge bases. Prior KB-VQA
models are usually formulated as a retriever-classifier framework, where a
pre-trained retriever extracts textual or visual information from knowledge
graphs and then makes a prediction among the candidates. Despite promising
progress, there are two drawbacks with existing models. Firstly, modeling
question-answering as multi-class classification limits the answer space to a
preset corpus and lacks the ability of flexible reasoning. Secondly, the
classifier merely considers "what is the answer" without "how to get the
answer", and thus cannot ground the answer in explicit reasoning paths. In this
paper, we confront the challenge of \emph{explainable open-set} KB-VQA, where
the system is required to answer questions with entities in the wild and retain an
explainable reasoning path. To resolve the aforementioned issues, we propose a
new retriever-ranker paradigm of KB-VQA, Graph pATH rankER (GATHER for
brevity). Specifically, it contains graph constructing, pruning, and path-level
ranking, which not only retrieves accurate answers but also provides inference
paths that explain the reasoning process. To comprehensively evaluate our
model, we reformulate the benchmark dataset OK-VQA with manually corrected
entity-level annotations and release it as ConceptVQA. Extensive experiments on
real-world questions demonstrate that our framework is not only able to perform
open-set question answering across the whole knowledge base but also to provide
explicit reasoning paths. | [
"Jingru Gan",
"Xinzhe Han",
"Shuhui Wang",
"Qingming Huang"
] | 2023-10-12 09:12:50 | http://arxiv.org/abs/2310.08148v1 | http://arxiv.org/pdf/2310.08148v1 | 2310.08148v1 |
Multi-Scale Spatial-Temporal Recurrent Networks for Traffic Flow Prediction | Traffic flow prediction is one of the most fundamental tasks of intelligent
transportation systems. The complex and dynamic spatial-temporal dependencies
make the traffic flow prediction quite challenging. Although existing
spatial-temporal graph neural networks hold prominent, they often encounter
challenges such as (1) ignoring the fixed graph that limits the predictive
performance of the model, (2) insufficiently capturing complex spatial-temporal
dependencies simultaneously, and (3) lacking attention to spatial-temporal
information at different time lengths. In this paper, we propose a Multi-Scale
Spatial-Temporal Recurrent Network for traffic flow prediction, namely MSSTRN,
which consists of two different recurrent neural networks: the single-step gate
recurrent unit and the multi-step gate recurrent unit to fully capture the
complex spatial-temporal information in the traffic data under different time
windows. Moreover, we propose a spatial-temporal synchronous attention
mechanism that integrates adaptive position graph convolutions into the
self-attention mechanism to achieve synchronous capture of spatial-temporal
dependencies. We conducted extensive experiments on four real traffic datasets
and demonstrated that our model achieves the best prediction accuracy with
non-trivial margins compared to all the twenty baseline methods. | [
"Haiyang Liu",
"Chunjiang Zhu",
"Detian Zhang",
"Qing Li"
] | 2023-10-12 08:52:36 | http://arxiv.org/abs/2310.08138v1 | http://arxiv.org/pdf/2310.08138v1 | 2310.08138v1 |
Counterfactual Explanations for Time Series Forecasting | Among recent developments in time series forecasting methods, deep
forecasting models have gained popularity as they can utilize hidden feature
patterns in time series to improve forecasting performance. Nevertheless, the
majority of current deep forecasting models are opaque, hence making it
challenging to interpret the results. While counterfactual explanations have
been extensively employed as a post-hoc approach for explaining classification
models, their application to forecasting models remains underexplored. In
this paper, we formulate the novel problem of counterfactual generation for
time series forecasting, and propose an algorithm, called ForecastCF, that
solves the problem by applying gradient-based perturbations to the original
time series. ForecastCF guides the perturbations by applying constraints to the
forecasted values to obtain desired prediction outcomes. We experimentally
evaluate ForecastCF using four state-of-the-art deep model architectures and
compare it to two baselines. Our results show that ForecastCF outperforms the
baselines in terms of counterfactual validity and data manifold closeness.
Overall, our findings suggest that ForecastCF can generate meaningful and
relevant counterfactual explanations for various forecasting tasks. | [
"Zhendong Wang",
"Ioanna Miliou",
"Isak Samsten",
"Panagiotis Papapetrou"
] | 2023-10-12 08:51:59 | http://arxiv.org/abs/2310.08137v1 | http://arxiv.org/pdf/2310.08137v1 | 2310.08137v1 |
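To make the gradient-based counterfactual idea above concrete, here is a hedged sketch: a toy forecaster's input is perturbed until the forecast enters a desired band, with a proximity term keeping the counterfactual close to the original. The model, band, and penalty weights are hypothetical, and ForecastCF itself adds further constraints and schedules:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 6))  # toy forecaster
x = torch.randn(1, 24)                 # original input window
lower = torch.full((1, 6), 0.5)        # desired forecast band
upper = torch.full((1, 6), 1.5)

x_cf = x.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.05)
for _ in range(200):
    yhat = model(x_cf)
    # hinge penalties pull the forecast inside [lower, upper];
    # the quadratic term keeps the counterfactual close to the original input
    loss = (torch.relu(lower - yhat) + torch.relu(yhat - upper)).mean() \
           + 0.1 * ((x_cf - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(model(x_cf).detach())   # forecast after perturbation
```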
Core-sets for Fair and Diverse Data Summarization | We study core-set construction algorithms for the task of Diversity
Maximization under fairness/partition constraint. Given a set of points $P$ in
a metric space partitioned into $m$ groups, and given $k_1,\ldots,k_m$, the
goal of this problem is to pick $k_i$ points from each group $i$ such that the
overall diversity of the $k=\sum_i k_i$ picked points is maximized. We consider
two natural diversity measures: sum-of-pairwise distances and
sum-of-nearest-neighbor distances, and show improved core-set construction
algorithms with respect to these measures. More precisely, we show the first
constant factor core-set w.r.t. sum-of-pairwise distances whose size is
independent of the size of the dataset and the aspect ratio. Second, we show
the first core-set w.r.t. the sum-of-nearest-neighbor distances. Finally, we
run several experiments showing the effectiveness of our core-set approach. In
particular, we apply constrained diversity maximization to summarize a set of
timed messages in a way that takes into account the messages' recency:
the summary should include more recent messages compared to older ones. This is a
real task at one of the largest communication platforms, affecting the
experience of hundreds of millions of daily active users. By utilizing our
core-set method for this task, we achieve a 100x speed-up while reducing
diversity by only a few percent. Moreover, our approach allows us to improve
the space usage of the algorithm in the streaming setting. | [
"Sepideh Mahabadi",
"Stojan Trajanovski"
] | 2023-10-12 08:24:02 | http://arxiv.org/abs/2310.08122v1 | http://arxiv.org/pdf/2310.08122v1 | 2310.08122v1 |
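The following is an illustrative greedy baseline for diversity maximization under partition constraints (pick $k_i$ points per group, favoring a large sum of pairwise distances); it is a naive heuristic for intuition only, not the paper's core-set construction:

```python
import numpy as np

def greedy_fair_diverse(points, groups, quotas):
    """Pick quotas[g] points from each group g, greedily maximizing
    the sum of pairwise distances among the chosen points."""
    chosen, remaining = [], dict(quotas)
    centroid = points.mean(axis=0)
    while any(q > 0 for q in remaining.values()):
        best, best_gain = None, -np.inf
        for i in range(len(points)):
            if i in chosen or remaining[groups[i]] <= 0:
                continue
            gain = (sum(np.linalg.norm(points[i] - points[j]) for j in chosen)
                    if chosen else np.linalg.norm(points[i] - centroid))
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        remaining[groups[best]] -= 1
    return chosen

rng = np.random.default_rng(2)
pts = rng.standard_normal((30, 2))
grp = rng.integers(0, 2, size=30)                    # two groups (e.g., recent vs. old)
print(greedy_fair_diverse(pts, grp, {0: 2, 1: 2}))   # two points from each group
```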
Overview of Physics-Informed Machine Learning Inversion of Geophysical Data | We review four types of algorithms for physics-informed machine learning
(PIML) inversion of geophysical data. The unifying equation is given by the
joint objective function $\epsilon$:
\begin{equation} \epsilon^{||-PIML} = \lambda_1 \overbrace{||{\bf W}^{ML}({\bf H}_{{\bf w}} {\bf d}^{obs}-{\bf m})||^2}^{NN} + \lambda_2 \overbrace{||{\bf W}^{FWI}({\bf L} {\bf m}-{\bf d}^{obs})||^2}^{FWI} + \mathrm{Regularizer}, \end{equation}
where the optimal model ${\bf m}^*$ and weights ${\bf w}^*$ minimize
$\epsilon$. Here, the matrix weights are given by the boldface symbol
$\bf W$, and full waveform inversion (FWI) is typically computed using a
finite-difference solution of the wave equation, where $\bf L$ represents the
forward modeling operation of the wave equation as a function of the model $\bf
m$. Also, a fully-connected neural network (NN) is used to compute the model
${\bf H_w}{\bf d}^{obs} \approx \bf m$ from the observed input data ${\bf
d}^{obs}$. The selection of weights $\lambda_i$ and the NN operations determine
one of four different PIML algorithms.
PIML offers potential advantages over standard FWI through its enhanced
ability to avoid local minima and the option to locally train the inversion
operator, minimizing the requirement for extensive training data for global
applicability. However, the effectiveness of PIML relies on the similarity
between the test and training data. Nevertheless, a possible strategy to
overcome this limitation involves initial pretraining of a PIML architecture
with data from a broader region, followed by fine-tuning on specific data, a
method reminiscent of the way large language models are pretrained and adapted
for various tasks. | [
"Gerard T. Schuster",
"Shihang Feng"
] | 2023-10-12 08:10:31 | http://arxiv.org/abs/2310.08109v1 | http://arxiv.org/pdf/2310.08109v1 | 2310.08109v1 |
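A hedged sketch of how the joint objective above could be evaluated and minimized, using a random linear operator as a stand-in for the wave-equation forward modeling ${\bf L}$, identity weighting matrices, and a small fully-connected network for ${\bf H_w}$ (all illustrative assumptions):

```python
import torch
import torch.nn as nn

n_d, n_m = 64, 32                          # data and model dimensions
L = torch.randn(n_d, n_m)                  # stand-in for the forward modeling operator
H_w = nn.Sequential(nn.Linear(n_d, 64), nn.ReLU(), nn.Linear(64, n_m))

d_obs = torch.randn(8, n_d)                # batch of observed data
m = torch.randn(8, n_m, requires_grad=True)
lam1, lam2 = 1.0, 1.0                      # the lambdas select among PIML variants

def piml_loss():
    nn_term = ((H_w(d_obs) - m) ** 2).sum()      # ||H_w d_obs - m||^2 (NN term)
    fwi_term = ((m @ L.T - d_obs) ** 2).sum()    # ||L m - d_obs||^2 (FWI term)
    return lam1 * nn_term + lam2 * fwi_term      # (+ regularizer, omitted here)

opt = torch.optim.Adam([m, *H_w.parameters()], lr=1e-3)
loss = piml_loss()
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```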
Generative Intrinsic Optimization: Intrinsic Control with Model Learning | A future sequence represents the outcome of executing an action in the
environment. When driven by the information-theoretic concept of mutual
information, it seeks maximally informative consequences. Explicit outcomes may
vary across states, returns, or trajectories, serving different purposes such as
credit assignment or imitation learning. However, the inherent nature of
incorporating intrinsic motivation with reward maximization is often neglected.
In this work, we propose a variational approach to jointly learn the necessary
quantity for estimating the mutual information and the dynamics model,
providing a general framework for incorporating different forms of outcomes of
interest. Integrated into a policy iteration scheme, our approach guarantees
convergence to the optimal policy. While we mainly focus on theoretical
analysis, our approach opens the possibilities of leveraging intrinsic control
with model learning to enhance sample efficiency and incorporate uncertainty of
the environment into decision-making. | [
"Jianfei Ma"
] | 2023-10-12 07:50:37 | http://arxiv.org/abs/2310.08100v1 | http://arxiv.org/pdf/2310.08100v1 | 2310.08100v1 |
ClimateBERT-NetZero: Detecting and Assessing Net Zero and Reduction Targets | Public and private actors struggle to assess the vast amounts of information
about sustainability commitments made by various institutions. To address this
problem, we create a novel tool for automatically detecting corporate,
national, and regional net zero and reduction targets in three steps. First, we
introduce an expert-annotated data set with 3.5K text samples. Second, we train
and release ClimateBERT-NetZero, a natural language classifier to detect
whether a text contains a net zero or reduction target. Third, we showcase its
analysis potential with two use cases: We first demonstrate how
ClimateBERT-NetZero can be combined with conventional question-answering (Q&A)
models to analyze the ambitions displayed in net zero and reduction targets.
Furthermore, we employ the ClimateBERT-NetZero model on quarterly earnings call
transcripts and outline how communication patterns evolve over time. Our
experiments demonstrate promising pathways for extracting and analyzing net
zero and emission reduction targets at scale. | [
"Tobias Schimanski",
"Julia Bingler",
"Camilla Hyslop",
"Mathias Kraus",
"Markus Leippold"
] | 2023-10-12 07:43:27 | http://arxiv.org/abs/2310.08096v1 | http://arxiv.org/pdf/2310.08096v1 | 2310.08096v1 |
Discerning Temporal Difference Learning | Temporal difference learning (TD) is a foundational concept in reinforcement
learning (RL), aimed at efficiently assessing a policy's value function.
TD($\lambda$), a potent variant, incorporates a memory trace to distribute the
prediction error into the historical context. However, this approach often
neglects the significance of historical states and the relative importance of
propagating the TD error, influenced by challenges such as visitation imbalance
or outcome noise. To address this, we propose a novel TD algorithm named
discerning TD learning (DTD), which allows flexible emphasis
functions, predetermined or adapted during training, to allocate effort
effectively across states. We establish the convergence properties of our
method within a specific class of emphasis functions and showcase its promising
potential for adaptation to deep RL contexts. Empirical results underscore that
employing a judicious emphasis function not only improves value estimation but
also expedites learning across diverse scenarios. | [
"Jianfei Ma"
] | 2023-10-12 07:38:10 | http://arxiv.org/abs/2310.08091v1 | http://arxiv.org/pdf/2310.08091v1 | 2310.08091v1 |
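A toy tabular sketch of the emphasis idea described above: a per-state emphasis function scales how strongly each state's value is corrected by the TD error. The environment, emphasis values, and hyperparameters are hypothetical, and this omits the eligibility-trace machinery of TD($\lambda$):

```python
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)
emphasis = np.array([1.0, 0.5, 2.0, 1.0, 0.1])   # hypothetical per-state emphasis

def td_update(s, r, s_next, done):
    target = r + (0.0 if done else gamma * V[s_next])
    V[s] += alpha * emphasis[s] * (target - V[s])  # emphasis scales the correction

rng = np.random.default_rng(1)
s = 0
for _ in range(1000):                    # deterministic ring environment, noisy reward
    s_next = (s + 1) % n_states
    td_update(s, rng.normal(loc=1.0), s_next, done=(s_next == 0))
    s = s_next
print(V)
```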
Dealing with zero-inflated data: achieving SOTA with a two-fold machine learning approach | In many cases, a machine learning model must learn to correctly predict a few
data points with particular values of interest in a broader range of data where
many target values are zero. Zero-inflated data can be found in diverse
scenarios, such as lumpy and intermittent demands, power consumption for home
appliances being turned on and off, impurities measurement in distillation
processes, and even airport shuttle demand prediction. The presence of zeroes
affects the models' learning and may result in poor performance. Furthermore,
zeroes also distort the metrics used to compute the model's prediction quality.
This paper showcases two real-world use cases (home appliances classification
and airport shuttle demand prediction) where a hierarchical model applied in
the context of zero-inflated data leads to excellent results. In particular,
for home appliances classification, the weighted average of Precision, Recall,
F1, and AUC ROC was increased by 27%, 34%, 49%, and 27%, respectively.
Furthermore, it is estimated that the proposed approach is also four times more
energy-efficient than the SOTA approach against which it was compared.
Two-fold models performed best in all cases when predicting airport shuttle
demand, and the difference against other models has been proven to be
statistically significant. | [
"Jože M. Rožanec",
"Gašper Petelin",
"João Costa",
"Blaž Bertalanič",
"Gregor Cerar",
"Marko Guček",
"Gregor Papa",
"Dunja Mladenić"
] | 2023-10-12 07:26:41 | http://arxiv.org/abs/2310.08088v1 | http://arxiv.org/pdf/2310.08088v1 | 2310.08088v1 |
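An illustrative two-fold (hierarchical) pipeline of the kind the abstract describes: a classifier first decides zero vs. non-zero, and a regressor trained only on non-zero targets handles the rest. The synthetic data and model choices are assumptions for the sketch, not the paper's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))
is_zero = X[:, 0] < 0.5                    # ~69% of targets are (structurally) zero
y = np.where(is_zero, 0.0, X @ np.array([1.0, -2.0, 0.5, 0.0]) + 5.0)

clf = LogisticRegression().fit(X, (y != 0).astype(int))   # stage 1: zero vs. non-zero
reg = LinearRegression().fit(X[y != 0], y[y != 0])        # stage 2: regress non-zeros

def predict(X_new, threshold=0.5):
    nonzero = clf.predict_proba(X_new)[:, 1] >= threshold
    return np.where(nonzero, reg.predict(X_new), 0.0)

print(predict(X[:5]))
print(y[:5])
```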
A Carbon Tracking Model for Federated Learning: Impact of Quantization and Sparsification | Federated Learning (FL) methods adopt efficient communication technologies to
distribute machine learning tasks across edge devices, reducing the overhead in
terms of data storage and computational complexity compared to centralized
solutions. Rather than moving large data volumes from producers (sensors,
machines) to energy-hungry data centers, raising environmental concerns due to
resource demands, FL provides an alternative solution to mitigate the energy
demands of several learning tasks while enabling new Artificial Intelligence of
Things (AIoT) applications. This paper proposes a framework for real-time
monitoring of the energy and carbon footprint impacts of FL systems. The carbon
tracking tool is evaluated for consensus (fully decentralized) and classical FL
policies. For the first time, we present a quantitative evaluation of different
computationally and communication efficient FL methods from the perspectives of
energy consumption and carbon equivalent emissions, suggesting also general
guidelines for energy-efficient design. Results indicate that consensus-driven
FL implementations should be preferred for limiting carbon emissions when the
energy efficiency of the communication is low (i.e., < 25 Kbit/Joule). Besides,
quantization and sparsification operations are shown to strike a balance
between learning performances and energy consumption, leading to sustainable FL
designs. | [
"Luca Barbieri",
"Stefano Savazzi",
"Sanaz Kianoush",
"Monica Nicoli",
"Luigi Serio"
] | 2023-10-12 07:20:03 | http://arxiv.org/abs/2310.08087v1 | http://arxiv.org/pdf/2310.08087v1 | 2310.08087v1 |
To token or not to token: A Comparative Study of Text Representations for Cross-Lingual Transfer | Choosing an appropriate tokenization scheme is often a bottleneck in
low-resource cross-lingual transfer. To understand the downstream implications
of text representation choices, we perform a comparative analysis on language
models having diverse text representation modalities including 2
segmentation-based models (\texttt{BERT}, \texttt{mBERT}), 1 image-based model
(\texttt{PIXEL}), and 1 character-level model (\texttt{CANINE}). First, we
propose a scoring metric, the Language Quotient (LQ), capable of providing a
weighted representation of zero-shot and few-shot evaluation combined. Utilizing
this metric, we perform experiments comprising 19 source languages and 133
target languages on three tasks (POS tagging, Dependency parsing, and NER). Our
analysis reveals that image-based models excel in cross-lingual transfer when
languages are closely related and share visually similar scripts. However, for
tasks biased toward word meaning (POS, NER), segmentation-based models prove to
be superior. Furthermore, in dependency parsing tasks where word relationships
play a crucial role, models with a character-level focus outperform
others. Finally, we propose a recommendation scheme based on our findings to
guide model selection according to task and language requirements. | [
"Md Mushfiqur Rahman",
"Fardin Ahsan Sakib",
"Fahim Faisal",
"Antonios Anastasopoulos"
] | 2023-10-12 06:59:10 | http://arxiv.org/abs/2310.08078v1 | http://arxiv.org/pdf/2310.08078v1 | 2310.08078v1 |
Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks | Neural network pruning has shown to be an effective technique for reducing
the network size, trading desirable properties like generalization and
robustness to adversarial attacks for higher sparsity. Recent work has claimed
that adversarial pruning methods can produce sparse networks while also
preserving robustness to adversarial examples. In this work, we first
re-evaluate three state-of-the-art adversarial pruning methods, showing that
their robustness was indeed overestimated. We then compare pruned and dense
versions of the same models, discovering that samples on thin ice, i.e., closer
to the unpruned model's decision boundary, are typically misclassified after
pruning. We conclude by discussing how this intuition may lead to designing
more effective adversarial pruning methods in future work. | [
"Giorgio Piras",
"Maura Pintor",
"Ambra Demontis",
"Battista Biggio"
] | 2023-10-12 06:50:43 | http://arxiv.org/abs/2310.08073v1 | http://arxiv.org/pdf/2310.08073v1 | 2310.08073v1 |
Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation | Despite the great progress of unsupervised domain adaptation (UDA) with the
deep neural networks, current UDA models are opaque and cannot provide
promising explanations, limiting their applications in the scenarios that
require safe and controllable model decisions. At present, a surge of work
focuses on designing deep interpretable methods with adequate data annotations,
and only a few methods consider the distributional shift problem. Most existing
interpretable UDA methods are post-hoc ones, which cannot facilitate the model
learning process for performance enhancement. In this paper, we propose an
inherently interpretable method, named Transferable Conceptual Prototype
Learning (TCPL), which could simultaneously interpret and improve the processes
of knowledge transfer and decision-making in UDA. To achieve this goal, we
design a hierarchically prototypical module that transfers categorical basic
concepts from the source domain to the target domain and learns domain-shared
prototypes for explaining the underlying reasoning process. With the learned
transferable prototypes, a self-predictive consistent pseudo-label strategy
that fuses confidence, predictions, and prototype information, is designed for
selecting suitable target samples for pseudo annotations and gradually
narrowing down the domain gap. Comprehensive experiments show that the proposed
method can not only provide effective and intuitive explanations but also
outperform previous state-of-the-art methods. | [
"Junyu Gao",
"Xinhong Ma",
"Changsheng Xu"
] | 2023-10-12 06:36:41 | http://arxiv.org/abs/2310.08071v1 | http://arxiv.org/pdf/2310.08071v1 | 2310.08071v1 |
Tight Time-Space Lower Bounds for Constant-Pass Learning | In his breakthrough paper, Raz showed that any parity learning algorithm
requires either quadratic memory or an exponential number of samples [FOCS'16,
JACM'19]. A line of work that followed extended this result to a large class of
learning problems. Until recently, all these results considered learning in the
streaming model, where each sample is drawn independently, and the learner is
allowed a single pass over the stream of samples. Garg, Raz, and Tal [CCC'19]
considered a stronger model, allowing multiple passes over the stream. In the
$2$-pass model, they showed that learning parities of size $n$ requires either
a memory of size $n^{1.5}$ or at least $2^{\sqrt{n}}$ samples. (Their result
also generalizes to other learning problems.)
In this work, for any constant $q$, we prove tight memory-sample lower bounds
for any parity learning algorithm that makes $q$ passes over the stream of
samples. We show that such a learner requires either $\Omega(n^{2})$ memory
size or at least $2^{\Omega(n)}$ samples. Beyond establishing a tight lower
bound, this is the first non-trivial lower bound for $q$-pass learning for any
$q\ge 3$. Similar to prior work, our results extend to any learning problem
with many nearly-orthogonal concepts.
We complement the lower bound with an upper bound, showing that parity
learning with $q$ passes can be done efficiently with $O(n^2/\log q)$ memory. | [
"Xin Lyu",
"Avishay Tal",
"Hongxun Wu",
"Junzhao Yang"
] | 2023-10-12 06:36:31 | http://arxiv.org/abs/2310.08070v1 | http://arxiv.org/pdf/2310.08070v1 | 2310.08070v1 |
Rethinking Negative Pairs in Code Search | Recently, contrastive learning has become a key component in fine-tuning code
search models for software development efficiency and effectiveness. It pulls
together positive code snippets while pushing negative samples away given
search queries. Among contrastive learning objectives, InfoNCE is the most widely used
loss function due to its better performance. However, the following problems in
negative samples of InfoNCE may deteriorate its representation learning: 1) The
existence of false negative samples in large code corpora due to duplications.
2) The failure to explicitly differentiate between the potential relevance of
negative samples. For example, a bubble sorting algorithm is less
``negative'' than a file saving function for the quick sorting algorithm query.
In this paper, we tackle the above problems by proposing a simple yet effective
Soft-InfoNCE loss that inserts weight terms into InfoNCE. In our proposed loss
function, we apply three methods to estimate the weights of negative pairs and
show that the vanilla InfoNCE loss is a special case of Soft-InfoNCE.
Theoretically, we analyze the effects of Soft-InfoNCE on controlling the
distribution of learnt code representations and on deducing a more precise
mutual information estimation. We furthermore discuss the superiority of the
proposed loss function over other design alternatives. Extensive experiments
demonstrate the effectiveness of Soft-InfoNCE and the weight estimation methods
under state-of-the-art code search models on a large-scale public dataset
consisting of six programming languages. Source code is available at
\url{https://github.com/Alex-HaochenLi/Soft-InfoNCE}. | [
"Haochen Li",
"Xin Zhou",
"Luu Anh Tuan",
"Chunyan Miao"
] | 2023-10-12 06:32:42 | http://arxiv.org/abs/2310.08069v1 | http://arxiv.org/pdf/2310.08069v1 | 2310.08069v1 |
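A minimal sketch of inserting weight terms into InfoNCE, as described above; setting all weights to one recovers the vanilla loss. The similarity scoring, temperature, and weight values here are illustrative, and the paper's three weight-estimation methods are not reproduced:

```python
import torch
import torch.nn.functional as F

def soft_info_nce(query, pos, negs, weights, tau=0.07):
    """query: (d,), pos: (d,), negs: (n, d), weights: (n,) non-negative."""
    query, pos, negs = (F.normalize(t, dim=-1) for t in (query, pos, negs))
    s_pos = (query @ pos) / tau
    s_neg = (negs @ query) / tau
    denom = s_pos.exp() + (weights * s_neg.exp()).sum()   # weighted negatives
    return denom.log() - s_pos                            # = -log(exp(s_pos) / denom)

d, n = 128, 16
q, p = torch.randn(d), torch.randn(d)
negs = torch.randn(n, d)
uniform = torch.ones(n)          # all-ones weights recover vanilla InfoNCE
down_weighted = torch.rand(n)    # e.g., down-weight suspected false negatives
print(soft_info_nce(q, p, negs, uniform), soft_info_nce(q, p, negs, down_weighted))
```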
ETDock: A Novel Equivariant Transformer for Protein-Ligand Docking | Predicting the docking between proteins and ligands is a crucial and
challenging task for drug discovery. However, traditional docking methods
mainly rely on scoring functions, and deep learning-based docking approaches
usually neglect the 3D spatial information of proteins and ligands, as well as
the graph-level features of ligands, which limits their performance. To address
these limitations, we propose an equivariant transformer neural network for
protein-ligand docking pose prediction. Our approach involves the fusion of
ligand graph-level features by feature processing, followed by the learning of
ligand and protein representations using our proposed TAMformer module.
Additionally, we employ an iterative optimization approach based on the
predicted distance matrix to generate refined ligand poses. The experimental
results on real datasets show that our model can achieve state-of-the-art
performance. | [
"Yiqiang Yi",
"Xu Wan",
"Yatao Bian",
"Le Ou-Yang",
"Peilin Zhao"
] | 2023-10-12 06:23:12 | http://arxiv.org/abs/2310.08061v1 | http://arxiv.org/pdf/2310.08061v1 | 2310.08061v1 |
Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation | Learning from Label Proportions (LLP) is a learning problem where only
aggregate level labels are available for groups of instances, called bags,
during training, and the aim is to get the best performance at the
instance-level on the test data. This setting arises in domains like
advertising and medicine due to privacy considerations. We propose a novel
algorithmic framework for this problem that iteratively performs two main
steps. For the first step (Pseudo Labeling) in every iteration, we define a
Gibbs distribution over binary instance labels that incorporates a) covariate
information through the constraint that instances with similar covariates
should have similar labels and b) the bag level aggregated label. We then use
Belief Propagation (BP) to marginalize the Gibbs distribution to obtain pseudo
labels. In the second step (Embedding Refinement), we use the pseudo labels to
provide supervision for a learner that yields a better embedding. Further, we
iterate on the two steps again by using the second step's embeddings as new
covariates for the next iteration. In the final iteration, a classifier is
trained using the pseudo labels. Our algorithm displays strong gains against
several SOTA baselines (up to 15%) for the LLP Binary Classification problem on
various dataset types (tabular and image). We achieve these improvements with
minimal computational overhead above standard supervised learning due to Belief
Propagation, for large bag sizes, even for a million samples. | [
"Shreyas Havaldar",
"Navodita Sharma",
"Shubhi Sareen",
"Karthikeyan Shanmugam",
"Aravindan Raghuveer"
] | 2023-10-12 06:09:26 | http://arxiv.org/abs/2310.08056v1 | http://arxiv.org/pdf/2310.08056v1 | 2310.08056v1 |
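The sketch below illustrates only the bag-level supervision signal that defines LLP (matching each bag's mean predicted probability to its label proportion); the paper's Gibbs-distribution pseudo-labeling and Belief Propagation steps are not reproduced here, and all data and hyperparameters are hypothetical:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
bags = [torch.randn(32, 10) for _ in range(4)]     # 4 bags of 32 instances each
props = torch.tensor([0.1, 0.4, 0.7, 0.9])         # bag-level label proportions

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    # the mean predicted positive probability per bag should match its proportion
    pred_props = torch.stack([torch.sigmoid(model(b)).mean() for b in bags])
    loss = ((pred_props - props) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(pred_props.detach())
```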
LGL-BCI: A Lightweight Geometric Learning Framework for Motor Imagery-Based Brain-Computer Interfaces | Brain-Computer Interfaces (BCIs) are a groundbreaking technology for
interacting with external devices using brain signals. Despite advancements,
electroencephalogram (EEG)-based Motor Imagery (MI) tasks face challenges like
amplitude and phase variability, and complex spatial correlations, with a need
for smaller model size and faster inference. This study introduces the LGL-BCI
framework, which employs geometric deep learning for EEG processing in
non-Euclidean metric spaces, particularly the Symmetric Positive Definite (SPD)
Manifold space. LGL-BCI offers robust EEG data representation and captures
spatial correlations. We propose an EEG channel selection solution via a
feature decomposition algorithm to reduce SPD matrix dimensionality, with a
lossless transformation boosting inference speed. Extensive experiments show
LGL-BCI's superior accuracy and efficiency compared to current solutions,
highlighting geometric deep learning's potential in MI-BCI applications.
Assessed on two public EEG datasets and two real-world EEG devices, LGL-BCI
significantly outperforms the state-of-the-art solution in accuracy ($82.54\%$
versus $62.22\%$) with fewer parameters (64.9M compared to 183.7M). | [
"Jianchao Lu",
"Yuzhe Tian",
"Yang Zhang",
"Jiaqi Ge",
"Quan Z. Sheng",
"Xi Zheng"
] | 2023-10-12 05:52:54 | http://arxiv.org/abs/2310.08051v1 | http://arxiv.org/pdf/2310.08051v1 | 2310.08051v1 |
Exploring the Relationship Between Model Architecture and In-Context Learning Ability | What is the relationship between model architecture and the ability to
perform in-context learning? In this empirical study, we take the first steps
towards answering this question. In particular, we evaluate fifteen model
architectures across a suite of synthetic in-context learning tasks. The
selected architectures represent a broad range of paradigms, including
recurrent and convolution-based neural networks, transformers, and emerging
attention alternatives. We discover that all considered architectures can
perform in-context learning under certain conditions. However, contemporary
architectures are found to be the best performing, especially as task
complexity grows. Additionally, our follow-up experiments delve into various
factors that influence in-context learning. We observe varied sensitivities
among architectures with respect to hyperparameter settings. Our study of
training dynamics reveals that certain architectures exhibit a smooth,
progressive learning trajectory, while others demonstrate periods of stagnation
followed by abrupt mastery of the task. Finally, and somewhat surprisingly, we
find that several emerging attention alternatives are more robust in-context
learners than transformers; since such approaches have constant-sized memory
footprints at inference time, this result opens the future possibility of
scaling up in-context learning to vastly larger numbers of in-context examples. | [
"Ivan Lee",
"Nan Jiang",
"Taylor Berg-Kirkpatrick"
] | 2023-10-12 05:43:06 | http://arxiv.org/abs/2310.08049v1 | http://arxiv.org/pdf/2310.08049v1 | 2310.08049v1 |
QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models | Large Language Models (LLMs) excel in NLP, but their demands hinder their
widespread deployment. While Quantization-Aware Training (QAT) offers a
solution, its extensive training costs make Post-Training Quantization (PTQ) a
more practical approach for LLMs. In existing studies, activation outliers in
particular channels are identified as the bottleneck to PTQ accuracy. They
propose to transform the magnitudes from activations to weights, which however
offers limited alleviation or suffers from unstable gradients, resulting in a
severe performance drop at low bitwidths. In this paper, we propose QLLM, an
accurate and efficient low-bitwidth PTQ method designed for LLMs. QLLM
introduces an adaptive channel reassembly technique that reallocates the
magnitude of outliers to other channels, thereby mitigating their impact on the
quantization range. This is achieved by channel disassembly and channel
assembly, which first breaks down the outlier channels into several
sub-channels to ensure a more balanced distribution of activation magnitudes.
Then similar channels are merged to maintain the original channel number for
efficiency. Additionally, an adaptive strategy is designed to autonomously
determine the optimal number of sub-channels for channel disassembly. To
further compensate for the performance loss caused by quantization, we propose
an efficient tuning method that only learns a small number of low-rank weights
while freezing the pre-trained quantized model. After training, these low-rank
parameters can be fused into the frozen weights without affecting inference.
Extensive experiments on LLaMA-1 and LLaMA-2 show that QLLM can obtain accurate
quantized models efficiently. For example, QLLM quantizes the 4-bit LLaMA-2-70B
within 10 hours on a single A100-80G GPU, outperforming the previous
state-of-the-art method by 7.89% on the average accuracy across five zero-shot
tasks. | [
"Jing Liu",
"Ruihao Gong",
"Xiuying Wei",
"Zhiwei Dong",
"Jianfei Cai",
"Bohan Zhuang"
] | 2023-10-12 05:25:49 | http://arxiv.org/abs/2310.08041v1 | http://arxiv.org/pdf/2310.08041v1 | 2310.08041v1 |
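A toy numerical sketch of the channel disassembly/assembly idea: splitting an outlier activation channel into sub-channels (and duplicating the corresponding weight columns) shrinks the activation range while leaving the layer output unchanged. Dimensions and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(8)
x[3] = 50.0                                      # channel 3 is an activation outlier
W = rng.standard_normal((4, 8))                  # next linear layer: y = W @ x
y_ref = W @ x

k = 5                                            # disassemble channel 3 into k sub-channels
x_split = np.concatenate([x[:3], np.full(k, x[3] / k), x[4:]])
W_split = np.concatenate([W[:, :3], np.repeat(W[:, 3:4], k, axis=1), W[:, 4:]], axis=1)

print(np.abs(x).max(), np.abs(x_split).max())    # outlier magnitude reduced (50 -> 10)
print(np.allclose(W_split @ x_split, y_ref))     # layer output preserved exactly
```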
SEE-OoD: Supervised Exploration For Enhanced Out-of-Distribution Detection | Current techniques for Out-of-Distribution (OoD) detection predominantly rely
on quantifying predictive uncertainty and incorporating model regularization
during the training phase, using either real or synthetic OoD samples. However,
methods that utilize real OoD samples lack exploration and are prone to overfit
the OoD samples at hand, whereas synthetic samples are often generated based on
features extracted from training data, rendering them less effective when the
training and OoD data are highly overlapped in the feature space. In this work,
we propose a Wasserstein-score-based generative adversarial training scheme to
enhance OoD detection accuracy, which, for the first time, performs data
augmentation and exploration simultaneously under the supervision of limited
OoD samples. Specifically, the generator explores OoD spaces and generates
synthetic OoD samples using feedback from the discriminator, while the
discriminator exploits both the observed and synthesized samples for OoD
detection using a predefined Wasserstein score. We provide theoretical
guarantees that the optimal solutions of our generative scheme are
statistically achievable through adversarial training in empirical settings. We
then demonstrate that the proposed method outperforms state-of-the-art
techniques on various computer vision datasets and exhibits superior
generalizability to unseen OoD data. | [
"Xiaoyang Song",
"Wenbo Sun",
"Maher Nouiehed",
"Raed Al Kontar",
"Judy Jin"
] | 2023-10-12 05:20:18 | http://arxiv.org/abs/2310.08040v1 | http://arxiv.org/pdf/2310.08040v1 | 2310.08040v1 |
Rethinking Large-scale Pre-ranking System: Entire-chain Cross-domain Models | Industrial systems such as recommender systems and online advertising, have
been widely equipped with multi-stage architectures, which are divided into
several cascaded modules, including matching, pre-ranking, ranking and
re-ranking. As a critical bridge between matching and ranking, existing
pre-ranking approaches mainly suffer from the sample selection bias (SSB) problem owing
to ignoring the entire-chain data dependence, resulting in sub-optimal
performances. In this paper, we rethink pre-ranking system from the perspective
of the entire sample space, and propose Entire-chain Cross-domain Models (ECM),
which leverage samples from the whole cascaded stages to effectively alleviate
SSB problem. Besides, we design a fine-grained neural structure named ECMM to
further improve the pre-ranking accuracy. Specifically, we propose a
cross-domain multi-tower neural network to comprehensively predict for each
stage's result, and introduce a sub-network routing strategy with $L_0$
regularization to reduce computational costs. Evaluations on real-world
large-scale traffic logs demonstrate that our pre-ranking models outperform
SOTA methods while keeping time consumption within an acceptable level,
achieving a better trade-off between efficiency and effectiveness. | [
"Jinbo Song",
"Ruoran Huang",
"Xinyang Wang",
"Wei Huang",
"Qian Yu",
"Mingming Chen",
"Yafei Yao",
"Chaosheng Fan",
"Changping Peng",
"Zhangang Lin",
"Jinghe Hu",
"Jingping Shao"
] | 2023-10-12 05:14:42 | http://arxiv.org/abs/2310.08039v1 | http://arxiv.org/pdf/2310.08039v1 | 2310.08039v1 |
Continual Learning via Manifold Expansion Replay | In continual learning, the learner learns multiple tasks in sequence, with
data being acquired only once for each task. Catastrophic forgetting is a major
challenge to continual learning. To reduce forgetting, some existing
rehearsal-based methods use episodic memory to replay samples of previous
tasks. However, in the process of knowledge integration when learning a new
task, this strategy also suffers from catastrophic forgetting due to an
imbalance between old and new knowledge. To address this problem, we propose a
novel replay strategy called Manifold Expansion Replay (MaER). We argue that
expanding the implicit manifold of the knowledge representation in the episodic
memory helps to improve the robustness and expressiveness of the model. To this
end, we propose a greedy strategy to keep increasing the diameter of the
implicit manifold represented by the knowledge in the buffer during memory
management. In addition, we introduce Wasserstein distance instead of cross
entropy as distillation loss to preserve previous knowledge. With extensive
experimental validation on MNIST, CIFAR10, CIFAR100, and TinyImageNet, we show
that the proposed method significantly improves accuracy in the continual
learning setup, outperforming the state of the art. | [
"Zihao Xu",
"Xuan Tang",
"Yufei Shi",
"Jianfeng Zhang",
"Jian Yang",
"Mingsong Chen",
"Xian Wei"
] | 2023-10-12 05:09:27 | http://arxiv.org/abs/2310.08038v1 | http://arxiv.org/pdf/2310.08038v1 | 2310.08038v1 |
ZEST: Attention-based Zero-Shot Learning for Unseen IoT Device Classification | Recent research works have proposed machine learning models for classifying
IoT devices connected to a network. However, there is still a practical
challenge of not having all devices (and hence their traffic) available during
the training of a model. This essentially means, during the operational phase,
we need to classify new devices not seen during the training phase. To address
this challenge, we propose ZEST -- a ZSL (zero-shot learning) framework based
on self-attention for classifying both seen and unseen devices. ZEST consists
of i) a self-attention based network feature extractor, termed SANE, for
extracting latent space representations of IoT traffic, ii) a generative model
that trains a decoder using latent features to generate pseudo data, and iii) a
supervised model that is trained on the generated pseudo data for classifying
devices. We carry out extensive experiments on real IoT traffic data; our
experiments demonstrate i) ZEST achieves significant improvement (in terms of
accuracy) over the baselines; ii) ZEST is able to better extract meaningful
representations than LSTM, which has been commonly used for modeling network
traffic. | [
"Binghui Wu",
"Philipp Gysel",
"Dinil Mon Divakaran",
"Mohan Gurusamy"
] | 2023-10-12 05:08:21 | http://arxiv.org/abs/2310.08036v1 | http://arxiv.org/pdf/2310.08036v1 | 2310.08036v1 |
Local Graph Clustering with Noisy Labels | The growing interest in machine learning problems over graphs with additional
node information such as texts, images, or labels has popularized methods that
require the costly operation of processing the entire graph. Yet, little effort
has been made to the development of fast local methods (i.e. without accessing
the entire graph) that extract useful information from such data. To that end,
we propose a study of local graph clustering using noisy node labels as a proxy
for additional node information. In this setting, nodes receive initial binary
labels based on cluster affiliation: 1 if they belong to the target cluster and
0 otherwise. Subsequently, a fraction of these labels is flipped. We
investigate the benefits of incorporating noisy labels for local graph
clustering. By constructing a weighted graph with such labels, we study the
performance of graph diffusion-based local clustering method on both the
original and the weighted graphs. From a theoretical perspective, we consider
recovering an unknown target cluster with a single seed node in a random graph
with independent noisy node labels. We provide sufficient conditions on the
label noise under which, with high probability, using diffusion in the weighted
graph yields a more accurate recovery of the target cluster. This approach
proves more effective than using the given labels alone or using diffusion in
the label-free original graph. Empirically, we show that reliable node labels
can be obtained with just a few samples from an attributed graph. Moreover,
utilizing these labels via diffusion in the weighted graph leads to
significantly better local clustering performance across several real-world
datasets, improving F1 scores by up to 13%. | [
"Artur Back de Luca",
"Kimon Fountoulakis",
"Shenghao Yang"
] | 2023-10-12 04:37:15 | http://arxiv.org/abs/2310.08031v1 | http://arxiv.org/pdf/2310.08031v1 | 2310.08031v1 |
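A hedged sketch of the weighted-graph idea above: noisy binary labels up-weight edges inside the candidate cluster before a seeded, personalized-PageRank-style diffusion is run. The graph generation, weighting scheme, and parameters are illustrative assumptions, not the paper's method:

```python
import numpy as np

def diffuse(A, seed, alpha=0.15, iters=100):
    """Seeded diffusion: iterate x = alpha*p + (1-alpha)*P^T x from a seed node."""
    deg = np.maximum(A.sum(axis=1), 1e-12)
    P = A / deg[:, None]                          # row-stochastic transition matrix
    p = np.zeros(A.shape[0]); p[seed] = 1.0
    x = p.copy()
    for _ in range(iters):
        x = alpha * p + (1 - alpha) * P.T @ x
    return x

rng = np.random.default_rng(4)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)  # symmetric graph, no self-loops

labels = rng.integers(0, 2, n)                    # noisy binary node labels
boost = 1.0 + 2.0 * np.outer(labels, labels)      # up-weight edges inside label-1 set
scores = diffuse(A * boost, seed=int(np.argmax(labels)))
print(np.argsort(scores)[::-1][:5])               # top-ranked candidate cluster nodes
```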
Robust 1-bit Compressed Sensing with Iterative Hard Thresholding | In 1-bit compressed sensing, the aim is to estimate a $k$-sparse unit vector
$x\in S^{n-1}$ within an $\epsilon$ error (in $\ell_2$) from a minimal number of
linear measurements that are quantized to just their signs, i.e., from
measurements of the form $y = \mathrm{Sign}(\langle a, x\rangle).$ In this
paper, we study a noisy version where a fraction of the measurements can be
flipped, potentially by an adversary. In particular, we analyze the Binary
Iterative Hard Thresholding (BIHT) algorithm, a proximal gradient descent on a
properly defined loss function used for 1-bit compressed sensing, in this noisy
setting. It is known from recent results that, with
$\tilde{O}(\frac{k}{\epsilon})$ noiseless measurements, BIHT provides an
estimate within $\epsilon$ error. This result is optimal and universal, meaning
one set of measurements works for all sparse vectors. In this paper, we show
that BIHT also provides better results than all known methods for the noisy
setting. We show that when up to $\tau$-fraction of the sign measurements are
incorrect (adversarial error), with the same number of measurements as before,
BIHT agnostically provides an estimate of $x$ within an
$\tilde{O}(\epsilon+\tau)$ error, maintaining the universality of measurements.
This establishes stability of iterative hard thresholding in the presence of
measurement error. To obtain the result, we use the restricted approximate
invertibility of Gaussian matrices, as well as a tight analysis of the
high-dimensional geometry of the adversarially corrupted measurements. | [
"Namiko Matsumoto",
"Arya Mazumdar"
] | 2023-10-12 03:41:32 | http://arxiv.org/abs/2310.08019v1 | http://arxiv.org/pdf/2310.08019v1 | 2310.08019v1 |
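A minimal sketch of the BIHT iteration analyzed above: a (sub)gradient step on the one-bit loss followed by hard thresholding to the $k$ largest entries, with a small fraction of sign flips injected to mimic adversarial corruption. The step size and iteration count are illustrative choices:

```python
import numpy as np

def hard_threshold(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]              # keep the k largest-magnitude entries
    out[idx] = v[idx]
    return out

def biht(A, y, k, iters=100, eta=1.0):
    m, _ = A.shape
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + (eta / m) * A.T @ (y - np.sign(A @ x)), k)
    return x / max(np.linalg.norm(x), 1e-12)      # recovery is only up to scale

rng = np.random.default_rng(0)
n, m, k = 200, 800, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
y[rng.choice(m, m // 20, replace=False)] *= -1    # flip 5% of signs (corruption)
print(np.linalg.norm(biht(A, y, k) - x_true))     # small ell_2 error expected
```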
Why Train More? Effective and Efficient Membership Inference via Memorization | Membership Inference Attacks (MIAs) aim to identify specific data samples
within the private training dataset of machine learning models, leading to
serious privacy violations and other sophisticated threats. Many practical
black-box MIAs require query access to the data distribution (the same
distribution where the private data is drawn) to train shadow models. By doing
so, the adversary obtains models trained "with" or "without" samples drawn from
the distribution, and analyzes the characteristics of the samples under
consideration. The adversary is often required to train more than hundreds of
shadow models to extract the signals needed for MIAs; this becomes the
computational overhead of MIAs. In this paper, we propose that by strategically
choosing the samples, MI adversaries can maximize their attack success while
minimizing the number of shadow models. First, our motivational experiments
suggest memorization as the key property explaining disparate sample
vulnerability to MIAs. We formalize this through a theoretical bound that
connects MI advantage with memorization. Second, we show sample complexity
bounds that connect the number of shadow models needed for MIAs with
memorization. Lastly, we confirm our theoretical arguments with comprehensive
experiments; by utilizing samples with high memorization scores, the adversary
can (a) significantly improve its efficacy regardless of the MIA used, and (b)
reduce the number of shadow models by nearly two orders of magnitude compared
to state-of-the-art approaches. | [
"Jihye Choi",
"Shruti Tople",
"Varun Chandrasekaran",
"Somesh Jha"
] | 2023-10-12 03:29:53 | http://arxiv.org/abs/2310.08015v1 | http://arxiv.org/pdf/2310.08015v1 | 2310.08015v1 |
AutoFHE: Automated Adaption of CNNs for Efficient Evaluation over FHE | Secure inference of deep convolutional neural networks (CNNs) under RNS-CKKS
involves polynomial approximation of unsupported non-linear activation
functions. However, existing approaches have three main limitations: 1)
Inflexibility: The polynomial approximation and associated homomorphic
evaluation architecture are customized manually for each CNN architecture and
do not generalize to other networks. 2) Suboptimal Approximation: Each
activation function is approximated instead of the function represented by the
CNN. 3) Restricted Design: Either high-degree or low-degree polynomial
approximations are used. The former retains high accuracy but slows down
inference due to bootstrapping operations, while the latter accelerates
ciphertext inference but compromises accuracy. To address these limitations, we
present AutoFHE, which automatically adapts standard CNNs for secure inference
under RNS-CKKS. The key idea is to adopt layerwise mixed-degree polynomial
activation functions, which are optimized jointly with the homomorphic
evaluation architecture in terms of the placement of bootstrapping operations.
The problem is modeled within a multi-objective optimization framework to
maximize accuracy and minimize the number of bootstrapping operations. AutoFHE
can be applied flexibly on any CNN architecture, and it provides diverse
solutions that span the trade-off between accuracy and latency. Experimental
evaluation over RNS-CKKS encrypted CIFAR datasets shows that AutoFHE
accelerates secure inference by $1.32\times$ to $1.8\times$ compared to methods
employing high-degree polynomials. It also improves accuracy by up to 2.56%
compared to methods using low-degree polynomials. Lastly, AutoFHE accelerates
inference and improves accuracy by $103\times$ and 3.46%, respectively,
compared to CNNs under TFHE. | [
"Wei Ao",
"Vishnu Naresh Boddeti"
] | 2023-10-12 03:28:14 | http://arxiv.org/abs/2310.08012v1 | http://arxiv.org/pdf/2310.08012v1 | 2310.08012v1 |