title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---
Retro-fallback: retrosynthetic planning in an uncertain world | Retrosynthesis is the task of proposing a series of chemical reactions to
create a desired molecule from simpler, buyable molecules. While previous works
have proposed algorithms to find optimal solutions for a range of metrics (e.g.
shortest, lowest-cost), these works generally overlook the fact that we have
imperfect knowledge of the space of possible reactions, meaning plans created
by the algorithm may not work in a laboratory. In this paper we propose a novel
formulation of retrosynthesis in terms of stochastic processes to account for
this uncertainty. We then propose a novel greedy algorithm called
retro-fallback which maximizes the probability that at least one synthesis plan
can be executed in the lab. Using in-silico benchmarks we demonstrate that
retro-fallback generally produces better sets of synthesis plans than the
popular MCTS and retro* algorithms. | [
"Austin Tripp",
"Krzysztof Maziarz",
"Sarah Lewis",
"Marwin Segler",
"José Miguel Hernández-Lobato"
] | 2023-10-13 17:35:04 | http://arxiv.org/abs/2310.09270v1 | http://arxiv.org/pdf/2310.09270v1 | 2310.09270v1 |
Genetic algorithms are strong baselines for molecule generation | Generating molecules, both in a directed and undirected fashion, is a huge
part of the drug discovery pipeline. Genetic algorithms (GAs) generate
molecules by randomly modifying known molecules. In this paper we show that GAs
are very strong algorithms for such tasks, outperforming many complicated
machine learning methods: a result which many researchers may find surprising.
We therefore propose insisting during peer review that new algorithms must have
some clear advantage over GAs, which we call the GA criterion. Ultimately our
work suggests that a lot of research in molecule generation should be
re-assessed. | [
"Austin Tripp",
"José Miguel Hernández-Lobato"
] | 2023-10-13 17:25:11 | http://arxiv.org/abs/2310.09267v1 | http://arxiv.org/pdf/2310.09267v1 | 2310.09267v1 |
User Inference Attacks on Large Language Models | Fine-tuning is a common and effective method for tailoring large language
models (LLMs) to specialized tasks and applications. In this paper, we study
the privacy implications of fine-tuning LLMs on user data. To this end, we
define a realistic threat model, called user inference, wherein an attacker
infers whether or not a user's data was used for fine-tuning. We implement
attacks for this threat model that require only a small set of samples from a
user (possibly different from the samples used for training) and black-box
access to the fine-tuned LLM. We find that LLMs are susceptible to user
inference attacks across a variety of fine-tuning datasets, at times with near
perfect attack success rates. Further, we investigate which properties make
users vulnerable to user inference, finding that outlier users (i.e. those with
data distributions sufficiently different from other users) and users who
contribute large quantities of data are most susceptible to attack. Finally, we
explore several heuristics for mitigating privacy attacks. We find that
interventions in the training algorithm, such as batch or per-example gradient
clipping and early stopping fail to prevent user inference. However, limiting
the number of fine-tuning samples from a single user can reduce attack
effectiveness, albeit at the cost of reducing the total amount of fine-tuning
data. | [
"Nikhil Kandpal",
"Krishna Pillutla",
"Alina Oprea",
"Peter Kairouz",
"Christopher A. Choquette-Choo",
"Zheng Xu"
] | 2023-10-13 17:24:52 | http://arxiv.org/abs/2310.09266v1 | http://arxiv.org/pdf/2310.09266v1 | 2310.09266v1 |
PromptRE: Weakly-Supervised Document-Level Relation Extraction via Prompting-Based Data Programming | Relation extraction aims to classify the relationships between two entities
into pre-defined categories. While previous research has mainly focused on
sentence-level relation extraction, recent studies have expanded the scope to
document-level relation extraction. Traditional relation extraction methods
heavily rely on human-annotated training data, which is time-consuming and
labor-intensive. To mitigate the need for manual annotation, recent
weakly-supervised approaches have been developed for sentence-level relation
extraction while limited work has been done on document-level relation
extraction. Weakly-supervised document-level relation extraction faces
significant challenges due to an imbalanced number of "no relation" instances and
the failure of directly probing pretrained large language models for document
relation extraction. To address these challenges, we propose PromptRE, a novel
weakly-supervised document-level relation extraction method that combines
prompting-based techniques with data programming. Furthermore, PromptRE
incorporates the label distribution and entity types as prior knowledge to
improve the performance. By leveraging the strengths of both prompting and data
programming, PromptRE achieves improved performance in relation classification
and effectively handles the "no relation" problem. Experimental results on
ReDocRED, a benchmark dataset for document-level relation extraction,
demonstrate the superiority of PromptRE over baseline approaches. | [
"Chufan Gao",
"Xulin Fan",
"Jimeng Sun",
"Xuan Wang"
] | 2023-10-13 17:23:17 | http://arxiv.org/abs/2310.09265v1 | http://arxiv.org/pdf/2310.09265v1 | 2310.09265v1 |
Towards End-to-end 4-Bit Inference on Generative Large Language Models | We show that the majority of the inference computations for large generative
models such as LLaMA and OPT can be performed with both weights and activations
being cast to 4 bits, in a way that leads to practical speedups while at the
same time maintaining good accuracy. We achieve this via a hybrid quantization
strategy called QUIK, which compresses most of the weights and activations to
4-bit, while keeping some outlier weights and activations in higher-precision.
Crucially, our scheme is designed with computational efficiency in mind: we
provide GPU kernels with highly-efficient layer-wise runtimes, which lead to
practical end-to-end throughput improvements of up to 3.1x relative to FP16
execution. Code and models are provided at https://github.com/IST-DASLab/QUIK. | [
"Saleh Ashkboos",
"Ilia Markov",
"Elias Frantar",
"Tingxuan Zhong",
"Xincheng Wang",
"Jie Ren",
"Torsten Hoefler",
"Dan Alistarh"
] | 2023-10-13 17:15:05 | http://arxiv.org/abs/2310.09259v1 | http://arxiv.org/pdf/2310.09259v1 | 2310.09259v1 |
Generative Entropic Neural Optimal Transport To Map Within and Across Spaces | Learning measure-to-measure mappings is a crucial task in machine learning,
featured prominently in generative modeling. Recent years have witnessed a
surge of techniques that draw inspiration from optimal transport (OT) theory.
Combined with neural network models, these methods collectively known as
\textit{Neural OT} use optimal transport as an inductive bias: such mappings
should be optimal w.r.t. a given cost function, in the sense that they are able
to move points in a thrifty way, within (by minimizing displacements) or across
spaces (by being isometric). This principle, while intuitive, is often
confronted with several practical challenges that require adapting the OT
toolbox: cost functions other than the squared-Euclidean cost can be
challenging to handle, the deterministic formulation of Monge maps leaves
little flexibility, mapping across incomparable spaces raises multiple
challenges, while the mass conservation constraint inherent to OT can provide
too much credit to outliers. While each of these mismatches between practice
and theory has been addressed independently in various works, we propose in
this work an elegant framework to unify them, called \textit{generative
entropic neural optimal transport} (GENOT). GENOT can accommodate any cost
function; handles randomness using conditional generative models; can map
points across incomparable spaces, and can be used as an \textit{unbalanced}
solver. We evaluate our approach through experiments conducted on various
synthetic datasets and demonstrate its practicality in single-cell biology. In
this domain, GENOT proves to be valuable for tasks such as modeling cell
development, predicting cellular responses to drugs, and translating between
different data modalities of cells. | [
"Dominik Klein",
"Théo Uscidda",
"Fabian Theis",
"Marco Cuturi"
] | 2023-10-13 17:12:04 | http://arxiv.org/abs/2310.09254v2 | http://arxiv.org/pdf/2310.09254v2 | 2310.09254v2 |
It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models | Classical wisdom in machine learning holds that the generalization error can
be decomposed into bias and variance, and these two terms exhibit a
\emph{trade-off}. However, in this paper, we show that for an ensemble of deep
learning based classification models, bias and variance are \emph{aligned} at a
sample level, where squared bias is approximately \emph{equal} to variance for
correctly classified sample points. We present empirical evidence confirming
this phenomenon in a variety of deep learning models and datasets. Moreover, we
study this phenomenon from two theoretical perspectives: calibration and neural
collapse. We first show theoretically that under the assumption that the models
are well calibrated, we can observe the bias-variance alignment. Second,
starting from the picture provided by the neural collapse theory, we show an
approximate correlation between bias and variance. | [
"Lin Chen",
"Michal Lukasik",
"Wittawat Jitkrittum",
"Chong You",
"Sanjiv Kumar"
] | 2023-10-13 17:06:34 | http://arxiv.org/abs/2310.09250v1 | http://arxiv.org/pdf/2310.09250v1 | 2310.09250v1 |
Hypernymy Understanding Evaluation of Text-to-Image Models via WordNet Hierarchy | Text-to-image synthesis has recently attracted widespread attention due to
rapidly improving quality and numerous practical applications. However, the
language understanding capabilities of text-to-image models are still poorly
understood, which makes it difficult to reason about prompt formulations that a
given model would understand well. In this work, we measure the capability of
popular text-to-image models to understand $\textit{hypernymy}$, or the "is-a"
relation between words. We design two automatic metrics based on the WordNet
semantic hierarchy and existing image classifiers pretrained on ImageNet. These
metrics both enable broad quantitative comparison of linguistic capabilities
for text-to-image models and offer a way of finding fine-grained qualitative
differences, such as words that are unknown to models and thus are difficult
for them to draw. We comprehensively evaluate popular text-to-image models,
including GLIDE, Latent Diffusion, and Stable Diffusion, showing how our
metrics can provide a better understanding of the individual strengths and
weaknesses of these models. | [
"Anton Baryshnikov",
"Max Ryabinin"
] | 2023-10-13 16:53:25 | http://arxiv.org/abs/2310.09247v1 | http://arxiv.org/pdf/2310.09247v1 | 2310.09247v1 |
Time CNN and Graph Convolution Network for Epileptic Spike Detection in MEG Data | Magnetoencephalography (MEG) recordings of patients with epilepsy exhibit
spikes, a typical biomarker of the pathology. Detecting those spikes allows
accurate localization of brain regions triggering seizures. Spike detection is
often performed manually. However, it is a burdensome and error-prone task due
to the complexity of MEG data. To address this problem, we propose a 1D
temporal convolutional neural network (Time CNN) coupled with a graph
convolutional network (GCN) to classify short time frames of MEG recording as
containing a spike or not. Compared to other recent approaches, our models have
fewer parameters to train, and we propose to use a GCN to account for the MEG
sensors' spatial relationships. Our models produce clinically relevant results
and outperform deep learning-based state-of-the-art methods reaching a
classification f1-score of 76.7% on a balanced dataset and of 25.5% on a
realistic, highly imbalanced dataset, for the spike class. | [
"Pauline Mouches",
"Thibaut Dejean",
"Julien Jung",
"Romain Bouet",
"Carole Lartizien",
"Romain Quentin"
] | 2023-10-13 16:40:29 | http://arxiv.org/abs/2310.09236v1 | http://arxiv.org/pdf/2310.09236v1 | 2310.09236v1 |
Insuring Smiles: Predicting routine dental coverage using Spark ML | Finding suitable health insurance coverage can be challenging for individuals
and small enterprises in the USA. The Health Insurance Exchange Public Use
Files (Exchange PUFs) dataset provided by CMS offers valuable information on
health and dental policies [1]. In this paper, we leverage machine learning
algorithms to predict if a health insurance plan covers routine dental services
for adults. By analyzing plan type, region, deductibles, out-of-pocket
maximums, and copayments, we employ Logistic Regression, Decision Tree, Random
Forest, Gradient Boost, Factorization Model and Support Vector Machine
algorithms. Our goal is to provide a clinical strategy for individuals and
families to select the most suitable insurance plan based on income and
expenses. | [
"Aishwarya Gupta",
"Rahul S. Bhogale",
"Priyanka Thota",
"Prathushkumar Dathuri",
"Jongwook Woo"
] | 2023-10-13 16:31:51 | http://arxiv.org/abs/2310.09229v1 | http://arxiv.org/pdf/2310.09229v1 | 2310.09229v1 |
Fast & Efficient Learning of Bayesian Networks from Data: Knowledge Discovery and Causality | Structure learning is essential for Bayesian networks (BNs) as it uncovers
causal relationships, and enables knowledge discovery, predictions, inferences,
and decision-making under uncertainty. Two novel algorithms, FSBN and SSBN,
based on the PC algorithm, employ local search strategy and conditional
independence tests to learn the causal network structure from data. They
incorporate d-separation to infer additional topology information, prioritize
conditioning sets, and terminate the search immediately and efficiently. FSBN
achieves up to 52% computation cost reduction, while SSBN surpasses it with a
remarkable 72% reduction for a 200-node network. SSBN demonstrates further
efficiency gains due to its intelligent strategy. Experimental studies show
that both algorithms match the induction quality of the PC algorithm while
significantly reducing computation costs. This enables them to offer
interpretability and adaptability while reducing the computational burden,
making them valuable for various applications in big data analytics. | [
"Minn Sein",
"Fu Shunkai"
] | 2023-10-13 16:20:20 | http://arxiv.org/abs/2310.09222v1 | http://arxiv.org/pdf/2310.09222v1 | 2310.09222v1 |
Unseen Image Synthesis with Diffusion Models | While the current trend in the generative field is scaling up towards larger
models and more training data for generalized domain representations, we go the
opposite direction in this work by synthesizing unseen domain images without
additional training. We do so via latent sampling and geometric optimization
using pre-trained and frozen Denoising Diffusion Probabilistic Models (DDPMs)
on single-domain datasets. Our key observation is that DDPMs pre-trained even
just on single-domain images are already equipped with sufficient
representation abilities to reconstruct arbitrary images from the inverted
latent encoding following bi-directional deterministic diffusion and denoising
trajectories. This motivates us to investigate the statistical and geometric
behaviors of the Out-Of-Distribution (OOD) samples from unseen image domains in
the latent spaces along the denoising chain. Notably, we theoretically and
empirically show that the inverted OOD samples also establish Gaussians that
are distinguishable from the original In-Domain (ID) samples in the
intermediate latent spaces, which allows us to sample from them directly.
Geometrical domain-specific and model-dependent information of the unseen
subspace (e.g., sample-wise distance and angles) is used to further optimize
the sampled OOD latent encodings from the estimated Gaussian prior. We conduct
extensive analysis and experiments using pre-trained diffusion models (DDPM,
iDDPM) on different datasets (AFHQ, CelebA-HQ, LSUN-Church, and LSUN-Bedroom),
proving the effectiveness of this novel perspective to explore and re-think the
diffusion models' data synthesis generalization ability. | [
"Ye Zhu",
"Yu Wu",
"Zhiwei Deng",
"Olga Russakovsky",
"Yan Yan"
] | 2023-10-13 16:07:31 | http://arxiv.org/abs/2310.09213v1 | http://arxiv.org/pdf/2310.09213v1 | 2310.09213v1 |
Regularization-Based Methods for Ordinal Quantification | Quantification, i.e., the task of training predictors of the class prevalence
values in sets of unlabeled data items, has received increased attention in
recent years. However, most quantification research has concentrated on
developing algorithms for binary and multiclass problems in which the classes
are not ordered. Here, we study the ordinal case, i.e., the case in which a
total order is defined on the set of n>2 classes. We give three main
contributions to this field. First, we create and make available two datasets
for ordinal quantification (OQ) research that overcome the inadequacies of the
previously available ones. Second, we experimentally compare the most important
OQ algorithms proposed in the literature so far. To this end, we bring together
algorithms proposed by authors from very different research fields, such as
data mining and astrophysics, who were unaware of each others' developments.
Third, we propose a novel class of regularized OQ algorithms, which outperforms
existing algorithms in our experiments. The key to this gain in performance is
that our regularization prevents ordinally implausible estimates, assuming that
ordinal distributions tend to be smooth in practice. We informally verify this
assumption for several real-world applications. | [
"Mirko Bunse",
"Alejandro Moreo",
"Fabrizio Sebastiani",
"Martin Senz"
] | 2023-10-13 16:04:06 | http://arxiv.org/abs/2310.09210v1 | http://arxiv.org/pdf/2310.09210v1 | 2310.09210v1 |
SiamAF: Learning Shared Information from ECG and PPG Signals for Robust Atrial Fibrillation Detection | Atrial fibrillation (AF) is the most common type of cardiac arrhythmia. It is
associated with an increased risk of stroke, heart failure, and other
cardiovascular complications, but can be clinically silent. Passive AF
monitoring with wearables may help reduce adverse clinical outcomes related to
AF. Detecting AF in noisy wearable data poses a significant challenge, leading
to the emergence of various deep learning techniques. Previous deep learning
models learn from a single modality, either electrocardiogram (ECG) or
photoplethysmography (PPG) signals. However, deep learning models often
struggle to learn generalizable features and rely on features that are more
susceptible to corruption from noise, leading to sub-optimal performances in
certain scenarios, especially with low-quality signals. Given the increasing
availability of ECG and PPG signal pairs from wearables and bedside monitors,
we propose a new approach, SiamAF, leveraging a novel Siamese network
architecture and joint learning loss function to learn shared information from
both ECG and PPG signals. At inference time, the proposed model is able to
predict AF from either PPG or ECG and outperforms baseline methods on three
external test sets. It learns medically relevant features as a result of our
novel architecture design. The proposed model also achieves comparable
performance to traditional learning regimes while requiring much fewer training
labels, providing a potential approach to reduce future reliance on manual
labeling. | [
"Zhicheng Guo",
"Cheng Ding",
"Duc H. Do",
"Amit Shah",
"Randall J. Lee",
"Xiao Hu",
"Cynthia Rudin"
] | 2023-10-13 15:48:24 | http://arxiv.org/abs/2310.09203v1 | http://arxiv.org/pdf/2310.09203v1 | 2310.09203v1 |
Graph Condensation via Eigenbasis Matching | The increasing amount of graph data places requirements on the efficiency and
scalability of graph neural networks (GNNs), despite their effectiveness in
various graph-related applications. Recently, the emerging graph condensation
(GC) sheds light on reducing the computational cost of GNNs from a data
perspective. It aims to replace the real large graph with a significantly
smaller synthetic graph so that GNNs trained on both graphs exhibit comparable
performance. However, our empirical investigation reveals that existing GC
methods suffer from poor generalization, i.e., different GNNs trained on the
same synthetic graph have obvious performance gaps. What factors hinder the
generalization of GC and how can we mitigate it? To answer this question, we
commence with a detailed analysis and observe that GNNs will inject spectrum
bias into the synthetic graph, resulting in a distribution shift. To tackle
this issue, we propose eigenbasis matching for spectrum-free graph
condensation, named GCEM, which has two key steps: First, GCEM matches the
eigenbasis of the real and synthetic graphs, rather than the graph structure,
which eliminates the spectrum bias of GNNs. Subsequently, GCEM leverages the
spectrum of the real graph and the synthetic eigenbasis to construct the
synthetic graph, thereby preserving the essential structural information. We
theoretically demonstrate that the synthetic graph generated by GCEM maintains
the spectral similarity, i.e., total variation, of the real graph. Extensive
experiments conducted on five graph datasets verify that GCEM not only achieves
state-of-the-art performance over baselines but also significantly narrows the
performance gaps between different GNNs. | [
"Yang Liu",
"Deyu Bo",
"Chuan Shi"
] | 2023-10-13 15:48:12 | http://arxiv.org/abs/2310.09202v1 | http://arxiv.org/pdf/2310.09202v1 | 2310.09202v1 |
A 4-approximation algorithm for min max correlation clustering | We introduce a lower bounding technique for the min max correlation
clustering problem and, based on this technique, a combinatorial
4-approximation algorithm for complete graphs. This improves upon the previous
best known approximation guarantees of 5, using a linear program formulation
(Kalhan et al., 2019), and 4, for a combinatorial algorithm (Davies et al.,
2023). We extend this algorithm by a greedy joining heuristic and show
empirically that it improves the state of the art in solution quality and
runtime on several benchmark datasets. | [
"Holger Heidrich",
"Jannik Irmai",
"Bjoern Andres"
] | 2023-10-13 15:42:55 | http://arxiv.org/abs/2310.09196v1 | http://arxiv.org/pdf/2310.09196v1 | 2310.09196v1 |
Variational autoencoder with weighted samples for high-dimensional non-parametric adaptive importance sampling | Probability density function estimation with weighted samples is the main
foundation of all adaptive importance sampling algorithms. Classically, a
target distribution is approximated either by a non-parametric model or within
a parametric family. However, these models suffer from the curse of
dimensionality or from their lack of flexibility. In this contribution, we
suggest to use as the approximating model a distribution parameterised by a
variational autoencoder. We extend the existing framework to the case of
weighted samples by introducing a new objective function. The flexibility of
the obtained family of distributions makes it as expressive as a non-parametric
model, and despite the very high number of parameters to estimate, this family
is much more efficient in high dimension than the classical Gaussian or
Gaussian mixture families. Moreover, in order to add flexibility to the model
and to be able to learn multimodal distributions, we consider a learnable prior
distribution for the variational autoencoder latent variables. We also
introduce a new pre-training procedure for the variational autoencoder to find
good starting weights of the neural networks to prevent as much as possible the
posterior collapse phenomenon to happen. At last, we explicit how the resulting
distribution can be combined with importance sampling, and we exploit the
proposed procedure in existing adaptive importance sampling algorithms to draw
points from a target distribution and to estimate a rare event probability in
high dimension on two multimodal problems. | [
"Julien Demange-Chryst",
"François Bachoc",
"Jérôme Morio",
"Timothé Krauth"
] | 2023-10-13 15:40:55 | http://arxiv.org/abs/2310.09194v1 | http://arxiv.org/pdf/2310.09194v1 | 2310.09194v1 |
Does Graph Distillation See Like Vision Dataset Counterpart? | Training on large-scale graphs has achieved remarkable results in graph
representation learning, but its cost and storage have attracted increasing
concerns. Existing graph condensation methods primarily focus on optimizing the
feature matrices of condensed graphs while overlooking the impact of the
structure information from the original graphs. To investigate the impact of
the structure information, we conduct analysis from the spectral domain and
empirically identify substantial Laplacian Energy Distribution (LED) shifts in
previous works. Such shifts lead to poor performance in cross-architecture
generalization and specific tasks, including anomaly detection and link
prediction. In this paper, we propose a novel Structure-broadcasting Graph
Dataset Distillation (SGDD) scheme for broadcasting the original structure
information to the generation of the synthetic one, which explicitly prevents
overlooking the original structure information. Theoretically, the synthetic
graphs by SGDD are expected to have smaller LED shifts than previous works,
leading to superior performance in both cross-architecture settings and
specific tasks. We validate the proposed SGDD across 9 datasets and achieve
state-of-the-art results on all of them: for example, on the YelpChi dataset,
our approach maintains 98.6% test accuracy of training on the original graph
dataset with 1,000 times saving on the scale of the graph. Moreover, we
empirically evaluate there exist 17.6% ~ 31.4% reductions in LED shift crossing
9 datasets. Extensive experiments and analysis verify the effectiveness and
necessity of the proposed designs. The code is available in the GitHub
repository: https://github.com/RingBDStack/SGDD. | [
"Beining Yang",
"Kai Wang",
"Qingyun Sun",
"Cheng Ji",
"Xingcheng Fu",
"Hao Tang",
"Yang You",
"Jianxin Li"
] | 2023-10-13 15:36:48 | http://arxiv.org/abs/2310.09192v1 | http://arxiv.org/pdf/2310.09192v1 | 2310.09192v1 |
PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning | Classical federated learning (FL) enables training machine learning models
without sharing data for privacy preservation, but heterogeneous data
characteristic degrades the performance of the localized model. Personalized FL
(PFL) addresses this by synthesizing personalized models from a global model
via training on local data. Such a global model may overlook the specific
information that the clients have been sampled. In this paper, we propose a
novel scheme to inject personalized prior knowledge into the global model in
each client, which attempts to mitigate the introduced incomplete information
problem in PFL. At the heart of our proposed approach is a framework, the PFL
with Bregman Divergence (pFedBreD), decoupling the personalized prior from the
local objective function regularized by Bregman divergence for greater
adaptability in personalized scenarios. We also relax the mirror descent (RMD)
to extract the prior explicitly to provide optional strategies. Additionally,
our pFedBreD is backed up by a convergence analysis. Sufficient experiments
demonstrate that our method reaches the state-of-the-art performances on 5
datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
Extensive analyses verify the robustness and necessity of proposed designs. | [
"Mingjia Shi",
"Yuhao Zhou",
"Kai Wang",
"Huaizheng Zhang",
"Shudong Huang",
"Qing Ye",
"Jiancheng Lv"
] | 2023-10-13 15:21:25 | http://arxiv.org/abs/2310.09183v1 | http://arxiv.org/pdf/2310.09183v1 | 2310.09183v1 |
A Deep Neural Network -- Mechanistic Hybrid Model to Predict Pharmacokinetics in Rat | An important aspect in the development of small molecules as drugs or
agrochemicals is their systemic availability after intravenous and oral
administration. The prediction of the systemic availability from the chemical
structure of a potential candidate is highly desirable, as it allows to focus
the drug or agrochemical development on compounds with a favorable kinetic
profile. However, such predictions are challenging as the availability is the
result of the complex interplay between molecular properties, biology and
physiology, and training data is rare. In this work we improve the hybrid model
developed earlier [34]. We reduce the median fold change error for the total
oral exposure from 2.85 to 2.35 and for intravenous administration from 1.95 to
1.62. This is achieved by training on a larger data set, improving the neural
network architecture as well as the parametrization of the mechanistic model.
Further, we extend our approach to predict additional endpoints and to handle
different covariates, like sex and dosage form. In contrast to a pure machine
learning model, our model is able to predict new endpoints on which it has not
been trained. We demonstrate this feature by predicting the exposure over the
first 24h, while the model has only been trained on the total exposure. | [
"Florian Führer",
"Andrea Gruber",
"Holger Diedam",
"Andreas H. Göller",
"Stephan Menz",
"Sebastian Schneckener"
] | 2023-10-13 15:01:55 | http://arxiv.org/abs/2310.09167v1 | http://arxiv.org/pdf/2310.09167v1 | 2310.09167v1 |
Quantum Machine Learning in Climate Change and Sustainability: a Review | Climate change and its impact on global sustainability are critical
challenges, demanding innovative solutions that combine cutting-edge
technologies and scientific insights. Quantum machine learning (QML) has
emerged as a promising paradigm that harnesses the power of quantum computing
to address complex problems in various domains including climate change and
sustainability. In this work, we survey existing literature that applies
quantum machine learning to solve climate change and sustainability-related
problems. We review promising QML methodologies that have the potential to
accelerate decarbonization including energy systems, climate data forecasting,
climate monitoring, and hazardous events predictions. We discuss the challenges
and current limitations of quantum machine learning approaches and provide an
overview of potential opportunities and future work to leverage QML-based
methods in the important area of climate change research. | [
"Amal Nammouchi",
"Andreas Kassler",
"Andreas Theocharis"
] | 2023-10-13 14:56:38 | http://arxiv.org/abs/2310.09162v1 | http://arxiv.org/pdf/2310.09162v1 | 2310.09162v1 |
Jointly-Learned Exit and Inference for a Dynamic Neural Network : JEI-DNN | Large pretrained models, coupled with fine-tuning, are slowly becoming
established as the dominant architecture in machine learning. Even though these
models offer impressive performance, their practical application is often
limited by the prohibitive amount of resources required for every inference.
Early-exiting dynamic neural networks (EDNN) circumvent this issue by allowing
a model to make some of its predictions from intermediate layers (i.e.,
early-exit). Training an EDNN architecture is challenging as it consists of two
intertwined components: the gating mechanism (GM) that controls early-exiting
decisions and the intermediate inference modules (IMs) that perform inference
from intermediate representations. As a result, most existing approaches rely
on thresholding confidence metrics for the gating mechanism and strive to
improve the underlying backbone network and the inference modules. Although
successful, this approach has two fundamental shortcomings: 1) the GMs and the
IMs are decoupled during training, leading to a train-test mismatch; and 2) the
thresholding gating mechanism introduces a positive bias into the predictive
probabilities, making it difficult to readily extract uncertainty information.
We propose a novel architecture that connects these two modules. This leads to
significant performance improvements on classification datasets and enables
better uncertainty characterization capabilities. | [
"Florence Regol",
"Joud Chataoui",
"Mark Coates"
] | 2023-10-13 14:56:38 | http://arxiv.org/abs/2310.09163v1 | http://arxiv.org/pdf/2310.09163v1 | 2310.09163v1 |
The Computational Complexity of Finding Stationary Points in Non-Convex Optimization | Finding approximate stationary points, i.e., points where the gradient is
approximately zero, of non-convex but smooth objective functions $f$ over
unrestricted $d$-dimensional domains is one of the most fundamental problems in
classical non-convex optimization. Nevertheless, the computational and query
complexity of this problem are still not well understood when the dimension $d$
of the problem is independent of the approximation error. In this paper, we
show the following computational and query complexity results:
1. The problem of finding approximate stationary points over unrestricted
domains is PLS-complete.
2. For $d = 2$, we provide a zero-order algorithm for finding
$\varepsilon$-approximate stationary points that requires at most
$O(1/\varepsilon)$ value queries to the objective function.
3. We show that any algorithm needs at least $\Omega(1/\varepsilon)$ queries
to the objective function and/or its gradient to find $\varepsilon$-approximate
stationary points when $d=2$. Combined with the above, this characterizes the
query complexity of this problem to be $\Theta(1/\varepsilon)$.
4. For $d = 2$, we provide a zero-order algorithm for finding
$\varepsilon$-KKT points in constrained optimization problems that requires at
most $O(1/\sqrt{\varepsilon})$ value queries to the objective function. This
closes the gap between the works of Bubeck and Mikulincer [2020] and Vavasis
[1993] and characterizes the query complexity of this problem to be
$\Theta(1/\sqrt{\varepsilon})$.
5. Combining our results with the recent result of Fearnley et al. [2022], we
show that finding approximate KKT points in constrained optimization is
reducible to finding approximate stationary points in unconstrained
optimization but the converse is impossible. | [
"Alexandros Hollender",
"Manolis Zampetakis"
] | 2023-10-13 14:52:46 | http://arxiv.org/abs/2310.09157v1 | http://arxiv.org/pdf/2310.09157v1 | 2310.09157v1 |
Lattice Approximations in Wasserstein Space | We consider structured approximation of measures in Wasserstein space
$W_p(\mathbb{R}^d)$ for $p\in[1,\infty)$ by discrete and piecewise constant
measures based on a scaled Voronoi partition of $\mathbb{R}^d$. We show that if
a full rank lattice $\Lambda$ is scaled by a factor of $h\in(0,1]$, then
approximation of a measure based on the Voronoi partition of $h\Lambda$ is
$O(h)$ regardless of $d$ or $p$. We then use a covering argument to show that
$N$-term approximations of compactly supported measures are $O(N^{-\frac1d})$
which matches known rates for optimal quantizers and empirical measure
approximation in most instances. Finally, we extend these results to
noncompactly supported measures with sufficient decay. | [
"Keaton Hamm",
"Varun Khurana"
] | 2023-10-13 14:43:11 | http://arxiv.org/abs/2310.09149v1 | http://arxiv.org/pdf/2310.09149v1 | 2310.09149v1 |
Goodhart's Law in Reinforcement Learning | Implementing a reward function that perfectly captures a complex task in the
real world is impractical. As a result, it is often appropriate to think of the
reward function as a proxy for the true objective rather than as its
definition. We study this phenomenon through the lens of Goodhart's law, which
predicts that increasing optimisation of an imperfect proxy beyond some
critical point decreases performance on the true objective. First, we propose a
way to quantify the magnitude of this effect and show empirically that
optimising an imperfect proxy reward often leads to the behaviour predicted by
Goodhart's law for a wide range of environments and reward functions. We then
provide a geometric explanation for why Goodhart's law occurs in Markov
decision processes. We use these theoretical insights to propose an optimal
early stopping method that provably avoids the aforementioned pitfall and
derive theoretical regret bounds for this method. Moreover, we derive a
training method that maximises worst-case reward, for the setting where there
is uncertainty about the true reward function. Finally, we evaluate our early
stopping method experimentally. Our results support a foundation for a
theoretically-principled study of reinforcement learning under reward
misspecification. | [
"Jacek Karwowski",
"Oliver Hayman",
"Xingjian Bai",
"Klaus Kiendlhofer",
"Charlie Griffin",
"Joar Skalse"
] | 2023-10-13 14:35:59 | http://arxiv.org/abs/2310.09144v1 | http://arxiv.org/pdf/2310.09144v1 | 2310.09144v1 |
The Consensus Game: Language Model Generation via Equilibrium Search | When applied to question answering and other text generation tasks, language
models (LMs) may be queried generatively (by sampling answers from their output
distribution) or discriminatively (by using them to score or rank a set of
candidate outputs). These procedures sometimes yield very different
predictions. How do we reconcile mutually incompatible scoring procedures to
obtain coherent LM predictions? We introduce a new, training-free,
game-theoretic procedure for language model decoding. Our approach casts
language model decoding as a regularized imperfect-information sequential
signaling game - which we term the CONSENSUS GAME - in which a GENERATOR seeks
to communicate an abstract correctness parameter using natural language
sentences to a DISCRIMINATOR. We develop computational procedures for finding
approximate equilibria of this game, resulting in a decoding algorithm we call
EQUILIBRIUM-RANKING. Applied to a large number of tasks (including reading
comprehension, commonsense reasoning, mathematical problem-solving, and
dialog), EQUILIBRIUM-RANKING consistently, and sometimes substantially,
improves performance over existing LM decoding procedures - on multiple
benchmarks, we observe that applying EQUILIBRIUM-RANKING to LLaMA-7B
outperforms the much larger LLaMA-65B and PaLM-540B models. These results
highlight the promise of game-theoretic tools for addressing fundamental
challenges of truthfulness and consistency in LMs. | [
"Athul Paul Jacob",
"Yikang Shen",
"Gabriele Farina",
"Jacob Andreas"
] | 2023-10-13 14:27:21 | http://arxiv.org/abs/2310.09139v1 | http://arxiv.org/pdf/2310.09139v1 | 2310.09139v1 |
Computing Marginal and Conditional Divergences between Decomposable Models with Applications | The ability to compute the exact divergence between two high-dimensional
distributions is useful in many applications but doing so naively is
intractable. Computing the alpha-beta divergence -- a family of divergences
that includes the Kullback-Leibler divergence and Hellinger distance -- between
the joint distribution of two decomposable models, i.e., chordal Markov networks,
can be done in time exponential in the treewidth of these models. However,
reducing the dissimilarity between two high-dimensional objects to a single
scalar value can be uninformative. Furthermore, in applications such as
supervised learning, the divergence over a conditional distribution might be of
more interest. Therefore, we propose an approach to compute the exact
alpha-beta divergence between any marginal or conditional distribution of two
decomposable models. Doing so tractably is non-trivial as we need to decompose
the divergence between these distributions and therefore, require a
decomposition over the marginal and conditional distributions of these models.
Consequently, we provide such a decomposition and also extend existing work to
compute the marginal and conditional alpha-beta divergence between these
decompositions. We then show how our method can be used to analyze
distributional changes by first applying it to a benchmark image dataset.
Finally, based on our framework, we propose a novel way to quantify the error
in contemporary superconducting quantum computers. Code for all experiments is
available at: https://lklee.dev/pub/2023-icdm/code | [
"Loong Kuan Lee",
"Geoffrey I. Webb",
"Daniel F. Schmidt",
"Nico Piatkowski"
] | 2023-10-13 14:17:25 | http://arxiv.org/abs/2310.09129v1 | http://arxiv.org/pdf/2310.09129v1 | 2310.09129v1 |
On Generalization Bounds for Projective Clustering | Given a set of points, clustering consists of finding a partition of a point
set into $k$ clusters such that the center to which a point is assigned is as
close as possible. Most commonly, centers are points themselves, which leads to
the famous $k$-median and $k$-means objectives. One may also choose centers to
be $j$ dimensional subspaces, which gives rise to subspace clustering. In this
paper, we consider learning bounds for these problems. That is, given a set of
$n$ samples $P$ drawn independently from some unknown, but fixed distribution
$\mathcal{D}$, how quickly does a solution computed on $P$ converge to the
optimal clustering of $\mathcal{D}$? We give several near optimal results. In
particular,
For center-based objectives, we show a convergence rate of
$\tilde{O}\left(\sqrt{{k}/{n}}\right)$. This matches the known optimal bounds
of [Fefferman, Mitter, and Narayanan, Journal of the Mathematical Society 2016]
and [Bartlett, Linder, and Lugosi, IEEE Trans. Inf. Theory 1998] for $k$-means
and extends it to other important objectives such as $k$-median.
For subspace clustering with $j$-dimensional subspaces, we show a convergence
rate of $\tilde{O}\left(\sqrt{\frac{kj^2}{n}}\right)$. These are the first
provable bounds for most of these problems. For the specific case of projective
clustering, which generalizes $k$-means, we show a convergence rate of
$\Omega\left(\sqrt{\frac{kj}{n}}\right)$ is necessary, thereby proving that the
bounds from [Fefferman, Mitter, and Narayanan, Journal of the Mathematical
Society 2016] are essentially optimal. | [
"Maria Sofia Bucarelli",
"Matilde Fjeldsø Larsen",
"Chris Schwiegelshohn",
"Mads Bech Toftrup"
] | 2023-10-13 14:15:54 | http://arxiv.org/abs/2310.09127v1 | http://arxiv.org/pdf/2310.09127v1 | 2310.09127v1 |
Physics-guided Noise Neural Proxy for Low-light Raw Image Denoising | Low-light raw image denoising plays a crucial role in mobile photography, and
learning-based methods have become the mainstream approach. Training the
learning-based methods with synthetic data emerges as an efficient and
practical alternative to paired real data. However, the quality of synthetic
data is inherently limited by the low accuracy of the noise model, which
decreases the performance of low-light raw image denoising. In this paper, we
develop a novel framework for accurate noise modeling that learns a
physics-guided noise neural proxy (PNNP) from dark frames. PNNP integrates
three efficient techniques: physics-guided noise decoupling (PND),
physics-guided proxy model (PPM), and differentiable distribution-oriented loss
(DDL). The PND decouples the dark frame into different components and handles
different levels of noise in a flexible manner, which reduces the complexity of
the noise neural proxy. The PPM incorporates physical priors to effectively
constrain the generated noise, which promotes the accuracy of the noise neural
proxy. The DDL provides explicit and reliable supervision for noise modeling,
which promotes the precision of the noise neural proxy. Extensive experiments
on public low-light raw image denoising datasets and real low-light imaging
scenarios demonstrate the superior performance of our PNNP framework. | [
"Hansen Feng",
"Lizhi Wang",
"Yiqi Huang",
"Yuzhi Wang",
"Hua Huang"
] | 2023-10-13 14:14:43 | http://arxiv.org/abs/2310.09126v1 | http://arxiv.org/pdf/2310.09126v1 | 2310.09126v1 |
Training and Predicting Visual Error for Real-Time Applications | Visual error metrics play a fundamental role in the quantification of
perceived image similarity. Most recently, use cases for them in real-time
applications have emerged, such as content-adaptive shading and shading reuse
to increase performance and improve efficiency. A wide range of different
metrics has been established, with the most sophisticated being capable of
capturing the perceptual characteristics of the human visual system. However,
their complexity, computational expense, and reliance on reference images to
compare against prevent their generalized use in real-time, restricting such
applications to using only the simplest available metrics. In this work, we
explore the abilities of convolutional neural networks to predict a variety of
visual metrics without requiring either reference or rendered images.
Specifically, we train and deploy a neural network to estimate the visual error
resulting from reusing shading or using reduced shading rates. The resulting
models account for 70%-90% of the variance while achieving up to an order of
magnitude faster computation times. Our solution combines image-space
information that is readily available in most state-of-the-art deferred shading
pipelines with reprojection from previous frames to enable an adequate estimate
of visual errors, even in previously unseen regions. We describe a suitable
convolutional network architecture and considerations for data preparation for
training. We demonstrate the capability of our network to predict complex error
metrics at interactive rates in a real-time application that implements
content-adaptive shading in a deferred pipeline. Depending on the portion of
unseen image regions, our approach can achieve up to $2\times$ performance
compared to state-of-the-art methods. | [
"João Libório Cardoso",
"Bernhard Kerbl",
"Lei Yang",
"Yury Uralsky",
"Michael Wimmer"
] | 2023-10-13 14:14:00 | http://arxiv.org/abs/2310.09125v1 | http://arxiv.org/pdf/2310.09125v1 | 2310.09125v1 |
Automatic Music Playlist Generation via Simulation-based Reinforcement Learning | Personalization of playlists is a common feature in music streaming services,
but conventional techniques, such as collaborative filtering, rely on explicit
assumptions regarding content quality to learn how to make recommendations.
Such assumptions often result in misalignment between offline model objectives
and online user satisfaction metrics. In this paper, we present a reinforcement
learning framework that solves for such limitations by directly optimizing for
user satisfaction metrics via the use of a simulated playlist-generation
environment. Using this simulator we develop and train a modified Deep
Q-Network, the action head DQN (AH-DQN), in a manner that addresses the
challenges imposed by the large state and action space of our RL formulation.
The resulting policy is capable of making recommendations from large and
dynamic sets of candidate items with the expectation of maximizing consumption
metrics. We analyze and evaluate agents offline via simulations that use
environment models trained on both public and proprietary streaming datasets.
We show how these agents lead to better user-satisfaction metrics compared to
baseline methods during online A/B tests. Finally, we demonstrate that
performance assessments produced from our simulator are strongly correlated
with observed online metric results. | [
"Federico Tomasi",
"Joseph Cauteruccio",
"Surya Kanoria",
"Kamil Ciosek",
"Matteo Rinaldi",
"Zhenwen Dai"
] | 2023-10-13 14:13:02 | http://arxiv.org/abs/2310.09123v1 | http://arxiv.org/pdf/2310.09123v1 | 2310.09123v1 |
DSG: An End-to-End Document Structure Generator | Information in industry, research, and the public sector is widely stored as
rendered documents (e.g., PDF files, scans). Hence, to enable downstream tasks,
systems are needed that map rendered documents onto a structured hierarchical
format. However, existing systems for this task are limited by heuristics and
are not end-to-end trainable. In this work, we introduce the Document Structure
Generator (DSG), a novel system for document parsing that is fully end-to-end
trainable. DSG combines a deep neural network for parsing (i) entities in
documents (e.g., figures, text blocks, headers, etc.) and (ii) relations that
capture the sequence and nested structure between entities. Unlike existing
systems that rely on heuristics, our DSG is trained end-to-end, making it
effective and flexible for real-world applications. We further contribute a
new, large-scale dataset called E-Periodica comprising real-world magazines
with complex document structures for evaluation. Our results demonstrate that
our DSG outperforms commercial OCR tools and, on top of that, achieves
state-of-the-art performance. To the best of our knowledge, our DSG system is
the first end-to-end trainable system for hierarchical document parsing. | [
"Johannes Rausch",
"Gentiana Rashiti",
"Maxim Gusev",
"Ce Zhang",
"Stefan Feuerriegel"
] | 2023-10-13 14:03:01 | http://arxiv.org/abs/2310.09118v1 | http://arxiv.org/pdf/2310.09118v1 | 2310.09118v1 |
BaitBuster-Bangla: A Comprehensive Dataset for Clickbait Detection in Bangla with Multi-Feature and Multi-Modal Analysis | This study presents a large multi-modal Bangla YouTube clickbait dataset
consisting of 253,070 data points collected through an automated process using
the YouTube API and Python web automation frameworks. The dataset contains 18
diverse features categorized into metadata, primary content, engagement
statistics, and labels for individual videos from 58 Bangla YouTube channels. A
rigorous preprocessing step has been applied to denoise, deduplicate, and
remove bias from the features, ensuring unbiased and reliable analysis. As the
largest and most robust clickbait corpus in Bangla to date, this dataset
provides significant value for natural language processing and data science
researchers seeking to advance modeling of clickbait phenomena in low-resource
languages. Its multi-modal nature allows for comprehensive analyses of
clickbait across content, user interactions, and linguistic dimensions to
develop more sophisticated detection methods with cross-linguistic
applications. | [
"Abdullah Al Imran",
"Md Sakib Hossain Shovon",
"M. F. Mridha"
] | 2023-10-13 13:25:16 | http://arxiv.org/abs/2310.11465v1 | http://arxiv.org/pdf/2310.11465v1 | 2310.11465v1 |
Insightful analysis of historical sources at scales beyond human capabilities using unsupervised Machine Learning and XAI | Historical materials are abundant. Yet, piecing together how human knowledge
has evolved and spread both diachronically and synchronically remains a
challenge that can so far only be very selectively addressed. The vast volume
of materials precludes comprehensive studies, given the restricted number of
human specialists. However, as large amounts of historical materials are now
available in digital form there is a promising opportunity for AI-assisted
historical analysis. In this work, we take a pivotal step towards analyzing
vast historical corpora by employing innovative machine learning (ML)
techniques, enabling in-depth historical insights on a grand scale. Our study
centers on the evolution of knowledge within the `Sacrobosco Collection' -- a
digitized collection of 359 early modern printed editions of textbooks on
astronomy used at European universities between 1472 and 1650 -- roughly 76,000
pages, many of which contain astronomic, computational tables. An ML based
analysis of these tables helps to unveil important facets of the
spatio-temporal evolution of knowledge and innovation in the field of
mathematical astronomy in the period, as taught at European universities. | [
"Oliver Eberle",
"Jochen Büttner",
"Hassan El-Hajj",
"Grégoire Montavon",
"Klaus-Robert Müller",
"Matteo Valleriani"
] | 2023-10-13 13:22:05 | http://arxiv.org/abs/2310.09091v1 | http://arxiv.org/pdf/2310.09091v1 | 2310.09091v1 |
Topological Data Analysis in smart manufacturing processes -- A survey on the state of the art | Topological Data Analysis (TDA) is a mathematical method using techniques
from topology for the analysis of complex, multi-dimensional data that has been
widely and successfully applied in several fields such as medicine, material
science, biology, and others. This survey summarizes the state of the art of
TDA in yet another application area: industrial manufacturing and production in
the context of Industry 4.0. We perform a rigorous and reproducible literature
search of applications of TDA on the setting of industrial production and
manufacturing. The resulting works are clustered and analyzed based on their
application area within the manufacturing process and their input data type. We
highlight the key benefits of TDA and their tools in this area and describe its
challenges, as well as future potential. Finally, we discuss which TDA methods
are underutilized in (the specific area of) industry and the identified types
of application, with the goal of prompting more research in this profitable
area of application. | [
"Martin Uray",
"Barbara Giunti",
"Michael Kerber",
"Stefan Huber"
] | 2023-10-13 13:03:25 | http://arxiv.org/abs/2310.09319v1 | http://arxiv.org/pdf/2310.09319v1 | 2310.09319v1 |
Online Relocating and Matching of Ride-Hailing Services: A Model-Based Modular Approach | This study proposes an innovative model-based modular approach (MMA) to
dynamically optimize order matching and vehicle relocation in a ride-hailing
platform. MMA utilizes a two-layer and modular modeling structure. The upper
layer determines the spatial transfer patterns of vehicle flow within the
system to maximize the total revenue of the current and future stages. With the
guidance provided by the upper layer, the lower layer performs rapid
vehicle-to-order matching and vehicle relocation. MMA is interpretable, and
equipped with a customized, polynomial-time algorithm, which, as an online
order-matching and vehicle-relocation algorithm, can scale past thousands of
vehicles. We theoretically prove that the proposed algorithm can achieve the
global optimum in stylized networks, while the numerical experiments based on
both the toy network and realistic dataset demonstrate that MMA is capable of
achieving superior systematic performance compared to batch matching and
reinforcement-learning based methods. Moreover, its modular and lightweight
modeling structure further enables it to achieve a high level of robustness
against demand variation while maintaining a relatively low computational cost. | [
"Chang Gao",
"Xi Lin",
"Fang He",
"Xindi Tang"
] | 2023-10-13 12:45:52 | http://arxiv.org/abs/2310.09071v1 | http://arxiv.org/pdf/2310.09071v1 | 2310.09071v1 |
KCTS: Knowledge-Constrained Tree Search Decoding with Token-Level Hallucination Detection | Large Language Models (LLMs) have demonstrated remarkable human-level natural
language generation capabilities. However, their potential to generate
misinformation, often called the hallucination problem, poses a significant
risk to their deployment. A common approach to address this issue is to
retrieve relevant knowledge and fine-tune the LLM with the knowledge in its
input. Unfortunately, this method incurs high training costs and may cause
catastrophic forgetting for multi-tasking models. To overcome these
limitations, we propose a knowledge-constrained decoding method called KCTS
(Knowledge-Constrained Tree Search), which guides a frozen LM to generate text
aligned with the reference knowledge at each decoding step using a knowledge
classifier score and MCTS (Monte-Carlo Tree Search). To adapt the
sequence-level knowledge classifier to token-level guidance, we also propose a
novel token-level hallucination detection method called RIPA (Reward Inflection
Point Approximation). Our empirical results on knowledge-grounded dialogue and
abstractive summarization demonstrate the strength of KCTS as a plug-and-play,
model-agnostic decoding method that can effectively reduce hallucinations in
natural language generation. | [
"Sehyun Choi",
"Tianqing Fang",
"Zhaowei Wang",
"Yangqiu Song"
] | 2023-10-13 12:12:34 | http://arxiv.org/abs/2310.09044v1 | http://arxiv.org/pdf/2310.09044v1 | 2310.09044v1 |
Optimal Scheduling of Electric Vehicle Charging with Deep Reinforcement Learning considering End Users Flexibility | The rapid growth of decentralized energy resources and especially Electric
Vehicles (EV), that are expected to increase sharply over the next decade, will
put further stress on existing power distribution networks, increasing the need
for higher system reliability and flexibility. In an attempt to avoid
unnecessary network investments and to increase the controllability over
distribution networks, network operators develop demand response (DR) programs
that incentivize end users to shift their consumption in return for financial
or other benefits. Artificial intelligence (AI) methods are in the research
forefront for residential load scheduling applications, mainly due to their
high accuracy, high computational speed and lower dependence on the physical
characteristics of the models under development. The aim of this work is to
identify households' EV cost-reducing charging policy under a Time-of-Use
tariff scheme, with the use of Deep Reinforcement Learning, and more
specifically Deep Q-Networks (DQN). A novel end users flexibility potential
reward is inferred from historical data analysis, where households with solar
power generation have been used to train and test the designed algorithm. The
suggested DQN EV charging policy can lead to more than 20% of savings in end
users electricity bills. | [
"Christoforos Menos-Aikateriniadis",
"Stavros Sykiotis",
"Pavlos S. Georgilakis"
] | 2023-10-13 12:07:36 | http://arxiv.org/abs/2310.09040v1 | http://arxiv.org/pdf/2310.09040v1 | 2310.09040v1 |
MINDE: Mutual Information Neural Diffusion Estimation | In this work we present a new method for the estimation of Mutual Information
(MI) between random variables. Our approach is based on an original
interpretation of the Girsanov theorem, which allows us to use score-based
diffusion models to estimate the Kullback Leibler divergence between two
densities as a difference between their score functions. As a by-product, our
method also enables the estimation of the entropy of random variables. Armed
with such building blocks, we present a general recipe to measure MI, which
unfolds in two directions: one uses a conditional diffusion process, whereas the
other uses joint diffusion processes that allow simultaneous modelling of two
random variables. Our results, which derive from a thorough experimental
protocol over all the variants of our approach, indicate that our method is
more accurate than the main alternatives from the literature, especially for
challenging distributions. Furthermore, our methods pass MI self-consistency
tests, including data processing and additivity under independence, which
instead are a pain-point of existing methods. | [
"Giulio Franzese",
"Mustapha Bounoua",
"Pietro Michiardi"
] | 2023-10-13 11:47:41 | http://arxiv.org/abs/2310.09031v1 | http://arxiv.org/pdf/2310.09031v1 | 2310.09031v1 |
Subspace Adaptation Prior for Few-Shot Learning | Gradient-based meta-learning techniques aim to distill useful prior knowledge
from a set of training tasks such that new tasks can be learned more
efficiently with gradient descent. While these methods have achieved successes
in various scenarios, they commonly adapt all parameters of trainable layers
when learning new tasks. This neglects potentially more efficient learning
strategies for a given task distribution and may be susceptible to overfitting,
especially in few-shot learning where tasks must be learned from a limited
number of examples. To address these issues, we propose Subspace Adaptation
Prior (SAP), a novel gradient-based meta-learning algorithm that jointly learns
good initialization parameters (prior knowledge) and layer-wise parameter
subspaces in the form of operation subsets that should be adaptable. In this
way, SAP can learn which operation subsets to adjust with gradient descent
based on the underlying task distribution, simultaneously decreasing the risk
of overfitting when learning new tasks. We demonstrate that this ability is
helpful as SAP yields superior or competitive performance in few-shot image
classification settings (gains between 0.1% and 3.9% in accuracy). Analysis of
the learned subspaces demonstrates that low-dimensional operations often yield
high activation strengths, indicating that they may be important for achieving
good few-shot learning performance. For reproducibility purposes, we publish
all our research code publicly. | [
"Mike Huisman",
"Aske Plaat",
"Jan N. van Rijn"
] | 2023-10-13 11:40:18 | http://arxiv.org/abs/2310.09028v1 | http://arxiv.org/pdf/2310.09028v1 | 2310.09028v1 |
Federated Meta-Learning for Few-Shot Fault Diagnosis with Representation Encoding | Deep learning-based fault diagnosis (FD) approaches require a large amount of
training data, which are difficult to obtain since they are located across
different entities. Federated learning (FL) enables multiple clients to
collaboratively train a shared model with data privacy guaranteed. However, the
domain discrepancy and data scarcity problems among clients deteriorate the
performance of the global FL model. To tackle these issues, we propose a novel
framework called representation encoding-based federated meta-learning (REFML)
for few-shot FD. First, a novel training strategy based on representation
encoding and meta-learning is developed. It harnesses the inherent
heterogeneity among training clients, effectively transforming it into an
advantage for out-of-distribution generalization on unseen working conditions
or equipment types. Additionally, an adaptive interpolation method that
calculates the optimal combination of local and global models as the
initialization of local training is proposed. This helps to further utilize
local information to mitigate the negative effects of domain discrepancy. As a
result, high diagnostic accuracy can be achieved on unseen working conditions
or equipment types with limited training data. Compared with the
state-of-the-art methods, such as FedProx, the proposed REFML framework
achieves an increase in accuracy by 2.17%-6.50% when tested on unseen working
conditions of the same equipment type and 13.44%-18.33% when tested on totally
unseen equipment types, respectively. | [
"Jixuan Cui",
"Jun Li",
"Zhen Mei",
"Kang Wei",
"Sha Wei",
"Ming Ding",
"Wen Chen",
"Song Guo"
] | 2023-10-13 10:48:28 | http://arxiv.org/abs/2310.09002v1 | http://arxiv.org/pdf/2310.09002v1 | 2310.09002v1 |
Measuring the Stability of Process Outcome Predictions in Online Settings | Predictive Process Monitoring aims to forecast the future progress of process
instances using historical event data. As predictive process monitoring is
increasingly applied in online settings to enable timely interventions,
evaluating the performance of the underlying models becomes crucial for
ensuring their consistency and reliability over time. This is especially
important in high-risk business scenarios where incorrect predictions may have
severe consequences. However, predictive models are currently usually evaluated
using a single, aggregated value or a time-series visualization, which makes it
challenging to assess their performance and, specifically, their stability over
time. This paper proposes an evaluation framework for assessing the stability
of models for online predictive process monitoring. The framework introduces
four performance meta-measures: the frequency of significant performance drops,
the magnitude of such drops, the recovery rate, and the volatility of
performance. To validate this framework, we applied it to two artificial and
two real-world event logs. The results demonstrate that these meta-measures
facilitate the comparison and selection of predictive models for different
risk-taking scenarios. Such insights are of particular value to enhance
decision-making in dynamic business environments. | [
"Suhwan Lee",
"Marco Comuzzi",
"Xixi Lu",
"Hajo A. Reijers"
] | 2023-10-13 10:37:46 | http://arxiv.org/abs/2310.09000v1 | http://arxiv.org/pdf/2310.09000v1 | 2310.09000v1 |
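The four stability meta-measures named in the abstract above could be computed over a time series of per-period scores roughly as follows. The concrete definitions here (a fixed drop threshold, recovery meaning a return to the pre-drop level) are illustrative assumptions, not the paper's exact formulations.

```python
import statistics

def stability_meta_measures(scores, drop_threshold=0.05):
    """Sketch of the four meta-measures on a time series of model scores:
    frequency of significant drops, magnitude of the worst drop,
    recovery rate, and volatility of period-to-period changes."""
    diffs = [b - a for a, b in zip(scores, scores[1:])]
    drops = [d for d in diffs if d <= -drop_threshold]   # significant drops
    frequency = len(drops)
    magnitude = -min(drops) if drops else 0.0            # worst drop size
    # Recovery rate: fraction of drops followed by a return to the pre-drop level.
    recovered = 0
    for i, d in enumerate(diffs):
        if d <= -drop_threshold:
            pre_drop_level = scores[i]
            if any(s >= pre_drop_level for s in scores[i + 2:]):
                recovered += 1
    recovery_rate = recovered / frequency if frequency else 1.0
    volatility = statistics.pstdev(diffs)                # spread of changes
    return frequency, magnitude, recovery_rate, volatility

acc = [0.90, 0.88, 0.80, 0.89, 0.91, 0.90]   # per-period accuracy of a model
freq, mag, rec, vol = stability_meta_measures(acc)
```

On this toy series there is one significant drop (0.88 to 0.80), from which the model later recovers, so the recovery rate is 1.0.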
Reroute Prediction Service | The cost of delays was estimated as 33 billion US dollars only in 2019 for
the US National Airspace System, a peak value following a growth trend in past
years. Aiming to address this huge inefficiency, we designed and developed a
novel Data Analytics and Machine Learning system, which aims at reducing delays
by proactively supporting re-routing decisions.
Given a time interval up to a few days in the future, the system predicts if
a reroute advisory for a certain Air Route Traffic Control Center or for a
certain advisory identifier will be issued, which may impact the pertinent
routes. To deliver such predictions, the system uses historical reroute data,
collected from the System Wide Information Management (SWIM) data services
provided by the FAA, and weather data, provided by the US National Centers for
Environmental Prediction (NCEP). The data is huge in volume, and has many items
streamed at high velocity, uncorrelated and noisy. The system continuously
processes the incoming raw data and makes it available for the next step where
an interim data store is created and adaptively maintained for efficient query
processing. The resulting data is fed into an array of ML algorithms, which
compete for higher accuracy. The best performing algorithm is used in the final
prediction, generating the final results. Mean accuracy values higher than 90%
were obtained in our experiments with this system.
Our algorithm divides the area of interest into units of aggregation and uses
temporal series of the aggregate measures of weather forecast parameters in
each geographical unit, in order to detect correlations with reroutes and where
they will most likely occur. Aiming at practical application, the system is
formed by a number of microservices, which are deployed in the cloud, making
the system distributed, scalable and highly available. | [
"Ítalo Romani de Oliveira",
"Samet Ayhan",
"Michael Biglin",
"Pablo Costas",
"Euclides C. Pinto Neto"
] | 2023-10-13 10:09:12 | http://arxiv.org/abs/2310.08988v1 | http://arxiv.org/pdf/2310.08988v1 | 2310.08988v1 |
PAGE: Equilibrate Personalization and Generalization in Federated Learning | Federated learning (FL) is becoming a major driving force behind machine
learning as a service, where customers (clients) collaboratively benefit from
shared local updates under the orchestration of the service provider (server).
Representing clients' current demands and the server's future demand, local
model personalization and global model generalization are separately
investigated, as the ill effects of data heterogeneity force the community to
focus on one over the other. However, these two seemingly competing goals are
of equal importance rather than an either-or choice, and should be achieved
simultaneously. In this paper, we propose the first algorithm to balance
personalization and generalization on top of game theory, dubbed PAGE, which
reshapes FL as a co-opetition game between clients and the server. To explore
the equilibrium, PAGE further formulates the game as Markov decision processes,
and leverages the reinforcement learning algorithm, which simplifies the
solving complexity. Extensive experiments on four widespread datasets show that
PAGE outperforms state-of-the-art FL baselines in terms of global and local
prediction accuracy simultaneously, and the accuracy can be improved by up to
35.20% and 39.91%, respectively. In addition, biased variants of PAGE imply
promising adaptiveness to demand shifts in practice. | [
"Qian Chen",
"Zilong Wang",
"Jiaqi Hu",
"Haonan Yan",
"Jianying Zhou",
"Xiaodong Lin"
] | 2023-10-13 09:11:35 | http://arxiv.org/abs/2310.08961v1 | http://arxiv.org/pdf/2310.08961v1 | 2310.08961v1 |
CAMELL: Confidence-based Acquisition Model for Efficient Self-supervised Active Learning with Label Validation | Supervised neural approaches are hindered by their dependence on large,
meticulously annotated datasets, a requirement that is particularly cumbersome
for sequential tasks. The quality of annotations tends to deteriorate with the
transition from expert-based to crowd-sourced labelling. To address these
challenges, we present \textbf{CAMELL} (Confidence-based Acquisition Model for
Efficient self-supervised active Learning with Label validation), a pool-based
active learning framework tailored for sequential multi-output problems. CAMELL
possesses three core features: (1) it requires expert annotators to label only
a fraction of a chosen sequence, (2) it facilitates self-supervision for the
remainder of the sequence, and (3) it employs a label validation mechanism to
prevent erroneous labels from contaminating the dataset and harming model
performance. We evaluate CAMELL on sequential tasks, with a special emphasis on
dialogue belief tracking, a task plagued by the constraints of limited and
noisy datasets. Our experiments demonstrate that CAMELL outperforms the
baselines in terms of efficiency. Furthermore, the data corrections suggested
by our method contribute to an overall improvement in the quality of the
resulting datasets. | [
"Carel van Niekerk",
"Christian Geishauser",
"Michael Heck",
"Shutong Feng",
"Hsien-chin Lin",
"Nurul Lubis",
"Benjamin Ruppik",
"Renato Vukovic",
"Milica Gašić"
] | 2023-10-13 08:19:31 | http://arxiv.org/abs/2310.08944v1 | http://arxiv.org/pdf/2310.08944v1 | 2310.08944v1 |
Progressively Efficient Learning | Assistant AI agents should be capable of rapidly acquiring novel skills and
adapting to new user preferences. Traditional frameworks like imitation
learning and reinforcement learning do not facilitate this capability because
they support only low-level, inefficient forms of communication. In contrast,
humans communicate with progressive efficiency by defining and sharing abstract
intentions. To reproduce a similar capability in AI agents, we develop a novel
learning framework named Communication-Efficient Interactive Learning (CEIL).
By equipping a learning agent with an abstract, dynamic language and an
intrinsic motivation to learn with minimal communication effort, CEIL leads to
the emergence of a human-like pattern where the learner and the teacher communicate
progressively efficiently by exchanging increasingly more abstract intentions.
CEIL demonstrates impressive performance and communication efficiency on a 2D
MineCraft domain featuring long-horizon decision-making tasks. Agents trained
with CEIL quickly master new tasks, outperforming non-hierarchical and
hierarchical imitation learning by up to 50% and 20% in absolute success rate,
respectively, given the same number of interactions with the teacher.
Especially, the framework performs robustly with teachers modeled after human
pragmatic communication behavior. | [
"Ruijie Zheng",
"Khanh Nguyen",
"Hal Daumé III",
"Furong Huang",
"Karthik Narasimhan"
] | 2023-10-13 07:52:04 | http://arxiv.org/abs/2310.13004v1 | http://arxiv.org/pdf/2310.13004v1 | 2310.13004v1 |
LLaMA Rider: Spurring Large Language Models to Explore the Open World | Recently, various studies have leveraged Large Language Models (LLMs) to help
decision-making and planning in environments, trying to align the LLMs'
knowledge with the conditions of the world. Nonetheless, the capacity of LLMs to
continuously acquire environmental knowledge and adapt in an open world remains
uncertain. In this paper, we propose an approach to spur LLMs to explore the
open world, gather experiences, and learn to improve their task-solving
capabilities. In this approach, a multi-round feedback-revision mechanism is
utilized to encourage LLMs to actively select appropriate revision actions
guided by feedback information from the environment. This facilitates
exploration and enhances the model's performance. Besides, we integrate
sub-task relabeling to assist LLMs in maintaining consistency in sub-task
planning and help the model learn the combinatorial nature between tasks,
enabling it to complete a wider range of tasks through training based on the
acquired exploration experiences. By evaluation in Minecraft, an open-ended
sandbox world, we demonstrate that our approach LLaMA-Rider enhances the
efficiency of the LLM in exploring the environment, and effectively improves
the LLM's ability to accomplish more tasks through fine-tuning with merely 1.3k
instances of collected data, showing minimal training costs compared to the
baseline using reinforcement learning. | [
"Yicheng Feng",
"Yuxuan Wang",
"Jiazheng Liu",
"Sipeng Zheng",
"Zongqing Lu"
] | 2023-10-13 07:47:44 | http://arxiv.org/abs/2310.08922v1 | http://arxiv.org/pdf/2310.08922v1 | 2310.08922v1 |
Embarrassingly Simple Text Watermarks | We propose Easymark, a family of embarrassingly simple yet effective
watermarks. Text watermarking is becoming increasingly important with the
advent of Large Language Models (LLM). LLMs can generate texts that cannot be
distinguished from human-written texts. This is a serious problem for the
credibility of the text. Easymark is a simple yet effective solution to this
problem. Easymark can inject a watermark without changing the meaning of the
text at all while a validator can detect if a text was generated from a system
that adopted Easymark or not with high credibility. Easymark is extremely easy
to implement so that it only requires a few lines of code. Easymark does not
require access to LLMs, so it can be implemented on the user-side when the LLM
providers do not offer watermarked LLMs. In spite of its simplicity, it
achieves higher detection accuracy and BLEU scores than the state-of-the-art
text watermarking methods. We also prove the impossibility theorem of perfect
watermarking, which is valuable in its own right. This theorem shows that no
matter how sophisticated a watermark is, a malicious user could remove it from
the text, which motivates us to use a simple watermark such as Easymark. We
carry out experiments with LLM-generated texts and confirm that Easymark can be
detected reliably without any degradation of BLEU and perplexity, and
outperform state-of-the-art watermarks in terms of both quality and
reliability. | [
"Ryoma Sato",
"Yuki Takezawa",
"Han Bao",
"Kenta Niwa",
"Makoto Yamada"
] | 2023-10-13 07:44:05 | http://arxiv.org/abs/2310.08920v1 | http://arxiv.org/pdf/2310.08920v1 | 2310.08920v1 |
Relation-aware Ensemble Learning for Knowledge Graph Embedding | Knowledge graph (KG) embedding is a fundamental task in natural language
processing, and various methods have been proposed to explore semantic patterns
in distinctive ways. In this paper, we propose to learn an ensemble by
leveraging existing methods in a relation-aware manner. However, exploring
these semantics using a relation-aware ensemble leads to a much larger search
space than general ensemble methods. To address this issue, we propose a
divide-search-combine algorithm RelEns-DSC that searches the relation-wise
ensemble weights independently. This algorithm has the same computation cost as
general ensemble methods but with much better performance. Experimental results
on benchmark datasets demonstrate the effectiveness of the proposed method in
efficiently searching relation-aware ensemble weights and achieving
state-of-the-art embedding performance. The code is public at
https://github.com/LARS-research/RelEns. | [
"Ling Yue",
"Yongqi Zhang",
"Quanming Yao",
"Yong Li",
"Xian Wu",
"Ziheng Zhang",
"Zhenxi Lin",
"Yefeng Zheng"
] | 2023-10-13 07:40:12 | http://arxiv.org/abs/2310.08917v1 | http://arxiv.org/pdf/2310.08917v1 | 2310.08917v1 |
Scalarization for Multi-Task and Multi-Domain Learning at Scale | Training a single model on multiple input domains and/or output tasks allows
for compressing information from multiple sources into a unified backbone,
hence improving model efficiency. It also enables potential positive knowledge
transfer across tasks/domains, leading to improved accuracy and data-efficient
training. However, optimizing such networks is a challenge, in particular due
to discrepancies between the different tasks or domains: Despite several
hypotheses and solutions proposed over the years, recent work has shown that
uniform scalarization training, i.e., simply minimizing the average of the task
losses, yields on-par performance with more costly SotA optimization methods.
This raises the issue of how well we understand the training dynamics of
multi-task and multi-domain networks. In this work, we first devise a
large-scale unified analysis of multi-domain and multi-task learning to better
understand the dynamics of scalarization across varied task/domain combinations
and model sizes. Following these insights, we then propose to leverage
population-based training to efficiently search for the optimal scalarization
weights when dealing with a large number of tasks or domains. | [
"Amelie Royer",
"Tijmen Blankevoort",
"Babak Ehteshami Bejnordi"
] | 2023-10-13 07:31:04 | http://arxiv.org/abs/2310.08910v1 | http://arxiv.org/pdf/2310.08910v1 | 2310.08910v1 |
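Uniform scalarization, the baseline the abstract above reports to be surprisingly competitive, is simply the plain average of the per-task losses; the weighted variant shows what a weight search (e.g. via population-based training) would tune. A minimal dependency-free sketch, not the paper's code:

```python
def uniform_scalarization(task_losses):
    """Uniform scalarization: minimize the plain average of the task losses."""
    return sum(task_losses) / len(task_losses)

def weighted_scalarization(task_losses, weights):
    """Weighted variant: the per-task weights are what an outer search
    (e.g. population-based training) would optimize."""
    assert len(weights) == len(task_losses)
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, task_losses)) / total

losses = [0.9, 0.3, 0.6]                     # three tasks, one batch
uniform = uniform_scalarization(losses)      # average of the three losses
tuned = weighted_scalarization(losses, [1.0, 4.0, 1.0])
```

In an actual training loop each entry of `losses` would be a differentiable per-task loss tensor, and the scalarized value would be the single quantity backpropagated through the shared backbone.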
Community Membership Hiding as Counterfactual Graph Search via Deep Reinforcement Learning | Community detection techniques are useful tools for social media platforms to
discover tightly connected groups of users who share common interests. However,
this functionality often comes at the expense of potentially exposing
individuals to privacy breaches by inadvertently revealing their tastes or
preferences. Therefore, some users may wish to safeguard their anonymity and
opt out of community detection for various reasons, such as affiliation with
political or religious organizations.
In this study, we address the challenge of community membership hiding, which
involves strategically altering the structural properties of a network graph to
prevent one or more nodes from being identified by a given community detection
algorithm. We tackle this problem by formulating it as a constrained
counterfactual graph objective, and we solve it via deep reinforcement
learning. We validate the effectiveness of our method through two distinct
tasks: node and community deception. Extensive experiments show that our
approach overall outperforms existing baselines in both tasks. | [
"Andrea Bernini",
"Fabrizio Silvestri",
"Gabriele Tolomei"
] | 2023-10-13 07:30:50 | http://arxiv.org/abs/2310.08909v1 | http://arxiv.org/pdf/2310.08909v1 | 2310.08909v1 |
Self supervised convolutional kernel based handcrafted feature harmonization: Enhanced left ventricle hypertension disease phenotyping on echocardiography | Radiomics, a medical imaging technique, extracts quantitative handcrafted
features from images to predict diseases. Harmonization in those features
ensures consistent feature extraction across various imaging devices and
protocols. Methods for harmonization include standardized imaging protocols,
statistical adjustments, and evaluating feature robustness. Myocardial diseases
such as Left Ventricular Hypertrophy (LVH) and Hypertensive Heart Disease (HHD)
are diagnosed via echocardiography, but variable imaging settings pose
challenges. Harmonization techniques are crucial for applying handcrafted
features in disease diagnosis in such scenarios. Self-supervised learning (SSL)
enhances data understanding within limited datasets and adapts to diverse data
settings. ConvNeXt-V2 integrates convolutional layers into SSL, displaying
superior performance in various tasks. This study focuses on convolutional
filters within SSL, using them as preprocessing to convert images into feature
maps for handcrafted feature harmonization. Our proposed method excelled in
harmonization evaluation and exhibited superior LVH classification performance
compared to existing methods. | [
"Jina Lee",
"Youngtaek Hong",
"Dawun Jeong",
"Yeonggul Jang",
"Sihyeon Jeong",
"Taekgeun Jung",
"Yeonyee E. Yoon",
"Inki Moon",
"Seung-Ah Lee",
"Hyuk-Jae Chang"
] | 2023-10-13 06:58:52 | http://arxiv.org/abs/2310.08897v1 | http://arxiv.org/pdf/2310.08897v1 | 2310.08897v1 |
EHI: End-to-end Learning of Hierarchical Index for Efficient Dense Retrieval | Dense embedding-based retrieval is now the industry standard for semantic
search and ranking problems, like obtaining relevant web documents for a given
query. Such techniques use a two-stage process: (a) contrastive learning to
train a dual encoder to embed both the query and documents and (b) approximate
nearest neighbor search (ANNS) for finding similar documents for a given query.
These two stages are disjoint; the learned embeddings might be ill-suited for
the ANNS method and vice-versa, leading to suboptimal performance. In this
work, we propose End-to-end Hierarchical Indexing -- EHI -- that jointly learns
both the embeddings and the ANNS structure to optimize retrieval performance.
EHI uses a standard dual encoder model for embedding queries and documents
while learning an inverted file index (IVF) style tree structure for efficient
ANNS. To ensure stable and efficient learning of discrete tree-based ANNS
structure, EHI introduces the notion of dense path embedding that captures the
position of a query/document in the tree. We demonstrate the effectiveness of
EHI on several benchmarks, including de-facto industry standard MS MARCO (Dev
set and TREC DL19) datasets. For example, with the same compute budget, EHI
outperforms the state-of-the-art (SOTA) by 0.6% (MRR@10) on the MS MARCO dev set and
by 4.2% (nDCG@10) on TREC DL19 benchmarks. | [
"Ramnath Kumar",
"Anshul Mittal",
"Nilesh Gupta",
"Aditya Kusupati",
"Inderjit Dhillon",
"Prateek Jain"
] | 2023-10-13 06:53:02 | http://arxiv.org/abs/2310.08891v1 | http://arxiv.org/pdf/2310.08891v1 | 2310.08891v1 |
METRA: Scalable Unsupervised RL with Metric-Aware Abstraction | Unsupervised pre-training strategies have proven to be highly effective in
natural language processing and computer vision. Likewise, unsupervised
reinforcement learning (RL) holds the promise of discovering a variety of
potentially useful behaviors that can accelerate the learning of a wide array
of downstream tasks. Previous unsupervised RL approaches have mainly focused on
pure exploration and mutual information skill learning. However, despite the
previous attempts, making unsupervised RL truly scalable remains a major
open challenge: pure exploration approaches might struggle in complex
environments with large state spaces, where covering every possible transition
is infeasible, and mutual information skill learning approaches might
completely fail to explore the environment due to the lack of incentives. To
make unsupervised RL scalable to complex, high-dimensional environments, we
propose a novel unsupervised RL objective, which we call Metric-Aware
Abstraction (METRA). Our main idea is, instead of directly covering the entire
state space, to only cover a compact latent space $Z$ that is metrically
connected to the state space $S$ by temporal distances. By learning to move in
every direction in the latent space, METRA obtains a tractable set of diverse
behaviors that approximately cover the state space, being scalable to
high-dimensional environments. Through our experiments in five locomotion and
manipulation environments, we demonstrate that METRA can discover a variety of
useful behaviors even in complex, pixel-based environments, being the first
unsupervised RL method that discovers diverse locomotion behaviors in
pixel-based Quadruped and Humanoid. Our code and videos are available at
https://seohong.me/projects/metra/ | [
"Seohong Park",
"Oleh Rybkin",
"Sergey Levine"
] | 2023-10-13 06:43:11 | http://arxiv.org/abs/2310.08887v1 | http://arxiv.org/pdf/2310.08887v1 | 2310.08887v1 |
Gesture Recognition for FMCW Radar on the Edge | This paper introduces a lightweight gesture recognition system based on 60
GHz frequency modulated continuous wave (FMCW) radar. We show that gestures can
be characterized efficiently by a set of five features, and propose a slim
radar processing algorithm to extract these features. In contrast to previous
approaches, we avoid heavy 2D processing, i.e. range-Doppler imaging, and
perform instead an early target detection - this allows us to port the system
to fully embedded platforms with tight constraints on memory, compute and power
consumption. A recurrent neural network (RNN) based architecture exploits these
features to jointly detect and classify five different gestures. The proposed
system recognizes gestures with an F1 score of 98.4% on our hold-out test
dataset, it runs on an Arm Cortex-M4 microcontroller requiring less than 280 kB
of flash memory, 120 kB of RAM, and consuming 75 mW of power. | [
"Maximilian Strobel",
"Stephan Schoenfeldt",
"Jonas Daugalas"
] | 2023-10-13 06:03:07 | http://arxiv.org/abs/2310.08876v1 | http://arxiv.org/pdf/2310.08876v1 | 2310.08876v1 |
A Survey of Methods for Handling Disk Data Imbalance | Class imbalance exists in many classification problems, and since the data is
designed for accuracy, imbalance in data classes can lead to classification
challenges with a few classes having higher misclassification costs. The
Backblaze dataset, a widely used dataset related to hard discs, has a small
amount of failure data and a large amount of health data, which exhibits a
serious class imbalance. This paper provides a comprehensive overview of
research in the field of imbalanced data classification. The discussion is
organized into three main aspects: data-level methods, algorithmic-level
methods, and hybrid methods. For each type of method, we summarize and analyze
the existing problems, algorithmic ideas, strengths, and weaknesses.
Additionally, the challenges of imbalanced data classification are discussed,
along with strategies to address them. It is convenient for researchers to
choose the appropriate method according to their needs. | [
"Shuangshuang Yuan",
"Peng Wu",
"Yuehui Chen",
"Qiang Li"
] | 2023-10-13 05:35:13 | http://arxiv.org/abs/2310.08867v1 | http://arxiv.org/pdf/2310.08867v1 | 2310.08867v1 |
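Among the data-level methods such a survey covers, the simplest is random oversampling of the minority class. The sketch below is a generic illustration of that idea, not code from the paper, using a toy disk-health setup like the Backblaze scenario described above:

```python
import random

def random_oversample(samples, labels, seed=0):
    """Data-level rebalancing: duplicate minority-class samples at random
    until every class reaches the majority-class size."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Toy imbalance: 5 healthy disks vs 1 failed disk.
X = [[0.1], [0.2], [0.3], [0.4], [0.5], [9.9]]
y = ["healthy"] * 5 + ["failed"]
Xb, yb = random_oversample(X, y)
```

Duplicating minority samples risks overfitting to them, which is why the survey also discusses algorithmic-level and hybrid methods.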
Adaptivity and Modularity for Efficient Generalization Over Task Complexity | Can transformers generalize efficiently on problems that require dealing with
examples with different levels of difficulty? We introduce a new task tailored
to assess generalization over different complexities and present results that
indicate that standard transformers face challenges in solving these tasks.
These tasks are variations of pointer value retrieval previously introduced by
Zhang et al. (2021). We investigate how the use of a mechanism for adaptive and
modular computation in transformers facilitates the learning of tasks that
demand generalization over the number of sequential computation steps (i.e.,
the depth of the computation graph). Based on our observations, we propose a
transformer-based architecture called Hyper-UT, which combines dynamic function
generation from hyper networks with adaptive depth from Universal Transformers.
This model demonstrates higher accuracy and a fairer allocation of
computational resources when generalizing to higher numbers of computation
steps. We conclude that mechanisms for adaptive depth and modularity complement
each other in improving efficient generalization concerning example complexity.
Additionally, to emphasize the broad applicability of our findings, we
illustrate that in a standard image recognition task, Hyper-UT's performance
matches that of a ViT model but with considerably reduced computational demands
(achieving over 70% average savings by effectively using fewer layers). | [
"Samira Abnar",
"Omid Saremi",
"Laurent Dinh",
"Shantel Wilson",
"Miguel Angel Bautista",
"Chen Huang",
"Vimal Thilak",
"Etai Littwin",
"Jiatao Gu",
"Josh Susskind",
"Samy Bengio"
] | 2023-10-13 05:29:09 | http://arxiv.org/abs/2310.08866v1 | http://arxiv.org/pdf/2310.08866v1 | 2310.08866v1 |
In-Context Learning for Few-Shot Molecular Property Prediction | In-context learning has become an important approach for few-shot learning in
Large Language Models because of its ability to rapidly adapt to new tasks
without fine-tuning model parameters. However, it is restricted to applications
in natural language and inapplicable to other domains. In this paper, we adapt
the concepts underpinning in-context learning to develop a new algorithm for
few-shot molecular property prediction. Our approach learns to predict
molecular properties from a context of (molecule, property measurement) pairs
and rapidly adapts to new properties without fine-tuning. On the FS-Mol and
BACE molecular property prediction benchmarks, we find this method surpasses
the performance of recent meta-learning algorithms at small support sizes and
is competitive with the best methods at large support sizes. | [
"Christopher Fifty",
"Jure Leskovec",
"Sebastian Thrun"
] | 2023-10-13 05:12:48 | http://arxiv.org/abs/2310.08863v1 | http://arxiv.org/pdf/2310.08863v1 | 2310.08863v1 |
Adam-family Methods with Decoupled Weight Decay in Deep Learning | In this paper, we investigate the convergence properties of a wide class of
Adam-family methods for minimizing quadratically regularized nonsmooth
nonconvex optimization problems, especially in the context of training
nonsmooth neural networks with weight decay. Motivated by the AdamW method, we
propose a novel framework for Adam-family methods with decoupled weight decay.
Within our framework, the estimators for the first-order and second-order
moments of stochastic subgradients are updated independently of the weight
decay term. Under mild assumptions and with non-diminishing stepsizes for
updating the primary optimization variables, we establish the convergence
properties of our proposed framework. In addition, we show that our proposed
framework encompasses a wide variety of well-known Adam-family methods, hence
offering convergence guarantees for these methods in the training of nonsmooth
neural networks. More importantly, we show that our proposed framework
asymptotically approximates the SGD method, thereby providing an explanation
for the empirical observation that decoupled weight decay enhances
generalization performance for Adam-family methods. As a practical application
of our proposed framework, we propose a novel Adam-family method named Adam
with Decoupled Weight Decay (AdamD), and establish its convergence properties
under mild conditions. Numerical experiments demonstrate that AdamD outperforms
Adam and is comparable to AdamW, in the aspects of both generalization
performance and efficiency. | [
"Kuangyu Ding",
"Nachuan Xiao",
"Kim-Chuan Toh"
] | 2023-10-13 04:59:44 | http://arxiv.org/abs/2310.08858v1 | http://arxiv.org/pdf/2310.08858v1 | 2310.08858v1 |
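The decoupled weight decay the abstract above builds on can be sketched for a single scalar parameter in the AdamW style: the moment estimates `m` and `v` are updated from the gradient alone, while the decay term acts on the weight directly. This is the generic AdamW rule, not the paper's AdamD variant:

```python
import math

def adamw_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, wd=0.01):
    """One AdamW-style update on a scalar parameter: the moment estimates
    see only the gradient g; weight decay is applied to w separately."""
    m = beta1 * m + (1 - beta1) * g          # first moment, no decay term
    v = beta2 * v + (1 - beta2) * g * g      # second moment, no decay term
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps) - lr * wd * w
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adamw_step(w, g=0.5, m=m, v=v, t=1)
```

In contrast, coupled (L2-style) decay would add `wd * w` to `g` before the moment updates, entangling the decay with the adaptive stepsize, which is exactly the interaction decoupling avoids.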
Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation | Continual learning entails learning a sequence of tasks and balancing their
knowledge appropriately. With limited access to old training samples, much of
the current work in deep neural networks has focused on overcoming catastrophic
forgetting of old tasks in gradient-based optimization. However, the
normalization layers provide an exception, as they are updated interdependently
by the gradient and statistics of currently observed training samples, which
require specialized strategies to mitigate recency bias. In this work, we focus
on the most popular Batch Normalization (BN) and provide an in-depth
theoretical analysis of its sub-optimality in continual learning. Our analysis
demonstrates the dilemma between balance and adaptation of BN statistics for
incremental tasks, which potentially affects training stability and
generalization. Targeting these particular challenges, we propose Adaptive
Balance of BN (AdaB$^2$N), which incorporates appropriately a Bayesian-based
strategy to adapt task-wise contributions and a modified momentum to balance BN
statistics, corresponding to the training and testing stages. By implementing
BN in a continual learning fashion, our approach achieves significant
performance gains across a wide range of benchmarks, particularly for the
challenging yet realistic online scenarios (e.g., up to 7.68%, 6.86% and 4.26%
on Split CIFAR-10, Split CIFAR-100 and Split Mini-ImageNet, respectively). Our
code is available at https://github.com/lvyilin/AdaB2N. | [
"Yilin Lyu",
"Liyuan Wang",
"Xingxing Zhang",
"Zicheng Sun",
"Hang Su",
"Jun Zhu",
"Liping Jing"
] | 2023-10-13 04:50:40 | http://arxiv.org/abs/2310.08855v1 | http://arxiv.org/pdf/2310.08855v1 | 2310.08855v1 |
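The recency bias the abstract above analyzes stems from how BN maintains its running statistics: an exponential moving average dominated by recent batches. A minimal sketch of the effect when tasks arrive sequentially (PyTorch-style `momentum` convention assumed; this illustrates the problem, not the AdaB²N fix):

```python
def update_running_mean(running_mean, batch_mean, momentum=0.1):
    """BN running-statistics update (PyTorch convention): an exponential
    moving average that weights recent batches more heavily, which is
    the source of recency bias under sequentially arriving tasks."""
    return (1 - momentum) * running_mean + momentum * batch_mean

mean = 0.0
for _ in range(50):                 # task A: batch means around 0.0
    mean = update_running_mean(mean, 0.0)
for _ in range(50):                 # task B: batch means around 10.0
    mean = update_running_mean(mean, 10.0)
# After task B, the running mean sits near 10.0, not near the
# balanced value of 5.0 that would represent both tasks equally.
```

AdaB²N's modified momentum and Bayesian task weighting target precisely this imbalance between adapting to the current task and staying representative of earlier ones.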
Rank-DETR for High Quality Object Detection | Modern detection transformers (DETRs) use a set of object queries to predict
a list of bounding boxes, sort them by their classification confidence scores,
and select the top-ranked predictions as the final detection results for the
given input image. A highly performant object detector requires accurate
ranking for the bounding box predictions. For DETR-based detectors, the
top-ranked bounding boxes suffer from less accurate localization quality due to
the misalignment between classification scores and localization accuracy, thus
impeding the construction of high-quality detectors. In this work, we introduce
a simple and highly performant DETR-based object detector by proposing a series
of rank-oriented designs, collectively called Rank-DETR. Our key contributions
include: (i) a rank-oriented architecture design that can prompt positive
predictions and suppress the negative ones to ensure lower false positive
rates, as well as (ii) a rank-oriented loss function and matching cost design
that prioritizes predictions of more accurate localization accuracy during
ranking to boost the AP under high IoU thresholds. We apply our method to
improve the recent SOTA methods (e.g., H-DETR and DINO-DETR) and report strong
COCO object detection results when using different backbones such as
ResNet-50, Swin-T, and Swin-L, demonstrating the effectiveness of our
approach. Code is available at \url{https://github.com/LeapLabTHU/Rank-DETR}. | [
"Yifan Pu",
"Weicong Liang",
"Yiduo Hao",
"Yuhui Yuan",
"Yukang Yang",
"Chao Zhang",
"Han Hu",
"Gao Huang"
] | 2023-10-13 04:48:32 | http://arxiv.org/abs/2310.08854v2 | http://arxiv.org/pdf/2310.08854v2 | 2310.08854v2 |
Semi-Supervised End-To-End Contrastive Learning For Time Series Classification | Time series classification is a critical task in various domains, such as
finance, healthcare, and sensor data analysis. Unsupervised contrastive
learning has garnered significant interest in learning effective
representations from time series data with limited labels. The prevalent
approach in existing contrastive learning methods consists of two separate
stages: pre-training the encoder on unlabeled datasets and fine-tuning the
well-trained model on a small-scale labeled dataset. However, such two-stage
approaches suffer from several shortcomings, such as the inability of
unsupervised pre-training contrastive loss to directly affect downstream
fine-tuning classifiers, and the lack of exploiting the classification loss
which is guided by valuable ground truth. In this paper, we propose an
end-to-end model called SLOTS (Semi-supervised Learning fOr Time
clasSification). SLOTS receives semi-labeled datasets, comprising a large
number of unlabeled samples and a small proportion of labeled samples, and maps
them to an embedding space through an encoder. We calculate not only the
unsupervised contrastive loss but also measure the supervised contrastive loss
on the samples with ground truth. The learned embeddings are fed into a
classifier, and the classification loss is calculated using the available true
labels. The unsupervised, supervised contrastive losses and classification loss
are jointly used to optimize the encoder and classifier. We evaluate SLOTS by
comparing it with ten state-of-the-art methods across five datasets. The
results demonstrate that SLOTS is a simple yet effective framework. When
compared to the two-stage framework, our end-to-end SLOTS utilizes the same
input data, consumes a similar computational cost, but delivers significantly
improved performance. We release code and datasets at
https://anonymous.4open.science/r/SLOTS-242E. | [
"Huili Cai",
"Xiang Zhang",
"Xiaofeng Liu"
] | 2023-10-13 04:22:21 | http://arxiv.org/abs/2310.08848v1 | http://arxiv.org/pdf/2310.08848v1 | 2310.08848v1 |
On the Over-Memorization During Natural, Robust and Catastrophic Overfitting | Overfitting negatively impacts the generalization ability of deep neural
networks (DNNs) in both natural and adversarial training. Existing methods
struggle to consistently address different types of overfitting, typically
designing strategies that focus separately on either natural or adversarial
patterns. In this work, we adopt a unified perspective by solely focusing on
natural patterns to explore different types of overfitting. Specifically, we
examine the memorization effect in DNNs and reveal a shared behaviour termed
over-memorization, which impairs their generalization capacity. This behaviour
manifests as DNNs suddenly becoming high-confidence in predicting certain
training patterns and retaining a persistent memory for them. Furthermore, when
DNNs over-memorize an adversarial pattern, they tend to simultaneously exhibit
high-confidence prediction for the corresponding natural pattern. These
findings motivate us to holistically mitigate different types of overfitting by
hindering the DNNs from over-memorizing natural patterns. To this end, we
propose a general framework, Distraction Over-Memorization (DOM), which
explicitly prevents over-memorization by either removing or augmenting the
high-confidence natural patterns. Extensive experiments demonstrate the
effectiveness of our proposed method in mitigating overfitting across various
training paradigms. | [
"Runqi Lin",
"Chaojian Yu",
"Bo Han",
"Tongliang Liu"
] | 2023-10-13 04:14:51 | http://arxiv.org/abs/2310.08847v1 | http://arxiv.org/pdf/2310.08847v1 | 2310.08847v1 |
A Framework for Few-Shot Policy Transfer through Observation Mapping and Behavior Cloning | Despite recent progress in Reinforcement Learning for robotics applications,
many tasks remain prohibitively difficult to solve because of the expensive
interaction cost. Transfer learning helps reduce the training time in the
target domain by transferring knowledge learned in a source domain. Sim2Real
transfer helps transfer knowledge from a simulated robotic domain to a physical
target domain. Knowledge transfer reduces the time required to train a task in
the physical world, where the cost of interactions is high. However, most
existing approaches assume exact correspondence in the task structure and the
physical properties of the two domains. This work proposes a framework for
Few-Shot Policy Transfer between two domains through Observation Mapping and
Behavior Cloning. We use Generative Adversarial Networks (GANs) along with a
cycle-consistency loss to map the observations between the source and target
domains and later use this learned mapping to clone the successful source task
behavior policy to the target domain. We observe successful behavior policy
transfer with limited target task interactions and in cases where the source
and target tasks are semantically dissimilar. | [
"Yash Shukla",
"Bharat Kesari",
"Shivam Goel",
"Robert Wright",
"Jivko Sinapov"
] | 2023-10-13 03:15:42 | http://arxiv.org/abs/2310.08836v1 | http://arxiv.org/pdf/2310.08836v1 | 2310.08836v1 |
Optimal Sample Complexity for Average Reward Markov Decision Processes | We settle the sample complexity of policy learning for the maximization of
the long run average reward associated with a uniformly ergodic Markov decision
process (MDP), assuming a generative model. In this context, the existing
literature provides a sample complexity upper bound of $\widetilde
O(|S||A|t_{\text{mix}}^2 \epsilon^{-2})$ and a lower bound of
$\Omega(|S||A|t_{\text{mix}} \epsilon^{-2})$. In these expressions, $|S|$ and
$|A|$ denote the cardinalities of the state and action spaces respectively,
$t_{\text{mix}}$ serves as a uniform upper limit for the total variation mixing
times, and $\epsilon$ signifies the error tolerance. Therefore, a notable gap
of $t_{\text{mix}}$ still remains to be bridged. Our primary contribution is to
establish an estimator for the optimal policy of average reward MDPs with a
sample complexity of $\widetilde O(|S||A|t_{\text{mix}}\epsilon^{-2})$,
effectively reaching the lower bound in the literature. This is achieved by
combining algorithmic ideas in Jin and Sidford (2021) with those of Li et al.
(2020). | [
"Shengbo Wang",
"Jose Blanchet",
"Peter Glynn"
] | 2023-10-13 03:08:59 | http://arxiv.org/abs/2310.08833v1 | http://arxiv.org/pdf/2310.08833v1 | 2310.08833v1 |
Distance-rank Aware Sequential Reward Learning for Inverse Reinforcement Learning with Sub-optimal Demonstrations | Inverse reinforcement learning (IRL) aims to explicitly infer an underlying
reward function based on collected expert demonstrations. Considering that
obtaining expert demonstrations can be costly, the focus of current IRL
techniques is on learning a better-than-demonstrator policy using a reward
function derived from sub-optimal demonstrations. However, existing IRL
algorithms primarily tackle the challenge of trajectory ranking ambiguity when
learning the reward function. They overlook the crucial role of considering the
degree of difference between trajectories in terms of their returns, which is
essential for further removing reward ambiguity. Additionally, it is important
to note that the reward of a single transition is heavily influenced by the
context information within the trajectory. To address these issues, we
introduce the Distance-rank Aware Sequential Reward Learning (DRASRL)
framework. Unlike existing approaches, DRASRL takes into account both the
ranking of trajectories and the degrees of dissimilarity between them to
collaboratively eliminate reward ambiguity when learning a sequence of
contextually informed reward signals. Specifically, we leverage the distance
between policies, from which the trajectories are generated, as a measure to
quantify the degree of differences between traces. This distance-aware
information is then used to infer embeddings in the representation space for
reward learning, employing the contrastive learning technique. Meanwhile, we
integrate the pairwise ranking loss function to incorporate ranking information
into the latent features. Moreover, we resort to the Transformer architecture
to capture the contextual dependencies within the trajectories in the latent
space, leading to more accurate reward estimation. Through extensive
experimentation, our DRASRL framework demonstrates significant performance
improvements over previous SOTA methods. | [
"Lu Li",
"Yuxin Pan",
"Ruobing Chen",
"Jie Liu",
"Zilin Wang",
"Yu Liu",
"Zhiheng Li"
] | 2023-10-13 02:38:35 | http://arxiv.org/abs/2310.08823v1 | http://arxiv.org/pdf/2310.08823v1 | 2310.08823v1 |
Exploring the relationship between response time sequence in scale answering process and severity of insomnia: a machine learning approach | Objectives: The study aims to investigate the relationship between insomnia
and response time. Additionally, it aims to develop a machine learning model to
predict the presence of insomnia in participants using response time data.
Methods: A mobile application was designed to administer scale tests and
collect response time data from 2729 participants. The relationship between
symptom severity and response time was explored, and a machine learning model
was developed to predict the presence of insomnia. Results: The results revealed
a statistically significant difference (p<.001) in the total response time
between participants with and without insomnia symptoms. A correlation was
observed between the severity of specific insomnia aspects and response times
at the individual question level. The machine learning model demonstrated a
high predictive accuracy of 0.743 in predicting insomnia symptoms based on
response time data. Conclusions: These findings highlight the potential utility
of response time data to evaluate cognitive and psychological measures,
demonstrating the effectiveness of using response time as a diagnostic tool in
the assessment of insomnia. | [
"Zhao Su",
"Rongxun Liu",
"Keyin Zhou",
"Xinru Wei",
"Ning Wang",
"Zexin Lin",
"Yuanchen Xie",
"Jie Wang",
"Fei Wang",
"Shenzhong Zhang",
"Xizhe Zhang"
] | 2023-10-13 02:06:52 | http://arxiv.org/abs/2310.08817v1 | http://arxiv.org/pdf/2310.08817v1 | 2310.08817v1 |
A Nonlinear Method for time series forecasting using VMD-GARCH-LSTM model | Time series forecasting represents a significant and challenging task across
various fields. Recently, methods based on mode decomposition have dominated
the forecasting of complex time series because of the advantages of capturing
local characteristics and extracting intrinsic modes from data. Unfortunately,
most models fail to capture the implied volatilities that contain significant
information. To enhance the forecasting of current, rapidly evolving, and
volatile time series, we propose a novel decomposition-ensemble paradigm, the
VMD-LSTM-GARCH model. The Variational Mode Decomposition algorithm is employed
to decompose the time series into K sub-modes. Subsequently, the GARCH model
extracts the volatility information from these sub-modes, which serve as the
input for the LSTM. The numerical and volatility information of each sub-mode
is utilized to train a Long Short-Term Memory network. This network predicts
the sub-mode, and then we aggregate the predictions from all sub-modes to
produce the output. By integrating econometric and artificial intelligence
methods, and taking into account both the numerical and volatility information
of the time series, our proposed model demonstrates superior performance in
time series forecasting, as evidenced by the significant decrease in MSE, RMSE,
and MAPE in our comparative experimental results. | [
"Zhengtao Gui",
"Haoyuan Li",
"Sijie Xu",
"Yu Chen"
] | 2023-10-13 01:50:43 | http://arxiv.org/abs/2310.08812v1 | http://arxiv.org/pdf/2310.08812v1 | 2310.08812v1 |
DDMT: Denoising Diffusion Mask Transformer Models for Multivariate Time Series Anomaly Detection | Anomaly detection in multivariate time series has emerged as a crucial
challenge in time series research, with significant research implications in
various fields such as fraud detection, fault diagnosis, and system state
estimation. Reconstruction-based models have shown promising potential in
recent years for detecting anomalies in time series data. However, due to the
rapid increase in data scale and dimensionality, the issues of noise and Weak
Identity Mapping (WIM) during time series reconstruction have become
increasingly pronounced. To address this, we introduce a novel Adaptive Dynamic
Neighbor Mask (ADNM) mechanism and integrate it with the Transformer and
Denoising Diffusion Model, creating a new framework for multivariate time
series anomaly detection, named Denoising Diffusion Mask Transformer (DDMT).
The ADNM module is introduced to mitigate information leakage between input and
output features during data reconstruction, thereby alleviating the problem of
WIM during reconstruction. The Denoising Diffusion Transformer (DDT) employs
the Transformer as an internal neural network structure for the Denoising Diffusion
Model. It learns the stepwise generation process of time series data to model
the probability distribution of the data, capturing normal data patterns and
progressively restoring time series data by removing noise, resulting in a
clear recovery of anomalies. To the best of our knowledge, this is the first
model that combines a Denoising Diffusion Model and the Transformer for
multivariate time series anomaly detection. Experimental evaluations were
conducted on five publicly available multivariate time series anomaly detection
datasets. The results demonstrate that the model effectively identifies
anomalies in time series data, achieving state-of-the-art performance in
anomaly detection. | [
"Chaocheng Yang",
"Tingyin Wang",
"Xuanhui Yan"
] | 2023-10-13 01:18:41 | http://arxiv.org/abs/2310.08800v1 | http://arxiv.org/pdf/2310.08800v1 | 2310.08800v1 |
Mitigating Bias for Question Answering Models by Tracking Bias Influence | Models of various NLP tasks have been shown to exhibit stereotypes, and the
bias in the question answering (QA) models is especially harmful as the output
answers might be directly consumed by the end users. There have been datasets
to evaluate bias in QA models, while bias mitigation technique for the QA
models is still under-explored. In this work, we propose BMBI, an approach to
mitigate the bias of multiple-choice QA models. Based on the intuition that a
model would lean to be more biased if it learns from a biased example, we
measure the bias level of a query instance by observing its influence on
another instance. If the influenced instance is more biased, we infer that the
query instance is biased. We then use the bias level detected as an
optimization objective to form a multi-task learning setting in addition to the
original QA task. We further introduce a new bias evaluation metric to quantify
bias in a comprehensive and sensitive way. We show that our method could be
applied to multiple QA formulations across multiple bias categories. It can
significantly reduce the bias level in all 9 bias categories in the BBQ dataset
while maintaining comparable QA accuracy. | [
"Mingyu Derek Ma",
"Jiun-Yu Kao",
"Arpit Gupta",
"Yu-Hsiang Lin",
"Wenbo Zhao",
"Tagyoung Chung",
"Wei Wang",
"Kai-Wei Chang",
"Nanyun Peng"
] | 2023-10-13 00:49:09 | http://arxiv.org/abs/2310.08795v1 | http://arxiv.org/pdf/2310.08795v1 | 2310.08795v1 |
Analysis of Weather and Time Features in Machine Learning-aided ERCOT Load Forecasting | Accurate load forecasting is critical for efficient and reliable operations
of the electric power system. A large part of electricity consumption is
affected by weather conditions, making weather information an important
determinant of electricity usage. Personal appliances and industry equipment
also contribute significantly to electricity demand with temporal patterns,
making time a useful factor to consider in load forecasting. This work develops
several machine learning (ML) models that take various time and weather
information as part of the input features to predict the short-term system-wide
total load. Ablation studies were also performed to investigate and compare the
impacts of different weather factors on the prediction accuracy. Actual load
and historical weather data for the same region were processed and then used to
train the ML models. It is interesting to observe that using all available
features, each of which may be correlated to the load, is unlikely to achieve
the best forecasting performance; features with redundancy may even decrease
the inference capabilities of ML models. This indicates the importance of
feature selection for ML models. Overall, case studies demonstrated the
effectiveness of ML models trained with different weather and time input
features for ERCOT load forecasting. | [
"Jonathan Yang",
"Mingjian Tuo",
"Jin Lu",
"Xingpeng Li"
] | 2023-10-13 00:46:12 | http://arxiv.org/abs/2310.08793v1 | http://arxiv.org/pdf/2310.08793v1 | 2310.08793v1 |
Incentive Mechanism Design for Distributed Ensemble Learning | Distributed ensemble learning (DEL) involves training multiple models at
distributed learners, and then combining their predictions to improve
performance. Existing related studies focus on DEL algorithm design and
optimization but ignore the important issue of incentives, without which
self-interested learners may be unwilling to participate in DEL. We aim to fill
this gap by presenting a first study on the incentive mechanism design for DEL.
Our proposed mechanism specifies both the amount of training data and reward
for learners with heterogeneous computation and communication costs. One design
challenge is to have an accurate understanding regarding how learners'
diversity (in terms of training data) affects the ensemble accuracy. To this
end, we decompose the ensemble accuracy into a diversity-precision tradeoff to
guide the mechanism design. Another challenge is that the mechanism design
involves solving a mixed-integer program with a large search space. To this
end, we propose an alternating algorithm that iteratively updates each
learner's training data size and reward. We prove that under mild conditions,
the algorithm converges. Numerical results using MNIST dataset show an
interesting result: our proposed mechanism may prefer a lower level of learner
diversity to achieve a higher ensemble accuracy. | [
"Chao Huang",
"Pengchao Han",
"Jianwei Huang"
] | 2023-10-13 00:34:12 | http://arxiv.org/abs/2310.08792v1 | http://arxiv.org/pdf/2310.08792v1 | 2310.08792v1 |
Price of Stability in Quality-Aware Federated Learning | Federated Learning (FL) is a distributed machine learning scheme that enables
clients to train a shared global model without exchanging local data. The
presence of label noise can severely degrade the FL performance, and some
existing studies have focused on algorithm design for label denoising. However,
they ignored the important issue that clients may not apply costly label
denoising strategies due to them being self-interested and having heterogeneous
valuations on the FL performance. To fill this gap, we model the clients'
interactions as a novel label denoising game and characterize its equilibrium.
We also analyze the price of stability, which quantifies the difference in the
system performance (e.g., global model accuracy, social welfare) between the
equilibrium outcome and the socially optimal solution. We prove that the
equilibrium outcome always leads to a lower global model accuracy than the
socially optimal solution does. We further design an efficient algorithm to
compute the socially optimal solution. Numerical experiments on MNIST dataset
show that the price of stability increases as the clients' data become noisier,
calling for an effective incentive mechanism. | [
"Yizhou Yan",
"Xinyu Tang",
"Chao Huang",
"Ming Tang"
] | 2023-10-13 00:25:21 | http://arxiv.org/abs/2310.08790v1 | http://arxiv.org/pdf/2310.08790v1 | 2310.08790v1 |
Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning | Massive data is often considered essential for deep learning applications,
but it also incurs significant computational and infrastructural costs.
Therefore, dataset pruning (DP) has emerged as an effective way to improve data
efficiency by identifying and removing redundant training samples without
sacrificing performance. In this work, we aim to address the problem of DP for
transfer learning, i.e., how to prune a source dataset for improved pretraining
efficiency and lossless finetuning accuracy on downstream target tasks. To our
best knowledge, the problem of DP for transfer learning remains open, as
previous studies have primarily addressed DP and transfer learning as separate
problems. By contrast, we establish a unified viewpoint to integrate DP with
transfer learning and find that existing DP methods are not suitable for the
transfer learning paradigm. We then propose two new DP methods, label mapping
and feature mapping, for supervised and self-supervised pretraining settings
respectively, by revisiting the DP problem through the lens of source-target
domain mapping. Furthermore, we demonstrate the effectiveness of our approach
on numerous transfer learning tasks. We show that source data classes can be
pruned by up to 40% ~ 80% without sacrificing downstream performance, resulting
in a significant 2 ~ 5 times speed-up during the pretraining stage. Besides,
our proposal exhibits broad applicability and can improve other computationally
intensive transfer learning techniques, such as adversarial pretraining. Codes
are available at https://github.com/OPTML-Group/DP4TL. | [
"Yihua Zhang",
"Yimeng Zhang",
"Aochuan Chen",
"Jinghan Jia",
"Jiancheng Liu",
"Gaowen Liu",
"Mingyi Hong",
"Shiyu Chang",
"Sijia Liu"
] | 2023-10-13 00:07:49 | http://arxiv.org/abs/2310.08782v2 | http://arxiv.org/pdf/2310.08782v2 | 2310.08782v2 |
When Machine Learning Models Leak: An Exploration of Synthetic Training Data | We investigate an attack on a machine learning model that predicts whether a
person or household will relocate in the next two years, i.e., a
propensity-to-move classifier. The attack assumes that the attacker can query
the model to obtain predictions and that the marginal distribution of the data
on which the model was trained is publicly available. The attack also assumes
that the attacker has obtained the values of non-sensitive attributes for a
certain number of target individuals. The objective of the attack is to infer
the values of sensitive attributes for these target individuals. We explore how
replacing the original data with synthetic data when training the model impacts
how successfully the attacker can infer sensitive attributes.\footnote{Original
paper published at PSD 2022. The paper was subsequently updated.} | [
"Manel Slokom",
"Peter-Paul de Wolf",
"Martha Larson"
] | 2023-10-12 23:47:22 | http://arxiv.org/abs/2310.08775v1 | http://arxiv.org/pdf/2310.08775v1 | 2310.08775v1 |
PhyloGFN: Phylogenetic inference with generative flow networks | Phylogenetics is a branch of computational biology that studies the
evolutionary relationships among biological entities. Its long history and
numerous applications notwithstanding, inference of phylogenetic trees from
sequence data remains challenging: the high complexity of tree space poses a
significant obstacle for the current combinatorial and probabilistic
techniques. In this paper, we adopt the framework of generative flow networks
(GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and
Bayesian phylogenetic inference. Because GFlowNets are well-suited for sampling
complex combinatorial structures, they are a natural choice for exploring and
sampling from the multimodal posterior distribution over tree topologies and
evolutionary distances. We demonstrate that our amortized posterior sampler,
PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real
benchmark datasets. PhyloGFN is competitive with prior works in marginal
likelihood estimation and achieves a closer fit to the target distribution than
state-of-the-art variational inference methods. | [
"Mingyang Zhou",
"Zichao Yan",
"Elliot Layne",
"Nikolay Malkin",
"Dinghuai Zhang",
"Moksh Jain",
"Mathieu Blanchette",
"Yoshua Bengio"
] | 2023-10-12 23:46:08 | http://arxiv.org/abs/2310.08774v1 | http://arxiv.org/pdf/2310.08774v1 | 2310.08774v1 |
Modeling Fission Gas Release at the Mesoscale using Multiscale DenseNet Regression with Attention Mechanism and Inception Blocks | Mesoscale simulations of fission gas release (FGR) in nuclear fuel provide a
powerful tool for understanding how microstructure evolution impacts FGR, but
they are computationally intensive. In this study, we present an alternate,
data-driven approach, using deep learning to predict instantaneous FGR flux
from 2D nuclear fuel microstructure images. Four convolutional neural network
(CNN) architectures with multiscale regression are trained and evaluated on
simulated FGR data generated using a hybrid phase field/cluster dynamics model.
All four networks show high predictive power, with $R^{2}$ values above 98%.
The best-performing network combines a Convolutional Block Attention Module
(CBAM) and InceptionNet mechanisms to provide superior accuracy (mean absolute
percentage error of 4.4%), training stability, and robustness on very low
instantaneous FGR flux values. | [
"Peter Toma",
"Md Ali Muntaha",
"Joel B. Harley",
"Michael R. Tonks"
] | 2023-10-12 23:26:44 | http://arxiv.org/abs/2310.08767v1 | http://arxiv.org/pdf/2310.08767v1 | 2310.08767v1 |
Calibrating Likelihoods towards Consistency in Summarization Models | Despite the recent advances in abstractive text summarization, current
summarization models still suffer from generating factually inconsistent
summaries, reducing their utility for real-world application. We argue that the
main reason for such behavior is that the summarization models trained with
maximum likelihood objective assign high probability to plausible sequences
given the context, but they often do not accurately rank sequences by their
consistency. In this work, we solve this problem by calibrating the likelihood
of model generated sequences to better align with a consistency metric measured
by natural language inference (NLI) models. The human evaluation study and
automatic metrics show that the calibrated models generate more consistent and
higher-quality summaries. We also show that the models trained using our method
return probabilities that are better aligned with the NLI scores, which
significantly increase reliability of summarization models. | [
"Polina Zablotskaia",
"Misha Khalman",
"Rishabh Joshi",
"Livio Baldini Soares",
"Shoshana Jakobovits",
"Joshua Maynez",
"Shashi Narayan"
] | 2023-10-12 23:17:56 | http://arxiv.org/abs/2310.08764v1 | http://arxiv.org/pdf/2310.08764v1 | 2310.08764v1 |
Stabilizing Subject Transfer in EEG Classification with Divergence Estimation | Classification models for electroencephalogram (EEG) data show a large
decrease in performance when evaluated on unseen test subjects. We reduce this
performance decrease using new regularization techniques during model training.
We propose several graphical models to describe an EEG classification task.
From each model, we identify statistical relationships that should hold true in
an idealized training scenario (with infinite data and a globally-optimal
model) but that may not hold in practice. We design regularization penalties to
enforce these relationships in two stages. First, we identify suitable proxy
quantities (divergences such as Mutual Information and Wasserstein-1) that can
be used to measure statistical independence and dependence relationships.
Second, we provide algorithms to efficiently estimate these quantities during
training using secondary neural network models. We conduct extensive
computational experiments using a large benchmark EEG dataset, comparing our
proposed techniques with a baseline method that uses an adversarial classifier.
We find our proposed methods significantly increase balanced accuracy on test
subjects and decrease overfitting. The proposed methods exhibit a larger
benefit over a greater range of hyperparameters than the baseline method, with
only a small computational cost at training time. These benefits are largest
when used for a fixed training period, though there is still a significant
benefit for a subset of hyperparameters when our techniques are used in
conjunction with early stopping regularization. | [
"Niklas Smedemark-Margulies",
"Ye Wang",
"Toshiaki Koike-Akino",
"Jing Liu",
"Kieran Parsons",
"Yunus Bicer",
"Deniz Erdogmus"
] | 2023-10-12 23:06:52 | http://arxiv.org/abs/2310.08762v1 | http://arxiv.org/pdf/2310.08762v1 | 2310.08762v1 |
Question Answering for Electronic Health Records: A Scoping Review of datasets and models | Question Answering (QA) systems on patient-related data can assist both
clinicians and patients. They can, for example, assist clinicians in
decision-making and enable patients to have a better understanding of their
medical history. Significant amounts of patient data are stored in Electronic
Health Records (EHRs), making EHR QA an important research area. In EHR QA, the
answer is obtained from the medical record of the patient. Because of the
differences in data format and modality, this differs greatly from other
medical QA tasks that employ medical websites or scientific papers to retrieve
answers, making it critical to research EHR question answering. This study
aimed to provide a methodological review of existing works on QA over EHRs. We
searched for articles from January 1st, 2005 to September 30th, 2023 in four
digital sources including Google Scholar, ACL Anthology, ACM Digital Library,
and PubMed to collect relevant publications on EHR QA. 4111 papers were
identified for our study, and after screening based on our inclusion criteria,
we obtained a total of 47 papers for further study. Out of the 47 papers, 25
papers were about EHR QA datasets, and 37 papers were about EHR QA models. It
was observed that QA on EHRs is relatively new and unexplored. Most of the
works are fairly recent. Also, it was observed that emrQA is by far the most
popular EHR QA dataset, both in terms of citations and usage in other papers.
Furthermore, we identified the different models used in EHR QA along with the
evaluation metrics used for these models. | [
"Jayetri Bardhan",
"Kirk Roberts",
"Daisy Zhe Wang"
] | 2023-10-12 22:56:53 | http://arxiv.org/abs/2310.08759v1 | http://arxiv.org/pdf/2310.08759v1 | 2310.08759v1 |
Detection and prediction of clopidogrel treatment failures using longitudinal structured electronic health records | We propose machine learning algorithms to automatically detect and predict
clopidogrel treatment failure using longitudinal structured electronic health
records (EHR). By drawing analogies between natural language and structured
EHR, we introduce various machine learning algorithms used in natural language
processing (NLP) applications to build models for treatment failure detection
and prediction. In this regard, we generated a cohort of patients with
clopidogrel prescriptions from UK Biobank and annotated whether the patients had
treatment failure events within one year of the first clopidogrel prescription;
out of 502,527 patients, 1,824 patients were identified as treatment failure
cases, and 6,859 patients were considered as control cases. From the dataset,
we gathered diagnoses, prescriptions, and procedure records together per
patient and organized them into visits with the same date to build models. The
models were built for two different tasks, i.e., detection and prediction, and
the experimental results showed that time series models outperform bag-of-words
approaches in both tasks. In particular, a Transformer-based model, namely
BERT, could reach 0.928 AUC in detection tasks and 0.729 AUC in prediction
tasks. BERT also showed an advantage over other time series models when
training data were limited, because it leverages a pre-training procedure using
large unlabeled data. | [
"Samuel Kim",
"In Gu Sean Lee",
"Mijeong Irene Ban",
"Jane Chiang"
] | 2023-10-12 22:52:29 | http://arxiv.org/abs/2310.08757v1 | http://arxiv.org/pdf/2310.08757v1 | 2310.08757v1 |
Tokenizer Choice For LLM Training: Negligible or Crucial? | The recent success of LLMs has been predominantly driven by curating the
training dataset composition, scaling of model architectures and dataset sizes
and advancements in pretraining objectives, leaving tokenizer influence as a
blind spot. Shedding light on this underexplored area, we conduct a
comprehensive study on the influence of tokenizer choice on LLM downstream
performance by training 24 mono- and multilingual LLMs at a 2.6B parameter
scale, ablating different tokenizer algorithms and parameterizations. Our
studies highlight that the tokenizer choice can significantly impact the
model's downstream performance, training and inference costs. In particular, we
find that the common tokenizer evaluation metrics fertility and parity are not
always predictive of model downstream performance, rendering these metrics a
questionable proxy for the model's downstream performance. Furthermore, we show
that multilingual tokenizers trained on the five most frequent European
languages require a threefold increase in vocabulary size compared to
English. While English-only tokenizers have been applied to the training of
multi-lingual LLMs, we find that this approach results in a severe downstream
performance degradation and additional training costs of up to 68%, due to an
inefficient tokenization vocabulary. | [
"Mehdi Ali",
"Michael Fromm",
"Klaudia Thellmann",
"Richard Rutmann",
"Max Lübbering",
"Johannes Leveling",
"Katrin Klug",
"Jan Ebert",
"Niclas Doll",
"Jasper Schulze Buschhoff",
"Charvi Jain",
"Alexander Arno Weber",
"Lena Jurkschat",
"Hammam Abdelwahab",
"Chelsea John",
"Pedro Ortiz Suarez",
"Malte Ostendorff",
"Samuel Weinbach",
"Rafet Sifa",
"Stefan Kesselheim",
"Nicolas Flores-Herr"
] | 2023-10-12 22:44:19 | http://arxiv.org/abs/2310.08754v3 | http://arxiv.org/pdf/2310.08754v3 | 2310.08754v3 |
Constrained Bayesian Optimization with Adaptive Active Learning of Unknown Constraints | Optimizing objectives under constraints, where both the objectives and
constraints are black box functions, is a common scenario in real-world
applications such as scientific experimental design, design of medical
therapies, and industrial process optimization. One popular approach to
handling these complex scenarios is Bayesian Optimization (BO). In terms of
theoretical behavior, BO is relatively well understood in the unconstrained
setting, where its principles have been well explored and validated. However,
when it comes to constrained Bayesian optimization (CBO), the existing
framework often relies on heuristics or approximations without the same level
of theoretical guarantees.
In this paper, we delve into the theoretical and practical aspects of
constrained Bayesian optimization, where the objective and constraints can be
independently evaluated and are subject to noise. By recognizing that both the
objective and constraints can help identify high-confidence regions of interest
(ROI), we propose an efficient CBO framework that intersects the ROIs
identified from each aspect to determine the general ROI. The ROI, coupled with
a novel acquisition function that adaptively balances the optimization of the
objective and the identification of feasible regions, enables us to derive
rigorous theoretical justifications for its performance. We showcase the
efficiency and robustness of our proposed CBO framework through empirical
evidence and discuss the fundamental challenge of deriving practical regret
bounds for CBO algorithms. | [
"Fengxue Zhang",
"Zejie Zhu",
"Yuxin Chen"
] | 2023-10-12 22:32:00 | http://arxiv.org/abs/2310.08751v1 | http://arxiv.org/pdf/2310.08751v1 | 2310.08751v1 |
Search-Adaptor: Text Embedding Customization for Information Retrieval | Text embeddings extracted by pre-trained Large Language Models (LLMs) have
significant potential to improve information retrieval and search. Beyond the
zero-shot setup in which they are being conventionally used, being able to take
advantage of the information from the relevant query-corpus paired data has the
power to further boost the LLM capabilities. In this paper, we propose a novel
method, Search-Adaptor, for customizing LLMs for information retrieval in an
efficient and robust way. Search-Adaptor modifies the original text embedding
generated by pre-trained LLMs, and can be integrated with any LLM, including
those only available via APIs. On multiple real-world English and multilingual
retrieval datasets, we show consistent and significant performance benefits for
Search-Adaptor -- e.g., more than a 5.2% improvement over the Google Embedding
APIs in nDCG@10 averaged over 13 BEIR datasets. | [
"Jinsung Yoon",
"Sercan O Arik",
"Yanfei Chen",
"Tomas Pfister"
] | 2023-10-12 22:30:15 | http://arxiv.org/abs/2310.08750v1 | http://arxiv.org/pdf/2310.08750v1 | 2310.08750v1 |
Evolutionary Dynamic Optimization and Machine Learning | Evolutionary Computation (EC) has emerged as a powerful field of Artificial
Intelligence, inspired by nature's mechanisms of gradual development. However,
EC approaches often face challenges such as stagnation, diversity loss,
computational complexity, population initialization, and premature convergence.
To overcome these limitations, researchers have integrated learning algorithms
with evolutionary techniques. This integration harnesses the valuable data
generated by EC algorithms during iterative searches, providing insights into
the search space and population dynamics. Similarly, the relationship between
evolutionary algorithms and Machine Learning (ML) is reciprocal, as EC methods
offer exceptional opportunities for optimizing complex ML tasks characterized
by noisy, inaccurate, and dynamic objective functions. These hybrid techniques,
known as Evolutionary Machine Learning (EML), have been applied at various
stages of the ML process. EC techniques play a vital role in tasks such as data
balancing, feature selection, and model training optimization. Moreover, ML
tasks often require dynamic optimization, for which Evolutionary Dynamic
Optimization (EDO) is valuable. This paper presents the first comprehensive
exploration of reciprocal integration between EDO and ML. The study aims to
stimulate interest in the evolutionary learning community and inspire
innovative contributions in this domain. | [
"Abdennour Boulesnane"
] | 2023-10-12 22:28:53 | http://arxiv.org/abs/2310.08748v1 | http://arxiv.org/pdf/2310.08748v1 | 2310.08748v1 |
Robustness to Multi-Modal Environment Uncertainty in MARL using Curriculum Learning | Multi-agent reinforcement learning (MARL) plays a pivotal role in tackling
real-world challenges. However, the seamless transition of trained policies
from simulations to real-world requires it to be robust to various
environmental uncertainties. Existing works focus on finding Nash Equilibrium
or the optimal policy under uncertainty in one environment variable (i.e.
action, state or reward). This is because a multi-agent system itself is highly
complex and nonstationary. However, in real-world situations, uncertainty can
occur in multiple environment variables simultaneously. This work is the first
to formulate the generalised problem of robustness to multi-modal environment
uncertainty in MARL. To this end, we propose a general robust training approach
for multi-modal uncertainty based on curriculum learning techniques. We handle
two distinct environmental uncertainties simultaneously and present extensive
results across both cooperative and competitive MARL environments,
demonstrating that our approach achieves state-of-the-art levels of robustness. | [
"Aakriti Agrawal",
"Rohith Aralikatti",
"Yanchao Sun",
"Furong Huang"
] | 2023-10-12 22:19:36 | http://arxiv.org/abs/2310.08746v1 | http://arxiv.org/pdf/2310.08746v1 | 2310.08746v1 |
Circuit Component Reuse Across Tasks in Transformer Language Models | Recent work in mechanistic interpretability has shown that behaviors in
language models can be successfully reverse-engineered through circuit
analysis. A common criticism, however, is that each circuit is task-specific,
and thus such analysis cannot contribute to understanding the models at a
higher level. In this work, we present evidence that insights (both low-level
findings about specific heads and higher-level findings about general
algorithms) can indeed generalize across tasks. Specifically, we study the
circuit discovered in Wang et al. (2022) for the Indirect Object Identification
(IOI) task and 1.) show that it reproduces on a larger GPT2 model, and 2.) that
it is mostly reused to solve a seemingly different task: Colored Objects
(Ippolito & Callison-Burch, 2023). We provide evidence that the process
underlying both tasks is functionally very similar, and contains about a 78%
overlap in in-circuit attention heads. We further present a proof-of-concept
intervention experiment, in which we adjust four attention heads in middle
layers in order to 'repair' the Colored Objects circuit and make it behave like
the IOI circuit. In doing so, we boost accuracy from 49.6% to 93.7% on the
Colored Objects task and explain most sources of error. The intervention
affects downstream attention heads in specific ways predicted by their
interactions in the IOI circuit, indicating that this subcircuit behavior is
invariant to the different task inputs. Overall, our results provide evidence
that it may yet be possible to explain large language models' behavior in terms
of a relatively small number of interpretable task-general algorithmic building
blocks and computational components. | [
"Jack Merullo",
"Carsten Eickhoff",
"Ellie Pavlick"
] | 2023-10-12 22:12:28 | http://arxiv.org/abs/2310.08744v1 | http://arxiv.org/pdf/2310.08744v1 | 2310.08744v1 |
Development and Validation of a Deep Learning-Based Microsatellite Instability Predictor from Prostate Cancer Whole-Slide Images | Microsatellite instability-high (MSI-H) is a tumor agnostic biomarker for
immune checkpoint inhibitor therapy. However, MSI status is not routinely
tested in prostate cancer, in part due to low prevalence and assay cost. As
such, prediction of MSI status from hematoxylin and eosin (H&E) stained
whole-slide images (WSIs) could identify prostate cancer patients most likely
to benefit from confirmatory testing and becoming eligible for immunotherapy.
Prostate biopsies and surgical resections from de-identified records of
consecutive prostate cancer patients referred to our institution were analyzed.
Their MSI status was determined by next generation sequencing. Patients before
a cutoff date were split into an algorithm development set (n=4015, MSI-H 1.8%)
and a paired validation set (n=173, MSI-H 19.7%) that consisted of two serial
sections from each sample, one stained and scanned internally and the other at
an external site. Patients after the cutoff date formed the temporal validation
set (n=1350, MSI-H 2.3%). Attention-based multiple instance learning models
were trained to predict MSI-H from H&E WSIs. The MSI-H predictor achieved area
under the receiver operating characteristic curve values of 0.78 (95% CI
[0.69-0.86]), 0.72 (95% CI [0.63-0.81]), and 0.72 (95% CI [0.62-0.82]) on the
internally prepared, externally prepared, and temporal validation sets,
respectively. While MSI-H status is significantly correlated with Gleason
score, the model remained predictive within each Gleason score subgroup. In
summary, we developed and validated an AI-based MSI-H diagnostic model on a
large real-world cohort of routine H&E slides, which effectively generalized to
externally stained and scanned samples and a temporally independent validation
cohort. This algorithm has the potential to direct prostate cancer patients
toward immunotherapy and to identify MSI-H cases secondary to Lynch syndrome. | [
"Qiyuan Hu",
"Abbas A. Rizvi",
"Geoffery Schau",
"Kshitij Ingale",
"Yoni Muller",
"Rachel Baits",
"Sebastian Pretzer",
"Aïcha BenTaieb",
"Abigail Gordhamer",
"Roberto Nussenzveig",
"Adam Cole",
"Matthew O. Leavitt",
"Rohan P. Joshi",
"Nike Beaubier",
"Martin C. Stumpe",
"Kunal Nagpal"
] | 2023-10-12 22:09:53 | http://arxiv.org/abs/2310.08743v1 | http://arxiv.org/pdf/2310.08743v1 | 2310.08743v1 |
Splicing Up Your Predictions with RNA Contrastive Learning | In the face of rapidly accumulating genomic data, our understanding of the
RNA regulatory code remains incomplete. Recent self-supervised methods in other
domains have demonstrated the ability to learn rules underlying the
data-generating process such as sentence structure in language. Inspired by
this, we extend contrastive learning techniques to genomic data by utilizing
functional similarities between sequences generated through alternative
splicing and gene duplication. Our novel dataset and contrastive objective
enable the learning of generalized RNA isoform representations. We validate
their utility on downstream tasks such as RNA half-life and mean ribosome load
prediction. Our pre-training strategy yields competitive results using linear
probing on both tasks, along with up to a two-fold increase in Pearson
correlation in low-data conditions. Importantly, our exploration of the learned
latent space reveals that our contrastive objective yields semantically
meaningful representations, underscoring its potential as a valuable
initialization technique for RNA property prediction. | [
"Philip Fradkin",
"Ruian Shi",
"Bo Wang",
"Brendan Frey",
"Leo J. Lee"
] | 2023-10-12 21:51:25 | http://arxiv.org/abs/2310.08738v2 | http://arxiv.org/pdf/2310.08738v2 | 2310.08738v2 |
Provably Robust Cost-Sensitive Learning via Randomized Smoothing | We focus on learning adversarially robust classifiers under a cost-sensitive
scenario, where the potential harm of different classwise adversarial
transformations is encoded in a binary cost matrix. Existing methods are either
empirical that cannot certify robustness or suffer from inherent scalability
issues. In this work, we study whether randomized smoothing, a more scalable
robustness certification framework, can be leveraged to certify cost-sensitive
robustness. Built upon a notion of cost-sensitive certified radius, we show how
to adapt the standard randomized smoothing certification pipeline to produce
tight robustness guarantees for any cost matrix. In addition, with fine-grained
certified radius optimization schemes specifically designed for different data
subgroups, we propose an algorithm to train smoothed classifiers that are
optimized for cost-sensitive robustness. Extensive experiments on image
benchmarks and a real-world medical dataset demonstrate the superiority of our
method in achieving significantly improved performance of certified
cost-sensitive robustness while having a negligible impact on overall accuracy. | [
"Yuan Xin",
"Michael Backes",
"Xiao Zhang"
] | 2023-10-12 21:39:16 | http://arxiv.org/abs/2310.08732v1 | http://arxiv.org/pdf/2310.08732v1 | 2310.08732v1 |
A Simple Way to Incorporate Novelty Detection in World Models | Reinforcement learning (RL) using world models has found significant recent
successes. However, when a sudden change to world mechanics or properties
occurs then agent performance and reliability can dramatically decline. We
refer to these sudden changes in visual properties or state transitions as
"novelties". Implementing novelty detection within generated world model
frameworks is a crucial task for protecting the agent when deployed. In this
paper, we propose straightforward bounding approaches to incorporate novelty
detection into world model RL agents by utilizing the misalignment between the
world model's hallucinated states and the true observed states as an anomaly
score. We first provide an ontology of novelty detection relevant to sequential
decision making, then we provide effective approaches to detecting novelties in
a distribution of transitions learned by an agent in a world model. Finally, we
show the advantage of our work in a novel environment compared to traditional
machine learning novelty detection methods as well as currently accepted RL
focused novelty detection algorithms. | [
"Geigh Zollicoffer",
"Kenneth Eaton",
"Jonathan Balloch",
"Julia Kim",
"Mark O. Riedl",
"Robert Wright"
] | 2023-10-12 21:38:07 | http://arxiv.org/abs/2310.08731v1 | http://arxiv.org/pdf/2310.08731v1 | 2310.08731v1 |
Heterophily-Based Graph Neural Network for Imbalanced Classification | Graph neural networks (GNNs) have shown promise in addressing graph-related
problems, including node classification. However, conventional GNNs assume an
even distribution of data across classes, which is often not the case in
real-world scenarios, where certain classes are severely underrepresented. This
leads to suboptimal performance of standard GNNs on imbalanced graphs. In this
paper, we introduce a unique approach that tackles imbalanced classification on
graphs by considering graph heterophily. We investigate the intricate
relationship between class imbalance and graph heterophily, revealing that
minority classes not only exhibit a scarcity of samples but also manifest lower
levels of homophily, facilitating the propagation of erroneous information
among neighboring nodes. Drawing upon this insight, we propose an efficient
method, called Fast Im-GBK, which integrates an imbalance classification
strategy with heterophily-aware GNNs to effectively address the class imbalance
problem while significantly reducing training time. Our experiments on
real-world graphs demonstrate our model's superiority in classification
performance and efficiency for node classification tasks compared to existing
baselines. | [
"Zirui Liang",
"Yuntao Li",
"Tianjin Huang",
"Akrati Saxena",
"Yulong Pei",
"Mykola Pechenizkiy"
] | 2023-10-12 21:19:47 | http://arxiv.org/abs/2310.08725v1 | http://arxiv.org/pdf/2310.08725v1 | 2310.08725v1 |
Designing Observables for Measurements with Deep Learning | Many analyses in particle and nuclear physics use simulations to infer
fundamental, effective, or phenomenological parameters of the underlying
physics models. When the inference is performed with unfolded cross sections,
the observables are designed using physics intuition and heuristics. We propose
to design optimal observables with machine learning. Unfolded, differential
cross sections in a neural network output contain the most information about
parameters of interest and can be well-measured by construction. We demonstrate
this idea using two physics models for inclusive measurements in deep inelastic
scattering. | [
"Owen Long",
"Benjamin Nachman"
] | 2023-10-12 20:54:34 | http://arxiv.org/abs/2310.08717v1 | http://arxiv.org/pdf/2310.08717v1 | 2310.08717v1 |
Transformer Choice Net: A Transformer Neural Network for Choice Prediction | Discrete-choice models, such as Multinomial Logit, Probit, or Mixed-Logit,
are widely used in Marketing, Economics, and Operations Research: given a set
of alternatives, the customer is modeled as choosing one of the alternatives to
maximize a (latent) utility function. However, extending such models to
situations where the customer chooses more than one item (such as in e-commerce
shopping) has proven problematic. While one can construct reasonable models of
the customer's behavior, estimating such models becomes very challenging
because of the combinatorial explosion in the number of possible subsets of
items. In this paper we develop a transformer neural network architecture, the
Transformer Choice Net, that is suitable for predicting multiple choices.
Transformer networks turn out to be especially suitable for this task as they
take into account not only the features of the customer and the items but also
the context, which in this case could be the assortment as well as the
customer's past choices. On a range of benchmark datasets, our architecture
shows uniformly superior out-of-sample prediction performance compared to the
leading models in the literature, without requiring any custom modeling or
tuning for each instance. | [
"Hanzhao Wang",
"Xiaocheng Li",
"Kalyan Talluri"
] | 2023-10-12 20:54:10 | http://arxiv.org/abs/2310.08716v1 | http://arxiv.org/pdf/2310.08716v1 | 2310.08716v1 |
Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research | Simulation is an essential tool to develop and benchmark autonomous vehicle
planning software in a safe and cost-effective manner. However, realistic
simulation requires accurate modeling of nuanced and complex multi-agent
interactive behaviors. To address these challenges, we introduce Waymax, a new
data-driven simulator for autonomous driving in multi-agent scenes, designed
for large-scale simulation and testing. Waymax uses publicly-released,
real-world driving data (e.g., the Waymo Open Motion Dataset) to initialize or
play back a diverse set of multi-agent simulated scenarios. It runs entirely on
hardware accelerators such as TPUs/GPUs and supports in-graph simulation for
training, making it suitable for modern large-scale, distributed machine
learning workflows. To support online training and evaluation, Waymax includes
several learned and hard-coded behavior models that allow for realistic
interaction within simulation. To supplement Waymax, we benchmark a suite of
popular imitation and reinforcement learning algorithms with ablation studies
on different design decisions, where we highlight the effectiveness of routes
as guidance for planning agents and the ability of RL to overfit against
simulated agents. | [
"Cole Gulino",
"Justin Fu",
"Wenjie Luo",
"George Tucker",
"Eli Bronstein",
"Yiren Lu",
"Jean Harb",
"Xinlei Pan",
"Yan Wang",
"Xiangyu Chen",
"John D. Co-Reyes",
"Rishabh Agarwal",
"Rebecca Roelofs",
"Yao Lu",
"Nico Montali",
"Paul Mougin",
"Zoey Yang",
"Brandyn White",
"Aleksandra Faust",
"Rowan McAllister",
"Dragomir Anguelov",
"Benjamin Sapp"
] | 2023-10-12 20:49:15 | http://arxiv.org/abs/2310.08710v1 | http://arxiv.org/pdf/2310.08710v1 | 2310.08710v1 |
Polynomial Time Cryptanalytic Extraction of Neural Network Models | Billions of dollars and countless GPU hours are currently spent on training
Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to
determine the difficulty of extracting all the parameters of such neural
networks when given access to their black-box implementations. Many versions of
this problem have been studied over the last 30 years, and the best current
attack on ReLU-based deep neural networks was presented at Crypto 2020 by
Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext
attack on a cryptosystem, which has a secret key embedded in its black-box
implementation and requires a polynomial number of queries but an exponential
amount of time (as a function of the number of neurons). In this paper, we
improve this attack by developing several new techniques that enable us to
extract with arbitrarily high precision all the real-valued parameters of a
ReLU-based DNN using a polynomial number of queries and a polynomial amount of
time. We demonstrate its practical efficiency by applying it to a full-sized
neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8
hidden layers with 256 neurons each, and over a million parameters. An
attack following the approach of Carlini et al. requires an exhaustive search
over 2^256 possibilities. Our attack replaces this with our new
techniques, which require only 30 minutes on a 256-core computer. | [
"Adi Shamir",
"Isaac Canales-Martinez",
"Anna Hambitzer",
"Jorge Chavez-Saab",
"Francisco Rodrigez-Henriquez",
"Nitin Satpute"
] | 2023-10-12 20:44:41 | http://arxiv.org/abs/2310.08708v1 | http://arxiv.org/pdf/2310.08708v1 | 2310.08708v1 |
Eliciting Model Steering Interactions from Users via Data and Visual Design Probes | Domain experts increasingly use automated data science tools to incorporate
machine learning (ML) models in their work but struggle to "debug" these models
when they are incorrect. For these experts, semantic interactions can provide
an accessible avenue to guide and refine ML models without having to
programmatically dive into its technical details. In this research, we conduct
an elicitation study using data and visual design probes to examine if and how
experts with a spectrum of ML expertise use semantic interactions to update a
simple classification model. We use our design probes to facilitate an
interactive dialogue with 20 participants and codify their interactions as a
set of target-interaction pairs. Interestingly, our findings revealed that many
targets of semantic interactions do not directly map to ML model parameters,
but instead aim to augment the data a model uses for training. We also identify
reasons that participants would hesitate to interact with ML models, including
burdens of cognitive load and concerns about injecting bias. Unexpectedly,
participants also saw the value of using semantic interactions to work
collaboratively with members of their team. Participants with less ML expertise
found this to be a useful mechanism for communicating their concerns to ML
experts. This was an especially important observation, as our study also shows
the different needs that correspond to diverse ML expertise. Collectively, we
demonstrate that design probes are effective tools for proactively gathering
the affordances that should be offered in an interactive machine learning
system. | [
"Anamaria Crisan",
"Maddie Shang",
"Eric Brochu"
] | 2023-10-12 20:34:02 | http://arxiv.org/abs/2310.09314v1 | http://arxiv.org/pdf/2310.09314v1 | 2310.09314v1 |
ELDEN: Exploration via Local Dependencies | Tasks with large state space and sparse rewards present a longstanding
challenge to reinforcement learning. In these tasks, an agent needs to explore
the state space efficiently until it finds a reward. To deal with this problem,
the community has proposed to augment the reward function with intrinsic
reward, a bonus signal that encourages the agent to visit interesting states.
In this work, we propose a new way of defining interesting states for
environments with factored state spaces and complex chained dependencies, where
an agent's actions may change the value of one entity that, in turn, may
affect the value of another entity. Our insight is that, in these environments,
interesting states for exploration are states where the agent is uncertain
whether (as opposed to how) entities such as the agent or objects have some
influence on each other. We present ELDEN, Exploration via Local DepENdencies,
a novel intrinsic reward that encourages the discovery of new interactions
between entities. ELDEN utilizes a novel scheme -- the partial derivative of
the learned dynamics to model the local dependencies between entities
accurately and computationally efficiently. The uncertainty of the predicted
dependencies is then used as an intrinsic reward to encourage exploration
toward new interactions. We evaluate the performance of ELDEN on four different
domains with complex dependencies, ranging from 2D grid worlds to 3D robotic
tasks. In all domains, ELDEN correctly identifies local dependencies and learns
successful policies, significantly outperforming previous state-of-the-art
exploration methods. | [
"Jiaheng Hu",
"Zizhao Wang",
"Peter Stone",
"Roberto Martin-Martin"
] | 2023-10-12 20:20:21 | http://arxiv.org/abs/2310.08702v1 | http://arxiv.org/pdf/2310.08702v1 | 2310.08702v1 |
Kernel-Elastic Autoencoder for Molecular Design | We introduce the Kernel-Elastic Autoencoder (KAE), a self-supervised
generative model based on the transformer architecture with enhanced
performance for molecular design. KAE is formulated based on two novel loss
functions: modified maximum mean discrepancy and weighted reconstruction. KAE
addresses the long-standing challenge of achieving valid generation and
accurate reconstruction at the same time. KAE achieves remarkable diversity in
molecule generation while maintaining near-perfect reconstructions on the
independent testing dataset, surpassing previous molecule-generating models.
KAE enables conditional generation and allows for decoding based on beam search
resulting in state-of-the-art performance in constrained optimizations.
Furthermore, KAE can generate molecules conditional to favorable binding
affinities in docking applications as confirmed by AutoDock Vina and Glide
scores, outperforming all existing candidates from the training dataset. Beyond
molecular design, we anticipate that KAE could be applied to generative
problem-solving in a wide range of applications. | [
"Haote Li",
"Yu Shee",
"Brandon Allen",
"Federica Maschietto",
"Victor Batista"
] | 2023-10-12 19:44:20 | http://arxiv.org/abs/2310.08685v1 | http://arxiv.org/pdf/2310.08685v1 | 2310.08685v1 |
Virtual Augmented Reality for Atari Reinforcement Learning | Reinforcement Learning (RL) has achieved significant milestones in the gaming
domain, most notably Google DeepMind's AlphaGo defeating human Go champion Ke
Jie. This victory was also made possible through the Arcade Learning Environment
(ALE): The ALE has been foundational in RL research, facilitating significant
RL algorithm developments such as AlphaGo and others. In current Atari video
game RL research, an RL agent's perception of its environment is based on raw
pixel data from the Atari video game screen with minimal image preprocessing.
Contrarily, cutting-edge ML research, external to the Atari video game RL
research domain, is focusing on enhancing image perception. A notable example
is Meta Research's "Segment Anything Model" (SAM), a foundation model capable
of segmenting images without prior training (zero-shot). This paper addresses a
novel methodical question: Can state-of-the-art image segmentation models such
as SAM improve the performance of RL agents playing Atari video games? The
results suggest that SAM can serve as a "virtual augmented reality" for the RL
agent, boosting its Atari video game playing performance under certain
conditions. Comparing RL agent performance results from raw and augmented pixel
inputs provides insight into these conditions. Although this paper was limited
by computational constraints, the findings show improved RL agent performance
for augmented pixel inputs and can inform broader research agendas in the
domain of "virtual augmented reality for video game playing RL agents". | [
"Christian A. Schiller"
] | 2023-10-12 19:42:42 | http://arxiv.org/abs/2310.08683v1 | http://arxiv.org/pdf/2310.08683v1 | 2310.08683v1 |