title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---
A General Theoretical Paradigm to Understand Learning from Human Preferences
|
The prevalent deployment of learning from human preferences through
reinforcement learning (RLHF) relies on two important approximations: the first
assumes that pairwise preferences can be substituted with pointwise rewards.
The second assumes that a reward model trained on these pointwise rewards can
generalize from collected data to out-of-distribution data sampled by the
policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an
approach that bypasses the second approximation and learns a policy directly
from the collected data, without the reward modelling stage. However, this
method still heavily relies on the first approximation.
In this paper we try to gain a deeper theoretical understanding of these
practical algorithms. In particular we derive a new general objective called
$\Psi$PO for learning from human preferences that is expressed in terms of
pairwise preferences and therefore bypasses both approximations. This new
general objective allows us to perform an in-depth analysis of the behavior of
RLHF and DPO (as special cases of $\Psi$PO) and to identify their potential
pitfalls. We then consider another special case for $\Psi$PO by setting $\Psi$
simply to Identity, for which we can derive an efficient optimisation
procedure, prove performance guarantees and demonstrate its empirical
superiority to DPO on some illustrative examples.
|
[
"Mohammad Gheshlaghi Azar",
"Mark Rowland",
"Bilal Piot",
"Daniel Guo",
"Daniele Calandriello",
"Michal Valko",
"Rémi Munos"
] |
2023-10-18 15:21:28
|
http://arxiv.org/abs/2310.12036v1
|
http://arxiv.org/pdf/2310.12036v1
|
2310.12036v1
|
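The Ψ = Identity case (IPO) admits a simple pairwise loss. Below is a minimal sketch contrasting it with DPO's logistic loss, assuming standard log-probability inputs; `beta` and `tau` are illustrative hyperparameter names, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def preference_losses(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1, tau=0.1):
    """logp_*: policy log-probs of the preferred (w) / dispreferred (l) response;
    ref_logp_*: the same under the frozen reference policy."""
    # Both objectives act on the same log-likelihood-ratio margin h.
    h = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    dpo = -F.logsigmoid(beta * h).mean()             # DPO: logistic loss on h
    ipo = ((h - 1.0 / (2.0 * tau)) ** 2).mean()      # IPO: squared loss keeps h bounded
    return dpo, ipo

# Toy usage with random log-probabilities:
lp = [torch.randn(8) for _ in range(4)]
print(preference_losses(*lp))
```

The squared loss keeps the margin h near 1/(2τ), rather than pushing it to infinity as the logistic loss can on deterministic preferences.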
Conformal Drug Property Prediction with Density Estimation under Covariate Shift
|
In drug discovery, it is vital to confirm the predictions of pharmaceutical
properties from computational models using costly wet-lab experiments. Hence,
obtaining reliable uncertainty estimates is crucial for prioritizing drug
molecules for subsequent experimental validation. Conformal Prediction (CP) is
a promising tool for creating such prediction sets for molecular properties
with a coverage guarantee. However, the exchangeability assumption of CP is
often challenged with covariate shift in drug discovery tasks: Most datasets
contain limited labeled data, which may not be representative of the vast
chemical space from which molecules are drawn. To address this limitation, we
propose a method called CoDrug that employs an energy-based model leveraging
both training data and unlabelled data, and Kernel Density Estimation (KDE) to
assess the densities of a molecule set. The estimated densities are then used
to weight the molecule samples while building prediction sets and correcting for
distribution shift. In extensive experiments involving realistic distribution
drifts in various small-molecule drug discovery tasks, we demonstrate the
ability of CoDrug to provide valid prediction sets and its utility in
addressing the distribution shift arising from de novo drug design models. On
average, using CoDrug can reduce the coverage gap by over 35% when compared to
conformal prediction sets not adjusted for covariate shift.
|
[
"Siddhartha Laghuvarapu",
"Zhen Lin",
"Jimeng Sun"
] |
2023-10-18 15:17:10
|
http://arxiv.org/abs/2310.12033v1
|
http://arxiv.org/pdf/2310.12033v1
|
2310.12033v1
|
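A minimal sketch of the underlying weighted conformal prediction step, assuming Gaussian KDE alone for the density ratio (CoDrug additionally uses an energy-based model); function and variable names here are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def shift_weighted_thresholds(scores, x_cal, x_test, x_train_pool, x_test_pool, alpha=0.1):
    """scores: conformity scores on the calibration set; x_*: (n, d) feature arrays."""
    p_train = gaussian_kde(x_train_pool.T)          # density of the training domain
    p_test = gaussian_kde(x_test_pool.T)            # density of the (shifted) test domain
    w_cal = p_test(x_cal.T) / np.clip(p_train(x_cal.T), 1e-12, None)
    s_ext = np.concatenate([scores, [np.inf]])      # +inf slot for the test point
    thresholds = []
    for x in x_test:
        w_x = p_test(x[:, None])[0] / max(p_train(x[:, None])[0], 1e-12)
        w = np.concatenate([w_cal, [w_x]])
        w /= w.sum()
        order = np.argsort(s_ext)
        cum = np.cumsum(w[order])                   # weighted CDF of the scores
        thresholds.append(s_ext[order][np.searchsorted(cum, 1 - alpha)])
    return np.array(thresholds)                     # per-test-point score cutoffs
```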
Exact and efficient solutions of the LMC Multitask Gaussian Process model
|
The Linear Model of Co-regionalization (LMC) is a very general model of
multitask Gaussian processes for regression or classification. While its
expressivity and conceptual simplicity are appealing, naive implementations
have cubic complexity in the number of datapoints and number of tasks, making
approximations mandatory for most applications. However, recent work has shown
that under some conditions the latent processes of the model can be decoupled,
leading to a complexity that is only linear in the number of said processes. We
here extend these results, showing from the most general assumptions that the
only condition necessary for an efficient exact computation of the LMC is a mild
hypothesis on the noise model. We introduce a full parametrization of the
resulting \emph{projected LMC} model, and an expression of the marginal
likelihood enabling efficient optimization. We perform a parametric study on
synthetic data to show the excellent performance of our approach, compared to
an unrestricted exact LMC and approximations of the latter. Overall, the
projected LMC appears as a credible and simpler alternative to state-of-the-art
models, which greatly facilitates some computations such as leave-one-out
cross-validation and fantasization.
|
[
"Olivier Truffinet",
"Karim Ammar",
"Jean-Philippe Argaud",
"Bertrand Bouriquet"
] |
2023-10-18 15:16:24
|
http://arxiv.org/abs/2310.12032v1
|
http://arxiv.org/pdf/2310.12032v1
|
2310.12032v1
|
SegmATRon: Embodied Adaptive Semantic Segmentation for Indoor Environment
|
This paper presents an adaptive transformer model named SegmATRon for
embodied image semantic segmentation. Its distinctive feature is the adaptation
of model weights during inference on several images using a hybrid
multicomponent loss function. We studied this model on datasets collected in
the photorealistic Habitat and the synthetic AI2-THOR simulators. We showed
that obtaining additional images using the agent's actions in an indoor
environment can improve the quality of semantic segmentation. The code of the
proposed approach and datasets are publicly available at
https://github.com/wingrune/SegmATRon.
|
[
"Tatiana Zemskova",
"Margarita Kichik",
"Dmitry Yudin",
"Aleksei Staroverov",
"Aleksandr Panov"
] |
2023-10-18 15:15:13
|
http://arxiv.org/abs/2310.12031v1
|
http://arxiv.org/pdf/2310.12031v1
|
2310.12031v1
|
Nonparametric Discrete Choice Experiments with Machine Learning Guided Adaptive Design
|
Designing products to meet consumers' preferences is essential for a
business's success. We propose the Gradient-based Survey (GBS), a discrete
choice experiment for multiattribute product design. The experiment elicits
consumer preferences through a sequence of paired comparisons for partial
profiles. GBS adaptively constructs paired comparison questions based on the
respondents' previous choices. Unlike the traditional random utility
maximization paradigm, GBS is robust to model misspecification by not requiring
a parametric utility model. Cross-pollinating machine learning and
experimental design, GBS is scalable to products with hundreds of attributes and
can design personalized products for heterogeneous consumers. We demonstrate
the advantage of GBS in accuracy and sample efficiency compared to the existing
parametric and nonparametric methods in simulations.
|
[
"Mingzhang Yin",
"Ruijiang Gao",
"Weiran Lin",
"Steven M. Shugan"
] |
2023-10-18 15:01:53
|
http://arxiv.org/abs/2310.12026v1
|
http://arxiv.org/pdf/2310.12026v1
|
2310.12026v1
|
Bayesian Flow Networks in Continual Learning
|
Bayesian Flow Networks (BFNs) have recently been proposed as one of the most
promising directions toward universal generative modelling, with the ability to
learn any data type. Their power comes from the expressiveness of neural
networks and from Bayesian inference, which makes them suitable in the context
of continual learning. We delve into the mechanics behind BFNs and conduct
experiments to empirically verify their generative capabilities on
non-stationary data.
|
[
"Mateusz Pyla",
"Kamil Deja",
"Bartłomiej Twardowski",
"Tomasz Trzciński"
] |
2023-10-18 14:32:20
|
http://arxiv.org/abs/2310.12001v1
|
http://arxiv.org/pdf/2310.12001v1
|
2310.12001v1
|
Iterative Methods for Vecchia-Laplace Approximations for Latent Gaussian Process Models
|
Latent Gaussian process (GP) models are flexible probabilistic non-parametric
function models. Vecchia approximations are accurate approximations for GPs to
overcome computational bottlenecks for large data, and the Laplace
approximation is a fast method with asymptotic convergence guarantees to
approximate marginal likelihoods and posterior predictive distributions for
non-Gaussian likelihoods. Unfortunately, the computational complexity of
combined Vecchia-Laplace approximations grows faster than linearly in the
sample size when used in combination with direct solver methods such as the
Cholesky decomposition. Computations with Vecchia-Laplace approximations thus
become prohibitively slow precisely when the approximations are usually the
most accurate, i.e., on large data sets. In this article, we present several
iterative methods for inference with Vecchia-Laplace approximations which make
computations considerably faster compared to Cholesky-based calculations. We
analyze our proposed methods theoretically and in experiments with simulated
and real-world data. In particular, we obtain a speed-up of an order of
magnitude compared to Cholesky-based inference and a threefold increase in
prediction accuracy in terms of the continuous ranked probability score
compared to a state-of-the-art method on a large satellite data set. All
methods are implemented in a free C++ software library with high-level Python
and R packages.
|
[
"Pascal Kündig",
"Fabio Sigrist"
] |
2023-10-18 14:31:16
|
http://arxiv.org/abs/2310.12000v1
|
http://arxiv.org/pdf/2310.12000v1
|
2310.12000v1
|
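The core computational idea, replacing Cholesky factorizations with iterative Krylov solves whose cost is dominated by cheap sparse matrix-vector products, can be sketched as follows; the tridiagonal matrix is a stand-in for the paper's actual Vecchia-Laplace systems and preconditioners.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 5000
# Stand-in for a sparse SPD system matrix (banded, like a Vecchia precision factor).
A = sp.diags([np.full(n - 1, -0.4), np.full(n, 2.0), np.full(n - 1, -0.4)],
             [-1, 0, 1], format="csr")
b = np.random.default_rng(0).normal(size=n)

x, info = cg(A, b)      # each iteration costs one O(n) sparse matrix-vector product
assert info == 0        # info == 0 signals convergence
```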
Removing Spurious Concepts from Neural Network Representations via Joint Subspace Estimation
|
Out-of-distribution generalization in neural networks is often hampered by
spurious correlations. A common strategy is to mitigate this by removing
spurious concepts from the neural network representation of the data. Existing
concept-removal methods tend to be overzealous by inadvertently eliminating
features associated with the main task of the model, thereby harming model
performance. We propose an iterative algorithm that separates spurious from
main-task concepts by jointly identifying two low-dimensional orthogonal
subspaces in the neural network representation. We evaluate the algorithm on
benchmark datasets for computer vision (Waterbirds, CelebA) and natural
language processing (MultiNLI), and show that it outperforms existing concept
removal methods.
|
[
"Floris Holstege",
"Bram Wouters",
"Noud van Giersbergen",
"Cees Diks"
] |
2023-10-18 14:22:36
|
http://arxiv.org/abs/2310.11991v1
|
http://arxiv.org/pdf/2310.11991v1
|
2310.11991v1
|
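Once a spurious subspace has been identified, removal reduces to an orthogonal projection. A minimal sketch, assuming the joint estimation step has already produced an orthonormal basis `V` for the spurious subspace:

```python
import numpy as np

def project_out(X, V):
    """X: (n, d) representations; V: (d, k) orthonormal basis of the spurious subspace."""
    P = V @ V.T           # orthogonal projector onto the spurious subspace
    return X - X @ P      # remove spurious components, keep main-task directions

# Toy usage: remove one known direction from random 64-d representations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
V, _ = np.linalg.qr(rng.normal(size=(64, 1)))   # orthonormalize a random direction
X_clean = project_out(X, V)
assert np.allclose(X_clean @ V, 0.0)            # the concept is gone
```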
Image Clustering with External Guidance
|
The core of clustering is incorporating prior knowledge to construct
supervision signals. From classic k-means based on data compactness to recent
contrastive clustering guided by self-supervision, the evolution of clustering
methods intrinsically corresponds to the progression of supervision signals. At
present, substantial efforts have been devoted to mining internal supervision
signals from data. Nevertheless, abundant external knowledge, such as
semantic descriptions, that is naturally conducive to clustering has regrettably
been overlooked. In this work, we propose leveraging external knowledge as a new
supervision signal to guide clustering, even though it seems irrelevant to the
given data. To implement and validate our idea, we design an externally guided
clustering method (Text-Aided Clustering, TAC), which leverages the textual
semantics of WordNet to facilitate image clustering. Specifically, TAC first
selects and retrieves WordNet nouns that best distinguish images to enhance the
feature discriminability. Then, to improve image clustering performance, TAC
makes the text and image modalities collaborate by mutually distilling
cross-modal neighborhood information. Experiments demonstrate that TAC achieves
state-of-the-art performance on five widely used and three more challenging
image clustering benchmarks, including the full ImageNet-1K dataset.
|
[
"Yunfan Li",
"Peng Hu",
"Dezhong Peng",
"Jiancheng Lv",
"Jianping Fan",
"Xi Peng"
] |
2023-10-18 14:20:55
|
http://arxiv.org/abs/2310.11989v1
|
http://arxiv.org/pdf/2310.11989v1
|
2310.11989v1
|
A Finite-Horizon Approach to Active Level Set Estimation
|
We consider the problem of active learning in the context of spatial sampling
for level set estimation (LSE), where the goal is to localize all regions where
a function of interest lies above/below a given threshold as quickly as
possible. We present a finite-horizon search procedure to perform LSE in one
dimension while optimally balancing both the final estimation error and the
distance traveled for a fixed number of samples. A tuning parameter is used to
trade off between the estimation accuracy and distance traveled. We show that
the resulting optimization problem can be solved in closed form and that the
resulting policy generalizes existing approaches to this problem. We then show
how this approach can be used to perform level set estimation in higher
dimensions under the popular Gaussian process model. Empirical results on
synthetic data indicate that as the cost of travel increases, our method's
ability to treat distance nonmyopically allows it to significantly improve on
the state of the art. On real air quality data, our approach achieves roughly
one fifth the estimation error at less than half the cost of competing
algorithms.
|
[
"Phillip Kearns",
"Bruno Jedynak",
"John Lipor"
] |
2023-10-18 14:11:41
|
http://arxiv.org/abs/2310.11985v1
|
http://arxiv.org/pdf/2310.11985v1
|
2310.11985v1
|
From Interpolation to Extrapolation: Complete Length Generalization for Arithmetic Transformers
|
Since its introduction, the transformer model has demonstrated outstanding
performance across various tasks. However, there are still unresolved issues
regarding length generalization, particularly in algorithmic tasks. In this
paper, we investigate the inherent capabilities of transformer models in
learning arithmetic algorithms, such as addition and multiplication. Through
experiments and attention analysis, we identify a number of crucial factors for
achieving optimal length generalization. We show that transformer models are
able to generalize to long lengths with the help of targeted attention biasing.
We then introduce Attention Bias Calibration (ABC), a calibration stage that
enables the model to automatically learn the proper attention biases, which we
link to mechanisms in relative position encoding. We demonstrate that using
ABC, the transformer model can achieve unprecedented perfect length
generalization on certain arithmetic tasks.
|
[
"Shaoxiong Duan",
"Yining Shi"
] |
2023-10-18 14:10:47
|
http://arxiv.org/abs/2310.11984v1
|
http://arxiv.org/pdf/2310.11984v1
|
2310.11984v1
|
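The attention-biasing mechanism that ABC calibrates can be sketched as an additive, learnable bias indexed by relative position; the initialization and shapes below are illustrative, not the paper's learned values.

```python
import torch

def biased_attention(q, k, v, rel_bias):
    """q, k, v: (batch, n, d); rel_bias: (2n - 1,), one learnable bias per offset."""
    n, d = q.shape[1], q.shape[2]
    idx = torch.arange(n)
    offsets = idx[None, :] - idx[:, None] + (n - 1)   # map offsets j - i to [0, 2n-2]
    scores = q @ k.transpose(-2, -1) / d**0.5 + rel_bias[offsets]
    return torch.softmax(scores, dim=-1) @ v

# Toy usage with a learnable bias table:
q = k = v = torch.randn(2, 16, 32)
rel_bias = torch.nn.Parameter(torch.zeros(31))        # 2*16 - 1 relative offsets
out = biased_attention(q, k, v, rel_bias)             # (2, 16, 32)
```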
Can bin-wise scaling improve consistency and adaptivity of prediction uncertainty for machine learning regression?
|
Binwise Variance Scaling (BVS) has recently been proposed as a post hoc
recalibration method for prediction uncertainties of machine learning
regression problems that is capable of more efficient corrections than uniform
variance (or temperature) scaling. The original version of BVS uses
uncertainty-based binning, which aims to improve calibration conditionally
on uncertainty, i.e. consistency. I explore here several adaptations of BVS, in
particular with alternative loss functions and a binning scheme based on an
input-feature (X) in order to improve adaptivity, i.e. calibration conditional
on X. The performances of BVS and its proposed variants are tested on a
benchmark dataset for the prediction of atomization energies and compared to
the results of isotonic regression.
|
[
"Pascal Pernot"
] |
2023-10-18 14:05:04
|
http://arxiv.org/abs/2310.11978v1
|
http://arxiv.org/pdf/2310.11978v1
|
2310.11978v1
|
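A minimal sketch of binwise variance scaling with uncertainty-based binning (the paper also explores input-feature binning and alternative loss functions); the helper names are mine.

```python
import numpy as np

def fit_bvs(errors, sigmas, n_bins=10):
    """errors: |y - y_hat| on a calibration set; sigmas: predicted standard deviations."""
    edges = np.quantile(sigmas, np.linspace(0, 1, n_bins + 1))
    scales = np.ones(n_bins)
    for b in range(n_bins):
        mask = (sigmas >= edges[b]) & (sigmas <= edges[b + 1])
        if mask.any():   # per-bin factor making z-scores have unit variance
            scales[b] = np.sqrt(np.mean(errors[mask] ** 2 / sigmas[mask] ** 2))
    return edges, scales

def apply_bvs(sigmas, edges, scales):
    bins = np.clip(np.searchsorted(edges, sigmas) - 1, 0, len(scales) - 1)
    return sigmas * scales[bins]
```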
Improving Generalization of Alignment with Human Preferences through Group Invariant Learning
|
The success of AI assistants based on language models (LLMs) hinges crucially
on Reinforcement Learning from Human Feedback (RLHF), which enables the
generation of responses more aligned with human preferences. As these
assistants become universal, there is a growing expectation that they perform
consistently across various domains. However, previous work shows that
Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and
overlooks challenging samples. This focus on quick reward gains undermines both
training stability and the model's ability to generalize to new, unseen
data. In this work, we propose a novel approach that can learn a consistent
policy via RL across various data groups or domains. Given the challenges
associated with acquiring group annotations, our method automatically
classifies data into different groups, deliberately maximizing performance
variance. Then, we optimize the policy to perform well on challenging groups.
Lastly, leveraging the established groups, our approach adaptively adjusts the
exploration space, allocating more learning capacity to more challenging data
and preventing the model from over-optimizing on simpler data. Experimental
results indicate that our approach significantly enhances training stability
and model generalization.
|
[
"Rui Zheng",
"Wei Shen",
"Yuan Hua",
"Wenbin Lai",
"Shihan Dou",
"Yuhao Zhou",
"Zhiheng Xi",
"Xiao Wang",
"Haoran Huang",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang"
] |
2023-10-18 13:54:15
|
http://arxiv.org/abs/2310.11971v2
|
http://arxiv.org/pdf/2310.11971v2
|
2310.11971v2
|
Take the aTrain. Introducing an Interface for the Accessible Transcription of Interviews
|
aTrain is an open-source and offline tool for transcribing audio data in
multiple languages with CPU and NVIDIA GPU support. It is specifically designed
for researchers using qualitative data generated from various forms of speech
interactions with research participants. aTrain requires no programming skills,
runs on most computers, does not require an internet connection, and was
verified not to upload data to any server. aTrain combines OpenAI's Whisper
model with speaker recognition to provide output that integrates with the
popular qualitative data analysis software tools MAXQDA and ATLAS.ti. It has an
easy-to-use graphical interface and is provided as a Windows app through the
Microsoft Store, allowing for simple installation by researchers. The source
code is freely available on GitHub. Having developed aTrain with a focus on
speed on local computers, we show that the transcription time on current mobile
CPUs is around 2 to 3 times the duration of the audio file using the
highest-accuracy transcription models. If an entry-level graphics card is
available, the transcription speed increases to 20% of the audio duration.
|
[
"Armin Haberl",
"Jürgen Fleiß",
"Dominik Kowald",
"Stefan Thalmann"
] |
2023-10-18 13:45:47
|
http://arxiv.org/abs/2310.11967v1
|
http://arxiv.org/pdf/2310.11967v1
|
2310.11967v1
|
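For orientation, the transcription core that aTrain builds on is OpenAI's open-source Whisper package; the sketch below shows plain Whisper usage with a hypothetical file name, and omits aTrain's speaker recognition, GUI, and MAXQDA/ATLAS.ti export.

```python
import whisper

model = whisper.load_model("large-v2")       # one of Whisper's high-accuracy models
result = model.transcribe("interview.wav")   # hypothetical local file; no upload
print(result["text"])
```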
Flexible Payload Configuration for Satellites using Machine Learning
|
Satellite communications, essential for modern connectivity, extend access to
maritime, aeronautical, and remote areas where terrestrial networks are
unfeasible. Current GEO systems distribute power and bandwidth uniformly across
beams using multi-beam footprints with fractional frequency reuse. However,
recent research reveals the limitations of this approach in heterogeneous
traffic scenarios, leading to inefficiencies. To address this, this paper
presents a machine learning (ML)-based approach to Radio Resource Management
(RRM).
We treat the RRM task as a regression ML problem, integrating RRM objectives
and constraints into the loss function that the ML algorithm aims to
minimize. Moreover, we introduce a context-aware ML metric that not only
evaluates the ML model's performance but also considers the impact of its
resource allocation decisions on the overall performance of the communication
system.
|
[
"Marcele O. K. Mendonca",
"Flor G. Ortiz-Gomez",
"Jorge Querol",
"Eva Lagunas",
"Juan A. Vásquez Peralvo",
"Victor Monzon Baeza",
"Symeon Chatzinotas",
"Bjorn Ottersten"
] |
2023-10-18 13:45:17
|
http://arxiv.org/abs/2310.11966v1
|
http://arxiv.org/pdf/2310.11966v1
|
2310.11966v1
|
Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for Long Sequences
|
Transformer-based models have achieved state-of-the-art performance in many
areas. However, the quadratic complexity of self-attention with respect to the
input length hinders the applicability of Transformer-based models to long
sequences. To address this, we present Fast Multipole Attention, a new
attention mechanism that uses a divide-and-conquer strategy to reduce the time
and memory complexity of attention for sequences of length $n$ from
$\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$ or $\mathcal{O}(n)$, while retaining a
global receptive field. The hierarchical approach groups queries, keys, and
values into $\mathcal{O}( \log n)$ levels of resolution, where groups at
greater distances are increasingly larger in size and the weights to compute
group quantities are learned. As such, the interaction between tokens far from
each other is considered in lower resolution in an efficient hierarchical
manner. The overall complexity of Fast Multipole Attention is $\mathcal{O}(n)$
or $\mathcal{O}(n \log n)$, depending on whether the queries are down-sampled
or not. This multi-level divide-and-conquer strategy is inspired by fast
summation methods from $n$-body physics and the Fast Multipole Method. We
perform evaluation on autoregressive and bidirectional language modeling tasks
and compare our Fast Multipole Attention model with other efficient attention
variants on medium-size datasets. We find empirically that the Fast Multipole
Transformer performs much better than other efficient transformers in terms of
memory size and accuracy. The Fast Multipole Attention mechanism has the
potential to empower large language models with much greater sequence lengths,
taking the full context into account in an efficient, naturally hierarchical
manner during training and when generating long sequences.
|
[
"Yanming Kang",
"Giang Tran",
"Hans De Sterck"
] |
2023-10-18 13:40:41
|
http://arxiv.org/abs/2310.11960v2
|
http://arxiv.org/pdf/2310.11960v2
|
2310.11960v2
|
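A toy two-level sketch of the divide-and-conquer idea: exact attention within local windows plus attention to average-pooled distant groups. The real method uses O(log n) levels with learned group weights rather than the fixed pooling and naive averaging shown here.

```python
import torch

def two_level_attention(q, k, v, window=64, pool=8):
    """q, k, v: (batch, n, d) with n divisible by both window and pool."""
    b, n, d = q.shape
    # Coarse level: average-pool distant keys/values into groups of size `pool`.
    k_c = k.reshape(b, n // pool, pool, d).mean(2)
    v_c = v.reshape(b, n // pool, pool, d).mean(2)
    coarse = torch.softmax(q @ k_c.transpose(-2, -1) / d**0.5, -1) @ v_c
    # Fine level: exact attention within non-overlapping local windows.
    qw = q.reshape(b, n // window, window, d)
    kw = k.reshape(b, n // window, window, d)
    vw = v.reshape(b, n // window, window, d)
    local = torch.softmax(qw @ kw.transpose(-2, -1) / d**0.5, -1) @ vw
    # Naive combination; the paper instead merges levels with learned weights.
    return 0.5 * (coarse + local.reshape(b, n, d))

out = two_level_attention(*[torch.randn(1, 512, 64) for _ in range(3)])
```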
A Multi-Scale Decomposition MLP-Mixer for Time Series Analysis
|
Time series data, often characterized by unique composition and complex
multi-scale temporal variations, requires special consideration of
decomposition and multi-scale modeling in its analysis. Existing deep learning
methods in this area are best suited to univariate time series only, and have not
sufficiently accounted for sub-series-level modeling and decomposition
completeness. To address this, we propose MSD-Mixer, a Multi-Scale
Decomposition MLP-Mixer which learns to explicitly decompose the input time
series into different components, and represents the components in different
layers. To handle multi-scale temporal patterns and inter-channel dependencies,
we propose a novel temporal patching approach to model the time series as
multi-scale sub-series, i.e., patches, and employ MLPs to mix intra- and
inter-patch variations and channel-wise correlations. In addition, we propose a
loss function to constrain both the magnitude and autocorrelation of the
decomposition residual for decomposition completeness. Through extensive
experiments on various real-world datasets for five common time series analysis
tasks (long- and short-term forecasting, imputation, anomaly detection, and
classification), we demonstrate that MSD-Mixer consistently achieves
significantly better performance in comparison with other state-of-the-art
task-general and task-specific approaches.
|
[
"Shuhan Zhong",
"Sizhe Song",
"Guanyao Li",
"Weipeng Zhuo",
"Yang Liu",
"S. -H. Gary Chan"
] |
2023-10-18 13:39:07
|
http://arxiv.org/abs/2310.11959v1
|
http://arxiv.org/pdf/2310.11959v1
|
2310.11959v1
|
Emptying the Ocean with a Spoon: Should We Edit Models?
|
We call into question the recently popularized method of direct model editing
as a means of correcting factual errors in LLM generations. We contrast model
editing with three similar but distinct approaches that pursue better defined
objectives: (1) retrieval-based architectures, which decouple factual memory
from inference and linguistic capabilities embodied in LLMs; (2) concept
erasure methods, which aim at preventing systemic bias in generated text; and
(3) attribution methods, which aim at grounding generations into identified
textual sources. We argue that direct model editing cannot be trusted as a
systematic remedy for the disadvantages inherent to LLMs, and while it has
proven potential in improving model explainability, it opens risks by
reinforcing the notion that models can be trusted for factuality. We call for
cautious promotion and application of model editing as part of the LLM
deployment process, and for responsibly limiting the use cases of LLMs to those
not relying on editing as a critical component.
|
[
"Yuval Pinter",
"Michael Elhadad"
] |
2023-10-18 13:38:03
|
http://arxiv.org/abs/2310.11958v1
|
http://arxiv.org/pdf/2310.11958v1
|
2310.11958v1
|
Recasting Continual Learning as Sequence Modeling
|
In this work, we aim to establish a strong connection between two significant
bodies of machine learning research: continual learning and sequence modeling.
That is, we propose to formulate continual learning as a sequence modeling
problem, allowing advanced sequence models to be utilized for continual
learning. Under this formulation, the continual learning process becomes the
forward pass of a sequence model. By adopting the meta-continual learning (MCL)
framework, we can train the sequence model at the meta-level, on multiple
continual learning episodes. As a specific example of our new formulation, we
demonstrate the application of Transformers and their efficient variants as MCL
methods. Our experiments on seven benchmarks, covering both classification and
regression, show that sequence models can be an attractive solution for general
MCL.
|
[
"Soochan Lee",
"Jaehyeon Son",
"Gunhee Kim"
] |
2023-10-18 13:26:52
|
http://arxiv.org/abs/2310.11952v1
|
http://arxiv.org/pdf/2310.11952v1
|
2310.11952v1
|
Too Good To Be True: performance overestimation in (re)current practices for Human Activity Recognition
|
Today, there are standard and well established procedures within the Human
Activity Recognition (HAR) pipeline. However, some of these conventional
approaches lead to accuracy overestimation. In particular, sliding windows for
data segmentation followed by standard random k-fold cross validation, produce
biased results. An analysis of previous literature and present-day studies,
surprisingly, shows that these are common approaches in state-of-the-art
studies on HAR. It is important to raise awareness in the scientific community
about this problem, whose negative effects are being overlooked. Otherwise,
the continued publication of biased results makes papers that report lower,
but correct and unbiased, accuracies harder to publish. Several experiments with
different types of datasets and different types of classification models allow
us to exhibit the problem and show it persists independently of the method or
dataset.
|
[
"Andrés Tello",
"Victoria Degeler",
"Alexander Lazovik"
] |
2023-10-18 13:24:05
|
http://arxiv.org/abs/2310.11950v1
|
http://arxiv.org/pdf/2310.11950v1
|
2310.11950v1
|
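The pitfall is easy to reproduce: with overlapping sliding windows, shuffled k-fold CV places near-duplicate windows in both train and test folds, while grouping by recording segment (or subject) avoids the leak. A synthetic sketch, with illustrative data and segment sizes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
signal = rng.normal(size=2000).cumsum()        # one long synthetic recording
windows = np.lib.stride_tricks.sliding_window_view(signal, 64)[::8].copy()  # 87.5% overlap
segment = np.arange(len(windows)) // 40        # contiguous recording segments
labels = segment % 2                           # slowly varying activity label

clf = RandomForestClassifier(random_state=0)
biased = cross_val_score(clf, windows, labels, cv=KFold(5, shuffle=True, random_state=0))
honest = cross_val_score(clf, windows, labels, groups=segment, cv=GroupKFold(5))
print(biased.mean(), honest.mean())            # the shuffled score is inflated
```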
Interpretable Spectral Variational AutoEncoder (ISVAE) for time series clustering
|
The best encoding is the one that is interpretable in nature. In this work,
we introduce a novel model that incorporates an interpretable bottleneck, termed
the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE). This
arrangement compels the VAE to attend to the most informative segments of the
input signal, fostering the learning of a novel encoding ${f_0}$ which boasts
enhanced interpretability and clusterability over traditional latent spaces. By
deliberately constraining the VAE with this FB, we intentionally constrict its
capacity to access broad input domain information, promoting the development of
an encoding that is discernible, separable, and of reduced dimensionality. The
evolutionary learning trajectory of ${f_0}$ further manifests as a dynamic
hierarchical tree, offering profound insights into cluster similarities.
Additionally, for handling intricate data configurations, we propose a tailored
decoder structure that is symmetrically aligned with FB's architecture.
Empirical evaluations highlight the superior efficacy of ISVAE, which compares
favorably to state-of-the-art results in clustering metrics across real-world
datasets.
|
[
"Óscar Jiménez Rama",
"Fernando Moreno-Pino",
"David Ramírez",
"Pablo M. Olmos"
] |
2023-10-18 13:06:05
|
http://arxiv.org/abs/2310.11940v1
|
http://arxiv.org/pdf/2310.11940v1
|
2310.11940v1
|
A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs
|
Semi-inductive link prediction (LP) in knowledge graphs (KG) is the task of
predicting facts for new, previously unseen entities based on context
information. Although new entities can be integrated by retraining the model
from scratch in principle, such an approach is infeasible for large-scale KGs,
where retraining is expensive and new entities may arise frequently. In this
paper, we propose and describe a large-scale benchmark to evaluate
semi-inductive LP models. The benchmark is based on and extends Wikidata5M: It
provides transductive, k-shot, and 0-shot LP tasks, each varying the available
information from (i) only KG structure, to (ii) including textual mentions, and
(iii) detailed descriptions of the entities. We report on a small study of
recent approaches and find that semi-inductive LP performance lags far behind
transductive performance on long-tail entities throughout all experiments. The
benchmark provides a test bed for further research into integrating context and
textual information in semi-inductive LP models.
|
[
"Adrian Kochsiek",
"Rainer Gemulla"
] |
2023-10-18 12:13:13
|
http://arxiv.org/abs/2310.11917v1
|
http://arxiv.org/pdf/2310.11917v1
|
2310.11917v1
|
Multi-modal Medical Neurological Image Fusion using Wavelet Pooled Edge Preserving Autoencoder
|
Medical image fusion integrates the complementary diagnostic information of
the source image modalities for improved visualization and analysis of
underlying anomalies. Recently, deep learning-based models have surpassed
conventional fusion methods by executing feature extraction, feature selection,
and feature fusion tasks simultaneously. However, most of the existing
convolutional neural network (CNN) architectures use conventional pooling or
strided convolutional strategies to downsample the feature maps. It causes the
blurring or loss of important diagnostic information and edge details available
in the source images and dilutes the efficacy of the feature extraction
process. Therefore, this paper presents an end-to-end unsupervised fusion model
for multimodal medical images based on an edge-preserving dense autoencoder
network. In the proposed model, feature extraction is improved by using wavelet
decomposition-based attention pooling of feature maps. This helps in preserving
the fine edge detail information present in both the source images and enhances
the visual perception of fused images. Further, the proposed model is trained
on a variety of medical image pairs which helps in capturing the intensity
distributions of the source images and preserves the diagnostic information
effectively. Substantial experiments are conducted which demonstrate that the
proposed method provides improved visual and quantitative results as compared
to the other state-of-the-art fusion methods.
|
[
"Manisha Das",
"Deep Gupta",
"Petia Radeva",
"Ashwini M Bakde"
] |
2023-10-18 11:59:35
|
http://arxiv.org/abs/2310.11910v1
|
http://arxiv.org/pdf/2310.11910v1
|
2310.11910v1
|
Accelerated Policy Gradient: On the Nesterov Momentum for Reinforcement Learning
|
Policy gradient methods have recently been shown to enjoy global convergence
at a $\Theta(1/t)$ rate in the non-regularized tabular softmax setting.
Accordingly, one important research question is whether this convergence rate
can be further improved, with only first-order updates. In this paper, we
answer the above question from the perspective of momentum by adapting the
celebrated Nesterov's accelerated gradient (NAG) method to reinforcement
learning (RL), termed \textit{Accelerated Policy Gradient} (APG). To
demonstrate the potential of APG in achieving faster global convergence, we
formally show that with the true gradient, APG with softmax policy
parametrization converges to an optimal policy at a $\tilde{O}(1/t^2)$ rate. To
the best of our knowledge, this is the first characterization of the global
convergence rate of NAG in the context of RL. Notably, our analysis relies on
one interesting finding: regardless of the initialization, APG can end up in a
locally nearly-concave regime, where it benefits significantly from the
momentum, within finitely many iterations. By means of numerical validation, we
confirm that APG exhibits the $\tilde{O}(1/t^2)$ rate and show that APG can
significantly improve the convergence behavior over the standard policy
gradient.
|
[
"Yen-Ju Chen",
"Nai-Chieh Huang",
"Ping-Chun Hsieh"
] |
2023-10-18 11:33:22
|
http://arxiv.org/abs/2310.11897v1
|
http://arxiv.org/pdf/2310.11897v1
|
2310.11897v1
|
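A minimal sketch of APG on a toy bandit with softmax parametrization and the true gradient, as in the paper's analysis; the step size and the standard Nesterov momentum schedule below are illustrative choices, not the paper's.

```python
import numpy as np

r = np.array([1.0, 0.8, 0.2])                  # true rewards; arm 0 is optimal
theta, phi, eta = np.zeros(3), np.zeros(3), 0.4
for t in range(1, 2001):
    pi = np.exp(phi) / np.exp(phi).sum()
    grad = pi * (r - pi @ r)                   # exact softmax policy gradient of E[r]
    theta_next = phi + eta * grad              # gradient ascent step
    phi = theta_next + (t / (t + 3)) * (theta_next - theta)  # Nesterov extrapolation
    theta = theta_next
print(np.exp(theta) / np.exp(theta).sum())     # mass concentrates on the optimal arm
```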
A New Multimodal Medical Image Fusion based on Laplacian Autoencoder with Channel Attention
|
Medical image fusion combines the complementary information of multimodal
medical images to assist medical professionals in the clinical diagnosis of
patients' disorders and provide guidance during preoperative and
intra-operative procedures. Deep learning (DL) models have achieved end-to-end
image fusion with highly robust and accurate fusion performance. However, most
DL-based fusion models perform down-sampling on the input images to minimize
the number of learnable parameters and computations. During this process,
salient features of the source images become irretrievable leading to the loss
of crucial diagnostic edge details and contrast of various brain tissues. In
this paper, we propose a new multimodal medical image fusion model
based on integrated Laplacian-Gaussian concatenation with attention
pooling (LGCA). We show that our model effectively preserves complementary
information and important tissue structures.
|
[
"Payal Wankhede",
"Manisha Das",
"Deep Gupta",
"Petia Radeva",
"Ashwini M Bakde"
] |
2023-10-18 11:29:53
|
http://arxiv.org/abs/2310.11896v1
|
http://arxiv.org/pdf/2310.11896v1
|
2310.11896v1
|
A Hyperparameter Study for Quantum Kernel Methods
|
Quantum kernel methods are a promising method in quantum machine learning
thanks to the guarantees connected to them. Their accessibility for analytic
considerations also opens up the possibility of prescreening datasets based on
their potential for a quantum advantage. To do so, earlier works developed the
geometric difference, which can be understood as a closeness measure between
two kernel-based machine learning approaches, most importantly between a
quantum kernel and a classical kernel. This metric links the quantum and
classical model complexities. Therefore, it raises the question of whether the
geometric difference, based on its relation to model complexity, can be a
useful tool in evaluations other than for the potential for quantum advantage.
In this work, we investigate the effects of hyperparameter choice on the model
performance and the generalization gap between classical and quantum kernels.
The importance of hyperparameter optimization is also well known in classical
machine learning. Especially for the quantum Hamiltonian evolution feature map,
the scaling of the input data has been shown to be crucial. However, there are
additional parameters left to be optimized, such as the best number of qubits to
trace out before computing a projected quantum kernel. We investigate the
influence of these hyperparameters and compare the classically reliable method
of cross-validation with choosing based on the geometric
difference. Based on a thorough investigation of the hyperparameters across
11 datasets, we identify commonalities that can be exploited when examining a
new dataset. In addition, our findings contribute to a better understanding of
the applicability of the geometric difference.
|
[
"Sebastian Egginger",
"Alona Sakhnenko",
"Jeanette Miriam Lorenz"
] |
2023-10-18 11:20:59
|
http://arxiv.org/abs/2310.11891v1
|
http://arxiv.org/pdf/2310.11891v1
|
2310.11891v1
|
Building a Graph-based Deep Learning network model from captured traffic traces
|
Currently, state-of-the-art network models are based on or depend on Discrete
Event Simulation (DES). While DES is highly accurate, it is also
computationally costly and cumbersome to parallelize, making it impractical to
simulate high-performance networks. Additionally, simulated scenarios fail to
capture all of the complexities present in real network scenarios. While there
exist network models based on Machine Learning (ML) techniques to minimize
these issues, these models are also trained with simulated data and are hence
vulnerable to the same pitfalls. Consequently, the Graph Neural Networking
Challenge 2023 introduces a dataset of captured traffic traces that can be used
to build a ML-based network model without these limitations. In this paper we
propose a Graph Neural Network (GNN)-based solution specifically designed to
better capture the complexities of real network scenarios. This is done through
a novel encoding method to capture information from the sequence of captured
packets, and an improved message passing algorithm to better represent the
dependencies present in physical networks. We show that the proposed solution
is able to learn and generalize to unseen captured network scenarios.
|
[
"Carlos Güemes-Palau",
"Miquel Ferriol Galmés",
"Albert Cabellos-Aparicio",
"Pere Barlet-Ros"
] |
2023-10-18 11:16:32
|
http://arxiv.org/abs/2310.11889v1
|
http://arxiv.org/pdf/2310.11889v1
|
2310.11889v1
|
Analyze Mass Spectrometry data with Artificial Intelligence to assist the understanding of past habitability of Mars and provide insights for future missions
|
This paper presents an application of artificial intelligence on mass
spectrometry data for detecting habitability potential of ancient Mars.
Although data was collected for planet Mars the same approach can be replicated
for any terrestrial object of our solar system. Furthermore, the proposed
methodology can be adapted to any domain that uses mass spectrometry. This
research focuses on the data analysis of two mass spectrometry techniques,
evolved gas analysis (EGA-MS) and gas chromatography (GC-MS), which are used to
identify specific chemical compounds in geological material samples. The study
demonstrates the applicability of EGA-MS and GC-MS data to extraterrestrial
material analysis. The most important features of the proposed methodology
include a square root transformation of mass spectrometry values, conversion of
raw data to 2D spectrograms, and the use of specific machine learning models and
techniques to avoid overfitting on relatively small datasets. Both EGA-MS and
GC-MS datasets come from NASA and from two machine learning competitions in
which the author participated. Complete running code for the GC-MS
dataset/competition is available on GitHub. Raw training mass spectrometry
data include [0, 1] labels of specific chemical compounds, selected to provide
valuable insights and contribute to our understanding of the potential past
habitability of Mars.
|
[
"Ioannis Nasios"
] |
2023-10-18 11:14:46
|
http://arxiv.org/abs/2310.11888v1
|
http://arxiv.org/pdf/2310.11888v1
|
2310.11888v1
|
From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks
|
In this paper, we review recent approaches for explaining concepts in neural
networks. Concepts can act as a natural link between learning and reasoning:
once the concepts are identified that a neural learning system uses, one can
integrate those concepts with a reasoning system for inference or use a
reasoning system to act upon them to improve or enhance the learning system. On
the other hand, knowledge can not only be extracted from neural networks but
concept knowledge can also be inserted into neural network architectures. Since
integrating learning and reasoning is at the core of neuro-symbolic AI, the
insights gained from this survey can serve as an important step towards
realizing neuro-symbolic AI based on explainable concepts.
|
[
"Jae Hee Lee",
"Sergio Lanza",
"Stefan Wermter"
] |
2023-10-18 11:08:02
|
http://arxiv.org/abs/2310.11884v1
|
http://arxiv.org/pdf/2310.11884v1
|
2310.11884v1
|
Online Convex Optimization with Switching Cost and Delayed Gradients
|
We consider the online convex optimization (OCO) problem with quadratic and
linear switching cost in the limited information setting, where an online
algorithm can choose its action using only gradient information about the
previous objective function. For $L$-smooth and $\mu$-strongly convex objective
functions, we propose an online multiple gradient descent (OMGD) algorithm and
show that its competitive ratio for the OCO problem with quadratic switching
cost is at most $4(L + 5) + \frac{16(L + 5)}{\mu}$. The competitive ratio upper
bound for OMGD is also shown to be order-wise tight in terms of $L,\mu$. In
addition, we show that the competitive ratio of any online algorithm is
$\max\{\Omega(L), \Omega(\frac{L}{\sqrt{\mu}})\}$ in the limited information
setting when the switching cost is quadratic. We also show that the OMGD
algorithm achieves the optimal (order-wise) dynamic regret in the limited
information setting. For the linear switching cost, the competitive ratio upper
bound of the OMGD algorithm is shown to depend on both the path length and the
squared path length of the problem instance, in addition to $L, \mu$, and is
shown to be order-wise, the best competitive ratio any online algorithm can
achieve. Consequently, we conclude that the optimal competitive ratio for the
quadratic and linear switching costs are fundamentally different in the limited
information setting.
|
[
"Spandan Senapati",
"Rahul Vaze"
] |
2023-10-18 11:06:06
|
http://arxiv.org/abs/2310.11880v1
|
http://arxiv.org/pdf/2310.11880v1
|
2310.11880v1
|
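A minimal sketch of the OMGD idea: at each round the learner takes several gradient-descent steps using only the previous objective's gradient oracle. The quadratic losses and the inner step count K are illustrative; the paper ties such constants to L and mu.

```python
import numpy as np

def omgd(grad_prev, x0, eta=0.1, K=5, T=100):
    """grad_prev: callable (t, x) -> gradient of the round-(t-1) loss at x."""
    x, xs = x0.copy(), []
    for t in range(1, T + 1):
        for _ in range(K):                     # K descent steps on the delayed gradient
            x = x - eta * grad_prev(t, x)
        xs.append(x.copy())
    return np.array(xs)

# Example: f_t(x) = 0.5 * ||x - c_t||^2 with a slowly drifting minimizer c_t.
c = lambda t: np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
traj = omgd(lambda t, x: x - c(t - 1), np.zeros(2))
```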
SQ Lower Bounds for Learning Mixtures of Linear Classifiers
|
We study the problem of learning mixtures of linear classifiers under
Gaussian covariates. Given sample access to a mixture of $r$ distributions on
$\mathbb{R}^n$ of the form $(\mathbf{x},y_{\ell})$, $\ell\in [r]$, where
$\mathbf{x}\sim\mathcal{N}(0,\mathbf{I}_n)$ and
$y_\ell=\mathrm{sign}(\langle\mathbf{v}_\ell,\mathbf{x}\rangle)$ for an unknown
unit vector $\mathbf{v}_\ell$, the goal is to learn the underlying distribution
in total variation distance. Our main result is a Statistical Query (SQ) lower
bound suggesting that known algorithms for this problem are essentially best
possible, even for the special case of uniform mixtures. In particular, we show
that the complexity of any SQ algorithm for the problem is
$n^{\mathrm{poly}(1/\Delta) \log(r)}$, where $\Delta$ is a lower bound on the
pairwise $\ell_2$-separation between the $\mathbf{v}_\ell$'s. The key technical
ingredient underlying our result is a new construction of spherical designs
that may be of independent interest.
|
[
"Ilias Diakonikolas",
"Daniel M. Kane",
"Yuxin Sun"
] |
2023-10-18 10:56:57
|
http://arxiv.org/abs/2310.11876v1
|
http://arxiv.org/pdf/2310.11876v1
|
2310.11876v1
|
Fractional Concepts in Neural Networks: Enhancing Activation and Loss Functions
|
The paper presents a method for using fractional concepts in a neural network
to modify the activation and loss functions. The methodology allows the neural
network to define and optimize its activation functions by determining the
fractional derivative order of the training process as an additional
hyperparameter. This will enable neurons in the network to adjust their
activation functions to match input data better and reduce output errors,
potentially improving the network's overall performance.
|
[
"Zahra Alijani",
"Vojtech Molek"
] |
2023-10-18 10:49:29
|
http://arxiv.org/abs/2310.11875v1
|
http://arxiv.org/pdf/2310.11875v1
|
2310.11875v1
|
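One concrete way to realize the idea is to treat the activation as a fractional derivative of ReLU with a trainable order: for x > 0, the Riemann-Liouville derivative of x is x^{1-α}/Γ(2-α). The module below is my illustration of this construction, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FractionalReLU(nn.Module):
    """ReLU's Riemann-Liouville fractional derivative with trainable order alpha."""
    def __init__(self, alpha0=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(float(alpha0)))

    def forward(self, x):
        a = self.alpha.clamp(0.0, 0.99)        # keep the order in [0, 1)
        out = torch.zeros_like(x)
        mask = x > 0                           # masked to avoid 0 ** p gradient issues
        out[mask] = x[mask] ** (1 - a) / torch.exp(torch.lgamma(2 - a))
        return out                             # alpha = 0 recovers plain ReLU
```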
Evaluating the Fairness of Discriminative Foundation Models in Computer Vision
|
We propose a novel taxonomy for bias evaluation of discriminative foundation
models, such as Contrastive Language-Pretraining (CLIP), that are used for
labeling tasks. We then systematically evaluate existing methods for mitigating
bias in these models with respect to our taxonomy. Specifically, we evaluate
OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot
classification, image retrieval and image captioning. We categorize desired
behaviors based around three axes: (i) if the task concerns humans; (ii) how
subjective the task is (i.e., how likely it is that people from a diverse range
of backgrounds would agree on a labeling); and (iii) the intended purpose of
the task and if fairness is better served by impartiality (i.e., making
decisions independent of the protected attributes) or representation (i.e.,
making decisions to maximize diversity). Finally, we provide quantitative
fairness evaluations for both binary-valued and multi-valued protected
attributes over ten diverse datasets. We find that fair PCA, a post-processing
method for fair representations, works very well for debiasing in most of the
aforementioned tasks while incurring only minor loss of performance. However,
different debiasing approaches vary in their effectiveness depending on the
task. Hence, one should choose the debiasing approach depending on the specific
use case.
|
[
"Junaid Ali",
"Matthaeus Kleindessner",
"Florian Wenzel",
"Kailash Budhathoki",
"Volkan Cevher",
"Chris Russell"
] |
2023-10-18 10:32:39
|
http://arxiv.org/abs/2310.11867v1
|
http://arxiv.org/pdf/2310.11867v1
|
2310.11867v1
|
Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function
|
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven
to have some very appealing theoretical properties for non-convex optimization
by concurrently computing function value, gradient, and Hessian matrix to
obtain the next search direction and the adjusted parameters. Although
stochastic approximations help largely reduce the computational cost, it is
challenging to theoretically guarantee the convergence rate. In this paper, we
explore a family of stochastic TR and ARC methods that can simultaneously
provide inexact computations of the Hessian matrix, gradient, and function
values. Our algorithms require much less propagation overhead per iteration
than TR and ARC. We prove that the iteration complexity to achieve
$\epsilon$-approximate second-order optimality is of the same order as the
exact computations demonstrated in previous studies. Additionally, the mild
conditions on inexactness can be met by leveraging a random sampling technology
in the finite-sum minimization problem. Numerical experiments with a non-convex
problem support these findings and demonstrate that, with the same or a similar
number of iterations, our algorithms require less computational overhead per
iteration than current second-order methods.
|
[
"Liu Liu",
"Xuanqing Liu",
"Cho-Jui Hsieh",
"Dacheng Tao"
] |
2023-10-18 10:29:58
|
http://arxiv.org/abs/2310.11866v1
|
http://arxiv.org/pdf/2310.11866v1
|
2310.11866v1
|
Effective and Efficient Federated Tree Learning on Hybrid Data
|
Federated learning has emerged as a promising distributed learning paradigm
that facilitates collaborative learning among multiple parties without
transferring raw data. However, most existing federated learning studies focus
on either horizontal or vertical data settings, where the data of different
parties are assumed to be from the same feature or sample space. In practice, a
common scenario is the hybrid data setting, where data from different parties
may differ both in the features and samples. To address this, we propose
HybridTree, a novel federated learning approach that enables federated tree
learning on hybrid data. We observe the existence of consistent split rules in
trees. With the help of these split rules, we theoretically show that the
knowledge of parties can be incorporated into the lower layers of a tree. Based
on our theoretical analysis, we propose a layer-level solution that does not
require frequent communication to train a tree. Our experiments
demonstrate that HybridTree can achieve comparable accuracy to the centralized
setting with low computational and communication overhead. HybridTree can
achieve up to 8 times speedup compared with the other baselines.
|
[
"Qinbin Li",
"Chulin Xie",
"Xiaojun Xu",
"Xiaoyuan Liu",
"Ce Zhang",
"Bo Li",
"Bingsheng He",
"Dawn Song"
] |
2023-10-18 10:28:29
|
http://arxiv.org/abs/2310.11865v1
|
http://arxiv.org/pdf/2310.11865v1
|
2310.11865v1
|
VQ-NeRF: Neural Reflectance Decomposition and Editing with Vector Quantization
|
We propose VQ-NeRF, a two-branch neural network model that incorporates
Vector Quantization (VQ) to decompose and edit reflectance fields in 3D scenes.
Conventional neural reflectance fields use only continuous representations to
model 3D scenes, despite the fact that objects are typically composed of
discrete materials in reality. This lack of discretization can result in noisy
material decomposition and complicated material editing. To address these
limitations, our model consists of a continuous branch and a discrete branch.
The continuous branch follows the conventional pipeline to predict decomposed
materials, while the discrete branch uses the VQ mechanism to quantize
continuous materials into individual ones. By discretizing the materials, our
model can reduce noise in the decomposition process and generate a segmentation
map of discrete materials. Specific materials can be easily selected for
further editing by clicking on the corresponding area of the segmentation
outcomes. Additionally, we propose a dropout-based VQ codeword ranking strategy
to predict the number of materials in a scene, which reduces redundancy in the
material segmentation process. To improve usability, we also develop an
interactive interface to further assist material editing. We evaluate our model
on both computer-generated and real-world scenes, demonstrating its superior
performance. To the best of our knowledge, our model is the first to enable
discrete material editing in 3D scenes.
|
[
"Hongliang Zhong",
"Jingbo Zhang",
"Jing Liao"
] |
2023-10-18 10:26:56
|
http://arxiv.org/abs/2310.11864v1
|
http://arxiv.org/pdf/2310.11864v1
|
2310.11864v1
|
Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights
|
Transferable adversarial examples raise critical security concerns in
real-world, black-box attack scenarios. However, in this work, we identify two
main problems in common evaluation practices: (1) For attack transferability,
lack of systematic, one-to-one attack comparison and fair hyperparameter
settings. (2) For attack stealthiness, simply no comparisons. To address these
problems, we establish new evaluation guidelines by (1) proposing a novel
attack categorization strategy and conducting systematic and fair
intra-category analyses on transferability, and (2) considering diverse
imperceptibility metrics and finer-grained stealthiness characteristics from
the perspective of attack traceback. To this end, we provide the first
large-scale evaluation of transferable adversarial examples on ImageNet,
involving 23 representative attacks against 9 representative defenses. Our
evaluation leads to a number of new insights, including consensus-challenging
ones: (1) Under a fair attack hyperparameter setting, one early attack method,
DI, actually outperforms all the follow-up methods. (2) A state-of-the-art
defense, DiffPure, actually gives a false sense of (white-box) security since
it is indeed largely bypassed by our (black-box) transferable attacks. (3) Even
when all attacks are bounded by the same $L_p$ norm, they lead to dramatically
different stealthiness performance, which negatively correlates with their
transferability performance. Overall, our work demonstrates that existing
problematic evaluations have indeed caused misleading conclusions and missing
points, and as a result, hindered the assessment of the actual progress in this
field.
|
[
"Zhengyu Zhao",
"Hanwei Zhang",
"Renjue Li",
"Ronan Sicre",
"Laurent Amsaleg",
"Michael Backes",
"Qi Li",
"Chao Shen"
] |
2023-10-18 10:06:42
|
http://arxiv.org/abs/2310.11850v1
|
http://arxiv.org/pdf/2310.11850v1
|
2310.11850v1
|
Accelerate Presolve in Large-Scale Linear Programming via Reinforcement Learning
|
Large-scale LP problems from industry usually contain much redundancy that
severely hurts the efficiency and reliability of solving LPs, making presolve
(i.e., the problem simplification module) one of the most critical components
in modern LP solvers. However, how to design high-quality presolve routines --
that is, the program determining (P1) which presolvers to select, (P2) in what
order to execute, and (P3) when to stop -- remains a highly challenging task
due to the extensive requirements on expert knowledge and the large search
space. Due to the sequential decision property of the task and the lack of
expert demonstrations, we propose a simple and efficient reinforcement learning
(RL) framework -- namely, reinforcement learning for presolve (RL4Presolve) --
to tackle (P1)-(P3) simultaneously. Specifically, we formulate the routine
design task as a Markov decision process and propose an RL framework with
adaptive action sequences to generate high-quality presolve routines
efficiently. Note that adaptive action sequences help learn complex behaviors
efficiently and adapt to various benchmarks. Experiments on two solvers
(open-source and commercial) and eight benchmarks (real-world and synthetic)
demonstrate that RL4Presolve significantly and consistently improves the
efficiency of solving large-scale LPs, especially on benchmarks from industry.
Furthermore, we optimize the hard-coded presolve routines in LP solvers by
extracting rules from learned policies for simple and efficient deployment to
Huawei's supply chain. The results show encouraging economic and academic
potential for incorporating machine learning into modern solvers.
|
[
"Yufei Kuang",
"Xijun Li",
"Jie Wang",
"Fangzhou Zhu",
"Meng Lu",
"Zhihai Wang",
"Jia Zeng",
"Houqiang Li",
"Yongdong Zhang",
"Feng Wu"
] |
2023-10-18 09:51:59
|
http://arxiv.org/abs/2310.11845v1
|
http://arxiv.org/pdf/2310.11845v1
|
2310.11845v1
|
On The Expressivity of Objective-Specification Formalisms in Reinforcement Learning
|
To solve a task with reinforcement learning (RL), it is necessary to formally
specify the goal of that task. Although most RL algorithms require that the
goal is formalised as a Markovian reward function, alternatives have been
developed (such as Linear Temporal Logic and Multi-Objective Reinforcement
Learning). Moreover, it is well known that some of these formalisms are able to
express certain tasks that other formalisms cannot express. However, there has
not yet been any thorough analysis of how these formalisms relate to each other
in terms of expressivity. In this work, we fill this gap in the existing
literature by providing a comprehensive comparison of the expressivities of 17
objective-specification formalisms in RL. We place these formalisms in a
preorder based on their expressive power, and present this preorder as a Hasse
diagram. We find a variety of limitations for the different formalisms, and
that no formalism is both dominantly expressive and straightforward to optimise
with current techniques. For example, we prove that each of Regularised RL,
Outer Nonlinear Markov Rewards, Reward Machines, Linear Temporal Logic, and
Limit Average Rewards can express an objective that the others cannot. Our
findings have implications for both policy optimisation and reward learning.
Firstly, we identify expressivity limitations which are important to consider
when specifying objectives in practice. Secondly, our results highlight the
need for future research which adapts reward learning to work with a variety of
formalisms, since many existing reward learning methods implicitly assume that
desired objectives can be expressed with Markovian rewards. Our work
contributes towards a more cohesive understanding of the costs and benefits of
different RL objective-specification formalisms.
|
[
"Rohan Subramani",
"Marcus Williams",
"Max Heitmann",
"Halfdan Holm",
"Charlie Griffin",
"Joar Skalse"
] |
2023-10-18 09:46:01
|
http://arxiv.org/abs/2310.11840v1
|
http://arxiv.org/pdf/2310.11840v1
|
2310.11840v1
|
Equivariant Bootstrapping for Uncertainty Quantification in Imaging Inverse Problems
|
Scientific imaging problems are often severely ill-posed, and hence have
significant intrinsic uncertainty. Accurately quantifying the uncertainty in
the solutions to such problems is therefore critical for the rigorous
interpretation of experimental results as well as for reliably using the
reconstructed images as scientific evidence. Unfortunately, existing imaging
methods are unable to quantify the uncertainty in the reconstructed images in a
manner that is robust to experiment replications. This paper presents a new
uncertainty quantification methodology based on an equivariant formulation of
the parametric bootstrap algorithm that leverages symmetries and invariance
properties commonly encountered in imaging problems. Additionally, the proposed
methodology is general and can be easily applied with any image reconstruction
technique, including unsupervised training strategies that can be trained from
observed data alone, thus enabling uncertainty quantification in situations
where there is no ground truth data available. We demonstrate the proposed
approach with a series of numerical experiments and through comparisons with
alternative uncertainty quantification strategies from the state-of-the-art,
such as Bayesian strategies involving score-based diffusion models and Langevin
samplers. In all our experiments, the proposed method delivers remarkably
accurate high-dimensional confidence regions and outperforms the competing
approaches in terms of estimation accuracy, uncertainty quantification
accuracy, and computing time.
|
[
"Julian Tachella",
"Marcelo Pereyra"
] |
2023-10-18 09:43:15
|
http://arxiv.org/abs/2310.11838v2
|
http://arxiv.org/pdf/2310.11838v2
|
2310.11838v2
|
Optimising Distributions with Natural Gradient Surrogates
|
Natural gradient methods have been used to optimise the parameters of
probability distributions in a variety of settings, often resulting in
fast-converging procedures. Unfortunately, for many distributions of interest,
computing the natural gradient has a number of challenges. In this work we
propose a novel technique for tackling such issues, which involves reframing
the optimisation as one with respect to the parameters of a surrogate
distribution, for which computing the natural gradient is easy. We give several
examples of existing methods that can be interpreted as applying this
technique, and propose a new method for applying it to a wide variety of
problems. Our method expands the set of distributions that can be efficiently
targeted with natural gradients. Furthermore, it is fast, easy to understand,
simple to implement using standard autodiff software, and does not require
lengthy model-specific derivations. We demonstrate our method on maximum
likelihood estimation and variational inference tasks.
|
[
"Jonathan So",
"Richard E. Turner"
] |
2023-10-18 09:42:39
|
http://arxiv.org/abs/2310.11837v1
|
http://arxiv.org/pdf/2310.11837v1
|
2310.11837v1
|
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition
|
This paper proposes a novel framework for multilingual speech and sound
representation learning using contrastive learning. The lack of sizeable
labelled datasets hinders speech-processing research across languages. Recent
advances in contrastive learning provide self-supervised techniques to learn
from unlabelled data. Motivated by reducing data dependence and improving
generalisation across diverse languages and conditions, we develop a
multilingual contrastive framework. This framework enables models to acquire
shared representations across languages, facilitating cross-lingual transfer
with limited target language data.
Additionally, capturing emotional cues within speech is challenging due to
subjective perceptual assessments. By learning expressive representations from
diverse, multilingual data in a self-supervised manner, our approach aims to
develop speech representations that encode emotive dimensions.
Our method trains encoders on a large corpus of multilingual audio data.
Data augmentation techniques are employed to expand the dataset. The
contrastive learning approach trains the model to maximise agreement between
positive pairs and minimise agreement between negative pairs. Extensive
experiments demonstrate state-of-the-art performance of the proposed model on
emotion recognition, audio classification, and retrieval benchmarks under
zero-shot and few-shot conditions. This provides an effective approach for
acquiring shared and generalised speech representations across languages and
acoustic conditions while encoding latent emotional dimensions.
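A minimal sketch of the symmetric contrastive (InfoNCE) objective this abstract describes, maximising agreement for positive pairs and minimising it for negatives; the embedding dimensions, temperature, and pairing scheme are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, temperature=0.07):
    # L2-normalise so dot products are cosine similarities.
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.T / temperature        # (batch, batch) similarity matrix
    targets = torch.arange(len(a))        # positive pairs lie on the diagonal
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))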
|
[
"Kari A Noriy",
"Xiaosong Yang",
"Marcin Budka",
"Jian Jun Zhang"
] |
2023-10-18 09:31:56
|
http://arxiv.org/abs/2310.11830v1
|
http://arxiv.org/pdf/2310.11830v1
|
2310.11830v1
|
Towards Graph Foundation Models: A Survey and Beyond
|
Emerging as fundamental building blocks for diverse artificial intelligence
applications, foundation models have achieved notable success across natural
language processing and many other domains. In parallel, graph machine learning
has witnessed a transformative shift, with shallow methods giving way to deep
learning approaches. The emergence and homogenization capabilities of
foundation models have piqued the interest of graph machine learning
researchers, sparking discussions about developing the next graph learning
paradigm that is pre-trained on broad graph data and can be adapted to a wide
range of downstream graph tasks. However, there is currently no clear
definition or systematic analysis of this line of work. In this article, we
propose the concept of graph foundation models (GFMs), and provide the first
comprehensive elucidation on their key characteristics and technologies.
Following that, we categorize existing works towards GFMs into three categories
based on their reliance on graph neural networks and large language models.
Beyond providing a comprehensive overview of the current landscape of graph
foundation models, this article also discusses potential research directions
for this evolving field.
|
[
"Jiawei Liu",
"Cheng Yang",
"Zhiyuan Lu",
"Junze Chen",
"Yibo Li",
"Mengmei Zhang",
"Ting Bai",
"Yuan Fang",
"Lichao Sun",
"Philip S. Yu",
"Chuan Shi"
] |
2023-10-18 09:31:21
|
http://arxiv.org/abs/2310.11829v1
|
http://arxiv.org/pdf/2310.11829v1
|
2310.11829v1
|
Conservative Predictions on Noisy Financial Data
|
Price movements in financial markets are well known to be very noisy. As a
result, even if there are, on occasion, exploitable patterns that could be
picked up by machine-learning algorithms, these are obscured by feature and
label noise rendering the predictions less useful, and risky in practice.
Traditional rule-learning techniques developed for noisy data, such as CN2,
would seek only high precision rules and refrain from making predictions where
their antecedents did not apply. We apply a similar approach, where a model
abstains from making a prediction on data points about which it is uncertain.
During training, a cascade of such models is learned in sequence, similar to
rule lists, with each model being trained only on data on which the previous
model(s) were uncertain. Similar pruning of data takes place at test-time, with
(higher accuracy) predictions being made albeit only on a fraction (support) of
test-time data. In a financial prediction setting, such an approach allows
decisions to be taken only when the ensemble model is confident, thereby
reducing risk. We present results using traditional MLPs as well as
differentiable decision trees, on synthetic data as well as real financial
market data, to predict fixed-term returns using commonly used features. We
submit that our approach is likely to result in better overall returns at a
lower level of risk. In this context we introduce a utility metric to measure
the average gain per trade, as well as the return adjusted for downside risk,
both of which are improved significantly by our approach.
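A hedged sketch of the cascade described above: each stage predicts only where its confidence clears a threshold and hands the remaining points to the next stage. The classifier, threshold, and integer class labels are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def fit_cascade(X, y, n_stages=3, threshold=0.8):
    stages, mask = [], np.ones(len(X), dtype=bool)
    for _ in range(n_stages):
        if not mask.any():
            break
        clf = MLPClassifier(max_iter=500).fit(X[mask], y[mask])
        conf = clf.predict_proba(X[mask]).max(axis=1)
        stages.append(clf)
        idx = np.where(mask)[0]
        mask[idx[conf >= threshold]] = False   # next stage: uncertain points only
    return stages

def predict_cascade(stages, X, threshold=0.8):
    pred = np.full(len(X), -1)                 # -1 means "abstain" (no trade)
    todo = np.ones(len(X), dtype=bool)
    for clf in stages:
        if not todo.any():
            break
        proba = clf.predict_proba(X[todo])
        idx = np.where(todo)[0]
        sure = proba.max(axis=1) >= threshold
        pred[idx[sure]] = clf.classes_[proba.argmax(axis=1)[sure]]
        todo[idx[sure]] = False
    return pred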
|
[
"Omkar Nabar",
"Gautam Shroff"
] |
2023-10-18 09:14:19
|
http://arxiv.org/abs/2310.11815v1
|
http://arxiv.org/pdf/2310.11815v1
|
2310.11815v1
|
De novo protein design using geometric vector field networks
|
Innovations like protein diffusion have enabled significant progress in de
novo protein design, which is a vital topic in life science. These methods
typically depend on protein structure encoders to model residue backbone
frames, where atoms do not exist. Most prior encoders rely on atom-wise
features, such as angles and distances between atoms, which are not available
in this context. Thus far, only a few simple encoders, such as IPA, have been
proposed for this scenario, leaving frame modeling as a bottleneck. In
this work, we proffer the Vector Field Network (VFN), which enables network
layers to perform learnable vector computations between coordinates of
frame-anchored virtual atoms, thus achieving a higher capability for modeling
frames. The vector computation operates in a manner similar to a linear layer,
with each input channel receiving 3D virtual atom coordinates instead of scalar
values. The multiple feature vectors output by the vector computation are then
used to update the residue representations and virtual atom coordinates via
attention aggregation. Remarkably, VFN also excels in modeling both frames and
atoms, as the real atoms can be treated as the virtual atoms for modeling,
positioning VFN as a potential universal encoder. In protein diffusion (frame
modeling), VFN exhibits an impressive performance advantage over IPA, excelling
in terms of both designability (67.04% vs. 53.58%) and diversity (66.54% vs.
51.98%). In inverse folding (frame and atom modeling), VFN outperforms the
previous SoTA model, PiFold (54.7% vs. 51.66%), on sequence recovery rate. We
also propose a method of equipping VFN with the ESM model, which significantly
surpasses the previous ESM-based SoTA (62.67% vs. 55.65%), LM-Design, by a
substantial margin.
|
[
"Weian Mao",
"Muzhi Zhu",
"Zheng Sun",
"Shuaike Shen",
"Lin Yuanbo Wu",
"Hao Chen",
"Chunhua Shen"
] |
2023-10-18 08:45:57
|
http://arxiv.org/abs/2310.11802v1
|
http://arxiv.org/pdf/2310.11802v1
|
2310.11802v1
|
Adversarial Training for Physics-Informed Neural Networks
|
Physics-informed neural networks have shown great promise in solving partial
differential equations. However, due to insufficient robustness, vanilla PINNs
often face challenges when solving complex PDEs, especially those involving
multi-scale behaviors or solutions with sharp or oscillatory characteristics.
To address these issues, we propose an adversarial training strategy for
PINNs, termed AT-PINNs, based on the projected gradient descent adversarial
attack. AT-PINNs enhance the robustness of PINNs by fine-tuning the model
with adversarial samples, which can accurately identify model failure locations
and drive the model to focus on those regions during training. AT-PINNs can
also perform inference with temporal causality by selecting the initial
collocation points around temporal initial values. We apply AT-PINNs to the
elliptic equation with multi-scale coefficients, the Poisson equation with
multi-peak solutions, the Burgers equation with sharp solutions, and the Allen-Cahn
equation. The results demonstrate that AT-PINNs can effectively locate and
reduce failure regions. Moreover, AT-PINNs are suitable for solving complex
PDEs, since locating failure regions through adversarial attacks is independent
of the size of failure regions or the complexity of the distribution.
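A minimal sketch of the PGD-style search for adversarial collocation points on a toy 1D Poisson problem u''(x) = f(x); the network, source term, step size, and iteration count are illustrative assumptions rather than the paper's setup.

import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
f = lambda x: torch.sin(x)                    # assumed source term

def residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - f(x)                         # PDE residual u'' - f

x = torch.rand(128, 1)                        # initial collocation points
for _ in range(10):                           # PGD ascent on the squared residual
    r = residual(x)
    g = torch.autograd.grad((r ** 2).sum(), x)[0]
    x = (x + 0.01 * g.sign()).clamp(0.0, 1.0).detach()
# x now concentrates near model-failure regions; fine-tune the PINN on it.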
|
[
"Yao Li",
"Shengzhu Shi",
"Zhichang Guo",
"Boying Wu"
] |
2023-10-18 08:28:43
|
http://arxiv.org/abs/2310.11789v1
|
http://arxiv.org/pdf/2310.11789v1
|
2310.11789v1
|
NeuroCUT: A Neural Approach for Robust Graph Partitioning
|
Graph partitioning aims to divide a graph into $k$ disjoint subsets while
optimizing a specific partitioning objective. The majority of formulations
related to graph partitioning exhibit NP-hardness due to their combinatorial
nature. As a result, conventional approximation algorithms rely on heuristic
methods, sometimes with approximation guarantees and sometimes without.
Unfortunately, traditional approaches are tailored for specific partitioning
objectives and do not generalize well across other known partitioning
objectives from the literature. To overcome this limitation, and learn
heuristics from the data directly, neural approaches have emerged,
demonstrating promising outcomes. In this study, we extend this line of work
through a novel framework, NeuroCUT. NeuroCUT introduces two key innovations
over prevailing methodologies. First, it is inductive to both graph topology
and the partition count, which is provided at query time. Second, by leveraging
a reinforcement learning based framework over node representations derived from
a graph neural network, NeuroCUT can accommodate any optimization objective,
even those encompassing non-differentiable functions. Through empirical
evaluation, we demonstrate that NeuroCUT excels in identifying high-quality
partitions, showcases strong generalization across a wide spectrum of
partitioning objectives, and exhibits resilience to topological modifications.
|
[
"Rishi Shah",
"Krishnanshu Jain",
"Sahil Manchanda",
"Sourav Medya",
"Sayan Ranu"
] |
2023-10-18 08:27:09
|
http://arxiv.org/abs/2310.11787v1
|
http://arxiv.org/pdf/2310.11787v1
|
2310.11787v1
|
A Quasi-Wasserstein Loss for Learning Graph Neural Networks
|
When learning graph neural networks (GNNs) in node-level prediction tasks,
most existing loss functions are applied for each node independently, even if
node embeddings and their labels are non-i.i.d. because of their graph
structures. To eliminate such inconsistency, in this study we propose a novel
Quasi-Wasserstein (QW) loss with the help of the optimal transport defined on
graphs, leading to new learning and prediction paradigms of GNNs. In
particular, we design a "Quasi-Wasserstein" distance between the observed
multi-dimensional node labels and their estimations, optimizing the label
transport defined on graph edges. The estimations are parameterized by a GNN in
which the optimal label transport may optionally determine the graph edge
weights. By reformulating the strict constraint of the label transport into a
Bregman divergence-based regularizer, we obtain the proposed Quasi-Wasserstein
loss associated with two efficient solvers learning the GNN together with
optimal label transport. When predicting node labels, our model combines the
output of the GNN with the residual component provided by the optimal label
transport, leading to a new transductive prediction paradigm. Experiments show
that the proposed QW loss applies to various GNNs and helps to improve their
performance in node-level classification and regression tasks.
|
[
"Minjie Cheng",
"Hongteng Xu"
] |
2023-10-18 07:39:05
|
http://arxiv.org/abs/2310.11762v2
|
http://arxiv.org/pdf/2310.11762v2
|
2310.11762v2
|
Domain-Generalized Face Anti-Spoofing with Unknown Attacks
|
Although face anti-spoofing (FAS) methods have achieved remarkable
performance on specific domains or attack types, few studies have focused on
the simultaneous presence of domain changes and unknown attacks, which is
closer to real application scenarios. To handle domain-generalized unknown
attacks, we introduce a new method, DGUA-FAS, which consists of a
Transformer-based feature extractor and a synthetic unknown attack sample
generator (SUASG). The SUASG network simulates unknown attack samples to assist
the training of the feature extractor. Experimental results show that our
method achieves superior performance on domain generalization FAS with known or
unknown attacks.
|
[
"Zong-Wei Hong",
"Yu-Chen Lin",
"Hsuan-Tung Liu",
"Yi-Ren Yeh",
"Chu-Song Chen"
] |
2023-10-18 07:31:35
|
http://arxiv.org/abs/2310.11758v1
|
http://arxiv.org/pdf/2310.11758v1
|
2310.11758v1
|
Estimating Material Properties of Interacting Objects Using Sum-GP-UCB
|
Robots need to estimate the material and dynamic properties of objects from
observations in order to simulate them accurately. We present a Bayesian
optimization approach to identifying the material property parameters of
objects based on a set of observations. Our focus is on estimating these
properties based on observations of scenes with different sets of interacting
objects. We propose an approach that exploits the structure of the reward
function by modeling the reward for each observation separately and using only
the parameters of the objects in that scene as inputs. The resulting
lower-dimensional models generalize better over the parameter space, which in
turn results in a faster optimization. To speed up the optimization process
further, and reduce the number of simulation runs needed to find good parameter
values, we also propose partial evaluations of the reward function, wherein the
selected parameters are evaluated on only a subset of the real-world observations.
The approach was successfully evaluated on a set of scenes with a wide range of
object interactions, and we showed that our method can effectively perform
incremental learning without resetting the rewards of the gathered
observations.
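A hedged sketch of the additive structure described above: one low-dimensional GP per observed scene, fitted on only that scene's object parameters, with a candidate scored by the sum of per-scene UCB values. The kernels, beta, and indexing scheme are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class SumGPUCB:
    def __init__(self, scene_param_idx, beta=2.0):
        self.idx = scene_param_idx            # scene -> parameter dimensions used
        self.gps = [GaussianProcessRegressor() for _ in scene_param_idx]
        self.beta = beta

    def fit(self, params, rewards):
        # params: (n, d) sampled parameters; rewards: (n, n_scenes) per-scene rewards.
        for s, gp in enumerate(self.gps):
            gp.fit(params[:, self.idx[s]], rewards[:, s])

    def ucb(self, candidates):
        total = np.zeros(len(candidates))
        for s, gp in enumerate(self.gps):
            mu, sd = gp.predict(candidates[:, self.idx[s]], return_std=True)
            total += mu + self.beta * sd      # sum of lower-dimensional UCBs
        return total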
|
[
"M. Yunus Seker",
"Oliver Kroemer"
] |
2023-10-18 07:16:06
|
http://arxiv.org/abs/2310.11749v1
|
http://arxiv.org/pdf/2310.11749v1
|
2310.11749v1
|
Unintended Memorization in Large ASR Models, and How to Mitigate It
|
It is well-known that neural networks can unintentionally memorize their
training examples, causing privacy concerns. However, auditing memorization in
large non-auto-regressive automatic speech recognition (ASR) models has been
challenging due to the high compute cost of existing methods such as hardness
calibration. In this work, we design a simple auditing method to measure
memorization in large ASR models without the extra compute overhead.
Concretely, we speed up randomly-generated utterances to create a mapping
between vocal and text information that is difficult to learn from typical
training examples. Hence, accurate predictions only for sped-up training
examples can serve as clear evidence for memorization, and the corresponding
accuracy can be used to measure memorization. Using the proposed method, we
showcase memorization in state-of-the-art ASR models. To mitigate
memorization, we employ gradient clipping during training to bound the influence
of any individual example on the final model. We empirically show that clipping
each example's gradient can mitigate memorization for sped-up training examples
with up to 16 repetitions in the training set. Furthermore, we show that in
large-scale distributed training, clipping the average gradient on each compute
core maintains neutral model quality and compute cost while providing strong
privacy protection.
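A hedged sketch of per-example gradient clipping, which bounds each example's influence before averaging (as in DP-SGD-style training); the model, loss, and batch handling are placeholders, not the paper's large-scale ASR setup.

import torch

def clipped_step(model, loss_fn, xs, ys, optimizer, clip_norm=1.0):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                  # one example at a time
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters()))
        scale = min(1.0, clip_norm / (float(norm) + 1e-12))
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale               # accumulate the clipped gradient
    for p, g in zip(model.parameters(), grads):
        p.grad = g / len(xs)                  # average of clipped per-example grads
    optimizer.step()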
|
[
"Lun Wang",
"Om Thakkar",
"Rajiv Mathews"
] |
2023-10-18 06:45:49
|
http://arxiv.org/abs/2310.11739v1
|
http://arxiv.org/pdf/2310.11739v1
|
2310.11739v1
|
Investigating Uncertainty Calibration of Aligned Language Models under the Multiple-Choice Setting
|
Despite the significant progress made in practical applications of aligned
language models (LMs), they tend to be overconfident in output answers compared
to the corresponding pre-trained LMs. In this work, we systematically evaluate
the impact of the alignment process on logit-based uncertainty calibration of
LMs under the multiple-choice setting. We first conduct a thorough empirical
study on how aligned LMs differ in calibration from their pre-trained
counterparts. Experimental results reveal that there are two distinct
uncertainties in LMs under the multiple-choice setting, which are responsible
for the answer decision and the format preference of the LMs, respectively.
Then, we investigate the role of these two uncertainties on aligned LM's
calibration through fine-tuning in simple synthetic alignment schemes and
conclude that one reason for aligned LMs' overconfidence is the conflation of
these two types of uncertainty. Furthermore, we examine the utility of common
post-hoc calibration methods for aligned LMs and propose an easy-to-implement
and sample-efficient method to calibrate aligned LMs. We hope our findings
could provide insights into the design of more reliable alignment processes for
LMs.
|
[
"Guande He",
"Peng Cui",
"Jianfei Chen",
"Wenbo Hu",
"Jun Zhu"
] |
2023-10-18 06:07:28
|
http://arxiv.org/abs/2310.11732v1
|
http://arxiv.org/pdf/2310.11732v1
|
2310.11732v1
|
Federated Heterogeneous Graph Neural Network for Privacy-preserving Recommendation
|
Heterogeneous information network (HIN), which contains rich semantics
depicted by meta-paths, has become a powerful tool to alleviate data sparsity
in recommender systems. Existing HIN-based recommendations hold the data
centralized storage assumption and conduct centralized model training. However,
the real-world data is often stored in a distributed manner for privacy
concerns, resulting in the failure of centralized HIN-based recommendations. In
this paper, we consider a setting where the HIN is partitioned into private
HINs stored on the client side and shared HINs on the server. Following this
setting, we propose a
federated heterogeneous graph neural network (FedHGNN) based framework, which
can collaboratively train a recommendation model on distributed HINs without
leaking user privacy. Specifically, we first formalize the privacy definition
in the light of differential privacy for HIN-based federated recommendation,
which aims to protect the user-item interactions of the private HIN as well as
users' high-order patterns from shared HINs. To recover the broken meta-path
based semantics caused by distributed data storage and to satisfy the proposed
privacy definition, we carefully design a semantic-preserving user interaction
publishing method, which locally perturbs users' high-order patterns as well as
related user-item interactions for publishing. After that, we propose an HGNN
model for
recommendation, which conducts node- and semantic-level aggregations to capture
recovered semantics. Extensive experiments on three datasets demonstrate our
model outperforms existing methods by a large margin (up to 34% in HR@10 and
42% in NDCG@10) under an acceptable privacy budget.
|
[
"Bo Yan",
"Yang Cao",
"Haoyu Wang",
"Wenchuan Yang",
"Junping Du",
"Chuan Shi"
] |
2023-10-18 05:59:41
|
http://arxiv.org/abs/2310.11730v1
|
http://arxiv.org/pdf/2310.11730v1
|
2310.11730v1
|
Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding
|
Chain-of-Thought (CoT) is a technique that guides Large Language Models
(LLMs) to decompose complex tasks into multi-step reasoning through
intermediate steps in natural language form. Briefly, CoT enables LLMs to think
step by step. However, although many Natural Language Understanding (NLU) tasks
also require thinking step by step, LLMs perform less well than small-scale
Masked Language Models (MLMs). To migrate CoT from LLMs to MLMs, we propose
Chain-of-Thought Tuning (CoTT), a two-step reasoning framework based on prompt
tuning, to implement step-by-step thinking for MLMs on NLU tasks. From the
perspective of CoT, CoTT's two-step framework enables MLMs to implement task
decomposition; CoTT's prompt tuning allows intermediate steps to be used in
natural language form. Thereby, the success of CoT can be extended to NLU tasks
through MLMs. To verify the effectiveness of CoTT, we conduct experiments on
two NLU tasks: hierarchical classification and relation extraction, and the
results show that CoTT outperforms baselines and achieves state-of-the-art
performance.
|
[
"Caoyun Fan",
"Jidong Tian",
"Yitian Li",
"Wenqing Chen",
"Hao He",
"Yaohui Jin"
] |
2023-10-18 05:39:20
|
http://arxiv.org/abs/2310.11721v1
|
http://arxiv.org/pdf/2310.11721v1
|
2310.11721v1
|
On the Evaluation of Generative Models in Distributed Learning Tasks
|
The evaluation of deep generative models including generative adversarial
networks (GANs) and diffusion models has been extensively studied in the
literature. While the existing evaluation methods mainly target a centralized
learning problem with training data stored by a single client, many
applications of generative models concern distributed learning settings, e.g.
the federated learning scenario, where training data are collected by and
distributed among several clients. In this paper, we study the evaluation of
generative models in distributed learning tasks with heterogeneous data
distributions. First, we focus on the Fr\'echet inception distance (FID) and
consider the following FID-based aggregate scores over the clients: 1) FID-avg
as the mean of clients' individual FID scores, 2) FID-all as the FID distance
of the trained model to the collective dataset containing all clients' data. We
prove that the model rankings according to the FID-all and FID-avg scores could
be inconsistent, which can lead to different optimal generative models
according to the two aggregate scores. Next, we consider the kernel inception
distance (KID) and similarly define the KID-avg and KID-all aggregations.
Unlike the FID case, we prove that KID-all and KID-avg result in the same
rankings of generative models. We perform several numerical experiments on
standard image datasets and training schemes to support our theoretical
findings on the evaluation of generative models in distributed learning
problems.
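A sketch of the two aggregate scores contrasted above, assuming each client's features are summarised by a Gaussian (mean, covariance) as in standard FID; names and shapes are illustrative.

import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, cov1, mu2, cov2):
    covmean = sqrtm(cov1 @ cov2).real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * covmean))

def fid_avg(model_stats, client_stats):
    # Mean of the model's FID to each client's own data.
    return float(np.mean([fid(*model_stats, *c) for c in client_stats]))

def fid_all(model_stats, client_feats):
    # FID of the model to the pooled dataset containing all clients' data.
    allf = np.concatenate(client_feats)
    return fid(*model_stats, allf.mean(axis=0), np.cov(allf, rowvar=False))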
|
[
"Zixiao Wang",
"Farzan Farnia",
"Zhenghao Lin",
"Yunheng Shen",
"Bei Yu"
] |
2023-10-18 05:06:04
|
http://arxiv.org/abs/2310.11714v1
|
http://arxiv.org/pdf/2310.11714v1
|
2310.11714v1
|
Learning under Label Proportions for Text Classification
|
We present one of the first NLP works under the challenging setup of
Learning from Label Proportions (LLP), where the data is provided in an
aggregate form called bags, with only the proportion of samples in each class
available as ground truth. This setup is in line with the desired
characteristics of training models under privacy settings and weak
supervision. By characterizing some irregularities of DLLP, the most widely
used baseline technique, we propose a novel formulation that is also robust.
This is accompanied by a learnability result that provides a generalization
bound under LLP.
Combining this formulation with a self-supervised objective, our method
achieves better results compared to the baselines in almost 87% of the
experimental configurations, which include large-scale models for both long-
and short-range texts, across multiple metrics.
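A minimal sketch of the DLLP baseline mentioned above: match the model's mean predicted class distribution within each bag to the known label proportions via cross-entropy. The shapes and smoothing constant are assumptions.

import torch

def dllp_loss(logits, bag_ids, bag_proportions):
    # logits: (n, C); bag_ids: (n,) bag index per sample; bag_proportions: (B, C).
    probs = logits.softmax(dim=-1)
    loss = 0.0
    for b in range(bag_proportions.shape[0]):
        p_hat = probs[bag_ids == b].mean(dim=0)       # predicted bag proportions
        loss = loss - (bag_proportions[b] * torch.log(p_hat + 1e-12)).sum()
    return loss / bag_proportions.shape[0]            # mean cross-entropy per bag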
|
[
"Jatin Chauhan",
"Xiaoxuan Wang",
"Wei Wang"
] |
2023-10-18 04:39:25
|
http://arxiv.org/abs/2310.11707v1
|
http://arxiv.org/pdf/2310.11707v1
|
2310.11707v1
|
Runner re-identification from single-view video in the open-world setting
|
In many sports, player re-identification is crucial for automatic video
processing and analysis. However, most of the current studies on player
re-identification in multi- or single-view sports videos focus on
re-identification in the closed-world setting using labeled image dataset, and
player re-identification in the open-world setting for automatic video analysis
is not well developed. In this paper, we propose a runner re-identification
system that directly processes single-view video to address the open-world
setting, in which no labeled dataset is available and the video must be
processed directly. The proposed system automatically processes raw video
as input to identify runners, and it can identify runners even when they are
framed out multiple times. For the automatic processing, we first detect the
runners in the video using the pre-trained YOLOv8 and the fine-tuned
EfficientNet. We then track the runners using ByteTrack and detect their shoes
with the fine-tuned YOLOv8. Finally, we extract the image features of the
runners using an unsupervised gated recurrent unit autoencoder. To improve
the accuracy of runner re-identification, we use dynamic
features of running sequence images. We evaluated the system on a running
practice video dataset and showed that the proposed method identified runners
with higher accuracy than one of the state-of-the-art models in unsupervised
re-identification. We also showed that our unsupervised running dynamic feature
extractor was effective for runner re-identification. Our runner
re-identification system can be useful for the automatic analysis of running
videos.
|
[
"Tomohiro Suzuki",
"Kazushi Tsutsui",
"Kazuya Takeda",
"Keisuke Fujii"
] |
2023-10-18 04:15:39
|
http://arxiv.org/abs/2310.11700v1
|
http://arxiv.org/pdf/2310.11700v1
|
2310.11700v1
|
Architectural Implications of GNN Aggregation Programming Abstractions
|
Graph neural networks (GNNs) have gained significant popularity due to the
powerful capability to extract useful representations from graph data. As the
need for efficient GNN computation intensifies, a variety of programming
abstractions designed for optimizing GNN Aggregation have emerged to facilitate
acceleration. However, there has been no comprehensive evaluation or analysis of
existing abstractions, and thus no clear consensus on which approach is better. In
this letter, we classify existing programming abstractions for GNN Aggregation
by the dimension of data organization and propagation method. By constructing
these abstractions on a state-of-the-art GNN library, we perform a thorough and
detailed characterization study to compare their performance and efficiency,
and provide several insights on future GNN acceleration based on our analysis.
|
[
"Yingjie Qi",
"Jianlei Yang",
"Ao Zhou",
"Tong Qiao",
"Chunming Hu"
] |
2023-10-18 04:13:48
|
http://arxiv.org/abs/2310.12184v2
|
http://arxiv.org/pdf/2310.12184v2
|
2310.12184v2
|
AUC-mixup: Deep AUC Maximization with Mixup
|
While deep AUC maximization (DAM) has shown remarkable success on imbalanced
medical tasks, e.g., chest X-ray classification and skin lesion
classification, it could suffer from severe overfitting when applied to small
datasets due to its aggressive nature of pushing prediction scores of positive
data away from that of negative data. This paper studies how to improve
generalization of DAM by mixup data augmentation -- an approach that is widely
used for improving generalization of the cross-entropy loss based deep learning
methods. However, AUC is defined over
positive and negative pairs, which makes it challenging to incorporate mixup
data augmentation into DAM algorithms. To tackle this challenge, we employ the
AUC margin loss and incorporate soft labels into the formulation to effectively
learn from data generated by mixup augmentation, which is referred to as the
AUC-mixup loss. Our experimental results demonstrate the effectiveness of the
proposed AUC-mixup methods on imbalanced benchmark and medical image datasets
compared to standard DAM training methods.
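A hedged sketch of the idea: mixup yields soft labels, and a pairwise AUC surrogate weights each (i, j) pair by its soft positive/negative mass. The squared-hinge form below is a simplification; the paper's AUC margin loss differs in detail.

import torch

def mixup(x, y, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

def soft_auc_loss(scores, soft_y, margin=1.0):
    # soft_y in [0, 1] is each mixed example's positive mass.
    diff = margin - (scores[:, None] - scores[None, :])   # pairwise margins
    w = soft_y[:, None] * (1 - soft_y)[None, :]           # pos-vs-neg pair weight
    return (w * diff.clamp(min=0) ** 2).sum() / (w.sum() + 1e-12)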
|
[
"Jianzhi Xv",
"Gang Li",
"Tianbao Yang"
] |
2023-10-18 03:43:11
|
http://arxiv.org/abs/2310.11693v1
|
http://arxiv.org/pdf/2310.11693v1
|
2310.11693v1
|
Deep learning based on Transformer architecture for power system short-term voltage stability assessment with class imbalance
|
Most existing data-driven power system short-term voltage stability
assessment (STVSA) approaches presume class-balanced input data. However, in
practical applications, the occurrence of short-term voltage instability
following a disturbance is minimal, leading to a significant class imbalance
problem and a consequent decline in classifier performance. This work proposes
a Transformer-based STVSA method to address this challenge. By utilizing the
basic Transformer architecture, a stability assessment Transformer (StaaT) is
developed {as a classification model to reflect the correlation between the
operational states of the system and the resulting stability outcomes}. To
combat the negative impact of imbalanced datasets, this work employs a
conditional Wasserstein generative adversarial network with gradient penalty
(CWGAN-GP) for synthetic data generation, aiding in the creation of a balanced,
representative training set for the classifier. Semi-supervised clustering
learning is implemented to enhance clustering quality, addressing the lack of a
unified quantitative criterion for short-term voltage stability. Numerical
tests on the IEEE 39-bus test system extensively demonstrate that the proposed
method exhibits robust performance under class imbalances up to 100:1 and noisy
environments, and maintains consistent effectiveness even with an increased
penetration of renewable energy. Comparative results reveal that the CWGAN-GP
generates more balanced datasets than traditional oversampling methods and that
the StaaT outperforms other deep learning algorithms. This study presents a
compelling solution for real-world STVSA applications that often face class
imbalance and data noise challenges.
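A minimal sketch of the gradient penalty at the heart of CWGAN-GP training, shown here without the conditioning inputs for brevity; real and fake samples are assumed to be flat feature vectors.

import torch

def gradient_penalty(critic, real, fake):
    eps = torch.rand(real.size(0), 1, device=real.device)
    inter = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()    # push gradient norm to 1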
|
[
"Yang Li",
"Jiting Cao",
"Yan Xu",
"Lipeng Zhu",
"Zhao Yang Dong"
] |
2023-10-18 03:36:10
|
http://arxiv.org/abs/2310.11690v1
|
http://arxiv.org/pdf/2310.11690v1
|
2310.11690v1
|
Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
|
Large language models (LLMs) have recently shown great advances in a variety
of tasks, including natural language understanding and generation. However,
their use in high-stakes decision-making scenarios is still limited due to the
potential for errors. Selective prediction is a technique that can be used to
improve the reliability of the LLMs by allowing them to abstain from making
predictions when they are unsure of the answer. In this work, we propose a
novel framework for adaptation with self-evaluation to improve the selective
prediction performance of LLMs. Our framework is based on the idea of using
parameter-efficient tuning to adapt the LLM to the specific task at hand while
improving its ability to perform self-evaluation. We evaluate our method on a
variety of question-answering (QA) datasets and show that it outperforms
state-of-the-art selective prediction methods. For example, on the CoQA
benchmark, our method improves the AUACC from 91.23% to 92.63% and improves the
AUROC from 74.61% to 80.25%.
|
[
"Jiefeng Chen",
"Jinsung Yoon",
"Sayna Ebrahimi",
"Sercan O Arik",
"Tomas Pfister",
"Somesh Jha"
] |
2023-10-18 03:34:59
|
http://arxiv.org/abs/2310.11689v1
|
http://arxiv.org/pdf/2310.11689v1
|
2310.11689v1
|
Superiority of Softmax: Unveiling the Performance Edge Over Linear Attention
|
Large transformer models have achieved state-of-the-art results in numerous
natural language processing tasks. Among the pivotal components of the
transformer architecture, the attention mechanism plays a crucial role in
capturing token interactions within sequences through the softmax function.
Conversely, linear attention presents a more computationally efficient
alternative by approximating the softmax operation with linear complexity.
However, it exhibits substantial performance degradation when compared to the
traditional softmax attention mechanism.
In this paper, we bridge the gap in our theoretical understanding of the
reasons behind the practical performance gap between softmax and linear
attention. By conducting a comprehensive comparative analysis of these two
attention mechanisms, we shed light on the underlying reasons for why softmax
attention outperforms linear attention in most scenarios.
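A sketch contrasting the two mechanisms discussed above: softmax attention materialises an explicit (n x n) matrix, while linear attention with a kernel feature map phi reorders the computation to avoid it; phi = elu(x) + 1 is one common choice and an assumption here.

import numpy as np

def softmax_attention(Q, K, V):
    A = Q @ K.T / np.sqrt(Q.shape[-1])
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    return (A / A.sum(axis=-1, keepdims=True)) @ V    # O(n^2) time and memory

def linear_attention(Q, K, V,
                     phi=lambda x: np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0)))):
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                                     # (d, d_v): linear in n
    Z = Qp @ Kp.sum(axis=0)                           # per-query normaliser
    return (Qp @ KV) / Z[:, None]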
|
[
"Yichuan Deng",
"Zhao Song",
"Tianyi Zhou"
] |
2023-10-18 03:17:57
|
http://arxiv.org/abs/2310.11685v1
|
http://arxiv.org/pdf/2310.11685v1
|
2310.11685v1
|
Quantum Acceleration of Infinite Horizon Average-Reward Reinforcement Learning
|
This paper investigates the potential of quantum acceleration in addressing
infinite horizon Markov Decision Processes (MDPs) to enhance average reward
outcomes. We introduce an innovative quantum framework for the agent's
engagement with an unknown MDP, extending the conventional interaction
paradigm. Our approach involves the design of an optimism-driven tabular
Reinforcement Learning algorithm that harnesses quantum signals acquired by the
agent through efficient quantum mean estimation techniques. Through thorough
theoretical analysis, we demonstrate that the quantum advantage in mean
estimation leads to exponential advancements in regret guarantees for infinite
horizon Reinforcement Learning. Specifically, the proposed Quantum algorithm
achieves a regret bound of $\tilde{\mathcal{O}}(1)$, a significant improvement
over the $\tilde{\mathcal{O}}(\sqrt{T})$ bound exhibited by classical
counterparts.
|
[
"Bhargav Ganguly",
"Vaneet Aggarwal"
] |
2023-10-18 03:17:51
|
http://arxiv.org/abs/2310.11684v1
|
http://arxiv.org/pdf/2310.11684v1
|
2310.11684v1
|
Using Experience Classification for Training Non-Markovian Tasks
|
Unlike the standard Reinforcement Learning (RL) model, many real-world tasks
are non-Markovian, whose rewards are predicated on state history rather than
solely on the current state. Solving non-Markovian tasks, which frequently
arise in practical applications such as autonomous driving, financial trading,
and medical diagnosis, can be quite challenging. We propose a novel RL approach to
achieve non-Markovian rewards expressed in temporal logic LTL$_f$ (Linear
Temporal Logic over Finite Traces). To this end, an encoding of linear
complexity from LTL$_f$ into MDPs (Markov Decision Processes) is introduced to
take advantage of advanced RL algorithms. Then, a prioritized experience replay
technique based on the automata structure (semantics equivalent to LTL$_f$
specification) is utilized to improve the training process. We empirically
evaluate several benchmark problems augmented with non-Markovian tasks to
demonstrate the feasibility and effectiveness of our approach.
|
[
"Ruixuan Miao",
"Xu Lu",
"Cong Tian",
"Bin Yu",
"Zhenhua Duan"
] |
2023-10-18 03:00:59
|
http://arxiv.org/abs/2310.11678v1
|
http://arxiv.org/pdf/2310.11678v1
|
2310.11678v1
|
Improved Sample Complexity Analysis of Natural Policy Gradient Algorithm with General Parameterization for Infinite Horizon Discounted Reward Markov Decision Processes
|
We consider the problem of designing sample efficient learning algorithms for
infinite horizon discounted reward Markov Decision Process. Specifically, we
propose the Accelerated Natural Policy Gradient (ANPG) algorithm that utilizes
an accelerated stochastic gradient descent process to obtain the natural policy
gradient. ANPG achieves $\mathcal{O}({\epsilon^{-2}})$ sample complexity and
$\mathcal{O}(\epsilon^{-1})$ iteration complexity with general parameterization
where $\epsilon$ defines the optimality error. This improves the
state-of-the-art sample complexity by a $\log(\frac{1}{\epsilon})$ factor. ANPG
is a first-order algorithm and unlike some existing literature, does not
require the unverifiable assumption that the variance of importance sampling
(IS) weights is upper bounded. In the class of Hessian-free and IS-free
algorithms, ANPG beats the best-known sample complexity by a factor of
$\mathcal{O}(\epsilon^{-\frac{1}{2}})$ and simultaneously matches their
state-of-the-art iteration complexity.
|
[
"Washim Uddin Mondal",
"Vaneet Aggarwal"
] |
2023-10-18 03:00:15
|
http://arxiv.org/abs/2310.11677v1
|
http://arxiv.org/pdf/2310.11677v1
|
2310.11677v1
|
PREM: A Simple Yet Effective Approach for Node-Level Graph Anomaly Detection
|
Node-level graph anomaly detection (GAD) plays a critical role in identifying
anomalous nodes from graph-structured data in various domains such as medicine,
social networks, and e-commerce. However, challenges have arisen due to the
diversity of anomalies and the dearth of labeled data. Existing methodologies -
reconstruction-based and contrastive learning - while effective, often suffer
from efficiency issues, stemming from their complex objectives and elaborate
modules. To improve the efficiency of GAD, we introduce a simple method termed
PREprocessing and Matching (PREM for short). Our approach streamlines GAD,
reducing time and memory consumption while maintaining powerful anomaly
detection capabilities. Comprising two modules - a pre-processing module and an
ego-neighbor matching module - PREM eliminates the necessity for
message-passing propagation during training, and employs a simple contrastive
loss, leading to considerable reductions in training time and memory usage.
Moreover, through rigorous evaluations of five real-world datasets, our method
demonstrated robustness and effectiveness. Notably, when validated on the ACM
dataset, PREM achieved a 5% improvement in AUC, a 9-fold increase in training
speed, and sharply reduced memory usage compared to the most efficient baseline.
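A hedged sketch of the ego-neighbor matching intuition: score a node by how poorly its embedding agrees with its aggregated neighbourhood embedding. The scoring rule below is an illustrative simplification, not PREM's exact objective.

import numpy as np

def anomaly_scores(X, adj):
    # X: (n, d) pre-processed node embeddings; adj: (n, n) binary adjacency.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    nbr = (adj @ Xn) / adj.sum(axis=1, keepdims=True).clip(min=1)
    nbr /= np.linalg.norm(nbr, axis=1, keepdims=True) + 1e-12
    return -(Xn * nbr).sum(axis=1)   # low ego-neighbour agreement => anomalous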
|
[
"Junjun Pan",
"Yixin Liu",
"Yizhen Zheng",
"Shirui Pan"
] |
2023-10-18 02:59:57
|
http://arxiv.org/abs/2310.11676v1
|
http://arxiv.org/pdf/2310.11676v1
|
2310.11676v1
|
SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents
|
Humans are social beings; we pursue social goals in our daily interactions,
which is a crucial aspect of social intelligence. Yet, AI systems' abilities in
this realm remain elusive. We present SOTOPIA, an open-ended environment to
simulate complex social interactions between artificial agents and evaluate
their social intelligence. In our environment, agents role-play and interact
under a wide variety of scenarios; they coordinate, collaborate, exchange, and
compete with each other to achieve complex social goals. We simulate the
role-play interaction between LLM-based agents and humans within this task
space and evaluate their performance with a holistic evaluation framework
called SOTOPIA-Eval. With SOTOPIA, we find significant differences between
these models in terms of their social intelligence, and we identify a subset of
SOTOPIA scenarios, SOTOPIA-hard, that is generally challenging for all models.
We find that on this subset, GPT-4 achieves a significantly lower goal
completion rate than humans and struggles to exhibit social commonsense
reasoning and strategic communication skills. These findings demonstrate
SOTOPIA's promise as a general platform for research on evaluating and
improving social intelligence in artificial agents.
|
[
"Xuhui Zhou",
"Hao Zhu",
"Leena Mathur",
"Ruohong Zhang",
"Haofei Yu",
"Zhengyang Qi",
"Louis-Philippe Morency",
"Yonatan Bisk",
"Daniel Fried",
"Graham Neubig",
"Maarten Sap"
] |
2023-10-18 02:27:01
|
http://arxiv.org/abs/2310.11667v1
|
http://arxiv.org/pdf/2310.11667v1
|
2310.11667v1
|
Hetero$^2$Net: Heterophily-aware Representation Learning on Heterogeneous Graphs
|
Real-world graphs are typically complex, exhibiting heterogeneity in the
global structure, as well as strong heterophily within local neighborhoods.
While a growing body of literature has revealed the limitations of common graph
neural networks (GNNs) in handling homogeneous graphs with heterophily, little
work has been conducted on investigating the heterophily properties in the
context of heterogeneous graphs. To bridge this research gap, we identify the
heterophily in heterogeneous graphs using metapaths and propose two practical
metrics to quantitatively describe the levels of heterophily. Through in-depth
investigations on several real-world heterogeneous graphs exhibiting varying
levels of heterophily, we have observed that heterogeneous graph neural
networks (HGNNs), which inherit many mechanisms from GNNs designed for
homogeneous graphs, fail to generalize to heterogeneous graphs with heterophily
or a low level of homophily. To address this challenge, we present Hetero$^2$Net,
a heterophily-aware HGNN that incorporates both masked metapath prediction and
masked label prediction tasks to effectively and flexibly handle both
homophilic and heterophilic heterogeneous graphs. We evaluate the performance
of Hetero$^2$Net on five real-world heterogeneous graph benchmarks with varying
levels of heterophily. The results demonstrate that Hetero$^2$Net outperforms
strong baselines in the semi-supervised node classification task, providing
valuable insights into effectively handling more complex heterogeneous graphs.
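A hedged sketch of a metapath-based homophily ratio in the spirit of the metrics described above (the paper's definitions may differ): the fraction of metapath-connected node pairs that share a label.

import numpy as np

def metapath_homophily(pairs, labels):
    # pairs: (m, 2) endpoint indices of metapath instances between labeled nodes.
    same = labels[pairs[:, 0]] == labels[pairs[:, 1]]
    return float(same.mean())   # low values indicate metapath-level heterophily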
|
[
"Jintang Li",
"Zheng Wei",
"Jiawang Dan",
"Jing Zhou",
"Yuchang Zhu",
"Ruofan Wu",
"Baokun Wang",
"Zhang Zhen",
"Changhua Meng",
"Hong Jin",
"Zibin Zheng",
"Liang Chen"
] |
2023-10-18 02:19:12
|
http://arxiv.org/abs/2310.11664v1
|
http://arxiv.org/pdf/2310.11664v1
|
2310.11664v1
|
Subject-specific Deep Neural Networks for Count Data with High-cardinality Categorical Features
|
There is a growing interest in subject-specific predictions using deep neural
networks (DNNs) because real-world data often exhibit correlations, which has
been typically overlooked in traditional DNN frameworks. In this paper, we
propose a novel hierarchical likelihood learning framework for introducing
gamma random effects into the Poisson DNN, so as to improve the prediction
performance by capturing both nonlinear effects of input variables and
subject-specific cluster effects. The proposed method simultaneously yields
maximum likelihood estimators for fixed parameters and best unbiased predictors
for random effects by optimizing a single objective function. This approach
enables a fast end-to-end algorithm for handling clustered count data, which
often involve high-cardinality categorical features. Furthermore,
state-of-the-art network architectures can be easily implemented into the
proposed h-likelihood framework. As an example, we introduce a multi-head
attention layer and a sparsemax function, which allow feature selection in
high-dimensional settings. To enhance practical performance and learning
efficiency, we present an adjustment procedure for prediction of random
parameters and a method-of-moments estimator for pretraining of variance
component. Various experimental studies and real-data analyses confirm the
advantages of our proposed methods.
|
[
"Hangbin Lee",
"Il Do Ha",
"Changha Hwang",
"Youngjo Lee"
] |
2023-10-18 01:54:48
|
http://arxiv.org/abs/2310.11654v1
|
http://arxiv.org/pdf/2310.11654v1
|
2310.11654v1
|
Free-text Keystroke Authentication using Transformers: A Comparative Study of Architectures and Loss Functions
|
Keystroke biometrics is a promising approach for user identification and
verification, leveraging the unique patterns in individuals' typing behavior.
In this paper, we propose a Transformer-based network that employs
self-attention to extract informative features from keystroke sequences,
surpassing the performance of traditional Recurrent Neural Networks. We explore
two distinct architectures, namely bi-encoder and cross-encoder, and compare
their effectiveness in keystroke authentication. Furthermore, we investigate
different loss functions, including triplet, batch-all triplet, and WDCL loss,
along with various distance metrics such as Euclidean, Manhattan, and cosine
distances. These experiments allow us to optimize the training process and
enhance the performance of our model. To evaluate our proposed model, we employ
the Aalto desktop keystroke dataset. The results demonstrate that the
bi-encoder architecture with batch-all triplet loss and cosine distance
achieves the best performance, yielding an exceptional Equal Error Rate of
0.0186%. Furthermore, alternative algorithms for calculating similarity scores
are explored to enhance accuracy. Notably, the utilization of a one-class
Support Vector Machine reduces the Equal Error Rate to an impressive 0.0163%.
The outcomes of this study indicate that our model surpasses the previous
state-of-the-art in free-text keystroke authentication. These findings
contribute to advancing the field of keystroke authentication and offer
practical implications for secure user verification systems.
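A minimal sketch of the triplet objective with cosine distance that performed best in the comparison above; the margin, encoder, and batching scheme (batch-all mining is omitted) are illustrative assumptions.

import torch
import torch.nn.functional as F

def cosine_triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = 1 - F.cosine_similarity(anchor, positive)   # cosine distance
    d_an = 1 - F.cosine_similarity(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()         # pull genuine pairs closer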
|
[
"Saleh Momeni",
"Bagher BabaAli"
] |
2023-10-18 00:34:26
|
http://arxiv.org/abs/2310.11640v1
|
http://arxiv.org/pdf/2310.11640v1
|
2310.11640v1
|
Balance Act: Mitigating Hubness in Cross-Modal Retrieval with Query and Gallery Banks
|
In this work, we present a post-processing solution to address the hubness
problem in cross-modal retrieval, a phenomenon where a small number of gallery
data points are frequently retrieved, resulting in a decline in retrieval
performance. We first theoretically demonstrate the necessity of incorporating
both the gallery and query data for addressing hubness, as hubs always exhibit
high similarity with both gallery and query data. Second, building on our
theoretical results, we propose a novel framework, Dual Bank Normalization
(DBNorm). While previous work has attempted to alleviate hubness by only
utilizing the query samples, DBNorm leverages two banks constructed from the
query and gallery samples to reduce the occurrence of hubs during inference.
Next, to complement DBNorm, we introduce two novel methods, dual inverted
softmax and dual dynamic inverted softmax, for normalizing similarity based on
the two banks. Specifically, our proposed methods reduce the similarity between
hubs and queries while improving the similarity between non-hubs and queries.
Finally, we present extensive experimental results on diverse language-grounded
benchmarks, including text-image, text-video, and text-audio, demonstrating the
superior performance of our approaches compared to previous methods in
addressing hubness and boosting retrieval performance. Our code is available at
https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval.
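A hedged sketch of bank-based similarity normalisation: an inverted softmax discounts gallery items that are similar to many bank entries (the hubs), here combining penalties from a query bank and a gallery bank. The exact DBNorm and dual (dynamic) inverted softmax formulas may differ from this sketch.

import numpy as np

def dbnorm_scores(sim, query_bank_sim, gallery_bank_sim, temp=0.01):
    # sim: (n_query, n_gallery); *_bank_sim: (n_bank, n_gallery) bank-to-gallery.
    penalty = (np.exp(query_bank_sim / temp).sum(axis=0) +
               np.exp(gallery_bank_sim / temp).sum(axis=0))  # hubness per item
    return np.exp(sim / temp) / penalty[None, :]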
|
[
"Yimu Wang",
"Xiangru Jian",
"Bo Xue"
] |
2023-10-17 22:10:17
|
http://arxiv.org/abs/2310.11612v1
|
http://arxiv.org/pdf/2310.11612v1
|
2310.11612v1
|
In defense of parameter sharing for model-compression
|
When considering a model architecture, there are several ways to reduce its
memory footprint. Historically, popular approaches included selecting smaller
architectures and creating sparse networks through pruning. More recently,
randomized parameter-sharing (RPS) methods have gained traction for model
compression at the start of training. In this paper, we comprehensively assess the
trade-off between memory and accuracy across RPS, pruning techniques, and
building smaller models. Our findings demonstrate that RPS, which is both data-
and model-agnostic, consistently outperforms or matches smaller models and all
moderately informed pruning strategies, such as MAG, SNIP, SYNFLOW, and GRASP,
across the entire compression range. This advantage becomes particularly
pronounced in higher compression scenarios. Notably, even when compared to
highly informed pruning techniques like Lottery Ticket Rewinding (LTR), RPS
exhibits superior performance in high compression settings. This points to an
inherent capacity advantage that RPS enjoys over sparse models. Theoretically,
we establish RPS as a superior technique in terms of memory-efficient
representation when compared to pruning for linear models. This paper argues
in favor of a paradigm shift towards RPS-based models. During our rigorous
evaluation of RPS, we identified issues in the state-of-the-art RPS technique
ROAST, specifically regarding stability (ROAST's sensitivity to initialization
hyperparameters, often leading to divergence) and Pareto-continuity (ROAST's
inability to recover the accuracy of the original model at zero compression).
We provably address both of these issues. We refer to the modified RPS, which
incorporates our improvements, as STABLE-RPS.
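A hedged sketch of randomized parameter sharing in the style of hashing-based compression: every virtual weight is looked up in a small shared bank through a fixed random map with random signs. ROAST and STABLE-RPS add refinements not shown here.

import torch

class RPSLinear(torch.nn.Module):
    def __init__(self, d_in, d_out, n_shared):
        super().__init__()
        self.bank = torch.nn.Parameter(torch.randn(n_shared) * 0.01)
        g = torch.Generator().manual_seed(0)          # fixed random mapping
        self.register_buffer(
            "idx", torch.randint(n_shared, (d_out, d_in), generator=g))
        self.register_buffer(                         # random +/-1 signs
            "sign", torch.randint(0, 2, (d_out, d_in), generator=g) * 2.0 - 1.0)

    def forward(self, x):
        W = self.bank[self.idx] * self.sign           # materialise virtual weights
        return x @ W.T

layer = RPSLinear(256, 128, n_shared=4096)            # ~8x fewer parameters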
|
[
"Aditya Desai",
"Anshumali Shrivastava"
] |
2023-10-17 22:08:01
|
http://arxiv.org/abs/2310.11611v1
|
http://arxiv.org/pdf/2310.11611v1
|
2310.11611v1
|
Reflection-Equivariant Diffusion for 3D Structure Determination from Isotopologue Rotational Spectra in Natural Abundance
|
Structure determination is necessary to identify unknown organic molecules,
such as those in natural products, forensic samples, the interstellar medium,
and laboratory syntheses. Rotational spectroscopy enables structure
determination by providing accurate 3D information about small organic
molecules via their moments of inertia. Using these moments, Kraitchman
analysis determines isotopic substitution coordinates, which are the unsigned
$|x|,|y|,|z|$ coordinates of all atoms with natural isotopic abundance,
including carbon, nitrogen, and oxygen. While unsigned substitution coordinates
can verify guesses of structures, the missing $+/-$ signs make it challenging
to determine the actual structure from the substitution coordinates alone. To
tackle this inverse problem, we develop KREED (Kraitchman
REflection-Equivariant Diffusion), a generative diffusion model that infers a
molecule's complete 3D structure from its molecular formula, moments of
inertia, and unsigned substitution coordinates of heavy atoms. KREED's top-1
predictions identify the correct 3D structure with >98% accuracy on the QM9 and
GEOM datasets when provided with substitution coordinates of all heavy atoms
with natural isotopic abundance. When substitution coordinates are restricted
to only a subset of carbons, accuracy is retained at 91% on QM9 and 32% on
GEOM. On a test set of experimentally measured substitution coordinates
gathered from the literature, KREED predicts the correct all-atom 3D structure
in 25 of 33 cases, demonstrating experimental applicability for context-free 3D
structure determination with rotational spectroscopy.
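A sketch of the consistency check implied by the inverse problem above: a candidate 3D structure (in the principal-axis frame) should reproduce the unsigned |x|,|y|,|z| substitution coordinates. Sorting each axis independently gives a simple necessary-condition test; the tolerance is an assumption.

import numpy as np

def matches_substitution(candidate_xyz, unsigned_coords, tol=0.05):
    # candidate_xyz, unsigned_coords: (n_atoms, 3) arrays in Angstroms.
    a = np.sort(np.abs(candidate_xyz), axis=0)   # order-invariant per axis
    b = np.sort(np.abs(unsigned_coords), axis=0)
    return bool(np.all(np.abs(a - b) < tol))     # necessary, not sufficient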
|
[
"Austin Cheng",
"Alston Lo",
"Santiago Miret",
"Brooks Pate",
"Alán Aspuru-Guzik"
] |
2023-10-17 22:05:11
|
http://arxiv.org/abs/2310.11609v1
|
http://arxiv.org/pdf/2310.11609v1
|
2310.11609v1
|
TK-KNN: A Balanced Distance-Based Pseudo Labeling Approach for Semi-Supervised Intent Classification
|
The ability to detect intent in dialogue systems has become increasingly
important in modern technology. These systems often generate a large amount of
unlabeled data, and manually labeling this data requires substantial human
effort. Semi-supervised methods attempt to remedy this cost by using a model
trained on a few labeled examples and then assigning pseudo-labels to a further
subset of unlabeled examples whose model prediction confidence exceeds a
certain threshold. However, one particularly perilous consequence
of these methods is the risk of picking an imbalanced set of examples across
classes, which could lead to poor labels. In the present work, we describe
Top-K K-Nearest Neighbor (TK-KNN), which uses a more robust pseudo-labeling
approach based on distance in the embedding space while maintaining a balanced
set of pseudo-labeled examples across classes through a ranking-based selection.
Experiments on several datasets show that TK-KNN outperforms existing models,
particularly when labeled data is scarce on popular datasets such as CLINC150
and Banking77. Code is available at https://github.com/ServiceNow/tk-knn
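A hedged sketch of the balanced selection idea: per class, pseudo-label only the K unlabeled points closest in embedding space to that class's labeled examples, keeping the pseudo-labeled set balanced. TK-KNN's exact ranking rule may differ.

import numpy as np

def tk_knn_select(unlab_emb, lab_emb, lab_y, k=10):
    pseudo = {}
    for c in np.unique(lab_y):
        # Distance of each unlabeled point to its nearest labeled example of c.
        d = np.linalg.norm(
            unlab_emb[:, None, :] - lab_emb[lab_y == c][None, :, :], axis=-1)
        pseudo[c] = np.argsort(d.min(axis=1))[:k]    # top-K closest per class
    return pseudo   # class -> indices of newly pseudo-labeled points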
|
[
"Nicholas Botzer",
"David Vasquez",
"Tim Weninger",
"Issam Laradji"
] |
2023-10-17 22:00:42
|
http://arxiv.org/abs/2310.11607v1
|
http://arxiv.org/pdf/2310.11607v1
|
2310.11607v1
|
DIAR: Deep Image Alignment and Reconstruction using Swin Transformers
|
When taking images of some occluded content, one is often faced with the
problem that every individual image frame contains unwanted artifacts, but a
collection of images contains all relevant information if properly aligned and
aggregated. In this paper, we attempt to build a deep learning pipeline that
simultaneously aligns a sequence of distorted images and reconstructs them. We
create a dataset that contains images with image distortions, such as lighting,
specularities, shadows, and occlusion. We create perspective distortions with
corresponding ground-truth homographies as labels. We use our dataset to train
Swin transformer models to analyze sequential image data. The attention maps
enable the model to detect relevant image content and differentiate it from
outliers and artifacts. We further explore using neural feature maps as
alternatives to classical key point detectors. The feature maps of trained
convolutional layers provide dense image descriptors that can be used to find
point correspondences between images. We utilize this to compute coarse image
alignments and explore its limitations.
|
[
"Monika Kwiatkowski",
"Simon Matern",
"Olaf Hellwich"
] |
2023-10-17 21:59:45
|
http://arxiv.org/abs/2310.11605v1
|
http://arxiv.org/pdf/2310.11605v1
|
2310.11605v1
|
Language Models as Zero-Shot Trajectory Generators
|
Large Language Models (LLMs) have recently shown promise as high-level
planners for robots when given access to a selection of low-level skills.
However, it is often assumed that LLMs do not possess sufficient knowledge to
be used for the low-level trajectories themselves. In this work, we address
this assumption thoroughly, and investigate if an LLM (GPT-4) can directly
predict a dense sequence of end-effector poses for manipulation skills, when
given access to only object detection and segmentation vision models. We study
how well a single task-agnostic prompt, without any in-context examples, motion
primitives, or external trajectory optimisers, can perform across 26 real-world
language-based tasks, such as "open the bottle cap" and "wipe the plate with
the sponge", and we investigate which design choices in this prompt are the
most effective. Our conclusions challenge the assumed limits of LLMs for robotics,
and we reveal for the first time that LLMs do indeed possess an understanding
of low-level robot control sufficient for a range of common tasks, and that
they can additionally detect failures and then re-plan trajectories
accordingly. Videos, code, and prompts are available at:
https://www.robot-learning.uk/language-models-trajectory-generators.
|
[
"Teyun Kwon",
"Norman Di Palo",
"Edward Johns"
] |
2023-10-17 21:57:36
|
http://arxiv.org/abs/2310.11604v1
|
http://arxiv.org/pdf/2310.11604v1
|
2310.11604v1
|
Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning
|
In today's data-driven landscape, the delicate equilibrium between
safeguarding user privacy and unleashing data potential stands as a paramount
concern. Federated learning, which enables collaborative model training without
necessitating data sharing, has emerged as a privacy-centric solution. This
decentralized approach brings forth security challenges, notably poisoning and
backdoor attacks where malicious entities inject corrupted data. Our research,
initially spurred by test-time evasion attacks, investigates the intersection
of adversarial training and backdoor attacks within federated learning,
introducing Adversarial Robustness Unhardening (ARU). ARU is employed by a
subset of adversaries to intentionally undermine model robustness during
decentralized training, rendering models susceptible to a broader range of
evasion attacks. We present extensive empirical experiments evaluating ARU's
impact on adversarial training and existing robust aggregation defenses against
poisoning and backdoor attacks. Our findings inform strategies for enhancing
ARU to counter current defensive measures and highlight the limitations of
existing defenses, offering insights into bolstering defenses against ARU.
|
[
"Taejin Kim",
"Jiarui Li",
"Shubhranshu Singh",
"Nikhil Madaan",
"Carlee Joe-Wong"
] |
2023-10-17 21:38:41
|
http://arxiv.org/abs/2310.11594v2
|
http://arxiv.org/pdf/2310.11594v2
|
2310.11594v2
|
Automated Evaluation of Personalized Text Generation using Large Language Models
|
Personalized text generation presents a specialized mechanism for delivering
content that is specific to a user's personal context. While the research
progress in this area has been rapid, evaluation still presents a challenge.
Traditional automated metrics such as BLEU and ROUGE primarily measure lexical
similarity to human-written references, and are not able to distinguish
personalization from other subtle semantic aspects, thus falling short of
capturing the nuances of personalized generated content quality. On the other
hand, human judgments are costly to obtain, especially in the realm of
personalized evaluation. Inspired by these challenges, we explore the use of
large language models (LLMs) for evaluating personalized text generation, and
examine their ability to understand nuanced user context. We present AuPEL, a
novel evaluation method that distills three major semantic aspects of the
generated text: personalization, quality and relevance, and automatically
measures these aspects. To validate the effectiveness of AuPEL, we design
carefully controlled experiments and compare the accuracy of the evaluation
judgments made by LLMs versus those made by human annotators, and
conduct rigorous analyses of the consistency and sensitivity of the proposed
metric. We find that, compared to existing evaluation metrics, AuPEL not only
distinguishes and ranks models based on their personalization abilities more
accurately, but also presents commendable consistency and efficiency for this
task. Our work suggests that using LLMs as the evaluators of personalized text
generation is superior to traditional text similarity metrics, even though
interesting new challenges still remain.
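
A sketch of the LLM-as-evaluator pattern AuPEL instantiates, scoring the three
aspects named above. Here `call_llm` is a hypothetical stand-in for any
chat-completion API, and the prompt wording and 1-10 scale are illustrative
assumptions rather than the paper's templates.

```python
ASPECTS = {
    "personalization": "How well is the text tailored to this user's context?",
    "quality": "How fluent, coherent, and well-written is the text?",
    "relevance": "How relevant is the text to the requested topic?",
}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any chat-completion API")

def aupel_style_scores(user_context: str, generated_text: str) -> dict:
    # Query the judge LLM once per aspect and parse an integer score.
    scores = {}
    for aspect, question in ASPECTS.items():
        prompt = (
            f"User context:\n{user_context}\n\n"
            f"Generated text:\n{generated_text}\n\n"
            f"{question} Answer with a single integer from 1 to 10."
        )
        scores[aspect] = int(call_llm(prompt).strip())
    return scores
```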
|
[
"Yaqing Wang",
"Jiepu Jiang",
"Mingyang Zhang",
"Cheng Li",
"Yi Liang",
"Qiaozhu Mei",
"Michael Bendersky"
] |
2023-10-17 21:35:06
|
http://arxiv.org/abs/2310.11593v1
|
http://arxiv.org/pdf/2310.11593v1
|
2310.11593v1
|
Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios
|
Human impressions of robot performance are often measured through surveys. As
a more scalable and cost-effective alternative, we study the possibility of
predicting people's impressions of robot behavior using non-verbal behavioral
cues and machine learning techniques. To this end, we first contribute the SEAN
TOGETHER Dataset consisting of observations of an interaction between a person
and a mobile robot in a Virtual Reality simulation, together with impressions
of robot performance provided by users on a 5-point scale. Second, we
contribute analyses of how well humans and supervised learning techniques can
predict perceived robot performance based on different combinations of
observation types (e.g., facial, spatial, and map features). Our results show
that facial expressions alone provide useful information about human
impressions of robot performance; but in the navigation scenarios we tested,
spatial features are the most critical piece of information for this inference
task. Also, when evaluating results as binary classification (rather than
multiclass classification), the F1-Score of human predictions and machine
learning models more than doubles, showing that both are better at telling the
directionality of robot performance than predicting exact performance ratings.
Based on our findings, we provide guidelines for implementing these prediction
models in real-world navigation scenarios.
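
A minimal sketch of the kind of supervised comparison described above,
assuming per-interaction feature vectors and 5-point ratings; the
random-forest probe and the rule binarizing ratings at the scale midpoint are
illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def evaluate_feature_combo(X, y_rating):
    # Multiclass task: predict the exact 1-5 rating.
    # Binary task: was performance rated above the midpoint (assumption)?
    results = {}
    for name, y in [("multiclass", y_rating),
                    ("binary", (y_rating > 3).astype(int))]:
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(Xtr, ytr)
        avg = "binary" if name == "binary" else "macro"
        results[name] = f1_score(yte, clf.predict(Xte), average=avg)
    return results

# Usage: compare e.g. facial-only features against facial + spatial features
# by calling evaluate_feature_combo on each stacked feature matrix.
```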
|
[
"Qiping Zhang",
"Nathan Tsoi",
"Booyeon Choi",
"Jie Tan",
"Hao-Tien Lewis Chiang",
"Marynel Vázquez"
] |
2023-10-17 21:12:32
|
http://arxiv.org/abs/2310.11590v1
|
http://arxiv.org/pdf/2310.11590v1
|
2310.11590v1
|
Eliciting Human Preferences with Language Models
|
Language models (LMs) can be directed to perform target tasks by using
labeled examples or natural language prompts. But selecting examples or writing
prompts can be challenging, especially in tasks that involve unusual edge
cases, demand precise articulation of nebulous preferences, or require an
accurate mental model of LM behavior. We propose to use *LMs themselves* to
guide the task specification process. In this paper, we introduce **Generative
Active Task Elicitation (GATE)**: a learning framework in which models elicit
and infer intended behavior through free-form, language-based interaction with
users. We study GATE in three domains: email validation, content
recommendation, and moral reasoning. In preregistered experiments, we show that
LMs prompted to perform GATE (e.g., by generating open-ended questions or
synthesizing informative edge cases) elicit responses that are often more
informative than user-written prompts or labels. Users report that interactive
task elicitation requires less effort than prompting or example labeling and
surfaces novel considerations not initially anticipated by users. Our findings
suggest that LM-driven elicitation can be a powerful tool for aligning models
to complex human preferences and values.
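
A sketch of a GATE-style interaction loop: the LM generates elicitation
questions, user answers accumulate into a specification, and the transcript
then conditions task predictions. `call_llm` and the prompt phrasing are
hypothetical; the paper's exact prompts differ.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any chat-completion API")

def gate_elicit(task_description: str, n_questions: int = 3):
    # Interactive elicitation: the model asks, the user answers.
    transcript = []
    for _ in range(n_questions):
        question = call_llm(
            f"Task: {task_description}\n"
            f"Dialogue so far: {transcript}\n"
            "Ask ONE open-ended question (or propose one informative edge "
            "case) that would best reveal the user's intended behavior."
        )
        answer = input(question + "\n> ")  # a real user responds here
        transcript.append((question, answer))
    return transcript

def predict_with_spec(transcript, new_input: str) -> str:
    # The elicited transcript replaces a hand-written prompt or labels.
    return call_llm(
        f"Elicited preferences: {transcript}\nInput: {new_input}\nDecision:"
    )
```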
|
[
"Belinda Z. Li",
"Alex Tamkin",
"Noah Goodman",
"Jacob Andreas"
] |
2023-10-17 21:11:21
|
http://arxiv.org/abs/2310.11589v1
|
http://arxiv.org/pdf/2310.11589v1
|
2310.11589v1
|
Studying the Effects of Sex-related Differences on Brain Age Prediction using brain MR Imaging
|
While utilizing machine learning models, one of the most crucial aspects is
how bias and fairness affect model outcomes for diverse demographics. This
becomes especially relevant in the context of machine learning for medical
imaging applications as these models are increasingly being used for diagnosis
and treatment planning. In this paper, we study biases related to sex when
developing a machine learning model based on brain magnetic resonance images
(MRI). We investigate the effects of sex by performing brain age prediction
considering different experimental designs: models trained using only female
subjects, only male subjects, and a balanced dataset. We also perform evaluation
on multiple MRI datasets (Calgary-Campinas(CC359) and CamCAN) to assess the
generalization capability of the proposed models. We found disparities in the
performance of brain age prediction models when trained on distinct sex
subgroups and datasets, in both final predictions and decision making (assessed
using interpretability models). Our results demonstrated variations in model
generalizability across sex-specific subgroups, suggesting potential biases in
models trained on unbalanced datasets. This underlines the critical role of
careful experimental design in generating fair and reliable outcomes.
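
A sketch of the cross-subgroup experimental design described above; the
ridge-regression model, MAE metric, and split sizes are illustrative
stand-ins for the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def subgroup_bias_audit(X, age, sex):
    # Train on female-only, male-only, and balanced subsets, then evaluate
    # every model on held-out subjects of both sexes to expose gaps.
    f = np.where(sex == "F")[0]
    m = np.where(sex == "M")[0]
    n = min(len(f), len(m)) // 2  # held-out pool size per subgroup
    train_sets = {
        "female-only": f[n:],
        "male-only": m[n:],
        "balanced": np.concatenate([f[n:][:n], m[n:][:n]]),
    }
    test_sets = {"F": f[:n], "M": m[:n]}
    report = {}
    for tr_name, tr in train_sets.items():
        model = Ridge().fit(X[tr], age[tr])
        report[tr_name] = {
            te_name: mean_absolute_error(age[te], model.predict(X[te]))
            for te_name, te in test_sets.items()
        }
    return report  # MAE table: training regime x evaluation subgroup
```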
|
[
"Mahsa Dibaji",
"Neha Gianchandani",
"Akhil Nair",
"Mansi Singhal",
"Roberto Souza",
"Mariana Bento"
] |
2023-10-17 20:55:53
|
http://arxiv.org/abs/2310.11577v1
|
http://arxiv.org/pdf/2310.11577v1
|
2310.11577v1
|
What is a good question? Task-oriented asking with fact-level masking
|
Asking questions is an important element of real-life collaboration on
reasoning tasks like question answering. For example, a legal assistant chatbot
may be unable to make accurate recommendations without specific information on
the user's circumstances. However, large language models are usually deployed
to solve reasoning tasks directly without asking follow-up questions to the
user or third parties. We term this problem task-oriented asking (TOA).
Zero-shot chat models can perform TOA, but their training is primarily based on
next-token prediction rather than whether questions contribute to successful
collaboration. To enable the training and evaluation of TOA models, we present
a definition and framework for natural language task-oriented asking, the
problem of generating questions that result in answers useful for a reasoning
task. We also present fact-level masking (FLM), a procedure for converting
natural language datasets into self-supervised TOA datasets by omitting
particular critical facts. Finally, we generate a TOA dataset from the HotpotQA
dataset using FLM and evaluate several zero-shot language models on it. Our
experiments show that current zero-shot models struggle to ask questions that
retrieve useful information, as compared to human annotators. These results
demonstrate an opportunity to use FLM datasets and the TOA framework to train
and evaluate better TOA models.
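
A minimal sketch of fact-level masking on a HotpotQA-style record: withhold
one supporting fact from the context and treat it as the target a good
task-oriented question should recover. The record layout here is a simplified
assumption, not HotpotQA's exact schema.

```python
import random

def fact_level_mask(example: dict, seed: int = 0) -> dict:
    # example: {"question": str, "facts": [str, ...]} where each sentence
    # in "facts" is needed to answer the question.
    rng = random.Random(seed)
    masked_idx = rng.randrange(len(example["facts"]))
    visible = [f for i, f in enumerate(example["facts"]) if i != masked_idx]
    return {
        "question": example["question"],
        "visible_context": " ".join(visible),
        "masked_fact": example["facts"][masked_idx],  # gold TOA target
    }

# A TOA model sees question + visible_context and must ask a follow-up
# question whose answer supplies the masked fact.
```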
|
[
"Matthew Toles",
"Yukun Huang",
"Zhou Yu",
"Luis Gravano"
] |
2023-10-17 20:40:59
|
http://arxiv.org/abs/2310.11571v1
|
http://arxiv.org/pdf/2310.11571v1
|
2310.11571v1
|
When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting
|
Probabilistic hierarchical time-series forecasting is an important variant of
time-series forecasting, where the goal is to model and forecast multivariate
time-series that have underlying hierarchical relations. Most methods focus on
point predictions and do not provide well-calibrated probabilistic forecast
distributions. Recent state-of-the-art probabilistic forecasting methods also
impose hierarchical relations on point predictions and samples of the
distribution, which does not account for the coherency of forecast
distributions. Previous works also silently assume that datasets are always
consistent with given hierarchical relations and do not adapt to real-world
datasets that deviate from this assumption. We close both of these gaps and
propose PROFHiT, a fully probabilistic hierarchical forecasting model that
jointly models the forecast distribution of the entire hierarchy. PROFHiT
uses a flexible probabilistic Bayesian approach and introduces a novel
Distributional Coherency regularization that learns from hierarchical
relations over the entire forecast distribution, enabling robust and
calibrated forecasts and adapting to datasets of varying hierarchical
consistency. Evaluating PROFHiT over a wide range of datasets, we observed
41-88% better accuracy and significantly better calibration. Because it
models coherency over the full distribution, PROFHiT robustly provides
reliable forecasts even when up to 10% of the input time-series data is
missing, whereas other methods' performance severely degrades by over 70%.
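
A sketch of a distributional-coherency penalty in the spirit described above,
assuming Gaussian forecasts per node and independence across siblings when
aggregating; the paper's actual regularizer may differ in form.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ), univariate closed form.
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def coherency_penalty(mu, var, children):
    # mu, var: dicts of forecast mean/variance per node;
    # children: dict mapping each parent node to its child node ids.
    # Penalize divergence between a parent's forecast distribution and the
    # distribution implied by summing its (assumed independent) children.
    penalty = 0.0
    for parent, kids in children.items():
        mu_sum = sum(mu[k] for k in kids)
        var_sum = sum(var[k] for k in kids)
        penalty += gaussian_kl(mu[parent], var[parent], mu_sum, var_sum)
    return penalty
```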
|
[
"Harshavardhan Kamarthi",
"Lingkai Kong",
"Alexander Rodríguez",
"Chao Zhang",
"B. Aditya Prakash"
] |
2023-10-17 20:30:16
|
http://arxiv.org/abs/2310.11569v2
|
http://arxiv.org/pdf/2310.11569v2
|
2310.11569v2
|
Partially Observable Stochastic Games with Neural Perception Mechanisms
|
Stochastic games are a well established model for multi-agent sequential
decision making under uncertainty. In reality, though, agents have only partial
observability of their environment, which makes the problem computationally
challenging, even in the single-agent setting of partially observable Markov
decision processes. Furthermore, in practice, agents increasingly perceive
their environment using data-driven approaches such as neural networks trained
on continuous data. To tackle this problem, we propose the model of
neuro-symbolic partially-observable stochastic games (NS-POSGs), a variant of
continuous-space concurrent stochastic games that explicitly incorporates
perception mechanisms. We focus on a one-sided setting, comprising a
partially-informed agent with discrete, data-driven observations and a
fully-informed agent with continuous observations. We present a new point-based
method, called one-sided NS-HSVI, for approximating values of one-sided
NS-POSGs, and implement it using the popular particle-based belief representation, showing
that it has closed forms for computing values of interest. We provide
experimental results to demonstrate the practical applicability of our method
for neural networks whose preimage is in polyhedral form.
|
[
"Rui Yan",
"Gabriel Santos",
"Gethin Norman",
"David Parker",
"Marta Kwiatkowska"
] |
2023-10-17 20:25:40
|
http://arxiv.org/abs/2310.11566v1
|
http://arxiv.org/pdf/2310.11566v1
|
2310.11566v1
|
Online Algorithms with Uncertainty-Quantified Predictions
|
Online algorithms with predictions have become a trending topic in the field
of beyond worst-case analysis of algorithms. These algorithms incorporate
predictions about the future to obtain performance guarantees that are of high
quality when the predictions are good, while still maintaining bounded
worst-case guarantees when predictions are arbitrarily poor. In general, the
algorithm is assumed to be unaware of the prediction's quality. However, recent
developments in the machine learning literature have studied techniques for
providing uncertainty quantification on machine-learned predictions, which
describes how certain a model is about the quality of its predictions. This paper examines the
question of how to optimally utilize uncertainty-quantified predictions in the
design of online algorithms. In particular, we consider predictions augmented
with uncertainty quantification describing the likelihood of the ground truth
falling in a certain range, designing online algorithms with these
probabilistic predictions for two classic online problems: ski rental and
online search. In each case, we demonstrate that non-trivial modifications to
algorithm design are needed to fully leverage the probabilistic predictions.
Moreover, we consider how to utilize more general forms of uncertainty
quantification, proposing a framework based on online learning that learns to
exploit uncertainty quantification to make optimal decisions in multi-instance
settings.
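
To make the ski-rental setting concrete, the sketch below picks the buy day
minimizing expected cost under a predicted distribution over the number of
skiing days. This illustrates the problem setup only; the paper's algorithms
additionally preserve worst-case guarantees when the prediction is poor.

```python
def best_buy_day(buy_cost: float, day_probs: list) -> int:
    # day_probs[n-1] = predicted probability of skiing exactly n days.
    # Policy "buy on day b": pay b-1 days of rent (1 per day) plus buy_cost,
    # unless the season ends first, in which case pay n days of rent.
    horizon = len(day_probs)

    def expected_cost(b: int) -> float:
        return sum(p * (n if n < b else (b - 1) + buy_cost)
                   for n, p in enumerate(day_probs, start=1))

    # b = horizon + 1 corresponds to never buying.
    return min(range(1, horizon + 2), key=expected_cost)

# Usage: with buy_cost=10 and a uniform prediction over 1..20 days,
# best_buy_day(10, [0.05] * 20) buys immediately, since long seasons are
# likely under this prediction.
```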
|
[
"Bo Sun",
"Jerry Huang",
"Nicolas Christianson",
"Mohammad Hajiesmaili",
"Adam Wierman"
] |
2023-10-17 20:09:41
|
http://arxiv.org/abs/2310.11558v1
|
http://arxiv.org/pdf/2310.11558v1
|
2310.11558v1
|
Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback
|
We study online reinforcement learning in linear Markov decision processes
with adversarial losses and bandit feedback, without prior knowledge on
transitions or access to simulators. We introduce two algorithms that achieve
improved regret performance compared to existing approaches. The first
algorithm, although computationally inefficient, ensures a regret of
$\widetilde{\mathcal{O}}\left(\sqrt{K}\right)$, where $K$ is the number of
episodes. This is the first result with the optimal $K$ dependence in the
considered setting. The second algorithm, which is based on the policy
optimization framework, guarantees a regret of
$\widetilde{\mathcal{O}}\left(K^{\frac{3}{4}} \right)$ and is computationally
efficient. Both our results significantly improve over the state-of-the-art: a
computationally inefficient algorithm by Kong et al. [2023] with
$\widetilde{\mathcal{O}}\left(K^{\frac{4}{5}}+\mathrm{poly}\left(\frac{1}{\lambda_{\min}}\right)
\right)$ regret, for some problem-dependent constant $\lambda_{\min}$ that can
be arbitrarily close to zero, and a computationally efficient algorithm by
Sherman et al. [2023b] with $\widetilde{\mathcal{O}}\left(K^{\frac{6}{7}}
\right)$ regret.
|
[
"Haolin Liu",
"Chen-Yu Wei",
"Julian Zimmert"
] |
2023-10-17 19:43:37
|
http://arxiv.org/abs/2310.11550v1
|
http://arxiv.org/pdf/2310.11550v1
|
2310.11550v1
|
Bias and Error Mitigation in Software-Generated Data: An Advanced Search and Optimization Framework Leveraging Generative Code Models
|
Data generation and analysis is a fundamental aspect of many industries and
disciplines, from strategic decision making in business to research in the
physical and social sciences. However, data generated using software and
algorithms can be subject to biases and errors. These can be due to problems
with the original software, default settings that do not align with the
specific needs of the situation, or even deeper problems with the underlying
theories and models. This paper proposes an advanced search and optimization
framework aimed at generating and choosing optimal source code capable of
correcting errors and biases from previous versions to address typical problems
in software systems specializing in data analysis and generation, especially
those in the corporate and data science world. Applying this framework multiple
times on the same software system would incrementally improve the quality of
the output results. It uses Solomonoff Induction as a sound theoretical basis,
extending it with Kolmogorov Conditional Complexity, a novel adaptation, to
evaluate a set of candidate programs. We propose the use of generative models
for the creation of this set of programs, with special emphasis on the
capabilities of Large Language Models (LLMs) to generate high quality code.
|
[
"Ernesto Giralt Hernández"
] |
2023-10-17 19:31:05
|
http://arxiv.org/abs/2310.11546v1
|
http://arxiv.org/pdf/2310.11546v1
|
2310.11546v1
|
MUST&P-SRL: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning
|
In this paper, we present a methodology for linguistic feature extraction,
focusing particularly on automatically syllabifying words in multiple
languages, designed to be compatible with a forced-alignment tool, the
Montreal Forced Aligner (MFA). In both the textual and phonetic domains, our
method focuses on the extraction of phonetic transcriptions from text, stress
marks, and a unified automatic syllabification (in text and phonetic domains).
The system was built with open-source components and resources. Through an
ablation study, we demonstrate the efficacy of our approach in automatically
syllabifying words from several languages (English, French and Spanish).
Additionally, we apply the technique to the transcriptions of the CMU ARCTIC
dataset, generating valuable annotations available
online\footnote{\url{https://github.com/noetits/MUST_P-SRL}} that are ideal for
speech representation learning, speech unit discovery, and disentanglement of
speech factors in several speech-related fields.
|
[
"Noé Tits"
] |
2023-10-17 19:27:23
|
http://arxiv.org/abs/2310.11541v1
|
http://arxiv.org/pdf/2310.11541v1
|
2310.11541v1
|
Efficient Online Learning with Offline Datasets for Infinite Horizon MDPs: A Bayesian Approach
|
In this paper, we study the problem of efficient online reinforcement
learning in the infinite horizon setting when there is an offline dataset to
start with. We assume that the offline dataset is generated by an expert but
with unknown level of competence, i.e., it is not perfect and not necessarily
using the optimal policy. We show that if the learning agent models the
behavioral policy (parameterized by a competence parameter) used by the expert,
it can achieve substantially lower cumulative regret than if it does not. We
establish an upper bound on the regret of the exact
informed PSRL algorithm that scales as $\tilde{O}(\sqrt{T})$. This requires a
novel prior-dependent regret analysis of Bayesian online learning algorithms
for the infinite horizon setting. We then propose an approximate Informed RLSVI
algorithm that we can interpret as performing imitation learning with the
offline dataset, and then performing online learning.
|
[
"Dengwang Tang",
"Rahul Jain",
"Botao Hao",
"Zheng Wen"
] |
2023-10-17 19:01:08
|
http://arxiv.org/abs/2310.11531v1
|
http://arxiv.org/pdf/2310.11531v1
|
2310.11531v1
|
Thin and Deep Gaussian Processes
|
Gaussian processes (GPs) can provide a principled approach to uncertainty
quantification with easy-to-interpret kernel hyperparameters, such as the
lengthscale, which controls the correlation distance of function values.
However, selecting an appropriate kernel can be challenging. Deep GPs avoid
manual kernel engineering by successively parameterizing kernels with GP
layers, allowing them to learn low-dimensional embeddings of the inputs that
explain the output data. Following the architecture of deep neural networks,
the most common deep GPs warp the input space layer-by-layer but lose all the
interpretability of shallow GPs. An alternative construction is to successively
parameterize the lengthscale of a kernel, improving the interpretability but
ultimately giving away the notion of learning lower-dimensional embeddings.
Unfortunately, both methods are susceptible to particular pathologies which may
hinder fitting and limit their interpretability. This work proposes a novel
synthesis of both previous approaches: Thin and Deep GP (TDGP). Each TDGP layer
defines locally linear transformations of the original input data maintaining
the concept of latent embeddings while also retaining the interpretation of
lengthscales of a kernel. Moreover, unlike the prior solutions, TDGP induces
non-pathological manifolds that admit learning lower-dimensional
representations. We show with theoretical and experimental results that i) TDGP
is, unlike previous models, tailored to specifically discover lower-dimensional
manifolds in the input data, ii) TDGP behaves well when increasing the number
of layers, and iii) TDGP performs well in standard benchmark datasets.
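
The core intuition of lengthscale-parameterized kernels can be seen in the
classic Gibbs kernel, sketched below with a latent lengthscale function; note
that TDGP's actual construction (locally linear input transformations with
GP-distributed weights) is richer, so this is intuition only.

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale_fn):
    # Non-stationary RBF variant (1D Gibbs kernel):
    # k(x, x') = sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
    #            * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))
    l1 = lengthscale_fn(x1)[:, None]  # (n1, 1)
    l2 = lengthscale_fn(x2)[None, :]  # (1, n2)
    sq = l1 ** 2 + l2 ** 2
    prefactor = np.sqrt(2.0 * l1 * l2 / sq)
    return prefactor * np.exp(-((x1[:, None] - x2[None, :]) ** 2) / sq)

# Example latent lengthscale: shorter scales (wigglier functions) away from
# the origin. In a deep construction this function would itself be a GP
# layer rather than a fixed formula.
x = np.linspace(-3.0, 3.0, 50)
K = gibbs_kernel(x, x, lambda t: 0.2 + 1.0 / (1.0 + t ** 2))
```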
|
[
"Daniel Augusto de Souza",
"Alexander Nikitin",
"ST John",
"Magnus Ross",
"Mauricio A. Álvarez",
"Marc Peter Deisenroth",
"João P. P. Gomes",
"Diego Mesquita",
"César Lincoln C. Mattos"
] |
2023-10-17 18:50:24
|
http://arxiv.org/abs/2310.11527v1
|
http://arxiv.org/pdf/2310.11527v1
|
2310.11527v1
|
Group Preference Optimization: Few-Shot Alignment of Large Language Models
|
Many applications of large language models (LLMs), ranging from chatbots to
creative writing, require nuanced subjective judgments that can differ
significantly across different groups. Existing alignment algorithms can be
expensive to align for each group, requiring prohibitive amounts of
group-specific preference data and computation for real-world use cases. We
introduce Group Preference Optimization (GPO), an alignment framework that
steers language models to preferences of individual groups in a few-shot
manner. In GPO, we augment the base LLM with an independent transformer module
trained to predict the preferences of a group for the LLM generations. For
few-shot learning, we parameterize this module as an in-context autoregressive
transformer and train it via meta-learning on several groups. We empirically
validate the efficacy of GPO through rigorous evaluations using LLMs with
varied sizes on three human opinion adaptation tasks. These tasks involve
adapting to the preferences of US demographic groups, global countries, and
individual users. Our results demonstrate that GPO not only aligns models more
accurately but also requires fewer group-specific preferences and less
training and inference compute, outperforming existing strategies
such as in-context steering and fine-tuning methods.
|
[
"Siyan Zhao",
"John Dang",
"Aditya Grover"
] |
2023-10-17 18:41:57
|
http://arxiv.org/abs/2310.11523v1
|
http://arxiv.org/pdf/2310.11523v1
|
2310.11523v1
|
Automatic News Summerization
|
Natural Language Processing is booming with real-world applications, one of
which is text summarization for long documents such as news articles. This
research paper provides an extensive comparative evaluation of
extractive and abstractive approaches for news text summarization, with an
emphasis on the ROUGE score analysis. The study employs the CNN-Daily Mail
dataset, which consists of news articles and human-generated reference
summaries. The evaluation employs ROUGE scores to assess the efficacy and
quality of generated summaries. After evaluation, we integrate the
best-performing models into a web application to assess their real-world
capabilities and user experience.
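
A minimal example of the ROUGE comparison described above, using the
open-source `rouge-score` package; the reference and candidate summaries are
placeholders.

```python
from rouge_score import rouge_scorer

# Compare an extractive and an abstractive summary against one reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "The city council approved the new transit budget on Monday."
candidates = {
    "extractive": "The city council approved the new transit budget.",
    "abstractive": "Officials signed off on funding for public transport.",
}
for name, summary in candidates.items():
    scores = scorer.score(reference, summary)
    print(name, {k: round(v.fmeasure, 3) for k, v in scores.items()})
```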
|
[
"Kavach Dheer",
"Arpit Dhankhar"
] |
2023-10-17 18:38:03
|
http://arxiv.org/abs/2310.11520v1
|
http://arxiv.org/pdf/2310.11520v1
|
2310.11520v1
|
Guarantees for Self-Play in Multiplayer Games via Polymatrix Decomposability
|
Self-play is a technique for machine learning in multi-agent systems where a
learning algorithm learns by interacting with copies of itself. Self-play is
useful for generating large quantities of data for learning, but has the
drawback that the agents the learner will face post-training may have
dramatically different behavior than the learner came to expect by interacting
with itself. For the special case of two-player constant-sum games, self-play
that reaches Nash equilibrium is guaranteed to produce strategies that perform
well against any post-training opponent; however, no such guarantee exists for
multi-player games. We show that in games that approximately decompose into a
set of two-player constant-sum games (called polymatrix games) where global
$\epsilon$-Nash equilibria are boundedly far from Nash-equilibria in each
subgame, any no-external-regret algorithm that learns by self-play will produce
a strategy with bounded vulnerability. For the first time, our results identify
a structural property of multi-player games that enables performance guarantees
for the strategies produced by a broad class of self-play algorithms. We
demonstrate our findings through experiments on Leduc poker.
|
[
"Revan MacQueen",
"James R. Wright"
] |
2023-10-17 18:33:21
|
http://arxiv.org/abs/2310.11518v1
|
http://arxiv.org/pdf/2310.11518v1
|
2310.11518v1
|
Value-Biased Maximum Likelihood Estimation for Model-based Reinforcement Learning in Discounted Linear MDPs
|
We consider the infinite-horizon linear Markov Decision Processes (MDPs),
where the transition probabilities of the dynamic model can be linearly
parameterized with the help of a predefined low-dimensional feature mapping.
While the existing regression-based approaches have been theoretically shown to
achieve nearly-optimal regret, they are computationally rather inefficient due
to the need for a large number of optimization runs in each time step,
especially when the state and action spaces are large. To address this issue,
we propose to solve linear MDPs through the lens of Value-Biased Maximum
Likelihood Estimation (VBMLE), which is a classic model-based exploration
principle in the adaptive control literature for resolving the well-known
closed-loop identification problem of Maximum Likelihood Estimation. We
formally show that (i) VBMLE enjoys $\widetilde{O}(d\sqrt{T})$ regret, where
$T$ is the time horizon and $d$ is the dimension of the model parameter, and
(ii) VBMLE is computationally more efficient as it only requires solving one
optimization problem in each time step. In our regret analysis, we offer a
generic convergence result of MLE in linear MDPs through a novel
supermartingale construct and uncover an interesting connection between linear
MDPs and online learning, which could be of independent interest. Finally, the
simulation results show that VBMLE significantly outperforms the benchmark
method in terms of both empirical regret and computation time.
|
[
"Yu-Heng Hung",
"Ping-Chun Hsieh",
"Akshay Mete",
"P. R. Kumar"
] |
2023-10-17 18:27:27
|
http://arxiv.org/abs/2310.11515v1
|
http://arxiv.org/pdf/2310.11515v1
|
2310.11515v1
|
GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment
|
Recent breakthroughs in diffusion models, multimodal pretraining, and
efficient finetuning have led to an explosion of text-to-image generative
models. Given human evaluation is expensive and difficult to scale, automated
methods are critical for evaluating the increasingly large number of new
models. However, most current automated evaluation metrics like FID or
CLIPScore only offer a holistic measure of image quality or image-text
alignment, and are unsuited for fine-grained or instance-level analysis. In
this paper, we introduce GenEval, an object-focused framework to evaluate
compositional image properties such as object co-occurrence, position, count,
and color. We show that current object detection models can be leveraged to
evaluate text-to-image models on a variety of generation tasks with strong
human agreement, and that other discriminative vision models can be linked to
this pipeline to further verify properties like object color. We then evaluate
several open-source text-to-image models and analyze their relative generative
capabilities on our benchmark. We find that recent models demonstrate
significant improvement on these tasks, though they are still lacking in
complex capabilities such as spatial relations and attribute binding. Finally,
we demonstrate how GenEval might be used to help discover existing failure
modes, in order to inform development of the next generation of text-to-image
models. Our code to run the GenEval framework is publicly available at
https://github.com/djghosh13/geneval.
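
A sketch of the object-focused verification loop GenEval implements: run a
detector on a generated image and check the object counts parsed from the
prompt. `detect_objects` is a hypothetical stand-in for an object detection
model and the confidence threshold is an assumption; color checks would chain
a second discriminative model in the same way.

```python
from collections import Counter

def detect_objects(image):
    # Hypothetical detector returning [(class_name, confidence, bbox), ...].
    raise NotImplementedError("stand-in for an object detection model")

def check_counts(image, required: dict) -> bool:
    # required: e.g. {"dog": 2, "ball": 1}, parsed from a prompt such as
    # "a photo of two dogs and a ball".
    found = Counter(cls for cls, conf, _ in detect_objects(image)
                    if conf > 0.5)
    return all(found[cls] >= n for cls, n in required.items())

# A benchmark run then scores each text-to-image model by the fraction of
# generated images that pass the checks derived from their prompts.
```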
|
[
"Dhruba Ghosh",
"Hanna Hajishirzi",
"Ludwig Schmidt"
] |
2023-10-17 18:20:03
|
http://arxiv.org/abs/2310.11513v1
|
http://arxiv.org/pdf/2310.11513v1
|
2310.11513v1
|
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
|
Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, mitigates such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
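
A sketch of the inference-time control flow that reflection tokens enable:
generate until the model emits a retrieve token, fetch passages, score them
with the model's own critique tokens, and continue from the best one. The
token name and helper functions are illustrative assumptions patterned on the
description above.

```python
RETRIEVE_TOKEN = "[Retrieve]"  # illustrative reflection-token name

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for the trained LM")

def retrieve(query: str, k: int = 3) -> list:
    raise NotImplementedError("stand-in for the retriever")

def critique_score(draft: str, passage: str) -> float:
    # The LM's relevance/support reflection tokens, mapped to a scalar.
    raise NotImplementedError

def self_rag_answer(question: str, max_steps: int = 5) -> str:
    answer = ""
    for _ in range(max_steps):
        segment = generate(f"{question}\n{answer}")
        if RETRIEVE_TOKEN not in segment:
            return answer + segment  # model chose not to retrieve further
        drafted, _ = segment.split(RETRIEVE_TOKEN, 1)
        passages = retrieve(question + " " + drafted)
        best = max(passages, key=lambda p: critique_score(drafted, p))
        answer += drafted + f"\nContext: {best}\n"
    return answer
```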
|
[
"Akari Asai",
"Zeqiu Wu",
"Yizhong Wang",
"Avirup Sil",
"Hannaneh Hajishirzi"
] |
2023-10-17 18:18:32
|
http://arxiv.org/abs/2310.11511v1
|
http://arxiv.org/pdf/2310.11511v1
|
2310.11511v1
|
Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective
|
Large Language Models (LLMs) inherently encode a wealth of knowledge within
their parameters through pre-training on extensive corpora. While prior
research has delved into operations on these parameters to manipulate the
underlying implicit knowledge (encompassing detection, editing, and merging),
there remains an ambiguous understanding regarding their transferability across
models with varying scales. In this paper, we seek to empirically investigate
knowledge transfer from larger to smaller models through a parametric
perspective. To achieve this, we employ sensitivity-based techniques to extract
and align knowledge-specific parameters between different LLMs. Moreover, the
LoRA module is used as the intermediary mechanism for injecting the extracted
knowledge into smaller models. Evaluations across four benchmarks validate the
efficacy of our proposed method. Our findings highlight the critical factors
contributing to the process of parametric knowledge transfer, underscoring the
transferability of model parameters across LLMs of different scales. We release
code and data at \url{https://github.com/maszhongming/ParaKnowTransfer}.
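
A sketch of one plausible sensitivity-based scoring step, using the common
first-order saliency |theta * dL/dtheta| on a calibration batch; the paper's
extraction, alignment, and LoRA-injection pipeline is more involved, so treat
this as an assumption-laden fragment.

```python
import torch

def parameter_sensitivity(model, loss_fn, batch) -> dict:
    # First-order saliency: parameters whose perturbation would change the
    # loss most are treated as carrying knowledge-specific information.
    model.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    scores = {}
    for name, p in model.named_parameters():
        if p.grad is not None:
            scores[name] = (p.detach() * p.grad).abs().mean().item()
    return scores  # rank these to select knowledge-specific tensors

# The selected tensors from the larger model would then be aligned to the
# smaller model's shapes and injected through a LoRA module (omitted here).
```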
|
[
"Ming Zhong",
"Chenxin An",
"Weizhu Chen",
"Jiawei Han",
"Pengcheng He"
] |
2023-10-17 17:58:34
|
http://arxiv.org/abs/2310.11451v1
|
http://arxiv.org/pdf/2310.11451v1
|
2310.11451v1
|
Explaining Deep Neural Networks for Bearing Fault Detection with Vibration Concepts
|
Concept-based explanation methods, such as Concept Activation Vectors, are
potent means to quantify how abstract or high-level characteristics of input
data influence the predictions of complex deep neural networks. However,
applying them to industrial prediction problems is challenging as it is not
immediately clear how to define and access appropriate concepts for individual
use cases and specific data types. In this work, we investigate how to leverage
established concept-based explanation techniques in the context of bearing
fault detection with deep neural networks trained on vibration signals. Since
bearings are prevalent in almost every rotating equipment, ensuring the
reliability of opaque fault detection models is crucial to prevent
costly repairs and downtimes of industrial machinery. Our evaluations
demonstrate that explaining opaque models in terms of vibration concepts
enables human-comprehensible and intuitive insights about their inner workings,
but the underlying assumptions need to be carefully validated first.
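
A sketch of the Concept Activation Vector recipe applied to vibration
concepts: fit a linear probe separating activations of concept examples (say,
signals containing a characteristic fault frequency) from random signals,
then measure how aligned class gradients are with the concept direction. The
probe choice and data layout are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(acts_concept, acts_random):
    # acts_*: (n, d) activations of one layer for concept vs. random signals.
    X = np.vstack([acts_concept, acts_random])
    y = np.r_[np.ones(len(acts_concept)), np.zeros(len(acts_random))]
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(grads, cav) -> float:
    # grads: (n, d) gradients of the fault-class logit w.r.t. the same
    # layer's activations. The score is the fraction of inputs whose
    # prediction would increase when moved along the concept direction.
    return float(np.mean(grads @ cav > 0))
```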
|
[
"Thomas Decker",
"Michael Lebacher",
"Volker Tresp"
] |
2023-10-17 17:58:19
|
http://arxiv.org/abs/2310.11450v1
|
http://arxiv.org/pdf/2310.11450v1
|
2310.11450v1
|
Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Tournament
|
Accurately predicting the future would be an important milestone in the
capabilities of artificial intelligence. However, research on the ability of
large language models to provide probabilistic predictions about future events
remains nascent. To empirically test this ability, we enrolled OpenAI's
state-of-the-art large language model, GPT-4, in a three-month forecasting
tournament hosted on the Metaculus platform. The tournament, running from July
to October 2023, attracted 843 participants and covered diverse topics
including Big Tech, U.S. politics, viral outbreaks, and the Ukraine conflict.
Focusing on binary forecasts, we show that GPT-4's probabilistic forecasts are
significantly less accurate than the median human-crowd forecasts. We find that
GPT-4's forecasts did not significantly differ from the no-information
forecasting strategy of assigning a 50% probability to every question. We
explore a potential explanation, that GPT-4 might be predisposed to predict
probabilities close to the midpoint of the scale, but our data do not support
this hypothesis. Overall, we find that GPT-4 significantly underperforms in
real-world predictive tasks compared to median human-crowd forecasts. A
potential explanation for this underperformance is that in real-world
forecasting tournaments, the true answers are genuinely unknown at the time of
prediction; unlike in other benchmark tasks like professional exams or time
series forecasting, where strong performance may at least partly be due to the
answers being memorized from the training data. This makes real-world
forecasting tournaments an ideal environment for testing the generalized
reasoning and prediction capabilities of artificial intelligence going forward.
|
[
"Philipp Schoenegger",
"Peter S. Park"
] |
2023-10-17 17:58:17
|
http://arxiv.org/abs/2310.13014v1
|
http://arxiv.org/pdf/2310.13014v1
|
2310.13014v1
|
DELIFFAS: Deformable Light Fields for Fast Avatar Synthesis
|
Generating controllable and photorealistic digital human avatars is a
long-standing and important problem in Vision and Graphics. Recent methods have
shown great progress in terms of either photorealism or inference speed while
the combination of the two desired properties still remains unsolved. To this
end, we propose a novel method, called DELIFFAS, which parameterizes the
appearance of the human as a surface light field that is attached to a
controllable and deforming human mesh model. At the core, we represent the
light field around the human with a deformable two-surface parameterization,
which enables fast and accurate inference of the human appearance. This allows
perceptual supervision on the full image compared to previous approaches that
could only supervise individual pixels or small patches due to their slow
runtime. Our carefully designed human representation and supervision strategy
leads to state-of-the-art synthesis results and inference time. The video
results and code are available at
https://vcai.mpi-inf.mpg.de/projects/DELIFFAS.
|
[
"Youngjoong Kwon",
"Lingjie Liu",
"Henry Fuchs",
"Marc Habermann",
"Christian Theobalt"
] |
2023-10-17 17:58:00
|
http://arxiv.org/abs/2310.11449v1
|
http://arxiv.org/pdf/2310.11449v1
|
2310.11449v1
|