title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
On the Effects of Heterogeneous Errors on Multi-fidelity Bayesian Optimization | Bayesian optimization (BO) is a sequential optimization strategy that is
increasingly employed in a wide range of areas including materials design. In
real world applications, acquiring high-fidelity (HF) data through physical
experiments or HF simulations is the major cost component of BO. To alleviate
this bottleneck, multi-fidelity (MF) methods are used to forgo the sole
reliance on the expensive HF data and reduce the sampling costs by querying
inexpensive low-fidelity (LF) sources whose data are correlated with HF
samples. However, existing multi-fidelity BO (MFBO) methods operate under the
following two assumptions that rarely hold in practical applications: (1) LF
sources provide data that are well correlated with the HF data on a global
scale, and (2) a single random process can model the noise in the fused data.
These assumptions dramatically reduce the performance of MFBO when LF sources
are only locally correlated with the HF source or when the noise variance
varies across the data sources. In this paper, we dispense with these incorrect
assumptions by proposing an MF emulation method that (1) learns a noise model
for each data source, and (2) enables MFBO to leverage highly biased LF sources
which are only locally correlated with the HF source. We illustrate the
performance of our method through analytical examples and engineering problems
on materials design. | [
"Zahra Zanjani Foumani",
"Amin Yousefpour",
"Mehdi Shishehbor",
"Ramin Bostanabad"
] | 2023-09-06 06:26:21 | http://arxiv.org/abs/2309.02771v1 | http://arxiv.org/pdf/2309.02771v1 | 2309.02771v1 |
Unifying over-smoothing and over-squashing in graph neural networks: A physics informed approach and beyond | Graph Neural Networks (GNNs) have emerged as one of the leading approaches
for machine learning on graph-structured data. Despite their great success,
critical computational challenges such as over-smoothing, over-squashing, and
limited expressive power continue to impact the performance of GNNs. In this
study, inspired by the time-reversal principle commonly utilized in classical
and quantum physics, we reverse the time direction of the graph heat equation.
The resulting reversal process yields a class of high-pass filtering functions
that enhance the sharpness of graph node features. Leveraging this concept, we
introduce the Multi-Scaled Heat Kernel based GNN (MHKG) by amalgamating diverse
filtering functions' effects on node features. To explore more flexible
filtering conditions, we further generalize MHKG into a model termed G-MHKG and
thoroughly show the roles of each element in controlling over-smoothing,
over-squashing and expressive power. Notably, we illustrate that all
aforementioned issues can be characterized and analyzed via the properties of
the filtering functions, and uncover a trade-off between over-smoothing and
over-squashing: enhancing node feature sharpness makes the model suffer more
from over-squashing, and vice versa. Furthermore, we manipulate the time again
to show how G-MHKG can handle both issues under mild conditions. Our
experiments highlight the effectiveness of the proposed models, which
surpass several GNN baseline models in performance across graph datasets
characterized by both homophily and heterophily. | [
"Zhiqi Shao",
"Dai Shi",
"Andi Han",
"Yi Guo",
"Qibin Zhao",
"Junbin Gao"
] | 2023-09-06 06:22:18 | http://arxiv.org/abs/2309.02769v2 | http://arxiv.org/pdf/2309.02769v2 | 2309.02769v2 |
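The time-reversed heat equation described in the abstract can be illustrated in discrete form: forward diffusion smooths node features, so flipping the sign of the diffusion step acts as a high-pass filter that sharpens differences between neighboring nodes. A minimal sketch of this sharpening effect under assumed choices (unnormalized Laplacian, a single explicit Euler step with step size `tau`); the actual MHKG/G-MHKG models use multi-scale heat kernels and learned filtering functions:

```python
import numpy as np

def reverse_heat_step(X, A, tau=0.1):
    # Forward graph heat diffusion smooths features (x <- x - tau * L x);
    # reversing the time direction (x <- x + tau * L x) amplifies feature
    # differences across edges, i.e., acts as a high-pass filter.
    deg = A.sum(axis=1)
    L = np.diag(deg) - A  # unnormalized graph Laplacian
    return X + tau * (L @ X)

# Two connected nodes with features 1 and 0: one reverse step widens
# their gap from 1.0 to 1.2, demonstrating sharpening.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0], [0.0]])
X_sharp = reverse_heat_step(X, A, tau=0.1)
```

With a forward step (`- tau * L @ X`) the same example would shrink the gap, which is the over-smoothing behavior the paper trades off against over-squashing.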
Towards Unsupervised Graph Completion Learning on Graphs with Features and Structure Missing | In recent years, graph neural networks (GNN) have achieved significant
developments in a variety of graph analytical tasks. Nevertheless, GNN's
superior performance degrades severely when the collected node
features or structure relationships are partially missing owing to numerous
unpredictable factors. Recently emerged graph completion learning (GCL) has
received increasing attention, which aims to reconstruct the missing node
features or structure relationships under the guidance of a specifically
supervised task. Although these GCL methods have achieved great success,
they still suffer from the following problems: reliance on labels, and bias in
the reconstructed node features and structure relationships. Besides, the
generalization ability of the existing GCL still faces a huge challenge when
both collected node features and structure relationships are partially missing
at the same time. To solve the above issues, we propose a more general GCL
framework with the aid of self-supervised learning for improving the task
performance of the existing GNN variants on graphs with features and structure
missing, termed unsupervised GCL (UGCL). Specifically, to avoid the mismatch
between missing node features and structure during the message-passing process
of GNNs, we separate feature reconstruction from structure reconstruction and
design a dedicated model for each in turn. Then, a dual contrastive loss on the
structure level and feature level is introduced to maximize the mutual
information of node representations from feature reconstructing and structure
reconstructing paths for providing more supervision signals. Finally, the
reconstructed node features and structure can be applied to the downstream node
classification task. Extensive experiments on eight datasets, three GNN
variants and five missing rates demonstrate the effectiveness of our proposed
method. | [
"Sichao Fu",
"Qinmu Peng",
"Yang He",
"Baokun Du",
"Xinge You"
] | 2023-09-06 06:20:12 | http://arxiv.org/abs/2309.02762v1 | http://arxiv.org/pdf/2309.02762v1 | 2309.02762v1 |
GPT Can Solve Mathematical Problems Without a Calculator | Previous studies have typically assumed that large language models are unable
to accurately perform arithmetic operations, particularly multiplication of >8
digits, and operations involving decimals and fractions, without the use of
calculator tools. This paper aims to challenge this misconception. With
sufficient training data, a 2 billion-parameter language model can accurately
perform multi-digit arithmetic operations with almost 100% accuracy without
data leakage, significantly surpassing GPT-4 (whose multi-digit multiplication
accuracy is only 4.3%). We also demonstrate that our MathGLM, fine-tuned from
GLM-10B on a dataset with additional multi-step arithmetic operations and math
problems described in text, achieves similar performance to GPT-4 on a
5,000-samples Chinese math problem test set. Our code and data are public at
https://github.com/THUDM/MathGLM. | [
"Zhen Yang",
"Ming Ding",
"Qingsong Lv",
"Zhihuan Jiang",
"Zehai He",
"Yuyi Guo",
"Jinfeng Bai",
"Jie Tang"
] | 2023-09-06 06:18:16 | http://arxiv.org/abs/2309.03241v2 | http://arxiv.org/pdf/2309.03241v2 | 2309.03241v2 |
SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series | Time series classification (TSC) has emerged as a critical task in various
domains, and deep neural models have shown superior performance in TSC tasks.
However, these models are vulnerable to adversarial attacks, where subtle
perturbations can significantly impact the prediction results. Existing
adversarial methods often suffer from over-parameterization or random logit
perturbation, hindering their effectiveness. Additionally, increasing the
attack success rate (ASR) typically involves generating more noise, making the
attack more easily detectable. To address these limitations, we propose SWAP, a
novel attacking method for TSC models. SWAP focuses on enhancing the confidence
of the second-ranked logits while minimizing the manipulation of other logits.
This is achieved by minimizing the Kullback-Leibler divergence between the
target logit distribution and the predictive logit distribution. Experimental
results demonstrate that SWAP achieves state-of-the-art performance, with an
ASR exceeding 50% and an 18% increase compared to existing methods. | [
"Chang George Dong",
"Liangwei Nathan Zheng",
"Weitong Chen",
"Wei Emma Zhang",
"Lin Yue"
] | 2023-09-06 06:17:35 | http://arxiv.org/abs/2309.02752v1 | http://arxiv.org/pdf/2309.02752v1 | 2309.02752v1 |
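The core idea of SWAP as described in the abstract, boosting the second-ranked logit by minimizing a KL divergence to a target logit distribution, can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: the target here simply swaps the top two logits, and the softmax/KL details are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def swap_target_and_kl(logits):
    # Build a target distribution in which the second-ranked logit is
    # promoted (here: swap the top two logits, leaving all others
    # untouched), then compute the KL divergence an attacker would
    # minimize between target and predicted distributions.
    order = np.argsort(logits)[::-1]  # indices sorted by descending logit
    target_logits = logits.copy()
    target_logits[order[0]] = logits[order[1]]
    target_logits[order[1]] = logits[order[0]]
    p = softmax(target_logits)  # target distribution
    q = softmax(logits)         # current prediction
    kl = float(np.sum(p * (np.log(p) - np.log(q))))
    return target_logits, kl
```

Minimizing this KL drives the classifier toward the second-ranked class while leaving the remaining logits, and hence the required perturbation budget, small.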
Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions | This paper addresses the problem of safety-critical control for non-affine
control systems. It has been shown that optimizing quadratic costs subject to
state and control constraints can be sub-optimally reduced to a sequence of
quadratic programs (QPs) by using Control Barrier Functions (CBFs). Our
recently proposed High Order CBFs (HOCBFs) can accommodate constraints of
arbitrary relative degree. The main challenges in this approach are that it
requires affine control dynamics and the solution of the CBF-based QP is
sub-optimal since it is solved point-wise. To address these challenges, we
incorporate higher-order CBFs into neural ordinary differential equation-based
learning models as differentiable CBFs to guarantee safety for non-affine
control systems. The differentiable CBFs are trainable in terms of their
parameters, and thus, they can address the conservativeness of CBFs such that
the system state will not stay unnecessarily far away from safe set boundaries.
Moreover, the imitation learning model is capable of learning complex and
optimal control policies that are usually intractable online. We illustrate the
effectiveness of the proposed framework on LiDAR-based autonomous driving and
compare it with existing methods. | [
"Wei Xiao",
"Ross Allen",
"Daniela Rus"
] | 2023-09-06 05:35:48 | http://arxiv.org/abs/2309.04492v1 | http://arxiv.org/pdf/2309.04492v1 | 2309.04492v1 |
Offensive Hebrew Corpus and Detection using BERT | Offensive language detection has been well studied in many languages, but it
is lagging behind in low-resource languages, such as Hebrew. In this paper, we
present a new offensive language corpus in Hebrew. A total of 15,881 tweets
were retrieved from Twitter. Each was labeled with one or more of five classes
(abusive, hate, violence, pornographic, or non-offensive) by Arabic-Hebrew
bilingual speakers. The annotation process was challenging as each annotator is
expected to be familiar with the Israeli culture, politics, and practices to
understand the context of each tweet. We fine-tuned two Hebrew BERT models,
HeBERT and AlephBERT, using our proposed dataset and another published dataset.
We observed that our data boosts HeBERT performance by 2% when combined with
D_OLaH. Fine-tuning AlephBERT on our data and testing on D_OLaH yields 69%
accuracy, while fine-tuning on D_OLaH and testing on our data yields 57%
accuracy, which may be an indication of the generalizability our data offers.
Our dataset and fine-tuned models are available on GitHub and Huggingface. | [
"Nagham Hamad",
"Mustafa Jarrar",
"Mohammad Khalilia",
"Nadim Nashif"
] | 2023-09-06 05:18:43 | http://arxiv.org/abs/2309.02724v1 | http://arxiv.org/pdf/2309.02724v1 | 2309.02724v1 |
Unveiling the frontiers of deep learning: innovations shaping diverse domains | Deep learning (DL) enables the development of computer models that are
capable of learning, visualizing, optimizing, refining, and predicting data. In
recent years, DL has been applied in a range of fields, including audio-visual
data processing, agriculture, transportation prediction, natural language,
biomedicine, disaster management, bioinformatics, drug design, genomics, face
recognition, and ecology. To explore the current state of deep learning, it is
necessary to investigate the latest developments and applications of deep
learning in these disciplines. However, the literature is lacking in exploring
the applications of deep learning in all potential sectors. This paper thus
extensively investigates the potential applications of deep learning across all
major fields of study as well as the associated benefits and challenges. As
evidenced in the literature, DL's accuracy in prediction and analysis makes it
a powerful computational tool, and its ability to organize and optimize itself
makes it effective in processing data with no prior training. At the same
time, deep learning necessitates massive amounts of data for effective
analysis and processing. To handle the challenge of compiling huge amounts of medical,
scientific, healthcare, and environmental data for use in deep learning, gated
architectures like LSTMs and GRUs can be utilized. For multimodal learning,
shared neurons in the neural network for all activities and specialized neurons
for particular tasks are necessary. | [
"Shams Forruque Ahmed",
"Md. Sakib Bin Alam",
"Maliha Kabir",
"Shaila Afrin",
"Sabiha Jannat Rafa",
"Aanushka Mehjabin",
"Amir H. Gandomi"
] | 2023-09-06 04:50:39 | http://arxiv.org/abs/2309.02712v1 | http://arxiv.org/pdf/2309.02712v1 | 2309.02712v1 |
Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension | Symmetry, a fundamental concept to understand our environment, often
oversimplifies reality from a mathematical perspective. Humans are a prime
example, deviating from perfect symmetry in terms of appearance and cognitive
biases (e.g. having a dominant hand). Nevertheless, our brain can easily
overcome these imperfections and efficiently adapt to symmetrical tasks. The
driving motivation behind this work lies in capturing this ability through
reinforcement learning. To this end, we introduce Adaptive Symmetry Learning
(ASL) $\unicode{x2013}$ a model-minimization actor-critic extension that
addresses incomplete or inexact symmetry descriptions by adapting itself during
the learning process. ASL consists of a symmetry fitting component and a
modular loss function that enforces a common symmetric relation across all
states while adapting to the learned policy. The performance of ASL is compared
to existing symmetry-enhanced methods in a case study involving a four-legged
ant model for multidirectional locomotion tasks. The results demonstrate that
ASL is capable of recovering from large perturbations and generalizing
knowledge to hidden symmetric states. It achieves comparable or better
performance than alternative methods in most scenarios, making it a valuable
approach for leveraging model symmetry while compensating for inherent
perturbations. | [
"Miguel Abreu",
"Luis Paulo Reis",
"Nuno Lau"
] | 2023-09-06 04:47:46 | http://arxiv.org/abs/2309.02711v1 | http://arxiv.org/pdf/2309.02711v1 | 2309.02711v1 |
Improved Outlier Robust Seeding for k-means | The $k$-means is a popular clustering objective, although it is inherently
non-robust and sensitive to outliers. Its popular seeding or initialization
called $k$-means++ uses $D^{2}$ sampling and comes with a provable $O(\log k)$
approximation guarantee \cite{AV2007}. However, in the presence of adversarial
noise or outliers, $D^{2}$ sampling is more likely to pick centers from distant
outliers instead of inlier clusters, and therefore its approximation guarantees
\textit{w.r.t.} the $k$-means solution on inliers do not hold.
Assuming that the outliers constitute a constant fraction of the given data,
we propose a simple variant in the $D^2$ sampling distribution, which makes it
robust to the outliers. Our algorithm runs in $O(ndk)$ time, outputs $O(k)$
clusters, discards marginally more points than the optimal number of outliers,
and comes with a provable $O(1)$ approximation guarantee.
Our algorithm can also be modified to output exactly $k$ clusters instead of
$O(k)$ clusters, while keeping its running time linear in $n$ and $d$. This is
an improvement over previous results for robust $k$-means based on LP
relaxation and rounding \cite{Charikar}, \cite{KrishnaswamyLS18} and
\textit{robust $k$-means++} \cite{DeshpandeKP20}. Our empirical results show
the advantage of our algorithm over $k$-means++~\cite{AV2007}, uniform random
seeding, greedy sampling for $k$-means~\cite{tkmeanspp}, and robust
$k$-means++~\cite{DeshpandeKP20}, on standard real-world and synthetic data
sets used in previous work. Our proposal is easily amenable to scalable,
faster, parallel implementations of $k$-means++ \cite{Bahmani,BachemL017} and
is of independent interest for coreset constructions in the presence of
outliers \cite{feldman2007ptas,langberg2010universal,feldman2011unified}. | [
"Amit Deshpande",
"Rameshwar Pratap"
] | 2023-09-06 04:46:01 | http://arxiv.org/abs/2309.02710v1 | http://arxiv.org/pdf/2309.02710v1 | 2309.02710v1 |
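For context, the $D^2$ sampling distribution that the paper robustifies is the standard $k$-means++ seeding rule: each new center is drawn with probability proportional to its squared distance to the nearest center chosen so far. A minimal sketch of this baseline (the paper's contribution is a modified variant of this distribution that avoids over-weighting distant outliers; that modification is not reproduced here):

```python
import numpy as np

def dsquared_seeding(X, k, seed=None):
    # Standard k-means++ (D^2) seeding.  Because sampling weight grows
    # with squared distance, adversarial outliers are disproportionately
    # likely to be picked -- the failure mode the paper addresses.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        C = np.array(centers)
        # squared distance of every point to its nearest chosen center
        d2 = np.min(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)
```

Each chosen center is an actual data point, and the whole procedure runs in $O(ndk)$ time, matching the cost regime the abstract quotes.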
Certifying LLM Safety against Adversarial Prompting | Large language models (LLMs) released for public use incorporate guardrails
to ensure their output is safe, often referred to as "model alignment." An
aligned language model should decline a user's request to produce harmful
content. However, such safety measures are vulnerable to adversarial prompts,
which contain maliciously designed token sequences to circumvent the model's
safety guards and cause it to produce harmful content. In this work, we
introduce erase-and-check, the first framework to defend against adversarial
prompts with verifiable safety guarantees. We erase tokens individually and
inspect the resulting subsequences using a safety filter. Our procedure labels
the input prompt as harmful if any subsequences or the input prompt are
detected as harmful by the filter. This guarantees that any adversarial
modification of a harmful prompt up to a certain size is also labeled harmful.
We defend against three attack modes: i) adversarial suffix, which appends an
adversarial sequence at the end of the prompt; ii) adversarial insertion, where
the adversarial sequence is inserted anywhere in the middle of the prompt; and
iii) adversarial infusion, where adversarial tokens are inserted at arbitrary
positions in the prompt, not necessarily as a contiguous block. Empirical
results demonstrate that our technique obtains strong certified safety
guarantees on harmful prompts while maintaining good performance on safe
prompts. For example, against adversarial suffixes of length 20, it certifiably
detects 93% of the harmful prompts and labels 94% of the safe prompts as safe
using the open source language model Llama 2 as the safety filter. | [
"Aounon Kumar",
"Chirag Agarwal",
"Suraj Srinivas",
"Soheil Feizi",
"Hima Lakkaraju"
] | 2023-09-06 04:37:20 | http://arxiv.org/abs/2309.02705v1 | http://arxiv.org/pdf/2309.02705v1 | 2309.02705v1 |
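The erase-and-check procedure for the suffix attack mode can be sketched directly from the abstract: run the safety filter on the prompt and on every version with up to a fixed number of trailing tokens erased, and label the prompt harmful if any subsequence is flagged. The `is_harmful` predicate below is a stand-in for the paper's learned safety filter (e.g., Llama 2), which is an assumption of this sketch:

```python
def erase_and_check(prompt_tokens, is_harmful, max_erase):
    # Suffix-mode erase-and-check: if a harmful prompt was extended with
    # an adversarial suffix of length <= max_erase, one of the erased
    # candidates recovers the original harmful prompt, so the filter's
    # detection survives the attack -- the source of the certificate.
    n = len(prompt_tokens)
    for i in range(max_erase + 1):
        candidate = prompt_tokens[: n - i]
        if candidate and is_harmful(candidate):
            return True
    return False
```

The insertion and infusion modes enumerate more erasure patterns (contiguous blocks anywhere, or arbitrary token subsets), at correspondingly higher cost.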
Diffusion-EDFs: Bi-equivariant Denoising Generative Modeling on SE(3) for Visual Robotic Manipulation | Recent studies have verified that equivariant methods can significantly
improve the data efficiency, generalizability, and robustness in robot
learning. Meanwhile, denoising diffusion-based generative modeling has recently
gained significant attention as a promising approach for robotic manipulation
learning from demonstrations with stochastic behaviors. In this paper, we
present Diffusion-EDFs, a novel approach that incorporates spatial
roto-translation equivariance, i.e., SE(3)-equivariance to diffusion generative
modeling. By integrating SE(3)-equivariance into our model architectures, we
demonstrate that our proposed method exhibits remarkable data efficiency,
requiring only 5 to 10 task demonstrations for effective end-to-end training.
Furthermore, our approach showcases superior generalizability compared to
previous diffusion-based manipulation methods. | [
"Hyunwoo Ryu",
"Jiwoo Kim",
"Junwoo Chang",
"Hyun Seok Ahn",
"Joohwan Seo",
"Taehan Kim",
"Yubin Kim",
"Jongeun Choi",
"Roberto Horowitz"
] | 2023-09-06 03:42:20 | http://arxiv.org/abs/2309.02685v2 | http://arxiv.org/pdf/2309.02685v2 | 2309.02685v2 |
Spatio-Temporal Contrastive Self-Supervised Learning for POI-level Crowd Flow Inference | Accurate acquisition of crowd flow at Points of Interest (POIs) is pivotal
for effective traffic management, public service, and urban planning. Despite
this importance, due to the limitations of urban sensing techniques, the data
quality from most sources is inadequate for monitoring crowd flow at each POI.
This renders the inference of accurate crowd flow from low-quality data a
critical and challenging task. The complexity is heightened by three key
factors: 1) The scarcity of labeled data, 2) The intricate
spatio-temporal dependencies among POIs, and 3) The myriad correlations between
precise crowd flow and GPS reports.
To address these challenges, we recast the crowd flow inference problem as a
self-supervised attributed graph representation learning task and introduce a
novel Contrastive Self-learning framework for Spatio-Temporal data (CSST). Our
approach initiates with the construction of a spatial adjacency graph founded
on the POIs and their respective distances. We then employ a contrastive
learning technique to exploit large volumes of unlabeled spatio-temporal data.
We adopt a swapped prediction approach to anticipate the representation of the
target subgraph from similar instances. Following the pre-training phase, the
model is fine-tuned with accurate crowd flow data. Our experiments, conducted
on two real-world datasets, demonstrate that the CSST pre-trained on extensive
noisy data consistently outperforms models trained from scratch. | [
"Songyu Ke",
"Ting Li",
"Li Song",
"Yanping Sun",
"Qintian Sun",
"Junbo Zhang",
"Yu Zheng"
] | 2023-09-06 02:51:24 | http://arxiv.org/abs/2309.03239v2 | http://arxiv.org/pdf/2309.03239v2 | 2309.03239v2 |
Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation | Emotion recognition is a complex task due to the inherent subjectivity in
both the perception and production of emotions. The subjectivity of emotions
poses significant challenges in developing accurate and robust computational
models. This thesis examines critical facets of emotion recognition, beginning
with the collection of diverse datasets that account for psychological factors
in emotion production.
To handle the challenge of non-representative training data, this work
collects the Multimodal Stressed Emotion dataset, which introduces controlled
stressors during data collection to better represent real-world influences on
emotion production. To address issues with label subjectivity, this research
comprehensively analyzes how data augmentation techniques and annotation
schemes impact emotion perception and annotator labels. It further handles
natural confounding variables and variations by employing adversarial networks
to isolate key factors like stress from learned emotion representations during
model training. For tackling concerns about leakage of sensitive demographic
variables, this work leverages adversarial learning to strip sensitive
demographic information from multimodal encodings. Additionally, it proposes
optimized sociological evaluation metrics aligned with cost-effective,
real-world needs for model testing.
This research advances robust, practical emotion recognition through
multifaceted studies of challenges in datasets, labels, modeling, demographic
and membership variable encoding in representations, and evaluation. The
groundwork has been laid for cost-effective, generalizable emotion recognition
models that are less likely to encode sensitive demographic information. | [
"Mimansa Jaiswal"
] | 2023-09-06 02:45:42 | http://arxiv.org/abs/2309.03238v1 | http://arxiv.org/pdf/2309.03238v1 | 2309.03238v1 |
RLSynC: Offline-Online Reinforcement Learning for Synthon Completion | Retrosynthesis is the process of determining the set of reactant molecules
that can react to form a desired product. Semi-template-based retrosynthesis
methods, which imitate the reverse logic of synthesis reactions, first predict
the reaction centers in the products, and then complete the resulting synthons
back into reactants. These methods enable necessary interpretability and high
practical utility to inform synthesis planning. We develop a new offline-online
reinforcement learning method RLSynC for synthon completion in
semi-template-based methods. RLSynC assigns one agent to each synthon, all of
which complete the synthons by conducting actions step by step in a
synchronized fashion. RLSynC learns the policy from both offline training
episodes and online interactions which allow RLSynC to explore new reaction
spaces. RLSynC uses a forward synthesis model to evaluate the likelihood of the
predicted reactants in synthesizing a product, and thus guides the action
search. We compare RLSynC with the state-of-the-art retrosynthesis methods. Our
experimental results demonstrate that RLSynC can outperform these methods with
improvement as high as 14.9% on synthon completion, and 14.0% on
retrosynthesis, highlighting its potential in synthesis planning. | [
"Frazier N. Baker",
"Ziqi Chen",
"Xia Ning"
] | 2023-09-06 02:40:33 | http://arxiv.org/abs/2309.02671v2 | http://arxiv.org/pdf/2309.02671v2 | 2309.02671v2 |
Marketing Budget Allocation with Offline Constrained Deep Reinforcement Learning | We study the budget allocation problem in online marketing campaigns that
utilize previously collected offline data. We first discuss the long-term
effect of optimizing marketing budget allocation decisions in the offline
setting. To overcome the challenge, we propose a novel game-theoretic offline
value-based reinforcement learning method using mixed policies. The proposed
method reduces the storage requirement from the infinitely many policies of
previous methods to only constantly many, achieving nearly optimal policy
efficiency, making it practical and favorable for industrial usage. We further
show that this method is guaranteed to converge to the optimal policy, which
cannot be achieved by previous value-based reinforcement learning methods for
marketing budget allocation. Our experiments on a large-scale marketing
campaign with tens-of-millions users and more than one billion budget verify
the theoretical results and show that the proposed method outperforms various
baseline methods. The proposed method has been successfully deployed to serve
all the traffic of this marketing campaign. | [
"Tianchi Cai",
"Jiyan Jiang",
"Wenpeng Zhang",
"Shiji Zhou",
"Xierui Song",
"Li Yu",
"Lihong Gu",
"Xiaodong Zeng",
"Jinjie Gu",
"Guannan Zhang"
] | 2023-09-06 02:35:46 | http://arxiv.org/abs/2309.02669v1 | http://arxiv.org/pdf/2309.02669v1 | 2309.02669v1 |
Federated Learning Over Images: Vertical Decompositions and Pre-Trained Backbones Are Difficult to Beat | We carefully evaluate a number of algorithms for learning in a federated
environment, and test their utility for a variety of image classification
tasks. We consider many issues that have not been adequately considered before:
whether learning over data sets that do not have diverse sets of images affects
the results; whether to use a pre-trained feature extraction "backbone"; how to
evaluate learner performance (we argue that classification accuracy is not
enough), among others. Overall, across a wide variety of settings, we find that
vertically decomposing a neural network seems to give the best results, and
outperforms more standard reconciliation-based methods. | [
"Erdong Hu",
"Yuxin Tang",
"Anastasios Kyrillidis",
"Chris Jermaine"
] | 2023-09-06 02:09:14 | http://arxiv.org/abs/2309.03237v1 | http://arxiv.org/pdf/2309.03237v1 | 2309.03237v1 |
Contrastive Learning as Kernel Approximation | In standard supervised machine learning, it is necessary to provide a label
for every input in the data. While raw data in many application domains is
easily obtainable on the Internet, manual labelling of this data is
prohibitively expensive. To circumvent this issue, contrastive learning methods
produce low-dimensional vector representations (also called features) of
high-dimensional inputs on large unlabelled datasets. This is done by training
with a contrastive loss function, which enforces that similar inputs have high
inner product and dissimilar inputs have low inner product in the feature
space. Rather than annotating each input individually, it suffices to define a
means of sampling pairs of similar and dissimilar inputs. Contrastive features
can then be fed as inputs to supervised learning systems on much smaller
labelled datasets to obtain high accuracy on end tasks of interest.
The goal of this thesis is to provide an overview of the current theoretical
understanding of contrastive learning, specifically as it pertains to the
minimizers of contrastive loss functions and their relationship to prior
methods for learning features from unlabelled data. We highlight popular
contrastive loss functions whose minimizers implicitly approximate a positive
semidefinite (PSD) kernel. The latter is a well-studied object in functional
analysis and learning theory that formalizes a notion of similarity between
elements of a space. PSD kernels provide an implicit definition of features
through the theory of reproducing kernel Hilbert spaces. | [
"Konstantinos Christopher Tsiolis"
] | 2023-09-06 01:25:30 | http://arxiv.org/abs/2309.02651v1 | http://arxiv.org/pdf/2309.02651v1 | 2309.02651v1 |
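The contrastive loss the thesis analyzes can be made concrete with a standard InfoNCE-style objective: it is small when the anchor has high inner product with its positive and low inner product with negatives, and minimizers of such losses implicitly approximate a PSD kernel $k(x,y) \approx \langle f(x), f(y)\rangle$. A minimal sketch, with the temperature `tau` and the single-positive form taken as assumptions:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    # InfoNCE-style contrastive loss over feature vectors: cross-entropy
    # of picking the positive among {positive} + negatives, with inner
    # product as the similarity and temperature tau.
    sims = np.array([anchor @ positive] + [anchor @ n for n in negatives]) / tau
    sims = sims - sims.max()  # numerical stability
    return float(-sims[0] + np.log(np.exp(sims).sum()))
```

As expected, the loss is lower when the positive is aligned with the anchor than when a negative is, which is exactly the high/low inner-product structure the PSD-kernel view formalizes.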
TFBEST: Dual-Aspect Transformer with Learnable Positional Encoding for Failure Prediction | Hard Disk Drive (HDD) failures in datacenters are costly - from catastrophic
data loss to a question of goodwill, stakeholders want to avoid it like the
plague. An important tool in proactively monitoring against HDD failure is
timely estimation of the Remaining Useful Life (RUL). To this end, the
Self-Monitoring, Analysis and Reporting Technology employed within HDDs
(S.M.A.R.T.) provide critical logs for long-term maintenance of the security
and dependability of these essential data storage devices. Data-driven
predictive models in the past have used these S.M.A.R.T. logs and CNN/RNN based
architectures heavily. However, they have suffered significantly in providing a
confidence interval around the predicted RUL values as well as in processing
very long sequences of logs. In addition, some of these approaches, such as
those based on LSTMs, are inherently slow to train and have tedious feature
engineering overheads. To overcome these challenges, in this work we propose a
novel transformer architecture - a Temporal-fusion Bi-encoder Self-attention
Transformer (TFBEST) for predicting failures in hard-drives. It is an
encoder-decoder based deep learning technique that enhances the context gained
from understanding health statistics sequences and predicts a sequence of the
number of days remaining before a disk potentially fails. In this paper, we
also provide a novel confidence margin statistic that can help manufacturers
replace a hard-drive within a time frame. Experiments on Seagate HDD data show
that our method significantly outperforms the state-of-the-art RUL prediction
methods during testing over the exhaustive 10-year data from Backblaze
(2013-present). Although validated on HDD failure prediction, the TFBEST
architecture is well-suited for other prognostics applications and may be
adapted for allied regression problems. | [
"Rohan Mohapatra",
"Saptarshi Sengupta"
] | 2023-09-06 01:03:14 | http://arxiv.org/abs/2309.02641v1 | http://arxiv.org/pdf/2309.02641v1 | 2309.02641v1 |
Epi-Curriculum: Episodic Curriculum Learning for Low-Resource Domain Adaptation in Neural Machine Translation | Neural Machine Translation (NMT) models have become successful, but their
performance remains poor when translating on new domains with a limited number
of data. In this paper, we present a novel approach Epi-Curriculum to address
low-resource domain adaptation (DA), which contains a new episodic training
framework along with denoised curriculum learning. Our episodic training
framework enhances the model's robustness to domain shift by episodically
exposing the encoder/decoder to an inexperienced decoder/encoder. The denoised
curriculum learning filters the noisy data and further improves the model's
adaptability by gradually guiding the learning process from easy to more
difficult tasks. Experiments on English-German and English-Romanian translation
show that: (i) Epi-Curriculum improves both model's robustness and adaptability
in seen and unseen domains; (ii) Our episodic training framework enhances the
encoder and decoder's robustness to domain shift. | [
"Keyu Chen",
"Di Zhuang",
"Mingchen Li",
"J. Morris Chang"
] | 2023-09-06 00:59:27 | http://arxiv.org/abs/2309.02640v1 | http://arxiv.org/pdf/2309.02640v1 | 2309.02640v1 |
Multiclass Alignment of Confidence and Certainty for Network Calibration | Deep neural networks (DNNs) have made great strides in pushing the
state-of-the-art in several challenging domains. Recent studies reveal that
they are prone to making overconfident predictions. This greatly reduces the
overall trust in model predictions, especially in safety-critical applications.
Early work in improving model calibration employs post-processing techniques
which rely on limited parameters and require a hold-out set. Some recent
train-time calibration methods, which involve all model parameters, can
outperform the postprocessing methods. To this end, we propose a new train-time
calibration method, which features a simple, plug-and-play auxiliary loss known
as multi-class alignment of predictive mean confidence and predictive certainty
(MACC). It is based on the observation that a model's miscalibration is directly
related to its predictive certainty, so a higher gap between the mean
confidence and certainty amounts to a poor calibration both for in-distribution
and out-of-distribution predictions. Armed with this insight, our proposed loss
explicitly encourages a confident (or underconfident) model to also provide a
low (or high) spread in the presoftmax distribution. Extensive experiments on
ten challenging datasets, covering in-domain, out-domain, non-visual
recognition and medical image classification scenarios, show that our method
achieves state-of-the-art calibration performance for both in-domain and
out-domain predictions. Our code and models will be publicly released. | [
"Vinith Kugathasan",
"Muhammad Haris Khan"
] | 2023-09-06 00:56:24 | http://arxiv.org/abs/2309.02636v1 | http://arxiv.org/pdf/2309.02636v1 | 2309.02636v1 |
Deep Reinforcement Learning from Hierarchical Weak Preference Feedback | Reward design is a fundamental, yet challenging aspect of practical
reinforcement learning (RL). For simple tasks, researchers typically handcraft
the reward function, e.g., using a linear combination of several reward
factors. However, such reward engineering is subject to approximation bias,
incurs large tuning cost, and often cannot provide the granularity required for
complex tasks. To avoid these difficulties, researchers have turned to
reinforcement learning from human feedback (RLHF), which learns a reward
function from human preferences between pairs of trajectory sequences. By
leveraging preference-based reward modeling, RLHF learns complex rewards that
are well aligned with human preferences, allowing RL to tackle increasingly
difficult problems. Unfortunately, the applicability of RLHF is limited due to
the high cost and difficulty of obtaining human preference data. In light of
this cost, we investigate learning reward functions for complex tasks with less
human effort, simply by ranking the importance of the reward factors. More
specifically, we propose a new RL framework -- HERON, which compares
trajectories using a hierarchical decision tree induced by the given ranking.
These comparisons are used to train a preference-based reward model, which is
then used for policy learning. We find that our framework can not only train
high performing agents on a variety of difficult tasks, but also provide
additional benefits such as improved sample efficiency and robustness. Our code
is available at https://github.com/abukharin3/HERON. | [
"Alexander Bukharin",
"Yixiao Li",
"Pengcheng He",
"Weizhu Chen",
"Tuo Zhao"
] | 2023-09-06 00:44:29 | http://arxiv.org/abs/2309.02632v1 | http://arxiv.org/pdf/2309.02632v1 | 2309.02632v1 |
Superclustering by finding statistically significant separable groups of optimal gaussian clusters | The paper presents the algorithm for clustering a dataset by grouping the
optimal, from the point of view of the BIC criterion, number of Gaussian
clusters into the optimal, from the point of view of their statistical
separability, superclusters.
The algorithm consists of three stages: representation of the dataset as a
mixture of Gaussian distributions - clusters, whose number is determined based
on the minimum of the BIC criterion; using the Mahalanobis distance to
estimate the distances between the clusters and cluster sizes; combining the
resulting clusters into superclusters using the DBSCAN method by finding its
hyperparameter (maximum distance) providing the maximum value of the introduced
matrix quality criterion at the maximum number of superclusters. The matrix quality
criterion corresponds to the proportion of statistically significant separated
superclusters among all found superclusters.
The algorithm has only one hyperparameter - the statistical significance
level - and automatically detects the optimal number and shape of superclusters
based on a statistical hypothesis testing approach. The algorithm demonstrates
good results on test datasets in both noisy and noiseless situations. An essential
advantage of the algorithm is its ability to predict the correct supercluster for
new data based on an already trained clusterer and to perform soft (fuzzy)
clustering. The disadvantages of the algorithm are its low speed and the
stochastic nature of the final clustering. It requires a sufficiently large
dataset for clustering, which is typical for many statistical methods. | [
"Oleg I. Berngardt"
] | 2023-09-05 23:49:46 | http://arxiv.org/abs/2309.02623v1 | http://arxiv.org/pdf/2309.02623v1 | 2309.02623v1 |
Compressing Vision Transformers for Low-Resource Visual Learning | Vision transformer (ViT) and its variants have swept through visual learning
leaderboards and offer state-of-the-art accuracy in tasks such as image
classification, object detection, and semantic segmentation by attending to
different parts of the visual input and capturing long-range spatial
dependencies. However, these models are large and computation-heavy. For
instance, the recently proposed ViT-B model has 86M parameters making it
impractical for deployment on resource-constrained devices. As a result, their
deployment on mobile and edge scenarios is limited. In our work, we aim to take
a step toward bringing vision transformers to the edge by utilizing popular
model compression techniques such as distillation, pruning, and quantization.
Our chosen application environment is an unmanned aerial vehicle (UAV) that
is battery-powered and memory-constrained, carrying a single-board computer on
the scale of an NVIDIA Jetson Nano with 4GB of RAM. On the other hand, the UAV
requires high accuracy close to that of state-of-the-art ViTs to ensure safe
object avoidance in autonomous navigation, or correct localization of humans in
search-and-rescue. Inference latency should also be minimized given the
application requirements. Hence, our target is to enable rapid inference of a
vision transformer on an NVIDIA Jetson Nano (4GB) with minimal accuracy loss.
This allows us to deploy ViTs on resource-constrained devices, opening up new
possibilities in surveillance, environmental monitoring, etc. Our
implementation is made available at https://github.com/chensy7/efficient-vit. | [
"Eric Youn",
"Sai Mitheran J",
"Sanjana Prabhu",
"Siyuan Chen"
] | 2023-09-05 23:33:39 | http://arxiv.org/abs/2309.02617v1 | http://arxiv.org/pdf/2309.02617v1 | 2309.02617v1 |
Generative AI-aided Joint Training-free Secure Semantic Communications via Multi-modal Prompts | Semantic communication (SemCom) holds promise for reducing network resource
consumption while achieving the communications goal. However, the computational
overheads in jointly training semantic encoders and decoders-and the subsequent
deployment in network devices-are overlooked. Recent advances in Generative
artificial intelligence (GAI) offer a potential solution. The robust learning
abilities of GAI models indicate that semantic decoders can reconstruct source
messages using a limited amount of semantic information, e.g., prompts, without
joint training with the semantic encoder. A notable challenge, however, is the
instability introduced by GAI's diverse generation ability. This instability,
evident in outputs like text-generated images, limits the direct application of
GAI in scenarios demanding accurate message recovery, such as face image
transmission. To solve the above problems, this paper proposes a GAI-aided
SemCom system with multi-modal prompts for accurate content decoding. Moreover,
in response to security concerns, we introduce the application of covert
communications aided by a friendly jammer. The system jointly optimizes the
diffusion step, jamming, and transmitting power with the aid of the generative
diffusion models, enabling successful and secure transmission of the source
messages. | [
"Hongyang Du",
"Guangyuan Liu",
"Dusit Niyato",
"Jiayi Zhang",
"Jiawen Kang",
"Zehui Xiong",
"Bo Ai",
"Dong In Kim"
] | 2023-09-05 23:24:56 | http://arxiv.org/abs/2309.02616v1 | http://arxiv.org/pdf/2309.02616v1 | 2309.02616v1 |
Generative Algorithms for Fusion of Physics-Based Wildfire Spread Models with Satellite Data for Initializing Wildfire Forecasts | Increases in wildfire activity and the resulting impacts have prompted the
development of high-resolution wildfire behavior models for forecasting fire
spread. Recent progress in using satellites to detect fire locations further
provides the opportunity to use measurements to improve fire spread forecasts
from numerical models through data assimilation. This work develops a method
for inferring the history of a wildfire from satellite measurements, providing
the necessary information to initialize coupled atmosphere-wildfire models from
a measured wildfire state in a physics-informed approach. The fire arrival
time, which is the time the fire reaches a given spatial location, acts as a
succinct representation of the history of a wildfire. In this work, a
conditional Wasserstein Generative Adversarial Network (cWGAN), trained with
WRF-SFIRE simulations, is used to infer the fire arrival time from satellite
active fire data. The cWGAN is used to produce samples of likely fire arrival
times from the conditional distribution of arrival times given satellite active
fire detections. Samples produced by the cWGAN are further used to assess the
uncertainty of predictions. The cWGAN is tested on four California wildfires
occurring between 2020 and 2022, and predictions for fire extent are compared
against high resolution airborne infrared measurements. Further, the predicted
ignition times are compared with reported ignition times. An average Sorensen's
coefficient of 0.81 for the fire perimeters and an average ignition time error
of 32 minutes suggest that the method is highly accurate. | [
"Bryan Shaddy",
"Deep Ray",
"Angel Farguell",
"Valentina Calaza",
"Jan Mandel",
"James Haley",
"Kyle Hilburn",
"Derek V. Mallia",
"Adam Kochanski",
"Assad Oberai"
] | 2023-09-05 23:24:34 | http://arxiv.org/abs/2309.02615v1 | http://arxiv.org/pdf/2309.02615v1 | 2309.02615v1 |
Utilizing Generative Adversarial Networks for Stable Structure Generation in Angry Birds | This paper investigates the suitability of using Generative Adversarial
Networks (GANs) to generate stable structures for the physics-based puzzle game
Angry Birds. While previous applications of GANs for level generation have been
mostly limited to tile-based representations, this paper explores their
suitability for creating stable structures made from multiple smaller blocks.
This includes a detailed encoding/decoding process for converting between Angry
Birds level descriptions and a suitable grid-based representation, as well as
utilizing state-of-the-art GAN architectures and training methods to produce
new structure designs. Our results show that GANs can be successfully applied
to generate a varied range of complex and stable Angry Birds structures. | [
"Frederic Abraham",
"Matthew Stephenson"
] | 2023-09-05 23:19:13 | http://arxiv.org/abs/2309.02614v1 | http://arxiv.org/pdf/2309.02614v1 | 2309.02614v1 |
T-SaS: Toward Shift-aware Dynamic Adaptation for Streaming Data | In many real-world scenarios, distribution shifts exist in the streaming data
across time steps. Many complex sequential data can be effectively divided into
distinct regimes that exhibit persistent dynamics. Discovering the shifted
behaviors and the evolving patterns underlying the streaming data are important
to understand the dynamic system. Existing methods typically train one robust
model to work for the evolving data of distinct distributions or sequentially
adapt the model utilizing explicitly given regime boundaries. However, there
are two challenges: (1) shifts in data streams could happen drastically and
abruptly without precursors. Boundaries of distribution shifts are usually
unavailable, and (2) training a shared model for all domains could fail to
capture varying patterns. This paper aims to solve the problem of sequential
data modeling in the presence of sudden distribution shifts that occur without
any precursors. Specifically, we design a Bayesian framework, dubbed as T-SaS,
with a discrete distribution-modeling variable to capture abrupt shifts of
data. Then, we design a model that enables adaptation with dynamic network
selection conditioned on that discrete variable. The proposed method learns
specific model parameters for each distribution by learning which neurons
should be activated in the full network. A dynamic masking strategy is adopted
here to support inter-distribution transfer through the overlapping of a set of
sparse networks. Extensive experiments show that our proposed method is
superior in both accurately detecting shift boundaries to get segments of
varying distributions and effectively adapting to downstream forecast or
classification tasks. | [
"Weijieying Ren",
"Tianxiang Zhao",
"Wei Qin",
"Kunpeng Liu"
] | 2023-09-05 22:55:10 | http://arxiv.org/abs/2309.02610v1 | http://arxiv.org/pdf/2309.02610v1 | 2309.02610v1 |
Distributed Variational Inference for Online Supervised Learning | Developing efficient solutions for inference problems in intelligent sensor
networks is crucial for the next generation of location, tracking, and mapping
services. This paper develops a scalable distributed probabilistic inference
algorithm that applies to continuous variables, intractable posteriors and
large-scale real-time data in sensor networks. In a centralized setting,
variational inference is a fundamental technique for performing approximate
Bayesian estimation, in which an intractable posterior density is approximated
with a parametric density. Our key contribution lies in the derivation of a
separable lower bound on the centralized estimation objective, which enables
distributed variational inference with one-hop communication in a sensor
network. Our distributed evidence lower bound (DELBO) consists of a weighted
sum of observation likelihood and divergence to prior densities, and its gap to
the measurement evidence is due to consensus and modeling errors. To solve
binary classification and regression problems while handling streaming data, we
design an online distributed algorithm that maximizes DELBO, and specialize it
to Gaussian variational densities with non-linear likelihoods. The resulting
distributed Gaussian variational inference (DGVI) efficiently inverts a
rank-$1$ correction to the covariance matrix. Finally, we derive a diagonalized
version for online distributed inference in high-dimensional models, and apply
it to multi-robot probabilistic mapping using indoor LiDAR data. | [
"Parth Paritosh",
"Nikolay Atanasov",
"Sonia Martinez"
] | 2023-09-05 22:33:02 | http://arxiv.org/abs/2309.02606v2 | http://arxiv.org/pdf/2309.02606v2 | 2309.02606v2 |
Screening of Pneumonia and Urinary Tract Infection at Triage using TriNet | Due to the steady rise in population demographics and longevity, emergency
department visits are increasing across North America. As more patients visit
the emergency department, traditional clinical workflows become overloaded and
inefficient, leading to prolonged wait-times and reduced healthcare quality.
One of such workflows is the triage medical directive, impeded by limited human
workload, inaccurate diagnoses and invasive over-testing. To address this
issue, we propose TriNet: a machine learning model for medical directives that
automates first-line screening at triage for conditions requiring downstream
testing for diagnosis confirmation. To verify screening potential, TriNet was
trained on hospital triage data and achieved high positive predictive values in
detecting pneumonia (0.86) and urinary tract infection (0.93). These models
outperform current clinical benchmarks, indicating that machine-learning
medical directives can offer cost-free, non-invasive screening with high
specificity for common conditions, reducing the risk of over-testing while
increasing emergency department efficiency. | [
"Stephen Z. Lu"
] | 2023-09-05 22:25:30 | http://arxiv.org/abs/2309.02604v1 | http://arxiv.org/pdf/2309.02604v1 | 2309.02604v1 |
Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks | In this study, we investigated whether self-supervised pretraining could
produce a neural network feature extractor applicable to multiple
classification tasks in B-mode lung ultrasound analysis. When fine-tuning on
three lung ultrasound tasks, pretrained models resulted in an improvement of
the average across-task area under the receiver operating curve (AUC) by 0.032
and 0.061 on local and external test sets respectively. Compact nonlinear
classifiers trained on features outputted by a single pretrained model did not
improve performance across all tasks; however, they did reduce inference time
by 49% compared to serial execution of separate fine-tuned models. When
training using 1% of the available labels, pretrained models consistently
outperformed fully supervised models, with a maximum observed test AUC increase
of 0.396 for the task of view classification. Overall, the results indicate
that self-supervised pretraining is useful for producing initial weights for
lung ultrasound classifiers. | [
"Blake VanBerlo",
"Brian Li",
"Jesse Hoey",
"Alexander Wong"
] | 2023-09-05 21:36:42 | http://arxiv.org/abs/2309.02596v1 | http://arxiv.org/pdf/2309.02596v1 | 2309.02596v1 |
Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning | We present CM3Leon (pronounced "Chameleon"), a retrieval-augmented,
token-based, decoder-only multi-modal language model capable of generating and
infilling both text and images. CM3Leon uses the CM3 multi-modal architecture
but additionally shows the extreme benefits of scaling up and tuning on more
diverse instruction-style data. It is the first multi-modal model trained with
a recipe adapted from text-only language models, including a large-scale
retrieval-augmented pre-training stage and a second multi-task supervised
fine-tuning (SFT) stage. It is also a general-purpose model that can do both
text-to-image and image-to-text generation, allowing us to introduce
self-contained contrastive decoding methods that produce high-quality outputs.
Extensive experiments demonstrate that this recipe is highly effective for
multi-modal models. CM3Leon achieves state-of-the-art performance in
text-to-image generation with 5x less training compute than comparable methods
(zero-shot MS-COCO FID of 4.88). After SFT, CM3Leon can also demonstrate
unprecedented levels of controllability in tasks ranging from language-guided
image editing to image-controlled generation and segmentation. | [
"Lili Yu",
"Bowen Shi",
"Ramakanth Pasunuru",
"Benjamin Muller",
"Olga Golovneva",
"Tianlu Wang",
"Arun Babu",
"Binh Tang",
"Brian Karrer",
"Shelly Sheynin",
"Candace Ross",
"Adam Polyak",
"Russell Howes",
"Vasu Sharma",
"Puxin Xu",
"Hovhannes Tamoyan",
"Oron Ashual",
"Uriel Singer",
"Shang-Wen Li",
"Susan Zhang",
"Richard James",
"Gargi Ghosh",
"Yaniv Taigman",
"Maryam Fazel-Zarandi",
"Asli Celikyilmaz",
"Luke Zettlemoyer",
"Armen Aghajanyan"
] | 2023-09-05 21:27:27 | http://arxiv.org/abs/2309.02591v1 | http://arxiv.org/pdf/2309.02591v1 | 2309.02591v1 |
Representation Learning for Sequential Volumetric Design Tasks | Volumetric design, also called massing design, is the first and critical step
in professional building design which is sequential in nature. As the
volumetric design process is complex, the underlying sequential design process
encodes valuable information for designers. Many efforts have been made to
automatically generate reasonable volumetric designs, but the quality of the
generated design solutions varies, and evaluating a design solution requires
either a prohibitively comprehensive set of metrics or expensive human
expertise. While previous approaches focused on learning only the final design
instead of sequential design tasks, we propose to encode the design knowledge
from a collection of expert or high-performing design sequences and extract
useful representations using transformer-based models. Later we propose to
utilize the learned representations for crucial downstream applications such as
design preference evaluation and procedural design generation. We develop the
preference model by estimating the density of the learned representations
whereas we train an autoregressive transformer model for sequential design
generation. We demonstrate our ideas by leveraging a novel dataset of thousands
of sequential volumetric designs. Our preference model can compare two
arbitrarily given design sequences and is almost 90% accurate in evaluation
against random design sequences. Our autoregressive model is also capable of
autocompleting a volumetric design sequence from a partial design sequence. | [
"Md Ferdous Alam",
"Yi Wang",
"Linh Tran",
"Chin-Yi Cheng",
"Jieliang Luo"
] | 2023-09-05 21:21:06 | http://arxiv.org/abs/2309.02583v1 | http://arxiv.org/pdf/2309.02583v1 | 2309.02583v1 |
Unveiling Intractable Epileptogenic Brain Networks with Deep Learning Algorithms: A Novel and Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients | Epilepsy is a prevalent neurological disorder affecting 50 million
individuals worldwide and 1.2 million Americans. There exist millions of
pediatric patients with intractable epilepsy, a condition in which seizures
fail to come under control. The occurrence of seizures can result in physical
injury, disorientation, unconsciousness, and additional symptoms that could
impede children's ability to participate in everyday tasks. Predicting seizures
can help parents and healthcare providers take precautions, prevent risky
situations, and mentally prepare children to minimize anxiety and nervousness
associated with the uncertainty of a seizure. This research proposes a novel
and comprehensive framework to predict seizures in pediatric patients by
evaluating machine learning algorithms on unimodal neuroimaging data consisting
of electroencephalogram signals. The bandpass filtering and independent
component analysis proved to be effective in reducing the noise and artifacts
from the dataset. Various machine learning algorithms' performance is evaluated
on important metrics such as accuracy, precision, specificity, sensitivity, F1
score and MCC. The results show that the deep learning algorithms are more
successful in predicting seizures than logistic regression and k-nearest
neighbors. The recurrent neural network (RNN) gave the highest precision and F1
Score, long short-term memory (LSTM) outperformed RNN in accuracy and
convolutional neural network (CNN) resulted in the highest Specificity. This
research has significant implications for healthcare providers in proactively
managing seizure occurrence in pediatric patients, potentially transforming
clinical practices, and improving pediatric care. | [
"Bliss Singhal",
"Fnu Pooja"
] | 2023-09-05 21:03:36 | http://arxiv.org/abs/2309.02580v1 | http://arxiv.org/pdf/2309.02580v1 | 2309.02580v1 |
Anatomy-Driven Pathology Detection on Chest X-rays | Pathology detection and delineation enables the automatic interpretation of
medical scans such as chest X-rays while providing a high level of
explainability to support radiologists in making informed decisions. However,
annotating pathology bounding boxes is a time-consuming task such that large
public datasets for this purpose are scarce. Current approaches thus use weakly
supervised object detection to learn the (rough) localization of pathologies
from image-level annotations, which is however limited in performance due to
the lack of bounding box supervision. We therefore propose anatomy-driven
pathology detection (ADPD), which uses easy-to-annotate bounding boxes of
anatomical regions as proxies for pathologies. We study two training
approaches: supervised training using anatomy-level pathology labels and
multiple instance learning (MIL) with image-level pathology labels. Our results
show that our anatomy-level training approach outperforms weakly supervised
methods and fully supervised detection with limited training samples, and our
MIL approach is competitive with both baseline approaches, therefore
demonstrating the potential of our approach. | [
"Philip Müller",
"Felix Meissen",
"Johannes Brandt",
"Georgios Kaissis",
"Daniel Rueckert"
] | 2023-09-05 20:58:15 | http://arxiv.org/abs/2309.02578v1 | http://arxiv.org/pdf/2309.02578v1 | 2309.02578v1 |
Emphysema Subtyping on Thoracic Computed Tomography Scans using Deep Neural Networks | Accurate identification of emphysema subtypes and severity is crucial for
effective management of COPD and the study of disease heterogeneity. Manual
analysis of emphysema subtypes and severity is laborious and subjective. To
address this challenge, we present a deep learning-based approach for
automating the Fleischner Society's visual score system for emphysema subtyping
and severity analysis. We trained and evaluated our algorithm using 9650
subjects from the COPDGene study. Our algorithm achieved a predictive
accuracy of 52%, outperforming a previously published method's accuracy of
45%. In addition, the agreement between the predicted scores of our method and
the visual scores was good, whereas the previous method obtained only moderate
agreement. Our approach employs a regression training strategy to generate
categorical labels while simultaneously producing high-resolution localized
activation maps for visualizing the network predictions. By leveraging these
dense activation maps, our method possesses the capability to compute the
percentage of emphysema involvement per lung in addition to categorical
severity scores. Furthermore, the proposed method extends its predictive
capabilities beyond centrilobular emphysema to include paraseptal emphysema
subtypes. | [
"Weiyi Xie",
"Colin Jacobs",
"Jean-Paul Charbonnier",
"Dirk Jan Slebos",
"Bram van Ginneken"
] | 2023-09-05 20:54:41 | http://arxiv.org/abs/2309.02576v1 | http://arxiv.org/pdf/2309.02576v1 | 2309.02576v1 |
Causal Structure Recovery of Linear Dynamical Systems: An FFT based Approach | Learning causal effects from data is a fundamental and well-studied problem
across science, especially when the cause-effect relationship is static in
nature. However, causal effect is less explored when there are dynamical
dependencies, i.e., when dependencies exist between entities across time.
Identifying dynamic causal effects from time-series observations is
computationally expensive when compared to the static scenario. We demonstrate
that the computational complexity of recovering the causation structure for the
vector auto-regressive (VAR) model is $O(Tn^3N^2)$, where $n$ is the number of
nodes, $T$ is the number of samples, and $N$ is the largest time-lag in the
dependency between entities. We report a method, with a reduced complexity of
$O(Tn^3 \log N)$, to recover the causation structure to obtain frequency-domain
(FD) representations of time-series. Since FFT accumulates all the time
dependencies on every frequency, causal inference can be performed efficiently
by considering the state variables as random variables at any given frequency.
We additionally show that, for systems with interactions that are LTI,
do-calculus machinery can be realized in the FD resulting in versions of the
classical single-door (with cycles), front and backdoor criteria. We
demonstrate, for a large class of problems, graph reconstruction using
multivariate Wiener projections results in a significant computational
advantage with $O(n)$ complexity over reconstruction algorithms such as the PC
algorithm which has $O(n^q)$ complexity, where $q$ is the maximum neighborhood
size. This advantage accrues due to some remarkable properties of the phase
response of the frequency-dependent Wiener coefficients which is not present in
any time-domain approach. | [
"Mishfad Shaikh Veedu",
"James Melbourne",
"Murti V. Salapaka"
] | 2023-09-05 20:45:34 | http://arxiv.org/abs/2309.02571v1 | http://arxiv.org/pdf/2309.02571v1 | 2309.02571v1 |
Sparse Partitioning Around Medoids | Partitioning Around Medoids (PAM, k-Medoids) is a popular clustering
technique to use with arbitrary distance functions or similarities, where each
cluster is represented by its most central object, called the medoid or the
discrete median. In operations research, this family of problems is also known
as facility location problem (FLP). FastPAM recently introduced a speedup for
large k to make it applicable for larger problems, but the method still has a
runtime quadratic in N. In this chapter, we discuss a sparse and asymmetric
variant of this problem, to be used for example on graph data such as road
networks. By exploiting sparsity, we can avoid the quadratic runtime and memory
requirements, and make this method scalable to even larger problems, as long as
we are able to build a small enough graph of sufficient connectivity to perform
local optimization. Furthermore, we consider asymmetric cases, where the set of
medoids is not identical to the set of points to be covered (or in the
interpretation of facility location, where the possible facility locations are
not identical to the consumer locations). Because of sparsity, it may be
impossible to cover all points with just k medoids for too small k, which would
render the problem unsolvable, and this breaks common heuristics for finding a
good starting condition. We, hence, consider determining k as a part of the
optimization problem and propose to first construct a greedy initial solution
with a larger k, then to optimize the problem by alternating between PAM-style
"swap" operations where the result is improved by replacing medoids with better
alternatives and "remove" operations to reduce the number of k until neither
allows further improving the result quality. We demonstrate the usefulness of
this method on a problem from electrical engineering, with the input graph
derived from cartographic data. | [
"Lars Lenssen",
"Erich Schubert"
] | 2023-09-05 19:52:24 | http://arxiv.org/abs/2309.02557v1 | http://arxiv.org/pdf/2309.02557v1 | 2309.02557v1 |
Domain Adaptation for Efficiently Fine-tuning Vision Transformer with Encrypted Images | In recent years, deep neural networks (DNNs) trained with transformed data
have been applied to various applications such as privacy-preserving learning,
access control, and adversarial defenses. However, the use of transformed data
decreases the performance of models. Accordingly, in this paper, we propose a
novel method for fine-tuning models with transformed images under the use of
the vision transformer (ViT). The proposed domain adaptation method does not
cause the accuracy degradation of models, and it is carried out on the basis of
the embedding structure of ViT. In experiments, we confirmed that the proposed
method prevents accuracy degradation even when using encrypted images with the
CIFAR-10 and CIFAR-100 datasets. | [
"Teru Nagamori",
"Sayaka Shiota",
"Hitoshi Kiya"
] | 2023-09-05 19:45:27 | http://arxiv.org/abs/2309.02556v2 | http://arxiv.org/pdf/2309.02556v2 | 2309.02556v2 |
A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images | Self-supervised pretraining has been observed to be effective at improving
feature representations for transfer learning, leveraging large amounts of
unlabelled data. This review summarizes recent research into its usage in
X-ray, computed tomography, magnetic resonance, and ultrasound imaging,
concentrating on studies that compare self-supervised pretraining to fully
supervised learning for diagnostic tasks such as classification and
segmentation. The most pertinent finding is that self-supervised pretraining
generally improves downstream task performance compared to full supervision,
most prominently when unlabelled examples greatly outnumber labelled examples.
Based on the aggregate evidence, recommendations are provided for practitioners
considering using self-supervised learning. Motivated by limitations identified
in current research, directions and practices for future study are suggested,
such as integrating clinical knowledge with theoretically justified
self-supervised learning methods, evaluating on public datasets, growing the
modest body of evidence for ultrasound, and characterizing the impact of
self-supervised pretraining on generalization. | [
"Blake VanBerlo",
"Jesse Hoey",
"Alexander Wong"
] | 2023-09-05 19:45:09 | http://arxiv.org/abs/2309.02555v1 | http://arxiv.org/pdf/2309.02555v1 | 2309.02555v1 |
Data Aggregation for Hierarchical Clustering | Hierarchical Agglomerative Clustering (HAC) is likely the earliest and most
flexible clustering method, because it can be used with many distances,
similarities, and various linkage strategies. It is often used when the number
of clusters the data set forms is unknown and some sort of hierarchy in the
data is plausible. Most algorithms for HAC operate on a full distance matrix,
and therefore require quadratic memory. The standard algorithm also has cubic
runtime to produce a full hierarchy. Both memory and runtime are especially
problematic in the context of embedded or otherwise very resource-constrained
systems. In this section, we present how data aggregation with BETULA, a
numerically stable version of the well known BIRCH data aggregation algorithm,
can be used to make HAC viable on systems with constrained resources with only
small losses in clustering quality, and hence allow exploratory data analysis
of very large data sets. | [
"Erich Schubert",
"Andreas Lang"
] | 2023-09-05 19:39:43 | http://arxiv.org/abs/2309.02552v1 | http://arxiv.org/pdf/2309.02552v1 | 2309.02552v1 |
Continual Improvement of Threshold-Based Novelty Detection | When evaluated in dynamic, open-world situations, neural networks struggle to
detect unseen classes. This issue complicates the deployment of continual
learners in realistic environments where agents are not explicitly informed
when novel categories are encountered. A common family of techniques for
detecting novelty relies on thresholds of similarity between observed data
points and the data used for training. However, these methods often require
manually specifying (ahead of time) the value of these thresholds, and are
therefore incapable of adapting to the nature of the data. We propose a new
method for automatically selecting these thresholds utilizing a linear search
and leave-one-out cross-validation on the ID classes. We demonstrate that this
novel method for selecting thresholds results in improved total accuracy on
MNIST, Fashion MNIST, and CIFAR-10. | [
"Abe Ejilemele",
"Jorge Mendez-Mendez"
] | 2023-09-05 19:37:45 | http://arxiv.org/abs/2309.02551v1 | http://arxiv.org/pdf/2309.02551v1 | 2309.02551v1 |
Structural Concept Learning via Graph Attention for Multi-Level Rearrangement Planning | Robotic manipulation tasks, such as object rearrangement, play a crucial role
in enabling robots to interact with complex and arbitrary environments.
Existing work focuses primarily on single-level rearrangement planning and,
even if multiple levels exist, dependency relations among substructures are
geometrically simpler, like tower stacking. We propose Structural Concept
Learning (SCL), a deep learning approach that leverages graph attention
networks to perform multi-level object rearrangement planning for scenes with
structural dependency hierarchies. It is trained on a self-generated simulation
data set with intuitive structures, works for unseen scenes with an arbitrary
number of objects and higher complexity of structures, infers independent
substructures to allow for task parallelization over multiple manipulators, and
generalizes to the real world. We compare our method with a range of classical
and model-based baselines to show that our method leverages its scene
understanding to achieve better performance, flexibility, and efficiency. The
dataset, supplementary details, videos, and code implementation are available
at: https://manavkulshrestha.github.io/scl | [
"Manav Kulshrestha",
"Ahmed H. Qureshi"
] | 2023-09-05 19:35:44 | http://arxiv.org/abs/2309.02547v1 | http://arxiv.org/pdf/2309.02547v1 | 2309.02547v1 |
A Generalized Bandsplit Neural Network for Cinematic Audio Source Separation | Cinematic audio source separation is a relatively new subtask of audio source
separation, with the aim of extracting the dialogue stem, the music stem, and
the effects stem from their mixture. In this work, we developed a model
generalizing the Bandsplit RNN for any complete or overcomplete partitions of
the frequency axis. Psycho-acoustically motivated frequency scales were used to
inform the band definitions which are now defined with redundancy for more
reliable feature extraction. A loss function motivated by the signal-to-noise
ratio and the sparsity-promoting property of the 1-norm was proposed. We
additionally exploit the information-sharing property of a common-encoder setup
to reduce computational complexity during both training and inference, improve
separation performance for hard-to-generalize classes of sounds, and allow
flexibility during inference time with easily detachable decoders. Our best
model sets the state of the art on the Divide and Remaster dataset with
performance above the ideal ratio mask for the dialogue stem. | [
"Karn N. Watcharasupat",
"Chih-Wei Wu",
"Yiwei Ding",
"Iroro Orife",
"Aaron J. Hipple",
"Phillip A. Williams",
"Scott Kramer",
"Alexander Lerch",
"William Wolcott"
] | 2023-09-05 19:19:22 | http://arxiv.org/abs/2309.02539v2 | http://arxiv.org/pdf/2309.02539v2 | 2309.02539v2 |
Experience and Prediction: A Metric of Hardness for a Novel Litmus Test | In the last decade, the Winograd Schema Challenge (WSC) has become a central
aspect of the research community as a novel litmus test. Consequently, the WSC
has spurred research interest because it can be seen as the means to understand
human behavior. In this regard, the development of new techniques has made
possible the usage of Winograd schemas in various fields, such as the design of
novel forms of CAPTCHAs.
Work from the literature that established a baseline for human adult
performance on the WSC has shown that not all schemas are the same, meaning
that they could potentially be categorized according to their perceived
hardness for humans. In this regard, this \textit{hardness-metric} could be
used in future challenges or in the WSC CAPTCHA service to differentiate
between Winograd schemas.
Recent work of ours has shown that this could be achieved via the design of
an automated system that is able to output the hardness-indexes of Winograd
schemas, albeit with limitations regarding the number of schemas it could be
applied on. This paper adds to previous research by presenting a new system
that is based on Machine Learning (ML), able to output the hardness of any
Winograd schema faster and more accurately than any other previously used
method. Our developed system, which works within two different approaches,
namely the random forest and deep learning (LSTM-based), is ready to be used as
an extension of any other system that aims to differentiate between Winograd
schemas, according to their perceived hardness for humans. At the same time,
along with our developed system we extend previous work by presenting the
results of a large-scale experiment that shows how human performance varies
across Winograd schemas. | [
"Nicos Isaak",
"Loizos Michael"
] | 2023-09-05 19:03:26 | http://arxiv.org/abs/2309.02534v1 | http://arxiv.org/pdf/2309.02534v1 | 2309.02534v1 |
Diffusion on the Probability Simplex | Diffusion models learn to reverse the progressive noising of a data
distribution to create a generative model. However, the desired continuous
nature of the noising process can be at odds with discrete data. To deal with
this tension between continuous and discrete objects, we propose a method of
performing diffusion on the probability simplex. Using the probability simplex
naturally creates an interpretation where points correspond to categorical
probability distributions. Our method uses the softmax function applied to an
Ornstein-Uhlenbeck process, a well-known stochastic differential equation. We
find that our methodology also naturally extends to include diffusion on the
unit cube which has applications for bounded image generation. | [
"Griffin Floto",
"Thorsteinn Jonsson",
"Mihai Nica",
"Scott Sanner",
"Eric Zhengyu Zhu"
] | 2023-09-05 18:52:35 | http://arxiv.org/abs/2309.02530v2 | http://arxiv.org/pdf/2309.02530v2 | 2309.02530v2 |
Adaptive Adversarial Training Does Not Increase Recourse Costs | Recent work has connected adversarial attack methods and algorithmic recourse
methods: both seek minimal changes to an input instance which alter a model's
classification decision. It has been shown that traditional adversarial
training, which seeks to minimize a classifier's susceptibility to malicious
perturbations, increases the cost of generated recourse; with larger
adversarial training radii correlating with higher recourse costs. From the
perspective of algorithmic recourse, however, the appropriate adversarial
training radius has always been unknown. Another recent line of work has
motivated adversarial training with adaptive training radii to address the
issue of instance-wise variable adversarial vulnerability, showing success in
domains with unknown attack radii. This work studies the effects of adaptive
adversarial training on algorithmic recourse costs. We establish that the
improvements in model robustness induced by adaptive adversarial training show
little effect on algorithmic recourse costs, providing a potential avenue for
affordable robustness in domains where recoursability is critical. | [
"Ian Hardy",
"Jayanth Yetukuri",
"Yang Liu"
] | 2023-09-05 18:40:22 | http://arxiv.org/abs/2309.02528v1 | http://arxiv.org/pdf/2309.02528v1 | 2309.02528v1 |
DeepTriNet: A Tri-Level Attention Based DeepLabv3+ Architecture for Semantic Segmentation of Satellite Images | The segmentation of satellite images is crucial in remote sensing
applications. Existing methods face challenges in recognizing small-scale
objects in satellite images for semantic segmentation primarily due to ignoring
the low-level characteristics of the underlying network and because different
feature maps contain distinct amounts of information. Thus, in this
research, a tri-level attention-based DeepLabv3+ architecture (DeepTriNet) is
proposed for the semantic segmentation of satellite images. The proposed hybrid
method combines squeeze-and-excitation networks (SENets) and tri-level
attention units (TAUs) with the vanilla DeepLabv3+ architecture, where the TAUs
are used to bridge the semantic feature gap among encoder outputs and the
SENets are used to put more weight on relevant features. The proposed DeepTriNet
finds the most relevant and generalized features through self-supervision
rather than manual annotation. The study showed that the proposed
DeepTriNet performs better than many conventional techniques with an accuracy
of 98% and 77%, IoU 80% and 58%, precision 88% and 68%, and recall of 79% and
55% on the 4-class Land-Cover.ai dataset and the 15-class GID-2 dataset
respectively. The proposed method will greatly contribute to natural resource
management and change detection in rural and urban regions through efficient
and semantic satellite image segmentation. | [
"Tareque Bashar Ovi",
"Shakil Mosharrof",
"Nomaiya Bashree",
"Md Shofiqul Islam",
"Muhammad Nazrul Islam"
] | 2023-09-05 18:35:34 | http://arxiv.org/abs/2310.06848v1 | http://arxiv.org/pdf/2310.06848v1 | 2310.06848v1 |
Comparative Analysis of CPU and GPU Profiling for Deep Learning Models | Deep Learning(DL) and Machine Learning(ML) applications are rapidly
increasing in recent days. Massive amounts of data are being generated over the
internet which can derive meaningful results by the use of ML and DL
algorithms. Hardware resources and open-source libraries have made it easy to
implement these algorithms. TensorFlow and PyTorch are two of the leading
frameworks for implementing ML projects. By using those frameworks, we can
trace the operations executed on both GPU and CPU to analyze the resource
allocations and consumption. This paper presents the time and memory allocation
of CPU and GPU while training deep neural networks using PyTorch. The
analysis shows that the GPU has a lower running time than the CPU for deep
neural networks. For simpler networks, the GPU offers few significant
improvements over the CPU. | [
"Dipesh Gyawali"
] | 2023-09-05 18:22:11 | http://arxiv.org/abs/2309.02521v1 | http://arxiv.org/pdf/2309.02521v1 | 2309.02521v1 |
Fairness Vs. Personalization: Towards Equity in Epistemic Utility | The applications of personalized recommender systems are rapidly expanding:
encompassing social media, online shopping, search engine results, and more.
These systems offer a more efficient way to navigate the vast array of items
available. However, alongside this growth, there has been increased recognition
of the potential for algorithmic systems to exhibit and perpetuate biases,
risking unfairness in personalized domains. In this work, we explicate the
inherent tension between personalization and conventional implementations of
fairness. As an alternative, we propose equity to achieve fairness in the
context of epistemic utility. We provide a mapping between goals and practical
implementations and detail policy recommendations across key stakeholders to
forge a path towards achieving fairness in personalized systems. | [
"Jennifer Chien",
"David Danks"
] | 2023-09-05 18:19:57 | http://arxiv.org/abs/2309.11503v1 | http://arxiv.org/pdf/2309.11503v1 | 2309.11503v1 |
Performance Analysis of Various EfficientNet Based U-Net++ Architecture for Automatic Building Extraction from High Resolution Satellite Images | Building extraction is an essential component of study in the science of
remote sensing, and applications for building extraction heavily rely on
semantic segmentation of high-resolution remote sensing imagery. Semantic
information extraction gap constraints in the present deep learning based
approaches, however can result in inadequate segmentation outcomes. To address
this issue and extract buildings with high accuracy, various efficientNet
backbone based U-Net++ has been proposed in this study. The designed network,
based on U-Net, can improve the sensitivity of the model by deep supervision,
voluminous redesigned skip-connections and hence reducing the influence of
irrelevant feature areas in the background. Various EfficientNet backbone based
encoders have been employed when training the network to enhance the capacity
of the model to extract more relevant features. According to the experimental
findings, the suggested model significantly outperforms previous cutting-edge
approaches. Among the five EfficientNet variants, the U-Net++ based on
EfficientNet-B4 achieved the best result, scoring a mean accuracy of 92.23%,
mean IoU of 88.32%, and mean precision of 93.2% on the publicly available
Massachusetts building dataset, thus showing the promise of the model for
automatic building extraction from high-resolution satellite images. | [
"Tareque Bashar Ovi",
"Nomaiya Bashree",
"Protik Mukherjee",
"Shakil Mosharrof",
"Masuma Anjum Parthima"
] | 2023-09-05 18:14:14 | http://arxiv.org/abs/2310.06847v1 | http://arxiv.org/pdf/2310.06847v1 | 2310.06847v1 |
Towards User Guided Actionable Recourse | Machine Learning's proliferation in critical fields such as healthcare,
banking, and criminal justice has motivated the creation of tools which ensure
trust and transparency in ML models. One such tool is Actionable Recourse (AR)
for negatively impacted users. AR describes recommendations of cost-efficient
changes to a user's actionable features to help them obtain favorable outcomes.
Existing approaches for providing recourse optimize for properties such as
proximity, sparsity, validity, and distance-based costs. However, an
often-overlooked but crucial requirement for actionability is a consideration
of User Preference to guide the recourse generation process. In this work, we
attempt to capture user preferences via soft constraints in three simple forms:
i) scoring continuous features, ii) bounding feature values and iii) ranking
categorical features. Finally, we propose a gradient-based approach to identify
User Preferred Actionable Recourse (UP-AR). We carried out extensive
experiments to verify the effectiveness of our approach. | [
"Jayanth Yetukuri",
"Ian Hardy",
"Yang Liu"
] | 2023-09-05 18:06:09 | http://arxiv.org/abs/2309.02517v1 | http://arxiv.org/pdf/2309.02517v1 | 2309.02517v1 |
Efficient RL via Disentangled Environment and Agent Representations | Agents that are aware of the separation between themselves and their
environments can leverage this understanding to form effective representations
of visual input. We propose an approach for learning such structured
representations for RL algorithms, using visual knowledge of the agent, such as
its shape or mask, which is often inexpensive to obtain. This is incorporated
into the RL objective using a simple auxiliary loss. We show that our method,
Structured Environment-Agent Representations, outperforms state-of-the-art
model-free approaches over 18 different challenging visual simulation
environments spanning 5 different robots. Website at https://sear-rl.github.io/ | [
"Kevin Gmelin",
"Shikhar Bahl",
"Russell Mendonca",
"Deepak Pathak"
] | 2023-09-05 17:59:45 | http://arxiv.org/abs/2309.02435v1 | http://arxiv.org/pdf/2309.02435v1 | 2309.02435v1 |
Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach | Estimating the transferability of publicly available pretrained models to a
target task has assumed an important place for transfer learning tasks in
recent years. Existing efforts propose metrics that allow a user to choose one
model from a pool of pre-trained models without having to fine-tune each model
individually and identify one explicitly. With the growth in the number of
available pre-trained models and the popularity of model ensembles, it also
becomes essential to study the transferability of multiple-source models for a
given target task. The few existing efforts study transferability in such
multi-source ensemble settings using just the outputs of the classification
layer and neglect possible domain or task mismatch. Moreover, they overlook the
most important factor while selecting the source models, viz., the cohesiveness
factor between them, which can impact the performance and confidence in the
prediction of the ensemble. To address these gaps, we propose a novel Optimal
tranSport-based suBmOdular tRaNsferability metric (OSBORN) to estimate the
transferability of an ensemble of models to a downstream task. OSBORN
collectively accounts for image domain difference, task difference, and
cohesiveness of models in the ensemble to provide reliable estimates of
transferability. We gauge the performance of OSBORN on both image
classification and semantic segmentation tasks. Our setup includes 28 source
datasets, 11 target datasets, 5 model architectures, and 2 pre-training
methods. We benchmark our method against current state-of-the-art metrics
MS-LEEP and E-LEEP, and outperform them consistently using the proposed
approach. | [
"Vimal K B",
"Saketh Bachu",
"Tanmay Garg",
"Niveditha Lakshmi Narasimhan",
"Raghavan Konuru",
"Vineeth N Balasubramanian"
] | 2023-09-05 17:57:31 | http://arxiv.org/abs/2309.02429v1 | http://arxiv.org/pdf/2309.02429v1 | 2309.02429v1 |
Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework | The burgeoning growth of public domain data and the increasing complexity of
deep learning model architectures have underscored the need for more efficient
data representation and analysis techniques. This paper is motivated by the
work of (Helal, 2023) and aims to present a comprehensive overview of
tensorization. This transformative approach bridges the gap between the
inherently multidimensional nature of data and the simplified 2-dimensional
matrices commonly used in linear algebra-based machine learning algorithms.
This paper explores the steps involved in tensorization, multidimensional data
sources, various multiway analysis methods employed, and the benefits of these
approaches. A small example of Blind Source Separation (BSS) is presented
comparing 2-dimensional algorithms and a multiway algorithm in Python. Results
indicate that multiway analysis is more expressive. Contrary to the intuition
of the dimensionality curse, utilising multidimensional datasets in their
native form and applying multiway analysis methods grounded in multilinear
algebra reveal a profound capacity to capture intricate interrelationships
among various dimensions while, surprisingly, reducing the number of model
parameters and accelerating processing. A survey of the multiway analysis
methods and their integration with various deep neural network models is presented
using case studies in different application domains. | [
"Manal Helal"
] | 2023-09-05 17:56:22 | http://arxiv.org/abs/2309.02428v3 | http://arxiv.org/pdf/2309.02428v3 | 2309.02428v3 |
Cognitive Architectures for Language Agents | Recent efforts have augmented large language models (LLMs) with external
resources (e.g., the Internet) or internal control flows (e.g., prompt
chaining) for tasks requiring grounding or reasoning, leading to a new class of
language agents. While these agents have achieved substantial empirical
success, we lack a systematic framework to organize existing agents and plan
future developments. In this paper, we draw on the rich history of cognitive
science and symbolic artificial intelligence to propose Cognitive Architectures
for Language Agents (CoALA). CoALA describes a language agent with modular
memory components, a structured action space to interact with internal memory
and external environments, and a generalized decision-making process to choose
actions. We use CoALA to retrospectively survey and organize a large body of
recent work, and prospectively identify actionable directions towards more
capable agents. Taken together, CoALA contextualizes today's language agents
within the broader history of AI and outlines a path towards language-based
general intelligence. | [
"Theodore R. Sumers",
"Shunyu Yao",
"Karthik Narasimhan",
"Thomas L. Griffiths"
] | 2023-09-05 17:56:20 | http://arxiv.org/abs/2309.02427v2 | http://arxiv.org/pdf/2309.02427v2 | 2309.02427v2 |
Monotone Tree-Based GAMI Models by Adapting XGBoost | Recent papers have used machine learning architecture to fit low-order
functional ANOVA models with main effects and second-order interactions. These
GAMI (GAM + Interaction) models are directly interpretable as the functional
main effects and interactions can be easily plotted and visualized.
Unfortunately, it is not easy to incorporate the monotonicity requirement into
the existing GAMI models based on boosted trees, such as EBM (Lou et al. 2013)
and GAMI-Lin-T (Hu et al. 2022). This paper considers models of the form
$f(x)=\sum_{j,k}f_{j,k}(x_j, x_k)$ and develops monotone tree-based GAMI
models, called monotone GAMI-Tree, by adapting the XGBoost algorithm. It is
straightforward to fit a monotone model to $f(x)$ using the options in XGBoost.
However, the fitted model is still a black box. We take a different approach:
i) use a filtering technique to determine the important interactions, ii) fit a
monotone XGBoost algorithm with the selected interactions, and finally iii)
parse and purify the results to get a monotone GAMI model. Simulated datasets
are used to demonstrate the behaviors of mono-GAMI-Tree and EBM, both of which
use piecewise constant fits. Note that the monotonicity requirement is for the
full model. Under certain situations, the main effects will also be monotone.
But, as seen in the examples, the interactions will not be monotone. | [
"Linwei Hu",
"Soroush Aramideh",
"Jie Chen",
"Vijayan N. Nair"
] | 2023-09-05 17:54:37 | http://arxiv.org/abs/2309.02426v1 | http://arxiv.org/pdf/2309.02426v1 | 2309.02426v1 |
On the Minimax Regret in Online Ranking with Top-k Feedback | In online ranking, a learning algorithm sequentially ranks a set of items and
receives feedback on its ranking in the form of relevance scores. Since
obtaining relevance scores typically involves human annotation, it is of great
interest to consider a partial feedback setting where feedback is restricted to
the top-$k$ items in the rankings. Chaudhuri and Tewari [2017] developed a
framework to analyze online ranking algorithms with top $k$ feedback. A key
element in their work was the use of techniques from partial monitoring. In
this paper, we further investigate online ranking with top $k$ feedback and
solve some open problems posed by Chaudhuri and Tewari [2017]. We provide a
full characterization of minimax regret rates with the top $k$ feedback model
for all $k$ and for the following ranking performance measures: Pairwise Loss,
Discounted Cumulative Gain, and Precision@n. In addition, we give an efficient
algorithm that achieves the minimax regret rate for Precision@n. | [
"Mingyuan Zhang",
"Ambuj Tewari"
] | 2023-09-05 17:53:10 | http://arxiv.org/abs/2309.02425v1 | http://arxiv.org/pdf/2309.02425v1 | 2309.02425v1 |
Maximum Mean Discrepancy Meets Neural Networks: The Radon-Kolmogorov-Smirnov Test | Maximum mean discrepancy (MMD) refers to a general class of nonparametric
two-sample tests that are based on maximizing the mean difference over samples
from one distribution $P$ versus another $Q$, over all choices of data
transformations $f$ living in some function space $\mathcal{F}$. Inspired by
recent work that connects what are known as functions of $\textit{Radon bounded
variation}$ (RBV) and neural networks (Parhi and Nowak, 2021, 2023), we study
the MMD defined by taking $\mathcal{F}$ to be the unit ball in the RBV space of
a given smoothness order $k \geq 0$. This test, which we refer to as the
$\textit{Radon-Kolmogorov-Smirnov}$ (RKS) test, can be viewed as a
generalization of the well-known and classical Kolmogorov-Smirnov (KS) test to
multiple dimensions and higher orders of smoothness. It is also intimately
connected to neural networks: we prove that the witness in the RKS test -- the
function $f$ achieving the maximum mean difference -- is always a ridge spline
of degree $k$, i.e., a single neuron in a neural network. This allows us to
leverage the power of modern deep learning toolkits to (approximately) optimize
the criterion that underlies the RKS test. We prove that the RKS test has
asymptotically full power at distinguishing any distinct pair $P \not= Q$ of
distributions, derive its asymptotic null distribution, and carry out extensive
experiments to elucidate the strengths and weaknesses of the RKS test versus
the more traditional kernel MMD test. | [
"Seunghoon Paik",
"Michael Celentano",
"Alden Green",
"Ryan J. Tibshirani"
] | 2023-09-05 17:51:00 | http://arxiv.org/abs/2309.02422v2 | http://arxiv.org/pdf/2309.02422v2 | 2309.02422v2 |
Computing SHAP Efficiently Using Model Structure Information | SHAP (SHapley Additive exPlanations) has become a popular method to attribute
the prediction of a machine learning model on an input to its features. One
main challenge of SHAP is the computation time. An exact computation of Shapley
values requires exponential time complexity. Therefore, many approximation
methods are proposed in the literature. In this paper, we propose methods that
can compute SHAP exactly in polynomial time or even faster for SHAP definitions
that satisfy our additivity and dummy assumptions (e.g., kernel SHAP and baseline
SHAP). We develop different strategies for models with different levels of
model structure information: known functional decomposition, known order of
model (defined as highest order of interaction in the model), or unknown order.
For the first case, we demonstrate an additive property and a way to compute
SHAP from the lower-order functional components. For the second case, we derive
formulas that can compute SHAP in polynomial time. Both methods yield exact
SHAP results. Finally, if even the order of model is unknown, we propose an
iterative way to approximate Shapley values. The three methods we propose are
computationally efficient when the order of model is not high which is
typically the case in practice. We compare with the sampling approach proposed
in Castro & Gomez (2008) using simulation studies to demonstrate the efficacy of
our proposed methods. | [
"Linwei Hu",
"Ke Wang"
] | 2023-09-05 17:48:09 | http://arxiv.org/abs/2309.02417v1 | http://arxiv.org/pdf/2309.02417v1 | 2309.02417v1 |
First and zeroth-order implementations of the regularized Newton method with lazy approximated Hessians | In this work, we develop first-order (Hessian-free) and zero-order
(derivative-free) implementations of the Cubically regularized Newton method
for solving general non-convex optimization problems. For that, we employ
finite difference approximations of the derivatives. We use a special adaptive
search procedure in our algorithms, which simultaneously fits both the
regularization constant and the parameters of the finite difference
approximations. It makes our schemes free from the need to know the actual
Lipschitz constants. Additionally, we equip our algorithms with the lazy
Hessian update that reuses a previously computed Hessian approximation matrix
for several iterations. Specifically, we prove the global complexity bound of
$\mathcal{O}( n^{1/2} \epsilon^{-3/2})$ function and gradient evaluations for
our new Hessian-free method, and a bound of $\mathcal{O}( n^{3/2}
\epsilon^{-3/2} )$ function evaluations for the derivative-free method, where
$n$ is the dimension of the problem and $\epsilon$ is the desired accuracy for
the gradient norm. These complexity bounds significantly improve the previously
known ones in terms of the joint dependence on $n$ and $\epsilon$, for the
first-order and zeroth-order non-convex optimization. | [
"Nikita Doikov",
"Geovani Nunes Grapiglia"
] | 2023-09-05 17:40:54 | http://arxiv.org/abs/2309.02412v1 | http://arxiv.org/pdf/2309.02412v1 | 2309.02412v1 |
Delta-LoRA: Fine-Tuning High-Rank Parameters with the Delta of Low-Rank Matrices | In this paper, we present Delta-LoRA, which is a novel parameter-efficient
approach to fine-tune large language models (LLMs). In contrast to LoRA and
other low-rank adaptation methods such as AdaLoRA, Delta-LoRA not only updates
the low-rank matrices $\mathbf{A}$ and $\mathbf{B}$, but also propagates the learning to the
pre-trained weights $\mathbf{W}$ via updates utilizing the delta of the product of the two
low-rank matrices ($\mathbf{A}^{(t+1)}\mathbf{B}^{(t+1)} - \mathbf{A}^{(t)}\mathbf{B}^{(t)}$). Such a
strategy effectively addresses the limitation that the incremental update of
the low-rank matrices alone is inadequate for learning representations suitable for
downstream tasks. Moreover, as updating $\mathbf{W}$ requires neither computing its
gradients nor storing their momentum, Delta-LoRA shares comparable
memory requirements and computational costs with LoRA. Extensive experiments
show that Delta-LoRA significantly outperforms existing low-rank adaptation
methods. We further support these results with comprehensive analyses that
underscore the effectiveness of Delta-LoRA. | [
"Bojia Zi",
"Xianbiao Qi",
"Lingzhi Wang",
"Jianan Wang",
"Kam-Fai Wong",
"Lei Zhang"
] | 2023-09-05 17:40:34 | http://arxiv.org/abs/2309.02411v1 | http://arxiv.org/pdf/2309.02411v1 | 2309.02411v1 |
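The delta update quoted in the abstract, $\mathbf{W} \leftarrow \mathbf{W} + \lambda(\mathbf{A}^{(t+1)}\mathbf{B}^{(t+1)} - \mathbf{A}^{(t)}\mathbf{B}^{(t)})$, can be sketched numerically; the matrix sizes, the scale $\lambda$, and the random stand-in updates of A and B are illustrative assumptions, not the paper's training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 6, 2
W = rng.normal(size=(d, d))          # pre-trained weight
A = rng.normal(size=(d, r)) * 0.01   # low-rank factors, as in LoRA
B = rng.normal(size=(r, d)) * 0.01
lam = 1.0                            # illustrative scale for the delta
W0, A0, B0 = W.copy(), A.copy(), B.copy()

for step in range(3):
    A_prev, B_prev = A.copy(), B.copy()
    # Stand-in updates of A and B (real training uses task-loss gradients).
    A = A - 0.1 * rng.normal(size=A.shape)
    B = B - 0.1 * rng.normal(size=B.shape)
    # Delta-LoRA step: propagate the change of the low-rank product into W;
    # no gradient or optimizer state is stored for W itself.
    W = W + lam * (A @ B - A_prev @ B_prev)

# The per-step deltas telescope, so W ends at W0 + lam * (A @ B - A0 @ B0).
print(np.allclose(W, W0 + lam * (A @ B - A0 @ B0)))  # True
```

The telescoping check above is why the method needs no optimizer state for W: the accumulated change to W is fully determined by the trajectory of the low-rank factors.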
In-Ear-Voice: Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms | The recent ubiquitous adoption of remote conferencing has been accompanied by
omnipresent frustration with distorted or otherwise unclear voice
communication. Audio enhancement can compensate for low-quality input signals
from, for example, small true wireless earbuds, by applying noise suppression
techniques. Such processing relies on voice activity detection (VAD) with low
latency and the added capability of discriminating the wearer's voice from
others - a task of significant computational complexity. The tight energy
budget of devices as small as modern earphones, however, requires any system
attempting to tackle this problem to do so with minimal power and processing
overhead, while not relying on speaker-specific voice samples and training due
to usability concerns.
This paper presents the design and implementation of a custom research
platform for low-power wireless earbuds based on novel, commercial, MEMS
bone-conduction microphones. Such microphones can record the wearer's speech
with much greater isolation, enabling personalized voice activity detection and
further audio enhancement applications. Furthermore, the paper accurately
evaluates a proposed low-power personalized speech detection algorithm based on
bone conduction data and a recurrent neural network running on the implemented
research platform. This algorithm is compared to an approach based on
traditional microphone input. The performance of the bone conduction system,
which achieves detection of speech within 12.8ms at an accuracy of 95\%, is
evaluated. Different SoC choices are contrasted, with the final implementation
based on the cutting-edge Ambiq Apollo 4 Blue SoC achieving 2.64mW average
power consumption at 14uJ per inference, reaching 43h of battery life on a
miniature 32mAh li-ion cell and without duty cycling. | [
"Philipp Schilk",
"Niccolò Polvani",
"Andrea Ronco",
"Milos Cernak",
"Michele Magno"
] | 2023-09-05 17:04:09 | http://arxiv.org/abs/2309.02393v1 | http://arxiv.org/pdf/2309.02393v1 | 2309.02393v1 |
Explaining grokking through circuit efficiency | One of the most surprising puzzles in neural network generalisation is
grokking: a network with perfect training accuracy but poor generalisation
will, upon further training, transition to perfect generalisation. We propose
that grokking occurs when the task admits a generalising solution and a
memorising solution, where the generalising solution is slower to learn but
more efficient, producing larger logits with the same parameter norm. We
hypothesise that memorising circuits become more inefficient with larger
training datasets while generalising circuits do not, suggesting there is a
critical dataset size at which memorisation and generalisation are equally
efficient. We make and confirm four novel predictions about grokking, providing
significant evidence in favour of our explanation. Most strikingly, we
demonstrate two novel and surprising behaviours: ungrokking, in which a network
regresses from perfect to low test accuracy, and semi-grokking, in which a
network shows delayed generalisation to partial rather than perfect test
accuracy. | [
"Vikrant Varma",
"Rohin Shah",
"Zachary Kenton",
"János Kramár",
"Ramana Kumar"
] | 2023-09-05 17:00:24 | http://arxiv.org/abs/2309.02390v1 | http://arxiv.org/pdf/2309.02390v1 | 2309.02390v1 |
A Lightweight and Transferable Design for Robust LEGO Manipulation | LEGO is a well-known platform for prototyping pixelized objects. However,
robotic LEGO prototyping (i.e. manipulating LEGO bricks) is challenging due to
the tight connections and accuracy requirement. This paper investigates safe
and efficient robotic LEGO manipulation. In particular, this paper reduces the
complexity of the manipulation by hardware-software co-design. An end-of-arm
tool (EOAT) is designed, which reduces the problem dimension and allows large
industrial robots to easily manipulate LEGO bricks. In addition, this paper
uses evolution strategy to safely optimize the robot motion for LEGO
manipulation. Experiments demonstrate that the EOAT performs reliably in
manipulating LEGO bricks and the learning framework can effectively and safely
improve the manipulation performance to a 100% success rate. The co-design is
deployed to multiple robots (i.e. FANUC LR-mate 200id/7L and Yaskawa GP4) to
demonstrate its generalizability and transferability. In the end, we show that
the proposed solution enables sustainable robotic LEGO prototyping, in which
the robot can repeatedly assemble and disassemble different prototypes. | [
"Ruixuan Liu",
"Yifan Sun",
"Changliu Liu"
] | 2023-09-05 16:11:37 | http://arxiv.org/abs/2309.02354v2 | http://arxiv.org/pdf/2309.02354v2 | 2309.02354v2 |
Exact Inference for Continuous-Time Gaussian Process Dynamics | Physical systems can often be described via a continuous-time dynamical
system. In practice, the true system is often unknown and has to be learned
from measurement data. Since data is typically collected in discrete time, e.g.
by sensors, most methods in Gaussian process (GP) dynamics model learning are
trained on one-step ahead predictions. This can become problematic in several
scenarios, e.g. if measurements are provided at irregularly-sampled time steps
or physical system properties have to be conserved. Thus, we aim for a GP model
of the true continuous-time dynamics. Higher-order numerical integrators
provide the necessary tools to address this problem by discretizing the
dynamics function with arbitrary accuracy. Many higher-order integrators
require dynamics evaluations at intermediate time steps making exact GP
inference intractable. In previous work, this problem is often tackled by
approximating the GP posterior with variational inference. However, exact GP
inference is preferable in many scenarios, e.g. due to its mathematical
guarantees. In order to make direct inference tractable, we propose to leverage
multistep and Taylor integrators. We demonstrate how to derive flexible
inference schemes for these types of integrators. Further, we derive tailored
sampling schemes that allow to draw consistent dynamics functions from the
learned posterior. This is crucial to sample consistent predictions from the
dynamics model. We demonstrate empirically and theoretically that our approach
yields an accurate representation of the continuous-time system. | [
"Katharina Ensinger",
"Nicholas Tagliapietra",
"Sebastian Ziesche",
"Sebastian Trimpe"
] | 2023-09-05 16:07:00 | http://arxiv.org/abs/2309.02351v1 | http://arxiv.org/pdf/2309.02351v1 | 2309.02351v1 |
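As a generic illustration of the multistep integrators mentioned in the abstract (not the authors' GP inference scheme), a two-step Adams-Bashforth discretization of dx/dt = f(x) reuses the previous dynamics evaluation instead of requiring evaluations at intermediate time steps:

```python
import math

def adams_bashforth2(f, x0, h, n_steps):
    """Two-step Adams-Bashforth: x_{k+1} = x_k + h*(1.5*f(x_k) - 0.5*f(x_{k-1})).
    The first step is bootstrapped with a forward Euler step."""
    xs = [x0, x0 + h * f(x0)]
    for k in range(1, n_steps):
        xs.append(xs[k] + h * (1.5 * f(xs[k]) - 0.5 * f(xs[k - 1])))
    return xs

# dx/dt = -x with x(0) = 1 has the exact solution x(t) = exp(-t).
traj = adams_bashforth2(lambda x: -x, 1.0, 0.01, 100)
print(traj[-1], math.exp(-1.0))  # both ≈ 0.3679
```

Because each step only consumes dynamics values at past grid points, exact GP inference over those values stays tractable, which is the property the abstract exploits.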
PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA LUT-based Inference | Field-programmable gate arrays (FPGAs) are widely used to implement deep
learning inference. Standard deep neural network inference involves the
computation of interleaved linear maps and nonlinear activation functions.
Prior work for ultra-low latency implementations has hardcoded the combination
of linear maps and nonlinear activations inside FPGA lookup tables (LUTs). Our
work is motivated by the idea that the LUTs in an FPGA can be used to implement
a much greater variety of functions than this. In this paper, we propose a
novel approach to training neural networks for FPGA deployment using
multivariate polynomials as the basic building block. Our method takes
advantage of the flexibility offered by the soft logic, hiding the polynomial
evaluation inside the LUTs with zero overhead. We show that by using polynomial
building blocks, we can achieve the same accuracy using considerably fewer
layers of soft logic than by using linear functions, leading to significant
latency and area improvements. We demonstrate the effectiveness of this
approach in three tasks: network intrusion detection, jet identification at the
CERN Large Hadron Collider, and handwritten digit recognition using the MNIST
dataset. | [
"Marta Andronic",
"George A. Constantinides"
] | 2023-09-05 15:54:09 | http://arxiv.org/abs/2309.02334v1 | http://arxiv.org/pdf/2309.02334v1 | 2309.02334v1 |
Resilient VAE: Unsupervised Anomaly Detection at the SLAC Linac Coherent Light Source | Significant advances in utilizing deep learning for anomaly detection have
been made in recent years. However, these methods largely assume the existence
of a normal training set (i.e., uncontaminated by anomalies) or even a
completely labeled training set. In many complex engineering systems, such as
particle accelerators, labels are sparse and expensive; in order to perform
anomaly detection in these cases, we must drop these assumptions and utilize a
completely unsupervised method. This paper introduces the Resilient Variational
Autoencoder (ResVAE), a deep generative model specifically designed for anomaly
detection. ResVAE exhibits resilience to anomalies present in the training data
and provides feature-level anomaly attribution. During the training process,
ResVAE learns the anomaly probability for each sample as well as each
individual feature, utilizing these probabilities to effectively disregard
anomalous examples in the training data. We apply our proposed method to detect
anomalies in the accelerator status at the SLAC Linac Coherent Light Source
(LCLS). By utilizing shot-to-shot data from the beam position monitoring
system, we demonstrate the exceptional capability of ResVAE in identifying
various types of anomalies that are visible in the accelerator. | [
"Ryan Humble",
"William Colocho",
"Finn O'Shea",
"Daniel Ratner",
"Eric Darve"
] | 2023-09-05 15:53:41 | http://arxiv.org/abs/2309.02333v1 | http://arxiv.org/pdf/2309.02333v1 | 2309.02333v1 |
Information Processing by Neuron Populations in the Central Nervous System: Mathematical Structure of Data and Operations | In the intricate architecture of the mammalian central nervous system,
neurons form populations. Axonal bundles communicate between these clusters
using spike trains as their medium. However, these neuron populations' precise
encoding and operations have yet to be discovered. In our analysis, the
starting point is a state-of-the-art mechanistic model of a generic neuron
endowed with plasticity. From this simple framework emerges a profound
mathematical construct: The representation and manipulation of information can
be precisely characterized by an algebra of finite convex cones. Furthermore,
these neuron populations are not merely passive transmitters. They act as
operators within this algebraic structure, mirroring the functionality of a
low-level programming language. When these populations interconnect, they
embody succinct yet potent algebraic expressions. These networks allow them to
implement many operations, such as specialization, generalization, novelty
detection, dimensionality reduction, inverse modeling, prediction, and
associative memory. In broader terms, this work illuminates the potential of
matrix embeddings in advancing our understanding in fields like cognitive
science and AI. These embeddings enhance the capacity for concept processing
and hierarchical description over their vector counterparts. | [
"Martin N. P. Nilsson"
] | 2023-09-05 15:52:45 | http://arxiv.org/abs/2309.02332v1 | http://arxiv.org/pdf/2309.02332v1 | 2309.02332v1 |
SeisCLIP: A seismology foundation model pre-trained by multi-modal data for multi-purpose seismic feature extraction | Training specific deep learning models for particular tasks is common across
various domains within seismology. However, this approach encounters two
limitations: inadequate labeled data for certain tasks and limited
generalization across regions. To address these challenges, we develop
SeisCLIP, a seismology foundation model trained through contrastive learning
from multi-modal data. It consists of a transformer encoder for extracting
crucial features from time-frequency seismic spectrum and an MLP encoder for
integrating the phase and source information of the same event. These encoders
are jointly pre-trained on a vast dataset and the spectrum encoder is
subsequently fine-tuned on smaller datasets for various downstream tasks.
Notably, SeisCLIP's performance surpasses that of baseline methods in event
classification, localization, and focal mechanism analysis tasks, employing
distinct datasets from different regions. In conclusion, SeisCLIP holds
significant potential as a foundational model in the field of seismology,
paving the way for innovative directions in foundation-model-based seismology
research. | [
"Xu Si",
"Xinming Wu",
"Hanlin Sheng",
"Jun Zhu",
"Zefeng Li"
] | 2023-09-05 15:40:13 | http://arxiv.org/abs/2309.02320v1 | http://arxiv.org/pdf/2309.02320v1 | 2309.02320v1 |
A study on the impact of pre-trained model on Just-In-Time defect prediction | Previous researchers conducting Just-In-Time (JIT) defect prediction tasks
have primarily focused on the performance of individual pre-trained models,
without exploring the relationship between different pre-trained models as
backbones. In this study, we build six models: RoBERTaJIT, CodeBERTJIT,
BARTJIT, PLBARTJIT, GPT2JIT, and CodeGPTJIT, each with a distinct pre-trained
model as its backbone. We systematically explore the differences and
connections between these models. Specifically, we investigate the performance
of the models when using Commit code and Commit message as inputs, as well as
the relationship between training efficiency and model distribution among these
six models. Additionally, we conduct an ablation experiment to explore the
sensitivity of each model to inputs. Furthermore, we investigate how the models
perform in zero-shot and few-shot scenarios. Our findings indicate that each
model based on a different backbone shows improvements, and when the backbones'
pre-trained models are similar, the training resources that need to be consumed
are much closer. We also observe that Commit code plays a significant role
in defect detection, and different pre-trained models demonstrate better defect
detection ability with a balanced dataset under few-shot scenarios. These
results provide new insights for optimizing JIT defect prediction tasks using
pre-trained models and highlight the factors that require more attention when
constructing such models. Additionally, CodeGPTJIT and GPT2JIT achieved better
performance than DeepJIT and CC2Vec on the two datasets respectively under 2000
training samples. These findings emphasize the effectiveness of
transformer-based pre-trained models in JIT defect prediction tasks, especially
in scenarios with limited training data. | [
"Yuxiang Guo",
"Xiaopeng Gao",
"Zhenyu Zhang",
"W. K. Chan",
"Bo Jiang"
] | 2023-09-05 15:34:22 | http://arxiv.org/abs/2309.02317v1 | http://arxiv.org/pdf/2309.02317v1 | 2309.02317v1 |
Graph Self-Contrast Representation Learning | Graph contrastive learning (GCL) has recently emerged as a promising approach
for graph representation learning. Some existing methods adopt the 1-vs-K
scheme to construct one positive and K negative samples for each graph, but it
is difficult to set K. For those methods that do not use negative samples, it
is often necessary to add additional strategies to avoid model collapse, which
could only alleviate the problem to some extent. All these drawbacks will
undoubtedly have an adverse impact on the generalizability and efficiency of
the model. In this paper, to address these issues, we propose a novel graph
self-contrast framework GraphSC, which only uses one positive and one negative
sample, and chooses triplet loss as the objective. Specifically, self-contrast
has two implications. First, GraphSC generates both positive and negative views
of a graph sample from the graph itself via graph augmentation functions of
various intensities, and uses them for self-contrast. Second, GraphSC uses
Hilbert-Schmidt Independence Criterion (HSIC) to factorize the representations
into multiple factors and proposes a masked self-contrast mechanism to better
separate positive and negative samples. Further, since the triplet loss only
optimizes the relative distance between the anchor and its positive/negative
samples, it is difficult to ensure the absolute distance between the anchor and
the positive sample. Therefore, we explicitly reduce the absolute distance between
the anchor and the positive sample to accelerate convergence. Finally, we conduct
extensive experiments to evaluate the performance of GraphSC against 19 other
state-of-the-art methods in both unsupervised and transfer learning settings. | [
"Minjie Chen",
"Yao Cheng",
"Ye Wang",
"Xiang Li",
"Ming Gao"
] | 2023-09-05 15:13:48 | http://arxiv.org/abs/2309.02304v1 | http://arxiv.org/pdf/2309.02304v1 | 2309.02304v1 |
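The triplet objective with an added absolute anchor-positive distance term described above can be sketched as follows; the margin, the weight `alpha`, and the Euclidean metric are illustrative assumptions, not GraphSC's exact formulation.

```python
import numpy as np

def triplet_with_absolute(anchor, pos, neg, margin=1.0, alpha=0.1):
    """Relative triplet term max(0, d(a,p) - d(a,n) + margin), plus an
    explicit penalty alpha * d(a,p) that also shrinks the absolute
    anchor-positive distance, not just the relative one."""
    d_ap = float(np.linalg.norm(anchor - pos))
    d_an = float(np.linalg.norm(anchor - neg))
    return max(0.0, d_ap - d_an + margin) + alpha * d_ap

a = np.array([0.0, 0.0])   # anchor embedding
p = np.array([1.0, 0.0])   # positive-view embedding
n = np.array([3.0, 0.0])   # negative-view embedding
print(triplet_with_absolute(a, p, n))  # 0.1: relative term is 0, absolute term 0.1*1
```

With the plain triplet loss the example above would contribute zero gradient once the margin is satisfied; the extra term keeps pulling the positive view toward the anchor.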
Enhancing Semantic Communication with Deep Generative Models -- An ICASSP Special Session Overview | Semantic communication is poised to play a pivotal role in shaping the
landscape of future AI-driven communication systems. Its challenge of
extracting semantic information from the original complex content and
regenerating semantically consistent data at the receiver, possibly being
robust to channel corruptions, can be addressed with deep generative models.
This ICASSP special session overview paper discloses the semantic communication
challenges from the machine learning perspective and unveils how deep
generative models will significantly enhance semantic communication frameworks
in dealing with real-world complex data, extracting and exploiting semantic
information, and being robust to channel corruptions. Alongside establishing
this emerging field, this paper charts novel research pathways for the next
generative semantic communication frameworks. | [
"Eleonora Grassucci",
"Yuki Mitsufuji",
"Ping Zhang",
"Danilo Comminiello"
] | 2023-09-05 15:11:16 | http://arxiv.org/abs/2309.02478v1 | http://arxiv.org/pdf/2309.02478v1 | 2309.02478v1 |
Inferring effective couplings with Restricted Boltzmann Machines | Generative models offer a direct way to model complex data. Among them,
energy-based models provide us with a neural network model that aims to
accurately reproduce all statistical correlations observed in the data at the
level of the Boltzmann weight of the model. However, one challenge is to
understand the physical interpretation of such models. In this study, we
propose a simple solution by implementing a direct mapping between the energy
function of the Restricted Boltzmann Machine and an effective Ising spin
Hamiltonian that includes high-order interactions between spins. This mapping
includes interactions of all possible orders, going beyond the conventional
pairwise interactions typically considered in the inverse Ising approach, and
allowing the description of complex datasets. Earlier works attempted to
achieve this goal, but the proposed mappings either did not properly treat the
complexity of the problem or did not contain direct prescriptions for practical
application. To validate our method, we performed several controlled numerical
experiments where we trained the RBMs using equilibrium samples of predefined
models containing local external fields, two-body and three-body interactions
in various low-dimensional topologies. The results demonstrate the
effectiveness of our proposed approach in learning the correct interaction
network and pave the way for its application in modeling interesting datasets.
We also evaluate the quality of the inferred model based on different training
methods. | [
"Aurélien Decelle",
"Cyril Furtlehner",
"Alfonso De Jesus Navas Gómez",
"Beatriz Seoane"
] | 2023-09-05 14:55:09 | http://arxiv.org/abs/2309.02292v2 | http://arxiv.org/pdf/2309.02292v2 | 2309.02292v2 |
Haystack: A Panoptic Scene Graph Dataset to Evaluate Rare Predicate Classes | Current scene graph datasets suffer from strong long-tail distributions of
their predicate classes. Due to a very low number of some predicate classes in
the test sets, no reliable metrics can be retrieved for the rarest classes. We
construct a new panoptic scene graph dataset and a set of metrics that are
designed as a benchmark for the predictive performance especially on rare
predicate classes. To construct the new dataset, we propose a model-assisted
annotation pipeline that efficiently finds rare predicate classes that are
hidden in a large set of images like needles in a haystack.
Contrary to prior scene graph datasets, Haystack contains explicit negative
annotations, i.e. annotations that a given relation does not have a certain
predicate class. Negative annotations are helpful especially in the field of
scene graph generation and open up a whole new set of possibilities to improve
current scene graph generation models.
Haystack is 100% compatible with existing panoptic scene graph datasets and
can easily be integrated with existing evaluation pipelines. Our dataset and
code can be found here: https://lorjul.github.io/haystack/. It includes
annotation files and simple to use scripts and utilities, to help with
integrating our dataset in existing work. | [
"Julian Lorenz",
"Florian Barthel",
"Daniel Kienzle",
"Rainer Lienhart"
] | 2023-09-05 14:45:54 | http://arxiv.org/abs/2309.02286v1 | http://arxiv.org/pdf/2309.02286v1 | 2309.02286v1 |
PromptTTS 2: Describing and Generating Voices with Text Prompt | Speech conveys more information than text, as the same word can be uttered in
various voices to convey diverse information. Compared to traditional
text-to-speech (TTS) methods relying on speech prompts (reference speech) for
voice variability, using text prompts (descriptions) is more user-friendly
since speech prompts can be hard to find or may not exist at all. TTS
approaches based on the text prompt face two main challenges: 1) the
one-to-many problem, where not all details about voice variability can be
described in the text prompt, and 2) the limited availability of text prompt
datasets, where vendors and large cost of data labeling are required to write
text prompts for speech. In this work, we introduce PromptTTS 2 to address
these challenges with a variation network to provide variability information of
voice not captured by text prompts, and a prompt generation pipeline to utilize
the large language models (LLM) to compose high quality text prompts.
Specifically, the variation network predicts the representation extracted from
the reference speech (which contains full information about voice variability)
based on the text prompt representation. For the prompt generation pipeline, it
generates text prompts for speech with a speech language understanding model to
recognize voice attributes (e.g., gender, speed) from speech and a large
language model to formulate text prompts based on the recognition results.
Experiments on a large-scale (44K hours) speech dataset demonstrate that
compared to the previous works, PromptTTS 2 generates voices more consistent
with text prompts and supports the sampling of diverse voice variability,
thereby offering users more choices on voice generation. Additionally, the
prompt generation pipeline produces high-quality text prompts, eliminating the
large labeling cost. The demo page of PromptTTS 2 is available online. | [
"Yichong Leng",
"Zhifang Guo",
"Kai Shen",
"Xu Tan",
"Zeqian Ju",
"Yanqing Liu",
"Yufei Liu",
"Dongchao Yang",
"Leying Zhang",
"Kaitao Song",
"Lei He",
"Xiang-Yang Li",
"Sheng Zhao",
"Tao Qin",
"Jiang Bian"
] | 2023-09-05 14:45:27 | http://arxiv.org/abs/2309.02285v2 | http://arxiv.org/pdf/2309.02285v2 | 2309.02285v2 |
s-ID: Causal Effect Identification in a Sub-Population | Causal inference in a sub-population involves identifying the causal effect
of an intervention on a specific subgroup within a larger population. However,
ignoring the subtleties introduced by sub-populations can either lead to
erroneous inference or limit the applicability of existing methods. We
introduce and advocate for a causal inference problem in sub-populations
(henceforth called s-ID), in which we merely have access to observational data
of the targeted sub-population (as opposed to the entire population). Existing
inference problems in sub-populations operate on the premise that the given
data distributions originate from the entire population, thus, cannot tackle
the s-ID problem. To address this gap, we provide necessary and sufficient
conditions that must hold in the causal graph for a causal effect in a
sub-population to be identifiable from the observational distribution of that
sub-population. Given these conditions, we present a sound and complete
algorithm for the s-ID problem. | [
"Amir Mohammad Abouei",
"Ehsan Mokhtarian",
"Negar Kiyavash"
] | 2023-09-05 14:43:10 | http://arxiv.org/abs/2309.02281v1 | http://arxiv.org/pdf/2309.02281v1 | 2309.02281v1 |
A Comparison of Residual-based Methods on Fault Detection | An important initial step in fault detection for complex industrial systems
is gaining an understanding of their health condition. Subsequently, continuous
monitoring of this health condition becomes crucial to observe its evolution,
track changes over time, and isolate faults. As faults are typically rare
occurrences, it is essential to perform this monitoring in an unsupervised
manner. Various approaches have been proposed not only to detect faults in an
unsupervised manner but also to distinguish between different potential fault
types. In this study, we perform a comprehensive comparison between two
residual-based approaches: autoencoders, and the input-output models that
establish a mapping between operating conditions and sensor readings. We
explore the sensor-wise residuals and aggregated residuals for the entire
system in both methods. The performance evaluation focuses on three tasks:
health indicator construction, fault detection, and health indicator
interpretation. To perform the comparison, we utilize the Commercial Modular
Aero-Propulsion System Simulation (C-MAPSS) dynamical model, specifically a
subset of the turbofan engine dataset containing three different fault types.
All models are trained exclusively on healthy data. Fault detection is achieved
by applying a threshold that is determined based on the healthy condition. The
detection results reveal that both models are capable of detecting faults with
an average delay of around 20 cycles and maintain a low false positive rate.
While the fault detection performance is similar for both models, the
input-output model provides better interpretability regarding potential fault
types and the possible faulty components. | [
"Chi-Ching Hsu",
"Gaetan Frusque",
"Olga Fink"
] | 2023-09-05 14:39:27 | http://arxiv.org/abs/2309.02274v1 | http://arxiv.org/pdf/2309.02274v1 | 2309.02274v1 |
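The residual-plus-threshold detection scheme described in the abstract can be sketched generically; the synthetic residuals, the 99th-percentile threshold, and the fault magnitude are illustrative assumptions, not the C-MAPSS setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Residual = model prediction minus sensor reading. On healthy data the
# residual is small noise; a fault shifts its distribution.
healthy_res = np.abs(rng.normal(0.0, 1.0, size=1000))
threshold = np.quantile(healthy_res, 0.99)  # set from healthy data only

test_res = np.abs(rng.normal(0.0, 1.0, size=50))   # healthy segment
fault_res = np.abs(rng.normal(5.0, 1.0, size=50))  # faulty segment

flags_healthy = test_res > threshold
flags_fault = fault_res > threshold
print(flags_healthy.mean(), flags_fault.mean())  # low vs. high alarm rate
```

Setting the threshold from healthy data only is what keeps the procedure unsupervised: no fault labels are needed, matching the paper's training regime.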
Graph-Based Automatic Feature Selection for Multi-Class Classification via Mean Simplified Silhouette | This paper introduces a novel graph-based filter method for automatic feature
selection (abbreviated as GB-AFS) for multi-class classification tasks. The
method determines the minimum combination of features required to sustain
prediction performance while maintaining complementary discriminating abilities
between different classes. It does not require any user-defined parameters such
as the number of features to select. The methodology employs the
Jeffries-Matusita (JM) distance in conjunction with t-distributed Stochastic
Neighbor Embedding (t-SNE) to generate a low-dimensional space reflecting how
effectively each feature can differentiate between each pair of classes. The
minimum number of features is selected using our newly developed Mean
Simplified Silhouette (abbreviated as MSS) index, designed to evaluate the
clustering results for the feature selection task. Experimental results on
public data sets demonstrate the superior performance of the proposed GB-AFS
over other filter-based techniques and automatic feature selection approaches.
Moreover, the proposed algorithm maintained the accuracy achieved when
utilizing all features, while using only $7\%$ to $30\%$ of the features.
Consequently, this resulted in a reduction of $15\%$ to $70\%$ in the time
needed for classification.
"David Levin",
"Gonen Singer"
] | 2023-09-05 14:37:31 | http://arxiv.org/abs/2309.02272v1 | http://arxiv.org/pdf/2309.02272v1 | 2309.02272v1 |
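The Jeffries-Matusita distance used above has a closed form for two Gaussian class models; this univariate sketch (JM = 2(1 - e^{-B}), with B the Bhattacharyya distance) is an illustration, not the paper's multivariate, per-feature implementation.

```python
import math

def jm_distance(mu1, var1, mu2, var2):
    """Jeffries-Matusita distance between two 1-D Gaussians.
    B is the Bhattacharyya distance; JM saturates at 2 for fully
    separable classes, which makes it convenient for ranking features."""
    b = (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
         + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))
    return 2.0 * (1.0 - math.exp(-b))

print(jm_distance(0.0, 1.0, 0.0, 1.0))   # identical classes → 0.0
print(jm_distance(0.0, 1.0, 10.0, 1.0))  # well separated → close to 2.0
```

The bounded range [0, 2] is what lets per-class-pair JM values be embedded and clustered on a common scale across features.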
Optimal Sample Selection Through Uncertainty Estimation and Its Application in Deep Learning | Modern deep learning heavily relies on large labeled datasets, which often
comse with high costs in terms of both manual labeling and computational
resources. To mitigate these challenges, researchers have explored the use of
informative subset selection techniques, including coreset selection and active
learning. Specifically, coreset selection involves sampling data with both
input ($\bx$) and output ($\by$), active learning focuses solely on the input
data ($\bx$).
In this study, we present a theoretically optimal solution for addressing
both coreset selection and active learning within the context of linear softmax
regression. Our proposed method, COPS (unCertainty based OPtimal Sub-sampling),
is designed to minimize the expected loss of a model trained on subsampled
data. Unlike existing approaches that rely on explicit calculations of the
inverse covariance matrix, which are not easily applicable to deep learning
scenarios, COPS leverages the model's logits to estimate the sampling ratio.
This sampling ratio is closely associated with model uncertainty and can be
effectively applied to deep learning tasks. Furthermore, we address the
challenge of model sensitivity to misspecification by incorporating a
down-weighting approach for low-density samples, drawing inspiration from
previous works.
To assess the effectiveness of our proposed method, we conducted extensive
empirical experiments using deep neural networks on benchmark datasets. The
results consistently showcase the superior performance of COPS compared to
baseline methods, reaffirming its efficacy. | [
"Yong Lin",
"Chen Liu",
"Chenlu Ye",
"Qing Lian",
"Yuan Yao",
"Tong Zhang"
] | 2023-09-05 14:06:33 | http://arxiv.org/abs/2309.02476v1 | http://arxiv.org/pdf/2309.02476v1 | 2309.02476v1 |
MA-VAE: Multi-head Attention-based Variational Autoencoder Approach for Anomaly Detection in Multivariate Time-series Applied to Automotive Endurance Powertrain Testing | A clear need for automatic anomaly detection applied to automotive testing
has emerged as more and more attention is paid to the data recorded and manual
evaluation by humans reaches its capacity. Such real-world data is massive,
diverse, multivariate and temporal in nature, therefore requiring modelling of
the testee behaviour. We propose a variational autoencoder with multi-head
attention (MA-VAE), which, when trained on unlabelled data, not only provides
very few false positives but also manages to detect the majority of the
anomalies presented. In addition to that, the approach offers a novel way to
avoid the bypass phenomenon, an undesirable behaviour investigated in
literature. Lastly, the approach also introduces a new method to remap
individual windows to a continuous time series. The results are presented in
the context of a real-world industrial data set and several experiments are
undertaken to further investigate certain aspects of the proposed model. When
configured properly, it is wrong only 9% of the time when an anomaly is flagged
and discovers 67% of the anomalies present. Also, MA-VAE has the potential to
perform well with only a fraction of the training and validation subset,
however, to extract it, a more sophisticated threshold estimation method is
required. | [
"Lucas Correia",
"Jan-Christoph Goos",
"Philipp Klein",
"Thomas Bäck",
"Anna V. Kononova"
] | 2023-09-05 14:05:37 | http://arxiv.org/abs/2309.02253v1 | http://arxiv.org/pdf/2309.02253v1 | 2309.02253v1 |
RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning | In the domain of machine learning algorithms, the significance of the loss
function is paramount, especially in supervised learning tasks. It serves as a
fundamental pillar that profoundly influences the behavior and efficacy of
supervised learning algorithms. Traditional loss functions, while widely used,
often struggle to handle noisy and high-dimensional data, impede model
interpretability, and lead to slow convergence during training. In this paper,
we address the aforementioned constraints by proposing a novel robust, bounded,
sparse, and smooth (RoBoSS) loss function for supervised learning. Further, we
incorporate the RoBoSS loss function within the framework of support vector
machine (SVM) and introduce a new robust algorithm named
$\mathcal{L}_{rbss}$-SVM. For the theoretical analysis, the
classification-calibrated property and generalization ability are also
presented. These investigations are crucial for gaining deeper insights into
the performance of the RoBoSS loss function in the classification tasks and its
potential to generalize well to unseen data. To empirically demonstrate the
effectiveness of the proposed $\mathcal{L}_{rbss}$-SVM, we evaluate it on $88$
real-world UCI and KEEL datasets from diverse domains. Additionally, to
exemplify the effectiveness of the proposed $\mathcal{L}_{rbss}$-SVM within the
biomedical realm, we evaluated it on two medical datasets: the
electroencephalogram (EEG) signal dataset and the breast cancer (BreaKHis)
dataset. The numerical results substantiate the superiority of the proposed
$\mathcal{L}_{rbss}$-SVM model, both in terms of its remarkable generalization
performance and its efficiency in training time. | [
"Mushir Akhtar",
"M. Tanveer",
"Mohd. Arshad"
] | 2023-09-05 13:59:50 | http://arxiv.org/abs/2309.02250v1 | http://arxiv.org/pdf/2309.02250v1 | 2309.02250v1 |
Encoding Seasonal Climate Predictions for Demand Forecasting with Modular Neural Network | Current time-series forecasting problems use short-term weather attributes as
exogenous inputs. However, in specific time-series forecasting solutions (e.g.,
demand prediction in the supply chain), seasonal climate predictions are
crucial to improve its resilience. Representing mid to long-term seasonal
climate forecasts is challenging as seasonal climate predictions are uncertain,
and encoding spatio-temporal relationship of climate forecasts with demand is
complex.
We propose a novel modeling framework that efficiently encodes seasonal
climate predictions to provide robust and reliable time-series forecasting for
supply chain functions. The encoding framework enables effective learning of
latent representations -- be it uncertain seasonal climate prediction or other
time-series data (e.g., buyer patterns) -- via a modular neural network
architecture. Our extensive experiments indicate that learning such
representations to model seasonal climate forecast results in an error
reduction of approximately 13\% to 17\% across multiple real-world data sets
compared to existing demand forecasting methods. | [
"Smit Marvaniya",
"Jitendra Singh",
"Nicolas Galichet",
"Fred Ochieng Otieno",
"Geeth De Mel",
"Kommy Weldemariam"
] | 2023-09-05 13:58:59 | http://arxiv.org/abs/2309.02248v1 | http://arxiv.org/pdf/2309.02248v1 | 2309.02248v1 |
Enhancing Trustworthiness in ML-Based Network Intrusion Detection with Uncertainty Quantification | The evolution of Internet and its related communication technologies have
consistently increased the risk of cyber-attacks. In this context, a crucial
role is played by Intrusion Detection Systems (IDSs), which are security
devices designed to identify and mitigate attacks to modern networks. In the
last decade, data-driven approaches based on Machine Learning (ML) have gained
more and more popularity for executing the classification tasks required by
IDSs. However, typical ML models adopted for this purpose do not properly take
into account the uncertainty associated with their own prediction. This poses
significant challenges, as they tend to produce misleadingly high
classification scores for both misclassified inputs and inputs belonging to
unknown classes (e.g. novel attacks), limiting the trustworthiness of existing
ML-based solutions. In this paper we argue that ML-based IDSs should always
provide accurate uncertainty quantification to avoid overconfident predictions.
In fact, an uncertainty-aware classification would be beneficial to enhance
closed-set classification performance, would make it possible to efficiently
carry out Active Learning, and would help recognize inputs of unknown classes
as truly unknowns (i.e., not belonging to any known class), unlocking open-set
classification capabilities and Out-of-Distribution (OoD) detection. To verify
it, we compare various ML-based methods for uncertainty quantification and for
OoD detection, either specifically designed for or tailored to the domain of
network intrusion detection, showing how a proper estimation of the model
uncertainty can be exploited to significantly enhance the trustworthiness of
ML-based IDSs. Our results also confirm that conventional ML-based approaches
to network intrusion detection (e.g. based on traditional feed-forward Neural
Networks) may not be appropriate and should be adopted with caution. | [
"Jacopo Talpini",
"Fabio Sartori",
"Marco Savi"
] | 2023-09-05 13:52:41 | http://arxiv.org/abs/2310.10655v1 | http://arxiv.org/pdf/2310.10655v1 | 2310.10655v1 |
RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems | In practical cloud-edge scenarios, where a resource constrained edge performs
data acquisition and a cloud system (having sufficient resources) performs
inference tasks with a deep neural network (DNN), adversarial robustness is
critical for reliability and ubiquitous deployment. Adversarial detection is a
prime adversarial defence technique used in prior literature. However, in prior
detection works, the detector is attached to the classifier model and both
detector and classifier work in tandem to perform adversarial detection that
requires a high computational overhead which is not available at the low-power
edge. Therefore, prior works can only perform adversarial detection at the
cloud and not at the edge. This means that in case of adversarial attacks, the
unfavourable adversarial samples must be communicated to the cloud which leads
to energy wastage at the edge device. Therefore, a low-power edge-friendly
adversarial detection method is required to improve the energy efficiency of
the edge and robustness of the cloud-based classifier. To this end, RobustEdge
proposes Quantization-enabled Energy Separation (QES) training with "early
detection and exit" to perform edge-based low cost adversarial detection. The
QES-trained detector implemented at the edge blocks adversarial data
transmission to the classifier model, thereby improving adversarial robustness
and energy-efficiency of the Cloud-Edge system. | [
"Abhishek Moitra",
"Abhiroop Bhattacharjee",
"Youngeun Kim",
"Priyadarshini Panda"
] | 2023-09-05 13:51:28 | http://arxiv.org/abs/2310.06845v1 | http://arxiv.org/pdf/2310.06845v1 | 2310.06845v1 |
Self-Similarity-Based and Novelty-based loss for music structure analysis | Music Structure Analysis (MSA) is the task aiming at identifying musical
segments that compose a music track and possibly label them based on their
similarity. In this paper we propose a supervised approach for the task of
music boundary detection. In our approach we simultaneously learn features and
convolution kernels. For this we jointly optimize (i) a loss based on the
Self-Similarity-Matrix (SSM) obtained with the learned features, denoted
SSM-loss, and (ii) a loss based on the novelty score obtained by applying the
learned kernels to the estimated SSM, denoted novelty-loss. We also
demonstrate that relative feature learning, through self-attention, is
beneficial for the task of MSA. Finally, we compare the performances of our
approach to previously proposed approaches on the standard RWC-Pop, and various
subsets of SALAMI. | [
"Geoffroy Peeters"
] | 2023-09-05 13:49:29 | http://arxiv.org/abs/2309.02243v1 | http://arxiv.org/pdf/2309.02243v1 | 2309.02243v1 |
Sample Size in Natural Language Processing within Healthcare Research | Sample size calculation is an essential step in most data-based disciplines.
Large enough samples ensure representativeness of the population and determine
the precision of estimates. This is true for most quantitative studies,
including those that employ machine learning methods, such as natural language
processing, where free-text is used to generate predictions and classify
instances of text. Within the healthcare domain, the lack of sufficient corpora
of previously collected data can be a limiting factor when determining sample
sizes for new studies. This paper tries to address the issue by making
recommendations on sample sizes for text classification tasks in the healthcare
domain.
Models trained on the MIMIC-III database of critical care records from Beth
Israel Deaconess Medical Center were used to classify documents as having or
not having Unspecified Essential Hypertension, the most common diagnosis code
in the database. Simulations were performed using various classifiers on
different sample sizes and class proportions. This was repeated for a
comparatively less common diagnosis code within the database of diabetes
mellitus without mention of complication.
Smaller sample sizes yielded better results when using a K-nearest
neighbours classifier, whereas larger sample sizes provided better results with
support vector machines and BERT models. Overall, a sample size larger than
1000 was sufficient to provide decent performance metrics.
The simulations conducted within this study provide guidelines that can be
used as recommendations for selecting appropriate sample sizes and class
proportions, and for predicting expected performance, when building classifiers
for textual healthcare data. The methodology used here can be modified for
sample size estimates calculations with other datasets. | [
"Jaya Chaturvedi",
"Diana Shamsutdinova",
"Felix Zimmer",
"Sumithra Velupillai",
"Daniel Stahl",
"Robert Stewart",
"Angus Roberts"
] | 2023-09-05 13:42:43 | http://arxiv.org/abs/2309.02237v1 | http://arxiv.org/pdf/2309.02237v1 | 2309.02237v1 |
Distributionally Robust Model-based Reinforcement Learning with Large State Spaces | Three major challenges in reinforcement learning are the complex dynamical
systems with large state spaces, the costly data acquisition processes, and the
deviation of real-world dynamics from the training environment deployment. To
overcome these issues, we study distributionally robust Markov decision
processes with continuous state spaces under the widely used Kullback-Leibler,
chi-square, and total variation uncertainty sets. We propose a model-based
approach that utilizes Gaussian Processes and the maximum variance reduction
algorithm to efficiently learn multi-output nominal transition dynamics,
leveraging access to a generative model (i.e., simulator). We further
demonstrate the statistical sample complexity of the proposed method for
different uncertainty sets. These complexity bounds are independent of the
number of states and extend beyond linear dynamics, ensuring the effectiveness
of our approach in identifying near-optimal distributionally-robust policies.
The proposed method can be further combined with other model-free
distributionally robust reinforcement learning methods to obtain a near-optimal
robust policy. Experimental results demonstrate the robustness of our algorithm
to distributional shifts and its superior performance in terms of the number of
samples needed. | [
"Shyam Sundhar Ramesh",
"Pier Giuseppe Sessa",
"Yifan Hu",
"Andreas Krause",
"Ilija Bogunovic"
] | 2023-09-05 13:42:11 | http://arxiv.org/abs/2309.02236v1 | http://arxiv.org/pdf/2309.02236v1 | 2309.02236v1 |
Improving equilibrium propagation without weight symmetry through Jacobian homeostasis | Equilibrium propagation (EP) is a compelling alternative to the
backpropagation of error algorithm (BP) for computing gradients of neural
networks on biological or analog neuromorphic substrates. Still, the algorithm
requires weight symmetry and infinitesimal equilibrium perturbations, i.e.,
nudges, to estimate unbiased gradients efficiently. Both requirements are
challenging to implement in physical systems. Yet, whether and how weight
asymmetry affects its applicability is unknown because, in practice, it may be
masked by biases introduced through the finite nudge. To address this question,
we study generalized EP, which can be formulated without weight symmetry, and
analytically isolate the two sources of bias. For complex-differentiable
non-symmetric networks, we show that the finite nudge does not pose a problem,
as exact derivatives can still be estimated via a Cauchy integral. In contrast,
weight asymmetry introduces bias resulting in low task performance due to poor
alignment of EP's neuronal error vectors compared to BP. To mitigate this
issue, we present a new homeostatic objective that directly penalizes
functional asymmetries of the Jacobian at the network's fixed point. This
homeostatic objective dramatically improves the network's ability to solve
complex tasks such as ImageNet 32x32. Our results lay the theoretical
groundwork for studying and mitigating the adverse effects of imperfections of
physical networks on learning algorithms that rely on the substrate's
relaxation dynamics. | [
"Axel Laborieux",
"Friedemann Zenke"
] | 2023-09-05 13:20:43 | http://arxiv.org/abs/2309.02214v1 | http://arxiv.org/pdf/2309.02214v1 | 2309.02214v1 |
Distributionally Robust Machine Learning with Multi-source Data | Classical machine learning methods may lead to poor prediction performance
when the target distribution differs from the source populations. This paper
utilizes data from multiple sources and introduces a group distributionally
robust prediction model defined to optimize an adversarial reward about
explained variance with respect to a class of target distributions. Compared to
classical empirical risk minimization, the proposed robust prediction model
improves the prediction accuracy for target populations with distribution
shifts. We show that our group distributionally robust prediction model is a
weighted average of the source populations' conditional outcome models. We
leverage this key identification result to robustify arbitrary machine learning
algorithms, including, for example, random forests and neural networks. We
devise a novel bias-corrected estimator to estimate the optimal aggregation
weight for general machine-learning algorithms and demonstrate its improvement
in the convergence rate. Our proposal can be seen as a distributionally robust
federated learning approach that is computationally efficient and easy to
implement using arbitrary machine learning base algorithms, satisfies some
privacy constraints, and has a nice interpretation of different sources'
importance for predicting a given target covariate distribution. We demonstrate
the performance of our proposed group distributionally robust method on
simulated and real data with random forests and neural networks as
base-learning algorithms. | [
"Zhenyu Wang",
"Peter Bühlmann",
"Zijian Guo"
] | 2023-09-05 13:19:40 | http://arxiv.org/abs/2309.02211v2 | http://arxiv.org/pdf/2309.02211v2 | 2309.02211v2 |
Latent Disentanglement in Mesh Variational Autoencoders Improves the Diagnosis of Craniofacial Syndromes and Aids Surgical Planning | The use of deep learning to undertake shape analysis of the complexities of
the human head holds great promise. However, there have traditionally been a
number of barriers to accurate modelling, especially when operating on both a
global and local level. In this work, we will discuss the application of the
Swap Disentangled Variational Autoencoder (SD-VAE) with relevance to Crouzon,
Apert and Muenke syndromes. Although syndrome classification is performed on
the entire mesh, it is also possible, for the first time, to analyse the
influence of each region of the head on the syndromic phenotype. By
manipulating specific parameters of the generative model, and producing
procedure-specific new shapes, it is also possible to simulate the outcome of a
range of craniofacial surgical procedures. This opens new avenues to advance
diagnosis, aids surgical planning and allows for the objective evaluation of
surgical outcomes. | [
"Simone Foti",
"Alexander J. Rickart",
"Bongjin Koo",
"Eimear O' Sullivan",
"Lara S. van de Lande",
"Athanasios Papaioannou",
"Roman Khonsari",
"Danail Stoyanov",
"N. u. Owase Jeelani",
"Silvia Schievano",
"David J. Dunaway",
"Matthew J. Clarkson"
] | 2023-09-05 13:16:53 | http://arxiv.org/abs/2309.10825v1 | http://arxiv.org/pdf/2309.10825v1 | 2309.10825v1 |
Language Models for Novelty Detection in System Call Traces | Due to the complexity of modern computer systems, novel and unexpected
behaviors frequently occur. Such deviations are either normal occurrences, such
as software updates and new user activities, or abnormalities, such as
misconfigurations, latency issues, intrusions, and software bugs. Regardless,
novel behaviors are of great interest to developers, and there is a genuine
need for efficient and effective methods to detect them. Nowadays, researchers
consider system calls to be the most fine-grained and accurate source of
information to investigate the behavior of computer systems. Accordingly, this
paper introduces a novelty detection methodology that relies on a probability
distribution over sequences of system calls, which can be seen as a language
model. Language models estimate the likelihood of sequences, and since
novelties deviate from previously observed behaviors by definition, they would
be unlikely under the model. Following the success of neural networks for
language models, three architectures are evaluated in this work: the widespread
LSTM, the state-of-the-art Transformer, and the lower-complexity Longformer.
However, large neural networks typically require an enormous amount of data to
be trained effectively, and to the best of our knowledge, no massive modern
datasets of kernel traces are publicly available. This paper addresses this
limitation by introducing a new open-source dataset of kernel traces comprising
over 2 million web requests with seven distinct behaviors. The proposed
methodology requires minimal expert hand-crafting and achieves an F-score and
AuROC greater than 95% on most novelties while being data- and task-agnostic.
The source code and trained models are publicly available on GitHub while the
datasets are available on Zenodo. | [
"Quentin Fournier",
"Daniel Aloise",
"Leandro R. Costa"
] | 2023-09-05 13:11:40 | http://arxiv.org/abs/2309.02206v1 | http://arxiv.org/pdf/2309.02206v1 | 2309.02206v1 |
On the Complexity of Differentially Private Best-Arm Identification with Fixed Confidence | Best Arm Identification (BAI) problems are progressively used for
data-sensitive applications, such as designing adaptive clinical trials, tuning
hyper-parameters, and conducting user studies to name a few. Motivated by the
data privacy concerns invoked by these applications, we study the problem of
BAI with fixed confidence under $\epsilon$-global Differential Privacy (DP).
First, to quantify the cost of privacy, we derive a lower bound on the sample
complexity of any $\delta$-correct BAI algorithm satisfying $\epsilon$-global
DP. Our lower bound suggests the existence of two privacy regimes depending on
the privacy budget $\epsilon$. In the high-privacy regime (small $\epsilon$),
the hardness depends on a coupled effect of privacy and a novel
information-theoretic quantity, called the Total Variation Characteristic Time.
In the low-privacy regime (large $\epsilon$), the sample complexity lower bound
reduces to the classical non-private lower bound. Second, we propose AdaP-TT,
an $\epsilon$-global DP variant of the Top Two algorithm. AdaP-TT runs in
arm-dependent adaptive episodes and adds Laplace noise to ensure a good
privacy-utility trade-off. We derive an asymptotic upper bound on the sample
complexity of AdaP-TT that matches with the lower bound up to multiplicative
constants in the high-privacy regime. Finally, we provide an experimental
analysis of AdaP-TT that validates our theoretical results. | [
"Achraf Azize",
"Marc Jourdan",
"Aymen Al Marjani",
"Debabrota Basu"
] | 2023-09-05 13:07:25 | http://arxiv.org/abs/2309.02202v1 | http://arxiv.org/pdf/2309.02202v1 | 2309.02202v1 |
Sparse Function-space Representation of Neural Networks | Deep neural networks (NNs) are known to lack uncertainty estimates and
struggle to incorporate new data. We present a method that mitigates these
issues by converting NNs from weight space to function space, via a dual
parameterization. Importantly, the dual parameterization enables us to
formulate a sparse representation that captures information from the entire
data set. This offers a compact and principled way of capturing uncertainty and
enables us to incorporate new data without retraining whilst retaining
predictive performance. We provide proof-of-concept demonstrations with the
proposed approach for quantifying uncertainty in supervised learning on UCI
benchmark tasks. | [
"Aidan Scannell",
"Riccardo Mereu",
"Paul Chang",
"Ella Tamir",
"Joni Pajarinen",
"Arno Solin"
] | 2023-09-05 12:56:35 | http://arxiv.org/abs/2309.02195v1 | http://arxiv.org/pdf/2309.02195v1 | 2309.02195v1 |
Personalized Federated Deep Reinforcement Learning-based Trajectory Optimization for Multi-UAV Assisted Edge Computing | In the era of 5G mobile communication, there has been a significant surge in
research focused on unmanned aerial vehicles (UAVs) and mobile edge computing
technology. UAVs can serve as intelligent servers in edge computing
environments, optimizing their flight trajectories to maximize communication
system throughput. Deep reinforcement learning (DRL)-based trajectory
optimization algorithms may suffer from poor training performance due to
intricate terrain features and inadequate training data. To overcome this
limitation, some studies have proposed leveraging federated learning (FL) to
mitigate the data isolation problem and expedite convergence. Nevertheless, the
efficacy of global FL models can be negatively impacted by the high
heterogeneity of local data, which could potentially impede the training
process and even compromise the performance of local agents. This work proposes
a novel solution to address these challenges, namely personalized federated
deep reinforcement learning (PF-DRL), for multi-UAV trajectory optimization.
PF-DRL aims to develop individualized models for each agent to address the data
scarcity issue and mitigate the negative impact of data heterogeneity.
Simulation results demonstrate that the proposed algorithm achieves superior
training performance with faster convergence rates, and improves service
quality compared to other DRL-based approaches. | [
"Zhengrong Song",
"Chuan Ma",
"Ming Ding",
"Howard H. Yang",
"Yuwen Qian",
"Xiangwei Zhou"
] | 2023-09-05 12:54:40 | http://arxiv.org/abs/2309.02193v1 | http://arxiv.org/pdf/2309.02193v1 | 2309.02193v1 |
Leveraging BERT Language Models for Multi-Lingual ESG Issue Identification | Environmental, Social, and Governance (ESG) has been used as a metric to
measure the negative impacts and enhance positive outcomes of companies in
areas such as the environment, society, and governance. Recently, investors
have increasingly recognized the significance of ESG criteria in their
investment choices, leading businesses to integrate ESG principles into their
operations and strategies. The Multi-Lingual ESG Issue Identification (ML-ESG)
shared task encompasses the classification of news documents into 35 distinct
ESG issue labels. In this study, we explored multiple strategies harnessing
BERT language models to achieve accurate classification of news documents
across these labels. Our analysis revealed that the RoBERTa classifier emerged
as one of the most successful approaches, securing the second-place position
for the English test dataset, and sharing the fifth-place position for the
French test dataset. Furthermore, our SVM-based binary model tailored for the
Chinese language exhibited exceptional performance, earning the second-place
rank on the test dataset. | [
"Elvys Linhares Pontes",
"Mohamed Benjannet",
"Lam Kim Ming"
] | 2023-09-05 12:48:21 | http://arxiv.org/abs/2309.02189v1 | http://arxiv.org/pdf/2309.02189v1 | 2309.02189v1 |
A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges | In recent years, the development of robotics and artificial intelligence (AI)
systems has been nothing short of remarkable. As these systems continue to
evolve, they are being utilized in increasingly complex and unstructured
environments, such as autonomous driving, aerial robotics, and natural language
processing. As a consequence, programming their behaviors manually or defining
their behavior through reward functions (as done in reinforcement learning
(RL)) has become exceedingly difficult. This is because such environments
require a high degree of flexibility and adaptability, making it challenging to
specify an optimal set of rules or reward signals that can account for all
possible situations. In such environments, learning from an expert's behavior
through imitation is often more appealing. This is where imitation learning
(IL) comes into play - a process where desired behavior is learned by imitating
an expert's behavior, which is provided through demonstrations.
This paper aims to provide an introduction to IL and an overview of its
underlying assumptions and approaches. It also offers a detailed description of
recent advances and emerging areas of research in the field. Additionally, the
paper discusses how researchers have addressed common challenges associated
with IL and provides potential directions for future research. Overall, the
goal of the paper is to provide a comprehensive guide to the growing field of
IL in robotics and AI. | [
"Maryam Zare",
"Parham M. Kebria",
"Abbas Khosravi",
"Saeid Nahavandi"
] | 2023-09-05 11:56:07 | http://arxiv.org/abs/2309.02473v1 | http://arxiv.org/pdf/2309.02473v1 | 2309.02473v1 |
Bias Propagation in Federated Learning | We show that participating in federated learning can be detrimental to group
fairness. In fact, the bias of a few parties against under-represented groups
(identified by sensitive attributes such as gender or race) can propagate
through the network to all the parties in the network. We analyze and explain
bias propagation in federated learning on naturally partitioned real-world
datasets. Our analysis reveals that biased parties unintentionally yet
stealthily encode their bias in a small number of model parameters, and
throughout the training, they steadily increase the dependence of the global
model on sensitive attributes. What is important to highlight is that the
experienced bias in federated learning is higher than what parties would
otherwise encounter in centralized training with a model trained on the union
of all their data. This indicates that the bias is due to the algorithm. Our
work calls for auditing group fairness in federated learning and designing
learning algorithms that are robust to bias propagation. | [
"Hongyan Chang",
"Reza Shokri"
] | 2023-09-05 11:55:03 | http://arxiv.org/abs/2309.02160v1 | http://arxiv.org/pdf/2309.02160v1 | 2309.02160v1 |
Model-based Offline Policy Optimization with Adversarial Network | Model-based offline reinforcement learning (RL), which builds a supervised
transition model with logging dataset to avoid costly interactions with the
online environment, has been a promising approach for offline policy
optimization. As the discrepancy between the logging data and online
environment may result in a distributional shift problem, many prior works have
studied how to build robust transition models conservatively and estimate the
model uncertainty accurately. However, the over-conservatism can limit the
exploration of the agent, and the uncertainty estimates may be unreliable. In
this work, we propose a novel Model-based Offline policy optimization framework
with Adversarial Network (MOAN). The key idea is to use adversarial learning to
build a transition model with better generalization, where an adversary is
introduced to distinguish between in-distribution and out-of-distribution
samples. Moreover, the adversary can naturally provide a quantification of the
model's uncertainty with theoretical guarantees. Extensive experiments showed
that our approach outperforms existing state-of-the-art baselines on widely
studied offline RL benchmarks. It can also generate diverse in-distribution
samples, and quantify the uncertainty more accurately. | [
"Junming Yang",
"Xingguo Chen",
"Shengyuan Wang",
"Bolei Zhang"
] | 2023-09-05 11:49:33 | http://arxiv.org/abs/2309.02157v1 | http://arxiv.org/pdf/2309.02157v1 | 2309.02157v1 |
Making Large Language Models Better Reasoners with Alignment | Reasoning is a cognitive process of using evidence to reach a sound
conclusion. The reasoning capability is essential for large language models
(LLMs) to serve as the brain of the artificial general intelligence agent.
Recent studies reveal that fine-tuning LLMs on data with the chain of thought
(COT) reasoning process can significantly enhance their reasoning capabilities.
However, we find that the fine-tuned LLMs suffer from an \textit{Assessment
Misalignment} problem, i.e., they frequently assign higher scores to subpar
COTs, leading to potential limitations in their reasoning abilities. To address
this problem, we introduce an \textit{Alignment Fine-Tuning (AFT)} paradigm,
which involves three steps: 1) fine-tuning LLMs with COT training data; 2)
generating multiple COT responses for each question, and categorizing them into
positive and negative ones based on whether they achieve the correct answer; 3)
calibrating the scores of positive and negative responses given by LLMs with a
novel constraint alignment loss. Specifically, the constraint alignment loss
has two objectives: a) Alignment, which guarantees that positive scores surpass
negative scores to encourage answers with high-quality COTs; b) Constraint,
which keeps the negative scores confined to a reasonable range to prevent the
model degradation. Beyond just the binary positive and negative feedback, the
constraint alignment loss can be seamlessly adapted to the ranking situations
when ranking feedback is accessible. Furthermore, we also delve deeply into
recent ranking-based alignment methods, such as DPO, RRHF, and PRO, and
discover that the constraint, which has been overlooked by these approaches, is
also crucial for their performance. Extensive experiments on four reasoning
benchmarks with both binary and ranking feedback demonstrate the effectiveness
of AFT. | [
"Peiyi Wang",
"Lei Li",
"Liang Chen",
"Feifan Song",
"Binghuai Lin",
"Yunbo Cao",
"Tianyu Liu",
"Zhifang Sui"
] | 2023-09-05 11:32:48 | http://arxiv.org/abs/2309.02144v1 | http://arxiv.org/pdf/2309.02144v1 | 2309.02144v1 |