title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---
Spatiotemporal Graph Neural Networks with Uncertainty Quantification for Traffic Incident Risk Prediction | Predicting traffic incident risks at granular spatiotemporal levels is
challenging. The datasets predominantly feature zero values, indicating no
incidents, with sporadic high-risk values for severe incidents. Notably, a
majority of current models, especially deep learning methods, focus solely on
estimating risk values, overlooking the uncertainties arising from the
inherently unpredictable nature of incidents. To tackle this challenge, we
introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural Networks
(STZITD-GNNs). Our model merges the reliability of traditional statistical
models with the flexibility of graph neural networks, aiming to precisely
quantify uncertainties associated with road-level traffic incident risks. This
model strategically employs a compound distribution from the Tweedie family,
using a Poisson distribution to model risk frequency and a Gamma distribution to
account for incident severity. Furthermore, a zero-inflated component helps to
identify the non-incident risk scenarios. As a result, the STZITD-GNNs
effectively capture the dataset's skewed distribution, placing emphasis on
infrequent but impactful severe incidents. Empirical tests using real-world
traffic data from London, UK, demonstrate that our model excels beyond current
benchmarks. The forte of STZITD-GNN resides not only in its accuracy but also
in its adeptness at curtailing uncertainties, delivering robust predictions
over short (7 days) and extended (14 days) timeframes. | [
"Xiaowei Gao",
"Xinke Jiang",
"Dingyi Zhuang",
"Huanfa Chen",
"Shenhao Wang",
"James Haworth"
] | 2023-09-10 16:35:47 | http://arxiv.org/abs/2309.05072v1 | http://arxiv.org/pdf/2309.05072v1 | 2309.05072v1 |
Mutation-based Fault Localization of Deep Neural Networks | Deep neural networks (DNNs) are susceptible to bugs, just like other types of
software systems. A significant uptick in using DNN, and its applications in
wide-ranging areas, including safety-critical systems, warrant extensive
research on software engineering tools for improving the reliability of
DNN-based systems. One such tool that has gained significant attention in the
recent years is DNN fault localization. This paper revisits mutation-based
fault localization in the context of DNN models and proposes a novel technique,
named deepmufl, applicable to a wide range of DNN models. We have implemented
deepmufl and have evaluated its effectiveness using 109 bugs obtained from
StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by
ranking the buggy layer in top-1 position, outperforming state-of-the-art
static and dynamic DNN fault localization systems that are also designed to
target the class of bugs supported by deepmufl. Moreover, we observed that we
can halve the fault localization time for a pre-trained model using mutation
selection, while losing only 7.55% of the bugs localized in the top-1 position. | [
"Ali Ghanbari",
"Deepak-George Thomas",
"Muhammad Arbab Arshad",
"Hridesh Rajan"
] | 2023-09-10 16:18:49 | http://arxiv.org/abs/2309.05067v1 | http://arxiv.org/pdf/2309.05067v1 | 2309.05067v1 |
Classification of Spam URLs Using Machine Learning Approaches | The Internet is used by billions of users daily because it offers fast and
free communication tools and platforms. Nevertheless, with this significant
increase in usage, huge amounts of spam are generated every second, which
wastes internet resources and, more importantly, users time. This study
investigates using machine learning models to classify URLs as spam or
non-spam. Since the dataset contains only a single raw feature, the URL string
itself, we first extract features from it and then compare the performance of
several models, including
k-nearest neighbors, bagging, random forest, logistic regression, and others.
We find that bagging achieves the best accuracy, with an accuracy of 96.5%.
This suggests that bagging is a promising approach for classifying URLs as spam
or non-spam. | [
"Omar Husni Odeh",
"Anas Arram",
"Murad Njoum"
] | 2023-09-10 16:15:09 | http://arxiv.org/abs/2310.05953v1 | http://arxiv.org/pdf/2310.05953v1 | 2310.05953v1 |
Federated Learning Incentive Mechanism under Buyers' Auction Market | Auction-based Federated Learning (AFL) enables open collaboration among
self-interested data consumers and data owners. Existing AFL approaches are
commonly built on the assumption of a sellers' market, in which the service
clients, as sellers, are treated as scarce resources, so the aggregation
servers, as buyers, must compete in bidding. Yet, as the technology progresses,
an increasing number of qualified clients are now capable of performing
federated learning tasks, leading to a shift from a sellers' market to a
buyers' market. In
this paper, we shift the angle by adapting the procurement auction framework,
aiming to explain the pricing behavior under buyers' market. Our modeling
starts with a basic setting under complete information, then moves to the
scenario where sellers' information is not fully observable. To select clients
with high reliability and data quality, and to prevent external attacks, we
utilize a blockchain-based reputation mechanism. The
experimental results validate the effectiveness of our approach. | [
"Jiaxi Yang",
"Zihao Guo",
"Sheng Cao",
"Cuifang Zhao",
"Li-Chuan Tsai"
] | 2023-09-10 16:09:02 | http://arxiv.org/abs/2309.05063v1 | http://arxiv.org/pdf/2309.05063v1 | 2309.05063v1 |
Machine Learning for maximizing the memristivity of single and coupled quantum memristors | We propose machine learning (ML) methods to characterize the memristive
properties of single and coupled quantum memristors. We show that maximizing
the memristivity leads to large values in the degree of entanglement of two
quantum memristors, unveiling the close relationship between quantum
correlations and memory. Our results strengthen the possibility of using
quantum memristors as key components of neuromorphic quantum computing. | [
"Carlos Hernani-Morales",
"Gabriel Alvarado",
"Francisco Albarrán-Arriagada",
"Yolanda Vives-Gilabert",
"Enrique Solano",
"José D. Martín-Guerrero"
] | 2023-09-10 16:07:18 | http://arxiv.org/abs/2309.05062v1 | http://arxiv.org/pdf/2309.05062v1 | 2309.05062v1 |
Implementing Learning Principles with a Personal AI Tutor: A Case Study | Effective learning strategies based on principles like personalization,
retrieval practice, and spaced repetition are often challenging to implement
due to practical constraints. Here we explore the integration of AI tutors to
complement learning programs in accordance with learning sciences. A
semester-long study was conducted at UniDistance Suisse, where an AI tutor app
was provided to psychology students taking a neuroscience course (N=51). After
automatically generating microlearning questions from existing course materials
using GPT-3, the AI tutor developed a dynamic neural-network model of each
student's grasp of key concepts. This enabled the implementation of distributed
retrieval practice, personalized to each student's individual level and
abilities. The results indicate that students who actively engaged with the AI
tutor achieved significantly higher grades. Moreover, active engagement led to
an average improvement of up to 15 percentile points compared to a parallel
course without an AI tutor. Additionally, the modeled grasp of key concepts
strongly correlated with the exam grade, validating the relevance of the
neural-network predictions. This
research demonstrates the ability of personal AI tutors to model human learning
processes and effectively enhance academic performance. By integrating AI
tutors into their programs, educators can offer students personalized learning
experiences grounded in the principles of learning sciences, thereby addressing
the challenges associated with implementing effective learning strategies.
These findings contribute to the growing body of knowledge on the
transformative potential of AI in education. | [
"Ambroise Baillifard",
"Maxime Gabella",
"Pamela Banta Lavenex",
"Corinna S. Martarelli"
] | 2023-09-10 15:35:47 | http://arxiv.org/abs/2309.13060v1 | http://arxiv.org/pdf/2309.13060v1 | 2309.13060v1 |
Boosting Unsupervised Contrastive Learning Using Diffusion-Based Data Augmentation From Scratch | Unsupervised contrastive learning methods have recently seen significant
improvements, particularly through data augmentation strategies that aim to
produce robust and generalizable representations. However, prevailing data
augmentation methods, whether hand designed or based on foundation models, tend
to rely heavily on prior knowledge or external data. This dependence often
compromises their effectiveness and efficiency. Furthermore, the applicability
of most existing data augmentation strategies is limited when transitioning to
other research domains, especially science-related data. This limitation stems
from the paucity of prior knowledge and labeled data available in these
domains. To address these challenges, we introduce DiffAug, a novel and
efficient Diffusion-based data Augmentation technique. DiffAug aims to ensure
that the augmented and original data share a smoothed latent space, which is
achieved through diffusion steps. Uniquely, unlike traditional methods, DiffAug
first mines sufficient prior semantic knowledge about the neighborhood. This
provides a constraint to guide the diffusion steps, eliminating the need for
labels, external data/models, or prior knowledge. Designed as an
architecture-agnostic framework, DiffAug provides consistent improvements.
Specifically, it improves image classification and clustering accuracy by
1.6%~4.5%. When applied to biological data, DiffAug improves performance by up
to 10.1%, with an average improvement of 5.8%. DiffAug shows good performance
in both vision and biological domains. | [
"Zelin Zang",
"Hao Luo",
"Kai Wang",
"Panpan Zhang",
"Fan Wang",
"Stan Z. Li",
"Yang You"
] | 2023-09-10 13:28:46 | http://arxiv.org/abs/2309.07909v1 | http://arxiv.org/pdf/2309.07909v1 | 2309.07909v1 |
SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models | Diffusion Probabilistic Models (DPMs) have achieved considerable success in
generation tasks. As sampling from DPMs is equivalent to solving diffusion SDE
or ODE which is time-consuming, numerous fast sampling methods built upon
improved differential equation solvers are proposed. The majority of such
techniques consider solving the diffusion ODE due to its superior efficiency.
However, stochastic sampling could offer additional advantages in generating
diverse and high-quality data. In this work, we engage in a comprehensive
analysis of stochastic sampling from two aspects: variance-controlled diffusion
SDE and linear multi-step SDE solver. Based on our analysis, we propose
SA-Solver, which is an improved efficient stochastic Adams method for solving
diffusion SDE to generate data with high quality. Our experiments show that
SA-Solver achieves: 1) improved or comparable performance compared with the
existing state-of-the-art sampling methods for few-step sampling; 2) SOTA FID
scores on substantial benchmark datasets under a suitable number of function
evaluations (NFEs). | [
"Shuchen Xue",
"Mingyang Yi",
"Weijian Luo",
"Shifeng Zhang",
"Jiacheng Sun",
"Zhenguo Li",
"Zhi-Ming Ma"
] | 2023-09-10 12:44:54 | http://arxiv.org/abs/2309.05019v1 | http://arxiv.org/pdf/2309.05019v1 | 2309.05019v1 |
Computational Approaches for Predicting Drug-Disease Associations: A Comprehensive Review | In recent decades, traditional drug research and development have been facing
challenges such as high cost, long timelines, and high risks. To address these
issues, many computational approaches have been suggested for predicting the
relationship between drugs and diseases through drug repositioning, aiming to
reduce the cost, development cycle, and risks associated with developing new
drugs. Researchers have explored different computational methods to predict
drug-disease associations, including drug side effects-disease associations,
drug-target associations, and miRNA-disease associations. In this comprehensive
review, we focus on recent advances in predicting drug-disease association
methods for drug repositioning. We first categorize these methods into several
groups, including neural network-based algorithms, matrix-based algorithms,
recommendation algorithms, link-based reasoning algorithms, and text mining and
semantic reasoning. Then, we compare the prediction performance of existing
drug-disease association prediction algorithms. Lastly, we delve into the
present challenges and future prospects concerning drug-disease associations. | [
"Chunyan Ao",
"Zhichao Xiao",
"Lixin Guan",
"Liang Yu"
] | 2023-09-10 11:34:29 | http://arxiv.org/abs/2309.06388v1 | http://arxiv.org/pdf/2309.06388v1 | 2309.06388v1 |
Machine Translation Models Stand Strong in the Face of Adversarial Attacks | Adversarial attacks expose vulnerabilities of deep learning models by
introducing minor perturbations to the input, which lead to substantial
alterations in the output. Our research focuses on the impact of such
adversarial attacks on sequence-to-sequence (seq2seq) models, specifically
machine translation models. We introduce algorithms that incorporate basic text
perturbation heuristics and more advanced strategies, such as the
gradient-based attack, which utilizes a differentiable approximation of the
inherently non-differentiable translation metric. Through our investigation, we
provide evidence that machine translation models display robustness against the
best-performing known adversarial attacks, as the degree of perturbation in the
output is directly proportional to the perturbation in the input. However,
among the weaker attacks, ours outperform the alternatives, providing
the best relative performance. Another strong candidate is an attack based on
mixing of individual characters. | [
"Pavel Burnyshev",
"Elizaveta Kostenok",
"Alexey Zaytsev"
] | 2023-09-10 11:22:59 | http://arxiv.org/abs/2309.06527v1 | http://arxiv.org/pdf/2309.06527v1 | 2309.06527v1 |
Linear Speedup of Incremental Aggregated Gradient Methods on Streaming Data | This paper considers a type of incremental aggregated gradient (IAG) method
for large-scale distributed optimization. The IAG method is well suited for the
parameter server architecture as the latter can easily aggregate potentially
staled gradients contributed by workers. Although the convergence of IAG in the
case of deterministic gradient is well known, there are only a few results for
the case of its stochastic variant based on streaming data. Considering
strongly convex optimization, this paper shows that the streaming IAG method
achieves linear speedup when the workers are updating frequently enough, even
if the data sample distribution across workers are heterogeneous. We show that
the expected squared distance to the optimal solution decays at
$O((1+T)/(nt))$, where $n$ is the number of workers, $t$ is the iteration
number, and $T/n$ is the update frequency of workers. Our analysis involves
careful treatments of the
conditional expectations with staled gradients and a recursive system with both
delayed and noise terms, which are new to the analysis of IAG-type algorithms.
Numerical results are presented to verify our findings. | [
"Xiaolu Wang",
"Cheng Jin",
"Hoi-To Wai",
"Yuantao Gu"
] | 2023-09-10 10:08:52 | http://arxiv.org/abs/2309.04980v1 | http://arxiv.org/pdf/2309.04980v1 | 2309.04980v1 |
AVARS -- Alleviating Unexpected Urban Road Traffic Congestion using UAVs | Reducing unexpected urban traffic congestion caused by en-route events (e.g.,
road closures, car crashes, etc.) often requires fast and accurate reactions to
choose the best-fit traffic signals. Traditional traffic light control systems,
such as SCATS and SCOOT, are not efficient as their traffic data provided by
induction loops has a low update frequency (i.e., longer than 1 minute).
Moreover, the traffic light signal plans used by these systems are selected
from a limited set of candidate plans pre-programmed prior to unexpected
events' occurrence. Recent research demonstrates that camera-based traffic
light systems controlled by deep reinforcement learning (DRL) algorithms are
more effective in reducing traffic congestion, in which the cameras can provide
high-frequency high-resolution traffic data. However, these systems are costly
to deploy in big cities due to the excessive potential upgrades required to
road infrastructure. In this paper, we argue that Unmanned Aerial Vehicles
(UAVs) can play a crucial role in dealing with unexpected traffic congestion
because UAVs with onboard cameras can be economically deployed when and where
unexpected congestion occurs. Then, we propose a system called "AVARS" that
explores the potential of using UAVs to reduce unexpected urban traffic
congestion using DRL-based traffic light signal control. This approach is
validated on a widely used open-source traffic simulator with practical UAV
settings, including its traffic monitoring ranges and battery lifetime. Our
simulation results show that AVARS can effectively recover the unexpected
traffic congestion in Dublin, Ireland, back to its original un-congested level
within the typical battery life duration of a UAV. | [
"Jiaying Guo",
"Michael R. Jones",
"Soufiene Djahel",
"Shen Wang"
] | 2023-09-10 09:40:20 | http://arxiv.org/abs/2309.04976v1 | http://arxiv.org/pdf/2309.04976v1 | 2309.04976v1 |
Continual Robot Learning using Self-Supervised Task Inference | Endowing robots with the human ability to learn a growing set of skills over
the course of a lifetime as opposed to mastering single tasks is an open
problem in robot learning. While multi-task learning approaches have been
proposed to address this problem, they pay little attention to task inference.
In order to continually learn new tasks, the robot first needs to infer the
task at hand without requiring predefined task representations. In this paper,
we propose a self-supervised task inference approach. Our approach learns
action and intention embeddings from self-organization of the observed movement
and effect parts of unlabeled demonstrations and a higher-level behavior
embedding from self-organization of the joint action-intention embeddings. We
construct a behavior-matching self-supervised learning objective to train a
novel Task Inference Network (TINet) to map an unlabeled demonstration to its
nearest behavior embedding, which we use as the task representation. A
multi-task policy is built on top of the TINet and trained with reinforcement
learning to optimize performance over tasks. We evaluate our approach in the
fixed-set and continual multi-task learning settings with a humanoid robot and
compare it to different multi-task learning baselines. The results show that
our approach outperforms the other baselines, with the difference being more
pronounced in the challenging continual learning setting, and can infer tasks
from incomplete demonstrations. Our approach is also shown to generalize to
unseen tasks based on a single demonstration in one-shot task generalization
experiments. | [
"Muhammad Burhan Hafez",
"Stefan Wermter"
] | 2023-09-10 09:32:35 | http://arxiv.org/abs/2309.04974v1 | http://arxiv.org/pdf/2309.04974v1 | 2309.04974v1 |
LMBiS-Net: A Lightweight Multipath Bidirectional Skip Connection based CNN for Retinal Blood Vessel Segmentation | Blinding eye diseases are often correlated with altered retinal morphology,
which can be clinically identified by segmenting retinal structures in fundus
images. However, current methodologies often fall short in accurately
segmenting delicate vessels. Although deep learning has shown promise in
medical image segmentation, its reliance on repeated convolution and pooling
operations can hinder the representation of edge information, ultimately
limiting overall segmentation accuracy. In this paper, we propose a lightweight
pixel-level CNN named LMBiS-Net for the segmentation of retinal vessels with an
exceptionally low number of learnable parameters \textbf{(only 0.172 M)}. The
network uses multipath feature extraction blocks and incorporates bidirectional
skip connections for the information flow between the encoder and decoder.
Additionally, we have optimized the efficiency of the model by carefully
selecting the number of filters to avoid filter overlap. This optimization
significantly reduces training time and enhances computational efficiency. To
assess the robustness and generalizability of LMBiS-Net, we performed
comprehensive evaluations on various aspects of retinal images. Specifically,
the model was subjected to rigorous tests to accurately segment retinal
vessels, which play a vital role in ophthalmological diagnosis and treatment.
By focusing on the retinal blood vessels, we were able to thoroughly analyze
the performance and effectiveness of the LMBiS-Net model. The results of our
tests demonstrate that LMBiS-Net is not only robust and generalizable but also
capable of maintaining high levels of segmentation accuracy. These
characteristics highlight the potential of LMBiS-Net as an efficient tool for
high-speed and accurate segmentation of retinal images in various clinical
applications. | [
"Mufassir M. Abbasi",
"Shahzaib Iqbal",
"Asim Naveed",
"Tariq M. Khan",
"Syed S. Naqvi",
"Wajeeha Khalid"
] | 2023-09-10 09:03:53 | http://arxiv.org/abs/2309.04968v1 | http://arxiv.org/pdf/2309.04968v1 | 2309.04968v1 |
A multiple k-means cluster ensemble framework for clustering citation trajectories | Citation maturity time varies for different articles. However, the impact of
all articles is measured in a fixed window. Clustering their citation
trajectories helps understand the knowledge diffusion process and reveals that
not all articles gain immediate success after publication. Moreover, clustering
trajectories is necessary for paper impact recommendation algorithms. It is a
challenging problem because citation time series exhibit significant
variability due to non-linear and non-stationary characteristics. Prior works
propose a set of arbitrary thresholds and a fixed rule-based approach. All
methods are primarily parameter dependent. Consequently, it leads to
inconsistencies while defining similar trajectories and ambiguities regarding
their specific number. Most studies only capture extreme trajectories. Thus, a
generalised clustering framework is required. This paper proposes a
feature-based multiple k-means cluster ensemble framework. 195,783 and 41,732
well-cited articles from the Microsoft Academic Graph data are considered for
clustering short term (10 year) and long term (30 year) trajectories,
respectively. It has linear run time. Four distinct trajectories are obtained:
Early Rise Rapid Decline (2.2%), Early Rise Slow Decline (45%), Delayed Rise No
Decline (53%), and Delayed Rise Slow Decline (0.8%). Individual trajectory
differences for two different spans are studied. Most papers exhibit Early Rise
Slow Decline and Delayed Rise No Decline patterns. The growth and decay times,
cumulative citation distribution, and peak characteristics of individual
trajectories are redefined empirically. A detailed comparative study reveals
our proposed methodology can detect all distinct trajectory classes. | [
"Joyita Chakraborty",
"Dinesh K. Pradhan",
"Subrata Nandi"
] | 2023-09-10 07:10:31 | http://arxiv.org/abs/2309.04949v1 | http://arxiv.org/pdf/2309.04949v1 | 2309.04949v1 |
Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle Counting Power | The ability of graph neural networks (GNNs) to count certain graph
substructures, especially cycles, is important for the success of GNNs on a
wide range of tasks. It has been recently used as a popular metric for
evaluating the expressive power of GNNs. Many of the proposed GNN models with
provable cycle counting power are based on subgraph GNNs, i.e., extracting a
bag of subgraphs from the input graph, generating representations for each
subgraph, and using them to augment the representation of the input graph.
However, those methods require heavy preprocessing, and suffer from high time
and memory costs. In this paper, we overcome the aforementioned limitations of
subgraph GNNs by proposing a novel class of GNNs -- $d$-Distance-Restricted
FWL(2) GNNs, or $d$-DRFWL(2) GNNs. $d$-DRFWL(2) GNNs use node pairs whose
mutual distances are at most $d$ as the units for message passing to balance
the expressive power and complexity. By performing message passing among
distance-restricted node pairs in the original graph, $d$-DRFWL(2) GNNs avoid
the expensive subgraph extraction operations in subgraph GNNs, making both the
time and space complexity lower. We theoretically show that the discriminative
power of $d$-DRFWL(2) GNNs strictly increases as $d$ increases. More
importantly, $d$-DRFWL(2) GNNs have provably strong cycle counting power even
with $d=2$: they can count all 3, 4, 5, 6-cycles. Since 6-cycles (e.g., benzene
rings) are ubiquitous in organic molecules, being able to detect and count them
is crucial for achieving robust and generalizable performance on molecular
tasks. Experiments on both synthetic datasets and molecular datasets verify our
theory. To the best of our knowledge, our model is the most efficient GNN model
to date (both theoretically and empirically) that can count up to 6-cycles. | [
"Junru Zhou",
"Jiarui Feng",
"Xiyuan Wang",
"Muhan Zhang"
] | 2023-09-10 06:13:29 | http://arxiv.org/abs/2309.04941v1 | http://arxiv.org/pdf/2309.04941v1 | 2309.04941v1 |
Knowledge-based Refinement of Scientific Publication Knowledge Graphs | We consider the problem of identifying authorship by posing it as a knowledge
graph construction and refinement. To this effect, we model this problem as
learning a probabilistic logic model in the presence of human guidance
(knowledge-based learning). Specifically, we learn relational regression trees
using functional gradient boosting that outputs explainable rules. To
incorporate human knowledge, advice in the form of first-order clauses is
injected to refine the trees. We demonstrate the usefulness of human knowledge
both quantitatively and qualitatively in seven authorship domains. | [
"Siwen Yan",
"Phillip Odom",
"Sriraam Natarajan"
] | 2023-09-10 02:06:49 | http://arxiv.org/abs/2309.05681v1 | http://arxiv.org/pdf/2309.05681v1 | 2309.05681v1 |
A Review of Machine Learning-based Security in Cloud Computing | Cloud Computing (CC) is revolutionizing the way IT resources are delivered to
users, allowing them to access and manage their systems with increased
cost-effectiveness and simplified infrastructure. However, with the growth of
CC comes a host of security risks, including threats to availability,
integrity, and confidentiality. To address these challenges, Machine Learning
(ML) is increasingly being used by Cloud Service Providers (CSPs) to reduce the
need for human intervention in identifying and resolving security issues. With
the ability to analyze vast amounts of data, and make high-accuracy
predictions, ML can transform the way CSPs approach security. In this paper, we
will explore some of the most recent research in the field of ML-based security
in Cloud Computing. We will examine the features and effectiveness of a range
of ML algorithms, highlighting their unique strengths and potential
limitations. Our goal is to provide a comprehensive overview of the current
state of ML in cloud security and to shed light on the exciting possibilities
that this emerging field has to offer. | [
"Aptin Babaei",
"Parham M. Kebria",
"Mohsen Moradi Dalvand",
"Saeid Nahavandi"
] | 2023-09-10 01:52:23 | http://arxiv.org/abs/2309.04911v1 | http://arxiv.org/pdf/2309.04911v1 | 2309.04911v1 |
Mitigating Denial of Service Attacks in Fog-Based Wireless Sensor Networks Using Machine Learning Techniques | Wireless sensor networks are considered to be among the most significant and
innovative technologies in the 21st century due to their wide range of
industrial applications. Sensor nodes in these networks are susceptible to a
variety of assaults due to their special qualities and method of deployment. In
WSNs, denial of service attacks are among the most common. It is
difficult to design a detection and prevention system that would effectively
reduce the impact of these attacks on WSNs. In order to identify assaults on
WSNs, this study suggests using two machine learning models: decision trees and
XGBoost. The WSNs dataset was the subject of extensive tests to identify denial
of service attacks. The experimental findings demonstrate that the XGBoost
model, when applied to the entire dataset, has a higher true positive rate
(98.3%) than the Decision tree approach (97.3%) and a lower false positive rate
(1.7%) than the Decision tree technique (2.7%). Similarly, with the selected
dataset attacks, the XGBoost approach has a higher true positive rate (99.01%)
than the Decision tree technique (97.50%) and a lower false positive rate
(0.99%) than the Decision tree technique (2.50%). | [
"Ademola Abidoye",
"Ibidun Obagbuwa",
"Nureni Azeez"
] | 2023-09-10 00:29:25 | http://arxiv.org/abs/2310.05952v1 | http://arxiv.org/pdf/2310.05952v1 | 2310.05952v1 |
Symplectic Structure-Aware Hamiltonian (Graph) Embeddings | In traditional Graph Neural Networks (GNNs), the assumption of a fixed
embedding manifold often limits their adaptability to diverse graph geometries.
Recently, Hamiltonian system-inspired GNNs are proposed to address the dynamic
nature of such embeddings by incorporating physical laws into node feature
updates. In this work, we present SAH-GNN, a novel approach that generalizes
Hamiltonian dynamics for more flexible node feature updates. Unlike existing
Hamiltonian-inspired GNNs, SAH-GNN employs Riemannian optimization on the
symplectic Stiefel manifold to adaptively learn the underlying symplectic
structure during training, circumventing the limitations of existing
Hamiltonian GNNs that rely on a pre-defined form of standard symplectic
structure. This innovation allows SAH-GNN to automatically adapt to various
graph datasets without extensive hyperparameter tuning. Moreover, it conserves
energy during training such that the implicit Hamiltonian system is physically
meaningful. To this end, we empirically validate SAH-GNN's superior performance
and adaptability in node classification tasks across multiple types of graph
datasets. | [
"Jiaxu Liu",
"Xinping Yi",
"Tianle Zhang",
"Xiaowei Huang"
] | 2023-09-09 22:27:38 | http://arxiv.org/abs/2309.04885v1 | http://arxiv.org/pdf/2309.04885v1 | 2309.04885v1 |
A Gentle Introduction to Gradient-Based Optimization and Variational Inequalities for Machine Learning | The rapid progress in machine learning in recent years has been based on a
highly productive connection to gradient-based optimization. Further progress
hinges in part on a shift in focus from pattern recognition to decision-making
and multi-agent problems. In these broader settings, new mathematical
challenges emerge that involve equilibria and game theory instead of optima.
Gradient-based methods remain essential -- given the high dimensionality and
large scale of machine-learning problems -- but simple gradient descent is no
longer the point of departure for algorithm design. We provide a gentle
introduction to a broader framework for gradient-based algorithms in machine
learning, beginning with saddle points and monotone games, and proceeding to
general variational inequalities. While we provide convergence proofs for
several of the algorithms that we present, our main focus is that of providing
motivation and intuition. | [
"Neha S. Wadia",
"Yatin Dandi",
"Michael I. Jordan"
] | 2023-09-09 21:36:51 | http://arxiv.org/abs/2309.04877v1 | http://arxiv.org/pdf/2309.04877v1 | 2309.04877v1 |
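The abstract's point that simple gradient descent is no longer the right point of departure can be seen on the smallest saddle-point problem, min_x max_y xy: simultaneous gradient descent-ascent spirals away from the equilibrium, while the extragradient method, a classic algorithm for monotone variational inequalities, converges. A toy sketch (our own illustration, not taken from the paper):

```python
# Toy bilinear saddle point: min_x max_y f(x, y) = x * y.
# Simultaneous gradient descent-ascent (GDA) diverges here, while the
# extragradient method converges to the equilibrium (0, 0).

def gda_step(x, y, lr):
    # simultaneous gradient descent on x, ascent on y
    return x - lr * y, y + lr * x

def extragradient_step(x, y, lr):
    # 1) look-ahead (extrapolation) step
    xh, yh = x - lr * y, y + lr * x
    # 2) update from the original point using the look-ahead gradients
    return x - lr * yh, y + lr * xh

x1 = y1 = x2 = y2 = 1.0
for _ in range(200):
    x1, y1 = gda_step(x1, y1, 0.1)
    x2, y2 = extragradient_step(x2, y2, 0.1)

dist_gda = (x1**2 + y1**2) ** 0.5
dist_eg = (x2**2 + y2**2) ** 0.5
print(dist_gda, dist_eg)  # GDA spirals outward; extragradient contracts
```

The only change, evaluating the gradient at a look-ahead point, is what turns a divergent scheme into a convergent one for this monotone game.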
Approximating ReLU on a Reduced Ring for Efficient MPC-based Private Inference | Secure multi-party computation (MPC) allows users to offload machine learning
inference on untrusted servers without having to share their privacy-sensitive
data. Despite their strong security properties, MPC-based private inference has
not been widely adopted in the real world due to their high communication
overhead. When evaluating ReLU layers, MPC protocols incur a significant amount
of communication between the parties, making the end-to-end execution time
multiple orders of magnitude slower than its non-private counterpart.
This paper presents HummingBird, an MPC framework that reduces the ReLU
communication overhead significantly by using only a subset of the bits to
evaluate ReLU on a smaller ring. Based on theoretical analyses, HummingBird
identifies bits in the secret share that are not crucial for accuracy and
excludes them during ReLU evaluation to reduce communication. With its
efficient search engine, HummingBird discards 87--91% of the bits during ReLU
and still maintains high accuracy. On a real MPC setup involving multiple
servers, HummingBird achieves on average 2.03--2.67x end-to-end speedup without
introducing any errors, and up to 8.64x average speedup when some amount of
accuracy degradation can be tolerated, due to its up to 8.76x communication
reduction. | [
"Kiwan Maeng",
"G. Edward Suh"
] | 2023-09-09 20:49:12 | http://arxiv.org/abs/2309.04875v1 | http://arxiv.org/pdf/2309.04875v1 | 2309.04875v1 |
Approximation Results for Gradient Descent trained Neural Networks | The paper contains approximation guarantees for neural networks that are
trained with gradient flow, with error measured in the continuous
$L_2(\mathbb{S}^{d-1})$-norm on the $d$-dimensional unit sphere and targets
that are Sobolev smooth. The networks are fully connected of constant depth and
increasing width. Although all layers are trained, the gradient flow
convergence is based on a neural tangent kernel (NTK) argument for the
non-convex second-to-last layer. Unlike standard NTK analysis, the continuous
error norm implies an under-parametrized regime, made possible by the natural
smoothness assumption required for approximation. The typical
over-parametrization re-enters the results in form of a loss in approximation
rate relative to established approximation methods for Sobolev smooth
functions. | [
"G. Welper"
] | 2023-09-09 18:47:55 | http://arxiv.org/abs/2309.04860v1 | http://arxiv.org/pdf/2309.04860v1 | 2309.04860v1 |
Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System | Neural language models are increasingly deployed into APIs and websites that
allow a user to pass in a prompt and receive generated text. Many of these
systems do not reveal generation parameters. In this paper, we present methods
to reverse-engineer the decoding method used to generate text (i.e., top-$k$ or
nucleus sampling). Our ability to discover which decoding strategy was used has
implications for detecting generated text. Additionally, the process of
discovering the decoding strategy can reveal biases caused by selecting
decoding settings which severely truncate a model's predicted distributions. We
perform our attack on several families of open-source language models, as well
as on production systems (e.g., ChatGPT). | [
"Daphne Ippolito",
"Nicholas Carlini",
"Katherine Lee",
"Milad Nasr",
"Yun William Yu"
] | 2023-09-09 18:19:47 | http://arxiv.org/abs/2309.04858v1 | http://arxiv.org/pdf/2309.04858v1 | 2309.04858v1 |
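For reference, the two decoding strategies the attack distinguishes truncate the next-token distribution differently: top-k keeps a fixed number of tokens, while nucleus (top-p) sampling keeps the smallest set whose probability mass reaches p. A minimal sketch of the two truncation rules (illustrative only, not the paper's attack code):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def top_k_support(logits, k):
    """Indices of tokens that can be sampled under top-k truncation."""
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    return set(order[:k])

def nucleus_support(logits, p):
    """Smallest set of highest-probability tokens whose mass reaches p."""
    probs = softmax(logits)
    order = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)
    kept, mass = set(), 0.0
    for i in order:
        kept.add(i)
        mass += probs[i]
        if mass >= p:
            break
    return kept

logits = [3.0, 2.0, 1.0, -1.0, -5.0]
print(top_k_support(logits, 2))      # always exactly k tokens
print(nucleus_support(logits, 0.9))  # size depends on how peaked the distribution is
```

The difference in which tokens survive truncation, fixed count versus adaptive mass, is exactly the kind of signal a blackbox observer can exploit.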
AmbientFlow: Invertible generative models from incomplete, noisy measurements | Generative models have gained popularity for their potential applications in
imaging science, such as image reconstruction, posterior sampling and data
sharing. Flow-based generative models are particularly attractive due to their
ability to tractably provide exact density estimates along with fast,
inexpensive and diverse samples. Training such models, however, requires a
large, high quality dataset of objects. In applications such as computed
imaging, it is often difficult to acquire such data due to requirements such as
long acquisition time or high radiation dose, while acquiring noisy or
partially observed measurements of these objects is more feasible. In this
work, we propose AmbientFlow, a framework for learning flow-based generative
models directly from noisy and incomplete data, built on variational Bayesian
methods. Extensive numerical studies demonstrate the
effectiveness of AmbientFlow in correctly learning the object distribution. The
utility of AmbientFlow in a downstream inference task of image reconstruction
is demonstrated. | [
"Varun A. Kelkar",
"Rucha Deshpande",
"Arindam Banerjee",
"Mark A. Anastasio"
] | 2023-09-09 18:08:56 | http://arxiv.org/abs/2309.04856v1 | http://arxiv.org/pdf/2309.04856v1 | 2309.04856v1 |
Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations | We propose EmoDistill, a novel speech emotion recognition (SER) framework
that leverages cross-modal knowledge distillation during training to learn
strong linguistic and prosodic representations of emotion from speech. During
inference, our method only uses a stream of speech signals to perform unimodal
SER, thus reducing computational overhead and avoiding run-time transcription and
prosodic feature extraction errors. During training, our method distills
information at both embedding and logit levels from a pair of pre-trained
Prosodic and Linguistic teachers that are fine-tuned for SER. Experiments on
the IEMOCAP benchmark demonstrate that our method outperforms other unimodal
and multimodal techniques by a considerable margin, and achieves
state-of-the-art performance of 77.49% unweighted accuracy and 78.91% weighted
accuracy. Detailed ablation studies demonstrate the impact of each component of
our method. | [
"Debaditya Shome",
"Ali Etemad"
] | 2023-09-09 17:30:35 | http://arxiv.org/abs/2309.04849v1 | http://arxiv.org/pdf/2309.04849v1 | 2309.04849v1 |
Verifiable Reinforcement Learning Systems via Compositionality | We propose a framework for verifiable and compositional reinforcement
learning (RL) in which a collection of RL subsystems, each of which learns to
accomplish a separate subtask, are composed to achieve an overall task. The
framework consists of a high-level model, represented as a parametric Markov
decision process, which is used to plan and analyze compositions of subsystems,
and of the collection of low-level subsystems themselves. The subsystems are
implemented as deep RL agents operating under partial observability. By
defining interfaces between the subsystems, the framework enables automatic
decompositions of task specifications, e.g., reach a target set of states with
a probability of at least 0.95, into individual subtask specifications, i.e.
achieve the subsystem's exit conditions with at least some minimum probability,
given that its entry conditions are met. This in turn allows for the
independent training and testing of the subsystems. We present theoretical
results guaranteeing that if each subsystem learns a policy satisfying its
subtask specification, then their composition is guaranteed to satisfy the
overall task specification. Conversely, if the subtask specifications cannot
all be satisfied by the learned policies, we present a method, formulated as
the problem of finding an optimal set of parameters in the high-level model, to
automatically update the subtask specifications to account for the observed
shortcomings. The result is an iterative procedure for defining subtask
specifications, and for training the subsystems to meet them. Experimental
results demonstrate the presented framework's novel capabilities in
environments with both full and partial observability, discrete and continuous
state and action spaces, as well as deterministic and stochastic dynamics. | [
"Cyrus Neary",
"Aryaman Singh Samyal",
"Christos Verginis",
"Murat Cubuktepe",
"Ufuk Topcu"
] | 2023-09-09 17:11:44 | http://arxiv.org/abs/2309.06420v1 | http://arxiv.org/pdf/2309.06420v1 | 2309.06420v1 |
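One way to see how an overall specification decomposes into subtask specifications is the special case of a chain of subsystems: if each independently meets its exit condition with probability p_i given its entry condition, the chain reaches the goal with probability equal to the product of the p_i, so per-subtask thresholds can be chosen to meet the overall target. A toy sketch under this independence assumption (the paper handles general parametric MDPs):

```python
def chain_success(subtask_probs):
    """Probability a chain of subsystems reaches the goal, assuming each
    subsystem meets its exit condition independently given its entry."""
    p = 1.0
    for q in subtask_probs:
        p *= q
    return p

def uniform_subtask_spec(overall_target, n_subsystems):
    """Equal per-subsystem threshold sufficient for the overall target."""
    return overall_target ** (1.0 / n_subsystems)

spec = uniform_subtask_spec(0.95, 3)
print(spec)                       # each subtask needs roughly 0.983
print(chain_success([spec] * 3))  # composes back to the 0.95 target
```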
HAct: Out-of-Distribution Detection with Neural Net Activation Histograms | We propose a simple, efficient, and accurate method for detecting
out-of-distribution (OOD) data for trained neural networks. We propose a novel
descriptor, HAct - activation histograms, for OOD detection, that is,
probability distributions (approximated by histograms) of output values of
neural network layers in response to incoming data. We formulate an OOD
detector based on HAct descriptors. We demonstrate that HAct is significantly
more accurate than state-of-the-art in OOD detection on multiple image
classification benchmarks. For instance, our approach achieves a true positive
rate (TPR) of 95% with only 0.03% false-positives using Resnet-50 on standard
OOD benchmarks, outperforming previous state-of-the-art by 20.67% in the false
positive rate (at the same TPR of 95%). The computational efficiency and the
ease of implementation make HAct suitable for online monitoring of deployed
neural networks in practice at scale.
"Sudeepta Mondal",
"Ganesh Sundaramoorthi"
] | 2023-09-09 16:22:18 | http://arxiv.org/abs/2309.04837v2 | http://arxiv.org/pdf/2309.04837v2 | 2309.04837v2 |
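The HAct idea, comparing histograms of layer activations against an in-distribution reference, can be sketched with synthetic activations; the paper's exact descriptor, choice of layers, and thresholding may differ:

```python
import random

def activation_histogram(values, bins=10, lo=0.0, hi=1.0):
    """Normalized histogram (empirical distribution) of activation values."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[max(idx, 0)] += 1
    total = len(values)
    return [c / total for c in counts]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

random.seed(0)
# Reference histogram from "in-distribution" activations (mass near 0)
ref = activation_histogram([random.betavariate(2, 5) for _ in range(5000)])
# In-distribution test batch vs. a shifted "OOD" batch (mass near 1)
ind = activation_histogram([random.betavariate(2, 5) for _ in range(500)])
ood = activation_histogram([random.betavariate(5, 2) for _ in range(500)])

d_ind, d_ood = l1_distance(ref, ind), l1_distance(ref, ood)
print(d_ind, d_ood)  # OOD activations sit much farther from the reference
```

Thresholding such a histogram distance yields a simple OOD flag; the synthetic Beta-distributed "activations" here are our own stand-in for real layer outputs.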
Global Convergence of Receding-Horizon Policy Search in Learning Estimator Designs | We introduce the receding-horizon policy gradient (RHPG) algorithm, the first
PG algorithm with provable global convergence in learning the optimal linear
estimator designs, i.e., the Kalman filter (KF). Notably, the RHPG algorithm
does not require any prior knowledge of the system for initialization and does
not require the target system to be open-loop stable. The key of RHPG is that
we integrate vanilla PG (or any other policy search directions) into a dynamic
programming outer loop, which iteratively decomposes the infinite-horizon KF
problem that is constrained and non-convex in the policy parameter into a
sequence of static estimation problems that are unconstrained and
strongly-convex, thus enabling global convergence. We further provide
fine-grained analyses of the optimization landscape under RHPG and detail the
convergence and sample complexity guarantees of the algorithm. This work serves
as an initial attempt to develop reinforcement learning algorithms specifically
for control applications with performance guarantees by utilizing classic
control theory in both algorithmic design and theoretical analyses. Lastly, we
validate our theories by deploying the RHPG algorithm to learn the Kalman
filter design of a large-scale convection-diffusion model. We open-source the
code repository at https://github.com/xiangyuan-zhang/LearningKF. | [
"Xiangyuan Zhang",
"Saviz Mowlavi",
"Mouhacine Benosman",
"Tamer Başar"
] | 2023-09-09 16:03:49 | http://arxiv.org/abs/2309.04831v1 | http://arxiv.org/pdf/2309.04831v1 | 2309.04831v1 |
Correcting sampling biases via importance reweighting for spatial modeling | In machine learning models, the estimation of errors is often complex due to
distribution bias, particularly in spatial data such as those found in
environmental studies. We introduce an approach based on the ideas of
importance sampling to obtain an unbiased estimate of the target error. By
taking into account the difference between the desired error and the available
data, our method reweights errors at each sample point and neutralizes the
shift. The importance sampling technique and kernel density estimation were used
for reweighting. We validate the effectiveness of our approach using artificial
data that resemble real-world spatial datasets. Our findings demonstrate
advantages of the proposed approach for the estimation of the target error,
offering a solution to the distribution shift problem. The overall error of
predictions dropped from 7% to just 2%, and it decreases further for larger samples.
"Boris Prokhorov",
"Diana Koldasbayeva",
"Alexey Zaytsev"
] | 2023-09-09 15:36:28 | http://arxiv.org/abs/2309.04824v2 | http://arxiv.org/pdf/2309.04824v2 | 2309.04824v2 |
ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class Class-agnostic Counting | Class-agnostic counting methods enumerate objects of an arbitrary class,
providing tremendous utility in many fields. Prior works have limited
usefulness as they require either a set of examples of the type to be counted
or that the image contains only a single type of object. A significant factor
in these shortcomings is the lack of a dataset to properly address counting in
settings with more than one kind of object present. To address these issues, we
propose the first Multi-class, Class-Agnostic Counting dataset (MCAC) and A
Blind Counter (ABC123), a method that can count multiple types of objects
simultaneously without using examples of the target type during training or inference.
ABC123 introduces a new paradigm where instead of requiring exemplars to guide
the enumeration, examples are found after the counting stage to help a user
understand the generated outputs. We show that ABC123 outperforms contemporary
methods on MCAC without requiring human-in-the-loop annotations. We
also show that this performance transfers to FSC-147, the standard
class-agnostic counting dataset. | [
"Michael A. Hobley",
"Victor A. Prisacariu"
] | 2023-09-09 15:18:46 | http://arxiv.org/abs/2309.04820v1 | http://arxiv.org/pdf/2309.04820v1 | 2309.04820v1 |
Detecting Violations of Differential Privacy for Quantum Algorithms | Quantum algorithms for solving a wide range of practical problems have been
proposed in the last ten years, such as data search and analysis, product
recommendation, and credit scoring. The concern about privacy and other ethical
issues in quantum computing naturally rises up. In this paper, we define a
formal framework for detecting violations of differential privacy for quantum
algorithms. A detection algorithm is developed to verify whether a (noisy)
quantum algorithm is differentially private and automatically generate debugging
information when the violation of differential privacy is reported. The
information consists of a pair of quantum states that violate the privacy, to
illustrate the cause of the violation. Our algorithm is equipped with Tensor
Networks, a highly efficient data structure, and executed both on TensorFlow
Quantum and TorchQuantum which are the quantum extensions of famous machine
learning platforms -- TensorFlow and PyTorch, respectively. The effectiveness
and efficiency of our algorithm are confirmed by the experimental results of
almost all types of quantum algorithms already implemented on realistic quantum
computers, including quantum supremacy algorithms (beyond the capability of
classical algorithms), quantum machine learning models, quantum approximate
optimization algorithms, and variational quantum eigensolvers with up to 21
quantum bits. | [
"Ji Guan",
"Wang Fang",
"Mingyu Huang",
"Mingsheng Ying"
] | 2023-09-09 15:07:31 | http://arxiv.org/abs/2309.04819v1 | http://arxiv.org/pdf/2309.04819v1 | 2309.04819v1 |
Good-looking but Lacking Faithfulness: Understanding Local Explanation Methods through Trend-based Testing | While enjoying the great achievements brought by deep learning (DL), people
are also worried about the decisions made by DL models, since the high degree of
non-linearity of DL models makes these decisions extremely difficult to
understand. Consequently, attacks such as adversarial attacks are easy to carry
out, but difficult to detect and explain, which has led to a boom in the
research on local explanation methods for explaining model decisions. In this
paper, we evaluate the faithfulness of explanation methods and find that
traditional tests of faithfulness encounter the random dominance problem, i.e.,
the random selection performs the best, especially for complex data. To further
solve this problem, we propose three trend-based faithfulness tests and
empirically demonstrate that the new trend tests can better assess faithfulness
than traditional tests on image, natural language and security tasks. We
implement the assessment system and evaluate ten popular explanation methods.
Benefiting from the trend tests, we successfully assess the explanation methods
on complex data for the first time, bringing unprecedented discoveries and
inspiring future research. Downstream tasks also greatly benefit from the
tests. For example, model debugging equipped with faithful explanation methods
performs much better for detecting and correcting accuracy and security
problems. | [
"Jinwen He",
"Kai Chen",
"Guozhu Meng",
"Jiangshan Zhang",
"Congyi Li"
] | 2023-09-09 14:44:39 | http://arxiv.org/abs/2309.05679v1 | http://arxiv.org/pdf/2309.05679v1 | 2309.05679v1 |
Neural Latent Geometry Search: Product Manifold Inference via Gromov-Hausdorff-Informed Bayesian Optimization | Recent research indicates that the performance of machine learning models can
be improved by aligning the geometry of the latent space with the underlying
data structure. Rather than relying solely on Euclidean space, researchers have
proposed using hyperbolic and spherical spaces with constant curvature, or
combinations thereof, to better model the latent space and enhance model
performance. However, little attention has been given to the problem of
automatically identifying the optimal latent geometry for the downstream task.
We mathematically define this novel formulation and coin it as neural latent
geometry search (NLGS). More specifically, we introduce a principled method
that searches for a latent geometry composed of a product of constant curvature
model spaces with minimal query evaluations. To accomplish this, we propose a
novel notion of distance between candidate latent geometries based on the
Gromov-Hausdorff distance from metric geometry. In order to compute the
Gromov-Hausdorff distance, we introduce a mapping function that enables the
comparison of different manifolds by embedding them in a common
high-dimensional ambient space. Finally, we design a graph search space based
on the calculated distances between candidate manifolds and use Bayesian
optimization to search for the optimal latent geometry in a query-efficient
manner. This is a general method which can be applied to search for the optimal
latent geometry for a variety of models and downstream tasks. Extensive
experiments on synthetic and real-world datasets confirm the efficacy of our
method in identifying the optimal latent geometry for multiple machine learning
problems. | [
"Haitz Saez de Ocariz Borde",
"Alvaro Arroyo",
"Ismael Morales",
"Ingmar Posner",
"Xiaowen Dong"
] | 2023-09-09 14:29:22 | http://arxiv.org/abs/2309.04810v2 | http://arxiv.org/pdf/2309.04810v2 | 2309.04810v2 |
Finding Influencers in Complex Networks: An Effective Deep Reinforcement Learning Approach | Maximizing influences in complex networks is a practically important but
computationally challenging task for social network analysis due to its NP-hard
nature. Most current approximation or heuristic methods either require
tremendous human design efforts or achieve unsatisfying balances between
effectiveness and efficiency. Recent machine learning attempts only focus on
speed but lack performance enhancement. In this paper, different from previous
attempts, we propose an effective deep reinforcement learning model that
achieves superior performances over traditional best influence maximization
algorithms. Specifically, we design an end-to-end learning framework that
combines graph neural network as the encoder and reinforcement learning as the
decoder, named DREIM. Through extensive training on small synthetic graphs,
DREIM outperforms state-of-the-art baseline methods on very large synthetic
and real-world networks in solution quality, and we also empirically show its
linear scalability with regard to the network size, which demonstrates its
superiority in solving this problem. | [
"Changan Liu",
"Changjun Fan",
"Zhongzhi Zhang"
] | 2023-09-09 14:19:00 | http://arxiv.org/abs/2309.07153v1 | http://arxiv.org/pdf/2309.07153v1 | 2309.07153v1 |
A Full-fledged Commit Message Quality Checker Based on Machine Learning | Commit messages (CMs) are an essential part of version control. By providing
important context in regard to what has changed and why, they strongly support
software maintenance and evolution. But writing good CMs is difficult and often
neglected by developers. So far, there is no tool suitable for practice that
automatically assesses how well a CM is written, including its meaning and
context. Since this task is challenging, we ask the research question: how well
can the CM quality, including semantics and context, be measured with machine
learning methods? By considering all rules from the most popular CM quality
guideline, creating datasets for those rules, and training and evaluating
state-of-the-art machine learning models to check those rules, we can answer
the research question with: sufficiently well for practice, with the lowest
F$_1$ score of 82.9%, for the most challenging task. We develop a full-fledged
open-source framework that checks all these CM quality rules. It is useful for
research, e.g., automatic CM generation, but most importantly for software
practitioners to raise the quality of CMs and thus the maintainability and
evolution speed of their software. | [
"David Faragó",
"Michael Färber",
"Christian Petrov"
] | 2023-09-09 13:43:43 | http://arxiv.org/abs/2309.04797v1 | http://arxiv.org/pdf/2309.04797v1 | 2309.04797v1 |
Stochastic Gradient Descent outperforms Gradient Descent in recovering a high-dimensional signal in a glassy energy landscape | Stochastic Gradient Descent (SGD) is an out-of-equilibrium algorithm used
extensively to train artificial neural networks. However, very little is known
about the extent to which SGD is crucial to the success of this technology and,
in particular, how effective it is in optimizing high-dimensional non-convex
cost functions as compared to other optimization algorithms such as Gradient
Descent (GD). In this work we leverage dynamical mean field theory to analyze
its performance exactly in the high-dimensional limit. We consider the problem
of recovering a hidden high-dimensional non-linearly encrypted signal, a
prototypical hard high-dimensional non-convex optimization problem. We compare
the performance of SGD and GD, and we show that SGD largely outperforms GD. In
particular, a power-law fit of the relaxation time of these algorithms shows
that the recovery threshold for SGD with small batch size is smaller than the
corresponding threshold for GD.
"Persia Jana Kamali",
"Pierfrancesco Urbani"
] | 2023-09-09 13:29:17 | http://arxiv.org/abs/2309.04788v1 | http://arxiv.org/pdf/2309.04788v1 | 2309.04788v1 |
RRCNN$^{+}$: An Enhanced Residual Recursive Convolutional Neural Network for Non-stationary Signal Decomposition | Time-frequency analysis is an important and challenging task in many
applications. Fourier and wavelet analysis are two classic methods that have
achieved remarkable success in many fields. They also exhibit limitations when
applied to nonlinear and non-stationary signals. To address this challenge, a
series of nonlinear and adaptive methods, pioneered by the empirical mode
decomposition method, have been proposed. Their aim is to decompose a
non-stationary signal into quasi-stationary components which reveal better
features in the time-frequency analysis. Recently, inspired by deep learning,
we proposed a novel method called residual recursive convolutional neural
network (RRCNN). Not only can RRCNN achieve more stable decomposition than
existing methods while batch processing large-scale signals with low
computational cost, but also deep learning provides a unique perspective for
non-stationary signal decomposition. In this study, we aim to further improve
RRCNN with the help of several nimble techniques from deep learning and
optimization to ameliorate the method and overcome some of the limitations of
this technique. | [
"Feng Zhou",
"Antonio Cicone",
"Haomin Zhou"
] | 2023-09-09 13:00:30 | http://arxiv.org/abs/2309.04782v1 | http://arxiv.org/pdf/2309.04782v1 | 2309.04782v1 |
Towards Robust Model Watermark via Reducing Parametric Vulnerability | Deep neural networks are valuable assets considering their commercial
benefits and huge demands for costly annotation and computation resources. To
protect the copyright of DNNs, backdoor-based ownership verification becomes
popular recently, in which the model owner can watermark the model by embedding
a specific backdoor behavior before releasing it. The defenders (usually the
model owners) can identify whether a suspicious third-party model is "stolen"
from them based on the presence of the behavior. Unfortunately, these
watermarks are proven to be vulnerable to removal attacks even like
fine-tuning. To further explore this vulnerability, we investigate the
parameter space and find there exist many watermark-removed models in the
vicinity of the watermarked one, which may be easily used by removal attacks.
Inspired by this finding, we propose a mini-max formulation to find these
watermark-removed models and recover their watermark behavior. Extensive
experiments demonstrate that our method improves the robustness of the model
watermarking against parametric changes and numerous watermark-removal attacks.
The codes for reproducing our main experiments are available at
https://github.com/GuanhaoGan/robust-model-watermarking. | [
"Guanhao Gan",
"Yiming Li",
"Dongxian Wu",
"Shu-Tao Xia"
] | 2023-09-09 12:46:08 | http://arxiv.org/abs/2309.04777v1 | http://arxiv.org/pdf/2309.04777v1 | 2309.04777v1 |
AudRandAug: Random Image Augmentations for Audio Classification | Data augmentation has proven to be effective in training neural networks.
Recently, a method called RandAug was proposed, which randomly selects data
augmentation techniques from a predefined search space. RandAug has
demonstrated significant performance improvements for image-related tasks while
imposing minimal computational overhead. However, no prior research has
explored the application of RandAug specifically to audio data augmentation,
where audio is converted into an image-like pattern. To address this gap, we
introduce AudRandAug, an adaptation of RandAug for audio data. AudRandAug
selects data augmentation policies from a dedicated audio search space. To
evaluate the effectiveness of AudRandAug, we conducted experiments using
various models and datasets. Our findings indicate that AudRandAug outperforms
other existing data augmentation methods in terms of accuracy. | [
"Teerath Kumar",
"Muhammad Turab",
"Alessandra Mileo",
"Malika Bendechache",
"Takfarinas Saber"
] | 2023-09-09 11:25:03 | http://arxiv.org/abs/2309.04762v1 | http://arxiv.org/pdf/2309.04762v1 | 2309.04762v1 |
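The RandAug mechanic that AudRandAug adapts is simple: at each call, draw N operations uniformly from a search space and apply them at a shared magnitude M. A hedged sketch with placeholder waveform operations (the paper's actual audio search space is not reproduced here):

```python
import random

# Placeholder waveform ops (lists of floats); magnitude m is in [0, 1].
# These ops are illustrative stand-ins, not the paper's audio search space.
def gain(x, m):
    return [v * (1.0 + m) for v in x]

def invert(x, m):
    return [-v for v in x]

def shift(x, m):
    k = max(1, int(m * len(x)))
    return x[k:] + x[:k]

SEARCH_SPACE = [gain, invert, shift]

def rand_augment(wave, n_ops=2, magnitude=0.5, rng=random):
    """Apply n_ops uniformly-drawn ops from the search space at one magnitude."""
    for _ in range(n_ops):
        op = rng.choice(SEARCH_SPACE)
        wave = op(wave, magnitude)
    return wave

out = rand_augment([0.1, -0.2, 0.3, 0.0], rng=random.Random(0))
print(out)  # same length as the input, values transformed
```

The appeal of the scheme is that only two scalars (N, M) need tuning regardless of how large the search space grows.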
A Comprehensive Survey on Deep Learning Techniques in Educational Data Mining | Educational Data Mining (EDM) has emerged as a vital field of research, which
harnesses the power of computational techniques to analyze educational data.
With the increasing complexity and diversity of educational data, Deep Learning
techniques have shown significant advantages in addressing the challenges
associated with analyzing and modeling this data. This survey aims to
systematically review the state-of-the-art in EDM with Deep Learning. We begin
by providing a brief introduction to EDM and Deep Learning, highlighting their
relevance in the context of modern education. Next, we present a detailed
review of Deep Learning techniques applied in four typical educational
scenarios, including knowledge tracing, undesirable student detection,
performance prediction, and personalized recommendation. Furthermore, a
comprehensive overview of public datasets and processing tools for EDM is
provided. Finally, we point out emerging trends and future directions in this
research area. | [
"Yuanguo Lin",
"Hong Chen",
"Wei Xia",
"Fan Lin",
"Pengcheng Wu",
"Zongyue Wang",
"Yong Liu"
] | 2023-09-09 11:20:40 | http://arxiv.org/abs/2309.04761v2 | http://arxiv.org/pdf/2309.04761v2 | 2309.04761v2 |
Gromov-Hausdorff Distances for Comparing Product Manifolds of Model Spaces | Recent studies propose enhancing machine learning models by aligning the
geometric characteristics of the latent space with the underlying data
structure. Instead of relying solely on Euclidean space, researchers have
suggested using hyperbolic and spherical spaces with constant curvature, or
their combinations (known as product manifolds), to improve model performance.
However, there exists no principled technique to determine the best latent
product manifold signature, which refers to the choice and dimensionality of
manifold components. To address this, we introduce a novel notion of distance
between candidate latent geometries using the Gromov-Hausdorff distance from
metric geometry. We propose using a graph search space that uses the estimated
Gromov-Hausdorff distances to search for the optimal latent geometry. In this
work we focus on providing a description of an algorithm to compute the
Gromov-Hausdorff distance between model spaces and its computational
implementation. | [
"Haitz Saez de Ocariz Borde",
"Alvaro Arroyo",
"Ismael Morales",
"Ingmar Posner",
"Xiaowen Dong"
] | 2023-09-09 11:17:06 | http://arxiv.org/abs/2309.05678v1 | http://arxiv.org/pdf/2309.05678v1 | 2309.05678v1 |
RR-CP: Reliable-Region-Based Conformal Prediction for Trustworthy Medical Image Classification | Conformal prediction (CP) generates a set of predictions for a given test
sample such that the prediction set almost always contains the true label
(e.g., 99.5% of the time). CP provides comprehensive predictions on possible
labels of a given test sample, and the size of the set indicates how certain
the predictions are (e.g., a set larger than one is `uncertain'). Such distinct
properties of CP enable effective collaborations between human experts and
medical AI models, allowing efficient intervention and quality check in
clinical decision-making. In this paper, we propose a new method called
Reliable-Region-Based Conformal Prediction (RR-CP), which aims to impose a
stronger statistical guarantee so that the user-specified error rate (e.g.,
0.5%) can be achieved at test time, and, under this constraint, the size of
the prediction set is optimized (to be small). We consider a small prediction
set size an important measure only when the user-specified error rate is
achieved. Experiments on five public datasets show that our RR-CP performs
well: with a reasonably small-sized prediction set, it achieves the
user-specified error rate (e.g., 0.5%) significantly more frequently than
existing CP methods.
"Yizhe Zhang",
"Shuo Wang",
"Yejia Zhang",
"Danny Z. Chen"
] | 2023-09-09 11:14:04 | http://arxiv.org/abs/2309.04760v1 | http://arxiv.org/pdf/2309.04760v1 | 2309.04760v1 |
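RR-CP builds on standard split conformal prediction, which calibrates a score threshold on held-out data so that prediction sets contain the true label with the desired frequency. A minimal sketch of that vanilla procedure (not the RR-CP algorithm itself):

```python
import math, random

def conformal_threshold(cal_scores, alpha):
    """Quantile of calibration nonconformity scores giving 1 - alpha coverage."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # finite-sample correction
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, threshold):
    """All labels whose nonconformity score (1 - prob) is within the threshold."""
    return {label for label, p in enumerate(probs) if 1.0 - p <= threshold}

random.seed(0)
# Fake calibration scores: 1 - model probability assigned to the true class
cal_scores = [random.betavariate(1, 4) for _ in range(500)]
thr = conformal_threshold(cal_scores, alpha=0.1)

probs = [0.70, 0.20, 0.07, 0.03]  # softmax output for one test sample
print(prediction_set(probs, thr))
```

A confident prediction yields a singleton set; flatter softmax outputs yield larger, "uncertain" sets, which is the behavior the abstract describes.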
Affine Invariant Ensemble Transform Methods to Improve Predictive Uncertainty in ReLU Networks | We consider the problem of performing Bayesian inference for logistic
regression using appropriate extensions of the ensemble Kalman filter. Two
interacting particle systems that sample from an approximate posterior are
proposed, and quantitative convergence rates of these systems to their
mean-field limit are proven as the number of particles tends to infinity.
Furthermore, we apply these techniques and examine their
effectiveness as methods of Bayesian approximation for quantifying predictive
uncertainty in ReLU networks. | [
"Diksha Bhandari",
"Jakiw Pidstrigach",
"Sebastian Reich"
] | 2023-09-09 10:01:51 | http://arxiv.org/abs/2309.04742v1 | http://arxiv.org/pdf/2309.04742v1 | 2309.04742v1 |
Learning Spiking Neural Network from Easy to Hard task | Starting with small and simple concepts, and gradually introducing complex
and difficult concepts is the natural process of human learning. Spiking Neural
Networks (SNNs) aim to mimic the way humans process information, but current
SNN models treat all samples equally, which does not align with the principles
of human learning and overlooks the biological plausibility of SNNs. To address
this, we propose a CL-SNN model that introduces Curriculum Learning (CL) into
SNNs, making SNNs learn more like humans and providing higher biological
interpretability. CL is a training strategy that advocates presenting easier
data to models before gradually introducing more challenging data, mimicking
the human learning process. We use a confidence-aware loss to measure and
process the samples with different difficulty levels. By learning the
confidence of different samples, the model reduces the contribution of
difficult samples to parameter optimization automatically. We conducted
experiments on static image datasets MNIST, Fashion-MNIST, CIFAR10, and
neuromorphic datasets N-MNIST, CIFAR10-DVS, DVS-Gesture. The results are
promising. To the best of our knowledge, this is the first proposal to enhance the biological plausibility of SNNs by introducing CL. | [
"Lingling Tang",
"Jiangtao Hu",
"Hua Yu",
"Surui Liu",
"Jielei Chu"
] | 2023-09-09 09:46:32 | http://arxiv.org/abs/2309.04737v3 | http://arxiv.org/pdf/2309.04737v3 | 2309.04737v3 |
A Spatiotemporal Deep Neural Network for Fine-Grained Multi-Horizon Wind Prediction | The prediction of wind in terms of both wind speed and direction, which has a
crucial impact on many real-world applications like aviation and wind power
generation, is extremely challenging due to the high stochasticity and
complicated correlation in the weather data. Existing methods typically focus
on a subset of influential factors and thus lack a systematic treatment of the problem. In addition, fine-grained forecasting is essential for efficient industry operations but has received less attention in the literature. In this
work, we propose a novel data-driven model, Multi-Horizon SpatioTemporal
Network (MHSTN), generally for accurate and efficient fine-grained wind
prediction. MHSTN integrates multiple deep neural networks targeting different
factors in a sequence-to-sequence (Seq2Seq) backbone to effectively extract
features from various data sources and produce multi-horizon predictions for
all sites within a given region. MHSTN is composed of four major modules.
First, a temporal module fuses coarse-grained forecasts derived by Numerical
Weather Prediction (NWP) and historical on-site observation data at stations so
as to leverage both global and local atmospheric information. Second, a spatial
module exploits spatial correlation by modeling the joint representation of all
stations. Third, an ensemble module weighs the above two modules for final
predictions. Furthermore, a covariate selection module automatically chooses
influential meteorological variables as initial input. MHSTN is already
integrated into the scheduling platform of one of the busiest international
airports of China. The evaluation results demonstrate that our model
outperforms competitors by a significant margin. | [
"Fanling Huang",
"Yangdong Deng"
] | 2023-09-09 09:36:28 | http://arxiv.org/abs/2309.04733v1 | http://arxiv.org/pdf/2309.04733v1 | 2309.04733v1 |
TCGAN: Convolutional Generative Adversarial Network for Time Series Classification and Clustering | Recent works have demonstrated the superiority of supervised Convolutional
Neural Networks (CNNs) in learning hierarchical representations from time
series data for successful classification. These methods require sufficiently large amounts of labeled data for stable learning; however, acquiring high-quality labeled
time series data can be costly and potentially infeasible. Generative
Adversarial Networks (GANs) have achieved great success in enhancing
unsupervised and semi-supervised learning. Nonetheless, to the best of our knowledge,
it remains unclear how effectively GANs can serve as a general-purpose solution
to learn representations for time series recognition, i.e., classification and
clustering. The above considerations inspire us to introduce a Time-series
Convolutional GAN (TCGAN). TCGAN learns by playing an adversarial game between
two one-dimensional CNNs (i.e., a generator and a discriminator) in the absence
of label information. Parts of the trained TCGAN are then reused to construct a
representation encoder to empower linear recognition methods. We conducted
comprehensive experiments on synthetic and real-world datasets. The results
demonstrate that TCGAN is faster and more accurate than existing time-series
GANs. The learned representations enable simple classification and clustering
methods to achieve superior and stable performance. Furthermore, TCGAN retains
high efficacy in scenarios with few-labeled and imbalanced-labeled data. Our
work provides a promising path to effectively utilize abundant unlabeled time
series data. | [
"Fanling Huang",
"Yangdong Deng"
] | 2023-09-09 09:33:25 | http://arxiv.org/abs/2309.04732v1 | http://arxiv.org/pdf/2309.04732v1 | 2309.04732v1 |
Transitions in echo index and dependence on input repetitions | The echo index counts the number of simultaneously stable asymptotic
responses of a nonautonomous (i.e. input-driven) dynamical system. It
generalizes the well-known echo state property for recurrent neural networks -
this corresponds to the echo index being equal to one. In this paper, we
investigate how the echo index depends on parameters that govern typical
responses to a finite-state ergodic external input that forces the dynamics. We
consider the echo index for a nonautonomous system that switches between a
finite set of maps, where we assume that each map possesses a finite set of
hyperbolic equilibrium attractors. We find the minimum and maximum repetitions
of each map are crucial for the resulting echo index. Casting our theoretical
findings in the RNN computing framework, we obtain that for small amplitude
forcing the echo index corresponds to the number of attractors for the
input-free system, while for large amplitude forcing, the echo index reduces to
one. The intermediate regime is the most interesting; in this region the echo
index depends not just on the amplitude of forcing but also on more subtle
properties of the input. | [
"Peter Ashwin",
"Andrea Ceni"
] | 2023-09-09 09:27:31 | http://arxiv.org/abs/2309.04728v1 | http://arxiv.org/pdf/2309.04728v1 | 2309.04728v1 |
MultiCaM-Vis: Visual Exploration of Multi-Classification Model with High Number of Classes | Visual exploration of multi-classification models with a large number of classes would help machine learning experts identify the root cause of problems that occur during the learning phase, such as misclassification of instances. Most of the previous visual analytics solutions targeted only a few
classes. In this paper, we present our interactive visual analytics tool,
called MultiCaM-Vis, that provides \Emph{overview+detail} style parallel
coordinate views and a Chord diagram for exploration and inspection of
class-level misclassification of instances. We also present results of a
preliminary user study with 12 participants. | [
"Syed Ahsan Ali Dilawer",
"Shah Rukh Humayoun"
] | 2023-09-09 08:55:22 | http://arxiv.org/abs/2309.05676v1 | http://arxiv.org/pdf/2309.05676v1 | 2309.05676v1 |
SHAPE: A Sample-adaptive Hierarchical Prediction Network for Medication Recommendation | Effective medication recommendation with complex multimorbidity conditions
is a critical task in healthcare. Most existing works predicted medications
based on longitudinal records, which assumed that the information transmission patterns of longitudinal sequence data are stable and that intra-visit medical events are serialized. However, the following conditions may have been ignored: 1) a more compact encoder for intra-relationships in intra-visit medical events is urgently needed; 2) strategies for learning accurate representations of
the variable longitudinal sequences of patients are different. In this paper,
we proposed a novel Sample-adaptive Hierarchical medicAtion Prediction nEtwork,
termed SHAPE, to tackle the above challenges in the medication recommendation
task. Specifically, we design a compact intra-visit set encoder to encode the
relationship in the medical event for obtaining visit-level representation and
then develop an inter-visit longitudinal encoder to learn the patient-level
longitudinal representation efficiently. To endow the model with the capability
of modeling the variable visit length, we introduce a soft curriculum learning
method to assign the difficulty of each sample automatically by the visit
length. Extensive experiments on a benchmark dataset verify the superiority of
our model compared with several state-of-the-art baselines. | [
"Sicen Liu",
"Xiaolong Wang",
"JIngcheng Du",
"Yongshuai Hou",
"Xianbing Zhao",
"Hui Xu",
"Hui Wang",
"Yang Xiang",
"Buzhou Tang"
] | 2023-09-09 08:28:04 | http://arxiv.org/abs/2309.05675v1 | http://arxiv.org/pdf/2309.05675v1 | 2309.05675v1 |
Toward Reproducing Network Research Results Using Large Language Models | Reproducing research results in the networking community is important for
both academia and industry. The current best practice typically resorts to
three approaches: (1) looking for publicly available prototypes; (2) contacting
the authors to get a private prototype; and (3) manually implementing a
prototype following the description of the publication. However, most published
network research does not have public prototypes and private prototypes are
hard to get. As such, most reproducing efforts are spent on manual
implementation based on the publications, which is both time- and labor-consuming and error-prone. In this paper, we boldly propose reproducing network
research results using the emerging large language models (LLMs). In
particular, we first prove its feasibility with a small-scale experiment, in
which four students with essential networking knowledge each reproduces a
different networking system published in prominent conferences and journals by
prompt engineering ChatGPT. We report the experiment's observations and lessons
and discuss future open research questions of this proposal. This work raises
no ethical issue. | [
"Qiao Xiang",
"Yuling Lin",
"Mingjun Fang",
"Bang Huang",
"Siyong Huang",
"Ridi Wen",
"Franck Le",
"Linghe Kong",
"Jiwu Shu"
] | 2023-09-09 08:07:54 | http://arxiv.org/abs/2309.04716v1 | http://arxiv.org/pdf/2309.04716v1 | 2309.04716v1 |
Advantage Actor-Critic with Reasoner: Explaining the Agent's Behavior from an Exploratory Perspective | Reinforcement learning (RL) is a powerful tool for solving complex
decision-making problems, but its lack of transparency and interpretability has
been a major challenge in domains where decisions have significant real-world
consequences. In this paper, we propose a novel Advantage Actor-Critic with
Reasoner (A2CR), which can be easily applied to Actor-Critic-based RL models
and make them interpretable. A2CR consists of three interconnected networks:
the Policy Network, the Value Network, and the Reasoner Network. By predefining
and classifying the underlying purpose of the actor's actions, A2CR
automatically generates a more comprehensive and interpretable paradigm for
understanding the agent's decision-making process. It offers a range of
functionalities such as purpose-based saliency, early failure detection, and
model supervision, thereby promoting responsible and trustworthy RL.
Evaluations conducted in action-rich Super Mario Bros environments yield
intriguing findings: Reasoner-predicted label proportions decrease for
``Breakout" and increase for ``Hovering" as the exploration level of the RL
algorithm intensifies. Additionally, purpose-based saliencies are more focused
and comprehensible. | [
"Muzhe Guo",
"Feixu Yu",
"Tian Lan",
"Fang Jin"
] | 2023-09-09 07:19:20 | http://arxiv.org/abs/2309.04707v1 | http://arxiv.org/pdf/2309.04707v1 | 2309.04707v1 |
Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model | The paper considers the possibility of fine-tuning the Llama 2 large language model (LLM) for disinformation analysis and fake news detection. For
fine-tuning, the PEFT/LoRA based approach was used. In the study, the model was
fine-tuned for the following tasks: analysing a text to reveal disinformation and propaganda narratives, fact checking, fake news detection,
manipulation analytics, extracting named entities with their sentiments. The
obtained results show that the fine-tuned Llama 2 model can perform a deep
analysis of texts and reveal complex styles and narratives. Extracted
sentiments for named entities can be considered as predictive features in
supervised machine learning models. | [
"Bohdan M. Pavlyshenko"
] | 2023-09-09 07:10:19 | http://arxiv.org/abs/2309.04704v1 | http://arxiv.org/pdf/2309.04704v1 | 2309.04704v1 |
Weak-PDE-LEARN: A Weak Form Based Approach to Discovering PDEs From Noisy, Limited Data | We introduce Weak-PDE-LEARN, a Partial Differential Equation (PDE) discovery
algorithm that can identify non-linear PDEs from noisy, limited measurements of
their solutions. Weak-PDE-LEARN uses an adaptive loss function based on weak
forms to train a neural network, $U$, to approximate the PDE solution while
simultaneously identifying the governing PDE. This approach yields an algorithm
that is robust to noise and can discover a range of PDEs directly from noisy,
limited measurements of their solutions. We demonstrate the efficacy of
Weak-PDE-LEARN by learning several benchmark PDEs. | [
"Robert Stephany",
"Christopher Earls"
] | 2023-09-09 06:45:15 | http://arxiv.org/abs/2309.04699v1 | http://arxiv.org/pdf/2309.04699v1 | 2309.04699v1 |
Redundancy-Free Self-Supervised Relational Learning for Graph Clustering | Graph clustering, which learns the node representations for effective cluster
assignments, is a fundamental yet challenging task in data analysis and has
received considerable attention accompanied by graph neural networks in recent
years. However, most existing methods overlook the inherent relational
information among the non-independent and non-identically distributed nodes in
a graph. Due to the lack of exploration of relational attributes, the semantic
information of the graph-structured data fails to be fully exploited which
leads to poor clustering performance. In this paper, we propose a novel
self-supervised deep graph clustering method named Relational Redundancy-Free
Graph Clustering (R$^2$FGC) to tackle the problem. It extracts the attribute-
and structure-level relational information from both global and local views
based on an autoencoder and a graph autoencoder. To obtain effective
representations of the semantic information, we preserve the consistent
relation among augmented nodes, whereas the redundant relation is further
reduced for learning discriminative embeddings. In addition, a simple yet valid
strategy is utilized to alleviate the over-smoothing issue. Extensive
experiments are performed on widely used benchmark datasets to validate the
superiority of our R$^2$FGC over state-of-the-art baselines. Our codes are
available at https://github.com/yisiyu95/R2FGC. | [
"Si-Yu Yi",
"Wei Ju",
"Yifang Qin",
"Xiao Luo",
"Luchen Liu",
"Yong-Dao Zhou",
"Ming Zhang"
] | 2023-09-09 06:18:50 | http://arxiv.org/abs/2309.04694v1 | http://arxiv.org/pdf/2309.04694v1 | 2309.04694v1 |
Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations | Counterfactual explanations (CFEs) exemplify how to minimally modify a
feature vector to achieve a different prediction for an instance. CFEs can
enhance informational fairness and trustworthiness, and provide suggestions for
users who receive adverse predictions. However, recent research has shown that
multiple CFEs can be offered for the same instance or instances with slight
differences. Multiple CFEs provide flexible choices and cover diverse
desiderata for user selection. However, individual fairness and model
reliability will be damaged if unstable CFEs with different costs are returned.
Existing methods fail to exploit flexibility and address the concerns of
non-robustness simultaneously. To address these issues, we propose a
conceptually simple yet effective solution named Counterfactual Explanations
with Minimal Satisfiable Perturbations (CEMSP). Specifically, CEMSP constrains
changing values of abnormal features with the help of their semantically
meaningful normal ranges. For efficiency, we model the problem as a Boolean
satisfiability problem to modify as few features as possible. Additionally,
CEMSP is a general framework and can easily accommodate more practical
requirements, e.g., causality and actionability. Compared to existing methods,
we conduct comprehensive experiments on both synthetic and real-world datasets
to demonstrate that our method provides more robust explanations while
preserving flexibility. | [
"Yongjie Wang",
"Hangwei Qian",
"Yongjie Liu",
"Wei Guo",
"Chunyan Miao"
] | 2023-09-09 04:05:56 | http://arxiv.org/abs/2309.04676v1 | http://arxiv.org/pdf/2309.04676v1 | 2309.04676v1 |
Compact: Approximating Complex Activation Functions for Secure Computation | Secure multi-party computation (MPC) techniques can be used to provide data
privacy when users query deep neural network (DNN) models hosted on a public
cloud. State-of-the-art MPC techniques can be directly leveraged for DNN models
that use simple activation functions (AFs) such as ReLU. However, DNN model
architectures designed for cutting-edge applications often use complex and
highly non-linear AFs. Designing efficient MPC techniques for such complex AFs
is an open problem.
Towards this, we propose Compact, which produces piece-wise polynomial
approximations of complex AFs to enable their efficient use with
state-of-the-art MPC techniques. Compact neither requires nor imposes any
restriction on model training and results in near-identical model accuracy. We
extensively evaluate Compact on four different machine-learning tasks with DNN
architectures that use popular complex AFs SiLU, GeLU, and Mish. Our
experimental results show that Compact incurs negligible accuracy loss compared
to DNN-specific approaches for handling complex non-linear AFs. We also
incorporate Compact in two state-of-the-art MPC libraries for
privacy-preserving inference and demonstrate that Compact provides 2x-5x
speedup in computation compared to the state-of-the-art approximation approach
for non-linear functions -- while providing similar or better accuracy for DNN
models with a large number of hidden layers. | [
"Mazharul Islam",
"Sunpreet S. Arora",
"Rahul Chatterjee",
"Peter Rindal",
"Maliheh Shirvanian"
] | 2023-09-09 02:44:41 | http://arxiv.org/abs/2309.04664v1 | http://arxiv.org/pdf/2309.04664v1 | 2309.04664v1 |
MADLAD-400: A Multilingual And Document-Level Large Audited Dataset | We introduce MADLAD-400, a manually audited, general domain 3T token
monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss
the limitations revealed by self-auditing MADLAD-400, and the role data
auditing had in the dataset creation process. We then train and release a
10.7B-parameter multilingual machine translation model on 250 billion tokens
covering over 450 languages using publicly available data, and find that it is
competitive with models that are significantly larger, and report the results
on different domains. In addition, we train an 8B-parameter language model, and
assess the results on few-shot translation. We make the baseline models
available to the research community. | [
"Sneha Kudugunta",
"Isaac Caswell",
"Biao Zhang",
"Xavier Garcia",
"Christopher A. Choquette-Choo",
"Katherine Lee",
"Derrick Xin",
"Aditya Kusupati",
"Romi Stella",
"Ankur Bapna",
"Orhan Firat"
] | 2023-09-09 02:34:01 | http://arxiv.org/abs/2309.04662v1 | http://arxiv.org/pdf/2309.04662v1 | 2309.04662v1 |
Intelligent upper-limb exoskeleton using deep learning to predict human intention for sensory-feedback augmentation | The age and stroke-associated decline in musculoskeletal strength degrades
the ability to perform daily human tasks using the upper extremities. Although
there are a few examples of exoskeletons, they require manual operation due to the absence of sensor feedback and movement intention prediction. Here,
we introduce an intelligent upper-limb exoskeleton system that uses cloud-based
deep learning to predict human intention for strength augmentation. The
embedded soft wearable sensors provide sensory feedback by collecting real-time
muscle signals, which are simultaneously computed to determine the user's
intended movement. The cloud-based deep learning model predicts four upper-limb joint motions with an average accuracy of 96.2% at a 200-250 millisecond response rate, suggesting that the exoskeleton operates purely by human intention. In
addition, an array of soft pneumatics assists the intended movements by
providing 897 newtons of force and 78.7 millimeters of displacement at maximum.
Collectively, the intent-driven exoskeleton can augment human strength by 5.15
times on average compared to the unassisted exoskeleton. This report
demonstrates an exoskeleton robot that augments the upper-limb joint movements
by human intention based on machine-learning cloud computing and sensory
feedback. | [
"Jinwoo Lee",
"Kangkyu Kwon",
"Ira Soltis",
"Jared Matthews",
"Yoonjae Lee",
"Hojoong Kim",
"Lissette Romero",
"Nathan Zavanelli",
"Youngjin Kwon",
"Shinjae Kwon",
"Jimin Lee",
"Yewon Na",
"Sung Hoon Lee",
"Ki Jun Yu",
"Minoru Shinohara",
"Frank L. Hammond",
"Woon-Hong Yeo"
] | 2023-09-09 01:30:07 | http://arxiv.org/abs/2309.04655v1 | http://arxiv.org/pdf/2309.04655v1 | 2309.04655v1 |
Towards Understanding Neural Collapse: The Effects of Batch Normalization and Weight Decay | Neural Collapse (NC) is a geometric structure recently observed in the final
layer of neural network classifiers. In this paper, we investigate the
interrelationships between batch normalization (BN), weight decay, and
proximity to the NC structure. Our work introduces the geometrically intuitive
intra-class and inter-class cosine similarity measure, which encapsulates
multiple core aspects of NC. Leveraging this measure, we establish theoretical
guarantees for the emergence of NC under the influence of last-layer BN and
weight decay, specifically in scenarios where the regularized cross-entropy
loss is near-optimal. Experimental evidence substantiates our theoretical
findings, revealing a pronounced occurrence of NC in models incorporating BN
and appropriate weight-decay values. This combination of theoretical and
empirical insights suggests a greatly influential role of BN and weight decay
in the emergence of NC. | [
"Leyan Pan",
"Xinyuan Cao"
] | 2023-09-09 00:05:45 | http://arxiv.org/abs/2309.04644v2 | http://arxiv.org/pdf/2309.04644v2 | 2309.04644v2 |
Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation | In many contact-rich tasks, force sensing plays an essential role in adapting
the motion to the physical properties of the manipulated object. To enable
robots to capture the underlying distribution of object properties necessary
for generalising learnt manipulation tasks to unseen objects, existing Learning
from Demonstration (LfD) approaches require a large number of costly human
demonstrations. Our proposed semi-supervised LfD approach decouples the learnt
model into a haptic representation encoder and a motion generation decoder.
This enables us to pre-train the former using a large amount of easily accessible unsupervised data, while using few-shot LfD to train the latter, leveraging the
benefits of learning skills from humans. We validate the approach on the wiping
task using sponges with different stiffness and surface friction. Our results
demonstrate that pre-training significantly improves the ability of the LfD
model to recognise physical properties and generate desired wiping motions for
unseen sponges, outperforming the LfD method without pre-training. We validate
the motion generated by our semi-supervised LfD model on the physical robot
hardware using the KUKA iiwa robot arm. We also validate that the haptic
representation encoder, pre-trained in simulation, captures the properties of
real objects, explaining its contribution to improving the generalisation of
the downstream task. | [
"Marina Y. Aoyama",
"João Moura",
"Namiko Saito",
"Sethu Vijayakumar"
] | 2023-09-08 23:42:59 | http://arxiv.org/abs/2309.04640v1 | http://arxiv.org/pdf/2309.04640v1 | 2309.04640v1 |
Probabilistic Safety Regions Via Finite Families of Scalable Classifiers | Supervised classification recognizes patterns in the data to separate classes
of behaviours. Canonical solutions contain misclassification errors that are
intrinsic to the numerical approximating nature of machine learning. The data
analyst may minimize the classification error on a class at the expense of
increasing the error of the other classes. The error control of such a design
phase is often done in a heuristic manner. In this context, it is key to
develop theoretical foundations capable of providing probabilistic
certifications to the obtained classifiers. In this perspective, we introduce
the concept of probabilistic safety region to describe a subset of the input
space in which the number of misclassified instances is probabilistically
controlled. The notion of scalable classifiers is then exploited to link the
tuning of machine learning with error control. Several tests corroborate the
approach. They are provided through synthetic data in order to highlight all
the steps involved, as well as through a smart mobility application. | [
"Alberto Carlevaro",
"Teodoro Alamo",
"Fabrizio Dabbene",
"Maurizio Mongelli"
] | 2023-09-08 22:40:19 | http://arxiv.org/abs/2309.04627v1 | http://arxiv.org/pdf/2309.04627v1 | 2309.04627v1 |
Perceptual adjustment queries and an inverted measurement paradigm for low-rank metric learning | We introduce a new type of query mechanism for collecting human feedback,
called the perceptual adjustment query (PAQ). Being both informative and
cognitively lightweight, the PAQ adopts an inverted measurement scheme, and
combines advantages from both cardinal and ordinal queries. We showcase the PAQ
in the metric learning problem, where we collect PAQ measurements to learn an
unknown Mahalanobis distance. This gives rise to a high-dimensional, low-rank
matrix estimation problem to which standard matrix estimators cannot be
applied. Consequently, we develop a two-stage estimator for metric learning
from PAQs, and provide sample complexity guarantees for this estimator. We
present numerical simulations demonstrating the performance of the estimator
and its notable properties. | [
"Austin Xu",
"Andrew D. McRae",
"Jingyan Wang",
"Mark A. Davenport",
"Ashwin Pananjady"
] | 2023-09-08 22:36:33 | http://arxiv.org/abs/2309.04626v1 | http://arxiv.org/pdf/2309.04626v1 | 2309.04626v1 |
Knowledge Distillation-Empowered Digital Twin for Anomaly Detection | Cyber-physical systems (CPSs), like train control and management systems
(TCMS), are becoming ubiquitous in critical infrastructures. As safety-critical
systems, ensuring their dependability during operation is crucial. Digital
twins (DTs) have been increasingly studied for this purpose owing to their
capability of runtime monitoring and warning, prediction and detection of
anomalies, etc. However, constructing a DT for anomaly detection in TCMS
necessitates sufficient training data and extracting both chronological and
context features with high quality. Hence, in this paper, we propose a novel
method named KDDT for TCMS anomaly detection. KDDT harnesses a language model
(LM) and a long short-term memory (LSTM) network to extract contexts and
chronological features, respectively. To enrich data volume, KDDT benefits from
out-of-domain data with knowledge distillation (KD). We evaluated KDDT with two
datasets from our industry partner Alstom and obtained the F1 scores of 0.931
and 0.915, respectively, demonstrating the effectiveness of KDDT. We also
explored individual contributions of the DT model, LM, and KD to the overall
performance of KDDT, via a comprehensive empirical study, and observed average
F1 score improvements of 12.4%, 3%, and 6.05%, respectively. | [
"Qinghua Xu",
"Shaukat Ali",
"Tao Yue",
"Zaimovic Nedim",
"Inderjeet Singh"
] | 2023-09-08 22:13:03 | http://arxiv.org/abs/2309.04616v2 | http://arxiv.org/pdf/2309.04616v2 | 2309.04616v2 |
Leveraging World Model Disentanglement in Value-Based Multi-Agent Reinforcement Learning | In this paper, we propose a novel model-based multi-agent reinforcement
learning approach named Value Decomposition Framework with Disentangled World
Model to address the challenge of achieving a common goal of multiple agents
interacting in the same environment with reduced sample complexity. Due to
scalability and non-stationarity problems posed by multi-agent systems,
model-free methods rely on a considerable number of samples for training. In
contrast, we use a modularized world model, composed of action-conditioned,
action-free, and static branches, to unravel the environment dynamics and
produce imagined outcomes based on past experience, without sampling directly
from the real environment. We employ variational auto-encoders and variational
graph auto-encoders to learn the latent representations for the world model,
which is merged with a value-based framework to predict the joint action-value
function and optimize the overall training objective. We present experimental
results in Easy, Hard, and Super-Hard StarCraft II micro-management challenges
to demonstrate that our method achieves high sample efficiency and exhibits
superior performance in defeating the enemy armies compared to other baselines. | [
"Zhizun Wang",
"David Meger"
] | 2023-09-08 22:12:43 | http://arxiv.org/abs/2309.04615v1 | http://arxiv.org/pdf/2309.04615v1 | 2309.04615v1 |
Self-optimizing Feature Generation via Categorical Hashing Representation and Hierarchical Reinforcement Crossing | Feature generation aims to generate new and meaningful features to create a
discriminative representation space. A generated feature is meaningful when it comes from a feature pair with inherent feature interaction. In
the real world, experienced data scientists can identify potentially useful
feature-feature interactions, and generate meaningful dimensions from an
exponentially large search space, in an optimal crossing form over an optimal
generation path. But machines have limited human-like abilities. We generalize
such learning tasks as self-optimizing feature generation. Self-optimizing
feature generation imposes several under-addressed challenges on existing
systems: meaningful, robust, and efficient generation. To tackle these
challenges, we propose a principled and generic representation-crossing
framework to solve self-optimizing feature generation. To achieve hashing
representation, we propose a three-step approach: feature discretization,
feature hashing, and descriptive summarization. To achieve reinforcement
crossing, we develop a hierarchical reinforcement feature crossing approach. We
present extensive experimental results to demonstrate the effectiveness and
efficiency of the proposed method. The code is available at
https://github.com/yingwangyang/HRC_feature_cross.git. | [
"Wangyang Ying",
"Dongjie Wang",
"Kunpeng Liu",
"Leilei Sun",
"Yanjie Fu"
] | 2023-09-08 22:05:27 | http://arxiv.org/abs/2309.04612v2 | http://arxiv.org/pdf/2309.04612v2 | 2309.04612v2 |
Online Infinite-Dimensional Regression: Learning Linear Operators | We consider the problem of learning linear operators under squared loss
between two infinite-dimensional Hilbert spaces in the online setting. We show
that the class of linear operators with uniformly bounded $p$-Schatten norm is
online learnable for any $p \in [1, \infty)$. On the other hand, we prove an
impossibility result by showing that the class of uniformly bounded linear
operators with respect to the operator norm is \textit{not} online learnable.
Moreover, we show a separation between online uniform convergence and online
learnability by identifying a class of bounded linear operators that is online
learnable but for which uniform convergence does not hold. Finally, we prove that the
impossibility result and the separation between uniform convergence and
learnability also hold in the agnostic PAC setting. | [
"Vinod Raman",
"Unique Subedi",
"Ambuj Tewari"
] | 2023-09-08 21:34:52 | http://arxiv.org/abs/2309.06548v2 | http://arxiv.org/pdf/2309.06548v2 | 2309.06548v2 |
Motif-aware Attribute Masking for Molecular Graph Pre-training | Attribute reconstruction is used to predict node or edge features in the
pre-training of graph neural networks. Given a large number of molecules, the
networks learn to capture structural knowledge, which is transferable for various
downstream property prediction tasks and vital in chemistry, biomedicine, and
material science. Previous strategies that randomly select nodes for
attribute masking leverage the information of local neighbors. However,
over-reliance on these neighbors inhibits the model's ability to learn from
higher-level substructures. For example, the model would learn little from
predicting three carbon atoms in a benzene ring based on the other three but
could learn more from the inter-connections between the functional groups, or
called chemical motifs. In this work, we propose and investigate motif-aware
attribute masking strategies to capture inter-motif structures by leveraging
the information of atoms in neighboring motifs. Once each graph is decomposed
into disjoint motifs, the features for every node within a sample motif are
masked. The graph decoder then predicts the masked features of each node within
the motif for reconstruction. We evaluate our approach on eight molecular
property prediction datasets and demonstrate its advantages. | [
"Eric Inae",
"Gang Liu",
"Meng Jiang"
] | 2023-09-08 20:36:03 | http://arxiv.org/abs/2309.04589v1 | http://arxiv.org/pdf/2309.04589v1 | 2309.04589v1 |
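The masking step this abstract describes — decompose each graph into disjoint motifs, then mask every node feature inside a sampled motif — can be sketched minimally. The dictionary-based node features, the precomputed motif list, and the helper name `motif_masking` are illustrative assumptions, not the paper's implementation:

```python
import random

def motif_masking(node_features, motifs, mask_value=0.0, seed=0):
    """Mask the features of every node inside one randomly sampled motif.

    `motifs` is assumed to be a precomputed disjoint decomposition of the
    graph, given as lists of node ids (e.g. functional groups of a molecule).
    """
    rng = random.Random(seed)
    target = rng.choice(motifs)      # sample one motif to reconstruct
    masked = dict(node_features)     # leave the input untouched
    for node in target:
        masked[node] = mask_value    # mask all nodes of that motif
    return masked, target

features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
motifs = [[0, 1], [2, 3]]            # two hypothetical motifs
masked, target = motif_masking(features, motifs)
```

A graph decoder would then be trained to reconstruct `features[n]` for every `n` in `target` from the surrounding, unmasked motifs.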
Dynamic Mesh-Aware Radiance Fields | Embedding polygonal mesh assets within photorealistic Neural Radiance Fields
(NeRF) volumes, such that they can be rendered and their dynamics simulated in
a physically consistent manner with the NeRF, is under-explored from the system
perspective of integrating NeRF into the traditional graphics pipeline. This
paper designs a two-way coupling between mesh and NeRF during rendering and
simulation. We first review the light transport equations for both mesh and
NeRF, then distill them into an efficient algorithm for updating radiance and
throughput along a cast ray with an arbitrary number of bounces. To resolve the
discrepancy between the linear color space that the path tracer assumes and the
sRGB color space that standard NeRF uses, we train NeRF with High Dynamic Range
(HDR) images. We also present a strategy to estimate light sources and cast
shadows on the NeRF. Finally, we consider how the hybrid surface-volumetric
formulation can be efficiently integrated with a high-performance physics
simulator that supports cloth, rigid and soft bodies. The full rendering and
simulation system can be run on a GPU at interactive rates. We show that a
hybrid system approach outperforms alternatives in visual realism for mesh
insertion, because it allows realistic light transport from volumetric NeRF
media onto surfaces, which affects the appearance of reflective/refractive
surfaces and illumination of diffuse surfaces informed by the dynamic scene. | [
"Yi-Ling Qiao",
"Alexander Gao",
"Yiran Xu",
"Yue Feng",
"Jia-Bin Huang",
"Ming C. Lin"
] | 2023-09-08 20:18:18 | http://arxiv.org/abs/2309.04581v1 | http://arxiv.org/pdf/2309.04581v1 | 2309.04581v1 |
Circles: Inter-Model Comparison of Multi-Classification Problems with High Number of Classes | The recent advancements in machine learning have motivated researchers to
generate classification models dealing with hundreds of classes such as in the
case of image datasets. However, visualization of classification models with
high number of classes and inter-model comparison in such classification
problems are two areas that have not received much attention in the literature,
despite the ever-increasing use of classification models to address problems
with very large class categories. In this paper, we present our interactive
visual analytics tool, called Circles, that allows a visual inter-model
comparison of numerous classification models with 1K classes in one view. To
mitigate the tricky issue of visual clutter, we chose a concentric radial line
layout for our inter-model comparison task. Our prototype shows the results of
9 models with 1K classes. | [
"Nina Mir",
"Ragaad AlTarawneh",
"Shah Rukh Humayoun"
] | 2023-09-08 19:39:46 | http://arxiv.org/abs/2309.05672v1 | http://arxiv.org/pdf/2309.05672v1 | 2309.05672v1 |
Unleashing the Power of Graph Learning through LLM-based Autonomous Agents | Graph-structured data are ubiquitous in real-world
applications, yet handling these diverse data and the learning tasks defined on
graphs in an efficient manner remains a challenge. When facing complicated
graph learning tasks, experts have designed diverse Graph Neural Networks
(GNNs) in recent years. They have also implemented AutoML in Graph, also known
as AutoGraph, to automatically generate data-specific solutions. Despite their
success, they encounter limitations in (1) managing diverse learning tasks at
various levels, (2) dealing with different procedures in graph learning beyond
architecture design, and (3) the substantial prior knowledge required when
using AutoGraph. In this paper, we propose to use Large Language Models (LLMs)
as autonomous agents to simplify the learning process on diverse real-world
graphs. Specifically, in response to a user request which may contain varying
data and learning targets at the node, edge, or graph levels, the complex graph
learning task is decomposed into three components following the agent planning,
namely, detecting the learning intent, configuring solutions based on
AutoGraph, and generating a response. The AutoGraph agents manage crucial
procedures in automated graph learning, including data-processing, AutoML
configuration, searching architectures, and hyper-parameter fine-tuning. With
these agents, those components are decomposed and completed step by
step, thereby generating a solution for the given data automatically,
regardless of whether the learning task is at the node or graph level. The proposed method is dubbed
Auto$^2$Graph. Its effectiveness is demonstrated by comparable performance
on different datasets and learning tasks, as well as the human-like decisions
made by the agents. | [
"Lanning Wei",
"Zhiqiang He",
"Huan Zhao",
"Quanming Yao"
] | 2023-09-08 19:34:29 | http://arxiv.org/abs/2309.04565v1 | http://arxiv.org/pdf/2309.04565v1 | 2309.04565v1 |
When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale | Large volumes of text data have contributed significantly to the development
of large language models (LLMs) in recent years. This data is typically
acquired by scraping the internet, leading to pretraining datasets comprised of
noisy web text. To date, efforts to prune these datasets down to a higher
quality subset have relied on hand-crafted heuristics encoded as rule-based
filters. In this work, we take a wider view and explore scalable estimates of
data quality that can be used to systematically measure the quality of
pretraining data. We perform a rigorous comparison at scale of the simple data
quality estimator of perplexity, as well as more sophisticated and
computationally intensive estimates of the Error L2-Norm and memorization.
These metrics are used to rank and prune pretraining corpora, and we
subsequently compare LLMs trained on these pruned datasets. Surprisingly, we
find that the simple technique of perplexity outperforms our more
computationally expensive scoring methods. We improve over our no-pruning
baseline while training on as little as 30% of the original training dataset.
Our work sets the foundation for unexplored strategies in automatically
curating high quality corpora and suggests the majority of pretraining data can
be removed while retaining performance. | [
"Max Marion",
"Ahmet Üstün",
"Luiza Pozzobon",
"Alex Wang",
"Marzieh Fadaee",
"Sara Hooker"
] | 2023-09-08 19:34:05 | http://arxiv.org/abs/2309.04564v1 | http://arxiv.org/pdf/2309.04564v1 | 2309.04564v1 |
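The ranking-and-pruning recipe in this abstract can be illustrated with a toy scorer. The paper scores documents with a pretrained LLM; the add-one-smoothed unigram model, the tiny corpus, and the keep fraction below are stand-in assumptions:

```python
import math
from collections import Counter

def unigram_perplexity(doc, counts, total, vocab_size):
    """Per-token perplexity under an add-one-smoothed unigram model."""
    tokens = doc.split()
    log_prob = sum(
        math.log((counts[t] + 1) / (total + vocab_size)) for t in tokens
    )
    return math.exp(-log_prob / max(len(tokens), 1))

def prune_by_perplexity(corpus, keep_fraction=0.3):
    """Rank documents by perplexity and keep the lowest-perplexity subset."""
    counts = Counter(t for doc in corpus for t in doc.split())
    total = sum(counts.values())
    vocab = len(counts)
    ranked = sorted(corpus,
                    key=lambda d: unigram_perplexity(d, counts, total, vocab))
    keep = max(1, int(len(corpus) * keep_fraction))
    return ranked[:keep]

corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "zxqv gqpl wvnm xkjh",     # noisy web text scores high perplexity
    "the cat and the dog",
]
kept = prune_by_perplexity(corpus, keep_fraction=0.5)
```

The pruned set drops the noisy document first, mirroring the idea that low-perplexity text is what gets retained for pretraining.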
Towards Interpretable Solar Flare Prediction with Attention-based Deep Neural Networks | Solar flare prediction is a central problem in space weather forecasting and
recent developments in machine learning and deep learning accelerated the
adoption of complex models for data-driven solar flare forecasting. In this
work, we developed an attention-based deep learning model as an improvement
over the standard convolutional neural network (CNN) pipeline to perform
full-disk binary flare predictions for the occurrence of $\geq$M1.0-class
flares within the next 24 hours. For this task, we collected compressed images
created from full-disk line-of-sight (LoS) magnetograms. We used data-augmented
oversampling to address the class imbalance issue and used true skill statistic
(TSS) and Heidke skill score (HSS) as the evaluation metrics. Furthermore, we
interpreted our model by overlaying attention maps on input magnetograms and
visualized the important regions focused on by the model that led to the
eventual decision. The significant findings of this study are: (i) We
successfully implemented an attention-based full-disk flare predictor ready for
operational forecasting where the candidate model achieves an average
TSS=0.54$\pm$0.03 and HSS=0.37$\pm$0.07. (ii) We demonstrated that our
full-disk model can learn conspicuous features corresponding to active regions
from full-disk magnetogram images. (iii) Our experimental evaluation
suggests that our model can predict near-limb flares with adept skill and the
predictions are based on relevant active regions (ARs) or AR characteristics
from full-disk magnetograms. | [
"Chetraj Pandey",
"Anli Ji",
"Rafal A. Angryk",
"Berkay Aydin"
] | 2023-09-08 19:21:10 | http://arxiv.org/abs/2309.04558v1 | http://arxiv.org/pdf/2309.04558v1 | 2309.04558v1 |
Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing | We propose an optimal iterative scheme for federated transfer learning, where
a central planner has access to datasets ${\cal D}_1,\dots,{\cal D}_N$ for the
same learning model $f_{\theta}$. Our objective is to minimize the cumulative
deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^T$ across all $T$
iterations from the specialized parameters
$\theta^\star_{1},\ldots,\theta^\star_N$ obtained for each dataset, while
respecting the loss function for the model $f_{\theta(T)}$ produced by the
algorithm upon halting. We only allow for continual communication between each
of the specialized models (nodes/agents) and the central planner (server), at
each iteration (round). For the case where the model $f_{\theta}$ is a
finite-rank kernel regression, we derive explicit updates for the
regret-optimal algorithm. By leveraging symmetries within the regret-optimal
algorithm, we further develop a nearly regret-optimal heuristic that runs with
$\mathcal{O}(Np^2)$ fewer elementary operations, where $p$ is the dimension of
the parameter space. Additionally, we investigate the adversarial robustness of
the regret-optimal algorithm showing that an adversary which perturbs $q$
training pairs by at most $\varepsilon>0$, across all training sets, cannot
reduce the regret-optimal algorithm's regret by more than
$\mathcal{O}(\varepsilon q \bar{N}^{1/2})$, where $\bar{N}$ is the aggregate
number of training pairs. To validate our theoretical findings, we conduct
numerical experiments in the context of American option pricing, utilizing a
randomly generated finite-rank kernel. | [
"Xuwei Yang",
"Anastasis Kratsios",
"Florian Krach",
"Matheus Grasselli",
"Aurelien Lucchi"
] | 2023-09-08 19:17:03 | http://arxiv.org/abs/2309.04557v1 | http://arxiv.org/pdf/2309.04557v1 | 2309.04557v1 |
Connecting NTK and NNGP: A Unified Theoretical Framework for Neural Network Learning Dynamics in the Kernel Regime | Artificial neural networks have revolutionized machine learning in recent
years, but a complete theoretical framework for their learning process is still
lacking. Substantial progress has been made for infinitely wide networks. In
this regime, two disparate theoretical frameworks have been used, in which the
network's output is described using kernels: one framework is based on the
Neural Tangent Kernel (NTK) which assumes linearized gradient descent dynamics,
while the Neural Network Gaussian Process (NNGP) kernel assumes a Bayesian
framework. However, the relation between these two frameworks has remained
elusive. This work unifies these two distinct theories using a Markov proximal
learning model for learning dynamics in an ensemble of randomly initialized
infinitely wide deep networks. We derive an exact analytical expression for the
network input-output function during and after learning, and introduce a new
time-dependent Neural Dynamical Kernel (NDK) from which both NTK and NNGP
kernels can be derived. We identify two learning phases characterized by
different time scales: gradient-driven and diffusive learning. In the initial
gradient-driven learning phase, the dynamics is dominated by deterministic
gradient descent, and is described by the NTK theory. This phase is followed by
the diffusive learning stage, during which the network parameters sample the
solution space, ultimately approaching the equilibrium distribution
corresponding to NNGP. Combined with numerical evaluations on synthetic and
benchmark datasets, we provide novel insights into the different roles of
initialization, regularization, and network depth, as well as phenomena such as
early stopping and representational drift. This work closes the gap between the
NTK and NNGP theories, providing a comprehensive framework for understanding
the learning process of deep neural networks in the infinite width limit. | [
"Yehonatan Avidan",
"Qianyi Li",
"Haim Sompolinsky"
] | 2023-09-08 18:00:01 | http://arxiv.org/abs/2309.04522v1 | http://arxiv.org/pdf/2309.04522v1 | 2309.04522v1 |
On the Actionability of Outcome Prediction | Predicting future outcomes is a prevalent application of machine learning in
social impact domains. Examples range from predicting student success in
education to predicting disease risk in healthcare. Practitioners recognize
that the ultimate goal is not just to predict but to act effectively.
Increasing evidence suggests that relying on outcome predictions for downstream
interventions may not have the desired results.
In most domains there exists a multitude of possible interventions for each
individual, making the challenge of taking effective action more acute. Even
when the causal mechanisms connecting the individual's latent states to outcomes are
well understood, in any given instance (a specific student or patient),
practitioners still need to infer -- from budgeted measurements of latent
states -- which of many possible interventions will be most effective for this
individual. With this in mind, we ask: when are accurate predictors of outcomes
helpful for identifying the most suitable intervention?
Through a simple model encompassing actions, latent states, and measurements,
we demonstrate that pure outcome prediction rarely results in the most
effective policy for taking actions, even when combined with other
measurements. We find that except in cases where there is a single decisive
action for improving the outcome, outcome prediction never maximizes "action
value", the utility of taking actions. Making measurements of actionable latent
states, where specific actions lead to desired outcomes, considerably enhances
the action value compared to outcome prediction, and the degree of improvement
depends on action costs and the outcome model. This analysis emphasizes the
need to go beyond generic outcome prediction in interventional settings by
incorporating knowledge of plausible actions and latent states. | [
"Lydia T. Liu",
"Solon Barocas",
"Jon Kleinberg",
"Karen Levy"
] | 2023-09-08 17:57:31 | http://arxiv.org/abs/2309.04470v1 | http://arxiv.org/pdf/2309.04470v1 | 2309.04470v1 |
Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models | Vision-language models (VLMs) have recently demonstrated strong efficacy as
visual assistants that can parse natural queries about the visual content and
generate human-like outputs. In this work, we explore the ability of these
models to demonstrate human-like reasoning based on the perceived information.
To address a crucial concern regarding the extent to which their reasoning
capabilities are fully consistent and grounded, we also measure the reasoning
consistency of these models. We achieve this by proposing a chain-of-thought
(CoT) based consistency measure. However, such an evaluation requires a
benchmark that encompasses both high-level inference and detailed reasoning
chains, which is costly. We tackle this challenge by proposing a
LLM-Human-in-the-Loop pipeline, which notably reduces cost while simultaneously
ensuring the generation of a high-quality dataset. Based on this pipeline and
the existing coarse-grained annotated dataset, we build the CURE benchmark to
measure both the zero-shot reasoning performance and consistency of VLMs. We
evaluate existing state-of-the-art VLMs, and find that even the best-performing
model is unable to demonstrate strong visual reasoning capabilities and
consistency, indicating that substantial efforts are required to enable VLMs to
perform visual reasoning as systematically and consistently as humans. As an
early step, we propose a two-stage training framework aimed at improving both
the reasoning performance and consistency of VLMs. The first stage involves
employing supervised fine-tuning of VLMs using step-by-step reasoning samples
automatically generated by LLMs. In the second stage, we further augment the
training process by incorporating feedback provided by LLMs to produce
reasoning chains that are highly consistent and grounded. We empirically
highlight the effectiveness of our framework in both reasoning performance and
consistency. | [
"Yangyi Chen",
"Karan Sikka",
"Michael Cogswell",
"Heng Ji",
"Ajay Divakaran"
] | 2023-09-08 17:49:44 | http://arxiv.org/abs/2309.04461v1 | http://arxiv.org/pdf/2309.04461v1 | 2309.04461v1 |
tSPM+; a high-performance algorithm for mining transitive sequential patterns from clinical data | The increasing availability of large clinical datasets collected from
patients can enable new avenues for computational characterization of complex
diseases using different analytic algorithms. One of the promising new methods
for extracting knowledge from large clinical datasets involves temporal pattern
mining integrated with machine learning workflows. However, mining these
temporal patterns is a computational intensive task and has memory
repercussions. Current algorithms, such as the temporal sequence pattern mining
(tSPM) algorithm, are already providing promising outcomes, but still leave
room for optimization. In this paper, we present the tSPM+ algorithm, a
high-performance implementation of the tSPM algorithm, which adds a new
dimension by adding the duration to the temporal patterns. We show that the
tSPM+ algorithm provides a speedup of up to a factor of 980 and up to a 48-fold
improvement in memory consumption. Moreover, we present a Docker container with
an R package, and we provide vignettes for easy integration into
existing machine learning workflows. We use the mined temporal sequences to
identify Post COVID-19 patients and their symptoms according to the WHO
definition. | [
"Jonas Hügel",
"Ulrich Sax",
"Shawn N. Murphy",
"Hossein Estiri"
] | 2023-09-08 17:47:31 | http://arxiv.org/abs/2309.05671v1 | http://arxiv.org/pdf/2309.05671v1 | 2309.05671v1 |
Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning | Exploration in sparse-reward reinforcement learning is difficult due to the
requirement of long, coordinated sequences of actions in order to achieve any
reward. Moreover, in continuous action spaces there are an infinite number of
possible actions, which only increases the difficulty of exploration. One class
of methods designed to address these issues forms temporally extended actions,
often called skills, from interaction data collected in the same domain, and
optimizes a policy on top of this new action space. Typically such methods
require a lengthy pretraining phase, especially in continuous action spaces, in
order to form the skills before reinforcement learning can begin. Given prior
evidence that the full range of the continuous action space is not required in
such tasks, we propose a novel approach to skill-generation with two
components. First we discretize the action space through clustering, and second
we leverage a tokenization technique borrowed from natural language processing
to generate temporally extended actions. Such a method outperforms baselines
for skill-generation in several challenging sparse-reward domains, and requires
orders-of-magnitude less computation in skill-generation and online rollouts. | [
"David Yunis",
"Justin Jung",
"Falcon Dai",
"Matthew Walter"
] | 2023-09-08 17:37:05 | http://arxiv.org/abs/2309.04459v1 | http://arxiv.org/pdf/2309.04459v1 | 2309.04459v1 |
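The two components — discretize the continuous actions by clustering, then tokenize the resulting symbol stream into temporally extended skills — can be sketched with a greedy BPE-style merge. The 1-D actions, the fixed centroids standing in for a learned clustering, and the `+`-joined skill names are illustrative assumptions:

```python
from collections import Counter

def discretize(actions, centroids):
    """Map each continuous (1-D) action to its nearest centroid index."""
    return [min(range(len(centroids)), key=lambda i: abs(a - centroids[i]))
            for a in actions]

def bpe_skills(token_seq, num_merges):
    """Greedy BPE: repeatedly merge the most frequent adjacent pair.

    Each merged pair becomes a new token, i.e. a temporally extended skill.
    """
    seq = [str(t) for t in token_seq]
    skills = []
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + "+" + b
        skills.append(merged)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(merged)   # replace the pair with the new skill
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, skills

trace = [0.1, 0.9, 0.1, 0.9, 0.5, 0.1, 0.9]   # hypothetical action trace
centroids = [0.0, 0.5, 1.0]
tokens = discretize(trace, centroids)
seq, skills = bpe_skills(tokens, num_merges=1)
```

A policy would then act over the discovered skills plus the remaining primitive tokens instead of the raw continuous action space.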
Postprocessing of Ensemble Weather Forecasts Using Permutation-invariant Neural Networks | Statistical postprocessing is used to translate ensembles of raw numerical
weather forecasts into reliable probabilistic forecast distributions. In this
study, we examine the use of permutation-invariant neural networks for this
task. In contrast to previous approaches, which often operate on ensemble
summary statistics and dismiss details of the ensemble distribution, we propose
networks which treat forecast ensembles as a set of unordered member forecasts
and learn link functions that are by design invariant to permutations of the
member ordering. We evaluate the quality of the obtained forecast distributions
in terms of calibration and sharpness, and compare the models against classical
and neural network-based benchmark methods. In case studies addressing the
postprocessing of surface temperature and wind gust forecasts, we demonstrate
state-of-the-art prediction quality. To deepen the understanding of the learned
inference process, we further propose a permutation-based importance analysis
for ensemble-valued predictors, which highlights specific aspects of the
ensemble forecast that are considered important by the trained postprocessing
models. Our results suggest that most of the relevant information is contained
in few ensemble-internal degrees of freedom, which may impact the design of
future ensemble forecasting and postprocessing systems. | [
"Kevin Höhlein",
"Benedikt Schulz",
"Rüdiger Westermann",
"Sebastian Lerch"
] | 2023-09-08 17:20:51 | http://arxiv.org/abs/2309.04452v1 | http://arxiv.org/pdf/2309.04452v1 | 2309.04452v1 |
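The key design property — the link function's output must not depend on the ordering of ensemble members — can be illustrated with a Deep-Sets-style sketch (a shared per-member embedding, mean pooling, then a linear readout). The tiny one-layer weights and scalar member forecasts are illustrative assumptions, not the paper's architecture:

```python
def member_embed(x, w, b):
    """Shared per-member transform (phi): a one-layer ReLU map."""
    return [max(0.0, wi * x + bi) for wi, bi in zip(w, b)]

def permutation_invariant_forecast(ensemble, w, b, v):
    """Embed each member with shared weights, mean-pool, then read out (rho).

    Mean pooling is symmetric, so the output is invariant to any
    permutation of the member ordering by construction.
    """
    n = len(ensemble)
    pooled = [0.0] * len(w)
    for member in ensemble:
        e = member_embed(member, w, b)
        pooled = [p + ei / n for p, ei in zip(pooled, e)]
    return sum(vi * pi for vi, pi in zip(v, pooled))

w, b, v = [1.0, -0.5], [0.0, 1.0], [0.7, 0.3]
ensemble = [2.1, 1.8, 2.5, 1.9]    # raw member forecasts
y1 = permutation_invariant_forecast(ensemble, w, b, v)
y2 = permutation_invariant_forecast(ensemble[::-1], w, b, v)
# y1 == y2: shuffling the members leaves the forecast unchanged
```

Because the pooling operation carries all cross-member information, summary statistics never need to be computed explicitly.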
End-to-End Speech Recognition and Disfluency Removal with Acoustic Language Model Pretraining | The SOTA in transcription of disfluent and conversational speech has in
recent years favored two-stage models, with separate transcription and cleaning
stages. We believe that previous attempts at end-to-end disfluency removal have
fallen short because of the representational advantage that large-scale
language model pretraining has given to lexical models. Until recently, the
high dimensionality and limited availability of large audio datasets inhibited
the development of large-scale self-supervised pretraining objectives for
learning effective audio representations, giving a relative advantage to the
two-stage approach, which utilises pretrained representations for lexical
tokens. In light of recent successes in large scale audio pretraining, we
revisit the performance comparison between two-stage and end-to-end model and
find that audio based language models pretrained using weak self-supervised
objectives match or exceed the performance of similarly trained two-stage
models, and further, that the choice of pretraining objective substantially
affects a model's ability to be adapted to the disfluency removal task. | [
"Saksham Bassi",
"Giulio Duregon",
"Siddhartha Jalagam",
"David Roth"
] | 2023-09-08 17:12:14 | http://arxiv.org/abs/2309.04516v1 | http://arxiv.org/pdf/2309.04516v1 | 2309.04516v1 |
Physics-Informed Neural Networks for an optimal counterdiabatic quantum computation | We introduce a novel methodology that leverages the strength of
Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD)
protocol in the optimization of quantum circuits comprised of systems with
$N_{Q}$ qubits. The primary objective is to utilize physics-inspired deep
learning techniques to accurately solve the time evolution of the different
physical observables within the quantum system. To accomplish this objective,
we embed the necessary physical information into an underlying neural network
to effectively tackle the problem. In particular, we impose the hermiticity
condition on all physical observables and make use of the principle of least
action, guaranteeing the acquisition of the most appropriate counterdiabatic
terms based on the underlying physics. The proposed approach offers a
dependable alternative to address the CD driving problem, free from the
constraints typically encountered in previous methodologies relying on
classical numerical approximations. Our method provides a general framework to
obtain optimal results from the physical observables relevant to the problem,
including the external parameterization in time known as scheduling function,
the gauge potential or operator involving the non-adiabatic terms, as well as
the temporal evolution of the energy levels of the system, among others. The
main applications of this methodology have been the $\mathrm{H_{2}}$ and
$\mathrm{LiH}$ molecules, represented by a 2-qubit and 4-qubit systems
employing the STO-3G basis. The presented results demonstrate the successful
derivation of a desirable decomposition for the non-adiabatic terms, achieved
through a linear combination utilizing Pauli operators. This attribute confers
significant advantages to its practical implementation within quantum computing
algorithms. | [
"Antonio Ferrer-Sánchez",
"Carlos Flores-Garrigos",
"Carlos Hernani-Morales",
"José J. Orquín-Marqués",
"Narendra N. Hegade",
"Alejandro Gomez Cadavid",
"Iraitz Montalban",
"Enrique Solano",
"Yolanda Vives-Gilabert",
"José D. Martín-Guerrero"
] | 2023-09-08 16:55:39 | http://arxiv.org/abs/2309.04434v2 | http://arxiv.org/pdf/2309.04434v2 | 2309.04434v2 |
Variations and Relaxations of Normalizing Flows | Normalizing Flows (NFs) describe a class of models that express a complex
target distribution as the composition of a series of bijective transformations
over a simpler base distribution. By limiting the space of candidate
transformations to diffeomorphisms, NFs enjoy efficient, exact sampling and
density evaluation, enabling NFs to flexibly behave as both discriminative and
generative models. Their restriction to diffeomorphisms, however, enforces that
input, output and all intermediary spaces share the same dimension, limiting
their ability to effectively represent target distributions with complex
topologies. Additionally, in cases where the prior and target distributions are
not homeomorphic, Normalizing Flows can leak mass outside of the support of the
target. This survey covers a selection of recent works that combine aspects of
other generative model classes, such as VAEs and score-based diffusion, and in
doing so loosen the strict bijectivity constraints of NFs to achieve a balance
of expressivity, training speed, sample efficiency and likelihood tractability. | [
"Keegan Kelly",
"Lorena Piedras",
"Sukrit Rao",
"David Roth"
] | 2023-09-08 16:55:23 | http://arxiv.org/abs/2309.04433v1 | http://arxiv.org/pdf/2309.04433v1 | 2309.04433v1 |
Soft Quantization using Entropic Regularization | The quantization problem aims to find the best possible approximation of
probability measures on ${\mathbb{R}}^d$ using finite, discrete measures. The
Wasserstein distance is a typical choice to measure the quality of the
approximation. This contribution investigates the properties and robustness of
the entropy-regularized quantization problem, which relaxes the standard
quantization problem. The proposed approximation technique naturally adopts the
softmin function, which is well known for its robustness from both
theoretical and practical standpoints. Moreover, we use the
entropy-regularized Wasserstein distance to evaluate the quality of the soft
quantization problem's approximation, and we implement a stochastic gradient
approach to achieve the optimal solutions. The control parameter in our
proposed method allows for the adjustment of the optimization problem's
difficulty level, providing significant advantages when dealing with
exceptionally challenging problems of interest. In addition, this contribution
empirically illustrates the performance of the method in various expositions. | [
"Rajmadan Lakshmanan",
"Alois Pichler"
] | 2023-09-08 16:41:26 | http://arxiv.org/abs/2309.04428v1 | http://arxiv.org/pdf/2309.04428v1 | 2309.04428v1 |
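One way to picture the relaxation is the softmin-smoothed quantization loss, whose exact gradient weights each center's pull toward a sample by its soft assignment. The 1-D samples, learning rate, and regularization level below are illustrative assumptions, not the contribution's experimental setup:

```python
import math

def softmin_weights(x, centers, eps):
    """Soft assignment of sample x via softmin of squared distances.

    As eps -> 0 this recovers the hard nearest-center assignment of
    standard quantization.
    """
    d = [(x - c) ** 2 for c in centers]
    m = min(d)                                   # stabilize the exponentials
    w = [math.exp(-(di - m) / eps) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

def soft_quantize_step(samples, centers, eps, lr):
    """One full-batch gradient step on the softmin-smoothed loss
    -eps * log(sum_j exp(-d_j / eps)), averaged over samples."""
    grads = [0.0] * len(centers)
    for x in samples:
        w = softmin_weights(x, centers, eps)
        for j, c in enumerate(centers):
            grads[j] += 2.0 * w[j] * (c - x) / len(samples)
    return [c - lr * g for c, g in zip(centers, grads)]

samples = [0.0, 0.1, 0.9, 1.0]
centers = [0.2, 0.8]
for _ in range(200):
    centers = soft_quantize_step(samples, centers, eps=0.01, lr=0.1)
# centers drift toward the two cluster means, roughly 0.05 and 0.95
```

Raising `eps` makes the assignments softer and the landscape smoother, matching the abstract's point that the control parameter adjusts the optimization problem's difficulty.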
Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach | Several domains increasingly rely on machine learning in their applications.
The resulting heavy dependence on data has led to the emergence of various laws
and regulations around data ethics and privacy and growing awareness of the
need for privacy-preserving machine learning (ppML). Current ppML techniques
utilize methods that are either purely based on cryptography, such as
homomorphic encryption, or that introduce noise into the input, such as
differential privacy. The main criticism of these techniques is that
they are either too slow or trade off a model's performance for
improved confidentiality. To address this performance reduction, we aim to
leverage robust representation learning as a way of encoding our data while
optimizing the privacy-utility trade-off. Our method centers on training
autoencoders in a multi-objective manner and then concatenating the latent and
learned features from the encoding part as the encoded form of our data. Such a
deep learning-powered encoding can then safely be sent to a third party for
intensive training and hyperparameter tuning. With our proposed framework, we
can share our data and use third party tools without being under the threat of
revealing its original form. We empirically validate our results on unimodal
and multimodal settings, the latter following a vertical splitting system and
show improved performance over state-of-the-art. | [
"Sofiane Ouaari",
"Ali Burak Ünal",
"Mete Akgün",
"Nico Pfeifer"
] | 2023-09-08 16:41:25 | http://arxiv.org/abs/2309.04427v1 | http://arxiv.org/pdf/2309.04427v1 | 2309.04427v1 |
Parallel and Limited Data Voice Conversion Using Stochastic Variational Deep Kernel Learning | Typically, voice conversion is regarded as an engineering problem with
limited training data. The reliance on massive amounts of data hinders the
practical applicability of deep learning approaches, which have been
extensively researched in recent years. On the other hand, statistical methods
are effective with limited data but have difficulties in modelling complex
mapping functions. This paper proposes a voice conversion method that works
with limited data and is based on stochastic variational deep kernel learning
(SVDKL). At the same time, SVDKL enables the use of deep neural networks'
expressive capability as well as the high flexibility of the Gaussian process
as a Bayesian and non-parametric method. When the conventional kernel is
combined with the deep neural network, it is possible to estimate non-smooth
and more complex functions. Furthermore, the model's sparse variational
Gaussian process solves the scalability problem and, unlike the exact Gaussian
process, allows for the learning of a global mapping function for the entire
acoustic space. One of the most important aspects of the proposed scheme is
that the model parameters are trained using marginal likelihood optimization,
which considers both data fitting and model complexity. Considering the
complexity of the model reduces the amount of training data by increasing the
resistance to overfitting. To evaluate the proposed scheme, we examined the
model's performance with approximately 80 seconds of training data. The results
indicated that our method obtained a higher mean opinion score, smaller
spectral distortion, and better preference tests than the compared methods. | [
"Mohamadreza Jafaryani",
"Hamid Sheikhzadeh",
"Vahid Pourahmadi"
] | 2023-09-08 16:32:47 | http://arxiv.org/abs/2309.04420v1 | http://arxiv.org/pdf/2309.04420v1 | 2309.04420v1 |
Privacy Preserving Federated Learning with Convolutional Variational Bottlenecks | Gradient inversion attacks are a ubiquitous threat in federated learning as
they exploit gradient leakage to reconstruct supposedly private training data.
Recent work has proposed to prevent gradient leakage without loss of model
utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on
variational modeling. Without further analysis, it was shown that PRECODE
successfully protects against gradient inversion attacks. In this paper, we
make multiple contributions. First, we investigate the effect of PRECODE on
gradient inversion attacks to reveal its underlying working principle. We show
that variational modeling introduces stochasticity into the gradients of
PRECODE and the subsequent layers in a neural network. The stochastic gradients
of these layers prevent iterative gradient inversion attacks from converging.
Second, we formulate an attack that disables the privacy preserving effect of
PRECODE by purposefully omitting stochastic gradients during attack
optimization. To preserve the privacy preserving effect of PRECODE, our
analysis reveals that variational modeling must be placed early in the network.
However, early placement of PRECODE is typically not feasible due to reduced
model utility and the exploding number of additional model parameters.
Therefore, as a third contribution, we propose a novel privacy module -- the
Convolutional Variational Bottleneck (CVB) -- that can be placed early in a
neural network without suffering from these drawbacks. We conduct an extensive
empirical study on three seminal model architectures and six image
classification datasets. We find that all architectures are susceptible to
gradient leakage attacks, which can be prevented by our proposed CVB. Compared
to PRECODE, we show that our novel privacy module requires fewer trainable
parameters, and thus computational and communication costs, to effectively
preserve privacy. | [
"Daniel Scheliga",
"Patrick Mäder",
"Marco Seeland"
] | 2023-09-08 16:23:25 | http://arxiv.org/abs/2309.04515v1 | http://arxiv.org/pdf/2309.04515v1 | 2309.04515v1 |
Emergent learning in physical systems as feedback-based aging in a glassy landscape | By training linear physical networks to learn linear transformations, we
discern how their physical properties evolve due to weight update rules. Our
findings highlight a striking similarity between the learning behaviors of such
networks and the processes of aging and memory formation in disordered and
glassy systems. We show that the learning dynamics resembles an aging process,
where the system relaxes in response to repeated application of the feedback
boundary forces in the presence of an input force, thus encoding a memory of the
input-output relationship. With this relaxation comes an increase in the
correlation length, which is indicated by the two-point correlation function
for the components of the network. We also observe that the square root of the
mean-squared error as a function of epoch takes on a non-exponential form,
which is a typical feature of glassy systems. This physical interpretation
suggests that by encoding more detailed information into input and feedback
boundary forces, the process of emergent learning can be rather ubiquitous and,
thus, serve as a very early physical mechanism, from an evolutionary
standpoint, for learning in biological systems. | [
"Vidyesh Rao Anisetti",
"Ananth Kandala",
"J. M. Schwarz"
] | 2023-09-08 15:24:55 | http://arxiv.org/abs/2309.04382v1 | http://arxiv.org/pdf/2309.04382v1 | 2309.04382v1 |
Generalization Bounds: Perspectives from Information Theory and PAC-Bayes | A fundamental question in theoretical machine learning is generalization.
Over the past decades, the PAC-Bayesian approach has been established as a
flexible framework to address the generalization capabilities of machine
learning algorithms, and design new ones. Recently, it has garnered increased
interest due to its potential applicability for a variety of learning
algorithms, including deep neural networks. In parallel, an
information-theoretic view of generalization has developed, wherein the
relation between generalization and various information measures has been
established. This framework is intimately connected to the PAC-Bayesian
approach, and a number of results have been independently discovered in both
strands. In this monograph, we highlight this strong connection and present a
unified treatment of generalization. We present techniques and results that the
two perspectives have in common, and discuss the approaches and interpretations
that differ. In particular, we demonstrate how many proofs in the area share a
modular structure, through which the underlying ideas can be intuited. We pay
special attention to the conditional mutual information (CMI) framework;
analytical studies of the information complexity of learning algorithms; and
the application of the proposed methods to deep learning. This monograph is
intended to provide a comprehensive introduction to information-theoretic
generalization bounds and their connection to PAC-Bayes, serving as a
foundation from which the most recent developments are accessible. It is aimed
broadly towards researchers with an interest in generalization and theoretical
machine learning. | [
"Fredrik Hellström",
"Giuseppe Durisi",
"Benjamin Guedj",
"Maxim Raginsky"
] | 2023-09-08 15:23:40 | http://arxiv.org/abs/2309.04381v1 | http://arxiv.org/pdf/2309.04381v1 | 2309.04381v1 |
Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control | Seeing-eye robots are very useful tools for guiding visually impaired people,
potentially producing a huge societal impact given the low availability and
high cost of real guide dogs. Although a few seeing-eye robot systems have
already been demonstrated, none considered external tugs from humans, which
frequently occur in a real guide dog setting. In this paper, we simultaneously
train a locomotion controller that is robust to external tugging forces via
Reinforcement Learning (RL), and an external force estimator via supervised
learning. The controller ensures stable walking, and the force estimator
enables the robot to respond to the external forces from the human. These
forces are used to guide the robot to the global goal, which is unknown to the
robot, while the robot guides the human around nearby obstacles via a local
planner. Experimental results in simulation and on hardware show that our
controller is robust to external forces, and our seeing-eye system can
accurately detect force direction. We demonstrate our full seeing-eye robot
system on a real quadruped robot with a blindfolded human. The video can be
seen at our project page: https://bu-air-lab.github.io/guide_dog/ | [
"David DeFazio",
"Eisuke Hirota",
"Shiqi Zhang"
] | 2023-09-08 15:02:46 | http://arxiv.org/abs/2309.04370v2 | http://arxiv.org/pdf/2309.04370v2 | 2309.04370v2 |
Active Learning for Classifying 2D Grid-Based Level Completability | Determining the completability of levels generated by procedural generators
such as machine learning models can be challenging, as it can involve the use
of solver agents that often require a significant amount of time to analyze and
solve levels. Active learning is not yet widely adopted in game evaluations,
although it has been used successfully in natural language processing, image
and speech recognition, and computer vision, where the availability of labeled
data is limited or expensive. In this paper, we propose the use of active
learning for learning level completability classification. Through an active
learning approach, we train deep-learning models to classify the completability
of generated levels for Super Mario Bros., Kid Icarus, and a Zelda-like game.
We compare active learning for querying levels to label with completability
against random queries. Our results show using an active learning approach to
label levels results in better classifier performance with the same amount of
labeled data. | [
"Mahsa Bazzaz",
"Seth Cooper"
] | 2023-09-08 14:56:22 | http://arxiv.org/abs/2309.04367v1 | http://arxiv.org/pdf/2309.04367v1 | 2309.04367v1 |
Learning from Power Signals: An Automated Approach to Electrical Disturbance Identification Within a Power Transmission System | As power quality becomes a higher priority in the electric utility industry,
the amount of disturbance event data continues to grow. Utilities do not have
the required personnel to analyze each event by hand. This work presents an
automated approach for analyzing power quality events recorded by digital fault
recorders and power quality monitors operating within a power transmission
system. The automated approach leverages rule-based analytics to examine the
time and frequency domain characteristics of the voltage and current signals.
Customizable thresholds are set to categorize each disturbance event. The
events analyzed within this work include various faults, motor starting, and
incipient instrument transformer failure. Analytics for fourteen different
event types have been developed. The analytics were tested on 160 signal files
and yielded an accuracy of ninety-nine percent. Continuous, nominal signal data
analysis is performed using an approach coined as the cyclic histogram. The
cyclic histogram process will be integrated into the digital fault recorders
themselves to facilitate the detection of subtle signal variations that are too
small to trigger a disturbance event and that can occur over hours or days. In
addition to reducing memory requirements by a factor of 320, it is anticipated
that cyclic histogram processing will aid in identifying incipient events and
identifiers. This project is expected to save engineers time by automating the
classification of disturbance events and increase the reliability of the
transmission system by providing near real time detection and identification of
disturbances as well as prevention of problems before they occur. | [
"Jonathan D. Boyd",
"Joshua H. Tyler",
"Anthony M. Murphy",
"Donald R. Reising"
] | 2023-09-08 14:41:21 | http://arxiv.org/abs/2309.04361v1 | http://arxiv.org/pdf/2309.04361v1 | 2309.04361v1 |
Value-Compressed Sparse Column (VCSC): Sparse Matrix Storage for Redundant Data | Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression
formats for sparse matrices. However, both CSC and COO are general purpose and
cannot take advantage of any of the properties of the data other than sparsity,
such as data redundancy. Highly redundant sparse data is common in many machine
learning applications, such as genomics, and is often too large for in-core
computation using conventional sparse storage formats. In this paper, we
present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and
(2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of
high redundancy within a column to further compress data up to 3-fold over COO
and 2.25-fold over CSC, without significant negative impact to performance
characteristics. IVCSC extends VCSC by compressing index arrays through delta
encoding and byte-packing, achieving a 10-fold decrease in memory usage over
COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data
show that VCSC and IVCSC can be read in compressed form with little added
computational cost. These two novel compression formats offer a broadly useful
solution to encoding and reading redundant sparse data. | [
"Skyler Ruiter",
"Seth Wolfgang",
"Marc Tunnell",
"Timothy Triche Jr.",
"Erin Carrier",
"Zachary DeBruine"
] | 2023-09-08 14:24:40 | http://arxiv.org/abs/2309.04355v1 | http://arxiv.org/pdf/2309.04355v1 | 2309.04355v1 |
Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts | Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due
to their ability to decouple model size from inference efficiency by only
activating a small subset of the model parameters for any given input token. As
such, sparse MoEs have enabled unprecedented scalability, resulting in
tremendous successes across domains such as natural language processing and
computer vision. In this work, we instead explore the use of sparse MoEs to
scale-down Vision Transformers (ViTs) to make them more attractive for
resource-constrained vision applications. To this end, we propose a simplified
and mobile-friendly MoE design where entire images rather than individual
patches are routed to the experts. We also propose a stable MoE training
procedure that uses super-class information to guide the router. We empirically
show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off
between performance and efficiency than the corresponding dense ViTs. For
example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense
counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only
54M FLOPs inference cost, our MoE achieves an improvement of 4.66%. | [
"Erik Daxberger",
"Floris Weers",
"Bowen Zhang",
"Tom Gunter",
"Ruoming Pang",
"Marcin Eichner",
"Michael Emmersberger",
"Yinfei Yang",
"Alexander Toshev",
"Xianzhi Du"
] | 2023-09-08 14:24:10 | http://arxiv.org/abs/2309.04354v1 | http://arxiv.org/pdf/2309.04354v1 | 2309.04354v1 |
Zero-Shot Robustification of Zero-Shot Models With Foundation Models | Zero-shot inference is a powerful paradigm that enables the use of large
pretrained models for downstream classification tasks without further training.
However, these models are vulnerable to inherited biases that can impact their
performance. The traditional solution is fine-tuning, but this undermines the
key advantage of pretrained models, which is their ability to be used
out-of-the-box. We propose RoboShot, a method that improves the robustness of
pretrained model embeddings in a fully zero-shot fashion. First, we use
zero-shot language models (LMs) to obtain useful insights from task
descriptions. These insights are embedded and used to remove harmful and boost
useful components in embeddings -- without any supervision. Theoretically, we
provide a simple and tractable model for biases in zero-shot embeddings and
give a result characterizing under what conditions our approach can boost
performance. Empirically, we evaluate RoboShot on nine image and NLP
classification tasks and show an average improvement of 15.98% over several
zero-shot baselines. Additionally, we demonstrate that RoboShot is compatible
with a variety of pretrained and language models. | [
"Dyah Adila",
"Changho Shin",
"Linrong Cai",
"Frederic Sala"
] | 2023-09-08 14:15:47 | http://arxiv.org/abs/2309.04344v1 | http://arxiv.org/pdf/2309.04344v1 | 2309.04344v1 |
Online Submodular Maximization via Online Convex Optimization | We study monotone submodular maximization under general matroid constraints
in the online setting. We prove that online optimization of a large class of
submodular functions, namely, weighted threshold potential functions, reduces
to online convex optimization (OCO). This is precisely because functions in
this class admit a concave relaxation; as a result, OCO policies, coupled with
an appropriate rounding scheme, can be used to achieve sublinear regret in the
combinatorial setting. We show that our reduction extends to many different
versions of the online learning problem, including the dynamic regret, bandit,
and optimistic-learning settings. | [
"Tareq Si-Salem",
"Gözde Özcan",
"Iasonas Nikolaou",
"Evimaria Terzi",
"Stratis Ioannidis"
] | 2023-09-08 14:08:19 | http://arxiv.org/abs/2309.04339v2 | http://arxiv.org/pdf/2309.04339v2 | 2309.04339v2 |
Decreasing the Computing Time of Bayesian Optimization using Generalizable Memory Pruning | Bayesian optimization (BO) suffers from long computing times when processing
highly-dimensional or large data sets. These long computing times are a result
of the Gaussian process surrogate model having a polynomial time complexity
with the number of experiments. Running BO on high-dimensional or massive data
sets becomes intractable due to this time complexity scaling, in turn,
hindering experimentation. Alternative surrogate models have been developed to
reduce the computing utilization of the BO procedure, however, these methods
require mathematical alteration of the inherent surrogate function, pigeonholing
use into only that function. In this paper, we demonstrate a generalizable BO
wrapper of memory pruning and bounded optimization, capable of being used with
any surrogate model and acquisition function. Using this memory pruning
approach, we show a decrease in wall-clock computing times per experiment of BO
from a polynomially increasing pattern to a sawtooth pattern that has a
non-increasing trend without sacrificing convergence performance. Furthermore,
we illustrate the generalizability of the approach across two unique data sets,
two unique surrogate models, and four unique acquisition functions. All model
implementations are run on the MIT Supercloud state-of-the-art computing
hardware. | [
"Alexander E. Siemenn",
"Tonio Buonassisi"
] | 2023-09-08 14:05:56 | http://arxiv.org/abs/2309.04510v1 | http://arxiv.org/pdf/2309.04510v1 | 2309.04510v1 |
Encoding Multi-Domain Scientific Papers by Ensembling Multiple CLS Tokens | Many useful tasks on scientific documents, such as topic classification and
citation prediction, involve corpora that span multiple scientific domains.
Typically, such tasks are accomplished by representing the text with a vector
embedding obtained from a Transformer's single CLS token. In this paper, we
argue that using multiple CLS tokens could make a Transformer better specialize
to multiple scientific domains. We present Multi2SPE: it encourages each of
multiple CLS tokens to learn diverse ways of aggregating token embeddings, then
sums them up together to create a single vector representation. We also propose
our new multi-domain benchmark, Multi-SciDocs, to test scientific paper vector
encoders under multi-domain settings. We show that Multi2SPE reduces error by
up to 25 percent in multi-domain citation prediction, while requiring only a
negligible amount of computation in addition to one BERT forward pass. | [
"Ronald Seoh",
"Haw-Shiuan Chang",
"Andrew McCallum"
] | 2023-09-08 14:00:29 | http://arxiv.org/abs/2309.04333v1 | http://arxiv.org/pdf/2309.04333v1 | 2309.04333v1 |
Graph Neural Networks Use Graphs When They Shouldn't | Predictions over graphs play a crucial role in various domains, including
social networks, molecular biology, medicine, and more. Graph Neural Networks
(GNNs) have emerged as the dominant approach for learning on graph data.
Instances of graph labeling problems consist of the graph-structure (i.e., the
adjacency matrix), along with node-specific feature vectors. In some cases,
this graph-structure is non-informative for the predictive task. For instance,
molecular properties such as molar mass depend solely on the constituent atoms
(node features), and not on the molecular structure. While GNNs have the
ability to ignore the graph-structure in such cases, it is not clear that they
will. In this work, we show that GNNs actually tend to overfit the
graph-structure in the sense that they use it even when a better solution can
be obtained by ignoring it. We examine this phenomenon with respect to
different graph distributions and find that regular graphs are more robust to
this overfitting. We then provide a theoretical explanation for this
phenomenon, via analyzing the implicit bias of gradient-descent-based learning
of GNNs in this setting. Finally, based on our empirical and theoretical
findings, we propose a graph-editing method to mitigate the tendency of GNNs to
overfit graph-structures that should be ignored. We show that this method
indeed improves the accuracy of GNNs across multiple benchmarks. | [
"Maya Bechler-Speicher",
"Ido Amos",
"Ran Gilad-Bachrach",
"Amir Globerson"
] | 2023-09-08 13:59:18 | http://arxiv.org/abs/2309.04332v1 | http://arxiv.org/pdf/2309.04332v1 | 2309.04332v1 |
Generating the Ground Truth: Synthetic Data for Label Noise Research | Most real-world classification tasks suffer from label noise to some extent.
Such noise in the data adversely affects the generalization error of learned
models and complicates the evaluation of noise-handling methods, as their
performance cannot be accurately measured without clean labels. In label noise
research, typically either noisy or incomplex simulated data are accepted as a
baseline, into which additional noise with known properties is injected. In
this paper, we propose SYNLABEL, a framework that aims to improve upon the
aforementioned methodologies. It allows for creating a noiseless dataset
informed by real data, by either pre-specifying or learning a function and
defining it as the ground truth function from which labels are generated.
Furthermore, by resampling a number of values for selected features in the
function domain, evaluating the function and aggregating the resulting labels,
each data point can be assigned a soft label or label distribution. Such
distributions allow for direct injection and quantification of label noise. The
generated datasets serve as a clean baseline of adjustable complexity into
which different types of noise may be introduced. We illustrate how the
framework can be applied, how it enables quantification of label noise and how
it improves over existing methodologies. | [
"Sjoerd de Vries",
"Dirk Thierens"
] | 2023-09-08 13:31:06 | http://arxiv.org/abs/2309.04318v1 | http://arxiv.org/pdf/2309.04318v1 | 2309.04318v1 |