title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---
A Discussion on Generalization in Next-Activity Prediction | Next activity prediction aims to forecast the future behavior of running
process instances. Recent publications in this field predominantly employ deep
learning techniques and evaluate their prediction performance using publicly
available event logs. This paper presents empirical evidence that calls into
question the effectiveness of these current evaluation approaches. We show that
there is an enormous amount of example leakage in all of the commonly used
event logs, so that rather trivial prediction approaches perform almost as well
as ones that leverage deep learning. We further argue that designing robust
evaluations requires a more profound conceptual engagement with the topic of
next-activity prediction, and specifically with the notion of generalization to
new data. To this end, we present various prediction scenarios that necessitate
different types of generalization to guide future research. | [
"Luka Abb",
"Peter Pfeiffer",
"Peter Fettke",
"Jana-Rebecca Rehse"
] | 2023-09-18 09:42:36 | http://arxiv.org/abs/2309.09618v1 | http://arxiv.org/pdf/2309.09618v1 | 2309.09618v1 |
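The example-leakage phenomenon described in this abstract can be illustrated with a small sketch (our own toy code, not the authors' measurement): treat each trace as a sequence of activities and compute how many (prefix, next-activity) test examples already appear verbatim in the training log.

```python
from typing import List, Tuple

Trace = Tuple[str, ...]

def prefix_leakage(train: List[Trace], test: List[Trace]) -> float:
    """Illustrative only: fraction of next-activity examples
    (prefix, next activity) in the test log that also occur
    verbatim in the training log."""
    seen = set()
    for trace in train:
        for i in range(1, len(trace)):
            seen.add((trace[:i], trace[i]))
    examples = leaked = 0
    for trace in test:
        for i in range(1, len(trace)):
            examples += 1
            leaked += (trace[:i], trace[i]) in seen
    return leaked / examples if examples else 0.0
```

A leakage value near 1.0 would mean a trivial lookup predictor can match a deep model on that split.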
Gradpaint: Gradient-Guided Inpainting with Diffusion Models | Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved
remarkable results in conditional and unconditional image generation. The
pre-trained models can be adapted without further training to different
downstream tasks, by guiding their iterative denoising process at inference
time to satisfy additional constraints. For the specific task of image
inpainting, the current guiding mechanism relies on copying-and-pasting the
known regions from the input image at each denoising step. However, diffusion
models are strongly conditioned by the initial random noise, and therefore
struggle to harmonize predictions inside the inpainting mask with the real
parts of the input image, often producing results with unnatural artifacts.
Our method, dubbed GradPaint, steers the generation towards a globally
coherent image. At each step in the denoising process, we leverage the model's
"denoised image estimation" by calculating a custom loss measuring its
coherence with the masked input image. Our guiding mechanism uses the gradient
obtained from backpropagating this loss through the diffusion model itself.
GradPaint generalizes well to diffusion models trained on various datasets,
improving upon current state-of-the-art supervised and unsupervised methods. | [
"Asya Grechka",
"Guillaume Couairon",
"Matthieu Cord"
] | 2023-09-18 09:36:24 | http://arxiv.org/abs/2309.09614v1 | http://arxiv.org/pdf/2309.09614v1 | 2309.09614v1 |
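The guiding mechanism described above can be sketched in miniature (our own toy code; the real method backpropagates through a DDPM, which we replace here with a linear stand-in denoiser `A`): compute a coherence loss between the denoised estimate and the known regions of the input, then step against its gradient.

```python
import numpy as np

def masked_coherence_loss(x0_hat, y, mask):
    """Loss measuring agreement of the denoised estimate with the
    known (mask == 1) regions of the input image y. Illustrative only."""
    return float(np.sum((mask * (x0_hat - y)) ** 2))

def guided_step(x, y, mask, A, eta=0.1):
    """One gradient-guided update. The linear 'denoiser' x0_hat = A @ x
    is a toy stand-in for a diffusion model's denoised image estimation;
    the gradient of the coherence loss is analytic here."""
    x0_hat = A @ x
    grad = 2.0 * A.T @ (mask * (x0_hat - y))
    return x - eta * grad
```

Each such step nudges the sample toward agreement with the unmasked pixels while leaving the masked region to the generative prior.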
Proposition from the Perspective of Chinese Language: A Chinese Proposition Classification Evaluation Benchmark | Existing propositions often rely on logical constants for classification.
Compared with Western languages that lean towards hypotaxis such as English,
Chinese often relies on semantic or logical understanding rather than logical
connectives in daily expressions, exhibiting the characteristics of parataxis.
However, existing research has rarely paid attention to this issue, even
though accurately classifying these propositions is crucial for natural
language understanding and reasoning.
explicit and implicit propositions and propose a comprehensive multi-level
proposition classification system based on linguistics and logic.
Correspondingly, we create a large-scale Chinese proposition dataset PEACE from
multiple domains, covering all categories related to propositions. To evaluate
the Chinese proposition classification ability of existing models and explore
their limitations, we conduct evaluations on PEACE using several different
methods, including a rule-based method, SVM, BERT, RoBERTa, and ChatGPT.
Results show the importance of properly modeling the semantic features of
propositions. BERT has relatively good proposition classification capability,
but lacks cross-domain transferability. ChatGPT performs poorly, but its
classification ability can be improved by providing more proposition
information. Many issues are still far from being resolved and require further
study. | [
"Conghui Niu",
"Mengyang Hu",
"Lin Bo",
"Xiaoli He",
"Dong Yu",
"Pengyuan Liu"
] | 2023-09-18 09:18:39 | http://arxiv.org/abs/2309.09602v1 | http://arxiv.org/pdf/2309.09602v1 | 2309.09602v1 |
MEDL-U: Uncertainty-aware 3D Automatic Annotator based on Evidential Deep Learning | Advancements in deep learning-based 3D object detection necessitate the
availability of large-scale datasets. However, this requirement introduces the
challenge of manual annotation, which is often both burdensome and
time-consuming. To tackle this issue, the literature has seen the emergence of
several weakly supervised frameworks for 3D object detection which can
automatically generate pseudo labels for unlabeled data. Nevertheless, these
generated pseudo labels contain noise and are not as accurate as those labeled
by humans. In this paper, we present the first approach that addresses the
inherent ambiguities present in pseudo labels by introducing an Evidential Deep
Learning (EDL) based uncertainty estimation framework. Specifically, we propose
MEDL-U, an EDL framework based on MTrans, which not only generates pseudo
labels but also quantifies the associated uncertainties. However, applying EDL
to 3D object detection presents three primary challenges: (1) relatively lower
pseudo label quality in comparison to other autolabelers; (2) excessively high
evidential uncertainty estimates; and (3) lack of clear interpretability and
effective utilization of uncertainties for downstream tasks. We tackle these
issues through the introduction of an uncertainty-aware IoU-based loss, an
evidence-aware multi-task loss function, and the implementation of a
post-processing stage for uncertainty refinement. Our experimental results
demonstrate that probabilistic detectors trained using the outputs of MEDL-U
surpass deterministic detectors trained using outputs from previous 3D
annotators on the KITTI val set for all difficulty levels. Moreover, MEDL-U
achieves state-of-the-art results on the KITTI official test set compared to
existing 3D automatic annotators. | [
"Helbert Paat",
"Qing Lian",
"Weilong Yao",
"Tong Zhang"
] | 2023-09-18 09:14:03 | http://arxiv.org/abs/2309.09599v1 | http://arxiv.org/pdf/2309.09599v1 | 2309.09599v1 |
Latent assimilation with implicit neural representations for unknown dynamics | Data assimilation is crucial in a wide range of applications, but it often
faces challenges such as high computational costs due to data dimensionality
and incomplete understanding of underlying mechanisms. To address these
challenges, this study presents a novel assimilation framework, termed Latent
Assimilation with Implicit Neural Representations (LAINR). By introducing
Spherical Implicit Neural Representations (SINR) along with a data-driven
uncertainty estimator of the trained neural networks, LAINR enhances the
efficiency of the assimilation process. Experimental results indicate that
LAINR holds a certain advantage over existing AutoEncoder-based methods, in
terms of both accuracy and efficiency. | [
"Zhuoyuan Li",
"Bin Dong",
"Pingwen Zhang"
] | 2023-09-18 08:33:23 | http://arxiv.org/abs/2309.09574v1 | http://arxiv.org/pdf/2309.09574v1 | 2309.09574v1 |
New Bounds on the Accuracy of Majority Voting for Multi-Class Classification | Majority voting is a simple mathematical function that returns the value that
appears most often in a set. As a popular decision fusion technique, the
majority voting function (MVF) finds applications in resolving conflicts, where
a number of independent voters report their opinions on a classification
problem. Despite its importance and its various applications in ensemble
learning, data crowd-sourcing, remote sensing, and data oracles for
blockchains, the accuracy of the MVF for the general multi-class classification
problem has remained unknown. In this paper, we derive a new upper bound on the
accuracy of the MVF for the multi-class classification problem. More
specifically, we show that under certain conditions, the error rate of the MVF
exponentially decays toward zero as the number of independent voters increases.
Conversely, the error rate of the MVF exponentially grows with the number of
independent voters if these conditions are not met.
We first explore the problem for independent and identically distributed
voters where we assume that every voter follows the same conditional
probability distribution of voting for different classes, given the true
classification of the data point. Next, we extend our results for the case
where the voters are independent but non-identically distributed. Using the
derived results, we then provide a discussion on the accuracy of the truth
discovery algorithms. We show that in the best-case scenarios, truth discovery
algorithms operate as an amplified MVF and thereby achieve a small error rate
only when the MVF achieves a small error rate, and vice versa, achieve a large
error rate when the MVF also achieves a large error rate. In the worst-case
scenario, the truth discovery algorithms may achieve a higher error rate than
the MVF. Finally, we confirm our theoretical results using numerical
simulations. | [
"Sina Aeeneh",
"Nikola Zlatanov",
"Jiangshan Yu"
] | 2023-09-18 08:16:41 | http://arxiv.org/abs/2309.09564v1 | http://arxiv.org/pdf/2309.09564v1 | 2309.09564v1 |
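The exponential-decay result can be illustrated with a toy Monte-Carlo simulation (our own sketch, not the paper's setup): with i.i.d. voters that pick the true class with higher probability than any wrong class, the MVF error rate shrinks rapidly as voters are added.

```python
import numpy as np

def mvf(votes):
    """Majority voting function: return the most frequent value
    (ties resolved toward the smallest label). Illustrative only."""
    vals, counts = np.unique(votes, return_counts=True)
    return int(vals[np.argmax(counts)])

def mvf_error_rate(p_correct, n_voters, n_classes=3, trials=2000, seed=0):
    """Monte-Carlo estimate of the MVF error rate. Each i.i.d. voter
    reports the true class 0 with probability p_correct, otherwise a
    uniformly chosen wrong class."""
    rng = np.random.default_rng(seed)
    wrong = list(range(1, n_classes))
    errors = 0
    for _ in range(trials):
        votes = [0 if rng.random() < p_correct else int(rng.choice(wrong))
                 for _ in range(n_voters)]
        errors += mvf(np.array(votes)) != 0
    return errors / trials
```

Here the condition for decay holds (0.6 correct vs. 0.2 per wrong class), so adding voters drives the error toward zero.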
Utilizing Whisper to Enhance Multi-Branched Speech Intelligibility Prediction Model for Hearing Aids | Automated assessment of speech intelligibility in hearing aid (HA) devices is
of great importance. Our previous work introduced a non-intrusive
multi-branched speech intelligibility prediction model called MBI-Net, which
achieved top performance in the Clarity Prediction Challenge 2022. Based on the
promising results of the MBI-Net model, we aim to further enhance its
performance by leveraging Whisper embeddings to enrich acoustic features. In
this study, we propose two improved models, namely MBI-Net+ and MBI-Net++.
MBI-Net+ maintains the same model architecture as MBI-Net, but replaces
self-supervised learning (SSL) speech embeddings with Whisper embeddings to
deploy cross-domain features. On the other hand, MBI-Net++ further employs a
more elaborate design, incorporating an auxiliary task to predict frame-level
and utterance-level scores of the objective speech intelligibility metric HASPI
(Hearing Aid Speech Perception Index) and multi-task learning. Experimental
results confirm that both MBI-Net++ and MBI-Net+ achieve better prediction
performance than MBI-Net in terms of multiple metrics, and MBI-Net++ is better
than MBI-Net+. | [
"Ryandhimas E. Zezario",
"Fei Chen",
"Chiou-Shann Fuh",
"Hsin-Min Wang",
"Yu Tsao"
] | 2023-09-18 07:51:09 | http://arxiv.org/abs/2309.09548v1 | http://arxiv.org/pdf/2309.09548v1 | 2309.09548v1 |
Quantum Wasserstein GANs for State Preparation at Unseen Points of a Phase Diagram | Generative models, and in particular Generative Adversarial Networks
(GANs), have become a very popular and powerful data generation tool. In recent years,
major progress has been made in extending this concept into the quantum realm.
However, most of the current methods focus on generating classes of states that
were supplied in the input set and seen at the training time. In this work, we
propose a new hybrid classical-quantum method based on quantum Wasserstein GANs
that overcomes this limitation. It allows the model to learn the function
governing the measurement expectations of the supplied states and to generate
new states that were not part of the input set, but whose expectations follow
the same underlying function. | [
"Wiktor Jurasz",
"Christian B. Mendl"
] | 2023-09-18 07:39:51 | http://arxiv.org/abs/2309.09543v1 | http://arxiv.org/pdf/2309.09543v1 | 2309.09543v1 |
FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural Networks | Federated training of Graph Neural Networks (GNN) has become popular in
recent years due to its ability to perform graph-related tasks under data
isolation scenarios while preserving data privacy. However, graph heterogeneity
issues in federated GNN systems continue to pose challenges. Existing
frameworks address the problem by representing local tasks using different
statistics and relating them through a simple aggregation mechanism. However,
these approaches suffer from limited effectiveness in two respects: the low
quality of their task-relatedness quantification and their inability to exploit
the collaboration structure. To address these issues, we propose FedGKD, a novel
federated GNN framework that utilizes a novel client-side graph dataset
distillation method to extract task features that better describe
task-relatedness, and introduces a novel server-side aggregation mechanism that
is aware of the global collaboration structure. We conduct extensive
experiments on six real-world datasets of different scales, demonstrating the
superior performance of our framework. | [
"Qiying Pan",
"Ruofan Wu",
"Tengfei Liu",
"Tianyi Zhang",
"Yifei Zhu",
"Weiqiang Wang"
] | 2023-09-18 06:55:14 | http://arxiv.org/abs/2309.09517v3 | http://arxiv.org/pdf/2309.09517v3 | 2309.09517v3 |
Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech | Text language models have shown remarkable zero-shot capability in
generalizing to unseen tasks when provided with well-formulated instructions.
However, existing studies in speech processing primarily focus on limited or
specific tasks. Moreover, the lack of standardized benchmarks hinders a fair
comparison across different approaches. Thus, we present Dynamic-SUPERB, a
benchmark designed for building universal speech models capable of leveraging
instruction tuning to perform multiple tasks in a zero-shot fashion. To achieve
comprehensive coverage of diverse speech tasks and harness instruction tuning,
we invite the community to collaborate and contribute, facilitating the dynamic
growth of the benchmark. To initiate, Dynamic-SUPERB features 55 evaluation
instances by combining 33 tasks and 22 datasets. This spans a broad spectrum of
dimensions, providing a comprehensive platform for evaluation. Additionally, we
propose several approaches to establish benchmark baselines. These include the
utilization of speech models, text language models, and the multimodal encoder.
Evaluation results indicate that while these baselines perform reasonably on
seen tasks, they struggle with unseen ones. We also conducted an ablation study
to assess the robustness and seek improvements in the performance. We release
all materials to the public and welcome researchers to collaborate on the
project, advancing technologies in the field together. | [
"Chien-yu Huang",
"Ke-Han Lu",
"Shih-Heng Wang",
"Chi-Yuan Hsiao",
"Chun-Yi Kuan",
"Haibin Wu",
"Siddhant Arora",
"Kai-Wei Chang",
"Jiatong Shi",
"Yifan Peng",
"Roshan Sharma",
"Shinji Watanabe",
"Bhiksha Ramakrishnan",
"Shady Shehata",
"Hung-yi Lee"
] | 2023-09-18 06:43:30 | http://arxiv.org/abs/2309.09510v1 | http://arxiv.org/pdf/2309.09510v1 | 2309.09510v1 |
Outlier-Insensitive Kalman Filtering: Theory and Applications | State estimation of dynamical systems from noisy observations is a
fundamental task in many applications. It is commonly addressed using the
linear Kalman filter (KF), whose performance can significantly degrade in the
presence of outliers in the observations, due to the sensitivity of its convex
quadratic objective function. To mitigate such behavior, outlier detection
algorithms can be applied. In this work, we propose a parameter-free algorithm
which mitigates the harmful effect of outliers while requiring only a short
iterative process of the standard update step of the KF. To that end, we model
each potential outlier as a normal process with unknown variance and apply
online estimation through either expectation maximization or alternating
maximization algorithms. Simulations and field experiment evaluations
demonstrate competitive performance of our method, showcasing its robustness to
outliers in filtering scenarios compared to alternative algorithms. | [
"Shunit Truzman",
"Guy Revach",
"Nir Shlezinger",
"Itzik Klein"
] | 2023-09-18 06:33:28 | http://arxiv.org/abs/2309.09505v1 | http://arxiv.org/pdf/2309.09505v1 | 2309.09505v1 |
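The update described above can be sketched for a scalar state (our own simplified alternating-maximization version; constants and names are illustrative, not the authors' implementation): model each observation's potential outlier as extra Gaussian noise with unknown variance, and alternate between the standard KF measurement update and re-estimating that variance from the residual.

```python
def robust_update(x_pred, P_pred, y, R, n_iter=10):
    """One measurement update of a scalar Kalman filter in which a
    potential outlier is modelled as additional Gaussian noise with
    unknown variance `extra`, re-estimated by alternating maximization.
    Toy sketch, not the paper's exact algorithm."""
    x, extra, K = x_pred, 0.0, 0.0
    for _ in range(n_iter):
        S = P_pred + R + extra        # innovation variance incl. outlier term
        K = P_pred / S                # gain shrinks as `extra` grows
        x = x_pred + K * (y - x_pred)
        extra = max(0.0, (y - x) ** 2 - R)  # re-estimate outlier variance
    return x, (1.0 - K) * P_pred
```

For an inlier the extra variance collapses to zero and the update reduces to the standard KF step; for a gross outlier the gain shrinks and the estimate stays near the prediction.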
Machine Learning Approaches to Predict and Detect Early-Onset of Digital Dermatitis in Dairy Cows using Sensor Data | The aim of this study was to employ machine learning algorithms based on
sensor behavior data for (1) early-onset detection of digital dermatitis (DD);
and (2) DD prediction in dairy cows. The ultimate goal is to set up early
warning tools for DD prediction, which would then allow better monitoring and
management of DD under commercial settings, decreasing DD prevalence and
severity while improving animal welfare. A machine learning model capable of
predicting and detecting digital dermatitis in cows housed under free-stall
conditions, based on behavior sensor data, has been proposed and tested in
this exploratory study. The model for DD detection on
day 0 of the appearance of the clinical signs has reached an accuracy of 79%,
while the model for prediction of DD 2 days prior to the appearance of the
first clinical signs has reached an accuracy of 64%. The proposed machine
learning models could help to develop a real-time automated tool for monitoring
and diagnostic of DD in lactating dairy cows, based on behavior sensor data
under conventional dairy environments. Results showed that alterations in
behavioral patterns at individual levels can be used as inputs in an early
warning system for herd management in order to detect variances in health of
individual cows. | [
"Jennifer Magana",
"Dinu Gavojdian",
"Yakir Menachem",
"Teddy Lazebnik",
"Anna Zamansky",
"Amber Adams-Progar"
] | 2023-09-18 06:08:26 | http://arxiv.org/abs/2309.10010v1 | http://arxiv.org/pdf/2309.10010v1 | 2309.10010v1 |
Search and Learning for Unsupervised Text Generation | With the advances of deep learning techniques, text generation is attracting
increasing interest in the artificial intelligence (AI) community, because of
its wide applications and because it is an essential component of AI.
Traditional text generation systems are trained in a supervised way, requiring
massive labeled parallel corpora. In this paper, I will introduce our recent
work on search and learning approaches to unsupervised text generation, where a
heuristic objective function estimates the quality of a candidate sentence, and
discrete search algorithms generate a sentence by maximizing the search
objective. A machine learning model further learns from the search results to
smooth out noise and improve efficiency. Our approach is important to the
industry for building minimal viable products for a new task; it also has high
social impacts for saving human annotation labor and for processing
low-resource languages. | [
"Lili Mou"
] | 2023-09-18 05:44:11 | http://arxiv.org/abs/2309.09497v1 | http://arxiv.org/pdf/2309.09497v1 | 2309.09497v1 |
Mechanic Maker 2.0: Reinforcement Learning for Evaluating Generated Rules | Automated game design (AGD), the study of automatically generating game
rules, has a long history in technical games research. AGD approaches generally
rely on approximations of human play, either objective functions or AI agents.
However, the majority of these approximators are static, meaning they do
not reflect human players' ability to learn and improve in a game. In this
paper, we investigate the application of Reinforcement Learning (RL) as an
approximator for human play for rule generation. We recreate the classic AGD
environment Mechanic Maker in Unity as a new, open-source rule generation
framework. Our results demonstrate that RL produces distinct sets of rules from
an A* agent baseline, which may be more usable by humans. | [
"Johor Jara Gonzalez",
"Seth Cooper",
"Matthew Guzdial"
] | 2023-09-18 04:15:09 | http://arxiv.org/abs/2309.09476v3 | http://arxiv.org/pdf/2309.09476v3 | 2309.09476v3 |
Reconstructing Existing Levels through Level Inpainting | Procedural Content Generation (PCG) and Procedural Content Generation via
Machine Learning (PCGML) have been used in prior work for generating levels in
various games. This paper introduces Content Augmentation and focuses on the
subproblem of level inpainting, which involves reconstructing and extending
video game levels. Drawing inspiration from image inpainting, we adapt two
techniques from this domain to address our specific use case. We present two
approaches for level inpainting: an Autoencoder and a U-net. Through a
comprehensive case study, we demonstrate their superior performance compared to
a baseline method and discuss their relative merits. Furthermore, we provide a
practical demonstration of both approaches for the level inpainting task and
offer insights into potential directions for future research. | [
"Johor Jara Gonzalez",
"Matthew Guzdial"
] | 2023-09-18 04:10:27 | http://arxiv.org/abs/2309.09472v3 | http://arxiv.org/pdf/2309.09472v3 | 2309.09472v3 |
Face-Driven Zero-Shot Voice Conversion with Memory-based Face-Voice Alignment | This paper presents a novel task, zero-shot voice conversion based on face
images (zero-shot FaceVC), which aims at converting the voice characteristics
of an utterance from any source speaker to a newly coming target speaker,
solely relying on a single face image of the target speaker. To address this
task, we propose a face-voice memory-based zero-shot FaceVC method. This method
leverages a memory-based face-voice alignment module, in which slots act as the
bridge to align these two modalities, allowing for the capture of voice
characteristics from face images. A mixed supervision strategy is also
introduced to mitigate the long-standing issue of the inconsistency between
training and inference phases for voice conversion tasks. To obtain
speaker-independent content-related representations, we transfer the knowledge
from a pretrained zero-shot voice conversion model to our zero-shot FaceVC
model. Considering the differences between FaceVC and traditional voice
conversion tasks, systematic subjective and objective metrics are designed to
thoroughly evaluate the homogeneity, diversity and consistency of voice
characteristics controlled by face images. Through extensive experiments, we
demonstrate the superiority of our proposed method on the zero-shot FaceVC
task. Samples are presented on our demo website. | [
"Zheng-Yan Sheng",
"Yang Ai",
"Yan-Nian Chen",
"Zhen-Hua Ling"
] | 2023-09-18 04:08:02 | http://arxiv.org/abs/2309.09470v1 | http://arxiv.org/pdf/2309.09470v1 | 2309.09470v1 |
Active anomaly detection based on deep one-class classification | Active learning has been utilized as an efficient tool in building anomaly
detection models by leveraging expert feedback. In an active learning
framework, a model queries samples to be labeled by experts and re-trains the
model with the labeled data samples. This reduces the burden of obtaining
annotated datasets while improving anomaly detection performance. However, most of the
existing studies focus on helping experts identify as many abnormal data
samples as possible, which is a sub-optimal approach for one-class
classification-based deep anomaly detection. In this paper, we tackle two
essential problems of active learning for Deep SVDD: query strategy and
semi-supervised learning method. First, rather than solely identifying
anomalies, our query strategy selects uncertain samples according to an
adaptive boundary. Second, we apply noise contrastive estimation in training a
one-class classification model to incorporate both labeled normal and abnormal
data effectively. Our analysis shows that the proposed query strategy and
semi-supervised loss each improve the active learning process for anomaly
detection individually, and improve it further when combined, across seven
anomaly detection datasets. | [
"Minkyung Kim",
"Junsik Kim",
"Jongmin Yu",
"Jun Kyun Choi"
] | 2023-09-18 03:56:45 | http://arxiv.org/abs/2309.09465v1 | http://arxiv.org/pdf/2309.09465v1 | 2309.09465v1 |
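The boundary-based query strategy can be illustrated with a toy sketch (our own code; in Deep SVDD the boundary radius is learned, here it is given): instead of querying the highest anomaly scores, query the samples whose scores lie closest to the decision boundary, i.e. the most uncertain ones.

```python
import numpy as np

def query_uncertain(scores, radius, k=2):
    """Illustrative query strategy: return indices of the k samples whose
    anomaly scores are closest to the decision boundary (radius), rather
    than the k highest-scoring samples."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(np.abs(scores - radius), kind="stable")
    return order[:k].tolist()
```

A top-score strategy would query index 3 first; the uncertainty strategy instead queries the borderline samples.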
Reducing Adversarial Training Cost with Gradient Approximation | Deep learning models have achieved state-of-the-art performances in various
domains, yet they are vulnerable to inputs with well-crafted but small
perturbations, known as adversarial examples (AEs). Among many
strategies to improve the model robustness against AEs, Projected Gradient
Descent (PGD) based adversarial training is one of the most effective methods.
Unfortunately, the prohibitive computational overhead of generating strong
enough AEs, due to the maximization of the loss function, sometimes makes the
regular PGD adversarial training impractical when using larger and more
complicated models. In this paper, we propose that the adversarial loss can be
approximated by the partial sum of Taylor series. Furthermore, we approximate
the gradient of adversarial loss and propose a new and efficient adversarial
training method, adversarial training with gradient approximation (GAAT), to
reduce the cost of building up robust models. Additionally, extensive
experiments demonstrate that this efficiency improvement can be achieved
without any or with very little loss in accuracy on natural and adversarial
examples, which show that our proposed method saves up to 60\% of the training
time with comparable model test accuracy on MNIST, CIFAR-10 and CIFAR-100
datasets. | [
"Huihui Gong"
] | 2023-09-18 03:55:41 | http://arxiv.org/abs/2309.09464v3 | http://arxiv.org/pdf/2309.09464v3 | 2309.09464v3 |
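The core approximation can be illustrated with a toy differentiable loss (our own sketch; the paper applies the idea to deep networks and higher-order partial sums): replace the adversarial loss L(x + delta) by its first-order Taylor partial sum L(x) + grad L(x) . delta, which avoids re-evaluating the full loss at the perturbed point.

```python
import numpy as np

def loss(x, w):
    """Toy differentiable loss L(x) = 0.5 * ||w * x||^2. Illustrative only."""
    return float(0.5 * np.sum((w * x) ** 2))

def loss_grad(x, w):
    """Analytic gradient of the toy loss with respect to x."""
    return w ** 2 * x

def taylor_adv_loss(x, delta, w):
    """First-order Taylor partial sum approximating the adversarial
    loss: L(x + delta) ~= L(x) + grad L(x) . delta."""
    return loss(x, w) + float(loss_grad(x, w) @ delta)
```

For small perturbations the approximation tracks the true adversarial loss closely, which is what makes the saved computation cheap to trade for.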
Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles | The key assumption underlying linear Markov Decision Processes (MDPs) is that
the learner has access to a known feature map $\phi(x, a)$ that maps
state-action pairs to $d$-dimensional vectors, and that the rewards and
transitions are linear functions in this representation. But where do these
features come from? In the absence of expert domain knowledge, a tempting
strategy is to use the ``kitchen sink" approach and hope that the true features
are included in a much larger set of potential features. In this paper we
revisit linear MDPs from the perspective of feature selection. In a $k$-sparse
linear MDP, there is an unknown subset $S \subset [d]$ of size $k$ containing
all the relevant features, and the goal is to learn a near-optimal policy in
only poly$(k,\log d)$ interactions with the environment. Our main result is the
first polynomial-time algorithm for this problem. In contrast, earlier works
either made prohibitively strong assumptions that obviated the need for
exploration, or required solving computationally intractable optimization
problems.
Along the way we introduce the notion of an emulator: a succinct approximate
representation of the transitions that suffices for computing certain Bellman
backups. Since linear MDPs are a non-parametric model, it is not even obvious
whether polynomial-sized emulators exist. We show that they do exist and can be
computed efficiently via convex programming.
As a corollary of our main result, we give an algorithm for learning a
near-optimal policy in block MDPs whose decoding function is a low-depth
decision tree; the algorithm runs in quasi-polynomial time and takes a
polynomial number of samples. This can be seen as a reinforcement learning
analogue of classic results in computational learning theory. Furthermore, it
gives a natural model where improving the sample complexity via representation
learning is computationally feasible. | [
"Noah Golowich",
"Ankur Moitra",
"Dhruv Rohatgi"
] | 2023-09-18 03:35:48 | http://arxiv.org/abs/2309.09457v2 | http://arxiv.org/pdf/2309.09457v2 | 2309.09457v2 |
CaT: Balanced Continual Graph Learning with Graph Condensation | Continual graph learning (CGL) aims to continuously update a graph
model with graph data being fed in a streaming manner. Since the model easily
forgets previously learned knowledge when training with new-coming data, the
catastrophic forgetting problem has been the major focus in CGL. Recent
replay-based methods intend to solve this problem by updating the model using
both (1) the entire new-coming data and (2) a sampling-based memory bank that
stores replayed graphs to approximate the distribution of historical data.
After updating the model, a new replayed graph sampled from the incoming graph
will be added to the existing memory bank. Although these methods are intuitive
and effective for CGL, two issues are identified in this paper. Firstly,
most sampling-based methods struggle to fully capture the historical
distribution when the storage budget is tight. Secondly, a significant data
imbalance exists in terms of the scales of the complex new-coming graph data
and the lightweight memory bank, resulting in unbalanced training. To solve
these issues, a Condense and Train (CaT) framework is proposed in this paper.
Prior to each model update, the new-coming graph is condensed to a small yet
informative synthesised replayed graph, which is then stored in a Condensed
Graph Memory with historical replay graphs. In the continual learning phase, a
Training in Memory scheme is used to update the model directly with the
Condensed Graph Memory rather than the whole new-coming graph, which alleviates
the data imbalance problem. Extensive experiments conducted on four benchmark
datasets successfully demonstrate superior performances of the proposed CaT
framework in terms of effectiveness and efficiency. The code has been released
on https://github.com/superallen13/CaT-CGL. | [
"Yilun Liu",
"Ruihong Qiu",
"Zi Huang"
] | 2023-09-18 03:28:49 | http://arxiv.org/abs/2309.09455v2 | http://arxiv.org/pdf/2309.09455v2 | 2309.09455v2 |
Asymptotically Efficient Online Learning for Censored Regression Models Under Non-I.I.D Data | The asymptotically efficient online learning problem is investigated for
stochastic censored regression models, which arise from various fields of
learning and statistics but up to now still lacks comprehensive theoretical
studies on the efficiency of the learning algorithms. For this, we propose a
two-step online algorithm, where the first step focuses on achieving algorithm
convergence, and the second step is dedicated to improving the estimation
performance. Under a general excitation condition on the data, we show that our
algorithm is strongly consistent and asymptotically normal by employing the
stochastic Lyapunov function method and limit theories for martingales.
Moreover, we show that the covariances of the estimates can achieve the
Cramer-Rao (C-R) bound asymptotically, indicating that the performance of the
proposed algorithm is the best possible that one can expect in general. Unlike
most of the existing works, our results are obtained without resorting to the
traditionally used but stringent conditions such as the independent and
identically distributed (i.i.d.) assumption on the data, and thus our results do not exclude
applications to stochastic dynamical systems with feedback. A numerical example
is also provided to illustrate the superiority of the proposed online algorithm
over the existing related ones in the literature. | [
"Lantian Zhang",
"Lei Guo"
] | 2023-09-18 03:28:48 | http://arxiv.org/abs/2309.09454v2 | http://arxiv.org/pdf/2309.09454v2 | 2309.09454v2 |
On the Use of the Kantorovich-Rubinstein Distance for Dimensionality Reduction | The goal of this thesis is to study the use of the Kantorovich-Rubinstein
distance to build a descriptor of sample complexity in classification
problems. The idea is to use the fact that the Kantorovich-Rubinstein distance
is a metric in the space of measures that also takes into account the geometry
and topology of the underlying metric space. We associate to each class of
points a measure and thus study the geometrical information that we can obtain
from the Kantorovich-Rubinstein distance between those measures. We show that a
large Kantorovich-Rubinstein distance between those measures allows one to
conclude that there exists a 1-Lipschitz classifier that classifies the classes
of points well. We also discuss the limitations of the Kantorovich-Rubinstein distance
as a descriptor. | [
"Gaël Giordano"
] | 2023-09-18 02:49:51 | http://arxiv.org/abs/2309.09442v1 | http://arxiv.org/pdf/2309.09442v1 | 2309.09442v1 |
DeepHEN: quantitative prediction essential lncRNA genes and rethinking essentialities of lncRNA genes | Gene essentiality refers to the degree to which a gene is necessary for the
survival and reproductive efficacy of a living organism. Although the
essentiality of non-coding genes has been documented, there are still aspects
of non-coding genes' essentiality that are unknown to us. For example, we do
not know the contribution of sequence features and network spatial features to
essentiality. As a consequence, in this work, we propose DeepHEN, which can
answer the above question. By building a new lncRNA-protein-protein network
and utilizing both representation learning and graph neural networks, we
successfully build our DeepHEN models that can predict the essentiality of
lncRNA genes. Compared to other methods for predicting the essentiality of
lncRNA genes, our DeepHEN model not only tells whether sequence features or
network spatial features have a greater influence on essentiality but also
addresses the overfitting issue of those methods caused by the low number of
essential lncRNA genes, as evidenced by the results of enrichment analysis. | [
"Hanlin Zhang",
"Wenzheng Cheng"
] | 2023-09-18 02:46:33 | http://arxiv.org/abs/2309.10008v1 | http://arxiv.org/pdf/2309.10008v1 | 2309.10008v1 |
Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive Autonomous Vehicles using AutoDRIVE Ecosystem | This work presents a modular and parallelizable multi-agent deep
reinforcement learning framework for imbibing cooperative as well as
competitive behaviors within autonomous vehicles. We introduce AutoDRIVE
Ecosystem as an enabler to develop physically accurate and graphically
realistic digital twins of Nigel and F1TENTH, two scaled autonomous vehicle
platforms with unique qualities and capabilities, and leverage this ecosystem
to train and deploy multi-agent reinforcement learning policies. We first
investigate an intersection traversal problem using a set of cooperative
vehicles (Nigel) that share limited state information with each other in single
as well as multi-agent learning settings using a common policy approach. We
then investigate an adversarial head-to-head autonomous racing problem using a
different set of vehicles (F1TENTH) in a multi-agent learning setting using an
individual policy approach. In either set of experiments, a decentralized
learning architecture was adopted, which allowed robust training and testing of
the approaches in stochastic environments, since the agents were mutually
independent and exhibited asynchronous motion behavior. The problems were
further aggravated by providing the agents with sparse observation spaces and
requiring them to sample control commands that implicitly satisfied the imposed
kinodynamic as well as safety constraints. The experimental results for both
problem statements are reported in terms of quantitative metrics and
qualitative remarks for training as well as deployment phases. | [
"Tanmay Vilas Samak",
"Chinmay Vilas Samak",
"Venkat Krovi"
] | 2023-09-18 02:43:59 | http://arxiv.org/abs/2309.10007v2 | http://arxiv.org/pdf/2309.10007v2 | 2309.10007v2 |
An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination | Most deep anomaly detection models are based on learning normality from
datasets due to the difficulty of defining abnormality by its diverse and
inconsistent nature. Therefore, it has been a common practice to learn
normality under the assumption that anomalous data are absent in a training
dataset, which we call the normality assumption. However, in practice, the
normality assumption is often violated due to the nature of real data
distributions, which include anomalous tails, i.e., a contaminated dataset.
Consequently, the gap between the assumption and the actual training data
detrimentally affects the learning of an anomaly detection model. In this work, we
propose a learning framework to reduce this gap and achieve better normality
representation. Our key idea is to identify sample-wise normality and utilize
it as an importance weight, which is updated iteratively during the training.
Our framework is designed to be model-agnostic and hyperparameter insensitive
so that it applies to a wide range of existing methods without careful
parameter tuning. We apply our framework to three different representative
approaches of deep anomaly detection that are classified into one-class
classification-, probabilistic model-, and reconstruction-based approaches. In
addition, we address the importance of a termination condition for iterative
methods and propose a termination criterion inspired by the anomaly detection
objective. We validate that our framework improves the robustness of the
anomaly detection models under different levels of contamination ratios on five
anomaly detection benchmark datasets and two image datasets. On various
contaminated datasets, our framework improves the performance of three
representative anomaly detection methods, measured by area under the ROC curve. | [
"Minkyung Kim",
"Jongmin Yu",
"Junsik Kim",
"Tae-Hyun Oh",
"Jun Kyun Choi"
] | 2023-09-18 02:36:19 | http://arxiv.org/abs/2309.09436v1 | http://arxiv.org/pdf/2309.09436v1 | 2309.09436v1 |
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM | Recently, Large Language Models (LLMs) have made significant advancements and
are now widely used across various domains. Unfortunately, there has been a
rising concern that LLMs can be misused to generate harmful or malicious
content. Though a line of research has focused on aligning LLMs with human
values and preventing them from producing inappropriate content, such
alignments are usually vulnerable and can be bypassed by alignment-breaking
attacks via adversarially optimized or handcrafted jailbreaking prompts. In
this work, we introduce a Robustly Aligned LLM (RA-LLM) to defend against
potential alignment-breaking attacks. RA-LLM can be directly constructed upon
an existing aligned LLM with a robust alignment checking function, without
requiring any expensive retraining or fine-tuning process of the original LLM.
Furthermore, we also provide a theoretical analysis for RA-LLM to verify its
effectiveness in defending against alignment-breaking attacks. Through
real-world experiments on open-source large language models, we demonstrate
that RA-LLM can successfully defend against both state-of-the-art adversarial
prompts and popular handcrafted jailbreaking prompts by reducing their attack
success rates from nearly 100\% to around 10\% or less. | [
"Bochuan Cao",
"Yuanpu Cao",
"Lu Lin",
"Jinghui Chen"
] | 2023-09-18 02:07:22 | http://arxiv.org/abs/2309.14348v1 | http://arxiv.org/pdf/2309.14348v1 | 2309.14348v1 |
Joint Demosaicing and Denoising with Double Deep Image Priors | Demosaicing and denoising of RAW images are crucial steps in the processing
pipeline of modern digital cameras. As only a third of the color information
required to produce a digital image is captured by the camera sensor, the
process of demosaicing is inherently ill-posed. The presence of noise further
exacerbates this problem. Performing these two steps sequentially may distort
the content of the captured RAW images and accumulate errors from one step to
another. Recent deep neural-network-based approaches have shown the
effectiveness of joint demosaicing and denoising to mitigate such challenges.
However, these methods typically require a large number of training samples and
do not generalize well to different types and intensities of noise. In this
paper, we propose a novel joint demosaicing and denoising method, dubbed
JDD-DoubleDIP, which operates directly on a single RAW image without requiring
any training data. We validate the effectiveness of our method on two popular
datasets -- Kodak and McMaster -- with various noises and noise intensities.
The experimental results show that our method consistently outperforms other
compared methods in terms of PSNR, SSIM, and qualitative visual perception. | [
"Taihui Li",
"Anish Lahiri",
"Yutong Dai",
"Owen Mayer"
] | 2023-09-18 01:53:10 | http://arxiv.org/abs/2309.09426v1 | http://arxiv.org/pdf/2309.09426v1 | 2309.09426v1 |
Distributionally Time-Varying Online Stochastic Optimization under Polyak-Łojasiewicz Condition with Application in Conditional Value-at-Risk Statistical Learning | In this work, we consider a sequence of stochastic optimization problems
following a time-varying distribution via the lens of online optimization.
Assuming that the loss function satisfies the Polyak-{\L}ojasiewicz condition,
we apply online stochastic gradient descent and establish its dynamic regret
bound that is composed of cumulative distribution drifts and cumulative
gradient biases caused by stochasticity. The distribution metric we adopt here
is the Wasserstein distance, which is well-defined without the absolute continuity
assumption or with a time-varying support set. We also establish a regret bound
of online stochastic proximal gradient descent when the objective function is
regularized. Moreover, we show that the above framework can be applied to the
Conditional Value-at-Risk (CVaR) learning problem. Particularly, we improve an
existing proof on the discovery of the PL condition of the CVaR problem,
resulting in a regret bound of online stochastic gradient descent. | [
"Yuen-Man Pun",
"Farhad Farokhi",
"Iman Shames"
] | 2023-09-18 00:47:08 | http://arxiv.org/abs/2309.09411v1 | http://arxiv.org/pdf/2309.09411v1 | 2309.09411v1 |
Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration | Safe Reinforcement Learning (RL) aims to find a policy that achieves high
rewards while satisfying cost constraints. When learning from scratch, safe RL
agents tend to be overly conservative, which impedes exploration and restrains
the overall performance. In many realistic tasks, e.g. autonomous driving,
large-scale expert demonstration data are available. We argue that extracting
expert policy from offline data to guide online exploration is a promising
solution to mitigate the conservativeness issue. Large-capacity models, e.g.
decision transformers (DT), have been proven to be competent in offline policy
learning. However, data collected in real-world scenarios rarely contain
dangerous cases (e.g., collisions), which makes it prohibitive for the policies
to learn safety concepts. Besides, these bulk policy networks cannot meet the
computation speed requirements at inference time on real-world tasks such as
autonomous driving. To this end, we propose Guided Online Distillation (GOLD),
an offline-to-online safe RL framework. GOLD distills an offline DT policy into
a lightweight policy network through guided online safe RL training, which
outperforms both the offline DT policy and online safe RL algorithms.
Experiments in both benchmark safe RL tasks and real-world driving tasks based
on the Waymo Open Motion Dataset (WOMD) demonstrate that GOLD can successfully
distill lightweight policies and solve decision-making problems in challenging
safety-critical scenarios. | [
"Jinning Li",
"Xinyi Liu",
"Banghua Zhu",
"Jiantao Jiao",
"Masayoshi Tomizuka",
"Chen Tang",
"Wei Zhan"
] | 2023-09-18 00:22:59 | http://arxiv.org/abs/2309.09408v2 | http://arxiv.org/pdf/2309.09408v2 | 2309.09408v2 |
Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings | As Large Language Models are deployed within Artificial Intelligence systems
that are increasingly integrated with human society, it becomes more important
than ever to study their internal structures. Higher level abilities of LLMs
such as GPT-3.5 emerge in large part due to informative language
representations they induce from raw text data during pre-training on trillions
of words. These embeddings exist in vector spaces of several thousand
dimensions, and their processing involves mapping between multiple vector
spaces, with total number of parameters on the order of trillions. Furthermore,
these language representations are induced by gradient optimization, resulting
in a black box system that is hard to interpret. In this paper, we take a look
at the topological structure of neuronal activity in the "brain" of Chat-GPT's
foundation language model, and analyze it with respect to a metric representing
the notion of fairness. We develop a novel approach to visualize GPT's moral
dimensions. We first compute a fairness metric, inspired by social psychology
literature, to identify factors that typically influence fairness assessments
in humans, such as legitimacy, need, and responsibility. Subsequently, we
summarize the manifold's shape using a lower-dimensional simplicial complex,
whose topology is derived from this metric. We color it with a heat map
associated with this fairness metric, producing human-readable visualizations
of the high-dimensional sentence manifold. Our results show that sentence
embeddings based on GPT-3.5 can be decomposed into two submanifolds
corresponding to fair and unfair moral judgments. This indicates that GPT-based
language models develop a moral dimension within their representation spaces
and induce an understanding of fairness during their training process. | [
"Stephen Fitz"
] | 2023-09-17 23:38:39 | http://arxiv.org/abs/2309.09397v1 | http://arxiv.org/pdf/2309.09397v1 | 2309.09397v1 |
Mitigating Over-Smoothing and Over-Squashing using Augmentations of Forman-Ricci Curvature | While Graph Neural Networks (GNNs) have been successfully leveraged for
learning on graph-structured data across domains, several potential pitfalls
have been described recently. Those include the inability to accurately
leverage information encoded in long-range connections (over-squashing), as
well as difficulties distinguishing the learned representations of nearby nodes
with growing network depth (over-smoothing). An effective way to characterize
both effects is discrete curvature: Long-range connections that underlie
over-squashing effects have low curvature, whereas edges that contribute to
over-smoothing have high curvature. This observation has given rise to rewiring
techniques, which add or remove edges to mitigate over-smoothing and
over-squashing. Several rewiring approaches utilizing graph characteristics,
such as curvature or the spectrum of the graph Laplacian, have been proposed.
However, existing methods, especially those based on curvature, often require
expensive subroutines and careful hyperparameter tuning, which limits their
applicability to large-scale graphs. Here we propose a rewiring technique based
on Augmented Forman-Ricci curvature (AFRC), a scalable curvature notion,
which can be computed in linear time. We prove that AFRC effectively
characterizes over-smoothing and over-squashing effects in message-passing
GNNs. We complement our theoretical results with experiments, which demonstrate
that the proposed approach achieves state-of-the-art performance while
significantly reducing the computational cost in comparison with other methods.
Utilizing fundamental properties of discrete curvature, we propose effective
heuristics for hyperparameters in curvature-based rewiring, which avoids
expensive hyperparameter searches, further improving the scalability of the
proposed approach. | [
"Lukas Fesser",
"Melanie Weber"
] | 2023-09-17 21:43:18 | http://arxiv.org/abs/2309.09384v1 | http://arxiv.org/pdf/2309.09384v1 | 2309.09384v1 |
Federated Learning in Temporal Heterogeneity | In this work, we explored federated learning in temporal heterogeneity across
clients. We observed that the global model obtained by \texttt{FedAvg} trained with
fixed-length sequences shows faster convergence than one trained with
varying-length sequences.
We proposed methods to mitigate temporal heterogeneity for efficient federated
learning based on the empirical observation. | [
"Junghwan Lee"
] | 2023-09-17 21:20:35 | http://arxiv.org/abs/2309.09381v1 | http://arxiv.org/pdf/2309.09381v1 | 2309.09381v1 |
Mitigating Shortcuts in Language Models with Soft Label Encoding | Recent research has shown that large language models rely on spurious
correlations in the data for natural language understanding (NLU) tasks. In
this work, we aim to answer the following research question: Can we reduce
spurious correlations by modifying the ground truth labels of the training
data? Specifically, we propose a simple yet effective debiasing framework,
named Soft Label Encoding (SoftLE). We first train a teacher model with hard
labels to determine each sample's degree of relying on shortcuts. We then add
one dummy class to encode the shortcut degree, which is used to smooth other
dimensions in the ground truth label to generate soft labels. This new ground
truth label is used to train a more robust student model. Extensive experiments
on two NLU benchmark tasks demonstrate that SoftLE significantly improves
out-of-distribution generalization while maintaining satisfactory
in-distribution accuracy. | [
"Zirui He",
"Huiqi Deng",
"Haiyan Zhao",
"Ninghao Liu",
"Mengnan Du"
] | 2023-09-17 21:18:02 | http://arxiv.org/abs/2309.09380v1 | http://arxiv.org/pdf/2309.09380v1 | 2309.09380v1 |
Fully Convolutional Generative Machine Learning Method for Accelerating Non-Equilibrium Green's Function Simulations | This work describes a novel simulation approach that combines machine
learning and device modelling simulations. The device simulations are based on
the quantum mechanical non-equilibrium Green's function (NEGF) approach and the
machine learning method is an extension to a convolutional generative network.
We have named our new simulation approach ML-NEGF and we have implemented it in
our in-house simulator called NESS (nano-electronics simulations software). The
reported results demonstrate the improved convergence speed of the ML-NEGF
method in comparison to the standard NEGF approach. The trained ML model
effectively learns the underlying physics of nano-sheet transistor behaviour,
resulting in faster convergence of the coupled Poisson-NEGF simulations.
Quantitatively, our ML-NEGF approach achieves an average convergence
acceleration of 60%, substantially reducing the computational time while
maintaining the same accuracy. | [
"Preslav Aleksandrov",
"Ali Rezaei",
"Nikolas Xeni",
"Tapas Dutta",
"Asen Asenov",
"Vihar Georgiev"
] | 2023-09-17 20:43:54 | http://arxiv.org/abs/2309.09374v1 | http://arxiv.org/pdf/2309.09374v1 | 2309.09374v1 |
A Survey on Congestion Control and Scheduling for Multipath TCP: Machine Learning vs Classical Approaches | Multipath TCP (MPTCP) has been widely used as an efficient way for
communication in many applications. Data centers, smartphones, and network
operators use MPTCP to balance the traffic in a network efficiently. MPTCP is
an extension of TCP (Transmission Control Protocol), which provides multiple
paths, leading to higher throughput and low latency. Although MPTCP has shown
better performance than TCP in many applications, it has its own challenges.
The network can become congested due to heavy traffic in the multiple paths
(subflows) if the subflow rates are not determined correctly. Moreover,
communication latency can occur if the packets are not scheduled correctly
between the subflows. This paper reviews techniques to solve the
above-mentioned problems based on two main approaches: non-data-driven
(classical) and data-driven (Machine Learning) approaches. This paper compares
these two approaches and highlights their strengths and weaknesses with a view
to motivating future researchers in this exciting area of machine learning for
communications. This paper also provides details on the simulation of MPTCP and
its implementations in real environments. | [
"Maisha Maliha",
"Golnaz Habibi",
"Mohammed Atiquzzaman"
] | 2023-09-17 20:33:06 | http://arxiv.org/abs/2309.09372v1 | http://arxiv.org/pdf/2309.09372v1 | 2309.09372v1 |
An Automatic Tuning MPC with Application to Ecological Cruise Control | Model predictive control (MPC) is a powerful tool for planning and
controlling dynamical systems due to its capacity for handling constraints and
taking advantage of preview information. Nevertheless, MPC performance is
highly dependent on the choice of cost function tuning parameters. In this
work, we demonstrate an approach for online automatic tuning of an MPC
controller with an example application to an ecological cruise control system
that saves fuel by using a preview of road grade. We solve the global fuel
consumption minimization problem offline using dynamic programming and find the
corresponding MPC cost function by solving the inverse optimization problem. A
neural network fitted to these offline results is used to generate the desired
MPC cost function weight during online operation. The effectiveness of the
proposed approach is verified in simulation for different road geometries. | [
"Mohammad Abtahi",
"Mahdis Rabbani",
"Shima Nazari"
] | 2023-09-17 19:49:47 | http://arxiv.org/abs/2309.09358v1 | http://arxiv.org/pdf/2309.09358v1 | 2309.09358v1 |
Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties | The application of machine learning (ML) techniques in computational
chemistry has led to significant advances in predicting molecular properties,
accelerating drug discovery, and material design. ML models can extract hidden
patterns and relationships from complex and large datasets, allowing for the
prediction of various chemical properties with high accuracy. The use of such
methods has enabled the discovery of molecules and materials that were
previously difficult to identify. This paper introduces a new ML model based on
deep learning techniques, such as a multilayer encoder and decoder
architecture, for classification tasks. We demonstrate the opportunities
offered by our approach by applying it to various types of input data,
including organic and inorganic compounds. In particular, we developed and
tested the model using the Matbench and Moleculenet benchmarks, which include
crystal properties and drug design-related benchmarks. We also conduct a
comprehensive analysis of vector representations of chemical compounds,
shedding light on the underlying patterns in molecular data. The models used in
this work exhibit a high degree of predictive power, underscoring the progress
that can be made with refined machine learning when applied to molecular and
material datasets. For instance, on the Tox21 dataset, we achieved an average
accuracy of 96%, surpassing the previous best result by 10%. Our code is
publicly available at https://github.com/dmamur/elembert. | [
"Shokirbek Shermukhamedov",
"Dilorom Mamurjonova",
"Michael Probst"
] | 2023-09-17 19:41:32 | http://arxiv.org/abs/2309.09355v1 | http://arxiv.org/pdf/2309.09355v1 | 2309.09355v1 |
Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis | Today's AI systems for medical decision support often succeed on benchmark
datasets in research papers but fail in real-world deployment. This work
focuses on the decision making of sepsis, an acute life-threatening systematic
infection that requires an early diagnosis with high uncertainty from the
clinician. Our aim is to explore the design requirements for AI systems that
can support clinical experts in making better decisions for the early diagnosis
of sepsis. The study begins with a formative study investigating why clinical
experts abandon an existing AI-powered Sepsis predictive module in their
electronic health record (EHR) system. We argue that a human-centered AI system
needs to support human experts in the intermediate stages of a medical
decision-making process (e.g., generating hypotheses or gathering data),
instead of focusing only on the final decision. Therefore, we build SepsisLab
based on a state-of-the-art AI algorithm and extend it to predict the future
projection of sepsis development, visualize the prediction uncertainty, and
propose actionable suggestions (i.e., which additional laboratory tests can be
collected) to reduce such uncertainty. Through heuristic evaluation with six
clinicians using our prototype system, we demonstrate that SepsisLab enables a
promising human-AI collaboration paradigm for the future of AI-assisted sepsis
diagnosis and other high-stakes medical decision making. | [
"Shao Zhang",
"Jianing Yu",
"Xuhai Xu",
"Changchang Yin",
"Yuxuan Lu",
"Bingsheng Yao",
"Melanie Tory",
"Lace M. Padilla",
"Jeffrey Caterino",
"Ping Zhang",
"Dakuo Wang"
] | 2023-09-17 19:19:39 | http://arxiv.org/abs/2309.12368v1 | http://arxiv.org/pdf/2309.12368v1 | 2309.12368v1 |
Simulation-based Inference for Exoplanet Atmospheric Retrieval: Insights from winning the Ariel Data Challenge 2023 using Normalizing Flows | Advancements in space telescopes have opened new avenues for gathering vast
amounts of data on exoplanet atmosphere spectra. However, accurately extracting
chemical and physical properties from these spectra poses significant
challenges due to the non-linear nature of the underlying physics.
This paper presents novel machine learning models developed by the AstroAI
team for the Ariel Data Challenge 2023, where one of the models secured the top
position among 293 competitors. Leveraging Normalizing Flows, our models
predict the posterior probability distribution of atmospheric parameters under
different atmospheric assumptions.
Moreover, we introduce an alternative model that exhibits higher performance
potential than the winning model, despite scoring lower in the challenge. These
findings highlight the need to reevaluate the evaluation metric and prompt
further exploration of more efficient and accurate approaches for exoplanet
atmosphere spectra analysis.
Finally, we present recommendations to enhance the challenge and models,
providing valuable insights for future applications on real observational data.
These advancements pave the way for more effective and timely analysis of
exoplanet atmospheric properties, advancing our understanding of these distant
worlds. | [
"Mayeul Aubin",
"Carolina Cuesta-Lazaro",
"Ethan Tregidga",
"Javier Viaña",
"Cecilia Garraffo",
"Iouli E. Gordon",
"Mercedes López-Morales",
"Robert J. Hargreaves",
"Vladimir Yu. Makhnev",
"Jeremy J. Drake",
"Douglas P. Finkbeiner",
"Phillip Cargile"
] | 2023-09-17 17:59:59 | http://arxiv.org/abs/2309.09337v1 | http://arxiv.org/pdf/2309.09337v1 | 2309.09337v1 |
Unleashing the Power of Dynamic Mode Decomposition and Deep Learning for Rainfall Prediction in North-East India | Accurate rainfall forecasting is crucial for effective disaster preparedness
and mitigation in the North-East region of India, which is prone to extreme
weather events such as floods and landslides. In this study, we investigated
the use of two data-driven methods, Dynamic Mode Decomposition (DMD) and Long
Short-Term Memory (LSTM), for rainfall forecasting using daily rainfall data
collected by the India Meteorological Department in the northeast region over a
period of 118 years. We conducted a comparative analysis of these methods to
determine their relative effectiveness in predicting rainfall patterns. Using
historical rainfall data from multiple weather stations, we trained and
validated our models to forecast future rainfall patterns. Our results indicate
that both DMD and LSTM are effective in forecasting rainfall, with LSTM
outperforming DMD in terms of accuracy, revealing that LSTM has the ability to
capture complex nonlinear relationships in the data, making it a powerful tool
for rainfall forecasting. Our findings suggest that data-driven methods such as
DMD and deep learning approaches like LSTM can significantly improve rainfall
forecasting accuracy in the North-East region of India, helping to mitigate the
impact of extreme weather events and enhance the region's resilience to climate
change. | [
"Paleti Nikhil Chowdary",
"Sathvika P",
"Pranav U",
"Rohan S",
"Sowmya V",
"Gopalakrishnan E A",
"Dhanya M"
] | 2023-09-17 17:58:06 | http://arxiv.org/abs/2309.09336v1 | http://arxiv.org/pdf/2309.09336v1 | 2309.09336v1 |
Enhancing Knee Osteoarthritis severity level classification using diffusion augmented images | This research paper explores the classification of knee osteoarthritis (OA)
severity levels using advanced computer vision models and augmentation
techniques. The study investigates the effectiveness of data preprocessing,
including Contrast-Limited Adaptive Histogram Equalization (CLAHE), and data
augmentation using diffusion models. Three experiments were conducted: training
models on the original dataset, training models on the preprocessed dataset,
and training models on the augmented dataset. The results show that data
preprocessing and augmentation significantly improve the accuracy of the
models. The EfficientNetB3 model achieved the highest accuracy of 84\% on the
augmented dataset. Additionally, attention visualization techniques, such as
Grad-CAM, are utilized to provide detailed attention maps, enhancing the
understanding and trustworthiness of the models. These findings highlight the
potential of combining advanced models with augmented data and attention
visualization for accurate knee OA severity classification. | [
"Paleti Nikhil Chowdary",
"Gorantla V N S L Vishnu Vardhan",
"Menta Sai Akshay",
"Menta Sai Aashish",
"Vadlapudi Sai Aravind",
"Garapati Venkata Krishna Rayalu",
"Aswathy P"
] | 2023-09-17 17:22:29 | http://arxiv.org/abs/2309.09328v1 | http://arxiv.org/pdf/2309.09328v1 | 2309.09328v1 |
Experiential-Informed Data Reconstruction for Fishery Sustainability and Policies in the Azores | Fishery analysis is critical in maintaining the long-term sustainability of
species and the livelihoods of millions of people who depend on fishing for
food and income. The fishing gear, or metier, is a key factor significantly
impacting marine habitats, selectively targeting species and fish sizes.
Analysis of commercial catches or landings by metier in fishery stock
assessment and management is crucial, providing robust estimates of fishing
efforts and their impact on marine ecosystems. In this paper, we focus on a
unique data set from the Azores' fishing data collection programs between 2010
and 2017, where little information on metiers is available and sparse
throughout our timeline. Our main objective is to tackle the task of data set
reconstruction, leveraging domain knowledge and machine learning methods to
retrieve or associate metier-related information to each fish landing. We
empirically validate the feasibility of this task using a diverse set of
modeling approaches and demonstrate how it provides new insights into different
fisheries' behavior and the impact of metiers over time, which are essential
for future fish population assessments, management, and conservation efforts. | [
"Brenda Nogueira",
"Gui M. Menezes",
"Nuno Moniz"
] | 2023-09-17 17:17:38 | http://arxiv.org/abs/2309.09326v1 | http://arxiv.org/pdf/2309.09326v1 | 2309.09326v1 |
Answering Layer 3 queries with DiscoSCMs | Addressing causal queries across the Pearl Causal Hierarchy (PCH) (i.e.,
associational, interventional and counterfactual), which is formalized as
\Layer{} Valuations, is a central task in contemporary causal inference
research. Counterfactual questions, in particular, pose a significant challenge
as they often necessitate a complete knowledge of structural equations. This
paper identifies \textbf{the degeneracy problem} caused by the consistency
rule. To tackle this, the \textit{Distribution-consistency Structural Causal
Models} (DiscoSCMs) are introduced, which extend both structural causal
models (SCMs) and the potential outcome framework. The correlation pattern of
potential outcomes in personalized incentive scenarios, described by $P(y_x,
y'_{x'})$, is used as a case study for elucidation. Although counterfactuals
are no longer degenerate, they remain indeterminable. As a result, the
condition of independent potential noise is incorporated into DiscoSCM. It is
found that by adeptly using homogeneity, counterfactuals can be identified.
Furthermore, more refined results are achieved in the unit problem scenario. In
simpler terms, when modeling counterfactuals, one should contemplate: "Consider
a person with average ability who takes a test and, due to good luck, achieves
an exceptionally high score. If this person were to retake the test under
identical external conditions, what score would he obtain? An exceptionally high
score or an average score?" If you choose to predict an average score, then
you are essentially choosing DiscoSCM over the traditional frameworks based on
the consistency rule. | [
"Heyang Gong"
] | 2023-09-17 17:01:05 | http://arxiv.org/abs/2309.09323v2 | http://arxiv.org/pdf/2309.09323v2 | 2309.09323v2 |
A novel approach to measuring patent claim scope based on probabilities obtained from (large) language models | This work proposes to measure the scope of a patent claim as the reciprocal
of the self-information contained in this claim. A probability of occurrence of
the claim is obtained from a language model and this probability is used to
compute the self-information. Grounded in information theory, this approach is
based on the assumption that an unlikely concept is more informative than a
usual concept, insofar as it is more surprising. In turn, the more surprising
the information required to define the claim, the narrower its scope. Five
language models are considered, ranging from simplest models (each word or
character is assigned an identical probability) to intermediate models (using
average word or character frequencies), to a large language model (GPT2).
Interestingly, the scope resulting from the simplest language models is
proportional to the reciprocal of the number of words or characters involved in
the claim, a metric already used in previous works. Application is made to
multiple series of patent claims directed to distinct inventions, where each
series consists of claims devised to have a gradually decreasing scope. The
performance of the language models is assessed with respect to several ad hoc
tests. The more sophisticated the model, the better the results: the GPT2
probability model outperforms models based on word and character frequencies,
which themselves outdo the simplest models based on word or character counts.
Still, the character count appears to be a more reliable indicator than the
word count. | [
"Sébastien Ragot"
] | 2023-09-17 16:50:07 | http://arxiv.org/abs/2309.10003v2 | http://arxiv.org/pdf/2309.10003v2 | 2309.10003v2 |
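As the abstract notes, under the simplest language model (every character equally likely) the scope metric reduces to the reciprocal of the character count up to a constant. A minimal sketch of that special case (the function name and alphabet size are illustrative assumptions, not taken from the paper):

```python
import math

def claim_scope(claim: str, alphabet_size: int = 27) -> float:
    """Scope as the reciprocal of self-information under a uniform
    character model.

    P(claim) = (1/alphabet_size) ** len(claim), so the self-information
    is -log2 P = len(claim) * log2(alphabet_size): scope is proportional
    to the reciprocal of the character count, as the abstract observes.
    """
    self_information = len(claim) * math.log2(alphabet_size)
    return 1.0 / self_information

# A longer (more informative) claim gets a smaller scope.
broad = "A chair."
narrow = "A chair with four legs, a backrest, and an adjustable headrest."
assert claim_scope(broad) > claim_scope(narrow)
```

Swapping in per-word frequencies or a neural language model only changes how the probability of the claim is computed; the scope definition stays the same.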
Active Learning for Semantic Segmentation with Multi-class Label Query | This paper proposes a new active learning method for semantic segmentation.
The core of our method lies in a new annotation query design. It samples
informative local image regions (e.g., superpixels), and for each of such
regions, asks an oracle for a multi-hot vector indicating all classes existing
in the region. This multi-class labeling strategy is substantially more
efficient than existing ones like segmentation, polygon, and even dominant
class labeling in terms of annotation time per click. However, it introduces
the class ambiguity issue in training since it assigns partial labels (i.e., a
set of candidate classes) to individual pixels. We thus propose a new algorithm
for learning semantic segmentation while disambiguating the partial labels in
two stages. In the first stage, it trains a segmentation model directly with
the partial labels through two new loss functions motivated by partial label
learning and multiple instance learning. In the second stage, it disambiguates
the partial labels by generating pixel-wise pseudo labels, which are used for
supervised learning of the model. Equipped with a new acquisition function
dedicated to the multi-class labeling, our method outperformed previous work on
Cityscapes and PASCAL VOC 2012 while spending less annotation cost. | [
"Sehyun Hwang",
"Sohyun Lee",
"Hoyoung Kim",
"Minhyeon Oh",
"Jungseul Ok",
"Suha Kwak"
] | 2023-09-17 16:23:34 | http://arxiv.org/abs/2309.09319v1 | http://arxiv.org/pdf/2309.09319v1 | 2309.09319v1 |
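To make the multi-hot annotation concrete, here is a minimal sketch of the disambiguation idea: a pixel's pseudo-label is restricted to the candidate classes the annotator marked for its region. The function name and toy data are illustrative assumptions, not the paper's algorithm:

```python
def pseudo_label(pixel_probs, candidate_classes):
    """Disambiguate a partial label: among the classes the annotator
    marked as present in the region, pick the one the current model
    finds most probable for this pixel."""
    return max(candidate_classes, key=lambda c: pixel_probs[c])

# Model scores for 4 classes; the annotator says the region contains {0, 2}.
probs = [0.10, 0.50, 0.25, 0.15]
assert pseudo_label(probs, {0, 2}) == 2  # class 1 is excluded despite its top score
```

In the paper's two-stage scheme, pseudo-labels of this kind feed the second, supervised training stage.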
Kinematics-aware Trajectory Generation and Prediction with Latent Stochastic Differential Modeling | Trajectory generation and trajectory prediction are two critical tasks for
autonomous vehicles, which generate various trajectories during development and
predict the trajectories of surrounding vehicles during operation,
respectively. However, despite significant advances in improving their
performance, it remains a challenging problem to ensure that the
generated/predicted trajectories are realistic, explainable, and physically
feasible. Existing model-based methods provide explainable results, but are
constrained by predefined model structures, limiting their capabilities to
address complex scenarios. Conversely, existing deep learning-based methods
have shown great promise in learning various traffic scenarios and improving
overall performance, but they often act as opaque black boxes and lack
explainability. In this work, we integrate kinematic knowledge with neural
stochastic differential equations (SDE) and develop a variational autoencoder
based on a novel latent kinematics-aware SDE (LK-SDE) to generate vehicle
motions. Our approach combines the advantages of both model-based and deep
learning-based techniques. Experimental results demonstrate that our method
significantly outperforms baseline approaches in producing realistic,
physically-feasible, and precisely-controllable vehicle trajectories,
benefiting both generation and prediction tasks. | [
"Ruochen Jiao",
"Yixuan Wang",
"Xiangguo Liu",
"Chao Huang",
"Qi Zhu"
] | 2023-09-17 16:06:38 | http://arxiv.org/abs/2309.09317v1 | http://arxiv.org/pdf/2309.09317v1 | 2309.09317v1 |
Energy stable neural network for gradient flow equations | In this paper, we propose an energy stable network (EStable-Net) for solving
gradient flow equations. The solution update scheme in our neural network
EStable-Net is inspired by a proposed auxiliary variable based equivalent form
of the gradient flow equation. EStable-Net enforces the decrease of a discrete
energy along the neural network, which is consistent with the energy decay
property of the gradient flow equation's evolution. The architecture of the neural
network EStable-Net consists of a few energy decay blocks, and the output of
each block can be interpreted as an intermediate state of the evolution process
of the gradient flow equation. This design provides a stable, efficient and
interpretable network structure. Numerical experimental results demonstrate
that our network is able to generate high accuracy and stable predictions. | [
"Ganghua Fan",
"Tianyu Jin",
"Yuan Lan",
"Yang Xiang",
"Luchan Zhang"
] | 2023-09-17 15:05:27 | http://arxiv.org/abs/2309.10002v1 | http://arxiv.org/pdf/2309.10002v1 | 2309.10002v1 |
MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene Classification | Vision Transformer (ViT) models have recently emerged as powerful and
versatile models for various visual tasks. Recently, a work called PMF has
achieved promising results in few-shot image classification by utilizing
pre-trained vision transformer models. However, PMF employs full fine-tuning
for learning the downstream tasks, leading to significant overfitting and
storage issues, especially in the remote sensing domain. In order to tackle
these issues, we turn to the recently proposed parameter-efficient tuning
methods, such as VPT, which updates only the newly added prompt parameters
while keeping the pre-trained backbone frozen. Inspired by VPT, we propose the
Meta Visual Prompt Tuning (MVP) method. Specifically, we integrate the VPT
method into the meta-learning framework and tailor it to the remote sensing
domain, resulting in an efficient framework for Few-Shot Remote Sensing Scene
Classification (FS-RSSC). Furthermore, we introduce a novel data augmentation
strategy based on patch embedding recombination to enhance the representation
and diversity of scenes for classification purposes. Experiment results on the
FS-RSSC benchmark demonstrate the superior performance of the proposed MVP over
existing methods in various settings, such as various-way-various-shot,
various-way-one-shot, and cross-domain adaptation. | [
"Junjie Zhu",
"Yiying Li",
"Chunping Qiu",
"Ke Yang",
"Naiyang Guan",
"Xiaodong Yi"
] | 2023-09-17 13:51:05 | http://arxiv.org/abs/2309.09276v1 | http://arxiv.org/pdf/2309.09276v1 | 2309.09276v1 |
Visual Forecasting as a Mid-level Representation for Avoidance | The challenge of navigation in environments with dynamic objects continues to
be a central issue in the study of autonomous agents. While predictive methods
hold promise, their reliance on precise state information makes them less
practical for real-world implementation. This study presents visual forecasting
as an innovative alternative. By introducing intuitive visual cues, this
approach projects the future trajectories of dynamic objects to improve agent
perception and enable anticipatory actions. Our research explores two distinct
strategies for conveying predictive information through visual forecasting: (1)
sequences of bounding boxes, and (2) augmented paths. To validate the proposed
visual forecasting strategies, we initiate evaluations in simulated
environments using the Unity engine and then extend these evaluations to
real-world scenarios to assess both practicality and effectiveness. The results
confirm the viability of visual forecasting as a promising solution for
navigation and obstacle avoidance in dynamic environments. | [
"Hsuan-Kung Yang",
"Tsung-Chih Chiang",
"Ting-Ru Liu",
"Chun-Wei Huang",
"Jou-Min Liu",
"Chun-Yi Lee"
] | 2023-09-17 13:32:03 | http://arxiv.org/abs/2310.07724v1 | http://arxiv.org/pdf/2310.07724v1 | 2310.07724v1 |
Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets | In this note, we demonstrate a first-of-its-kind provable convergence of SGD
to the global minima of appropriately regularized logistic empirical risk of
depth $2$ nets -- for arbitrary data and with any number of gates with
adequately smooth and bounded activations like sigmoid and tanh. We also prove
an exponentially fast convergence rate for continuous time SGD that also
applies to smooth unbounded activations like SoftPlus. Our key idea is to show
the existence of Frobenius norm regularized logistic loss functions on
constant-sized neural nets which are "Villani functions", which allows us to
build on recent progress in analyzing SGD on such objectives. | [
"Pulkit Gopalani",
"Samyak Jha",
"Anirbit Mukherjee"
] | 2023-09-17 12:44:07 | http://arxiv.org/abs/2309.09258v1 | http://arxiv.org/pdf/2309.09258v1 | 2309.09258v1 |
User Assignment and Resource Allocation for Hierarchical Federated Learning over Wireless Networks | The large population of wireless users is a key driver of data-crowdsourced
Machine Learning (ML). However, data privacy remains a significant concern.
Federated Learning (FL) encourages data sharing in ML without requiring data to
leave users' devices, but imposes heavy computation and communication overheads
on mobile devices. Hierarchical FL (HFL) alleviates this problem by performing
partial model aggregation at edge servers. HFL can effectively reduce energy
consumption and latency through effective resource allocation and appropriate
user assignment. Nevertheless, resource allocation in HFL involves optimizing
multiple variables, and the objective function should consider both energy
consumption and latency, making the development of resource allocation
algorithms very complicated. Moreover, it is challenging to perform user
assignment, which is a combinatorial optimization problem in a large search
space. This article proposes a spectrum resource optimization algorithm (SROA)
and a two-stage iterative algorithm (TSIA) for HFL. Given an arbitrary user
assignment pattern, SROA optimizes CPU frequency, transmit power, and bandwidth
to minimize system cost. TSIA aims to find a user assignment pattern that
considerably reduces the total system cost. Experimental results demonstrate
the superiority of the proposed HFL framework over existing studies in energy
and latency reduction. | [
"Tinghao Zhang",
"Kwok-Yan Lam",
"Jun Zhao"
] | 2023-09-17 12:10:39 | http://arxiv.org/abs/2309.09253v1 | http://arxiv.org/pdf/2309.09253v1 | 2309.09253v1 |
Private Matrix Factorization with Public Item Features | We consider the problem of training private recommendation models with access
to public item features. Training with Differential Privacy (DP) offers strong
privacy guarantees, at the expense of loss in recommendation quality. We show
that incorporating public item features during training can help mitigate this
loss in quality. We propose a general approach based on collective matrix
factorization (CMF), that works by simultaneously factorizing two matrices: the
user feedback matrix (representing sensitive data) and an item feature matrix
that encodes publicly available (non-sensitive) item information.
The method is conceptually simple, easy to tune, and highly scalable. It can
be applied to different types of public item data, including: (1) categorical
item features; (2) item-item similarities learned from public sources; and (3)
publicly available user feedback. Furthermore, these data modalities can be
collectively utilized to fully leverage public data.
Evaluating our method on a standard DP recommendation benchmark, we find that
using public item features significantly narrows the quality gap between
private models and their non-private counterparts. As privacy constraints
become more stringent, models rely more heavily on public side features for
recommendation. This results in a smooth transition from collaborative
filtering to item-based contextual recommendations. | [
"Mihaela Curmei",
"Walid Krichene",
"Li Zhang",
"Mukund Sundararajan"
] | 2023-09-17 11:13:52 | http://arxiv.org/abs/2309.11516v1 | http://arxiv.org/pdf/2309.11516v1 | 2309.11516v1 |
High-dimensional manifold of solutions in neural networks: insights from statistical physics | In these pedagogic notes I review the statistical mechanics approach to
neural networks, focusing on the paradigmatic example of the perceptron
architecture with binary and continuous weights, in the classification setting.
I will review Gardner's approach based on the replica method and the derivation
of the SAT/UNSAT transition in the storage setting. Then, I discuss some recent
works that unveiled how the zero training error configurations are
geometrically arranged, and how this arrangement changes as the size of the
training set increases. I also illustrate how different regions of solution
space can be explored analytically and how the landscape in the vicinity of a
solution can be characterized. I give evidence that, in binary weight models,
algorithmic hardness is a consequence of the disappearance of a clustered
region of solutions that extends to very large distances. Finally, I
demonstrate how the study of linear mode connectivity between solutions can
give insights into the average shape of the solution manifold. | [
"Enrico M. Malatesta"
] | 2023-09-17 11:10:25 | http://arxiv.org/abs/2309.09240v1 | http://arxiv.org/pdf/2309.09240v1 | 2309.09240v1 |
Globally Convergent Accelerated Algorithms for Multilinear Sparse Logistic Regression with $\ell_0$-constraints | Tensor data represents a multidimensional array. Regression methods based on
low-rank tensor decomposition leverage structural information to reduce the
parameter count. Multilinear logistic regression serves as a powerful tool for
the analysis of multidimensional data. To improve its efficacy and
interpretability, we present a Multilinear Sparse Logistic Regression model
with $\ell_0$-constraints ($\ell_0$-MLSR). In contrast to the $\ell_1$-norm and
$\ell_2$-norm, the $\ell_0$-norm constraint is better suited for feature
selection. However, due to its nonconvex and nonsmooth properties, solving it
is challenging and convergence guarantees are lacking. Additionally, the
multilinear operation in $\ell_0$-MLSR also brings non-convexity. To tackle
these challenges, we propose an Accelerated Proximal Alternating Linearized
Minimization with Adaptive Momentum (APALM$^+$) method to solve the
$\ell_0$-MLSR model. We provide a proof that APALM$^+$ can ensure the
convergence of the objective function of $\ell_0$-MLSR. We also demonstrate
that APALM$^+$ is globally convergent to a first-order critical point as well
as establish convergence rate by using the Kurdyka-Lojasiewicz property.
Empirical results obtained from synthetic and real-world datasets validate the
superior performance of our algorithm in terms of both accuracy and speed
compared to other state-of-the-art methods. | [
"Weifeng Yang",
"Wenwen Min"
] | 2023-09-17 11:05:08 | http://arxiv.org/abs/2309.09239v1 | http://arxiv.org/pdf/2309.09239v1 | 2309.09239v1 |
Detection and Localization of Firearm Carriers in Complex Scenes for Improved Safety Measures | Detecting firearms and accurately localizing individuals carrying them in
images or videos is of paramount importance in security, surveillance, and
content customization. However, this task presents significant challenges in
complex environments due to clutter and the diverse shapes of firearms. To
address this problem, we propose a novel approach that leverages human-firearm
interaction information, which provides valuable clues for localizing firearm
carriers. Our approach incorporates an attention mechanism that effectively
distinguishes humans and firearms from the background by focusing on relevant
areas. Additionally, we introduce a saliency-driven locality-preserving
constraint to learn essential features while preserving foreground information
in the input image. By combining these components, our approach achieves
exceptional results on a newly proposed dataset. To handle inputs of varying
sizes, we pass paired human-firearm instances with attention masks as channels
through a deep network for feature computation, utilizing an adaptive average
pooling layer. We extensively evaluate our approach against existing methods in
human-object interaction detection and achieve a significant improvement (AP=77.8\%)
over the baseline approach (AP=63.1\%). This demonstrates the
effectiveness of leveraging attention mechanisms and saliency-driven locality
preservation for accurate human-firearm interaction detection. Our findings
contribute to advancing the fields of security and surveillance, enabling more
efficient firearm localization and identification in diverse scenarios. | [
"Arif Mahmood",
"Abdul Basit",
"M. Akhtar Munir",
"Mohsen Ali"
] | 2023-09-17 10:50:46 | http://arxiv.org/abs/2309.09236v1 | http://arxiv.org/pdf/2309.09236v1 | 2309.09236v1 |
Provable learning of quantum states with graphical models | The complete learning of an $n$-qubit quantum state requires samples
exponentially in $n$. Several works consider subclasses of quantum states that
can be learned in polynomial sample complexity such as stabilizer states or
high-temperature Gibbs states. Other works consider a weaker sense of learning,
such as PAC learning and shadow tomography. In this work, we consider learning
states that are close to neural network quantum states, which can efficiently
be represented by a graphical model called restricted Boltzmann machines
(RBMs). To this end, we exhibit robustness results for efficient provable
two-hop neighborhood learning algorithms for ferromagnetic and locally
consistent RBMs. We consider the $L_p$-norm as a measure of closeness,
including both total variation distance and max-norm distance in the limit. Our
results allow certain quantum states to be learned with a sample complexity
\textit{exponentially} better than naive tomography. We hence provide new
classes of efficiently learnable quantum states and apply new strategies to
learn them. | [
"Liming Zhao",
"Naixu Guo",
"Ming-Xing Luo",
"Patrick Rebentrost"
] | 2023-09-17 10:36:24 | http://arxiv.org/abs/2309.09235v1 | http://arxiv.org/pdf/2309.09235v1 | 2309.09235v1 |
Double Normalizing Flows: Flexible Bayesian Gaussian Process ODEs Learning | Recently, Gaussian processes have been utilized to model the vector field of
continuous dynamical systems. Bayesian inference for such models
\cite{hegde2022variational} has been extensively studied and has been applied
in tasks such as time series prediction, providing uncertain estimates.
However, previous Gaussian Process Ordinary Differential Equation (ODE) models
may underperform on datasets with non-Gaussian process priors, as their
constrained priors and mean-field posteriors may lack flexibility. To address
this limitation, we incorporate normalizing flows to reparameterize the vector
field of ODEs, resulting in a more flexible and expressive prior distribution.
Additionally, due to the analytically tractable probability density functions
of normalizing flows, we apply them to the posterior inference of GP ODEs,
generating a non-Gaussian posterior. Through these dual applications of
normalizing flows, our model improves accuracy and uncertainty estimates for
Bayesian Gaussian Process ODEs. The effectiveness of our approach is
demonstrated on simulated dynamical systems and real-world human motion data,
including tasks such as time series prediction and missing data recovery.
Experimental results indicate that our proposed method effectively captures
model uncertainty while improving accuracy. | [
"Jian Xu",
"Shian Du",
"Junmei Yang",
"Xinghao Ding",
"John Paisley",
"Delu Zeng"
] | 2023-09-17 09:28:47 | http://arxiv.org/abs/2309.09222v1 | http://arxiv.org/pdf/2309.09222v1 | 2309.09222v1 |
Differentiable SLAM Helps Deep Learning-based LiDAR Perception Tasks | We investigate a new paradigm that uses differentiable SLAM architectures in
a self-supervised manner to train end-to-end deep learning models in various
LiDAR based applications. To the best of our knowledge, no prior work leverages
SLAM as a training signal for deep learning based models.
We explore new ways to improve the efficiency, robustness, and adaptability of
LiDAR systems with deep learning techniques. We focus on the potential benefits
of differentiable SLAM architectures for improving performance of deep learning
tasks such as classification, regression as well as SLAM. Our experimental
results demonstrate a non-trivial increase in the performance of two deep
learning applications - Ground Level Estimation and Dynamic to Static LiDAR
Translation, when used with differentiable SLAM architectures. Overall, our
findings provide important insights that enhance the performance of LiDAR based
navigation systems. We demonstrate that this new paradigm of using SLAM Loss
signal while training LiDAR based models can be easily adopted by the
community. | [
"Prashant Kumar",
"Dheeraj Vattikonda",
"Vedang Bhupesh Shenvi Nadkarni",
"Erqun Dong",
"Sabyasachi Sahoo"
] | 2023-09-17 08:24:16 | http://arxiv.org/abs/2309.09206v1 | http://arxiv.org/pdf/2309.09206v1 | 2309.09206v1 |
MFRL-BI: Design of a Model-free Reinforcement Learning Process Control Scheme by Using Bayesian Inference | Design of process control scheme is critical for quality assurance to reduce
variations in manufacturing systems. Taking semiconductor manufacturing as an
example, extensive literature focuses on control optimization based on certain
process models (usually linear models), which are obtained by experiments
before a manufacturing process starts. However, in real applications,
pre-defined models may not be accurate, especially for a complex manufacturing
system. To tackle model inaccuracy, we propose a model-free reinforcement
learning (MFRL) approach to conduct experiments and optimize control
simultaneously according to real-time data. Specifically, we design a novel
MFRL control scheme by updating the distribution of disturbances using Bayesian
inference to reduce their large variations during manufacturing processes. As a
result, the proposed MFRL controller is demonstrated to perform well in a
nonlinear chemical mechanical planarization (CMP) process when the process
model is unknown. Theoretical properties are also guaranteed when disturbances
are additive. The numerical studies also demonstrate the effectiveness and
efficiency of our methodology. | [
"Yanrong Li",
"Juan Du",
"Wei Jiang"
] | 2023-09-17 08:18:55 | http://arxiv.org/abs/2309.09205v1 | http://arxiv.org/pdf/2309.09205v1 | 2309.09205v1 |
SplitEE: Early Exit in Deep Neural Networks with Split Computing | Deep Neural Networks (DNNs) have drawn attention because of their outstanding
performance on various tasks. However, deploying full-fledged DNNs in
resource-constrained devices (edge, mobile, IoT) is difficult due to their
large size. To overcome the issue, various approaches are considered, like
offloading part of the computation to the cloud for final inference (split
computing) or performing the inference at an intermediary layer without passing
through all layers (early exits). In this work, we propose combining both
approaches by using early exits in split computing. In our approach, we decide
up to what depth of DNNs computation to perform on the device (splitting layer)
and whether a sample can exit from this layer or need to be offloaded. The
decisions are based on a weighted combination of accuracy, computational, and
communication costs. We develop an algorithm named SplitEE to learn an optimal
policy. Since pre-trained DNNs are often deployed in new domains where the
ground truths may be unavailable and samples arrive in a streaming fashion,
SplitEE works in an online and unsupervised setup. We extensively perform
experiments on five different datasets. SplitEE achieves a significant cost
reduction ($>50\%$) with a slight drop in accuracy ($<2\%$) as compared to the
case when all samples are inferred at the final layer. The anonymized source
code is available at
\url{https://anonymous.4open.science/r/SplitEE_M-B989/README.md}. | [
"Divya J. Bajpai",
"Vivek K. Trivedi",
"Sohan L. Yadav",
"Manjesh K. Hanawal"
] | 2023-09-17 07:48:22 | http://arxiv.org/abs/2309.09195v1 | http://arxiv.org/pdf/2309.09195v1 | 2309.09195v1 |
End-to-End Optimized Pipeline for Prediction of Protein Folding Kinetics | Protein folding is the intricate process by which a linear sequence of amino
acids self-assembles into a unique three-dimensional structure. Protein folding
kinetics is the study of pathways and time-dependent mechanisms a protein
undergoes when it folds. Understanding protein kinetics is essential as a
protein needs to fold correctly for it to perform its biological functions
optimally, and a misfolded protein can sometimes be contorted into shapes that
are not ideal for a cellular environment, giving rise to many degenerative and
neuro-degenerative disorders and amyloid diseases. Monitoring at-risk
individuals and detecting discrepancies in a protein's folding kinetics
at an early stage could yield major public health benefits, as
preventive measures can be taken. This research proposes an efficient pipeline
for predicting protein folding kinetics with high accuracy and low memory
footprint. The deployed machine learning (ML) model outperformed the
state-of-the-art ML models by 4.8% in terms of accuracy while consuming 327x
less memory and running 7.3% faster. | [
"Vijay Arvind. R",
"Haribharathi Sivakumar",
"Brindha. R"
] | 2023-09-17 07:35:54 | http://arxiv.org/abs/2309.09191v1 | http://arxiv.org/pdf/2309.09191v1 | 2309.09191v1 |
Detecting covariate drift in text data using document embeddings and dimensionality reduction | Detecting covariate drift in text data is essential for maintaining the
reliability and performance of text analysis models. In this research, we
investigate the effectiveness of different document embeddings, dimensionality
reduction techniques, and drift detection methods for identifying covariate
drift in text data. We explore three popular document embeddings: term
frequency-inverse document frequency (TF-IDF) with Latent Semantic
Analysis (LSA) for dimensionality reduction, Doc2Vec, and BERT embeddings,
with and without principal component analysis (PCA) for dimensionality
reduction. To quantify the divergence between training and test data
distributions, we employ the Kolmogorov-Smirnov (KS) statistic and the Maximum
Mean Discrepancy (MMD) test as drift detection methods. Experimental results
demonstrate that certain combinations of embeddings, dimensionality reduction
techniques, and drift detection methods outperform others in detecting
covariate drift. Our findings contribute to the advancement of reliable text
analysis models by providing insights into effective approaches for addressing
covariate drift in text data. | [
"Vinayak Sodar",
"Ankit Sekseria"
] | 2023-09-17 07:34:57 | http://arxiv.org/abs/2309.10000v1 | http://arxiv.org/pdf/2309.10000v1 | 2309.10000v1 |
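A minimal, dependency-free sketch of the two-sample Kolmogorov-Smirnov statistic used as the drift score (in practice it would be applied per embedding dimension; this toy version is an illustration, not the paper's pipeline):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    gap between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of the sample that is <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in set(a) | set(b))

# Identical samples: no drift signal; disjoint samples: maximal drift.
assert ks_statistic([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_statistic([1, 2, 3], [10, 11, 12]) == 1.0
```

Comparing the statistic against a critical value (or its p-value against a threshold) per embedding dimension then flags covariate drift between the training and test distributions.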
Data-Driven Reachability Analysis of Stochastic Dynamical Systems with Conformal Inference | We consider data-driven reachability analysis of discrete-time stochastic
dynamical systems using conformal inference. We assume that we are not provided
with a symbolic representation of the stochastic system, but instead have
access to a dataset of $K$-step trajectories. The reachability problem is to
construct a probabilistic flowpipe such that the probability that a $K$-step
trajectory can violate the bounds of the flowpipe does not exceed a
user-specified failure probability threshold. The key ideas in this paper are:
(1) to learn a surrogate predictor model from data, (2) to perform reachability
analysis using the surrogate model, and (3) to quantify the surrogate model's
incurred error using conformal inference in order to give probabilistic
reachability guarantees. We focus on learning-enabled control systems with
complex closed-loop dynamics that are difficult to model symbolically, but
where state transition pairs can be queried, e.g., using a simulator. We
demonstrate the applicability of our method on examples from the domain of
learning-enabled cyber-physical systems. | [
"Navid Hashemi",
"Xin Qin",
"Lars Lindemann",
"Jyotirmoy V. Deshmukh"
] | 2023-09-17 07:23:01 | http://arxiv.org/abs/2309.09187v1 | http://arxiv.org/pdf/2309.09187v1 | 2309.09187v1 |
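The error-quantification step can be illustrated with the standard split-conformal quantile (a generic sketch under the usual exchangeability assumption; the function name is hypothetical and this is not the authors' exact construction):

```python
import math

def conformal_radius(calibration_residuals, alpha=0.05):
    """Split conformal prediction: with n calibration residuals, the
    k-th smallest residual, k = ceil((n + 1) * (1 - alpha)), upper-bounds
    a fresh residual with probability at least 1 - alpha. Inflating the
    surrogate model's flowpipe by this radius gives the probabilistic
    reachability guarantee."""
    n = len(calibration_residuals)
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:
        raise ValueError("too few calibration samples for this alpha")
    return sorted(calibration_residuals)[k - 1]

# With 100 residuals 1..100 and alpha = 0.05, k = ceil(101 * 0.95) = 96.
assert conformal_radius(list(range(1, 101)), alpha=0.05) == 96
```

Here the residuals would be the surrogate predictor's errors against held-out simulator trajectories, with one radius computed per prediction step of the $K$-step flowpipe.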
Imbalanced Data Stream Classification using Dynamic Ensemble Selection | Modern streaming data categorization faces significant challenges from
concept drift and class imbalanced data. This negatively impacts the output of
the classifier, leading to improper classification. Furthermore, other factors
such as the overlap of multiple classes further limit the correctness
of the output. This work proposes a novel classification framework for
nonstationary, drifting, imbalanced data streams that integrates data
pre-processing with dynamic ensemble selection. The proposed framework was
evaluated using six artificially
generated data streams with differing imbalance ratios in combination with two
different types of concept drifts. Each stream is composed of 200 chunks of 500
objects described by eight features and contains five concept drifts. Seven
pre-processing techniques and two dynamic ensemble selection methods were
considered. According to experimental results, data pre-processing combined
with Dynamic Ensemble Selection techniques significantly delivers more accuracy
when dealing with imbalanced data streams. | [
"Priya. S",
"Haribharathi Sivakumar",
"Vijay Arvind. R"
] | 2023-09-17 06:51:29 | http://arxiv.org/abs/2309.09175v2 | http://arxiv.org/pdf/2309.09175v2 | 2309.09175v2 |
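Dynamic ensemble selection can be illustrated with the classic Overall Local Accuracy (OLA) rule, one common DES method (the paper considers two such methods, not necessarily this one); the two base classifiers, validation set, and regions below are hypothetical:

```python
def ola_select(classifiers, val_X, val_y, x, k=3):
    """Overall Local Accuracy (OLA) dynamic selection: find the k nearest
    validation samples to the query and delegate to the classifier that
    is most accurate on that local region."""
    order = sorted(range(len(val_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(val_X[i], x)))
    neigh = order[:k]
    best = max(classifiers,
               key=lambda clf: sum(clf(val_X[i]) == val_y[i] for i in neigh))
    return best(x)

# Two hypothetical base classifiers, each competent in a different region.
clf_a = lambda p: 1 if p[0] > 0.5 else 0
clf_b = lambda p: 1 if p[1] > 0.5 else 0
val_X = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
val_y = [1, 1, 1, 1]
pred = ola_select([clf_a, clf_b], val_X, val_y, (0.95, 0.05), k=2)
```

In a streaming setting, the validation set would be refreshed chunk by chunk so the local competence estimates track concept drift.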
On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks | The Riemann hypothesis (RH) is a long-standing open problem in mathematics.
It conjectures that non-trivial zeros of the zeta function all have real part
equal to 1/2. The extent of the consequences of RH is far-reaching and touches
a wide spectrum of topics including the distribution of prime numbers, the
growth of arithmetic functions, the growth of Euler totient, etc. In this note,
we revisit and extend an old analytic criterion of the RH known as the
Nyman-Beurling criterion which connects the RH to a minimization problem that
involves a special class of neural networks. This note is intended for an
audience unfamiliar with RH. A gentle introduction to RH is provided. | [
"Soufiane Hayou"
] | 2023-09-17 05:50:12 | http://arxiv.org/abs/2309.09171v1 | http://arxiv.org/pdf/2309.09171v1 | 2309.09171v1 |
Integration of geoelectric and geochemical data using Self-Organizing Maps (SOM) to characterize a landfill | Leachates from garbage dumps can significantly compromise their surrounding
area. Even when the distance between these and populated areas is
considerable, the risk of affecting aquifers for public use is imminent in
most cases. For this reason, the delimitation and monitoring of the leachate
plume are of significant importance. Geoelectric data (resistivity and IP), and
surface methane measurements, are integrated and classified using an
unsupervised Neural Network to identify possible risk zones in areas
surrounding a landfill. The Neural Network used is a Kohonen type, which
generates, as a result, Self-Organizing Classification Maps (SOM). Two
graphic outputs were obtained from the training
performed in which groups of neurons that presented a similar behaviour were
selected. Contour maps corresponding to the location of these groups and the
individual variables were generated to compare the classification obtained and
the different anomalies associated with each of these variables. Two of the
groups resulting from the classification are related to typical values of
liquids percolated in the landfill for the parameters evaluated individually.
In this way, a precise delimitation of the affected areas in the studied
landfill was obtained, integrating the input variables via SOMs. The location
of the study area is not detailed for confidentiality reasons. | [
"Camila Juliao",
"Johan Diaz",
"Yosmely Bermúdez",
"Milagrosa Aldana"
] | 2023-09-17 05:38:54 | http://arxiv.org/abs/2309.09164v1 | http://arxiv.org/pdf/2309.09164v1 | 2309.09164v1 |
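A minimal pure-Python sketch of the Kohonen training loop described above, with synthetic feature vectors standing in for the geoelectric (resistivity, IP) and methane measurements. The data, grid size, and decay schedules are illustrative assumptions, not the study's configuration:

```python
import math
import random

def train_som(data, n_units=4, epochs=200, lr0=0.5, sigma0=1.5, seed=1):
    """Train a 1-D Kohonen map: pull the best-matching unit (BMU) and its
    grid neighbours toward each input, shrinking the learning rate and
    neighbourhood radius over time."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.1
        x = rng.choice(data)
        bmu = min(range(n_units),
                  key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
        for i in range(n_units):
            h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood
            weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

def bmu_index(weights, x):
    """Index of the unit closest to input x (used for classification)."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

# Two synthetic "measurement" clusters, hypothetical values only.
cluster_a = [[0.1 + 0.02 * i, 0.1, 0.1] for i in range(10)]
cluster_b = [[0.9 - 0.02 * i, 0.9, 0.9] for i in range(10)]
weights = train_som(cluster_a + cluster_b)
```

Groups of neurons with similar behaviour, as in the study, correspond to units whose weight vectors end up close together after training.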
Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach | With the increasing frequency of high-profile privacy breaches on various
online platforms, users are becoming more concerned about their privacy. As
recommender systems are the core component of online platforms for providing
personalized service, their privacy preservation has attracted
great attention. As the gold standard of privacy protection, differential
privacy has been widely adopted to preserve privacy in recommender systems.
However, existing differentially private recommender systems only consider
static and independent interactions, so they cannot apply to sequential
recommendation where behaviors are dynamic and dependent. Meanwhile, little
attention has been paid to the privacy risk of sensitive user features; most
existing systems only protect user feedback. In this work, we propose a novel
DIfferentially Private Sequential recommendation framework with a noisy Graph
Neural Network approach (denoted as DIPSGNN) to address these limitations. To
the best of our knowledge, we are the first to achieve differential privacy in
sequential recommendation with dependent interactions. Specifically, in
DIPSGNN, we first leverage piecewise mechanism to protect sensitive user
features. Then, we innovatively add calibrated noise into aggregation step of
graph neural network based on aggregation perturbation mechanism. And this
noisy graph neural network can protect sequentially dependent interactions and
capture user preferences simultaneously. Extensive experiments demonstrate the
superiority of our method over state-of-the-art differentially private
recommender systems in terms of better balance between privacy and accuracy. | [
"Wentao Hu",
"Hui Fang"
] | 2023-09-17 03:12:33 | http://arxiv.org/abs/2309.11515v1 | http://arxiv.org/pdf/2309.11515v1 | 2309.11515v1 |
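The aggregation-perturbation idea can be sketched with the standard Gaussian mechanism: clip each neighbour's feature vector to bound one user's influence, sum, then add calibrated noise. This is a generic sketch, not the paper's DIPSGNN calibration, and all parameter values are illustrative:

```python
import math
import random

def noisy_mean_aggregate(neighbor_feats, max_norm, epsilon, delta, rng):
    """Aggregate neighbour features under (epsilon, delta)-DP: clip each
    vector to L2 norm <= max_norm, sum, add Gaussian noise calibrated to
    that sensitivity, then normalise by the neighbour count."""
    dim = len(neighbor_feats[0])
    agg = [0.0] * dim
    for f in neighbor_feats:
        norm = math.sqrt(sum(v * v for v in f))
        scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
        agg = [a + v * scale for a, v in zip(agg, f)]
    # Standard Gaussian-mechanism calibration for L2 sensitivity = max_norm.
    sigma = max_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    noisy = [a + rng.gauss(0.0, sigma) for a in agg]
    return [v / len(neighbor_feats) for v in noisy]

rng = random.Random(7)
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = noisy_mean_aggregate(feats, max_norm=1.0, epsilon=2.0, delta=1e-5, rng=rng)
```

In a GNN layer this noisy aggregate would replace the exact neighbourhood sum before the learned transformation is applied.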
Total Variation Distance Estimation Is as Easy as Probabilistic Inference | In this paper, we establish a novel connection between total variation (TV)
distance estimation and probabilistic inference. In particular, we present an
efficient, structure-preserving reduction from relative approximation of TV
distance to probabilistic inference over directed graphical models. This
reduction leads to a fully polynomial randomized approximation scheme (FPRAS)
for estimating TV distances between distributions over any class of Bayes nets
for which there is an efficient probabilistic inference algorithm. In
particular, it leads to an FPRAS for estimating TV distances between
distributions that are defined by Bayes nets of bounded treewidth. Prior to
this work, such approximation schemes only existed for estimating TV distances
between product distributions. Our approach employs a new notion of partial
couplings of high-dimensional distributions, which might be of independent
interest. | [
"Arnab Bhattacharyya",
"Sutanu Gayen",
"Kuldeep S. Meel",
"Dimitrios Myrisiotis",
"A. Pavan",
"N. V. Vinodchandran"
] | 2023-09-17 02:12:36 | http://arxiv.org/abs/2309.09134v1 | http://arxiv.org/pdf/2309.09134v1 | 2309.09134v1 |
Conditional Mutual Information Constrained Deep Learning for Classification | The concepts of conditional mutual information (CMI) and normalized
conditional mutual information (NCMI) are introduced to measure the
concentration and separation performance of a classification deep neural
network (DNN) in the output probability distribution space of the DNN, where
CMI and the ratio between CMI and NCMI represent the intra-class concentration
and inter-class separation of the DNN, respectively. By using NCMI to evaluate
popular DNNs pretrained over ImageNet in the literature, it is shown that their
validation accuracies over ImageNet validation data set are more or less
inversely proportional to their NCMI values. Based on this observation, the
standard deep learning (DL) framework is further modified to minimize the
standard cross entropy function subject to an NCMI constraint, yielding CMI
constrained deep learning (CMIC-DL). A novel alternating learning algorithm is
proposed to solve such a constrained optimization problem. Extensive experiment
results show that DNNs trained within CMIC-DL outperform the state-of-the-art
models trained within the standard DL and other loss functions in the
literature in terms of both accuracy and robustness against adversarial
attacks. In addition, visualizing the evolution of learning process through the
lens of CMI and NCMI is also advocated. | [
"En-Hui Yang",
"Shayan Mohajer Hamidi",
"Linfeng Ye",
"Renhao Tan",
"Beverly Yang"
] | 2023-09-17 01:16:45 | http://arxiv.org/abs/2309.09123v1 | http://arxiv.org/pdf/2309.09123v1 | 2309.09123v1 |
Red Teaming Generative AI/NLP, the BB84 quantum cryptography protocol and the NIST-approved Quantum-Resistant Cryptographic Algorithms | In the contemporary digital age, Quantum Computing and Artificial
Intelligence (AI) convergence is reshaping the cyber landscape, introducing
unprecedented opportunities and potential vulnerabilities. This research,
conducted over five years, delves into the cybersecurity implications of this
convergence, with a particular focus on AI/Natural Language Processing (NLP)
models and quantum cryptographic protocols, notably the BB84 method and
specific NIST-approved algorithms. Utilising Python and C++ as primary
computational tools, the study employs a "red teaming" approach, simulating
potential cyber-attacks to assess the robustness of quantum security measures.
Preliminary research over 12 months laid the groundwork, which this study seeks
to expand upon, aiming to translate theoretical insights into actionable,
real-world cybersecurity solutions. Located at the University of Oxford's
technology precinct, the research benefits from state-of-the-art infrastructure
and a rich collaborative environment. The study's overarching goal is to ensure
that as the digital world transitions to quantum-enhanced operations, it
remains resilient against AI-driven cyber threats. The research aims to foster
a safer, quantum-ready digital future through iterative testing, feedback
integration, and continuous improvement. The findings are intended for broad
dissemination, ensuring that the knowledge benefits academia and the global
community, emphasising the responsible and secure harnessing of quantum
technology. | [
"Petar Radanliev",
"David De Roure",
"Omar Santos"
] | 2023-09-17 00:59:14 | http://arxiv.org/abs/2310.04425v1 | http://arxiv.org/pdf/2310.04425v1 | 2310.04425v1 |
Reducing sequential change detection to sequential estimation | We consider the problem of sequential change detection, where the goal is to
design a scheme for detecting any changes in a parameter or functional $\theta$
of the data stream distribution that has small detection delay, but guarantees
control on the frequency of false alarms in the absence of changes. In this
paper, we describe a simple reduction from sequential change detection to
sequential estimation using confidence sequences: we begin a new
$(1-\alpha)$-confidence sequence at each time step, and proclaim a change when
the intersection of all active confidence sequences becomes empty. We prove
that the average run length is at least $1/\alpha$, resulting in a change
detection scheme with minimal structural assumptions (thus allowing for
possibly dependent observations, and nonparametric distribution classes), but
strong guarantees. Our approach bears an interesting parallel with the
reduction from change detection to sequential testing of Lorden (1971) and the
e-detector of Shin et al. (2022). | [
"Shubhanshu Shekhar",
"Aaditya Ramdas"
] | 2023-09-16 23:48:47 | http://arxiv.org/abs/2309.09111v1 | http://arxiv.org/pdf/2309.09111v1 | 2309.09111v1 |
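The reduction can be implemented almost verbatim. The sketch below uses a simple union-bound Hoeffding confidence sequence for observations in [0, 1] (one of many valid CS choices, an assumption on our part) and declares a change once the intersection of all active, running-intersected sequences is empty:

```python
import math

def cs_interval(total, n, alpha):
    """Hoeffding interval for the mean of n observations in [0, 1], with a
    union bound over time (alpha_n = alpha / (n (n + 1))) so that the
    running intersection is a valid confidence sequence."""
    width = math.sqrt(math.log(2 * n * (n + 1) / alpha) / (2 * n))
    m = total / n
    return m - width, m + width

def detect_change(stream, alpha=0.05):
    """Start a new confidence sequence at every step; proclaim a change
    when the intersection of all active CSs becomes empty."""
    active = []  # one [sum, n, lo, hi] record per start time
    for t, x in enumerate(stream):
        active.append([0.0, 0, -float("inf"), float("inf")])
        for s in active:
            s[0] += x
            s[1] += 1
            lo, hi = cs_interval(s[0], s[1], alpha)
            s[2], s[3] = max(s[2], lo), min(s[3], hi)  # running intersection
        if max(s[2] for s in active) > min(s[3] for s in active):
            return t  # intersection over all confidence sequences is empty
    return None

# Mean shifts from 0.1 to 0.9 at step 200 (noise-free toy stream).
stream = [0.1] * 200 + [0.9] * 300
t_detect = detect_change(stream)
```

This naive version keeps every sequence alive (quadratic cost); practical implementations prune dominated sequences without affecting the guarantee.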
DEUX: Active Exploration for Learning Unsupervised Depth Perception | Depth perception models are typically trained on non-interactive datasets
with predefined camera trajectories. However, this often introduces systematic
biases into the learning process correlated to specific camera paths chosen
during data acquisition. In this paper, we investigate the role of how data is
collected for learning depth completion, from a robot navigation perspective,
by leveraging 3D interactive environments. First, we evaluate four depth
completion models trained on data collected using conventional navigation
techniques. Our key insight is that existing exploration paradigms do not
necessarily provide task-specific data points to achieve competent unsupervised
depth completion learning. We then find that data collected with respect to
photometric reconstruction has a direct positive influence on model
performance. As a result, we develop an active, task-informed, depth
uncertainty-based motion planning approach for learning depth completion, which
we call DEpth Uncertainty-guided eXploration (DEUX). Training with data
collected by our approach improves depth completion by an average greater than
18% across four depth completion models compared to existing exploration
methods on the MP3D test set. We show that our approach further improves
zero-shot generalization, while offering new insights into integrating robot
learning-based depth estimation. | [
"Marvin Chancán",
"Alex Wong",
"Ian Abraham"
] | 2023-09-16 23:33:15 | http://arxiv.org/abs/2310.06164v1 | http://arxiv.org/pdf/2310.06164v1 | 2310.06164v1 |
Interactively Teaching an Inverse Reinforcement Learner with Limited Feedback | We study the problem of teaching via demonstrations in sequential
decision-making tasks. In particular, we focus on the situation when the
teacher has no access to the learner's model and policy, and the feedback from
the learner is limited to trajectories that start from states selected by the
teacher. The necessity to select the starting states and infer the learner's
policy creates an opportunity for using the methods of inverse reinforcement
learning and active learning by the teacher. In this work, we formalize the
teaching process with limited feedback and propose an algorithm that solves
this teaching problem. The algorithm uses a modified version of the active
value-at-risk method to select the starting states, a modified maximum causal
entropy algorithm to infer the policy, and the difficulty score ratio method to
choose the teaching demonstrations. We test the algorithm in a synthetic car
driving environment and conclude that the proposed algorithm is an effective
solution when the learner's feedback is limited. | [
"Rustam Zayanov",
"Francisco S. Melo",
"Manuel Lopes"
] | 2023-09-16 21:12:04 | http://arxiv.org/abs/2309.09095v1 | http://arxiv.org/pdf/2309.09095v1 | 2309.09095v1 |
Improving Speech Recognition for African American English With Audio Classification | Automatic speech recognition (ASR) systems have been shown to have large
quality disparities between the language varieties they are intended or
expected to recognize. One way to mitigate this is to train or fine-tune models
with more representative datasets. But this approach can be hindered by limited
in-domain data for training and evaluation. We propose a new way to improve the
robustness of a US English short-form speech recognizer using a small amount of
out-of-domain (long-form) African American English (AAE) data. We use CORAAL,
YouTube and Mozilla Common Voice to train an audio classifier to approximately
output whether an utterance is AAE or some other variety including Mainstream
American English (MAE). By combining the classifier output with coarse
geographic information, we can select a subset of utterances from a large
corpus of untranscribed short-form queries for semi-supervised learning at
scale. Fine-tuning on this data results in a 38.5% relative word error rate
disparity reduction between AAE and MAE without reducing MAE quality. | [
"Shefali Garg",
"Zhouyuan Huo",
"Khe Chai Sim",
"Suzan Schwartz",
"Mason Chua",
"Alëna Aksënova",
"Tsendsuren Munkhdalai",
"Levi King",
"Darryl Wright",
"Zion Mengesha",
"Dongseong Hwang",
"Tara Sainath",
"Françoise Beaufays",
"Pedro Moreno Mengibar"
] | 2023-09-16 19:57:45 | http://arxiv.org/abs/2309.09996v1 | http://arxiv.org/pdf/2309.09996v1 | 2309.09996v1 |
Test-Time Compensated Representation Learning for Extreme Traffic Forecasting | Traffic forecasting is a challenging task due to the complex spatio-temporal
correlations among traffic series. In this paper, we identify an underexplored
problem in multivariate traffic series prediction: extreme events. Road
congestion and rush hours can result in low correlation in vehicle speeds at
various intersections during adjacent time periods. Existing methods generally
predict future series based on recent observations and entirely discard
training data during the testing phase, rendering them unreliable for
forecasting highly nonlinear multivariate time series. To tackle this issue, we
propose a test-time compensated representation learning framework comprising a
spatio-temporal decomposed data bank and a multi-head spatial transformer model
(CompFormer). The former component explicitly separates all training data along
the temporal dimension according to periodicity characteristics, while the
latter component establishes a connection between recent observations and
historical series in the data bank through a spatial attention matrix. This
enables the CompFormer to transfer robust features to overcome anomalous events
while using fewer computational resources. Our modules can be flexibly
integrated with existing forecasting methods through end-to-end training, and
we demonstrate their effectiveness on the METR-LA and PEMS-BAY benchmarks.
Extensive experimental results show that our method is particularly important
in extreme events, and can achieve significant improvements over six strong
baselines, with an overall improvement of up to 28.2%. | [
"Zhiwei Zhang",
"Weizhong Zhang",
"Yaowei Huang",
"Kani Chen"
] | 2023-09-16 18:46:34 | http://arxiv.org/abs/2309.09074v1 | http://arxiv.org/pdf/2309.09074v1 | 2309.09074v1 |
Enhancing personalised thermal comfort models with Active Learning for improved HVAC controls | Developing personalised thermal comfort models to inform occupant-centric
controls (OCC) in buildings requires collecting large amounts of real-time
occupant preference data. This process can be highly intrusive and
labour-intensive for large-scale implementations, limiting the practicality of
real-world OCC implementations. To address this issue, this study proposes a
thermal preference-based HVAC control framework enhanced with Active Learning
(AL) to address the data challenges related to real-world implementations of
such OCC systems. The proposed AL approach proactively identifies the most
informative thermal conditions for human annotation and iteratively updates a
supervised thermal comfort model. The resulting model is subsequently used to
predict the occupants' thermal preferences under different thermal conditions,
which are integrated into the building's HVAC controls. The feasibility of our
proposed AL-enabled OCC was demonstrated in an EnergyPlus simulation of a
real-world testbed supplemented with the thermal preference data of 58 study
occupants. The preliminary results indicated a significant reduction in overall
labelling effort (i.e., 31.0%) between our AL-enabled OCC and conventional OCC
while still achieving a slight increase in energy savings (i.e., 1.3%) and
thermal satisfaction levels above 98%. This result demonstrates the potential
for deploying such systems in future real-world implementations, enabling
personalised comfort and energy-efficient building operations. | [
"Zeynep Duygu Tekler",
"Yue Lei",
"Xilei Dai",
"Adrian Chong"
] | 2023-09-16 18:42:58 | http://arxiv.org/abs/2309.09073v1 | http://arxiv.org/pdf/2309.09073v1 | 2309.09073v1 |
Recovering Missing Node Features with Local Structure-based Embeddings | Node features bolster graph-based learning when exploited jointly with
network structure. However, a lack of nodal attributes is prevalent in graph
data. We present a framework to recover completely missing node features for a
set of graphs, where we only know the signals of a subset of graphs. Our
approach incorporates prior information from both graph topology and existing
nodal values. We demonstrate an example implementation of our framework where
we assume that node features depend on local graph structure. Missing nodal
values are estimated by aggregating known features from the most similar nodes.
Similarity is measured through a node embedding space that preserves local
topological features, which we train using a Graph AutoEncoder. We empirically
show not only the accuracy of our feature estimation approach but also its
value for downstream graph classification. Our success highlights the need to
emphasize the relationship between node features and graph
structure in graph-based learning. | [
"Victor M. Tenorio",
"Madeline Navarro",
"Santiago Segarra",
"Antonio G. Marques"
] | 2023-09-16 18:23:14 | http://arxiv.org/abs/2309.09068v1 | http://arxiv.org/pdf/2309.09068v1 | 2309.09068v1 |
Examining the Influence of Varied Levels of Domain Knowledge Base Inclusion in GPT-based Intelligent Tutors | Recent advancements in large language models (LLMs) have facilitated the
development of chatbots with sophisticated conversational capabilities.
However, LLMs exhibit frequent inaccurate responses to queries, hindering
applications in educational settings. In this paper, we investigate the
effectiveness of integrating a knowledge base (KB) with LLM intelligent tutors
to increase response reliability. To achieve this, we design a scalable KB
that affords educational supervisors seamless integration of lesson curricula,
which is automatically processed by the intelligent tutoring system. We then
detail an evaluation, where student participants were presented with questions
about the artificial intelligence curriculum to respond to. GPT-4 intelligent
tutors with varying hierarchies of KB access and human domain experts then
assessed these responses. Lastly, students cross-examined the intelligent
tutors' responses to the domain experts' and ranked their various pedagogical
abilities. Results suggest that, although these intelligent tutors still
demonstrate a lower accuracy compared to domain experts, the accuracy of the
intelligent tutors increases when access to a KB is granted. We also observe
that the intelligent tutors with KB access exhibit better pedagogical abilities
to speak like a teacher and understand students than those of domain experts,
while their ability to help students still lags behind that of domain experts. | [
"Blake Castleman",
"Mehmet Kerem Turkcan"
] | 2023-09-16 17:12:05 | http://arxiv.org/abs/2309.12367v1 | http://arxiv.org/pdf/2309.12367v1 | 2309.12367v1 |
Temporal Smoothness Regularisers for Neural Link Predictors | Most algorithms for representation learning and link prediction on relational
data are designed for static data. However, the data to which they are applied
typically evolves over time, including online social networks or interactions
between users and items in recommender systems. This is also the case for
graph-structured knowledge bases -- knowledge graphs -- which contain facts
that are valid only for specific points in time. In such contexts, it becomes
crucial to correctly identify missing links at a precise time point, i.e. the
temporal link prediction task. Recently, Lacroix et al. and Sadeghian et al.
proposed a solution to the problem of link prediction for knowledge graphs
under temporal constraints inspired by the canonical decomposition of 4-order
tensors, where they regularise the representations of time steps by enforcing
temporal smoothing, i.e. by learning similar transformations for adjacent
timestamps. However, the impact of the choice of temporal regularisation terms
is still poorly understood. In this work, we systematically analyse several
choices of temporal smoothing regularisers using linear functions and recurrent
architectures. In our experiments, we show that by carefully selecting the
temporal smoothing regulariser and regularisation weight, a simple method like
TNTComplEx can produce significantly more accurate results than
state-of-the-art methods on three widely used temporal link prediction
datasets. Furthermore, we evaluate the impact of a wide range of temporal
smoothing regularisers on two state-of-the-art temporal link prediction models.
Our work shows that simple tensor factorisation models can produce new
state-of-the-art results using newly proposed temporal regularisers,
highlighting a promising avenue for future research. | [
"Manuel Dileo",
"Pasquale Minervini",
"Matteo Zignani",
"Sabrina Gaito"
] | 2023-09-16 16:52:49 | http://arxiv.org/abs/2309.09045v1 | http://arxiv.org/pdf/2309.09045v1 | 2309.09045v1 |
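The temporal smoothing penalty itself is a one-liner: the summed p-norm gap between embeddings of adjacent timestamps, added to the factorisation loss with a regularisation weight. A sketch with illustrative values:

```python
def temporal_smoothness_penalty(time_embeddings, p=2):
    """Sum of |e_{t+1} - e_t|^p over adjacent timestamp embeddings; added
    to the link-prediction loss, scaled by a regularisation weight."""
    total = 0.0
    for prev, cur in zip(time_embeddings, time_embeddings[1:]):
        total += sum(abs(a - b) ** p for a, b in zip(prev, cur))
    return total

# Three timestamps, 2-dimensional embeddings (illustrative values only).
smooth = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]]
rough = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
penalty_smooth = temporal_smoothness_penalty(smooth)
penalty_rough = temporal_smoothness_penalty(rough)
```

The paper's contribution is precisely about which variant of this term (linear vs. recurrent, choice of p and weight) works best, so this should be read as the baseline form only.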
Study of Enhanced MISC-Based Sparse Arrays with High uDOFs and Low Mutual Coupling | In this letter, inspired by the maximum inter-element spacing (IES)
constraint (MISC) criterion, an enhanced MISC-based (EMISC) sparse array (SA)
with high uniform degrees-of-freedom (uDOFs) and low mutual-coupling (MC) is
proposed, analyzed and discussed in detail. For the EMISC SA, an IES set is
first determined by the maximum IES and number of elements. Then, the EMISC SA
is composed of seven uniform linear sub-arrays (ULSAs) derived from an IES set.
An analysis of the uDOFs and weight function shows that, the proposed EMISC SA
outperforms the IMISC SA in terms of uDOF and MC. Simulation results show a
significant advantage of the EMISC SA over other existing SAs. | [
"X. Sheng",
"D. Lu",
"Y. Li",
"R. C. de Lamare"
] | 2023-09-16 16:50:38 | http://arxiv.org/abs/2309.09044v1 | http://arxiv.org/pdf/2309.09044v1 | 2309.09044v1 |
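The uDOF and mutual-coupling trade-off can be made concrete via the difference coarray: the weight function counts sensor pairs at each lag (small w(1), w(2), w(3) means low coupling), and the uDOFs count the consecutive lags covered around zero. The nested-array geometry below is a standard illustrative example, not the EMISC layout from the paper:

```python
from collections import Counter

def weight_function(positions):
    """w(l): number of sensor pairs whose position difference equals lag l."""
    return Counter(p - q for p in positions for q in positions)

def uniform_dofs(positions):
    """2U + 1, where -U..U is the longest consecutive lag run around 0."""
    w = weight_function(positions)
    u = 0
    while w[u + 1] > 0:
        u += 1
    return 2 * u + 1

# A 6-element nested array versus a 6-element uniform linear array (ULA).
nested = [0, 1, 2, 3, 7, 11]
ula = list(range(6))
```

The sparse geometry covers lags -11..11 with only six elements and fewer unit-lag pairs than the ULA, which is the kind of comparison the MISC-style criteria optimise.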
Forward Invariance in Neural Network Controlled Systems | We present a framework based on interval analysis and monotone systems theory
to certify and search for forward invariant sets in nonlinear systems with
neural network controllers. The framework (i) constructs localized first-order
inclusion functions for the closed-loop system using Jacobian bounds and
existing neural network verification tools; (ii) builds a dynamical embedding
system where its evaluation along a single trajectory directly corresponds with
a nested family of hyper-rectangles provably converging to an attractive set of
the original system; (iii) utilizes linear transformations to build families of
nested paralleletopes with the same properties. The framework is automated in
Python using our interval analysis toolbox $\texttt{npinterval}$, in
conjunction with the symbolic arithmetic toolbox $\texttt{sympy}$, demonstrated
on an $8$-dimensional leader-follower system. | [
"Akash Harapanahalli",
"Saber Jafarpour",
"Samuel Coogan"
] | 2023-09-16 16:49:19 | http://arxiv.org/abs/2309.09043v1 | http://arxiv.org/pdf/2309.09043v1 | 2309.09043v1 |
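Step (ii) can be sketched for a scalar toy system: for a monotone increasing closed-loop map, the embedding system propagates interval endpoints exactly, and evaluating it along a single trajectory yields a nested family of intervals converging to an attractive set. The map below is a hypothetical stand-in for plant plus NN controller, not an output of the paper's toolchain:

```python
import math

def embed_step(lo, hi):
    """One step of the embedding system for the monotone closed-loop map
    x+ = 0.5 x + 0.25 tanh(x): since the map is increasing, the interval
    [lo, hi] maps exactly to [f(lo), f(hi)]."""
    f = lambda x: 0.5 * x + 0.25 * math.tanh(x)
    return f(lo), f(hi)

# Evaluate the embedding along one trajectory: a nested family of
# intervals provably shrinking toward the attractive set {0}.
lo, hi = -2.0, 2.0
boxes = [(lo, hi)]
for _ in range(30):
    lo, hi = embed_step(lo, hi)
    boxes.append((lo, hi))
```

The paper's framework handles the non-monotone, high-dimensional case by building inclusion functions from Jacobian bounds and neural network verifiers; the nesting property checked below is the scalar analogue.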
Solving Quadratic Systems with Full-Rank Matrices Using Sparse or Generative Priors | The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a
quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\
i=1,\ldots,m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in
applications such as unassigned distance geometry and sub-wavelength imaging.
With i.i.d. standard Gaussian matrices $\boldsymbol{A}_i$, this paper addresses
the high-dimensional case where $m\ll n$ by incorporating prior knowledge of
$\boldsymbol{x}$. First, we consider a $k$-sparse $\boldsymbol{x}$ and
introduce the thresholded Wirtinger flow (TWF) algorithm that does not require
the sparsity level $k$. TWF comprises two steps: the spectral initialization
that identifies a point sufficiently close to $\boldsymbol{x}$ (up to a sign
flip) when $m=O(k^2\log n)$, and the thresholded gradient descent (with a good
initialization) that produces a sequence linearly converging to
$\boldsymbol{x}$ with $m=O(k\log n)$ measurements. Second, we explore the
generative prior, assuming that $\boldsymbol{x}$ lies in the range of an
$L$-Lipschitz continuous generative model with $k$-dimensional inputs in an
$\ell_2$-ball of radius $r$. We develop the projected gradient descent (PGD)
algorithm that also comprises two steps: the projected power method that
provides an initial vector with $O\big(\sqrt{\frac{k \log L}{m}}\big)$
$\ell_2$-error given $m=O(k\log(Lnr))$ measurements, and the projected gradient
descent that refines the $\ell_2$-error to $O(\delta)$ at a geometric rate when
$m=O(k\log\frac{Lrn}{\delta^2})$. Experimental results corroborate our
theoretical findings and show that: (i) our approach for the sparse case
notably outperforms the existing provable algorithm sparse power factorization;
(ii) leveraging the generative prior allows for precise image recovery in the
MNIST dataset from a small number of quadratic measurements. | [
"Junren Chen",
"Shuai Huang",
"Michael K. Ng",
"Zhaoqiang Liu"
] | 2023-09-16 16:00:07 | http://arxiv.org/abs/2309.09032v1 | http://arxiv.org/pdf/2309.09032v1 | 2309.09032v1 |
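Two building blocks of the TWF iteration can be sketched directly from the problem statement: the gradient of the squared-residual loss for symmetric measurement matrices, and the hard-thresholding operator. The spectral initialization and the adaptive threshold schedule are omitted, and the instance below is a tiny illustrative one:

```python
import random

def sym_rand_matrix(n, rng):
    """Symmetrised i.i.d. Gaussian matrix (A + A^T) / 2."""
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    return [[(A[r][c] + A[c][r]) / 2.0 for c in range(n)] for r in range(n)]

def loss_grad(x, mats, ys):
    """Gradient of 0.5 * sum_i (x^T A_i x - y_i)^2 for symmetric A_i:
    each term contributes 2 (x^T A_i x - y_i) A_i x."""
    n = len(x)
    g = [0.0] * n
    for A, y in zip(mats, ys):
        Ax = [sum(A[r][c] * x[c] for c in range(n)) for r in range(n)]
        resid = sum(x[r] * Ax[r] for r in range(n)) - y
        for r in range(n):
            g[r] += 2.0 * resid * Ax[r]
    return g

def hard_threshold(x, tau):
    """Zero out entries below tau in magnitude (the 'thresholded' step)."""
    return [v if abs(v) >= tau else 0.0 for v in x]

rng = random.Random(3)
n, m = 4, 8
x_true = [1.0, 0.0, 0.0, -0.5]  # 2-sparse signal
mats = [sym_rand_matrix(n, rng) for _ in range(m)]
ys = [sum(x_true[r] * sum(A[r][c] * x_true[c] for c in range(n))
          for r in range(n)) for A in mats]
```

A full TWF iteration would alternate a gradient step with hard thresholding; here one can at least verify that the true signal is a stationary point of the noiseless loss.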
Improve Deep Forest with Learnable Layerwise Augmentation Policy Schedule | As a modern ensemble technique, Deep Forest (DF) employs a cascading
structure to construct deep models, providing stronger representational power
compared to traditional decision forests. However, its greedy multi-layer
learning procedure is prone to overfitting, limiting model effectiveness and
generalizability. This paper presents an optimized Deep Forest, featuring
learnable, layerwise data augmentation policy schedules. Specifically, we
introduce the Cut Mix for Tabular data (CMT) augmentation technique to mitigate
overfitting and develop a population-based search algorithm to tailor
augmentation intensity for each layer. Additionally, we propose to incorporate
outputs from intermediate layers into a checkpoint ensemble for more stable
performance. Experimental results show that our method sets new
state-of-the-art (SOTA) benchmarks in various tabular classification tasks,
outperforming shallow tree ensembles, deep forests, deep neural network, and
AutoML competitors. The learned policies also transfer effectively to Deep
Forest variants, underscoring its potential for enhancing non-differentiable
deep learning modules in tabular signal processing. | [
"Hongyu Zhu",
"Sichu Liang",
"Wentao Hu",
"Fang-Qi Li",
"Yali yuan",
"Shi-Lin Wang",
"Guang Cheng"
] | 2023-09-16 15:54:25 | http://arxiv.org/abs/2309.09030v1 | http://arxiv.org/pdf/2309.09030v1 | 2309.09030v1 |
gym-saturation: Gymnasium environments for saturation provers (System description) | This work describes a new version of a previously published Python package -
gym-saturation: a collection of OpenAI Gym environments for guiding
saturation-style provers based on the given clause algorithm with reinforcement
learning. We contribute usage examples with two different provers: Vampire and
iProver. We also have decoupled the proof state representation from
reinforcement learning per se and provided examples of using a known ast2vec
Python code embedding model as a first-order logic representation. In addition,
we demonstrate how environment wrappers can transform a prover into a problem
similar to a multi-armed bandit. We applied two reinforcement learning
algorithms (Thompson sampling and Proximal policy optimisation) implemented in
Ray RLlib to show the ease of experimentation with the new release of our
package. | [
"Boris Shminke"
] | 2023-09-16 15:25:39 | http://arxiv.org/abs/2309.09022v1 | http://arxiv.org/pdf/2309.09022v1 | 2309.09022v1 |
RMP: A Random Mask Pretrain Framework for Motion Prediction | Although pretraining techniques are growing in popularity, little work has been
done on pretrained learning-based motion prediction methods in autonomous
driving. In this paper, we propose a framework to formalize the pretraining
task for trajectory prediction of traffic participants. Within our framework,
inspired by the random masked model in natural language processing (NLP) and
computer vision (CV), objects' positions at random timesteps are masked and
then filled in by the learned neural network (NN). By changing the mask
profile, our framework can easily switch among a range of motion-related tasks.
We show that our proposed pretraining framework is able to deal with noisy
inputs and improves the motion prediction accuracy and miss rate, especially
for objects occluded over time by evaluating it on Argoverse and NuScenes
datasets. | [
"Yi Yang",
"Qingwen Zhang",
"Thomas Gilles",
"Nazre Batool",
"John Folkesson"
] | 2023-09-16 13:09:02 | http://arxiv.org/abs/2309.08989v1 | http://arxiv.org/pdf/2309.08989v1 | 2309.08989v1 |
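The masking profile is straightforward to sketch: pick random timesteps, replace the positions with a mask token, and keep the originals as reconstruction targets (BERT-style, applied to trajectories). The toy trajectory and mask ratio below are illustrative assumptions:

```python
import random

def random_mask(trajectory, mask_ratio, rng, mask_token=None):
    """Mask positions at randomly chosen timesteps; the pretraining target
    is to fill the masked positions back in with the learned network."""
    T = len(trajectory)
    n_mask = max(1, int(round(mask_ratio * T)))
    idx = rng.sample(range(T), n_mask)
    masked = list(trajectory)
    targets = {}
    for i in idx:
        targets[i] = trajectory[i]  # ground truth for the reconstruction loss
        masked[i] = mask_token
    return masked, targets

rng = random.Random(0)
traj = [(float(t), 2.0 * t) for t in range(10)]  # toy (x, y) positions
masked, targets = random_mask(traj, mask_ratio=0.3, rng=rng)
```

Switching among motion-related tasks, as the abstract describes, amounts to changing how `idx` is chosen (e.g., masking only the final timesteps recovers standard forecasting).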
Data-driven Reachability using Christoffel Functions and Conformal Prediction | An important mathematical tool in the analysis of dynamical systems is the
approximation of the reach set, i.e., the set of states reachable after a given
time from a given initial state. This set is difficult to compute for complex
systems even if the system dynamics are known and given by a system of ordinary
differential equations with known coefficients. In practice, parameters are
often unknown and mathematical models difficult to obtain. Data-based
approaches promise to avoid these difficulties by estimating the reach set
based on a sample of states. If a model is available, this training set can be
obtained through numerical simulation. In the absence of a model, real-life
observations can be used instead. A recently proposed approach for data-based
reach set approximation uses Christoffel functions to approximate the reach
set. Under certain assumptions, the approximation is guaranteed to converge to
the true solution. In this paper, we improve upon these results by notably
improving the sample efficiency and relaxing some of the assumptions by
exploiting statistical guarantees from conformal prediction with training and
calibration sets. In addition, we exploit an incremental way to compute the
Christoffel function to avoid the calibration set while maintaining the
statistical convergence guarantees. Furthermore, our approach is robust to
outliers in the training and calibration set. | [
"Abdelmouaiz Tebjou",
"Goran Frehse",
"Faïcel Chamroukhi"
] | 2023-09-16 12:21:57 | http://arxiv.org/abs/2309.08976v1 | http://arxiv.org/pdf/2309.08976v1 | 2309.08976v1 |
Regularized Contrastive Pre-training for Few-shot Bioacoustic Sound Detection | Bioacoustic sound event detection allows for better understanding of animal
behavior and for better monitoring biodiversity using audio. Deep learning
systems can help achieve this goal, however it is difficult to acquire
sufficient annotated data to train these systems from scratch. To address this
limitation, the Detection and Classification of Acoustic Scenes and Events
(DCASE) community has recast the problem within the framework of few-shot
learning and organizes an annual challenge for learning to detect animal sounds
from only five annotated examples. In this work, we regularize supervised
contrastive pre-training to learn features that can transfer well to new target
tasks with animal sounds unseen during training, achieving a high F-score of
61.52%(0.48) when no feature adaptation is applied, and an F-score of
68.19%(0.75) when we further adapt the learned features for each new target
task. This work aims to lower the entry bar to few-shot bioacoustic sound event
detection by proposing a simple and yet effective framework for this task, by
also providing open-source code. | [
"Ilyass Moummad",
"Romain Serizel",
"Nicolas Farrugia"
] | 2023-09-16 12:11:11 | http://arxiv.org/abs/2309.08971v1 | http://arxiv.org/pdf/2309.08971v1 | 2309.08971v1 |
Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT) | The rapid advancement of large language models (LLMs) has revolutionized
natural language processing (NLP). While these models excel at understanding
and generating human-like text, their widespread deployment can be
prohibitively expensive. SortedNet is a recent training technique for enabling
dynamic inference for deep neural networks. It leverages network modularity to
create sub-models with varying computational loads, sorting them based on
computation/accuracy characteristics in a nested manner. We extend SortedNet to
generative NLP tasks, making large language models dynamic without any
pretraining and by only replacing standard Supervised Fine-Tuning (SFT) with
Sorted Fine-Tuning (SoFT) at the same costs. Our approach boosts model
efficiency, eliminating the need for multiple models for various scenarios
during inference. We show that using this approach, we are able to unlock the
potential of intermediate layers of transformers in generating the target
output. Our sub-models remain integral components of the original model,
minimizing storage requirements and transition costs between different
computational/latency budgets. By applying this approach on LLaMa 2 13B for
tuning on the Stanford Alpaca dataset and comparing it to normal tuning and
early exit via PandaLM benchmark, we show that Sorted Fine-Tuning can deliver
models twice as fast as the original model while maintaining or exceeding
performance. | [
"Parsa Kavehzadeh",
"Mojtaba Valipour",
"Marzieh Tahaei",
"Ali Ghodsi",
"Boxing Chen",
"Mehdi Rezagholizadeh"
] | 2023-09-16 11:58:34 | http://arxiv.org/abs/2309.08968v1 | http://arxiv.org/pdf/2309.08968v1 | 2309.08968v1 |
Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks | Long Range (LoRa) wireless technology, characterized by low power consumption
and a long communication range, is regarded as one of the enabling technologies
for the Industrial Internet of Things (IIoT). However, as the network scale
increases, the energy efficiency (EE) of LoRa networks decreases sharply due to
severe packet collisions. To address this issue, it is essential to
appropriately assign transmission parameters such as the spreading factor and
transmission power for each end device (ED). However, due to the sporadic
traffic and low duty cycle of LoRa networks, evaluating the system EE
performance under different parameter settings is time-consuming. Therefore, we
first formulate an analytical model to calculate the system EE. On this basis,
we propose a transmission parameter allocation algorithm based on multiagent
reinforcement learning (MALoRa) with the aim of maximizing the system EE of
LoRa networks. Notably, MALoRa employs an attention mechanism to guide each ED
to better learn how much ''attention'' should be given to the parameter
assignments for relevant EDs when seeking to improve the system EE. Simulation
results demonstrate that MALoRa significantly improves the system EE compared
with baseline algorithms with an acceptable degradation in packet delivery rate
(PDR). | [
"Xu Zhang",
"Ziqi Lin",
"Shimin Gong",
"Bo Gu",
"Dusit Niyato"
] | 2023-09-16 11:37:23 | http://arxiv.org/abs/2309.08965v1 | http://arxiv.org/pdf/2309.08965v1 | 2309.08965v1 |
UNIDEAL: Curriculum Knowledge Distillation Federated Learning | Federated Learning (FL) has emerged as a promising approach to enable
collaborative learning among multiple clients while preserving data privacy.
However, cross-domain FL tasks, where clients possess data from different
domains or distributions, remain a challenging problem due to the inherent
heterogeneity. In this paper, we present UNIDEAL, a novel FL algorithm
specifically designed to tackle the challenges of cross-domain scenarios and
heterogeneous model architectures. The proposed method introduces Adjustable
Teacher-Student Mutual Evaluation Curriculum Learning, which significantly
enhances the effectiveness of knowledge distillation in FL settings. We conduct
extensive experiments on various datasets, comparing UNIDEAL with
state-of-the-art baselines. Our results demonstrate that UNIDEAL achieves
superior performance in terms of both model accuracy and communication
efficiency. Additionally, we provide a convergence analysis of the algorithm,
showing a convergence rate of O(1/T) under non-convex conditions. | [
"Yuwen Yang",
"Chang Liu",
"Xun Cai",
"Suizhi Huang",
"Hongtao Lu",
"Yue Ding"
] | 2023-09-16 11:30:29 | http://arxiv.org/abs/2309.08961v1 | http://arxiv.org/pdf/2309.08961v1 | 2309.08961v1 |
PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements | We present a neural network for mitigating pseudorange bias to improve
localization performance with data collected from Android smartphones. We
represent pseudorange bias using a pragmatic satellite-wise Multiple Layer
Perceptron (MLP), the inputs of which are six
satellite-receiver-context-related features derived from Android raw Global
Navigation Satellite System (GNSS) measurements. To supervise the training
process, we carefully calculate the target values of pseudorange bias using
location ground truth and smoothing techniques and optimize a loss function
containing the estimation residuals of smartphone clock bias. During the
inference process, we employ model-based localization engines to compute
locations with pseudoranges corrected by the neural network. Consequently, this
hybrid pipeline can attend to both pseudorange bias and noise. We evaluate the
framework on an open dataset and consider four application scenarios for
investigating fingerprinting and cross-trace localization in rural and urban
areas. Extensive experiments demonstrate that the proposed framework
outperforms model-based and state-of-the-art data-driven approaches. | [
"Xu Weng",
"Keck Voon Ling",
"Haochen Liu"
] | 2023-09-16 10:43:59 | http://arxiv.org/abs/2309.12204v1 | http://arxiv.org/pdf/2309.12204v1 | 2309.12204v1 |
Reducing Memory Requirements for the IPU using Butterfly Factorizations | High Performance Computing (HPC) has benefited from various improvements during
the last decades, especially in terms of hardware platforms that provide more
processing power while maintaining the power consumption at a reasonable level.
The Intelligence Processing Unit (IPU) is a new type of massively parallel
processor, designed to speed up parallel computations with a huge number of
processing cores and on-chip memory components connected with high-speed
fabrics. IPUs mainly target machine learning applications, however, due to the
architectural differences between GPUs and IPUs, especially significantly less
memory capacity on an IPU, methods for reducing model size by sparsification
have to be considered. Butterfly factorizations are well-known replacements for
fully-connected and convolutional layers. In this paper, we examine how
butterfly structures can be implemented on an IPU and study their behavior and
performance compared to a GPU. Experimental results indicate that these methods
can provide a 98.5% compression ratio to decrease the immense need for memory,
and the IPU implementation can benefit from 1.3x and 1.6x performance
improvements for butterfly and pixelated butterfly, respectively. We also reach
a 1.62x training time speedup on a real-world dataset such as CIFAR10.
"S. -Kazem Shekofteh",
"Christian Alles",
"Holger Fröning"
] | 2023-09-16 10:38:38 | http://arxiv.org/abs/2309.08946v1 | http://arxiv.org/pdf/2309.08946v1 | 2309.08946v1 |
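To make the sparsification argument concrete, here is a small sketch (not from the paper) of how a dense 4x4 Hadamard matrix factors into two sparse butterfly factors, each holding half as many nonzeros as the dense matrix:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])

# Butterfly factors: each has only 8 nonzeros, while the dense
# 4x4 Hadamard matrix has 16 entries.
B1 = np.kron(H2, np.eye(2, dtype=int))  # mixes entries 2 apart
B2 = np.kron(np.eye(2, dtype=int), H2)  # mixes adjacent entries

# Their product recovers the dense Sylvester Hadamard matrix H2 (x) H2.
H4 = B1 @ B2
```

The same recursive pattern replaces an N x N dense layer with log N sparse factors, which is the memory saving exploited on the IPU.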
Inverse classification with logistic and softmax classifiers: efficient optimization | In recent years, a certain type of problem has become of interest where one
wants to query a trained classifier. Specifically, one wants to find the
closest instance to a given input instance such that the classifier's predicted
label is changed in a desired way. Examples of these ``inverse classification''
problems are counterfactual explanations, adversarial examples and model
inversion. All of them are fundamentally optimization problems over the input
instance vector involving a fixed classifier, and it is of interest to achieve
a fast solution for interactive or real-time applications. We focus on solving
this problem efficiently for two of the most widely used classifiers: logistic
regression and softmax classifiers. Owing to special properties of these
models, we show that the optimization can be solved in closed form for logistic
regression, and iteratively but extremely fast for the softmax classifier. This
allows us to solve either case exactly (to nearly machine precision) in a
runtime of milliseconds to around a second even for very high-dimensional
instances and many classes. | [
"Miguel Á. Carreira-Perpiñán",
"Suryabhan Singh Hada"
] | 2023-09-16 10:34:40 | http://arxiv.org/abs/2309.08945v1 | http://arxiv.org/pdf/2309.08945v1 | 2309.08945v1 |
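For the logistic-regression case, the closed-form solution amounts to an orthogonal projection onto the decision hyperplane w.x + b = 0. The sketch below illustrates that geometry for the unconstrained L2 formulation (an assumption; the paper's exact problem setup may differ), with `margin` a hypothetical nudge so the predicted label strictly flips:

```python
import numpy as np

def closest_flip_logistic(x0, w, b, margin=1e-6):
    """Nearest point (in L2 distance) to x0 whose logistic prediction
    crosses the 0.5 decision boundary w.x + b = 0, in closed form."""
    s = w @ x0 + b                 # signed score of the query point
    step = (s / (w @ w)) * w       # orthogonal projection onto the hyperplane
    # Nudge slightly past the boundary so the label actually changes.
    return x0 - step - np.sign(s) * margin * w / np.linalg.norm(w)

w = np.array([1.0, -2.0])
b = 0.5
x0 = np.array([3.0, 1.0])
x_cf = closest_flip_logistic(x0, w, b)
```

The displacement is parallel to w, so its length is |w.x0 + b| / ||w|| plus the margin, matching the usual point-to-hyperplane distance.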
Universal Metric Learning with Parameter-Efficient Transfer Learning | A common practice in metric learning is to train and test an embedding model
for each dataset. This dataset-specific approach fails to simulate real-world
scenarios that involve multiple heterogeneous distributions of data. In this
regard, we introduce a novel metric learning paradigm, called Universal Metric
Learning (UML), which learns a unified distance metric capable of capturing
relations across multiple data distributions. UML presents new challenges, such
as imbalanced data distribution and bias towards dominant distributions. To
address these challenges, we propose Parameter-efficient Universal Metric
leArning (PUMA), which consists of a pre-trained frozen model and two
additional modules, a stochastic adapter and a prompt pool. These modules make it
possible to capture dataset-specific knowledge while avoiding bias towards dominant
distributions. Additionally, we compile a new universal metric learning
benchmark with a total of 8 different datasets. PUMA outperformed the
state-of-the-art dataset-specific models while using about 69 times fewer
trainable parameters. | [
"Sungyeon Kim",
"Donghyun Kim",
"Suha Kwak"
] | 2023-09-16 10:34:01 | http://arxiv.org/abs/2309.08944v1 | http://arxiv.org/pdf/2309.08944v1 | 2309.08944v1 |
DOMAIN: MilDly COnservative Model-BAsed OfflINe Reinforcement Learning | Model-based reinforcement learning (RL), which learns an environment model from
an offline dataset and generates more out-of-distribution model data, has become
an effective approach to the problem of distribution shift in offline RL. Due
to the gap between the learned and actual environment, conservatism should be
incorporated into the algorithm to balance accurate offline data and imprecise
model data. The conservatism of current algorithms mostly relies on model
uncertainty estimation. However, uncertainty estimation is unreliable and leads
to poor performance in certain scenarios, and previous methods ignore
differences between the model data, which leads to excessive conservatism. Therefore,
this paper proposes a milDly cOnservative Model-bAsed offlINe RL algorithm
(DOMAIN) without estimating model uncertainty to address the above issues.
DOMAIN introduces adaptive sampling distribution of model samples, which can
adaptively adjust the model data penalty. In this paper, we theoretically
demonstrate that the Q value learned by DOMAIN outside the region is a
lower bound of the true Q value, that DOMAIN is less conservative than previous
model-based offline RL algorithms, and that it guarantees safe policy
improvement. The results of extensive experiments show that DOMAIN outperforms
prior RL algorithms on the D4RL dataset benchmark, and achieves better
performance than other RL algorithms on tasks that require generalization. | [
"Xiao-Yin Liu",
"Xiao-Hu Zhou",
"Xiao-Liang Xie",
"Shi-Qi Liu",
"Zhen-Qiu Feng",
"Hao Li",
"Mei-Jiang Gui",
"Tian-Yu Xiang",
"De-Xing Huang",
"Zeng-Guang Hou"
] | 2023-09-16 08:39:28 | http://arxiv.org/abs/2309.08925v1 | http://arxiv.org/pdf/2309.08925v1 | 2309.08925v1 |
Fast Approximation of the Shapley Values Based on Order-of-Addition Experimental Designs | Shapley value is originally a concept in econometrics to fairly distribute
both gains and costs to players in a coalition game. In recent decades, its
application has been extended to other areas such as marketing, engineering and
machine learning. For example, it produces reasonable solutions for problems in
sensitivity analysis, local model explanation towards the interpretable machine
learning, node importance in social network, attribution models, etc. However,
its heavy computational burden has been long recognized but rarely
investigated. Specifically, in a $d$-player coalition game, calculating a
Shapley value requires the evaluation of $d!$ or $2^d$ marginal contribution
values, depending on whether we are taking the permutation or combination
formulation of the Shapley value. Hence it becomes infeasible to calculate the
Shapley value when $d$ is reasonably large. A common remedy is to take a random
sample of the permutations to surrogate for the complete list of permutations.
We find an advanced sampling scheme can be designed to yield much more accurate
estimation of the Shapley value than the simple random sampling (SRS). Our
sampling scheme is based on combinatorial structures in the field of design of
experiments (DOE), particularly the order-of-addition experimental designs for
the study of how the orderings of components would affect the output. We show
that the obtained estimates are unbiased, and can sometimes deterministically
recover the original Shapley value. Both theoretical and simulation results
show that our DOE-based sampling scheme outperforms SRS in terms of estimation
accuracy. Surprisingly, it is also slightly faster than SRS. Lastly, real data
analysis is conducted for the C. elegans nervous system and the 9/11 terrorist
network. | [
"Liuqing Yang",
"Yongdao Zhou",
"Haoda Fu",
"Min-Qian Liu",
"Wei Zheng"
] | 2023-09-16 08:28:15 | http://arxiv.org/abs/2309.08923v1 | http://arxiv.org/pdf/2309.08923v1 | 2309.08923v1 |
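The simple random sampling (SRS) baseline that the DOE-based scheme improves on can be sketched in a few lines. The toy additive game below is an illustration (not from the paper), chosen because its Shapley values are known exactly: each player's own weight.

```python
import random

def shapley_srs(players, value_fn, n_samples=200, seed=0):
    """Estimate Shapley values by simple random sampling of permutations:
    average each player's marginal contribution over random orderings."""
    rng = random.Random(seed)
    players = list(players)
    est = {p: 0.0 for p in players}
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = []
        v_prev = value_fn(frozenset())
        for p in order:
            coalition.append(p)
            v_new = value_fn(frozenset(coalition))
            est[p] += v_new - v_prev
            v_prev = v_new
    return {p: s / n_samples for p, s in est.items()}

# Additive toy game: v(S) = sum of member weights, so the true
# Shapley value of each player is exactly its weight.
w = {"a": 1.0, "b": 2.0, "c": 4.0}
phi = shapley_srs(w, lambda S: sum(w[p] for p in S))
```

For non-additive games the SRS estimate only converges at the usual Monte Carlo rate, which is the inefficiency the order-of-addition designs target.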
A Statistical Turing Test for Generative Models | The emergence of human-like abilities of AI systems for content generation in
domains such as text, audio, and vision has prompted the development of
classifiers to determine whether content originated from a human or a machine.
Implicit in these efforts is an assumption that the generation properties of a
human are different from that of the machine. In this work, we provide a
framework in the language of statistical pattern recognition that quantifies
the difference between the distributions of human and machine-generated content
conditioned on an evaluation context. We describe current methods in the
context of the framework and demonstrate how to use the framework to evaluate
the progression of generative models towards human-like capabilities, among
many axes of analysis. | [
"Hayden Helm",
"Carey E. Priebe",
"Weiwei Yang"
] | 2023-09-16 07:36:07 | http://arxiv.org/abs/2309.08913v1 | http://arxiv.org/pdf/2309.08913v1 | 2309.08913v1 |
Efficient Methods for Non-stationary Online Learning | Non-stationary online learning has drawn much attention in recent years. In
particular, dynamic regret and adaptive regret are proposed as two principled
performance measures for online convex optimization in non-stationary
environments. To optimize them, a two-layer online ensemble is usually deployed
due to the inherent uncertainty of the non-stationarity, in which a group of
base-learners are maintained and a meta-algorithm is employed to track the best
one on the fly. However, the two-layer structure raises the concern about the
computational complexity -- those methods typically maintain $\mathcal{O}(\log
T)$ base-learners simultaneously for a $T$-round online game and thus perform
multiple projections onto the feasible domain per round, which becomes the
computational bottleneck when the domain is complicated. In this paper, we
present efficient methods for optimizing dynamic regret and adaptive regret,
which reduce the number of projections per round from $\mathcal{O}(\log T)$ to
$1$. Moreover, our obtained algorithms require only one gradient query and one
function evaluation at each round. Our technique hinges on the reduction
mechanism developed in parameter-free online learning and requires non-trivial
twists on non-stationary online methods. Empirical studies verify our
theoretical findings. | [
"Peng Zhao",
"Yan-Feng Xie",
"Lijun Zhang",
"Zhi-Hua Zhou"
] | 2023-09-16 07:30:12 | http://arxiv.org/abs/2309.08911v1 | http://arxiv.org/pdf/2309.08911v1 | 2309.08911v1 |
Robust Online Covariance and Sparse Precision Estimation Under Arbitrary Data Corruption | Gaussian graphical models are widely used to represent correlations among
entities but remain vulnerable to data corruption. In this work, we introduce a
modified trimmed-inner-product algorithm to robustly estimate the covariance in
an online scenario even in the presence of arbitrary and adversarial data
attacks. At each time step, data points, drawn nominally independently and
identically from a multivariate Gaussian distribution, arrive. However, a
certain fraction of these points may have been arbitrarily corrupted. We
propose an online algorithm to estimate the sparse inverse covariance (i.e.,
precision) matrix despite this corruption. We provide the error-bound and
convergence properties of the estimates to the true precision matrix under our
algorithms. | [
"Tong Yao",
"Shreyas Sundaram"
] | 2023-09-16 05:37:28 | http://arxiv.org/abs/2309.08884v1 | http://arxiv.org/pdf/2309.08884v1 | 2309.08884v1 |
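The trimming idea behind such robust estimators can be illustrated in the simplest one-dimensional case. This generic trimmed mean is a sketch of the building block only; the paper's trimmed-inner-product algorithm is more elaborate and operates on coordinate products in an online fashion.

```python
def trimmed_mean(values, trim_frac=0.1):
    """Average after discarding the smallest and largest trim_frac
    fraction of samples, limiting the influence of corrupted points."""
    v = sorted(values)
    k = int(len(v) * trim_frac)
    kept = v[k:len(v) - k] if k else v
    return sum(kept) / len(kept)

# 18 clean samples near 1.0 plus 2 adversarial outliers: the plain
# mean is dragged to 0.9, while the trimmed mean recovers 1.0.
data = [1.0] * 18 + [1000.0, -1000.0]
```

Extending this per-coordinate trimming to inner products of sample coordinates yields a covariance estimate that tolerates an arbitrary bounded fraction of corrupted points.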
Data-Driven H-infinity Control with a Real-Time and Efficient Reinforcement Learning Algorithm: An Application to Autonomous Mobility-on-Demand Systems | Reinforcement learning (RL) is a class of artificial intelligence algorithms
being used to design adaptive optimal controllers through online learning. This
paper presents a model-free, real-time, data-efficient Q-learning-based
algorithm to solve the H$_{\infty}$ control of linear discrete-time systems.
The computational complexity is shown to reduce from
$\mathcal{O}(\underline{q}^3)$ in the literature to
$\mathcal{O}(\underline{q}^2)$ in the proposed algorithm, where $\underline{q}$
is quadratic in the sum of the size of state variables, control inputs, and
disturbance. An adaptive optimal controller is designed and the parameters of
the action and critic networks are learned online without the knowledge of the
system dynamics, making the proposed algorithm completely model-free. Also,
sufficient probing noise is needed only in the first iteration and does not
affect the proposed algorithm. With no need for an initial stabilizing policy,
the algorithm converges to the closed-form solution obtained by solving the
Riccati equation. A simulation study is performed by applying the proposed
algorithm to real-time control of an autonomous mobility-on-demand (AMoD)
system for a real-world case study to evaluate the effectiveness of the
proposed algorithm. | [
"Ali Aalipour",
"Alireza Khani"
] | 2023-09-16 05:02:41 | http://arxiv.org/abs/2309.08880v1 | http://arxiv.org/pdf/2309.08880v1 | 2309.08880v1 |