title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---|
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs | Recently, theoretical analyses of deep neural networks have broadly focused
on two directions: 1) Providing insight into neural network training by SGD in
the limit of infinite hidden-layer width and infinitesimally small learning
rate (also known as gradient flow) via the Neural Tangent Kernel (NTK), and 2)
Globally optimizing the regularized training objective via cone-constrained
convex reformulations of ReLU networks. The latter research direction also
yielded an alternative formulation of the ReLU network, called a gated ReLU
network, that is globally optimizable via efficient unconstrained convex
programs. In this work, we interpret the convex program for this gated ReLU
network as a Multiple Kernel Learning (MKL) model with a weighted data masking
feature map and establish a connection to the NTK. Specifically, we show that
for a particular choice of mask weights that do not depend on the learning
targets, this kernel is equivalent to the NTK of the gated ReLU network on the
training data. A consequence of this lack of dependence on the targets is that
the NTK cannot perform better than the optimal MKL kernel on the training set.
By using iterative reweighting, we improve the weights induced by the NTK to
obtain the optimal MKL kernel which is equivalent to the solution of the exact
convex reformulation of the gated ReLU network. We also provide several
numerical simulations corroborating our theory. Additionally, we provide an
analysis of the prediction error of the resulting optimal kernel via
consistency results for the group lasso. | [
"Rajat Vadiraj Dwaraknath",
"Tolga Ergen",
"Mert Pilanci"
] | 2023-09-26 17:42:52 | http://arxiv.org/abs/2309.15096v1 | http://arxiv.org/pdf/2309.15096v1 | 2309.15096v1 |
Automated Detection of Persistent Inflammatory Biomarkers in Post-COVID-19 Patients Using Machine Learning Techniques | The COVID-19 pandemic has left a lasting impact on individuals, with many
experiencing persistent symptoms, including inflammation, in the post-acute
phase of the disease. Detecting and monitoring these inflammatory biomarkers is
critical for timely intervention and improved patient outcomes. This study
employs machine learning techniques to automate the identification of
persistent inflammatory biomarkers in 290 post-COVID-19 patients, based on
medical data collected from hospitals in Iraq. The data encompassed a wide
array of clinical parameters, such as C-reactive protein and interleukin-6
levels, patient demographics, comorbidities, and treatment histories. Rigorous
data preprocessing and feature selection processes were implemented to optimize
the dataset for machine learning analysis. Various machine learning algorithms,
including logistic regression, random forests, support vector machines, and
gradient boosting, were deployed to construct predictive models. These models
exhibited promising results, showcasing high accuracy and precision in the
identification of patients with persistent inflammation. The findings of this
study underscore the potential of machine learning in automating the detection
of persistent inflammatory biomarkers in post-COVID-19 patients. These models
can serve as valuable tools for healthcare providers, facilitating early
diagnosis and personalized treatment strategies for individuals at risk of
persistent inflammation, ultimately contributing to improved post-acute
COVID-19 care and patient well-being. Keywords: COVID-19, post-COVID-19,
inflammation, biomarkers, machine learning, early detection. | [
"Ghizal Fatima",
"Fadhil G. Al-Amran",
"Maitham G. Yousif"
] | 2023-09-26 17:41:10 | http://arxiv.org/abs/2309.15838v1 | http://arxiv.org/pdf/2309.15838v1 | 2309.15838v1 |
Identifying Simulation Model Through Alternative Techniques for a Medical Device Assembly Process | This scientific paper explores two distinct approaches for identifying and
approximating the simulation model, particularly in the context of the snap
process crucial to medical device assembly. Simulation models play a pivotal
role in providing engineers with insights into industrial processes, enabling
experimentation and troubleshooting before physical assembly. However, their
complexity often results in time-consuming computations.
To mitigate this complexity, we present two distinct methods for identifying
simulation models: one utilizing Spline functions and the other harnessing
Machine Learning (ML) models. Our goal is to create adaptable models that
accurately represent the snap process and can accommodate diverse scenarios.
Such models hold promise for enhancing process understanding and aiding in
decision-making, especially when data availability is limited. | [
"Fatemeh Kakavandi"
] | 2023-09-26 17:40:29 | http://arxiv.org/abs/2309.15094v1 | http://arxiv.org/pdf/2309.15094v1 | 2309.15094v1 |
VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning | Although recent text-to-video (T2V) generation methods have seen significant
advancements, most of these works focus on producing short video clips of a
single event with a single background (i.e., single-scene videos). Meanwhile,
recent large language models (LLMs) have demonstrated their capability in
generating layouts and programs to control downstream visual modules such as
image generation models. This raises an important question: can we leverage the
knowledge embedded in these LLMs for temporally consistent long video
generation? In this paper, we propose VideoDirectorGPT, a novel framework for
consistent multi-scene video generation that uses the knowledge of LLMs for
video content planning and grounded video generation. Specifically, given a
single text prompt, we first ask our video planner LLM (GPT-4) to expand it
into a 'video plan', which involves generating the scene descriptions, the
entities with their respective layouts, the background for each scene, and
consistency groupings of the entities and backgrounds. Next, guided by this
output from the video planner, our video generator, Layout2Vid, has explicit
control over spatial layouts and can maintain temporal consistency of
entities/backgrounds across scenes, while only trained with image-level
annotations. Our experiments demonstrate that the VideoDirectorGPT framework
substantially improves layout and movement control in both single- and
multi-scene video generation and can generate multi-scene videos with visual
consistency across scenes, while achieving competitive performance with SOTAs
in open-domain single-scene T2V generation. We also demonstrate that our
framework can dynamically control the strength for layout guidance and can also
generate videos with user-provided images. We hope our framework can inspire
future work on better integrating the planning ability of LLMs into consistent
long video generation. | [
"Han Lin",
"Abhay Zala",
"Jaemin Cho",
"Mohit Bansal"
] | 2023-09-26 17:36:26 | http://arxiv.org/abs/2309.15091v1 | http://arxiv.org/pdf/2309.15091v1 | 2309.15091v1 |
Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers | This PhD thesis is focused on the central idea that single neurons in the
brain should be regarded as temporally precise and highly complex
spatio-temporal pattern recognizers. This is opposed to the prevalent view of
biological neurons as simple and mainly spatial pattern recognizers by most
neuroscientists today. In this thesis, I will attempt to demonstrate that this
is an important distinction, predominantly because the above-mentioned
computational properties of single neurons have far-reaching implications with
respect to the various brain circuits that neurons compose, and on how
information is encoded by neuronal activity in the brain. Namely, that these
particular "low-level" details at the single neuron level have substantial
system-wide ramifications. In the introduction we will highlight the main
components that comprise a neural microcircuit that can perform useful
computations and illustrate the inter-dependence of these components from a
system perspective. In chapter 1 we discuss the great complexity of the
spatio-temporal input-output relationship of cortical neurons, which is the
result of the morphological structure and biophysical properties of the neuron. In
chapter 2 we demonstrate that single neurons can generate temporally precise
output patterns in response to specific spatio-temporal input patterns with a
very simple biologically plausible learning rule. In chapter 3, we use the
differentiable deep network analog of a realistic cortical neuron as a tool to
approximate the gradient of the output of the neuron with respect to its input
and use this capability in an attempt to teach the neuron to perform the
nonlinear XOR operation. In chapter 4 we expand on chapter 3 to describe the
extension of our
ideas to neuronal networks composed of many realistic biological spiking
neurons that represent either small microcircuits or entire brain regions. | [
"David Beniaguev"
] | 2023-09-26 17:32:08 | http://arxiv.org/abs/2309.15090v1 | http://arxiv.org/pdf/2309.15090v1 | 2309.15090v1 |
On Excess Risk Convergence Rates of Neural Network Classifiers | The recent success of neural networks in pattern recognition and
classification problems suggests that neural networks possess qualities
distinct from other more classical classifiers such as SVMs or boosting
classifiers. This paper studies the performance of plug-in classifiers based on
neural networks in a binary classification setting as measured by their excess
risks. Compared to the typical settings imposed in the literature, we consider
a more general scenario that resembles actual practice in two respects: first,
the function class to be approximated includes the Barron functions as a proper
subset, and second, the neural network classifier constructed is the minimizer
of a surrogate loss instead of the $0$-$1$ loss so that gradient descent-based
numerical optimizations can be easily applied. While the class of functions we
consider is so large that optimal rates cannot be faster than
$n^{-\frac{1}{3}}$, it is a regime in which dimension-free rates are possible
and approximation power of neural networks can be taken advantage of. In
particular, we analyze the estimation and approximation properties of neural
networks to obtain a dimension-free, uniform rate of convergence for the excess
risk. Finally, we show that the rate obtained is in fact minimax optimal up to
a logarithmic factor, and the minimax lower bound shows the effect of the
margin assumption in this regime. | [
"Hyunouk Ko",
"Namjoon Suh",
"Xiaoming Huo"
] | 2023-09-26 17:14:10 | http://arxiv.org/abs/2309.15075v1 | http://arxiv.org/pdf/2309.15075v1 | 2309.15075v1 |
Targeting Relative Risk Heterogeneity with Causal Forests | Treatment effect heterogeneity (TEH), or variability in treatment effect for
different subgroups within a population, is of significant interest in clinical
trial analysis. Causal forests (Wager and Athey, 2018) is a highly popular
method for this problem, but like many other methods for detecting TEH, its
criterion for separating subgroups focuses on differences in absolute risk.
This can dilute statistical power by masking nuance in the relative risk, which
is often a more appropriate quantity of clinical interest. In this work, we
propose and implement a methodology for modifying causal forests to target
relative risk using a novel node-splitting procedure based on generalized
linear model (GLM) comparison. We present results on simulated and real-world
data that suggest relative risk causal forests can capture otherwise unobserved
sources of heterogeneity. | [
"Vik Shirvaikar",
"Chris Holmes"
] | 2023-09-26 16:57:46 | http://arxiv.org/abs/2309.15793v1 | http://arxiv.org/pdf/2309.15793v1 | 2309.15793v1 |
QUILT: Effective Multi-Class Classification on Quantum Computers Using an Ensemble of Diverse Quantum Classifiers | Quantum computers can theoretically have significant acceleration over
classical computers; but the near-future era of quantum computing is limited
due to the small number of qubits, which are also error-prone. Quilt is a
framework for performing multi-class classification tasks, designed to work effectively on
current error-prone quantum computers. Quilt is evaluated with real quantum
machines as well as with projected noise levels as quantum machines become more
noise-free. Quilt demonstrates up to 85% multi-class classification accuracy
with the MNIST dataset on a five-qubit system. | [
"Daniel Silver",
"Tirthak Patel",
"Devesh Tiwari"
] | 2023-09-26 16:36:11 | http://arxiv.org/abs/2309.15056v1 | http://arxiv.org/pdf/2309.15056v1 | 2309.15056v1 |
A Review on AI Algorithms for Energy Management in E-Mobility Services | E-mobility, or electric mobility, has emerged as a pivotal solution to
address pressing environmental and sustainability concerns in the
transportation sector. The depletion of fossil fuels, escalating greenhouse gas
emissions, and the imperative to combat climate change underscore the
significance of transitioning to electric vehicles (EVs). This paper seeks to
explore the potential of artificial intelligence (AI) in addressing various
challenges related to effective energy management in e-mobility systems (EMS).
These challenges encompass critical factors such as range anxiety, charge rate
optimization, and the longevity of energy storage in EVs. By analyzing existing
literature, we delve into the role that AI can play in tackling these
challenges and enabling efficient energy management in EMS. Our objectives are
twofold: to provide an overview of the current state-of-the-art in this
research domain and propose effective avenues for future investigations.
Through this analysis, we aim to contribute to the advancement of sustainable
and efficient e-mobility solutions, shaping a greener and more sustainable
future for transportation. | [
"Sen Yan",
"Maqsood Hussain Shah",
"Ji Li",
"Noel O'Connor",
"Mingming Liu"
] | 2023-09-26 16:34:35 | http://arxiv.org/abs/2309.15140v1 | http://arxiv.org/pdf/2309.15140v1 | 2309.15140v1 |
Class Incremental Learning via Likelihood Ratio Based Task Prediction | Class incremental learning (CIL) is a challenging setting of continual
learning, which learns a series of tasks sequentially. Each task consists of a
set of unique classes. The key feature of CIL is that no task identifier (or
task-id) is provided at test time for each test sample. Predicting the task-id
for each test sample is a challenging problem. An emerging theoretically
justified and effective approach is to train a task-specific model for each
task in a shared network for all tasks based on a task-incremental learning
(TIL) method to deal with forgetting. The model for each task in this approach
is an out-of-distribution (OOD) detector rather than a conventional classifier.
The OOD detector can perform both within-task (in-distribution (IND)) class
prediction and OOD detection. The OOD detection capability is the key for
task-id prediction during inference for each test sample. However, this paper
argues that using a traditional OOD detector for task-id prediction is
sub-optimal because additional information (e.g., the replay data and the
learned tasks) available in CIL can be exploited to design a better and
principled method for task-id prediction. We call the new method TPLR (Task-id
Prediction based on Likelihood Ratio). TPLR markedly outperforms strong CIL
baselines. | [
"Haowei Lin",
"Yijia Shao",
"Weinan Qian",
"Ningxin Pan",
"Yiduo Guo",
"Bing Liu"
] | 2023-09-26 16:25:57 | http://arxiv.org/abs/2309.15048v2 | http://arxiv.org/pdf/2309.15048v2 | 2309.15048v2 |
Combining Survival Analysis and Machine Learning for Mass Cancer Risk Prediction using EHR data | Purely medical cancer screening methods are often costly, time-consuming, and
weakly applicable on a large scale. Advanced Artificial Intelligence (AI)
methods greatly help cancer detection but require specific or deep medical
data. These aspects affect the mass implementation of cancer screening methods.
For these reasons, it is a disruptive change for healthcare to apply AI methods
for mass personalized assessment of the cancer risk among patients based on the
existing Electronic Health Records (EHR) volume.
This paper presents a novel method for mass cancer risk prediction using EHR
data. Among other methods, ours stands out for its minimal data requirements,
needing only a history of medical service codes and diagnoses from the EHR. We
formulate the problem as binary classification. The dataset contains
175 441 de-identified patients (2 861 diagnosed with cancer). As a baseline, we
implement a solution based on a recurrent neural network (RNN). We propose a
method that combines machine learning and survival analysis since these
approaches are less computationally heavy, can be combined into an ensemble
(the Survival Ensemble), and can be reproduced in most medical institutions.
We test the Survival Ensemble in some studies. Firstly, we obtain a
significant difference between values of the primary metric (Average Precision)
with 22.8% (ROC AUC 83.7%, F1 17.8%) for the Survival Ensemble versus 15.1%
(ROC AUC 84.9%, F1 21.4%) for the Baseline. Secondly, the performance of the
Survival Ensemble is also confirmed during the ablation study. Thirdly, our
method exceeds age baselines by a significant margin. Fourthly, in the blind
retrospective out-of-time experiment, the proposed method is reliable in cancer
patient detection (9 out of 100 selected). Such results exceed the estimates of
medical screenings, e.g., the best Number Needed to Screen (9 out of 1000
screenings). | [
"Petr Philonenko",
"Vladimir Kokh",
"Pavel Blinov"
] | 2023-09-26 16:15:54 | http://arxiv.org/abs/2309.15039v1 | http://arxiv.org/pdf/2309.15039v1 | 2309.15039v1 |
HPCR: Holistic Proxy-based Contrastive Replay for Online Continual Learning | Online continual learning (OCL) aims to continuously learn new data from a
single pass over the online data stream. It generally suffers from the
catastrophic forgetting issue. Existing replay-based methods effectively
alleviate this issue by replaying part of old data in a proxy-based or
contrastive-based replay manner. In this paper, we conduct a comprehensive
analysis of these two replay manners and find they can be complementary.
Inspired by this finding, we propose a novel replay-based method called
proxy-based contrastive replay (PCR), which replaces anchor-to-sample pairs
with anchor-to-proxy pairs in the contrastive-based loss to alleviate the
phenomenon of forgetting. Based on PCR, we further develop a more advanced
method named holistic proxy-based contrastive replay (HPCR), which consists of
three components. The first is a contrastive component that conditionally
incorporates anchor-to-sample pairs into PCR, learning more fine-grained
semantic information with a large training batch. The second is a temperature component that
decouples the temperature coefficient into two parts based on their impacts on
the gradient and sets different values for them to learn more novel knowledge.
The third is a distillation component that constrains the learning process to
keep more historical knowledge. Experiments on four datasets consistently
demonstrate the superiority of HPCR over various state-of-the-art methods. | [
"Huiwei Lin",
"Shanshan Feng",
"Baoquan Zhang",
"Xutao Li",
"Yew-soon Ong",
"Yunming Ye"
] | 2023-09-26 16:12:57 | http://arxiv.org/abs/2309.15038v1 | http://arxiv.org/pdf/2309.15038v1 | 2309.15038v1 |
How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions | Large language models (LLMs) can "lie", which we define as outputting false
statements despite "knowing" the truth in a demonstrable sense. LLMs might
"lie", for example, when instructed to output misinformation. Here, we develop
a simple lie detector that requires neither access to the LLM's activations
(black-box) nor ground-truth knowledge of the fact in question. The detector
works by asking a predefined set of unrelated follow-up questions after a
suspected lie, and feeding the LLM's yes/no answers into a logistic regression
classifier. Despite its simplicity, this lie detector is highly accurate and
surprisingly general. When trained on examples from a single setting --
prompting GPT-3.5 to lie about factual questions -- the detector generalises
out-of-distribution to (1) other LLM architectures, (2) LLMs fine-tuned to lie,
(3) sycophantic lies, and (4) lies emerging in real-life scenarios such as
sales. These results indicate that LLMs have distinctive lie-related
behavioural patterns, consistent across architectures and contexts, which could
enable general-purpose lie detection. | [
"Lorenzo Pacchiardi",
"Alex J. Chan",
"Sören Mindermann",
"Ilan Moscovitz",
"Alexa Y. Pan",
"Yarin Gal",
"Owain Evans",
"Jan Brauner"
] | 2023-09-26 16:07:54 | http://arxiv.org/abs/2309.15840v1 | http://arxiv.org/pdf/2309.15840v1 | 2309.15840v1 |
Don't throw away your value model! Making PPO even better via Value-Guided Monte-Carlo Tree Search decoding | Inference-time search algorithms such as Monte-Carlo Tree Search (MCTS) may
seem unnecessary when generating natural language text based on
state-of-the-art reinforcement learning such as Proximal Policy Optimization
(PPO). In this paper, we demonstrate that it is possible to get extra mileage
out of PPO by integrating MCTS on top. The key idea is not to throw out the
value network, a byproduct of PPO training for evaluating partial output
sequences, when decoding text out of the policy network. More concretely, we
present a novel value-guided decoding algorithm called PPO-MCTS, which can
integrate the value network from PPO to work closely with the policy network
during inference-time generation. Compared to prior approaches based on MCTS
for controlled text generation, the key strength of our approach is to reduce
the fundamental mismatch of the scoring mechanisms of the partial outputs
between training and test. Evaluations on four text generation tasks demonstrate
that PPO-MCTS greatly improves the preferability of generated text compared to
the standard practice of using only the PPO policy. Our results demonstrate the
promise of search algorithms even on top of the aligned language models from
PPO, and the under-explored benefit of the value network. | [
"Jiacheng Liu",
"Andrew Cohen",
"Ramakanth Pasunuru",
"Yejin Choi",
"Hannaneh Hajishirzi",
"Asli Celikyilmaz"
] | 2023-09-26 15:57:57 | http://arxiv.org/abs/2309.15028v2 | http://arxiv.org/pdf/2309.15028v2 | 2309.15028v2 |
Synthia's Melody: A Benchmark Framework for Unsupervised Domain Adaptation in Audio | Despite significant advancements in deep learning for vision and natural
language, unsupervised domain adaptation in audio remains relatively
unexplored. We, in part, attribute this to the lack of an appropriate benchmark
dataset. To address this gap, we present Synthia's melody, a novel audio data
generation framework capable of simulating an infinite variety of 4-second
melodies with user-specified confounding structures characterised by musical
keys, timbre, and loudness. Unlike existing datasets collected under
observational settings, Synthia's melody is free of unobserved biases, ensuring
the reproducibility and comparability of experiments. To showcase its utility,
we generate two types of distribution shifts, domain shift and sample selection
bias, and evaluate the performance of acoustic deep learning models under these
shifts. Our evaluations reveal that Synthia's melody provides a robust testbed
for examining the susceptibility of these models to varying levels of
distribution shift. | [
"Chia-Hsin Lin",
"Charles Jones",
"Björn W. Schuller",
"Harry Coppock"
] | 2023-09-26 15:46:06 | http://arxiv.org/abs/2309.15024v1 | http://arxiv.org/pdf/2309.15024v1 | 2309.15024v1 |
PINF: Continuous Normalizing Flows for Physics-Constrained Deep Learning | The normalization constraint on probability density poses a significant
challenge for solving the Fokker-Planck equation. Normalizing Flow, an
invertible generative model, leverages the change-of-variables formula to ensure
probability density conservation and enable the learning of complex data
distributions. In this paper, we introduce Physics-Informed Normalizing Flows
(PINF), a novel extension of continuous normalizing flows, incorporating
diffusion through the method of characteristics. Our method, which is mesh-free
and causality-free, can efficiently solve high dimensional time-dependent and
steady-state Fokker-Planck equations. | [
"Feng Liu",
"Faguo Wu",
"Xiao Zhang"
] | 2023-09-26 15:38:57 | http://arxiv.org/abs/2309.15139v1 | http://arxiv.org/pdf/2309.15139v1 | 2309.15139v1 |
Automating question generation from educational text | The use of question-based activities (QBAs) is widespread in education,
traditionally forming an integral part of the learning and assessment process.
In this paper, we design and evaluate an automated question generation tool for
formative and summative assessment in schools. We present an expert survey of
one hundred and four teachers, demonstrating the need for automated generation
of QBAs, as a tool that can significantly reduce the workload of teachers and
facilitate personalized learning experiences. Leveraging the recent
advancements in generative AI, we then present a modular framework employing
transformer based language models for automatic generation of multiple-choice
questions (MCQs) from textual content. The presented solution, with distinct
modules for question generation, correct answer prediction, and distractor
formulation, enables us to evaluate different language models and generation
techniques. Finally, we perform an extensive quantitative and qualitative
evaluation, demonstrating trade-offs in the use of different techniques and
models. | [
"Ayan Kumar Bhowmick",
"Ashish Jagmohan",
"Aditya Vempaty",
"Prasenjit Dey",
"Leigh Hall",
"Jeremy Hartman",
"Ravi Kokku",
"Hema Maheshwari"
] | 2023-09-26 15:18:44 | http://arxiv.org/abs/2309.15004v1 | http://arxiv.org/pdf/2309.15004v1 | 2309.15004v1 |
Measurement Models For Sailboats Price vs. Features And Regional Areas | In this study, we investigated the relationship between sailboat technical
specifications and their prices, as well as regional pricing influences.
Utilizing a dataset encompassing characteristics like length, beam, draft,
displacement, sail area, and waterline, we applied multiple machine learning
models to predict sailboat prices. The gradient descent model demonstrated
superior performance, producing the lowest MSE and MAE. Our analysis revealed
that monohulled boats are generally more affordable than catamarans, and that
certain specifications such as length, beam, displacement, and sail area
directly correlate with higher prices. Interestingly, lower draft was
associated with higher listing prices. We also explored regional price
determinants and found that the United States tops the list in average sailboat
prices, followed by Europe, Hong Kong, and the Caribbean. Contrary to our
initial hypothesis, a country's GDP showed no direct correlation with sailboat
prices. Utilizing a 50% cross-validation method, our models yielded consistent
results across test groups. Our research offers a machine learning-enhanced
perspective on sailboat pricing, aiding prospective buyers in making informed
decisions. | [
"Jiaqi Weng",
"Chunlin Feng",
"Yihan Shao"
] | 2023-09-26 15:03:05 | http://arxiv.org/abs/2309.14994v1 | http://arxiv.org/pdf/2309.14994v1 | 2309.14994v1 |
Tempo Adaption in Non-stationary Reinforcement Learning | We first raise and tackle ``time synchronization'' issue between the agent
and the environment in non-stationary reinforcement learning (RL), a crucial
factor hindering its real-world applications. In reality, environmental changes
occur over wall-clock time ($\mathfrak{t}$) rather than episode progress ($k$),
where wall-clock time signifies the actual elapsed time within the fixed
duration $\mathfrak{t} \in [0, T]$. In existing works, at episode $k$, the
agent rolls out a trajectory and trains a policy before transitioning to episode
$k+1$. In the context of the time-desynchronized environment, however, the
agent at time $\mathfrak{t}_k$ allocates $\Delta \mathfrak{t}$ for trajectory
generation and training, subsequently moves to the next episode at
$\mathfrak{t}_{k+1}=\mathfrak{t}_{k}+\Delta \mathfrak{t}$. Despite a fixed
total episode ($K$), the agent accumulates different trajectories influenced by
the choice of \textit{interaction times}
($\mathfrak{t}_1,\mathfrak{t}_2,...,\mathfrak{t}_K$), significantly impacting
the sub-optimality gap of the policy. We propose a Proactively Synchronizing Tempo
(ProST) framework that computes optimal $\{
\mathfrak{t}_1,\mathfrak{t}_2,...,\mathfrak{t}_K \} (= \{ \mathfrak{t}
\}_{1:K})$. Our main contribution is that we show the optimal $\{ \mathfrak{t}
\}_{1:K}$ trades off between the policy training time (agent tempo) and how
fast the environment changes (environment tempo). Theoretically, this work
establishes an optimal $\{ \mathfrak{t} \}_{1:K}$ as a function of the degree
of the environment's non-stationarity while also achieving a sublinear dynamic
regret. Our experimental evaluation on various high dimensional non-stationary
environments shows that the ProST framework achieves a higher online return at
optimal $\{ \mathfrak{t} \}_{1:K}$ than the existing methods. | [
"Hyunin Lee",
"Yuhao Ding",
"Jongmin Lee",
"Ming Jin",
"Javad Lavaei",
"Somayeh Sojoudi"
] | 2023-09-26 15:01:21 | http://arxiv.org/abs/2309.14989v1 | http://arxiv.org/pdf/2309.14989v1 | 2309.14989v1 |
Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition | The ubiquitous availability of sensors in smart devices and the
Internet of Things (IoT) has opened up the possibilities for implementing
sensor-based activity recognition. As opposed to traditional sensor time-series
processing and hand-engineered feature extraction, in light of deep learning's
proven effectiveness across various domains, numerous deep methods have been
explored to tackle the challenges in activity recognition, outperforming the
traditional signal processing and traditional machine learning approaches. In
this work, by performing extensive experimental studies on two human activity
recognition datasets, we investigate the performance of common deep learning
and machine learning approaches as well as different training mechanisms (such
as contrastive learning), and various feature representations extracted from
the sensor time-series data and measure their effectiveness for the human
activity recognition task. | [
"Danial Ahangarani",
"Mohammad Shirazi",
"Navid Ashraf"
] | 2023-09-26 14:55:32 | http://arxiv.org/abs/2310.03760v1 | http://arxiv.org/pdf/2310.03760v1 | 2310.03760v1 |
Statistical Analysis of Quantum State Learning Process in Quantum Neural Networks | Quantum neural networks (QNNs) have been a promising framework in pursuing
near-term quantum advantage in various fields, where many applications can be
viewed as learning a quantum state that encodes useful data. As a quantum
analog of probability distribution learning, quantum state learning is
theoretically and practically essential in quantum machine learning. In this
paper, we develop a no-go theorem for learning an unknown quantum state with
QNNs even starting from a high-fidelity initial state. We prove that when the
loss value is lower than a critical threshold, the probability of avoiding
local minima vanishes exponentially with the qubit count, while growing only
polynomially with the circuit depth. The curvature of local minima is
concentrated to the quantum Fisher information times a loss-dependent constant,
which characterizes the sensibility of the output state with respect to
parameters in QNNs. These results hold for any circuit structures,
initialization strategies, and work for both fixed ansatzes and adaptive
methods. Extensive numerical simulations are performed to validate our
theoretical results. Our findings place generic limits on good initial guesses
and adaptive methods for improving the learnability and scalability of QNNs,
and deepen the understanding of prior information's role in QNNs. | [
"Hao-kai Zhang",
"Chenghong Zhu",
"Mingrui Jing",
"Xin Wang"
] | 2023-09-26 14:54:50 | http://arxiv.org/abs/2309.14980v1 | http://arxiv.org/pdf/2309.14980v1 | 2309.14980v1 |
Deep Generative Methods for Producing Forecast Trajectories in Power Systems | With the expansion of renewables in the electricity mix, power grid
variability will increase, hence a need to robustify the system to guarantee
its security. Therefore, Transmission System Operators (TSOs) must conduct
analyses to simulate the future functioning of power systems. Then, these
simulations are used as inputs in decision-making processes. In this context,
we investigate using deep learning models to generate energy production and
load forecast trajectories. To capture the spatiotemporal correlations in these
multivariate time series, we adapt autoregressive networks and normalizing
flows, demonstrating their effectiveness against the current copula-based
statistical approach. We conduct extensive experiments on the French TSO RTE
wind forecast data and compare the different models with \textit{ad hoc}
evaluation metrics for time series generation. | [
"Nathan Weill",
"Jonathan Dumas"
] | 2023-09-26 14:43:01 | http://arxiv.org/abs/2309.15137v1 | http://arxiv.org/pdf/2309.15137v1 | 2309.15137v1 |
Recurrent Hypernetworks are Surprisingly Strong in Meta-RL | Deep reinforcement learning (RL) is notoriously impractical to deploy due to
sample inefficiency. Meta-RL directly addresses this sample inefficiency by
learning to perform few-shot learning when a distribution of related tasks is
available for meta-training. While many specialized meta-RL methods have been
proposed, recent work suggests that end-to-end learning in conjunction with an
off-the-shelf sequential model, such as a recurrent network, is a surprisingly
strong baseline. However, such claims have been controversial due to limited
supporting evidence, particularly in the face of prior work establishing
precisely the opposite. In this paper, we conduct an empirical investigation.
While we likewise find that a recurrent network can achieve strong performance,
we demonstrate that the use of hypernetworks is crucial to maximizing their
potential. Surprisingly, when combined with hypernetworks, recurrent
baselines that are far simpler than existing specialized methods actually
achieve the strongest performance of all methods evaluated. | [
"Jacob Beck",
"Risto Vuorio",
"Zheng Xiong",
"Shimon Whiteson"
] | 2023-09-26 14:42:28 | http://arxiv.org/abs/2309.14970v3 | http://arxiv.org/pdf/2309.14970v3 | 2309.14970v3 |
Context-Aware Generative Models for Prediction of Aircraft Ground Tracks | Trajectory prediction (TP) plays an important role in supporting the
decision-making of Air Traffic Controllers (ATCOs). Traditional TP methods are
deterministic and physics-based, with parameters that are calibrated using
aircraft surveillance data harvested across the world. These models are,
therefore, agnostic to the intentions of the pilots and ATCOs, which can have a
significant effect on the observed trajectory, particularly in the lateral
plane. This work proposes a generative method for lateral TP, using
probabilistic machine learning to model the effect of the epistemic uncertainty
arising from the unknown effect of pilot behaviour and ATCO intentions. The
models are trained to be specific to a particular sector, allowing local
procedures such as coordinated entry and exit points to be modelled. A dataset
comprising a week's worth of aircraft surveillance data, passing through a busy
sector of the United Kingdom's upper airspace, was used to train and test the
models. Specifically, a piecewise linear model was used as a functional,
low-dimensional representation of the ground tracks, with its control points
determined by a generative model conditioned on partial context. It was found
that, of the investigated models, a Bayesian Neural Network using the Laplace
approximation was able to generate the most plausible trajectories in order to
emulate the flow of traffic through the sector. | [
"Nick Pepper",
"George De Ath",
"Marc Thomas",
"Richard Everson",
"Tim Dodwell"
] | 2023-09-26 14:20:09 | http://arxiv.org/abs/2309.14957v1 | http://arxiv.org/pdf/2309.14957v1 | 2309.14957v1 |
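The piecewise linear ground-track representation described in the abstract above can be sketched as a least-squares fit against hat-function (linear B-spline) bases with fixed knots; the function name, knot handling, and NumPy interface here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def piecewise_linear_fit(t, y, knots):
    """Least-squares fit of a piecewise-linear function with fixed knots.

    Each column of the design matrix is a hat function centered on one knot,
    so the fitted coefficients are the control-point values at the knots --
    a low-dimensional functional representation of a ground track.
    """
    basis = np.eye(len(knots))
    A = np.column_stack([np.interp(t, knots, basis[j]) for j in range(len(knots))])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

t = np.linspace(0.0, 1.0, 11)
y = 2.0 * t + 1.0  # a track that happens to be exactly linear
cp = piecewise_linear_fit(t, y, knots=np.array([0.0, 0.5, 1.0]))
# cp recovers the track values at the knots: [1.0, 2.0, 3.0]
```

A generative model as described above would then only need to predict the low-dimensional `cp` vector, conditioned on partial context, rather than the full track.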
Contrastive Continual Multi-view Clustering with Filtered Structural Fusion | Multi-view clustering thrives in applications where views are collected in
advance by extracting consistent and complementary information among views.
However, it overlooks scenarios where data views are collected sequentially,
i.e., real-time data. Due to privacy issues or memory burden, previous views
are not available over time in these situations. Some methods have been proposed
to handle this but are trapped in a stability-plasticity dilemma. Specifically,
these methods undergo catastrophic forgetting of prior knowledge when a new view
is attained. Such a catastrophic forgetting problem (CFP) makes the consistent
and complementary information hard to obtain and degrades the clustering
performance. To tackle this, we propose a novel method termed Contrastive
Continual Multi-view Clustering with Filtered Structural Fusion (CCMVC-FSF).
Precisely, considering that data correlations play a vital role in clustering
and prior knowledge ought to guide the clustering process of a new view, we
develop a data buffer with fixed size to store filtered structural information
and utilize it to guide the generation of a robust partition matrix via
contrastive learning. Furthermore, we theoretically connect CCMVC-FSF with
semi-supervised learning and knowledge distillation. Extensive experiments
exhibit the excellence of the proposed method. | [
"Xinhang Wan",
"Jiyuan Liu",
"Ao Li",
"Xinwang Liu",
"En Zhu"
] | 2023-09-26 14:18:29 | http://arxiv.org/abs/2309.15135v1 | http://arxiv.org/pdf/2309.15135v1 | 2309.15135v1 |
Towards Real-World Test-Time Adaptation: Tri-Net Self-Training with Balanced Normalization | Test-Time Adaptation aims to adapt a source domain model to testing data at
inference stage with success demonstrated in adapting to unseen corruptions.
However, these attempts may fail under more challenging real-world scenarios.
Existing works mainly consider real-world test-time adaptation under non-i.i.d.
data stream and continual domain shift. In this work, we first complement the
existing real-world TTA protocol with a globally class imbalanced testing set.
We demonstrate that combining all settings together poses new challenges to
existing methods. We argue the failure of state-of-the-art methods is first
caused by indiscriminately adapting normalization layers to imbalanced testing
data. To remedy this shortcoming, we propose a balanced batchnorm layer to swap
out the regular batchnorm at inference stage. The new batchnorm layer is
capable of adapting without biasing towards majority classes. We are further
inspired by the success of self-training~(ST) in learning from unlabeled data
and adapt ST for test-time adaptation. However, ST alone is prone to
over-adaptation, which is responsible for the poor performance under continual domain
shift. Hence, we propose to improve self-training under continual domain shift
by regularizing model updates with an anchored loss. The final TTA model,
termed TRIBE, is built upon a tri-net architecture with balanced batchnorm
layers. We evaluate TRIBE on four datasets representing real-world TTA
settings. TRIBE consistently achieves the state-of-the-art performance across
multiple evaluation protocols. The code is available at
\url{https://github.com/Gorilla-Lab-SCUT/TRIBE}. | [
"Yongyi Su",
"Xun Xu",
"Kui Jia"
] | 2023-09-26 14:06:26 | http://arxiv.org/abs/2309.14949v1 | http://arxiv.org/pdf/2309.14949v1 | 2309.14949v1 |
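The balanced batchnorm idea in the abstract above can be sketched in a few lines of NumPy; this is a hedged illustration of the mechanism (per-class running statistics averaged with equal weight), not the TRIBE layer itself, and the class interface, momentum value, and use of hard pseudo-labels are all assumptions:

```python
import numpy as np

class BalancedBatchNorm1d:
    """Sketch of class-balanced normalization: running statistics are kept
    per (pseudo-)class and averaged with equal weight at inference, so
    majority classes in an imbalanced test stream cannot dominate the
    normalization statistics the way a regular batchnorm's batch mean can."""

    def __init__(self, num_features, num_classes, momentum=0.1):
        self.mean = np.zeros((num_classes, num_features))
        self.var = np.ones((num_classes, num_features))
        self.momentum = momentum

    def update(self, x, pseudo_labels):
        # exponential moving average of per-class statistics
        for c in np.unique(pseudo_labels):
            xc = x[pseudo_labels == c]
            self.mean[c] = (1 - self.momentum) * self.mean[c] + self.momentum * xc.mean(0)
            self.var[c] = (1 - self.momentum) * self.var[c] + self.momentum * xc.var(0)

    def __call__(self, x, eps=1e-5):
        mu = self.mean.mean(axis=0)   # equal weight per class, not per sample
        var = self.var.mean(axis=0)
        return (x - mu) / np.sqrt(var + eps)
```

Even if one class contributes far more samples per batch, its statistics still enter the averaged `mu` and `var` with the same weight as every other class.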
Learning Generative Models for Climbing Aircraft from Radar Data | Accurate trajectory prediction (TP) for climbing aircraft is hampered by the
presence of epistemic uncertainties concerning aircraft operation, which can
lead to significant misspecification between predicted and observed
trajectories. This paper proposes a generative model for climbing aircraft in
which the standard Base of Aircraft Data (BADA) model is enriched by a
functional correction to the thrust that is learned from data. The method
offers three features: predictions of the arrival time with 66.3% less error
when compared to BADA; generated trajectories that are realistic when compared
to test data; and a means of computing confidence bounds at minimal
computational cost. | [
"Nick Pepper",
"Marc Thomas"
] | 2023-09-26 13:53:53 | http://arxiv.org/abs/2309.14941v1 | http://arxiv.org/pdf/2309.14941v1 | 2309.14941v1 |
Parallel Multi-Objective Hyperparameter Optimization with Uniform Normalization and Bounded Objectives | Machine learning (ML) methods offer a wide range of configurable
hyperparameters that have a significant influence on their performance. While
accuracy is a commonly used performance objective, in many settings, it is not
sufficient. Optimizing the ML models with respect to multiple objectives such
as accuracy, confidence, fairness, calibration, privacy, latency, and memory
consumption is becoming crucial. To that end, hyperparameter optimization, the
approach to systematically optimize the hyperparameters, which is already
challenging for a single objective, is even more challenging for multiple
objectives. In addition, the differences in objective scales, the failures, and
the presence of outlier values in objectives make the problem even harder. We
propose a multi-objective Bayesian optimization (MoBO) algorithm that addresses
these problems through uniform objective normalization and randomized weights
in scalarization. We increase the efficiency of our approach by imposing
constraints on the objective to avoid exploring unnecessary configurations
(e.g., insufficient accuracy). Finally, we leverage an approach to parallelize
the MoBO which results in a 5x speed-up when using 16x more workers. | [
"Romain Egele",
"Tyler Chang",
"Yixuan Sun",
"Venkatram Vishwanath",
"Prasanna Balaprakash"
] | 2023-09-26 13:48:04 | http://arxiv.org/abs/2309.14936v1 | http://arxiv.org/pdf/2309.14936v1 | 2309.14936v1 |
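The two ingredients named in the abstract above, uniform objective normalization and randomized weights in scalarization, can be sketched as follows; the empirical-CDF normalization, Dirichlet weight sampling, and function names are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def quantile_normalize(values, history):
    """Map each objective onto [0, 1] via its empirical CDF over past
    evaluations, so objectives with very different scales (and outlier
    values) become comparable -- one reading of uniform normalization."""
    history = np.asarray(history)
    return np.array([(history[:, j] <= v).mean() for j, v in enumerate(values)])

def random_scalarize(norm_values, rng):
    """Collapse the normalized objectives into one score with random convex
    weights; resampling the weights at each iteration spreads the search
    over the Pareto front instead of fixing one trade-off."""
    w = rng.dirichlet(np.ones(len(norm_values)))
    return float(np.dot(w, norm_values))

rng = np.random.default_rng(0)
history = [[0.9, 120.0], [0.8, 60.0], [0.95, 300.0]]  # toy (error, latency) pairs
score = random_scalarize(quantile_normalize([0.8, 60.0], history), rng)
```

The single-objective Bayesian optimizer then minimizes `score`, and the objective constraints mentioned above would simply mark configurations violating a bound (e.g., insufficient accuracy) as failures.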
Noise-Tolerant Unsupervised Adapter for Vision-Language Models | Recent advances in large-scale vision-language models have achieved very
impressive performance in various zero-shot image classification tasks. While
prior studies have demonstrated significant improvements by introducing
few-shot labelled target samples, they still require labelling of target
samples, which greatly degrades their scalability while handling various visual
recognition tasks. We design NtUA, a Noise-tolerant Unsupervised Adapter that
allows learning superior target models with few-shot unlabelled target samples.
NtUA works as a key-value cache that formulates visual features and predicted
pseudo-labels of the few-shot unlabelled target samples as key-value pairs. It
consists of two complementary designs. The first is adaptive cache formation
that combats pseudo-label noises by weighting the key-value pairs according to
their prediction confidence. The second is pseudo-label rectification, which
corrects both pair values (i.e., pseudo-labels) and cache weights by leveraging
knowledge distillation from large-scale vision language models. Extensive
experiments show that NtUA achieves superior performance consistently across
multiple widely adopted benchmarks. | [
"Eman Ali",
"Dayan Guan",
"Shijian Lu",
"Abdulmotaleb Elsaddik"
] | 2023-09-26 13:35:31 | http://arxiv.org/abs/2309.14928v1 | http://arxiv.org/pdf/2309.14928v1 | 2309.14928v1 |
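The confidence-weighted key-value cache described in the abstract above can be sketched as follows; the cosine-affinity form, the sharpness parameter `beta`, and the function name are assumptions for illustration, not NtUA's exact formulation:

```python
import numpy as np

def cache_predict(query, keys, values, weights, beta=5.0):
    """Sketch of a confidence-weighted key-value cache classifier: affinities
    between the query feature and cached features are scaled by per-pair
    confidence weights before aggregating the cached (soft) pseudo-labels,
    so low-confidence noisy pairs contribute less to the prediction."""
    keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    affinity = np.exp(beta * (keys @ q - 1.0)) * weights  # in (0, 1], reweighted
    return affinity @ values / affinity.sum()
```

Pseudo-label rectification would then amount to periodically overwriting rows of `values` and entries of `weights` using the large vision-language model's predictions.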
Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | Node representation learning on attributed graphs -- whose nodes are
associated with rich attributes (e.g., texts and protein sequences) -- plays a
crucial role in many important downstream tasks. To encode the attributes and
graph structures simultaneously, recent studies integrate pre-trained models
with graph neural networks (GNNs), where pre-trained models serve as node
encoders (NEs) to encode the attributes. As jointly training large NEs and GNNs
on large-scale graphs suffers from severe scalability issues, many methods
propose to train NEs and GNNs separately. Consequently, they do not take
feature convolutions in GNNs into consideration in the training phase of NEs,
leading to a significant learning bias from that by the joint training. To
address this challenge, we propose an efficient label regularization technique,
namely Label Deconvolution (LD), to alleviate the learning bias by a novel and
highly scalable approximation to the inverse mapping of GNNs. The inverse
mapping leads to an objective function that is equivalent to that by the joint
training, while it can effectively incorporate GNNs in the training phase of
NEs against the learning bias. More importantly, we show that LD converges to
the optimal objective function values by the joint training under mild
assumptions. Experiments demonstrate that LD significantly outperforms
state-of-the-art methods on Open Graph Benchmark datasets. | [
"Zhihao Shi",
"Jie Wang",
"Fanghua Lu",
"Hanzhu Chen",
"Defu Lian",
"Zheng Wang",
"Jieping Ye",
"Feng Wu"
] | 2023-09-26 13:09:43 | http://arxiv.org/abs/2309.14907v1 | http://arxiv.org/pdf/2309.14907v1 | 2309.14907v1 |
Learning from Flawed Data: Weakly Supervised Automatic Speech Recognition | Training automatic speech recognition (ASR) systems requires large amounts of
well-curated paired data. However, human annotators usually perform
"non-verbatim" transcription, which can result in poorly trained models. In
this paper, we propose Omni-temporal Classification (OTC), a novel training
criterion that explicitly incorporates label uncertainties originating from
such weak supervision. This allows the model to effectively learn speech-text
alignments while accommodating errors present in the training transcripts. OTC
extends the conventional CTC objective for imperfect transcripts by leveraging
weighted finite state transducers. Through experiments conducted on the
LibriSpeech and LibriVox datasets, we demonstrate that training ASR models with
OTC avoids performance degradation even with transcripts containing up to 70%
errors, a scenario where CTC models fail completely. Our implementation is
available at https://github.com/k2-fsa/icefall. | [
"Dongji Gao",
"Hainan Xu",
"Desh Raj",
"Leibny Paola Garcia Perera",
"Daniel Povey",
"Sanjeev Khudanpur"
] | 2023-09-26 12:58:40 | http://arxiv.org/abs/2309.15796v1 | http://arxiv.org/pdf/2309.15796v1 | 2309.15796v1 |
FDLS: A Deep Learning Approach to Production Quality, Controllable, and Retargetable Facial Performances | Visual effects work commonly requires both the creation of realistic synthetic
humans and the retargeting of actors' performances to humanoid characters such
as aliens and monsters. Achieving the expressive performances demanded in
entertainment requires manipulating complex models with hundreds of parameters.
Full creative control requires the freedom to make edits at any stage of the
production, which prohibits the use of a fully automatic ``black box'' solution
with uninterpretable parameters. On the other hand, producing realistic
animation with these sophisticated models is difficult and laborious. This
paper describes FDLS (Facial Deep Learning Solver), which is Weta Digital's
solution to these challenges. FDLS adopts a coarse-to-fine and
human-in-the-loop strategy, allowing a solved performance to be verified and
edited at several stages in the solving process. To train FDLS, we first
transform the raw motion-captured data into robust graph features. Secondly,
based on the observation that the artists typically finalize the jaw pass
animation before proceeding to finer detail, we solve for the jaw motion first
and predict fine expressions with region-based networks conditioned on the jaw
position. Finally, artists can optionally invoke a non-linear finetuning
process on top of the FDLS solution to follow the motion-captured virtual
markers as closely as possible. FDLS supports editing if needed to improve the
results of the deep learning solution and it can handle small daily changes in
the actor's face shape. FDLS permits reliable and production-quality
performance solving with minimal training and little or no manual effort in
many cases, while also allowing the solve to be guided and edited in unusual
and difficult cases. The system has been under development for several years
and has been used in major movies. | [
"Wan-Duo Kurt Ma",
"Muhammad Ghifary",
"J. P. Lewis",
"Byungkuk Choi",
"Haekwang Eom"
] | 2023-09-26 12:54:58 | http://arxiv.org/abs/2309.14897v1 | http://arxiv.org/pdf/2309.14897v1 | 2309.14897v1 |
Verifiable Learned Behaviors via Motion Primitive Composition: Applications to Scooping of Granular Media | A robotic behavior model that can reliably generate behaviors from natural
language inputs in real time would substantially expedite the adoption of
industrial robots due to enhanced system flexibility. To facilitate these
efforts, we construct a framework in which learned behaviors, created by a
natural language abstractor, are verifiable by construction. Leveraging recent
advancements in motion primitives and probabilistic verification, we construct
a natural-language behavior abstractor that generates behaviors by synthesizing
a directed graph over the provided motion primitives. If these component motion
primitives are constructed according to the criteria we specify, the resulting
behaviors are probabilistically verifiable. We demonstrate this verifiable
behavior generation capacity in both simulation on an exploration task and on
hardware with a robot scooping granular media. | [
"Andrew Benton",
"Eugen Solowjow",
"Prithvi Akella"
] | 2023-09-26 12:51:03 | http://arxiv.org/abs/2309.14894v1 | http://arxiv.org/pdf/2309.14894v1 | 2309.14894v1 |
Locality-preserving Directions for Interpreting the Latent Space of Satellite Image GANs | We present a locality-aware method for interpreting the latent space of
wavelet-based Generative Adversarial Networks (GANs), that can well capture the
large spatial and spectral variability that is characteristic to satellite
imagery. By focusing on preserving locality, the proposed method is able to
decompose the weight-space of pre-trained GANs and recover interpretable
directions that correspond to high-level semantic concepts (such as
urbanization, structure density, flora presence) - that can subsequently be
used for guided synthesis of satellite imagery. In contrast to typically used
approaches that focus on capturing the variability of the weight-space in a
reduced dimensionality space (i.e., based on Principal Component Analysis,
PCA), we show that preserving locality leads to vectors with different angles,
that are more robust to artifacts and can better preserve class information.
Via a set of quantitative and qualitative examples, we further show that the
proposed approach can outperform both baseline geometric augmentations, as well
as global, PCA-based approaches for data synthesis in the context of data
augmentation for satellite scene classification. | [
"Georgia Kourmouli",
"Nikos Kostagiolas",
"Yannis Panagakis",
"Mihalis A. Nicolaou"
] | 2023-09-26 12:29:36 | http://arxiv.org/abs/2309.14883v1 | http://arxiv.org/pdf/2309.14883v1 | 2309.14883v1 |
Credit Card Fraud Detection with Subspace Learning-based One-Class Classification | In an increasingly digitalized commerce landscape, the proliferation of
credit card fraud and the evolution of sophisticated fraudulent techniques have
led to substantial financial losses. Automating credit card fraud detection is
a viable way to accelerate detection, reducing response times and minimizing
potential financial losses. However, addressing this challenge is complicated
by the highly imbalanced nature of the datasets, where genuine transactions
vastly outnumber fraudulent ones. Furthermore, the high number of dimensions
within the feature set gives rise to the ``curse of dimensionality''. In this
paper, we investigate subspace learning-based approaches centered on One-Class
Classification (OCC) algorithms, which excel in handling imbalanced data
distributions and possess the capability to anticipate and counter the
transactions carried out by yet-to-be-invented fraud techniques. The study
highlights the potential of subspace learning-based OCC algorithms by
investigating the limitations of current fraud detection strategies and the
specific challenges of credit card fraud detection. These algorithms integrate
subspace learning into the data description; hence, the models transform the
data into a lower-dimensional subspace optimized for OCC. Through rigorous
experimentation and analysis, the study validated that the proposed approach
helps tackle the curse of dimensionality and the imbalanced nature of credit
card data for automatic fraud detection to mitigate financial losses caused by
fraudulent activities. | [
"Zaffar Zaffar",
"Fahad Sohrab",
"Juho Kanniainen",
"Moncef Gabbouj"
] | 2023-09-26 12:26:28 | http://arxiv.org/abs/2309.14880v1 | http://arxiv.org/pdf/2309.14880v1 | 2309.14880v1 |
Navigating Text-To-Image Customization:From LyCORIS Fine-Tuning to Model Evaluation | Text-to-image generative models have garnered immense attention for their
ability to produce high-fidelity images from text prompts. Among these, Stable
Diffusion distinguishes itself as a leading open-source model in this
fast-growing field. However, the intricacies of fine-tuning these models pose
multiple challenges from new methodology integration to systematic evaluation.
Addressing these issues, this paper introduces LyCORIS (Lora beYond
Conventional methods, Other Rank adaptation Implementations for Stable
diffusion) [https://github.com/KohakuBlueleaf/LyCORIS], an open-source library
that offers a wide selection of fine-tuning methodologies for Stable Diffusion.
Furthermore, we present a thorough framework for the systematic assessment of
varied fine-tuning techniques. This framework employs a diverse suite of
metrics and delves into multiple facets of fine-tuning, including
hyperparameter adjustments and the evaluation with different prompt types
across various concept categories. Through this comprehensive approach, our
work provides essential insights into the nuanced effects of fine-tuning
parameters, bridging the gap between state-of-the-art research and practical
application. | [
"Shin-Ying Yeh",
"Yu-Guan Hsieh",
"Zhidong Gao",
"Bernard B W Yang",
"Giyeong Oh",
"Yanmin Gong"
] | 2023-09-26 11:36:26 | http://arxiv.org/abs/2309.14859v1 | http://arxiv.org/pdf/2309.14859v1 | 2309.14859v1 |
Cluster Exploration using Informative Manifold Projections | Dimensionality reduction (DR) is one of the key tools for the visual
exploration of high-dimensional data and uncovering its cluster structure in
two- or three-dimensional spaces. The vast majority of DR methods in the
literature do not take into account any prior knowledge a practitioner may have
regarding the dataset under consideration. We propose a novel method to
generate informative embeddings which not only factor out the structure
associated with different kinds of prior knowledge but also aim to reveal any
remaining underlying structure. To achieve this, we employ a linear combination
of two objectives: firstly, contrastive PCA that discounts the structure
associated with the prior information, and secondly, kurtosis projection
pursuit which ensures meaningful data separation in the obtained embeddings. We
formulate this task as a manifold optimization problem and validate it
empirically across a variety of datasets considering three distinct types of
prior knowledge. Lastly, we provide an automated framework to perform iterative
visual exploration of high-dimensional data. | [
"Stavros Gerolymatos",
"Xenophon Evangelopoulos",
"Vladimir Gusev",
"John Y. Goulermas"
] | 2023-09-26 11:35:25 | http://arxiv.org/abs/2309.14857v1 | http://arxiv.org/pdf/2309.14857v1 | 2309.14857v1 |
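One plausible reading of the combined objective in the abstract above is sketched below: a contrastive-PCA term minus a kurtosis projection-pursuit term, exploiting the fact that negative excess kurtosis of a projection indicates bimodal, cluster-like structure. The signs, default weights, and function name are assumptions, not the paper's formulation:

```python
import numpy as np

def informative_direction_objective(w, X_target, X_background, alpha=1.0, beta=1.0):
    """Score a unit direction w: reward variance in the target data, penalize
    variance in the background data (which encodes the prior knowledge to be
    factored out), and reward low excess kurtosis of the target projection
    (a classical projection-pursuit proxy for multimodal cluster structure)."""
    w = w / np.linalg.norm(w)
    z_t, z_b = X_target @ w, X_background @ w
    contrastive = np.var(z_t) - alpha * np.var(z_b)
    z = (z_t - z_t.mean()) / (z_t.std() + 1e-12)
    excess_kurtosis = np.mean(z ** 4) - 3.0
    return contrastive - beta * excess_kurtosis
```

In the paper this kind of objective is optimized over the manifold of orthonormal projection matrices; the sketch only evaluates a single direction.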
Investigation of factors regarding the effects of COVID-19 pandemic on college students' depression by quantum annealer | Diverse findings regarding the impact of the COVID-19
pandemic on mental health, and its related factors, have been reported in previous studies. College
student groups have been frequently selected as the target population in
previous studies because they are easily affected by pandemics. In this study,
multivariable datasets were collected from 751 college students based on the
complex relationships between various mental health factors. We utilized
quantum annealing (QA)-based feature selection algorithms that were executed by
commercial D-Wave quantum computers to determine the changes in the relative
importance of the associated factors before and after the pandemic.
Multivariable linear regression (MLR) and XGBoost models were also applied to
validate the QA-based algorithms. Based on the experimental results, we confirm
that QA-based algorithms have comparable capabilities in factor analysis
research to the MLR models that have been widely used in previous studies.
Furthermore, the performance of the QA-based algorithms was validated through
the important factor results from the algorithms. Pandemic-related factors
(e.g., confidence in the social system) and psychological factors (e.g.,
decision-making in uncertain situations) were more important in post-pandemic
conditions. We believe that our study will serve as a reference for researchers
studying similar topics. | [
"Junggu Choi",
"Kion Kim",
"Soohyun Park",
"Juyoen Hur",
"Hyunjung Yang",
"Younghoon Kim",
"Hakbae Lee",
"Sanghoon Han"
] | 2023-09-26 11:20:24 | http://arxiv.org/abs/2310.00018v1 | http://arxiv.org/pdf/2310.00018v1 | 2310.00018v1 |
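The kind of QUBO (quadratic unconstrained binary optimization) problem that a D-Wave annealer solves for feature selection can be sketched as below; the matrix contents, the `alpha` trade-off, and the brute-force solver standing in for the annealer are all illustrative assumptions:

```python
import numpy as np

def feature_selection_qubo(relevance, redundancy, alpha=1.0):
    """Sketch of QUBO-based feature selection: linear (diagonal) terms reward
    each feature's relevance to the target, quadratic terms penalize pairwise
    feature redundancy. Minimizing x @ Q @ x over binary x selects a relevant
    but non-redundant subset. Exhaustive search replaces the annealer here."""
    n = len(relevance)
    Q = alpha * np.asarray(redundancy) - np.diag(relevance)
    best, best_energy = None, np.inf
    for bits in range(1 << n):  # feasible only for tiny n; annealers scale further
        x = np.array([(bits >> i) & 1 for i in range(n)], dtype=float)
        energy = x @ Q @ x
        if energy < best_energy:
            best, best_energy = x, energy
    return best, best_energy
```

With two highly redundant relevant features and one weakly relevant independent one, the minimum-energy solution keeps only one of the redundant pair plus the independent feature.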
Realtime Motion Generation with Active Perception Using Attention Mechanism for Cooking Robot | To support humans in their daily lives, robots are required to autonomously
learn, adapt to objects and environments, and perform the appropriate actions.
We tackled the task of cooking scrambled eggs using real ingredients, in
which the robot needs to perceive the state of the egg and adjust its stirring
movement in real time while the egg is heated and its state changes
continuously. In previous work, handling changing objects was found to be
challenging because the sensory information is dynamic, containing both
important and noisy components, and the modality that should be focused on
changes over time, making it difficult to realize both perception and motion
generation in real time. We propose a predictive recurrent neural network with
an attention mechanism that weighs the sensor inputs according to how important
and reliable each modality is, realizing quick and efficient perception and
motion generation. The model is trained via learning from demonstration and
allows the robot to acquire human-like skills. We validated the proposed
technique using the robot, Dry-AIREC, and with our learning model, it could
cook eggs with unknown ingredients. The robot could change its stirring method
and direction depending on the state of the egg: at the beginning it stirs
across the whole pot, and then, once the egg starts to heat, it switches to
flipping and splitting motions targeting specific areas,
although we did not explicitly indicate them. | [
"Namiko Saito",
"Mayu Hiramoto",
"Ayuna Kubo",
"Kanata Suzuki",
"Hiroshi Ito",
"Shigeki Sugano",
"Tetsuya Ogata"
] | 2023-09-26 11:05:37 | http://arxiv.org/abs/2309.14837v1 | http://arxiv.org/pdf/2309.14837v1 | 2309.14837v1 |
OS-net: Orbitally Stable Neural Networks | We introduce OS-net (Orbitally Stable neural NETworks), a new family of
neural network architectures specifically designed for periodic dynamical data.
OS-net is a special case of Neural Ordinary Differential Equations (NODEs) and
takes full advantage of the adjoint method based backpropagation method.
Utilizing ODE theory, we derive conditions on the network weights to ensure
stability of the resulting dynamics. We demonstrate the efficacy of our
approach by applying OS-net to discover the dynamics underlying the R\"{o}ssler
and Sprott's systems, two dynamical systems known for their period doubling
attractors and chaotic behavior. | [
"Marieme Ngom",
"Carlo Graziani"
] | 2023-09-26 10:40:04 | http://arxiv.org/abs/2309.14822v1 | http://arxiv.org/pdf/2309.14822v1 | 2309.14822v1 |
A Comparative Study of Population-Graph Construction Methods and Graph Neural Networks for Brain Age Regression | The difference between the chronological and biological brain age of a
subject can be an important biomarker for neurodegenerative diseases, thus
brain age estimation can be crucial in clinical settings. One way to
incorporate multimodal information into this estimation is through population
graphs, which combine various types of imaging data and capture the
associations among individuals within a population. In medical imaging,
population graphs have demonstrated promising results, mostly for
classification tasks. In most cases, the graph structure is pre-defined and
remains static during training. However, extracting population graphs is a
non-trivial task and can significantly impact the performance of Graph Neural
Networks (GNNs), which are sensitive to the graph structure. In this work, we
highlight the importance of a meaningful graph construction and experiment with
different population-graph construction methods and their effect on GNN
performance on brain age estimation. We use the homophily metric and graph
visualizations to gain valuable quantitative and qualitative insights on the
extracted graph structures. For the experimental evaluation, we leverage the UK
Biobank dataset, which offers many imaging and non-imaging phenotypes. Our
results indicate that architectures highly sensitive to the graph structure,
such as Graph Convolutional Network (GCN) and Graph Attention Network (GAT),
struggle with low homophily graphs, while other architectures, such as
GraphSage and Chebyshev, are more robust across different homophily ratios. We
conclude that static graph construction approaches are potentially insufficient
for the task of brain age estimation and make recommendations for alternative
research directions. | [
"Kyriaki-Margarita Bintsi",
"Tamara T. Mueller",
"Sophie Starck",
"Vasileios Baltatzis",
"Alexander Hammers",
"Daniel Rueckert"
] | 2023-09-26 10:30:45 | http://arxiv.org/abs/2309.14816v1 | http://arxiv.org/pdf/2309.14816v1 | 2309.14816v1 |
Revisiting Softmax Masking for Stability in Continual Learning | In continual learning, many classifiers use softmax function to learn
confidence. However, numerous studies have pointed out its inability to
accurately determine confidence distributions for outliers, often referred to
as epistemic uncertainty. This inherent limitation also hinders accurate
decisions about what to forget and what to keep from previously trained
confidence distributions over the continual learning process. To address the issue,
we revisit the effects of masking softmax function. While this method is both
simple and prevalent in literature, its implication for retaining confidence
distribution during continual learning, also known as stability, has been
under-investigated. In this paper, we revisit the impact of softmax masking,
and introduce a methodology to utilize its confidence preservation effects. In
class- and task-incremental learning benchmarks with and without memory replay,
our approach significantly increases stability while maintaining sufficiently
large plasticity. In the end, our methodology shows better overall performance
than state-of-the-art methods, particularly when used with zero or small
memory. This lays a simple and effective foundation for strongly stable
replay-based continual learning. | [
"Hoyong Kim",
"Minchan Kwon",
"Kangil Kim"
] | 2023-09-26 10:06:28 | http://arxiv.org/abs/2309.14808v1 | http://arxiv.org/pdf/2309.14808v1 | 2309.14808v1 |
Evaluating Soccer Match Prediction Models: A Deep Learning Approach and Feature Optimization for Gradient-Boosted Trees | Machine learning models have become increasingly popular for predicting the
results of soccer matches; however, the lack of publicly available benchmark
datasets has made model evaluation challenging. The 2023 Soccer Prediction
Challenge required the prediction of match results first in terms of the exact
goals scored by each team, and second, in terms of the probabilities for a win,
draw, and loss. The original training set of matches and features, which was
provided for the competition, was augmented with additional matches that were
played between 4 April and 13 April 2023, representing the period after which
the training set ended, but prior to the first matches that were to be
predicted (upon which the performance was evaluated). A CatBoost model was
employed using pi-ratings as the features, which were initially identified as
the optimal choice for calculating the win/draw/loss probabilities. Notably,
deep learning models have frequently been disregarded in this particular task.
Therefore, in this study, we aimed to assess the performance of a deep learning
model and determine the optimal feature set for a gradient-boosted tree model.
The model was trained using the most recent five years of data, and three
training and validation sets were used in a hyperparameter grid search. The
results from the validation sets show that our model had strong performance and
stability compared to previously published models from the 2017 Soccer
Prediction Challenge for win/draw/loss prediction. | [
"Calvin Yeung",
"Rory Bunker",
"Rikuhei Umemoto",
"Keisuke Fujii"
] | 2023-09-26 10:05:46 | http://arxiv.org/abs/2309.14807v1 | http://arxiv.org/pdf/2309.14807v1 | 2309.14807v1 |
Transferring climate change knowledge | Accurate climate projections are required for climate adaptation and
mitigation. Earth system model simulations, used to project climate change,
inherently make approximations in their representation of small-scale physical
processes, such as clouds, that are at the root of the uncertainties in global
mean temperature's response to increased greenhouse gas concentrations. Several
approaches have been developed to use historical observations to constrain
future projections and reduce uncertainties in climate projections and climate
feedbacks. Yet those methods cannot capture the non-linear complexity inherent
in the climate system. Using a Transfer Learning approach, we show that Machine
Learning, in particular Deep Neural Networks, can be used to optimally leverage
and merge the knowledge gained from Earth system model simulations and
historical observations to more accurately project global surface temperature
fields in the 21st century. For the Shared Socioeconomic Pathways (SSPs) 2-4.5,
3-7.0 and 5-8.5, we refine regional estimates and the global projection of the
average global temperature in 2081-2098 (with respect to the period 1850-1900)
to 2.73{\deg}C (2.44-3.11{\deg}C), 3.92{\deg}C (3.5-4.47{\deg}C) and
4.53{\deg}C (3.69-5.5{\deg}C), respectively, compared to the unconstrained
2.7{\deg}C (1.65-3.8{\deg}C), 3.71{\deg}C (2.56-4.97{\deg}C) and 4.47{\deg}C
(2.95-6.02{\deg}C). Our findings show that the 1.5{\deg}C threshold of the
Paris Agreement will be crossed in 2031 (2028-2034) for SSP2-4.5, in 2029
(2027-2031) for SSP3-7.0 and in 2028 (2025-2031) for SSP5-8.5. Similarly, the
2{\deg}C threshold will be exceeded in 2051 (2045-2059), 2044 (2040-2047) and
2042 (2038-2047) respectively. Our new method provides more accurate climate
projections urgently required for climate adaptation. | [
"Francesco Immorlano",
"Veronika Eyring",
"Thomas le Monnier de Gouville",
"Gabriele Accarino",
"Donatello Elia",
"Giovanni Aloisio",
"Pierre Gentine"
] | 2023-09-26 09:24:53 | http://arxiv.org/abs/2309.14780v1 | http://arxiv.org/pdf/2309.14780v1 | 2309.14780v1 |
Exploring Small Language Models with Prompt-Learning Paradigm for Efficient Domain-Specific Text Classification | Domain-specific text classification faces the challenge of scarce labeled
data due to the high cost of manual labeling. Prompt-learning, known for its
efficiency in few-shot scenarios, is proposed as an alternative to traditional
fine-tuning methods. Moreover, although large language models (LLMs) have
gained prominence, small language models (SLMs, with under 1B parameters) offer
significant customizability, adaptability, and cost-effectiveness for
domain-specific tasks, given industry constraints. In this study, we
investigate the potential of SLMs combined with prompt-learning paradigm for
domain-specific text classification, specifically within customer-agent
interactions in retail. Our evaluations show that, in few-shot settings when
prompt-based model fine-tuning is possible, T5-base, a typical SLM with 220M
parameters, achieves approximately 75% accuracy with limited labeled data (up to
15% of the full data), which shows the great potential of SLMs with prompt-learning.
Based on this, we further validate the effectiveness of active few-shot
sampling and the ensemble strategy in the prompt-learning pipeline that
contribute to a remarkable performance gain. Besides, in zero-shot settings
with a fixed model, we underscore a pivotal observation that, although the
GPT-3.5-turbo equipped with around 154B parameters garners an accuracy of
55.16%, the power of well-designed prompts becomes evident when the
FLAN-T5-large, a model with a mere 0.5% of GPT-3.5-turbo's parameters, achieves
an accuracy exceeding 31% with the optimized prompt, a leap from its sub-18%
performance with an unoptimized one. Our findings underscore the promise of
prompt-learning in classification tasks with SLMs, emphasizing the benefits of
active few-shot sampling, and ensemble strategies in few-shot settings, and the
importance of prompt engineering in zero-shot settings. | [
"Hengyu Luo",
"Peng Liu",
"Stefan Esping"
] | 2023-09-26 09:24:46 | http://arxiv.org/abs/2309.14779v1 | http://arxiv.org/pdf/2309.14779v1 | 2309.14779v1 |
Markov Chain Mirror Descent On Data Federation | Stochastic optimization methods such as mirror descent have wide applications
due to their low computational cost. These methods have been well studied under
the assumption of independent and identically distributed data, and usually achieve a
sublinear rate of convergence. However, this assumption may be too strong and
impractical in real application scenarios. Recent research investigates
stochastic gradient descent when instances are sampled from a Markov chain.
Unfortunately, few results are known for stochastic mirror descent. In this
paper, we propose a new version of stochastic mirror descent, termed MarchOn,
in the federated learning scenario. Given a distributed network,
model iteratively travels from a node to one of its neighbours randomly.
Furthermore, we propose a new framework to analyze MarchOn, which yields the best
rates of convergence for convex, strongly convex, and non-convex losses. Finally,
we conduct empirical studies to evaluate the convergence of MarchOn, and
validate theoretical results. | [
"Yawei Zhao"
] | 2023-09-26 09:18:55 | http://arxiv.org/abs/2309.14775v1 | http://arxiv.org/pdf/2309.14775v1 | 2309.14775v1 |
BLIP-Adapter: Parameter-Efficient Transfer Learning for Mobile Screenshot Captioning | This study aims to explore efficient tuning methods for the screenshot
captioning task. Recently, image captioning has seen significant advancements,
but research in captioning tasks for mobile screens remains relatively scarce.
Current datasets and use cases describing user behaviors within product
screenshots are notably limited. Consequently, we sought to fine-tune
pre-existing models for the screenshot captioning task. However, fine-tuning
large pre-trained models can be resource-intensive, requiring considerable
time, computational power, and storage due to the vast number of parameters in
image captioning models. To tackle this challenge, this study proposes a
combination of adapter methods, which necessitates tuning only the additional
modules added to the model. These methods were originally designed for vision or
language tasks, and our intention is to apply them to address similar
challenges in screenshot captioning. By freezing the parameters of the image
caption models and training only the weights associated with the methods,
performance comparable to fine-tuning the entire model can be achieved, while
significantly reducing the number of parameters. This study represents the
first comprehensive investigation into the effectiveness of combining adapters
within the context of the screenshot captioning task. Through our experiments
and analyses, this study aims to provide valuable insights into the application
of adapters in vision-language models and contribute to the development of
efficient tuning techniques for the screenshot captioning task. Our study is
available at https://github.com/RainYuGG/BLIP-Adapter | [
"Ching-Yu Chiang",
"I-Hua Chang",
"Shih-Wei Liao"
] | 2023-09-26 09:16:44 | http://arxiv.org/abs/2309.14774v1 | http://arxiv.org/pdf/2309.14774v1 | 2309.14774v1 |
Age Minimization in Massive IoT via UAV Swarm: A Multi-agent Reinforcement Learning Approach | In many massive IoT communication scenarios, the IoT devices require coverage
from dynamic units that can move close to the IoT devices and reduce the uplink
energy consumption. A robust solution is to deploy a large number of UAVs (UAV
swarm) to provide coverage and a better line of sight (LoS) for the IoT
network. However, the study of these massive IoT scenarios with a massive
number of serving units leads to high dimensional problems with high
complexity. In this paper, we apply multi-agent deep reinforcement learning to
address the high-dimensional problem that results from deploying a swarm of
UAVs to collect fresh information from IoT devices. The target is to minimize
the overall age of information in the IoT network. The results reveal that both
cooperative and partially cooperative multi-agent deep reinforcement learning
approaches are able to outperform the high-complexity centralized deep
reinforcement learning approach, which becomes intractable in large-scale networks. | [
"Eslam Eldeeb",
"Mohammad Shehab",
"Hirley Alves"
] | 2023-09-26 08:37:21 | http://arxiv.org/abs/2309.14757v1 | http://arxiv.org/pdf/2309.14757v1 | 2309.14757v1 |
ANNCRIPS: Artificial Neural Networks for Cancer Research In Prediction & Survival | Prostate cancer is a prevalent malignancy among men aged 50 and older.
Current diagnostic methods primarily rely on blood tests measuring
Prostate-Specific Antigen (PSA) levels and Digital Rectal Examinations (DRE). However, these methods
suffer from a significant rate of false positive results. This study focuses on
the development and validation of an intelligent mathematical model utilizing
Artificial Neural Networks (ANNs) to enhance the early detection of prostate
cancer. The primary objective of this research paper is to present a novel
mathematical model designed to aid in the early detection of prostate cancer,
facilitating prompt intervention by healthcare professionals. The model's
implementation demonstrates promising potential in reducing the incidence of
false positives, thereby improving patient outcomes. Furthermore, we envision
that, with further refinement, extensive testing, and validation, this model
can evolve into a robust, marketable solution for prostate cancer detection.
The long-term goal is to make this solution readily available for deployment in
various screening centers, hospitals, and research institutions, ultimately
contributing to more effective cancer screening and patient care. | [
"Amit Mathapati"
] | 2023-09-26 08:11:35 | http://arxiv.org/abs/2309.15803v1 | http://arxiv.org/pdf/2309.15803v1 | 2309.15803v1 |
Effective Multi-Agent Deep Reinforcement Learning Control with Relative Entropy Regularization | In this paper, a novel Multi-agent Reinforcement Learning (MARL) approach,
Multi-Agent Continuous Dynamic Policy Gradient (MACDPP), is proposed to tackle
the issues of limited capability and sample efficiency in various scenarios
controlled by multiple agents. It alleviates the inconsistency of multiple
agents' policy updates by introducing the relative entropy regularization to
the Centralized Training with Decentralized Execution (CTDE) framework with the
Actor-Critic (AC) structure. Evaluated by multi-agent cooperation and
competition tasks and traditional control tasks including OpenAI benchmarks and
robot arm manipulation, MACDPP demonstrates significant superiority in learning
capability and sample efficiency compared with both related multi-agent and
widely implemented single-agent baselines, and therefore expands the potential
of MARL in effectively learning challenging control scenarios. | [
"Chenyang Miao",
"Yunduan Cui",
"Huiyun Li",
"Xinyu Wu"
] | 2023-09-26 07:38:19 | http://arxiv.org/abs/2309.14727v1 | http://arxiv.org/pdf/2309.14727v1 | 2309.14727v1 |
PLMM: Personal Large Models on Mobile Devices | Inspired by Federated Learning, in this paper, we propose personal large
models that are distilled from traditional large language models but more
adaptive to local users' personal information such as education background and
hobbies. We classify the large language models into three levels: the personal
level, expert level and traditional level. The personal level models are
adaptive to users' personal information. They encrypt the users' input and
protect their privacy. The expert level models focus on merging specific
knowledge such as finance, IT and art. The traditional models focus on the
universal knowledge discovery and upgrading the expert models. In such
classifications, the personal models directly interact with the user. For the
whole system, the personal models have users' (encrypted) personal information.
Moreover, such models must be small enough to run on personal
computers or mobile devices. Finally, they also have to respond in real time
for a better user experience and produce high-quality results. The proposed
personal large models can be applied in a wide range of applications such as
language and vision tasks. | [
"Yuanhao Gong"
] | 2023-09-26 07:36:20 | http://arxiv.org/abs/2309.14726v1 | http://arxiv.org/pdf/2309.14726v1 | 2309.14726v1 |
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models | Recent years have witnessed the rapid development of large language models
(LLMs). Despite the strong ability in many language-understanding tasks, the
heavy computational burden largely restricts the application of LLMs especially
when one needs to deploy them onto edge devices. In this paper, we propose a
quantization-aware low-rank adaptation (QA-LoRA) algorithm. The motivation lies
in the imbalanced degrees of freedom of quantization and adaptation, and the
solution is to use group-wise operators which increase the degree of freedom of
quantization while decreasing that of adaptation. QA-LoRA is easily
implemented with a few lines of code, and it equips the original LoRA with
two-fold abilities: (i) during fine-tuning, the LLM's weights are quantized
(e.g., into INT4) to reduce time and memory usage; (ii) after fine-tuning, the
LLM and auxiliary weights are naturally integrated into a quantized model
without loss of accuracy. We apply QA-LoRA to the LLaMA and LLaMA2 model
families and validate its effectiveness in different fine-tuning datasets and
downstream scenarios. Code will be made available at
https://github.com/yuhuixu1993/qa-lora. | [
"Yuhui Xu",
"Lingxi Xie",
"Xiaotao Gu",
"Xin Chen",
"Heng Chang",
"Hengheng Zhang",
"Zhengsu Chen",
"Xiaopeng Zhang",
"Qi Tian"
] | 2023-09-26 07:22:23 | http://arxiv.org/abs/2309.14717v2 | http://arxiv.org/pdf/2309.14717v2 | 2309.14717v2 |
Explaining Deep Face Algorithms through Visualization: A Survey | Although current deep models for face tasks surpass human performance on some
benchmarks, we do not understand how they work. Thus, we cannot predict how they
will react to novel inputs, resulting in catastrophic failures and unwanted
biases in the algorithms. Explainable AI helps bridge the gap, but currently,
there are very few visualization algorithms designed for faces. This work
undertakes a first-of-its-kind meta-analysis of explainability algorithms in
the face domain. We explore the nuances and caveats of adapting general-purpose
visualization algorithms to the face domain, illustrated by computing
visualizations on popular face models. We review existing face explainability
works and reveal valuable insights into the structure and hierarchy of face
networks. We also determine the design considerations for practical face
visualizations accessible to AI practitioners by conducting a user study on the
utility of various explainability algorithms. | [
"Thrupthi Ann John",
"Vineeth N Balasubramanian",
"C. V. Jawahar"
] | 2023-09-26 07:16:39 | http://arxiv.org/abs/2309.14715v1 | http://arxiv.org/pdf/2309.14715v1 | 2309.14715v1 |
From Asset Flow to Status, Action and Intention Discovery: Early Malice Detection in Cryptocurrency | Cryptocurrency has been subject to illicit activities probably more often
than traditional financial assets due to the pseudo-anonymous nature of its
transacting entities. An ideal detection model is expected to achieve all three
critical properties of (I) early detection, (II) good interpretability, and
(III) versatility for various illicit activities. However, existing solutions
cannot meet all these requirements, as most of them heavily rely on deep
learning without interpretability and are only available for retrospective
analysis of a specific illicit type. To tackle all these challenges, we propose
Intention-Monitor for early malice detection in Bitcoin (BTC), where the
on-chain record data for a certain address are much scarcer than other
cryptocurrency platforms. We first define asset transfer paths with the
Decision-Tree based feature Selection and Complement (DT-SC) to build different
feature sets for different malice types. Then, the Status/Action Proposal
Module (S/A-PM) and the Intention-VAE module generate the status, action,
intent-snippet, and hidden intent-snippet embedding. With all these modules,
our model is highly interpretable and can detect various illegal activities.
Moreover, well-designed loss functions further enhance the prediction speed and
model's interpretability. Extensive experiments on three real-world datasets
demonstrate that our proposed algorithm outperforms the state-of-the-art
methods. Furthermore, additional case studies show that our model can not only
explain existing illicit patterns but can also find new suspicious characters. | [
"Ling Cheng",
"Feida Zhu",
"Yong Wang",
"Ruicheng Liang",
"Huiwen Liu"
] | 2023-09-26 07:12:59 | http://arxiv.org/abs/2309.15133v1 | http://arxiv.org/pdf/2309.15133v1 | 2309.15133v1 |
On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks | Artificial neural networks (ANNs) with recurrence and self-attention have
been shown to be Turing-complete (TC). However, existing work has shown that
these ANNs require multiple turns or unbounded computation time, even with
unbounded precision in weights, in order to recognize TC grammars. However,
under constraints such as fixed or bounded precision neurons and time, ANNs
without memory are shown to struggle to recognize even context-free languages.
In this work, we extend the theoretical foundation for the $2^{nd}$-order
recurrent network ($2^{nd}$ RNN) and prove there exists a class of a $2^{nd}$
RNN that is Turing-complete with bounded time. This model is capable of
directly encoding a transition table into its recurrent weights, enabling
bounded time computation and is interpretable by design. We also demonstrate
that $2^{nd}$-order RNNs, without memory, under bounded weights and time
constraints, outperform modern-day models such as vanilla RNNs and gated
recurrent units in recognizing regular grammars. We provide an upper bound and
a stability analysis on the maximum number of neurons required by $2^{nd}$-order
RNNs to recognize any class of regular grammar. Extensive experiments on the
Tomita grammars support our findings, demonstrating the importance of tensor
connections in crafting computationally efficient RNNs. Finally, we show
$2^{nd}$ order RNNs are also interpretable by extraction and can extract state
machines with higher success rates as compared to first-order RNNs. Our results
extend the theoretical foundations of RNNs and offer promising avenues for
future explainable AI research. | [
"Ankur Mali",
"Alexander Ororbia",
"Daniel Kifer",
"Lee Giles"
] | 2023-09-26 06:06:47 | http://arxiv.org/abs/2309.14691v1 | http://arxiv.org/pdf/2309.14691v1 | 2309.14691v1 |
Are Human-generated Demonstrations Necessary for In-context Learning? | Despite the promising few-shot ability of large language models (LLMs), the
standard paradigm of In-context Learning (ICL) suffers from the disadvantages of
susceptibility to the selected demonstrations and the intricacy of generating these
demonstrations. In this paper, we raise the fundamental question of whether
human-generated demonstrations are necessary for ICL. To answer this question,
we propose self-contemplation prompting strategy (SEC), a paradigm free from
human-crafted demonstrations. The key point of SEC is that, instead of using
hand-crafted examples as demonstrations in ICL, SEC asks LLMs to first create
demonstrations on their own, based on which the final output is generated. SEC
is a flexible framework and can be adapted to both the vanilla ICL and the
chain-of-thought (CoT), but with greater ease, as the manual generation
of both examples and rationales can be skipped. Extensive experiments on
arithmetic reasoning, commonsense reasoning, multi-task language understanding,
and code generation benchmarks show that SEC, which does not require
hand-crafted demonstrations, significantly outperforms the zero-shot learning
strategy, and achieves comparable results to ICL with hand-crafted
demonstrations. This demonstrates that, for many tasks, contemporary LLMs
possess a sufficient level of competence to exclusively depend on their own
capacity for decision making, removing the need for external training data.
Code is available at https://github.com/ruili33/SEC. | [
"Rui Li",
"Guoyin Wang",
"Jiwei Li"
] | 2023-09-26 05:10:08 | http://arxiv.org/abs/2309.14681v2 | http://arxiv.org/pdf/2309.14681v2 | 2309.14681v2 |
FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices using a Computing Power Aware Scheduler | Cross-silo federated learning offers a promising solution to collaboratively
train robust and generalized AI models without compromising the privacy of
local datasets, e.g., in healthcare and finance, as well as in scientific projects
that lack a centralized data facility. Nonetheless, because of the disparity of
computing resources among different clients (i.e., device heterogeneity),
synchronous federated learning algorithms suffer from degraded efficiency when
waiting for straggler clients. Similarly, asynchronous federated learning
algorithms experience degradation in the convergence rate and final model
accuracy on non-identically and independently distributed (non-IID)
heterogeneous datasets due to stale local models and client drift. To address
these limitations in cross-silo federated learning with heterogeneous clients
and data, we propose FedCompass, an innovative semi-asynchronous federated
learning algorithm with a computing power aware scheduler on the server side,
which adaptively assigns varying amounts of training tasks to different clients
using the knowledge of the computing power of individual clients. FedCompass
ensures that multiple locally trained models from clients are received almost
simultaneously as a group for aggregation, effectively reducing the staleness
of local models. At the same time, the overall training process remains
asynchronous, eliminating prolonged waiting periods from straggler clients.
Using diverse non-IID heterogeneous distributed datasets, we demonstrate that
FedCompass achieves faster convergence and higher accuracy than other
asynchronous algorithms while remaining more efficient than synchronous
algorithms when performing federated learning on heterogeneous clients. | [
"Zilinghan Li",
"Pranshu Chaturvedi",
"Shilan He",
"Han Chen",
"Gagandeep Singh",
"Volodymyr Kindratenko",
"E. A. Huerta",
"Kibaek Kim",
"Ravi Madduri"
] | 2023-09-26 05:03:13 | http://arxiv.org/abs/2309.14675v1 | http://arxiv.org/pdf/2309.14675v1 | 2309.14675v1 |
Leveraging Herpangina Data to Enhance Hospital-level Prediction of Hand-Foot-and-Mouth Disease Admissions Using UPTST | Outbreaks of hand-foot-and-mouth disease(HFMD) have been associated with
significant morbidity and, in severe cases, mortality. Accurate forecasting of
daily admissions of pediatric HFMD patients is therefore crucial for aiding the
hospital in preparing for potential outbreaks and mitigating nosocomial
transmissions. To address this pressing need, we propose a novel
transformer-based model with a U-net shape, utilizing the patching strategy and
the joint prediction strategy that capitalizes on insights from herpangina, a
disease closely correlated with HFMD. This model also integrates representation
learning by introducing reconstruction loss as an auxiliary loss. The results
show that our U-net Patching Time Series Transformer (UPTST) model outperforms
existing approaches in both long- and short-term prediction accuracy of HFMD at
hospital-level. Furthermore, the exploratory extension experiments show that
the model's capabilities extend beyond prediction of infectious disease,
suggesting broader applicability in various domains. | [
"Guoqi Yu",
"Hailun Yao",
"Huan Zheng",
"Ximing Xu"
] | 2023-09-26 05:01:07 | http://arxiv.org/abs/2309.14674v2 | http://arxiv.org/pdf/2309.14674v2 | 2309.14674v2 |
ALEX: Towards Effective Graph Transfer Learning with Noisy Labels | Graph Neural Networks (GNNs) have garnered considerable interest due to their
exceptional performance in a wide range of graph machine learning tasks.
Nevertheless, the majority of GNN-based approaches have been examined using
well-annotated benchmark datasets, leading to suboptimal performance in
real-world graph learning scenarios. To bridge this gap, the present paper
investigates the problem of graph transfer learning in the presence of label
noise, which transfers knowledge from a noisy source graph to an unlabeled
target graph. We introduce a novel technique termed Balance Alignment and
Information-aware Examination (ALEX) to address this challenge. ALEX first
employs singular value decomposition to generate different views with crucial
structural semantics, which help provide robust node representations using
graph contrastive learning. To mitigate both label shift and domain shift, we
estimate a prior distribution to build subgraphs with balanced label
distributions. Building on this foundation, an adversarial domain discriminator
is incorporated for the implicit domain alignment of complex multi-modal
distributions. Furthermore, we project node representations into a different
space, optimizing the mutual information between the projected features and
labels. Subsequently, the inconsistency of similarity structures is evaluated
to identify noisy samples with potential overfitting. Comprehensive experiments
on various benchmark datasets substantiate the outstanding superiority of the
proposed ALEX in different settings. | [
"Jingyang Yuan",
"Xiao Luo",
"Yifang Qin",
"Zhengyang Mao",
"Wei Ju",
"Ming Zhang"
] | 2023-09-26 04:59:49 | http://arxiv.org/abs/2309.14673v1 | http://arxiv.org/pdf/2309.14673v1 | 2309.14673v1 |
DONNAv2 -- Lightweight Neural Architecture Search for Vision tasks | With the growing demand for vision applications and deployment across edge
devices, the development of hardware-friendly architectures that maintain
performance during device deployment becomes crucial. Neural architecture
search (NAS) techniques explore various approaches to discover efficient
architectures for diverse learning tasks in a computationally efficient manner.
In this paper, we present the next-generation neural architecture design for
computationally efficient neural architecture distillation, DONNAv2.
Conventional NAS algorithms rely on a computationally extensive stage where an
accuracy predictor is learned to estimate model performance within search
space. This building of accuracy predictors helps them predict the performance
of models that are not being finetuned. Here, we have developed an elegant
approach to eliminate building the accuracy predictor and extend DONNA to a
computationally efficient setting. The loss metric of individual blocks forming
the network serves as the surrogate performance measure for the sampled models
in the NAS search stage. To validate the performance of DONNAv2 we have
performed extensive experiments involving a range of diverse vision tasks
including classification, object detection, image denoising, super-resolution,
and panoptic perception network (YOLOP). The hardware-in-the-loop experiments
were carried out using the Samsung Galaxy S10 mobile platform. Notably, DONNAv2
reduces the computational cost of DONNA by 10x for the larger datasets.
Furthermore, to improve the quality of NAS search space, DONNAv2 leverages a
block knowledge distillation filter to remove blocks with high inference costs. | [
"Sweta Priyadarshi",
"Tianyu Jiang",
"Hsin-Pai Cheng",
"Sendil Krishna",
"Viswanath Ganapathy",
"Chirag Patel"
] | 2023-09-26 04:48:50 | http://arxiv.org/abs/2309.14670v1 | http://arxiv.org/pdf/2309.14670v1 | 2309.14670v1 |
ZiCo-BC: A Bias Corrected Zero-Shot NAS for Vision Tasks | Zero-Shot Neural Architecture Search (NAS) approaches propose novel
training-free metrics called zero-shot proxies to substantially reduce the
search time compared to the traditional training-based NAS. Despite the success
on image classification, the effectiveness of zero-shot proxies is rarely
evaluated on complex vision tasks such as semantic segmentation and object
detection. Moreover, existing zero-shot proxies are shown to be biased towards
certain model characteristics which restricts their broad applicability. In
this paper, we empirically study the bias of state-of-the-art (SOTA) zero-shot
proxy ZiCo across multiple vision tasks and observe that ZiCo is biased towards
thinner and deeper networks, leading to sub-optimal architectures. To solve the
problem, we propose a novel bias correction on ZiCo, called ZiCo-BC. Our
extensive experiments across various vision tasks (image classification, object
detection and semantic segmentation) show that our approach can successfully
search for architectures with higher accuracy and significantly lower latency
on Samsung Galaxy S10 devices. | [
"Kartikeya Bhardwaj",
"Hsin-Pai Cheng",
"Sweta Priyadarshi",
"Zhuojin Li"
] | 2023-09-26 04:44:40 | http://arxiv.org/abs/2309.14666v1 | http://arxiv.org/pdf/2309.14666v1 | 2309.14666v1 |
Transformer-based classification of user queries for medical consultancy with respect to expert specialization | The need for skilled medical support is growing in the era of digital
healthcare. This research presents an innovative strategy, utilizing the RuBERT
model, for categorizing user inquiries in the field of medical consultation
with a focus on expert specialization. By harnessing the capabilities of
transformers, we fine-tuned the pre-trained RuBERT model on a varied dataset,
which facilitates precise correspondence between queries and particular medical
specialisms. Using a comprehensive dataset, we have demonstrated our approach's
superior performance with an F1-score of over 92%, calculated through both
cross-validation and the traditional split of test and train datasets. Our
approach has shown excellent generalization across medical domains such as
cardiology, neurology and dermatology. This methodology provides practical
benefits by directing users to appropriate specialists for prompt and targeted
medical advice. It also enhances healthcare system efficiency, reduces
practitioner burden, and improves patient care quality. In summary, our
suggested strategy facilitates the attainment of specific medical knowledge,
offering prompt and precise advice within the digital healthcare field. | [
"Dmitry Lyutkin",
"Andrey Soloviev",
"Dmitry Zhukov",
"Denis Pozdnyakov",
"Muhammad Shahid Iqbal Malik",
"Dmitry I. Ignatov"
] | 2023-09-26 04:36:12 | http://arxiv.org/abs/2309.14662v2 | http://arxiv.org/pdf/2309.14662v2 | 2309.14662v2 |
Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies | Genome-wide association studies (GWAS) are used to identify relationships
between genetic variations and specific traits. When applied to
high-dimensional medical imaging data, a key step is to extract
lower-dimensional, yet informative representations of the data as traits.
Representation learning for imaging genetics is largely under-explored due to
the unique challenges posed by GWAS in comparison to typical visual
representation learning. In this study, we tackle this problem from the mutual
information (MI) perspective by identifying key limitations of existing
methods. We introduce a trans-modal learning framework Genetic InfoMax (GIM),
including a regularized MI estimator and a novel genetics-informed transformer
to address the specific challenges of GWAS. We evaluate GIM on human brain 3D
MRI data and establish standardized evaluation protocols to compare it to
existing approaches. Our results demonstrate the effectiveness of GIM and a
significantly improved performance on GWAS. | [
"Yaochen Xie",
"Ziqian Xie",
"Sheikh Muhammad Saiful Islam",
"Degui Zhi",
"Shuiwang Ji"
] | 2023-09-26 03:59:21 | http://arxiv.org/abs/2309.15132v1 | http://arxiv.org/pdf/2309.15132v1 | 2309.15132v1 |
Learning the Uncertainty Sets for Control Dynamics via Set Membership: A Non-Asymptotic Analysis | Set-membership estimation is commonly used in adaptive/learning-based control
algorithms that require robustness over the model uncertainty sets, e.g.,
online robustly stabilizing control and robust adaptive model predictive
control. Despite having broad applications, non-asymptotic estimation error
bounds in the stochastic setting are limited. This paper provides such a
non-asymptotic bound on the diameter of the uncertainty sets generated by set
membership estimation on linear dynamical systems under bounded, i.i.d.
disturbances. Further, this result is applied to robust adaptive model
predictive control with uncertainty sets updated by set membership. We
numerically demonstrate the performance of the robust adaptive controller,
which rapidly approaches the performance of the offline optimal model
predictive controller, in comparison with the control design based on least
squares estimation's confidence regions.
"Yingying Li",
"Jing Yu",
"Lauren Conger",
"Adam Wierman"
] | 2023-09-26 03:58:06 | http://arxiv.org/abs/2309.14648v1 | http://arxiv.org/pdf/2309.14648v1 | 2309.14648v1 |
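The core set-membership mechanism behind the paper's diameter bound can be illustrated with a toy scalar system (a hypothetical sketch for intuition, not the paper's algorithm): each observed transition of x_{t+1} = a*x_t + w_t with bounded noise constrains the unknown parameter a to an interval, and intersecting these constraints shrinks the diameter of the uncertainty set.

```python
import random

def set_membership_interval(a_true, w_bound, x0, horizon, seed=0):
    """Set-membership estimation for a scalar system x_{t+1} = a*x_t + w_t with
    |w_t| <= w_bound. Each observed transition implies a lies in the interval
    [(x_{t+1} - w_bound)/x_t, (x_{t+1} + w_bound)/x_t]; intersecting all such
    constraints shrinks the diameter of the uncertainty set containing a."""
    rng = random.Random(seed)
    lo, hi = -10.0, 10.0  # initial uncertainty set for the unknown parameter a
    x = x0
    for _ in range(horizon):
        x_next = a_true * x + rng.uniform(-w_bound, w_bound)
        if x != 0.0:
            c_lo = (x_next - w_bound) / x
            c_hi = (x_next + w_bound) / x
            if c_lo > c_hi:  # dividing by a negative x flips the interval
                c_lo, c_hi = c_hi, c_lo
            lo, hi = max(lo, c_lo), min(hi, c_hi)
        x = x_next
    return lo, hi
```

The true parameter is contained in every per-step interval, so it always survives the intersection; the interesting (non-asymptotic) question the paper answers is how fast the interval's diameter shrinks under i.i.d. bounded disturbances.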
Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents | In recent years, deep reinforcement learning (Deep RL) has been successfully
implemented as a smart agent in many systems such as complex games,
self-driving cars, and chat-bots. One of the interesting use cases of Deep RL
is its application as an automated stock trading agent. In general, any
automated trading agent is prone to manipulations by adversaries in the trading
environment. Thus studying their robustness is vital for their success in
practice. However, the typical mechanism for studying RL robustness, which is based on
white-box gradient-based adversarial sample generation techniques (like FGSM),
is obsolete for this use case, since the models are protected behind secure
international exchange APIs, such as NASDAQ. In this research, we demonstrate
that a "gray-box" approach for attacking a Deep RL-based trading agent is
possible by trading in the same stock market, with no extra access to the
trading agent. In our proposed approach, an adversary agent uses a hybrid Deep
Neural Network as its policy consisting of Convolutional layers and
fully-connected layers. On average, over three simulated trading market
configurations, the adversary policy proposed in this research is able to
reduce the reward values by 214.17%, which results in reducing the potential
profits of the baseline by 139.4%, ensemble method by 93.7%, and an automated
trading software developed by our industrial partner by 85.5%, while consuming
significantly less budget than the victims (427.77%, 187.16%, and 66.97%,
respectively). | [
"Foozhan Ataiefard",
"Hadi Hemmati"
] | 2023-09-26 02:07:26 | http://arxiv.org/abs/2309.14615v1 | http://arxiv.org/pdf/2309.14615v1 | 2309.14615v1 |
Reparameterized Variational Rejection Sampling | Traditional approaches to variational inference rely on parametric families
of variational distributions, with the choice of family playing a critical role
in determining the accuracy of the resulting posterior approximation. Simple
mean-field families often lead to poor approximations, while rich families of
distributions like normalizing flows can be difficult to optimize and usually
do not incorporate the known structure of the target distribution due to their
black-box nature. To expand the space of flexible variational families, we
revisit Variational Rejection Sampling (VRS) [Grover et al., 2018], which
combines a parametric proposal distribution with rejection sampling to define a
rich non-parametric family of distributions that explicitly utilizes the known
target distribution. By introducing a low-variance reparameterized gradient
estimator for the parameters of the proposal distribution, we make VRS an
attractive inference strategy for models with continuous latent variables. We
argue theoretically and demonstrate empirically that the resulting
method--Reparameterized Variational Rejection Sampling (RVRS)--offers an
attractive trade-off between computational cost and inference fidelity. In
experiments we show that our method performs well in practice and that it is
well-suited for black-box inference, especially for models with local latent
variables. | [
"Martin Jankowiak",
"Du Phan"
] | 2023-09-26 01:46:53 | http://arxiv.org/abs/2309.14612v1 | http://arxiv.org/pdf/2309.14612v1 | 2309.14612v1 |
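The accept/reject step that VRS builds its non-parametric variational family on can be sketched in a few lines (a plain, stdlib-only illustration of rejection sampling, not the reparameterized gradient estimator itself; `log_m` is an assumed bound on log p − log q):

```python
import math
import random

def rejection_sample(log_p, log_q, sample_q, log_m, n, seed=0):
    """Draw n samples from the (unnormalized) density exp(log_p) by proposing
    from q and accepting with probability exp(log_p - log_q - log_m); VRS wraps
    this accept/reject step into a flexible family of variational distributions."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        z = sample_q(rng)
        log_accept = log_p(z) - log_q(z) - log_m  # <= 0 when M is a valid bound
        if rng.random() < math.exp(log_accept):
            out.append(z)
    return out
```

For a standard-normal target with a uniform proposal on [−5, 5], the acceptance probability at z works out to exp(−z²/2), and the accepted draws follow the target; RVRS's contribution is making the parameters of the proposal trainable with a low-variance reparameterized gradient.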
Unsupervised Graph Deep Learning Reveals Emergent Flood Risk Profile of Urban Areas | Urban flood risk emerges from complex and nonlinear interactions among
multiple features related to flood hazard, flood exposure, and social and
physical vulnerabilities, along with the complex spatial flood dependence
relationships. Existing approaches for characterizing urban flood risk,
however, are primarily based on flood plain maps, focusing on a limited number
of features, primarily hazard and exposure features, without consideration of
feature interactions or the dependence relationships among spatial areas. To
address this gap, this study presents an integrated urban flood-risk rating
model based on a novel unsupervised graph deep learning model (called
FloodRisk-Net). FloodRisk-Net is capable of capturing spatial dependence among
areas and complex and nonlinear interactions among flood hazards and urban
features for specifying emergent flood risk. Using data from multiple
metropolitan statistical areas (MSAs) in the United States, the model
characterizes their flood risk into six distinct city-specific levels. The
model is interpretable and enables feature analysis of areas within each
flood-risk level, allowing for the identification of the three archetypes
shaping the highest flood risk within each MSA. Flood risk is found to be
spatially distributed in a hierarchical structure within each MSA, where the
core city disproportionately bears the highest flood risk. Multiple cities are
found to have high overall flood-risk levels and low spatial inequality,
indicating limited options for balancing urban development and flood-risk
reduction. Relevant flood-risk reduction strategies are discussed considering
ways that the highest flood risk and uneven spatial distribution of flood risk
are formed. | [
"Kai Yin",
"Ali Mostafavi"
] | 2023-09-26 01:40:36 | http://arxiv.org/abs/2309.14610v2 | http://arxiv.org/pdf/2309.14610v2 | 2309.14610v2 |
Neuro-Visualizer: An Auto-encoder-based Loss Landscape Visualization Method | In recent years, there has been a growing interest in visualizing the loss
landscape of neural networks. Linear landscape visualization methods, such as
principal component analysis, have become widely used as they intuitively help
researchers study neural networks and their training process. However, these
linear methods suffer from limitations and drawbacks due to their lack of
flexibility and low fidelity at representing the high dimensional landscape. In
this paper, we present a novel auto-encoder-based non-linear landscape
visualization method called Neuro-Visualizer that addresses these shortcomings
and provides useful insights about neural network loss landscapes. To
demonstrate its potential, we run experiments on a variety of problems in two
separate applications of knowledge-guided machine learning (KGML). Our findings
show that Neuro-Visualizer outperforms other linear and non-linear baselines
and helps corroborate, and sometimes challenge, claims proposed by the machine
learning community. All code and data used in the experiments of this paper are
available at an anonymous link
https://anonymous.4open.science/r/NeuroVisualizer-FDD6 | [
"Mohannad Elhamod",
"Anuj Karpatne"
] | 2023-09-26 01:10:16 | http://arxiv.org/abs/2309.14601v1 | http://arxiv.org/pdf/2309.14601v1 | 2309.14601v1 |
Policy Optimization in a Noisy Neighborhood: On Return Landscapes in Continuous Control | Deep reinforcement learning agents for continuous control are known to
exhibit significant instability in their performance over time. In this work,
we provide a fresh perspective on these behaviors by studying the return
landscape: the mapping between a policy and a return. We find that popular
algorithms traverse noisy neighborhoods of this landscape, in which a single
update to the policy parameters leads to a wide range of returns. By taking a
distributional view of these returns, we map the landscape, characterizing
failure-prone regions of policy space and revealing a hidden dimension of
policy quality. We show that the landscape exhibits surprising structure by
finding simple paths in parameter space which improve the stability of a
policy. To conclude, we develop a distribution-aware procedure which finds such
paths, navigating away from noisy neighborhoods in order to improve the
robustness of a policy. Taken together, our results provide new insight into
the optimization, evaluation, and design of agents. | [
"Nate Rahn",
"Pierluca D'Oro",
"Harley Wiltzer",
"Pierre-Luc Bacon",
"Marc G. Bellemare"
] | 2023-09-26 01:03:54 | http://arxiv.org/abs/2309.14597v1 | http://arxiv.org/pdf/2309.14597v1 | 2309.14597v1 |
Efficient Post-training Quantization with FP8 Formats | Recent advances in deep learning methods such as LLMs and Diffusion models
have created a need for improved quantization methods that can meet the
computational demands of these modern architectures while maintaining accuracy.
Towards this goal, we study the advantages of FP8 data formats for
post-training quantization across 75 unique network architectures covering a
wide range of tasks, including machine translation, language modeling, text
generation, image classification, generation, and segmentation. We examine
three different FP8 representations (E5M2, E4M3, and E3M4) to study the effects
of varying degrees of trade-off between dynamic range and precision on model
accuracy. Based on our extensive study, we developed a quantization workflow
that generalizes across different network architectures. Our empirical results
show that FP8 formats outperform INT8 in multiple aspects, including workload
coverage (92.64% vs. 65.87%), model accuracy and suitability for a broader
range of operations. Furthermore, our findings suggest that E4M3 is better
suited for NLP models, whereas E3M4 performs marginally better than E4M3 on
computer vision tasks. The code is publicly available on Intel Neural
Compressor: https://github.com/intel/neural-compressor. | [
"Haihao Shen",
"Naveen Mellempudi",
"Xin He",
"Qun Gao",
"Chang Wang",
"Mengni Wang"
] | 2023-09-26 00:58:36 | http://arxiv.org/abs/2309.14592v1 | http://arxiv.org/pdf/2309.14592v1 | 2309.14592v1 |
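The dynamic-range versus precision trade-off between E5M2, E4M3, and E3M4 can be made concrete with a simplified round-to-nearest quantizer (a sketch that ignores subnormals and the special NaN/Inf encodings, so its largest finite values differ slightly from the OCP FP8 formats):

```python
import math

def quantize_fp(v, e_bits, m_bits):
    """Round v to the nearest value representable with a sign bit, e_bits of
    exponent, and m_bits of mantissa (E5M2 -> e_bits=5, m_bits=2, etc.).
    Simplified: no subnormals and no NaN/Inf encodings."""
    if v == 0.0:
        return 0.0
    bias = 2 ** (e_bits - 1) - 1
    sign = -1.0 if v < 0 else 1.0
    e = math.floor(math.log2(abs(v)))
    e = max(min(e, bias), 1 - bias)   # clamp exponent to the normal range
    step = 2.0 ** (e - m_bits)        # spacing of representable values here
    q = round(abs(v) / step) * step
    max_val = (2.0 - 2.0 ** (-m_bits)) * 2.0 ** bias
    return sign * min(q, max_val)
```

With more exponent bits (E5M2) large values saturate later, but nearby values round more coarsely than under E4M3 or E3M4, which is exactly the trade-off the study measures across tasks.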
Applications of Sequential Learning for Medical Image Classification | Purpose: The aim of this work is to develop a neural network training
framework for continual training of small amounts of medical imaging data and
create heuristics to assess training in the absence of a hold-out validation or
test set.
Materials and Methods: We formulated a retrospective sequential learning
approach that would train and consistently update a model on mini-batches of
medical images over time. We address problems that impede sequential learning
such as overfitting, catastrophic forgetting, and concept drift through PyTorch
convolutional neural networks (CNN) and publicly available Medical MNIST and
NIH Chest X-Ray imaging datasets. We begin by comparing two methods for a
sequentially trained CNN with and without base pre-training. We then transition
to two methods of unique training and validation data recruitment to estimate
full information extraction without overfitting. Lastly, we consider an example
of real-life data that shows how our approach would see mainstream research
implementation.
Results: For the first experiment, both approaches successfully reach a ~95%
accuracy threshold, although the short pre-training step enables sequential
accuracy to plateau in fewer steps. The second experiment comparing two methods
showed better performance with the second method which crosses the ~90%
accuracy threshold much sooner. The final experiment showed a slight advantage
with a pre-training step that allows the CNN to cross ~60% threshold much
sooner than without pre-training.
Conclusion: We have displayed sequential learning as a serviceable
multi-classification technique statistically comparable to traditional CNNs
that can acquire data in small increments feasible for clinically realistic
scenarios. | [
"Sohaib Naim",
"Brian Caffo",
"Haris I Sair",
"Craig K Jones"
] | 2023-09-26 00:46:25 | http://arxiv.org/abs/2309.14591v1 | http://arxiv.org/pdf/2309.14591v1 | 2309.14591v1 |
Joint Communication and Computation Framework for Goal-Oriented Semantic Communication with Distortion Rate Resilience | Recent research efforts on semantic communication have mostly considered
accuracy as a main problem for optimizing goal-oriented communication systems.
However, these approaches introduce a paradox: the accuracy of artificial
intelligence (AI) tasks should naturally emerge through training rather than
being dictated by network constraints. Acknowledging this dilemma, this work
introduces an innovative approach that leverages the rate-distortion theory to
analyze distortions induced by communication and semantic compression, thereby
analyzing the learning process. Specifically, we examine the distribution shift
between the original data and the distorted data, thus assessing its impact on
the AI model's performance. Building upon this analysis, we can preemptively
estimate the empirical accuracy of AI tasks, making the goal-oriented semantic
communication problem feasible. To achieve this objective, we present the
theoretical foundation of our approach, accompanied by simulations and
experiments that demonstrate its effectiveness. The experimental results
indicate that our proposed method enables accurate AI task performance while
adhering to network constraints, establishing it as a valuable contribution to
the field of signal processing. Furthermore, this work advances research in
goal-oriented semantic communication and highlights the significance of
data-driven approaches in optimizing the performance of intelligent systems. | [
"Minh-Duong Nguyen",
"Quang-Vinh Do",
"Zhaohui Yang",
"Quoc-Viet Pham",
"Won-Joo Hwang"
] | 2023-09-26 00:26:29 | http://arxiv.org/abs/2309.14587v1 | http://arxiv.org/pdf/2309.14587v1 | 2309.14587v1 |
DifAttack: Query-Efficient Black-Box Attack via Disentangled Feature Space | This work investigates efficient score-based black-box adversarial attacks
with a high Attack Success Rate (ASR) and good generalizability. We design a
novel attack method based on a Disentangled Feature space, called DifAttack,
which differs significantly from the existing ones operating over the entire
feature space. Specifically, DifAttack firstly disentangles an image's latent
feature into an adversarial feature and a visual feature, where the former
dominates the adversarial capability of an image, while the latter largely
determines its visual appearance. We train an autoencoder for the
disentanglement by using pairs of clean images and their Adversarial Examples
(AEs) generated from available surrogate models via white-box attack methods.
Eventually, DifAttack iteratively optimizes the adversarial feature according
to the query feedback from the victim model until a successful AE is generated,
while keeping the visual feature unaltered. In addition, due to the avoidance
of using surrogate models' gradient information when optimizing AEs for
black-box models, our proposed DifAttack inherently possesses better attack
capability in the open-set scenario, where the training dataset of the victim
model is unknown. Extensive experimental results demonstrate that our method
achieves significant improvements in ASR and query efficiency simultaneously,
especially in the targeted attack and open-set scenarios. The code will be
available at https://github.com/csjunjun/DifAttack.git soon. | [
"Liu Jun",
"Zhou Jiantao",
"Zeng Jiandian",
"Jinyu Tian"
] | 2023-09-26 00:15:13 | http://arxiv.org/abs/2309.14585v1 | http://arxiv.org/pdf/2309.14585v1 | 2309.14585v1 |
CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss | This paper considers contrastive training for cross-modal 0-shot transfer
wherein a pre-trained model in one modality is used for representation learning
in another domain using pairwise data. The learnt models in the latter domain
can then be used for a diverse set of tasks in a zero-shot way, similar to
``Contrastive Language-Image Pre-training (CLIP)'' and ``Locked-image Tuning
(LiT)'' that have recently gained considerable attention. Most existing works
for cross-modal representation alignment (including CLIP and LiT) use the
standard contrastive training objective, which employs sets of positive and
negative examples to align similar and repel dissimilar training data samples.
However, similarity amongst training examples has a more continuous nature,
thus calling for a more `non-binary' treatment. To address this, we propose a
novel loss function called Continuously Weighted Contrastive Loss (CWCL) that
employs a continuous measure of similarity. With CWCL, we seek to align the
embedding space of one modality with another. Owing to the continuous nature of
similarity in the proposed loss function, these models outperform existing
methods for 0-shot transfer across multiple models, datasets and modalities.
Particularly, we consider the modality pairs of image-text and speech-text and
our models achieve 5-8% (absolute) improvement over previous state-of-the-art
methods in 0-shot image classification and 20-30% (absolute) improvement in
0-shot speech-to-intent classification and keyword classification. | [
"Rakshith Sharma Srinivasa",
"Jaejin Cho",
"Chouchang Yang",
"Yashas Malur Saidutta",
"Ching-Hua Lee",
"Yilin Shen",
"Hongxia Jin"
] | 2023-09-26 00:03:25 | http://arxiv.org/abs/2309.14580v1 | http://arxiv.org/pdf/2309.14580v1 | 2309.14580v1 |
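The move from binary positives/negatives to continuous pairing weights can be sketched as follows (a hypothetical stdlib-only illustration consistent with the abstract, not the authors' implementation): with an identity weight matrix the loss reduces to the standard contrastive objective, while continuous weights let every cross-modal pair contribute in proportion to its similarity.

```python
import math

def cwcl(emb_a, emb_b, weights, temp=0.1):
    """Continuously Weighted Contrastive Loss (sketch). emb_a/emb_b hold
    unit-norm embeddings from the two modalities; weights[i][j] in [0, 1] is a
    continuous similarity between sample i of modality A and sample j of
    modality B. The identity weight matrix recovers the usual binary loss."""
    n = len(emb_a)
    total = 0.0
    for i in range(n):
        logits = [sum(x * y for x, y in zip(emb_a[i], emb_b[j])) / temp
                  for j in range(n)]
        m = max(logits)  # log-sum-exp with max-shift for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        w_sum = sum(weights[i])
        # every pair contributes a cross-entropy term scaled by its similarity
        total += -sum(w * (l - log_z)
                      for w, l in zip(weights[i], logits)) / w_sum
    return total / n
```

Perfectly aligned embeddings with identity weights give a near-zero loss, while weights that favor the wrong pairings drive the loss up, mirroring how the continuous weighting steers the alignment of the two embedding spaces.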
Understanding the Structure of QM7b and QM9 Quantum Mechanical Datasets Using Unsupervised Learning | This paper explores the internal structure of two quantum mechanics datasets
(QM7b, QM9), composed of several thousands of organic molecules and described
in terms of electronic properties. Understanding the structure and
characteristics of this kind of data is important when predicting the atomic
composition from the properties in inverse molecular designs. Intrinsic
dimension analysis, clustering, and outlier detection methods were used in the
study. They revealed that for both datasets the intrinsic dimensionality is
several times smaller than the descriptive dimensions. The QM7b data is
composed of well defined clusters related to atomic composition. The QM9 data
consists of an outer region predominantly composed of outliers, and an inner
core region that concentrates clustered, inliner objects. A significant
relationship exists between the number of atoms in the molecule and its
outlier/inner nature. Despite the structural differences, the predictability of
variables of interest for inverse molecular design is high. This is exemplified
with models estimating the number of atoms of the molecule from both the
original properties, and from lower dimensional embedding spaces. | [
"Julio J. Valdés",
"Alain B. Tchagang"
] | 2023-09-25 23:06:32 | http://arxiv.org/abs/2309.15130v1 | http://arxiv.org/pdf/2309.15130v1 | 2309.15130v1 |
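The kind of intrinsic-dimension analysis the abstract refers to can be sketched with a Two-NN style estimator (a generic illustration, not the authors' exact tooling), which infers dimensionality from the ratio of each point's two nearest-neighbor distances:

```python
import math
import random  # used below only to generate demo points

def twonn_dimension(points):
    """Two-NN intrinsic dimension estimate: for each point take the distances
    r1, r2 to its two nearest neighbors; under a locally uniform density the
    ratios r2/r1 are Pareto-distributed with shape equal to the intrinsic
    dimension, giving the maximum-likelihood estimate below."""
    log_mu_sum = 0.0
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q)
                       for j, q in enumerate(points) if j != i)
        log_mu_sum += math.log(dists[1] / dists[0])
    return len(points) / log_mu_sum
```

On a few hundred points drawn uniformly in the unit square, the estimate comes out near 2 regardless of how many descriptive columns the data table carries, which is the sense in which QM7b/QM9's intrinsic dimensionality can be several times smaller than their descriptive dimension.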
Integrating Higher-Order Dynamics and Roadway-Compliance into Constrained ILQR-based Trajectory Planning for Autonomous Vehicles | This paper addresses the advancements in on-road trajectory planning for
Autonomous Passenger Vehicles (APV). Trajectory planning aims to produce a
globally optimal route for APVs, considering various factors such as vehicle
dynamics, constraints, and detected obstacles. Traditional techniques involve a
combination of sampling methods followed by optimization algorithms, where the
former ensures global awareness and the latter refines for local optima.
Notably, the Constrained Iterative Linear Quadratic Regulator (CILQR)
optimization algorithm has recently emerged, adapted for APV systems,
emphasizing improved safety and comfort. However, existing implementations
utilizing the vehicle bicycle kinematic model may not guarantee controllable
trajectories. We augment this model by incorporating higher-order terms,
including the first and second-order derivatives of curvature and longitudinal
jerk. This inclusion facilitates a richer representation in our cost and
constraint design. We also address roadway compliance, emphasizing adherence to
lane boundaries and directions, which past work often overlooked. Lastly, we
adopt a relaxed logarithmic barrier function to address the CILQR's dependency
on feasible initial trajectories. The proposed methodology is then validated
in real time through simulation and real-world driving experiments.
"Hanxiang Li",
"Jiaqiao Zhang",
"Sheng Zhu",
"Dongjian Tang",
"Donghao Xu"
] | 2023-09-25 22:30:18 | http://arxiv.org/abs/2309.14566v1 | http://arxiv.org/pdf/2309.14566v1 | 2309.14566v1 |
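The relaxed logarithmic barrier mentioned at the end can be sketched with a standard relaxed-barrier construction (the quadratic extension below is an assumed, common choice; the paper's exact relaxation may differ): inside the constraint it matches −log(x), and below a threshold δ it switches to a quadratic with matching value and slope, so the cost stays finite even when the initial trajectory is infeasible.

```python
import math

def relaxed_log_barrier(x, delta=0.1):
    """Relaxed logarithmic barrier: -log(x) for x > delta, and below delta a
    quadratic that matches the barrier's value -log(delta) and slope -1/delta
    at x = delta, keeping the cost finite for constraint-violating states."""
    if x > delta:
        return -math.log(x)
    # quadratic extension with value -log(delta) and slope -1/delta at x = delta
    return 0.5 * (((x - 2.0 * delta) / delta) ** 2 - 1.0) - math.log(delta)
```

Because the penalty is smooth and defined everywhere, an iterative LQR solver can start from an infeasible trajectory and still receive finite costs and well-defined gradients, which is exactly the dependency on feasible initialization the paper sets out to remove.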
Towards a statistical theory of data selection under weak supervision | Given a sample of size $N$, it is often useful to select a subsample of
smaller size $n<N$ to be used for statistical estimation or learning. Such a
data selection step is useful to reduce the requirements of data labeling and
the computational complexity of learning. We assume to be given $N$ unlabeled
samples $\{{\boldsymbol x}_i\}_{i\le N}$, and to be given access to a
`surrogate model' that can predict labels $y_i$ better than random guessing.
Our goal is to select a subset of the samples, to be denoted by $\{{\boldsymbol
x}_i\}_{i\in G}$, of size $|G|=n<N$. We then acquire labels for this set and we
use them to train a model via regularized empirical risk minimization.
By using a mixture of numerical experiments on real and synthetic data, and
mathematical derivations under low- and high-dimensional asymptotics, we show
that: $(i)$~Data selection can be very effective, in particular beating
training on the full sample in some cases; $(ii)$~Certain popular choices in
data selection methods (e.g. unbiased reweighted subsampling, or influence
function-based subsampling) can be substantially suboptimal. | [
"Germain Kolossov",
"Andrea Montanari",
"Pulkit Tandon"
] | 2023-09-25 22:23:27 | http://arxiv.org/abs/2309.14563v2 | http://arxiv.org/pdf/2309.14563v2 | 2309.14563v2 |
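One simple instance of surrogate-guided data selection can be sketched as follows (a hypothetical margin-based rule chosen for illustration; the paper analyzes and compares several rules, including popular ones it shows to be substantially suboptimal):

```python
def select_margin_subset(xs, surrogate_prob, n):
    """Pick the n unlabeled points the surrogate model is least certain about
    (smallest margin |p - 1/2|); labels are then acquired only for these points
    and used for regularized empirical risk minimization.
    surrogate_prob(x) -> surrogate's predicted probability that the label is +1."""
    margins = sorted((abs(surrogate_prob(x) - 0.5), i)
                     for i, x in enumerate(xs))
    return [i for _, i in margins[:n]]
```

The design choice here is that the surrogate only needs to beat random guessing: its margins rank points by informativeness, which is what lets a subsample of size n < N sometimes match or beat training on the full sample.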
Training-free Linear Image Inversion via Flows | Training-free linear inversion involves the use of a pretrained generative
model and -- through appropriate modifications to the generation process --
solving inverse problems without any finetuning of the generative model. While
recent prior methods have explored the use of diffusion models, they still
require the manual tuning of many hyperparameters for different inverse
problems. In this work, we propose a training-free method for image inversion
using pretrained flow models, leveraging the simplicity and efficiency of Flow
Matching models, using theoretically-justified weighting schemes and thereby
significantly reducing the amount of manual tuning. In particular, we draw
inspiration from two main sources: adopting prior gradient correction methods
to the flow regime, and a solver scheme based on conditional Optimal Transport
paths. As pretrained diffusion models are widely accessible, we also show how
to practically adapt diffusion models for our method. Empirically, our approach
requires no problem-specific tuning across an extensive suite of noisy linear
image inversion problems on high-dimensional datasets, ImageNet-64/128 and
AFHQ-256, and we observe that our flow-based method for image inversion
significantly improves upon closely-related diffusion-based linear inversion
methods. | [
"Ashwini Pokle",
"Matthew J. Muckley",
"Ricky T. Q. Chen",
"Brian Karrer"
] | 2023-09-25 22:13:16 | http://arxiv.org/abs/2310.04432v1 | http://arxiv.org/pdf/2310.04432v1 | 2310.04432v1 |
Disruption Detection for a Cognitive Digital Supply Chain Twin Using Hybrid Deep Learning | Purpose: Recent disruptive events, such as COVID-19 and Russia-Ukraine
conflict, had a significant impact on global supply chains. Digital supply
chain twins have been proposed in order to provide decision makers with an
effective and efficient tool to mitigate disruption impact. Methods: This paper
introduces a hybrid deep learning approach for disruption detection within a
cognitive digital supply chain twin framework to enhance supply chain
resilience. The proposed disruption detection module utilises a deep
autoencoder neural network combined with a one-class support vector machine
algorithm. In addition, long short-term memory neural network models are
developed to identify the disrupted echelon and predict time-to-recovery from
the disruption effect. Results: The obtained information from the proposed
approach will help decision-makers and supply chain practitioners make
appropriate decisions aiming at minimizing negative impact of disruptive events
based on real-time disruption detection data. The results demonstrate the
trade-off between disruption detection model sensitivity, encountered delay in
disruption detection, and false alarms. This approach has seldom been used in
recent literature addressing this issue. | [
"Mahmoud Ashraf",
"Amr Eltawil",
"Islam Ali"
] | 2023-09-25 22:03:09 | http://arxiv.org/abs/2309.14557v1 | http://arxiv.org/pdf/2309.14557v1 | 2309.14557v1 |
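The sensitivity/false-alarm trade-off the results describe can be made concrete with a minimal threshold detector (a stdlib stand-in for the paper's autoencoder + one-class-SVM module, shown only to illustrate the trade-off): a smaller k flags disruptions with less delay but raises more false alarms.

```python
import math

def detect_disruptions(series, window=20, k=3.0):
    """Flag time step t when the observation deviates from the trailing-window
    mean by more than k standard deviations. Returns one flag per t >= window."""
    flags = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mean = sum(hist) / window
        std = math.sqrt(sum((x - mean) ** 2 for x in hist) / window)
        flags.append(abs(series[t] - mean) > k * std)
    return flags
```

On a flat demand series with a single spike, the detector flags exactly the disrupted step; in the paper's framework this detection signal would then feed the LSTM models that identify the disrupted echelon and predict time-to-recovery.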
Tactile Estimation of Extrinsic Contact Patch for Stable Placement | Precise perception of contact interactions is essential for fine-grained
manipulation skills in robots. In this paper, we present the design of
feedback skills for robots that must learn to stack complex-shaped objects on
top of each other. To design such a system, a robot should be able to reason
about the stability of placement from very gentle contact interactions. Our
results demonstrate that it is possible to infer the stability of object
placement based on tactile readings during contact formation between the object
and its environment. In particular, we estimate the contact patch between a
grasped object and its environment using force and tactile observations to
estimate the stability of the object during a contact formation. The contact
patch could be used to estimate the stability of the object upon the release of
the grasp. The proposed method is demonstrated on various pairs of objects that
are used in a very popular board game. | [
"Kei Ota",
"Devesh K. Jha",
"Krishna Murthy Jatavallabhula",
"Asako Kanezaki",
"Joshua B. Tenenbaum"
] | 2023-09-25 21:51:48 | http://arxiv.org/abs/2309.14552v1 | http://arxiv.org/pdf/2309.14552v1 | 2309.14552v1 |
Cluster-based Method for Eavesdropping Identification and Localization in Optical Links | We propose a cluster-based method to detect and locate eavesdropping events
in optical line systems characterized by small power losses. Our findings
indicate that detecting such subtle losses from eavesdropping can be
accomplished solely through optical performance monitoring (OPM) data collected
at the receiver. On the other hand, the localization of such events can be
effectively achieved by leveraging in-line OPM data. | [
"Haokun Song",
"Rui Lin",
"Andrea Sgambelluri",
"Filippo Cugini",
"Yajie Li",
"Jie Zhang",
"Paolo Monti"
] | 2023-09-25 21:35:44 | http://arxiv.org/abs/2309.14541v1 | http://arxiv.org/pdf/2309.14541v1 | 2309.14541v1 |
Effect of roundabout design on the behavior of road users: A case study of roundabouts with application of Unsupervised Machine Learning | This research aims to evaluate the performance of roundabouts and to study how
human drivers interact with them. In recent years, roundabouts have been
increasingly adopted across countries due to their safety, capacity, and
environmental advantages, and because they provide safe and fluid vehicle flows
for transit and merging. Roundabouts can significantly reduce speed at turning
intersections; entry speed and the resulting effect on overall speed depend on
the class of road user. In our research, (bus, car, truck) drivers were given
special attention and their behavior was categorized as (conservative, normal,
aggressive). Anticipating and recognizing driver behavior is an important
challenge. Therefore, the aim of this research is to study the effect of
roundabouts on these driver classes and to develop a method for predicting the
behavior of road users at roundabout intersections. The safety benefit stems
primarily from two inherent features of the roundabout. First, by comparing the
collected and processed data used to classify and evaluate driver behavior, and
by comparing the speeds of bus, car, and truck drivers, we found that cars
crossed the roundabout at more suitable speeds than buses and trucks, because
the car is smaller and all parts of the roundabout are visible to its driver.
Drivers arriving from all directions must slow down, giving them more time to
react and mitigating the consequences in the event of an accident. Second, with
fewer conflicting flows (and conflict points), drivers only need to look to
their left (in right-hand traffic) for other vehicles, making it easier to
cross the roundabout since there is less need to split attention between
different directions.
"Tasnim M. Dwekat",
"Ayda A. Almsre",
"Huthaifa I. Ashqar"
] | 2023-09-25 21:28:52 | http://arxiv.org/abs/2309.14540v1 | http://arxiv.org/pdf/2309.14540v1 | 2309.14540v1 |
Detach-ROCKET: Sequential feature selection for time series classification with random convolutional kernels | Time series classification is essential in many fields, such as medicine,
finance, environmental science, and manufacturing, enabling tasks like disease
diagnosis, anomaly detection, and stock price prediction. Machine learning
models like Recurrent Neural Networks and InceptionTime, while successful in
numerous applications, can face scalability limitations due to intensive
training requirements. To address this, random convolutional kernel models such
as Rocket and its derivatives have emerged, simplifying training and achieving
state-of-the-art performance by utilizing a large number of randomly generated
features from time series data. However, due to their random nature, most of
the generated features are redundant or non-informative, adding unnecessary
computational load and compromising generalization. Here, we introduce
Sequential Feature Detachment (SFD) as a method to identify and prune these
non-essential features. SFD uses model coefficients to estimate feature
importance and, unlike previous algorithms, can handle large feature sets
without the need for complex hyperparameter tuning. Testing on the UCR archive
demonstrates that SFD can produce models with $10\%$ of the original features
while improving the accuracy on the test set by $0.2\%$. We also present an
end-to-end procedure for determining an optimal balance between the number of
features and model accuracy, called Detach-ROCKET. When applied to the largest
binary UCR dataset, Detach-ROCKET is capable of reducing model size by $98.9\%$
and increasing test accuracy by $0.6\%$. | [
"Gonzalo Uribarri",
"Federico Barone",
"Alessio Ansuini",
"Erik Fransén"
] | 2023-09-25 20:24:36 | http://arxiv.org/abs/2309.14518v1 | http://arxiv.org/pdf/2309.14518v1 | 2309.14518v1 |
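The detachment loop described in the Detach-ROCKET abstract can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's exact procedure: a closed-form ridge fit stands in for the linear classifier, and the drop fraction and round count are arbitrary choices.

```python
import numpy as np

def sequential_feature_detachment(X, y, drop_frac=0.5, rounds=3, lam=1.0):
    """Toy SFD sketch: repeatedly fit a ridge model on the surviving
    features and detach those with the smallest |coefficient|."""
    active = np.arange(X.shape[1])
    for _ in range(rounds):
        Xa = X[:, active]
        # Closed-form ridge fit: w = (Xa^T Xa + lam*I)^{-1} Xa^T y
        w = np.linalg.solve(Xa.T @ Xa + lam * np.eye(len(active)), Xa.T @ y)
        keep = int(np.ceil(len(active) * (1.0 - drop_frac)))
        # Retain the features whose coefficients have the largest magnitude
        order = np.argsort(-np.abs(w))
        active = active[np.sort(order[:keep])]
    return active

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)  # 2 informative features
selected = sequential_feature_detachment(X, y)
print(len(selected), 0 in selected, 1 in selected)
```

With `drop_frac=0.5` and three rounds, 64 features shrink to 8, and the two informative ones survive the pruning.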
DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models | Computation in a typical Transformer-based large language model (LLM) can be
characterized by batch size, hidden dimension, number of layers, and sequence
length. Until now, system works for accelerating LLM training have focused on
the first three dimensions: data parallelism for batch size, tensor parallelism
for hidden size and pipeline parallelism for model depth or layers. These
widely studied forms of parallelism are not targeted or optimized for long
sequence Transformer models. Given practical application needs for long
sequence LLMs, renewed attention is being drawn to sequence parallelism.
However, existing works in sequence parallelism are constrained by
memory-communication inefficiency, limiting their scalability to long sequence
large models. In this work, we introduce DeepSpeed-Ulysses, a novel, portable
and effective methodology for enabling highly efficient and scalable LLM
training with extremely long sequence length. DeepSpeed-Ulysses at its core
partitions input data along the sequence dimension and employs an efficient
all-to-all collective communication for attention computation. Theoretical
communication analysis shows that whereas other methods incur communication
overhead as sequence length increases, DeepSpeed-Ulysses maintains constant
communication volume when sequence length and compute devices are increased
proportionally. Furthermore, experimental evaluations show that
DeepSpeed-Ulysses trains 2.5x faster with 4x longer sequence length than the
existing method SOTA baseline. | [
"Sam Ade Jacobs",
"Masahiro Tanaka",
"Chengming Zhang",
"Minjia Zhang",
"Shuaiwen Leon Song",
"Samyam Rajbhandari",
"Yuxiong He"
] | 2023-09-25 20:15:57 | http://arxiv.org/abs/2309.14509v2 | http://arxiv.org/pdf/2309.14509v2 | 2309.14509v2 |
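The constant-volume claim in the DeepSpeed-Ulysses abstract is easy to check with back-of-the-envelope arithmetic. The byte accounting below is an illustrative assumption (fp16 activations, ignoring attention heads and constant factors), not DeepSpeed's exact analysis.

```python
def alltoall_volume_per_device(seq_len, hidden, devices):
    """Per-device bytes moved by one all-to-all over fp16 activations of
    shape (seq_len, hidden): each device exchanges its 1/P shard with the
    others, roughly 2 * seq_len * hidden / P bytes."""
    return 2 * seq_len * hidden / devices

def allgather_volume_per_device(seq_len, hidden, devices):
    """Baseline-style all-gather of the full sequence: each device receives
    the other (P-1)/P of the activations."""
    return 2 * seq_len * hidden * (devices - 1) / devices

# Grow sequence length and device count proportionally, as in the abstract:
for p in (8, 16, 32):
    n = 4096 * p
    print(p, alltoall_volume_per_device(n, 1024, p),
          allgather_volume_per_device(n, 1024, p))
# The all-to-all column stays constant; the all-gather column grows with p.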
Zeroth-order Riemannian Averaging Stochastic Approximation Algorithms | We present Zeroth-order Riemannian Averaging Stochastic Approximation
(\texttt{Zo-RASA}) algorithms for stochastic optimization on Riemannian
manifolds. We show that \texttt{Zo-RASA} achieves optimal sample complexities
for generating $\epsilon$-approximate first-order stationary solutions using
only one-sample or constant-order batches in each iteration. Our approach
employs Riemannian moving-average stochastic gradient estimators, and a novel
Riemannian-Lyapunov analysis technique for convergence analysis. We improve the
algorithm's practicality by using retractions and vector transport, instead of
exponential mappings and parallel transports, thereby reducing per-iteration
complexity. Additionally, we introduce a novel geometric condition, satisfied
by manifolds with bounded second fundamental form, which enables new error
bounds for approximating parallel transport with vector transport. | [
"Jiaxiang Li",
"Krishnakumar Balasubramanian",
"Shiqian Ma"
] | 2023-09-25 20:13:36 | http://arxiv.org/abs/2309.14506v1 | http://arxiv.org/pdf/2309.14506v1 | 2309.14506v1 |
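To make the Zo-RASA ingredients concrete, here is a minimal zeroth-order step on the unit sphere using a two-point gradient estimate and normalisation as the retraction. The constants, the omitted moving-average estimator, and the helper name `zo_sphere_step` are all illustrative assumptions.

```python
import numpy as np

def zo_sphere_step(f, x, mu=1e-4, lr=0.1, rng=None):
    """One zeroth-order step on the unit sphere: sample a tangent direction,
    estimate the directional derivative from two function values, move, and
    retract by normalisation (a cheap substitute for the exponential map)."""
    rng = rng or np.random.default_rng()
    u = rng.normal(size=x.shape)
    u -= (u @ x) * x                       # project to the tangent space at x
    u /= np.linalg.norm(u)
    g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u  # ZO gradient estimate
    x_new = x - lr * g
    return x_new / np.linalg.norm(x_new)   # retraction back to the sphere

# Minimise the Rayleigh quotient f(x) = x^T A x over the sphere.
A = np.diag([3.0, 2.0, 1.0])
f = lambda x: x @ A @ x
rng = np.random.default_rng(0)
x = np.ones(3) / np.sqrt(3.0)
for _ in range(500):
    x = zo_sphere_step(f, x, rng=rng)
print(f(x))  # approaches the smallest eigenvalue, 1.0
```

The point of the sketch is that only function evaluations of f are ever used; the manifold structure enters through the tangent projection and the retraction.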
Uncertainty Aware Deep Learning for Particle Accelerators | Standard deep learning models for classification and regression applications
are ideal for capturing complex system dynamics. However, their predictions can
be arbitrarily inaccurate when the input samples are not similar to the
training data. Implementation of distance aware uncertainty estimation can be
used to detect these scenarios and provide a level of confidence associated
with their predictions. In this paper, we present results from using Deep
Gaussian Process Approximation (DGPA) methods for errant beam prediction at
Spallation Neutron Source (SNS) accelerator (classification) and we provide an
uncertainty aware surrogate model for the Fermi National Accelerator Lab (FNAL)
Booster Accelerator Complex (regression). | [
"Kishansingh Rajput",
"Malachi Schram",
"Karthik Somayaji"
] | 2023-09-25 20:01:57 | http://arxiv.org/abs/2309.14502v1 | http://arxiv.org/pdf/2309.14502v1 | 2309.14502v1 |
Era Splitting -- Invariant Learning for Decision Trees | Real life machine learning problems exhibit distributional shifts in the data
from one time to another or from one place to another. This behavior is beyond
the scope of the traditional empirical risk minimization paradigm, which
assumes i.i.d. distribution of data over time and across locations. The
emerging field of out-of-distribution (OOD) generalization addresses this
reality with new theory and algorithms which incorporate environmental, or
era-wise information into the algorithms. So far, most research has been
focused on linear models and/or neural networks. In this research we develop
two new splitting criteria for decision trees, which allow us to apply ideas
from OOD generalization research to decision tree models, including random
forest and gradient-boosting decision trees. The new splitting criteria use
era-wise information associated with each data point to allow tree-based models
to find split points that are optimal across all disjoint eras in the data,
instead of optimal over the entire data set pooled together, which is the
default setting. We describe the new splitting criteria in detail and develop
unique experiments to showcase the benefits of these new criteria, which
improve metrics in our experiments out-of-sample. The new criteria are
incorporated into a state-of-the-art gradient boosted decision tree model
in the Scikit-Learn code base, which is made freely available. | [
"Timothy DeLise"
] | 2023-09-25 19:45:45 | http://arxiv.org/abs/2309.14496v2 | http://arxiv.org/pdf/2309.14496v2 | 2309.14496v2 |
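A toy version of an era-wise splitting criterion makes the contrast with pooled splitting concrete. The worst-era aggregation below is an assumption for illustration; the paper defines two specific criteria that may aggregate across eras differently.

```python
import numpy as np

def variance_gain(y_left, y_right):
    """Standard variance-reduction gain of a candidate split, pooled."""
    y = np.concatenate([y_left, y_right])
    return np.var(y) - (len(y_left) * np.var(y_left)
                        + len(y_right) * np.var(y_right)) / len(y)

def era_wise_gain(y_left, y_right, e_left, e_right, agg=min):
    """Sketch of an era-aware criterion: score the split separately inside
    each era and aggregate over eras (here: the worst era)."""
    gains = []
    for e in np.union1d(e_left, e_right):
        gl, gr = y_left[e_left == e], y_right[e_right == e]
        if len(gl) and len(gr):
            gains.append(variance_gain(gl, gr))
    return agg(gains)

# A split that looks good pooled, because era is a confounder,
# but produces no gain inside either era:
y_left  = np.array([0., 0., 0., 1.]); e_left  = np.array([0, 0, 0, 1])
y_right = np.array([0., 1., 1., 1.]); e_right = np.array([0, 1, 1, 1])
print(variance_gain(y_left, y_right))                   # 0.0625
print(era_wise_gain(y_left, y_right, e_left, e_right))  # 0.0
```

Pooled variance reduction rewards this split because era acts as a confounder, while the era-wise score correctly reports zero gain in every era.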
Classifying token frequencies using angular Minkowski $p$-distance | Angular Minkowski $p$-distance is a dissimilarity measure that is obtained by
replacing Euclidean distance in the definition of cosine dissimilarity with
other Minkowski $p$-distances. Cosine dissimilarity is frequently used with
datasets containing token frequencies, and angular Minkowski $p$-distance may
potentially be an even better choice for certain tasks. In a case study based
on the 20-newsgroups dataset, we evaluate classification performance for
classical weighted nearest neighbours, as well as fuzzy rough nearest
neighbours. In addition, we analyse the relationship between the hyperparameter
$p$, the dimensionality $m$ of the dataset, the number of neighbours $k$, the
choice of weights and the choice of classifier. We conclude that it is possible
to obtain substantially higher classification performance with angular
Minkowski $p$-distance with suitable values for $p$ than with classical cosine
dissimilarity. | [
"Oliver Urs Lenz",
"Chris Cornelis"
] | 2023-09-25 19:45:11 | http://arxiv.org/abs/2309.14495v1 | http://arxiv.org/pdf/2309.14495v1 | 2309.14495v1 |
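The definition admits a compact sketch. Scaling each vector to unit p-norm before taking the Minkowski p-distance is one natural reading of "replacing Euclidean distance in cosine dissimilarity"; the paper's exact formulation may differ.

```python
import numpy as np

def angular_minkowski_p_distance(x, y, p=2.0):
    """Dissimilarity obtained by measuring the Minkowski p-distance between
    the two vectors after scaling each to unit p-norm. (One natural reading
    of the abstract's definition, stated here as an assumption.)"""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xn = x / np.linalg.norm(x, ord=p)
    yn = y / np.linalg.norm(y, ord=p)
    return np.linalg.norm(xn - yn, ord=p)

# Orthogonal directions end up sqrt(2) apart for p = 2:
d = angular_minkowski_p_distance([3.0, 0.0], [0.0, 4.0], p=2.0)
print(round(d, 6))
```

For p = 2 this is the chord distance between unit vectors, a monotone function of classical cosine dissimilarity, so the hyperparameter p strictly generalises the cosine case.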
A Novel Deep Learning Technique for Morphology Preserved Fetal ECG Extraction from Mother ECG using 1D-CycleGAN | Monitoring the electrical pulse of fetal heart through a non-invasive fetal
electrocardiogram (fECG) can easily detect abnormalities in the developing
heart to significantly reduce the infant mortality rate and post-natal
complications. Due to the overlapping of maternal and fetal R-peaks, the low
amplitude of the fECG, systematic and ambient noises, typical signal extraction
methods, such as adaptive filters, independent component analysis, empirical
mode decomposition, etc., are unable to produce satisfactory fECG. While some
techniques can produce accurate QRS waves, they often ignore other important
aspects of the ECG. Our approach, which is based on 1D CycleGAN, can
reconstruct the fECG signal from the mECG signal while maintaining the
morphology due to extensive preprocessing and appropriate framework. The
performance of our solution was evaluated by combining two available datasets
from Physionet, "Abdominal and Direct Fetal ECG Database" and "Fetal
electrocardiograms, direct and abdominal with reference heartbeat annotations",
where it achieved an average PCC and Spectral-Correlation score of 88.4% and
89.4%, respectively. It detects the fQRS of the signal with accuracy,
precision, recall and F1 score of 92.6%, 97.6%, 94.8% and 96.4%, respectively.
It can also accurately produce the estimation of fetal heart rate and R-R
interval with an error of 0.25% and 0.27%, respectively. The main contribution
of our work is that, unlike similar studies, it can retain the morphology of
the ECG signal with high fidelity. The accuracy of our solution for fetal heart
rate and R-R interval length is comparable to existing state-of-the-art
techniques. This makes it a highly effective tool for early diagnosis of fetal
heart diseases and regular health checkups of the fetus. | [
"Promit Basak",
"A. H. M Nazmus Sakib",
"Muhammad E. H. Chowdhury",
"Nasser Al-Emadi",
"Huseyin Cagatay Yalcin",
"Shona Pedersen",
"Sakib Mahmud",
"Serkan Kiranyaz",
"Somaya Al-Maadeed"
] | 2023-09-25 19:38:51 | http://arxiv.org/abs/2310.03759v1 | http://arxiv.org/pdf/2310.03759v1 | 2310.03759v1 |
Explainable and Accurate Natural Language Understanding for Voice Assistants and Beyond | Joint intent detection and slot filling, which is also termed as joint NLU
(Natural Language Understanding) is invaluable for smart voice assistants.
Recent advancements in this area have been heavily focusing on improving
accuracy using various techniques. Explainability is undoubtedly an important
aspect for deep learning-based models including joint NLU models. Without
explainability, their decisions are opaque to the outside world and hence, have
tendency to lack user trust. Therefore to bridge this gap, we transform the
full joint NLU model to be `inherently' explainable at granular levels without
compromising on accuracy. Further, as we enable the full joint NLU model
explainable, we show that our extension can be successfully used in other
general classification tasks. We demonstrate this using sentiment analysis and
named entity recognition. | [
"Kalpa Gunaratna",
"Vijay Srinivasan",
"Hongxia Jin"
] | 2023-09-25 19:30:44 | http://arxiv.org/abs/2309.14485v1 | http://arxiv.org/pdf/2309.14485v1 | 2309.14485v1 |
Unveiling the Potential of Deep Learning Models for Solar Flare Prediction in Near-Limb Regions | This study aims to evaluate the performance of deep learning models in
predicting $\geq$M-class solar flares with a prediction window of 24 hours,
using hourly sampled full-disk line-of-sight (LoS) magnetogram images,
particularly focusing on the often overlooked flare events corresponding to the
near-limb regions (beyond $\pm$70$^{\circ}$ of the solar disk). We trained
three well-known deep learning architectures--AlexNet, VGG16, and ResNet34--
using transfer learning and compared and evaluated the overall performance of
our models using true skill statistics (TSS) and Heidke skill score (HSS) and
computed recall scores to understand the prediction sensitivity in central and
near-limb regions for both X- and M-class flares. The following points
summarize the key findings of our study: (1) The highest overall performance
was observed with the AlexNet-based model, which achieved an average
TSS$\sim$0.53 and HSS$\sim$0.37; (2) Further, a spatial analysis of recall
scores disclosed that for the near-limb events, the VGG16- and ResNet34-based
models exhibited superior prediction sensitivity. The best results, however,
were seen with the ResNet34-based model for the near-limb flares, where the
average recall was approximately 0.59 (the recall for X- and M-class was 0.81
and 0.56 respectively) and (3) Our research findings demonstrate that our
models are capable of discerning complex spatial patterns from full-disk
magnetograms and exhibit skill in predicting solar flares, even in the vicinity
of near-limb regions. This ability holds substantial importance for operational
flare forecasting systems. | [
"Chetraj Pandey",
"Rafal A. Angryk",
"Berkay Aydin"
] | 2023-09-25 19:30:02 | http://arxiv.org/abs/2309.14483v1 | http://arxiv.org/pdf/2309.14483v1 | 2309.14483v1 |
LogGPT: Log Anomaly Detection via GPT | Detecting system anomalies based on log data is important for ensuring the
security and reliability of computer systems. Recently, deep learning models
have been widely used for log anomaly detection. The core idea is to model the
log sequences as natural language and adopt deep sequential models, such as
LSTM or Transformer, to encode the normal patterns in log sequences via
language modeling. However, there is a gap between language modeling and
anomaly detection as the objective of training a sequential model via a
language modeling loss is not directly related to anomaly detection. To fill
this gap, we propose LogGPT, a novel framework that employs GPT for log anomaly
detection. LogGPT is first trained to predict the next log entry based on the
preceding sequence. To further enhance the performance of LogGPT, we propose a
novel reinforcement learning strategy to finetune the model specifically for
the log anomaly detection task. The experimental results on three datasets show
that LogGPT significantly outperforms existing state-of-the-art approaches. | [
"Xiao Han",
"Shuhan Yuan",
"Mohamed Trabelsi"
] | 2023-09-25 19:29:50 | http://arxiv.org/abs/2309.14482v1 | http://arxiv.org/pdf/2309.14482v1 | 2309.14482v1 |
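The language-modeling half of the LogGPT pipeline can be illustrated with a toy scorer. A bigram count table stands in for GPT purely to keep the sketch self-contained (an assumption), and the reinforcement-learning finetuning step is omitted; the top-k scoring rule is the standard one in this line of work.

```python
from collections import Counter, defaultdict

def train_bigram_lm(sequences):
    """Toy stand-in for the language model: next-log-key counts."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def is_anomalous(seq, counts, k=1):
    """Flag the sequence if some actual next key falls outside the model's
    top-k predictions for the preceding key."""
    for a, b in zip(seq, seq[1:]):
        top = [key for key, _ in counts[a].most_common(k)]
        if b not in top:
            return True
    return False

normal = [["open", "read", "close"]] * 50 + [["open", "write", "close"]] * 5
lm = train_bigram_lm(normal)
print(is_anomalous(["open", "read", "close"], lm, k=1))    # False
print(is_anomalous(["open", "delete", "close"], lm, k=1))  # True
```

The gap the paper addresses is visible even here: the scorer is built from a pure next-entry objective, with no signal telling it which deviations actually matter for anomaly detection.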
Adapting Double Q-Learning for Continuous Reinforcement Learning | The majority of off-policy reinforcement learning algorithms use overestimation
bias control techniques. Most of these techniques are rooted in heuristics,
primarily addressing the consequences of overestimation rather than its
fundamental origins. In this work we present a novel approach to the bias
correction, similar in spirit to Double Q-Learning. We propose using a policy
in form of a mixture with two components. Each policy component is maximized
and assessed by separate networks, which removes any basis for the
overestimation bias. Our approach shows promising near-SOTA results on a small
set of MuJoCo environments. | [
"Arsenii Kuznetsov"
] | 2023-09-25 19:09:54 | http://arxiv.org/abs/2309.14471v1 | http://arxiv.org/pdf/2309.14471v1 | 2309.14471v1 |
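For reference, the classic tabular Double Q-Learning update (van Hasselt, 2010) that the abstract invokes "in spirit" looks as follows; the paper's contribution replaces the two tables with separate networks assessing the components of a mixture policy.

```python
import random

def double_q_update(Q1, Q2, s, a, r, s2, alpha=0.1, gamma=0.99):
    """One tabular Double Q-Learning step: one table selects the argmax
    action, the other evaluates it, which removes the positive bias of
    max-based bootstrapping."""
    if random.random() < 0.5:
        a_star = max(Q1[s2], key=Q1[s2].get)   # select with Q1 ...
        target = r + gamma * Q2[s2][a_star]    # ... evaluate with Q2
        Q1[s][a] += alpha * (target - Q1[s][a])
    else:
        a_star = max(Q2[s2], key=Q2[s2].get)
        target = r + gamma * Q1[s2][a_star]
        Q2[s][a] += alpha * (target - Q2[s][a])

# Tiny two-state illustration: repeated reward propagates into both tables.
random.seed(0)
Q1 = {0: {"a": 0.0, "b": 0.0}, 1: {"a": 0.0, "b": 0.0}}
Q2 = {0: {"a": 0.0, "b": 0.0}, 1: {"a": 0.0, "b": 0.0}}
for _ in range(100):
    double_q_update(Q1, Q2, 0, "a", 1.0, 1)
print(Q1[0]["a"] > 0.0, Q2[0]["a"] > 0.0)
```

The coin flip decouples action selection from action evaluation, which is the mechanism the continuous mixture-policy variant preserves.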
FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed Estimation Using Traffic Cameras | Estimating the speed of vehicles using traffic cameras is a crucial task for
traffic surveillance and management, enabling more optimal traffic flow,
improved road safety, and lower environmental impact. Transportation-dependent
systems, such as for navigation and logistics, have great potential to benefit
from reliable speed estimation. While there is prior research in this area
reporting competitive accuracy levels, their solutions lack reproducibility and
robustness across different datasets. To address this, we provide a novel
framework for automatic real-time vehicle speed calculation, which copes with
more diverse data from publicly available traffic cameras to achieve greater
robustness. Our model employs novel techniques to estimate the length of road
segments via depth map prediction. Additionally, our framework is capable of
handling realistic conditions such as camera movements and different video
stream inputs automatically. We compare our model to three well-known models in
the field using their benchmark datasets. While our model does not set a new
state of the art regarding prediction performance, the results are competitive
on realistic CCTV videos. At the same time, our end-to-end pipeline offers more
consistent results, an easier implementation, and better compatibility. Its
modular structure facilitates reproducibility and future improvements. | [
"Lucas Liebe",
"Franz Sauerwald",
"Sylwester Sawicki",
"Matthias Schneider",
"Leo Schuhmann",
"Tolga Buz",
"Paul Boes",
"Ahmad Ahmadov",
"Gerard de Melo"
] | 2023-09-25 19:02:40 | http://arxiv.org/abs/2309.14468v1 | http://arxiv.org/pdf/2309.14468v1 | 2309.14468v1 |
DefGoalNet: Contextual Goal Learning from Demonstrations For Deformable Object Manipulation | Shape servoing, a robotic task dedicated to controlling objects to desired
goal shapes, is a promising approach to deformable object manipulation. An
issue arises, however, with the reliance on the specification of a goal shape.
This goal has been obtained either by a laborious domain knowledge engineering
process or by manually manipulating the object into the desired shape and
capturing the goal shape at that specific moment, both of which are impractical
in various robotic applications. In this paper, we solve this problem by
developing a novel neural network DefGoalNet, which learns deformable object
goal shapes directly from a small number of human demonstrations. We
demonstrate our method's effectiveness on various robotic tasks, both in
simulation and on a physical robot. Notably, in the surgical retraction task,
even when trained with as few as 10 demonstrations, our method achieves a
median success percentage of nearly 90%. These results mark a substantial
advancement in enabling shape servoing methods to bring deformable object
manipulation closer to practical, real-world applications. | [
"Bao Thach",
"Tanner Watts",
"Shing-Hei Ho",
"Tucker Hermans",
"Alan Kuntz"
] | 2023-09-25 18:54:32 | http://arxiv.org/abs/2309.14463v1 | http://arxiv.org/pdf/2309.14463v1 | 2309.14463v1 |
Skilog: A Smart Sensor System for Performance Analysis and Biofeedback in Ski Jumping | In ski jumping, low repetition rates of jumps limit the effectiveness of
training. Thus, increasing learning rate within every single jump is key to
success. A critical element of athlete training is motor learning, which has
been shown to be accelerated by feedback methods. In particular, a fine-grained
control of the center of gravity in the in-run is essential. This is because
the actual takeoff occurs within a blink of an eye ($\sim$300ms), thus any
unbalanced body posture during the in-run will affect flight. This paper
presents a smart, compact, and energy-efficient wireless sensor system for
real-time performance analysis and biofeedback during ski jumping. The system
operates by gauging foot pressures at three distinct points on the insoles of
the ski boot at 100Hz. Foot pressure data can either be directly sent to
coaches to improve their feedback, or fed into a ML model to give athletes
instantaneous in-action feedback using a vibration motor in the ski boot. In
the biofeedback scenario, foot pressures act as input variables for an
optimized XGBoost model. We achieve a high predictive accuracy of 92.7% for
center of mass predictions (dorsal shift, neutral stand, ventral shift).
Subsequently, we parallelized and fine-tuned our XGBoost model for a RISC-V
based low power parallel processor (GAP9), based on the PULP architecture. We
demonstrate real-time detection and feedback (0.0109ms/inference) using our
on-chip deployment. The proposed smart system is unobtrusive with a slim form
factor (13mm baseboard, 3.2mm antenna) and a lightweight build (26g). Power
consumption analysis reveals that the system's energy-efficient design enables
sustained operation over multiple days (up to 300 hours) without requiring
recharge. | [
"Lukas Schulthess",
"Thorir Mar Ingolfsson",
"Marc Nölke",
"Michele Magno",
"Luca Benini",
"Christoph Leitner"
] | 2023-09-25 18:27:29 | http://arxiv.org/abs/2309.14455v1 | http://arxiv.org/pdf/2309.14455v1 | 2309.14455v1 |
Learning dislocation dynamics mobility laws from large-scale MD simulations | The computational method of discrete dislocation dynamics (DDD), used as a
coarse-grained model of true atomistic dynamics of lattice dislocations, has
become a powerful tool to study metal plasticity arising from the collective
behavior of dislocations. As a mesoscale approach, motion of dislocations in
the DDD model is prescribed via the mobility law; a function which specifies
how dislocation lines should respond to the driving force. However, the
development of traditional hand-crafted mobility laws can be a cumbersome task
and may involve detrimental simplifications. Here we introduce a
machine-learning (ML) framework to streamline the development of data-driven
mobility laws which are modeled as graph neural networks (GNN) trained on
large-scale Molecular Dynamics (MD) simulations of crystal plasticity. We
illustrate our approach on BCC tungsten and demonstrate that our GNN mobility
implemented in large-scale DDD simulations accurately reproduces the
challenging tension/compression asymmetry observed in ground-truth MD
simulations while correctly predicting the flow stress at lower straining rate
conditions unseen during training, thereby demonstrating the ability of our
method to learn relevant dislocation physics. Our DDD+ML approach opens new
promising avenues to improve fidelity of the DDD model and to incorporate more
complex dislocation motion behaviors in an automated way, providing a faithful
proxy for dislocation dynamics several orders of magnitude faster than
ground-truth MD simulations. | [
"Nicolas Bertin",
"Vasily V. Bulatov",
"Fei Zhou"
] | 2023-09-25 18:16:45 | http://arxiv.org/abs/2309.14450v1 | http://arxiv.org/pdf/2309.14450v1 | 2309.14450v1 |
Self-Recovery Prompting: Promptable General Purpose Service Robot System with Foundation Models and Self-Recovery | A general-purpose service robot (GPSR), which can execute diverse tasks in
various environments, requires a system with high generalizability and
adaptability to tasks and environments. In this paper, we first developed a
top-level GPSR system for worldwide competition (RoboCup@Home 2023) based on
multiple foundation models. This system is both generalizable to variations and
adaptive by prompting each model. Then, by analyzing the performance of the
developed system, we found three types of failure in more realistic GPSR
application settings: insufficient information, incorrect plan generation, and
plan execution failure. We then propose the self-recovery prompting pipeline,
which explores the necessary information and modifies its prompts to recover
from failure. We experimentally confirm that the system with the self-recovery
mechanism can accomplish tasks by resolving various failure cases.
Supplementary videos are available at https://sites.google.com/view/srgpsr . | [
"Mimo Shirasaka",
"Tatsuya Matsushima",
"Soshi Tsunashima",
"Yuya Ikeda",
"Aoi Horo",
"So Ikoma",
"Chikaha Tsuji",
"Hikaru Wada",
"Tsunekazu Omija",
"Dai Komukai",
"Yutaka Matsuo Yusuke Iwasawa"
] | 2023-09-25 18:00:03 | http://arxiv.org/abs/2309.14425v2 | http://arxiv.org/pdf/2309.14425v2 | 2309.14425v2 |
On the expressivity of embedding quantum kernels | One of the most natural connections between quantum and classical machine
learning has been established in the context of kernel methods. Kernel methods
rely on kernels, which are inner products of feature vectors living in large
feature spaces. Quantum kernels are typically evaluated by explicitly
constructing quantum feature states and then taking their inner product, here
called embedding quantum kernels. Since classical kernels are usually evaluated
without using the feature vectors explicitly, we wonder how expressive
embedding quantum kernels are. In this work, we raise the fundamental question:
can all quantum kernels be expressed as the inner product of quantum feature
states? Our first result is positive: Invoking computational universality, we
find that for any kernel function there always exists a corresponding quantum
feature map and an embedding quantum kernel. The more operational reading of
the question is concerned with efficient constructions, however. In a second
part, we formalize the question of universality of efficient embedding quantum
kernels. For shift-invariant kernels, we use the technique of random Fourier
features to show that they are universal within the broad class of all kernels
which allow a variant of efficient Fourier sampling. We then extend this result
to a new class of so-called composition kernels, which we show also contains
projected quantum kernels introduced in recent works. After proving the
universality of embedding quantum kernels for both shift-invariant and
composition kernels, we identify the directions towards new, more exotic, and
unexplored quantum kernel families, for which it still remains open whether
they correspond to efficient embedding quantum kernels. | [
"Elies Gil-Fuster",
"Jens Eisert",
"Vedran Dunjko"
] | 2023-09-25 18:00:01 | http://arxiv.org/abs/2309.14419v1 | http://arxiv.org/pdf/2309.14419v1 | 2309.14419v1 |
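The random Fourier feature technique used in the universality argument is classical (Rahimi and Recht, 2007). Below is a minimal sketch for the Gaussian kernel, with illustrative parameter choices; it shows the explicit feature map whose inner products approximate a shift-invariant kernel, which is the classical analogue of an embedding quantum kernel.

```python
import numpy as np

def rff_features(X, n_features, gamma=0.5, seed=0):
    """Random Fourier features for the Gaussian shift-invariant kernel
    k(x, y) = exp(-gamma * ||x - y||^2): sample frequencies from the
    kernel's Fourier transform and take random cosine features."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = rff_features(X, n_features=20000)
K_approx = Z @ Z.T  # inner products of explicit feature vectors
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.max(np.abs(K_approx - K_exact)))  # small; shrinks as n_features grows
```

The Monte Carlo error decays as O(1/sqrt(n_features)), which is the efficiency notion the abstract's universality result rests on.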
Provable advantages of kernel-based quantum learners and quantum preprocessing based on Grover's algorithm | There is an ongoing effort to find quantum speedups for learning problems.
Recently, [Y. Liu et al., Nat. Phys. $\textbf{17}$, 1013--1017 (2021)] have
proven an exponential speedup for quantum support vector machines by leveraging
the speedup of Shor's algorithm. We expand upon this result and identify a
speedup utilizing Grover's algorithm in the kernel of a support vector machine.
To show the practicality of the kernel structure we apply it to a problem
related to pattern matching, providing a practical yet provable advantage.
Moreover, we show that combining quantum computation in a preprocessing step
with classical methods for classification further improves classifier
performance. | [
"Till Muser",
"Elias Zapusek",
"Vasilis Belis",
"Florentin Reiter"
] | 2023-09-25 18:00:00 | http://arxiv.org/abs/2309.14406v1 | http://arxiv.org/pdf/2309.14406v1 | 2309.14406v1 |