title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---|
Beyond attention: deriving biologically interpretable insights from weakly-supervised multiple-instance learning models | Recent advances in attention-based multiple instance learning (MIL) have
improved our insights into the tissue regions that models rely on to make
predictions in digital pathology. However, the interpretability of these
approaches is still limited. In particular, they do not report whether
high-attention regions are positively or negatively associated with the class
labels or how well these regions correspond to previously established clinical
and biological knowledge. We address this by introducing a post-training
methodology to analyse MIL models. Firstly, we introduce
prediction-attention-weighted (PAW) maps by combining tile-level attention and
prediction scores produced by a refined encoder, allowing us to quantify the
predictive contribution of high-attention regions. Secondly, we introduce a
biological feature instantiation technique by integrating PAW maps with nuclei
segmentation masks. This further improves interpretability by providing
biologically meaningful features related to the cellular organisation of the
tissue and facilitates comparisons with known clinical features. We illustrate
the utility of our approach by comparing PAW maps obtained for prostate cancer
diagnosis (i.e. samples containing malignant tissue, 381/516 tissue samples)
and prognosis (i.e. samples from patients with biochemical recurrence following
surgery, 98/663 tissue samples) in a cohort of patients from the International
Cancer Genome Consortium (ICGC UK Prostate Group). Our approach reveals that
regions that are predictive of adverse prognosis do not tend to co-locate with
the tumour regions, indicating that non-cancer cells should also be studied
when evaluating prognosis. | [
"Willem Bonnaffé",
"CRUK ICGC Prostate Group",
"Freddie Hamdy",
"Yang Hu",
"Ian Mills",
"Jens Rittscher",
"Clare Verrill",
"Dan J. Woodcock"
] | 2023-09-07 09:44:35 | http://arxiv.org/abs/2309.03925v1 | http://arxiv.org/pdf/2309.03925v1 | 2309.03925v1 |
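Illustrative note (not from the paper above): the prediction-attention-weighted (PAW) idea of combining tile-level attention with tile-level prediction scores can be sketched in a few lines of NumPy. The array values and the simple element-wise product are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

# Hypothetical per-tile outputs of an attention-based MIL model:
# softmax-normalised attention weights and signed tile-level prediction scores.
attention = np.array([0.05, 0.60, 0.30, 0.05])
prediction = np.array([0.10, 0.95, -0.80, 0.20])

# A PAW-style map: weight each tile's prediction by its attention, so that
# high-attention tiles also carry the sign and strength of their contribution.
paw = attention * prediction
print(paw)
```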
Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning | Hyperparameter optimization (HPO) is important to leverage the full potential
of machine learning (ML). In practice, users are often interested in
multi-objective (MO) problems, i.e., optimizing potentially conflicting
objectives, like accuracy and energy consumption. To tackle this, the vast
majority of MO-ML algorithms return a Pareto front of non-dominated machine
learning models to the user. Optimizing the hyperparameters of such algorithms
is non-trivial as evaluating a hyperparameter configuration entails evaluating
the quality of the resulting Pareto front. In the literature, there are known
indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by
quantifying different properties (e.g., volume, proximity to a reference
point). However, choosing the indicator that leads to the desired Pareto front
might be a hard task for a user. In this paper, we propose a human-centered
interactive HPO approach tailored towards multi-objective ML leveraging
preference learning to extract desiderata from users that guide the
optimization. Instead of relying on the user guessing the most suitable
indicator for their needs, our approach automatically learns an appropriate
indicator. Concretely, we leverage pairwise comparisons of distinct Pareto
fronts to learn such an appropriate quality indicator. Then, we optimize the
hyperparameters of the underlying MO-ML algorithm towards this learned
indicator using a state-of-the-art HPO approach. In an experimental study
targeting the environmental impact of ML, we demonstrate that our approach
leads to substantially better Pareto fronts compared to optimizing based on a
wrong indicator pre-selected by the user, and performs comparably in the case
of an advanced user knowing which indicator to pick. | [
"Joseph Giovanelli",
"Alexander Tornede",
"Tanja Tornede",
"Marius Lindauer"
] | 2023-09-07 09:22:05 | http://arxiv.org/abs/2309.03581v2 | http://arxiv.org/pdf/2309.03581v2 | 2309.03581v2 |
DTW+S: Shape-based Comparison of Time-series with Ordered Local Trend | Measuring distance or similarity between time-series data is a fundamental
aspect of many applications including classification and clustering. Existing
measures may fail to capture similarities due to local trends (shapes) and may
even produce misleading results. Our goal is to develop a measure that looks
for similar trends occurring around similar times and is easily interpretable
for researchers in applied domains. This is particularly useful for
applications where time-series have a sequence of meaningful local trends that
are ordered, such as in epidemics (a surge to an increase to a peak to a
decrease). We propose a novel measure, DTW+S, which creates an interpretable
"closeness-preserving" matrix representation of the time-series, where each
column represents local trends, and then it applies Dynamic Time Warping to
compute distances between these matrices. We present a theoretical analysis
that supports the choice of this representation. We demonstrate the utility of
DTW+S in ensemble building and clustering of epidemic curves. We also
demonstrate that our approach results in better classification compared to
Dynamic Time Warping for a class of datasets, particularly when local trends
rather than scale play a decisive role. | [
"Ajitesh Srivastava"
] | 2023-09-07 09:18:12 | http://arxiv.org/abs/2309.03579v1 | http://arxiv.org/pdf/2309.03579v1 | 2309.03579v1 |
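Illustrative note (not from the paper above): a minimal Python sketch of the DTW+S recipe described in the abstract, assuming a toy trend representation (per-window mean and slope) in place of the paper's matrix construction, and a plain DTW over matrix columns.

```python
import numpy as np

def local_trend_matrix(x, window=3):
    # Toy stand-in for the paper's representation: one column per sliding
    # window, storing (mean, slope) as a crude description of the local trend.
    t = np.arange(window)
    cols = [[x[i:i + window].mean(), np.polyfit(t, x[i:i + window], 1)[0]]
            for i in range(len(x) - window + 1)]
    return np.array(cols).T          # shape (2, n_windows)

def dtw(A, B):
    # Plain dynamic time warping where each "time step" is a matrix column.
    n, m = A.shape[1], B.shape[1]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[:, i - 1] - B[:, j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

surge = np.array([1, 2, 4, 8, 6, 3, 1], dtype=float)
shifted = np.array([1, 1, 2, 4, 8, 6, 3], dtype=float)
print(dtw(local_trend_matrix(surge), local_trend_matrix(shifted)))
```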
Sparse Federated Training of Object Detection in the Internet of Vehicles | As an essential component part of the Intelligent Transportation System
(ITS), the Internet of Vehicles (IoV) plays a vital role in alleviating traffic
issues. Object detection is one of the key technologies in the IoV, which has
been widely used to provide traffic management services by analyzing timely and
sensitive vehicle-related information. However, the current object detection
methods are mostly based on centralized deep training, that is, the sensitive
data obtained by edge devices need to be uploaded to the server, which raises
privacy concerns. To mitigate such privacy leakage, we first propose a
federated learning-based framework, where well-trained local models are shared
in the central server. However, since edge devices usually have limited
computing power, plus a strict requirement of low latency in IoVs, we further
propose a sparse training process on edge devices, which can effectively
lighten the model, and ensure its training efficiency on edge devices, thereby
reducing communication overheads. In addition, due to the diverse computing
capabilities and dynamic environment, different sparsity rates are applied to
edge devices. To further guarantee performance, we propose FedWeg, an
improved aggregation scheme based on FedAvg, designed using the inverse
ratio of sparsity rates. Experiments on a real-life dataset using YOLO show
that the proposed scheme can achieve the required object detection rate while
saving considerable communication costs. | [
"Luping Rao",
"Chuan Ma",
"Ming Ding",
"Yuwen Qian",
"Lu Zhou",
"Zhe Liu"
] | 2023-09-07 08:58:41 | http://arxiv.org/abs/2309.03569v1 | http://arxiv.org/pdf/2309.03569v1 | 2309.03569v1 |
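Illustrative note (not from the paper above): the abstract describes FedWeg as a FedAvg variant whose aggregation is designed from the inverse ratio of sparsity rates. A minimal sketch under one plausible reading (weights proportional to 1/sparsity); the exact formula is an assumption.

```python
import numpy as np

def fedweg_aggregate(updates, sparsity_rates):
    # Hypothetical aggregation: weight each client's update by the inverse of
    # its sparsity rate, then normalise (an illustrative reading of the
    # abstract, not necessarily the paper's exact formula).
    w = 1.0 / np.asarray(sparsity_rates, dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

updates = [np.array([1.0, 0.0, 2.0]),
           np.array([0.5, 1.5, 0.0]),
           np.array([0.0, 0.2, 0.1])]
print(fedweg_aggregate(updates, sparsity_rates=[0.9, 0.5, 0.3]))
```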
Evaluating the Efficacy of Supervised Learning vs Large Language Models for Identifying Cognitive Distortions and Suicidal Risks in Chinese Social Media | Large language models, particularly those akin to the rapidly progressing GPT
series, are gaining traction for their expansive influence. While there is keen
interest in their applicability within medical domains such as psychology,
tangible explorations on real-world data remain scant. Concurrently, users on
social media platforms are increasingly vocalizing personal sentiments; under
specific thematic umbrellas, these sentiments often manifest as negative
emotions, sometimes escalating to suicidal inclinations. Timely discernment of
such cognitive distortions and suicidal risks is crucial to effectively
intervene and potentially avert dire circumstances. Our study ventured into
this realm by experimenting on two pivotal tasks: suicidal risk and cognitive
distortion identification on Chinese social media platforms. Using supervised
learning as a baseline, we examined and contrasted the efficacy of large
language models via three distinct strategies: zero-shot, few-shot, and
fine-tuning. Our findings revealed a discernible performance gap between the
large language models and traditional supervised learning approaches, primarily
attributed to the models' inability to fully grasp subtle categories. Notably,
while GPT-4 outperforms its counterparts in multiple scenarios, GPT-3.5 shows
significant enhancement in suicide risk classification after fine-tuning. To
our knowledge, this investigation stands as the maiden attempt at gauging large
language models on Chinese social media tasks. This study underscores the
forward-looking and transformative implications of using large language models
in the field of psychology. It lays the groundwork for future applications in
psychological research and practice. | [
"Hongzhi Qi",
"Qing Zhao",
"Changwei Song",
"Wei Zhai",
"Dan Luo",
"Shuo Liu",
"Yi Jing Yu",
"Fan Wang",
"Huijing Zou",
"Bing Xiang Yang",
"Jianqiang Li",
"Guanghui Fu"
] | 2023-09-07 08:50:46 | http://arxiv.org/abs/2309.03564v1 | http://arxiv.org/pdf/2309.03564v1 | 2309.03564v1 |
Trinary Decision Trees for missing value handling | This paper introduces the Trinary decision tree, an algorithm designed to
improve the handling of missing data in decision tree regressors and
classifiers. Unlike other approaches, the Trinary decision tree does not assume
that missing values contain any information about the response. Both
theoretical calculations on estimator bias and numerical illustrations using
real data sets are presented to compare its performance with established
algorithms in different missing data scenarios (Missing Completely at Random
(MCAR), and Informative Missingness (IM)). Notably, the Trinary tree
outperforms its peers in MCAR settings, especially when data is only missing
out-of-sample, while lagging behind in IM settings. A hybrid model, the
TrinaryMIA tree, which combines the Trinary tree and the Missing In Attributes
(MIA) approach, shows robust performance in all types of missingness. Despite
the potential drawback of slower training speed, the Trinary tree offers a
promising and more accurate method of handling missing data in decision tree
algorithms. | [
"Henning Zakrisson"
] | 2023-09-07 08:44:25 | http://arxiv.org/abs/2309.03561v1 | http://arxiv.org/pdf/2309.03561v1 | 2309.03561v1 |
On the dynamics of multi agent nonlinear filtering and learning | Multiagent systems aim to accomplish highly complex learning tasks through
decentralised consensus seeking dynamics and their use has garnered a great
deal of attention in the signal processing and computational intelligence
societies. This article examines the behaviour of multiagent networked systems
with nonlinear filtering/learning dynamics. To this end, a general formulation
for the actions of an agent in multiagent networked systems is presented and
conditions for achieving a cohesive learning behaviour are given. Importantly,
applications of the derived framework to distributed and federated learning
scenarios are presented. | [
"Sayed Pouria Talebi",
"Danilo Mandic"
] | 2023-09-07 08:39:53 | http://arxiv.org/abs/2309.03557v2 | http://arxiv.org/pdf/2309.03557v2 | 2309.03557v2 |
MVD:A Novel Methodology and Dataset for Acoustic Vehicle Type Classification | Rising urban populations have led to a surge in vehicle use and made traffic
monitoring and management indispensable. Acoustic traffic monitoring (ATM)
offers a cost-effective and efficient alternative to more computationally
expensive methods of monitoring traffic such as those involving computer vision
technologies. In this paper, we present MVD and MVDA: two open datasets for the
development of acoustic traffic monitoring and vehicle-type classification
algorithms, which contain audio recordings of moving vehicles. The datasets
contain four classes: Trucks, Cars, Motorbikes, and a No-vehicle class.
Additionally, we propose a novel and efficient way to accurately classify these
acoustic signals using cepstrum and spectrum based local and global audio
features, and a multi-input neural network. Experimental results show that our
methodology improves upon the established baselines of previous works and
achieves an accuracy of 91.98% and 96.66% on MVD and MVDA Datasets,
respectively. Finally, the proposed model was deployed through an Android
application to make it accessible for testing and demonstrate its efficacy. | [
"Mohd Ashhad",
"Omar Ahmed",
"Sooraj K. Ambat",
"Zeeshan Ali Haq",
"Mansaf Alam"
] | 2023-09-07 08:02:57 | http://arxiv.org/abs/2309.03544v1 | http://arxiv.org/pdf/2309.03544v1 | 2309.03544v1 |
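Illustrative note (not from the paper above): a rough sketch of extracting cepstrum-based local features and a spectrum-based global feature from an audio clip, of the kind that could feed the two branches of a multi-input classifier. The specific features (MFCCs, mean magnitude spectrum) and the synthetic signal are assumptions.

```python
import numpy as np
import librosa

# Synthetic stand-in for a recording of a passing vehicle (2 seconds of noise).
sr = 22050
y = np.random.default_rng(0).normal(size=2 * sr).astype(np.float32)

# Cepstrum-based local features: frame-wise MFCCs.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # shape (13, n_frames)

# A simple spectrum-based global feature: mean magnitude spectrum of the clip.
global_feat = np.array([np.abs(np.fft.rfft(y)).mean()])

# These two views could feed the two inputs of a multi-input network.
print(mfcc.mean(axis=1).shape, global_feat.shape)
```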
Subgraph-based Tight Frames on Graphs with Compact Supports and Vanishing Moments | In this work, we propose a novel and general method to construct tight
frames on graphs with compact supports based on a series of hierarchical
partitions. Starting from our abstract construction that generalizes previous
methods based on partition trees, we are able to flexibly incorporate subgraph
Laplacians into our design of graph frames. Consequently, our general methods
permit adjusting the (subgraph) vanishing moments of the framelets and extra
properties, such as directionality, for efficiently representing graph signals
with path-like supports. Several variants are explicitly defined and tested.
Experimental results show our proposed graph frames perform superiorly in
non-linear approximation tasks. | [
"Ruigang Zheng",
"Xiaosheng Zhuang"
] | 2023-09-07 07:49:43 | http://arxiv.org/abs/2309.03537v1 | http://arxiv.org/pdf/2309.03537v1 | 2309.03537v1 |
Feature Enhancer Segmentation Network (FES-Net) for Vessel Segmentation | Diseases such as diabetic retinopathy and age-related macular degeneration
pose a significant risk to vision, highlighting the importance of precise
segmentation of retinal vessels for the tracking and diagnosis of progression.
However, existing vessel segmentation methods that heavily rely on
encoder-decoder structures struggle to capture contextual information about
retinal vessel configurations, leading to challenges in reconciling semantic
disparities between encoder and decoder features. To address this, we propose a
novel feature enhancement segmentation network (FES-Net) that achieves accurate
pixel-wise segmentation without requiring additional image enhancement steps.
FES-Net directly processes the input image and utilizes four prompt
convolutional blocks (PCBs) during downsampling, complemented by a shallow
upsampling approach to generate a binary mask for each class. We evaluate the
performance of FES-Net on four publicly available state-of-the-art datasets:
DRIVE, STARE, CHASE, and HRF. The evaluation results clearly demonstrate the
superior performance of FES-Net compared to other competitive approaches
documented in the existing literature. | [
"Tariq M. Khan",
"Muhammad Arsalan",
"Shahzaib Iqbal",
"Imran Razzak",
"Erik Meijering"
] | 2023-09-07 07:46:46 | http://arxiv.org/abs/2309.03535v1 | http://arxiv.org/pdf/2309.03535v1 | 2309.03535v1 |
A Robust Negative Learning Approach to Partial Domain Adaptation Using Source Prototypes | This work proposes a robust Partial Domain Adaptation (PDA) framework that
mitigates the negative transfer problem by incorporating a robust
target-supervision strategy. It leverages ensemble learning and includes
diverse, complementary label feedback, alleviating the effect of incorrect
feedback and promoting pseudo-label refinement. Rather than relying exclusively
on first-order moments for distribution alignment, our approach offers explicit
objectives to optimize intra-class compactness and inter-class separation with
the inferred source prototypes and highly-confident target samples in a
domain-invariant fashion. Notably, we ensure source data privacy by eliminating
the need to access the source data during the adaptation phase through a priori
inference of source prototypes. We conducted a series of comprehensive
experiments, including an ablation analysis, covering a range of partial domain
adaptation tasks. Comprehensive evaluations on benchmark datasets corroborate
our framework's enhanced robustness and generalization, demonstrating its
superiority over existing state-of-the-art PDA approaches. | [
"Sandipan Choudhuri",
"Suli Adeniye",
"Arunabha Sen"
] | 2023-09-07 07:26:27 | http://arxiv.org/abs/2309.03531v2 | http://arxiv.org/pdf/2309.03531v2 | 2309.03531v2 |
Efficient Single Object Detection on Image Patches with Early Exit Enhanced High-Precision CNNs | This paper proposes a novel approach for detecting objects using mobile
robots in the context of the RoboCup Standard Platform League, with a primary
focus on detecting the ball. The challenge lies in detecting a dynamic object
in varying lighting conditions and blurred images caused by fast movements. To
address this challenge, the paper presents a convolutional neural network
architecture designed specifically for computationally constrained robotic
platforms. The proposed CNN is trained to achieve high precision classification
of single objects in image patches and to determine their precise spatial
positions. The paper further integrates Early Exits into the existing
high-precision CNN architecture to reduce the computational cost of easily
rejectable cases in the background class. The training process involves a
composite loss function based on confidence and positional losses with dynamic
weighting and data augmentation. The proposed approach achieves a precision of
100% on the validation dataset and a recall of almost 87%, while maintaining an
execution time of around 170 $\mu$s per hypothesis. By combining the proposed
approach with an Early Exit, a runtime optimization of more than 28%, on
average, can be achieved compared to the original CNN. Overall, this paper
provides an efficient solution for an enhanced detection of objects, especially
the ball, in computationally constrained robotic platforms. | [
"Arne Moos"
] | 2023-09-07 07:23:55 | http://arxiv.org/abs/2309.03530v1 | http://arxiv.org/pdf/2309.03530v1 | 2309.03530v1 |
Privacy-preserving Continual Federated Clustering via Adaptive Resonance Theory | With the increasing importance of data privacy protection, various
privacy-preserving machine learning methods have been proposed. In the
clustering domain, various algorithms with a federated learning framework
(i.e., federated clustering) have been actively studied and showed high
clustering performance while preserving data privacy. However, most of the base
clusterers (i.e., clustering algorithms) used in existing federated clustering
algorithms need to specify the number of clusters in advance. These algorithms,
therefore, are unable to deal with data whose distributions are unknown or
continually changing. To tackle this problem, this paper proposes a
privacy-preserving continual federated clustering algorithm. In the proposed
algorithm, an adaptive resonance theory-based clustering algorithm capable of
continual learning is used as a base clusterer. Therefore, the proposed
algorithm inherits the ability of continual learning. Experimental results with
synthetic and real-world datasets show that the proposed algorithm has superior
clustering performance to state-of-the-art federated clustering algorithms
while realizing data privacy protection and continual learning ability. The
source code is available at \url{https://github.com/Masuyama-lab/FCAC}. | [
"Naoki Masuyama",
"Yusuke Nojima",
"Yuichiro Toda",
"Chu Kiong Loo",
"Hisao Ishibuchi",
"Naoyuki Kubota"
] | 2023-09-07 05:45:47 | http://arxiv.org/abs/2309.03487v1 | http://arxiv.org/pdf/2309.03487v1 | 2309.03487v1 |
DeepCrysTet: A Deep Learning Approach Using Tetrahedral Mesh for Predicting Properties of Crystalline Materials | Machine learning (ML) is becoming increasingly popular for predicting
material properties to accelerate materials discovery. Because material
properties are strongly affected by its crystal structure, a key issue is
converting the crystal structure into the features for input to the ML model.
Currently, the most common method is to convert the crystal structure into a
graph and predicting its properties using a graph neural network (GNN). Some
GNN models, such as crystal graph convolutional neural network (CGCNN) and
atomistic line graph neural network (ALIGNN), have achieved highly accurate
predictions of material properties. Despite these successes, using a graph to
represent a crystal structure has the notable limitation of losing the crystal
structure's three-dimensional (3D) information. In this work, we propose
DeepCrysTet, a novel deep learning approach for predicting material properties,
which uses crystal structures represented as a 3D tetrahedral mesh generated by
Delaunay tetrahedralization. DeepCrysTet provides a useful framework that
includes a 3D mesh generation method, mesh-based feature design, and neural
network design. The experimental results using the Materials Project dataset
show that DeepCrysTet significantly outperforms existing GNN models in
classifying crystal structures and achieves state-of-the-art performance in
predicting elastic properties. | [
"Hirofumi Tsuruta",
"Yukari Katsura",
"Masaya Kumagai"
] | 2023-09-07 05:23:52 | http://arxiv.org/abs/2310.06852v1 | http://arxiv.org/pdf/2310.06852v1 | 2310.06852v1 |
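Illustrative note (not from the paper above): the Delaunay tetrahedralization step mentioned in the abstract can be reproduced in outline with SciPy; the toy coordinates and the per-tetrahedron edge-length feature are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy stand-in for the atomic coordinates of a small cell.
atoms = np.random.default_rng(0).random((10, 3))

# In 3D, SciPy's Delaunay triangulation yields a tetrahedral mesh:
# each simplex is a set of 4 atom indices forming one tetrahedron.
mesh = Delaunay(atoms)
print(mesh.simplices.shape)          # (n_tetrahedra, 4)

# One simple per-tetrahedron feature a model could consume: its edge lengths.
tet = atoms[mesh.simplices[0]]
edges = [np.linalg.norm(tet[i] - tet[j]) for i in range(4) for j in range(i + 1, 4)]
print(np.round(edges, 3))
```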
Fast FixMatch: Faster Semi-Supervised Learning with Curriculum Batch Size | Advances in Semi-Supervised Learning (SSL) have almost entirely closed the
gap between SSL and Supervised Learning at a fraction of the number of labels.
However, recent performance improvements have often come \textit{at the cost of
significantly increased training computation}. To address this, we propose
Curriculum Batch Size (CBS), \textit{an unlabeled batch size curriculum which
exploits the natural training dynamics of deep neural networks.} A small
unlabeled batch size is used at the beginning of training and is gradually
increased towards the end of training. A fixed curriculum is used regardless of
dataset, model or number of epochs, and reduced training computation is
demonstrated in all settings. We apply CBS, strong labeled augmentation,
Curriculum Pseudo Labeling (CPL) \citep{FlexMatch} to FixMatch \citep{FixMatch}
and term the new SSL algorithm Fast FixMatch. We perform an ablation study to
show that strong labeled augmentation and/or CPL do not significantly reduce
training computations, but, in synergy with CBS, they achieve optimal
performance. Fast FixMatch also achieves substantially higher data utilization
compared to previous state-of-the-art. Fast FixMatch achieves between
$2.1\times$ - $3.4\times$ reduced training computations on CIFAR-10 with all
but 40, 250 and 4000 labels removed, compared to vanilla FixMatch, while
attaining the same cited state-of-the-art error rate \citep{FixMatch}. Similar
results are achieved for CIFAR-100, SVHN and STL-10. Finally, Fast FixMatch
achieves between $2.6\times$ - $3.3\times$ reduced training computations in
federated SSL tasks and online/streaming learning SSL tasks, which further
demonstrates the generalizability of Fast FixMatch to different scenarios and
tasks. | [
"John Chen",
"Chen Dun",
"Anastasios Kyrillidis"
] | 2023-09-07 03:34:51 | http://arxiv.org/abs/2309.03469v1 | http://arxiv.org/pdf/2309.03469v1 | 2309.03469v1 |
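Illustrative note (not from the paper above): a minimal sketch of an unlabeled batch-size curriculum in the spirit of CBS, growing the batch size from a small value to a large one over training; the linear schedule and the specific sizes are assumptions.

```python
def curriculum_batch_size(step, total_steps, min_bs=64, max_bs=448):
    # Start with a small unlabeled batch and grow it linearly over training.
    # (The paper's exact schedule and sizes may differ; these are placeholders.)
    frac = step / max(total_steps - 1, 1)
    return int(round(min_bs + frac * (max_bs - min_bs)))

for step in (0, 25_000, 50_000, 75_000, 99_999):
    print(step, curriculum_batch_size(step, total_steps=100_000))
```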
Cross-Image Context Matters for Bongard Problems | Current machine learning methods struggle to solve Bongard problems, which
are a type of IQ test that requires deriving an abstract "concept" from a set
of positive and negative "support" images, and then classifying whether or not
a new query image depicts the key concept. On Bongard-HOI, a benchmark for
natural-image Bongard problems, existing methods have only reached 66% accuracy
(where chance is 50%). Low accuracy is often attributed to neural nets' lack of
ability to find human-like symbolic rules. In this work, we point out that many
existing methods are forfeiting accuracy due to a much simpler problem: they do
not incorporate information contained in the support set as a whole, and rely
instead on information extracted from individual supports. This is a critical
issue, because unlike in few-shot learning tasks concerning object
classification, the "key concept" in a typical Bongard problem can only be
distinguished using multiple positives and multiple negatives. We explore a
variety of simple methods to take this cross-image context into account, and
demonstrate substantial gains over prior methods, leading to new
state-of-the-art performance on Bongard-LOGO (75.3%) and Bongard-HOI (72.45%)
and strong performance on the original Bongard problem set (60.84%). | [
"Nikhil Raghuraman",
"Adam W. Harley",
"Leonidas Guibas"
] | 2023-09-07 03:33:49 | http://arxiv.org/abs/2309.03468v1 | http://arxiv.org/pdf/2309.03468v1 | 2309.03468v1 |
Multi-Modality Guidance Network For Missing Modality Inference | Multimodal models have gained significant success in recent years. Standard
multimodal approaches often assume unchanged modalities from training stage to
inference stage. In practice, however, many scenarios fail to satisfy such
assumptions with missing modalities during inference, leading to limitations on
where multimodal models can be applied. While existing methods mitigate the
problem through reconstructing the missing modalities, it increases unnecessary
computational cost, which could be just as critical, especially for large,
deployed systems. To solve the problem from both sides, we propose a novel
guidance network that promotes knowledge sharing during training, taking
advantage of the multimodal representations to train better single-modality
models for inference. Real-life experiment in violence detection shows that our
proposed framework trains single-modality models that significantly outperform
its traditionally trained counterparts while maintaining the same inference
cost. | [
"Zhuokai Zhao",
"Harish Palani",
"Tianyi Liu",
"Lena Evans",
"Ruth Toner"
] | 2023-09-07 02:26:55 | http://arxiv.org/abs/2309.03452v1 | http://arxiv.org/pdf/2309.03452v1 | 2309.03452v1 |
Cross-domain Sound Recognition for Efficient Underwater Data Analysis | This paper presents a novel deep learning approach for analyzing massive
underwater acoustic data by leveraging a model trained on a broad spectrum of
non-underwater (aerial) sounds. Recognizing the challenge in labeling vast
amounts of underwater data, we propose a two-fold methodology to accelerate
this labor-intensive procedure.
The first part of our approach involves PCA and UMAP visualization of the
underwater data using the feature vectors of an aerial sound recognition model.
This enables us to cluster the data in a two dimensional space and listen to
points within these clusters to understand their defining characteristics. This
innovative method simplifies the process of selecting candidate labels for
further training.
In the second part, we train a neural network model using both the selected
underwater data and the non-underwater dataset. We conducted a quantitative
analysis to measure the precision, recall, and F1 score of our model for
recognizing airgun sounds, a common type of underwater sound. The F1 score
achieved by our model exceeded 84.3%, demonstrating the effectiveness of our
approach in analyzing underwater acoustic data.
The methodology presented in this paper holds significant potential to reduce
the amount of labor required in underwater data analysis and opens up new
possibilities for further research in the field of cross-domain data analysis. | [
"Jeongsoo Park",
"Dong-Gyun Han",
"Hyoung Sul La",
"Sangmin Lee",
"Yoonchang Han",
"Eun-Jin Yang"
] | 2023-09-07 02:26:32 | http://arxiv.org/abs/2309.03451v1 | http://arxiv.org/pdf/2309.03451v1 | 2309.03451v1 |
XGen-7B Technical Report | Large Language Models (LLMs) have become ubiquitous across various domains,
transforming the way we interact with information and conduct research.
However, most high-performing LLMs remain confined behind proprietary walls,
hindering scientific progress. Most open-source LLMs, on the other hand, are
limited in their ability to support longer sequence lengths, which is a key
requirement for many tasks that require inference over an input context. To
address this, we have trained XGen, a series of 7B parameter models on up to 8K
sequence length for up to 1.5T tokens. We have also finetuned the XGen models
on public-domain instructional data, creating their instruction-tuned
counterparts (XGen-Inst). We open-source our models for both research
advancements and commercial applications. Our evaluation on standard benchmarks
shows that XGen models achieve comparable or better results when compared with
state-of-the-art open-source LLMs. Our targeted evaluation on long sequence
modeling tasks shows the benefits of our 8K-sequence models over 2K-sequence
open-source LLMs. | [
"Erik Nijkamp",
"Tian Xie",
"Hiroaki Hayashi",
"Bo Pang",
"Congying Xia",
"Chen Xing",
"Jesse Vig",
"Semih Yavuz",
"Philippe Laban",
"Ben Krause",
"Senthil Purushwalkam",
"Tong Niu",
"Wojciech Kryściński",
"Lidiya Murakhovs'ka",
"Prafulla Kumar Choubey",
"Alex Fabbri",
"Ye Liu",
"Rui Meng",
"Lifu Tu",
"Meghana Bhat",
"Chien-Sheng Wu",
"Silvio Savarese",
"Yingbo Zhou",
"Shafiq Joty",
"Caiming Xiong"
] | 2023-09-07 02:20:03 | http://arxiv.org/abs/2309.03450v1 | http://arxiv.org/pdf/2309.03450v1 | 2309.03450v1 |
Broadband Ground Motion Synthesis via Generative Adversarial Neural Operators: Development and Validation | We present a data-driven model for ground-motion synthesis using a Generative
Adversarial Neural Operator (GANO) that combines recent advancements in machine
learning and open access strong motion data sets to generate three-component
acceleration time histories conditioned on moment magnitude ($M$), rupture
distance ($R_{rup}$), time-average shear-wave velocity at the top $30m$
($V_{S30}$), and tectonic environment or style of faulting. We use Neural
Operators, a resolution invariant architecture that guarantees that the model
training is independent of the data sampling frequency. We first present the
conditional ground-motion synthesis algorithm (referred to hereafter as
cGM-GANO) and discuss its advantages compared to previous work. Next, we verify
the cGM-GANO framework using simulated ground motions generated with the
Southern California Earthquake Center (SCEC) Broadband Platform (BBP). We
lastly train cGM-GANO on a KiK-net dataset from Japan, showing that the
framework can recover the magnitude, distance, and $V_{S30}$ scaling of Fourier
amplitude and pseudo-spectral accelerations. We evaluate cGM-GANO through
residual analysis with the empirical dataset as well as by comparison with
conventional Ground Motion Models (GMMs) for selected ground motion scenarios.
Results show that cGM-GANO produces consistent median scaling with the GMMs for
the corresponding tectonic environments. The largest misfit is observed at
short distances due to the scarcity of training data. With the exception of
short distances, the aleatory variability of the response spectral ordinates is
also well captured, especially for subduction events due to the adequacy of
training data. Applications of the presented framework include generation of
risk-targeted ground motions for site-specific engineering applications. | [
"Yaozhong Shi",
"Grigorios Lavrentiadis",
"Domniki Asimaki",
"Zachary E. Ross",
"Kamyar Azizzadenesheli"
] | 2023-09-07 02:08:30 | http://arxiv.org/abs/2309.03447v1 | http://arxiv.org/pdf/2309.03447v1 | 2309.03447v1 |
VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints | Deploying Machine Learning as a Service gives rise to model plagiarism,
leading to copyright infringement. Ownership testing techniques are designed to
identify model fingerprints for verifying plagiarism. However, previous works
often rely on overfitting or robustness features as fingerprints, lacking
theoretical guarantees and exhibiting under-performance on generalized models.
In this paper, we propose a novel ownership testing method called VeriDIP,
which verifies a DNN model's intellectual property. VeriDIP makes two major
contributions. (1) It utilizes membership inference attacks to estimate the
lower bound of privacy leakage, which reflects the fingerprint of a given
model. The privacy leakage fingerprints highlight the unique patterns through
which the models memorize sensitive training datasets. (2) We introduce a novel
approach using less private samples to enhance the performance of ownership
testing.
Extensive experimental results confirm that VeriDIP is effective and
efficient in validating the ownership of deep learning models trained on both
image and tabular datasets. VeriDIP achieves comparable performance to
state-of-the-art methods on image datasets while significantly reducing
computation and communication costs. Enhanced VeriDIP demonstrates superior
verification performance on generalized deep learning models, particularly on
table-trained models. Additionally, VeriDIP exhibits similar effectiveness on
utility-preserving differentially private models compared to non-differentially
private baselines. | [
"Aoting Hu",
"Zhigang Lu",
"Renjie Xie",
"Minhui Xue"
] | 2023-09-07 01:58:12 | http://arxiv.org/abs/2310.10656v1 | http://arxiv.org/pdf/2310.10656v1 | 2310.10656v1 |
Punctate White Matter Lesion Segmentation in Preterm Infants Powered by Counterfactually Generative Learning | Accurate segmentation of punctate white matter lesions (PWMLs) is
fundamental for the timely diagnosis and treatment of related developmental
disorders. Automated PWMLs segmentation from infant brain MR images is
challenging, considering that the lesions are typically small and low-contrast,
and the number of lesions may dramatically change across subjects. Existing
learning-based methods directly apply general network architectures to this
challenging task, which may fail to capture detailed positional information of
PWMLs, potentially leading to severe under-segmentations. In this paper, we
propose to leverage the idea of counterfactual reasoning coupled with the
auxiliary task of brain tissue segmentation to learn fine-grained positional
and morphological representations of PWMLs for accurate localization and
segmentation. A simple and easy-to-implement deep-learning framework (i.e.,
DeepPWML) is accordingly designed. It combines the lesion counterfactual map
with the tissue probability map to train a lightweight PWML segmentation
network, demonstrating state-of-the-art performance on a real-clinical dataset
of infant T1w MR images. The code is available at
\href{https://github.com/ladderlab-xjtu/DeepPWML}{https://github.com/ladderlab-xjtu/DeepPWML}. | [
"Zehua Ren",
"Yongheng Sun",
"Miaomiao Wang",
"Yuying Feng",
"Xianjun Li",
"Chao Jin",
"Jian Yang",
"Chunfeng Lian",
"Fan Wang"
] | 2023-09-07 01:46:17 | http://arxiv.org/abs/2309.03440v1 | http://arxiv.org/pdf/2309.03440v1 | 2309.03440v1 |
Personalized Tucker Decomposition: Modeling Commonality and Peculiarity on Tensor Data | We propose personalized Tucker decomposition (perTucker) to address the
limitations of traditional tensor decomposition methods in capturing
heterogeneity across different datasets. perTucker decomposes tensor data into
shared global components and personalized local components. We introduce a mode
orthogonality assumption and develop a proximal gradient regularized block
coordinate descent algorithm that is guaranteed to converge to a stationary
point. By learning unique and common representations across datasets, we
demonstrate perTucker's effectiveness in anomaly detection, client
classification, and clustering through a simulation study and two case studies
on solar flare detection and tonnage signal classification. | [
"Jiuyun Hu",
"Naichen Shi",
"Raed Al Kontar",
"Hao Yan"
] | 2023-09-07 01:43:47 | http://arxiv.org/abs/2309.03439v1 | http://arxiv.org/pdf/2309.03439v1 | 2309.03439v1 |
Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy | Federated learning (FL) is designed to preserve data privacy during model
training, where the data remains on the client side (i.e., IoT devices), and
only model updates of clients are shared iteratively for collaborative
learning. However, this process is vulnerable to privacy attacks and Byzantine
attacks: the local model updates shared throughout the FL network will leak
private information about the local training data, and they can also be
maliciously crafted by Byzantine attackers to disturb the learning. In this
paper, we propose a new FL scheme that guarantees rigorous privacy and
simultaneously enhances system robustness against Byzantine attacks. Our
approach introduces sparsification- and momentum-driven variance reduction into
the client-level differential privacy (DP) mechanism, to defend against
Byzantine attackers. The security design does not violate the privacy guarantee
of the client-level DP mechanism; hence, our approach achieves the same
client-level DP guarantee as the state-of-the-art. We conduct extensive
experiments on both IID and non-IID datasets and different tasks and evaluate
the performance of our approach against different Byzantine attacks by
comparing it with state-of-the-art defense methods. The results of our
experiments show the efficacy of our framework and demonstrate its ability to
improve system robustness against Byzantine attacks while achieving a strong
privacy guarantee. | [
"Zikai Zhang",
"Rui Hu"
] | 2023-09-07 01:39:02 | http://arxiv.org/abs/2309.03437v1 | http://arxiv.org/pdf/2309.03437v1 | 2309.03437v1 |
Equal Long-term Benefit Rate: Adapting Static Fairness Notions to Sequential Decision Making | Decisions made by machine learning models may have lasting impacts over time,
making long-term fairness a crucial consideration. It has been shown that when
ignoring the long-term effect, naively imposing fairness criterion in static
settings can actually exacerbate bias over time. To explicitly address biases
in sequential decision-making, recent works formulate long-term fairness
notions in Markov Decision Process (MDP) framework. They define the long-term
bias to be the sum of static bias over each time step. However, we demonstrate
that naively summing up the step-wise bias can cause a false sense of fairness
since it fails to consider the importance difference of different time steps
during transition. In this work, we introduce a long-term fairness notion
called Equal Long-term Benefit Rate (ELBERT), which explicitly considers
varying temporal importance and adapts static fairness principles to the
sequential setting. Moreover, we show that the policy gradient of Long-term
Benefit Rate can be analytically reduced to standard policy gradient. This
makes standard policy optimization methods applicable for reducing the bias,
leading to our proposed bias mitigation method ELBERT-PO. Experiments on three
sequential decision making environments show that ELBERT-PO significantly
reduces bias and maintains high utility. Code is available at
https://github.com/Yuancheng-Xu/ELBERT. | [
"Yuancheng Xu",
"Chenghao Deng",
"Yanchao Sun",
"Ruijie Zheng",
"Xiyao Wang",
"Jieyu Zhao",
"Furong Huang"
] | 2023-09-07 01:10:01 | http://arxiv.org/abs/2309.03426v1 | http://arxiv.org/pdf/2309.03426v1 | 2309.03426v1 |
Large Language Models as Optimizers | Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by
PROmpting (OPRO), a simple and effective approach to leverage large language
models (LLMs) as optimizers, where the optimization task is described in
natural language. In each optimization step, the LLM generates new solutions
from the prompt that contains previously generated solutions with their values,
then the new solutions are evaluated and added to the prompt for the next
optimization step. We first showcase OPRO on linear regression and traveling
salesman problems, then move on to prompt optimization where the goal is to
find instructions that maximize the task accuracy. With a variety of LLMs, we
demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks. | [
"Chengrun Yang",
"Xuezhi Wang",
"Yifeng Lu",
"Hanxiao Liu",
"Quoc V. Le",
"Denny Zhou",
"Xinyun Chen"
] | 2023-09-07 00:07:15 | http://arxiv.org/abs/2309.03409v1 | http://arxiv.org/pdf/2309.03409v1 | 2309.03409v1 |
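Illustrative note (not from the paper above): the OPRO loop described in the abstract (build a prompt from previously generated solutions and their values, ask an LLM for new candidates, evaluate them, and append) can be sketched as below. The llm_propose function is a placeholder, not a real API call.

```python
import random

HISTORY = []                      # (solution, value) pairs shown in the prompt

def score(sol):                   # toy objective: maximise -(x-3)^2 - (y+1)^2
    x, y = sol
    return -((x - 3) ** 2 + (y + 1) ** 2)

def llm_propose(prompt):
    # Placeholder for a call to a large language model. To keep the sketch
    # runnable, it simply perturbs the best solution mentioned in the prompt.
    best = max(HISTORY, key=lambda s: s[1])[0]
    return [v + random.uniform(-0.5, 0.5) for v in best]

random.seed(0)
HISTORY.append(([0.0, 0.0], score([0.0, 0.0])))
for _ in range(50):
    prompt = "Previous solutions and their values:\n" + "\n".join(
        f"{sol} -> {val:.3f}" for sol, val in HISTORY)
    candidate = llm_propose(prompt)
    HISTORY.append((candidate, score(candidate)))

best_sol, best_val = max(HISTORY, key=lambda s: s[1])
print(best_sol, best_val)
```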
Community-Based Hierarchical Positive-Unlabeled (PU) Model Fusion for Chronic Disease Prediction | Positive-Unlabeled (PU) Learning is a challenge presented by binary
classification problems where there is an abundance of unlabeled data along
with a small number of positive data instances, which can be used to address
chronic disease screening problem. State-of-the-art PU learning methods have
resulted in the development of various risk estimators, yet they neglect the
differences among distinct populations. To address this issue, we present a
novel Positive-Unlabeled Learning Tree (PUtree) algorithm. PUtree is designed
to take into account communities such as different age or income brackets, in
tasks of chronic disease prediction. We propose a novel approach for binary
decision-making, which hierarchically builds community-based PU models and then
aggregates their deliverables. Our method can explicate each PU model on the
tree for the optimized non-leaf PU node splitting. Furthermore, a mask-recovery
data augmentation strategy enables sufficient training of the model in
individual communities. Additionally, the proposed approach includes an
adversarial PU risk estimator to capture hierarchical PU-relationships, and a
model fusion network that integrates data from each tree path, resulting in
robust binary classification results. We demonstrate the superior performance
of PUtree as well as its variants on two benchmarks and a new
diabetes-prediction dataset. | [
"Yang Wu",
"Xurui Li",
"Xuhong Zhang",
"Yangyang Kang",
"Changlong Sun",
"Xiaozhong Liu"
] | 2023-09-06 22:16:58 | http://arxiv.org/abs/2309.03386v1 | http://arxiv.org/pdf/2309.03386v1 | 2309.03386v1 |
ViewMix: Augmentation for Robust Representation in Self-Supervised Learning | Joint Embedding Architecture-based self-supervised learning methods have
attributed the composition of data augmentations as a crucial factor for their
strong representation learning capabilities. While regional dropout strategies
have proven to guide models to focus on lesser indicative parts of the objects
in supervised methods, they have not been adopted by self-supervised methods for
generating positive pairs. This is because the regional dropout methods are not
suitable for the input sampling process of the self-supervised methodology.
Whereas dropping informative pixels from the positive pairs can result in
inefficient training, replacing patches of a specific object with a different
one can steer the model away from maximizing the agreement between different
positive pairs. Moreover, joint embedding representation learning methods have
not made robustness their primary training outcome. To this end, we propose the
ViewMix augmentation policy, specially designed for self-supervised learning:
upon generating different views of the same image, patches are cut and pasted
from one view to another. By leveraging the different views created by this
augmentation strategy, multiple joint embedding-based self-supervised
methodologies obtained better localization capability and consistently
outperformed their corresponding baseline methods. It is also demonstrated that
incorporating ViewMix augmentation policy promotes robustness of the
representations in the state-of-the-art methods. Furthermore, our
experimentation and analysis of compute times suggest that ViewMix augmentation
doesn't introduce any additional overhead compared to other counterparts. | [
"Arjon Das",
"Xin Zhong"
] | 2023-09-06 21:04:53 | http://arxiv.org/abs/2309.03360v1 | http://arxiv.org/pdf/2309.03360v1 | 2309.03360v1 |
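Illustrative note (not from the paper above): a minimal NumPy sketch of a ViewMix-style operation that cuts a patch from one augmented view and pastes it into another; the patch size, locations, and fixed seed are assumptions.

```python
import numpy as np

def viewmix(view_a, view_b, patch=8, seed=0):
    # Cut a random patch from view_b and paste it at a random location in
    # view_a, producing a mixed view for the positive pair.
    rng = np.random.default_rng(seed)
    h, w = view_a.shape[:2]
    ys, xs = rng.integers(0, h - patch), rng.integers(0, w - patch)
    yd, xd = rng.integers(0, h - patch), rng.integers(0, w - patch)
    out = view_a.copy()
    out[yd:yd + patch, xd:xd + patch] = view_b[ys:ys + patch, xs:xs + patch]
    return out

view_a = np.zeros((32, 32, 3))
view_b = np.ones((32, 32, 3))
print(viewmix(view_a, view_b).sum())   # 8 * 8 * 3 pixels now come from view_b
```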
Ensemble linear interpolators: The role of ensembling | Interpolators are unstable. For example, the minimum $\ell_2$ norm least
square interpolator exhibits unbounded test errors when dealing with noisy
data. In this paper, we study how ensemble stabilizes and thus improves the
generalization performance, measured by the out-of-sample prediction risk, of
an individual interpolator. We focus on bagged linear interpolators, as bagging
is a popular randomization-based ensemble method that can be implemented in
parallel. We introduce the multiplier-bootstrap-based bagged least square
estimator, which can then be formulated as an average of the sketched least
square estimators. The proposed multiplier bootstrap encompasses the classical
bootstrap with replacement as a special case, along with a more intriguing
variant which we call the Bernoulli bootstrap.
Focusing on the proportional regime where the sample size scales
proportionally with the feature dimensionality, we investigate the
out-of-sample prediction risks of the sketched and bagged least square
estimators in both underparametrized and overparameterized regimes. Our results
reveal the statistical roles of sketching and bagging. In particular, sketching
modifies the aspect ratio and shifts the interpolation threshold of the minimum
$\ell_2$ norm estimator. However, the risk of the sketched estimator continues
to be unbounded around the interpolation threshold due to excessive variance.
In stark contrast, bagging effectively mitigates this variance, leading to a
bounded limiting out-of-sample prediction risk. To further understand this
stability improvement property, we establish that bagging acts as a form of
implicit regularization, substantiated by the equivalence of the bagged
estimator with its explicitly regularized counterpart. We also discuss several
extensions. | [
"Mingqi Wu",
"Qiang Sun"
] | 2023-09-06 20:38:04 | http://arxiv.org/abs/2309.03354v1 | http://arxiv.org/pdf/2309.03354v1 | 2309.03354v1 |
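Illustrative note (not from the paper above): a small NumPy sketch of a multiplier-bootstrap bagged least-squares estimator in the overparameterized regime, using Bernoulli multipliers as a stand-in for the paper's scheme; the dimensions and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 120                               # overparameterized: p > n
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.5, size=n)

def weighted_min_norm_lsq(X, y, w):
    # Minimum-norm least squares on multiplier-weighted data
    # (np.linalg.lstsq returns the minimum-norm solution when p > n).
    sw = np.sqrt(w)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

# Bernoulli multiplier bootstrap: each observation is kept or dropped at random,
# and the bagged estimator is the average of the resulting fits.
B = 50
fits = [weighted_min_norm_lsq(X, y, rng.binomial(1, 0.7, size=n).astype(float))
        for _ in range(B)]
beta_bagged = np.mean(fits, axis=0)
print(np.linalg.norm(beta_bagged - beta))
```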
Source Camera Identification and Detection in Digital Videos through Blind Forensics | Source camera identification in digital videos is the problem of associating
an unknown digital video with its source device, within a closed set of
possible devices. The existing techniques in source detection of digital videos
try to find a fingerprint of the actual source in the video in the form of PRNU
(Photo Response Non-Uniformity), and match it against the SPN (Sensor Pattern
Noise) of each possible device. The highest correlation indicates the correct
source. We investigate the problem of identifying a video source through a
feature based approach using machine learning. In this paper, we present a
blind forensic technique of video source authentication and identification,
based on feature extraction, feature selection and subsequent source
classification. The main aim is to determine whether a claimed source for a
video is actually its original source. If not, we identify its original source.
Our experimental results prove the efficiency of the proposed method compared
to traditional fingerprint based technique. | [
"Venkata Udaya Sameer",
"Shilpa Mukhopadhyay",
"Ruchira Naskar",
"Ishaan Dali"
] | 2023-09-06 20:36:17 | http://arxiv.org/abs/2309.03353v1 | http://arxiv.org/pdf/2309.03353v1 | 2309.03353v1 |
Using Neural Networks for Fast SAR Roughness Estimation of High Resolution Images | The analysis of Synthetic Aperture Radar (SAR) imagery is an important step
in remote sensing applications, and it is a challenging problem due to its
inherent speckle noise. One typical solution is to model the data using the
$G_I^0$ distribution and extract its roughness information, which in turn can
be used in subsequent imaging tasks, such as segmentation, classification and
interpretation. This leads to the need of quick and reliable estimation of the
roughness parameter from SAR data, especially with high resolution images.
Unfortunately, traditional parameter estimation procedures are slow and prone
to estimation failures. In this work, we propose a neural network-based
estimation framework that first learns how to predict underlying parameters of
$G_I^0$ samples and then can be used to estimate the roughness of unseen data.
We show that this approach leads to an estimator that is quicker, yields less
estimation error and is less prone to failures than the traditional estimation
procedures for this problem, even when we use a simple network. More
importantly, we show that this same methodology can be generalized to handle
image inputs and, even if trained on purely synthetic data for a few seconds,
is able to perform real time pixel-wise roughness estimation for high
resolution real SAR imagery. | [
"Li Fan",
"Jeova Farias Sales Rocha Neto"
] | 2023-09-06 20:24:13 | http://arxiv.org/abs/2309.03351v1 | http://arxiv.org/pdf/2309.03351v1 | 2309.03351v1 |
Students Success Modeling: Most Important Factors | The importance of retention rate for higher education institutions has
encouraged data analysts to present various methods to predict at-risk
students. The present study, motivated by the same encouragement, proposes a
deep learning model trained with 121 features of diverse categories extracted
or engineered out of the records of 60,822 postsecondary students. The model
undertakes to identify students likely to graduate, the ones likely to transfer
to a different school, and the ones likely to drop out and leave their higher
education unfinished. This study undertakes to adjust its predictive methods
for different stages of curricular progress of students. The temporal aspects
introduced for this purpose are accounted for by incorporating layers of LSTM
in the model. Our experiments demonstrate that distinguishing between
to-be-graduate and at-risk students is reasonably achievable in the earliest
stages, and then it rapidly improves, but the resolution within the latter
category (dropout vs. transfer) depends on data accumulated over time. However,
the model remarkably foresees the fate of students who stay in the school for
three years. The model is also assigned to present the weightiest features in
the procedure of prediction, both on institutional and student levels. A large,
diverse sample size along with the investigation of more than one hundred
extracted or engineered features in our study provide new insights into
variables that affect students' success, predict dropouts with reasonable
accuracy, and shed light on the less investigated issue of transfer between
colleges. More importantly, by providing individual-level predictions (as
opposed to school-level predictions) and addressing the outcomes of transfers,
this study improves the use of ML in the prediction of educational outcomes. | [
"Sahar Voghoei",
"James M. Byars",
"Scott Jackson King",
"Soheil Shapouri",
"Hamed Yaghoobian",
"Khaled M. Rasheed",
"Hamid R. Arabnia"
] | 2023-09-06 19:23:10 | http://arxiv.org/abs/2309.13052v1 | http://arxiv.org/pdf/2309.13052v1 | 2309.13052v1 |
ETP: Learning Transferable ECG Representations via ECG-Text Pre-training | In the domain of cardiovascular healthcare, the Electrocardiogram (ECG)
serves as a critical, non-invasive diagnostic tool. Although recent strides in
self-supervised learning (SSL) have been promising for ECG representation
learning, these techniques often require annotated samples and struggle with
classes not present in the fine-tuning stages. To address these limitations, we
introduce ECG-Text Pre-training (ETP), an innovative framework designed to
learn cross-modal representations that link ECG signals with textual reports.
For the first time, this framework leverages the zero-shot classification task
in the ECG domain. ETP employs an ECG encoder along with a pre-trained language
model to align ECG signals with their corresponding textual reports. The
proposed framework excels in both linear evaluation and zero-shot
classification tasks, as demonstrated on the PTB-XL and CPSC2018 datasets,
showcasing its ability for robust and generalizable cross-modal ECG feature
learning. | [
"Che Liu",
"Zhongwei Wan",
"Sibo Cheng",
"Mi Zhang",
"Rossella Arcucci"
] | 2023-09-06 19:19:26 | http://arxiv.org/abs/2309.07145v1 | http://arxiv.org/pdf/2309.07145v1 | 2309.07145v1 |
REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation | Dexterous manipulation tasks involving contact-rich interactions pose a
significant challenge for both model-based control systems and imitation
learning algorithms. The complexity arises from the need for multi-fingered
robotic hands to dynamically establish and break contacts, balance
non-prehensile forces, and control large degrees of freedom. Reinforcement
learning (RL) offers a promising approach due to its general applicability and
capacity to autonomously acquire optimal manipulation strategies. However, its
real-world application is often hindered by the necessity to generate a large
number of samples, reset the environment, and obtain reward signals. In this
work, we introduce an efficient system for learning dexterous manipulation
skills with RL to alleviate these challenges. The main idea of our approach is
the integration of recent advances in sample-efficient RL and replay buffer
bootstrapping. This combination allows us to utilize data from different tasks
or objects as a starting point for training new tasks, significantly improving
learning efficiency. Additionally, our system completes the real-world training
cycle by incorporating learned resets via an imitation-based pickup policy as
well as learned reward functions, eliminating the need for manual resets and
reward engineering. We demonstrate the benefits of reusing past data as replay
buffer initialization for new tasks, for instance, the fast acquisition of
intricate manipulation skills in the real world on a four-fingered robotic
hand. (Videos: https://sites.google.com/view/reboot-dexterous) | [
"Zheyuan Hu",
"Aaron Rovinsky",
"Jianlan Luo",
"Vikash Kumar",
"Abhishek Gupta",
"Sergey Levine"
] | 2023-09-06 19:05:31 | http://arxiv.org/abs/2309.03322v1 | http://arxiv.org/pdf/2309.03322v1 | 2309.03322v1 |
Fitness Approximation through Machine Learning | We present a novel approach to performing fitness approximation in genetic
algorithms (GAs) using machine-learning (ML) models, focusing on evolutionary
agents in Gymnasium (game) simulators -- where fitness computation is costly.
Maintaining a dataset of sampled individuals along with their actual fitness
scores, we continually update throughout an evolutionary run a
fitness-approximation ML model. We compare different methods for: 1) switching
between actual and approximate fitness, 2) sampling the population, and 3)
weighting the samples. Experimental findings demonstrate significant
improvement in evolutionary runtimes, with fitness scores that are either
identical or slightly lower than those of the fully run GA -- depending on the
ratio of approximate-to-actual-fitness computation. Our approach is generic and
can be easily applied to many different domains. | [
"Itai Tzruia",
"Tomer Halperin",
"Moshe Sipper",
"Achiya Elyasaf"
] | 2023-09-06 18:58:21 | http://arxiv.org/abs/2309.03318v1 | http://arxiv.org/pdf/2309.03318v1 | 2309.03318v1 |
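Illustrative note (not from the paper above): a toy sketch of fitness approximation in a GA, keeping an archive of truly evaluated individuals, fitting a surrogate (here a ridge regressor, an assumption), and using its predictions for selection; the toy fitness function and simple GA loop stand in for a costly simulator.

```python
import random
from sklearn.linear_model import Ridge

def true_fitness(ind):
    # Stand-in for an expensive evaluation (e.g., a Gymnasium simulator run).
    return -sum((g - 0.5) ** 2 for g in ind)

random.seed(0)
population = [[random.random() for _ in range(5)] for _ in range(30)]
archive_X, archive_y = [], []
surrogate = Ridge()

for generation in range(20):
    # Evaluate only a sample of the population for real; archive the results.
    sampled = random.sample(population, k=10)
    archive_X += sampled
    archive_y += [true_fitness(ind) for ind in sampled]
    surrogate.fit(archive_X, archive_y)
    # Rank the whole population with the cheap surrogate instead.
    population.sort(key=lambda ind: surrogate.predict([ind])[0], reverse=True)
    parents = population[:15]
    children = [[min(max(g + random.gauss(0, 0.1), 0.0), 1.0) for g in p]
                for p in parents]
    population = parents + children

print(max(true_fitness(ind) for ind in population))
```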
Robotic Table Tennis: A Case Study into a High Speed Learning System | We present a deep-dive into a real-world robotic learning system that, in
previous work, was shown to be capable of hundreds of table tennis rallies with
a human and has the ability to precisely return the ball to desired targets.
This system puts together a highly optimized perception subsystem, a high-speed
low-latency robot controller, a simulation paradigm that can prevent damage in
the real world and also train policies for zero-shot transfer, and automated
real world environment resets that enable autonomous training and evaluation on
physical robots. We complement a complete system description, including
numerous design decisions that are typically not widely disseminated, with a
collection of studies that clarify the importance of mitigating various sources
of latency, accounting for training and deployment distribution shifts,
robustness of the perception system, sensitivity to policy hyper-parameters,
and choice of action space. A video demonstrating the components of the system
and details of experimental results can be found at
https://youtu.be/uFcnWjB42I0. | [
"David B. D'Ambrosio",
"Jonathan Abelian",
"Saminda Abeyruwan",
"Michael Ahn",
"Alex Bewley",
"Justin Boyd",
"Krzysztof Choromanski",
"Omar Cortes",
"Erwin Coumans",
"Tianli Ding",
"Wenbo Gao",
"Laura Graesser",
"Atil Iscen",
"Navdeep Jaitly",
"Deepali Jain",
"Juhana Kangaspunta",
"Satoshi Kataoka",
"Gus Kouretas",
"Yuheng Kuang",
"Nevena Lazic",
"Corey Lynch",
"Reza Mahjourian",
"Sherry Q. Moore",
"Thinh Nguyen",
"Ken Oslund",
"Barney J Reed",
"Krista Reymann",
"Pannag R. Sanketi",
"Anish Shankar",
"Pierre Sermanet",
"Vikas Sindhwani",
"Avi Singh",
"Vincent Vanhoucke",
"Grace Vesom",
"Peng Xu"
] | 2023-09-06 18:56:20 | http://arxiv.org/abs/2309.03315v1 | http://arxiv.org/pdf/2309.03315v1 | 2309.03315v1 |
Scalable Learning of Intrusion Responses through Recursive Decomposition | We study automated intrusion response for an IT infrastructure and formulate
the interaction between an attacker and a defender as a partially observed
stochastic game. To solve the game we follow an approach where attack and
defense strategies co-evolve through reinforcement learning and self-play
toward an equilibrium. Solutions proposed in previous work prove the
feasibility of this approach for small infrastructures but do not scale to
realistic scenarios due to the exponential growth in computational complexity
with the infrastructure size. We address this problem by introducing a method
that recursively decomposes the game into subgames which can be solved in
parallel. Applying optimal stopping theory we show that the best response
strategies in these subgames exhibit threshold structures, which allows us to
compute them efficiently. To solve the decomposed game we introduce an
algorithm called Decompositional Fictitious Self-Play (DFSP), which learns Nash
equilibria through stochastic approximation. We evaluate the learned strategies
in an emulation environment where real intrusions and response actions can be
executed. The results show that the learned strategies approximate an
equilibrium and that DFSP significantly outperforms a state-of-the-art
algorithm for a realistic infrastructure configuration. | [
"Kim Hammar",
"Rolf Stadler"
] | 2023-09-06 18:12:07 | http://arxiv.org/abs/2309.03292v2 | http://arxiv.org/pdf/2309.03292v2 | 2309.03292v2 |
R2D2: Deep neural network series for near real-time high-dynamic range imaging in radio astronomy | We present a novel AI approach for high-resolution high-dynamic range
synthesis imaging by radio interferometry (RI) in astronomy. R2D2, standing for
"{R}esidual-to-{R}esidual {D}NN series for high-{D}ynamic range imaging", is a
model-based data-driven approach relying on hybrid deep neural networks (DNNs)
and data-consistency updates. Its reconstruction is built as a series of
residual images estimated as the outputs of DNNs, each taking the residual
dirty image of the previous iteration as an input. The approach can be
interpreted as a learned version of a matching pursuit approach, whereby model
components are iteratively identified from residual dirty images, and of which
CLEAN is a well-known example. We propose two variants of the R2D2 model, built
upon two distinctive DNN architectures: a standard U-Net, and a novel unrolled
architecture. We demonstrate their use for monochromatic intensity imaging on
highly-sensitive observations of the radio galaxy Cygnus~A at S band, from the
Very Large Array (VLA). R2D2 is validated against CLEAN and the recent RI
algorithms AIRI and uSARA, which respectively inject a learned implicit
regularization and an advanced handcrafted sparsity-based regularization into
the RI data. With only a few terms in its series, the R2D2 model is able to
deliver high-precision imaging, significantly superior to CLEAN and matching
the precision of AIRI and uSARA. In terms of computational efficiency, R2D2
runs at a fraction of the cost of AIRI and uSARA, and is also faster than
CLEAN, opening the door to real-time precision imaging in RI. | [
"Aghabiglou A",
"Chu C S",
"Jackson A",
"Dabbech A",
"Wiaux Y"
] | 2023-09-06 18:11:09 | http://arxiv.org/abs/2309.03291v1 | http://arxiv.org/pdf/2309.03291v1 | 2309.03291v1 |
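The series structure described above alternates a data-consistency step (form the residual dirty image) with a network that contributes a residual image estimate. The toy 1-D sketch below shows only that iteration: the linear operator, network sizes, and the absence of training are simplifications, not the R2D2 implementation.

```python
# Toy 1-D sketch of a residual-to-residual series with data-consistency updates
# (untrained networks; only the structure of the iteration is illustrated).
import torch
import torch.nn as nn

N = 64
A = torch.eye(N) + 0.1 * torch.randn(N, N)   # toy stand-in for the measurement/PSF operator
x_true = torch.relu(torch.randn(N))
dirty = A @ x_true                            # toy "dirty" observation

nets = [nn.Sequential(nn.Linear(N, 128), nn.ReLU(), nn.Linear(128, N)) for _ in range(3)]

x = torch.zeros(N)
with torch.no_grad():
    for net in nets:                          # few terms in the series
        residual_dirty = dirty - A @ x        # data consistency: current residual image
        x = x + net(residual_dirty)           # each DNN adds a residual estimate
print(x.shape)
```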
Let Quantum Neural Networks Choose Their Own Frequencies | Parameterized quantum circuits as machine learning models are typically well
described by their representation as a partial Fourier series of the input
features, with frequencies uniquely determined by the feature map's generator
Hamiltonians. Ordinarily, these data-encoding generators are chosen in advance,
fixing the space of functions that can be represented. In this work we consider
a generalization of quantum models to include a set of trainable parameters in
the generator, leading to a trainable frequency (TF) quantum model. We
numerically demonstrate how TF models can learn generators with desirable
properties for solving the task at hand, including non-regularly spaced
frequencies in their spectra and flexible spectral richness. Finally, we
showcase the real-world effectiveness of our approach, demonstrating an
improved accuracy in solving the Navier-Stokes equations using a TF model with
only a single parameter added to each encoding operation. Since TF models
encompass conventional fixed frequency models, they may offer a sensible
default choice for variational quantum machine learning. | [
"Ben Jaderberg",
"Antonio A. Gentile",
"Youssef Achari Berrada",
"Elvira Shishenina",
"Vincent E. Elfving"
] | 2023-09-06 18:00:07 | http://arxiv.org/abs/2309.03279v1 | http://arxiv.org/pdf/2309.03279v1 | 2309.03279v1 |
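A purely classical analogue can make the idea above concrete: a truncated Fourier model whose frequencies are trainable parameters can move its spectrum off the integer grid to match a target. The sketch below is only that analogue; it uses no quantum circuit and none of the paper's models, and all sizes and learning rates are arbitrary.

```python
# Classical analogue of a trainable-frequency model: a truncated Fourier regressor
# whose frequencies are learned jointly with the coefficients (illustration only).
import math
import torch
import torch.nn as nn

class TrainableFrequencyModel(nn.Module):
    def __init__(self, n_freq=4):
        super().__init__()
        self.freq = nn.Parameter(torch.arange(1.0, n_freq + 1))   # trainable spectrum
        self.coef = nn.Parameter(torch.zeros(2 * n_freq + 1))     # Fourier coefficients
    def forward(self, x):                          # x: (batch, 1)
        phases = 2 * math.pi * x * self.freq       # (batch, n_freq)
        feats = torch.cat([torch.ones_like(x), torch.cos(phases), torch.sin(phases)], dim=-1)
        return feats @ self.coef

model = TrainableFrequencyModel()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1)
y = torch.sin(2 * math.pi * 1.7 * x).squeeze(-1)   # target with a non-integer frequency
for _ in range(300):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item(), model.freq.data)                # frequencies are free to leave the integer grid
```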
Matcha-TTS: A fast TTS architecture with conditional flow matching | We introduce Matcha-TTS, a new encoder-decoder architecture for speedy TTS
acoustic modelling, trained using optimal-transport conditional flow matching
(OT-CFM). This yields an ODE-based decoder capable of high output quality in
fewer synthesis steps than models trained using score matching. Careful design
choices additionally ensure each synthesis step is fast to run. The method is
probabilistic, non-autoregressive, and learns to speak from scratch without
external alignments. Compared to strong pre-trained baseline models, the
Matcha-TTS system has the smallest memory footprint, rivals the speed of the
fastest models on long utterances, and attains the highest mean opinion score
in a listening test. Please see https://shivammehta25.github.io/Matcha-TTS/ for
audio examples, code, and pre-trained models. | [
"Shivam Mehta",
"Ruibo Tu",
"Jonas Beskow",
"Éva Székely",
"Gustav Eje Henter"
] | 2023-09-06 17:59:57 | http://arxiv.org/abs/2309.03199v1 | http://arxiv.org/pdf/2309.03199v1 | 2309.03199v1 |
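For readers unfamiliar with flow-matching objectives, the sketch below shows a generic conditional flow-matching training loss with straight-line probability paths. It is a simplified stand-in: the exact OT-CFM formulation and the acoustic architecture used by Matcha-TTS differ, and the dimensions and condition vector here are arbitrary placeholders.

```python
# Hedged sketch of a conditional flow-matching training step (generic CFM with
# straight-line paths; not the exact OT-CFM variant or model used by Matcha-TTS).
import torch
import torch.nn as nn

class VectorField(nn.Module):
    # Toy stand-in for the decoder: predicts a velocity given (x_t, t, condition).
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1 + cond_dim, 128), nn.SiLU(),
                                 nn.Linear(128, dim))
    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def cfm_loss(model, x1, cond):
    x0 = torch.randn_like(x1)             # noise sample
    t = torch.rand(x1.size(0), 1)         # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1           # point on the straight-line path
    u_target = x1 - x0                    # velocity of that path
    return ((model(x_t, t, cond) - u_target) ** 2).mean()

model = VectorField(dim=80, cond_dim=16)  # e.g., 80 mel bins, 16-dim condition (toy)
x1, cond = torch.randn(8, 80), torch.randn(8, 16)
print(cfm_loss(model, x1, cond).item())
```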
Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation | Graph neural networks (GNNs) have gained an increasing amount of popularity
due to their superior capability in learning node embeddings for various graph
inference tasks, but training them can raise privacy concerns. To address this,
we propose using link local differential privacy over decentralized nodes,
enabling collaboration with an untrusted server to train GNNs without revealing
the existence of any link. Our approach spends the privacy budget separately on
links and degrees of the graph for the server to better denoise the graph
topology using Bayesian estimation, alleviating the negative impact of LDP on
the accuracy of the trained GNNs. We bound the mean absolute error of the
inferred link probabilities against the ground truth graph topology. We then
propose two variants of our LDP mechanism complementing each other in different
privacy settings, one of which estimates fewer links under lower privacy
budgets to avoid false positive link estimates when the uncertainty is high,
while the other utilizes more information and performs better given relatively
higher privacy budgets. Furthermore, we propose a hybrid variant that combines
both strategies and is able to perform better across different privacy budgets.
Extensive experiments show that our approach outperforms existing methods in
terms of accuracy under varying privacy budgets. | [
"Xiaochen Zhu",
"Vincent Y. F. Tan",
"Xiaokui Xiao"
] | 2023-09-06 17:53:31 | http://arxiv.org/abs/2309.03190v2 | http://arxiv.org/pdf/2309.03190v2 | 2309.03190v2 |
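The basic local-DP primitive for protecting links in this setting is randomized response on a node's adjacency bits; a minimal sketch follows. It shows only the local perturbation step, under the assumption of a single link budget epsilon; the paper's separate degree budget and server-side Bayesian denoising are not reproduced.

```python
# Minimal sketch of randomized response applied to one node's adjacency bits
# (only the local perturbation step; no degree mechanism or server-side denoising).
import numpy as np

def randomized_response(adj_row, epsilon, rng):
    # Keep each bit with probability e^eps / (1 + e^eps), otherwise flip it.
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flips = rng.random(adj_row.size) > p_keep
    return np.where(flips, 1 - adj_row, adj_row)

rng = np.random.default_rng(0)
adj_row = rng.integers(0, 2, size=10)            # this node's links to 10 other nodes
noisy_row = randomized_response(adj_row, epsilon=1.0, rng=rng)
print(adj_row, noisy_row)
```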
SLiMe: Segment Like Me | Significant strides have been made using large vision-language models, like
Stable Diffusion (SD), for a variety of downstream tasks, including image
editing, image correspondence, and 3D shape generation. Inspired by these
advancements, we explore leveraging these extensive vision-language models for
segmenting images at any desired granularity using as few as one annotated
sample by proposing SLiMe. SLiMe frames this problem as an optimization task.
Specifically, given a single training image and its segmentation mask, we first
extract attention maps, including our novel "weighted accumulated
self-attention map" from the SD prior. Then, using the extracted attention
maps, the text embeddings of Stable Diffusion are optimized such that each of
them learns about a single segmented region from the training image. These
learned embeddings then highlight the segmented region in the attention maps,
which in turn can then be used to derive the segmentation map. This enables
SLiMe to segment any real-world image during inference with the granularity of
the segmented region in the training image, using just one example. Moreover,
leveraging additional training data when available, i.e. few-shot, improves the
performance of SLiMe. We carried out a knowledge-rich set of experiments
examining various design factors and showed that SLiMe outperforms other
existing one-shot and few-shot segmentation methods. | [
"Aliasghar Khani",
"Saeid Asgari Taghanaki",
"Aditya Sanghi",
"Ali Mahdavi Amiri",
"Ghassan Hamarneh"
] | 2023-09-06 17:39:05 | http://arxiv.org/abs/2309.03179v2 | http://arxiv.org/pdf/2309.03179v2 | 2309.03179v2 |
Temporal Inductive Path Neural Network for Temporal Knowledge Graph Reasoning | Temporal Knowledge Graph (TKG) is an extension of traditional Knowledge Graph
(KG) that incorporates the dimension of time. Reasoning on TKGs is a crucial
task that aims to predict future facts based on historical occurrences. The key
challenge lies in uncovering structural dependencies within historical
subgraphs and temporal patterns. Most existing approaches model TKGs relying on
entity modeling, as nodes in the graph play a crucial role in knowledge
representation. However, the real-world scenario often involves an extensive
number of entities, with new entities emerging over time. This makes it
challenging for entity-dependent methods to cope with extensive volumes of
entities, and effectively handling newly emerging entities also becomes a
significant challenge. Therefore, we propose Temporal Inductive Path Neural
Network (TiPNN), which models historical information in an entity-independent
perspective. Specifically, TiPNN adopts a unified graph, namely history
temporal graph, to comprehensively capture and encapsulate information from
history. Subsequently, we utilize the defined query-aware temporal paths to
model historical path information related to queries on history temporal graph
for the reasoning. Extensive experiments illustrate that the proposed model not
only attains significant performance enhancements but also handles inductive
settings, while additionally facilitating the provision of reasoning evidence
through history temporal graphs. | [
"Hao Dong",
"Pengyang Wang",
"Meng Xiao",
"Zhiyuan Ning",
"Pengfei Wang",
"Yuanchun Zhou"
] | 2023-09-06 17:37:40 | http://arxiv.org/abs/2309.03251v1 | http://arxiv.org/pdf/2309.03251v1 | 2309.03251v1 |
3D Object Positioning Using Differentiable Multimodal Learning | This article describes a multi-modal method using simulated Lidar data via
ray tracing and image pixel loss with differentiable rendering to optimize an
object's position with respect to an observer or some referential objects in a
computer graphics scene. Object position optimization is completed using
gradient descent with the loss function being influenced by both modalities.
Typical object placement optimization is done using image pixel loss with
differentiable rendering only, this work shows the use of a second modality
(Lidar) leads to faster convergence. This method of fusing sensor input
presents a potential usefulness for autonomous vehicles, as these methods can
be used to establish the locations of multiple actors in a scene. This article
also presents a method for the simulation of multiple types of data to be used
in the training of autonomous vehicles. | [
"Sean Zanyk-McLean",
"Krishna Kumar",
"Paul Navratil"
] | 2023-09-06 17:30:26 | http://arxiv.org/abs/2309.03177v1 | http://arxiv.org/pdf/2309.03177v1 | 2309.03177v1 |
GPT-InvestAR: Enhancing Stock Investment Strategies through Annual Report Analysis with Large Language Models | Annual Reports of publicly listed companies contain vital information about
their financial health which can help assess the potential impact on Stock
price of the firm. These reports are comprehensive in nature, going up to, and
sometimes exceeding, 100 pages. Analysing these reports is cumbersome even for
a single firm, let alone the whole universe of firms that exist. Over the
years, financial experts have become proficient in extracting valuable
information from these documents relatively quickly. However, this requires
years of practice and experience. This paper aims to simplify the process of
assessing Annual Reports of all the firms by leveraging the capabilities of
Large Language Models (LLMs). The insights generated by the LLM are compiled in
a Quant styled dataset and augmented by historical stock price data. A Machine
Learning model is then trained with LLM outputs as features. The walk-forward
test results show promising outperformance w.r.t. S&P500 returns. This paper
intends to provide a framework for future work in this direction. To facilitate
this, the code has been released as open source. | [
"Udit Gupta"
] | 2023-09-06 17:18:55 | http://arxiv.org/abs/2309.03079v1 | http://arxiv.org/pdf/2309.03079v1 | 2309.03079v1 |
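The walk-forward evaluation mentioned above can be sketched as a loop that, for each test year, trains only on strictly earlier years. The feature names (`llm_sentiment`, `llm_risk_score`), the toy numbers, and the gradient-boosting model below are placeholders, not the paper's actual features, labels, or model.

```python
# Hedged sketch of a walk-forward evaluation loop (hypothetical features/labels).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.DataFrame({
    "year": [2015, 2016, 2017, 2018, 2019, 2020, 2021],
    "llm_sentiment": [0.2, 0.4, 0.1, 0.5, 0.3, 0.6, 0.45],
    "llm_risk_score": [0.7, 0.5, 0.8, 0.4, 0.6, 0.3, 0.5],
    "fwd_return": [0.05, 0.12, -0.02, 0.10, 0.04, 0.15, 0.08],
})

preds = []
for test_year in [2018, 2019, 2020, 2021]:        # train only on strictly earlier years
    train, test = df[df.year < test_year], df[df.year == test_year]
    model = GradientBoostingRegressor().fit(
        train[["llm_sentiment", "llm_risk_score"]], train["fwd_return"])
    preds.append(float(model.predict(test[["llm_sentiment", "llm_risk_score"]])[0]))
print(preds)
```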
Impression-Informed Multi-Behavior Recommender System: A Hierarchical Graph Attention Approach | While recommender systems have significantly benefited from implicit
feedback, they have often missed the nuances of multi-behavior interactions
between users and items. Historically, these systems either amalgamated all
behaviors, such as \textit{impression} (formerly \textit{view}),
\textit{add-to-cart}, and \textit{buy}, under a singular 'interaction' label,
or prioritized only the target behavior, often the \textit{buy} action,
discarding valuable auxiliary signals. Although recent advancements tried
addressing this simplification, they primarily gravitated towards optimizing
the target behavior alone, battling with data scarcity. Additionally, they
tended to bypass the nuanced hierarchy intrinsic to behaviors. To bridge these
gaps, we introduce the \textbf{H}ierarchical \textbf{M}ulti-behavior
\textbf{G}raph Attention \textbf{N}etwork (HMGN). This pioneering framework
leverages attention mechanisms to discern information from both inter and
intra-behaviors while employing a multi-task Hierarchical Bayesian Personalized
Ranking (HBPR) for optimization. Recognizing the need for scalability, our
approach integrates a specialized multi-behavior sub-graph sampling technique.
Moreover, the adaptability of HMGN allows for the seamless inclusion of
knowledge metadata and time-series data. Empirical results attest to our
model's prowess, registering a notable performance boost of up to 64\% in
NDCG@100 metrics over conventional graph neural network methods. | [
"Dong Li",
"Divya Bhargavi",
"Vidya Sagar Ravipati"
] | 2023-09-06 17:09:43 | http://arxiv.org/abs/2309.03169v2 | http://arxiv.org/pdf/2309.03169v2 | 2309.03169v2 |
Split-Boost Neural Networks | The calibration and training of a neural network is a complex and
time-consuming procedure that requires significant computational resources to
achieve satisfactory results. Key obstacles are a large number of
hyperparameters to select and the onset of overfitting in the face of a small
amount of data. In this framework, we propose an innovative training strategy
for feed-forward architectures - called split-boost - that improves performance
and automatically includes a regularizing behaviour without modeling it
explicitly. Such a novel approach ultimately allows us to avoid explicitly
modeling the regularization term, decreasing the total number of
hyperparameters and speeding up the tuning phase. The proposed strategy is
tested on a real-world (anonymized) dataset within a benchmark medical
insurance design problem. | [
"Raffaele Giuseppe Cestari",
"Gabriele Maroni",
"Loris Cannelli",
"Dario Piga",
"Simone Formentin"
] | 2023-09-06 17:08:57 | http://arxiv.org/abs/2309.03167v1 | http://arxiv.org/pdf/2309.03167v1 | 2309.03167v1 |
Learning to Recharge: UAV Coverage Path Planning through Deep Reinforcement Learning | Coverage path planning (CPP) is a critical problem in robotics, where the
goal is to find an efficient path that covers every point in an area of
interest. This work addresses the power-constrained CPP problem with recharge
for battery-limited unmanned aerial vehicles (UAVs). In this problem, a notable
challenge emerges from integrating recharge journeys into the overall coverage
strategy, highlighting the intricate task of making strategic, long-term
decisions. We propose a novel proximal policy optimization (PPO)-based deep
reinforcement learning (DRL) approach with map-based observations, utilizing
action masking and discount factor scheduling to optimize coverage trajectories
over the entire mission horizon. We further provide the agent with a position
history to handle emergent state loops caused by the recharge capability. Our
approach outperforms a baseline heuristic, generalizes to different target
zones and maps, with limited generalization to unseen maps. We offer valuable
insights into DRL algorithm design for long-horizon problems and provide a
publicly available software framework for the CPP problem. | [
"Mirco Theile",
"Harald Bayerlein",
"Marco Caccamo",
"Alberto L. Sangiovanni-Vincentelli"
] | 2023-09-06 16:55:11 | http://arxiv.org/abs/2309.03157v2 | http://arxiv.org/pdf/2309.03157v2 | 2309.03157v2 |
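One ingredient named above, action masking, is a small, self-contained pattern: invalid actions have their logits pushed to effectively minus infinity before sampling, so the policy can never select them. A generic PyTorch sketch follows; the mask contents are made up, and this is not the authors' implementation.

```python
# Generic action-masking pattern for a discrete policy (illustrative only).
import torch

def masked_categorical(logits, action_mask):
    # action_mask: 1 for allowed actions, 0 for forbidden ones.
    neg_inf = torch.finfo(logits.dtype).min
    masked_logits = torch.where(action_mask.bool(), logits,
                                torch.full_like(logits, neg_inf))
    return torch.distributions.Categorical(logits=masked_logits)

logits = torch.randn(4, 6)                        # batch of 4 states, 6 discrete actions
mask = torch.tensor([[1, 1, 0, 0, 1, 0]] * 4)     # e.g., forbid moves that leave the map
dist = masked_categorical(logits, mask)
actions = dist.sample()
print(actions, dist.log_prob(actions))            # log-probs feed the PPO objective
```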
Data-Driven Neural Polar Codes for Unknown Channels With and Without Memory | In this work, a novel data-driven methodology for designing polar codes for
channels with and without memory is proposed. The methodology is suitable for
the case where the channel is given as a "black-box" and the designer has
access to the channel for generating observations of its inputs and outputs,
but does not have access to the explicit channel model. The proposed method
leverages the structure of the successive cancellation (SC) decoder to devise a
neural SC (NSC) decoder. The NSC decoder uses neural networks (NNs) to replace
the core elements of the original SC decoder, the check-node, the bit-node and
the soft decision. Along with the NSC, we devise additional NN that embeds the
channel outputs into the input space of the SC decoder. The proposed method is
supported by theoretical guarantees that include the consistency of the NSC.
Also, the NSC has computational complexity that does not grow with the channel
memory size. This sets its main advantage over successive cancellation trellis
(SCT) decoder for finite state channels (FSCs) that has complexity of
$O(|\mathcal{S}|^3 N\log N)$, where $|\mathcal{S}|$ denotes the number of
channel states. We demonstrate the performance of the proposed algorithms on
memoryless channels and on channels with memory. The empirical results are
compared with the optimal polar decoder, given by the SC and SCT decoders. We
further show that our algorithms are applicable for the case where the SC and
SCT decoders are not applicable. | [
"Ziv Aharoni",
"Bashar Huleihel",
"Henry D. Pfister",
"Haim H. Permuter"
] | 2023-09-06 16:44:08 | http://arxiv.org/abs/2309.03148v1 | http://arxiv.org/pdf/2309.03148v1 | 2309.03148v1 |
The Best Arm Evades: Near-optimal Multi-pass Streaming Lower Bounds for Pure Exploration in Multi-armed Bandits | We give a near-optimal sample-pass trade-off for pure exploration in
multi-armed bandits (MABs) via multi-pass streaming algorithms: any streaming
algorithm with sublinear memory that uses the optimal sample complexity of
$O(\frac{n}{\Delta^2})$ requires
$\Omega(\frac{\log{(1/\Delta)}}{\log\log{(1/\Delta)}})$ passes. Here, $n$ is
the number of arms and $\Delta$ is the reward gap between the best and the
second-best arms. Our result matches the $O(\log(\frac{1}{\Delta}))$-pass
algorithm of Jin et al. [ICML'21] (up to lower order terms) that only uses
$O(1)$ memory and answers an open question posed by Assadi and Wang [STOC'20]. | [
"Sepehr Assadi",
"Chen Wang"
] | 2023-09-06 16:41:41 | http://arxiv.org/abs/2309.03145v1 | http://arxiv.org/pdf/2309.03145v1 | 2309.03145v1 |
Using Multiple Vector Channels Improves E(n)-Equivariant Graph Neural Networks | We present a natural extension to E(n)-equivariant graph neural networks that
uses multiple equivariant vectors per node. We formulate the extension and show
that it improves performance across different physical systems benchmark tasks,
with minimal differences in runtime or number of parameters. The proposed
multichannel EGNN outperforms the standard singlechannel EGNN on N-body charged
particle dynamics, molecular property predictions, and predicting the
trajectories of solar system bodies. Given the additional benefits and minimal
additional cost of multi-channel EGNN, we suggest that this extension may be of
practical use to researchers working in machine learning for the physical
sciences. | [
"Daniel Levy",
"Sékou-Oumar Kaba",
"Carmelo Gonzales",
"Santiago Miret",
"Siamak Ravanbakhsh"
] | 2023-09-06 16:24:26 | http://arxiv.org/abs/2309.03139v1 | http://arxiv.org/pdf/2309.03139v1 | 2309.03139v1 |
Decoding the Alphabet Soup of Degrees in the United States Postsecondary Education System Through Hybrid Method: Database and Text Mining | This paper proposes a model to predict the levels (e.g., Bachelor, Master,
etc.) of postsecondary degree awards that have been ambiguously expressed in
the student tracking reports of the National Student Clearinghouse (NSC). The
model will be the hybrid of two modules. The first module interprets the
relevant abbreviatory elements embedded in NSC reports by referring to a
comprehensive database that we have made of nearly 950 abbreviations for degree
titles used by American postsecondary educators. The second module is a
combination of feature classification and text mining modeled with CNN-BiLSTM,
which is preceded by several steps of heavy pre-processing. The model proposed
in this paper was trained with four multi-label datasets of different grades of
resolution and returned 97.83\% accuracy with the most sophisticated dataset.
Such a thorough classification of degree levels will provide insights into the
modeling patterns of student success and mobility. To date, such a
classification strategy has not been attempted except using manual methods and
simple text parsing logic. | [
"Sahar Voghoei",
"James Byars",
"John A Miller",
"Khaled Rasheed",
"Hamid A Arabnia"
] | 2023-09-06 16:03:14 | http://arxiv.org/abs/2309.13050v1 | http://arxiv.org/pdf/2309.13050v1 | 2309.13050v1 |
Detecting Manufacturing Defects in PCBs via Data-Centric Machine Learning on Solder Paste Inspection Features | Automated detection of defects in Printed Circuit Board (PCB) manufacturing
using Solder Paste Inspection (SPI) and Automated Optical Inspection (AOI)
machines can help improve operational efficiency and significantly reduce the
need for manual intervention. In this paper, using SPI-extracted features of 6
million pins, we demonstrate a data-centric approach to train Machine Learning
(ML) models to detect PCB defects at three stages of PCB manufacturing. The 6
million PCB pins correspond to 2 million components that belong to 15,387 PCBs.
Using a base extreme gradient boosting (XGBoost) ML model, we iterate on the
data pre-processing step to improve detection performance. Combining pin-level
SPI features using component and PCB IDs, we developed training instances also
at the component and PCB level. This allows the ML model to capture any
inter-pin, inter-component, or spatial effects that may not be apparent at the
pin level. Models are trained at the pin, component, and PCB levels, and the
detection results from the different models are combined to identify defective
components. | [
"Jubilee Prasad-Rao",
"Roohollah Heidary",
"Jesse Williams"
] | 2023-09-06 15:52:55 | http://arxiv.org/abs/2309.03113v1 | http://arxiv.org/pdf/2309.03113v1 | 2309.03113v1 |
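The pin-to-component roll-up described above can be illustrated with a tiny pandas aggregation followed by a gradient-boosted classifier. All column names and values are hypothetical, and the real pipeline trains models at pin, component, and PCB level before combining their predictions.

```python
# Hedged sketch of aggregating pin-level SPI features to component level before
# training a gradient-boosting classifier (hypothetical columns and values).
import pandas as pd
from xgboost import XGBClassifier

pins = pd.DataFrame({
    "component_id": ["C1", "C1", "C2", "C2"],
    "paste_volume": [0.92, 0.88, 1.10, 0.40],
    "offset_x":     [0.01, 0.02, 0.00, 0.15],
    "defective":    [0, 0, 1, 1],
})

# Aggregate pin features so the model can capture inter-pin effects per component.
comp = pins.groupby("component_id").agg(
    volume_mean=("paste_volume", "mean"),
    volume_min=("paste_volume", "min"),
    offset_max=("offset_x", "max"),
    defective=("defective", "max"),        # component is defective if any pin is
).reset_index()

X, y = comp[["volume_mean", "volume_min", "offset_max"]], comp["defective"]
clf = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
print(clf.predict(X))
```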
Graph Theory Applications in Advanced Geospatial Research | Geospatial sciences include a wide range of applications, from environmental
monitoring and transportation to infrastructure planning, as well as location-based
analysis and services. Graph theory algorithms in mathematics have emerged as
indispensable tools in these domains due to their capability to model and
analyse spatial relationships efficiently. This article explores the
applications of graph theory algorithms in geospatial sciences, highlighting
their role in network analysis, spatial connectivity, geographic information
systems, and various other spatial problem-solving scenarios like digital twin.
The article provides a comprehensive idea about graph theory's key concepts and
algorithms that assist the geospatial modelling processes and insights into
real-world geospatial challenges and opportunities. It lists the extensive
research, innovative technologies and methodologies implemented in this domain. | [
"Surajit Ghosh",
"Archita Mallick",
"Anuva Chowdhury",
"Kounik De Sarkar"
] | 2023-09-06 15:47:18 | http://arxiv.org/abs/2309.03249v2 | http://arxiv.org/pdf/2309.03249v2 | 2309.03249v2 |
ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure | This paper presents ContrastWSD, a RoBERTa-based metaphor detection model
that integrates the Metaphor Identification Procedure (MIP) and Word Sense
Disambiguation (WSD) to extract and contrast the contextual meaning with the
basic meaning of a word to determine whether it is used metaphorically in a
sentence. By utilizing the word senses derived from a WSD model, our model
enhances the metaphor detection process and outperforms other methods that rely
solely on contextual embeddings or integrate only the basic definitions and
other external knowledge. We evaluate our approach on various benchmark
datasets and compare it with strong baselines, indicating the effectiveness in
advancing metaphor detection. | [
"Mohamad Elzohbi",
"Richard Zhao"
] | 2023-09-06 15:41:38 | http://arxiv.org/abs/2309.03103v1 | http://arxiv.org/pdf/2309.03103v1 | 2309.03103v1 |
ORL-AUDITOR: Dataset Auditing in Offline Deep Reinforcement Learning | Data is a critical asset in AI, as high-quality datasets can significantly
improve the performance of machine learning models. In safety-critical domains
such as autonomous vehicles, offline deep reinforcement learning (offline DRL)
is frequently used to train models on pre-collected datasets, as opposed to
training these models by interacting with the real-world environment as the
online DRL. To support the development of these models, many institutions make
datasets publicly available with opensource licenses, but these datasets are at
risk of potential misuse or infringement. Injecting watermarks to the dataset
may protect the intellectual property of the data, but it cannot handle
datasets that have already been published and is infeasible to be altered
afterward. Other existing solutions, such as dataset inference and membership
inference, do not work well in the offline DRL scenario due to the diverse
model behavior characteristics and offline setting constraints. In this paper,
we advocate a new paradigm by leveraging the fact that cumulative rewards can
act as a unique identifier that distinguishes DRL models trained on a specific
dataset. To this end, we propose ORL-AUDITOR, which is the first
trajectory-level dataset auditing mechanism for offline RL scenarios. Our
experiments on multiple offline DRL models and tasks reveal the efficacy of
ORL-AUDITOR, with auditing accuracy over 95% and false positive rates less than
2.88%. We also provide valuable insights into the practical implementation of
ORL-AUDITOR by studying various parameter settings. Furthermore, we demonstrate
the auditing capability of ORL-AUDITOR on open-source datasets from Google and
DeepMind, highlighting its effectiveness in auditing published datasets.
ORL-AUDITOR is open-sourced at https://github.com/link-zju/ORL-Auditor. | [
"Linkang Du",
"Min Chen",
"Mingyang Sun",
"Shouling Ji",
"Peng Cheng",
"Jiming Chen",
"Zhikun Zhang"
] | 2023-09-06 15:28:43 | http://arxiv.org/abs/2309.03081v1 | http://arxiv.org/pdf/2309.03081v1 | 2309.03081v1 |
Parameterizing pressure-temperature profiles of exoplanet atmospheres with neural networks | Atmospheric retrievals (AR) of exoplanets typically rely on a combination of
a Bayesian inference technique and a forward simulator to estimate atmospheric
properties from an observed spectrum. A key component in simulating spectra is
the pressure-temperature (PT) profile, which describes the thermal structure of
the atmosphere. Current AR pipelines commonly use ad hoc fitting functions here
that limit the retrieved PT profiles to simple approximations, but still use a
relatively large number of parameters. In this work, we introduce a
conceptually new, data-driven parameterization scheme for physically consistent
PT profiles that does not require explicit assumptions about the functional
form of the PT profiles and uses fewer parameters than existing methods. Our
approach consists of a latent variable model (based on a neural network) that
learns a distribution over functions (PT profiles). Each profile is represented
by a low-dimensional vector that can be used to condition a decoder network
that maps $P$ to $T$. When training and evaluating our method on two publicly
available datasets of self-consistent PT profiles, we find that our method
achieves, on average, better fit quality than existing baseline methods,
despite using fewer parameters. In an AR based on existing literature, our
model (using two parameters) produces a tighter, more accurate posterior for
the PT profile than the five-parameter polynomial baseline, while also speeding
up the retrieval by more than a factor of three. By providing parametric access
to physically consistent PT profiles, and by reducing the number of parameters
required to describe a PT profile (thereby reducing computational cost or
freeing resources for additional parameters of interest), our method can help
improve AR and thus our understanding of exoplanet atmospheres and their
habitability. | [
"Timothy D. Gebhard",
"Daniel Angerhausen",
"Björn S. Konrad",
"Eleonora Alei",
"Sascha P. Quanz",
"Bernhard Schölkopf"
] | 2023-09-06 15:22:33 | http://arxiv.org/abs/2309.03075v1 | http://arxiv.org/pdf/2309.03075v1 | 2309.03075v1 |
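The decoder half of the scheme above, a network that maps pressure to temperature conditioned on a low-dimensional profile code, can be sketched as follows. Layer sizes, the two-dimensional code, and the pressure grid are illustrative; the latent variable model and its training are not reproduced.

```python
# Minimal sketch of a latent-conditioned decoder mapping pressure to temperature
# (toy sizes; the actual latent variable model and its training are not shown).
import torch
import torch.nn as nn

class PTDecoder(nn.Module):
    def __init__(self, z_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 1, hidden), nn.GELU(),
                                 nn.Linear(hidden, hidden), nn.GELU(),
                                 nn.Linear(hidden, 1))
    def forward(self, log_p, z):
        # log_p: (n_levels, 1) pressure grid; z: (z_dim,) low-dimensional profile code.
        z_rep = z.unsqueeze(0).expand(log_p.size(0), -1)
        return self.net(torch.cat([log_p, z_rep], dim=-1))   # temperature at each level

decoder = PTDecoder()
log_p = torch.linspace(-6, 2, 50).unsqueeze(-1)   # toy log-pressure grid
z = torch.zeros(2)                                 # the two retrieval parameters
T = decoder(log_p, z)
print(T.shape)                                     # (50, 1): one temperature per level
```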
Character Queries: A Transformer-based Approach to On-Line Handwritten Character Segmentation | On-line handwritten character segmentation is often associated with
handwriting recognition and even though recognition models include mechanisms
to locate relevant positions during the recognition process, it is typically
insufficient to produce a precise segmentation. Decoupling the segmentation
from the recognition unlocks the potential to further utilize the result of the
recognition. We specifically focus on the scenario where the transcription is
known beforehand, in which case the character segmentation becomes an
assignment problem between sampling points of the stylus trajectory and
characters in the text. Inspired by the $k$-means clustering algorithm, we view
it from the perspective of cluster assignment and present a Transformer-based
architecture where each cluster is formed based on a learned character query in
the Transformer decoder block. In order to assess the quality of our approach,
we create character segmentation ground truths for two popular on-line
handwriting datasets, IAM-OnDB and HANDS-VNOnDB, and evaluate multiple methods
on them, demonstrating that our approach achieves the overall best results. | [
"Michael Jungo",
"Beat Wolf",
"Andrii Maksai",
"Claudiu Musat",
"Andreas Fischer"
] | 2023-09-06 15:19:04 | http://arxiv.org/abs/2309.03072v1 | http://arxiv.org/pdf/2309.03072v1 | 2309.03072v1 |
Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks | Bayesian inference for neural networks, or Bayesian deep learning, has the
potential to provide well-calibrated predictions with quantified uncertainty
and robustness. However, the main hurdle for Bayesian deep learning is its
computational complexity due to the high dimensionality of the parameter space.
In this work, we propose a novel scheme that addresses this limitation by
constructing a low-dimensional subspace of the neural network
parameters-referred to as an active subspace-by identifying the parameter
directions that have the most significant influence on the output of the neural
network. We demonstrate that the significantly reduced active subspace enables
effective and scalable Bayesian inference via either Monte Carlo (MC) sampling
methods, otherwise computationally intractable, or variational inference.
Empirically, our approach provides reliable predictions with robust uncertainty
estimates for various regression tasks. | [
"Sanket Jantre",
"Nathan M. Urban",
"Xiaoning Qian",
"Byung-Jun Yoon"
] | 2023-09-06 15:00:36 | http://arxiv.org/abs/2309.03061v1 | http://arxiv.org/pdf/2309.03061v1 | 2309.03061v1 |
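A generic way to construct an active subspace of network parameters is to stack per-input output gradients and keep the dominant right-singular vectors; a small sketch is below. This is the textbook construction under simple assumptions (scalar output, tiny network) and is not claimed to match the paper's exact procedure or its subsequent Bayesian inference step.

```python
# Generic active-subspace construction from output gradients (illustrative only).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
params = list(net.parameters())

def flat_grad(x):
    net.zero_grad()
    net(x).sum().backward()
    return torch.cat([p.grad.reshape(-1) for p in params])

X = torch.randn(64, 3)
G = torch.stack([flat_grad(x.unsqueeze(0)) for x in X])    # one gradient per input
# Dominant right-singular vectors span the parameter directions that most change the output.
_, S, Vt = torch.linalg.svd(G, full_matrices=False)
k = 5
active_basis = Vt[:k]                                       # (k, n_params) projection
print(active_basis.shape, S[:k])
```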
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra | Many areas of machine learning and science involve large linear algebra
problems, such as eigendecompositions, solving linear systems, computing matrix
exponentials, and trace estimation. The matrices involved often have Kronecker,
convolutional, block diagonal, sum, or product structure. In this paper, we
propose a simple but general framework for large-scale linear algebra problems
in machine learning, named CoLA (Compositional Linear Algebra). By combining a
linear operator abstraction with compositional dispatch rules, CoLA
automatically constructs memory and runtime efficient numerical algorithms.
Moreover, CoLA provides memory efficient automatic differentiation, low
precision computation, and GPU acceleration in both JAX and PyTorch, while also
accommodating new objects, operations, and rules in downstream packages via
multiple dispatch. CoLA can accelerate many algebraic operations, while making
it easy to prototype matrix structures and algorithms, providing an appealing
drop-in tool for virtually any computational effort that requires linear
algebra. We showcase its efficacy across a broad range of applications,
including partial differential equations, Gaussian processes, equivariant model
construction, and unsupervised learning. | [
"Andres Potapczynski",
"Marc Finzi",
"Geoff Pleiss",
"Andrew Gordon Wilson"
] | 2023-09-06 14:59:38 | http://arxiv.org/abs/2309.03060v1 | http://arxiv.org/pdf/2309.03060v1 | 2309.03060v1 |
Automated CVE Analysis for Threat Prioritization and Impact Prediction | The Common Vulnerabilities and Exposures (CVE) are pivotal information for
proactive cybersecurity measures, including service patching, security
hardening, and more. However, CVEs typically offer low-level, product-oriented
descriptions of publicly disclosed cybersecurity vulnerabilities, often lacking
the essential attack semantic information required for comprehensive weakness
characterization and threat impact estimation. This critical insight is
essential for CVE prioritization and the identification of potential
countermeasures, particularly when dealing with a large number of CVEs. Current
industry practices involve manual evaluation of CVEs to assess their attack
severities using the Common Vulnerability Scoring System (CVSS) and mapping
them to Common Weakness Enumeration (CWE) for potential mitigation
identification. Unfortunately, this manual analysis presents a major bottleneck
in the vulnerability analysis process, leading to slowdowns in proactive
cybersecurity efforts and the potential for inaccuracies due to human errors.
In this research, we introduce our novel predictive model and tool (called
CVEDrill) which revolutionizes CVE analysis and threat prioritization. CVEDrill
accurately estimates the CVSS vector for precise threat mitigation and priority
ranking and seamlessly automates the classification of CVEs into the
appropriate CWE hierarchy classes. By harnessing CVEDrill, organizations can
now implement cybersecurity countermeasure mitigation with unparalleled
accuracy and timeliness, surpassing in this domain the capabilities of
state-of-the-art tools like ChatGPT. | [
"Ehsan Aghaei",
"Ehab Al-Shaer",
"Waseem Shadid",
"Xi Niu"
] | 2023-09-06 14:34:03 | http://arxiv.org/abs/2309.03040v1 | http://arxiv.org/pdf/2309.03040v1 | 2309.03040v1 |
Deep Learning for Polycystic Kidney Disease: Utilizing Neural Networks for Accurate and Early Detection through Gene Expression Analysis | With Polycystic Kidney Disease (PKD) potentially leading to fatal
complications in patients due to the formation of cysts in kidneys, early
detection of PKD is crucial for effective management of the condition. However,
the various patient-specific factors that play a role in the diagnosis make it
an intricate puzzle for clinicians to solve, leading to possible kidney
failure. Therefore, in this study we aim to utilize a deep learning-based
approach for early disease detection through gene expression analysis. The
devised neural network is able to achieve accurate and robust prediction
results for possible PKD in kidneys, thereby improving patient outcomes.
Furthermore, by conducting a gene ontology analysis, we were able to predict
the top gene processes and functions that PKD may affect. | [
"Kapil Panda",
"Anirudh Mazumder"
] | 2023-09-06 14:22:24 | http://arxiv.org/abs/2309.03033v2 | http://arxiv.org/pdf/2309.03033v2 | 2309.03033v2 |
Universal Preprocessing Operators for Embedding Knowledge Graphs with Literals | Knowledge graph embeddings are dense numerical representations of entities in
a knowledge graph (KG). While the majority of approaches concentrate only on
relational information, i.e., relations between entities, fewer approaches
exist which also take information about literal values (e.g., textual
descriptions or numerical information) into account. Those which exist are
typically tailored towards a particular modality of literal and a particular
embedding method. In this paper, we propose a set of universal preprocessing
operators which can be used to transform KGs with literals for numerical,
temporal, textual, and image information, so that the transformed KGs can be
embedded with any method. The results on the kgbench dataset with three
different embedding methods show promising results. | [
"Patryk Preisner",
"Heiko Paulheim"
] | 2023-09-06 14:08:46 | http://arxiv.org/abs/2309.03023v1 | http://arxiv.org/pdf/2309.03023v1 | 2309.03023v1 |
Amortised Inference in Bayesian Neural Networks | Meta-learning is a framework in which machine learning models train over a
set of datasets in order to produce predictions on new datasets at test time.
Probabilistic meta-learning has received an abundance of attention from the
research community in recent years, but a problem shared by many existing
probabilistic meta-models is that they require a very large number of datasets
in order to produce high-quality predictions with well-calibrated uncertainty
estimates. In many applications, however, such quantities of data are simply
not available.
In this dissertation we present a significantly more data-efficient approach
to probabilistic meta-learning through per-datapoint amortisation of inference
in Bayesian neural networks, introducing the Amortised Pseudo-Observation
Variational Inference Bayesian Neural Network (APOVI-BNN). First, we show that
the approximate posteriors obtained under our amortised scheme are of similar
or better quality to those obtained through traditional variational inference,
despite the fact that the amortised inference is performed in a single forward
pass. We then discuss how the APOVI-BNN may be viewed as a new member of the
neural process family, motivating the use of neural process training objectives
for potentially better predictive performance on complex problems as a result.
Finally, we assess the predictive performance of the APOVI-BNN against other
probabilistic meta-models in both a one-dimensional regression problem and in a
significantly more complex image completion setting. In both cases, when the
amount of training data is limited, our model is the best in its class. | [
"Tommy Rochussen"
] | 2023-09-06 14:02:33 | http://arxiv.org/abs/2309.03018v1 | http://arxiv.org/pdf/2309.03018v1 | 2309.03018v1 |
SymED: Adaptive and Online Symbolic Representation of Data on the Edge | The edge computing paradigm helps handle the Internet of Things (IoT)
generated data in proximity to its source. Challenges occur in transferring,
storing, and processing this rapidly growing amount of data on
resource-constrained edge devices. Symbolic Representation (SR) algorithms are
promising solutions to reduce the data size by converting actual raw data into
symbols. Also, they allow data analytics (e.g., anomaly detection and trend
prediction) directly on symbols, benefiting large classes of edge applications.
However, existing SR algorithms are centralized in design and work offline with
batch data, which is infeasible for real-time cases. We propose SymED -
Symbolic Edge Data representation method, i.e., an online, adaptive, and
distributed approach for symbolic representation of data on edge. SymED is
based on the Adaptive Brownian Bridge-based Aggregation (ABBA), where we assume
low-powered IoT devices do initial data compression (senders) and the more
robust edge devices do the symbolic conversion (receivers). We evaluate SymED
by measuring compression performance, reconstruction accuracy through Dynamic
Time Warping (DTW) distance, and computational latency. The results show that
SymED is able to (i) reduce the raw data with an average compression rate of
9.5%; (ii) keep a low reconstruction error of 13.25 in the DTW space; (iii)
simultaneously provide real-time adaptability for online streaming IoT data at
typical latencies of 42ms per symbol, reducing the overall network traffic. | [
"Daniel Hofstätter",
"Shashikant Ilager",
"Ivan Lujic",
"Ivona Brandic"
] | 2023-09-06 13:59:04 | http://arxiv.org/abs/2309.03014v1 | http://arxiv.org/pdf/2309.03014v1 | 2309.03014v1 |
A Theoretical Explanation of Activation Sparsity through Flat Minima and Adversarial Robustness | A recent empirical observation (Li et al., 2022b) of activation sparsity in
MLP blocks offers an opportunity to drastically reduce computation costs for
free. Although having attributed it to training dynamics, existing theoretical
explanations of activation sparsity are restricted to shallow networks, small
training steps and special training, despite its emergence in deep models
standardly trained for a large number of steps. To fill these gaps, we propose
the notion of gradient sparsity as one source of activation sparsity and a
theoretical explanation based on it that sees sparsity as a necessary step to
adversarial robustness w.r.t. hidden features and parameters, which is
approximately the flatness of minima for well-learned models. The theory
applies to standardly trained LayerNorm-ed MLPs, and further to Transformers or
other architectures trained with weight noises. Eliminating other sources of
flatness except for sparsity, we discover the phenomenon that the ratio between
the largest and smallest non-zero singular values of weight matrices is small.
When discussing the emergence of this spectral concentration, we use random
matrix theory (RMT) as a powerful tool to analyze stochastic gradient noises.
Validation experiments are conducted to verify our gradient-sparsity-based
explanation. We propose two plug-and-play modules for both training and
finetuning for sparsity. Experiments on ImageNet-1k and C4 demonstrate their
50% sparsity improvements, indicating further potential cost reduction in both
training and inference. | [
"Ze Peng",
"Lei Qi",
"Yinghuan Shi",
"Yang Gao"
] | 2023-09-06 13:48:40 | http://arxiv.org/abs/2309.03004v3 | http://arxiv.org/pdf/2309.03004v3 | 2309.03004v3 |
Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models | Humans excel at robust bipedal walking in complex natural environments. In
each step, they adequately tune the interaction of biomechanical muscle
dynamics and neuronal signals to be robust against uncertainties in ground
conditions. However, it is still not fully understood how the nervous system
resolves the musculoskeletal redundancy to solve the multi-objective control
problem considering stability, robustness, and energy efficiency. In computer
simulations, energy minimization has been shown to be a successful optimization
target, reproducing natural walking with trajectory optimization or
reflex-based control methods. However, these methods focus on particular
motions at a time and the resulting controllers are limited when compensating
for perturbations. In robotics, reinforcement learning~(RL) methods recently
achieved highly stable (and efficient) locomotion on quadruped systems, but the
generation of human-like walking with bipedal biomechanical models has required
extensive use of expert data sets. This strong reliance on demonstrations often
results in brittle policies and limits the application to new behaviors,
especially considering the potential variety of movements for high-dimensional
musculoskeletal models in 3D. Achieving natural locomotion with RL without
sacrificing its incredible robustness might pave the way for a novel approach
to studying human walking in complex natural environments. Videos:
https://sites.google.com/view/naturalwalkingrl | [
"Pierre Schumacher",
"Thomas Geijtenbeek",
"Vittorio Caggiano",
"Vikash Kumar",
"Syn Schmitt",
"Georg Martius",
"Daniel F. B. Haeufle"
] | 2023-09-06 13:20:31 | http://arxiv.org/abs/2309.02976v2 | http://arxiv.org/pdf/2309.02976v2 | 2309.02976v2 |
On the Impact of Feeding Cost Risk in Aquaculture Valuation and Decision Making | We study the effect of stochastic feeding costs on animal-based commodities
with particular focus on aquaculture. More specifically, we use soybean futures
to infer on the stochastic behaviour of salmon feed, which we assume to follow
a Schwartz-2-factor model. We compare the decision of harvesting salmon using a
decision rule assuming either deterministic or stochastic feeding costs, i.e.
including feeding cost risk. We identify cases, where accounting for stochastic
feeding costs leads to significant improvements as well as cases where
deterministic feeding costs are a good enough proxy. Nevertheless, in all of
these cases, the newly derived rules show superior performance, while the
additional computational costs are negligible. From a methodological point of
view, we demonstrate how to use Deep-Neural-Networks to infer on the decision
boundary that determines harvesting or continuation, improving on more
classical regression-based and curve-fitting methods. To achieve this we use a
deep classifier, which not only improves on previous results but also scales
well for higher dimensional problems, and in addition mitigates effects due to
model uncertainty, which we identify in this article. | [
"Christian Oliver Ewald",
"Kevin Kamm"
] | 2023-09-06 13:09:01 | http://arxiv.org/abs/2309.02970v1 | http://arxiv.org/pdf/2309.02970v1 | 2309.02970v1 |
CR-VAE: Contrastive Regularization on Variational Autoencoders for Preventing Posterior Collapse | The Variational Autoencoder (VAE) is known to suffer from the phenomenon of
\textit{posterior collapse}, where the latent representations generated by the
model become independent of the inputs. This leads to degenerated
representations of the input, which is attributed to the limitations of the
VAE's objective function. In this work, we propose a novel solution to this
issue, the Contrastive Regularization for Variational Autoencoders (CR-VAE).
The core of our approach is to augment the original VAE with a contrastive
objective that maximizes the mutual information between the representations of
similar visual inputs. This strategy ensures that the information flow between
the input and its latent representation is maximized, effectively avoiding
posterior collapse. We evaluate our method on a series of visual datasets and
demonstrate, that CR-VAE outperforms state-of-the-art approaches in preventing
posterior collapse. | [
"Fotios Lygerakis",
"Elmar Rueckert"
] | 2023-09-06 13:05:42 | http://arxiv.org/abs/2309.02968v2 | http://arxiv.org/pdf/2309.02968v2 | 2309.02968v2 |
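To make the combination above concrete, the sketch below adds an InfoNCE-style contrastive term, computed between latent codes of two views of the same inputs, to a standard VAE objective. The encoder and decoder are replaced by random tensors, and the weighting, temperature, and the specific InfoNCE form are assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: VAE objective plus an InfoNCE-style contrastive regularizer
# (toy tensors stand in for encoder/decoder outputs of two augmented views).
import torch
import torch.nn.functional as F

def vae_terms(mu, logvar, x_hat, x):
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: latent codes of two augmented views of the same batch of inputs.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # similarity of every pair
    labels = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

B, D, Z = 16, 32, 8
x, x_hat = torch.randn(B, D), torch.randn(B, D)
mu, logvar = torch.randn(B, Z), torch.randn(B, Z)
z1, z2 = torch.randn(B, Z), torch.randn(B, Z)
loss = vae_terms(mu, logvar, x_hat, x) + 0.5 * info_nce(z1, z2)   # 0.5 is an arbitrary weight
print(loss.item())
```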
M3D-NCA: Robust 3D Segmentation with Built-in Quality Control | Medical image segmentation relies heavily on large-scale deep learning
models, such as UNet-based architectures. However, the real-world utility of
such models is limited by their high computational requirements, which makes
them impractical for resource-constrained environments such as primary care
facilities and conflict zones. Furthermore, shifts in the imaging domain can
render these models ineffective and even compromise patient safety if such
errors go undetected. To address these challenges, we propose M3D-NCA, a novel
methodology that leverages Neural Cellular Automata (NCA) segmentation for 3D
medical images using n-level patchification. Moreover, we exploit the variance
in M3D-NCA to develop a novel quality metric which can automatically detect
errors in the segmentation process of NCAs. M3D-NCA outperforms UNet models two
orders of magnitude larger in hippocampus and prostate segmentation by 2%
Dice and can be run on a Raspberry Pi 4 Model B (2GB RAM). This highlights the
potential of M3D-NCA as an effective and efficient alternative for medical
image segmentation in resource-constrained environments. | [
"John Kalkhof",
"Anirban Mukhopadhyay"
] | 2023-09-06 12:43:18 | http://arxiv.org/abs/2309.02954v1 | http://arxiv.org/pdf/2309.02954v1 | 2309.02954v1 |
EvoCLINICAL: Evolving Cyber-Cyber Digital Twin with Active Transfer Learning for Automated Cancer Registry System | The Cancer Registry of Norway (CRN) collects information on cancer patients
by receiving cancer messages from different medical entities (e.g., medical
labs, and hospitals) in Norway. Such messages are validated by an automated
cancer registry system: GURI. Its correct operation is crucial since it lays
the foundation for cancer research and provides critical cancer-related
statistics to its stakeholders. Constructing a cyber-cyber digital twin (CCDT)
for GURI can facilitate various experiments and advanced analyses of the
operational state of GURI without requiring intensive interactions with the
real system. However, GURI constantly evolves due to novel medical diagnostics
and treatment, technological advances, etc. Accordingly, CCDT should evolve as
well to synchronize with GURI. A key challenge of achieving such
synchronization is that evolving CCDT needs abundant data labelled by the new
GURI. To tackle this challenge, we propose EvoCLINICAL, which considers the
CCDT developed for the previous version of GURI as the pretrained model and
fine-tunes it with the dataset labelled by querying a new GURI version.
EvoCLINICAL employs a genetic algorithm to select an optimal subset of cancer
messages from a candidate dataset and query GURI with it. We evaluate
EvoCLINICAL on three evolution processes. The precision, recall, and F1 score
are all greater than 91%, demonstrating the effectiveness of EvoCLINICAL.
Furthermore, we replace the active learning part of EvoCLINICAL with random
selection to study the contribution of transfer learning to the overall
performance of EvoCLINICAL. Results show that employing active learning in
EvoCLINICAL increases its performance consistently. | [
"Chengjie Lu",
"Qinghua Xu",
"Tao Yue",
"Shaukat Ali",
"Thomas Schwitalla",
"Jan F. Nygård"
] | 2023-09-06 12:02:15 | http://arxiv.org/abs/2309.03246v1 | http://arxiv.org/pdf/2309.03246v1 | 2309.03246v1 |
A hybrid quantum-classical fusion neural network to improve protein-ligand binding affinity predictions for drug discovery | The field of drug discovery hinges on the accurate prediction of binding
affinity between prospective drug molecules and target proteins, especially
when such proteins directly influence disease progression. However, estimating
binding affinity demands significant financial and computational resources.
While state-of-the-art methodologies employ classical machine learning (ML)
techniques, emerging hybrid quantum machine learning (QML) models have shown
promise for enhanced performance, owing to their inherent parallelism and
capacity to manage exponential increases in data dimensionality. Despite these
advances, existing models encounter issues related to convergence stability and
prediction accuracy. This paper introduces a novel hybrid quantum-classical
deep learning model tailored for binding affinity prediction in drug discovery.
Specifically, the proposed model synergistically integrates 3D and spatial
graph convolutional neural networks within an optimized quantum architecture.
Simulation results demonstrate a 6% improvement in prediction accuracy relative
to existing classical models, as well as a significantly more stable
convergence performance compared to previous classical approaches. | [
"S. Banerjee",
"S. He Yuxun",
"S. Konakanchi",
"L. Ogunfowora",
"S. Roy",
"S. Selvaras",
"L. Domingo",
"M. Chehimi",
"M. Djukic",
"C. Johnson"
] | 2023-09-06 11:56:33 | http://arxiv.org/abs/2309.03919v1 | http://arxiv.org/pdf/2309.03919v1 | 2309.03919v1 |
Estimating irregular water demands with physics-informed machine learning to inform leakage detection | Leakages in drinking water distribution networks pose significant challenges
to water utilities, leading to infrastructure failure, operational disruptions,
environmental hazards, property damage, and economic losses. The timely
identification and accurate localisation of such leakages is paramount for
utilities to mitigate these unwanted effects. However, implementation of
algorithms for leakage detection is limited in practice by requirements of
either hydraulic models or large amounts of training data. Physics-informed
machine learning can utilise hydraulic information thereby circumventing both
limitations. In this work, we present a physics-informed machine learning
algorithm that analyses pressure data and therefrom estimates unknown irregular
water demands via a fully connected neural network, ultimately leveraging the
Bernoulli equation and effectively linearising the leakage detection problem.
Our algorithm is tested on data from the L-Town benchmark network, and results
indicate a good capability for estimating most irregular demands, with R2
larger than 0.8. Identification results for leakages under the presence of
irregular demands could be improved by a factor of 5.3 for abrupt leaks and a
factor of 3.0 for incipient leaks when compared to the results disregarding
irregular demands. | [
"Ivo Daniel",
"Andrea Cominola"
] | 2023-09-06 11:55:16 | http://arxiv.org/abs/2309.02935v1 | http://arxiv.org/pdf/2309.02935v1 | 2309.02935v1 |
GroupEnc: encoder with group loss for global structure preservation | Recent advances in dimensionality reduction have achieved more accurate
lower-dimensional embeddings of high-dimensional data. In addition to
visualisation purposes, these embeddings can be used for downstream processing,
including batch effect normalisation, clustering, community detection or
trajectory inference. We use the notion of structure preservation at both local
and global levels to create a deep learning model, based on a variational
autoencoder (VAE) and the stochastic quartet loss from the SQuadMDS algorithm.
Our encoder model, called GroupEnc, uses a 'group loss' function to create
embeddings with less global structure distortion than VAEs do, while keeping
the model parametric and the architecture flexible. We validate our approach
using publicly available biological single-cell transcriptomic datasets,
employing RNX curves for evaluation. | [
"David Novak",
"Sofie Van Gassen",
"Yvan Saeys"
] | 2023-09-06 11:22:21 | http://arxiv.org/abs/2309.02917v1 | http://arxiv.org/pdf/2309.02917v1 | 2309.02917v1 |
Persona-aware Generative Model for Code-mixed Language | Code-mixing and script-mixing are prevalent across online social networks and
multilingual societies. However, a user's preference toward code-mixing depends
on their socioeconomic status, demographics, and the local context,
which existing generative models mostly ignore while generating code-mixed
texts. In this work, we make a pioneering attempt to develop a persona-aware
generative model to generate texts resembling real-life code-mixed texts of
individuals. We propose a Persona-aware Generative Model for Code-mixed
Generation, PARADOX, a novel Transformer-based encoder-decoder model that
encodes an utterance conditioned on a user's persona and generates code-mixed
texts without monolingual reference data. We propose an alignment module that
re-calibrates the generated sequence to resemble real-life code-mixed texts.
PARADOX generates code-mixed texts that are semantically more meaningful and
linguistically more valid. To evaluate the personification capabilities of
PARADOX, we propose four new metrics -- CM BLEU, CM Rouge-1, CM Rouge-L and CM
KS. On average, PARADOX achieves 1.6 points better CM BLEU, 47% better
perplexity and 32% better semantic coherence than the non-persona-based
counterparts. | [
"Ayan Sengupta",
"Md Shad Akhtar",
"Tanmoy Chakraborty"
] | 2023-09-06 11:20:41 | http://arxiv.org/abs/2309.02915v1 | http://arxiv.org/pdf/2309.02915v1 | 2309.02915v1 |
Ensemble DNN for Age-of-Information Minimization in UAV-assisted Networks | This paper addresses the problem of Age-of-Information (AoI) in UAV-assisted
networks. Our objective is to minimize the expected AoI across devices by
optimizing UAVs' stopping locations and device selection probabilities. To
tackle this problem, we first derive a closed-form expression of the expected
AoI that involves the probabilities of selection of devices. Then, we formulate
the problem as a non-convex minimization subject to quality of service
constraints. Since the problem is challenging to solve, we propose an Ensemble
Deep Neural Network (EDNN) based approach which takes advantage of the dual
formulation of the studied problem. Specifically, the Deep Neural Networks
(DNNs) in the ensemble are trained in an unsupervised manner using the
Lagrangian function of the studied problem. Our experiments show that the
proposed EDNN method outperforms traditional DNNs in reducing the expected AoI,
achieving a remarkable reduction of $29.5\%$. | [
"Mouhamed Naby Ndiaye",
"El Houcine Bergou",
"Hajar El Hammouti"
] | 2023-09-06 11:19:26 | http://arxiv.org/abs/2309.02913v1 | http://arxiv.org/pdf/2309.02913v1 | 2309.02913v1 |
A Multimodal Learning Framework for Comprehensive 3D Mineral Prospectivity Modeling with Jointly Learned Structure-Fluid Relationships | This study presents a novel multimodal fusion model for three-dimensional
mineral prospectivity mapping (3D MPM), effectively integrating structural and
fluid information through a deep network architecture. Leveraging Convolutional
Neural Networks (CNN) and Multilayer Perceptrons (MLP), the model employs
canonical correlation analysis (CCA) to align and fuse multimodal features.
Rigorous evaluation on the Jiaojia gold deposit dataset demonstrates the
model's superior performance in distinguishing ore-bearing instances and
predicting mineral prospectivity, outperforming other models in result
analyses. Ablation studies further reveal the benefits of joint feature
utilization and CCA incorporation. This research not only advances mineral
prospectivity modeling but also highlights the pivotal role of data integration
and feature alignment for enhanced exploration decision-making. | [
"Yang Zheng",
"Hao Deng",
"Ruisheng Wang",
"Jingjie Wu"
] | 2023-09-06 11:13:34 | http://arxiv.org/abs/2309.02911v2 | http://arxiv.org/pdf/2309.02911v2 | 2309.02911v2 |
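The abstract above mentions canonical correlation analysis (CCA) to align and fuse multimodal features. A hedged sketch of that step with scikit-learn follows; the feature dimensions and the simple concatenation-based fusion are assumptions, not the published pipeline.

```python
# Minimal sketch (not the published pipeline): align structural and fluid
# feature vectors with canonical correlation analysis before fusing them.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
structural_feats = rng.normal(size=(500, 32))   # e.g. CNN features (hypothetical dims)
fluid_feats = rng.normal(size=(500, 16))        # e.g. MLP features (hypothetical dims)

cca = CCA(n_components=8)
struct_c, fluid_c = cca.fit_transform(structural_feats, fluid_feats)

# A simple fusion: concatenate the aligned components for a downstream classifier.
fused = np.concatenate([struct_c, fluid_c], axis=1)
print(fused.shape)  # (500, 16)
```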
DECODE: Data-driven Energy Consumption Prediction leveraging Historical Data and Environmental Factors in Buildings | Energy prediction in buildings plays a crucial role in effective energy
management. Precise predictions are essential for achieving optimal energy
consumption and distribution within the grid. This paper introduces a Long
Short-Term Memory (LSTM) model designed to forecast building energy consumption
using historical energy data, occupancy patterns, and weather conditions. The
LSTM model provides accurate short, medium, and long-term energy predictions
for residential and commercial buildings compared to existing prediction
models. We compare our LSTM model with established prediction methods,
including linear regression, decision trees, and random forest. Encouragingly,
the proposed LSTM model emerges as the superior performer across all metrics.
It demonstrates exceptional prediction accuracy, boasting the highest R2 score
of 0.97 and the most favorable mean absolute error (MAE) of 0.007. An
additional advantage of our developed model is its capacity to achieve
efficient energy consumption forecasts even when trained on a limited dataset.
We address concerns about overfitting (variance) and underfitting (bias)
through rigorous training and evaluation on real-world data. In summary, our
research contributes to energy prediction by offering a robust LSTM model that
outperforms alternative methods and operates with remarkable efficiency,
generalizability, and reliability. | [
"Aditya Mishra",
"Haroon R. Lone",
"Aayush Mishra"
] | 2023-09-06 11:02:53 | http://arxiv.org/abs/2309.02908v1 | http://arxiv.org/pdf/2309.02908v1 | 2309.02908v1 |
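As a concrete illustration of the forecasting setup described above, here is a minimal LSTM forecaster in PyTorch. It is not the DECODE model itself; the three input features (past consumption, occupancy, temperature), the window length, and the training data are placeholders.

```python
# Hedged sketch of an LSTM forecaster in the spirit of DECODE (not the authors'
# implementation): inputs combine past consumption, occupancy and weather.
import torch
import torch.nn as nn

class EnergyLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict next-step consumption

model = EnergyLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                     # MAE, matching the reported metric

# Synthetic stand-in for (energy, occupancy, temperature) windows of length 24.
x = torch.randn(128, 24, 3)
y = torch.randn(128, 1)
for _ in range(100):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```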
Testing properties of distributions in the streaming model | We study distribution testing in the standard access model and the
conditional access model when the memory available to the testing algorithm is
bounded. In both scenarios, the samples appear in an online fashion and the
goal is to test the properties of distribution using an optimal number of
samples subject to a memory constraint on how many samples can be stored at a
given time. First, we provide a trade-off between the sample complexity and the
space complexity for testing identity when the samples are drawn according to
the conditional access oracle. We then show that we can learn a succinct
representation of a monotone distribution efficiently under an almost optimal
memory constraint on the number of stored samples. We also show
that the algorithm for monotone distributions can be extended to a larger class
of decomposable distributions. | [
"Sampriti Roy",
"Yadu Vasudev"
] | 2023-09-06 10:53:29 | http://arxiv.org/abs/2309.03245v1 | http://arxiv.org/pdf/2309.03245v1 | 2309.03245v1 |
A Unified Framework for Discovering Discrete Symmetries | We consider the problem of learning a function respecting a symmetry from
among a class of symmetries. We develop a unified framework that enables
symmetry discovery across a broad range of subgroups including locally
symmetric, dihedral and cyclic subgroups. At the core of the framework is a
novel architecture composed of linear and tensor-valued functions that
expresses functions invariant to these subgroups in a principled manner. The
structure of the architecture enables us to leverage multi-armed bandit
algorithms and gradient descent to efficiently optimize over the linear and the
tensor-valued functions, respectively, and to infer the symmetry that is
ultimately learnt. We also discuss the necessity of the tensor-valued functions
in the architecture. Experiments on image-digit sum and polynomial regression
tasks demonstrate the effectiveness of our approach. | [
"Pavan Karjol",
"Rohan Kashyap",
"Aditya Gopalan",
"Prathosh A. P"
] | 2023-09-06 10:41:30 | http://arxiv.org/abs/2309.02898v1 | http://arxiv.org/pdf/2309.02898v1 | 2309.02898v1 |
Non-Clashing Teaching Maps for Balls in Graphs | Recently, Kirkpatrick et al. [ALT 2019] and Fallat et al. [JMLR 2023]
introduced non-clashing teaching and showed it to be the most efficient machine
teaching model satisfying the benchmark for collusion-avoidance set by Goldman
and Mathias. A teaching map $T$ for a concept class $\cal{C}$ assigns a
(teaching) set $T(C)$ of examples to each concept $C \in \cal{C}$. A teaching
map is non-clashing if no pair of concepts are consistent with the union of
their teaching sets. The size of a non-clashing teaching map (NCTM) $T$ is the
maximum size of a $T(C)$, $C \in \cal{C}$. The non-clashing teaching dimension
NCTD$(\cal{C})$ of $\cal{C}$ is the minimum size of an NCTM for $\cal{C}$.
NCTM$^+$ and NCTD$^+(\cal{C})$ are defined analogously, except the teacher may
only use positive examples.
We study NCTMs and NCTM$^+$s for the concept class $\mathcal{B}(G)$
consisting of all balls of a graph $G$. We show that the associated decision
problem {\sc B-NCTD$^+$} for NCTD$^+$ is NP-complete in split, co-bipartite,
and bipartite graphs. Surprisingly, we even prove that, unless the ETH fails,
{\sc B-NCTD$^+$} does not admit an algorithm running in time
$2^{2^{o(vc)}}\cdot n^{O(1)}$, nor a kernelization algorithm outputting a
kernel with $2^{o(vc)}$ vertices, where vc is the vertex cover number of $G$.
These are extremely rare results: it is only the second (fourth, resp.) problem
in NP to admit a double-exponential lower bound parameterized by vc (treewidth,
resp.), and only one of very few problems to admit an ETH-based conditional
lower bound on the number of vertices in a kernel. We complement these lower
bounds with matching upper bounds. For trees, interval graphs, cycles, and
trees of cycles, we derive NCTM$^+$s or NCTMs for $\mathcal{B}(G)$ of size
proportional to its VC-dimension. For Gromov-hyperbolic graphs, we design an
approximate NCTM$^+$ for $\mathcal{B}(G)$ of size 2. | [
"Jérémie Chalopin",
"Victor Chepoi",
"Fionn Mc Inerney",
"Sébastien Ratel"
] | 2023-09-06 10:02:58 | http://arxiv.org/abs/2309.02876v1 | http://arxiv.org/pdf/2309.02876v1 | 2309.02876v1 |
Learning Hybrid Dynamics Models With Simulator-Informed Latent States | Dynamics model learning deals with the task of inferring unknown dynamics
from measurement data and predicting the future behavior of the system. A
typical approach to address this problem is to train recurrent models. However,
predictions with these models are often not physically meaningful. Further,
they suffer from deteriorated behavior over time due to accumulating errors.
Often, simulators built on first principles are available and are physically
meaningful by design. However, modeling simplifications typically cause
inaccuracies in these models. Consequently, hybrid modeling is an emerging
trend that aims to combine the best of both worlds. In this paper, we propose a
new approach to hybrid modeling, where we inform the latent states of a learned
model via a black-box simulator. This allows the predictions to be controlled via
the simulator, preventing them from accumulating errors. This is especially
challenging since, in contrast to previous approaches, access to the
simulator's latent states is not available. We tackle the task by leveraging
observers, a well-known concept from control theory, inferring unknown latent
states from observations and dynamics over time. In our learning-based setting,
we jointly learn the dynamics and an observer that infers the latent states via
the simulator. Thus, the simulator constantly corrects the latent states,
compensating for modeling mismatch caused by learning. To maintain flexibility,
we train an RNN-based residuum for the latent states that cannot be informed by
the simulator. | [
"Katharina Ensinger",
"Sebastian Ziesche",
"Sebastian Trimpe"
] | 2023-09-06 09:57:58 | http://arxiv.org/abs/2309.02873v1 | http://arxiv.org/pdf/2309.02873v1 | 2309.02873v1 |
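The observer idea above can be sketched in a few lines: a learned latent transition corrected by a Luenberger-style gain driven by the simulator's observable output. This is a conceptual sketch only; the simulator call, dimensions, and readout map are placeholders, and the paper's joint training of dynamics and observer is not reproduced here.

```python
# Conceptual sketch only: a learned latent transition corrected by a
# Luenberger-style observer term driven by a black-box simulator's output.
# The simulator call and dimensions are placeholders, not the authors' setup.
import torch
import torch.nn as nn

latent_dim, obs_dim = 8, 3

transition = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
readout = nn.Linear(latent_dim, obs_dim)          # maps latent state to observations
observer_gain = nn.Linear(obs_dim, latent_dim)    # learned correction gain

def simulator_step(t):
    """Placeholder for the black-box simulator's observable output at step t."""
    return torch.zeros(obs_dim)

z = torch.zeros(latent_dim)
trajectory = []
for t in range(50):
    z_pred = transition(z)                         # learned dynamics prediction
    innovation = simulator_step(t) - readout(z_pred)
    z = z_pred + observer_gain(innovation)         # simulator-informed correction
    trajectory.append(readout(z))
```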
Rethinking Momentum Knowledge Distillation in Online Continual Learning | Online Continual Learning (OCL) addresses the problem of training neural
networks on a continuous data stream where multiple classification tasks emerge
in sequence. In contrast to offline Continual Learning, data can be seen only
once in OCL. In this context, replay-based strategies have achieved impressive
results, and most state-of-the-art approaches depend heavily on them.
While Knowledge Distillation (KD) has been extensively used in offline
Continual Learning, it remains under-exploited in OCL, despite its potential.
In this paper, we theoretically analyze the challenges in applying KD to OCL.
We introduce a direct yet effective methodology for applying Momentum Knowledge
Distillation (MKD) to many flagship OCL methods and demonstrate its
capabilities to enhance existing approaches. In addition to improving existing
state-of-the-art accuracy by more than $10$ percentage points on ImageNet100, we shed
light on MKD internal mechanics and impacts during training in OCL. We argue
that similar to replay, MKD should be considered a central component of OCL. | [
"Nicolas Michel",
"Maorong Wang",
"Ling Xiao",
"Toshihiko Yamasaki"
] | 2023-09-06 09:49:20 | http://arxiv.org/abs/2309.02870v1 | http://arxiv.org/pdf/2309.02870v1 | 2309.02870v1 |
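A minimal sketch of momentum knowledge distillation in an online setting follows: an exponential-moving-average (EMA) teacher is maintained alongside the student, and a temperature-scaled KL term distils its predictions on the current batch. This is a generic illustration, not the paper's full method; the backbone, momentum, and temperature values are assumptions.

```python
# Illustrative sketch (not the paper's full method): maintain an EMA "momentum"
# teacher of the online model and distil its predictions on the current stream.
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.999):
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def mkd_loss(student_logits, teacher_logits, tau=2.0):
    # Standard temperature-scaled KL distillation term.
    p_t = F.softmax(teacher_logits / tau, dim=1)
    log_p_s = F.log_softmax(student_logits / tau, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau ** 2

student = torch.nn.Linear(128, 10)          # stand-in backbone
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
logits = student(x)
loss = F.cross_entropy(logits, y) + mkd_loss(logits, teacher(x))
loss.backward()
ema_update(teacher, student)                # teacher trails the student
```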
On Reducing Undesirable Behavior in Deep Reinforcement Learning Models | Deep reinforcement learning (DRL) has proven extremely useful in a large
variety of application domains. However, even successful DRL-based software can
exhibit highly undesirable behavior. This is due to DRL training being based on
maximizing a reward function, which typically captures general trends but
cannot precisely capture, or rule out, certain behaviors of the system. In this
paper, we propose a novel framework aimed at drastically reducing the
undesirable behavior of DRL-based software, while maintaining its excellent
performance. In addition, our framework can assist in providing engineers with
a comprehensible characterization of such undesirable behavior. Under the hood,
our approach is based on extracting decision tree classifiers from erroneous
state-action pairs, and then integrating these trees into the DRL training
loop, penalizing the system whenever it performs an error. We provide a
proof-of-concept implementation of our approach, and use it to evaluate the
technique on three significant case studies. We find that our approach can
extend existing frameworks in a straightforward manner, and incurs only a
slight overhead in training time. Further, it incurs only a very slight hit to
performance, or even in some cases - improves it, while significantly reducing
the frequency of undesirable behavior. | [
"Ophir M. Carmel",
"Guy Katz"
] | 2023-09-06 09:47:36 | http://arxiv.org/abs/2309.02869v2 | http://arxiv.org/pdf/2309.02869v2 | 2309.02869v2 |
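The core mechanism described above, extracting a decision tree from erroneous state-action pairs and penalising the agent when the tree flags its behaviour, can be sketched as follows. The logged data, feature layout, and penalty value are hypothetical, and the integration into a full DRL training loop is omitted.

```python
# Simplified sketch of the core idea (not the authors' framework): fit a
# decision tree on logged erroneous state-action pairs and use it to penalise
# the reward whenever the agent repeats such behaviour.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical logged data: columns are [state features..., action]; label 1
# marks an undesirable (erroneous) state-action pair.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = (X[:, 0] + X[:, 4] > 1.4).astype(int)      # synthetic "error" rule

error_clf = DecisionTreeClassifier(max_depth=4).fit(X, y)

def shaped_reward(state, action, env_reward, penalty=1.0):
    sa = np.concatenate([state, [action]]).reshape(1, -1)
    if error_clf.predict(sa)[0] == 1:          # tree flags the behaviour
        return env_reward - penalty
    return env_reward

print(shaped_reward(np.array([0.9, 0.1, 0.2, 0.3]), action=0.8, env_reward=1.0))
```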
Enhancing Asynchronous Time Series Forecasting with Contrastive Relational Inference | Asynchronous time series, also known as temporal event sequences, are the
basis of many applications throughout different industries. Temporal point
processes(TPPs) are the standard method for modeling such data. Existing TPP
models have focused on parameterizing the conditional distribution of future
events instead of explicitly modeling event interactions, which poses challenges
for event prediction. In this paper, we propose a novel approach that
leverages Neural Relational Inference (NRI) to learn a relation graph that
infers interactions while simultaneously learning the dynamics patterns from
observational data. Our approach, the Contrastive Relational Inference-based
Hawkes Process (CRIHP), reasons about event interactions under a variational
inference framework. It utilizes intensity-based learning to search for
prototype paths to contrast relationship constraints. Extensive experiments on
three real-world datasets demonstrate the effectiveness of our model in
capturing event interactions for event sequence modeling tasks. Code will be
integrated into the EasyTPP framework. | [
"Yan Wang",
"Zhixuan Chu",
"Tao Zhou",
"Caigao Jiang",
"Hongyan Hao",
"Minjie Zhu",
"Xindong Cai",
"Qing Cui",
"Longfei Li",
"James Y Zhang",
"Siqiao Xue",
"Jun Zhou"
] | 2023-09-06 09:47:03 | http://arxiv.org/abs/2309.02868v2 | http://arxiv.org/pdf/2309.02868v2 | 2309.02868v2 |
A recommender for the management of chronic pain in patients undergoing spinal cord stimulation | Spinal cord stimulation (SCS) is a therapeutic approach used for the
management of chronic pain. It involves the delivery of electrical impulses to
the spinal cord via an implanted device, which when given suitable stimulus
parameters can mask or block pain signals. Selection of optimal stimulation
parameters usually happens in the clinic under the care of a provider whereas
at-home SCS optimization is managed by the patient. In this paper, we propose a
recommender system for the management of pain in chronic pain patients
undergoing SCS. In particular, we use a contextual multi-armed bandit (CMAB)
approach to develop a system that recommends SCS settings to patients with the
aim of improving their condition. These recommendations, sent directly to
patients through a digital health ecosystem, combined with a patient monitoring
system, close the therapeutic loop around a chronic pain patient over their
entire patient journey. We evaluated the system in a cohort of SCS-implanted
ENVISION study subjects (Clinicaltrials.gov ID: NCT03240588) using a
combination of quality of life metrics and Patient States (PS), a novel measure
of holistic outcomes. SCS recommendations provided statistically significant
improvement in clinical outcomes (pain and/or QoL) in 85\% of all subjects
(N=21). Among subjects in moderate PS (N=7) prior to receiving recommendations,
100\% showed statistically significant improvements and 5/7 had improved PS
dwell time. This analysis suggests SCS patients may benefit from SCS
recommendations, resulting in additional clinical improvement on top of
benefits already received from SCS therapy. | [
"Tigran Tchrakian",
"Mykhaylo Zayats",
"Alessandra Pascale",
"Dat Huynh",
"Pritish Parida",
"Carla Agurto Rios",
"Sergiy Zhuk",
"Jeffrey L. Rogers",
"ENVISION Studies Physician Author Group",
"Boston Scientific Research Scientists Consortium"
] | 2023-09-06 09:43:34 | http://arxiv.org/abs/2309.03918v1 | http://arxiv.org/pdf/2309.03918v1 | 2309.03918v1 |
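The study uses a contextual multi-armed bandit (CMAB) to recommend SCS settings; the specific algorithm is not stated in the abstract, so the sketch below uses generic LinUCB as a stand-in. The arm count, context dimension, and reward signal are placeholders.

```python
# Generic LinUCB sketch (the study's actual CMAB algorithm is not specified
# here): choose among candidate stimulation settings given a patient context.
import numpy as np

n_arms, d, alpha = 5, 8, 1.0                  # hypothetical sizes
A = [np.eye(d) for _ in range(n_arms)]        # per-arm design matrices
b = [np.zeros(d) for _ in range(n_arms)]

def select_arm(context):
    scores = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]
        ucb = theta @ context + alpha * np.sqrt(context @ A_inv @ context)
        scores.append(ucb)
    return int(np.argmax(scores))

def update(arm, context, reward):
    A[arm] += np.outer(context, context)
    b[arm] += reward * context

rng = np.random.default_rng(1)
for _ in range(100):
    ctx = rng.normal(size=d)                  # e.g. recent pain/QoL features
    arm = select_arm(ctx)
    reward = rng.normal()                     # placeholder outcome signal
    update(arm, ctx, reward)
```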
Generalised Mutual Information: a Framework for Discriminative Clustering | In the last decade, recent successes in deep clustering majorly involved the
Mutual Information (MI) as an unsupervised objective for training neural
networks with increasing regularisations. While the quality of the
regularisations have been largely discussed for improvements, little attention
has been dedicated to the relevance of MI as a clustering objective. In this
paper, we first highlight how the maximisation of MI does not lead to
satisfactory clusters. We identify the Kullback-Leibler divergence as the main
reason for this behaviour. Hence, we generalise the mutual information by
changing its core distance, introducing the Generalised Mutual Information
(GEMINI): a set of metrics for unsupervised neural network training. Unlike MI,
some GEMINIs do not require regularisations when training as they are
geometry-aware thanks to distances or kernels in the data space. Finally, we
highlight that GEMINIs can automatically select a relevant number of clusters,
a property that has been little studied in deep discriminative clustering
context where the number of clusters is a priori unknown. | [
"Louis Ohl",
"Pierre-Alexandre Mattei",
"Charles Bouveyron",
"Warith Harchaoui",
"Mickaël Leclercq",
"Arnaud Droit",
"Frédéric Precioso"
] | 2023-09-06 09:39:33 | http://arxiv.org/abs/2309.02858v1 | http://arxiv.org/pdf/2309.02858v1 | 2309.02858v1 |
A Critical Review of Common Log Data Sets Used for Evaluation of Sequence-based Anomaly Detection Techniques | Log data store event execution patterns that correspond to underlying
workflows of systems or applications. While most logs are informative, log data
also include artifacts that indicate failures or incidents. Accordingly, log
data are often used to evaluate anomaly detection techniques that aim to
automatically disclose unexpected or otherwise relevant system behavior
patterns. Recently, detection approaches leveraging deep learning have
increasingly focused on anomalies that manifest as changes of sequential
patterns within otherwise normal event traces. Several publicly available data
sets, such as HDFS, BGL, Thunderbird, OpenStack, and Hadoop, have since become
standards for evaluating these anomaly detection techniques, however, the
appropriateness of these data sets has not been closely investigated in the
past. In this paper we therefore analyze six publicly available log data sets
with focus on the manifestations of anomalies and simple techniques for their
detection. Our findings suggest that most anomalies are not directly related to
sequential manifestations and that advanced detection techniques are not
required to achieve high detection rates on these data sets. | [
"Max Landauer",
"Florian Skopik",
"Markus Wurzenberger"
] | 2023-09-06 09:31:17 | http://arxiv.org/abs/2309.02854v1 | http://arxiv.org/pdf/2309.02854v1 | 2309.02854v1 |
Knowledge Distillation Layer that Lets the Student Decide | A typical technique in knowledge distillation (KD) is regularizing the learning
of a limited-capacity model (the student) by pushing its responses to match a
powerful model's (the teacher's). Although especially useful in the penultimate layer
and beyond, its action on the student's feature transform is rather implicit,
limiting its practice in the intermediate layers. To explicitly embed the
teacher's knowledge in feature transform, we propose a learnable KD layer for
the student which improves KD with two distinct abilities: i) learning how to
leverage the teacher's knowledge, enabling to discard nuisance information, and
ii) feeding forward the transferred knowledge deeper. Thus, the student enjoys
the teacher's knowledge during the inference besides training. Formally, we
repurpose 1x1-BN-ReLU-1x1 convolution block to assign a semantic vector to each
local region according to the template (supervised by the teacher) that the
corresponding region of the student matches. To facilitate template learning in
the intermediate layers, we propose a novel form of supervision based on the
teacher's decisions. Through rigorous experimentation, we demonstrate the
effectiveness of our approach on 3 popular classification benchmarks. Code is
available at: https://github.com/adagorgun/letKD-framework | [
"Ada Gorgun",
"Yeti Z. Gurbuz",
"A. Aydin Alatan"
] | 2023-09-06 09:05:03 | http://arxiv.org/abs/2309.02843v1 | http://arxiv.org/pdf/2309.02843v1 | 2309.02843v1 |
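The abstract explicitly describes repurposing a 1x1-BN-ReLU-1x1 convolution block; a bare PyTorch version of that block is sketched below. The template supervision from the teacher's decisions is omitted, and the channel sizes are assumptions.

```python
# Sketch of the described 1x1-BN-ReLU-1x1 block as a PyTorch module; only the
# forward structure is shown, without the teacher-supervised template learning.
import torch
import torch.nn as nn

class KDLayer(nn.Module):
    def __init__(self, in_channels, n_templates, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, n_templates, kernel_size=1),
            nn.BatchNorm2d(n_templates),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_templates, out_channels, kernel_size=1),
        )

    def forward(self, x):
        # Each spatial location is softly assigned to template channels,
        # then re-embedded and passed deeper into the student.
        return self.block(x)

feat = torch.randn(2, 64, 14, 14)             # hypothetical student feature map
out = KDLayer(64, n_templates=128, out_channels=64)(feat)
```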
Random postprocessing for combinatorial Bayesian optimization | Model-based sequential approaches to discrete "black-box" optimization,
including Bayesian optimization techniques, often access the same points
multiple times for a given objective function of interest, resulting in many
steps to find the global optimum. Here, we numerically study the effect of a
postprocessing method on Bayesian optimization that strictly prohibits
duplicated samples in the dataset. We find the postprocessing method
significantly reduces the number of sequential steps to find the global
optimum, especially when the acquisition function is based on maximum a posteriori
estimation. Our results provide a simple but general strategy to solve the slow
convergence of Bayesian optimization for high-dimensional problems. | [
"Keisuke Morita",
"Yoshihiko Nishikawa",
"Masayuki Ohzeki"
] | 2023-09-06 08:59:34 | http://arxiv.org/abs/2309.02842v1 | http://arxiv.org/pdf/2309.02842v1 | 2309.02842v1 |
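The postprocessing step studied above amounts to forbidding duplicated samples when picking the next point. A minimal sketch under that reading follows; the candidate encoding and acquisition values are synthetic placeholders.

```python
# Minimal sketch of the postprocessing idea: after ranking candidates with an
# acquisition function, discard any candidate already present in the dataset.
import numpy as np

def next_candidate(candidates, acquisition_values, observed):
    """candidates: (n, d) discrete points; observed: set of tuples already sampled."""
    order = np.argsort(-acquisition_values)          # best acquisition first
    for idx in order:
        point = tuple(candidates[idx])
        if point not in observed:                    # strictly forbid duplicates
            return candidates[idx]
    return None                                      # search space exhausted

rng = np.random.default_rng(0)
cands = rng.integers(0, 2, size=(16, 6))             # hypothetical binary designs
acq = rng.random(16)
observed = {tuple(cands[0]), tuple(cands[3])}
print(next_candidate(cands, acq, observed))
```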
EGIC: Enhanced Low-Bit-Rate Generative Image Compression Guided by Semantic Segmentation | We introduce EGIC, a novel generative image compression method that allows
traversing the distortion-perception curve efficiently from a single model.
Specifically, we propose an implicitly encoded variant of image interpolation
that predicts the residual between a MSE-optimized and GAN-optimized decoder
output. On the receiver side, the user can then control the impact of the
residual on the GAN-based reconstruction. Together with improved GAN-based
building blocks, EGIC outperforms a wide variety of perception-oriented and
distortion-oriented baselines, including HiFiC, MRIC and DIRAC, while
performing almost on par with VTM-20.0 on the distortion end. EGIC is simple to
implement, very lightweight (e.g. 0.18x model parameters compared to HiFiC) and
provides excellent interpolation characteristics, which makes it a promising
candidate for practical applications targeting the low bit range. | [
"Nikolai Körber",
"Eduard Kromer",
"Andreas Siebert",
"Sascha Hauke",
"Daniel Mueller-Gritschneder"
] | 2023-09-06 08:50:04 | http://arxiv.org/abs/2309.03244v1 | http://arxiv.org/pdf/2309.03244v1 | 2309.03244v1 |
BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network | Generative adversarial network (GAN)-based vocoders have been intensively
studied because they can synthesize high-fidelity audio waveforms faster than
real-time. However, it has been reported that most GANs fail to obtain the
optimal projection for discriminating between real and fake data in the feature
space. In the literature, it has been demonstrated that slicing adversarial
network (SAN), an improved GAN training framework that can find the optimal
projection, is effective in the image generation task. In this paper, we
investigate the effectiveness of SAN in the vocoding task. For this purpose, we
propose a scheme to modify least-squares GAN, which most GAN-based vocoders
adopt, so that their loss functions satisfy the requirements of SAN. Through
our experiments, we demonstrate that SAN can improve the performance of
GAN-based vocoders, including BigVGAN, with small modifications. Our code is
available at https://github.com/sony/bigvsan. | [
"Takashi Shibuya",
"Yuhta Takida",
"Yuki Mitsufuji"
] | 2023-09-06 08:48:03 | http://arxiv.org/abs/2309.02836v1 | http://arxiv.org/pdf/2309.02836v1 | 2309.02836v1 |
Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative Inference Framework for Deep Learning Classification Tasks | Deep learning classifiers are crucial in the age of artificial intelligence.
The device-edge-based collaborative inference has been widely adopted as an
efficient framework for promoting its applications in IoT and 5G/6G networks.
However, it suffers from accuracy degradation under non-i.i.d. data
distribution and privacy disclosure. For accuracy degradation, direct use of
transfer learning and split learning is costly, and privacy issues remain.
For privacy disclosure, cryptography-based approaches lead to a huge overhead.
Other lightweight methods assume that the ground truth is non-sensitive and can
be exposed. But for many applications, the ground truth is the user's crucial
privacy-sensitive information. In this paper, we propose Roulette, a
task-oriented, semantic, privacy-preserving collaborative inference framework
for deep learning classifiers. In addition to the input data, we
treat the ground truth of the data as private information. We develop a novel
paradigm of split learning where the back-end DNN is frozen and the front-end
DNN is retrained to be both a feature extractor and an encryptor. Moreover, we
provide a differential privacy guarantee and analyze the hardness of ground
truth inference attacks. To validate the proposed Roulette, we conduct
extensive performance evaluations using realistic datasets, which demonstrate
that Roulette can effectively defend against various attacks and meanwhile
achieve good model accuracy. In situations where the data are severely
non-i.i.d., Roulette improves the inference accuracy by 21\% averaged over
benchmarks, while making the accuracy of discrimination attacks almost
equivalent to random guessing. | [
"Jingyi Li",
"Guocheng Liao",
"Lin Chen",
"Xu Chen"
] | 2023-09-06 08:08:12 | http://arxiv.org/abs/2309.02820v1 | http://arxiv.org/pdf/2309.02820v1 | 2309.02820v1 |
Combining Thermodynamics-based Model of the Centrifugal Compressors and Active Machine Learning for Enhanced Industrial Design Optimization | The design process of centrifugal compressors requires applying an
optimization process which is computationally expensive due to complex
analytical equations underlying the compressor's dynamics. Although
the regression surrogate models could drastically reduce the computational cost
of such a process, the major challenge is the scarcity of data for training the
surrogate model. Aiming to strategically exploit the labeled samples, we
propose the Active-CompDesign framework in which we combine a
thermodynamics-based compressor model (i.e., our internal software for
compressor design) and Gaussian Process-based surrogate model within a
deployable Active Learning (AL) setting. We first conduct experiments in an
offline setting and further, extend it to an online AL framework where a
real-time interaction with the thermodynamics-based compressor's model allows
the deployment in production. Active-CompDesign shows a significant performance
improvement in surrogate modeling by leveraging an uncertainty-based sample query
function within the AL framework, compared to random selection of data points.
Moreover, our framework in production has reduced the total computational time of
the compressor's design optimization, running around 46% faster than the internal
thermodynamics-based simulator while achieving the same performance. | [
"Shadi Ghiasi",
"Guido Pazzi",
"Concettina Del Grosso",
"Giovanni De Magistris",
"Giacomo Veneri"
] | 2023-09-06 08:06:15 | http://arxiv.org/abs/2309.02818v1 | http://arxiv.org/pdf/2309.02818v1 | 2309.02818v1 |
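An uncertainty-based active learning loop with a Gaussian Process surrogate, as described above, can be sketched with scikit-learn; the thermodynamics-based compressor model is replaced by a toy placeholder function, and the kernel, query budget, and design space are assumptions.

```python
# Hedged sketch of an uncertainty-based active learning loop with a Gaussian
# Process surrogate; the thermodynamics-based simulator is a toy placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    """Placeholder for the expensive compressor model."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
pool = rng.uniform(-1, 1, size=(500, 2))      # unlabelled candidate designs
X = pool[:5].copy()                           # small initial labelled set
y = simulator(X)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
for _ in range(20):                           # AL iterations (query budget)
    gp.fit(X, y)
    _, std = gp.predict(pool, return_std=True)
    query = pool[np.argmax(std)]              # most uncertain design
    X = np.vstack([X, query])
    y = np.append(y, simulator(query[None, :]))
```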
Automated Bioinformatics Analysis via AutoBA | With the fast-growing and evolving omics data, the demand for streamlined and
adaptable tools to handle the analysis continues to grow. In response to this
need, we introduce Auto Bioinformatics Analysis (AutoBA), an autonomous AI
agent based on a large language model designed explicitly for conventional
omics data analysis. AutoBA simplifies the analytical process by requiring
minimal user input while delivering detailed step-by-step plans for various
bioinformatics tasks. Through rigorous validation by expert bioinformaticians,
AutoBA's robustness and adaptability are affirmed across a diverse range of
omics analysis cases, including whole genome sequencing (WGS), RNA sequencing
(RNA-seq), single-cell RNA-seq, ChIP-seq, and spatial transcriptomics. AutoBA's
unique capacity to self-design analysis processes based on input data
variations further underscores its versatility. Compared with online
bioinformatic services, AutoBA deploys the analysis locally, preserving data
privacy. Moreover, unlike predefined pipelines, AutoBA can adapt in sync with
emerging bioinformatics tools. Overall, AutoBA
represents a convenient tool, offering robustness and adaptability for complex
omics data analysis. | [
"Juexiao Zhou",
"Bin Zhang",
"Xiuying Chen",
"Haoyang Li",
"Xiaopeng Xu",
"Siyuan Chen",
"Xin Gao"
] | 2023-09-06 07:54:45 | http://arxiv.org/abs/2309.03242v1 | http://arxiv.org/pdf/2309.03242v1 | 2309.03242v1 |
Introducing Thermodynamics-Informed Symbolic Regression -- A Tool for Thermodynamic Equations of State Development | Thermodynamic equations of state (EOS) are essential for many industries as
well as in academia. Even leaving aside the expensive and extensive measurement
campaigns required for the data acquisition, the development of EOS is an
intensely time-consuming process, which does often still heavily rely on expert
knowledge and iterative fine-tuning. To improve upon and accelerate the EOS
development process, we introduce thermodynamics-informed symbolic regression
(TiSR), a symbolic regression (SR) tool aimed at thermodynamic EOS modeling.
TiSR is already a capable SR tool, which was used in the research of
https://doi.org/10.1007/s10765-023-03197-z. It aims to combine an SR base with
the extensions required to work with often strongly scattered experimental
data, different residual pre- and post-processing options, and additional
features required to consider thermodynamic EOS development. Although TiSR is
not ready for end users yet, this paper is intended to report on its current
state, showcase the progress, and discuss (distant and not so distant) future
directions. TiSR is available at https://github.com/scoop-group/TiSR and can be
cited as https://doi.org/10.5281/zenodo.8317547. | [
"Viktor Martinek",
"Ophelia Frotscher",
"Markus Richter",
"Roland Herzog"
] | 2023-09-06 07:48:22 | http://arxiv.org/abs/2309.02805v1 | http://arxiv.org/pdf/2309.02805v1 | 2309.02805v1 |
Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory | Split learning is a privacy-preserving distributed learning paradigm in which
an ML model (e.g., a neural network) is split into two parts (i.e., an encoder
and a decoder). The encoder shares so-called latent representation, rather than
raw data, for model training. In mobile-edge computing, network functions (such
as traffic forecasting) can be trained via split learning where an encoder
resides in a user equipment (UE) and a decoder resides in the edge network.
Based on the data processing inequality and the information bottleneck (IB)
theory, we present a new framework and training mechanism to enable a dynamic
balancing of the transmission resource consumption with the informativeness of
the shared latent representations, which directly impacts the predictive
performance. The proposed training mechanism offers an encoder-decoder neural
network architecture featuring multiple modes of complexity-relevance
tradeoffs, enabling tunable performance. The adaptability can accommodate
varying real-time network conditions and application requirements, potentially
reducing operational expenditure and enhancing network agility. As a proof of
concept, we apply the training mechanism to a millimeter-wave (mmWave)-enabled
throughput prediction problem. We also offer new insights and highlight some
challenges related to recurrent neural networks from the perspective of the IB
theory. Interestingly, we find a compression phenomenon across the temporal
domain of the sequential model, in addition to the compression phase that
occurs with the number of training epochs. | [
"Omar Alhussein",
"Moshi Wei",
"Arashmid Akhavain"
] | 2023-09-06 07:04:37 | http://arxiv.org/abs/2309.02787v1 | http://arxiv.org/pdf/2309.02787v1 | 2309.02787v1 |
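As a rough illustration of trading off the informativeness of the shared latent against its transmission cost, the sketch below uses a variational-IB-style objective with a tunable beta; the encoder/decoder sizes, the MSE distortion, and beta itself are assumptions rather than the paper's architecture. Increasing beta compresses the UE-side latent more aggressively at some cost in task accuracy.

```python
# Conceptual sketch only: a variational-IB-style objective that trades off the
# rate of the UE-side latent (a proxy for transmission cost) against task
# accuracy at the edge-side decoder. Architecture and beta are assumptions.
import torch
import torch.nn as nn

enc = nn.Linear(16, 2 * 4)                    # UE encoder -> mean, logvar (dim 4)
dec = nn.Linear(4, 1)                         # edge decoder -> throughput estimate

def ib_loss(x, target, beta=1e-2):
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterise
    distortion = nn.functional.mse_loss(dec(z), target)
    rate = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(dim=1).mean()
    return distortion + beta * rate            # larger beta => cheaper latents

x, target = torch.randn(64, 16), torch.randn(64, 1)
loss = ib_loss(x, target)
loss.backward()
```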
CVE-driven Attack Technique Prediction with Semantic Information Extraction and a Domain-specific Language Model | This paper addresses a critical challenge in cybersecurity: the gap between
vulnerability information represented by Common Vulnerabilities and Exposures
(CVEs) and the resulting cyberattack actions. CVEs provide insights into
vulnerabilities, but often lack details on potential threat actions (tactics,
techniques, and procedures, or TTPs) within the ATT&CK framework. This gap
hinders accurate CVE categorization and proactive countermeasure initiation.
The paper introduces the TTPpredictor tool, which uses innovative techniques to
analyze CVE descriptions and infer plausible TTP attacks resulting from CVE
exploitation. TTPpredictor overcomes challenges posed by limited labeled data
and semantic disparities between CVE and TTP descriptions. It initially
extracts threat actions from unstructured cyber threat reports using Semantic
Role Labeling (SRL) techniques. These actions, along with their contextual
attributes, are correlated with MITRE's attack functionality classes. This
automated correlation facilitates the creation of labeled data, essential for
categorizing novel threat actions into threat functionality classes and TTPs.
The paper presents an empirical assessment, demonstrating TTPpredictor's
effectiveness with accuracy rates of approximately 98% and F1-scores ranging
from 95% to 98% in precise CVE classification to ATT&CK techniques.
TTPpredictor outperforms state-of-the-art language model tools like ChatGPT.
Overall, this paper offers a robust solution for linking CVEs to potential
attack techniques, enhancing cybersecurity practitioners' ability to
proactively identify and mitigate threats. | [
"Ehsan Aghaei",
"Ehab Al-Shaer"
] | 2023-09-06 06:53:45 | http://arxiv.org/abs/2309.02785v1 | http://arxiv.org/pdf/2309.02785v1 | 2309.02785v1 |
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models | As the size of large language models (LLMs) continues to grow, model
compression without sacrificing accuracy has become a crucial challenge for
deployment. While some quantization methods, such as GPTQ, have made progress
in achieving acceptable 4-bit weight-only quantization, attempts at lower bit
quantization often result in severe performance degradation. In this paper, we
introduce a technique called norm tweaking, which can be used as a plugin in
current PTQ methods to achieve high precision while being cost-efficient. Our
approach is inspired by the observation that rectifying the quantized
activation distribution to match its float counterpart can readily restore
accuracy for LLMs. To achieve this, we carefully design a tweaking strategy
that includes calibration data generation and channel-wise distance constraint
to update the weights of normalization layers for better generalization. We
conduct extensive experiments on various datasets using several open-sourced
LLMs. Our method demonstrates significant improvements in both weight-only
quantization and joint quantization of weights and activations, surpassing
existing PTQ methods. On GLM-130B and OPT-66B, our method even achieves the
same level of accuracy at 2-bit quantization as their float ones. Our simple
and effective approach makes it more practical for real-world applications. | [
"Liang Li",
"Qingyuan Li",
"Bo Zhang",
"Xiangxiang Chu"
] | 2023-09-06 06:51:15 | http://arxiv.org/abs/2309.02784v1 | http://arxiv.org/pdf/2309.02784v1 | 2309.02784v1 |
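One way to read the norm-tweaking idea, matching the quantized model's activation statistics to the float model's by updating normalization-layer parameters, is sketched below. This is an interpretation for illustration only, not the paper's exact calibration procedure, and the layer, statistics, and calibration data are placeholders.

```python
# Rough illustration of the stated idea (matching quantized activation
# statistics to the float model's) by adjusting a normalization layer's affine
# parameters; an interpretation, not the paper's exact procedure.
import torch

@torch.no_grad()
def tweak_norm(norm_layer, float_acts, quant_acts, eps=1e-6):
    # Per-channel statistics of activations produced on calibration data.
    f_mean, f_std = float_acts.mean(dim=0), float_acts.std(dim=0)
    q_mean, q_std = quant_acts.mean(dim=0), quant_acts.std(dim=0)
    scale = f_std / (q_std + eps)
    norm_layer.weight.mul_(scale)
    norm_layer.bias.mul_(scale).add_(f_mean - q_mean * scale)

ln = torch.nn.LayerNorm(8)
float_acts = torch.randn(128, 8)                      # calibration activations (float model)
quant_acts = float_acts + 0.1 * torch.randn(128, 8)   # stand-in quantized activations
tweak_norm(ln, float_acts, quant_acts)
```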
Improving diagnosis and prognosis of lung cancer using vision transformers: A scoping review | Vision transformer-based methods are advancing the field of medical
artificial intelligence and cancer imaging, including lung cancer applications.
Recently, many researchers have developed vision transformer-based AI methods
for lung cancer diagnosis and prognosis. This scoping review aims to identify
the recent developments on vision transformer-based AI methods for lung cancer
imaging applications. It provides key insights into how vision transformers
complemented the performance of AI and deep learning methods for lung cancer.
Furthermore, the review also identifies the datasets that contributed to
advancing the field. Of the 314 retrieved studies, this review included 34
studies published from 2020 to 2022. The most commonly addressed task in these
studies was the classification of lung cancer types, such as lung squamous cell
carcinoma versus lung adenocarcinoma, and identifying benign versus malignant
pulmonary nodules. Other applications included survival prediction of lung
cancer patients and segmentation of lungs. The studies lacked clear strategies
for clinical transformation. The SWIN transformer was a popular choice among
researchers; however, many other architectures were also reported where the vision
transformer was combined with convolutional neural networks or UNet model. It
can be concluded that vision transformer-based models are increasingly popular
for developing AI methods for lung cancer applications. However,
their computational complexity and clinical relevance are important factors to
be considered for future research work. This review provides valuable insights
for researchers in the field of AI and healthcare to advance the
state-of-the-art in lung cancer diagnosis and prognosis. We provide an
interactive dashboard on lung-cancer.onrender.com/. | [
"Hazrat Ali",
"Farida Mohsen",
"Zubair Shah"
] | 2023-09-06 06:49:31 | http://arxiv.org/abs/2309.02783v1 | http://arxiv.org/pdf/2309.02783v1 | 2309.02783v1 |