title (string, 9-208 chars) | abstract (string, 280-2.36k chars) | authors (sequence) | published (string, 19 chars) | url (string, 33 chars) | pdf_url (string, 33 chars) | arxiv_id (string, 12 chars) |
---|---|---|---|---|---|---|
Actor critic learning algorithms for mean-field control with moment neural networks | We develop a new policy gradient and actor-critic algorithm for solving
mean-field control problems within a continuous time reinforcement learning
setting. Our approach leverages a gradient-based representation of the value
function, employing parametrized randomized policies. The learning for both the
actor (policy) and critic (value function) is facilitated by a class of moment
neural network functions on the Wasserstein space of probability measures, and
the key feature is to directly sample trajectories of distributions. A central
challenge addressed in this study pertains to the computational treatment of an
operator specific to the mean-field framework. To illustrate the effectiveness
of our methods, we provide a comprehensive set of numerical results. These
encompass diverse examples, including multi-dimensional settings and nonlinear
quadratic mean-field control problems with controlled volatility. | [
"Huyên Pham",
"Xavier Warin"
] | 2023-09-08 13:29:57 | http://arxiv.org/abs/2309.04317v1 | http://arxiv.org/pdf/2309.04317v1 | 2309.04317v1 |
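Below is a minimal PyTorch sketch of the moment-network idea from the abstract above: a feed-forward net that sees a distribution only through the first few empirical moments of its samples. All names, sizes, and the moment choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MomentNet(nn.Module):
    """Feed-forward net that sees a distribution only through the first
    `n_moments` empirical moments of its samples (a simplified stand-in
    for a moment neural network on the Wasserstein space)."""
    def __init__(self, n_moments=4, hidden=64, out_dim=1):
        super().__init__()
        self.n_moments = n_moments
        self.net = nn.Sequential(
            nn.Linear(n_moments, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, samples):
        # samples: (batch, n_particles), particles drawn from each state law
        moments = torch.stack(
            [samples.pow(k + 1).mean(dim=1) for k in range(self.n_moments)],
            dim=-1,
        )
        return self.net(moments)

critic = MomentNet()        # hypothetical critic V(mu)
laws = torch.randn(8, 256)  # 8 sampled distributions, 256 particles each
print(critic(laws).shape)   # torch.Size([8, 1])
```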
Federated Learning for Early Dropout Prediction on Healthy Ageing Applications | The provision of social care applications is crucial for elderly people to
improve their quality of life and enables operators to provide early
interventions. Accurate predictions of user dropouts in healthy ageing
applications are essential since they are directly related to individual health
statuses. Machine Learning (ML) algorithms have enabled highly accurate
predictions, outperforming traditional statistical methods that struggle to
cope with individual patterns. However, ML requires a substantial amount of
data for training, which is challenging due to the presence of personal
identifiable information (PII) and the fragmentation posed by regulations. In
this paper, we present a federated machine learning (FML) approach that
minimizes privacy concerns and enables distributed training, without
transferring individual data. We employ collaborative training by considering
individuals and organizations under FML, which models both cross-device and
cross-silo learning scenarios. Our approach is evaluated on a real-world
dataset with non-independent and identically distributed (non-iid) data among
clients, class imbalance and label ambiguity. Our results show that data
selection and class imbalance handling techniques significantly improve the
predictive accuracy of models trained under FML, demonstrating comparable or
superior predictive performance to traditional ML models. | [
"Christos Chrysanthos Nikolaidis",
"Vasileios Perifanis",
"Nikolaos Pavlidis",
"Pavlos S. Efraimidis"
] | 2023-09-08 13:17:06 | http://arxiv.org/abs/2309.04311v1 | http://arxiv.org/pdf/2309.04311v1 | 2309.04311v1 |
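The cross-silo training described above can be illustrated with a minimal FedAvg round in NumPy. The logistic-regression clients and the inverse-frequency class weighting are assumptions standing in for the paper's data-selection and imbalance-handling techniques.

```python
import numpy as np

def fedavg_round(global_w, clients, lr=0.1, local_steps=5):
    """One FedAvg round: each client runs local logistic-regression steps on
    its own (possibly non-iid, imbalanced) data; the server averages the
    resulting weights, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        freq = np.bincount(y, minlength=2) / len(y)  # counter class imbalance
        sw = np.where(y == 1, 1.0 / max(freq[1], 1e-6), 1.0 / max(freq[0], 1e-6))
        for _ in range(local_steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * (X.T @ (sw * (p - y))) / len(y)
        updates.append(w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
print(w)
```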
AdBooster: Personalized Ad Creative Generation using Stable Diffusion Outpainting | In digital advertising, the selection of the optimal item (recommendation)
and its best creative presentation (creative optimization) have traditionally
been considered separate disciplines. However, both contribute significantly to
user satisfaction, underpinning our assumption that satisfaction relies on both an item's
relevance and its presentation, particularly in the case of visual creatives.
In response, we introduce the task of {\itshape Generative Creative
Optimization (GCO)}, which proposes the use of generative models for creative
generation that incorporate user interests, and {\itshape AdBooster}, a model
for personalized ad creatives based on the Stable Diffusion outpainting
architecture. This model uniquely incorporates user interests both during
fine-tuning and at generation time. To further improve AdBooster's performance,
we also introduce an automated data augmentation pipeline. Through our
experiments on simulated data, we validate AdBooster's effectiveness in
generating more relevant creatives than default product images, showing its
potential for enhancing user engagement. | [
"Veronika Shilova",
"Ludovic Dos Santos",
"Flavian Vasile",
"Gaëtan Racic",
"Ugo Tanielian"
] | 2023-09-08 12:57:05 | http://arxiv.org/abs/2309.11507v1 | http://arxiv.org/pdf/2309.11507v1 | 2309.11507v1 |
Navigating Out-of-Distribution Electricity Load Forecasting during COVID-19: Benchmarking energy load forecasting models without and with continual learning | In traditional deep learning algorithms, one of the key assumptions is that
the data distribution remains constant during both training and deployment.
However, this assumption becomes problematic when faced with
Out-of-Distribution periods, such as the COVID-19 lockdowns, where the data
distribution significantly deviates from what the model has seen during
training. This paper employs a two-fold strategy: utilizing continual learning
techniques to update models with new data and harnessing human mobility data
collected from privacy-preserving pedestrian counters located outside
buildings. In contrast to online learning, which suffers from 'catastrophic
forgetting' as newly acquired knowledge often erases prior information,
continual learning offers a holistic approach by preserving past insights while
integrating new data. This research applies FSNet, a powerful continual
learning algorithm, to real-world data from 13 building complexes in Melbourne,
Australia, a city which had the second longest total lockdown duration globally
during the pandemic. Results underscore the crucial role of continual learning
in accurate energy forecasting, particularly during Out-of-Distribution
periods. Secondary data such as mobility and temperature provided ancillary
support to the primary forecasting model. More importantly, while traditional
methods struggled to adapt during lockdowns, models featuring at least online
learning demonstrated resilience, with lockdown periods posing fewer challenges
once armed with adaptive learning techniques. This study contributes valuable
methodologies and insights to the ongoing effort to improve energy load
forecasting during future Out-of-Distribution periods. | [
"Arian Prabowo",
"Kaixuan Chen",
"Hao Xue",
"Subbu Sethuvenkatraman",
"Flora D. Salim"
] | 2023-09-08 12:36:49 | http://arxiv.org/abs/2309.04296v3 | http://arxiv.org/pdf/2309.04296v3 | 2309.04296v3 |
Viewing the process of generating counterfactuals as a source of knowledge -- Application to the Naive Bayes classifier | There are now many comprehension algorithms for understanding the decisions
of a machine learning algorithm. Among these are those based on the generation
of counterfactual examples. This article proposes to view this generation
process as a source of creating a certain amount of knowledge that can be
stored to be used, later, in different ways. This process is illustrated in the
additive model and, more specifically, in the case of the naive Bayes
classifier, whose interesting properties for this purpose are shown. | [
"Vincent Lemaire",
"Nathan Le Boudec",
"Françoise Fessant",
"Victor Guyomard"
] | 2023-09-08 12:06:48 | http://arxiv.org/abs/2309.04284v1 | http://arxiv.org/pdf/2309.04284v1 | 2309.04284v1 |
Spatial-Temporal Graph Attention Fuser for Calibration in IoT Air Pollution Monitoring Systems | The use of Internet of Things (IoT) sensors for air pollution monitoring has
significantly increased, resulting in the deployment of low-cost sensors.
Despite this advancement, accurately calibrating these sensors in uncontrolled
environmental conditions remains a challenge. To address this, we propose a
novel approach that leverages graph neural networks, specifically the graph
attention network module, to enhance the calibration process by fusing data
from sensor arrays. Through our experiments, we demonstrate the effectiveness
of our approach in significantly improving the calibration accuracy of sensors
in IoT air pollution monitoring platforms. | [
"Keivan Faghih Niresi",
"Mengjie Zhao",
"Hugo Bissig",
"Henri Baumann",
"Olga Fink"
] | 2023-09-08 12:04:47 | http://arxiv.org/abs/2309.04508v1 | http://arxiv.org/pdf/2309.04508v1 | 2309.04508v1 |
Learning Zero-Sum Linear Quadratic Games with Improved Sample Complexity | Zero-sum Linear Quadratic (LQ) games are fundamental in optimal control and
can be used (i) as a dynamic game formulation for risk-sensitive or robust
control, or (ii) as a benchmark setting for multi-agent reinforcement learning
with two competing agents in continuous state-control spaces. In contrast to
the well-studied single-agent linear quadratic regulator problem, zero-sum LQ
games entail solving a challenging nonconvex-nonconcave min-max problem with an
objective function that lacks coercivity. Recently, Zhang et al. discovered an
implicit regularization property of natural policy gradient methods which is
crucial for safety-critical control systems since it preserves the robustness
of the controller during learning. Moreover, in the model-free setting where
the knowledge of model parameters is not available, Zhang et al. proposed the
first polynomial sample complexity algorithm to reach an
$\epsilon$-neighborhood of the Nash equilibrium while maintaining the desirable
implicit regularization property. In this work, we propose a simpler nested
Zeroth-Order (ZO) algorithm improving sample complexity by several orders of
magnitude. Our main result guarantees a
$\widetilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity under the same
assumptions using a single-point ZO estimator. Furthermore, when the estimator
is replaced by a two-point estimator, our method enjoys a better
$\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity. Our key
improvements rely on a more sample-efficient nested algorithm design and finer
control of the ZO natural gradient estimation error. | [
"Jiduan Wu",
"Anas Barakat",
"Ilyas Fatkhullin",
"Niao He"
] | 2023-09-08 11:47:31 | http://arxiv.org/abs/2309.04272v1 | http://arxiv.org/pdf/2309.04272v1 | 2309.04272v1 |
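The single- and two-point zeroth-order estimators at the heart of the result above are easy to state concretely. This is a minimal two-point sphere-smoothing gradient estimator on a toy quadratic cost; the constants and the toy cost are assumptions.

```python
import numpy as np

def zo_grad_two_point(f, x, delta=1e-2, n_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate, averaged over random unit
    directions u:  g ~= (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u.
    The single-point variant replaces f(x - delta*u) with f(x), using fewer
    function evaluations per direction at the price of higher variance."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        g += (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return g / n_dirs

f = lambda x: 0.5 * x @ x          # toy cost; the true gradient is x itself
x = np.array([1.0, -2.0, 0.5])
print(zo_grad_two_point(f, x))     # approximately [1.0, -2.0, 0.5]
```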
Optimal Rate of Kernel Regression in Large Dimensions | We perform a study on kernel regression for large-dimensional data (where the
sample size $n$ depends polynomially on the dimension $d$ of the samples,
i.e., $n\asymp d^{\gamma}$ for some $\gamma >0$ ). We first build a general
tool to characterize the upper bound and the minimax lower bound of kernel
regression for large dimensional data through the Mendelson complexity
$\varepsilon_{n}^{2}$ and the metric entropy $\bar{\varepsilon}_{n}^{2}$
respectively. When the target function falls into the RKHS associated with a
(general) inner product model defined on $\mathbb{S}^{d}$, we utilize the new
tool to show that the minimax rate of the excess risk of kernel regression is
$n^{-1/2}$ when $n\asymp d^{\gamma}$ for $\gamma =2, 4, 6, 8, \cdots$. We then
further determine the optimal rate of the excess risk of kernel regression for
all the $\gamma>0$ and find that the curve of optimal rate varying along
$\gamma$ exhibits several new phenomena including the {\it multiple descent
behavior} and the {\it periodic plateau behavior}. As an application, for the
neural tangent kernel (NTK), we also provide a similar explicit description of
the curve of optimal rate. As a direct corollary, we know these claims hold for
wide neural networks as well. | [
"Weihao Lu",
"Haobo Zhang",
"Yicheng Li",
"Manyun Xu",
"Qian Lin"
] | 2023-09-08 11:29:05 | http://arxiv.org/abs/2309.04268v1 | http://arxiv.org/pdf/2309.04268v1 | 2309.04268v1 |
Generating drawdown-realistic financial price paths using path signatures | A novel generative machine learning approach for the simulation of sequences
of financial price data with drawdowns quantifiably close to empirical data is
introduced. Applications such as pricing drawdown insurance options or
developing portfolio drawdown control strategies call for a host of
drawdown-realistic paths. Historical scenarios may be insufficient to
effectively train and backtest the strategy, while standard parametric Monte
Carlo does not adequately preserve drawdowns. We advocate a non-parametric
Monte Carlo approach combining a variational autoencoder generative model with
a drawdown reconstruction loss function. To overcome issues of numerical
complexity and non-differentiability, we approximate drawdown as a linear
function of the moments of the path, known in the literature as path
signatures. We prove the required regularity of the drawdown function and
consistency of the approximation. Furthermore, we obtain close numerical
approximations using linear regression for fractional Brownian and empirical
data. We argue that linear combinations of the moments of a path yield a
mathematically non-trivial smoothing of the drawdown function, which gives one
leeway to simulate drawdown-realistic price paths by including drawdown
evaluation metrics in the learning objective. We conclude with numerical
experiments on mixed equity, bond, real estate and commodity portfolios and
obtain a host of drawdown-realistic paths. | [
"Emiel Lemahieu",
"Kris Boudt",
"Maarten Wyns"
] | 2023-09-08 10:06:40 | http://arxiv.org/abs/2309.04507v1 | http://arxiv.org/pdf/2309.04507v1 | 2309.04507v1 |
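The two ingredients above, the drawdown functional and its linear approximation in path features, can be sketched as follows. The simple path moments used here are low-order stand-ins for the full signature terms in the paper.

```python
import numpy as np

def drawdown(path):
    """Running drawdown of a price path: dd_t = max_{s<=t} X_s - X_t."""
    return np.maximum.accumulate(path) - path

rng = np.random.default_rng(0)
paths = rng.normal(0, 0.01, size=(500, 252)).cumsum(axis=1)  # toy price paths
y = np.array([drawdown(p).max() for p in paths])             # max drawdowns
increments = np.diff(paths, axis=1)
feats = np.column_stack([
    paths[:, -1],                      # terminal increment (level-1 signature)
    (increments ** 2).sum(axis=1),     # quadratic-variation proxy
    np.abs(increments).sum(axis=1),    # total-variation proxy
])
X = np.column_stack([np.ones(len(y)), feats])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear smoothing of drawdown
print("in-sample corr:", np.corrcoef(X @ coef, y)[0, 1])
```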
Adaptive Distributed Kernel Ridge Regression: A Feasible Distributed Learning Scheme for Data Silos | Data silos, mainly caused by privacy and interoperability, significantly
constrain collaborations among different organizations with similar data for
the same purpose. Distributed learning based on divide-and-conquer provides a
promising way to settle the data silos, but it suffers from several challenges,
including autonomy, privacy guarantees, and the necessity of collaborations.
This paper focuses on developing an adaptive distributed kernel ridge
regression (AdaDKRR) by taking autonomy in parameter selection, privacy in
communicating non-sensitive information, and the necessity of collaborations in
performance improvement into account. We provide both solid theoretical
verification and comprehensive experiments for AdaDKRR to demonstrate its
feasibility and effectiveness. Theoretically, we prove that under some mild
conditions, AdaDKRR performs similarly to running the optimal learning
algorithms on the whole data, verifying the necessity of collaborations and
showing that no other distributed learning scheme can essentially beat AdaDKRR
under the same conditions. Numerically, we test AdaDKRR on both toy simulations
and two real-world applications to show that AdaDKRR is superior to other
existing distributed learning schemes. All these results show that AdaDKRR is a
feasible scheme for overcoming data silos, which is highly desirable in
numerous application domains such as intelligent decision-making, price
forecasting, and performance prediction for products. | [
"Di Wang",
"Xiaotong Liu",
"Shao-Bo Lin",
"Ding-Xuan Zhou"
] | 2023-09-08 09:54:36 | http://arxiv.org/abs/2309.04236v1 | http://arxiv.org/pdf/2309.04236v1 | 2309.04236v1 |
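A minimal sketch of the divide-and-conquer backbone behind distributed KRR, using scikit-learn. The fixed `alpha`/`gamma` values are placeholders for the adaptive per-silo parameter selection that AdaDKRR performs.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def distributed_krr_predict(shards, X_test, alpha=1e-2, gamma=1.0):
    """Divide-and-conquer KRR: each silo fits a local model on its own data
    and only the (non-sensitive) local predictions are shared and averaged."""
    preds = [KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
             .fit(X, y).predict(X_test) for X, y in shards]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
f = lambda X: np.sin(3 * X[:, 0])
shards = []
for _ in range(5):                         # five data silos
    X = rng.uniform(-1, 1, size=(200, 1))
    shards.append((X, f(X) + 0.1 * rng.normal(size=200)))
X_test = np.linspace(-1, 1, 50)[:, None]
print(np.abs(distributed_krr_predict(shards, X_test) - f(X_test)).mean())
```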
Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models | Decoding visual representations from human brain activity has emerged as a
thriving research domain, particularly in the context of brain-computer
interfaces. Our study presents an innovative method that employs to classify
and reconstruct images from the ImageNet dataset using electroencephalography
(EEG) data from subjects that had viewed the images themselves (i.e. "brain
decoding"). We analyzed EEG recordings from 6 participants, each exposed to 50
images spanning 40 unique semantic categories. These EEG readings were
converted into spectrograms, which were then used to train a convolutional
neural network (CNN), integrated with a knowledge distillation procedure based
on a pre-trained Contrastive Language-Image Pre-Training (CLIP)-based image
classification teacher network. This strategy allowed our model to attain a
top-5 accuracy of 80%, significantly outperforming a standard CNN and various
RNN-based benchmarks. Additionally, we incorporated an image reconstruction
mechanism based on pre-trained latent diffusion models, which allowed us to
generate an estimate of the images which had elicited EEG activity. Therefore,
our architecture not only decodes images from neural activity but also offers a
credible image reconstruction from EEG only, paving the way for e.g. swift,
individualized feedback experiments. Our research represents a significant step
forward in connecting neural signals with visual cognition. | [
"Matteo Ferrante",
"Tommaso Boccato",
"Stefano Bargione",
"Nicola Toschi"
] | 2023-09-08 09:13:50 | http://arxiv.org/abs/2309.07149v1 | http://arxiv.org/pdf/2309.07149v1 | 2309.07149v1 |
Offline Recommender System Evaluation under Unobserved Confounding | Off-Policy Estimation (OPE) methods allow us to learn and evaluate
decision-making policies from logged data. This makes them an attractive choice
for the offline evaluation of recommender systems, and several recent works
have reported successful adoption of OPE methods to this end. An important
assumption underpinning these methods is the absence of unobserved confounders:
random variables that influence both actions and rewards at data collection
time. Because the data collection policy is typically under the practitioner's
control, the unconfoundedness assumption is often left implicit, and its
violations are rarely dealt with in the existing literature.
This work aims to highlight the problems that arise when performing
off-policy estimation in the presence of unobserved confounders, specifically
focusing on a recommendation use-case. We focus on policy-based estimators,
where the logging propensities are learned from logged data. We characterise
the statistical bias that arises due to confounding, and show how existing
diagnostics are unable to uncover such cases. Because the bias depends directly
on the true and unobserved logging propensities, it is non-identifiable. As the
unconfoundedness assumption is famously untestable, this becomes especially
problematic. This paper emphasises this common, yet often overlooked issue.
Through synthetic data, we empirically show how na\"ive propensity estimation
under confounding can lead to severely biased metric estimates that are allowed
to fly under the radar. We aim to cultivate an awareness among researchers and
practitioners of this important problem, and touch upon potential research
directions towards mitigating its effects. | [
"Olivier Jeunen",
"Ben London"
] | 2023-09-08 09:11:26 | http://arxiv.org/abs/2309.04222v1 | http://arxiv.org/pdf/2309.04222v1 | 2309.04222v1 |
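A minimal policy-based (inverse-propensity-scoring) estimator with learned logging propensities makes the failure mode concrete; the bandit setup below is a toy assumption. If an unobserved confounder shifted the true logging propensities away from `pi_hat`, the same code would return a biased estimate with no in-data diagnostic to flag it.

```python
import numpy as np

def ips_estimate(actions, rewards, pi_target, pi_logging_hat):
    """Inverse-propensity-scoring value estimate of a target policy:
    V_hat = mean( r * pi(a|x) / pi0_hat(a|x) ).
    With *learned* propensities pi0_hat, any gap to the true (confounded)
    logging propensities biases V_hat, and the bias is non-identifiable."""
    idx = np.arange(len(actions))
    w = pi_target[idx, actions] / pi_logging_hat[idx, actions]
    return np.mean(w * rewards)

rng = np.random.default_rng(0)
n, k = 10_000, 3
actions = rng.integers(0, k, n)                       # uniform logging policy
rewards = (actions == 2).astype(float) + 0.1 * rng.normal(size=n)
pi_target = np.zeros((n, k)); pi_target[:, 2] = 1.0   # deterministic target
pi_hat = np.full((n, k), 1.0 / k)                     # estimated propensities
print(ips_estimate(actions, rewards, pi_target, pi_hat))  # ~1.0 if pi_hat is right
```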
Concomitant Group Testing | In this paper, we introduce a variation of the group testing problem
capturing the idea that a positive test requires a combination of multiple
``types'' of item. Specifically, we assume that there are multiple disjoint
\emph{semi-defective sets}, and a test is positive if and only if it contains
at least one item from each of these sets. The goal is to reliably identify all
of the semi-defective sets using as few tests as possible, and we refer to this
problem as \textit{Concomitant Group Testing} (ConcGT). We derive a variety of
algorithms for this task, focusing primarily on the case that there are two
semi-defective sets. Our algorithms are distinguished by (i) whether they are
deterministic (zero-error) or randomized (small-error), and (ii) whether they
are non-adaptive, fully adaptive, or have limited adaptivity (e.g., 2 or 3
stages). Both our deterministic adaptive algorithm and our randomized
algorithms (non-adaptive or limited adaptivity) are order-optimal in broad
scaling regimes of interest, and improve significantly over baseline results
that are based on solving a more general problem as an intermediate step (e.g.,
hypergraph learning). | [
"Thach V. Bui",
"Jonathan Scarlett"
] | 2023-09-08 09:11:12 | http://arxiv.org/abs/2309.04221v1 | http://arxiv.org/pdf/2309.04221v1 | 2309.04221v1 |
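The ConcGT test model is simple to state in code: a pool tests positive iff it intersects every semi-defective set. The random pools below only illustrate the non-adaptive setting and are not one of the paper's order-optimal designs.

```python
import numpy as np

def conc_test(pool, semi_defective_sets):
    """A ConcGT test is positive iff the pool contains at least one item
    from *each* semi-defective set."""
    return all(any(i in pool for i in s) for s in semi_defective_sets)

rng = np.random.default_rng(0)
n = 20
D1, D2 = {3, 7}, {11}                  # hidden semi-defective sets
for _ in range(5):                     # a few random non-adaptive pools
    pool = set(rng.choice(n, size=8, replace=False).tolist())
    print(sorted(pool), "->", conc_test(pool, [D1, D2]))
```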
Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse | Counterfactuals operationalised through algorithmic recourse have become a
powerful tool to make artificial intelligence systems explainable.
Conceptually, given an individual classified as y -- the factual -- we seek
actions such that their prediction becomes the desired class y' -- the
counterfactual. This process offers algorithmic recourse that is (1) easy to
customise and interpret, and (2) directly aligned with the goals of each
individual. However, the properties of a "good" counterfactual are still
largely debated; it remains an open challenge to effectively locate a
counterfactual along with its corresponding recourse. Some strategies use
gradient-driven methods, but these offer no guarantees on the feasibility of
the recourse and are open to adversarial attacks on carefully created
manifolds. This can lead to unfairness and lack of robustness. Other methods
are data-driven, which mostly addresses the feasibility problem at the expense
of privacy, security and secrecy as they require access to the entire training
data set. Here, we introduce LocalFACE, a model-agnostic technique that
composes feasible and actionable counterfactual explanations using
locally-acquired information at each step of the algorithmic recourse. Our
explainer preserves the privacy of users by only leveraging data that it
specifically requires to construct actionable algorithmic recourse, and
protects the model by offering transparency solely in the regions deemed
necessary for the intervention. | [
"Edward A. Small",
"Jeffrey N. Clark",
"Christopher J. McWilliams",
"Kacper Sokol",
"Jeffrey Chan",
"Flora D. Salim",
"Raul Santos-Rodriguez"
] | 2023-09-08 08:47:23 | http://arxiv.org/abs/2309.04211v1 | http://arxiv.org/pdf/2309.04211v1 | 2309.04211v1 |
COVID-19 Detection System: A Comparative Analysis of System Performance Based on Acoustic Features of Cough Audio Signals | A wide range of respiratory diseases, such as cold and flu, asthma, and
COVID-19, affect people's daily lives worldwide. In medical practice,
respiratory sounds are widely used in medical services to diagnose various
respiratory illnesses and lung disorders. The traditional diagnosis of such
sounds requires specialized knowledge, which can be costly and reliant on human
expertise. Recently, cough audio recordings have been used to automate the
process of detecting respiratory conditions. This research aims to examine
various acoustic features that enhance the performance of machine learning (ML)
models in detecting COVID-19 from cough signals. This study investigates the
efficacy of three feature extraction techniques, including Mel Frequency
Cepstral Coefficients (MFCC), Chroma, and Spectral Contrast features, on two ML
algorithms, Support Vector Machine (SVM) and Multilayer Perceptron (MLP), and
thus proposes an efficient COVID-19 detection system. The proposed system
produces a practical solution and demonstrates state-of-the-art
classification performance on the COUGHVID and Virufy datasets for COVID-19
detection. | [
"Asmaa Shati",
"Ghulam Mubashar Hassan",
"Amitava Datta"
] | 2023-09-08 08:33:24 | http://arxiv.org/abs/2309.04505v1 | http://arxiv.org/pdf/2309.04505v1 | 2309.04505v1 |
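A minimal sketch of the feature-extraction-plus-classifier pipeline, assuming librosa for the three feature families and mean-pooling over time (a common simplification; the paper's exact preprocessing may differ). `files` and `labels` are hypothetical placeholders for a cough dataset such as COUGHVID.

```python
import librosa
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(path):
    """Per-recording feature vector from the three studied families:
    MFCC, chroma, and spectral contrast, mean-pooled over time."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    return np.concatenate([m.mean(axis=1) for m in (mfcc, chroma, contrast)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# Hypothetical usage on a labeled cough dataset:
# X = np.stack([extract_features(f) for f in files]); clf.fit(X, labels)
```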
Towards Mitigating Architecture Overfitting in Dataset Distillation | Dataset distillation methods have demonstrated remarkable performance for
neural networks trained with very limited training data. However, a significant
challenge arises in the form of architecture overfitting: the distilled
training data synthesized by a specific network architecture (i.e., training
network) yields poor performance when used to train other network architectures
(i.e., test networks). This paper addresses this issue and proposes a series of
approaches in both architecture designs and training schemes which can be
adopted together to boost the generalization performance across different
network architectures on the distilled training data. We conduct extensive
experiments to demonstrate the effectiveness and generality of our methods.
Particularly, across various scenarios involving different sizes of distilled
data, our approaches achieve comparable or superior performance to existing
methods when training on the distilled data using networks with larger
capacities. | [
"Xuyang Zhong",
"Chen Liu"
] | 2023-09-08 08:12:29 | http://arxiv.org/abs/2309.04195v1 | http://arxiv.org/pdf/2309.04195v1 | 2309.04195v1 |
Compositional Learning of Visually-Grounded Concepts Using Reinforcement | Deep reinforcement learning agents need to be trained over millions of
episodes to decently solve navigation tasks grounded to instructions.
Furthermore, their ability to generalize to novel combinations of instructions
is unclear. Interestingly however, children can decompose language-based
instructions and navigate to the referred object, even if they have not seen
the combination of queries prior. Hence, we created three 3D environments to
investigate how deep RL agents learn and compose color-shape based
combinatorial instructions to solve novel combinations in a spatial navigation
task. First, we explore if agents can perform compositional learning, and
whether they can leverage frozen text encoders (e.g. CLIP, BERT) to learn
word combinations in fewer episodes. Next, we demonstrate that when agents are
pretrained on the shape or color concepts separately, they show a 20-fold
decrease in the training episodes needed to solve unseen combinations of
instructions. Lastly, we show that agents pretrained on concept and
compositional learning achieve significantly higher reward when evaluated
zero-shot on novel color-shape1-shape2 visual object combinations. Overall, our
results highlight the foundations needed to increase an agent's proficiency in
composing word groups through reinforcement learning and its ability for
zero-shot generalization to new combinations. | [
"Zijun Lin",
"Haidi Azaman",
"M Ganesh Kumar",
"Cheston Tan"
] | 2023-09-08 07:26:49 | http://arxiv.org/abs/2309.04504v1 | http://arxiv.org/pdf/2309.04504v1 | 2309.04504v1 |
Leveraging Prototype Patient Representations with Feature-Missing-Aware Calibration to Mitigate EHR Data Sparsity | Electronic Health Record (EHR) data frequently exhibits sparse
characteristics, posing challenges for predictive modeling. Current direct
imputation approaches, such as matrix imputation, hinge on referencing analogous
rows or columns to fill in missing raw data and do not differentiate between
imputed and actual values. As a result, models may inadvertently incorporate
irrelevant or deceptive information with respect to the prediction objective,
thereby compromising the efficacy of downstream performance. While some methods
strive to recalibrate or augment EHR embeddings after direct imputation, they
often mistakenly prioritize imputed features. This misprioritization can
introduce biases or inaccuracies into the model. To tackle these issues, our
work resorts to indirect imputation, where we leverage prototype
representations from similar patients to obtain a denser embedding. Recognizing
the limitation that missing features are typically treated the same as present
ones when measuring similar patients, our approach designs a feature confidence
learner module. This module is sensitive to the missing feature status,
enabling the model to better judge the reliability of each feature. Moreover,
we propose a novel patient similarity metric that takes feature confidence into
account, ensuring that evaluations are not based merely on potentially
inaccurate imputed values. Consequently, our work captures dense prototype
patient representations with feature-missing-aware calibration process.
Comprehensive experiments demonstrate that the designed model surpasses established
EHR-focused models, with a statistically significant improvement on the
in-hospital mortality prediction task on the MIMIC-III and MIMIC-IV datasets. The code
is publicly available at \url{https://github.com/yhzhu99/SparseEHR} to ensure
reproducibility. | [
"Yinghao Zhu",
"Zixiang Wang",
"Long He",
"Shiyun Xie",
"Zixi Chen",
"Jingkun An",
"Liantao Ma",
"Chengwei Pan"
] | 2023-09-08 07:01:38 | http://arxiv.org/abs/2309.04160v2 | http://arxiv.org/pdf/2309.04160v2 | 2309.04160v2 |
Adversarial attacks on hybrid classical-quantum Deep Learning models for Histopathological Cancer Detection | We present an effective application of quantum machine learning in
histopathological cancer detection. The study here emphasizes two primary
applications of hybrid classical-quantum Deep Learning models. The first
application is to build a classification model for histopathological cancer
detection using the quantum transfer learning strategy. The second application
is to test the performance of this model for various adversarial attacks.
Rather than using a single transfer learning model, the hybrid
classical-quantum models are tested using multiple transfer learning models,
especially ResNet18, VGG-16, Inception-v3, and AlexNet as feature extractors,
integrating them with several variational quantum
circuits (VQCs) with high expressibility. As a result, we provide a comparative
analysis of classical models and hybrid classical-quantum transfer learning
models for histopathological cancer detection under several adversarial
attacks. We compared the performance accuracy of the classical model with the
hybrid classical-quantum model using the PennyLane default quantum simulator. We
also observed that for histopathological cancer detection under several
adversarial attacks, Hybrid Classical-Quantum (HCQ) models provided better
accuracy than classical image classification models. | [
"Biswaraj Baral",
"Reek Majumdar",
"Bhavika Bhalgamiya",
"Taposh Dutta Roy"
] | 2023-09-08 06:37:54 | http://arxiv.org/abs/2309.06377v1 | http://arxiv.org/pdf/2309.06377v1 | 2309.06377v1 |
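A minimal PennyLane sketch of a hybrid classical-quantum classifier in the spirit described above: a classical feature stub feeds a strongly entangling VQC on the default.qubit simulator. The layer sizes and the 512-dimensional feature input (standing in for, e.g., ResNet18 features) are assumptions.

```python
import pennylane as qml
import torch

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(inputs, weights):
    # Angle-encode classical features, then apply an expressive ansatz.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(vqc, {"weights": (n_layers, n_qubits, 3)})
# A 512-dim feature stub stands in for a pretrained CNN feature extractor.
model = torch.nn.Sequential(torch.nn.Linear(512, n_qubits), qlayer,
                            torch.nn.Linear(n_qubits, 2))
print(model(torch.randn(1, 512)).shape)   # torch.Size([1, 2])
```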
Preserved Edge Convolutional Neural Network for Sensitivity Enhancement of Deuterium Metabolic Imaging (DMI) | Purpose: Common to most MRSI techniques, the spatial resolution and the
minimal scan duration of Deuterium Metabolic Imaging (DMI) are limited by the
achievable SNR. This work presents a deep learning method for sensitivity
enhancement of DMI.
Methods: A convolutional neural network (CNN) was designed to estimate the
2H-labeled metabolite concentrations from low SNR and distorted DMI FIDs. The
CNN was trained with synthetic data that represent a range of SNR levels
typically encountered in vivo. The estimation precision was further improved by
fine-tuning the CNN with MRI-based edge-preserving regularization for each DMI
dataset. The proposed processing method, PReserved Edge ConvolutIonal neural
network for Sensitivity Enhanced DMI (PRECISE-DMI), was applied to simulation
studies and in vivo experiments to evaluate the anticipated improvements in SNR
and investigate the potential for inaccuracies.
Results: PRECISE-DMI visually improved the metabolic maps of low SNR
datasets, and quantitatively provided higher precision than the standard
Fourier reconstruction. Processing of DMI data acquired in rat brain tumor
models resulted in more precise determination of 2H-labeled lactate and
glutamate + glutamine levels, at increased spatial resolution (from >8 to 2
$\mu$L) or shortened scan time (from 32 to 4 min) compared to standard
acquisitions. However, rigorous SD-bias analyses showed that overuse of the
edge-preserving regularization can compromise the accuracy of the results.
Conclusion: PRECISE-DMI allows a flexible trade-off between enhancing the
sensitivity of DMI and minimizing the inaccuracies. With typical settings, the
DMI sensitivity can be improved by 3-fold while retaining the capability to
detect local signal variations. | [
"Siyuan Dong",
"Henk M. De Feyter",
"Monique A. Thomas",
"Robin A. de Graaf",
"James S. Duncan"
] | 2023-09-08 03:41:54 | http://arxiv.org/abs/2309.04100v2 | http://arxiv.org/pdf/2309.04100v2 | 2309.04100v2 |
Modeling Recommender Ecosystems: Research Challenges at the Intersection of Mechanism Design, Reinforcement Learning and Generative Models | Modern recommender systems lie at the heart of complex ecosystems that couple
the behavior of users, content providers, advertisers, and other actors.
Despite this, the focus of the majority of recommender research -- and most
practical recommenders of any import -- is on the local, myopic optimization of
the recommendations made to individual users. This comes at a significant cost
to the long-term utility that recommenders could generate for their users. We
argue that explicitly modeling the incentives and behaviors of all actors in
the system -- and the interactions among them induced by the recommender's
policy -- is strictly necessary if one is to maximize the value the system
brings to these actors and improve overall ecosystem "health". Doing so
requires: optimization over long horizons using techniques such as
reinforcement learning; making inevitable tradeoffs in the utility that can be
generated for different actors using the methods of social choice; reducing
information asymmetry, while accounting for incentives and strategic behavior,
using the tools of mechanism design; better modeling of both user and
item-provider behaviors by incorporating notions from behavioral economics and
psychology; and exploiting recent advances in generative and foundation models
to make these mechanisms interpretable and actionable. We propose a conceptual
framework that encompasses these elements, and articulate a number of research
challenges that emerge at the intersection of these different disciplines. | [
"Craig Boutilier",
"Martin Mladenov",
"Guy Tennenholtz"
] | 2023-09-08 03:20:58 | http://arxiv.org/abs/2309.06375v2 | http://arxiv.org/pdf/2309.06375v2 | 2309.06375v2 |
Sample-Efficient Co-Design of Robotic Agents Using Multi-fidelity Training on Universal Policy Network | Co-design involves simultaneously optimizing the controller and the agent's
physical design. Its inherent bi-level optimization formulation necessitates an
outer loop design optimization driven by an inner loop control optimization.
This can be challenging when the design space is large and each design
evaluation involves a data-intensive reinforcement learning process for control
optimization. To improve sample efficiency, we propose a
multi-fidelity-based design exploration strategy based on Hyperband where we
tie the controllers learnt across the design spaces through a universal policy
learner for warm-starting the subsequent controller learning problems. Further,
we recommend a particular way of traversing the Hyperband-generated design
matrix that ensures that the stochasticity of Hyperband is reduced the most
by the increasing warm-starting effect of the universal policy learner as it
is strengthened with each new design evaluation. Experiments performed on a
wide range of agent design problems demonstrate the superiority of our method
compared to the baselines. Additionally, analysis of the optimized designs
shows interesting design alterations including design simplifications and
non-intuitive alterations that have emerged in the biological world. | [
"Kishan R. Nagiredla",
"Buddhika L. Semage",
"Thommen G. Karimpanal",
"Arun Kumar A. V",
"Santu Rana"
] | 2023-09-08 02:54:31 | http://arxiv.org/abs/2309.04085v1 | http://arxiv.org/pdf/2309.04085v1 | 2309.04085v1 |
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning | Real-world graphs naturally exhibit hierarchical or cyclical structures that
are unfit for the typical Euclidean space. While there exist graph neural
networks that leverage hyperbolic or spherical spaces to learn representations
that embed such structures more accurately, these methods are confined under
the message-passing paradigm, making the models vulnerable to side-effects
such as oversmoothing and oversquashing. More recent work has proposed global
attention-based graph Transformers that can easily model long-range
interactions, but their extensions towards non-Euclidean geometry are yet
unexplored. To bridge this gap, we propose Fully Product-Stereographic
Transformer, a generalization of Transformers towards operating entirely on the
product of constant curvature spaces. When combined with tokenized graph
Transformers, our model can learn the curvature appropriate for the input graph
in an end-to-end fashion, without the need of additional tuning on different
curvature initializations. We also provide a kernelized approach to
non-Euclidean attention, which enables our model to run in time and memory cost
linear to the number of nodes and edges while respecting the underlying
geometry. Experiments on graph reconstruction and node classification
demonstrate the benefits of generalizing Transformers to the non-Euclidean
domain. | [
"Sungjun Cho",
"Seunghyuk Cho",
"Sungwoo Park",
"Hankook Lee",
"Honglak Lee",
"Moontae Lee"
] | 2023-09-08 02:44:37 | http://arxiv.org/abs/2309.04082v1 | http://arxiv.org/pdf/2309.04082v1 | 2309.04082v1 |
UER: A Heuristic Bias Addressing Approach for Online Continual Learning | Online continual learning aims to continuously train neural networks from a
continuous data stream with a single pass through the data. As the most effective
approach, rehearsal-based methods replay part of the previous data. Commonly
used predictors in existing methods tend to generate biased dot-product logits
that prefer the classes of current data, which is known as a bias issue and
a phenomenon of forgetting. Many approaches have been proposed to overcome the
forgetting problem by correcting the bias; however, they still need to be
improved in the online setting. In this paper, we try to address the bias issue by
a more straightforward and more efficient method. By decomposing the
dot-product logits into an angle factor and a norm factor, we empirically find
that the bias problem mainly occurs in the angle factor, which can be used to
learn novel knowledge as cosine logits. On the contrary, the norm factor
abandoned by existing methods helps remember historical knowledge. Based on
this observation, we intuitively propose to leverage the norm factor to balance
the new and old knowledge for addressing the bias. To this end, we develop a
heuristic approach called unbias experience replay (UER). UER learns current
samples only by the angle factor and further replays previous samples by both
the norm and angle factors. Extensive experiments on three datasets show that
UER achieves superior performance over various state-of-the-art methods. The
code is in https://github.com/FelixHuiweiLin/UER. | [
"Huiwei Lin",
"Shanshan Feng",
"Baoquan Zhang",
"Hongliang Qiao",
"Xutao Li",
"Yunming Ye"
] | 2023-09-08 02:42:40 | http://arxiv.org/abs/2309.04081v1 | http://arxiv.org/pdf/2309.04081v1 | 2309.04081v1 |
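The decomposition UER builds on can be checked in a few lines: dot-product logits factor exactly into an angle term (cosine logits) and a norm term. The replay rules that treat the two factors differently are not reproduced here.

```python
import torch
import torch.nn.functional as F

def decompose_logits(features, weight):
    """Split dot-product logits  w.x = ||w|| ||x|| cos(theta)  into an
    angle factor (cosine logits) and a norm factor."""
    cos = F.normalize(features, dim=1) @ F.normalize(weight, dim=1).T
    norm = features.norm(dim=1, keepdim=True) * weight.norm(dim=1)
    return cos, norm               # dot-product logits == cos * norm

x = torch.randn(8, 128)            # batch of embeddings
w = torch.randn(10, 128)           # classifier weights for 10 classes
cos, norm = decompose_logits(x, w)
assert torch.allclose(cos * norm, x @ w.T, atol=1e-5)
```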
Enabling the Evaluation of Driver Physiology Via Vehicle Dynamics | Driving is a daily routine for many individuals across the globe. This paper
presents the configuration and methodologies used to transform a vehicle into a
connected ecosystem capable of assessing driver physiology. We integrated an
array of commercial sensors from the automotive and digital health sectors
along with driver inputs from the vehicle itself. This amalgamation of sensors
allows for meticulous recording of the external conditions and driving
maneuvers. These data streams are processed to extract key parameters,
providing insights into driver behavior in relation to their external
environment and illuminating vital physiological responses. This innovative
driver evaluation system holds the potential to amplify road safety. Moreover,
when paired with data from conventional health settings, it may enhance early
detection of health-related complications. | [
"Rodrigo Ordonez-Hurtado",
"Bo Wen",
"Nicholas Barra",
"Ryan Vimba",
"Sergio Cabrero-Barros",
"Sergiy Zhuk",
"Jeffrey L. Rogers"
] | 2023-09-08 02:27:28 | http://arxiv.org/abs/2309.04078v1 | http://arxiv.org/pdf/2309.04078v1 | 2309.04078v1 |
Riemannian Langevin Monte Carlo schemes for sampling PSD matrices with fixed rank | This paper introduces two explicit schemes to sample matrices from Gibbs
distributions on $\mathcal S^{n,p}_+$, the manifold of real positive
semi-definite (PSD) matrices of size $n\times n$ and rank $p$. Given an energy
function $\mathcal E:\mathcal S^{n,p}_+\to \mathbb{R}$ and certain Riemannian
metrics $g$ on $\mathcal S^{n,p}_+$, these schemes rely on an Euler-Maruyama
discretization of the Riemannian Langevin equation (RLE) with Brownian motion
on the manifold. We present numerical schemes for RLE under two fundamental
metrics on $\mathcal S^{n,p}_+$: (a) the metric obtained from the embedding of
$\mathcal S^{n,p}_+ \subset \mathbb{R}^{n\times n} $; and (b) the
Bures-Wasserstein metric corresponding to quotient geometry. We also provide
examples of energy functions with explicit Gibbs distributions that allow
numerical validation of these schemes. | [
"Tianmin Yu",
"Shixin Zheng",
"Jianfeng Lu",
"Govind Menon",
"Xiangxiong Zhang"
] | 2023-09-08 02:09:40 | http://arxiv.org/abs/2309.04072v1 | http://arxiv.org/pdf/2309.04072v1 | 2309.04072v1 |
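For intuition, this is the flat-space template the paper's manifold schemes discretize: one Euler-Maruyama step of the Langevin equation targeting the Gibbs distribution proportional to $\exp(-\mathcal E)$. The Riemannian versions replace the Euclidean drift and noise with metric-dependent counterparts on $\mathcal S^{n,p}_+$.

```python
import numpy as np

def langevin_em(grad_E, x0, h=1e-3, n_steps=20_000, beta=1.0, rng=None):
    """Euler-Maruyama discretization of  dX = -grad E(X) dt + sqrt(2/beta) dB,
    whose invariant law is the Gibbs distribution proportional to exp(-beta E)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    out = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - h * grad_E(x) + np.sqrt(2.0 * h / beta) * rng.normal(size=x.size)
        out[k] = x
    return out

# E(x) = |x|^2 / 2, so the Gibbs target is a standard Gaussian.
samples = langevin_em(lambda x: x, x0=np.zeros(2))
print(samples[5000:].mean(axis=0), samples[5000:].var(axis=0))  # ~0 and ~1
```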
3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation | Pretraining molecular representations from large unlabeled data is essential
for molecular property prediction due to the high cost of obtaining
ground-truth labels. While there exist various 2D graph-based molecular
pretraining approaches, these methods struggle to show statistically
significant gains in predictive performance. Recent work has thus instead
proposed 3D conformer-based pretraining under the task of denoising, which led
to promising results. During downstream finetuning, however, models trained
with 3D conformers require accurate atom-coordinates of previously unseen
molecules, which are computationally expensive to acquire at scale. In light of
this limitation, we propose D&D, a self-supervised molecular representation
learning framework that pretrains a 2D graph encoder by distilling
representations from a 3D denoiser. With denoising followed by cross-modal
knowledge distillation, our approach enjoys use of knowledge obtained from
denoising as well as painless application to downstream tasks with no access to
accurate conformers. Experiments on real-world molecular property prediction
datasets show that the graph encoder trained via D&D can infer 3D information
based on the 2D graph and shows superior performance and label-efficiency
against other baselines. | [
"Sungjun Cho",
"Dae-Woong Jeong",
"Sung Moon Ko",
"Jinwoo Kim",
"Sehui Han",
"Seunghoon Hong",
"Honglak Lee",
"Moontae Lee"
] | 2023-09-08 01:36:58 | http://arxiv.org/abs/2309.04062v1 | http://arxiv.org/pdf/2309.04062v1 | 2309.04062v1 |
Weighted Unsupervised Domain Adaptation Considering Geometry Features and Engineering Performance of 3D Design Data | The product design process in manufacturing involves iterative design
modeling and analysis to achieve the target engineering performance, but such
an iterative process is time-consuming and computationally expensive. Recently,
deep learning-based engineering performance prediction models have been
proposed to accelerate design optimization. However, they only guarantee
predictions on training data and may be inaccurate when applied to new domain
data. In particular, 3D design data have complex features, which means domains
with various distributions exist. Thus, the utilization of deep learning has
limitations due to the heavy data collection and training burdens. We propose a
bi-weighted unsupervised domain adaptation approach that considers the geometry
features and engineering performance of 3D design data. It is specialized for
deep learning-based engineering performance predictions. Domain-invariant
features can be extracted through an adversarial training strategy by using
hypothesis discrepancy, and a multi-output regression task can be performed
with the extracted features to predict the engineering performance. In
particular, we present a source instance weighting method suitable for 3D
design data to avoid negative transfers. The developed bi-weighting strategy
based on the geometry features and engineering performance of engineering
structures is incorporated into the training process. The proposed model is
tested on a wheel impact analysis problem to predict the magnitude of the
maximum von Mises stress and the corresponding location of 3D road wheels. This
mechanism can reduce the target risk for unlabeled target domains on the basis
of weighted multi-source domain knowledge and can efficiently replace
conventional finite element analysis. | [
"Seungyeon Shin",
"Namwoo Kang"
] | 2023-09-08 00:26:44 | http://arxiv.org/abs/2309.04499v1 | http://arxiv.org/pdf/2309.04499v1 | 2309.04499v1 |
Bayesian Dynamic DAG Learning: Application in Discovering Dynamic Effective Connectome of Brain | Understanding the complex mechanisms of the brain can be unraveled by
extracting the Dynamic Effective Connectome (DEC). Recently, score-based
Directed Acyclic Graph (DAG) discovery methods have shown significant
improvements in extracting the causal structure and inferring effective
connectivity. However, learning DEC through these methods still faces two main
challenges: one with the fundamental shortcomings of high-dimensional dynamic DAG
discovery methods and the other with the low quality of fMRI data. In this
paper, we introduce Bayesian Dynamic DAG learning with M-matrices Acyclicity
characterization \textbf{(BDyMA)} method to address the challenges in
discovering DEC. The presented dynamic causal model enables us to discover
bidirected edges as well. Leveraging an unconstrained framework in the BDyMA
method leads to more accurate results in detecting high-dimensional networks,
achieving sparser outcomes, making it particularly suitable for extracting DEC.
Additionally, the score function of the BDyMA method allows the incorporation
of prior knowledge into the process of dynamic causal discovery which further
enhances the accuracy of results. Comprehensive simulations on synthetic data
and experiments on Human Connectome Project (HCP) data demonstrate that our
method can handle both of the two main challenges, yielding more accurate and
reliable DEC compared to state-of-the-art and baseline methods. Additionally,
we investigate the trustworthiness of DTI data as prior knowledge for DEC
discovery and show the improvements in DEC discovery when the DTI data is
incorporated into the process. | [
"Abdolmahdi Bagheri",
"Mohammad Pasande",
"Kevin Bello",
"Alireza Akhondi-Asl",
"Babak Nadjar Araabi"
] | 2023-09-07 22:54:06 | http://arxiv.org/abs/2309.07080v1 | http://arxiv.org/pdf/2309.07080v1 | 2309.07080v1 |
SRN-SZ: Deep Leaning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks | The fast growth of computational power and scales of modern super-computing
systems have raised great challenges for the management of exascale scientific
data. To maintain the usability of scientific data, error-bounded lossy
compression has been proposed and developed as an essential technique for the size
reduction of scientific data with constrained data distortion. Among the
diverse datasets generated by various scientific simulations, certain datasets
cannot be effectively compressed by existing error-bounded lossy compressors
with traditional techniques. The recent success of Artificial Intelligence has
inspired several researchers to integrate neural networks into error-bounded
lossy compressors. However, those works still suffer from limited compression
ratios and/or extremely low efficiencies. To address those issues and improve
the compression on the hard-to-compress datasets, in this paper, we propose
SRN-SZ, which is a deep learning-based scientific error-bounded lossy
compressor leveraging the hierarchical data grid expansion paradigm implemented
by super-resolution neural networks. SRN-SZ applies the most advanced
super-resolution network HAT for its compression, which avoids time-consuming
per-data training. In experiments compared with various state-of-the-art
compressors, SRN-SZ achieves up to 75% compression ratio improvements under the
same error bound and up to 80% compression ratio improvements under the same
PSNR than the second-best compressor. | [
"Jinyang Liu",
"Sheng Di",
"Sian Jin",
"Kai Zhao",
"Xin Liang",
"Zizhong Chen",
"Franck Cappello"
] | 2023-09-07 22:15:32 | http://arxiv.org/abs/2309.04037v1 | http://arxiv.org/pdf/2309.04037v1 | 2309.04037v1 |
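Independently of the network used for prediction, the error-bounded guarantee comes from quantizing the residual between the data and the predictor's output. The generic sketch below (not SRN-SZ's exact coder) shows why the pointwise bound holds.

```python
import numpy as np

def bound_errors(data, approx, eb):
    """Quantize the residual to multiples of 2*eb so the reconstruction never
    deviates from the original by more than eb; the integer codes are what an
    entropy coder would then compress."""
    codes = np.round((data - approx) / (2.0 * eb))
    recon = approx + 2.0 * eb * codes
    assert np.max(np.abs(recon - data)) <= eb + 1e-12
    return codes.astype(np.int64), recon

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)
approx = data + rng.normal(scale=0.3, size=data.shape)  # stand-in predictor
codes, recon = bound_errors(data, approx, eb=1e-2)
print(np.abs(recon - data).max())                        # <= 0.01
```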
Brief technical note on linearizing recurrent neural networks (RNNs) before vs after the pointwise nonlinearity | Linearization of the dynamics of recurrent neural networks (RNNs) is often
used to study their properties. The same RNN dynamics can be written in terms
of the ``activations" (the net inputs to each unit, before its pointwise
nonlinearity) or in terms of the ``activities" (the output of each unit, after
its pointwise nonlinearity); the two corresponding linearizations are different
from each other. This brief and informal technical note describes the
relationship between the two linearizations, between the left and right
eigenvectors of their dynamics matrices, and shows that some context-dependent
effects are readily apparent under linearization of activity dynamics but not
linearization of activation dynamics. | [
"Marino Pagan",
"Adrian Valente",
"Srdjan Ostojic",
"Carlos D. Brody"
] | 2023-09-07 21:57:15 | http://arxiv.org/abs/2309.04030v1 | http://arxiv.org/pdf/2309.04030v1 | 2309.04030v1 |
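The relationship is easy to verify numerically. Assuming the standard rate-model conventions (an assumption; the note's exact equations may differ), the two Jacobians are $-I + WD$ and $-I + DW$ with $D = \mathrm{diag}(\phi')$ at the operating point; $WD$ and $DW$ share a characteristic polynomial, so the two linearizations share eigenvalues while their left and right eigenvectors differ by a factor of $D$.

```python
import numpy as np

# Jacobians of the two linearizations at an operating point, with
# D = diag(phi'(a)):  J_activation = -I + W D   vs.   J_activity = -I + D W.
rng = np.random.default_rng(0)
n = 5
W = rng.normal(size=(n, n)) / np.sqrt(n)
a = rng.normal(size=n)
D = np.diag(1.0 - np.tanh(a) ** 2)      # phi = tanh, so phi' = 1 - tanh^2
J_act = -np.eye(n) + W @ D
J_r = -np.eye(n) + D @ W
print(np.sort_complex(np.linalg.eigvals(J_act)))
print(np.sort_complex(np.linalg.eigvals(J_r)))   # identical spectrum
```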
TIDE: Textual Identity Detection for Evaluating and Augmenting Classification and Language Models | Machine learning models can perpetuate unintended biases from unfair and
imbalanced datasets. Evaluating and debiasing these datasets and models is
especially hard in text datasets where sensitive attributes such as race,
gender, and sexual orientation may not be available. When these models are
deployed into society, they can lead to unfair outcomes for historically
underrepresented groups. In this paper, we present a dataset coupled with an
approach to improve text fairness in classifiers and language models. We create
a new, more comprehensive identity lexicon, TIDAL, which includes 15,123
identity terms and associated sense context across three demographic
categories. We leverage TIDAL to develop an identity annotation and
augmentation tool that can be used to improve the availability of identity
context and the effectiveness of ML fairness techniques. We evaluate our
approaches using human contributors, and additionally run experiments focused
on dataset and model debiasing. Results show our assistive annotation technique
improves the reliability and velocity of human-in-the-loop processes. Our
dataset and methods uncover more disparities during evaluation, and also
produce more fair models during remediation. These approaches provide a
practical path forward for scaling classifier and generative model fairness in
real-world settings. | [
"Emmanuel Klu",
"Sameer Sethi"
] | 2023-09-07 21:44:42 | http://arxiv.org/abs/2309.04027v1 | http://arxiv.org/pdf/2309.04027v1 | 2309.04027v1 |
Optimal Transport with Tempered Exponential Measures | In the field of optimal transport, two prominent subfields face each other:
(i) unregularized optimal transport, "\`a-la-Kantorovich", which leads to
extremely sparse plans but with algorithms that scale poorly, and (ii)
entropic-regularized optimal transport, "\`a-la-Sinkhorn-Cuturi", which gets
near-linear approximation algorithms but leads to maximally un-sparse plans. In
this paper, we show that a generalization of the latter to tempered exponential
measures, a generalization of exponential families with indirect measure
normalization, gets to a very convenient middle ground, with both very fast
approximation algorithms and sparsity which is under control up to sparsity
patterns. In addition, it fits naturally in the unbalanced optimal transport
problem setting as well. | [
"Ehsan Amid",
"Frank Nielsen",
"Richard Nock",
"Manfred K. Warmuth"
] | 2023-09-07 20:53:23 | http://arxiv.org/abs/2309.04015v2 | http://arxiv.org/pdf/2309.04015v2 | 2309.04015v2 |
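For reference, the entropic-regularized baseline that the tempered variant generalizes is a few lines of alternating scalings. Swapping the exponential below for a tempered exponential, with the paper's indirect measure normalization, is the gist of the proposed middle ground.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropic-regularized OT via alternating scalings of the Gibbs kernel
    K = exp(-C/eps); returns the (dense) transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

n = 5
a = b = np.full(n, 1.0 / n)               # uniform marginals
xs = np.linspace(0, 1, n)
C = (xs[:, None] - xs[None, :]) ** 2      # squared-distance cost
print(sinkhorn(a, b, C).round(3))         # near-diagonal plan
```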
An Element-wise RSAV Algorithm for Unconstrained Optimization Problems | We present a novel optimization algorithm, element-wise relaxed scalar
auxiliary variable (E-RSAV), that satisfies an unconditional energy dissipation
law and exhibits improved alignment between the modified and the original
energy. Our algorithm features rigorous proofs of linear convergence in the
convex setting. Furthermore, we present a simple accelerated algorithm that
improves the linear convergence rate to super-linear in the univariate case. We
also propose an adaptive version of E-RSAV with Steffensen step size. We
validate the robustness and fast convergence of our algorithm through ample
numerical experiments. | [
"Shiheng Zhang",
"Jiahao Zhang",
"Jie Shen",
"Guang Lin"
] | 2023-09-07 20:37:23 | http://arxiv.org/abs/2309.04013v1 | http://arxiv.org/pdf/2309.04013v1 | 2309.04013v1 |
Multimodal Transformer for Material Segmentation | Leveraging information across diverse modalities is known to enhance
performance on multimodal segmentation tasks. However, effectively fusing
information from different modalities remains challenging due to the unique
characteristics of each modality. In this paper, we propose a novel fusion
strategy that can effectively fuse information from different combinations of
four different modalities: RGB, Angle of Linear Polarization (AoLP), Degree of
Linear Polarization (DoLP) and Near-Infrared (NIR). We also propose a new model
named Multi-Modal Segmentation Transformer (MMSFormer) that incorporates the
proposed fusion strategy to perform multimodal material segmentation. MMSFormer
achieves 52.05% mIoU, outperforming the current state-of-the-art on the Multimodal
Material Segmentation (MCubeS) dataset. For instance, our method provides
significant improvement in detecting gravel (+10.4%) and human (+9.1%) classes.
Ablation studies show that different modules in the fusion block are crucial
for overall model performance. Furthermore, our ablation studies also highlight
the capacity of different input modalities to improve performance in the
identification of different types of materials. The code and pretrained models
will be made available at https://github.com/csiplab/MMSFormer. | [
"Md Kaykobad Reza",
"Ashley Prater-Bennette",
"M. Salman Asif"
] | 2023-09-07 20:07:57 | http://arxiv.org/abs/2309.04001v2 | http://arxiv.org/pdf/2309.04001v2 | 2309.04001v2 |
Adapting Self-Supervised Representations to Multi-Domain Setups | Current state-of-the-art self-supervised approaches are effective when
trained on individual domains but show limited generalization on unseen
domains. We observe that these models poorly generalize even when trained on a
mixture of domains, making them unsuitable to be deployed under diverse
real-world setups. We therefore propose a general-purpose, lightweight Domain
Disentanglement Module (DDM) that can be plugged into any self-supervised
encoder to effectively perform representation learning on multiple, diverse
domains with or without shared classes. During pre-training according to a
self-supervised loss, DDM enforces a disentanglement in the representation
space by splitting it into a domain-variant and a domain-invariant portion.
When domain labels are not available, DDM uses a robust clustering approach to
discover pseudo-domains. We show that pre-training with DDM can show up to 3.5%
improvement in linear probing accuracy on state-of-the-art self-supervised
models including SimCLR, MoCo, BYOL, DINO, SimSiam and Barlow Twins on
multi-domain benchmarks including PACS, DomainNet and WILDS. Models trained
with DDM show significantly improved generalization (7.4%) to unseen domains
compared to baselines. Therefore, DDM can efficiently adapt self-supervised
encoders to provide high-quality, generalizable representations for diverse
multi-domain data. | [
"Neha Kalibhat",
"Sam Sharpe",
"Jeremy Goodsitt",
"Bayan Bruss",
"Soheil Feizi"
] | 2023-09-07 20:05:39 | http://arxiv.org/abs/2309.03999v1 | http://arxiv.org/pdf/2309.03999v1 | 2309.03999v1 |
Creating a Systematic ESG (Environmental Social Governance) Scoring System Using Social Network Analysis and Machine Learning for More Sustainable Company Practices | Environmental Social Governance (ESG) is a widely used metric that measures
the sustainability of a company's practices. Currently, ESG is determined using
self-reported corporate filings, which allows companies to portray themselves
in an artificially positive light. As a result, ESG evaluation is subjective
and inconsistent across raters, giving executives mixed signals on what to
improve. This project aims to create a data-driven ESG evaluation system that
can provide better guidance and more systemized scores by incorporating social
sentiment. Social sentiment allows for more balanced perspectives which
directly highlight public opinion, helping companies create more focused and
impactful initiatives. To build this, Python web scrapers were developed to
collect data from Wikipedia, Twitter, LinkedIn, and Google News for the S&P 500
companies. Data was then cleaned and passed through NLP algorithms to obtain
sentiment scores for ESG subcategories. Using these features, machine-learning
algorithms were trained and calibrated to S&P Global ESG Ratings to test their
predictive capabilities. The Random-Forest model was the strongest model with a
mean absolute error of 13.4% and a correlation of 26.1% (p-value 0.0372),
showing encouraging results. Overall, measuring ESG social sentiment across
sub-categories can help executives focus efforts on areas people care about
most. Furthermore, this data-driven methodology can provide ratings for
companies without coverage, allowing more socially responsible firms to thrive. | [
"Aarav Patel",
"Peter Gloor"
] | 2023-09-07 20:03:45 | http://arxiv.org/abs/2309.05607v1 | http://arxiv.org/pdf/2309.05607v1 | 2309.05607v1 |
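A minimal sketch of the abstract's final modeling step: regressing per-category sentiment features onto third-party ESG ratings with a random forest and reporting mean absolute error. The feature layout, data, and coefficients here are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_firms = 400
# One sentiment score per ESG sub-category (environment, social, governance).
X = rng.uniform(-1.0, 1.0, size=(n_firms, 3))
# Synthetic "ground-truth" rating loosely driven by the sentiment features.
y = 50 + 20 * X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 5, n_firms)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```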
ConDA: Contrastive Domain Adaptation for AI-generated Text Detection | Large language models (LLMs) are increasingly being used for generating text
in a variety of use cases, including journalistic news articles. Given the
potentially malicious ways in which these LLMs can be used to generate
disinformation at scale, it is important to build effective detectors for such
AI-generated text. Given the surge in development of new LLMs, acquiring
labeled training data for supervised detectors is a bottleneck. However, there
might be plenty of unlabeled text data available, without information on which
generator it came from. In this work, we tackle this data problem in detecting
AI-generated news text and frame it as an unsupervised domain adaptation task.
Here, the domains are the different text generators, i.e., LLMs,
and we assume we have access to only the labeled source data and unlabeled
target data. We develop a Contrastive Domain Adaptation framework, called
ConDA, that blends standard domain adaptation techniques with the
representation power of contrastive learning to learn domain invariant
representations that are effective for the final unsupervised detection task.
Our experiments demonstrate the effectiveness of our framework, resulting in
average performance gains of 31.7% from the best performing baselines, and
within 0.8% margin of a fully supervised detector. All our code and data is
available at https://github.com/AmritaBh/ConDA-gen-text-detection. | [
"Amrita Bhattacharjee",
"Tharindu Kumarage",
"Raha Moraffah",
"Huan Liu"
] | 2023-09-07 19:51:30 | http://arxiv.org/abs/2309.03992v2 | http://arxiv.org/pdf/2309.03992v2 | 2309.03992v2 |
Derivation of Coordinate Descent Algorithms from Optimal Control Theory | Recently, it was posited that disparate optimization algorithms may be
coalesced in terms of a central source emanating from optimal control theory.
Here we further this proposition by showing how coordinate descent algorithms
may be derived from this emerging new principle. In particular, we show that
basic coordinate descent algorithms can be derived using a maximum principle
and a collection of max functions as "control" Lyapunov functions. The
convergence of the resulting coordinate descent algorithms is thus connected to
the controlled dissipation of their corresponding Lyapunov functions. The
operational metric for the search vector in all cases is given by the Hessian
of the convex objective function. | [
"I. M. Ross"
] | 2023-09-07 19:46:26 | http://arxiv.org/abs/2309.03990v1 | http://arxiv.org/pdf/2309.03990v1 | 2309.03990v1 |
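For reference, a minimal sketch of the basic cyclic coordinate descent scheme the paper derives from optimal control, applied to a convex quadratic; each step exactly minimizes the objective along one axis. This illustrates the algorithm family only, not the paper's Lyapunov-based construction.

```python
# Minimize f(x) = 0.5 x^T A x - b^T x by exact coordinate-wise minimization.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])
x = np.zeros(2)

for sweep in range(50):
    for i in range(len(x)):
        # Exact minimizer over coordinate i, holding the others fixed.
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

print("x* =", x, " residual:", np.linalg.norm(A @ x - b))
```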
Noisy Computing of the $\mathsf{OR}$ and $\mathsf{MAX}$ Functions | We consider the problem of computing a function of $n$ variables using noisy
queries, where each query is incorrect with some fixed and known probability $p
\in (0,1/2)$. Specifically, we consider the computation of the $\mathsf{OR}$
function of $n$ bits (where queries correspond to noisy readings of the bits)
and the $\mathsf{MAX}$ function of $n$ real numbers (where queries correspond
to noisy pairwise comparisons). We show that an expected number of queries of
\[ (1 \pm o(1)) \frac{n\log \frac{1}{\delta}}{D_{\mathsf{KL}}(p \| 1-p)} \] is
both sufficient and necessary to compute both functions with a vanishing error
probability $\delta = o(1)$, where $D_{\mathsf{KL}}(p \| 1-p)$ denotes the
Kullback-Leibler divergence between $\mathsf{Bern}(p)$ and $\mathsf{Bern}(1-p)$
distributions. Compared to previous work, our results tighten the dependence on
$p$ in both the upper and lower bounds for the two functions. | [
"Banghua Zhu",
"Ziao Wang",
"Nadim Ghaddar",
"Jiantao Jiao",
"Lele Wang"
] | 2023-09-07 19:37:52 | http://arxiv.org/abs/2309.03986v1 | http://arxiv.org/pdf/2309.03986v1 | 2309.03986v1 |
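A small numeric check of the leading-order query count stated in the abstract, $n\log(1/\delta)/D_{\mathsf{KL}}(p\,\|\,1-p)$, for assumed example values of $n$, $\delta$ and $p$.

```python
import numpy as np

def kl_bern(p, q):
    """KL divergence between Bern(p) and Bern(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

n, delta, p = 1000, 1e-3, 0.3
queries = n * np.log(1 / delta) / kl_bern(p, 1 - p)
print(f"leading-order query count: {queries:.0f}")  # ~2.0e4 for these values
```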
LanSER: Language-Model Supported Speech Emotion Recognition | Speech emotion recognition (SER) models typically rely on costly
human-labeled data for training, making it difficult to scale methods to large
speech datasets and nuanced emotion taxonomies. We present LanSER, a method
that enables the use of unlabeled data by inferring weak emotion labels via
pre-trained large language models through weakly-supervised learning. For
inferring weak labels constrained to a taxonomy, we use a textual entailment
approach that selects an emotion label with the highest entailment score for a
speech transcript extracted via automatic speech recognition. Our experimental
results show that models pre-trained on large datasets with this weak
supervision outperform other baseline models on standard SER datasets when
fine-tuned, and show improved label efficiency. Despite being pre-trained on
labels derived only from text, we show that the resulting representations
appear to model the prosodic content of speech. | [
"Taesik Gong",
"Josh Belanich",
"Krishna Somandepalli",
"Arsha Nagrani",
"Brian Eoff",
"Brendan Jou"
] | 2023-09-07 19:21:08 | http://arxiv.org/abs/2309.03978v1 | http://arxiv.org/pdf/2309.03978v1 | 2309.03978v1 |
DBsurf: A Discrepancy Based Method for Discrete Stochastic Gradient Estimation | Computing gradients of an expectation with respect to the distributional
parameters of a discrete distribution is a problem arising in many fields of
science and engineering. Typically, this problem is tackled using Reinforce,
which frames the problem of gradient estimation as a Monte Carlo simulation.
Unfortunately, the Reinforce estimator is especially sensitive to discrepancies
between the true probability distribution and the drawn samples, a common issue
in low sampling regimes that results in inaccurate gradient estimates. In this
paper, we introduce DBsurf, a reinforce-based estimator for discrete
distributions that uses a novel sampling procedure to reduce the discrepancy
between the samples and the actual distribution. To assess the performance of
our estimator, we subject it to a diverse set of tasks. Among existing
estimators, DBsurf attains the lowest variance in a least squares problem
commonly used in the literature for benchmarking. Furthermore, DBsurf achieves
the best results for training variational auto-encoders (VAE) across different
datasets and sampling setups. Finally, we apply DBsurf to build a simple and
efficient Neural Architecture Search (NAS) algorithm with state-of-the-art
performance. | [
"Pau Mulet Arabi",
"Alec Flowers",
"Lukas Mauch",
"Fabien Cardinaux"
] | 2023-09-07 19:15:40 | http://arxiv.org/abs/2309.03974v1 | http://arxiv.org/pdf/2309.03974v1 | 2309.03974v1 |
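For context, a sketch of the plain Reinforce (score-function) estimator over a categorical distribution, the baseline whose sampling discrepancy DBsurf targets; the paper's discrepancy-reducing sampler itself is not reproduced. The logits and payoffs are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([0.2, -0.1, 0.5])
f = np.array([1.0, 3.0, -2.0])          # per-category payoff

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(logits)
samples = rng.choice(len(p), size=2000, p=p)

# grad_theta E[f] = E[f(x) * grad log p_theta(x)]; for softmax logits,
# grad log p(x) = onehot(x) - p.
grads = np.zeros_like(logits)
for x in samples:
    grads += f[x] * (np.eye(len(p))[x] - p)
grads /= len(samples)

exact = (np.diag(p) - np.outer(p, p)) @ f   # closed-form gradient for comparison
print("Reinforce estimate:", grads, "\nexact gradient:   ", exact)
```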
Automatic Concept Embedding Model (ACEM): No train-time concepts, No issue! | Interpretability and explainability of neural networks are continuously
increasing in importance, especially within safety-critical domains and to
provide the social right to explanation. Concept-based explanations align well
with how humans reason, proving to be a good way to explain models. Concept
Embedding Models (CEMs) are one such concept-based explanation architecture.
These have been shown to overcome the trade-off between explainability and
performance. However, they have a key limitation -- they require concept
annotations for all their training data. For large datasets, this can be
expensive and infeasible. Motivated by this, we propose Automatic Concept
Embedding Models (ACEMs), which learn the concept annotations automatically. | [
"Rishabh Jain"
] | 2023-09-07 19:03:28 | http://arxiv.org/abs/2309.03970v1 | http://arxiv.org/pdf/2309.03970v1 | 2309.03970v1 |
Improving Resnet-9 Generalization Trained on Small Datasets | This paper presents our proposed approach that won the first prize at the
ICLR competition on Hardware Aware Efficient Training. The challenge is to
achieve the highest possible accuracy in an image classification task in less
than 10 minutes. The training is done on a small dataset of 5000 images picked
randomly from CIFAR-10 dataset. The evaluation is performed by the competition
organizers on a secret dataset with 1000 images of the same size. Our approach
includes applying a series of techniques for improving the generalization of
ResNet-9, including sharpness-aware optimization, label smoothing, gradient
centralization, input patch whitening, as well as meta-learning based training.
Our experiments show that ResNet-9 can achieve an accuracy of 88% while being
trained only on a 10% subset of the CIFAR-10 dataset in less than 10 minutes. | [
"Omar Mohamed Awad",
"Habib Hajimolahoseini",
"Michael Lim",
"Gurpreet Gosal",
"Walid Ahmed",
"Yang Liu",
"Gordon Deng"
] | 2023-09-07 18:46:52 | http://arxiv.org/abs/2309.03965v1 | http://arxiv.org/pdf/2309.03965v1 | 2309.03965v1 |
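A sketch of one of the listed tricks, gradient centralization: subtracting the per-filter mean from each weight gradient between `backward()` and the optimizer step. The toy model and hyperparameters are assumptions; the other techniques (SAM, label smoothing, whitening, meta-learning) are omitted.

```python
import torch

def centralize_gradients(model):
    for p in model.parameters():
        if p.grad is not None and p.grad.dim() > 1:
            # Subtract the mean over all dims except the output-channel dim 0.
            dims = tuple(range(1, p.grad.dim()))
            p.grad -= p.grad.mean(dim=dims, keepdim=True)

model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))
x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
centralize_gradients(model)          # call between backward() and optimizer.step()
torch.optim.SGD(model.parameters(), lr=0.1).step()
```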
REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation | Fully-test-time adaptation (F-TTA) can mitigate performance loss due to
distribution shifts between train and test data (1) without access to the
training data, and (2) without knowledge of the model training procedure. In
online F-TTA, a pre-trained model is adapted using a stream of test samples by
minimizing a self-supervised objective, such as entropy minimization. However,
models adapted online using entropy minimization are unstable, especially
in single-sample settings, leading to degenerate solutions and limiting the
adoption of TTA inference strategies. Prior works identify noisy, or
unreliable, samples as a cause of failure in online F-TTA. One solution is to
ignore these samples, which can lead to bias in the update procedure, slow
adaptation, and poor generalization. In this work, we present a general
framework for improving robustness of F-TTA to these noisy samples, inspired by
self-paced learning and robust loss functions. Our proposed approach, Robust
Entropy Adaptive Loss Minimization (REALM), achieves better adaptation accuracy
than previous approaches throughout the adaptation process on corruptions of
CIFAR-10 and ImageNet-1K, demonstrating its effectiveness. | [
"Skyler Seto",
"Barry-John Theobald",
"Federico Danieli",
"Navdeep Jaitly",
"Dan Busbridge"
] | 2023-09-07 18:44:58 | http://arxiv.org/abs/2309.03964v1 | http://arxiv.org/pdf/2309.03964v1 | 2309.03964v1 |
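A hedged sketch in the spirit of the robust weighting described above: instead of hard-filtering noisy test samples, each sample's entropy loss is smoothly down-weighted by its own entropy. The weighting function and temperature are illustrative choices, not REALM's exact formulation.

```python
import torch

def robust_entropy_loss(logits, tau=0.4):
    probs = logits.softmax(dim=-1)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    weight = torch.exp(-ent / tau)       # confident samples get weight near 1
    return (weight.detach() * ent).mean()

logits = torch.randn(4, 10, requires_grad=True)
loss = robust_entropy_loss(logits)
loss.backward()   # adapt e.g. normalization-layer parameters with this gradient
```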
ImageBind-LLM: Multi-modality Instruction Tuning | We present ImageBind-LLM, a multi-modality instruction tuning method of large
language models (LLMs) via ImageBind. Existing works mainly focus on language
and image instruction tuning, different from which, our ImageBind-LLM can
respond to multi-modality conditions, including audio, 3D point clouds, video,
and their embedding-space arithmetic by only image-text alignment training.
During training, we adopt a learnable bind network to align the embedding space
between LLaMA and ImageBind's image encoder. Then, the image features
transformed by the bind network are added to word tokens of all layers in
LLaMA, which progressively injects visual instructions via an attention-free
and zero-initialized gating mechanism. Aided by the joint embedding of
ImageBind, the simple image-text training enables our model to exhibit superior
multi-modality instruction-following capabilities. During inference, the
multi-modality inputs are fed into the corresponding ImageBind encoders, and
processed by a proposed visual cache model for further cross-modal embedding
enhancement. The training-free cache model retrieves from three million image
features extracted by ImageBind, which effectively mitigates the
training-inference modality discrepancy. Notably, with our approach,
ImageBind-LLM can respond to instructions of diverse modalities and demonstrate
significant language generation quality. Code is released at
https://github.com/OpenGVLab/LLaMA-Adapter. | [
"Jiaming Han",
"Renrui Zhang",
"Wenqi Shao",
"Peng Gao",
"Peng Xu",
"Han Xiao",
"Kaipeng Zhang",
"Chris Liu",
"Song Wen",
"Ziyu Guo",
"Xudong Lu",
"Shuai Ren",
"Yafei Wen",
"Xiaoxin Chen",
"Xiangyu Yue",
"Hongsheng Li",
"Yu Qiao"
] | 2023-09-07 17:59:45 | http://arxiv.org/abs/2309.03905v2 | http://arxiv.org/pdf/2309.03905v2 | 2309.03905v2 |
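A minimal sketch of the attention-free, zero-initialized gating the abstract describes: visual features from a bind network are added to word tokens, scaled by a learnable gate that starts at zero so training begins from the unmodified LLM. Module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedVisualInjection(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bind = nn.Linear(dim, dim)            # stand-in for the bind network
        self.gate = nn.Parameter(torch.zeros(1))   # zero-init: no effect at start

    def forward(self, word_tokens, image_feat):
        # image_feat: (batch, dim); broadcast over the sequence dimension.
        visual = self.bind(image_feat).unsqueeze(1)
        return word_tokens + self.gate * visual

layer = GatedVisualInjection(dim=32)
tokens = torch.randn(2, 16, 32)
img = torch.randn(2, 32)
out = layer(tokens, img)   # identical to tokens until the gate learns to open
```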
DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection | Data is the cornerstone of deep learning. This paper reveals that the
recently developed Diffusion Model is a scalable data engine for object
detection. Existing methods for scaling up detection-oriented data often
require manual collection or generative models to obtain target images,
followed by data augmentation and labeling to produce training pairs, which are
costly, complex, or lacking diversity. To address these issues, we
present DiffusionEngine (DE), a data scaling-up engine that provides
high-quality detection-oriented training pairs in a single stage. DE consists
of a pre-trained diffusion model and an effective Detection-Adapter,
contributing to generating scalable, diverse and generalizable detection data
in a plug-and-play manner. Detection-Adapter is learned to align the implicit
semantic and location knowledge in off-the-shelf diffusion models with
detection-aware signals to make better bounding-box predictions. Additionally,
we contribute two datasets, i.e., COCO-DE and VOC-DE, to scale up existing
detection benchmarks for facilitating follow-up research. Extensive experiments
demonstrate that data scaling-up via DE can achieve significant improvements in
diverse scenarios, such as various detection algorithms, self-supervised
pre-training, data-sparse, label-scarce, cross-domain, and semi-supervised
learning. For example, when using DE with a DINO-based adapter to scale up
data, mAP is improved by 3.1% on COCO, 7.6% on VOC, and 11.5% on Clipart. | [
"Manlin Zhang",
"Jie Wu",
"Yuxi Ren",
"Ming Li",
"Jie Qin",
"Xuefeng Xiao",
"Wei Liu",
"Rui Wang",
"Min Zheng",
"Andy J. Ma"
] | 2023-09-07 17:55:01 | http://arxiv.org/abs/2309.03893v1 | http://arxiv.org/pdf/2309.03893v1 | 2309.03893v1 |
ArtiGrasp: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation | We present ArtiGrasp, a novel method to synthesize bi-manual hand-object
interactions that include grasping and articulation. This task is challenging
due to the diversity of the global wrist motions and the precise finger control
that are necessary to articulate objects. ArtiGrasp leverages reinforcement
learning and physics simulations to train a policy that controls the global and
local hand pose. Our framework unifies grasping and articulation within a
single policy guided by a single hand pose reference. Moreover, to facilitate
the training of the precise finger control required for articulation, we
present a learning curriculum with increasing difficulty. It starts with
single-hand manipulation of stationary objects and continues with multi-agent
training including both hands and non-stationary objects. To evaluate our
method, we introduce Dynamic Object Grasping and Articulation, a task that
involves bringing an object into a target articulated pose. This task requires
grasping, relocation, and articulation. We show our method's efficacy towards
this task. We further demonstrate that our method can generate motions with
noisy hand-object pose estimates from an off-the-shelf image-based regressor. | [
"Hui Zhang",
"Sammy Christen",
"Zicong Fan",
"Luocheng Zheng",
"Jemin Hwangbo",
"Jie Song",
"Otmar Hilliges"
] | 2023-09-07 17:53:20 | http://arxiv.org/abs/2309.03891v1 | http://arxiv.org/pdf/2309.03891v1 | 2309.03891v1 |
A Function Interpretation Benchmark for Evaluating Interpretability Methods | Labeling neural network submodules with human-legible descriptions is useful
for many downstream tasks: such descriptions can surface failures, guide
interventions, and perhaps even explain important model behaviors. To date,
most mechanistic descriptions of trained networks have involved small models,
narrowly delimited phenomena, and large amounts of human labor. Labeling all
human-interpretable sub-computations in models of increasing size and
complexity will almost certainly require tools that can generate and validate
descriptions automatically. Recently, techniques that use learned models
in-the-loop for labeling have begun to gain traction, but methods for
evaluating their efficacy are limited and ad-hoc. How should we validate and
compare open-ended labeling tools? This paper introduces FIND (Function
INterpretation and Description), a benchmark suite for evaluating the building
blocks of automated interpretability methods. FIND contains functions that
resemble components of trained neural networks, and accompanying descriptions
of the kind we seek to generate. The functions are procedurally constructed
across textual and numeric domains, and involve a range of real-world
complexities, including noise, composition, approximation, and bias. We
evaluate new and existing methods that use language models (LMs) to produce
code-based and language descriptions of function behavior. We find that an
off-the-shelf LM augmented with only black-box access to functions can
sometimes infer their structure, acting as a scientist by forming hypotheses,
proposing experiments, and updating descriptions in light of new data. However,
LM-based descriptions tend to capture global function behavior and miss local
corruptions. These results show that FIND will be useful for characterizing the
performance of more sophisticated interpretability methods before they are
applied to real-world models. | [
"Sarah Schwettmann",
"Tamar Rott Shaham",
"Joanna Materzynska",
"Neil Chowdhury",
"Shuang Li",
"Jacob Andreas",
"David Bau",
"Antonio Torralba"
] | 2023-09-07 17:47:26 | http://arxiv.org/abs/2309.03886v1 | http://arxiv.org/pdf/2309.03886v1 | 2309.03886v1 |
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models | Despite their impressive capabilities, large language models (LLMs) are prone
to hallucinations, i.e., generating content that deviates from facts seen
during pretraining. We propose a simple decoding strategy for reducing
hallucinations with pretrained LLMs that does not require conditioning on
retrieved external knowledge nor additional fine-tuning. Our approach obtains
the next-token distribution by contrasting the differences in logits obtained
from projecting the later layers versus earlier layers to the vocabulary space,
exploiting the fact that factual knowledge in an LLM has generally been shown
to be localized to particular transformer layers. We find that this Decoding by
Contrasting Layers (DoLa) approach is able to better surface factual knowledge
and reduce the generation of incorrect facts. DoLa consistently improves the
truthfulness across multiple-choice tasks and open-ended generation tasks, for
example improving the performance of LLaMA family models on TruthfulQA by
12-17% absolute points, demonstrating its potential in making LLMs reliably
generate truthful facts. | [
"Yung-Sung Chuang",
"Yujia Xie",
"Hongyin Luo",
"Yoon Kim",
"James Glass",
"Pengcheng He"
] | 2023-09-07 17:45:31 | http://arxiv.org/abs/2309.03883v1 | http://arxiv.org/pdf/2309.03883v1 | 2309.03883v1 |
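A minimal sketch of the core contrast described above: the difference of log-probabilities from a late layer's logits and an early (premature) layer's logits, both projected through the LM head. The layer choice and the paper's dynamic premature-layer selection are omitted; the logits here are random stand-ins.

```python
import torch

def dola_scores(late_logits, early_logits):
    late = torch.log_softmax(late_logits, dim=-1)
    early = torch.log_softmax(early_logits, dim=-1)
    return late - early      # larger where later layers add factual evidence

late = torch.randn(1, 50)    # logits from the final transformer layer
early = torch.randn(1, 50)   # logits from a premature layer
next_token = dola_scores(late, early).argmax(dim=-1)
```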
Better Practices for Domain Adaptation | Distribution shifts are all too common in real-world applications of machine
learning. Domain adaptation (DA) aims to address this by providing various
frameworks for adapting models to the deployment data without using labels.
However, the domain shift scenario raises a second more subtle challenge: the
difficulty of performing hyperparameter optimisation (HPO) for these adaptation
algorithms without access to a labelled validation set. The unclear validation
protocol for DA has led to bad practices in the literature, such as performing
HPO using the target test labels when, in real-world scenarios, they are not
available. This has resulted in over-optimism about DA research progress
compared to reality. In this paper, we analyse the state of DA when using good
evaluation practice, by benchmarking a suite of candidate validation criteria
and using them to assess popular adaptation algorithms. We show that there are
challenges across all three branches of domain adaptation methodology including
Unsupervised Domain Adaptation (UDA), Source-Free Domain Adaptation (SFDA), and
Test Time Adaptation (TTA). While the results show that realistically
achievable performance is often worse than expected, they also show that using
proper validation splits is beneficial, as well as showing that some previously
unexplored validation metrics provide the best options to date. Altogether, our
improved practices covering data, training, validation and hyperparameter
optimisation form a new rigorous pipeline to improve benchmarking, and hence
research progress, within this important field going forward. | [
"Linus Ericsson",
"Da Li",
"Timothy M. Hospedales"
] | 2023-09-07 17:44:18 | http://arxiv.org/abs/2309.03879v1 | http://arxiv.org/pdf/2309.03879v1 | 2309.03879v1 |
OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs | Instruction-tuned Large Language Models (LLMs) have recently showcased
remarkable ability to generate fitting responses to natural language
instructions. However, an open research question concerns the inherent biases
of trained models and their responses. For instance, if the data used to tune
an LLM is predominantly written by persons with a specific political bias, we
might expect generated answers to share this bias. Current research work seeks
to de-bias such models, or suppress potentially biased answers. With this
demonstration, we take a different view on biases in instruction-tuning: Rather
than aiming to suppress them, we aim to make them explicit and transparent. To
this end, we present OpinionGPT, a web demo in which users can ask questions
and select all biases they wish to investigate. The demo will answer this
question using a model fine-tuned on text representing each of the selected
biases, allowing side-by-side comparison. To train the underlying model, we
identified 11 different biases (political, geographic, gender, age) and derived
an instruction-tuning corpus in which each answer was written by members of one
of these demographics. This paper presents OpinionGPT, illustrates how we
trained the bias-aware model and showcases the web application (available at
https://opiniongpt.informatik.hu-berlin.de). | [
"Patrick Haller",
"Ansar Aynetdinov",
"Alan Akbik"
] | 2023-09-07 17:41:01 | http://arxiv.org/abs/2309.03876v1 | http://arxiv.org/pdf/2309.03876v1 | 2309.03876v1 |
A Tutorial on the Non-Asymptotic Theory of System Identification | This tutorial serves as an introduction to recently developed non-asymptotic
methods in the theory of -- mainly linear -- system identification. We
emphasize tools we deem particularly useful for a range of problems in this
domain, such as the covering technique, the Hanson-Wright Inequality and the
method of self-normalized martingales. We then employ these tools to give
streamlined proofs of the performance of various least-squares based estimators
for identifying the parameters in autoregressive models. We conclude by
sketching out how the ideas presented herein can be extended to certain
nonlinear identification problems. | [
"Ingvar Ziemann",
"Anastasios Tsiamis",
"Bruce Lee",
"Yassir Jedra",
"Nikolai Matni",
"George J. Pappas"
] | 2023-09-07 17:33:30 | http://arxiv.org/abs/2309.03873v1 | http://arxiv.org/pdf/2309.03873v1 | 2309.03873v1 |
CenTime: Event-Conditional Modelling of Censoring in Survival Analysis | Survival analysis is a valuable tool for estimating the time until specific
events, such as death or cancer recurrence, based on baseline observations.
This is particularly useful in healthcare to prognostically predict clinically
important events based on patient data. However, existing approaches often have
limitations; some focus only on ranking patients by survivability, neglecting
to estimate the actual event time, while others treat the problem as a
classification task, ignoring the inherent time-ordered structure of the
events. Furthermore, the effective utilization of censored samples - training
data points where the exact event time is unknown - is essential for improving
the predictive accuracy of the model. In this paper, we introduce CenTime, a
novel approach to survival analysis that directly estimates the time to event.
Our method features an innovative event-conditional censoring mechanism that
performs robustly even when uncensored data is scarce. We demonstrate that our
approach forms a consistent estimator for the event model parameters, even in
the absence of uncensored data. Furthermore, CenTime is easily integrated with
deep learning models with no restrictions on batch size or the number of
uncensored samples. We compare our approach with standard survival analysis
methods, including the Cox proportional-hazard model and DeepHit. Our results
indicate that CenTime offers state-of-the-art performance in predicting
time-to-death while maintaining comparable ranking performance. Our
implementation is publicly available at
https://github.com/ahmedhshahin/CenTime. | [
"Ahmed H. Shahin",
"An Zhao",
"Alexander C. Whitehead",
"Daniel C. Alexander",
"Joseph Jacob",
"David Barber"
] | 2023-09-07 17:07:33 | http://arxiv.org/abs/2309.03851v2 | http://arxiv.org/pdf/2309.03851v2 | 2309.03851v2 |
Mixtures of Gaussians are Privately Learnable with a Polynomial Number of Samples | We study the problem of estimating mixtures of Gaussians under the constraint
of differential privacy (DP). Our main result is that $\tilde{O}(k^2 d^4
\log(1/\delta) / \alpha^2 \varepsilon)$ samples are sufficient to estimate a
mixture of $k$ Gaussians up to total variation distance $\alpha$ while
satisfying $(\varepsilon, \delta)$-DP. This is the first finite sample
complexity upper bound for the problem that does not make any structural
assumptions on the GMMs.
To solve the problem, we devise a new framework which may be useful for other
tasks. On a high level, we show that if a class of distributions (such as
Gaussians) is (1) list decodable and (2) admits a "locally small" cover (Bun
et al., 2021) with respect to total variation distance, then the class of its
mixtures is privately learnable. The proof circumvents a known barrier
indicating that, unlike Gaussians, GMMs do not admit a locally small cover
(Aden-Ali et al., 2021b). | [
"Mohammad Afzali",
"Hassan Ashtiani",
"Christopher Liaw"
] | 2023-09-07 17:02:32 | http://arxiv.org/abs/2309.03847v2 | http://arxiv.org/pdf/2309.03847v2 | 2309.03847v2 |
Gradient-Based Feature Learning under Structured Data | Recent works have demonstrated that the sample complexity of gradient-based
learning of single index models, i.e. functions that depend on a 1-dimensional
projection of the input data, is governed by their information exponent.
However, these results are only concerned with isotropic data, while in
practice the input often contains additional structure which can implicitly
guide the algorithm. In this work, we investigate the effect of a spiked
covariance structure and reveal several interesting phenomena. First, we show
that in the anisotropic setting, the commonly used spherical gradient dynamics
may fail to recover the true direction, even when the spike is perfectly
aligned with the target direction. Next, we show that appropriate weight
normalization that is reminiscent of batch normalization can alleviate this
issue. Further, by exploiting the alignment between the (spiked) input
covariance and the target, we obtain improved sample complexity compared to the
isotropic case. In particular, under the spiked model with a suitably large
spike, the sample complexity of gradient-based training can be made independent
of the information exponent while also outperforming lower bounds for
rotationally invariant kernel methods. | [
"Alireza Mousavi-Hosseini",
"Denny Wu",
"Taiji Suzuki",
"Murat A. Erdogdu"
] | 2023-09-07 16:55:50 | http://arxiv.org/abs/2309.03843v1 | http://arxiv.org/pdf/2309.03843v1 | 2309.03843v1 |
Early warning via transitions in latent stochastic dynamical systems | Early warnings for dynamical transitions in complex systems or
high-dimensional observation data are essential in many real world
applications, such as gene mutation, brain diseases, natural disasters,
financial crises, and engineering reliability. To effectively extract early
warning signals, we develop a novel approach: the directed anisotropic
diffusion map that captures the latent evolutionary dynamics in low-dimensional
manifold. Applying the methodology to authentic electroencephalogram (EEG)
data, we successfully find the appropriate effective coordinates, and derive
early warning signals capable of detecting the tipping point during the state
transition. Our method bridges the latent dynamics with the original dataset.
The framework is validated to be accurate and effective through numerical
experiments, in terms of density and transition probability. It is shown that
the second coordinate holds meaningful information for critical transition in
various evaluation metrics. | [
"Lingyu Feng",
"Ting Gao",
"Wang Xiao",
"Jinqiao Duan"
] | 2023-09-07 16:55:33 | http://arxiv.org/abs/2309.03842v1 | http://arxiv.org/pdf/2309.03842v1 | 2309.03842v1 |
Bootstrapping Adaptive Human-Machine Interfaces with Offline Reinforcement Learning | Adaptive interfaces can help users perform sequential decision-making tasks
like robotic teleoperation given noisy, high-dimensional command signals (e.g.,
from a brain-computer interface). Recent advances in human-in-the-loop machine
learning enable such systems to improve by interacting with users, but tend to
be limited by the amount of data that they can collect from individual users in
practice. In this paper, we propose a reinforcement learning algorithm to
address this by training an interface to map raw command signals to actions
using a combination of offline pre-training and online fine-tuning. To address
the challenges posed by noisy command signals and sparse rewards, we develop a
novel method for representing and inferring the user's long-term intent for a
given trajectory. We primarily evaluate our method's ability to assist users
who can only communicate through noisy, high-dimensional input channels through
a user study in which 12 participants performed a simulated navigation task by
using their eye gaze to modulate a 128-dimensional command signal from their
webcam. The results show that our method enables successful goal navigation
more often than a baseline directional interface, by learning to denoise user
command signals and provide shared autonomy assistance. We further evaluate on
a simulated Sawyer pushing task with eye gaze control, and the Lunar Lander
game with simulated user commands, and find that our method improves over
baseline interfaces in these domains as well. Extensive ablation experiments
with simulated user commands empirically motivate each component of our method. | [
"Jensen Gao",
"Siddharth Reddy",
"Glen Berseth",
"Anca D. Dragan",
"Sergey Levine"
] | 2023-09-07 16:52:27 | http://arxiv.org/abs/2309.03839v1 | http://arxiv.org/pdf/2309.03839v1 | 2309.03839v1 |
Cross-Task Attention Network: Improving Multi-Task Learning for Medical Imaging Applications | Multi-task learning (MTL) is a powerful approach in deep learning that
leverages the information from multiple tasks during training to improve model
performance. In medical imaging, MTL has shown great potential to solve various
tasks. However, existing MTL architectures in medical imaging are limited in
sharing information across tasks, reducing the potential performance
improvements of MTL. In this study, we introduce a novel attention-based MTL
framework to better leverage inter-task interactions for various tasks from
pixel-level to image-level predictions. Specifically, we propose a Cross-Task
Attention Network (CTAN) which utilizes cross-task attention mechanisms to
incorporate information by interacting across tasks. We validated CTAN on four
medical imaging datasets that span different domains and tasks including:
radiation treatment planning prediction using planning CT images of two
different target cancers (Prostate, OpenKBP); pigmented skin lesion
segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19
diagnosis and severity prediction using chest CT scans (STOIC). Our study
demonstrates the effectiveness of CTAN in improving the accuracy of medical
imaging tasks. Compared to standard single-task learning (STL), CTAN
demonstrated a 4.67% improvement in performance and outperformed both widely
used MTL baselines: hard parameter sharing (HPS) with an average performance
improvement of 3.22%; and multi-task attention network (MTAN) with a relative
decrease of 5.38%. These findings highlight the significance of our proposed
MTL framework in solving medical imaging tasks and its potential to improve
their accuracy across domains. | [
"Sangwook Kim",
"Thomas G. Purdie",
"Chris McIntosh"
] | 2023-09-07 16:50:40 | http://arxiv.org/abs/2309.03837v1 | http://arxiv.org/pdf/2309.03837v1 | 2309.03837v1 |
Learning from Demonstration via Probabilistic Diagrammatic Teaching | Learning from Demonstration (LfD) enables robots to acquire new skills by
imitating expert demonstrations, allowing users to communicate their
instructions in an intuitive manner. Recent progress in LfD often relies on
kinesthetic teaching or teleoperation as the medium for users to specify the
demonstrations. Kinesthetic teaching requires physical handling of the robot,
while teleoperation demands proficiency with additional hardware. This paper
introduces an alternative paradigm for LfD called Diagrammatic Teaching.
Diagrammatic Teaching aims to teach robots novel skills by prompting the user
to sketch out demonstration trajectories on 2D images of the scene, which are
then synthesised as a generative model of motion trajectories in 3D task space.
Additionally, we present the Ray-tracing Probabilistic Trajectory Learning
(RPTL) framework for Diagrammatic Teaching. RPTL extracts time-varying
probability densities from the 2D sketches, applies ray-tracing to find
corresponding regions in 3D Cartesian space, and fits a probabilistic model of
motion trajectories to these regions. New motion trajectories, which mimic
those sketched by the user, can then be generated from the probabilistic model.
We empirically validate our framework both in simulation and on real robots,
which include a fixed-base manipulator and a quadruped-mounted manipulator. | [
"Weiming Zhi",
"Tianyi Zhang",
"Matthew Johnson-Roberson"
] | 2023-09-07 16:49:38 | http://arxiv.org/abs/2309.03835v2 | http://arxiv.org/pdf/2309.03835v2 | 2309.03835v2 |
Uncovering Drift in Textual Data: An Unsupervised Method for Detecting and Mitigating Drift in Machine Learning Models | Drift in machine learning refers to the phenomenon where the statistical
properties of the data or context in which the model operates change over time,
leading to a decrease in its performance. Therefore, maintaining a constant
monitoring process for machine learning model performance is crucial in order
to proactively prevent any potential performance regression. However,
supervised drift detection methods require human annotation and consequently
lead to a longer time to detect and mitigate the drift. In our proposed
unsupervised drift detection method, we follow a two step process. Our first
step involves encoding a sample of production data as the target distribution,
and the model training data as the reference distribution. In the second step,
we employ a kernel-based statistical test that utilizes the maximum mean
discrepancy (MMD) distance metric to compare the reference and target
distributions and estimate any potential drift. Our method also identifies the
subset of production data that is the root cause of the drift. The models
retrained using these identified high drift samples show improved performance
on online customer experience quality metrics. | [
"Saeed Khaki",
"Akhouri Abhinav Aditya",
"Zohar Karnin",
"Lan Ma",
"Olivia Pan",
"Samarth Marudheri Chandrashekar"
] | 2023-09-07 16:45:42 | http://arxiv.org/abs/2309.03831v1 | http://arxiv.org/pdf/2309.03831v1 | 2309.03831v1 |
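A sketch of the core statistic in the second step above: the unbiased RBF-kernel MMD² between a reference (training) sample and a target (production) sample of embeddings. The kernel bandwidth and synthetic data are assumptions, and thresholding/permutation testing is omitted.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, (200, 16))      # encoded training data
tgt = rng.normal(0.5, 1.0, (200, 16))      # encoded production data (shifted)
print("MMD^2:", mmd2_unbiased(ref, tgt))   # clearly positive under drift
```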
ArtHDR-Net: Perceptually Realistic and Accurate HDR Content Creation | High Dynamic Range (HDR) content creation has become an important topic for
modern media and entertainment sectors, gaming and Augmented/Virtual Reality
industries. Many methods have been proposed to recreate the HDR counterparts of
input Low Dynamic Range (LDR) images/videos given a single exposure or
multi-exposure LDRs. The state-of-the-art methods focus primarily on the
preservation of the reconstruction's structural similarity and the pixel-wise
accuracy. However, these conventional approaches do not emphasize preserving
the artistic intent of the images in terms of human visual perception, which is
an essential element in media, entertainment and gaming. In this paper, we
attempt to study and fill this gap. We propose an architecture called
ArtHDR-Net based on a Convolutional Neural Network that uses multi-exposed LDR
features as input. Experimental results show that ArtHDR-Net can achieve
state-of-the-art performance in terms of the HDR-VDP-2 score (i.e., mean
opinion score index) while reaching competitive performance in terms of PSNR
and SSIM. | [
"Hrishav Bakul Barua",
"Ganesh Krishnasamy",
"KokSheik Wong",
"Kalin Stefanov",
"Abhinav Dhall"
] | 2023-09-07 16:40:49 | http://arxiv.org/abs/2309.03827v1 | http://arxiv.org/pdf/2309.03827v1 | 2309.03827v1 |
Prime and Modulate Learning: Generation of forward models with signed back-propagation and environmental cues | Deep neural networks employing error back-propagation for learning can suffer
from exploding and vanishing gradient problems. Numerous solutions have been
proposed such as normalisation techniques or limiting activation functions to
linear rectifying units. In this work we follow a different approach which is
particularly applicable to closed-loop learning of forward models where
back-propagation makes exclusive use of the sign of the error signal to prime
the learning, whilst a global relevance signal modulates the rate of learning.
This is inspired by the interaction between local plasticity and a global
neuromodulation. For example, whilst driving on an empty road, one can allow
for slow step-wise optimisation of actions, whereas, at a busy junction, an
error must be corrected at once. Hence, the error is the priming signal and the
intensity of the experience is a modulating factor in the weight change. The
advantages of this Prime and Modulate paradigm is twofold: it is free from
normalisation and it makes use of relevant cues from the environment to enrich
the learning. We present a mathematical derivation of the learning rule in
z-space and demonstrate the real-time performance with a robotic platform. The
results show a significant improvement in the speed of convergence compared to
that of the conventional back-propagation. | [
"Sama Daryanavard",
"Bernd Porr"
] | 2023-09-07 16:34:30 | http://arxiv.org/abs/2309.03825v1 | http://arxiv.org/pdf/2309.03825v1 | 2309.03825v1 |
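A single-neuron toy update in the Prime-and-Modulate spirit: the back-propagated error contributes only its sign (the prime), while a scalar relevance signal from the environment sets the step size (the modulate). This is an illustrative assumption-laden sketch, not the paper's z-space derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)

def update(w, x, error, relevance, lr=0.05):
    # Sign-only credit assignment, globally scaled by the relevance cue.
    return w - lr * relevance * np.sign(error) * x

x = np.array([0.5, -1.0, 2.0])
w = update(w, x, error=0.7, relevance=1.0)   # busy junction: correct at once
w = update(w, x, error=0.7, relevance=0.1)   # empty road: slow optimisation
```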
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | Low Rank Decomposition (LRD) is a model compression technique applied to the
weight tensors of deep learning models in order to reduce the number of
trainable parameters and computational complexity. However, due to high number
of new layers added to the architecture after applying LRD, it may not lead to
a high training/inference acceleration if the decomposition ranks are not small
enough. The issue is that using small ranks increases the risk of significant
accuracy drop after decomposition. In this paper, we propose two techniques for
accelerating low rank decomposed models without requiring to use small ranks
for decomposition. These methods include rank optimization and sequential
freezing of decomposed layers. We perform experiments on both convolutional and
transformer-based models. Experiments show that these techniques can improve
the model throughput by up to 60% during training and 37% during inference when
combined, while preserving accuracy close to that of the original
models. | [
"Habib Hajimolahoseini",
"Walid Ahmed",
"Yang Liu"
] | 2023-09-07 16:33:42 | http://arxiv.org/abs/2309.03824v1 | http://arxiv.org/pdf/2309.03824v1 | 2309.03824v1 |
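A sketch of the underlying LRD operation on a linear layer: the weight matrix is replaced by two smaller layers of rank r via truncated SVD. The rank-optimization and sequential-freezing techniques proposed above are not shown; the layer sizes are arbitrary.

```python
import torch

def decompose_linear(layer, rank):
    W = layer.weight.data                       # shape (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = torch.nn.Linear(layer.in_features, rank, bias=False)
    B = torch.nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    A.weight.data = torch.diag(S[:rank].sqrt()) @ Vh[:rank]
    B.weight.data = U[:, :rank] @ torch.diag(S[:rank].sqrt())
    if layer.bias is not None:
        B.bias.data = layer.bias.data.clone()
    return torch.nn.Sequential(A, B)

dense = torch.nn.Linear(512, 512)
compact = decompose_linear(dense, rank=64)      # roughly 4x fewer weights
x = torch.randn(1, 512)
print((dense(x) - compact(x)).abs().max())      # rank-64 approximation error
```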
Empirical Risk Minimization for Losses without Variance | This paper considers an empirical risk minimization problem under
heavy-tailed settings, where data does not have finite variance, but only has
$p$-th moment with $p \in (1,2)$. Instead of using an estimation procedure based
on truncated observed data, we choose the optimizer by minimizing the risk
value. Those risk values can be robustly estimated via using the remarkable
Catoni's method (Catoni, 2012). Thanks to the structure of Catoni-type
influence functions, we are able to establish excess risk upper bounds via
using generalized generic chaining methods. Moreover, we take computational
issues into consideration. In particular, we theoretically investigate two types of
optimization methods, robust gradient descent algorithm and empirical
risk-based methods. With an extensive numerical study, we find that the
optimizer based on empirical risks via Catoni-style estimation indeed shows
better performance than other baselines. It indicates that estimation directly
based on truncated data may lead to unsatisfactory results. | [
"Guanhua Fang",
"Ping Li",
"Gennady Samorodnitsky"
] | 2023-09-07 16:14:00 | http://arxiv.org/abs/2309.03818v1 | http://arxiv.org/pdf/2309.03818v1 | 2309.03818v1 |
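A sketch of the Catoni-style building block referenced above: the robust mean solves $\sum_i \psi(\alpha(x_i-\theta))=0$ with the wide influence function $\psi(t)=\mathrm{sign}(t)\log(1+|t|+t^2/2)$, here found by bisection. The scale $\alpha$ is an illustrative choice, and the heavy-tailed test data (Student-t with 1.5 degrees of freedom, so no finite variance) is synthetic.

```python
import numpy as np

def psi(t):
    return np.sign(t) * np.log1p(np.abs(t) + 0.5 * t * t)

def catoni_mean(x, alpha=0.1, iters=60):
    lo, hi = x.min(), x.max()
    for _ in range(iters):               # sum of psi terms decreases in theta
        mid = 0.5 * (lo + hi)
        if psi(alpha * (x - mid)).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
x = rng.standard_t(df=1.5, size=2000)    # heavy tails: infinite variance
print("Catoni:", catoni_mean(x), " empirical mean:", x.mean())
```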
AnthroNet: Conditional Generation of Humans via Anthropometrics | We present a novel human body model formulated by an extensive set of
anthropocentric measurements, which is capable of generating a wide range of
human body shapes and poses. The proposed model enables direct modeling of
specific human identities through a deep generative architecture, which can
produce humans in any arbitrary pose. It is the first of its kind to have been
trained end-to-end using only synthetically generated data, which not only
provides highly accurate human mesh representations but also allows for precise
anthropometry of the body. Moreover, using a highly diverse animation library,
we articulated our synthetic humans' body and hands to maximize the diversity
of the learnable priors for model training. Our model was trained on a dataset
of $100k$ procedurally-generated posed human meshes and their corresponding
anthropometric measurements. Our synthetic data generator can be used to
generate millions of unique human identities and poses for non-commercial
academic research purposes. | [
"Francesco Picetti",
"Shrinath Deshpande",
"Jonathan Leban",
"Soroosh Shahtalebi",
"Jay Patel",
"Peifeng Jing",
"Chunpu Wang",
"Charles Metze III",
"Cameron Sun",
"Cera Laidlaw",
"James Warren",
"Kathy Huynh",
"River Page",
"Jonathan Hogins",
"Adam Crespi",
"Sujoy Ganguly",
"Salehe Erfanian Ebadi"
] | 2023-09-07 16:09:06 | http://arxiv.org/abs/2309.03812v1 | http://arxiv.org/pdf/2309.03812v1 | 2309.03812v1 |
Improved theoretical guarantee for rank aggregation via spectral method | Given pairwise comparisons between multiple items, how to rank them so that
the ranking matches the observations? This problem, known as rank aggregation,
has found many applications in sports, recommendation systems, and other web
applications. As it is generally NP-hard to find a global ranking that
minimizes the mismatch (known as the Kemeny optimization), we focus on the
Erd\"os-R\'enyi outliers (ERO) model for this ranking problem. Here, each
pairwise comparison is a corrupted copy of the true score difference. We
investigate spectral ranking algorithms that are based on unnormalized and
normalized data matrices. The key is to understand their performance in
recovering the underlying scores of each item from the observed data. This
reduces to deriving an entry-wise perturbation error bound between the top
eigenvectors of the unnormalized/normalized data matrix and its population
counterpart. By using the leave-one-out technique, we provide a sharper
$\ell_{\infty}$-norm perturbation bound of the eigenvectors and also derive an
error bound on the maximum displacement for each item, with only $\Omega(n\log
n)$ samples. Our theoretical analysis improves upon the state-of-the-art
results in terms of sample complexity, and our numerical experiments confirm
these theoretical findings. | [
"Ziliang Samuel Zhong",
"Shuyang Ling"
] | 2023-09-07 16:01:47 | http://arxiv.org/abs/2309.03808v2 | http://arxiv.org/pdf/2309.03808v2 | 2309.03808v2 |
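A toy illustration of the spectral idea under an ERO-style model: the clean comparison matrix $C_{ij}=s_i-s_j$ is skew-symmetric of rank 2, so the top singular subspace of the corrupted matrix still approximately contains the score vector, which can be read off as the direction orthogonal to the all-ones vector. The corruption rate and scales are assumptions; this is not the paper's normalized variant or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
s = rng.normal(size=n)
C = s[:, None] - s[None, :]                     # clean score differences
outlier = rng.random((n, n)) < 0.1              # Erdos-Renyi corruption pattern
C = np.where(outlier, rng.normal(scale=3.0, size=(n, n)), C)
C = (C - C.T) / 2                               # keep the matrix skew-symmetric

U, _, _ = np.linalg.svd(C)
B = U[:, :2]                                    # approx. spans {scores, ones}
a = np.ones(n) @ B
v = B @ np.array([-a[1], a[0]])                 # in span(B), orthogonal to ones
v *= np.sign(v @ s)                             # fix the sign for comparison only
print("correlation with true scores:", np.corrcoef(v, s)[0, 1])
```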
Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck | This work investigates the nuanced algorithm design choices for deep learning
in the presence of computational-statistical gaps. We begin by considering
offline sparse parity learning, a supervised classification problem which
admits a statistical query lower bound for gradient-based training of a
multilayer perceptron. This lower bound can be interpreted as a multi-resource
tradeoff frontier: successful learning can only occur if one is sufficiently
rich (large model), knowledgeable (large dataset), patient (many training
iterations), or lucky (many random guesses). We show, theoretically and
experimentally, that sparse initialization and increasing network width yield
significant improvements in sample efficiency in this setting. Here, width
plays the role of parallel search: it amplifies the probability of finding
"lottery ticket" neurons, which learn sparse features more sample-efficiently.
Finally, we show that the synthetic sparse parity task can be useful as a proxy
for real problems requiring axis-aligned feature learning. We demonstrate
improved sample efficiency on tabular classification benchmarks by using wide,
sparsely-initialized MLP models; these networks sometimes outperform tuned
random forests. | [
"Benjamin L. Edelman",
"Surbhi Goel",
"Sham Kakade",
"Eran Malach",
"Cyril Zhang"
] | 2023-09-07 15:52:48 | http://arxiv.org/abs/2309.03800v1 | http://arxiv.org/pdf/2309.03800v1 | 2309.03800v1 |
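A tiny data generator for the offline sparse parity task studied above: the label is the parity of a hidden size-k subset of n boolean coordinates. Dimensions and sample counts are illustrative, for experimenting with width/sample tradeoffs.

```python
import numpy as np

def sparse_parity(n_samples, n=50, k=3, seed=0):
    rng = np.random.default_rng(seed)
    support = rng.choice(n, size=k, replace=False)   # hidden relevant bits
    X = rng.integers(0, 2, size=(n_samples, n))
    y = X[:, support].sum(axis=1) % 2
    return X.astype(np.float32), y, support

X, y, support = sparse_parity(10_000)
print("relevant coordinates:", support, " label balance:", y.mean())
```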
Conformal Autoregressive Generation: Beam Search with Coverage Guarantees | We introduce two new extensions to the beam search algorithm based on
conformal prediction (CP) to produce sets of sequences with theoretical
coverage guarantees. The first method is very simple and proposes
dynamically-sized subsets of beam search results but, unlike typical CP
procedures, has an upper bound on the achievable guarantee depending on a
post-hoc calibration measure. Our second algorithm introduces the conformal set
prediction procedure as part of the decoding process, producing a variable beam
width which adapts to the current uncertainty. While more complex, this
procedure can achieve coverage guarantees selected a priori. We provide
marginal coverage bounds for each method, and evaluate them empirically on a
selection of tasks drawing from natural language processing and chemistry. | [
"Nicolas Deutschmann",
"Marvin Alberts",
"María Rodríguez Martínez"
] | 2023-09-07 15:50:48 | http://arxiv.org/abs/2309.03797v1 | http://arxiv.org/pdf/2309.03797v1 | 2309.03797v1 |
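A sketch of the simpler of the two variants, under assumptions: calibrate a split-conformal threshold on sequence-level nonconformity scores (e.g. negative log-likelihood of the true sequence) from a held-out set, then return the dynamically-sized subset of beam candidates below it. The scores here are synthetic; the coverage cap the abstract notes arises when the beam misses the true sequence entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
cal_scores = rng.exponential(1.0, size=500)   # NLL of the true sequence
level = np.ceil((1 - alpha) * (len(cal_scores) + 1)) / len(cal_scores)
q = np.quantile(cal_scores, level)            # split-conformal threshold

beam_scores = np.sort(rng.exponential(1.0, size=10))   # one test-time beam
prediction_set = beam_scores[beam_scores <= q]
print("threshold:", q, " set size:", len(prediction_set))
```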
Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences | We introduce the $ARMOR_D$ methods as novel approaches to enhancing the
adversarial robustness of deep learning models. These methods are based on a
new class of optimal-transport-regularized divergences, constructed via an
infimal convolution between an information divergence and an optimal-transport
(OT) cost. We use these as tools to enhance adversarial robustness by
maximizing the expected loss over a neighborhood of distributions, a technique
known as distributionally robust optimization. Viewed as a tool for
constructing adversarial samples, our method allows samples to be both
transported, according to the OT cost, and re-weighted, according to the
information divergence. We demonstrate the effectiveness of our method on
malware detection and image recognition applications and find that, to our
knowledge, it outperforms existing methods at enhancing the robustness against
adversarial attacks. $ARMOR_D$ yields the robustified accuracy of $98.29\%$
against $FGSM$ and $98.18\%$ against $PGD^{40}$ on the MNIST dataset, reducing
the error rate by more than $19.7\%$ and $37.2\%$ respectively compared to
prior methods. Similarly, in malware detection, a discrete (binary) data
domain, $ARMOR_D$ improves the robustified accuracy under $rFGSM^{50}$ attack
compared to the previous best-performing adversarial training methods by
$37.0\%$ while lowering false negative and false positive rates by $51.1\%$ and
$57.53\%$, respectively. | [
"Jeremiah Birrell",
"Mohammadreza Ebrahimi"
] | 2023-09-07 15:41:45 | http://arxiv.org/abs/2309.03791v1 | http://arxiv.org/pdf/2309.03791v1 | 2309.03791v1 |
CPU frequency scheduling of real-time applications on embedded devices with temporal encoding-based deep reinforcement learning | Small devices are frequently used in IoT and smart-city applications to
perform periodic dedicated tasks with soft deadlines. This work focuses on
developing methods to derive efficient power-management methods for periodic
tasks on small devices. We first study the limitations of the existing Linux
built-in methods used in small devices. We illustrate three typical
workload/system patterns that are challenging to manage with Linux's built-in
solutions. We develop a reinforcement-learning-based technique with temporal
encoding to derive an effective DVFS governor even in the presence of the
three system patterns. The derived governor uses only one performance counter,
the same as the built-in Linux mechanism, and does not require an explicit task
model for the workload. We implemented a prototype system on the Nvidia Jetson
Nano Board and evaluated it on six applications, including two
self-designed and four benchmark applications. Under different deadline
constraints, our approach can quickly derive a DVFS governor that can adapt to
performance requirements and outperform the built-in Linux approach in energy
saving. On Mibench workloads, with performance slack ranging from 0.04 s to 0.4
s, the proposed method can save 3%-11% more energy compared to Ondemand. The
AudioReg and FaceReg applications tested show 5%-14% energy-saving
improvements. We have open-sourced the implementation of our in-kernel quantized
neural network engine. The codebase can be found at:
https://github.com/coladog/tinyagent. | [
"Ti Zhou",
"Man Lin"
] | 2023-09-07 15:28:03 | http://arxiv.org/abs/2309.03779v1 | http://arxiv.org/pdf/2309.03779v1 | 2309.03779v1 |
Deep Learning Safety Concerns in Automated Driving Perception | Recent advances in the field of deep learning and impressive performance of
deep neural networks (DNNs) for perception have resulted in an increased demand
for their use in automated driving (AD) systems. The safety of such systems is
of utmost importance and thus requires to consider the unique properties of
DNNs.
In order to achieve safety of AD systems with DNN-based perception components
through a systematic and comprehensive approach, so-called safety concerns have been
introduced as a suitable structuring element. On the one hand, the concept of
safety concerns is -- by design -- well aligned to existing standards relevant
for safety of AD systems such as ISO 21448 (SOTIF). On the other hand, it has
already inspired several academic publications and upcoming standards on AI
safety such as ISO PAS 8800.
While the concept of safety concerns has been previously introduced, this
paper extends and refines it, leveraging feedback from various domain and
safety experts in the field. In particular, this paper introduces an additional
categorization for a better understanding as well as enabling cross-functional
teams to jointly address the concerns. | [
"Stephanie Abrecht",
"Alexander Hirsch",
"Shervin Raafatnia",
"Matthias Woehrle"
] | 2023-09-07 15:25:47 | http://arxiv.org/abs/2309.03774v1 | http://arxiv.org/pdf/2309.03774v1 | 2309.03774v1 |
Neural lasso: a unifying approach of lasso and neural networks | In recent years, there has been growing interest in combining techniques
attributed to the areas of Statistics and Machine Learning in order to obtain
the benefits of both approaches. In this article, the statistical technique
lasso for variable selection is represented through a neural network. It is
observed that, although both the statistical approach and its neural version
have the same objective function, they differ due to their optimization. In
particular, the neural version is usually optimized in one step using a single
validation set, while the statistical counterpart uses a two-step optimization
based on cross-validation. The more elaborated optimization of the statistical
method results in more accurate parameter estimation, especially when the
training set is small. For this reason, a modification of the standard approach
for training neural networks, that mimics the statistical framework, is
proposed. During the development of the above modification, a new optimization
algorithm for identifying the significant variables emerged. Experimental
results, using synthetic and real data sets, show that this new optimization
algorithm achieves better performance than any of the three previous
optimization approaches. | [
"David Delgado",
"Ernesto Curbelo",
"Danae Carreras"
] | 2023-09-07 15:17:10 | http://arxiv.org/abs/2309.03770v1 | http://arxiv.org/pdf/2309.03770v1 | 2309.03770v1 |
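A minimal sketch of the correspondence the paper starts from: lasso expressed as a one-layer network, a bias-free linear map trained with squared error plus an L1 penalty on its weights. The synthetic data and hyperparameters are assumptions, and the proposed two-step, cross-validation-style optimization is not reproduced.

```python
import torch

torch.manual_seed(0)
n, p, lam = 100, 20, 0.1
X = torch.randn(n, p)
beta = torch.zeros(p)
beta[:3] = torch.tensor([2.0, -1.5, 1.0])    # sparse ground-truth coefficients
y = X @ beta + 0.1 * torch.randn(n)

net = torch.nn.Linear(p, 1, bias=False)
opt = torch.optim.Adam(net.parameters(), lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(X).squeeze() - y) ** 2).mean() + lam * net.weight.abs().sum()
    loss.backward()
    opt.step()

print("weights selected:", (net.weight.abs() > 0.05).sum().item())
```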
M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms | Early detection of cardiac dysfunction through routine screening is vital for
diagnosing cardiovascular diseases. An important metric of cardiac function is
the left ventricular ejection fraction (EF), where lower EF is associated with
cardiomyopathy. Echocardiography is a popular diagnostic tool in cardiology,
with ultrasound being a low-cost, real-time, and non-ionizing technology.
However, human assessment of echocardiograms for calculating EF is
time-consuming and expertise-demanding, raising the need for an automated
approach. In this work, we propose using the M(otion)-mode of echocardiograms
for estimating the EF and classifying cardiomyopathy. We generate multiple
artificial M-mode images from a single echocardiogram and combine them using
off-the-shelf model architectures. Additionally, we extend contrastive learning
(CL) to cardiac imaging to learn meaningful representations by exploiting
structures in unlabeled data, allowing the model to achieve high accuracy, even
with limited annotations. Our experiments show that the supervised setting
converges with only ten modes and is comparable to the baseline method while
bypassing its cumbersome training process and being computationally much more
efficient. Furthermore, CL using M-mode images is helpful for limited data
scenarios, such as having labels for only 200 patients, which is common in
medical applications. | [
"Ece Ozkan",
"Thomas M. Sutter",
"Yurong Hu",
"Sebastian Balzer",
"Julia E. Vogt"
] | 2023-09-07 15:00:58 | http://arxiv.org/abs/2309.03759v1 | http://arxiv.org/pdf/2309.03759v1 | 2309.03759v1 |
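A sketch, on synthetic data, of how an artificial M-mode image can be generated from a B-mode echo clip: sample pixel intensities along one scan line in every frame and stack them over time into a (line length x frames) image; multiple lines yield the multiple M-mode views the method combines. The line endpoints and clip dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
video = rng.random((64, 128, 128))            # (frames, height, width)

def mmode(video, x0, y0, x1, y1, n_pts=128):
    t = np.linspace(0, 1, n_pts)
    ys = np.round(y0 + t * (y1 - y0)).astype(int)
    xs = np.round(x0 + t * (x1 - x0)).astype(int)
    return video[:, ys, xs].T                 # (n_pts, frames)

img = mmode(video, x0=64, y0=10, x1=64, y1=120)
print(img.shape)                              # (128, 64) M-mode image
```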
TSGBench: Time Series Generation Benchmark | Synthetic Time Series Generation (TSG) is crucial in a range of applications,
including data augmentation, anomaly detection, and privacy preservation.
Although significant strides have been made in this field, existing methods
exhibit three key limitations: (1) They often benchmark against similar model
types, constraining a holistic view of performance capabilities. (2) The use of
specialized synthetic and private datasets introduces biases and hampers
generalizability. (3) Ambiguous evaluation measures, often tied to custom
networks or downstream tasks, hinder consistent and fair comparison.
To overcome these limitations, we introduce \textsf{TSGBench}, the inaugural
TSG Benchmark, designed for a unified and comprehensive assessment of TSG
methods. It comprises three modules: (1) a curated collection of publicly
available, real-world datasets tailored for TSG, together with a standardized
preprocessing pipeline; (2) a comprehensive suite of evaluation measures including
vanilla measures, new distance-based assessments, and visualization tools; (3)
a pioneering generalization test rooted in Domain Adaptation (DA), compatible
with all methods. We have conducted extensive experiments across ten real-world
datasets from diverse domains, utilizing ten advanced TSG methods and twelve
evaluation measures, all gauged through \textsf{TSGBench}. The results
highlight its remarkable efficacy and consistency. More importantly,
\textsf{TSGBench} delivers a statistical breakdown of method rankings,
illuminating performance variations across different datasets and measures, and
offering nuanced insights into the effectiveness of each method. | [
"Yihao Ang",
"Qiang Huang",
"Yifan Bao",
"Anthony K. H. Tung",
"Zhiyong Huang"
] | 2023-09-07 14:51:42 | http://arxiv.org/abs/2309.03755v1 | http://arxiv.org/pdf/2309.03755v1 | 2309.03755v1 |
Convergence Analysis of Decentralized ASGD | Over the last decades, Stochastic Gradient Descent (SGD) has been intensively
studied by the Machine Learning community. Despite its versatility and
excellent performance, the optimization of large models via SGD still is a
time-consuming task. To reduce training time, it is common to distribute the
training process across multiple devices. Recently, it has been shown that the
convergence of asynchronous SGD (ASGD) will always be faster than mini-batch
SGD. However, despite these improvements in the theoretical bounds, most ASGD
convergence-rate proofs still rely on a centralized parameter server, which is
prone to become a bottleneck when scaling out the gradient computations across
many distributed processes.
In this paper, we present a novel convergence-rate analysis for decentralized
and asynchronous SGD (DASGD) which requires neither partial synchronization
among nodes nor restrictive network topologies. Specifically, we provide a
bound of $\mathcal{O}(\sigma\epsilon^{-2}) +
\mathcal{O}(QS_{avg}\epsilon^{-3/2}) + \mathcal{O}(S_{avg}\epsilon^{-1})$ for
the convergence rate of DASGD, where $S_{avg}$ is the average staleness between
models, $Q$ is a constant that bounds the norm of the gradients, and $\epsilon$
is a (small) error that is allowed within the bound. Furthermore, when
gradients are not bounded, we prove the convergence rate of DASGD to be
$\mathcal{O}(\sigma\epsilon^{-2}) +
\mathcal{O}(\sqrt{\hat{S}_{avg}\hat{S}_{max}}\epsilon^{-1})$, with
$\hat{S}_{max}$ and $\hat{S}_{avg}$ representing a loose version of the average
and maximum staleness, respectively. Our convergence proof holds for a fixed
stepsize and any non-convex, homogeneous, and L-smooth objective function. We
anticipate that our results will be of high relevance for the adoption of DASGD
by a broad community of researchers and developers. | [
"Mauro DL Tosi",
"Martin Theobald"
] | 2023-09-07 14:50:31 | http://arxiv.org/abs/2309.03754v1 | http://arxiv.org/pdf/2309.03754v1 | 2309.03754v1 |
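A toy simulation makes the role of the staleness terms ($S_{avg}$) in the bounds above concrete. The sketch below is not the DASGD protocol analyzed in the paper; it merely applies gradients that are a fixed number of iterations old to a simple quadratic objective, illustrating how delay slows convergence:

```python
import numpy as np

def stale_sgd(staleness, lr=0.02, steps=300, dim=5, seed=0):
    """SGD on f(x) = 0.5 * ||x||^2 where every update applies a gradient that
    is `staleness` iterations old, mimicking asynchronous workers."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    queue = []                                             # gradients awaiting application
    for _ in range(steps):
        queue.append(x + 0.1 * rng.standard_normal(dim))   # stochastic gradient at current x
        if len(queue) > staleness:
            x = x - lr * queue.pop(0)                      # apply the oldest (stale) gradient
    return np.linalg.norm(x)

for s in (0, 8, 32):
    # Larger staleness delays and slows the decrease of ||x||.
    print(f"staleness={s:2d}  final ||x|| = {stale_sgd(s):.3f}")
```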
Medoid Silhouette clustering with automatic cluster number selection | The evaluation of clustering results is difficult, highly dependent on the
evaluated data set and the perspective of the beholder. There are many
different clustering quality measures, which try to provide a general measure
to validate clustering results. A very popular measure is the Silhouette. We
discuss the efficient medoid-based variant of the Silhouette, perform a
theoretical analysis of its properties, provide two fast versions for the
direct optimization, and discuss the use to choose the optimal number of
clusters. We combine ideas from the original Silhouette with the well-known PAM
algorithm and its latest improvements FasterPAM. One of the versions guarantees
equal results to the original variant and provides a runtime speedup of $O(k^2)$.
In experiments on real data with 30000 samples and $k$=100, we observed a
10464$\times$ speedup compared to the original PAMMEDSIL algorithm.
Additionally, we provide a variant to choose the optimal number of clusters
directly. | [
"Lars Lenssen",
"Erich Schubert"
] | 2023-09-07 14:46:48 | http://arxiv.org/abs/2309.03751v1 | http://arxiv.org/pdf/2309.03751v1 | 2309.03751v1 |
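A minimal sketch of the medoid-based Silhouette discussed above, under the standard formulation in which each point's intra- and inter-cluster terms are its distances to the nearest and second-nearest medoid (a simplification; the paper's fast variants avoid recomputing this from scratch during optimization):

```python
import numpy as np

def medoid_silhouette(D, medoids):
    """Average medoid-based Silhouette for a distance matrix D (n x n) and a
    list of medoid indices: s_i = (b_i - a_i) / max(a_i, b_i), where a_i and
    b_i are distances to the nearest and second-nearest medoid."""
    d = D[:, medoids]                                 # (n, k) distances to medoids
    order = np.argsort(d, axis=1)
    idx = np.arange(len(D))
    a = d[idx, order[:, 0]]                           # nearest medoid
    b = d[idx, order[:, 1]]                           # second-nearest medoid
    s = np.where(np.maximum(a, b) > 0, (b - a) / np.maximum(a, b), 0.0)
    return s.mean()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ([0, 0], [3, 3], [0, 4])])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
# Medoid indices chosen by hand for illustration (one per blob).
print(medoid_silhouette(D, medoids=[0, 30, 60]))      # near 1 for well-separated blobs
```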
Enhancing Pipeline-Based Conversational Agents with Large Language Models | The latest advancements in AI and deep learning have led to a breakthrough in
large language model (LLM)-based agents such as GPT-4. However, many commercial
conversational agent development tools are pipeline-based and have limitations
in holding a human-like conversation. This paper investigates the capabilities
of LLMs to enhance pipeline-based conversational agents during two phases: 1)
in the design and development phase and 2) during operations. In 1) LLMs can
aid in generating training data, extracting entities and synonyms,
localization, and persona design. In 2) LLMs can assist in contextualization,
intent classification to prevent conversational breakdown and handle
out-of-scope questions, auto-correcting utterances, rephrasing responses,
formulating disambiguation questions, summarization, and enabling closed
question-answering capabilities. We conducted informal experiments with GPT-4
in the private banking domain to demonstrate the scenarios above with a
practical example. Companies may be hesitant to replace their pipeline-based
agents with LLMs entirely due to privacy concerns and the need for deep
integration within their existing ecosystems. A hybrid approach in which LLMs
are integrated into the pipeline-based agents allows companies to save time and
costs of building and running agents by capitalizing on the capabilities of
LLMs while retaining the integration and privacy safeguards of their existing
systems. | [
"Mina Foosherian",
"Hendrik Purwins",
"Purna Rathnayake",
"Touhidul Alam",
"Rui Teimao",
"Klaus-Dieter Thoben"
] | 2023-09-07 14:43:17 | http://arxiv.org/abs/2309.03748v1 | http://arxiv.org/pdf/2309.03748v1 | 2309.03748v1 |
Learning continuous-valued treatment effects through representation balancing | Estimating the effects of treatments with an associated dose on an instance's
outcome, the "dose response", is relevant in a variety of domains, from
healthcare to business, economics, and beyond. Such effects, also known as
continuous-valued treatment effects, are typically estimated from observational
data, which may be subject to dose selection bias. This means that the
allocation of doses depends on pre-treatment covariates. Previous studies have
shown that conventional machine learning approaches fail to learn accurate
individual estimates of dose responses under the presence of dose selection
bias. In this work, we propose CBRNet, a causal machine learning approach to
estimate an individual dose response from observational data. CBRNet adopts the
Neyman-Rubin potential outcome framework and extends the concept of balanced
representation learning for overcoming selection bias to continuous-valued
treatments. Our work is the first to apply representation balancing in a
continuous-valued treatment setting. We evaluate our method on a newly proposed
benchmark. Our experiments demonstrate CBRNet's ability to accurately learn
treatment effects under selection bias and competitive performance with respect
to other state-of-the-art methods. | [
"Christopher Bockel-Rickermann",
"Toon Vanderschueren",
"Jeroen Berrevoets",
"Tim Verdonck",
"Wouter Verbeke"
] | 2023-09-07 14:17:44 | http://arxiv.org/abs/2309.03731v1 | http://arxiv.org/pdf/2309.03731v1 | 2309.03731v1 |
A Causal Perspective on Loan Pricing: Investigating the Impacts of Selection Bias on Identifying Bid-Response Functions | In lending, where prices are specific to both customers and products, having
a well-functioning personalized pricing policy in place is essential to
doing business effectively. Typically, such a policy must be derived from
observational data, which introduces several challenges. While the problem of
``endogeneity'' is prominently studied in the established pricing literature,
the problem of selection bias (or, more precisely, bid selection bias) is not.
We take a step towards understanding the effects of selection bias by posing
pricing as a problem of causal inference. Specifically, we consider the
reaction of a customer to price as a treatment effect. In our experiments, we
simulate varying levels of selection bias on a semi-synthetic dataset on
mortgage loan applications in Belgium. We investigate the potential of
parametric and nonparametric methods for the identification of individual
bid-response functions. Our results illustrate how conventional methods such as
logistic regression and neural networks are adversely affected by selection bias.
In contrast, we implement state-of-the-art methods from causal machine learning
and show their capability to overcome selection bias in pricing data. | [
"Christopher Bockel-Rickermann",
"Sam Verboven",
"Tim Verdonck",
"Wouter Verbeke"
] | 2023-09-07 14:14:30 | http://arxiv.org/abs/2309.03730v1 | http://arxiv.org/pdf/2309.03730v1 | 2309.03730v1 |
Operator-Based Detecting, Learning, and Stabilizing Unstable Periodic Orbits of Chaotic Attractors | This paper examines the use of operator-theoretic approaches to the analysis
of chaotic systems through the lens of their unstable periodic orbits (UPOs).
Our approach involves three data-driven steps for detecting, identifying, and
stabilizing UPOs. We demonstrate the use of kernel integral operators within
delay coordinates as an innovative method for UPO detection. For identifying
the dynamic behavior associated with each individual UPO, we utilize the
Koopman operator to present the dynamics as linear equations in the space of
Koopman eigenfunctions. This allows for characterizing the chaotic attractor by
investigating its principal dynamical modes across varying UPOs. We extend this
methodology into an interpretable machine learning framework aimed at
stabilizing strange attractors on their UPOs. To illustrate the efficacy of our
approach, we apply it to the Lorenz attractor as a case study. | [
"Ali Tavasoli",
"Heman Shakeri"
] | 2023-09-07 13:58:58 | http://arxiv.org/abs/2310.12156v1 | http://arxiv.org/pdf/2310.12156v1 | 2310.12156v1 |
A Natural Gas Consumption Forecasting System for Continual Learning Scenarios based on Hoeffding Trees with Change Point Detection Mechanism | Forecasting natural gas consumption, considering seasonality and trends, is
crucial in planning its supply and consumption and in optimizing its
procurement cost, mainly for industrial entities. However, in times of threats to
its supply, it is also a critical element that guarantees the supply of this
raw material to meet individual consumers' needs, ensuring society's energy
security. This article introduces a novel approach to multistep-ahead
forecasting of natural gas consumption that integrates change point detection
for model collection selection and supports continual learning through data
stream processing. The performance of the forecasting models based on the proposed
approach is evaluated in a complex real-world use case of natural gas
consumption forecasting. We employed Hoeffding tree predictors as forecasting
models and the Pruned Exact Linear Time (PELT) algorithm for the change point
detection procedure. The change point detection integration enables selecting a
different model collection for successive time frames. Thus, three model
collection selection procedures (with and without an error feedback loop) are
defined and evaluated for forecasting scenarios with various densities of
detected change points. These models were compared with change point agnostic
baseline approaches. Our experiments show that fewer change points result in a
lower forecasting error regardless of the model collection selection procedure
employed. Also, simpler model collection selection procedures that omit
forecasting error feedback lead to more robust forecasting models suitable for
continual learning tasks. | [
"Radek Svoboda",
"Sebastian Basterrech",
"Jędrzej Kozal",
"Jan Platoš",
"Michał Woźniak"
] | 2023-09-07 13:52:20 | http://arxiv.org/abs/2309.03720v1 | http://arxiv.org/pdf/2309.03720v1 | 2309.03720v1 |
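A minimal sketch of the change point detection step using the PELT implementation from the `ruptures` Python package on synthetic data standing in for gas consumption; the Hoeffding tree forecasters and the model collection selection logic from the paper are omitted:

```python
import numpy as np
import ruptures as rpt  # pip install ruptures

# Synthetic "consumption" series with two regime shifts (illustrative only).
rng = np.random.default_rng(2)
signal = np.concatenate([
    rng.normal(10, 1, 200),   # baseline demand
    rng.normal(14, 1, 150),   # heating season
    rng.normal(8, 1, 150),    # mild period
])

# PELT finds the segmentation minimizing cost plus a penalty per change point.
algo = rpt.Pelt(model="rbf", min_size=20).fit(signal)
breakpoints = algo.predict(pen=10)   # end index of each detected segment
print(breakpoints)                   # expected near [200, 350, 500]

# Each detected segment could then select or retrain its own Hoeffding tree
# forecaster, as in the approach described above.
```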
DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection | Auditory Attention Detection (AAD) aims to detect target speaker from brain
signals in a multi-speaker environment. Although EEG-based AAD methods have
shown promising results in recent years, current approaches primarily rely on
traditional convolutional neural networks designed for processing Euclidean data
like images. This makes it challenging to handle EEG signals, which possess
non-Euclidean characteristics. In order to address this problem, this paper
proposes a dynamical graph self-distillation (DGSD) approach for AAD, which
does not require speech stimuli as input. Specifically, to effectively
represent the non-Euclidean properties of EEG signals, dynamical graph
convolutional networks are applied to represent the graph structure of EEG
signals, which can also extract crucial features related to auditory spatial
attention in EEG signals. In addition, to further improve AAD detection
performance, self-distillation, consisting of feature distillation and
hierarchical distillation strategies at each layer, is integrated. These
strategies leverage features and classification results from the deepest
network layers to guide the learning of shallow layers. Our experiments are
conducted on two publicly available datasets, KUL and DTU. Under a 1-second
time window, we achieve results of 90.0\% and 79.6\% accuracy on KUL and DTU,
respectively. We compare our DGSD method with competitive baselines, and the
experimental results indicate that the detection performance of our proposed
DGSD method is not only superior to the best reproducible baseline but also
significantly reduces the number of trainable parameters by approximately 100
times. | [
"Cunhang Fan",
"Hongyu Zhang",
"Wei Huang",
"Jun Xue",
"Jianhua Tao",
"Jiangyan Yi",
"Zhao Lv",
"Xiaopei Wu"
] | 2023-09-07 13:43:46 | http://arxiv.org/abs/2309.07147v1 | http://arxiv.org/pdf/2309.07147v1 | 2309.07147v1 |
A State Representation for Diminishing Rewards | A common setting in multitask reinforcement learning (RL) demands that an
agent rapidly adapt to various stationary reward functions randomly sampled
from a fixed distribution. In such situations, the successor representation
(SR) is a popular framework which supports rapid policy evaluation by
decoupling a policy's expected discounted, cumulative state occupancies from a
specific reward function. However, in the natural world, sequential tasks are
rarely independent, and instead reflect shifting priorities based on the
availability and subjective perception of rewarding stimuli. Reflecting this
disjunction, in this paper we study the phenomenon of diminishing marginal
utility and introduce a novel state representation, the $\lambda$
representation ($\lambda$R), which, surprisingly, is required for policy
evaluation in this setting and which generalizes the SR as well as several
other state representations from the literature. We establish the $\lambda$R's
formal properties and examine its normative advantages in the context of
machine learning, as well as its usefulness for studying natural behaviors,
particularly foraging. | [
"Ted Moskovitz",
"Samo Hromadka",
"Ahmed Touati",
"Diana Borsa",
"Maneesh Sahani"
] | 2023-09-07 13:38:36 | http://arxiv.org/abs/2309.03710v1 | http://arxiv.org/pdf/2309.03710v1 | 2309.03710v1 |
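For context, the successor representation (SR) that the $\lambda$R generalizes decouples discounted state occupancies from the reward; the $\lambda$R's own definition is given in the paper. The standard SR and the resulting reward-decoupled value function are:

```latex
% Successor representation for a policy \pi with discount \gamma:
M^{\pi}(s, s') = \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{1}\{s_t = s'\} \;\middle|\; s_0 = s \right],
\qquad
V^{\pi}_{r}(s) = \sum_{s'} M^{\pi}(s, s')\, r(s').
```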
Chat Failures and Troubles: Reasons and Solutions | This paper examines some common problems in Human-Robot Interaction (HRI)
causing failures and troubles in chat. Design decisions for a given use case
start with selecting a suitable robot and a suitable chatting model, then
identifying common problems that cause failures, identifying potential
solutions, and planning for continuous improvement. In conclusion, it is
recommended to use a closed-loop control algorithm that guides the use of
pre-trained Artificial Intelligence (AI) models and provides vocabulary
filtering, retrains batched models on new datasets, learns online from data
streams, and/or uses reinforcement learning models to self-update the trained
models and reduce errors. | [
"Manal Helal",
"Patrick Holthaus",
"Gabriella Lakatos",
"Farshid Amirabdollahian"
] | 2023-09-07 13:36:03 | http://arxiv.org/abs/2309.03708v1 | http://arxiv.org/pdf/2309.03708v1 | 2309.03708v1 |
A Probabilistic Semi-Supervised Approach with Triplet Markov Chains | Triplet Markov chains are general generative models for sequential data which
take into account three kinds of random variables: (noisy) observations, their
associated discrete labels and latent variables which aim at strengthening the
distribution of the observations and their associated labels. However, in
practice, we do not have at our disposal all the labels associated with the
observations to estimate the parameters of such models. In this paper, we
propose a general framework based on a variational Bayesian inference to train
parameterized triplet Markov chain models in a semi-supervised context. The
generality of our approach enables us to derive semi-supervised algorithms for
a variety of generative models for sequential Bayesian classification. | [
"Katherine Morales",
"Yohan Petetin"
] | 2023-09-07 13:34:20 | http://arxiv.org/abs/2309.03707v1 | http://arxiv.org/pdf/2309.03707v1 | 2309.03707v1 |
DiffDefense: Defending against Adversarial Attacks via Diffusion Models | This paper presents a novel reconstruction method that leverages Diffusion
Models to protect machine learning classifiers against adversarial attacks, all
without requiring any modifications to the classifiers themselves. The
susceptibility of machine learning models to minor input perturbations renders
them vulnerable to adversarial attacks. While diffusion-based methods are
typically disregarded for adversarial defense due to their slow reverse
process, this paper demonstrates that our proposed method offers robustness
against adversarial threats while preserving clean accuracy, speed, and
plug-and-play compatibility. Code at:
https://github.com/HondamunigePrasannaSilva/DiffDefence. | [
"Hondamunige Prasanna Silva",
"Lorenzo Seidenari",
"Alberto Del Bimbo"
] | 2023-09-07 13:28:36 | http://arxiv.org/abs/2309.03702v1 | http://arxiv.org/pdf/2309.03702v1 | 2309.03702v1 |
Short-Term Load Forecasting Using A Particle-Swarm Optimized Multi-Head Attention-Augmented CNN-LSTM Network | Short-term load forecasting is of paramount importance in the efficient
operation and planning of power systems, given its inherent non-linear and
dynamic nature. Recent strides in deep learning have shown promise in
addressing this challenge. However, these methods often grapple with
hyperparameter sensitivity, opaqueness in interpretability, and high
computational overhead for real-time deployment. In this paper, we propose a
novel solution that surmounts these obstacles. Our approach harnesses the power
of the Particle-Swarm Optimization algorithm to autonomously explore and
optimize hyperparameters, a Multi-Head Attention mechanism to discern the
salient features crucial for accurate forecasting, and a streamlined framework
for computational efficiency. Our method undergoes rigorous evaluation using a
genuine electricity demand dataset. The results underscore its superiority in
terms of accuracy, robustness, and computational efficiency. Notably, our Mean
Absolute Percentage Error of 1.9376 marks a significant advancement over
existing state-of-the-art approaches, heralding a new era in short-term load
forecasting. | [
"Paapa Kwesi Quansah",
"Edwin Kwesi Ansah Tenkorang"
] | 2023-09-07 13:06:52 | http://arxiv.org/abs/2309.03694v2 | http://arxiv.org/pdf/2309.03694v2 | 2309.03694v2 |
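A minimal sketch of the canonical particle-swarm update used for hyperparameter search; the toy objective stands in for the CNN-LSTM's validation error, and the coefficients (inertia `w`, cognitive `c1`, social `c2`) are common defaults rather than the paper's settings:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle-swarm optimization over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                     # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in: imagine (learning rate, hidden size) tuned to minimize error.
obj = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, best_f = pso(obj, (np.zeros(2), np.ones(2)))
print(best, best_f)   # close to [0.3, 0.7]
```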
A computationally lightweight safe learning algorithm | Safety is an essential asset when learning control policies for physical
systems, as violating safety constraints during training can lead to expensive
hardware damage. In response to this need, the field of safe learning has
emerged with algorithms that can provide probabilistic safety guarantees
without knowledge of the underlying system dynamics. Those algorithms often
rely on Gaussian process inference. Unfortunately, Gaussian process inference
scales cubically with the number of data points, limiting applicability to
high-dimensional and embedded systems. In this paper, we propose a safe
learning algorithm that provides probabilistic safety guarantees but leverages
the Nadaraya-Watson estimator instead of Gaussian processes. For the
Nadaraya-Watson estimator, we can reach logarithmic scaling with the number of
data points. We provide theoretical guarantees for the estimates, embed them
into a safe learning algorithm, and show numerical experiments on a simulated
seven-degrees-of-freedom robot manipulator. | [
"Dominik Baumann",
"Krzysztof Kowalczyk",
"Koen Tiels",
"Paweł Wachel"
] | 2023-09-07 12:21:22 | http://arxiv.org/abs/2309.03672v1 | http://arxiv.org/pdf/2309.03672v1 | 2309.03672v1 |
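A minimal sketch of the Nadaraya-Watson estimator at the core of the method: a kernel-weighted average of observed targets. The Gaussian kernel and toy data are illustrative assumptions; the paper's probabilistic safety guarantees rest on additional analysis not shown here:

```python
import numpy as np

def nadaraya_watson(x_query, X, y, bandwidth):
    """Kernel regression: f_hat(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i).
    With compactly supported kernels, only nearby points contribute, which is
    what a suitable spatial index can exploit for fast lookups."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
print(nadaraya_watson(np.array([1.0]), X, y, bandwidth=0.3))  # approx sin(1) ~ 0.84
```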
Dataset Generation and Bonobo Classification from Weakly Labelled Videos | This paper presents a bonobo detection and classification pipeline built from
commonly used machine learning methods. Such an application is motivated by
the need to test bonobos in their enclosure using touch screen devices without
human assistance. This work introduces a newly acquired dataset based on bonobo
recordings generated semi-automatically. The recordings are weakly labelled and
fed to a macaque detector in order to spatially detect the individual present
in the video. Handcrafted features coupled with different classification
algorithms and deep-learning methods using a ResNet architecture are
investigated for bonobo identification. Performance is compared in terms of
classification accuracy on the splits of the database using different data
separation methods. We demonstrate the importance of data preparation and how a
wrong data separation can lead to misleadingly good results. Finally, after a
meaningful separation of the data, the best classification performance is
obtained using a fine-tuned ResNet model and reaches 75% of accuracy. | [
"Pierre-Etienne Martin"
] | 2023-09-07 12:19:51 | http://arxiv.org/abs/2309.03671v1 | http://arxiv.org/pdf/2309.03671v1 | 2309.03671v1 |
How adversarial attacks can disrupt seemingly stable accurate classifiers | Adversarial attacks dramatically change the output of an otherwise accurate
learning system using a seemingly inconsequential modification to a piece of
input data. Paradoxically, empirical evidence indicates that even systems which
are robust to large random perturbations of the input data remain susceptible
to small, easily constructed, adversarial perturbations of their inputs. Here,
we show that this may be seen as a fundamental feature of classifiers working
with high dimensional input data. We introduce a simple generic and
generalisable framework for which key behaviours observed in practical systems
arise with high probability -- notably the simultaneous susceptibility of the
(otherwise accurate) model to easily constructed adversarial attacks, and
robustness to random perturbations of the input data. We confirm that the same
phenomena are directly observed in practical neural networks trained on
standard image classification problems, where even large additive random noise
fails to trigger the adversarial instability of the network. A surprising
takeaway is that even small margins separating a classifier's decision surface
from training and testing data can hide adversarial susceptibility from being
detected using randomly sampled perturbations. Counterintuitively, using
additive noise during training or testing is therefore inefficient for
eradicating or detecting adversarial examples, and more demanding adversarial
training is required. | [
"Oliver J. Sutton",
"Qinghua Zhou",
"Ivan Y. Tyukin",
"Alexander N. Gorban",
"Alexander Bastounis",
"Desmond J. Higham"
] | 2023-09-07 12:02:00 | http://arxiv.org/abs/2309.03665v1 | http://arxiv.org/pdf/2309.03665v1 | 2309.03665v1 |
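The contrast described above, robustness to random perturbations alongside susceptibility to constructed ones, is easy to reproduce on a linear classifier in high dimension. The sketch below uses the fast gradient sign method (FGSM) as the "easily constructed" attack; it illustrates the phenomenon and is not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 1000                                    # high input dimension, as in the setting above
w = rng.standard_normal(d) / np.sqrt(d)     # ||w||_2 roughly 1
x = rng.standard_normal(d)
y = 1.0
b = 0.5 - w @ x                             # place x at margin ~0.5 from the boundary

def fgsm(x, eps):
    """FGSM on the logistic model p(y=1|x) = sigmoid(w.x + b).
    d(log-loss)/dx = (p - y) * w, so the attack steps along its sign."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return x + eps * np.sign((p - y) * w)

eps = 0.05                                  # same per-coordinate budget in both cases
print("clean score:     ", w @ x + b)                                    # +0.5: correct
print("adversarial:     ", w @ fgsm(x, eps) + b)                         # sign flips
print("random +/- noise:", w @ (x + eps * rng.choice([-1, 1], d)) + b)   # stays ~ +0.5
```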
Alzheimer Disease Detection from Raman Spectroscopy of the Cerebrospinal Fluid via Topological Machine Learning | The cerebrospinal fluid (CSF) of 19 subjects who received a clinical
diagnosis of Alzheimer's disease (AD) as well as of 5 pathological controls
has been collected and analysed by Raman spectroscopy (RS). We investigated
whether the raw and preprocessed Raman spectra could be used to distinguish AD
from controls. First, we applied standard Machine Learning (ML) methods
obtaining unsatisfactory results. Then, we applied ML to a set of topological
descriptors extracted from raw spectra, achieving a very good classification
accuracy (>87%). Although our results are preliminary, they indicate that RS
and topological analysis together may provide an effective combination to
confirm or disprove a clinical diagnosis of AD. The next steps will include
enlarging the dataset of CSF samples to validate the proposed method better
and, possibly, to understand if topological data analysis could support the
characterization of AD subtypes. | [
"Francesco Conti",
"Martina Banchelli",
"Valentina Bessi",
"Cristina Cecchi",
"Fabrizio Chiti",
"Sara Colantonio",
"Cristiano D'Andrea",
"Marella de Angelis",
"Davide Moroni",
"Benedetta Nacmias",
"Maria Antonietta Pascali",
"Sandro Sorbi",
"Paolo Matteini"
] | 2023-09-07 12:01:01 | http://arxiv.org/abs/2309.03664v1 | http://arxiv.org/pdf/2309.03664v1 | 2309.03664v1 |
Towards Comparable Knowledge Distillation in Semantic Image Segmentation | Knowledge Distillation (KD) is one proposed solution to large model sizes and
slow inference speed in semantic segmentation. In our research we identify 25
proposed distillation loss terms from 14 publications in the last 4 years.
Unfortunately, a comparison of terms based on published results is often
impossible, because of differences in training configurations. A good
illustration of this problem is the comparison of two publications from 2022.
Using the same models and dataset, Structural and Statistical Texture
Distillation (SSTKD) reports an increase of student mIoU of 4.54 and a final
performance of 29.19, while Adaptive Perspective Distillation (APD) only
improves student performance by 2.06 percentage points, but achieves a final
performance of 39.25. The reason for such extreme differences is often a
suboptimal choice of hyperparameters and a resulting underperformance of the
student model used as reference point. In our work, we reveal problems of
insufficient hyperparameter tuning by showing that distillation improvements of
two widely accepted frameworks, SKD and IFVD, vanish when hyperparameters are
optimized sufficiently. To improve comparability of future research in the
field, we establish a solid baseline for three datasets and two student models
and provide extensive information on hyperparameter tuning. We find that only
two out of eight techniques can compete with our simple baseline on the ADE20K
dataset. | [
"Onno Niemann",
"Christopher Vox",
"Thorben Werner"
] | 2023-09-07 11:56:23 | http://arxiv.org/abs/2309.03659v1 | http://arxiv.org/pdf/2309.03659v1 | 2309.03659v1 |
Large-Scale Automatic Audiobook Creation | An audiobook can dramatically improve a work of literature's accessibility
and boost reader engagement. However, audiobooks can take hundreds of hours
of human effort to create, edit, and publish. In this work, we present a system
that can automatically generate high-quality audiobooks from online e-books. In
particular, we leverage recent advances in neural text-to-speech to create and
release thousands of human-quality, open-license audiobooks from the Project
Gutenberg e-book collection. Our method can identify the proper subset of
e-book content to read for a wide collection of diversely structured books and
can operate on hundreds of books in parallel. Our system allows users to
customize an audiobook's speaking speed and style, emotional intonation, and
can even match a desired voice using a small amount of sample audio. This work
contributed over five thousand open-license audiobooks and an interactive demo
that allows users to quickly create their own customized audiobooks. To listen
to the audiobook collection visit \url{https://aka.ms/audiobook}. | [
"Brendan Walsh",
"Mark Hamilton",
"Greg Newby",
"Xi Wang",
"Serena Ruan",
"Sheng Zhao",
"Lei He",
"Shaofei Zhang",
"Eric Dettinger",
"William T. Freeman",
"Markus Weimer"
] | 2023-09-07 11:41:23 | http://arxiv.org/abs/2309.03926v1 | http://arxiv.org/pdf/2309.03926v1 | 2309.03926v1 |
Promoting Fairness in GNNs: A Characterization of Stability | The Lipschitz bound, a technique from robust statistics, can limit the
maximum changes in the output with respect to the input, taking into account
associated irrelevant biased factors. It is an efficient and provable method
for examining the output stability of machine learning models without incurring
additional computation costs. Recently, Graph Neural Networks (GNNs), which
operate on non-Euclidean data, have gained significant attention. However, no
previous research has investigated the GNN Lipschitz bounds to shed light on
stabilizing model outputs, especially when working on non-Euclidean data with
inherent biases. Given the inherent biases in common graph data used for GNN
training, it is a serious challenge to constrain the GNN output
perturbations induced by input biases and thereby safeguard fairness during
training. Recently, despite the Lipschitz constant's use in controlling the
stability of Euclidean neural networks, the calculation of the precise Lipschitz
constant remains elusive for non-Euclidean neural networks like GNNs,
especially within fairness contexts. To narrow this gap, we begin with the
general GNNs operating on an attributed graph, and formulate a Lipschitz bound
to limit the changes in the output regarding biases associated with the input.
Additionally, we theoretically analyze how the Lipschitz constant of a GNN
model could constrain the output perturbations induced by biases learned from
data for fairness training. We experimentally validate the Lipschitz bound's
effectiveness in limiting biases of the model output. Finally, from a training
dynamics perspective, we demonstrate why the theoretical Lipschitz bound can
effectively guide GNN training toward a better trade-off between accuracy and
fairness. | [
"Yaning Jia",
"Chunhui Zhang"
] | 2023-09-07 11:29:16 | http://arxiv.org/abs/2309.03648v2 | http://arxiv.org/pdf/2309.03648v2 | 2309.03648v2 |
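As background for the Lipschitz bounds discussed above: for a single linear layer, the Lipschitz constant (with respect to Euclidean norms) is the spectral norm of its weight matrix, which power iteration estimates cheaply. The GNN bounds in the paper additionally involve graph-dependent terms not sketched here:

```python
import numpy as np

def spectral_norm(W, n_iter=100, seed=0):
    """Power-iteration estimate of ||W||_2, the Lipschitz constant of x -> W x.
    Per-layer constants like this are the building blocks that Lipschitz
    bounds for (graph) neural networks compose."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    for _ in range(n_iter):
        u = W @ v               # iterate with W^T W to find the top singular pair
        v = W.T @ u
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

W = np.random.default_rng(5).standard_normal((64, 32))
print(spectral_norm(W), np.linalg.svd(W, compute_uv=False)[0])  # the two should match
```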
Automatically Testing Functional Properties of Code Translation Models | Large language models are becoming increasingly practical for translating
code across programming languages, a process known as $transpiling$. Even
though automated transpilation significantly boosts developer productivity, a
key concern is whether the generated code is correct. Existing work initially
used manually crafted test suites to test the translations of a small corpus of
programs; these test suites were later automated. In contrast, we devise the
first approach for automated, functional, property-based testing of code
translation models. Our general, user-provided specifications about the
transpiled code capture a range of properties, from purely syntactic to purely
semantic ones. As shown by our experiments, this approach is very effective in
detecting property violations in popular code translation models, and
therefore, in evaluating model quality with respect to given properties. We
also go a step further and explore the usage scenario where a user simply aims
to obtain a correct translation of some code with respect to certain properties
without necessarily being concerned about the overall quality of the model. To
this purpose, we develop the first property-guided search procedure for code
translation models, where a model is repeatedly queried with slightly different
parameters to produce alternative and potentially more correct translations.
Our results show that this search procedure helps to obtain significantly
better code translations. | [
"Hasan Ferit Eniser",
"Valentin Wüstholz",
"Maria Christakis"
] | 2023-09-07 11:00:15 | http://arxiv.org/abs/2309.12813v1 | http://arxiv.org/pdf/2309.12813v1 | 2309.12813v1 |
Insights Into the Inner Workings of Transformer Models for Protein Function Prediction | Motivation: We explored how explainable AI (XAI) can help to shed light into
the inner workings of neural networks for protein function prediction, by
extending the widely used XAI method of integrated gradients such that latent
representations inside of transformer models, which were finetuned to Gene
Ontology term and Enzyme Commission number prediction, can be inspected too.
Results: The approach enabled us to identify amino acids in the sequences that
the transformers pay particular attention to, and to show that these relevant
sequence parts reflect expectations from biology and chemistry, both in the
embedding layer and inside of the model, where we identified transformer heads
with a statistically significant correspondence of attribution maps with ground
truth sequence annotations (e.g., transmembrane regions, active sites) across
many proteins. Availability and Implementation: Source code can be accessed at
https://github.com/markuswenzel/xai-proteins . | [
"Markus Wenzel",
"Erik Grüner",
"Nils Strodthoff"
] | 2023-09-07 10:54:06 | http://arxiv.org/abs/2309.03631v1 | http://arxiv.org/pdf/2309.03631v1 | 2309.03631v1 |
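A minimal sketch of the integrated gradients formula that the paper extends to latent transformer representations: attributions are the input-baseline difference times the path-averaged gradient, approximated with a Riemann sum. The toy model with an analytic gradient is an assumption to keep the example self-contained:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, n_steps=64):
    """IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a (x - x')) da,
    approximated with a midpoint Riemann sum; `grad_f` returns dF/dx."""
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    path = baseline + alphas[:, None] * (x - baseline)     # straight-line path
    avg_grad = np.mean([grad_f(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy model F(x) = sum(x^2); by completeness, attributions sum to F(x) - F(baseline).
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x
x = np.array([1.0, -2.0, 0.5]); baseline = np.zeros(3)
ig = integrated_gradients(grad_f, x, baseline)
print(ig, ig.sum(), f(x) - f(baseline))   # sums match (approximately)
```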
Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction | The choice of the objective function is crucial for obtaining high-quality
representations from self-supervised learning. This paper investigates how
different formulations of the Barlow Twins (BT) objective impact downstream
task performance for speech data. We propose Modified Barlow Twins (MBT) with
normalized latents to enforce scale-invariance and evaluate on speaker
identification, gender recognition and keyword spotting tasks. Our results show
MBT improves representation generalization over original BT, especially when
fine-tuning with limited target data. This highlights the importance of
designing objectives that encourage invariant and transferable representations.
Our analysis provides insights into how the BT learning objective can be
tailored to produce speech representations that excel when adapted to new
downstream tasks. This study is an important step towards developing reusable
self-supervised speech representations. | [
"Yusuf Brima",
"Ulf Krumnack",
"Simone Pika",
"Gunther Heidemann"
] | 2023-09-07 10:23:59 | http://arxiv.org/abs/2309.03619v1 | http://arxiv.org/pdf/2309.03619v1 | 2309.03619v1 |
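A minimal sketch of the Barlow Twins objective under study: standardize each embedding dimension across the batch, form the cross-correlation matrix between two augmented views, then pull its diagonal toward one (invariance) and its off-diagonal toward zero (redundancy reduction). The per-dimension standardization shown is the original BT formulation; the paper's MBT variant normalizes latents differently, so treat this as an approximation:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins loss on two batches of embeddings of the same inputs
    under different augmentations."""
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)   # standardize each dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / n                              # cross-correlation matrix
    on_diag = np.sum((1.0 - np.diag(c)) ** 2)      # invariance term
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)  # redundancy reduction term
    return on_diag + lam * off_diag

rng = np.random.default_rng(6)
z = rng.standard_normal((256, 32))
print(barlow_twins_loss(z, z + 0.05 * rng.standard_normal((256, 32))))  # small loss
print(barlow_twins_loss(z, rng.standard_normal((256, 32))))             # large loss
```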
Filtration Surfaces for Dynamic Graph Classification | Existing approaches for classifying dynamic graphs either lift graph kernels
to the temporal domain, or use graph neural networks (GNNs). However, current
baselines have scalability issues, cannot handle a changing node set, or do not
take edge weight information into account. We propose filtration surfaces, a
novel method that is scalable and flexible, to alleviate said restrictions. We
experimentally validate the efficacy of our model and show that filtration
surfaces outperform previous state-of-the-art baselines on datasets that rely
on edge weight information. Our method does so while being either completely
parameter-free or having at most one parameter, and yielding the lowest overall
standard deviation among similarly scalable methods. | [
"Franz Srambical",
"Bastian Rieck"
] | 2023-09-07 10:18:36 | http://arxiv.org/abs/2309.03616v2 | http://arxiv.org/pdf/2309.03616v2 | 2309.03616v2 |
Your Battery Is a Blast! Safeguarding Against Counterfeit Batteries with Authentication | Lithium-ion (Li-ion) batteries are the primary power source in various
applications due to their high energy and power density. Their market was
estimated to be up to 48 billion U.S. dollars in 2022. However, the widespread
adoption of Li-ion batteries has resulted in counterfeit cell production, which
can pose safety hazards to users. Counterfeit cells can cause explosions or
fires, and their prevalence in the market makes it difficult for users to
detect fake cells. Indeed, current battery authentication methods can be
susceptible to advanced counterfeiting techniques and are often not adaptable
to various cells and systems. In this paper, we improve the state of the art on
battery authentication by proposing two novel methodologies, DCAuth and
EISthentication, which leverage the internal characteristics of each cell
through Machine Learning models. Our methods automatically authenticate
lithium-ion battery models and architectures using data from their regular
usage without the need for any external device. They are also resilient to the
most common and critical counterfeit practices and can scale to several
batteries and devices. To evaluate the effectiveness of our proposed
methodologies, we analyze time-series data from a total of 20 datasets that we
have processed to extract meaningful features for our analysis. Our methods
achieve high accuracy in battery authentication for both architectures (up to
0.99) and models (up to 0.96). Moreover, our methods offer comparable
identification performances. By using our proposed methodologies, manufacturers
can ensure that devices only use legitimate batteries, guaranteeing the
operational state of any system and safety measures for the users. | [
"Francesco Marchiori",
"Mauro Conti"
] | 2023-09-07 10:02:59 | http://arxiv.org/abs/2309.03607v1 | http://arxiv.org/pdf/2309.03607v1 | 2309.03607v1 |