title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
On the Fine-Grained Hardness of Inverting Generative Models | The objective of generative model inversion is to identify a size-$n$ latent
vector that produces a generative model output that closely matches a given
target. This operation is a core computational primitive in numerous modern
applications involving computer vision and NLP. However, the problem is known
to be computationally challenging and NP-hard in the worst case. This paper
aims to provide a fine-grained view of the landscape of computational hardness
for this problem. We establish several new hardness lower bounds for both exact
and approximate model inversion. In exact inversion, the goal is to determine
whether a target is contained within the range of a given generative model.
Under the strong exponential time hypothesis (SETH), we demonstrate that the
computational complexity of exact inversion is lower bounded by $\Omega(2^n)$
via a reduction from $k$-SAT; this is a strengthening of known results. For the
more practically relevant problem of approximate inversion, the goal is to
determine whether a point in the model range is close to a given target with
respect to the $\ell_p$-norm. When $p$ is a positive odd integer, under SETH,
we provide an $\Omega(2^n)$ complexity lower bound via a reduction from the
closest vectors problem (CVP). Finally, when $p$ is even, under the exponential
time hypothesis (ETH), we provide a lower bound of $2^{\Omega (n)}$ via a
reduction from Half-Clique and Vertex-Cover. | [
"Feyza Duman Keles",
"Chinmay Hegde"
] | 2023-09-11 20:03:25 | http://arxiv.org/abs/2309.05795v1 | http://arxiv.org/pdf/2309.05795v1 | 2309.05795v1 |
Generative Hyperelasticity with Physics-Informed Probabilistic Diffusion Fields | Many natural materials exhibit highly complex, nonlinear, anisotropic, and
heterogeneous mechanical properties. Recently, it has been demonstrated that
data-driven strain energy functions possess the flexibility to capture the
behavior of these complex materials with high accuracy while satisfying
physics-based constraints. However, most of these approaches disregard the
uncertainty in the estimates and the spatial heterogeneity of these materials.
In this work, we leverage recent advances in generative models to address these
issues. We use neural ordinary differential equations (NODEs) as building
blocks that -- by construction -- create polyconvex strain energy functions, a key property of
realistic hyperelastic material models. We combine this approach with
probabilistic diffusion models to generate new samples of strain energy
functions. This technique allows us to sample a vector of Gaussian white noise
and translate it to NODE parameters thereby representing plausible strain
energy functions. We extend our approach to spatially correlated diffusion
resulting in heterogeneous material properties for arbitrary geometries. We
extensively test our method with synthetic and experimental data on biological
tissues and run finite element simulations with various degrees of spatial
heterogeneity. We believe this approach is a major step forward in including
uncertainty in predictive, data-driven models of hyperelasticity. | [
"Vahidullah Tac",
"Manuel K Rausch",
"Ilias Bilionis",
"Francisco Sahli Costabal",
"Adrian Buganza Tepole"
] | 2023-09-11 19:35:23 | http://arxiv.org/abs/2310.03745v1 | http://arxiv.org/pdf/2310.03745v1 | 2310.03745v1 |
Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems | Recent advances in machine learning, particularly deep learning, have enabled
autonomous systems to perceive and comprehend objects and their environments in
a perceptual subsymbolic manner. These systems can now perform object
detection, sensor data fusion, and language understanding tasks. However, there
is a growing need to enhance these systems to understand objects and their
environments more conceptually and symbolically. It is essential to consider
both the explicit teaching provided by humans (e.g., describing a situation or
explaining how to act) and the implicit teaching obtained by observing human
behavior (e.g., through the system's sensors) to achieve this level of powerful
artificial intelligence. Thus, the system must be designed with multimodal
input and output capabilities to support implicit and explicit interaction
models. In this position paper, we argue for considering both types of inputs,
as well as human-in-the-loop and incremental learning techniques, for advancing
the field of artificial intelligence and enabling autonomous systems to learn
like humans. We propose several hypotheses and design guidelines and highlight
a use case from related work to achieve this goal. | [
"Amr Gomaa",
"Michael Feld"
] | 2023-09-11 19:35:12 | http://arxiv.org/abs/2309.05787v1 | http://arxiv.org/pdf/2309.05787v1 | 2309.05787v1 |
Grey-box Bayesian Optimization for Sensor Placement in Assisted Living Environments | Optimizing the configuration and placement of sensors is crucial for reliable
fall detection, indoor localization, and activity recognition in assisted
living spaces. We propose a novel, sample-efficient approach to find a
high-quality sensor placement in an arbitrary indoor space based on grey-box
Bayesian optimization and simulation-based evaluation. Our key technical
contribution lies in capturing domain-specific knowledge about the spatial
distribution of activities and incorporating it into the iterative selection of
query points in Bayesian optimization. Considering two simulated indoor
environments and a real-world dataset containing human activities and sensor
triggers, we show that our proposed method outperforms
state-of-the-art black-box optimization techniques in identifying high-quality
sensor placements, leading to accurate activity recognition in terms of
F1-score, while also requiring a significantly lower (51.3% on average) number
of expensive function queries. | [
"Shadan Golestan",
"Omid Ardakanian",
"Pierre Boulanger"
] | 2023-09-11 19:31:14 | http://arxiv.org/abs/2309.05784v1 | http://arxiv.org/pdf/2309.05784v1 | 2309.05784v1 |
Smartwatch-derived Acoustic Markers for Deficits in Cognitively Relevant Everyday Functioning | Detection of subtle deficits in everyday functioning due to cognitive
impairment is important for early detection of neurodegenerative diseases,
particularly Alzheimer's disease. However, current standards for assessment of
everyday functioning are based on qualitative, subjective ratings. Speech has
been shown to provide good objective markers for cognitive impairments, but the
association with cognition-relevant everyday functioning remains
uninvestigated. In this study, we demonstrate the feasibility of using a
smartwatch-based application to collect acoustic features as objective markers
for detecting deficits in everyday functioning. We collected voice data during
the performance of cognitive tasks and daily conversation, as possible
application scenarios, from 54 older adults, along with a measure of everyday
functioning. Machine learning models using acoustic features could detect
individuals with deficits in everyday functioning with up to 77.8% accuracy,
which was higher than the 68.5% accuracy with standard neuropsychological
tests. We also identified common acoustic features for robustly discriminating
deficits in everyday functioning across both types of voice data (cognitive
tasks and daily conversation). Our results suggest that common acoustic
features extracted from different types of voice data can be used as markers
for deficits in everyday functioning. | [
"Yasunori Yamada",
"Kaoru Shinkawa",
"Masatomo Kobayashi",
"Miyuki Nemoto",
"Miho Ota",
"Kiyotaka Nemoto",
"Tetsuaki Arai"
] | 2023-09-11 19:12:09 | http://arxiv.org/abs/2309.05777v1 | http://arxiv.org/pdf/2309.05777v1 | 2309.05777v1 |
The Effect of Intrinsic Dimension on Metric Learning under Compression | Metric learning aims at finding a suitable distance metric over the input
space, to improve the performance of distance-based learning algorithms. In
high-dimensional settings, metric learning can also play the role of
dimensionality reduction, by imposing a low-rank restriction to the learnt
metric. In this paper, instead of training a low-rank metric on
high-dimensional data, we consider a randomly compressed version of the data,
and train a full-rank metric there. We give theoretical guarantees on the error
of distance-based metric learning, with respect to the random compression,
which do not depend on the ambient dimension. Our bounds do not make any
explicit assumptions, aside from i.i.d. data from a bounded support, and
automatically tighten when benign geometrical structures are present.
Experimental results on both synthetic and real data sets support our
theoretical findings in high-dimensional settings. | [
"Efstratios Palias",
"Ata Kabán"
] | 2023-09-11 18:15:51 | http://arxiv.org/abs/2309.05751v1 | http://arxiv.org/pdf/2309.05751v1 | 2309.05751v1 |
CaloClouds II: Ultra-Fast Geometry-Independent Highly-Granular Calorimeter Simulation | Fast simulation of the energy depositions in highly granular detectors is
needed for future collider experiments with ever-increasing luminosities.
Generative machine learning (ML) models have been shown to speed up and augment
the traditional simulation chain in physics analysis. However, the majority of
previous efforts were limited to models relying on fixed, regular detector
readout geometries. A major advancement is the recently introduced CaloClouds
model, a geometry-independent diffusion model, which generates calorimeter
showers as point clouds for the electromagnetic calorimeter of the envisioned
International Large Detector (ILD).
In this work, we introduce CaloClouds II which features a number of key
improvements. This includes continuous time score-based modelling, which allows
for a 25 step sampling with comparable fidelity to CaloClouds while yielding a
$6\times$ speed-up over Geant4 on a single CPU ($5\times$ over CaloClouds). We
further distill the diffusion model into a consistency model allowing for
accurate sampling in a single step and resulting in a $46\times$ ($37\times$)
speed-up. This constitutes the first application of consistency distillation
for the generation of calorimeter showers. | [
"Erik Buhmann",
"Frank Gaede",
"Gregor Kasieczka",
"Anatolii Korol",
"William Korcari",
"Katja Krüger",
"Peter McKeown"
] | 2023-09-11 18:00:02 | http://arxiv.org/abs/2309.05704v1 | http://arxiv.org/pdf/2309.05704v1 | 2309.05704v1 |
Unsupervised Machine Learning Techniques for Exploring Tropical Coamoeba, Brane Tilings and Seiberg Duality | We introduce unsupervised machine learning techniques in order to identify
toric phases of 4d N=1 supersymmetric gauge theories corresponding to the same
toric Calabi-Yau 3-fold. These 4d N=1 supersymmetric gauge theories are
worldvolume theories of a D3-brane probing a toric Calabi-Yau 3-fold and are
realized in terms of a Type IIB brane configuration known as a brane tiling. It
corresponds to the skeleton graph of the coamoeba projection of the mirror
curve associated to the toric Calabi-Yau 3-fold. When we vary the complex
structure moduli of the mirror Calabi-Yau 3-fold, the coamoeba and the
corresponding brane tilings change their shape, giving rise to different toric
phases related by Seiberg duality. We illustrate that by employing techniques
such as principal component analysis (PCA) and t-distributed stochastic
neighbor embedding (t-SNE), we can project the space of coamoeba labelled by
complex structure moduli down to a lower dimensional phase space with phase
boundaries corresponding to Seiberg duality. In this work, we illustrate this
technique by obtaining a 2-dimensional phase diagram for brane tilings
corresponding to the cone over the zeroth Hirzebruch surface F0. | [
"Rak-Kyeong Seong"
] | 2023-09-11 18:00:01 | http://arxiv.org/abs/2309.05702v1 | http://arxiv.org/pdf/2309.05702v1 | 2309.05702v1 |
Robot Parkour Learning | Parkour is a grand challenge for legged locomotion that requires robots to
overcome various obstacles rapidly in complex environments. Existing methods
can generate either diverse but blind locomotion skills or vision-based but
specialized skills by using reference animal data or complex rewards. However,
autonomous parkour requires robots to learn generalizable skills that are both
vision-based and diverse to perceive and react to various scenarios. In this
work, we propose a system for learning a single end-to-end vision-based parkour
policy of diverse parkour skills using a simple reward without any reference
motion data. We develop a reinforcement learning method inspired by direct
collocation to generate parkour skills, including climbing over high obstacles,
leaping over large gaps, crawling beneath low barriers, squeezing through thin
slits, and running. We distill these skills into a single vision-based parkour
policy and transfer it to a quadrupedal robot using its egocentric depth
camera. We demonstrate that our system can empower two different low-cost
robots to autonomously select and execute appropriate parkour skills to
traverse challenging real-world environments. | [
"Ziwen Zhuang",
"Zipeng Fu",
"Jianren Wang",
"Christopher Atkeson",
"Soeren Schwertfeger",
"Chelsea Finn",
"Hang Zhao"
] | 2023-09-11 17:59:17 | http://arxiv.org/abs/2309.05665v2 | http://arxiv.org/pdf/2309.05665v2 | 2309.05665v2 |
Hypothesis Search: Inductive Reasoning with Language Models | Inductive reasoning is a core problem-solving capacity: humans can identify
underlying principles from a few examples, which can then be robustly
generalized to novel scenarios. Recent work has evaluated large language models
(LLMs) on inductive reasoning tasks by directly prompting them, yielding
"in-context learning." This can work well for straightforward inductive tasks, but
performs very poorly on more complex tasks such as the Abstraction and
Reasoning Corpus (ARC). In this work, we propose to improve the inductive
reasoning ability of LLMs by generating explicit hypotheses at multiple levels
of abstraction: we prompt the LLM to propose multiple abstract hypotheses about
the problem, in natural language, then implement the natural language
hypotheses as concrete Python programs. These programs can be directly verified
by running on the observed examples and generalized to novel inputs. Because of
the prohibitive cost of generation with state-of-the-art LLMs, we consider a
middle step to filter the set of hypotheses that will be implemented into
programs: we either ask the LLM to summarize into a smaller set of hypotheses,
or ask human annotators to select a subset of the hypotheses. We verify our
pipeline's effectiveness on the ARC visual inductive reasoning benchmark, its
variant 1D-ARC, and string transformation dataset SyGuS. On a random 40-problem
subset of ARC, our automated pipeline using LLM summaries achieves 27.5%
accuracy, significantly outperforming the direct prompting baseline (accuracy
of 12.5%). With the minimal human input of selecting from LLM-generated
candidates, the performance is boosted to 37.5%. (And we argue this is a lower
bound on the performance of our approach without filtering.) Our ablation
studies show that abstract hypothesis generation and concrete program
representations are both beneficial for LLMs to perform inductive reasoning
tasks. | [
"Ruocheng Wang",
"Eric Zelikman",
"Gabriel Poesia",
"Yewen Pu",
"Nick Haber",
"Noah D. Goodman"
] | 2023-09-11 17:56:57 | http://arxiv.org/abs/2309.05660v1 | http://arxiv.org/pdf/2309.05660v1 | 2309.05660v1 |
On the quality of randomized approximations of Tukey's depth | Tukey's depth (or halfspace depth) is a widely used measure of centrality for
multivariate data. However, exact computation of Tukey's depth is known to be a
hard problem in high dimensions. As a remedy, randomized approximations of
Tukey's depth have been proposed. In this paper we explore when such randomized
algorithms return a good approximation of Tukey's depth. We study the case when
the data are sampled from a log-concave isotropic distribution. We prove that,
if one requires that the algorithm runs in polynomial time in the dimension,
the randomized algorithm correctly approximates the maximal depth $1/2$ and
depths close to zero. On the other hand, for any point of intermediate depth,
any good approximation requires exponential complexity. | [
"Simon Briend",
"Gábor Lugosi",
"Roberto Imbuzeiro Oliveira"
] | 2023-09-11 17:52:28 | http://arxiv.org/abs/2309.05657v2 | http://arxiv.org/pdf/2309.05657v2 | 2309.05657v2 |
Dynamic Handover: Throw and Catch with Bimanual Hands | Humans throw and catch objects all the time. However, such a seemingly common
skill introduces a lot of challenges for robots to achieve: The robots need to
operate such dynamic actions at high-speed, collaborate precisely, and interact
with diverse objects. In this paper, we design a system with two multi-finger
hands attached to robot arms to solve this problem. We train our system using
Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer
to deploy on the real robots. To overcome the Sim2Real gap, we provide multiple
novel algorithm designs, including learning a trajectory prediction model for
the object. Such a model helps the robot catcher maintain a real-time estimate
of where the object is heading and react accordingly. We conduct our
experiments with multiple objects in the real-world system, and show
significant improvements over multiple baselines. Our project page is available
at \url{https://binghao-huang.github.io/dynamic_handover/}. | [
"Binghao Huang",
"Yuanpei Chen",
"Tianyu Wang",
"Yuzhe Qin",
"Yaodong Yang",
"Nikolay Atanasov",
"Xiaolong Wang"
] | 2023-09-11 17:49:25 | http://arxiv.org/abs/2309.05655v1 | http://arxiv.org/pdf/2309.05655v1 | 2309.05655v1 |
Data efficiency, dimensionality reduction, and the generalized symmetric information bottleneck | The Symmetric Information Bottleneck (SIB), an extension of the more familiar
Information Bottleneck, is a dimensionality reduction technique that
simultaneously compresses two random variables to preserve information between
their compressed versions. We introduce the Generalized Symmetric Information
Bottleneck (GSIB), which explores different functional forms of the cost of
such simultaneous reduction. We then explore the dataset size requirements of
such simultaneous compression. We do this by deriving bounds and
root-mean-squared estimates of statistical fluctuations of the involved loss
functions. We show that, in typical situations, the simultaneous GSIB
compression requires qualitatively less data to achieve the same errors
compared to compressing variables one at a time. We suggest that this is an
example of a more general principle that simultaneous compression is more data
efficient than independent compression of each of the input variables. | [
"K. Michael Martini",
"Ilya Nemenman"
] | 2023-09-11 17:40:37 | http://arxiv.org/abs/2309.05649v1 | http://arxiv.org/pdf/2309.05649v1 | 2309.05649v1 |
A Novel Supervised Deep Learning Solution to Detect Distributed Denial of Service (DDoS) attacks on Edge Systems using Convolutional Neural Networks (CNN) | Cybersecurity attacks are becoming increasingly sophisticated and pose a
growing threat to individuals, and private and public sectors. Distributed
Denial of Service attacks are one of the most harmful of these threats in
today's internet, disrupting the availability of essential services. This
project presents a novel deep learning-based approach for detecting DDoS
attacks in network traffic using the industry-recognized DDoS evaluation
dataset from the University of New Brunswick, which contains packet captures
from real-time DDoS attacks, creating a broader and more applicable model for
the real world. The algorithm employed in this study exploits the properties of
Convolutional Neural Networks (CNN) and common deep learning algorithms to
build a novel mitigation technique that classifies benign and malicious
traffic. The proposed model preprocesses the data by extracting packet flows
and normalizing them to a fixed length which is fed into a custom architecture
containing layers regulating node dropout, normalization, and a sigmoid
activation function to output a binary classification. This allows the model
to process the flows effectively and look for the nodes that contribute to DDoS
attacks while dropping the "noise" or the distractors. The results of this
study demonstrate the effectiveness of the proposed algorithm in detecting DDoS
attacks, achieving an accuracy of 0.9883 on 2000 unseen flows in network
traffic, while being scalable for any network environment. | [
"Vedanth Ramanathan",
"Krish Mahadevan",
"Sejal Dua"
] | 2023-09-11 17:37:35 | http://arxiv.org/abs/2309.05646v1 | http://arxiv.org/pdf/2309.05646v1 | 2309.05646v1 |
Desenvolvimento de modelo para predição de cotações de ação baseada em análise de sentimentos de tweets | Training machine learning models for predicting stock market share prices is
an active area of research since automated trading of such shares became
available in real time. While most of the work in this field of research is
done by training Neural networks based on past prices of stock shares, in this
work, we use iFeel 2.0 platform to extract 19 sentiment features from posts
obtained from microblog platform Twitter that mention the company Petrobras.
Then, we used those features to train XBoot models to predict future stock
prices for that company. Later, we simulated the trading of Petrobras'
shares based on the model's outputs and determined a net gain of R$88.82 over
a 250-day period when compared to the average performance of 100 random models. | [
"Mario Mitsuo Akita",
"Everton Josue da Silva"
] | 2023-09-11 17:32:54 | http://arxiv.org/abs/2309.06538v1 | http://arxiv.org/pdf/2309.06538v1 | 2309.06538v1 |
Boundary Peeling: Outlier Detection Method Using One-Class Peeling | Unsupervised outlier detection constitutes a crucial phase within data
analysis and remains a dynamic realm of research. A good outlier detection
algorithm should be computationally efficient, robust to tuning parameter
selection, and perform consistently well across diverse underlying data
distributions. We introduce One-Class Boundary Peeling, an unsupervised outlier
detection algorithm. One-Class Boundary Peeling uses the average signed
distance from iteratively peeled, flexible boundaries generated by one-class
support vector machines. One-Class Boundary Peeling has robust hyperparameter
settings and, for increased flexibility, can be cast as an ensemble method. In
synthetic data simulations, One-Class Boundary Peeling outperforms all
state-of-the-art methods when no outliers are present, while maintaining comparable or
superior performance in the presence of outliers, as compared to benchmark
methods. One-Class Boundary Peeling performs competitively in terms of correct
classification, AUC, and processing time using common benchmark data sets. | [
"Sheikh Arafat",
"Na Sun",
"Maria L. Weese",
"Waldyn G. Martinez"
] | 2023-09-11 17:19:07 | http://arxiv.org/abs/2309.05630v1 | http://arxiv.org/pdf/2309.05630v1 | 2309.05630v1 |
Privacy Side Channels in Machine Learning Systems | Most current approaches for protecting privacy in machine learning (ML)
assume that models exist in a vacuum, when in reality, ML models are part of
larger systems that include components for training data filtering, output
monitoring, and more. In this work, we introduce privacy side channels: attacks
that exploit these system-level components to extract private information at
far higher rates than is otherwise possible for standalone models. We propose
four categories of side channels that span the entire ML lifecycle (training
data filtering, input preprocessing, output post-processing, and query
filtering) and allow for either enhanced membership inference attacks or even
novel threats such as extracting users' test queries. For example, we show that
deduplicating training data before applying differentially-private training
creates a side-channel that completely invalidates any provable privacy
guarantees. Moreover, we show that systems which block language models from
regenerating training data can be exploited to allow exact reconstruction of
private keys contained in the training set -- even if the model did not
memorize these keys. Taken together, our results demonstrate the need for a
holistic, end-to-end privacy analysis of machine learning. | [
"Edoardo Debenedetti",
"Giorgio Severi",
"Nicholas Carlini",
"Christopher A. Choquette-Choo",
"Matthew Jagielski",
"Milad Nasr",
"Eric Wallace",
"Florian Tramèr"
] | 2023-09-11 16:49:05 | http://arxiv.org/abs/2309.05610v1 | http://arxiv.org/pdf/2309.05610v1 | 2309.05610v1 |
Exploration and Comparison of Deep Learning Architectures to Predict Brain Response to Realistic Pictures | We present an exploration of machine learning architectures for predicting
brain responses to realistic images on the occasion of the Algonauts Challenge
2023. Our research involved extensive experimentation with various pretrained
models. Initially, we employed simpler models to predict brain activity but
gradually introduced more complex architectures utilizing available data and
embeddings generated by large-scale pre-trained models. We encountered typical
difficulties related to machine learning problems, e.g. regularization and
overfitting, as well as issues specific to the challenge, such as difficulty in
combining multiple input encodings, as well as the high dimensionality, unclear
structure, and noisy nature of the output. To overcome these issues we tested
single edge 3D position-based, multi-region of interest (ROI) and hemisphere
predictor models, but we found that employing multiple simple models, each
dedicated to a ROI in each hemisphere of the brain of each subject, yielded the
best results - a single fully connected linear layer with image embeddings
generated by CLIP as input. While we surpassed the challenge baseline, our
results fell short of establishing a robust association with the data. | [
"Riccardo Chimisso",
"Sathya Buršić",
"Paolo Marocco",
"Giuseppe Vizzari",
"Dimitri Ognibene"
] | 2023-09-11 16:45:02 | http://arxiv.org/abs/2309.09983v1 | http://arxiv.org/pdf/2309.09983v1 | 2309.09983v1 |
Memory Injections: Correcting Multi-Hop Reasoning Failures during Inference in Transformer-Based Language Models | Answering multi-hop reasoning questions requires retrieving and synthesizing
information from diverse sources. Large Language Models (LLMs) struggle to
perform such reasoning consistently. Here we propose an approach to pinpoint
and rectify multi-hop reasoning failures through targeted memory injections on
LLM attention heads. First, we analyze the per-layer activations of GPT-2
models in response to single and multi-hop prompts. We then propose a mechanism
that allows users to inject pertinent prompt-specific information, which we
refer to as "memories," at critical LLM locations during inference. By thus
enabling the LLM to incorporate additional relevant information during
inference, we enhance the quality of multi-hop prompt completions. We show
empirically that a simple, efficient, and targeted memory injection into a key
attention layer can often increase the probability of the desired next token in
multi-hop tasks by up to 424%. | [
"Mansi Sakarvadia",
"Aswathy Ajith",
"Arham Khan",
"Daniel Grzenda",
"Nathaniel Hudson",
"André Bauer",
"Kyle Chard",
"Ian Foster"
] | 2023-09-11 16:39:30 | http://arxiv.org/abs/2309.05605v2 | http://arxiv.org/pdf/2309.05605v2 | 2309.05605v2 |
Introspective Deep Metric Learning | This paper proposes an introspective deep metric learning (IDML) framework
for uncertainty-aware comparisons of images. Conventional deep metric learning
methods focus on learning a discriminative embedding to describe the semantic
features of images, which ignore the existence of uncertainty in each image
resulting from noise or semantic ambiguity. Training without awareness of these
uncertainties causes the model to overfit the annotated labels during training
and produce unsatisfactory judgments during inference. Motivated by this, we
argue that a good similarity model should consider the semantic discrepancies
with awareness of the uncertainty to better deal with ambiguous images for more
robust training. To achieve this, we propose to represent an image using not
only a semantic embedding but also an accompanying uncertainty embedding, which
describes the semantic characteristics and ambiguity of an image, respectively.
We further propose an introspective similarity metric to make similarity
judgments between images considering both their semantic differences and
ambiguities. The gradient analysis of the proposed metric shows that it enables
the model to learn at an adaptive and slower pace to deal with the uncertainty
during training. The proposed IDML framework improves the performance of deep
metric learning through uncertainty modeling and attains state-of-the-art
results on the widely used CUB-200-2011, Cars196, and Stanford Online Products
datasets for image retrieval and clustering. We further provide an in-depth
analysis of our framework to demonstrate the effectiveness and reliability of
IDML. Code: https://github.com/wzzheng/IDML. | [
"Chengkun Wang",
"Wenzhao Zheng",
"Zheng Zhu",
"Jie Zhou",
"Jiwen Lu"
] | 2023-09-11 16:21:13 | http://arxiv.org/abs/2309.09982v1 | http://arxiv.org/pdf/2309.09982v1 | 2309.09982v1 |
Quantitative Analysis of Forecasting Models:In the Aspect of Online Political Bias | Understanding and mitigating political bias in online social media platforms
are crucial tasks to combat misinformation and echo chamber effects. However,
characterizing political bias temporally using computational methods presents
challenges due to the high frequency of noise in social media datasets. While
existing research has explored various approaches to political bias
characterization, the ability to forecast political bias and anticipate how
political conversations might evolve in the near future has not been
extensively studied. In this paper, we propose a heuristic approach to classify
social media posts into five distinct political leaning categories. Since there
is a lack of prior work on forecasting political bias, we conduct an in-depth
analysis of existing baseline models to identify which model is best suited to
forecasting political leaning time series. Our approach involves utilizing
existing time series forecasting models on two social media datasets with
different political ideologies, specifically Twitter and Gab. Through our
experiments and analyses, we seek to shed light on the challenges and
opportunities in forecasting political bias in social media platforms.
Ultimately, our work aims to pave the way for developing more effective
strategies to mitigate the negative impact of political bias in the digital
realm. | [
"Srinath Sai Tripuraneni",
"Sadia Kamal",
"Arunkumar Bagavathi"
] | 2023-09-11 16:17:24 | http://arxiv.org/abs/2309.05589v2 | http://arxiv.org/pdf/2309.05589v2 | 2309.05589v2 |
Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning | We introduce a simple but effective method for managing risk in model-based
reinforcement learning with trajectory sampling. The method involves
probabilistic safety constraints and balances optimism in the face of epistemic
uncertainty against pessimism in the face of aleatoric uncertainty of an
ensemble of stochastic neural networks. Various experiments indicate that the separation
of uncertainties is essential to performing well with data-driven MPC
approaches in uncertain and safety-critical control environments. | [
"Marin Vlastelica",
"Sebastian Blaes",
"Cristina Pineri",
"Georg Martius"
] | 2023-09-11 16:10:58 | http://arxiv.org/abs/2309.05582v1 | http://arxiv.org/pdf/2309.05582v1 | 2309.05582v1 |
Anisotropic Diffusion Stencils: From Simple Derivations over Stability Estimates to ResNet Implementations | Anisotropic diffusion processes with a diffusion tensor are important in
image analysis, physics, and engineering. However, their numerical
approximation has a strong impact on dissipative artefacts and deviations from
rotation invariance. In this work, we study a large family of finite difference
discretisations on a 3 x 3 stencil. We derive it by splitting 2-D anisotropic
diffusion into four 1-D diffusions. The resulting stencil class involves one
free parameter and covers a wide range of existing discretisations. It
comprises the full stencil family of Weickert et al. (2013) and shows that
their two parameters contain redundancy. Furthermore, we establish a bound on
the spectral norm of the matrix corresponding to the stencil. This gives time
step size limits that guarantee stability of an explicit scheme in the
Euclidean norm. Our directional splitting also allows a very natural
translation of the explicit scheme into ResNet blocks. Employing neural network
libraries enables simple and highly efficient parallel implementations on GPUs. | [
"Karl Schrader",
"Joachim Weickert",
"Michael Krause"
] | 2023-09-11 16:03:00 | http://arxiv.org/abs/2309.05575v2 | http://arxiv.org/pdf/2309.05575v2 | 2309.05575v2 |
ITI-GEN: Inclusive Text-to-Image Generation | Text-to-image generative models often reflect the biases of the training
data, leading to unequal representations of underrepresented groups. This study
investigates inclusive text-to-image generative models that generate images
based on human-written prompts and ensure the resulting images are uniformly
distributed across attributes of interest. Unfortunately, directly expressing
the desired attributes in the prompt often leads to sub-optimal results due to
linguistic ambiguity or model misrepresentation. Hence, this paper proposes a
drastically different approach that adheres to the maxim that "a picture is
worth a thousand words". We show that, for some attributes, images can
represent concepts more expressively than text. For instance, categories of
skin tones are typically hard to specify by text but can be easily represented
by example images. Building upon these insights, we propose a novel approach,
ITI-GEN, that leverages readily available reference images for Inclusive
Text-to-Image GENeration. The key idea is learning a set of prompt embeddings
to generate images that can effectively represent all desired attribute
categories. More importantly, ITI-GEN requires no model fine-tuning, making it
computationally efficient to augment existing text-to-image models. Extensive
experiments demonstrate that ITI-GEN largely improves over state-of-the-art
models to generate inclusive images from a prompt. Project page:
https://czhang0528.github.io/iti-gen. | [
"Cheng Zhang",
"Xuanbai Chen",
"Siqi Chai",
"Chen Henry Wu",
"Dmitry Lagun",
"Thabo Beeler",
"Fernando De la Torre"
] | 2023-09-11 15:54:30 | http://arxiv.org/abs/2309.05569v1 | http://arxiv.org/pdf/2309.05569v1 | 2309.05569v1 |
Distance-Aware eXplanation Based Learning | eXplanation Based Learning (XBL) is an interactive learning approach that
provides a transparent method of training deep learning models by interacting
with their explanations. XBL augments loss functions to penalize a model based
on deviation of its explanations from user annotation of image features. The
literature on XBL mostly depends on the intersection of visual model
explanations and image feature annotations. We present a method to add a
distance-aware explanation loss to categorical losses that trains a learner to
focus on important regions of a training dataset. Distance is an appropriate
approach for calculating explanation loss since visual model explanations such
as Gradient-weighted Class Activation Maps (Grad-CAMs) are not strictly
bounded the way annotations are, and their intersections may not provide complete
information on the deviation of a model's focus from relevant image regions. In
addition to assessing our model using existing metrics, we propose an
interpretability metric for evaluating visual feature-attribution based model
explanations that is more informative of the model's performance than existing
metrics. We demonstrate performance of our proposed method on three image
classification tasks. | [
"Misgina Tsighe Hagos",
"Niamh Belton",
"Kathleen M. Curran",
"Brian Mac Namee"
] | 2023-09-11 15:33:00 | http://arxiv.org/abs/2309.05548v1 | http://arxiv.org/pdf/2309.05548v1 | 2309.05548v1 |
Advancing Federated Learning in 6G: A Trusted Architecture with Graph-based Analysis | Integrating native AI support into the network architecture is an essential
objective of 6G. Federated Learning (FL) emerges as a potential paradigm,
facilitating decentralized AI model training across a diverse range of devices
under the coordination of a central server. However, several challenges hinder
its wide application in the 6G context, such as malicious attacks and privacy
snooping on local model updates, and centralization pitfalls. This work
proposes a trusted architecture for supporting FL, which utilizes Distributed
Ledger Technology (DLT) and Graph Neural Network (GNN), including three key
features. First, a pre-processing layer employing homomorphic encryption is
incorporated to securely aggregate local models, preserving the privacy of
individual models. Second, given the distributed nature and graph structure
between clients and nodes in the pre-processing layer, GNN is leveraged to
identify abnormal local models, enhancing system security. Third, DLT is
utilized to decentralize the system by selecting one of the candidates to
perform the central server's functions. Additionally, DLT ensures reliable data
management by recording data exchanges in an immutable and transparent ledger.
The feasibility of the novel architecture is validated through simulations,
demonstrating improved performance in anomalous model detection and global
model accuracy compared to relevant baselines. | [
"Wenxuan Ye",
"Chendi Qian",
"Xueli An",
"Xueqiang Yan",
"Georg Carle"
] | 2023-09-11 15:10:41 | http://arxiv.org/abs/2309.05525v3 | http://arxiv.org/pdf/2309.05525v3 | 2309.05525v3 |
Re-formalization of Individual Fairness | The notion of individual fairness is a formalization of an ethical principle,
"Treating like cases alike," which has been argued for by thinkers such as Aristotle. In a
fairness-aware machine learning context, Dwork et al. first formalized the
notion. In their formalization, a similar pair of data in an unfair space
should be mapped to similar positions in a fair space. We propose to
re-formalize individual fairness by the statistical independence conditioned by
individuals. This re-formalization has the following merits. First, our
formalization is compatible with that of Dwork et al. Second, our formalization
enables combining individual fairness with the fairness notions of equalized odds
or sufficiency, as well as statistical parity. Third, though their
formalization implicitly assumes a pre-process approach for making fair
predictions, our formalization is applicable to an in-process or post-process
approach. | [
"Toshihiro Kamishima"
] | 2023-09-11 15:04:46 | http://arxiv.org/abs/2309.05521v1 | http://arxiv.org/pdf/2309.05521v1 | 2309.05521v1 |
NExT-GPT: Any-to-Any Multimodal LLM | While Multimodal Large Language Models (MM-LLMs) have recently made exciting
strides, they mostly fall prey to the limitation of only input-side multimodal
understanding, without the ability to produce content in multiple modalities.
As we humans always perceive the world and communicate with people through
various modalities, developing any-to-any MM-LLMs capable of accepting and
delivering content in any modality becomes essential to human-level AI. To fill
the gap, we present an end-to-end general-purpose any-to-any MM-LLM system,
NExT-GPT. We connect an LLM with multimodal adaptors and different diffusion
decoders, enabling NExT-GPT to perceive inputs and generate outputs in
arbitrary combinations of text, images, videos, and audio. By leveraging the
existing well-trained highly-performing encoders and decoders, NExT-GPT is
tuned with only a small number of parameters (1%) in certain projection layers,
which not only benefits low-cost training but also facilitates convenient
expansion to more potential modalities. Moreover, we introduce a
modality-switching instruction tuning (MosIT) and manually curate a
high-quality dataset for MosIT, based on which NExT-GPT is empowered with
complex cross-modal semantic understanding and content generation. Overall, our
research showcases the promising possibility of building an AI agent capable of
modeling universal modalities, paving the way for more human-like AI research
in the community. Project page: https://next-gpt.github.io/ | [
"Shengqiong Wu",
"Hao Fei",
"Leigang Qu",
"Wei Ji",
"Tat-Seng Chua"
] | 2023-09-11 15:02:25 | http://arxiv.org/abs/2309.05519v2 | http://arxiv.org/pdf/2309.05519v2 | 2309.05519v2 |
Stream-based Active Learning by Exploiting Temporal Properties in Perception with Temporal Predicted Loss | Active learning (AL) reduces the amount of labeled data needed to train a
machine learning model by intelligently choosing which instances to label.
Classic pool-based AL requires all data to be present in a datacenter, which
can be challenging with the increasing amounts of data needed in deep learning.
However, AL on mobile devices and robots, like autonomous cars, can filter the
data from perception sensor streams before reaching the datacenter. We
exploited the temporal properties for such image streams in our work and
proposed the novel temporal predicted loss (TPL) method. To evaluate the
stream-based setting properly, we introduced the GTA V streets and the A2D2
streets dataset and made both publicly available. Our experiments showed that
our approach significantly improves the diversity of the selection while being
an uncertainty-based method. As pool-based approaches are more common in
perception applications, we derived a concept for comparing pool-based and
stream-based AL, where TPL outperformed state-of-the-art pool- and stream-based
approaches for different models. TPL required 2.5 percentage points (pp) less
data while being significantly faster than pool-based methods. | [
"Sebastian Schmidt",
"Stephan Günnemann"
] | 2023-09-11 15:00:01 | http://arxiv.org/abs/2309.05517v2 | http://arxiv.org/pdf/2309.05517v2 | 2309.05517v2 |
Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs | Large Language Models (LLMs) have proven their exceptional capabilities in
performing language-related tasks. However, their deployment poses significant
challenges due to their considerable memory and storage requirements. In
response to this issue, weight-only quantization, particularly 3 and 4-bit
weight-only quantization, has emerged as one of the most viable solutions. As
the number of bits decreases, the quantization grid broadens, thus emphasizing
the importance of up and down rounding. While previous studies have
demonstrated that fine-tuning up and down rounding with the addition of
perturbations can enhance accuracy in some scenarios, our study is driven by
the precise and limited boundary of these perturbations, where only the
threshold for altering the rounding value is of significance. Consequently, we
propose a concise and highly effective approach for optimizing the weight
rounding task. Our method, named SignRound, involves lightweight block-wise
tuning using signed gradient descent, enabling us to achieve outstanding
results within 400 steps. SignRound competes impressively against recent
methods without introducing additional inference overhead. The source code will
be publicly available at \url{https://github.com/intel/neural-compressor} soon. | [
"Wenhua Cheng",
"Weiwei Zhang",
"Haihao Shen",
"Yiyang Cai",
"Xin He",
"Kaokao Lv"
] | 2023-09-11 14:58:23 | http://arxiv.org/abs/2309.05516v2 | http://arxiv.org/pdf/2309.05516v2 | 2309.05516v2 |
Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning | Repeated parameter sharing in federated learning causes significant
information leakage about private data, thus defeating its main purpose: data
privacy. Mitigating the risk of this information leakage using state-of-the-art
differentially private algorithms also does not come for free. Randomized
mechanisms can prevent convergence of models on learning even the useful
representation functions, especially if there is more disagreement between
local models on the classification functions (due to data heterogeneity). In
this paper, we consider a representation federated learning objective that
encourages various parties to collaboratively refine the consensus part of the
model, with differential privacy guarantees, while separately allowing
sufficient freedom for local personalization (without releasing it). We prove
that in the linear representation setting, while the objective is non-convex,
our proposed new algorithm \DPFEDREP\ converges to a ball centered around the
\emph{global optimal} solution at a linear rate, and the radius of the ball is
proportional to the reciprocal of the privacy budget. With this novel utility
analysis, we improve the SOTA utility-privacy trade-off for this problem by a
factor of $\sqrt{d}$, where $d$ is the input dimension. We empirically evaluate
our method with the image classification task on CIFAR10, CIFAR100, and EMNIST,
and observe a significant performance improvement over the prior work under the
same small privacy budget. The code can be found in this link:
https://github.com/shenzebang/CENTAUR-Privacy-Federated-Representation-Learning. | [
"Zebang Shen",
"Jiayuan Ye",
"Anmin Kang",
"Hamed Hassani",
"Reza Shokri"
] | 2023-09-11 14:46:55 | http://arxiv.org/abs/2309.05505v1 | http://arxiv.org/pdf/2309.05505v1 | 2309.05505v1 |
Learning Semantic Segmentation with Query Points Supervision on Aerial Images | Semantic segmentation is crucial in remote sensing, where high-resolution
satellite images are segmented into meaningful regions. Recent advancements in
deep learning have significantly improved satellite image segmentation.
However, most of these methods are typically trained in fully supervised
settings that require high-quality pixel-level annotations, which are expensive
and time-consuming to obtain. In this work, we present a weakly supervised
learning algorithm to train semantic segmentation algorithms that only rely on
query point annotations instead of full mask labels. Our proposed approach
performs accurate semantic segmentation and improves efficiency by
significantly reducing the cost and time required for manual annotation.
Specifically, we generate superpixels and extend the query point labels into
those superpixels that group similar meaningful semantics. Then, we train
semantic segmentation models, supervised with images partially labeled with the
superpixels pseudo-labels. We benchmark our weakly supervised training approach
on an aerial image dataset and different semantic segmentation architectures,
showing that we can reach competitive performance compared to fully supervised
training while reducing the annotation effort. | [
"Santiago Rivier",
"Carlos Hinojosa",
"Silvio Giancola",
"Bernard Ghanem"
] | 2023-09-11 14:32:04 | http://arxiv.org/abs/2309.05490v1 | http://arxiv.org/pdf/2309.05490v1 | 2309.05490v1 |
Bayesian Quality-Diversity approaches for constrained optimization problems with mixed continuous, discrete and categorical variables | Complex engineering design problems, such as those involved in aerospace,
civil, or energy engineering, require the use of numerically costly simulation
codes in order to predict the behavior and performance of the system to be
designed. To perform the design of the systems, these codes are often embedded
into an optimization process to provide the best design while satisfying the
design constraints. Recently, new approaches, called Quality-Diversity, have
been proposed in order to enhance the exploration of the design space and to
provide a set of optimal diversified solutions with respect to some feature
functions. These functions are useful for assessing trade-offs. Furthermore,
complex engineering design problems often involve mixed continuous, discrete,
and categorical design variables allowing to take into account technological
choices in the optimization problem. In this paper, a new Quality-Diversity
methodology based on mixed continuous, discrete and categorical Bayesian
optimization strategy is proposed. This approach reduces the
computational cost with respect to classical Quality-Diversity approaches
while dealing with discrete choices and constraints. The performance of the
proposed method is assessed on a benchmark of analytical problems as well as on
an industrial design optimization problem dealing with aerospace systems. | [
"Loic Brevault",
"Mathieu Balesdent"
] | 2023-09-11 14:29:47 | http://arxiv.org/abs/2310.05955v1 | http://arxiv.org/pdf/2310.05955v1 | 2310.05955v1 |
Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection | This article summarizes a systematic review of the electroencephalography
(EEG)-based cognitive workload (CWL) estimation. The focus of the article is
twofold: identify the disparate experimental paradigms used for reliably
eliciting discrete and quantifiable levels of cognitive load and the specific
nature and representational structure of the commonly used input formulations
in deep neural networks (DNNs) used for signal classification. The analysis
revealed a number of studies using EEG signals in their native representation of
a two-dimensional matrix for offline classification of CWL. However, only a few
studies adopted an online or pseudo-online classification strategy for
real-time CWL estimation. Further, at the time of this review, only a couple of
interpretable DNNs and a single generative model had been employed for
cognitive load detection. More often than not, researchers used DNNs as
black-box models. In conclusion, DNNs prove to be valuable tools for
classifying EEG signals, primarily due to the substantial modeling power
provided by the depth of their network architecture. It is further suggested
that interpretable and explainable DNN models must be employed for cognitive
workload estimation since existing methods are limited in the face of the
non-stationary nature of the signal. | [
"Vishnu KN",
"Cota Navin Gupta"
] | 2023-09-11 14:27:22 | http://arxiv.org/abs/2309.07163v1 | http://arxiv.org/pdf/2309.07163v1 | 2309.07163v1 |
Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes | Pool-based active learning (AL) is a promising technology for increasing
data-efficiency of machine learning models. However, surveys show that
performance of recent AL methods is very sensitive to the choice of dataset and
training setting, making them unsuitable for general application. In order to
tackle this problem, the field Learning Active Learning (LAL) suggests to learn
the active learning strategy itself, allowing it to adapt to the given setting.
In this work, we propose a novel LAL method for classification that exploits
symmetry and independence properties of the active learning problem with an
Attentive Conditional Neural Process model. Our approach is based on learning
from a myopic oracle, which gives our model the ability to adapt to
non-standard objectives, such as those that do not equally weight the error on
all data points. We experimentally verify that our Neural Process model
outperforms a variety of baselines in these settings. Finally, our experiments
show that our model exhibits a tendency towards improved stability to changing
datasets. However, performance is sensitive to the choice of classifier, and more
work is necessary to reduce the performance gap with the myopic oracle and
to improve scalability. We present our work as a proof-of-concept for LAL on
nonstandard objectives and hope our analysis and modelling considerations
inspire future LAL work. | [
"Tim Bakker",
"Herke van Hoof",
"Max Welling"
] | 2023-09-11 14:16:37 | http://arxiv.org/abs/2309.05477v1 | http://arxiv.org/pdf/2309.05477v1 | 2309.05477v1 |
Machine learning the dimension of a Fano variety | Fano varieties are basic building blocks in geometry - they are `atomic
pieces' of mathematical shapes. Recent progress in the classification of Fano
varieties involves analysing an invariant called the quantum period. This is a
sequence of integers which gives a numerical fingerprint for a Fano variety. It
is conjectured that a Fano variety is uniquely determined by its quantum
period. If this is true, one should be able to recover geometric properties of
a Fano variety directly from its quantum period. We apply machine learning to
the question: does the quantum period of X know the dimension of X? Note that
there is as yet no theoretical understanding of this. We show that a simple
feed-forward neural network can determine the dimension of X with 98% accuracy.
Building on this, we establish rigorous asymptotics for the quantum periods of
a class of Fano varieties. These asymptotics determine the dimension of X from
its quantum period. Our results demonstrate that machine learning can pick out
structure from complex mathematical data in situations where we lack
theoretical understanding. They also give positive evidence for the conjecture
that the quantum period of a Fano variety determines that variety. | [
"Tom Coates",
"Alexander M. Kasprzyk",
"Sara Veneziale"
] | 2023-09-11 14:13:30 | http://arxiv.org/abs/2309.05473v1 | http://arxiv.org/pdf/2309.05473v1 | 2309.05473v1 |
Using causal inference to avoid fallouts in data-driven parametric analysis: a case study in the architecture, engineering, and construction industry | The decision-making process in real-world implementations has been affected
by a growing reliance on data-driven models. We investigated the synergetic
pattern between the data-driven methods, empirical domain knowledge, and
first-principles simulations. We showed the potential risk of biased results
when using data-driven models without causal analysis. Using a case study
assessing the implications of several design solutions for the energy consumption
of a building, we proved the necessity of causal analysis during the
data-driven modeling process. We concluded that: (a) Data-driven models'
accuracy assessment or domain knowledge screening may not rule out biased and
spurious results; (b) Data-driven models' feature selection should involve
careful consideration of causal relationships, especially colliders; (c) Causal
analysis results can be used as an aid to first-principles simulation design
and parameter checking to avoid cognitive biases. We proved the benefits of
causal analysis when applied to data-driven models in building engineering. | [
"Xia Chen",
"Ruiji Sun",
"Ueli Saluz",
"Stefano Schiavon",
"Philipp Geyer"
] | 2023-09-11 13:54:58 | http://arxiv.org/abs/2309.11509v1 | http://arxiv.org/pdf/2309.11509v1 | 2309.11509v1 |
Unveiling the Sentinels: Assessing AI Performance in Cybersecurity Peer Review | Peer review is the method employed by the scientific community for evaluating
research advancements. In the field of cybersecurity, the practice of
double-blind peer review is the de-facto standard. This paper touches on the
holy grail of peer reviewing and aims to shed light on the performance of AI in
reviewing for academic security conferences. Specifically, we investigate the
predictability of reviewing outcomes by comparing the results obtained from
human reviewers and machine-learning models. To facilitate our study, we
construct a comprehensive dataset by collecting thousands of papers from
renowned computer science conferences and the arXiv preprint website. Based on
the collected data, we evaluate the prediction capabilities of ChatGPT and a
two-stage classification approach based on the Doc2Vec model with various
classifiers. In our experimental evaluation of review outcome prediction, the
Doc2Vec-based approach performs significantly better than ChatGPT and
achieves an accuracy of over 90%. While analyzing the experimental results, we
identify the potential advantages and limitations of the tested ML models. We
explore areas within the paper-reviewing process that can benefit from
automated support approaches, while also recognizing the irreplaceable role of
human intellect in certain aspects that cannot be matched by state-of-the-art
AI techniques. | [
"Liang Niu",
"Nian Xue",
"Christina Pöpper"
] | 2023-09-11 13:51:40 | http://arxiv.org/abs/2309.05457v1 | http://arxiv.org/pdf/2309.05457v1 | 2309.05457v1 |
Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation | This paper describes a system developed for the GENEA (Generation and
Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. Our
solution builds on an existing diffusion-based motion synthesis model. We
propose a contrastive speech and motion pretraining (CSMP) module, which learns
a joint embedding for speech and gesture with the aim to learn a semantic
coupling between these modalities. The output of the CSMP module is used as a
conditioning signal in the diffusion-based gesture synthesis model in order to
achieve semantically-aware co-speech gesture generation. Our entry achieved
highest human-likeness and highest speech appropriateness rating among the
submitted entries. This indicates that our system is a promising approach to
achieve human-like co-speech gestures in agents that carry semantic meaning. | [
"Anna Deichler",
"Shivam Mehta",
"Simon Alexanderson",
"Jonas Beskow"
] | 2023-09-11 13:51:06 | http://arxiv.org/abs/2309.05455v1 | http://arxiv.org/pdf/2309.05455v1 | 2309.05455v1 |
Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning | The Mixture of Experts (MoE) is a widely known neural architecture where an
ensemble of specialized sub-models optimizes overall performance with a
constant computational cost. However, conventional MoEs pose challenges at
scale due to the need to store all experts in memory. In this paper, we push
MoE to the limit. We propose extremely parameter-efficient MoE by uniquely
combining the MoE architecture with lightweight experts. Our MoE architecture
outperforms standard parameter-efficient fine-tuning (PEFT) methods and is on
par with full fine-tuning by only updating the lightweight experts -- less than
1% of an 11B-parameter model. Furthermore, our method generalizes to unseen
tasks as it does not depend on any prior task knowledge. Our research
underscores the versatility of the mixture of experts architecture, showcasing
its ability to deliver robust performance even when subjected to rigorous
parameter constraints. Our code used in all the experiments is publicly
available here: https://github.com/for-ai/parameter-efficient-moe. | [
"Ted Zadouri",
"Ahmet Üstün",
"Arash Ahmadian",
"Beyza Ermiş",
"Acyr Locatelli",
"Sara Hooker"
] | 2023-09-11 13:31:00 | http://arxiv.org/abs/2309.05444v1 | http://arxiv.org/pdf/2309.05444v1 | 2309.05444v1 |
Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models | In the context of kernel machines, polynomial and Fourier features are
commonly used to provide a nonlinear extension to linear models by mapping the
data to a higher-dimensional space. Unless one considers the dual formulation
of the learning problem, which renders exact large-scale learning infeasible,
the exponential increase of model parameters with the dimensionality of the data,
caused by their tensor-product structure, prohibits tackling high-dimensional
problems. One of the possible approaches to circumvent this exponential scaling
is to exploit the tensor structure present in the features by constraining the
model weights to be an underparametrized tensor network. In this paper we
quantize, i.e. further tensorize, polynomial and Fourier features. Based on
this feature quantization we propose to quantize the associated model weights,
yielding quantized models. We show that, for the same number of model
parameters, the resulting quantized models have a higher bound on the
VC-dimension as opposed to their non-quantized counterparts, at no additional
computational cost while learning from identical features. We verify
experimentally how this additional tensorization regularizes the learning
problem by prioritizing the most salient features in the data and how it
provides models with increased generalization capabilities. We finally
benchmark our approach on a large-scale regression task, achieving state-of-the-art
results on a laptop computer. | [
"Frederiek Wesel",
"Kim Batselier"
] | 2023-09-11 13:18:19 | http://arxiv.org/abs/2309.05436v1 | http://arxiv.org/pdf/2309.05436v1 | 2309.05436v1 |
A parameterised model for link prediction using node centrality and similarity measure based on graph embedding | Link prediction is a key aspect of graph machine learning, with applications
as diverse as disease prediction, social network recommendations, and drug
discovery. It involves predicting new links that may form between network
nodes. Despite the clear importance of link prediction, existing models have
significant shortcomings. Graph Convolutional Networks, for instance, have been
proven to be highly efficient for link prediction on a variety of datasets.
However, they encounter severe limitations when applied to short-path networks
and ego networks, resulting in poor performance. This presents a critical
problem space that this work aims to address. In this paper, we present the
Node Centrality and Similarity Based Parameterised Model (NCSM), a novel method
for link prediction tasks. NCSM uniquely integrates node centrality and
similarity measures as edge features in a customised Graph Neural Network (GNN)
layer, effectively leveraging the topological information of large networks.
This model represents the first parameterised GNN-based link prediction model
that considers topological information. The proposed model was evaluated on
five benchmark graph datasets, each comprising thousands of nodes and edges.
Experimental results highlight NCSM's superiority over existing
state-of-the-art models like Graph Convolutional Networks and Variational Graph
Autoencoder, as it outperforms them across various metrics and datasets. This
exceptional performance can be attributed to NCSM's innovative integration of
node centrality, similarity measures, and its efficient use of topological
information. | [
"Haohui Lu",
"Shahadat Uddin"
] | 2023-09-11 13:13:54 | http://arxiv.org/abs/2309.05434v1 | http://arxiv.org/pdf/2309.05434v1 | 2309.05434v1 |
Neuromorphic Auditory Perception by Neural Spiketrum | Neuromorphic computing holds the promise to achieve the energy efficiency and
robust learning performance of biological neural systems. To realize the
promised brain-like intelligence, it needs to solve the challenges of designing
neuromorphic hardware architectures for biological neural substrates and of
hardware-friendly algorithms with spike-based encoding and learning. Here
we introduce a neural spike coding model termed spiketrum, to characterize and
transform the time-varying analog signals, typically auditory signals, into
computationally efficient spatiotemporal spike patterns. It minimizes the
information loss occurring at the analog-to-spike transformation and possesses
informational robustness to neural fluctuations and spike losses. The model
provides a sparse and efficient coding scheme with precisely controllable spike
rate that facilitates training of spiking neural networks in various auditory
perception tasks. We further investigate the algorithm-hardware co-designs
through a neuromorphic cochlear prototype which demonstrates that our approach
can provide a systematic solution for spike-based artificial intelligence by
fully exploiting its advantages with spike-based computation. | [
"Huajin Tang",
"Pengjie Gu",
"Jayawan Wijekoon",
"MHD Anas Alsakkal",
"Ziming Wang",
"Jiangrong Shen",
"Rui Yan"
] | 2023-09-11 13:06:19 | http://arxiv.org/abs/2309.05430v1 | http://arxiv.org/pdf/2309.05430v1 | 2309.05430v1 |
Temporal Patience: Efficient Adaptive Deep Learning for Embedded Radar Data Processing | Radar sensors offer power-efficient solutions for always-on smart devices,
but processing the data streams on resource-constrained embedded platforms
remains challenging. This paper presents novel techniques that leverage the
temporal correlation present in streaming radar data to enhance the efficiency
of Early Exit Neural Networks for Deep Learning inference on embedded devices.
These networks add additional classifier branches between the architecture's
hidden layers that allow for early termination of the inference if their
result is deemed sufficient by a runtime decision mechanism. Our
methods enable more informed decisions on when to terminate the inference,
reducing computational costs while maintaining a minimal loss of accuracy.
Our results demonstrate that our techniques save up to 26% of operations per
inference over a Single Exit Network and 12% over a confidence-based Early Exit
version. Our proposed techniques work on commodity hardware and can be combined
with traditional optimizations, making them accessible for resource-constrained
embedded platforms commonly used in smart devices. Such efficiency gains enable
real-time radar data processing on resource-constrained platforms, allowing for
new applications in the context of smart homes, Internet-of-Things, and
human-computer interaction. | [
"Max Sponner",
"Julius Ott",
"Lorenzo Servadei",
"Bernd Waschneck",
"Robert Wille",
"Akash Kumar"
] | 2023-09-11 12:38:01 | http://arxiv.org/abs/2309.05686v1 | http://arxiv.org/pdf/2309.05686v1 | 2309.05686v1 |
Learning noise-induced transitions by multi-scaling reservoir computing | Noise is usually regarded as an obstacle to extracting the effective dynamics
from time series, so conventional data-driven approaches usually aim
to learn the dynamics while mitigating the noisy effect. However, noise can
have a functional role of driving transitions between stable states underlying
many natural and engineered stochastic dynamics. To capture such stochastic
transitions from data, we find that reservoir computing, a type of recurrent
neural network, can learn
noise-induced transitions. We develop a concise training protocol for tuning
hyperparameters, with a focus on a pivotal hyperparameter controlling the time
scale of the reservoir dynamics. The trained model generates accurate
statistics of transition time and the number of transitions. The approach is
applicable to a wide class of systems, including a bistable system under a
double-well potential, with either white noise or colored noise. It is also
aware of the asymmetry of the double-well potential, the rotational dynamics
caused by non-detailed balance, and transitions in multi-stable systems. For
the experimental data of protein folding, it learns the transition time between
folded states, providing a possibility of predicting transition statistics from
a small dataset. The results demonstrate the capability of machine-learning
methods in capturing noise-induced phenomena. | [
"Zequn Lin",
"Zhaofan Lu",
"Zengru Di",
"Ying Tang"
] | 2023-09-11 12:26:36 | http://arxiv.org/abs/2309.05413v1 | http://arxiv.org/pdf/2309.05413v1 | 2309.05413v1 |
Physics-informed reinforcement learning via probabilistic co-adjustment functions | Reinforcement learning of real-world tasks is very data inefficient, and
extensive simulation-based modelling has become the dominant approach for
training systems. However, in human-robot interaction and many other real-world
settings, there is no appropriate one-model-for-all due to differences in
individual instances of the system (e.g. different people) or necessary
oversimplifications in the simulation models. This leaves two options: 1.
learning the individual system's dynamics approximately from data, which
requires data-intensive training, or 2. using a complete digital twin of the
instances, which may not be realisable in many cases. We introduce two
approaches: co-kriging adjustments (CKA) and ridge regression adjustment (RRA)
as novel ways to combine the advantages of both approaches. Our adjustment
methods are based on an auto-regressive AR1 co-kriging model that we integrate
with Gaussian process (GP) priors. This yields a data- and simulation-efficient way of using
simplistic simulation models (e.g., simple two-link model) and rapidly adapting
them to individual instances (e.g., biomechanics of individual people). Using
CKA and RRA, we obtain more accurate uncertainty quantification of the entire
system's dynamics than pure GP-based and AR1 methods. We demonstrate the
efficiency of co-kriging adjustment with an interpretable reinforcement
learning control example, learning to control a biomechanical human arm using
only a two-link arm simulation model (offline part) and CKA derived from a
small amount of interaction data (on-the-fly online). Our method unlocks an
efficient and uncertainty-aware way to implement reinforcement learning methods
in real world complex systems for which only imperfect simulation models exist. | [
"Nat Wannawas",
"A. Aldo Faisal"
] | 2023-09-11 12:10:19 | http://arxiv.org/abs/2309.05404v1 | http://arxiv.org/pdf/2309.05404v1 | 2309.05404v1 |
Practical Homomorphic Aggregation for Byzantine ML | Due to the large-scale availability of data, machine learning (ML) algorithms
are being deployed in distributed topologies, where different nodes collaborate
to train ML models over their individual data by exchanging model-related
information (e.g., gradients) with a central server. However, distributed
learning schemes are notably vulnerable to two threats. First, Byzantine nodes
can single-handedly corrupt the learning by sending incorrect information to
the server, e.g., erroneous gradients. The standard approach to mitigate such
behavior is to use a non-linear robust aggregation method at the server.
Second, the server can violate the privacy of the nodes. Recent attacks have
shown that exchanging (unencrypted) gradients enables a curious server to
recover the totality of the nodes' data. The use of homomorphic encryption
(HE), a gold standard security primitive, has extensively been studied as a
privacy-preserving solution to distributed learning in non-Byzantine scenarios.
However, due to HE's large computational demand especially for high-dimensional
ML models, there has not yet been any attempt to design purely homomorphic
operators for non-linear robust aggregators. In this work, we present SABLE,
the first completely homomorphic and Byzantine-robust distributed learning
algorithm. SABLE essentially relies on a novel plaintext encoding method that
enables us to implement the robust aggregator over batching-friendly BGV.
Moreover, this encoding scheme also accelerates state-of-the-art homomorphic
sorting with larger security margins and smaller ciphertext size. We perform
extensive experiments on image classification tasks and show that our algorithm
achieves practical execution times while matching the ML performance of its
non-private counterpart. | [
"Antoine Choffrut",
"Rachid Guerraoui",
"Rafael Pinot",
"Renaud Sirdey",
"John Stephan",
"Martin Zuber"
] | 2023-09-11 11:54:42 | http://arxiv.org/abs/2309.05395v3 | http://arxiv.org/pdf/2309.05395v3 | 2309.05395v3 |
Career Path Recommendations for Long-term Income Maximization: A Reinforcement Learning Approach | This study explores the potential of reinforcement learning algorithms to
enhance career planning processes. Leveraging data from Randstad The
Netherlands, the study simulates the Dutch job market and develops strategies
to optimize employees' long-term income. By formulating career planning as a
Markov Decision Process (MDP) and utilizing reinforcement learning algorithms such as
Sarsa, Q-Learning, and A2C, we learn optimal policies that recommend career
paths with high-income occupations and industries. The results demonstrate
significant improvements in employees' income trajectories, with RL models,
particularly Q-Learning and Sarsa, achieving an average increase of 5% compared
to observed career paths. The study acknowledges limitations, including narrow
job filtering, simplifications in the environment formulation, and assumptions
regarding employment continuity and zero application costs. Future research can
explore additional objectives beyond income optimization and address these
limitations to further enhance career planning processes. | [
"Spyros Avlonitis",
"Dor Lavi",
"Masoud Mansoury",
"David Graus"
] | 2023-09-11 11:42:28 | http://arxiv.org/abs/2309.05391v1 | http://arxiv.org/pdf/2309.05391v1 | 2309.05391v1 |
Data-Driven Model Reduction and Nonlinear Model Predictive Control of an Air Separation Unit by Applied Koopman Theory | Achieving real-time capability is an essential prerequisite for the
industrial implementation of nonlinear model predictive control (NMPC).
Data-driven model reduction offers a way to obtain low-order control models
from complex digital twins. In particular, data-driven approaches require
little expert knowledge of the particular process and its model, and provide
reduced models of a well-defined generic structure. Herein, we apply our
recently proposed data-driven reduction strategy based on Koopman theory
[Schulze et al. (2022), Comput. Chem. Eng.] to generate a low-order control
model of an air separation unit (ASU). The reduced Koopman model combines
autoencoders and linear latent dynamics and is constructed using machine
learning. Further, we present an NMPC implementation that uses derivative
computation tailored to the fixed block structure of reduced Koopman models.
Our reduction approach with tailored NMPC implementation enables real-time NMPC
of an ASU, with an average CPU time reduction of 98 %. | [
"Jan C. Schulze",
"Danimir T. Doncevic",
"Nils Erwes",
"Alexander Mitsos"
] | 2023-09-11 11:18:16 | http://arxiv.org/abs/2309.05386v1 | http://arxiv.org/pdf/2309.05386v1 | 2309.05386v1 |
Feature-based Transferable Disruption Prediction for future tokamaks using domain adaptation | The high acquisition cost and the significant demand for disruptive
discharges for data-driven disruption prediction models in future tokamaks pose
an inherent contradiction in disruption prediction research. In this paper, we
demonstrate a novel approach to predicting disruptions in a future tokamak
using only a few discharges, based on a domain adaptation algorithm called CORAL. It
is the first attempt at applying domain adaptation in the disruption prediction
task. In this paper, this disruption prediction approach aligns a few data from
the future tokamak (target domain) and a large amount of data from the existing
tokamak (source domain) to train a machine learning model in the existing
tokamak. To simulate the existing and future tokamak case, we selected J-TEXT
as the existing tokamak and EAST as the future tokamak. To simulate the lack of
disruptive data in a future tokamak, we selected only 100 non-disruptive
discharges and 10 disruptive discharges from EAST as the target domain training
data. We have improved CORAL to make it more suitable for the disruption
prediction task; we call this variant supervised CORAL. Compared to the model trained by
mixing data from the two tokamaks, the supervised CORAL model can enhance the
disruption prediction performance for future tokamaks (AUC value from 0.764 to
0.890). Through interpretable analysis, we discovered that using the supervised
CORAL shifts the data distribution to be more similar to that of the
future tokamak. An assessment method for evaluating whether a model has learned
a trend of similar features is designed based on SHAP analysis. It demonstrates
that the supervised CORAL model exhibits more similarities to the model trained
on large data sizes of EAST. FTDP (feature-based transferable disruption
prediction) provides a lightweight, interpretable way to predict disruptions by
aligning features, requiring only small data sizes from the future tokamak. | [
"Chengshuo Shen",
"Wei Zheng",
"Bihao Guo",
"Dalong Chen",
"Xinkun Ai",
"Fengming Xue",
"Yu Zhong",
"Nengchao Wang",
"Biao Shen",
"Binjia Xiao",
"Yonghua Ding",
"Zhongyong Chen",
"Yuan Pan",
"J-TEXT team"
] | 2023-09-11 10:13:30 | http://arxiv.org/abs/2309.05361v1 | http://arxiv.org/pdf/2309.05361v1 | 2309.05361v1 |
EDAC: Efficient Deployment of Audio Classification Models For COVID-19 Detection | The global spread of COVID-19 had severe consequences for public health and
the world economy. The quick onset of the pandemic highlighted the potential
benefits of cheap and deployable pre-screening methods to monitor the
prevalence of the disease in a population. Various researchers made use of
machine learning methods in an attempt to detect COVID-19. The solutions
leverage various input features, such as CT scans or cough audio signals, with
state-of-the-art results arising from deep neural network architectures.
However, larger models require more compute, a pertinent consideration when
deploying to the edge. To address this, we first recreated two models that use
cough audio recordings to detect COVID-19. Through applying network pruning and
quantisation, we were able to compress these two architectures without reducing
the models' predictive performance. Specifically, we were able to achieve a
105.76x and a 19.34x reduction in the compressed model file size, with
corresponding 1.37x and 1.71x reductions in the inference times of the two
models. | [
"Andrej Jovanović",
"Mario Mihaly",
"Lennon Donaldson"
] | 2023-09-11 10:07:51 | http://arxiv.org/abs/2309.05357v1 | http://arxiv.org/pdf/2309.05357v1 | 2309.05357v1 |
Neural Discovery of Permutation Subgroups | We consider the problem of discovering a subgroup $H$ of the permutation group
$S_{n}$. Unlike the traditional $H$-invariant networks wherein $H$ is assumed
to be known, we present a method to discover the underlying subgroup, given
that it satisfies certain conditions. Our results show that one could discover
any subgroup of type $S_{k} (k \leq n)$ by learning an $S_{n}$-invariant
function and a linear transformation. We also prove similar results for cyclic
and dihedral subgroups. Finally, we provide a general theorem that can be
extended to discover other subgroups of $S_{n}$. We also demonstrate the
applicability of our results through numerical experiments on image-digit sum
and symmetric polynomial regression tasks. | [
"Pavan Karjol",
"Rohan Kashyap",
"Prathosh A P"
] | 2023-09-11 09:53:28 | http://arxiv.org/abs/2309.05352v1 | http://arxiv.org/pdf/2309.05352v1 | 2309.05352v1 |
Learning Geometric Representations of Objects via Interaction | We address the problem of learning representations from observations of a
scene involving an agent and an external object the agent interacts with. To
this end, we propose a representation learning framework extracting the
location in physical space of both the agent and the object from unstructured
observations of arbitrary nature. Our framework relies on the actions performed
by the agent as the only source of supervision, while assuming that the object
is displaced by the agent via unknown dynamics. We provide a theoretical
foundation and formally prove that an ideal learner is guaranteed to infer an
isometric representation, disentangling the agent from the object and correctly
extracting their locations. We empirically evaluate our framework on a variety
of scenarios, showing that it outperforms vision-based approaches such as a
state-of-the-art keypoint extractor. We moreover demonstrate how the extracted
representations enable the agent to solve downstream tasks via reinforcement
learning in an efficient manner. | [
"Alfredo Reichlin",
"Giovanni Luca Marchetti",
"Hang Yin",
"Anastasiia Varava",
"Danica Kragic"
] | 2023-09-11 09:45:22 | http://arxiv.org/abs/2309.05346v1 | http://arxiv.org/pdf/2309.05346v1 | 2309.05346v1 |
A DRL-based Reflection Enhancement Method for RIS-assisted Multi-receiver Communications | In reconfigurable intelligent surface (RIS)-assisted wireless communication
systems, the pointing accuracy and intensity of reflections depend crucially on
the 'profile,' representing the amplitude/phase state information of all
elements in a RIS array. The superposition of multiple single-reflection
profiles enables multi-reflection for distributed users. However, the
optimization challenges from periodic element arrangements in single-reflection
and multi-reflection profiles are understudied. The combination of periodic
single-reflection profiles leads to amplitude/phase counteractions, affecting
the performance of each reflection beam. This paper focuses on a
dual-reflection optimization scenario and investigates the far-field
performance deterioration caused by the misalignment of overlapped profiles. To
address this issue, we introduce a novel deep reinforcement learning
(DRL)-based optimization method. Comparative experiments against random and
exhaustive searches demonstrate that our proposed DRL method outperforms both
alternatives, achieving the shortest optimization time. Remarkably, our
approach achieves a 1.2 dB gain in the reflection peak gain and a broader beam
without any hardware modifications. | [
"Wei Wang",
"Peizheng Li",
"Angela Doufexi",
"Mark A Beach"
] | 2023-09-11 09:43:59 | http://arxiv.org/abs/2309.05343v1 | http://arxiv.org/pdf/2309.05343v1 | 2309.05343v1 |
PAg-NeRF: Towards fast and efficient end-to-end panoptic 3D representations for agricultural robotics | Precise scene understanding is key for most robot monitoring and intervention
tasks in agriculture. In this work we present PAg-NeRF which is a novel
NeRF-based system that enables 3D panoptic scene understanding. Our
representation is trained using an image sequence with noisy robot odometry
poses and automatic panoptic predictions with inconsistent IDs between frames.
Despite this noisy input, our system is able to output scene geometry,
photo-realistic renders and 3D consistent panoptic representations with
consistent instance IDs. We evaluate this novel system in a very challenging
horticultural scenario and in doing so demonstrate an end-to-end trainable
system that can make use of noisy robot poses rather than precise poses that
have to be pre-calculated. Compared to a baseline approach, the peak
signal-to-noise ratio is improved from 21.34 dB to 23.37 dB, while the panoptic quality
improves from 56.65% to 70.08%. Furthermore, our approach is faster and can be
tuned to improve inference time by more than a factor of 2 while being memory
efficient with approximately 12 times fewer parameters. | [
"Claus Smitt",
"Michael Halstead",
"Patrick Zimmer",
"Thomas Läbe",
"Esra Guclu",
"Cyrill Stachniss",
"Chris McCool"
] | 2023-09-11 09:35:51 | http://arxiv.org/abs/2309.05339v1 | http://arxiv.org/pdf/2309.05339v1 | 2309.05339v1 |
Stochastic Gradient Descent-like relaxation is equivalent to Glauber dynamics in discrete optimization and inference problems | Is Stochastic Gradient Descent (SGD) substantially different from Glauber
dynamics? This is a fundamental question for understanding the most widely
used training algorithm in the field of Machine Learning, but it has received no
answer until now. Here we show that in discrete optimization and inference
problems, the dynamics of an SGD-like algorithm closely resemble those of
Metropolis Monte Carlo with a properly chosen temperature, which depends on the
mini-batch size. This quantitative matching holds both at equilibrium and in
the out-of-equilibrium regime, despite the two algorithms having fundamental
differences (e.g.\ SGD does not satisfy detailed balance). Such equivalence
allows us to use results about performances and limits of Monte Carlo
algorithms to optimize the mini-batch size in the SGD-like algorithm and make
it efficient at recovering the signal in hard inference problems. | [
"Maria Chiara Angelini",
"Angelo Giorgio Cavaliere",
"Raffaele Marino",
"Federico Ricci-Tersenghi"
] | 2023-09-11 09:34:44 | http://arxiv.org/abs/2309.05337v1 | http://arxiv.org/pdf/2309.05337v1 | 2309.05337v1 |
A Strong and Simple Deep Learning Baseline for BCI MI Decoding | We propose EEG-SimpleConv, a straightforward 1D convolutional neural network
for Motor Imagery decoding in BCI. Our main motivation is to propose a very
simple baseline to compare to, using only very standard ingredients from the
literature. We evaluate its performance on four EEG Motor Imagery datasets,
including simulated online setups, and compare it to recent Deep Learning and
Machine Learning approaches. EEG-SimpleConv is at least as good as, or far more
efficient than, other approaches, showing strong knowledge-transfer capabilities
across subjects, at the cost of a low inference time. We advocate that using
off-the-shelf ingredients rather than coming with ad-hoc solutions can
significantly help the adoption of Deep Learning approaches for BCI. We make
the code of the models and the experiments accessible. | [
"Yassine El Ouahidi",
"Vincent Gripon",
"Bastien Pasdeloup",
"Ghaith Bouallegue",
"Nicolas Farrugia",
"Giulia Lioi"
] | 2023-09-11 09:23:01 | http://arxiv.org/abs/2309.07159v1 | http://arxiv.org/pdf/2309.07159v1 | 2309.07159v1 |
Neural Koopman prior for data assimilation | With the increasing availability of large scale datasets, computational power
and tools like automatic differentiation and expressive neural network
architectures, sequential data are now often treated in a data-driven way, with
a dynamical model trained from the observation data. While neural networks are
often seen as uninterpretable black-box architectures, they can still benefit
from physical priors on the data and from mathematical knowledge. In this
paper, we use a neural network architecture which leverages the long-known
Koopman operator theory to embed dynamical systems in latent spaces where their
dynamics can be described linearly, enabling a number of appealing features. We
introduce methods that enable training such a model for long-term continuous
reconstruction, even in difficult contexts where the data comes in
irregularly-sampled time series. The potential for self-supervised learning is
also demonstrated, as we show the promising use of trained dynamical models as
priors for variational data assimilation techniques, with applications to e.g.
time series interpolation and forecasting. | [
"Anthony Frion",
"Lucas Drumetz",
"Mauro Dalla Mura",
"Guillaume Tochon",
"Abdeldjalil Aïssa El Bey"
] | 2023-09-11 09:04:36 | http://arxiv.org/abs/2309.05317v1 | http://arxiv.org/pdf/2309.05317v1 | 2309.05317v1 |
Balance Measures Derived from Insole Sensor Differentiate Prodromal Dementia with Lewy Bodies | Dementia with Lewy bodies is the second most common type of neurodegenerative
dementia, and identification at the prodromal stage, i.e., mild cognitive
impairment due to Lewy bodies (MCI-LB), is important for providing appropriate
care. However, MCI-LB is often underrecognized because of its diversity in
clinical manifestations and similarities with other conditions such as mild
cognitive impairment due to Alzheimer's disease (MCI-AD). In this study, we
propose a machine learning-based automatic pipeline that helps identify MCI-LB
by exploiting balance measures acquired with an insole sensor during a 30-s
standing task. An experiment with 98 participants (14 MCI-LB, 38 MCI-AD, 46
cognitively normal) showed that the resultant models could discriminate MCI-LB
from the other groups with up to 78.0% accuracy (AUC: 0.681), which was 6.8%
better than the accuracy of a reference model based on demographic and clinical
neuropsychological measures. Our findings may open up a new approach for timely
identification of MCI-LB, enabling better care for patients. | [
"Masatomo Kobayashi",
"Yasunori Yamada",
"Kaoru Shinkawa",
"Miyuki Nemoto",
"Miho Ota",
"Kiyotaka Nemoto",
"Tetsuaki Arai"
] | 2023-09-11 08:46:36 | http://arxiv.org/abs/2309.08623v1 | http://arxiv.org/pdf/2309.08623v1 | 2309.08623v1 |
Fully-Connected Spatial-Temporal Graph for Multivariate Time Series Data | Multivariate Time-Series (MTS) data is crucial in various application fields.
With its sequential and multi-source (multiple sensors) properties, MTS data
inherently exhibits Spatial-Temporal (ST) dependencies, involving temporal
correlations between timestamps and spatial correlations between sensors in
each timestamp. To effectively leverage this information, Graph Neural
Network-based methods (GNNs) have been widely adopted. However, existing
approaches separately capture spatial dependency and temporal dependency and
fail to capture the correlations between Different sEnsors at Different
Timestamps (DEDT). Overlooking such correlations hinders the comprehensive
modelling of ST dependencies within MTS data, thus preventing existing GNNs
from learning effective representations. To address this limitation, we propose
a novel method called Fully-Connected Spatial-Temporal Graph Neural Network
(FC-STGNN), including two key components namely FC graph construction and FC
graph convolution. For graph construction, we design a decay graph to connect
sensors across all timestamps based on their temporal distances, enabling us to
fully model the ST dependencies by considering the correlations between DEDT.
Further, we devise FC graph convolution with a moving-pooling GNN layer to
effectively capture the ST dependencies for learning effective representations.
Extensive experiments show the effectiveness of FC-STGNN on multiple MTS
datasets compared to SOTA methods. | [
"Yucheng Wang",
"Yuecong Xu",
"Jianfei Yang",
"Min Wu",
"Xiaoli Li",
"Lihua Xie",
"Zhenghua Chen"
] | 2023-09-11 08:44:07 | http://arxiv.org/abs/2309.05305v1 | http://arxiv.org/pdf/2309.05305v1 | 2309.05305v1 |
Optimization of Raman amplifiers: a comparison between black-, grey- and white-box modeling | Designing and optimizing optical amplifiers to maximize system performance is
becoming increasingly important as optical communication systems strive to
increase throughput. Offline optimization of optical amplifiers relies on
models ranging from white-box models deeply rooted in physics to black-box
data-driven physics-agnostic models. Here, we compare the capabilities of
white-, grey- and black-box models to achieve a target frequency-distance
amplification in a bidirectional Raman amplifier. We show that any of the
studied methods can achieve down to 1 dB of frequency-distance flatness over
the C-band in a 100-km span. Then, we discuss the models' applicability,
advantages, and drawbacks based on the target application scenario, in
particular in terms of optimization speed and access to training data. | [
"Metodi P. Yankov",
"Mehran Soltani",
"Andrea Carena",
"Darko Zibar",
"Francesco Da Ros"
] | 2023-09-11 08:39:57 | http://arxiv.org/abs/2310.05954v1 | http://arxiv.org/pdf/2310.05954v1 | 2310.05954v1 |
Discrete Denoising Diffusion Approach to Integer Factorization | Integer factorization is a famous computational problem for which it is unknown
whether it can be solved in polynomial time. With the rise of deep neural networks, it is
natural to ask whether they can facilitate faster factorization. We present an
approach to factorization utilizing deep neural networks and discrete denoising
diffusion that works by iteratively correcting errors in a partially-correct
solution. To this end, we develop a new seq2seq neural network architecture,
employ relaxed categorical distribution and adapt the reverse diffusion process
to cope better with inaccuracies in the denoising step. The approach is able to
find factors for integers up to 56 bits long. Our analysis indicates that
investment in training leads to an exponential decrease of sampling steps
required at inference to achieve a given success rate, thus counteracting an
exponential run-time increase depending on the bit-length. | [
"Karlis Freivalds",
"Emils Ozolins",
"Guntis Barzdins"
] | 2023-09-11 08:26:08 | http://arxiv.org/abs/2309.05295v1 | http://arxiv.org/pdf/2309.05295v1 | 2309.05295v1 |
The fine print on tempered posteriors | We conduct a detailed investigation of tempered posteriors and uncover a
number of crucial and previously undiscussed points. Contrary to previous
results, we first show that for realistic models and datasets and the tightly
controlled case of the Laplace approximation to the posterior, stochasticity
does not in general improve test accuracy. The coldest temperature is often
optimal. One might think that Bayesian models with some stochasticity can at
least obtain improvements in terms of calibration. However, we show empirically
that when gains are obtained this comes at the cost of degradation in test
accuracy. We then discuss how targeting Frequentist metrics using Bayesian
models provides a simple explanation of the need for a temperature parameter
$\lambda$ in the optimization objective. Contrary to prior works, we finally
show through a PAC-Bayesian analysis that the temperature $\lambda$ cannot be
seen as simply fixing a misspecified prior or likelihood. | [
"Konstantinos Pitas",
"Julyan Arbel"
] | 2023-09-11 08:21:42 | http://arxiv.org/abs/2309.05292v1 | http://arxiv.org/pdf/2309.05292v1 | 2309.05292v1 |
Efficient Finite Initialization for Tensorized Neural Networks | We present a novel method for initializing layers of tensorized neural
networks in a way that avoids the explosion of the parameters of the matrix it
emulates. The method is intended for layers with a high number of nodes in
which there is a connection to the input or output of all or most of the nodes.
The core of this method is the use of the Frobenius norm of this layer in an
iterative partial form, so that it remains finite and within a certain range.
This norm is efficient to compute, fully or partially for most cases of
interest. We apply the method to different layers and check its performance. We
create a Python function to run it on an arbitrary layer, available in a
Jupyter Notebook in the i3BQuantum repository:
https://github.com/i3BQuantumTeam/Q4Real/blob/e07c827651ef16bcf74590ab965ea3985143f891/Quantum-Inspired%20Variational%20Methods/Normalization_process.ipynb | [
"Alejandro Mata Ali",
"Iñigo Perez Delgado",
"Marina Ristol Roura",
"Aitor Moreno Fdez. de Leceta"
] | 2023-09-11 08:05:09 | http://arxiv.org/abs/2309.06577v2 | http://arxiv.org/pdf/2309.06577v2 | 2309.06577v2 |
Compressed Real Numbers for AI: a case-study using a RISC-V CPU | As recently demonstrated, Deep Neural Networks (DNN), usually trained using
single precision IEEE 754 floating point numbers (binary32), can also work
using lower precision. Therefore, 16-bit and 8-bit compressed formats have
attracted considerable attention. In this paper, we focus on two families of
formats that have already achieved interesting results in compressing binary32
numbers in machine learning applications without noticeable degradation of
accuracy: bfloat and posit. Even though 16-bit and 8-bit bfloat/posit are routinely
used for reducing the storage of the weights/biases of trained DNNs, the
inference still often happens on the 32-bit FPU of the CPU (especially if GPUs
are not available). In this paper we propose a way to decompress a tensor of
bfloat/posits just before computations, i.e., after the compressed operands
have been loaded within the vector registers of a vector capable CPU, in order
to save bandwidth usage and increase cache efficiency. Finally, we show the
architectural parameters and considerations under which this solution is
advantageous with respect to the uncompressed one. | [
"Federico Rossi",
"Marco Cococcioni",
"Roger Ferrer Ibàñez",
"Jesùs Labarta",
"Filippo Mantovani",
"Marc Casas",
"Emanuele Ruffaldi",
"Sergio Saponara"
] | 2023-09-11 07:54:28 | http://arxiv.org/abs/2309.07158v1 | http://arxiv.org/pdf/2309.07158v1 | 2309.07158v1 |
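As background on one of the two families, bfloat16 keeps the top 16 bits of a binary32 word (sign, 8 exponent bits, 7 mantissa bits), so decompression back to float32 is a single left shift into the high half — cheap enough to perform after the compressed operands are loaded. A minimal sketch (plain truncation is used here for brevity; real converters typically round to nearest even):

```python
import numpy as np

def compress_bfloat16(x: np.ndarray) -> np.ndarray:
    """Store float32 values as bfloat16 by keeping the top 16 bits
    (sign, 8 exponent bits, 7 mantissa bits). Truncating version."""
    return (np.asarray(x, dtype=np.float32).view(np.uint32) >> 16).astype(np.uint16)

def decompress_bfloat16(b: np.ndarray) -> np.ndarray:
    """Expand stored bfloat16 back to float32 by shifting into the high half,
    mirroring the in-register decompression step described above."""
    return (np.asarray(b).astype(np.uint32) << 16).view(np.float32)
```

Values whose mantissa fits in 7 bits round-trip exactly; everything else loses only the low mantissa bits, which is the "no sensible accuracy degradation" regime the abstract refers to.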
Can you text what is happening? Integrating pre-trained language encoders into trajectory prediction models for autonomous driving | In autonomous driving tasks, scene understanding is the first step towards
predicting the future behavior of the surrounding traffic participants. Yet,
how to represent a given scene and extract its features are still open research
questions. In this study, we propose a novel text-based representation of
traffic scenes and process it with a pre-trained language encoder.
First, we show that text-based representations, combined with classical
rasterized image representations, lead to descriptive scene embeddings. Second,
we benchmark our predictions on the nuScenes dataset and show significant
improvements compared to baselines. Third, we show in an ablation study that a
joint encoder of text and rasterized images outperforms the individual encoders
confirming that both representations have their complementary strengths. | [
"Ali Keysan",
"Andreas Look",
"Eitan Kosman",
"Gonca Gürsun",
"Jörg Wagner",
"Yu Yao",
"Barbara Rakitsch"
] | 2023-09-11 07:37:10 | http://arxiv.org/abs/2309.05282v2 | http://arxiv.org/pdf/2309.05282v2 | 2309.05282v2 |
Class-Incremental Grouping Network for Continual Audio-Visual Learning | Continual learning is a challenging problem in which models need to be
trained on non-stationary data across sequential tasks for class-incremental
learning. While previous methods have focused on using either regularization or
rehearsal-based frameworks to alleviate catastrophic forgetting in image
classification, they are limited to a single modality and cannot learn compact
class-aware cross-modal representations for continual audio-visual learning. To
address this gap, we propose a novel class-incremental grouping network (CIGN)
that can learn category-wise semantic features to achieve continual
audio-visual learning. Our CIGN leverages learnable audio-visual class tokens
and audio-visual grouping to continually aggregate class-aware features.
Additionally, it utilizes class tokens distillation and continual grouping to
prevent forgetting parameters learned from previous tasks, thereby improving
the model's ability to capture discriminative audio-visual categories. We
conduct extensive experiments on VGGSound-Instruments, VGGSound-100, and
VGG-Sound Sources benchmarks. Our experimental results demonstrate that the
CIGN achieves state-of-the-art audio-visual class-incremental learning
performance. Code is available at https://github.com/stoneMo/CIGN. | [
"Shentong Mo",
"Weiguo Pian",
"Yapeng Tian"
] | 2023-09-11 07:36:16 | http://arxiv.org/abs/2309.05281v1 | http://arxiv.org/pdf/2309.05281v1 | 2309.05281v1 |
Beamforming in Wireless Coded-Caching Systems | Increased capacity in the access network poses capacity challenges on the
transport network due to the aggregated traffic. However, there are spatial and
temporal correlations in user data demands that could potentially be utilized.
To that end, we investigate a wireless transport network architecture that
integrates beamforming and coded-caching strategies. Specifically, our proposed
design entails a server with multiple antennas that broadcasts content to cache
nodes responsible for serving users. Traditional caching methods face the
limitation of relying on the individual memory with additional overhead. Hence,
we develop an efficient genetic algorithm-based scheme for beam optimization in
the coded-caching system. By exploiting the advantages of beamforming and
coded-caching, the architecture achieves gains in terms of multicast
opportunities, interference mitigation, and reduced peak backhaul traffic. A
comparative analysis of this joint design with traditional, un-coded caching
schemes is also conducted to assess the benefits of the proposed approach.
Additionally, we examine the impact of various buffering and decoding methods
on the performance of the coded-caching scheme. Our findings suggest that
proper beamforming is useful in enhancing the effectiveness of the
coded-caching technique, resulting in significant reduction in peak backhaul
traffic. | [
"Sneha Madhusudan",
"Charitha Madapatha",
"Behrooz Makki",
"Hao Guo",
"Tommy Svensson"
] | 2023-09-11 07:21:57 | http://arxiv.org/abs/2309.05276v1 | http://arxiv.org/pdf/2309.05276v1 | 2309.05276v1 |
EANet: Expert Attention Network for Online Trajectory Prediction | Trajectory prediction plays a crucial role in autonomous driving. Existing
mainstream research and continual learning-based methods all require training
on complete datasets, leading to poor prediction accuracy when sudden scenario
changes occur, and they fail to promptly respond and update the model.
Whether these methods can make predictions in real time and use data instances
to update the model immediately (i.e., online learning settings) remains a
question. The problem of gradient explosion or vanishing caused by data
instance streams also needs to be addressed. Inspired by the Hedge Propagation
algorithm, we propose the Expert Attention Network, a complete online learning
framework for trajectory prediction. We introduce expert attention, which
adjusts the weights of network layers at different depths, preventing slow
model updates caused by gradient problems and enabling fast learning of a new
scenario's knowledge to restore prediction accuracy. Furthermore, we propose a
short-term motion trend kernel function which is sensitive to scenario change,
allowing the model to respond quickly. To the best of our knowledge, this work
is the first attempt to address the online learning problem in trajectory
prediction. The experimental results indicate that traditional methods suffer
from gradient problems and that our method can quickly reduce prediction errors
and reach the state-of-the-art prediction accuracy. | [
"Pengfei Yao",
"Tianlu Mao",
"Min Shi",
"Jingkai Sun",
"Zhaoqi Wang"
] | 2023-09-11 07:09:40 | http://arxiv.org/abs/2309.05683v1 | http://arxiv.org/pdf/2309.05683v1 | 2309.05683v1 |
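The Hedge (multiplicative-weights) rule that inspires the expert attention can be stated in a few lines; EANet's per-layer weighting is more involved than this, so treat it as background only:

```python
import numpy as np

def hedge_update(weights, losses, eta=0.5):
    """One round of the Hedge update: each expert's weight is multiplied by
    exp(-eta * loss) and the weights are renormalized, so experts that
    predicted well gain influence on the next round."""
    w = np.asarray(weights, dtype=float) * np.exp(-eta * np.asarray(losses, dtype=float))
    return w / w.sum()
```

Applied online, one such update per incoming data instance lets the mixture shift quickly toward whichever depth of the network currently predicts best.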
CONFLATOR: Incorporating Switching Point based Rotatory Positional Encodings for Code-Mixed Language Modeling | The mixing of two or more languages is called Code-Mixing (CM). CM is a
social norm in multilingual societies. Neural Language Models (NLMs) like
transformers have been effective on many NLP tasks. However, NLM for CM is an
under-explored area. Though transformers are capable and powerful, they cannot
always encode positional information since they are non-recurrent. Therefore,
to enrich word information and incorporate positional information, positional
encoding is defined. We hypothesize that Switching Points (SPs), i.e.,
junctions in the text where the language switches (L1 -> L2 or L2 -> L1), pose
a challenge for CM Language Models (LMs), and hence give special emphasis to
SPs in the modeling process. We experiment with several positional encoding
mechanisms and show that rotatory positional encodings along with switching
point information yield the best results.
We introduce CONFLATOR: a neural language modeling approach for code-mixed
languages. CONFLATOR tries to learn to emphasize switching points using smarter
positional encoding, both at unigram and bigram levels. CONFLATOR outperforms
the state-of-the-art on two tasks based on code-mixed Hindi and English
(Hinglish): (i) sentiment analysis and (ii) machine translation. | [
"Mohsin Ali",
"Kandukuri Sai Teja",
"Neeharika Gupta",
"Parth Patwa",
"Anubhab Chatterjee",
"Vinija Jain",
"Aman Chadha",
"Amitava Das"
] | 2023-09-11 07:02:13 | http://arxiv.org/abs/2309.05270v2 | http://arxiv.org/pdf/2309.05270v2 | 2309.05270v2 |
UniKG: A Benchmark and Universal Embedding for Large-Scale Knowledge Graphs | Irregular data in real-world are usually organized as heterogeneous graphs
(HGs) consisting of multiple types of nodes and edges. To explore useful
knowledge from real-world data, both the large-scale encyclopedic HG datasets
and corresponding effective learning methods are crucial, but haven't been well
investigated. In this paper, we construct a large-scale HG benchmark dataset
named UniKG from Wikidata to facilitate knowledge mining and heterogeneous
graph representation learning. Overall, UniKG contains more than 77 million
multi-attribute entities and 2000 diverse association types, which
significantly surpasses the scale of existing HG datasets. To perform effective
learning on the large-scale UniKG, two key measures are taken, including (i)
the semantic alignment strategy for multi-attribute entities, which projects
the feature description of multi-attribute nodes into a common embedding space
to facilitate node aggregation in a large receptive field; (ii) proposing a
novel plug-and-play anisotropy propagation module (APM) to learn effective
multi-hop anisotropy propagation kernels, which extends methods of large-scale
homogeneous graphs to heterogeneous graphs. These two strategies enable
efficient information propagation among a tremendous number of multi-attribute
entities and meanwhile adaptively mine multi-attribute associations through the
multi-hop aggregation in large-scale HGs. We set up a node classification task
on our UniKG dataset, and evaluate multiple baseline methods which are
constructed by embedding our APM into large-scale homogenous graph learning
methods. Our UniKG dataset and the baseline codes have been released at
https://github.com/Yide-Qiu/UniKG. | [
"Yide Qiu",
"Shaoxiang Ling",
"Tong Zhang",
"Bo Huang",
"Zhen Cui"
] | 2023-09-11 06:56:42 | http://arxiv.org/abs/2309.05269v1 | http://arxiv.org/pdf/2309.05269v1 | 2309.05269v1 |
Unsupervised Bias Detection in College Student Newspapers | This paper presents a pipeline with minimal human influence for scraping and
detecting bias on college newspaper archives. This paper introduces a framework
for scraping complex archive sites from which automated tools fail to extract data,
and subsequently generates a dataset of 14 student papers with 23,154 entries.
This data can also then be queried by keyword to calculate bias by comparing
the sentiment of a large language model summary to the original article. The
advantages of this approach are that it is less comparative than reconstruction
bias and requires less labelled data than generating keyword sentiment. Results
are calculated on politically charged words as well as control words to show
how conclusions can be drawn. The complete method facilitates the extraction of
nuanced insights with minimal assumptions and categorizations, paving the way
for a more objective understanding of bias within student newspaper sources. | [
"Adam M. Lehavi",
"William McCormack",
"Noah Kornfeld",
"Solomon Glazer"
] | 2023-09-11 06:51:09 | http://arxiv.org/abs/2309.06557v1 | http://arxiv.org/pdf/2309.06557v1 | 2309.06557v1 |
Generalized Graphon Process: Convergence of Graph Frequencies in Stretched Cut Distance | Graphons have traditionally served as limit objects for dense graph
sequences, with the cut distance serving as the metric for convergence.
However, sparse graph sequences converge to the trivial graphon under the
conventional definition of cut distance, which makes this framework inadequate
for many practical applications. In this paper, we utilize the concepts of
generalized graphons and stretched cut distance to describe the convergence of
sparse graph sequences. Specifically, we consider a random graph process
generated from a generalized graphon. This random graph process converges to
the generalized graphon in stretched cut distance. We use this random graph
process to model the growing sparse graph, and prove the convergence of the
adjacency matrices' eigenvalues. We supplement our findings with experimental
validation. Our results indicate the possibility of transfer learning between
sparse graphs. | [
"Xingchao Jian",
"Feng Ji",
"Wee Peng Tay"
] | 2023-09-11 06:34:46 | http://arxiv.org/abs/2309.05260v1 | http://arxiv.org/pdf/2309.05260v1 | 2309.05260v1 |
A physics-informed and attention-based graph learning approach for regional electric vehicle charging demand prediction | Along with the proliferation of electric vehicles (EVs), optimizing the use
of EV charging space can significantly alleviate the growing load on
intelligent transportation systems. As the foundation to achieve such an
optimization, a spatiotemporal method for EV charging demand prediction in
urban areas is required. Although several solutions have been proposed by using
data-driven deep learning methods, these performance-oriented methods may
misinterpret, and thus fail to correctly handle, the inverse relationship
between charging demands and prices. To tackle
the emerging challenges of training an accurate and interpretable prediction
model, this paper proposes a novel approach that enables the integration of
graph and temporal attention mechanisms for feature extraction and the usage of
physics-informed meta-learning in the model pre-training step for knowledge
transfer. Evaluation results on a dataset of 18,013 EV charging piles in
Shenzhen, China, show that the proposed approach, named PAG, can achieve
state-of-the-art forecasting performance and the ability in understanding the
adaptive changes in charging demands caused by price fluctuations. | [
"Haohao Qu",
"Haoxuan Kuang",
"Jun Li",
"Linlin You"
] | 2023-09-11 06:31:45 | http://arxiv.org/abs/2309.05259v1 | http://arxiv.org/pdf/2309.05259v1 | 2309.05259v1 |
Examining the Effect of Pre-training on Time Series Classification | Although the pre-training followed by fine-tuning paradigm is used
extensively in many fields, there is still some controversy surrounding the
impact of pre-training on the fine-tuning process. Currently, experimental
findings based on text and image data lack consensus. To delve deeper into the
unsupervised pre-training followed by fine-tuning paradigm, we have extended
previous research to a new modality: time series. In this study, we conducted a
thorough examination of 150 classification datasets derived from the Univariate
Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis
reveals several key conclusions. (i) Pre-training can only help improve the
optimization process for models that fit the data poorly, rather than those
that fit the data well. (ii) Pre-training does not exhibit the effect of
regularization when given sufficient training time. (iii) Pre-training can only
speed up convergence if the model has sufficient ability to fit the data. (iv)
Adding more pre-training data does not improve generalization, but it can
strengthen the advantage of pre-training on the original data volume, such as
faster convergence. (v) While both the pre-training task and the model
structure determine the effectiveness of the paradigm on a given dataset, the
model structure plays a more significant role. | [
"Jiashu Pu",
"Shiwei Zhao",
"Ling Cheng",
"Yongzhu Chang",
"Runze Wu",
"Tangjie Lv",
"Rongsheng Zhang"
] | 2023-09-11 06:26:57 | http://arxiv.org/abs/2309.05256v1 | http://arxiv.org/pdf/2309.05256v1 | 2309.05256v1 |
A quantum tug of war between randomness and symmetries on homogeneous spaces | We explore the interplay between symmetry and randomness in quantum
information. Adopting a geometric approach, we consider states as
$H$-equivalent if related by a symmetry transformation characterized by the
group $H$. We then introduce the Haar measure on the homogeneous space
$\mathbb{U}/H$, characterizing true randomness for $H$-equivalent systems.
While this mathematical machinery is well-studied by mathematicians, it has
seen limited application in quantum information: we believe our work to be the
first instance of utilizing homogeneous spaces to characterize symmetry in
quantum information. This is followed by a discussion of approximations of true
randomness, commencing with $t$-wise independent approximations and defining
$t$-designs on $\mathbb{U}/H$ and $H$-equivalent states. Transitioning further,
we explore pseudorandomness, defining pseudorandom unitaries and states within
homogeneous spaces. Finally, as a practical demonstration of our findings, we
study the expressibility of quantum machine learning ansatze in homogeneous
spaces. Our work provides a fresh perspective on the relationship between
randomness and symmetry in the quantum world. | [
"Rahul Arvind",
"Kishor Bharti",
"Jun Yong Khoo",
"Dax Enshan Koh",
"Jian Feng Kong"
] | 2023-09-11 06:06:31 | http://arxiv.org/abs/2309.05253v1 | http://arxiv.org/pdf/2309.05253v1 | 2309.05253v1 |
SparseSwin: Swin Transformer with Sparse Transformer Block | Advancements in computer vision research have put transformer architecture as
the state of the art in computer vision tasks. One of the known drawbacks of
the transformer architecture is the high number of parameters, this can lead to
a more complex and inefficient algorithm. This paper aims to reduce the number
of parameters and, in turn, make the transformer more efficient. We present
Sparse Transformer (SparTa) Block, a modified transformer block with an
addition of a sparse token converter that reduces the number of tokens used. We
use the SparTa Block inside the Swin T architecture (SparseSwin) to leverage
Swin capability to downsample its input and reduce the number of initial tokens
to be calculated. The proposed SparseSwin model outperforms other
state-of-the-art models in image classification with an accuracy of 86.96%, 97.43%, and
85.35% on the ImageNet100, CIFAR10, and CIFAR100 datasets respectively. Despite
its fewer parameters, the result highlights the potential of a transformer
architecture using a sparse token converter with a limited number of tokens to
optimize the use of the transformer and improve its performance. | [
"Krisna Pinasthika",
"Blessius Sheldo Putra Laksono",
"Riyandi Banovbi Putera Irsal",
"Syifa Hukma Shabiyya",
"Novanto Yudistira"
] | 2023-09-11 04:03:43 | http://arxiv.org/abs/2309.05224v1 | http://arxiv.org/pdf/2309.05224v1 | 2309.05224v1 |
Circle Feature Graphormer: Can Circle Features Stimulate Graph Transformer? | In this paper, we introduce two local graph features for missing link
prediction tasks on ogbl-citation2. We define the features as Circle Features,
which are borrowed from the concept of circle of friends. We propose the
detailed computing formulas for the above features. Firstly, we define the
first circle feature as modified swing for common graph, which comes from
bipartite graph. Secondly, we define the second circle feature as bridge, which
indicates the importance of two nodes for different circle of friends. In
addition, we are the first to propose the above features as biases to enhance a
graph transformer neural network, such that the graph self-attention mechanism
can be improved. We implement a Circle Feature aware Graph transformer (CFG) model
based on SIEG network, which utilizes a double tower structure to capture both
global and local structure features. Experimental results show that CFG
achieves the state-of-the-art performance on dataset ogbl-citation2. | [
"Jingsong Lv",
"Hongyang Chen",
"Yao Qi",
"Lei Yu"
] | 2023-09-11 03:58:26 | http://arxiv.org/abs/2309.06574v1 | http://arxiv.org/pdf/2309.06574v1 | 2309.06574v1 |
Towards Federated Learning Under Resource Constraints via Layer-wise Training and Depth Dropout | Large machine learning models trained on diverse data have recently seen
unprecedented success. Federated learning enables training on private data that
may otherwise be inaccessible, such as domain-specific datasets decentralized
across many clients. However, federated learning can be difficult to scale to
large models when clients have limited resources. This challenge often results
in a trade-off between model size and access to diverse data. To mitigate this
issue and facilitate training of large models on edge devices, we introduce a
simple yet effective strategy, Federated Layer-wise Learning, to simultaneously
reduce per-client memory, computation, and communication costs. Clients train
just a single layer each round, reducing resource costs considerably with
minimal performance degradation. We also introduce Federated Depth Dropout, a
complementary technique that randomly drops frozen layers during training, to
further reduce resource usage. Coupling these two techniques enables us to
effectively train significantly larger models on edge devices. Specifically, we
reduce training memory usage by 5x or more in federated self-supervised
representation learning and demonstrate that performance in downstream tasks is
comparable to conventional federated self-supervised learning. | [
"Pengfei Guo",
"Warren Richard Morningstar",
"Raviteja Vemulapalli",
"Karan Singhal",
"Vishal M. Patel",
"Philip Andrew Mansfield"
] | 2023-09-11 03:17:45 | http://arxiv.org/abs/2309.05213v1 | http://arxiv.org/pdf/2309.05213v1 | 2309.05213v1 |
Graph Contextual Contrasting for Multivariate Time Series Classification | Contrastive learning, as a self-supervised learning paradigm, has become popular
for Multivariate Time-Series (MTS) classification. It ensures the consistency
across different views of unlabeled samples and then learns effective
representations for these samples. Existing contrastive learning methods mainly
focus on achieving temporal consistency with temporal augmentation and
contrasting techniques, aiming to preserve temporal patterns against
perturbations for MTS data. However, they overlook spatial consistency that
requires the stability of individual sensors and their correlations. As MTS
data typically originate from multiple sensors, ensuring spatial consistency
becomes essential for the overall performance of contrastive learning on MTS
data. Thus, we propose Graph Contextual Contrasting (GCC) for spatial
consistency across MTS data. Specifically, we propose graph augmentations
including node and edge augmentations to preserve the stability of sensors and
their correlations, followed by graph contrasting with both node- and
graph-level contrasting to extract robust sensor- and global-level features. We
further introduce multi-window temporal contrasting to ensure temporal
consistency in the data for each sensor. Extensive experiments demonstrate that
our proposed GCC achieves state-of-the-art performance on various MTS
classification tasks. | [
"Yucheng Wang",
"Yuecong Xu",
"Jianfei Yang",
"Min Wu",
"Xiaoli Li",
"Lihua Xie",
"Zhenghua Chen"
] | 2023-09-11 02:35:22 | http://arxiv.org/abs/2309.05202v1 | http://arxiv.org/pdf/2309.05202v1 | 2309.05202v1 |
CARE: Confidence-rich Autonomous Robot Exploration using Bayesian Kernel Inference and Optimization | In this paper, we consider improving the efficiency of information-based
autonomous robot exploration in unknown and complex environments. We first
utilize Gaussian process (GP) regression to learn a surrogate model to infer
the confidence-rich mutual information (CRMI) of querying control actions, then
adopt an objective function consisting of predicted CRMI values and prediction
uncertainties to conduct Bayesian optimization (BO), i.e., GP-based BO (GPBO).
The trade-off between the best action with the highest CRMI value
(exploitation) and the action with high prediction variance (exploration) can
be realized. To further improve the efficiency of GPBO, we propose a novel
lightweight information gain inference method based on Bayesian kernel
inference and optimization (BKIO), achieving an approximate logarithmic
complexity without the need for training. BKIO can also infer the CRMI and
generate the best action using BO with bounded cumulative regret, which ensures
its comparable accuracy to GPBO with much higher efficiency. Extensive
numerical and real-world experiments show the desired efficiency of our
proposed methods without losing exploration performance in different
unstructured, cluttered environments. We also provide our open-source
implementation code at https://github.com/Shepherd-Gregory/BKIO-Exploration. | [
"Yang Xu",
"Ronghao Zheng",
"Senlin Zhang",
"Meiqin Liu",
"Shoudong Huang"
] | 2023-09-11 02:30:06 | http://arxiv.org/abs/2309.05200v1 | http://arxiv.org/pdf/2309.05200v1 | 2309.05200v1 |
Does Writing with Language Models Reduce Content Diversity? | Large language models (LLMs) have led to a surge in collaborative writing
with model assistance. As different users incorporate suggestions from the same
model, there is a risk of decreased diversity in the produced content,
potentially limiting diverse perspectives in public discourse. In this work, we
measure the impact of co-writing on diversity via a controlled experiment,
where users write argumentative essays in three setups -- using a base LLM
(GPT3), a feedback-tuned LLM (InstructGPT), and writing without model help. We
develop a set of diversity metrics and find that writing with InstructGPT (but
not GPT3) results in a statistically significant reduction in diversity.
Specifically, it increases the similarity between the writings of different
authors and reduces the overall lexical and content diversity. We additionally
find that this effect is mainly attributable to InstructGPT contributing less
diverse text to co-written essays. In contrast, the user-contributed text
remains unaffected by model collaboration. This suggests that the recent
improvement in generation quality from adapting models to human feedback might
come at the cost of more homogeneous and less diverse content. | [
"Vishakh Padmakumar",
"He He"
] | 2023-09-11 02:16:47 | http://arxiv.org/abs/2309.05196v1 | http://arxiv.org/pdf/2309.05196v1 | 2309.05196v1 |
Data Summarization beyond Monotonicity: Non-monotone Two-Stage Submodular Maximization | The objective of a two-stage submodular maximization problem is to reduce the
ground set using provided training functions that are submodular, with the aim
of ensuring that optimizing new objective functions over the reduced ground set
yields results comparable to those obtained over the original ground set. This
problem has applications in various domains including data summarization.
Existing studies often assume the monotonicity of the objective function,
whereas our work pioneers the extension of this research to accommodate
non-monotone submodular functions. We have introduced the first constant-factor
approximation algorithms for this more general case. | [
"Shaojie Tang"
] | 2023-09-11 01:00:10 | http://arxiv.org/abs/2309.05183v1 | http://arxiv.org/pdf/2309.05183v1 | 2309.05183v1 |
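For context, the classic greedy rule for monotone submodular maximization — the setting the paper generalizes away from — can be sketched with a coverage objective; the paper's constant-factor algorithms for the non-monotone two-stage problem are not reproduced here:

```python
def greedy_max_cover(sets, k):
    """Greedy for monotone submodular maximization with a set-cover objective:
    repeatedly pick the set with the largest marginal gain. This gives the
    classic (1 - 1/e) guarantee in the monotone case; non-monotone objectives,
    as considered in the paper, require different techniques
    (e.g. randomized greedy)."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = [len(s - covered) for s in sets]
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break  # no set adds new elements; stop early
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

In the two-stage setting, such training objectives are used to prune the ground set so that later objectives can be optimized over the smaller set with comparable quality.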
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Prompt tuning (PT), where a small number of trainable soft (continuous)
prompt vectors is affixed to the input of language models (LM), has shown
promising results across various tasks and models for parameter-efficient
fine-tuning (PEFT). PT stands out from other PEFT approaches because it
maintains competitive performance with fewer trainable parameters and does not
drastically scale up its parameters as the model size expands. However, PT
introduces additional soft prompt tokens, leading to longer input sequences,
which significantly impacts training and inference time and memory usage due to
the Transformer's quadratic complexity. This is particularly concerning for
Large Language Models (LLMs) that face heavy daily querying. To address this issue,
we propose Decomposed Prompt Tuning (DePT), which decomposes the soft prompt
into a shorter soft prompt and a pair of low-rank matrices that are then
optimised with two different learning rates. This allows DePT to achieve better
performance while saving over 20% memory and time costs compared to vanilla PT
and its variants, without changing trainable parameter sizes. Through extensive
experiments on 23 natural language processing (NLP) and vision-language (VL)
tasks, we demonstrate that DePT outperforms state-of-the-art PEFT approaches,
including the full fine-tuning baseline in some scenarios. Additionally, we
empirically show that DePT grows more efficient as the model size increases.
Our further study reveals that DePT integrates seamlessly with
parameter-efficient transfer learning in the few-shot learning setting and
highlights its adaptability to various model architectures and sizes. | [
"Zhengxiang Shi",
"Aldo Lipani"
] | 2023-09-11 00:02:05 | http://arxiv.org/abs/2309.05173v2 | http://arxiv.org/pdf/2309.05173v2 | 2309.05173v2 |
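The decomposition described above can be sketched as follows; the shapes, names, and the way the low-rank pair is applied are assumptions based only on the abstract (a short soft prompt is prepended while a rank-r update A @ B adjusts the frozen input embeddings):

```python
import numpy as np

def dept_input(short_prompt, A, B, frozen_embed):
    """Hypothetical DePT-style input construction: prepend a short soft
    prompt and add a low-rank update A @ B to the frozen word embeddings.
    Trainable parameters: m*d + L*r + r*d, versus (m + extra)*d for a
    longer vanilla soft prompt of equivalent capacity."""
    return np.concatenate([short_prompt, frozen_embed + A @ B], axis=0)

m, r, L, d = 5, 8, 20, 64   # short-prompt length, rank, sequence length, hidden dim
rng = np.random.default_rng(0)
x = dept_input(rng.standard_normal((m, d)),
               0.01 * rng.standard_normal((L, r)),   # small init keeps the update gentle
               0.01 * rng.standard_normal((r, d)),
               rng.standard_normal((L, d)))
```

The shorter prepended prompt is what shrinks the input sequence (and hence the quadratic attention cost), while the low-rank pair recovers expressiveness without lengthening it.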
Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood | Training energy-based models (EBMs) with maximum likelihood estimation on
high-dimensional data can be both challenging and time-consuming. As a result,
there is a noticeable gap in sample quality between EBMs and other generative
frameworks like GANs and diffusion models. To close this gap, inspired by the
recent efforts of learning EBMs by maximizing diffusion recovery likelihood
(DRL), we propose cooperative diffusion recovery likelihood (CDRL), an
effective approach to tractably learn and sample from a series of EBMs defined
on increasingly noisy versions of a dataset, paired with an initializer model
for each EBM. At each noise level, the initializer model learns to amortize the
sampling process of the EBM, and the two models are jointly estimated within a
cooperative training framework. Samples from the initializer serve as starting
points that are refined by a few sampling steps from the EBM. With the refined
samples, the EBM is optimized by maximizing recovery likelihood, while the
initializer is optimized by learning from the difference between the refined
samples and the initial samples. We develop a new noise schedule and a variance
reduction technique to further improve the sample quality. Combining these
advances, we significantly boost the FID scores compared to existing EBM
methods on CIFAR-10 and ImageNet 32x32, with a 2x speedup over DRL. In
addition, we extend our method to compositional generation and image inpainting
tasks, and showcase the compatibility of CDRL with classifier-free guidance for
conditional generation, achieving similar trade-offs between sample quality and
sample diversity as in diffusion models. | [
"Yaxuan Zhu",
"Jianwen Xie",
"Yingnian Wu",
"Ruiqi Gao"
] | 2023-09-10 22:05:24 | http://arxiv.org/abs/2309.05153v2 | http://arxiv.org/pdf/2309.05153v2 | 2309.05153v2 |
Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation | To address the increasing need for efficient and accurate content moderation,
we propose an efficient and lightweight deep classification ensemble structure.
Our approach is based on a combination of simple visual features, designed for
high-accuracy classification of violent content with low false positives. Our
ensemble architecture utilizes a set of lightweight models with narrowed-down
color features, and we apply it to both images and videos.
We evaluated our approach using a large dataset of explosion and blast
content and compared its performance to popular deep learning models such as
ResNet-50. Our evaluation results demonstrate significant improvements in
prediction accuracy, while benefiting from 7.64x faster inference and lower
computation cost.
While our approach is tailored to explosion detection, it can be applied to
other similar content moderation and violence detection use cases as well.
Based on our experiments, we propose a "think small, think many" philosophy in
classification scenarios. We argue that transforming a single, large,
monolithic deep model into a verification-based step model ensemble of multiple
small, simple, and lightweight models with narrowed-down visual features can
possibly lead to predictions with higher accuracy. | [
"Mohammad Hosseini",
"Mahmudul Hasan"
] | 2023-09-10 21:54:03 | http://arxiv.org/abs/2309.05150v1 | http://arxiv.org/pdf/2309.05150v1 | 2309.05150v1 |
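The "think small, think many" idea described in the abstract above can be illustrated with a minimal voting cascade of tiny single-feature checks. This is a hedged sketch only: the feature names and thresholds below are hypothetical stand-ins, not the trained lightweight models or color features from the paper.

```python
# Minimal sketch of a "think small, think many" ensemble: several tiny
# single-cue checks vote instead of one monolithic model. Feature names
# and thresholds are hypothetical illustrations, not the paper's.

def make_check(feature, threshold):
    # Each "model" verifies one narrowed-down visual cue.
    return lambda sample: sample[feature] > threshold

checks = [
    make_check("brightness_spike", 0.8),
    make_check("orange_ratio", 0.3),
    make_check("smoke_gray_ratio", 0.2),
]

def cascade_classify(sample, checks, min_votes=2):
    # Flag content only when enough lightweight verifiers agree.
    votes = sum(check(sample) for check in checks)
    return votes >= min_votes

sample = {"brightness_spike": 0.9, "orange_ratio": 0.5, "smoke_gray_ratio": 0.1}
print(cascade_classify(sample, checks))  # True: two of three cues fire
```

The verification-based step structure means a single spurious cue cannot trigger a false positive on its own, which is one plausible reading of why the ensemble keeps false positives low.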
Outlier Robust Adversarial Training | Supervised learning models are challenged both by the intrinsic complexities
of training data, such as outliers and minority subpopulations, and by
intentional attacks at inference time with adversarial samples. While traditional robust
learning methods and the recent adversarial training approaches are designed to
handle each of these two challenges, to date, no work has developed models
that are simultaneously robust to low-quality training data and to potential
adversarial attacks at inference time. It is for this
reason that we introduce Outlier Robust Adversarial Training (ORAT) in this
work. ORAT is based on a bi-level optimization formulation of adversarial
training with a robust rank-based loss function. Theoretically, we show that
the learning objective of ORAT satisfies the $\mathcal{H}$-consistency in
binary classification, which establishes it as a proper surrogate to
adversarial 0/1 loss. Furthermore, we analyze its generalization ability and
provide uniform convergence rates in high probability. ORAT can be optimized
with a simple algorithm. Experimental evaluations on three benchmark datasets
demonstrate the effectiveness and robustness of ORAT in handling outliers and
adversarial attacks. Our code is available at
https://github.com/discovershu/ORAT. | [
"Shu Hu",
"Zhenhuan Yang",
"Xin Wang",
"Yiming Ying",
"Siwei Lyu"
] | 2023-09-10 21:36:38 | http://arxiv.org/abs/2309.05145v1 | http://arxiv.org/pdf/2309.05145v1 | 2309.05145v1 |
Distribution Grid Line Outage Identification with Unknown Pattern and Performance Guarantee | Line outage identification in distribution grids is essential for sustainable
grid operation. In this work, we propose a practical yet robust detection
approach that utilizes only readily available voltage magnitudes, eliminating
the need for costly phase angles or power flow data. Given the sensor data,
many existing detection methods based on change-point detection require prior
knowledge of outage patterns, which are unknown for real-world outage
scenarios. To remove this impractical requirement, we propose a data-driven
method to learn the parameters of the post-outage distribution through gradient
descent. However, directly using gradient descent presents feasibility issues.
To address this, we modify our approach by adding a Bregman divergence
constraint to control the trajectory of the parameter updates, which eliminates
the feasibility problems. Since timely operation is critical, we prove
that the optimal parameters can be learned with convergence guarantees by
leveraging the statistical and physical properties of voltage data. We evaluate
our approach using many representative distribution grids and real load
profiles with 17 outage configurations. The results show that we can detect and
localize the outage in a timely manner with only voltage magnitudes and without
assuming prior knowledge of outage patterns. | [
"Chenhan Xiao",
"Yizheng Liao",
"Yang Weng"
] | 2023-09-10 21:11:36 | http://arxiv.org/abs/2309.07157v1 | http://arxiv.org/pdf/2309.07157v1 | 2309.07157v1 |
DAD++: Improved Data-free Test Time Adversarial Defense | With the increasing deployment of deep neural networks in safety-critical
applications such as self-driving cars, medical imaging, anomaly detection,
etc., adversarial robustness has become a crucial concern in the reliability of
these networks in real-world scenarios. A plethora of works based on
adversarial training and regularization-based techniques have been proposed to
make these deep networks robust against adversarial attacks. However, these
methods require either retraining models or training them from scratch, making
them infeasible to defend pre-trained models when access to training data is
restricted. To address this problem, we propose a test time Data-free
Adversarial Defense (DAD) containing detection and correction frameworks.
Moreover, to further improve the efficacy of the correction framework in cases
when the detector is under-confident, we propose a soft-detection scheme
(dubbed "DAD++"). We conduct a wide range of experiments and ablations on
several datasets and network architectures to show the efficacy of our proposed
approach. Furthermore, we demonstrate the applicability of our approach in
imparting adversarial defense at test time under data-free (or data-efficient)
applications/setups, such as Data-free Knowledge Distillation and Source-free
Unsupervised Domain Adaptation, as well as Semi-supervised classification
frameworks. We observe that in all the experiments and applications, our DAD++
gives an impressive performance against various adversarial attacks with a
minimal drop in clean accuracy. The source code is available at:
https://github.com/vcl-iisc/Improved-Data-free-Test-Time-Adversarial-Defense | [
"Gaurav Kumar Nayak",
"Inder Khatri",
"Shubham Randive",
"Ruchit Rawal",
"Anirban Chakraborty"
] | 2023-09-10 20:39:53 | http://arxiv.org/abs/2309.05132v1 | http://arxiv.org/pdf/2309.05132v1 | 2309.05132v1 |
Signal Temporal Logic Neural Predictive Control | Ensuring safety and meeting temporal specifications are critical challenges
for long-term robotic tasks. Signal temporal logic (STL) has been widely used
to systematically and rigorously specify these requirements. However,
traditional methods of finding the control policy under those STL requirements
are computationally complex and do not scale to high-dimensional systems or
systems with complex nonlinear dynamics. Reinforcement learning (RL) methods can learn
the policy to satisfy the STL specifications via hand-crafted or STL-inspired
rewards, but might encounter unexpected behaviors due to ambiguity and sparsity
in the reward. In this paper, we propose a method to directly learn a neural
network controller to satisfy the requirements specified in STL. Our controller
learns to roll out trajectories to maximize the STL robustness score in
training. In testing, similar to Model Predictive Control (MPC), the learned
controller predicts a trajectory within a planning horizon to ensure the
satisfaction of the STL requirement in deployment. A backup policy is designed
to ensure safety when our controller fails. Our approach can adapt to various
initial conditions and environmental parameters. We conduct experiments on six
tasks, where our method with the backup policy outperforms the classical
methods (MPC, STL-solver), model-free and model-based RL methods in STL
satisfaction rate, especially on tasks with complex STL specifications while
being 10X-100X faster than the classical methods. | [
"Yue Meng",
"Chuchu Fan"
] | 2023-09-10 20:31:25 | http://arxiv.org/abs/2309.05131v1 | http://arxiv.org/pdf/2309.05131v1 | 2309.05131v1 |
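As a rough illustration of the STL robustness scores that the controller in the abstract above maximizes, the standard quantitative semantics of the "always" and "eventually" operators can be evaluated on a sampled trajectory. This is a generic sketch of STL robustness, not the paper's neural controller.

```python
# Quantitative (robustness) semantics of two basic STL operators over a
# sampled 1-D trajectory: a positive score certifies satisfaction with a
# margin, a negative score quantifies violation. Illustrative only.

def robustness_always(traj, threshold):
    """G (x > threshold): worst-case margin over the horizon."""
    return min(x - threshold for x in traj)

def robustness_eventually(traj, threshold):
    """F (x > threshold): best-case margin over the horizon."""
    return max(x - threshold for x in traj)

traj = [0.5, 1.2, 0.9, 1.5]
print(robustness_always(traj, 0.0))      # positive iff the spec always holds
print(robustness_eventually(traj, 1.0))  # positive iff the spec eventually holds
```

Because these scores are (sub)differentiable compositions of min/max, they can serve as a training signal for a trajectory-rollout controller, which is the general mechanism the abstract describes.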
The online learning architecture with edge computing for high-level control for assisting patients | The prevalence of mobility impairments due to conditions such as spinal cord
injuries, strokes, and degenerative diseases is on the rise globally.
Lower-limb exoskeletons have been increasingly recognized as a viable solution
for enhancing mobility and rehabilitation for individuals with such
impairments. However, existing exoskeleton control systems often suffer from
limitations such as latency, lack of adaptability, and computational
inefficiency. To address these challenges, this paper introduces a novel online
adversarial learning architecture integrated with edge computing for high-level
lower-limb exoskeleton control. In the proposed architecture, sensor data from
the user is processed in real-time through edge computing nodes, which then
interact with an online adversarial learning model. This model adapts to the
user's specific needs and controls the exoskeleton with minimal latency.
Experimental evaluations demonstrate significant improvements in control
accuracy and adaptability, as well as enhanced quality-of-service (QoS)
metrics. These findings indicate that the integration of online adversarial
learning with edge computing offers a robust and efficient approach for the
next generation of lower-limb exoskeleton control systems. | [
"Yue Shi",
"Yihui Zhao"
] | 2023-09-10 20:30:03 | http://arxiv.org/abs/2309.05130v1 | http://arxiv.org/pdf/2309.05130v1 | 2309.05130v1 |
A compendium of data sources for data science, machine learning, and artificial intelligence | Recent advances in data science, machine learning, and artificial
intelligence, such as the emergence of large language models, are leading to an
increasing demand for data that can be processed by such models. While data
sources are application-specific, and it is impossible to produce an exhaustive
list of such data sources, it seems that a comprehensive, rather than complete,
list would still benefit data scientists and machine learning experts of all
levels of seniority. The goal of this publication is to provide just such an
(inevitably incomplete) list -- or compendium -- of data sources across
multiple areas of applications, including finance and economics, legal (laws
and regulations), life sciences (medicine and drug discovery), news sentiment
and social media, retail and ecommerce, satellite imagery, shipping and
logistics, and sports. | [
"Paul Bilokon",
"Oleksandr Bilokon",
"Saeed Amen"
] | 2023-09-10 19:15:22 | http://arxiv.org/abs/2309.05682v1 | http://arxiv.org/pdf/2309.05682v1 | 2309.05682v1 |
Nonlinear Granger Causality using Kernel Ridge Regression | I introduce a novel algorithm and accompanying Python library, named
mlcausality, designed for the identification of nonlinear Granger causal
relationships. This novel algorithm uses a flexible plug-in architecture that
enables researchers to employ any nonlinear regressor as the base prediction
model. Subsequently, I conduct a comprehensive performance analysis of
mlcausality when the prediction regressor is the kernel ridge regressor with
the radial basis function kernel. The results demonstrate that mlcausality
employing kernel ridge regression achieves competitive AUC scores across a
diverse set of simulated data. Furthermore, mlcausality with kernel ridge
regression yields more finely calibrated $p$-values in comparison to rival
algorithms. This enhancement enables mlcausality to attain superior accuracy
scores when using intuitive $p$-value-based thresholding criteria. Finally,
mlcausality with the kernel ridge regression exhibits significantly reduced
computation times compared to existing nonlinear Granger causality algorithms.
In fact, in numerous instances, this innovative approach achieves superior
solutions within computational timeframes that are an order of magnitude
shorter than those required by competing algorithms. | [
"Wojciech \"Victor\" Fulmyk"
] | 2023-09-10 18:28:48 | http://arxiv.org/abs/2309.05107v1 | http://arxiv.org/pdf/2309.05107v1 | 2309.05107v1 |
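The core idea behind the abstract above, comparing the prediction error of a kernel ridge regressor with and without the candidate cause's past, can be sketched as follows. This is a simplified in-sample illustration of the general approach under assumed toy dynamics, not the mlcausality implementation (which computes calibrated p-values).

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel matrix from pairwise squared distances.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_residuals(X, y, lam=1e-2, gamma=1.0):
    # Kernel ridge regression in closed form: alpha = (K + lam I)^-1 y.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return y - K @ alpha

# Toy dynamics where x nonlinearly drives y, so x "Granger-causes" y.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.4 * y[t - 1] + np.tanh(x[t - 1]) + 0.05 * rng.standard_normal()

lags_y = y[:-1].reshape(-1, 1)               # restricted model: past of y only
lags_yx = np.column_stack([y[:-1], x[:-1]])  # full model: past of y and x
target = y[1:]

err_restricted = np.mean(krr_residuals(lags_y, target) ** 2)
err_full = np.mean(krr_residuals(lags_yx, target) ** 2)
print(err_full < err_restricted)  # adding x's past should reduce the error
```

A full test would compare these errors on held-out data and attach a p-value; the sketch only shows why a flexible plug-in regressor makes the framework sensitive to nonlinear dependence.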
Convex Q Learning in a Stochastic Environment: Extended Version | The paper introduces the first formulation of convex Q-learning for Markov
decision processes with function approximation. The algorithms and theory rest
on a relaxation of a dual of Manne's celebrated linear programming
characterization of optimal control. The main contributions firstly concern
properties of the relaxation, described as a deterministic convex program: we
identify conditions for a bounded solution, and a significant relationship
between the solution to the new convex program, and the solution to standard
Q-learning. The second set of contributions concern algorithm design and
analysis: (i) A direct model-free method for approximating the convex program
for Q-learning shares properties with its ideal. In particular, a bounded
solution is ensured subject to a simple property of the basis functions; (ii)
The proposed algorithms are convergent and new techniques are introduced to
obtain the rate of convergence in a mean-square sense; (iii) The approach can
be generalized to a range of performance criteria, and it is found that
variance can be reduced by considering ``relative'' dynamic programming
equations; (iv) The theory is illustrated with an application to a classical
inventory control problem. | [
"Fan Lu",
"Sean Meyn"
] | 2023-09-10 18:24:43 | http://arxiv.org/abs/2309.05105v1 | http://arxiv.org/pdf/2309.05105v1 | 2309.05105v1 |
Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes | In recent years, there has been an intense debate about how learning in
biological neural networks (BNNs) differs from learning in artificial neural
networks. It is often argued that the updating of connections in the brain
relies only on local information, and therefore a stochastic gradient-descent
type optimization method cannot be used. In this paper, we study a stochastic
model for supervised learning in BNNs. We show that a (continuous) gradient
step occurs approximately when each learning opportunity is processed by many
local updates. This result suggests that stochastic gradient descent may indeed
play a role in optimizing BNNs. | [
"Sören Christensen",
"Jan Kallsen"
] | 2023-09-10 18:12:52 | http://arxiv.org/abs/2309.05102v1 | http://arxiv.org/pdf/2309.05102v1 | 2309.05102v1 |
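The central claim of the abstract above, that many small local updates per learning opportunity approximate a (continuous) gradient step, can be illustrated with a toy scalar objective. This is a hedged numerical sketch, not the paper's stochastic model of a biological network.

```python
import numpy as np

# Toy illustration: many tiny noisy local updates on one learning
# opportunity land close to a single deterministic gradient step.
rng = np.random.default_rng(3)

def grad(w):
    # Gradient of f(w) = 0.5 * w^2.
    return w

w_gd = 1.0 - 0.1 * grad(1.0)  # one exact gradient step, learning rate 0.1

# k local updates, each seeing only a noisy local signal, with step 0.1/k.
w = 1.0
k = 10000
for _ in range(k):
    w -= (0.1 / k) * (grad(w) + rng.standard_normal())
print(w_gd, w)  # the accumulated local updates end up near the gradient step
```

The per-update noise has standard deviation 1, yet its accumulated effect scales like 0.1 / sqrt(k) and averages out, which is the mechanism making gradient-like behavior emerge from purely local updates.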
Data-efficient Deep Learning Approach for Single-Channel EEG-Based Sleep Stage Classification with Model Interpretability | Sleep, a fundamental physiological process, occupies a significant portion of
our lives. Accurate classification of sleep stages serves as a crucial tool for
evaluating sleep quality and identifying probable sleep disorders. Our work
introduces a novel methodology that utilizes an SE-ResNet-Bi-LSTM architecture
to classify sleep into five separate stages. The classification process is
based on the analysis of single-channel electroencephalograms (EEGs). The
suggested framework consists of two fundamental elements: a feature extractor
that utilizes SE-ResNet, and a temporal context encoder that uses stacks of
Bi-LSTM units. The effectiveness of our approach is substantiated by thorough
assessments conducted on three different datasets, namely SleepEDF-20,
SleepEDF-78, and SHHS. The proposed methodology achieves significant model
performance, with Macro-F1 scores of 82.5, 78.9, and 81.9 for the respective
datasets. We employ 1D-GradCAM visualization as a methodology to elucidate the
decision-making process inherent in our model in the realm of sleep stage
classification. This visualization method not only provides valuable insights
into the model's classification rationale but also aligns its outcomes with the
annotations made by sleep experts. A notable feature of our work is an
efficient training approach that preserves model performance. The experiments
comprehensively compare the effectiveness of our proposed model
against existing approaches, highlighting its potential for
practical applications. | [
"Shivam Sharma",
"Suvadeep Maiti",
"S. Mythirayee",
"Srijithesh Rajendran",
"Raju Surampudi Bapi"
] | 2023-09-10 17:56:03 | http://arxiv.org/abs/2309.07156v2 | http://arxiv.org/pdf/2309.07156v2 | 2309.07156v2 |
Adaptive conformal classification with noisy labels | This paper develops novel conformal prediction methods for classification
tasks that can automatically adapt to random label contamination in the
calibration sample, enabling more informative prediction sets with stronger
coverage guarantees compared to state-of-the-art approaches. This is made
possible by a precise theoretical characterization of the effective coverage
inflation (or deflation) suffered by standard conformal inferences in the
presence of label contamination, which is then made actionable through new
calibration algorithms. Our solution is flexible and can leverage different
modeling assumptions about the label contamination process, while requiring no
knowledge about the data distribution or the inner workings of the
machine-learning classifier. The advantages of the proposed methods are
demonstrated through extensive simulations and an application to object
classification with the CIFAR-10H image data set. | [
"Matteo Sesia",
"Y. X. Rachel Wang",
"Xin Tong"
] | 2023-09-10 17:35:43 | http://arxiv.org/abs/2309.05092v1 | http://arxiv.org/pdf/2309.05092v1 | 2309.05092v1 |
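For context on the abstract above, the standard split-conformal procedure that the paper adapts to label contamination can be sketched in a few lines. This is the generic label-noise-free baseline with a toy stand-in for the classifier, not the paper's contamination-adaptive calibration.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    # Finite-sample-corrected (1 - alpha) quantile of calibration scores.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

rng = np.random.default_rng(1)
n_cal, n_classes = 500, 3
# Stand-in for classifier outputs; labels drawn consistently with them.
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
labels_cal = np.array([rng.choice(n_classes, p=p) for p in probs_cal])

# Nonconformity score: 1 - probability assigned to the true label.
scores = 1.0 - probs_cal[np.arange(n_cal), labels_cal]
qhat = conformal_quantile(scores, alpha=0.1)

# Prediction set for a new point: all labels scoring below the quantile.
probs_test = np.array([0.7, 0.2, 0.1])
pred_set = [c for c in range(n_classes) if 1.0 - probs_test[c] <= qhat]
print(pred_set)
```

The paper's point is that when the calibration labels are randomly contaminated, the quantile computed this way over- or under-covers in a way that can be characterized and corrected; the sketch shows only the baseline being corrected.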
Variance Reduction of Resampling for Sequential Monte Carlo | A resampling scheme provides a way to switch low-weight particles for
sequential Monte Carlo with higher-weight particles representing the objective
distribution. The lower the variance of the weight distribution, the more
concentrated the effective particles are, and the more quickly and accurately
the hidden Markov model can be approximated, especially in the nonlinear case.
We propose a repetitive deterministic domain with median ergodicity for
resampling, which achieves the lowest variance among the compared resampling
methods. Since the size of the deterministic domain satisfies $M\ll N$ (the
population size), our algorithm is faster than the state of the art for a
feasible number of particles, as verified by theoretical deduction and
experiments on a hidden Markov model in both the linear and nonlinear cases. | [
"Xiongming Dai",
"Gerald Baumgartner"
] | 2023-09-10 17:25:43 | http://arxiv.org/abs/2309.08620v1 | http://arxiv.org/pdf/2309.08620v1 | 2309.08620v1 |
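The variance gap that the abstract above targets can be seen already between two textbook schemes: systematic resampling typically has much lower offspring-count variance than multinomial resampling. The sketch below is illustrative only and is not the paper's proposed deterministic-domain algorithm.

```python
import numpy as np

# Compare the Monte Carlo variance of two classic particle-filter
# resampling schemes via the variance of per-particle offspring counts.

def multinomial_resample(weights, rng):
    n = len(weights)
    return rng.choice(n, size=n, p=weights)

def systematic_resample(weights, rng):
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return np.minimum(idx, n - 1)  # guard against float round-off at the top

rng = np.random.default_rng(2)
n = 100
w = rng.random(n)
w /= w.sum()

reps = 1000
cm = np.array([np.bincount(multinomial_resample(w, rng), minlength=n)
               for _ in range(reps)])
cs = np.array([np.bincount(systematic_resample(w, rng), minlength=n)
               for _ in range(reps)])
# Mean per-particle offspring-count variance: systematic is far lower.
print(cm.var(axis=0).mean(), cs.var(axis=0).mean())
```

Under multinomial resampling each offspring count is Binomial(n, w_i), while systematic resampling pins it to one of two adjacent integers, which is why its variance is bounded by 1/4 per particle.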
A supervised generative optimization approach for tabular data | Synthetic data generation has emerged as a crucial topic for financial
institutions, driven by multiple factors, such as privacy protection and data
augmentation. Many algorithms have been proposed for synthetic data generation
but reaching consensus on which method to use for specific data sets and use
cases remains challenging. Moreover, the majority of existing
approaches are ``unsupervised'' in the sense that they do not take into account
the downstream task. To address these issues, this work presents a novel
synthetic data generation framework. The framework integrates a supervised
component tailored to the specific downstream task and employs a meta-learning
approach to learn the optimal mixture distribution of existing synthetic
distributions. | [
"Fadi Hamad",
"Shinpei Nakamura-Sakai",
"Saheed Obitayo",
"Vamsi K. Potluru"
] | 2023-09-10 16:56:46 | http://arxiv.org/abs/2309.05079v1 | http://arxiv.org/pdf/2309.05079v1 | 2309.05079v1 |
Generalization error bounds for iterative learning algorithms with bounded updates | This paper explores the generalization characteristics of iterative learning
algorithms with bounded updates for non-convex loss functions, employing
information-theoretic techniques. Our key contribution is a novel bound for the
generalization error of these algorithms with bounded updates. Our approach
introduces two main novelties: 1) we reformulate the mutual information as the
uncertainty of updates, providing a new perspective, and 2) instead of using
the chaining rule of mutual information, we employ a variance decomposition
technique to decompose information across iterations, allowing for a simpler
surrogate process. We analyze our generalization bound under various settings
and demonstrate improved bounds. To bridge the gap between theory and practice,
we also examine the previously observed scaling behavior in large language
models. Ultimately, our work takes a further step for developing practical
generalization theories. | [
"Jingwen Fu",
"Nanning Zheng"
] | 2023-09-10 16:55:59 | http://arxiv.org/abs/2309.05077v3 | http://arxiv.org/pdf/2309.05077v3 | 2309.05077v3 |