title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
Modularity in Deep Learning: A Survey | Modularity is a general principle present in many fields. It offers
attractive advantages, including, among others, ease of conceptualization,
interpretability, scalability, module combinability, and module reusability.
The deep learning community has long sought to take inspiration from the
modularity principle, either implicitly or explicitly. This interest has been
increasing over recent years. We review the notion of modularity in deep
learning around three axes: data, task, and model, which characterize the life
cycle of deep learning. Data modularity refers to the observation or creation
of data groups for various purposes. Task modularity refers to the
decomposition of tasks into sub-tasks. Model modularity means that the
architecture of a neural network system can be decomposed into identifiable
modules. We describe different instantiations of the modularity principle, and
we contextualize their advantages in different deep learning sub-fields.
Finally, we conclude the paper with a discussion of the definition of
modularity and directions for future research. | [
"Haozhe Sun",
"Isabelle Guyon"
] | 2023-10-02 12:41:34 | http://arxiv.org/abs/2310.01154v1 | http://arxiv.org/pdf/2310.01154v1 | 2310.01154v1 |
SWMLP: Shared Weight Multilayer Perceptron for Car Trajectory Speed Prediction using Road Topographical Features | Although traffic data is collected on a massive scale, it is often only
available for specific regions. One concern is that, although existing studies
achieve good results on these data, the data from these regions may not be
sufficiently representative to describe all the traffic patterns in the rest of
the world. To address this concern, we propose a speed prediction
method that is independent of large historical speed data. To predict a
vehicle's speed, we use the trajectory road topographical features to fit a
Shared Weight Multilayer Perceptron learning model. Our results show
significant improvement, both qualitative and quantitative, over standard
regression analysis. Moreover, the proposed framework sheds new light on the
way to design new approaches for traffic analysis. | [
"Sarah Almeida Carneiro",
"Giovanni Chierchia",
"Jean Charléty",
"Aurélie Chataignon",
"Laurent Najman"
] | 2023-10-02 12:39:33 | http://arxiv.org/abs/2310.02282v1 | http://arxiv.org/pdf/2310.02282v1 | 2310.02282v1 |
Cryptocurrency Portfolio Optimization by Neural Networks | Many cryptocurrency brokers nowadays offer a variety of derivative assets
that allow traders to perform hedging or speculation. This paper proposes an
effective algorithm based on neural networks to take advantage of these
investment products. The proposed algorithm constructs a portfolio that
contains a pair of negatively correlated assets. A deep neural network, which
outputs the allocation weight of each asset at a time interval, is trained to
maximize the Sharpe ratio. A novel loss term is proposed to regulate the
network's bias towards a specific asset, thus forcing the network to learn an
allocation strategy that is close to a minimum variance strategy. Extensive
experiments were conducted using data collected from Binance spanning 19 months
to evaluate the effectiveness of our approach. The backtest results show that
the proposed algorithm can produce neural networks that are able to make
profits in different market situations. | [
"Quoc Minh Nguyen",
"Dat Thanh Tran",
"Juho Kanniainen",
"Alexandros Iosifidis",
"Moncef Gabbouj"
] | 2023-10-02 12:33:28 | http://arxiv.org/abs/2310.01148v1 | http://arxiv.org/pdf/2310.01148v1 | 2310.01148v1 |
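To make the objective above concrete, here is a minimal PyTorch sketch of a Sharpe-ratio training loss with an added bias-regulating term; the paper does not spell out its novel loss term here, so the `bias` penalty below (deviation from equal allocation) is an assumption, not the authors' exact formulation.

```python
import torch

def sharpe_loss(weights, returns, lam=0.1, eps=1e-8):
    """Negative Sharpe ratio plus a penalty on allocation bias.

    weights: (T, 2) allocations per interval; returns: (T, 2) asset returns.
    The bias term (an assumed form) discourages leaning persistently toward
    one of the two negatively correlated assets, pushing the learned strategy
    toward a minimum-variance-like allocation.
    """
    port = (weights * returns).sum(dim=1)            # portfolio return per interval
    sharpe = port.mean() / (port.std() + eps)
    bias = (weights.mean(dim=0) - 0.5).pow(2).sum()  # deviation from a 50/50 split
    return -sharpe + lam * bias

# toy check: random allocations over 100 intervals for two assets
w = torch.softmax(torch.randn(100, 2, requires_grad=True), dim=1)
r = 0.01 * torch.randn(100, 2)
sharpe_loss(w, r).backward()
```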
Parallel-in-Time Probabilistic Numerical ODE Solvers | Probabilistic numerical solvers for ordinary differential equations (ODEs)
treat the numerical simulation of dynamical systems as problems of Bayesian
state estimation. Aside from producing posterior distributions over ODE
solutions and thereby quantifying the numerical approximation error of the
method itself, one less-often noted advantage of this formalism is the
algorithmic flexibility gained by formulating numerical simulation in the
framework of Bayesian filtering and smoothing. In this paper, we leverage this
flexibility and build on the time-parallel formulation of iterated extended
Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE
solver. Instead of simulating the dynamical system sequentially in time, as
done by current probabilistic solvers, the proposed method processes all time
steps in parallel and thereby reduces the span cost from linear to logarithmic
in the number of time steps. We demonstrate the effectiveness of our approach
on a variety of ODEs and compare it to a range of both classic and
probabilistic numerical ODE solvers. | [
"Nathanael Bosch",
"Adrien Corenflos",
"Fatemeh Yaghoobi",
"Filip Tronarp",
"Philipp Hennig",
"Simo Särkkä"
] | 2023-10-02 12:32:21 | http://arxiv.org/abs/2310.01145v1 | http://arxiv.org/pdf/2310.01145v1 | 2310.01145v1 |
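The span-cost claim can be illustrated independently of the Kalman-smoother machinery: any associative combine over T time steps can be evaluated in ceil(log2 T) parallel rounds instead of a length-T sequential loop. Below, a Hillis-Steele scan over plain addition stands in for the solver's associative filtering/smoothing operator.

```python
import numpy as np

def parallel_inclusive_scan(x, combine=np.add):
    """Hillis-Steele inclusive scan: ceil(log2 T) rounds instead of T steps.

    Each round combines every element with the one `shift` positions back;
    on parallel hardware all combines within a round run simultaneously, so
    the span (critical path) is logarithmic in the number of time steps.
    """
    y = x.copy()
    shift = 1
    while shift < len(y):
        y[shift:] = combine(y[shift:], y[:-shift])   # one parallel round
        shift *= 2
    return y

print(parallel_inclusive_scan(np.ones(8)))  # [1. 2. 3. 4. 5. 6. 7. 8.]
```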
The Map Equation Goes Neural | Community detection and graph clustering are essential for unsupervised data
exploration and understanding the high-level organisation of networked systems.
Recently, graph clustering has been highlighted as an under-explored primary
task for graph neural networks. While hierarchical graph pooling has been shown
to improve performance in graph and node classification tasks, it performs
poorly in identifying meaningful clusters. Community detection has a long
history in network science, but typically relies on optimising objective
functions with custom-tailored search algorithms, not leveraging recent
advances in deep learning, particularly from graph neural networks. In this
paper, we narrow this gap between the deep learning and network science
communities. We consider the map equation, an information-theoretic objective
function for community detection. Expressing it in a fully differentiable
tensor form that produces soft cluster assignments, we optimise the map
equation with deep learning through gradient descent. More specifically, the
reformulated map equation is a loss function compatible with any graph neural
network architecture, enabling flexible clustering and graph pooling that
clusters both graph structure and data features in an end-to-end way,
automatically finding an optimum number of clusters without explicit
regularisation. We evaluate our approach experimentally using different neural
network architectures for unsupervised clustering in synthetic and real data.
Our results show that our approach achieves competitive performance against
baselines, naturally detects overlapping communities, and avoids
over-partitioning sparse graphs. | [
"Christopher Blöcker",
"Chester Tan",
"Ingo Scholtes"
] | 2023-10-02 12:32:18 | http://arxiv.org/abs/2310.01144v1 | http://arxiv.org/pdf/2310.01144v1 | 2310.01144v1 |
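For reference, a rough numpy rendering of a differentiable two-level map equation over soft assignments, using the classic plogp simplification; the paper's tensor form may differ, and the undirected-flow setup here (flow taken directly from the normalized adjacency) is an assumption.

```python
import numpy as np

def plogp(x, eps=1e-12):
    return x * np.log2(x + eps)

def soft_map_equation(A, S):
    """Two-level map equation codelength for soft cluster assignments.

    A: (n, n) symmetric adjacency; S: (n, k) soft memberships (rows sum to 1).
    Uses the classic simplification
      L = plogp(q) - 2*sum_m plogp(q_m) + sum_m plogp(p_m + q_m) - sum_i plogp(p_i)
    with q_m the soft exit flow of module m and p the node visit rates.
    Every operation is differentiable, so L can serve as a GNN loss.
    """
    F = A / A.sum()                                # flow per directed edge
    p = F.sum(axis=1)                              # node visit rates
    q = np.einsum("ij,im,jm->m", F, S, 1.0 - S)    # soft module exit flows
    pm = S.T @ p                                   # module visit rates
    return plogp(q.sum()) - 2 * plogp(q).sum() + plogp(pm + q).sum() - plogp(p).sum()

# two 3-node cliques joined by one edge: the 2-module split costs fewer bits
A = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]], float)
S_two = np.repeat(np.eye(2), 3, axis=0)            # nodes 0-2 vs nodes 3-5
S_one = np.ones((6, 1))                            # everything in one module
print(soft_map_equation(A, S_two), soft_map_equation(A, S_one))  # ~2.32 < ~2.56
```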
Stability and Generalization for Minibatch SGD and Local SGD | The increasing scale of data propels the popularity of leveraging parallelism
to speed up the optimization. Minibatch stochastic gradient descent (minibatch
SGD) and local SGD are two popular methods for parallel optimization. The
existing theoretical studies show a linear speedup of these methods with
respect to the number of machines, which, however, is measured by optimization
errors. As a comparison, the stability and generalization of these methods are
much less studied. In this paper, we pioneer the stability and generalization
analysis of minibatch and local SGD to understand their learnability. We
incorporate training errors into the stability analysis, which shows how small
training errors help generalization for overparameterized models. Our stability
bounds imply optimistic risk bounds which decay fast under a low noise
condition. We show both minibatch and local SGD achieve a linear speedup to
attain the optimal risk bounds. | [
"Yunwen Lei",
"Tao Sun",
"Mingrui Liu"
] | 2023-10-02 12:26:51 | http://arxiv.org/abs/2310.01139v1 | http://arxiv.org/pdf/2310.01139v1 | 2310.01139v1 |
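The two schemes under study differ only in where averaging happens: minibatch SGD averages gradients across machines at every step, while local SGD averages models every K steps. A toy sketch on per-machine least-squares data; the quadratic objective and full synchrony are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, lr, steps, K = 4, 5, 0.1, 100, 10       # machines, dim, step size, sync period
A = [rng.normal(size=(50, d)) for _ in range(M)]             # local datasets
b = [a @ np.ones(d) + 0.1 * rng.normal(size=50) for a in A]  # shared true model
grad = lambda w, m: A[m].T @ (A[m] @ w - b[m]) / 50

# minibatch SGD: gradients are averaged across all machines at every step
w = np.zeros(d)
for _ in range(steps):
    w -= lr * np.mean([grad(w, m) for m in range(M)], axis=0)

# local SGD: each machine takes K local steps between model-averaging rounds
v = [np.zeros(d) for _ in range(M)]
for t in range(steps):
    v = [v[m] - lr * grad(v[m], m) for m in range(M)]
    if (t + 1) % K == 0:                      # communication round
        avg = np.mean(v, axis=0)
        v = [avg.copy() for _ in range(M)]

print(np.linalg.norm(w - 1), np.linalg.norm(v[0] - 1))  # distance to true model
```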
CommIN: Semantic Image Communications as an Inverse Problem with INN-Guided Diffusion Models | Joint source-channel coding schemes based on deep neural networks (DeepJSCC)
have recently achieved remarkable performance for wireless image transmission.
However, these methods usually focus only on the distortion of the
reconstructed signal at the receiver side with respect to the source at the
transmitter side, rather than the perceptual quality of the reconstruction
which carries more semantic information. As a result, severe perceptual
distortion can be introduced under extreme conditions such as low bandwidth and
low signal-to-noise ratio. In this work, we propose CommIN, which views the
recovery of high-quality source images from degraded reconstructions as an
inverse problem. To address this, CommIN combines Invertible Neural Networks
(INN) with diffusion models, aiming for superior perceptual quality. Through
experiments, we show that our CommIN significantly improves the perceptual
quality compared to DeepJSCC under extreme conditions and outperforms other
inverse problem approaches used in DeepJSCC. | [
"Jiakang Chen",
"Di You",
"Deniz Gündüz",
"Pier Luigi Dragotti"
] | 2023-10-02 12:06:58 | http://arxiv.org/abs/2310.01130v1 | http://arxiv.org/pdf/2310.01130v1 | 2310.01130v1 |
End-to-End Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations | Speech Emotion recognition (SER) in call center conversations has emerged as
a valuable tool for assessing the quality of interactions between clients and
agents. In contrast to controlled laboratory environments, real-life
conversations take place under uncontrolled conditions and are subject to
contextual factors that influence the expression of emotions. In this paper, we
present our approach to constructing a large-scale real-life dataset (CusEmo)
for continuous SER in customer service call center conversations. We adopted
the dimensional emotion annotation approach to capture the subtlety,
complexity, and continuity of emotions in real-life call center conversations,
while annotating contextual information. The study also addresses the
challenges encountered during the application of the End-to-End (E2E) SER
system to the dataset, including determining the appropriate label sampling
rate and input segment length, as well as integrating contextual information
(interlocutor's gender and empathy level) with different weights using
multitask learning. The results show that incorporating the empathy level
information improved the model's performance. | [
"Yajing Feng",
"Laurence Devillers"
] | 2023-10-02 11:53:48 | http://arxiv.org/abs/2310.02281v1 | http://arxiv.org/pdf/2310.02281v1 | 2310.02281v1 |
Text Data Augmentation in Low-Resource Settings via Fine-Tuning of Large Language Models | The in-context learning ability of large language models (LLMs) enables them
to generalize to novel downstream tasks with relatively few labeled examples.
However, they require enormous computational resources to be deployed.
Alternatively, smaller models can solve specific tasks if fine-tuned with
enough labeled examples. These examples, however, are expensive to obtain. In
pursuit of the best of both worlds, we study the annotation and generation of
fine-tuning training data via fine-tuned teacher LLMs to improve the downstream
performance of much smaller models. In four text classification and two text
generation tasks, we find that both data generation and annotation dramatically
improve the respective downstream model's performance, occasionally
necessitating only a minor fraction of the original training dataset. | [
"Jean Kaddour",
"Qi Liu"
] | 2023-10-02 11:49:05 | http://arxiv.org/abs/2310.01119v1 | http://arxiv.org/pdf/2310.01119v1 | 2310.01119v1 |
Predicting emergence of crystals from amorphous matter with deep learning | Crystallization of the amorphous phases into metastable crystals plays a
fundamental role in the formation of new matter, from geological to biological
processes in nature to synthesis and development of new materials in the
laboratory. Predicting the outcome of such phase transitions reliably would
enable new research directions in these areas, but has remained beyond reach
with molecular modeling or ab-initio methods. Here, we show that
crystallization products of amorphous phases can be predicted in any inorganic
chemistry by sampling the crystallization pathways of their local structural
motifs at the atomistic level using universal deep learning potentials. We show
that this approach identifies the crystal structures of polymorphs that
initially nucleate from amorphous precursors with high accuracy across a
diverse set of material systems, including polymorphic oxides, nitrides,
carbides, fluorides, chlorides, chalcogenides, and metal alloys. Our results
demonstrate that Ostwald's rule of stages can be exploited mechanistically at
the molecular level to predictably access new metastable crystals from the
amorphous phase in material synthesis. | [
"Muratahan Aykol",
"Amil Merchant",
"Simon Batzner",
"Jennifer N. Wei",
"Ekin Dogus Cubuk"
] | 2023-10-02 11:46:39 | http://arxiv.org/abs/2310.01117v1 | http://arxiv.org/pdf/2310.01117v1 | 2310.01117v1 |
Batch-less stochastic gradient descent for compressive learning of deep regularization for image denoising | We consider the problem of denoising with the help of prior information taken
from a database of clean signals or images. Denoising with variational methods
is very efficient if a regularizer well adapted to the nature of the data is
available. Thanks to the maximum a posteriori Bayesian framework, such
regularizer can be systematically linked with the distribution of the data.
With deep neural networks (DNN), complex distributions can be recovered from a
large training database. To reduce the computational burden of this task, we
adapt the compressive learning framework to the learning of regularizers
parametrized by DNN. We propose two variants of stochastic gradient descent
(SGD) for the recovery of deep regularization parameters from a heavily
compressed database. These algorithms outperform the initially proposed method
that was limited to low-dimensional signals, each iteration using information
from the whole database. They also benefit from classical SGD convergence
guarantees. Thanks to these improvements we show that this method can be
applied to patch-based image denoising. | [
"Hui Shi",
"Yann Traonmilin",
"J-F Aujol"
] | 2023-10-02 11:46:11 | http://arxiv.org/abs/2310.03085v1 | http://arxiv.org/pdf/2310.03085v1 | 2310.03085v1 |
Prompt-tuning latent diffusion models for inverse problems | We propose a new method for solving imaging inverse problems using
text-to-image latent diffusion models as general priors. Existing methods using
latent diffusion models for inverse problems typically rely on simple null text
prompts, which can lead to suboptimal performance. To address this limitation,
we introduce a method for prompt tuning, which jointly optimizes the text
embedding on-the-fly while running the reverse diffusion process. This allows
us to generate images that are more faithful to the diffusion prior. In
addition, we propose a method to keep the evolution of latent variables within
the range space of the encoder, by projection. This helps to reduce image
artifacts, a major problem when using latent diffusion models instead of
pixel-based diffusion models. Our combined method, called P2L, outperforms both
image- and latent-diffusion model-based inverse problem solvers on a variety of
tasks, such as super-resolution, deblurring, and inpainting. | [
"Hyungjin Chung",
"Jong Chul Ye",
"Peyman Milanfar",
"Mauricio Delbracio"
] | 2023-10-02 11:31:48 | http://arxiv.org/abs/2310.01110v1 | http://arxiv.org/pdf/2310.01110v1 | 2310.01110v1 |
R-divergence for Estimating Model-oriented Distribution Discrepancy | Real-life data are often non-IID due to complex distributions and
interactions, and the sensitivity to the distribution of samples can differ
among learning models. Accordingly, a key question for any supervised or
unsupervised model is whether the probability distributions of two given
datasets can be considered identical. To address this question, we introduce
R-divergence, designed to assess model-oriented distribution discrepancies. The
core insight is that two distributions are likely identical if their optimal
hypothesis yields the same expected risk for each distribution. To estimate the
distribution discrepancy between two datasets, R-divergence learns a minimum
hypothesis on the mixed data and then gauges the empirical risk difference
between them. We evaluate the test power across various unsupervised and
supervised tasks and find that R-divergence achieves state-of-the-art
performance. To demonstrate the practicality of R-divergence, we employ
R-divergence to train robust neural networks on samples with noisy labels. | [
"Zhilin Zhao",
"Longbing Cao"
] | 2023-10-02 11:30:49 | http://arxiv.org/abs/2310.01109v1 | http://arxiv.org/pdf/2310.01109v1 | 2310.01109v1 |
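The estimator itself is compact: learn one hypothesis on the mixed data, then take the difference of its empirical risks on the two datasets. A sketch with a logistic-regression hypothesis and log-loss risk, which are illustrative choices rather than the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def r_divergence(X1, y1, X2, y2):
    """Fit a minimum hypothesis on the mixed data, then gauge the absolute
    difference of its empirical risks on the two datasets."""
    h = LogisticRegression().fit(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    risk = lambda X, y: log_loss(y, h.predict_proba(X), labels=h.classes_)
    return abs(risk(X1, y1) - risk(X2, y2))

rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(200, 3)), rng.integers(0, 2, 200)   # random labels
X2 = rng.normal(loc=1.0, size=(200, 3))                       # shifted inputs
y2 = (X2[:, 0] > 1.0).astype(int)                             # learnable rule
print(r_divergence(X1, y1, X1[:100], y1[:100]))   # same source: near zero
print(r_divergence(X1, y1, X2, y2))               # different source: larger
```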
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models | Recent endeavors in video editing have showcased promising results in
single-attribute editing or style transfer tasks, either by training
text-to-video (T2V) models on text-video data or adopting training-free
methods. However, when confronted with the complexities of multi-attribute
editing scenarios, they exhibit shortcomings such as omitting or overlooking
intended attribute changes, modifying the wrong elements of the input video,
and failing to preserve regions of the input video that should remain intact.
To address this, here we present a novel grounding-guided video-to-video
translation framework called Ground-A-Video for multi-attribute video editing.
Ground-A-Video attains temporally consistent multi-attribute editing of input
videos in a training-free manner without aforementioned shortcomings. Central
to our method is the introduction of Cross-Frame Gated Attention which
incorporates grounding information into the latent representations in a
temporally consistent fashion, along with Modulated Cross-Attention and optical
flow guided inverted latents smoothing. Extensive experiments and applications
demonstrate that Ground-A-Video's zero-shot capacity outperforms other baseline
methods in terms of edit-accuracy and frame consistency. Further results and
codes are provided at our project page (http://ground-a-video.github.io). | [
"Hyeonho Jeong",
"Jong Chul Ye"
] | 2023-10-02 11:28:37 | http://arxiv.org/abs/2310.01107v1 | http://arxiv.org/pdf/2310.01107v1 | 2310.01107v1 |
Energy-Guided Continuous Entropic Barycenter Estimation for General Costs | Optimal transport (OT) barycenters are a mathematically grounded way of
averaging probability distributions while capturing their geometric properties.
In short, the barycenter task is to take the average of a collection of
probability distributions w.r.t. given OT discrepancies. We propose a novel
algorithm for approximating the continuous Entropic OT (EOT) barycenter for
arbitrary OT cost functions. Our approach is built upon the dual reformulation
of the EOT problem based on weak OT, which has recently gained the attention of
the ML community. Beyond its novelty, our method enjoys several advantageous
properties: (i) we establish quality bounds for the recovered solution; (ii)
this approach seamlessly interconnects with the Energy-Based Models (EBMs)
learning procedure enabling the use of well-tuned algorithms for the problem of
interest; (iii) it provides an intuitive optimization scheme avoiding min-max,
reinforce and other intricate technical tricks. For validation, we consider
several low-dimensional scenarios and image-space setups, including
non-Euclidean cost functions. Furthermore, we investigate the practical task of
learning the barycenter on an image manifold generated by a pretrained
generative model, opening up new directions for real-world applications. | [
"Alexander Kolesov",
"Petr Mokrov",
"Igor Udovichenko",
"Milena Gazdieva",
"Gudmund Pammer",
"Evgeny Burnaev",
"Alexander Korotin"
] | 2023-10-02 11:24:36 | http://arxiv.org/abs/2310.01105v1 | http://arxiv.org/pdf/2310.01105v1 | 2310.01105v1 |
HyMNet: a Multimodal Deep Learning System for Hypertension Classification using Fundus Photographs and Cardiometabolic Risk Factors | In recent years, deep learning has shown promise in predicting hypertension
(HTN) from fundus images. However, most prior research has primarily focused on
analyzing a single type of data, which may not capture the full complexity of
HTN risk. To address this limitation, this study introduces a multimodal deep
learning (MMDL) system, dubbed HyMNet, which combines fundus images and
cardiometabolic risk factors, specifically age and gender, to improve
hypertension detection capabilities. Our MMDL system uses the DenseNet-201
architecture, pre-trained on ImageNet, for the fundus imaging path and a fully
connected neural network for the age and gender path. The two paths are jointly
trained by concatenating 64 features output from each path that are then fed
into a fusion network. The system was trained on 1,143 retinal images from 626
individuals collected from the Saudi Ministry of National Guard Health Affairs.
The results show that the multimodal model that integrates fundus images along
with age and gender achieved an AUC of 0.791 [CI: 0.735, 0.848], which
outperforms the unimodal model trained solely on fundus photographs that
yielded an AUC of 0.766 [CI: 0.705, 0.828] for hypertension detection. | [
"Mohammed Baharoon",
"Hessa Almatar",
"Reema Alduhayan",
"Tariq Aldebasi",
"Badr Alahmadi",
"Yahya Bokhari",
"Mohammed Alawad",
"Ahmed Almazroa",
"Abdulrhman Aljouie"
] | 2023-10-02 11:17:19 | http://arxiv.org/abs/2310.01099v1 | http://arxiv.org/pdf/2310.01099v1 | 2310.01099v1 |
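A sketch of the described two-path architecture: an ImageNet-pretrained DenseNet-201 reduced to 64 fundus features, a small fully connected path mapping age and gender to 64 features, and a fusion network over the concatenated 128 features. Hidden sizes other than the stated 64-feature path outputs are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201

class HyMNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = densenet201(weights="IMAGENET1K_V1")    # ImageNet pre-training
        backbone.classifier = nn.Linear(backbone.classifier.in_features, 64)
        self.image_path = backbone                         # fundus -> 64 features
        self.tabular_path = nn.Sequential(                 # age + gender -> 64
            nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 64))
        self.fusion = nn.Sequential(                       # 64 + 64 -> HTN logit
            nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, fundus, age_gender):
        z = torch.cat([self.image_path(fundus),
                       self.tabular_path(age_gender)], dim=1)
        return self.fusion(z)

model = HyMNetSketch()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 2))
```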
NP$^2$L: Negative Pseudo Partial Labels Extraction for Graph Neural Networks | How to utilize pseudo labels has long been a research hotspot in
machine learning. However, most methods use pseudo labels for supervised
training without any valid assessment of their accuracy. Moreover,
applications of pseudo labels in graph neural networks (GNNs) overlook the
differences between graph learning and other machine learning tasks, such as
the message passing mechanism. To address the first issue, we found through
extensive experiments that pseudo labels are more accurate if they are
selected from non-overlapping partial labels and defined as negative relations
between node pairs. Therefore, based on this extraction from pseudo and
partial labels, negative edges are constructed between two nodes by the
negative pseudo partial labels extraction (NP$^2$E) module. With that, a
signed graph is built containing highly accurate pseudo-label information from
the original graph, which effectively assists the GNN in learning at the
message-passing level, providing one solution to the second issue. Empirical
results on link prediction and node classification tasks over several
benchmark datasets demonstrate the effectiveness of our method.
State-of-the-art performance is achieved on both tasks. | [
"Xinjie Shen",
"Danyang Wu",
"Jitao Lu",
"Junjie Liang",
"Jin Xu",
"Feiping Nie"
] | 2023-10-02 11:13:59 | http://arxiv.org/abs/2310.01098v1 | http://arxiv.org/pdf/2310.01098v1 | 2310.01098v1 |
GraphText: Graph Reasoning in Text Space | Large Language Models (LLMs) have gained the ability to assimilate human
knowledge and facilitate natural language interactions with both humans and
other LLMs. However, despite their impressive achievements, LLMs have not made
significant advancements in the realm of graph machine learning. This
limitation arises because graphs encapsulate distinct relational data, making
it challenging to transform them into natural language that LLMs understand. In
this paper, we bridge this gap with a novel framework, GraphText, that
translates graphs into natural language. GraphText derives a graph-syntax tree
for each graph that encapsulates both the node attributes and inter-node
relationships. Traversal of the tree yields a graph text sequence, which is
then processed by an LLM to treat graph tasks as text generation tasks.
Notably, GraphText offers multiple advantages. It introduces training-free
graph reasoning: even without training on graph data, GraphText with ChatGPT
can achieve on par with, or even surpassing, the performance of
supervised-trained graph neural networks through in-context learning (ICL).
Furthermore, GraphText paves the way for interactive graph reasoning, allowing
both humans and LLMs to communicate with the model seamlessly using natural
language. These capabilities underscore the vast, yet-to-be-explored potential
of LLMs in the domain of graph machine learning. | [
"Jianan Zhao",
"Le Zhuo",
"Yikang Shen",
"Meng Qu",
"Kai Liu",
"Michael Bronstein",
"Zhaocheng Zhu",
"Jian Tang"
] | 2023-10-02 11:03:57 | http://arxiv.org/abs/2310.01089v1 | http://arxiv.org/pdf/2310.01089v1 | 2310.01089v1 |
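The core transformation, serializing a node's attributes and relations into text an LLM can reason over, can be sketched in a few lines. The traversal format below is an illustration of the idea, not the paper's exact graph-syntax tree.

```python
def node_to_text(graph, feats, labels, node, hops=1):
    """Serialize a node's local structure into text for an LLM prompt.

    graph: {node: [neighbors]}; feats: {node: str}; labels: {node: str} for
    the subset of labeled nodes. A stand-in for GraphText's tree traversal.
    """
    lines = [f"center node: feature={feats[node]}"]
    frontier = [node]
    for h in range(1, hops + 1):
        frontier = sorted({n for f in frontier for n in graph[f]})
        known = [f"{feats[n]} (label: {labels[n]})" for n in frontier if n in labels]
        lines.append(f"hop-{h} neighbors: " + "; ".join(known))
    lines.append("question: what is the label of the center node?")
    return "\n".join(lines)

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
feats = {0: "red", 1: "red", 2: "blue"}
print(node_to_text(graph, feats, labels={1: "A", 2: "B"}, node=0))
```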
Towards human-like spoken dialogue generation between AI agents from written dialogue | The advent of large language models (LLMs) has made it possible to generate
natural written dialogues between two agents. However, generating human-like
spoken dialogues from these written dialogues remains challenging. Spoken
dialogues have several unique characteristics: they frequently include
backchannels and laughter, and the smoothness of turn-taking significantly
influences the fluidity of conversation. This study proposes CHATS - CHatty
Agents Text-to-Speech - a discrete token-based system designed to generate
spoken dialogues based on written dialogues. Our system can generate speech for
both the speaker side and the listener side simultaneously, using only the
transcription from the speaker side, which eliminates the need for
transcriptions of backchannels or laughter. Moreover, CHATS facilitates natural
turn-taking; it determines the appropriate duration of silence after each
utterance in the absence of overlap, and it initiates the generation of
overlapping speech based on the phoneme sequence of the next utterance in case
of overlap. Experimental evaluations indicate that CHATS outperforms the
text-to-speech baseline, producing spoken dialogues that are more interactive
and fluid while retaining clarity and intelligibility. | [
"Kentaro Mitsui",
"Yukiya Hono",
"Kei Sawada"
] | 2023-10-02 11:03:20 | http://arxiv.org/abs/2310.01088v1 | http://arxiv.org/pdf/2310.01088v1 | 2310.01088v1 |
Non-negative isomorphic neural networks for photonic neuromorphic accelerators | Neuromorphic photonic accelerators are becoming increasingly popular, since
they can significantly improve computation speed and energy efficiency, leading
to femtojoule per MAC efficiency. However, deploying existing DL models on such
platforms is not trivial, since a great range of photonic neural network
architectures relies on incoherent setups and power addition operational
schemes that cannot natively represent negative quantities. This results in
additional hardware complexity that increases cost and reduces energy
efficiency. To overcome this, we can train non-negative neural networks and
potentially exploit the full range of incoherent neuromorphic photonic
capabilities. However, existing approaches cannot achieve the same level of
accuracy as their regular counterparts due to training difficulties, as recent
evidence also suggests. To this end, we introduce a methodology to obtain the
non-negative isomorphic equivalents of regular neural networks that meet
requirements of neuromorphic hardware, overcoming the aforementioned
limitations. Furthermore, we also introduce a sign-preserving optimization
approach that enables training of such isomorphic networks in a non-negative
manner. | [
"Manos Kirtas",
"Nikolaos Passalis",
"Nikolaos Pleros",
"Anastasios Tefas"
] | 2023-10-02 10:54:46 | http://arxiv.org/abs/2310.01084v1 | http://arxiv.org/pdf/2310.01084v1 | 2310.01084v1 |
Linear attention is (maybe) all you need (to understand transformer optimization) | Transformer training is notoriously difficult, requiring a careful design of
optimizers and use of various heuristics. We make progress towards
understanding the subtleties of training transformers by carefully studying a
simple yet canonical linearized shallow transformer model. Specifically, we
train linear transformers to solve regression tasks, inspired by J. von Oswald
et al. (ICML 2023), and K. Ahn et al. (NeurIPS 2023). Most importantly, we
observe that our proposed linearized models can reproduce several prominent
aspects of transformer training dynamics. Consequently, the results obtained in
this paper suggest that a simple linearized transformer model could actually be
a valuable, realistic abstraction for understanding transformer optimization. | [
"Kwangjun Ahn",
"Xiang Cheng",
"Minhak Song",
"Chulhee Yun",
"Ali Jadbabaie",
"Suvrit Sra"
] | 2023-10-02 10:48:42 | http://arxiv.org/abs/2310.01082v1 | http://arxiv.org/pdf/2310.01082v1 | 2310.01082v1 |
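The linearized model in question simply drops the softmax from attention, so a layer becomes a product of three linear maps plus a residual connection. A minimal sketch for in-context regression tokens; the 1/n scaling and token layout are assumptions.

```python
import torch
import torch.nn as nn

class LinearAttention(nn.Module):
    """One attention layer with the softmax removed: out = z + (Q K^T) V / n."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)

    def forward(self, z):                       # z: (batch, n, dim)
        q, k, v = self.q(z), self.k(z), self.v(z)
        return z + (q @ k.transpose(1, 2)) @ v / z.shape[1]

# in-context regression: tokens are (x_i, y_i) pairs, the query has y masked
layer = LinearAttention(dim=6)
tokens = torch.randn(8, 20, 6)
out = layer(tokens)             # the prediction is read from the query slot
```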
Combining Deep Learning and GARCH Models for Financial Volatility and Risk Forecasting | In this paper, we develop a hybrid approach to forecasting the volatility and
risk of financial instruments by combining common econometric GARCH time series
models with deep learning neural networks. For the latter, we employ Gated
Recurrent Unit (GRU) networks, whereas four different specifications are used
as the GARCH component: standard GARCH, EGARCH, GJR-GARCH and APARCH. Models
are tested using daily logarithmic returns on the S&P 500 index as well as
gold and Bitcoin prices, with the three assets representing quite distinct
volatility dynamics. As the main volatility estimator, also underlying the
target function of our hybrid models, we use the price-range-based Garman-Klass
estimator, modified to incorporate the opening and closing prices. Volatility
forecasts resulting from the hybrid models are employed to evaluate the assets'
risk using the Value-at-Risk (VaR) and Expected Shortfall (ES) at two different
tolerance levels of 5% and 1%. Gains from combining the GARCH and GRU
approaches are discussed in the contexts of both the volatility and risk
forecasts. In general, it can be concluded that the hybrid solutions produce
more accurate point volatility forecasts, although it does not necessarily
translate into superior VaR and ES forecasts. | [
"Jakub Michańków",
"Łukasz Kwiatkowski",
"Janusz Morajda"
] | 2023-10-02 10:18:13 | http://arxiv.org/abs/2310.01063v1 | http://arxiv.org/pdf/2310.01063v1 | 2310.01063v1 |
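For reference, the price-range-based Garman-Klass estimator used as the volatility target has the standard OHLC form below; the paper applies a further modification around the opening and closing prices whose exact expression is not reproduced here.

```python
import numpy as np

def garman_klass(open_, high, low, close):
    """Daily variance estimate from OHLC prices:
    0.5*ln(H/L)^2 - (2*ln 2 - 1)*ln(C/O)^2 (standard Garman-Klass form)."""
    hl = np.log(high / low)
    co = np.log(close / open_)
    return 0.5 * hl**2 - (2 * np.log(2) - 1) * co**2

o = np.array([100.0, 101.0]); h = np.array([102.0, 103.5])
l = np.array([99.0, 100.2]);  c = np.array([101.0, 100.8])
vol = np.sqrt(garman_klass(o, h, l, c))   # per-day volatility estimate
```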
Improved Crop and Weed Detection with Diverse Data Ensemble Learning in Agriculture | Modern agriculture heavily relies on Site-Specific Farm Management practices,
necessitating accurate detection, localization, and quantification of crops and
weeds in the field, which can be achieved using deep learning techniques. In
this regard, crop and weed-specific binary segmentation models have shown
promise. However, uncontrolled field conditions limit their performance from
one field to the other. To improve semantic model generalization, existing
methods augment and synthesize agricultural data to account for uncontrolled
field conditions. However, given highly varied field conditions, these methods
have limitations. To overcome the challenges of model deterioration in such
conditions, we propose utilizing data specific to other crops and weeds for our
specific target problem. To achieve this, we propose a novel ensemble
framework. Our approach involves utilizing different crop and weed models
trained on diverse datasets and employing a teacher-student configuration. By
using homogeneous stacking of base models and a trainable meta-architecture to
combine their outputs, we achieve significant improvements for Canola crops and
Kochia weeds on unseen test data, surpassing the performance of single semantic
segmentation models. We identify the UNET meta-architecture as the most
effective in this context. Finally, through ablation studies, we demonstrate
and validate the effectiveness of our proposed model. We observe that including
base models trained on other target crops and weeds can help generalize the
model to capture varied field conditions. Lastly, we propose two novel datasets
with varied conditions for comparisons. | [
"Muhammad Hamza Asad",
"Saeed Anwar",
"Abdul Bais"
] | 2023-10-02 10:05:30 | http://arxiv.org/abs/2310.01055v1 | http://arxiv.org/pdf/2310.01055v1 | 2310.01055v1 |
Seismogram Transformer: A generic deep learning backbone network for multiple earthquake monitoring tasks | Seismic records, known as seismograms, are crucial records of ground motion
resulting from seismic events, constituting the backbone of earthquake research
and monitoring. The latest advancements in deep learning have significantly
facilitated various seismic signal processing tasks. This paper introduces a
novel backbone neural network model designed for various seismic monitoring
tasks, named Seismogram Transformer (SeisT). Thanks to its efficient network
architecture, SeisT matches or even outperforms the state-of-the-art models in
earthquake detection, seismic phase picking, first-motion polarity
classification, magnitude estimation, and azimuth estimation tasks,
particularly in terms of out-of-distribution generalization performance. SeisT
consists of multiple network layers composed of different foundational blocks,
which help the model understand multi-level feature representations of
seismograms from low-level to high-level complex features, effectively
extracting features such as frequency, phase, and time-frequency relationships
from input seismograms. Three different-sized models were customized based on
these diverse foundational modules. Through extensive experiments and
performance evaluations, this study showcases the capabilities and potential of
SeisT in advancing seismic signal processing and earthquake research. | [
"Sen Li",
"Xu Yang",
"Anye Cao",
"Changbin Wang",
"Yaoqi Liu",
"Yapeng Liu",
"Qiang Niu"
] | 2023-10-02 09:28:31 | http://arxiv.org/abs/2310.01037v1 | http://arxiv.org/pdf/2310.01037v1 | 2310.01037v1 |
Learnable Cross-modal Knowledge Distillation for Multi-modal Learning with Missing Modality | The problem of missing modalities is both critical and non-trivial to be
handled in multi-modal models. It is common for multi-modal tasks that certain
modalities contribute more than others, and if those important modalities are
missing, the model performance drops significantly. This fact remains
unexplored by current multi-modal approaches that recover the
representation from missing modalities by feature reconstruction or blind
feature aggregation from other modalities, instead of extracting useful
information from the best performing modalities. In this paper, we propose a
Learnable Cross-modal Knowledge Distillation (LCKD) model to adaptively
identify important modalities and distil knowledge from them to help other
modalities from the cross-modal perspective for solving the missing modality
issue. Our approach introduces a teacher election procedure to select the most
"qualified" teachers based on their single modality performance on certain
tasks. Then, cross-modal knowledge distillation is performed between teacher
and student modalities for each task to push the model parameters to a point
that is beneficial for all tasks. Hence, even if the teacher modalities for
certain tasks are missing during testing, the available student modalities can
accomplish the task well enough based on the learned knowledge from their
automatically elected teacher modalities. Experiments on the Brain Tumour
Segmentation Dataset 2018 (BraTS2018) show that LCKD outperforms other methods
by a considerable margin, improving the state-of-the-art performance by 3.61%
for enhancing tumour, 5.99% for tumour core, and 3.76% for whole tumour in
terms of segmentation Dice score. | [
"Hu Wang",
"Yuanhong Chen",
"Congbo Ma",
"Jodie Avery",
"Louise Hull",
"Gustavo Carneiro"
] | 2023-10-02 09:24:54 | http://arxiv.org/abs/2310.01035v1 | http://arxiv.org/pdf/2310.01035v1 | 2310.01035v1 |
A Novel Approach for Machine Learning-based Load Balancing in High-speed Train System using Nested Cross Validation | Fifth-generation (5G) mobile communication networks have recently emerged in
various fields, including high-speed trains. However, the dense deployment of 5G
millimeter wave (mmWave) base stations (BSs) and the high speed of moving
trains lead to frequent handovers (HOs), which can adversely affect the
Quality-of-Service (QoS) of mobile users. As a result, HO optimization and
resource allocation are essential considerations for managing mobility in
high-speed train systems. In this paper, we model the system performance of a
high-speed train system with a novel machine learning (ML) approach, a nested
cross-validation scheme, which prevents information leakage from model
evaluation into model parameter tuning, thereby avoiding overfitting and
yielding a better generalization error. Handover Margin (HOM) and
Time-to-Trigger (TTT) values are used as features, several KPIs are used as
outputs, and several ML methods, including Gradient Boosting Regression (GBR),
Adaptive Boosting (AdaBoost), CatBoost Regression (CBR), Artificial Neural
Network (ANN), Kernel Ridge Regression (KRR), Support Vector Regression (SVR),
and k-Nearest Neighbor Regression (KNNR), are employed for the problem.
Finally, the cross-validation schemes are compared across these methods in
terms of mean absolute error (MAE) and mean squared error (MSE). According to
the results, the boosting methods (AdaBoost, CBR, and GBR) with the nested
cross-validation scheme clearly outperform their counterparts under the
conventional cross-validation scheme, while SVR, KNNR, KRR, and ANN with the
nested scheme produce promising results for predicting some KPIs relative to
their conventional-scheme counterparts. | [
"Ibrahim Yazici",
"Emre Gures"
] | 2023-10-02 09:24:10 | http://arxiv.org/abs/2310.01034v1 | http://arxiv.org/pdf/2310.01034v1 | 2310.01034v1 |
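The nested scheme used in this and the following abstract keeps hyper-parameter tuning (inner loop) strictly separate from model evaluation (outer loop), so the reported score never reflects data used for tuning. A generic scikit-learn sketch, with a stand-in estimator, grid, and synthetic HOM/TTT-like features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                              # stand-ins for HOM, TTT
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=200)  # stand-in KPI

# inner loop: hyper-parameter tuning; outer loop: generalization estimate
inner = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    cv=KFold(n_splits=5), scoring="neg_mean_absolute_error")
scores = cross_val_score(inner, X, y, cv=KFold(n_splits=5),
                         scoring="neg_mean_absolute_error")
print("nested-CV MAE:", -scores.mean())
```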
The Fisher-Rao geometry of CES distributions | When dealing with a parametric statistical model, a Riemannian manifold can
naturally appear by endowing the parameter space with the Fisher information
metric. The geometry induced on the parameters by this metric is then referred
to as the Fisher-Rao information geometry. Interestingly, this yields a point
of view that allows for leveraging many tools from differential geometry. After
a brief introduction about these concepts, we will present some practical uses
of these geometric tools in the framework of elliptical distributions. This
second part of the exposition is divided into three main axes: Riemannian
optimization for covariance matrix estimation, Intrinsic Cramér-Rao bounds,
and classification using Riemannian distances. | [
"Florent Bouchard",
"Arnaud Breloy",
"Antoine Collas",
"Alexandre Renaux",
"Guillaume Ginolhac"
] | 2023-10-02 09:23:32 | http://arxiv.org/abs/2310.01032v1 | http://arxiv.org/pdf/2310.01032v1 | 2310.01032v1 |
A Robust Machine Learning Approach for Path Loss Prediction in 5G Networks with Nested Cross Validation | The design and deployment of fifth-generation (5G) wireless networks pose
significant challenges due to the increasing number of wireless devices. Path
loss has a landmark importance in network performance optimization, and
accurate prediction of the path loss, which characterizes the attenuation of
signal power during transmission, is critical for effective network planning,
coverage estimation, and optimization. In this sense, we utilize machine
learning (ML) methods, which overcome the drawbacks of conventional path loss
prediction models, for path loss prediction in a 5G network system to
facilitate more accurate network planning, resource optimization, and
performance improvement in wireless communication systems. To this end, we
utilize a novel approach, a nested cross-validation scheme, with ML to prevent
overfitting, thereby achieving a better generalization error and stable
results for ML deployment. First, we
acquire a publicly available dataset obtained through a comprehensive
measurement campaign conducted in an urban macro-cell scenario located in
Beijing, China. The dataset includes crucial information such as longitude,
latitude, elevation, altitude, clutter height, and distance, which are utilized
as essential features to predict the path loss in the 5G network system. We
deploy Support Vector Regression (SVR), CatBoost Regression (CBR), eXtreme
Gradient Boosting Regression (XGBR), Artificial Neural Network (ANN), and
Random Forest (RF) methods to predict the path loss, and compare the prediction
results in terms of Mean Absolute Error (MAE) and Mean Square Error (MSE). As
per the obtained results, XGBR outperforms the other methods: it surpasses CBR
by slight margins of 0.4% and 1% in terms of MAE and MSE, respectively, and
outperforms the remaining methods by clear margins. | [
"Ibrahim Yazıcı",
"Emre Gures"
] | 2023-10-02 09:21:58 | http://arxiv.org/abs/2310.01030v1 | http://arxiv.org/pdf/2310.01030v1 | 2310.01030v1 |
Efficient Algorithms for the CCA Family: Unconstrained Objectives with Unbiased Gradients | The Canonical Correlation Analysis (CCA) family of methods is foundational in
multi-view learning. Regularised linear CCA methods can be seen to generalise
Partial Least Squares (PLS) and unified with a Generalized Eigenvalue Problem
(GEP) framework. However, classical algorithms for these linear methods are
computationally infeasible for large-scale data. Extensions to Deep CCA show
great promise, but current training procedures are slow and complicated. First
we propose a novel unconstrained objective that characterizes the top subspace
of GEPs. Our core contribution is a family of fast algorithms for stochastic
PLS, stochastic CCA, and Deep CCA, simply obtained by applying stochastic
gradient descent (SGD) to the corresponding CCA objectives. These methods show
far faster convergence and recover higher correlations than the previous
state-of-the-art on all standard CCA and Deep CCA benchmarks. This speed allows
us to perform a first-of-its-kind PLS analysis of an extremely large biomedical
dataset from the UK Biobank, with over 33,000 individuals and 500,000 variants.
Finally, we not only match the performance of 'CCA-family' Self-Supervised
Learning (SSL) methods on CIFAR-10 and CIFAR-100 with minimal hyper-parameter
tuning, but also establish the first solid theoretical links to classical CCA,
laying the groundwork for future insights. | [
"James Chapman",
"Ana Lawry Aguila",
"Lennie Wells"
] | 2023-10-02 09:03:59 | http://arxiv.org/abs/2310.01012v1 | http://arxiv.org/pdf/2310.01012v1 | 2310.01012v1 |
Conflict-Aware Active Automata Learning | Active automata learning algorithms cannot easily handle conflict in the
observation data (different outputs observed for the same inputs). This
inherent inability to recover after a conflict impairs their effective
applicability in scenarios where noise is present or the system under learning
is mutating. We propose the Conflict-Aware Active Automata Learning (C3AL)
framework to enable handling conflicting information during the learning
process. The core idea is to consider the so-called observation tree as a
first-class citizen in the learning process. Though this idea is explored in
recent work, we take it to its full effect by enabling its use with any
existing learner and minimizing the number of tests performed on the system
under learning, especially in the face of conflicts. We evaluate C3AL in a large
set of benchmarks, covering over 30 different realistic targets, and over
18,000 different scenarios. The results of the evaluation show that C3AL is a
suitable alternative framework for closed-box learning that can better handle
noise and mutations. | [
"Tiago Ferreira",
"Léo Henry",
"Raquel Fernandes da Silva",
"Alexandra Silva"
] | 2023-10-02 09:00:48 | http://arxiv.org/abs/2310.01003v1 | http://arxiv.org/pdf/2310.01003v1 | 2310.01003v1 |
A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression | Existing statistical learning guarantees for general kernel regressors often
yield loose bounds when used with finite-rank kernels. Yet, finite-rank kernels
naturally appear in several machine learning problems, e.g.\ when fine-tuning a
pre-trained deep neural network's last layer to adapt it to a novel task when
performing transfer learning. We address this gap for finite-rank kernel ridge
regression (KRR) by deriving sharp non-asymptotic upper and lower bounds for
the KRR test error of any finite-rank KRR. Our bounds are tighter than
previously derived bounds on finite-rank KRR, and unlike comparable results,
they also remain valid for any regularization parameters. | [
"Tin Sum Cheng",
"Aurelien Lucchi",
"Ivan Dokmanić",
"Anastasis Kratsios",
"David Belius"
] | 2023-10-02 08:52:29 | http://arxiv.org/abs/2310.00987v2 | http://arxiv.org/pdf/2310.00987v2 | 2310.00987v2 |
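A finite-rank kernel K(x, x') = phi(x)^T phi(x'), e.g. arising from a frozen network's last-layer features, makes KRR equivalent to ridge regression in the k-dimensional feature space; that equivalence is the object the bounds concern. A numpy sketch with an assumed random feature map.

```python
import numpy as np

def finite_rank_krr(phi_train, y, phi_test, lam=1e-2):
    """KRR with a rank-k kernel: solve ridge regression on the k features."""
    k = phi_train.shape[1]
    w = np.linalg.solve(phi_train.T @ phi_train + lam * np.eye(k),
                        phi_train.T @ y)
    return phi_test @ w

rng = np.random.default_rng(0)
phi = lambda X: np.tanh(X @ rng.normal(size=(3, 8)))   # frozen 8-feature map
Xtr, Xte = rng.normal(size=(100, 3)), rng.normal(size=(20, 3))
ytr = np.sin(Xtr[:, 0])
pred = finite_rank_krr(phi(Xtr), ytr, phi(Xte))        # test predictions
```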
Using Reinforcement Learning to Optimize Responses in Care Processes: A Case Study on Aggression Incidents | Previous studies have used prescriptive process monitoring to find actionable
policies in business processes and conducted case studies in similar domains,
such as the loan application process and the traffic fine process. However,
care processes tend to be more dynamic and complex. For example, at any stage
of a care process, a multitude of actions is possible. In this paper, we follow
the reinforcement learning approach and train a Markov decision process using event data
from a care process. The goal was to find optimal policies for staff members
when clients are displaying any type of aggressive behavior. We used the
reinforcement learning algorithms Q-learning and SARSA to find optimal
policies. Results showed that the policies derived from these algorithms are
similar to the most frequent actions currently used but provide the staff
members with a few more options in certain situations. | [
"Bart J. Verhoef",
"Xixi Lu"
] | 2023-10-02 08:43:29 | http://arxiv.org/abs/2310.00981v1 | http://arxiv.org/pdf/2310.00981v1 | 2310.00981v1 |
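For concreteness, the tabular Q-learning update at the heart of the study looks as follows, with states and actions assumed to be integer-coded aggression types and staff responses; SARSA differs only in bootstrapping from the action actually taken next rather than the max.

```python
import numpy as np

n_states, n_actions = 5, 8            # e.g. aggression types x staff responses
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1     # step size, discount, exploration rate
rng = np.random.default_rng(0)

def q_update(s, a, r, s_next):
    """Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a').

    SARSA would replace Q[s_next].max() with Q[s_next, a_next], the value of
    the action actually taken next under the behaviour policy.
    """
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# one epsilon-greedy step on a placeholder transition from the event log
s = 2
a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
q_update(s, a, r=1.0, s_next=0)
```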
Variance-Aware Regret Bounds for Stochastic Contextual Dueling Bandits | Dueling bandits is a prominent framework for decision-making involving
preferential feedback, a valuable feature that fits various applications
involving human interaction, such as ranking, information retrieval, and
recommendation systems. While substantial efforts have been made to minimize
the cumulative regret in dueling bandits, a notable gap in the current research
is the absence of regret bounds that account for the inherent uncertainty in
pairwise comparisons between the dueling arms. Intuitively, greater uncertainty
suggests a higher level of difficulty in the problem. To bridge this gap, this
paper studies the problem of contextual dueling bandits, where the binary
comparison of dueling arms is generated from a generalized linear model (GLM).
We propose a new SupLinUCB-type algorithm that enjoys computational efficiency
and a variance-aware regret bound $\tilde O\big(d\sqrt{\sum_{t=1}^T\sigma_t^2}
+ d\big)$, where $\sigma_t$ is the variance of the pairwise comparison in round
$t$, $d$ is the dimension of the context vectors, and $T$ is the time horizon.
Our regret bound naturally aligns with the intuitive expectation that in
scenarios where the comparison is deterministic, the algorithm only suffers an
$\tilde O(d)$ regret. We perform empirical experiments on synthetic data to
confirm the advantage of our method over previous variance-agnostic algorithms. | [
"Qiwei Di",
"Tao Jin",
"Yue Wu",
"Heyang Zhao",
"Farzad Farnoud",
"Quanquan Gu"
] | 2023-10-02 08:15:52 | http://arxiv.org/abs/2310.00968v1 | http://arxiv.org/pdf/2310.00968v1 | 2310.00968v1 |
MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training | Gradient sparsification is a communication optimisation technique for scaling
and accelerating distributed deep neural network (DNN) training. It reduces the
increasing communication traffic for gradient aggregation. However, existing
sparsifiers have poor scalability because of the high computational cost of
gradient selection and/or increase in communication traffic. In particular, an
increase in communication traffic is caused by gradient build-up and an
inappropriate threshold for gradient selection.
To address these challenges, we propose a novel gradient sparsification
method called MiCRO. In MiCRO, the gradient vector is partitioned, and each
partition is assigned to the corresponding worker. Each worker then selects
gradients from its partition, and the aggregated gradients are free from
gradient build-up. Moreover, MiCRO estimates an accurate threshold that keeps
the communication traffic at the user-required level by minimising the compression
ratio error. MiCRO enables near-zero cost gradient sparsification by solving
existing problems that hinder the scalability and acceleration of distributed
DNN training. In our extensive experiments, MiCRO outperformed state-of-the-art
sparsifiers with an outstanding convergence rate. | [
"Daegun Yoon",
"Sangyoon Oh"
] | 2023-10-02 08:15:35 | http://arxiv.org/abs/2310.00967v1 | http://arxiv.org/pdf/2310.00967v1 | 2310.00967v1 |
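The partitioning idea can be sketched directly: each worker thresholds only its own slice of the gradient vector, so the selected index sets are disjoint by construction and aggregation is free of build-up. The quantile-based threshold below is a simple stand-in for the paper's compression-ratio-error minimization.

```python
import numpy as np

def micro_select(grad, worker, n_workers, threshold):
    """Each worker sparsifies only its own partition of the gradient, so the
    selected sets never overlap and aggregation cannot build up duplicates."""
    part = np.array_split(np.arange(len(grad)), n_workers)[worker]
    idx = part[np.abs(grad[part]) > threshold]    # local threshold selection
    return idx, grad[idx]

rng = np.random.default_rng(0)
grad = rng.normal(size=10_000)
target_ratio = 0.01                               # user-set compression ratio
threshold = np.quantile(np.abs(grad), 1 - target_ratio)   # assumed estimator
sent = [micro_select(grad, w, n_workers=4, threshold=threshold) for w in range(4)]
print(sum(len(i) for i, _ in sent) / len(grad))   # ~= target_ratio
```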
Effective Learning with Node Perturbation in Deep Neural Networks | Backpropagation (BP) is the dominant and most successful method for training
parameters of deep neural network models. However, BP relies on two
computationally distinct phases, does not provide a satisfactory explanation of
biological learning, and can be challenging to apply for training of networks
with discontinuities or noisy node dynamics. By comparison, node perturbation
(NP) proposes learning by the injection of noise into the network activations,
and subsequent measurement of the induced loss change. NP relies on two forward
(inference) passes, does not make use of network derivatives, and has been
proposed as a model for learning in biological systems. However, standard NP is
highly data inefficient and unstable due to its unguided, noise-based, activity
search. In this work, we investigate different formulations of NP and relate it
to the concept of directional derivatives as well as combining it with a
decorrelating mechanism for layer-wise inputs. We find that a closer alignment
with directional derivatives, and induction of decorrelation of inputs at every
layer significantly enhances performance of NP learning making it competitive
with BP. | [
"Sander Dalm",
"Marcel van Gerven",
"Nasir Ahmad"
] | 2023-10-02 08:12:51 | http://arxiv.org/abs/2310.00965v1 | http://arxiv.org/pdf/2310.00965v1 | 2310.00965v1 |
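The baseline NP rule that the paper builds on can be written in a few lines: run a clean and a noise-injected forward pass, measure the induced loss change, and reinforce the noise direction scaled by that change. A single-layer sketch; the paper's decorrelation mechanism and directional-derivative alignment are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(3, 5))           # single linear layer
x = rng.normal(size=5)
y = np.array([1.0, -1.0, 0.5])
loss = lambda out: 0.5 * np.sum((out - y) ** 2)

lr, sigma = 0.01, 0.01
for _ in range(200):
    clean = W @ x                            # first (clean) inference pass
    noise = sigma * rng.normal(size=clean.shape)
    noisy = clean + noise                    # second, perturbed inference pass
    dL = loss(noisy) - loss(clean)           # induced loss change
    # (dL / sigma^2) * noise estimates the activation gradient; the outer
    # product with the input credits the weights, no derivatives needed
    W -= lr * (dL / sigma**2) * np.outer(noise, x)

print(loss(W @ x))                           # should be near zero
```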
All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization | In a competitive game scenario, a set of agents have to learn decisions that
maximize their goals and minimize their adversaries' goals at the same time.
Besides dealing with the increased dynamics of the scenarios due to the
opponents' actions, they usually have to understand how to overcome the
opponent's strategies. Most of the common solutions, usually based on continual
learning or centralized multi-agent experiences, however, do not allow the
development of personalized strategies to face individual opponents. In this
paper, we propose a novel model composed of three neural layers that learn a
representation of a competitive game, learn how to map the strategy of specific
opponents, and how to disrupt them. The entire model is trained online, using a
composed loss based on a contrastive optimization, to learn competitive and
multiplayer games. We evaluate our model on a Pokemon duel scenario and the
four-player competitive Chef's Hat card game. Our experiments demonstrate that
our model achieves better performance when playing against offline, online, and
competitive-specific models, in particular when playing against the same
opponent multiple times. We also present a discussion on the impact of our
model, in particular on how well it deals with specific strategy learning
for each of the two scenarios. | [
"Pablo Barros",
"Alessandra Sciutti"
] | 2023-10-02 08:11:07 | http://arxiv.org/abs/2310.00964v1 | http://arxiv.org/pdf/2310.00964v1 | 2310.00964v1 |
Multi-Agent Bayesian Optimization with Coupled Black-Box and Affine Constraints | This paper studies the problem of distributed multi-agent Bayesian
optimization with both coupled black-box constraints and known affine
constraints. A primal-dual distributed algorithm is proposed that achieves
similar regret/violation bounds as those in the single-agent case for the
black-box objective and constraint functions. Additionally, the algorithm
guarantees an $\mathcal{O}(N\sqrt{T})$ bound on the cumulative violation for
the known affine constraints, where $N$ is the number of agents. Hence, it is
ensured that the average of the samples satisfies the affine constraints up to
the error $\mathcal{O}({N}/{\sqrt{T}})$. Furthermore, we characterize certain
conditions under which our algorithm can bound a stronger metric of cumulative
violation and provide best-iterate convergence without affine constraint. The
method is then applied to both sampled instances from Gaussian processes and a
real-world optimal power allocation problem for wireless communication; the
results show that our method simultaneously provides close-to-optimal
performance and maintains minor violations on average, corroborating our
theoretical analysis. | [
"Wenjie Xu",
"Yuning Jiang",
"Bratislav Svetozarevic",
"Colin N. Jones"
] | 2023-10-02 08:07:36 | http://arxiv.org/abs/2310.00962v1 | http://arxiv.org/pdf/2310.00962v1 | 2310.00962v1 |
Deep Learning in Computational Biology: Advancements, Challenges, and Future Outlook | Deep learning has become a powerful tool in computational biology,
revolutionising the analysis and interpretation of biological data over time.
In our article review, we delve into various aspects of deep learning in
computational biology. Specifically, we examine its history, advantages, and
challenges. Our focus is on two primary applications: DNA sequence
classification and prediction, as well as protein structure prediction from
sequence data. Additionally, we provide insights into the outlook for this
field. To fully harness the potential of deep learning in computational
biology, it is crucial to address the challenges that come with it. These
challenges include the requirement for large, labelled datasets and the
interpretability of deep learning models. The use of deep learning in the
analysis of DNA sequences has brought about a significant transformation in the
detection of genomic variants and the analysis of gene expression. This has
greatly contributed to the advancement of personalised medicine and drug
discovery. Convolutional neural networks (CNNs) have been shown to be highly
accurate in predicting genetic variations and gene expression levels. Deep
learning techniques are used for analysing epigenetic data, including DNA
methylation and histone modifications. This provides valuable insights into
metabolic conditions and gene regulation. The field of protein structure
prediction has been significantly impacted by deep learning, which has enabled
accurate determination of the three-dimensional shape of proteins and
prediction of their interactions. The future of deep learning in computational
biology looks promising. With the development of advanced deep learning models
and interpretation techniques, there is potential to overcome current
challenges and further our understanding of biological systems. | [
"Suresh Kumar",
"Dhanyashri Guruparan",
"Pavithren Aaron",
"Philemon Telajan",
"Kavinesh Mahadevan",
"Dinesh Davagandhi",
"Ong Xin Yue"
] | 2023-10-02 07:53:05 | http://arxiv.org/abs/2310.03086v1 | http://arxiv.org/pdf/2310.03086v1 | 2310.03086v1 |
A Novel IoT Trust Model Leveraging Fully Distributed Behavioral Fingerprinting and Secure Delegation | With the number of connected smart devices expected to constantly grow in the
next years, Internet of Things (IoT) solutions are experiencing booming
demand, as they make data collection and processing easier. The ability of IoT
appliances to provide pervasive and better support to everyday tasks, in most
cases transparently to humans, is also achieved through the high degree of
autonomy of such devices. However, the higher the number of new capabilities
and services provided in an autonomous way, the wider the attack surface that
exposes users to data hacking and loss. In this scenario, many critical
challenges arise also because IoT devices have heterogeneous computational
capabilities (i.e., in the same network there might be simple sensors/actuators
as well as more complex and smart nodes). In this paper, we try to provide a
contribution in this setting, tackling the non-trivial issues of equipping
smart things with a strategy to evaluate, also through their neighbors, the
trustworthiness of an object in the network before interacting with it. To do
so, we design a novel and fully distributed trust model exploiting devices'
behavioral fingerprints, a distributed consensus mechanism and the Blockchain
technology. Beyond the detailed description of our framework, we also
illustrate the security model associated with it and the tests carried out to
evaluate its correctness and performance. | [
"Marco Arazzi",
"Serena Nicolazzo",
"Antonino Nocera"
] | 2023-10-02 07:45:49 | http://arxiv.org/abs/2310.00953v1 | http://arxiv.org/pdf/2310.00953v1 | 2310.00953v1 |
Distilling Influences to Mitigate Prediction Churn in Graph Neural Networks | Models with similar performance can exhibit significant disagreement in
their predictions on individual samples, referred to as prediction churn. Our
work explores this phenomenon in graph neural networks by investigating how
models that differ only in their initializations diverge in the features they
use for predictions. We propose a novel metric called Influence Difference
(ID) to quantify the variation in reasons used by nodes across models by
comparing their influence distributions. Additionally, we compare nodes with
stable and unstable predictions, positing that both draw on differing reasons
to a similar extent and thus provide a meaningful gradient signal for closely
matching two models even when their predictions for a node agree. Based on our
analysis, we propose to minimize this ID in Knowledge
Distillation, a domain where a new model should closely match an established
one. As an efficient approximation, we introduce DropDistillation (DD) that
matches the output for a graph perturbed by edge deletions. Our empirical
evaluation of six benchmark datasets for node classification validates the
differences in utilized features. DD outperforms previous methods regarding
prediction stability and overall performance in all considered Knowledge
Distillation experiments. | [
"Andreas Roth",
"Thomas Liebig"
] | 2023-10-02 07:37:28 | http://arxiv.org/abs/2310.00946v1 | http://arxiv.org/pdf/2310.00946v1 | 2310.00946v1 |
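The DropDistillation idea described above, matching teacher and student outputs on a graph perturbed by edge deletions, can be sketched with a toy dense-adjacency GCN in PyTorch. The TinyGCN layer, drop rate, and random graph below are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn.functional as F

def drop_edges(adj: torch.Tensor, p: float) -> torch.Tensor:
    """Randomly delete a fraction p of the edges of a symmetric adjacency."""
    mask = (torch.rand_like(adj) > p).float()
    mask = torch.triu(mask, 1)
    mask = mask + mask.T              # keep the perturbation symmetric
    return adj * mask

class TinyGCN(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)
    def forward(self, x, adj):
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        return self.lin(adj @ x / deg)  # mean aggregation over neighbors

# teacher stands in for a pretrained model (randomly initialized here for brevity)
x = torch.randn(100, 16)
adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.T) > 0).float()
teacher, student = TinyGCN(16, 7), TinyGCN(16, 7)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

for step in range(200):
    adj_drop = drop_edges(adj, p=0.2)   # same perturbed graph fed to both models
    with torch.no_grad():
        t_logits = teacher(x, adj_drop)
    s_logits = student(x, adj_drop)
    loss = F.kl_div(F.log_softmax(s_logits, -1),
                    F.softmax(t_logits, -1), reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
```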
Towards Robust 3D Object Detection In Rainy Conditions | LiDAR sensors are used in autonomous driving applications to accurately
perceive the environment. However, they are affected by adverse weather
conditions such as snow, fog, and rain. These everyday phenomena introduce
unwanted noise into the measurements, severely degrading the performance of
LiDAR-based perception systems. In this work, we propose a framework for
improving the robustness of LiDAR-based 3D object detectors against road spray.
Our approach uses a state-of-the-art adverse weather detection network to
filter out spray from the LiDAR point cloud, which is then used as input for
the object detector. In this way, the detected objects are less affected by the
adverse weather in the scene, resulting in a more accurate perception of the
environment. In addition to adverse weather filtering, we explore the use of
radar targets to further filter false positive detections. Tests on real-world
data show that our approach improves the robustness to road spray of several
popular 3D object detectors. | [
"Aldi Piroli",
"Vinzenz Dallabetta",
"Johannes Kopp",
"Marc Walessa",
"Daniel Meissner",
"Klaus Dietmayer"
] | 2023-10-02 07:34:15 | http://arxiv.org/abs/2310.00944v2 | http://arxiv.org/pdf/2310.00944v2 | 2310.00944v2 |
Improved Variational Bayesian Phylogenetic Inference using Mixtures | We present VBPI-Mixtures, an algorithm designed to enhance the accuracy of
phylogenetic posterior distributions, particularly for tree-topology and
branch-length approximations. Although Variational Bayesian Phylogenetic
Inference (VBPI), a leading-edge black-box variational inference (BBVI)
framework, achieves remarkable approximations of these distributions, the
multimodality of the tree-topology posterior presents a formidable challenge to
sampling-based learning techniques such as BBVI. Advanced deep learning
methodologies such as normalizing flows and graph neural networks have been
explored to refine the branch-length posterior approximation, yet efforts to
ameliorate the posterior approximation over tree topologies have been lacking.
Our novel VBPI-Mixtures algorithm bridges this gap by harnessing the latest
breakthroughs in mixture learning within the BBVI domain. As a result,
VBPI-Mixtures is capable of capturing distributions over tree-topologies that
VBPI fails to model. We deliver state-of-the-art performance on difficult
density estimation tasks across numerous real phylogenetic datasets. | [
"Oskar Kviman",
"Ricky Molén",
"Jens Lagergren"
] | 2023-10-02 07:18:48 | http://arxiv.org/abs/2310.00941v1 | http://arxiv.org/pdf/2310.00941v1 | 2310.00941v1 |
Data Efficient Training of a U-Net Based Architecture for Structured Documents Localization | Structured document analysis and recognition are essential for modern online
on-boarding processes, and document localization is a crucial step to achieve
reliable key information extraction. While deep-learning has become the
standard technique used to solve document analysis problems, real-world
applications in industry still face the limited availability of labelled data
and of computational resources when training or fine-tuning deep-learning
models. To tackle these challenges, we propose SDL-Net: a novel U-Net like
encoder-decoder architecture for the localization of structured documents. Our
approach allows pre-training the encoder of SDL-Net on a generic dataset
containing samples of various document classes, and enables fast and
data-efficient fine-tuning of decoders to support the localization of new
document classes. We conduct extensive experiments on a proprietary dataset of
structured document images to demonstrate the effectiveness and the
generalization capabilities of the proposed approach. | [
"Anastasiia Kabeshova",
"Guillaume Betmont",
"Julien Lerouge",
"Evgeny Stepankevich",
"Alexis Bergès"
] | 2023-10-02 07:05:19 | http://arxiv.org/abs/2310.00937v1 | http://arxiv.org/pdf/2310.00937v1 | 2310.00937v1 |
Understanding Transferable Representation Learning and Zero-shot Transfer in CLIP | Multi-modal learning has become increasingly popular due to its ability to
leverage information from different data sources (e.g., text and images) to
improve the model performance. Recently, CLIP has emerged as an effective
approach that employs vision-language contrastive pretraining to learn joint
image and text representations and exhibits remarkable performance in zero-shot
learning and text-guided natural image generation. Despite the huge practical
success of CLIP, its theoretical understanding remains elusive. In this paper,
we formally study transferable representation learning underlying CLIP and
demonstrate how features from different modalities get aligned. We also analyze
its zero-shot transfer performance on the downstream tasks. Inspired by our
analysis, we propose a new CLIP-type approach, which achieves better
performance than CLIP and other state-of-the-art methods on benchmark datasets. | [
"Zixiang Chen",
"Yihe Deng",
"Yuanzhi Li",
"Quanquan Gu"
] | 2023-10-02 06:41:30 | http://arxiv.org/abs/2310.00927v1 | http://arxiv.org/pdf/2310.00927v1 | 2310.00927v1 |
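For reference, the vision-language contrastive pretraining objective the abstract refers to is typically the symmetric InfoNCE loss sketched below; this is a standard CLIP-style loss, not the paper's proposed variant, and the embedding sizes are arbitrary:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over matched image/text pairs in a batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / temperature       # (B, B) similarity matrix
    labels = torch.arange(len(img))          # i-th image matches i-th text
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

# toy usage with random tensors standing in for encoder outputs
loss = clip_contrastive_loss(torch.randn(32, 512), torch.randn(32, 512))
print(loss.item())
```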
Integration of Graph Neural Network and Neural-ODEs for Tumor Dynamic Prediction | In anti-cancer drug development, a major scientific challenge is
disentangling the complex relationships between high-dimensional genomics data
from patient tumor samples, the corresponding tumor's organ of origin, the drug
targets associated with given treatments and the resulting treatment response.
Furthermore, to realize the aspirations of precision medicine in identifying
and adjusting treatments for patients depending on the therapeutic response,
there is a need for building tumor dynamic models that can integrate both
longitudinal tumor size as well as multimodal, high-content data. In this work,
we take a step towards enhancing personalized tumor dynamic predictions by
proposing a heterogeneous graph encoder that utilizes a bipartite Graph
Convolutional Neural network (GCN) combined with Neural Ordinary Differential
Equations (Neural-ODEs). We applied the methodology to a large collection of
patient-derived xenograft (PDX) data, spanning a wide variety of treatments (as
well as their combinations) on tumors that originated from a number of
different organs. We first show that the methodology is able to discover a
tumor dynamic model that significantly improves upon an empirical model which
is in current use. Additionally, we show that the graph encoder is able to
effectively utilize multimodal data to enhance tumor predictions. Our findings
indicate that the methodology holds significant promise and offers potential
applications in pre-clinical settings. | [
"Omid Bazgir",
"Zichen Wang",
"Marc Hafner",
"James Lu"
] | 2023-10-02 06:39:08 | http://arxiv.org/abs/2310.00926v1 | http://arxiv.org/pdf/2310.00926v1 | 2310.00926v1 |
BAAF: A Benchmark Attention Adaptive Framework for Medical Ultrasound Image Segmentation Tasks | AI-based assisted diagnosis programs have been widely investigated on
medical ultrasound images. The complex scenarios of ultrasound images, in
which the coupled interference of internal and external factors is severe,
pose a unique challenge for localizing object regions automatically and
precisely. In this study, we propose a more general and robust Benchmark
Attention Adaptive Framework (BAAF) to assist doctors in segmenting and
diagnosing lesions and tissues in ultrasound images more quickly and accurately.
Different from existing attention schemes, the BAAF consists of a parallel
hybrid attention module (PHAM) and an adaptive calibration mechanism (ACM).
Specifically, BAAF first coarsely calibrates the input features from the
channel and spatial dimensions, and then adaptively selects more robust lesion
or tissue characterizations from the coarse-calibrated feature maps. The design
of BAAF further optimizes the "what" and "where" focus and selection problems
in CNNs and seeks to improve the segmentation accuracy of lesions or tissues in
medical ultrasound images. The method is evaluated on four medical ultrasound
segmentation tasks, and extensive experimental results demonstrate remarkable
performance improvements over existing state-of-the-art methods. In
addition, the comparison with existing attention mechanisms also demonstrates
the superiority of BAAF. This work provides the possibility for automated
medical ultrasound assisted diagnosis and reduces reliance on human accuracy
and precision. | [
"Gongping Chen",
"Lei Zhao",
"Xiaotao Yin",
"Liang Cui",
"Jianxun Zhang",
"Yu Dai"
] | 2023-10-02 06:15:50 | http://arxiv.org/abs/2310.00919v1 | http://arxiv.org/pdf/2310.00919v1 | 2310.00919v1 |
DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models | Quantifying the impact of training data points is crucial for understanding
the outputs of machine learning models and for improving the transparency of
the AI pipeline. The influence function is a principled and popular data
attribution method, but its computational cost often makes it challenging to
use. This issue becomes more pronounced in the setting of large language models
and text-to-image models. In this work, we propose DataInf, an efficient
influence approximation method that is practical for large-scale generative AI
models. Leveraging an easy-to-compute closed-form expression, DataInf
outperforms existing influence computation algorithms in terms of computational
and memory efficiency. Our theoretical analysis shows that DataInf is
particularly well-suited for parameter-efficient fine-tuning techniques such as
LoRA. Through systematic empirical evaluations, we show that DataInf accurately
approximates influence scores and is orders of magnitude faster than existing
methods. In applications to RoBERTa-large, Llama-2-13B-chat, and
stable-diffusion-v1.5 models, DataInf effectively identifies the most
influential fine-tuning examples better than other approximate influence
scores. Moreover, it can help to identify which data points are mislabeled. | [
"Yongchan Kwon",
"Eric Wu",
"Kevin Wu",
"James Zou"
] | 2023-10-02 04:59:19 | http://arxiv.org/abs/2310.00902v1 | http://arxiv.org/pdf/2310.00902v1 | 2310.00902v1 |
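DataInf's exact closed-form expression is not reproduced here; the sketch below shows the simpler first-order influence proxy (gradient alignment between a training example and the test loss) that such methods refine with Hessian-inverse approximations. The model, loss, and batch dictionaries with "x"/"y" keys are assumptions for illustration:

```python
import torch

def first_order_influence(model, loss_fn, train_batch, test_batch):
    """Score each training example by how well its gradient aligns with the
    test-loss gradient -- a first-order proxy for influence (not DataInf's
    exact closed form, which also approximates a damped Hessian inverse)."""
    params = [p for p in model.parameters() if p.requires_grad]
    test_loss = loss_fn(model(test_batch["x"]), test_batch["y"])
    g_test = torch.autograd.grad(test_loss, params)
    scores = []
    for x, y in zip(train_batch["x"], train_batch["y"]):
        loss_i = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        g_i = torch.autograd.grad(loss_i, params)
        scores.append(sum((a * b).sum() for a, b in zip(g_i, g_test)).item())
    return scores  # sign conventions for "helpful" vs "harmful" vary by paper

# toy usage
model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.CrossEntropyLoss()
train = {"x": torch.randn(8, 4), "y": torch.randint(0, 2, (8,))}
test = {"x": torch.randn(4, 4), "y": torch.randint(0, 2, (4,))}
print(first_order_influence(model, loss_fn, train, test))
```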
Expert enhanced dynamic time warping based anomaly detection | Dynamic time warping (DTW) is a well-known algorithm for time series elastic
dissimilarity measure. Its ability to deal with non-linear time distortions
makes it helpful in a variety of data mining tasks. One such task is anomaly
detection, which attempts to reveal unexpected behaviour without raising false
alarms. In this paper, we propose a novel anomaly detection method named Expert
enhanced dynamic time warping anomaly detection (E-DTWA). It is based on DTW
with additional enhancements involving a human-in-the-loop concept. The main
benefits of our approach comprise efficient detection, flexible retraining
based on strong consideration of the expert's detection feedback while
retaining low computational and space complexity. | [
"Matej Kloska",
"Gabriela Grmanova",
"Viera Rozinajova"
] | 2023-10-02 04:54:04 | http://arxiv.org/abs/2310.02280v1 | http://arxiv.org/pdf/2310.02280v1 | 2310.02280v1 |
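A minimal sketch of the DTW building block and a nearest-reference anomaly test follows; the E-DTWA enhancements (expert feedback, retraining) are not reproduced, and the threshold is an assumed tuning parameter:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def is_anomaly(window, references, threshold):
    """Flag a window as anomalous if it is far (under DTW) from every reference."""
    return min(dtw_distance(window, r) for r in references) > threshold

refs = [np.sin(np.linspace(0, 6.28, 50))]
print(is_anomaly(np.cos(np.linspace(0, 6.28, 50)), refs, threshold=5.0))
```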
Organized Event Participant Prediction Enhanced by Social Media Retweeting Data | Nowadays, many platforms on the Web offer organized events, allowing users to
be organizers or participants. For such platforms, it is beneficial to predict
potential event participants. Existing work on this problem tends to borrow
recommendation techniques. However, compared to e-commerce items and purchases,
events and participations occur with much lower frequency, and the data
may be insufficient to learn an accurate model. In this paper, we propose to
utilize social media retweeting activity data to enhance the learning of event
participant prediction models. We create a joint knowledge graph to bridge the
social media and the target domain, assuming that event descriptions and tweets
are written in the same language. Furthermore, we propose a learning model that
utilizes retweeting information for the target domain prediction more
effectively. We conduct comprehensive experiments in two scenarios with
real-world data. In each scenario, we set up training data of different sizes,
as well as warm and cold test cases. The evaluation results show that our
approach consistently outperforms several baseline models, especially with the
warm test cases, and when target domain data is limited. | [
"Yihong Zhang",
"Takahiro Hara"
] | 2023-10-02 04:26:07 | http://arxiv.org/abs/2310.00896v1 | http://arxiv.org/pdf/2310.00896v1 | 2310.00896v1 |
Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss | Supervised-contrastive loss (SCL) is an alternative to cross-entropy (CE) for
classification tasks that makes use of similarities in the embedding space to
allow for richer representations. In this work, we propose methods to engineer
the geometry of these learnt feature embeddings by modifying the contrastive
loss. In pursuit of adjusting the geometry we explore the impact of prototypes,
fixed embeddings included during training to alter the final feature geometry.
Specifically, through empirical findings, we demonstrate that the inclusion of
prototypes in every batch induces the geometry of the learnt embeddings to
align with that of the prototypes. We gain further insights by considering a
limiting scenario where the number of prototypes far exceeds the original
batch size. Through this, we establish a connection to cross-entropy (CE) loss
with a fixed classifier and normalized embeddings. We validate our findings by
conducting a series of experiments with deep neural networks on benchmark
vision datasets. | [
"Jaidev Gill",
"Vala Vakilian",
"Christos Thrampoulidis"
] | 2023-10-02 04:23:17 | http://arxiv.org/abs/2310.00893v1 | http://arxiv.org/pdf/2310.00893v1 | 2310.00893v1 |
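A sketch of the mechanism the abstract describes: fixed prototype embeddings are appended to every batch before computing the supervised-contrastive loss, so the learnt feature geometry is pulled toward the prototypes. The random prototypes below are placeholders; the paper's specific target geometry is not assumed:

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, y, temperature=0.1):
    """Supervised contrastive loss over embeddings z with labels y."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, -1e9)            # exclude self-pairs
    pos = (y[:, None] == y[None, :]).float().masked_fill(self_mask, 0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

# fixed prototypes: one frozen embedding per class, included in every batch
num_classes, dim = 10, 128
prototypes = F.normalize(torch.randn(num_classes, dim), dim=-1)  # never updated
proto_labels = torch.arange(num_classes)

z_batch = torch.randn(64, dim)                        # encoder outputs (toy)
y_batch = torch.randint(0, num_classes, (64,))
loss = supcon_loss(torch.cat([z_batch, prototypes]),
                   torch.cat([y_batch, proto_labels]))
print(loss.item())
```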
GRID: A Platform for General Robot Intelligence Development | Developing machine intelligence abilities in robots and autonomous systems is
an expensive and time-consuming process. Existing solutions are tailored to
specific applications and are difficult to generalize. Furthermore, the
scarcity of training data adds a layer of complexity to deploying deep machine learning
models. We present a new platform for General Robot Intelligence Development
(GRID) to address both of these issues. The platform enables robots to learn,
compose and adapt skills to their physical capabilities, environmental
constraints and goals. The platform addresses AI problems in robotics via
foundation models that know the physical world. GRID is designed from the
ground up to be extensible to accommodate new types of robots, vehicles,
hardware platforms and software protocols. In addition, the modular design
enables various deep ML components and existing foundation models to be easily
usable in a wider variety of robot-centric problems. We showcase the platform
in various aerial robotics scenarios and demonstrate how it dramatically
accelerates the development of machine intelligent robots. | [
"Sai Vemprala",
"Shuhang Chen",
"Abhinav Shukla",
"Dinesh Narayanan",
"Ashish Kapoor"
] | 2023-10-02 04:09:27 | http://arxiv.org/abs/2310.00887v2 | http://arxiv.org/pdf/2310.00887v2 | 2310.00887v2 |
Deep Neural Networks Tend To Extrapolate Predictably | Conventional wisdom suggests that neural network predictions tend to be
unpredictable and overconfident when faced with out-of-distribution (OOD)
inputs. Our work reassesses this assumption for neural networks with
high-dimensional inputs. Rather than extrapolating in arbitrary ways, we
observe that neural network predictions often tend towards a constant value as
input data becomes increasingly OOD. Moreover, we find that this value often
closely approximates the optimal constant solution (OCS), i.e., the prediction
that minimizes the average loss over the training data without observing the
input. We present results showing this phenomenon across 8 datasets with
different distributional shifts (including CIFAR10-C and ImageNet-R, S),
different loss functions (cross entropy, MSE, and Gaussian NLL), and different
architectures (CNNs and transformers). Furthermore, we present an explanation
for this behavior, which we first validate empirically and then study
theoretically in a simplified setting involving deep homogeneous networks with
ReLU activations. Finally, we show how one can leverage our insights in
practice to enable risk-sensitive decision-making in the presence of OOD
inputs. | [
"Katie Kang",
"Amrith Setlur",
"Claire Tomlin",
"Sergey Levine"
] | 2023-10-02 03:25:32 | http://arxiv.org/abs/2310.00873v1 | http://arxiv.org/pdf/2310.00873v1 | 2310.00873v1 |
COMPOSER: Scalable and Robust Modular Policies for Snake Robots | Snake robots have showcased remarkable compliance and adaptability in their
interaction with environments, mirroring the traits of their natural
counterparts. While their hyper-redundant and high-dimensional characteristics
add to this adaptability, they also pose great challenges to robot control.
Rather than perceiving the hyper-redundancy and flexibility of snake robots as
mere challenges, we see unexplored potential in leveraging these traits to
enhance robustness and generalizability at the control policy level. We seek
to develop a control policy that effectively breaks down the high
dimensionality of snake robots while harnessing their redundancy. In this work,
we consider the snake robot as a modular robot and formulate the control of the
snake robot as a cooperative Multi-Agent Reinforcement Learning (MARL) problem.
Each segment of the snake robot functions as an individual agent. Specifically,
we incorporate a self-attention mechanism to enhance the cooperative behavior
between agents. A high-level imagination policy is proposed to provide
additional rewards to guide the low-level control policy. We validate the
proposed method COMPOSER with five snake robot tasks, including goal reaching,
wall climbing, shape formation, tube crossing, and block pushing. COMPOSER
achieves the highest success rate across all tasks when compared to a
centralized baseline and four modular policy baselines. Additionally, we show
enhanced robustness against module corruption and significantly superior
zero-shot generalizability in our proposed method. The videos of this work are
available on our project page: https://sites.google.com/view/composer-snake/. | [
"Yuyou Zhang",
"Yaru Niu",
"Xingyu Liu",
"Ding Zhao"
] | 2023-10-02 03:20:31 | http://arxiv.org/abs/2310.00871v1 | http://arxiv.org/pdf/2310.00871v1 | 2310.00871v1 |
Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers | Large language models (LLMs) have shown remarkable instruction-following
capabilities and achieved impressive performances in various applications.
However, the performance of LLMs depends heavily on the instructions given to
them, which are typically manually tuned with substantial human efforts. Recent
work has used the query-efficient Bayesian optimization (BO) algorithm to
automatically optimize the instructions given to black-box LLMs. However, BO
usually falls short when optimizing highly sophisticated (e.g.,
high-dimensional) objective functions, such as the functions mapping an
instruction to the performance of an LLM. This is mainly due to the limited
expressive power of the Gaussian process (GP) model which is used by BO as a
surrogate to model the objective function. Meanwhile, it has been repeatedly
shown that neural networks (NNs), especially pre-trained transformers, possess
strong expressive power and can model highly complex functions. So, we adopt a
neural bandit algorithm which replaces the GP in BO by an NN surrogate to
optimize instructions for black-box LLMs. More importantly, the neural bandit
algorithm allows us to naturally couple the NN surrogate with the hidden
representation learned by a pre-trained transformer (i.e., an open-source LLM),
which significantly boosts its performance. These motivate us to propose our
INSTruction optimization usIng Neural bandits Coupled with Transformers
(INSTINCT) algorithm. We perform instruction optimization for ChatGPT and use
extensive experiments to show that our INSTINCT consistently outperforms the
existing methods in different tasks, such as in various instruction induction
tasks and the task of improving the zero-shot chain-of-thought instruction. | [
"Xiaoqiang Lin",
"Zhaoxuan Wu",
"Zhongxiang Dai",
"Wenyang Hu",
"Yao Shu",
"See-Kiong Ng",
"Patrick Jaillet",
"Bryan Kian Hsiang Low"
] | 2023-10-02 02:01:16 | http://arxiv.org/abs/2310.02905v1 | http://arxiv.org/pdf/2310.02905v1 | 2310.02905v1 |
Drug Discovery with Dynamic Goal-aware Fragments | Fragment-based drug discovery is an effective strategy for discovering drug
candidates in the vast chemical space, and has been widely employed in
molecular generative models. However, many existing fragment extraction methods
in such models do not take the target chemical properties into account or rely
on heuristic rules. Additionally, the existing fragment-based generative models
cannot update the fragment vocabulary with goal-aware fragments newly
discovered during the generation. To this end, we propose a molecular
generative framework for drug discovery, named Goal-aware fragment Extraction,
Assembly, and Modification (GEAM). GEAM consists of three modules, each
responsible for goal-aware fragment extraction, fragment assembly, and fragment
modification. The fragment extraction module identifies important fragments
that contribute to the desired target properties with the information
bottleneck principle, thereby constructing an effective goal-aware fragment
vocabulary. Moreover, GEAM can explore beyond the initial vocabulary with the
fragment modification module, and the exploration is further enhanced through
the dynamic goal-aware vocabulary update. We experimentally demonstrate that
GEAM effectively discovers drug candidates through the generative cycle of the
three modules in various drug discovery tasks. | [
"Seul Lee",
"Seanie Lee",
"Sung Ju Hwang"
] | 2023-10-02 01:30:42 | http://arxiv.org/abs/2310.00841v1 | http://arxiv.org/pdf/2310.00841v1 | 2310.00841v1 |
Subsurface Characterization using Ensemble-based Approaches with Deep Generative Models | Estimating spatially distributed properties such as hydraulic conductivity
(K) from available sparse measurements is a great challenge in subsurface
characterization. However, the use of inverse modeling is limited for
ill-posed, high-dimensional applications due to computational costs and poor
prediction accuracy with sparse datasets. In this paper, we combine Wasserstein
Generative Adversarial Network with Gradient Penalty (WGAN-GP), a deep
generative model that can accurately capture complex subsurface structure, and
Ensemble Smoother with Multiple Data Assimilation (ES-MDA), an ensemble-based
inversion method, for accurate and accelerated subsurface characterization.
WGAN-GP is trained to generate high-dimensional K fields from a low-dimensional
latent space and ES-MDA then updates the latent variables by assimilating
available measurements. Several subsurface examples are used to evaluate the
accuracy and efficiency of the proposed method and the main features of the
unknown K fields are characterized accurately with reliable uncertainty
quantification. Furthermore, the estimation performance is compared with a
widely-used variational, i.e., optimization-based, inversion approach, and the
proposed approach outperforms the variational inversion method, especially for
the channelized and fractured field examples. We explain such superior
performance by visualizing the objective function in the latent space: because
of nonlinear and aggressive dimension reduction via generative modeling, the
objective function surface becomes extremely complex while the ensemble
approximation can smooth out the multi-modal surface during the minimization.
This suggests that the ensemble-based approach works well over the variational
approach when combined with deep generative models at the cost of forward model
runs unless convergence-ensuring modifications are implemented in the
variational inversion. | [
"Jichao Bao",
"Hongkyu Yoon",
"Jonghyun Lee"
] | 2023-10-02 01:27:10 | http://arxiv.org/abs/2310.00839v2 | http://arxiv.org/pdf/2310.00839v2 | 2310.00839v2 |
Necessary and Sufficient Watermark for Large Language Models | In recent years, large language models (LLMs) have achieved remarkable
performances in various NLP tasks. They can generate texts that are
indistinguishable from those written by humans. Such remarkable performance of
LLMs increases their risk of being used for malicious purposes, such as
generating fake news articles. Therefore, it is necessary to develop methods
for distinguishing texts written by LLMs from those written by humans.
Watermarking is one of the most powerful methods for achieving this. Although
existing watermarking methods have successfully detected texts generated by
LLMs, they significantly degrade the quality of the generated texts. In this
study, we propose the Necessary and Sufficient Watermark (NS-Watermark) for
inserting watermarks into generated texts without degrading the text quality.
More specifically, we derive minimum constraints required to be imposed on the
generated texts to distinguish whether LLMs or humans write the texts. Then, we
formulate the NS-Watermark as a constrained optimization problem and propose an
efficient algorithm to solve it. Through the experiments, we demonstrate that
the NS-Watermark can generate more natural texts than existing watermarking
methods and distinguish more accurately between texts written by LLMs and those
written by humans. Especially in machine translation tasks, the NS-Watermark
can outperform the existing watermarking method by up to 30 BLEU points. | [
"Yuki Takezawa",
"Ryoma Sato",
"Han Bao",
"Kenta Niwa",
"Makoto Yamada"
] | 2023-10-02 00:48:51 | http://arxiv.org/abs/2310.00833v1 | http://arxiv.org/pdf/2310.00833v1 | 2310.00833v1 |
Online Sensitivity Optimization in Differentially Private Learning | Training differentially private machine learning models requires constraining
an individual's contribution to the optimization process. This is achieved by
clipping the $2$-norm of their gradient at a predetermined threshold prior to
averaging and batch sanitization. This selection adversely influences
optimization in two opposing ways: it either exacerbates the bias due to
excessive clipping at lower values, or augments sanitization noise at higher
values. The choice significantly hinges on factors such as the dataset, model
architecture, and even varies within the same optimization, demanding
meticulous tuning usually accomplished through a grid search. In order to
circumvent the privacy expenses incurred in hyperparameter tuning, we present a
novel approach to dynamically optimize the clipping threshold. We treat this
threshold as an additional learnable parameter, establishing a clean
relationship between the threshold and the cost function. This allows us to
optimize the former with gradient descent, with minimal repercussions on the
overall privacy analysis. Our method is thoroughly assessed against alternative
fixed and adaptive strategies across diverse datasets, tasks, model dimensions,
and privacy levels. Our results demonstrate its comparable or superior
performance in all evaluated scenarios, given the same privacy requirements. | [
"Filippo Galli",
"Catuscia Palamidessi",
"Tommaso Cucinotta"
] | 2023-10-02 00:30:49 | http://arxiv.org/abs/2310.00829v1 | http://arxiv.org/pdf/2310.00829v1 | 2310.00829v1 |
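The paper's gradient-based threshold learning is not reproduced here. As a hedged illustration of the same idea, updating the clipping threshold from data rather than grid-searching it, the sketch below uses the well-known quantile-based adaptive clipping update (C shrinks or grows until a target fraction of per-example gradient norms falls below it). The lognormal norms are synthetic stand-ins, and a private deployment would add noise to the fraction estimate:

```python
import numpy as np

def update_threshold(C, grad_norms, gamma=0.5, eta=0.2):
    """Geometric update: nudge C so a fraction gamma of norms falls below it."""
    frac_below = np.mean(grad_norms <= C)   # should be a noisy count under DP
    return C * np.exp(-eta * (frac_below - gamma))

C = 1.0
rng = np.random.default_rng(0)
for step in range(100):
    # stand-in for per-example gradient 2-norms observed during one batch
    grad_norms = rng.lognormal(mean=0.5, sigma=0.4, size=256)
    clipped = np.minimum(grad_norms, C)     # per-example 2-norm clipping factor
    C = update_threshold(C, grad_norms, gamma=0.5)
print(f"learned threshold tracks the median gradient norm: C = {C:.3f}")
```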
Determining the Optimal Number of Clusters for Time Series Datasets with Symbolic Pattern Forest | Clustering algorithms are among the most widely used data mining methods due
to their exploratory power and being an initial preprocessing step that paves
the way for other techniques. But the problem of calculating the optimal number
of clusters (say k) is one of the significant challenges for such methods. The
most widely used clustering algorithms like k-means and k-shape in time series
data mining also need the ground truth for the number of clusters that need to
be generated. In this work, we extended the Symbolic Pattern Forest algorithm,
another time series clustering algorithm, to determine the optimal number of
clusters for the time series datasets. We used SPF to generate the clusters
from the datasets and chose the optimal number of clusters based on the
Silhouette Coefficient, a metric used to evaluate the quality of a clustering.
The Silhouette was calculated on both the bag-of-words vectors and the
tf-idf vectors generated from the SAX words of each time series. We tested our
approach on the UCR archive datasets, and our experimental results so far
showed significant improvement over the baseline. | [
"Md Nishat Raihan"
] | 2023-10-01 23:33:37 | http://arxiv.org/abs/2310.00820v1 | http://arxiv.org/pdf/2310.00820v1 | 2310.00820v1 |
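A sketch of the model-selection loop the abstract describes, choosing k by the Silhouette Coefficient over tf-idf vectors of SAX words; k-means stands in here for the Symbolic Pattern Forest clusterer, and the SAX documents are fabricated examples:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.feature_extraction.text import TfidfVectorizer

# stand-in for SAX word "documents" extracted from each time series
sax_docs = ["abba abcc abba", "abcc abcc abba",
            "ddca ddca dcba", "dcba ddca ddca"] * 10
X = TfidfVectorizer().fit_transform(sax_docs).toarray()

best_k, best_score = None, -1.0
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)   # in [-1, 1]; higher is better
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))
```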
ECG-SL: Electrocardiogram(ECG) Segment Learning, a deep learning method for ECG signal | Electrocardiogram (ECG) is an essential signal in monitoring human heart
activities. Researchers have achieved promising results in leveraging ECGs in
clinical applications with deep learning models. However, the mainstream deep
learning approaches usually neglect the periodic and formative attribute of the
ECG heartbeat waveform. In this work, we propose a novel ECG-Segment based
Learning (ECG-SL) framework to explicitly model the periodic nature of ECG
signals. More specifically, ECG signals are first split into heartbeat
segments, and then structural features are extracted from each of the segments.
Based on the structural features, a temporal model is designed to learn the
temporal information for various clinical tasks. Further, because massive
amounts of ECG data are available while labeled data are very limited, we
also explore a self-supervised learning strategy to pre-train the models,
resulting in significant improvements on downstream tasks. The proposed method
outperforms the baseline model and shows competitive performances compared with
task-specific methods in three clinical applications: cardiac condition
diagnosis, sleep apnea detection, and arrhythmia classification. Further, we
find that the ECG-SL tends to focus more on each heartbeat's peak and ST range
than ResNet by visualizing the saliency maps. | [
"Han Yu",
"Huiyuan Yang",
"Akane Sano"
] | 2023-10-01 23:17:55 | http://arxiv.org/abs/2310.00818v2 | http://arxiv.org/pdf/2310.00818v2 | 2310.00818v2 |
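A sketch of the first step the abstract describes, splitting an ECG into heartbeat segments. The R-peak detector, window length, and toy spike train below are simplifying assumptions; real pipelines use dedicated QRS detectors:

```python
import numpy as np
from scipy.signal import find_peaks

def segment_heartbeats(ecg: np.ndarray, fs: float):
    """Split an ECG trace into fixed-length segments centred on detected R-peaks."""
    # crude R-peak detection: prominent peaks at least 0.4 s apart
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)
    half = int(0.3 * fs)  # 0.6 s window per beat
    return [ecg[p - half:p + half] for p in peaks
            if p - half >= 0 and p + half <= len(ecg)]

# toy signal: one unit spike per second on noise, sampled at 250 Hz
fs = 250
rng = np.random.default_rng(0)
ecg = 0.05 * rng.standard_normal(10 * fs)
ecg[::fs] += 1.0
beats = segment_heartbeats(ecg, fs)
print(len(beats), beats[0].shape)  # e.g. 9 beats of 150 samples each
```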
Learning to Make Adherence-Aware Advice | As artificial intelligence (AI) systems play an increasingly prominent role
in human decision-making, challenges surface in the realm of human-AI
interactions. One challenge arises from AI policies that are suboptimal because
they inadequately account for humans disregarding AI recommendations, as well
as the need for AI to provide advice selectively, when it is most pertinent. This
paper presents a sequential decision-making model that (i) takes into account
the human's adherence level (the probability that the human follows/rejects
machine advice) and (ii) incorporates a defer option so that the machine can
temporarily refrain from making advice. We provide learning algorithms that
learn the optimal advice policy and make advice only at critical time stamps.
Compared to problem-agnostic reinforcement learning algorithms, our specialized
learning algorithms not only enjoy better theoretical convergence properties
but also show strong empirical performance. | [
"Guanting Chen",
"Xiaocheng Li",
"Chunlin Sun",
"Hanzhao Wang"
] | 2023-10-01 23:15:55 | http://arxiv.org/abs/2310.00817v1 | http://arxiv.org/pdf/2310.00817v1 | 2310.00817v1 |
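A one-step caricature of the defer option the abstract introduces: advise only when the adherence-weighted improvement exceeds an assumed cost of issuing advice. The paper learns such a policy over a sequential decision model rather than applying a fixed rule like this:

```python
def should_advise(theta, v_machine, v_human, advice_cost=0.1):
    """One-step defer-vs-advise rule.
    theta     -- adherence level: probability the human follows the advice
    v_machine -- value of the machine's recommended action
    v_human   -- value of the action the human would take alone
    """
    expected_gain = theta * (v_machine - v_human)
    return expected_gain > advice_cost

# advise only at critical time stamps where the human would otherwise lose a lot
print(should_advise(theta=0.7, v_machine=10.0, v_human=9.8))  # False: marginal
print(should_advise(theta=0.7, v_machine=10.0, v_human=4.0))  # True: critical
```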
OceanNet: A principled neural operator-based digital twin for regional oceans | While data-driven approaches demonstrate great potential in atmospheric
modeling and weather forecasting, ocean modeling poses distinct challenges due
to complex bathymetry, land, vertical structure, and flow non-linearity. This
study introduces OceanNet, a principled neural operator-based digital twin for
ocean circulation. OceanNet uses a Fourier neural operator and
predictor-evaluate-corrector integration scheme to mitigate autoregressive
error growth and enhance stability over extended time scales. A spectral
regularizer counteracts spectral bias at smaller scales. OceanNet is applied to
the northwest Atlantic Ocean western boundary current (the Gulf Stream),
focusing on the task of seasonal prediction for Loop Current eddies and the
Gulf Stream meander. Trained using historical sea surface height (SSH) data,
OceanNet demonstrates competitive forecast skill by outperforming SSH
predictions by an uncoupled, state-of-the-art dynamical ocean model forecast,
reducing computational cost by a factor of 500,000. These accomplishments demonstrate the
potential of physics-inspired deep neural operators as cost-effective
alternatives to high-resolution numerical ocean models. | [
"Ashesh Chattopadhyay",
"Michael Gray",
"Tianning Wu",
"Anna B. Lowe",
"Ruoying He"
] | 2023-10-01 23:06:17 | http://arxiv.org/abs/2310.00813v1 | http://arxiv.org/pdf/2310.00813v1 | 2310.00813v1 |
Sparse Backpropagation for MoE Training | One defining characteristic of Mixture-of-Expert (MoE) models is their
capacity for conducting sparse computation via expert routing, leading to
remarkable scalability. However, backpropagation, the cornerstone of deep
learning, requires dense computation, thereby posing challenges in MoE
gradient computations. Here, we introduce SparseMixer, a scalable gradient
estimator that bridges the gap between backpropagation and sparse expert
routing. Unlike typical MoE training which strategically neglects certain
gradient terms for the sake of sparse computation and scalability, SparseMixer
provides scalable gradient approximations for these terms, enabling reliable
gradient estimation in MoE training. Grounded in a numerical ODE framework,
SparseMixer harnesses the mid-point method, a second-order ODE solver, to
deliver precise gradient approximations with negligible computational overhead.
Applied to Switch Transformer on both pre-training and machine
translation tasks, SparseMixer showcases considerable performance gains,
accelerating training convergence up to 2 times. | [
"Liyuan Liu",
"Jianfeng Gao",
"Weizhu Chen"
] | 2023-10-01 22:43:57 | http://arxiv.org/abs/2310.00811v1 | http://arxiv.org/pdf/2310.00811v1 | 2310.00811v1 |
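For context, the mid-point method the abstract leans on is the second-order ODE step sketched below; this is a generic numerical illustration, not SparseMixer's routing-specific gradient estimator:

```python
# Midpoint (second-order Runge-Kutta) step for a generic ODE y' = f(t, y):
# evaluate the slope at the interval's midpoint instead of its left endpoint.
def midpoint_step(f, t, y, h):
    k1 = f(t, y)
    return y + h * f(t + h / 2, y + (h / 2) * k1)

# integrate y' = -y from y(0) = 1; exact solution is exp(-t)
f = lambda t, y: -y
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = midpoint_step(f, t, y, h)
    t += h
print(y)  # ~0.3685, close to exp(-1) ~ 0.3679
```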
Towards Causal Foundation Model: on Duality between Causal Inference and Attention | Foundation models have brought changes to the landscape of machine learning,
demonstrating sparks of human-level intelligence across a diverse array of
tasks. However, a gap persists in complex tasks such as causal inference,
primarily due to challenges associated with intricate reasoning steps and high
numerical precision requirements. In this work, we take a first step towards
building causally-aware foundation models for complex tasks. We propose a
novel, theoretically sound method called Causal Inference with Attention
(CInA), which utilizes multiple unlabeled datasets to perform self-supervised
causal learning, and subsequently enables zero-shot causal inference on unseen
tasks with new data. This is based on our theoretical results that demonstrate
the primal-dual connection between optimal covariate balancing and
self-attention, facilitating zero-shot causal inference through the final layer
of a trained transformer-type architecture. We demonstrate empirically that our
approach CInA effectively generalizes to out-of-distribution datasets and
various real-world datasets, matching or even surpassing traditional
per-dataset causal inference methodologies. | [
"Jiaqi Zhang",
"Joel Jennings",
"Cheng Zhang",
"Chao Ma"
] | 2023-10-01 22:28:34 | http://arxiv.org/abs/2310.00809v1 | http://arxiv.org/pdf/2310.00809v1 | 2310.00809v1 |
Bayesian Design Principles for Frequentist Sequential Learning | We develop a general theory to optimize the frequentist regret for sequential
learning problems, where efficient bandit and reinforcement learning algorithms
can be derived from unified Bayesian principles. We propose a novel
optimization approach to generate "algorithmic beliefs" at each round, and use
Bayesian posteriors to make decisions. The optimization objective to create
"algorithmic beliefs," which we term "Algorithmic Information Ratio,"
represents an intrinsic complexity measure that effectively characterizes the
frequentist regret of any algorithm. To the best of our knowledge, this is the
first systematic approach to making Bayesian-type algorithms prior-free and
applicable to adversarial settings, in a generic and optimal manner. Moreover,
the algorithms are simple and often efficient to implement. As a major
application, we present a novel algorithm for multi-armed bandits that achieves
the "best-of-all-worlds" empirical performance in the stochastic, adversarial,
and non-stationary environments. And we illustrate how these principles can be
used in linear bandits, bandit convex optimization, and reinforcement learning. | [
"Yunbei Xu",
"Assaf Zeevi"
] | 2023-10-01 22:17:37 | http://arxiv.org/abs/2310.00806v1 | http://arxiv.org/pdf/2310.00806v1 | 2310.00806v1 |
GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation | Recent studies have shown that graph neural networks (GNNs) exhibit strong
biases towards the node degree: they usually perform satisfactorily on
high-degree nodes with rich neighbor information but struggle with low-degree
nodes. Existing works tackle this problem by deriving either designated GNN
architectures or training strategies specifically for low-degree nodes. Though
effective, these approaches unintentionally create an artificial
out-of-distribution scenario, where models mainly or even only observe
low-degree nodes during the training, leading to a downgraded performance for
high-degree nodes that GNNs originally perform well at. In light of this, we
propose a test-time augmentation framework, namely GraphPatcher, to enhance
test-time generalization of any GNNs on low-degree nodes. Specifically,
GraphPatcher iteratively generates virtual nodes to patch artificially created
low-degree nodes via corruptions, aiming at progressively reconstructing target
GNN's predictions over a sequence of increasingly corrupted nodes. Through this
scheme, GraphPatcher not only learns how to enhance low-degree nodes (when the
neighborhoods are heavily corrupted) but also preserves the original superior
performance of GNNs on high-degree nodes (when lightly corrupted).
Additionally, GraphPatcher is model-agnostic and can also mitigate the degree
bias for either self-supervised or supervised GNNs. Comprehensive experiments
are conducted over seven benchmark datasets and GraphPatcher consistently
enhances common GNNs' overall performance by up to 3.6% and low-degree
performance by up to 6.5%, significantly outperforming state-of-the-art
baselines. The source code is publicly available at
https://github.com/jumxglhf/GraphPatcher. | [
"Mingxuan Ju",
"Tong Zhao",
"Wenhao Yu",
"Neil Shah",
"Yanfang Ye"
] | 2023-10-01 21:50:03 | http://arxiv.org/abs/2310.00800v1 | http://arxiv.org/pdf/2310.00800v1 | 2310.00800v1 |
Going Beyond Familiar Features for Deep Anomaly Detection | Anomaly Detection (AD) is a critical task that involves identifying
observations that do not conform to a learned model of normality. Prior work in
deep AD is predominantly based on a familiarity hypothesis, where familiar
features serve as the reference in a pre-trained embedding space. While this
strategy has proven highly successful, it turns out that it causes consistent
false negatives when anomalies consist of truly novel features that are not
well captured by the pre-trained encoding. We propose a novel approach to AD
using explainability to capture novel features as unexplained observations in
the input space. We achieve strong performance across a wide range of anomaly
benchmarks by combining similarity and novelty in a hybrid approach. Our
approach establishes a new state-of-the-art across multiple benchmarks,
handling diverse anomaly types while eliminating the need for expensive
background models and dense matching. In particular, we show that by taking
account of novel features, we reduce false negative anomalies by up to 40% on
challenging benchmarks compared to the state-of-the-art. Our method gives
visually inspectable explanations for pixel-level anomalies. | [
"Sarath Sivaprasad",
"Mario Fritz"
] | 2023-10-01 21:24:05 | http://arxiv.org/abs/2310.00797v2 | http://arxiv.org/pdf/2310.00797v2 | 2310.00797v2 |
A Comprehensive Review of Generative AI in Healthcare | The advancement of Artificial Intelligence (AI) has catalyzed revolutionary
changes across various sectors, notably in healthcare. Among the significant
developments in this field are the applications of generative AI models,
specifically transformers and diffusion models. These models have played a
crucial role in analyzing diverse forms of data, including medical imaging
(encompassing image reconstruction, image-to-image translation, image
generation, and image classification), protein structure prediction, clinical
documentation, diagnostic assistance, radiology interpretation, clinical
decision support, medical coding, and billing, as well as drug design and
molecular representation. Such applications have enhanced clinical diagnosis,
data reconstruction, and drug synthesis. This review paper aims to offer a
thorough overview of the generative AI applications in healthcare, focusing on
transformers and diffusion models. Additionally, we propose potential
directions for future research to tackle the existing limitations and meet the
evolving demands of the healthcare sector. Intended to serve as a comprehensive
guide for researchers and practitioners interested in the healthcare
applications of generative AI, this review provides valuable insights into the
current state of the art, challenges faced, and prospective future directions. | [
"Yasin Shokrollahi",
"Sahar Yarmohammadtoosky",
"Matthew M. Nikahd",
"Pengfei Dong",
"Xianqi Li",
"Linxia Gu"
] | 2023-10-01 21:13:14 | http://arxiv.org/abs/2310.00795v1 | http://arxiv.org/pdf/2310.00795v1 | 2310.00795v1 |
Testing the Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks | Tables stored in databases and tables which are present in web pages and
articles account for a large part of semi-structured data that is available on
the internet. It then becomes pertinent to develop a modeling approach with
large language models (LLMs) that can be used to solve diverse table tasks such
as semantic parsing, question answering as well as classification problems.
Traditionally, there existed separate models specialized for each task
individually. It raises the question of how far can we go to build a unified
model that works well on some table tasks without significant degradation on
others. To that end, we attempt at creating a shared modeling approach in the
pretraining stage with encoder-decoder style LLMs that can cater to diverse
tasks. We evaluate our approach, which continually pretrains and finetunes
different T5 model families with data from tables and surrounding context, on
these downstream tasks at different model scales. Through multiple ablation
studies, we observe that our pretraining with self-supervised objectives can
significantly boost the performance of the models on these tasks. As an example
of one improvement, we observe that the instruction finetuned public models
which come specialized on text question answering (QA) and have been trained on
table data still have room for improvement when it comes to table specific QA.
Our work is the first attempt at studying the advantages of a unified approach
to table specific pretraining when scaled from 770M to 11B sequence to sequence
models while also comparing the instruction finetuned variants of the models. | [
"Soumajyoti Sarkar",
"Leonard Lausen"
] | 2023-10-01 21:06:15 | http://arxiv.org/abs/2310.00789v1 | http://arxiv.org/pdf/2310.00789v1 | 2310.00789v1 |
BooookScore: A systematic exploration of book-length summarization in the era of LLMs | Summarizing book-length documents (>100K tokens) that exceed the context
window size of large language models (LLMs) requires first breaking the input
document into smaller chunks and then prompting an LLM to merge, update, and
compress chunk-level summaries. Despite the complexity and importance of this
task, it has yet to be meaningfully studied due to the challenges of
evaluation: existing book-length summarization datasets (e.g., BookSum) are in
the pretraining data of most public LLMs, and existing evaluation methods
struggle to capture errors made by modern LLM summarizers. In this paper, we
present the first study of the coherence of LLM-based book-length summarizers
implemented via two prompting workflows: (1) hierarchically merging chunk-level
summaries, and (2) incrementally updating a running summary. We obtain 1193
fine-grained human annotations on GPT-4 generated summaries of 100
recently-published books and identify eight common types of coherence errors
made by LLMs. Because human evaluation is expensive and time-consuming, we
develop an automatic metric, BooookScore, that measures the proportion of
sentences in a summary that do not contain any of the identified error types.
BooookScore has high agreement with human annotations and allows us to
systematically evaluate the impact of many other critical parameters (e.g.,
chunk size, base LLM) while saving $15K and 500 hours in human evaluation
costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce
summaries with higher BooookScore than the oft-repetitive ones generated by
LLaMA 2. Incremental updating yields lower BooookScore but higher level of
detail than hierarchical merging, a trade-off sometimes preferred by human
annotators. We release code and annotations after blind review to spur more
principled research on book-length summarization. | [
"Yapei Chang",
"Kyle Lo",
"Tanya Goyal",
"Mohit Iyyer"
] | 2023-10-01 20:46:44 | http://arxiv.org/abs/2310.00785v2 | http://arxiv.org/pdf/2310.00785v2 | 2310.00785v2 |
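The metric itself, as defined in the abstract (the proportion of summary sentences with none of the identified coherence error types), reduces to a few lines; the error-type labels below are hypothetical placeholders, not the paper's taxonomy:

```python
# BooookScore-style aggregation over per-sentence error annotations.
def booookscore(sentence_annotations):
    """sentence_annotations: one set of coherence-error labels per sentence."""
    clean = sum(1 for errs in sentence_annotations if not errs)
    return clean / len(sentence_annotations)

annotations = [set(), {"entity omission"}, set(),
               {"causal omission", "ambiguity"}, set()]
print(booookscore(annotations))  # 0.6
```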
Categorizing Flight Paths using Data Visualization and Clustering Methodologies | This work leverages the U.S. Federal Aviation Administration's Traffic Flow
Management System dataset and DV8, a recently developed tool for highly
interactive visualization of air traffic data, to develop clustering algorithms
for categorizing air traffic by their varying flight paths. Two clustering
methodologies, a spatial-based geographic distance model, and a vector-based
cosine similarity model, are demonstrated and compared for their clustering
effectiveness. Examples of their applications reveal successful, realistic
clustering based on automated clustering result determination and
human-in-the-loop processes, with geographic distance algorithms performing
better for enroute portions of flight paths and cosine similarity algorithms
performing better for near-terminal operations, such as arrival paths. A point
extraction technique is applied to improve computation efficiency. | [
"Yifan Song",
"Keyang Yu",
"Seth Young"
] | 2023-10-01 19:42:00 | http://arxiv.org/abs/2310.00773v1 | http://arxiv.org/pdf/2310.00773v1 | 2310.00773v1 |
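A sketch of the two path-similarity notions compared in the abstract: a geographic (great-circle) distance and a vector cosine similarity over resampled coordinates. The resampling to equal length, the toy coordinates, and the mean aggregation are illustrative assumptions:

```python
import numpy as np

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (*p, *q))
    a = (np.sin((lat2 - lat1) / 2) ** 2 +
         np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def path_distance_geo(path_a, path_b):
    """Mean pointwise great-circle distance between two resampled paths."""
    return np.mean([haversine_km(p, q) for p, q in zip(path_a, path_b)])

def path_similarity_cosine(path_a, path_b):
    """Cosine similarity between flattened path coordinate vectors."""
    a, b = np.ravel(path_a), np.ravel(path_b)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# two short arrival paths resampled to the same number of (lat, lon) points
p1 = [(40.0, -83.0), (40.1, -82.9), (40.2, -82.8)]
p2 = [(40.0, -83.1), (40.1, -83.0), (40.2, -82.9)]
print(path_distance_geo(p1, p2), path_similarity_cosine(p1, p2))
```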
Pre-training with Synthetic Data Helps Offline Reinforcement Learning | Recently, it has been shown that for offline deep reinforcement learning
(DRL), pre-training Decision Transformer with a large language corpus can
improve downstream performance (Reid et al., 2022). A natural question to ask
is whether this performance gain can only be achieved with language
pre-training, or can be achieved with simpler pre-training schemes which do not
involve language. In this paper, we first show that language is not essential
for improved performance, and indeed pre-training with synthetic IID data for a
small number of updates can match the performance gains from pre-training with
a large language corpus; moreover, pre-training with data generated by a
one-step Markov chain can further improve the performance. Inspired by these
experimental results, we then consider pre-training Conservative Q-Learning
(CQL), a popular offline DRL algorithm, which is Q-learning-based and typically
employs a Multi-Layer Perceptron (MLP) backbone. Surprisingly, pre-training
with simple synthetic data for a small number of updates can also improve CQL,
providing consistent performance improvement on D4RL Gym locomotion datasets.
The results of this paper not only illustrate the importance of pre-training
for offline DRL but also show that the pre-training data can be synthetic and
generated with remarkably simple mechanisms. | [
"Zecheng Wang",
"Che Wang",
"Zixuan Dong",
"Keith Ross"
] | 2023-10-01 19:32:14 | http://arxiv.org/abs/2310.00771v2 | http://arxiv.org/pdf/2310.00771v2 | 2310.00771v2 |
Data-Efficient Power Flow Learning for Network Contingencies | This work presents an efficient data-driven method to learn power flows in
grids with network contingencies and to estimate corresponding probabilistic
voltage envelopes (PVE). First, a network-aware Gaussian process (GP) termed
Vertex-Degree Kernel (VDK-GP), developed in prior work, is used to estimate
voltage-power functions for a few network configurations. The paper introduces
a novel multi-task vertex degree kernel (MT-VDK) that amalgamates the learned
VDK-GPs to determine power flows for unseen networks, with a significant
reduction in the computational complexity and hyperparameter requirements
compared to alternate approaches. Simulations on the IEEE 30-Bus network
demonstrate the retention and transfer of power flow knowledge in both N-1 and
N-2 contingency scenarios. The MT-VDK-GP approach achieves over 50% reduction
in mean prediction error for novel N-1 contingency network configurations in
low training data regimes (50-250 samples) over VDK-GP. Additionally, MT-VDK-GP
outperforms a hyper-parameter based transfer learning approach in over 75% of
N-2 contingency network structures, even without historical N-2 outage data.
The proposed method demonstrates the ability to achieve PVEs using sixteen
times fewer power flow solutions compared to Monte-Carlo sampling-based
methods. | [
"Parikshit Pareek",
"Deepjyoti Deka",
"Sidhant Misra"
] | 2023-10-01 19:02:00 | http://arxiv.org/abs/2310.00763v2 | http://arxiv.org/pdf/2310.00763v2 | 2310.00763v2 |
Counterfactual Image Generation for adversarially robust and interpretable Classifiers | Neural Image Classifiers are effective but inherently hard to interpret and
susceptible to adversarial attacks. Solutions to both problems exist, among
others, in the form of counterfactual examples generation to enhance
explainability or adversarially augment training datasets for improved
robustness. However, existing methods exclusively address only one of the
issues. We propose a unified framework leveraging image-to-image translation
Generative Adversarial Networks (GANs) to produce counterfactual samples that
highlight salient regions for interpretability and act as adversarial samples
to augment the dataset for more robustness. This is achieved by combining the
classifier and discriminator into a single model that attributes real images to
their respective classes and flags generated images as "fake". We assess the
method's effectiveness by evaluating (i) the produced explainability masks on a
semantic segmentation task for concrete cracks and (ii) the model's resilience
against the Projected Gradient Descent (PGD) attack on a fruit defects
detection problem. Our produced saliency maps are highly descriptive, achieving
competitive IoU values compared to classical segmentation models despite being
trained exclusively on classification labels. Furthermore, the model exhibits
improved robustness to adversarial attacks, and we show how the discriminator's
"fakeness" value serves as an uncertainty measure of the predictions. | [
"Rafael Bischof",
"Florian Scheidegger",
"Michael A. Kraus",
"A. Cristiano I. Malossi"
] | 2023-10-01 18:50:29 | http://arxiv.org/abs/2310.00761v1 | http://arxiv.org/pdf/2310.00761v1 | 2310.00761v1 |
Data-driven adaptive building thermal controller tuning with constraints: A primal-dual contextual Bayesian optimization approach | We study the problem of tuning the parameters of a room temperature
controller to minimize its energy consumption, subject to the constraint that
the daily cumulative thermal discomfort of the occupants is below a given
threshold. We formulate it as an online constrained black-box optimization
problem where, on each day, we observe some relevant environmental context and
adaptively select the controller parameters. In this paper, we propose to use a
data-driven Primal-Dual Contextual Bayesian Optimization (PDCBO) approach to
solve this problem. In a simulation case study on a single room, we apply our
algorithm to tune the parameters of a Proportional Integral (PI) heating
controller and the pre-heating time. Our results show that PDCBO can save up to
4.7% energy consumption compared to other state-of-the-art Bayesian
optimization-based methods while keeping the daily thermal discomfort below the
given tolerable threshold on average. Additionally, PDCBO can automatically
track time-varying tolerable thresholds while existing methods fail to do so.
We then study an alternative constrained tuning problem where we aim to
minimize the thermal discomfort with a given energy budget. With this
formulation, PDCBO reduces the average discomfort by up to 63% compared to
state-of-the-art safe optimization methods while keeping the average daily
energy consumption below the required threshold. | [
"Wenjie Xu",
"Bratislav Svetozarevic",
"Loris Di Natale",
"Philipp Heer",
"Colin N Jones"
] | 2023-10-01 18:33:37 | http://arxiv.org/abs/2310.00758v1 | http://arxiv.org/pdf/2310.00758v1 | 2310.00758v1 |
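To make the primal-dual mechanism concrete, the following is a schematic loop in the spirit of PDCBO, not the authors' code: a dual variable prices the daily discomfort constraint, and each day's parameter choice minimizes the resulting Lagrangian. The callables `acq_energy` and `acq_discomfort` are hypothetical stand-ins for the GP-based acquisition values.

```python
import numpy as np

def primal_dual_tuning(candidates, acq_energy, acq_discomfort,
                       threshold, days, eta=0.1):
    lam = 0.0  # dual variable for the daily discomfort constraint
    history = []
    for t in range(days):
        # Primal step: pick controller parameters minimizing the Lagrangian.
        scores = [acq_energy(x) + lam * acq_discomfort(x) for x in candidates]
        x_t = candidates[int(np.argmin(scores))]
        # Observe the day's constraint violation (simulated via the surrogate).
        g_t = acq_discomfort(x_t) - threshold
        # Dual step: increase lam when the constraint is violated.
        lam = max(0.0, lam + eta * g_t)
        history.append((x_t, lam))
    return history
```

Because `threshold` enters only through the dual update, a time-varying tolerable threshold can be tracked by simply passing the current day's value.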
Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models | Developing robust artificial intelligence (AI) models that generalize well to
unseen datasets is challenging and usually requires large and variable
datasets, preferably from multiple institutions. In federated learning (FL), a
model is trained collaboratively at numerous sites that hold local datasets
without exchanging them. So far, the impact of training strategy, i.e., local
versus collaborative, on the diagnostic on-domain and off-domain performance of
AI models interpreting chest radiographs has not been assessed. Consequently,
using 610,000 chest radiographs from five institutions across the globe, we
assessed diagnostic performance as a function of training strategy (i.e., local
vs. collaborative), network architecture (i.e., convolutional vs.
transformer-based), generalization performance (i.e., on-domain vs.
off-domain), imaging finding (i.e., cardiomegaly, pleural effusion, pneumonia,
atelectasis, consolidation, pneumothorax, and no abnormality), dataset size
(i.e., from n=18,000 to 213,921 radiographs), and dataset diversity. Large
datasets not only showed minimal performance gains with FL but, in some
instances, even exhibited decreases. In contrast, smaller datasets revealed
marked improvements. Thus, on-domain performance was mainly driven by training
data size. However, off-domain performance leaned more on training diversity.
When trained collaboratively across diverse external institutions, AI models
consistently surpassed models trained locally for off-domain tasks, emphasizing
FL's potential in leveraging data diversity. In conclusion, FL can bolster
diagnostic privacy, reproducibility, and off-domain reliability of AI models
and, potentially, optimize healthcare outcomes. | [
"Soroosh Tayebi Arasteh",
"Christiane Kuhl",
"Marwin-Jonathan Saehn",
"Peter Isfort",
"Daniel Truhn",
"Sven Nebelung"
] | 2023-10-01 18:27:59 | http://arxiv.org/abs/2310.00757v1 | http://arxiv.org/pdf/2310.00757v1 | 2310.00757v1 |
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Large vision-language models (LVLMs) have shown remarkable abilities in
understanding visual information with human languages. However, LVLMs still
suffer from object hallucination, which is the problem of generating
descriptions that include objects that do not actually exist in the images.
This can negatively impact many vision-language tasks, such as visual
summarization and reasoning. To address this issue, we propose a simple yet
powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify
object hallucination in LVLMs by reconstructing less hallucinatory
descriptions. LURE is grounded in a rigorous statistical analysis of the key
factors underlying object hallucination, including co-occurrence (the frequent
appearance of certain objects alongside others in images), uncertainty (objects
with higher uncertainty during LVLM decoding), and object position
(hallucination often appears in the later part of the generated text). LURE can
also be seamlessly integrated with any LVLMs. We evaluate LURE on six
open-source LVLMs, achieving a 23% improvement in general object hallucination
evaluation metrics over the previous best approach. In both GPT and human
evaluations, LURE consistently ranks at the top. Our data and code are
available at https://github.com/YiyangZhou/LURE. | [
"Yiyang Zhou",
"Chenhang Cui",
"Jaehong Yoon",
"Linjun Zhang",
"Zhun Deng",
"Chelsea Finn",
"Mohit Bansal",
"Huaxiu Yao"
] | 2023-10-01 18:10:53 | http://arxiv.org/abs/2310.00754v1 | http://arxiv.org/pdf/2310.00754v1 | 2310.00754v1 |
Identifying Copeland Winners in Dueling Bandits with Indifferences | We consider the task of identifying the Copeland winner(s) in a dueling
bandits problem with ternary feedback. This is an underexplored but practically
relevant variant of the conventional dueling bandits problem, in which, in
addition to strict preference between two arms, one may observe feedback in the
form of an indifference. We provide a lower bound on the sample complexity for
any learning algorithm finding the Copeland winner(s) with a fixed error
probability. Moreover, we propose POCOWISTA, an algorithm with a sample
complexity that almost matches this lower bound, and which shows excellent
empirical performance, even for the conventional dueling bandits problem. For
the case where the preference probabilities satisfy a specific type of
stochastic transitivity, we provide a refined version with an improved worst
case sample complexity. | [
"Viktor Bengs",
"Björn Haddenhorst",
"Eyke Hüllermeier"
] | 2023-10-01 17:59:27 | http://arxiv.org/abs/2310.00750v1 | http://arxiv.org/pdf/2310.00750v1 | 2310.00750v1 |
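For readers unfamiliar with the target quantity, this sketch computes Copeland winners from an estimated pairwise preference matrix; with ternary feedback, the strict-preference probabilities of a pair need not sum to one because indifference carries its own mass. It illustrates the identification goal only, not the POCOWISTA sampling strategy.

```python
import numpy as np

def copeland_winners(P: np.ndarray):
    """P[i, j] = probability that arm i is strictly preferred to arm j."""
    n = P.shape[0]
    # Arm i "wins" against j if it is strictly preferred more often than j is.
    wins = np.array([[P[i, j] > P[j, i] for j in range(n)] for i in range(n)])
    scores = wins.sum(axis=1)          # Copeland score of each arm
    return np.flatnonzero(scores == scores.max()), scores
```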
SEED: Simple, Efficient, and Effective Data Management via Large Language Models | We introduce SEED, an LLM-centric system that allows users to easily create
efficient and effective data management applications. SEED comprises three
main components: code generation, model generation, and augmented LLM query to
address the challenges that LLM services are computationally and economically
expensive and do not always work well on all cases for a given data management
task. SEED addresses the expense challenge by localizing LLM computation as
much as possible: it replaces most LLM calls with local code and local models,
and augments the remaining LLM queries with batching and data access tools. To
ensure effectiveness, SEED features a suite of optimization techniques to
enhance both the localized solution and the LLM queries, including automatic
code validation, code ensembles, model representative selection, and selective
tool usage. Moreover, with SEED, users can easily construct a data management
solution customized to their applications: they configure each component and
compose an execution pipeline in natural language.
SEED then automatically compiles it into an executable program. We showcase the
efficiency and effectiveness of SEED using diverse data management tasks such
as data imputation, NL2SQL translation, etc., achieving state-of-the-art
few-shot performance while significantly reducing the number of required LLM
calls. | [
"Zui CHen",
"Lei Cao",
"Sam Madden",
"Ju Fan",
"Nan Tang",
"Zihui Gu",
"Zeyuan Shang",
"Chunwei Liu",
"Michael Cafarella",
"Tim Kraska"
] | 2023-10-01 17:59:20 | http://arxiv.org/abs/2310.00749v1 | http://arxiv.org/pdf/2310.00749v1 | 2310.00749v1 |
Deterministic Langevin Unconstrained Optimization with Normalizing Flows | We introduce a global, gradient-free surrogate optimization strategy for
expensive black-box functions inspired by the Fokker-Planck and Langevin
equations. These can be written as an optimization problem where the objective
is the target function to maximize minus the logarithm of the current density
of evaluated samples. This objective balances exploitation of the target
objective with exploration of low-density regions. The method, Deterministic
Langevin Optimization (DLO), relies on a Normalizing Flow density estimate to
perform active learning and select proposal points for evaluation. This
strategy differs qualitatively from the widely-used acquisition functions
employed by Bayesian Optimization methods, and can accommodate a range of
surrogate choices. We demonstrate superior or competitive progress toward
objective optima on standard synthetic test functions, as well as on non-convex
and multi-modal posteriors of moderate dimension. On real-world objectives,
such as scientific and neural network hyperparameter optimization, DLO is
competitive with state-of-the-art baselines. | [
"James M. Sullivan",
"Uros Seljak"
] | 2023-10-01 17:46:20 | http://arxiv.org/abs/2310.00745v1 | http://arxiv.org/pdf/2310.00745v1 | 2310.00745v1 |
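A hedged sketch of the DLO selection rule described above: each proposal is scored by the surrogate objective minus the log-density of previously evaluated points, so that low-density regions are explored. Here `surrogate` and `flow_log_density` are hypothetical callables; in the paper the density estimate comes from a normalizing flow fit to past samples.

```python
import numpy as np

def dlo_select(proposals, surrogate, flow_log_density):
    # Objective: exploit high surrogate values, explore low-density regions.
    scores = [surrogate(x) - flow_log_density(x) for x in proposals]
    return proposals[int(np.argmax(scores))]

# After evaluating the selected point, the flow would be refit on the
# enlarged sample set and the loop repeated (active learning).
```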
Top-down Green-ups: Satellite Sensing and Deep Models to Predict Buffelgrass Phenology | An invasive species of grass known as "buffelgrass" contributes to severe
wildfires and biodiversity loss in the Southwest United States. We tackle the
problem of predicting buffelgrass "green-ups" (i.e. readiness for herbicidal
treatment). To make our predictions, we explore temporal, visual and
multi-modal models that combine satellite sensing and deep learning. We find
that all of our neural-based approaches improve over conventional buffelgrass
green-up models, and discuss how neural model deployment promises significant
resource savings. | [
"Lucas Rosenblatt",
"Bin Han",
"Erin Posthumus",
"Theresa Crimmins",
"Bill Howe"
] | 2023-10-01 17:35:35 | http://arxiv.org/abs/2310.00740v1 | http://arxiv.org/pdf/2310.00740v1 | 2310.00740v1 |
Robust Sentiment Analysis for Low Resource languages Using Data Augmentation Approaches: A Case Study in Marathi | Sentiment analysis plays a crucial role in understanding the sentiment
expressed in text data. While sentiment analysis research has been extensively
conducted in English and other Western languages, there exists a significant
gap in research efforts for sentiment analysis in low-resource languages.
Limited resources, including datasets and NLP research, hinder the progress in
this area. In this work, we present an exhaustive study of data augmentation
approaches for the low-resource Indic language Marathi. Although
domain-specific datasets for sentiment analysis in Marathi exist, they often
fall short when applied to generalized and variable-length inputs. To address
this challenge, this research paper proposes four data augmentation techniques
for sentiment analysis in Marathi. The paper focuses on augmenting existing
datasets to compensate for the lack of sufficient resources. The primary
objective is to enhance sentiment analysis model performance in both in-domain
and cross-domain scenarios by leveraging data augmentation strategies. The data
augmentation approaches proposed showed a significant performance improvement
for cross-domain accuracies. The augmentation methods include paraphrasing,
back-translation; BERT-based random token replacement, named entity
replacement, and pseudo-label generation; GPT-based text and label generation.
Furthermore, these techniques can be extended to other low-resource languages
and for general text classification tasks. | [
"Aabha Pingle",
"Aditya Vyawahare",
"Isha Joshi",
"Rahul Tangsali",
"Geetanjali Kale",
"Raviraj Joshi"
] | 2023-10-01 17:09:31 | http://arxiv.org/abs/2310.00734v1 | http://arxiv.org/pdf/2310.00734v1 | 2310.00734v1 |
Spectral Neural Networks: Approximation Theory and Optimization Landscape | There is a large variety of machine learning methodologies that are based on
the extraction of spectral geometric information from data. However, the
implementations of many of these methods often depend on traditional
eigensolvers, which present limitations when applied in practical online big
data scenarios. To address some of these challenges, researchers have proposed
different strategies for training neural networks as alternatives to
traditional eigensolvers, with one such approach known as Spectral Neural
Network (SNN). In this paper, we investigate key theoretical aspects of SNN.
First, we present quantitative insights into the tradeoff between the number of
neurons and the amount of spectral geometric information a neural network
learns. Second, we initiate a theoretical exploration of the optimization
landscape of SNN's objective to shed light on the training dynamics of SNN.
Unlike typical studies of convergence to global solutions of NN training
dynamics, SNN presents an additional complexity due to its non-convex ambient
loss function. | [
"Chenghui Li",
"Rishi Sonthalia",
"Nicolas Garcia Trillos"
] | 2023-10-01 17:03:47 | http://arxiv.org/abs/2310.00729v1 | http://arxiv.org/pdf/2310.00729v1 | 2310.00729v1 |
Physics-Informed Graph Neural Network for Dynamic Reconfiguration of Power Systems | To maintain a reliable grid we need fast decision-making algorithms for
complex problems like Dynamic Reconfiguration (DyR). DyR optimizes distribution
grid switch settings in real time to minimize grid losses and dispatch
resources to supply loads with available generation. DyR is a mixed-integer
problem and can be computationally intractable to solve for large grids and at
fast timescales. We propose GraPhyR, a Physics-Informed Graph Neural Network
(GNNs) framework tailored for DyR. We incorporate essential operational and
connectivity constraints directly within the GNN framework and train it
end-to-end. Our results show that GraPhyR is able to learn to optimize the DyR
task. | [
"Jules Authier",
"Rabab Haider",
"Anuradha Annaswamy",
"Florian Dorfler"
] | 2023-10-01 17:02:29 | http://arxiv.org/abs/2310.00728v1 | http://arxiv.org/pdf/2310.00728v1 | 2310.00728v1 |
Review of deep learning in healthcare | Given the growing complexity of healthcare data over the last several years,
using machine learning techniques like Deep Neural Network (DNN) models has
gained increasing appeal. Machine learning (ML) techniques are used to extract
hidden patterns and other valuable information from the huge quantity of
health data, which traditional analytics cannot process in a reasonable length
of time. Deep Learning (DL) algorithms in particular have shown promise for
pattern identification in healthcare systems. This motivates the contribution
of this research, which examines deep learning methods used in healthcare
systems through a review of cutting-edge network designs, applications, and
market trends. The first objective is to provide in-depth insight into the
deployment of deep learning models in healthcare solutions, connecting deep
learning methodologies with human-interpretable healthcare outcomes. The final
objective is to outline current unresolved issues and potential research
directions. | [
"Hasan Hejbari Zargar",
"Saha Hejbari Zargar",
"Raziye Mehri"
] | 2023-10-01 16:58:20 | http://arxiv.org/abs/2310.00727v1 | http://arxiv.org/pdf/2310.00727v1 | 2310.00727v1 |
Improving Length-Generalization in Transformers via Task Hinting | It has been observed in recent years that transformers have problems with
length generalization for certain types of reasoning and arithmetic tasks. In
particular, the performance of a transformer model trained on tasks (say
addition) up to a certain length (e.g., 5 digit numbers) drops sharply when
applied to longer instances of the same problem. This work proposes an approach
based on task hinting towards addressing length generalization. Our key idea is
that while training the model on task-specific data, it is helpful to
simultaneously train the model to solve a simpler but related auxiliary task as
well.
We study the classical sorting problem as a canonical example to evaluate our
approach. We design a multitask training framework and show that task hinting
significantly improves length generalization. For sorting, we show that it is
possible to train models on data consisting of sequences having length at most
$20$, and improve the test accuracy on sequences of length $100$ from less than
1% (for standard training) to more than 92% (via task hinting).
Our study uncovers several interesting aspects of length generalization. We
observe that while several auxiliary tasks may seem natural a priori, their
effectiveness in improving length generalization differs dramatically. We
further use probing and visualization-based techniques to understand the
internal mechanisms via which the model performs the task, and propose a
theoretical construction consistent with the observed learning behaviors of the
model. Based on our construction, we show that introducing a small number of
length dependent parameters into the training procedure can further boost the
performance on unseen lengths. Finally, we also show the efficacy of our task
hinting based approach beyond sorting, giving hope that these techniques will
be applicable in broader contexts. | [
"Pranjal Awasthi",
"Anupam Gupta"
] | 2023-10-01 16:57:40 | http://arxiv.org/abs/2310.00726v1 | http://arxiv.org/pdf/2310.00726v1 | 2310.00726v1 |
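A minimal sketch of the multitask training step the abstract implies, assuming a shared trunk with separate output heads for the main task (e.g. sorting) and the simpler auxiliary "hint" task; the `head` keyword interface and the mixing weight are assumptions, not the authors' code.

```python
import torch.nn.functional as F

def task_hinting_step(model, main_batch, aux_batch, optimizer, aux_weight=0.5):
    x_main, y_main = main_batch
    x_aux, y_aux = aux_batch
    # Loss on the main task (e.g. predicting the sorted sequence) ...
    loss_main = F.cross_entropy(model(x_main, head="main"), y_main)
    # ... plus the loss on a simpler, related auxiliary task (the "hint").
    loss_aux = F.cross_entropy(model(x_aux, head="aux"), y_aux)
    loss = loss_main + aux_weight * loss_aux
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```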
Subtractive Mixture Models via Squaring: Representation and Learning | Mixture models are traditionally represented and learned by adding several
distributions as components. Allowing mixtures to subtract probability mass or
density can drastically reduce the number of components needed to model complex
distributions. However, learning such subtractive mixtures while ensuring they
still encode a non-negative function is challenging. We investigate how to
learn and perform inference on deep subtractive mixtures by squaring them. We
do this in the framework of probabilistic circuits, which enable us to
represent tensorized mixtures and generalize several other subtractive models.
We theoretically prove that the class of squared circuits allowing subtractions
can be exponentially more expressive than traditional additive mixtures; and,
we empirically show this increased expressiveness on a series of real-world
distribution estimation tasks. | [
"Lorenzo Loconte",
"Aleksanteri M. Sladek",
"Stefan Mengel",
"Martin Trapp",
"Arno Solin",
"Nicolas Gillis",
"Antonio Vergari"
] | 2023-10-01 16:51:58 | http://arxiv.org/abs/2310.00724v1 | http://arxiv.org/pdf/2310.00724v1 | 2310.00724v1 |
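To illustrate squaring in the simplest setting (a 1-D toy under our own assumptions, not the tensorized circuit formulation of the paper), take p(x) proportional to (sum_i w_i N(x; mu_i, s_i^2))^2 with possibly negative weights: squaring keeps the function non-negative, and the normalizer has a closed form because a product of two Gaussian pdfs integrates to a Gaussian evaluation.

```python
import numpy as np
from scipy.stats import norm

def squared_mixture_pdf(x, w, mu, s):
    """Density of a squared 1-D Gaussian mixture with signed weights w."""
    f = np.array([norm.pdf(x, mu[i], s[i]) for i in range(len(w))])
    unnorm = (w @ f) ** 2
    # Z = sum_ij w_i w_j * integral of N(x; mu_i, s_i^2) N(x; mu_j, s_j^2) dx
    Z = sum(w[i] * w[j] * norm.pdf(mu[i], mu[j], np.sqrt(s[i]**2 + s[j]**2))
            for i in range(len(w)) for j in range(len(w)))
    return unnorm / Z

# Subtracting mass carves a "hole" at the origin that a 2-component
# additive mixture cannot express:
x = np.linspace(-5, 5, 7)
print(squared_mixture_pdf(x, w=np.array([1.0, -0.6]),
                          mu=np.array([0.0, 0.0]), s=np.array([2.0, 0.5])))
```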
Logical Bias Learning for Object Relation Prediction | Scene graph generation (SGG) aims to automatically map an image into a
semantic structural graph for better scene understanding. It has attracted
significant attention for its ability to provide object and relation
information, enabling graph reasoning for downstream tasks. However, it faces
severe limitations in practice due to biased data and training methods. In this
this paper, we present a more rational and effective strategy based on causal
inference for object relation prediction. To further evaluate the superiority
of our strategy, we propose an object enhancement module to conduct ablation
studies. Experimental results on the Visual Genome 150 (VG-150) dataset
demonstrate the effectiveness of our proposed method. These contributions can
provide great potential for foundation models for decision-making. | [
"Xinyu Zhou",
"Zihan Ji",
"Anna Zhu"
] | 2023-10-01 16:12:00 | http://arxiv.org/abs/2310.00712v1 | http://arxiv.org/pdf/2310.00712v1 | 2310.00712v1 |
A Simple Yet Effective Strategy to Robustify the Meta Learning Paradigm | Meta learning is a promising paradigm to enable skill transfer across tasks.
Most previous methods employ the empirical risk minimization principle in
optimization. However, the resulting worst-case fast adaptation to a subset of
tasks can be catastrophic in risk-sensitive scenarios. To robustify fast
adaptation, this paper optimizes meta learning pipelines from a
distributionally robust perspective and meta-trains models with the measure of
expected tail risk. We take a two-stage strategy as a heuristic to solve the
robust meta learning problem, controlling worst-case fast adaptation at a
certain probabilistic
level. Experimental results show that our simple method can improve the
robustness of meta learning to task distributions and reduce the conditional
expectation of the worst fast adaptation risk. | [
"Qi Wang",
"Yiqin Lv",
"Yanghe Feng",
"Zheng Xie",
"Jincai Huang"
] | 2023-10-01 15:54:45 | http://arxiv.org/abs/2310.00708v1 | http://arxiv.org/pdf/2310.00708v1 | 2310.00708v1 |
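A compact sketch of the expected-tail-risk objective as we read it from the abstract (the quantile level and the plug-in estimator are assumptions): the meta update is driven by the average loss over the worst-performing fraction of sampled tasks rather than the mean over all of them.

```python
import torch

def tail_risk_loss(task_losses: torch.Tensor, alpha: float = 0.7) -> torch.Tensor:
    """CVaR-style tail average over a batch of per-task fast-adaptation losses."""
    var = torch.quantile(task_losses, alpha)   # value-at-risk threshold
    tail = task_losses[task_losses >= var]     # worst (1 - alpha) fraction
    return tail.mean()                         # expected tail risk
```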
Learning How to Propagate Messages in Graph Neural Networks | This paper studies the problem of learning message propagation strategies for
graph neural networks (GNNs). One of the challenges for graph neural networks
is that of defining the propagation strategy. For instance, the choices of
propagation steps are often specialized to a single graph and are not
personalized to different nodes. To compensate for this, in this paper, we
present learning to propagate, a general learning framework that not only
learns the GNN parameters for prediction but more importantly, can explicitly
learn interpretable and personalized propagation strategies for different
nodes and various types of graphs. We introduce the optimal propagation steps
as latent variables to help find the maximum-likelihood estimation of the GNN
parameters in a variational Expectation-Maximization (VEM) framework. Extensive
experiments on various types of graph benchmarks demonstrate that our proposed
framework can significantly achieve better performance compared with the
state-of-the-art methods, and can effectively learn personalized and
interpretable message propagation strategies in GNNs. | [
"Teng Xiao",
"Zhengyu Chen",
"Donglin Wang",
"Suhang Wang"
] | 2023-10-01 15:09:59 | http://arxiv.org/abs/2310.00697v1 | http://arxiv.org/pdf/2310.00697v1 | 2310.00697v1 |
The Noise Geometry of Stochastic Gradient Descent: A Quantitative and Analytical Characterization | Empirical studies have demonstrated that the noise in stochastic gradient
descent (SGD) aligns favorably with the local geometry of loss landscape.
However, theoretical and quantitative explanations for this phenomenon remain
sparse. In this paper, we offer a comprehensive theoretical investigation into
the aforementioned {\em noise geometry} for over-parameterized linear models
(OLMs) and two-layer neural networks. We scrutinize both average and
directional alignments, paying special attention to how factors like sample
size and input data degeneracy affect the alignment strength. As a specific
application, we leverage our noise geometry characterizations to study how SGD
escapes from sharp minima, revealing that the escape direction has significant
components along flat directions. This is in stark contrast to GD, which
escapes only along the sharpest directions. To substantiate our theoretical
findings, both synthetic and real-world experiments are provided. | [
"Mingze Wang",
"Lei Wu"
] | 2023-10-01 14:58:20 | http://arxiv.org/abs/2310.00692v1 | http://arxiv.org/pdf/2310.00692v1 | 2310.00692v1 |
PharmacoNet: Accelerating Large-Scale Virtual Screening by Deep Pharmacophore Modeling | As the size of accessible compound libraries expands to over 10 billion, the
need for more efficient structure-based virtual screening methods is emerging.
Different pre-screening methods have been developed to rapidly screen the
library, but the structure-based methods applicable to general proteins are
still lacking: the challenge is to predict the binding pose between proteins
and ligands and perform scoring in an extremely short time. We introduce
PharmacoNet, a deep learning framework that identifies the optimal 3D
pharmacophore arrangement which a ligand should have for stable binding from
the binding site. By coarse-grained graph matching between ligands and the
generated pharmacophore arrangement, we solve the expensive binding pose
sampling and scoring procedures of existing methods in a single step.
PharmacoNet is significantly faster than state-of-the-art structure-based
approaches, yet reasonably accurate with a simple scoring function.
Furthermore, we show the promising result that PharmacoNet effectively retains
hit candidates even under the high pre-screening filtration rates. Overall, our
study uncovers the hitherto untapped potential of a pharmacophore modeling
approach in deep learning-based drug discovery. | [
"Seonghwan Seo",
"Woo Youn Kim"
] | 2023-10-01 14:13:09 | http://arxiv.org/abs/2310.00681v2 | http://arxiv.org/pdf/2310.00681v2 | 2310.00681v2 |
A General Offline Reinforcement Learning Framework for Interactive Recommendation | This paper studies the problem of learning interactive recommender systems
from logged feedbacks without any exploration in online environments. We
address the problem by proposing a general offline reinforcement learning
framework for recommendation, which enables maximizing cumulative user rewards
without online exploration. Specifically, we first introduce a probabilistic
generative model for interactive recommendation, and then propose an effective
inference algorithm for discrete and stochastic policy learning based on logged
feedbacks. In order to perform offline learning more effectively, we propose
five approaches to minimize the distribution mismatch between the logging
policy and recommendation policy: support constraints, supervised
regularization, policy constraints, dual constraints and reward extrapolation.
We conduct extensive experiments on two public real-world datasets,
demonstrating that the proposed methods can achieve superior performance over
existing supervised learning and reinforcement learning methods for
recommendation. | [
"Teng Xiao",
"Donglin Wang"
] | 2023-10-01 14:09:21 | http://arxiv.org/abs/2310.00678v1 | http://arxiv.org/pdf/2310.00678v1 | 2310.00678v1 |
Optimization or Architecture: How to Hack Kalman Filtering | In non-linear filtering, it is traditional to compare non-linear
architectures such as neural networks to the standard linear Kalman Filter
(KF). We observe that this mixes the evaluation of two separate components: the
non-linear architecture, and the parameter optimization method. In particular,
the non-linear model is often optimized, whereas the reference KF model is not.
We argue that both should be optimized similarly, and to that end present the
Optimized KF (OKF). We demonstrate that the KF may become competitive to neural
models - if optimized using OKF. This implies that experimental conclusions of
certain previous studies were derived from a flawed process. The advantage of
OKF over the standard KF is further studied theoretically and empirically, in a
variety of problems. Conveniently, OKF can replace the KF in real-world systems
by merely updating the parameters. | [
"Ido Greenberg",
"Netanel Yannay",
"Shie Mannor"
] | 2023-10-01 14:00:18 | http://arxiv.org/abs/2310.00675v1 | http://arxiv.org/pdf/2310.00675v1 | 2310.00675v1 |
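A minimal sketch of what "optimizing the KF like a neural model" can look like: parameterize the process and observation noise covariances (here via diagonal square roots, an assumption made for illustration) and fit them by gradient descent on prediction error; the rest is the standard KF recursion, not the authors' OKF code.

```python
import torch

def run_kf(zs, F_mat, H, Q_diag, R_diag, x0, P0):
    """Standard KF over measurements zs, returning one-step predictions."""
    x, P = x0, P0
    preds = []
    Q, R = torch.diag(Q_diag ** 2), torch.diag(R_diag ** 2)  # keep PSD
    for z in zs:
        x, P = F_mat @ x, F_mat @ P @ F_mat.T + Q            # predict
        preds.append(H @ x)
        S = H @ P @ H.T + R
        K = P @ H.T @ torch.linalg.inv(S)                    # Kalman gain
        x = x + K @ (z - H @ x)                              # update
        P = (torch.eye(P.shape[0]) - K @ H) @ P
    return torch.stack(preds)

def optimize_kf(zs, targets, F_mat, H, x0, P0, steps=200, lr=1e-2):
    """Fit the noise parameters by gradient descent on prediction error."""
    Q_diag = torch.ones(F_mat.shape[0], requires_grad=True)
    R_diag = torch.ones(H.shape[0], requires_grad=True)
    opt = torch.optim.Adam([Q_diag, R_diag], lr=lr)
    for _ in range(steps):
        loss = ((run_kf(zs, F_mat, H, Q_diag, R_diag, x0, P0) - targets) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return Q_diag.detach(), R_diag.detach()
```

Since only `Q_diag` and `R_diag` change, the tuned filter drops into an existing system by a parameter update alone, which is the convenience the abstract highlights.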
Learning Type Inference for Enhanced Dataflow Analysis | Statically analyzing dynamically-typed code is a challenging endeavor, as
even seemingly trivial tasks such as determining the targets of procedure calls
are non-trivial without knowing the types of objects at compile time.
Addressing this challenge, gradual typing is increasingly added to
dynamically-typed languages, a prominent example being TypeScript that
introduces static typing to JavaScript. Gradual typing improves the developer's
ability to verify program behavior, contributing to robust, secure and
debuggable programs. In practice, however, users only sparsely annotate types
directly. At the same time, conventional type inference faces
performance-related challenges as program size grows. Statistical techniques
based on machine learning offer faster inference, but although recent
approaches demonstrate overall improved accuracy, they still perform
significantly worse on user-defined types than on the most common built-in
types. Limiting their real-world usefulness even more, they rarely integrate
with user-facing applications. We propose CodeTIDAL5, a Transformer-based model
trained to reliably predict type annotations. For effective result retrieval
and re-integration, we extract usage slices from a program's code property
graph. Comparing our approach against recent neural type inference systems, our
model outperforms the current state-of-the-art by 7.85% on the
ManyTypes4TypeScript benchmark, achieving 71.27% accuracy overall. Furthermore,
we present JoernTI, an integration of our approach into Joern, an open source
static analysis tool, and demonstrate that the analysis benefits from the
additional type information. As our model allows for fast inference times even
on commodity CPUs, making our system available through Joern leads to high
accessibility and facilitates security research. | [
"Lukas Seidel",
"Sedick David Baker Effendi",
"Xavier Pinho",
"Konrad Rieck",
"Brink van der Merwe",
"Fabian Yamaguchi"
] | 2023-10-01 13:52:28 | http://arxiv.org/abs/2310.00673v2 | http://arxiv.org/pdf/2310.00673v2 | 2310.00673v2 |
GeRA: Label-Efficient Geometrically Regularized Alignment | Pretrained unimodal encoders incorporate rich semantic information into
embedding space structures. To be similarly informative, multi-modal encoders
typically require massive amounts of paired data for alignment and training. We
introduce a semi-supervised Geometrically Regularized Alignment (GeRA) method
to align the embedding spaces of pretrained unimodal encoders in a
label-efficient way. Our method leverages the manifold geometry of unpaired
(unlabeled) data to improve alignment performance. To prevent distortions to
local geometry during the alignment process, potentially disrupting semantic
neighborhood structures and causing misalignment of unobserved pairs, we
introduce a geometric loss term. This term is built upon a diffusion operator
that captures the local manifold geometry of the unimodal pretrained encoders.
GeRA is modality-agnostic and thus can be used to align pretrained encoders
from any data modalities. We provide empirical evidence to the effectiveness of
our method in the domains of speech-text and image-text alignment. Our
experiments demonstrate significant improvement in alignment quality compared
to a variety of leading baselines, especially with a small amount of paired
data, using our proposed geometric regularization. | [
"Dustin Klebe",
"Tal Shnitzer",
"Mikhail Yurochkin",
"Leonid Karlinsky",
"Justin Solomon"
] | 2023-10-01 13:48:36 | http://arxiv.org/abs/2310.00672v2 | http://arxiv.org/pdf/2310.00672v2 | 2310.00672v2 |
Balancing Efficiency vs. Effectiveness and Providing Missing Label Robustness in Multi-Label Stream Classification | Available works addressing multi-label classification in a data stream
environment focus on proposing accurate models; however, these models are
often inefficient and fail to balance effectiveness and efficiency. In this
work, we propose a neural network-based approach that tackles this issue and is
suitable for high-dimensional multi-label classification. Our model uses a
selective concept drift adaptation mechanism that makes it suitable for a
non-stationary environment. Additionally, we adapt our model to an environment
with missing labels using a simple yet effective imputation strategy and
demonstrate that it outperforms a vast majority of the state-of-the-art
supervised models. To achieve our purposes, we introduce a weighted binary
relevance-based approach named ML-BELS using the Broad Ensemble Learning System
(BELS) as its base classifier. Instead of a chain of stacked classifiers, our
model employs independent weighted ensembles, with the weights generated by the
predictions of a BELS classifier. We show that using the weighting strategy on
datasets with low label cardinality negatively impacts the accuracy of the
model; with this in mind, we use the label cardinality as a trigger for
applying the weights. We present an extensive assessment of our model using 11
state-of-the-art baselines, five synthetics, and 13 real-world datasets, all
with different characteristics. Our results demonstrate that the proposed
approach ML-BELS is successful in balancing effectiveness and efficiency, and
is robust to missing labels and concept drift. | [
"Sepehr Bakhshi",
"Fazli Can"
] | 2023-10-01 13:23:37 | http://arxiv.org/abs/2310.00665v1 | http://arxiv.org/pdf/2310.00665v1 | 2310.00665v1 |
Twin Neural Network Improved k-Nearest Neighbor Regression | Twin neural network regression is trained to predict differences between
regression targets rather than the targets themselves. A solution to the
original regression problem can be obtained by ensembling predicted differences
between the targets of an unknown data point and multiple known anchor data
points. Choosing the anchors to be the nearest neighbors of the unknown data
point leads to a neural network-based improvement of k-nearest neighbor
regression. This algorithm is shown to outperform both neural networks and
k-nearest neighbor regression on small to medium-sized data sets. | [
"Sebastian J. Wetzel"
] | 2023-10-01 13:20:49 | http://arxiv.org/abs/2310.00664v1 | http://arxiv.org/pdf/2310.00664v1 | 2310.00664v1 |
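A sketch of the inference step only (training of the twin network is omitted, and the `twin_net` interface is an assumption): y(x) is predicted by averaging anchor target plus predicted difference over the k nearest training points.

```python
import numpy as np

def twin_knn_predict(x, X_train, y_train, twin_net, k=5):
    dists = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dists)[:k]                      # k nearest anchors
    # twin_net(a, b) is assumed to estimate y(a) - y(b).
    estimates = [y_train[j] + twin_net(x, X_train[j]) for j in idx]
    return float(np.mean(estimates))                 # ensemble over anchors
```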
Liveness Detection Competition -- Noncontact-based Fingerprint Algorithms and Systems (LivDet-2023 Noncontact Fingerprint) | Liveness Detection (LivDet) is an international competition series open to
academia and industry with the objective of assessing and reporting the state
of the art in Presentation Attack Detection (PAD). LivDet-2023 Noncontact Fingerprint is
the first edition of the noncontact fingerprint-based PAD competition for
algorithms and systems. The competition serves as an important benchmark in
noncontact-based fingerprint PAD, offering (a) an independent assessment of
the state of the art in noncontact-based fingerprint PAD for algorithms and
systems, (b) a common evaluation protocol, which provides finger photos of a
variety of Presentation Attack Instruments (PAIs) and live fingers to the
biometric research community, and (c) standard algorithm and system evaluation
protocols, along with a comparative analysis of state-of-the-art algorithms
from academia and industry on both old and new Android smartphones. The
winning algorithm achieved an APCER of 11.35% averaged over all PAIs and a
BPCER of 0.62%. The winning system achieved an APCER of 13.04%, averaged over
all PAIs tested over all the smartphones, and a BPCER of 1.68% over all
smartphones tested. Four-finger systems that make individual finger-based PAD
decisions were also tested. The dataset used for the competition will be made
available to all researchers under the data share protocol. | [
"Sandip Purnapatra",
"Humaira Rezaie",
"Bhavin Jawade",
"Yu Liu",
"Yue Pan",
"Luke Brosell",
"Mst Rumana Sumi",
"Lambert Igene",
"Alden Dimarco",
"Srirangaraj Setlur",
"Soumyabrata Dey",
"Stephanie Schuckers",
"Marco Huber",
"Jan Niklas Kolf",
"Meiling Fang",
"Naser Damer",
"Banafsheh Adami",
"Raul Chitic",
"Karsten Seelert",
"Vishesh Mistry",
"Rahul Parthe",
"Umit Kacar"
] | 2023-10-01 12:59:30 | http://arxiv.org/abs/2310.00659v1 | http://arxiv.org/pdf/2310.00659v1 | 2310.00659v1 |
PatchMixer: A Patch-Mixing Architecture for Long-Term Time Series Forecasting | Although the Transformer has been the dominant architecture for time series
forecasting tasks in recent years, a fundamental challenge remains: the
permutation-invariant self-attention mechanism within Transformers leads to a
loss of temporal information. To tackle these challenges, we propose
PatchMixer, a novel CNN-based model. It introduces a permutation-variant
convolutional structure to preserve temporal information. Diverging from
conventional CNNs in this field, which often employ multiple scales or numerous
branches, our method relies exclusively on depthwise separable convolutions.
This allows us to extract both local features and global correlations using a
single-scale architecture. Furthermore, we employ dual forecasting heads that
encompass both linear and nonlinear components to better model future curve
trends and details. Our experimental results on seven time-series forecasting
benchmarks indicate that compared with the state-of-the-art method and the
best-performing CNN, PatchMixer yields $3.9\%$ and $21.2\%$ relative
improvements, respectively, while being 2-3x faster than the most advanced
method. We will release our code and model. | [
"Zeying Gong",
"Yujin Tang",
"Junwei Liang"
] | 2023-10-01 12:47:59 | http://arxiv.org/abs/2310.00655v1 | http://arxiv.org/pdf/2310.00655v1 | 2310.00655v1 |
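A hedged sketch of a depthwise-separable patch-mixing block on a (batch, channels, length) series; the kernel size, the patching scheme, and the dual linear/nonlinear forecasting heads of the actual model are simplified away here.

```python
import torch
import torch.nn as nn

class PatchMixBlock(nn.Module):
    """Single-scale depthwise-separable mixing block (illustrative only)."""
    def __init__(self, channels: int, kernel_size: int = 8):
        super().__init__()
        # Depthwise conv mixes along time per channel (permutation-variant,
        # so temporal order is preserved) ...
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding="same", groups=channels)
        # ... and a pointwise conv mixes information across channels.
        self.pointwise = nn.Conv1d(channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pointwise(self.act(self.depthwise(x)))  # residual mix
```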