title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---|
Conformalized Multimodal Uncertainty Regression and Reasoning | This paper introduces a lightweight uncertainty estimator capable of
predicting multimodal (disjoint) uncertainty bounds by integrating conformal
prediction with a deep-learning regressor. We specifically discuss its
application for visual odometry (VO), where environmental features such as
flying domain symmetries and sensor measurements under ambiguities and
occlusion can result in multimodal uncertainties. Our simulation results show
that uncertainty estimates in our framework adapt sample-wise against
challenging operating conditions such as pronounced noise, limited training
data, and limited parametric size of the prediction model. We also develop a
reasoning framework that leverages these robust uncertainty estimates and
incorporates optical flow-based reasoning to improve prediction
accuracy. Thus, by appropriately accounting for predictive uncertainties of
data-driven learning and closing their estimation loop via rule-based
reasoning, our methodology consistently surpasses conventional deep learning
approaches on all these challenging scenarios--pronounced noise, limited
training data, and limited model size--reducing the prediction error by 2-3x. | [
"Domenico Parente",
"Nastaran Darabi",
"Alex C. Stutts",
"Theja Tulabandhula",
"Amit Ranjan Trivedi"
] | 2023-09-20 02:40:59 | http://arxiv.org/abs/2309.11018v1 | http://arxiv.org/pdf/2309.11018v1 | 2309.11018v1 |
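
As a hedged illustration of the conformal ingredient this estimator builds on, below is a minimal split conformal sketch in Python, assuming absolute residuals as the nonconformity score. The paper's multimodal (disjoint) bounds need a richer score; this shows only single-interval calibration, and all function and variable names are hypothetical.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_targets, test_pred, alpha=0.1):
    """Split conformal interval for a regression model.

    Uses absolute residuals on a held-out calibration set as
    nonconformity scores; the returned interval covers the truth with
    probability >= 1 - alpha under exchangeability.
    """
    scores = np.sort(np.abs(cal_targets - cal_preds))
    n = len(scores)
    # Finite-sample-corrected quantile index: ceil((n + 1) * (1 - alpha)).
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = scores[k - 1]
    return test_pred - q, test_pred + q

# Example on synthetic calibration data.
rng = np.random.default_rng(0)
cal_preds = rng.normal(size=500)
cal_targets = cal_preds + rng.normal(scale=0.3, size=500)
print(split_conformal_interval(cal_preds, cal_targets, test_pred=0.5))
```
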
3D-U-SAM Network For Few-shot Tooth Segmentation in CBCT Images | Accurate representation of tooth position is extremely important in
treatment. 3D dental image segmentation is a widely used method; however,
labelled 3D dental datasets are a scarce resource, leading to the problem of
small samples that this task faces in many cases. To address this
problem, we leverage a pretrained SAM and propose a novel 3D-U-SAM network for 3D
dental image segmentation. Specifically, in order to solve the problem of using
2D pre-trained weights on 3D datasets, we adopt a convolution approximation
method; to retain more details, we design skip connections to fuse
features at all levels with reference to U-Net. The effectiveness of the
proposed method is demonstrated in ablation experiments, comparison
experiments, and sample size experiments. | [
"Yifu Zhang",
"Zuozhu Liu",
"Yang Feng",
"Renjing Xu"
] | 2023-09-20 02:32:09 | http://arxiv.org/abs/2309.11015v1 | http://arxiv.org/pdf/2309.11015v1 | 2309.11015v1 |
ModelGiF: Gradient Fields for Model Functional Distance | The last decade has witnessed the success of deep learning and the surge of
publicly released trained models, which necessitates the quantification of the
model functional distance for various purposes. However, quantifying the model
functional distance is always challenging due to the opacity in inner workings
and the heterogeneity in architectures or tasks. Inspired by the concept of
"field" in physics, in this work we introduce Model Gradient Field (abbr.
ModelGiF) to extract homogeneous representations from the heterogeneous
pre-trained models. Our main assumption underlying ModelGiF is that each
pre-trained deep model uniquely determines a ModelGiF over the input space. The
distance between models can thus be measured by the similarity between their
ModelGiFs. We validate the effectiveness of the proposed ModelGiF with a suite
of testbeds, including task relatedness estimation, intellectual property
protection, and model unlearning verification. Experimental results demonstrate
the versatility of the proposed ModelGiF on these tasks, with significantly
superior performance to state-of-the-art competitors. Codes are available at
https://github.com/zju-vipa/modelgif. | [
"Jie Song",
"Zhengqi Xu",
"Sai Wu",
"Gang Chen",
"Mingli Song"
] | 2023-09-20 02:27:40 | http://arxiv.org/abs/2309.11013v1 | http://arxiv.org/pdf/2309.11013v1 | 2309.11013v1 |
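
To make the "field" intuition concrete, here is a hedged sketch that represents a model by its input gradients over a fixed probe set and compares two models by cosine similarity. This is a crude stand-in for illustration, not ModelGiF's actual construction; the probe scheme and all names are assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

def gradient_field(model, probes):
    """Input-gradients of the model's scalar output over probe points.

    Each model maps the shared probe set to a matrix of gradients,
    giving a homogeneous representation regardless of architecture.
    """
    probes = probes.clone().requires_grad_(True)
    out = model(probes).sum()
    (grads,) = torch.autograd.grad(out, probes)
    return grads

def field_similarity(model_a, model_b, probes):
    # Compare two models via the similarity of their gradient fields.
    ga = gradient_field(model_a, probes).flatten()
    gb = gradient_field(model_b, probes).flatten()
    return F.cosine_similarity(ga, gb, dim=0).item()

probes = torch.randn(128, 16)
m1 = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
m2 = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
print(field_similarity(m1, m2, probes))
```
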
It's Simplex! Disaggregating Measures to Improve Certified Robustness | Certified robustness circumvents the fragility of defences against
adversarial attacks, by endowing model predictions with guarantees of class
invariance for attacks up to a calculated size. While there is value in these
certifications, the techniques through which we assess their performance do not
present a proper accounting of their strengths and weaknesses, as their
analysis has eschewed consideration of performance over individual samples in
favour of aggregated measures. By considering the potential output space of
certified models, this work presents two distinct approaches to improve the
analysis of certification mechanisms, that allow for both dataset-independent
and dataset-dependent measures of certification performance. Embracing such a
perspective uncovers new certification approaches, which have the potential to
more than double the achievable radius of certification, relative to current
state-of-the-art. Empirical evaluation verifies that our new approach can
certify $9\%$ more samples at noise scale $\sigma = 1$, with greater relative
improvements observed as the difficulty of the predictive task increases. | [
"Andrew C. Cullen",
"Paul Montague",
"Shijie Liu",
"Sarah M. Erfani",
"Benjamin I. P. Rubinstein"
] | 2023-09-20 02:16:19 | http://arxiv.org/abs/2309.11005v1 | http://arxiv.org/pdf/2309.11005v1 | 2309.11005v1 |
AI-Driven Patient Monitoring with Multi-Agent Deep Reinforcement Learning | Effective patient monitoring is vital for timely interventions and improved
healthcare outcomes. Traditional monitoring systems often struggle to handle
complex, dynamic environments with fluctuating vital signs, leading to delays
in identifying critical conditions. To address this challenge, we propose a
novel AI-driven patient monitoring framework using multi-agent deep
reinforcement learning (DRL). Our approach deploys multiple learning agents,
each dedicated to monitoring a specific physiological feature, such as heart
rate, respiration, and temperature. These agents interact with a generic
healthcare monitoring environment, learn the patients' behavior patterns, and
make informed decisions to alert the corresponding Medical Emergency Teams
(METs) based on the level of emergency estimated. In this study, we evaluate
the performance of the proposed multi-agent DRL framework using real-world
physiological and motion data from two datasets: PPG-DaLiA and WESAD. We
compare the results with several baseline models, including Q-Learning, PPO,
Actor-Critic, Double DQN, and DDPG, as well as monitoring frameworks like
WISEML and CA-MAQL. Our experiments demonstrate that the proposed DRL approach
outperforms all other baseline models, achieving more accurate monitoring of
patients' vital signs. Furthermore, we conduct hyperparameter optimization to
fine-tune the learning process of each agent. By optimizing hyperparameters, we
enhance the learning rate and discount factor, thereby improving the agents'
overall performance in monitoring patient health status. Our AI-driven patient
monitoring system offers several advantages over traditional methods, including
the ability to handle complex and uncertain environments, adapt to varying
patient conditions, and make real-time decisions without external supervision. | [
"Thanveer Shaik",
"Xiaohui Tao",
"Haoran Xie",
"Lin Li",
"Jianming Yong",
"Hong-Ning Dai"
] | 2023-09-20 00:42:08 | http://arxiv.org/abs/2309.10980v2 | http://arxiv.org/pdf/2309.10980v2 | 2309.10980v2 |
Towards Data-centric Graph Machine Learning: Review and Outlook | Data-centric AI, with its primary focus on the collection, management, and
utilization of data to drive AI models and applications, has attracted
increasing attention in recent years. In this article, we conduct an in-depth
and comprehensive review, offering a forward-looking outlook on the current
efforts in data-centric AI pertaining to graph data, the fundamental data
structure for representing and capturing intricate dependencies among massive
and diverse real-life entities. We introduce a systematic framework,
Data-centric Graph Machine Learning (DC-GML), that encompasses all stages of
the graph data lifecycle, including graph data collection, exploration,
improvement, exploitation, and maintenance. A thorough taxonomy of each stage
is presented to answer three critical graph-centric questions: (1) how to
enhance graph data availability and quality; (2) how to learn from graph data
with limited availability and low quality; (3) how to build graph MLOps systems
from the graph data-centric view. Lastly, we pinpoint the future prospects of
the DC-GML domain, providing insights to navigate its advancements and
applications. | [
"Xin Zheng",
"Yixin Liu",
"Zhifeng Bao",
"Meng Fang",
"Xia Hu",
"Alan Wee-Chung Liew",
"Shirui Pan"
] | 2023-09-20 00:40:13 | http://arxiv.org/abs/2309.10979v1 | http://arxiv.org/pdf/2309.10979v1 | 2309.10979v1 |
PAGER: A Framework for Failure Analysis of Deep Regression Models | Safe deployment of AI models requires proactive detection of potential
prediction failures to prevent costly errors. While failure detection in
classification problems has received significant attention, characterizing
failure modes in regression tasks is more complicated and less explored.
Existing approaches rely on epistemic uncertainties or feature inconsistency
with the training distribution to characterize model risk. However, we show
that uncertainties are necessary but insufficient to accurately characterize
failure, owing to the various sources of error. In this paper, we propose PAGER
(Principled Analysis of Generalization Errors in Regressors), a framework to
systematically detect and characterize failures in deep regression models.
Built upon the recently proposed idea of anchoring in deep models, PAGER
unifies both epistemic uncertainties and novel, complementary non-conformity
scores to organize samples into different risk regimes, thereby providing a
comprehensive analysis of model errors. Additionally, we introduce novel
metrics for evaluating failure detectors in regression tasks. We demonstrate
the effectiveness of PAGER on synthetic and real-world benchmarks. Our results
highlight the capability of PAGER to identify regions of accurate
generalization and detect failure cases in out-of-distribution and
out-of-support scenarios. | [
"Jayaraman J. Thiagarajan",
"Vivek Narayanaswamy",
"Puja Trivedi",
"Rushil Anirudh"
] | 2023-09-20 00:37:35 | http://arxiv.org/abs/2309.10977v1 | http://arxiv.org/pdf/2309.10977v1 | 2309.10977v1 |
Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks | Safe deployment of graph neural networks (GNNs) under distribution shift
requires models to provide accurate confidence indicators (CI). However, while
it is well-known in computer vision that CI quality diminishes under
distribution shift, this behavior remains understudied for GNNs. Hence, we
begin with a case study on CI calibration under controlled structural and
feature distribution shifts and demonstrate that neither increased expressivity
nor model size always leads to improved CI performance. Consequently, we
instead advocate for the use of epistemic uncertainty quantification (UQ)
methods to modulate CIs. To this end, we propose G-$\Delta$UQ, a new single
model UQ method that extends the recently proposed stochastic centering
framework to support structured data and partial stochasticity. Evaluated
across covariate, concept, and graph size shifts, G-$\Delta$UQ not only
outperforms several popular UQ methods in obtaining calibrated CIs, but also
outperforms alternatives when CIs are used for generalization gap prediction or
OOD detection. Overall, our work not only introduces a new, flexible GNN UQ
method, but also provides novel insights into GNN CIs on safety-critical tasks. | [
"Puja Trivedi",
"Mark Heimann",
"Rushil Anirudh",
"Danai Koutra",
"Jayaraman J. Thiagarajan"
] | 2023-09-20 00:35:27 | http://arxiv.org/abs/2309.10976v1 | http://arxiv.org/pdf/2309.10976v1 | 2309.10976v1 |
SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization | Quantization is a widely used compression method that effectively reduces
redundancies in over-parameterized neural networks. However, existing
quantization techniques for deep neural networks often lack a comprehensive
error analysis due to the presence of non-convex loss functions and nonlinear
activations. In this paper, we propose a fast stochastic algorithm for
quantizing the weights of fully trained neural networks. Our approach leverages
a greedy path-following mechanism in combination with a stochastic quantizer.
Its computational complexity scales only linearly with the number of weights in
the network, thereby enabling the efficient quantization of large networks.
Importantly, we establish, for the first time, full-network error bounds, under
an infinite alphabet condition and minimal assumptions on the weights and input
data. As an application of this result, we prove that when quantizing a
multi-layer network having Gaussian weights, the relative square quantization
error exhibits a linear decay as the degree of over-parametrization increases.
Furthermore, we demonstrate that it is possible to achieve error bounds
equivalent to those obtained in the infinite alphabet case, using on the order
of a mere $\log\log N$ bits per weight, where $N$ represents the largest number
of neurons in a layer. | [
"Jinjie Zhang",
"Rayan Saab"
] | 2023-09-20 00:35:16 | http://arxiv.org/abs/2309.10975v1 | http://arxiv.org/pdf/2309.10975v1 | 2309.10975v1 |
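
As a hedged sketch of the stochastic quantizer component alone (not the paper's greedy path-following correction of accumulated error), the following shows unbiased stochastic rounding onto a uniform grid; the step size and names are illustrative.

```python
import numpy as np

def stochastic_quantize(w, step=0.05, rng=None):
    """Unbiased stochastic rounding onto a uniform grid.

    Each weight moves to one of its two nearest grid points with
    probability proportional to proximity, so E[quantized] = w.
    """
    rng = rng or np.random.default_rng()
    scaled = w / step
    low = np.floor(scaled)
    p_up = scaled - low            # chance of rounding up = fractional part
    q = low + (rng.random(w.shape) < p_up)
    return q * step

w = np.random.default_rng(1).normal(scale=0.2, size=(4, 4))
q = stochastic_quantize(w)
print(np.abs(q - w).max())  # per-weight error stays below the step size
```
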
SEMPART: Self-supervised Multi-resolution Partitioning of Image Semantics | Accurately determining salient regions of an image is challenging when
labeled data is scarce. DINO-based self-supervised approaches have recently
leveraged meaningful image semantics captured by patch-wise features for
locating foreground objects. Recent methods have also incorporated intuitive
priors and demonstrated value in unsupervised methods for object partitioning.
In this paper, we propose SEMPART, which jointly infers coarse and fine
bi-partitions over an image's DINO-based semantic graph. Furthermore, SEMPART
preserves fine boundary details using graph-driven regularization and
successfully distills the coarse mask semantics into the fine mask. Our salient
object detection and single object localization findings suggest that SEMPART
produces high-quality masks rapidly without additional post-processing and
benefits from co-optimizing the coarse and fine branches. | [
"Sriram Ravindran",
"Debraj Basu"
] | 2023-09-20 00:07:30 | http://arxiv.org/abs/2309.10972v1 | http://arxiv.org/pdf/2309.10972v1 | 2309.10972v1 |
DPpack: An R Package for Differentially Private Statistical Analysis and Machine Learning | Differential privacy (DP) is the state-of-the-art framework for guaranteeing
privacy for individuals when releasing aggregated statistics or building
statistical/machine learning models from data. We develop the open-source R
package DPpack that provides a large toolkit of differentially private
analysis. The current version of DPpack implements three popular mechanisms for
ensuring DP: Laplace, Gaussian, and exponential. Beyond that, DPpack provides a
large toolkit of easily accessible privacy-preserving descriptive statistics
functions. These include mean, variance, covariance, and quantiles, as well as
histograms and contingency tables. Finally, DPpack provides user-friendly
implementation of privacy-preserving versions of logistic regression, SVM, and
linear regression, as well as differentially private hyperparameter tuning for
each of these models. This extensive collection of implemented differentially
private statistics and models permits hassle-free utilization of differential
privacy principles in commonly performed statistical analysis. We plan to
continue developing DPpack and make it more comprehensive by including more
differentially private machine learning techniques, statistical modeling and
inference in the future. | [
"Spencer Giddens",
"Fang Liu"
] | 2023-09-19 23:36:11 | http://arxiv.org/abs/2309.10965v1 | http://arxiv.org/pdf/2309.10965v1 | 2309.10965v1 |
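
For the Laplace mechanism named above, here is a generic sketch of the standard calibration (noise scale equals sensitivity divided by epsilon). This is illustrative Python, not DPpack's actual R API, and the example bounds are assumptions.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a value with epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity / epsilon, the standard
    calibration for the Laplace mechanism.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Private mean of values known to lie in [0, 1]: the sensitivity of the
# mean is 1/n (changing one record moves the mean by at most 1/n).
data = np.random.default_rng(2).uniform(size=1000)
private_mean = laplace_mechanism(data.mean(), sensitivity=1 / len(data),
                                 epsilon=0.5)
print(private_mean)
```
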
In-Context Learning for Text Classification with Many Labels | In-context learning (ICL) using large language models for tasks with many
labels is challenging due to the limited context window, which makes it
difficult to fit a sufficient number of examples in the prompt. In this paper,
we use a pre-trained dense retrieval model to bypass this limitation, giving
the model only a partial view of the full label space for each inference call.
Testing with recent open-source LLMs (OPT, LLaMA), we set new state-of-the-art
performance in few-shot settings for three common intent classification
datasets, with no finetuning. We also surpass fine-tuned performance on
fine-grained sentiment classification in certain cases. We analyze the
performance across number of in-context examples and different model scales,
showing that larger models are necessary to effectively and consistently make
use of larger context lengths for ICL. By running several ablations, we analyze
the model's use of: a) the similarity of the in-context examples to the current
input, b) the semantic content of the class names, and c) the correct
correspondence between examples and labels. We demonstrate that all three are
needed to varying degrees depending on the domain, contrary to certain recent
works. | [
"Aristides Milios",
"Siva Reddy",
"Dzmitry Bahdanau"
] | 2023-09-19 22:41:44 | http://arxiv.org/abs/2309.10954v1 | http://arxiv.org/pdf/2309.10954v1 | 2309.10954v1 |
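
A rough sketch of the retrieval idea: embed the labeled pool, keep only the nearest neighbors of each query, and build the prompt from that partial view of the label space. The toy_embed retriever and the prompt template are stand-ins, not the paper's retriever or models.

```python
import numpy as np

def toy_embed(text, dim=64):
    """Stand-in for a dense retriever: hash words into a unit vector."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def build_icl_prompt(query, examples, embed, k=4):
    """Retrieve the k most similar labeled examples and format a prompt.

    `examples` is a list of (text, label) pairs. Showing only retrieved
    neighbors gives the LLM a partial view of the full label space.
    """
    q = embed(query)
    sims = [float(q @ embed(text)) for text, _ in examples]
    top = np.argsort(sims)[::-1][:k]
    demos = "\n".join(f"Input: {examples[i][0]}\nLabel: {examples[i][1]}"
                      for i in top)
    return f"{demos}\nInput: {query}\nLabel:"

pool = [("book a flight to paris", "book_flight"),
        ("what is my balance", "check_balance"),
        ("cancel my flight", "cancel_flight"),
        ("transfer money to savings", "transfer")]
print(build_icl_prompt("please cancel the flight", pool, toy_embed, k=2))
```
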
Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces | We present the development and analysis of a reinforcement learning (RL)
algorithm designed to solve continuous-space mean field game (MFG) and mean
field control (MFC) problems in a unified manner. The proposed approach pairs
the actor-critic (AC) paradigm with a representation of the mean field
distribution via a parameterized score function, which can be efficiently
updated in an online fashion, and uses Langevin dynamics to obtain samples from
the resulting distribution. The AC agent and the score function are updated
iteratively to converge, either to the MFG equilibrium or the MFC optimum for a
given mean field problem, depending on the choice of learning rates. A
straightforward modification of the algorithm allows us to solve mixed mean
field control games (MFCGs). The performance of our algorithm is evaluated
using linear-quadratic benchmarks in the asymptotic infinite horizon framework. | [
"Andrea Angiuli",
"Jean-Pierre Fouque",
"Ruimeng Hu",
"Alan Raydan"
] | 2023-09-19 22:37:47 | http://arxiv.org/abs/2309.10953v1 | http://arxiv.org/pdf/2309.10953v1 | 2309.10953v1 |
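
A small sketch of the unadjusted Langevin step this algorithm uses to draw samples from the score-parameterized distribution, checked against a Gaussian whose score is known in closed form; the step size and iteration count are illustrative.

```python
import numpy as np

def langevin_samples(score, x0, step=1e-2, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics driven by a score function.

    `score(x)` approximates grad log p(x); iterating
        x <- x + step * score(x) + sqrt(2 * step) * noise
    yields approximate samples from p.
    """
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

# Sampling a standard Gaussian, whose score is -x.
xs = np.stack([langevin_samples(lambda x: -x, [3.0]) for _ in range(200)])
print(xs.mean(), xs.std())  # should approach 0 and 1
```
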
LMDX: Language Model-based Document Information Extraction and Localization | Large Language Models (LLM) have revolutionized Natural Language Processing
(NLP), improving state-of-the-art on many existing tasks and exhibiting
emergent capabilities. However, LLMs have not yet been successfully applied on
semi-structured document information extraction, which is at the core of many
document processing workflows and consists of extracting key entities from a
visually rich document (VRD) given a predefined target schema. The main
obstacles to LLM adoption in that task have been the absence of layout encoding
within LLMs, critical for a high quality extraction, and the lack of a
grounding mechanism ensuring the answer is not hallucinated. In this paper, we
introduce Language Model-based Document Information Extraction and Localization
(LMDX), a methodology to adapt arbitrary LLMs for document information
extraction. LMDX can do extraction of singular, repeated, and hierarchical
entities, both with and without training data, while providing grounding
guarantees and localizing the entities within the document. In particular, we
apply LMDX to the PaLM 2-S LLM and evaluate it on VRDU and CORD benchmarks,
setting a new state-of-the-art and showing how LMDX enables the creation of
high quality, data-efficient parsers. | [
"Vincent Perot",
"Kai Kang",
"Florian Luisier",
"Guolong Su",
"Xiaoyu Sun",
"Ramya Sree Boppana",
"Zilong Wang",
"Jiaqi Mu",
"Hao Zhang",
"Nan Hua"
] | 2023-09-19 22:32:56 | http://arxiv.org/abs/2309.10952v1 | http://arxiv.org/pdf/2309.10952v1 | 2309.10952v1 |
A Novel Deep Neural Network for Trajectory Prediction in Automated Vehicles Using Velocity Vector Field | Anticipating the motion of other road users is crucial for automated driving
systems (ADS), as it enables safe and informed downstream decision-making and
motion planning. Unfortunately, contemporary learning-based approaches for
motion prediction exhibit significant performance degradation as the prediction
horizon increases or the observation window decreases. This paper proposes a
novel technique for trajectory prediction that combines a data-driven
learning-based method with a velocity vector field (VVF) generated from a
nature-inspired concept, i.e., fluid flow dynamics. In this work, the vector
field is incorporated as an additional input to a convolutional-recurrent deep
neural network to help predict the most likely future trajectories given a
sequence of bird's eye view scene representations. The performance of the
proposed model is compared with state-of-the-art methods on the HighD dataset
demonstrating that the VVF inclusion improves the prediction accuracy for both
short- and long-term (5 sec) time horizons. It is also shown that the accuracy
remains consistent with decreasing observation windows which alleviates the
requirement of a long history of past observations for accurate trajectory
prediction. Source codes are available at:
https://github.com/Amir-Samadi/VVF-TP. | [
"MReza Alipour Sormoli",
"Amir Samadi",
"Sajjad Mozaffari",
"Konstantinos Koufos",
"Mehrdad Dianati",
"Roger Woodman"
] | 2023-09-19 22:14:52 | http://arxiv.org/abs/2309.10948v1 | http://arxiv.org/pdf/2309.10948v1 | 2309.10948v1 |
Extreme Image Transformations Facilitate Robust Latent Object Representations | Adversarial attacks can affect the object recognition capabilities of
machines in the wild. These can often result from spurious correlations between
input and class labels, and are prone to memorization in large networks. While
networks are expected to perform automated feature selection, this selection is
not effective at the scale of the object. Humans, however, are able to select the minimum set of
features required to form a robust representation of an object. In this work,
we show that finetuning any pretrained off-the-shelf network with Extreme Image
Transformations (EIT) not only helps in learning a robust latent
representation but also improves the performance of these networks against
common adversarial attacks of various intensities. Our EIT trained networks
show strong activations in the object regions even when tested with more
intense noise, showing promising generalizations across different kinds of
adversarial attacks. | [
"Girik Malik",
"Dakarai Crowder",
"Ennio Mingolla"
] | 2023-09-19 21:31:25 | http://arxiv.org/abs/2310.07725v1 | http://arxiv.org/pdf/2310.07725v1 | 2310.07725v1 |
Test-Time Training for Speech | In this paper, we study the application of Test-Time Training (TTT) as a
solution to handling distribution shifts in speech applications. In particular,
we introduce distribution-shifts to the test datasets of standard
speech-classification tasks -- for example, speaker-identification and
emotion-detection -- and explore how Test-Time Training (TTT) can help adjust
to the distribution-shift. In our experiments that include distribution shifts
due to background noise and natural variations in speech such as gender and
age, we identify some key-challenges with TTT including sensitivity to
optimization hyperparameters (e.g., number of optimization steps and subset of
parameters chosen for TTT) and scalability (e.g., as each example gets its own
set of parameters, TTT is not scalable). Finally, we propose using BitFit -- a
parameter-efficient fine-tuning algorithm proposed for text applications that
only considers the bias parameters for fine-tuning -- as a solution to the
aforementioned challenges and demonstrate that it is consistently more stable
than fine-tuning all the parameters of the model. | [
"Sri Harsha Dumpala",
"Chandramouli Sastry",
"Sageev Oore"
] | 2023-09-19 21:06:22 | http://arxiv.org/abs/2309.10930v2 | http://arxiv.org/pdf/2309.10930v2 | 2309.10930v2 |
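
A hedged sketch of the BitFit-style restriction proposed here: freeze everything except the bias parameters and hand only those to the optimizer. The toy model and hyperparameters are placeholders.

```python
import torch
from torch import nn

def enable_bitfit(model: nn.Module):
    """Freeze all parameters except bias terms (BitFit-style TTT).

    Restricting test-time updates to biases keeps the number of adapted
    parameters tiny, which the paper finds more stable than updating
    the full model.
    """
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return [p for p in model.parameters() if p.requires_grad]

model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 10))
bias_params = enable_bitfit(model)
optimizer = torch.optim.Adam(bias_params, lr=1e-3)
print(sum(p.numel() for p in bias_params), "of",
      sum(p.numel() for p in model.parameters()), "parameters trainable")
```
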
Semi-automatic staging area for high-quality structured data extraction from scientific literature | In this study, we propose a staging area for ingesting new superconductors'
experimental data in SuperCon that is machine-collected from scientific
articles. Our objective is to enhance the efficiency of updating SuperCon while
maintaining or enhancing the data quality. We present a semi-automatic staging
area driven by a workflow combining automatic and manual processes on the
extracted database. An anomaly detection automatic process aims to pre-screen
the collected data. Users can then manually correct any errors through a user
interface tailored to simplify the data verification on the original PDF
documents. Additionally, when a record is corrected, its raw data is collected
and utilised to improve machine learning models as training data. Evaluation
experiments demonstrate that our staging area significantly improves curation
quality. We compare the interface with the traditional manual approach of
reading PDF documents and recording information in an Excel document. Using the
interface boosts the precision and recall by 6% and 50%, respectively, yielding
an average increase of 40% in F1-score. | [
"Luca Foppiano",
"Tomoya Mato",
"Kensei Terashima",
"Pedro Ortiz Suarez",
"Taku Tou",
"Chikako Sakai",
"Wei-Sheng Wang",
"Toshiyuki Amagasa",
"Yoshihiko Takano",
"Masashi Ishii"
] | 2023-09-19 20:53:13 | http://arxiv.org/abs/2309.10923v1 | http://arxiv.org/pdf/2309.10923v1 | 2309.10923v1 |
Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds | Gaussian processes are used in many machine learning applications that rely
on uncertainty quantification. Recently, computational tools for working with
these models in geometric settings, such as when inputs lie on a Riemannian
manifold, have been developed. This raises the question: can these intrinsic
models be shown theoretically to lead to better performance, compared to simply
embedding all relevant quantities into $\mathbb{R}^d$ and using the restriction
of an ordinary Euclidean Gaussian process? To study this, we prove optimal
contraction rates for intrinsic Mat\'ern Gaussian processes defined on compact
Riemannian manifolds. We also prove analogous rates for extrinsic processes
using trace and extension theorems between manifold and ambient Sobolev spaces:
somewhat surprisingly, the rates obtained turn out to coincide with those of
the intrinsic processes, provided that their smoothness parameters are matched
appropriately. We illustrate these rates empirically on a number of examples,
which, mirroring prior work, show that intrinsic processes can achieve better
performance in practice. Therefore, our work shows that finer-grained analyses
are needed to distinguish between different levels of data-efficiency of
geometric Gaussian processes, particularly in settings which involve small data
set sizes and non-asymptotic behavior. | [
"Paul Rosa",
"Viacheslav Borovitskiy",
"Alexander Terenin",
"Judith Rousseau"
] | 2023-09-19 20:30:58 | http://arxiv.org/abs/2309.10918v2 | http://arxiv.org/pdf/2309.10918v2 | 2309.10918v2 |
End-to-End Speech Recognition Contextualization with Large Language Models | In recent years, Large Language Models (LLMs) have garnered significant
attention from the research community due to their exceptional performance and
generalization capabilities. In this paper, we introduce a novel method for
contextualizing speech recognition models incorporating LLMs. Our approach
casts speech recognition as a mixed-modal language modeling task based on a
pretrained LLM. We provide audio features, along with optional text tokens for
context, to train the system to complete transcriptions in a decoder-only
fashion. As a result, the system is implicitly incentivized to learn how to
leverage unstructured contextual information during training. Our empirical
results demonstrate a significant improvement in performance, with a 6% WER
reduction when additional textual context is provided. Moreover, we find that
our method performs competitively, improving WER by 7.5% overall and by 17%
on rare words against a baseline contextualized RNN-T system that has been
trained on a speech dataset more than twenty-five times larger. Overall, we
demonstrate that by adding only a handful of trainable parameters via
adapters, we can unlock contextualized speech recognition capability for the
pretrained LLM while keeping the same text-only input functionality. | [
"Egor Lakomkin",
"Chunyang Wu",
"Yassir Fathullah",
"Ozlem Kalinli",
"Michael L. Seltzer",
"Christian Fuegen"
] | 2023-09-19 20:28:57 | http://arxiv.org/abs/2309.10917v1 | http://arxiv.org/pdf/2309.10917v1 | 2309.10917v1 |
What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples | Adversarial examples, deliberately crafted using small perturbations to fool
deep neural networks, were first studied in image processing and more recently
in NLP. While approaches to detecting adversarial examples in NLP have largely
relied on search over input perturbations, image processing has seen a range of
techniques that aim to characterise adversarial subspaces over the learned
representations.
In this paper, we adapt two such approaches to NLP, one based on nearest
neighbors and influence functions and one on Mahalanobis distances. The former
in particular produces a state-of-the-art detector when compared against
several strong baselines; moreover, the novel use of influence functions
provides insight into how the nature of adversarial example subspaces in NLP
relate to those in image processing, and also how they differ depending on the
kind of NLP task. | [
"Shakila Mahjabin Tonni",
"Mark Dras"
] | 2023-09-19 20:28:24 | http://arxiv.org/abs/2309.10916v3 | http://arxiv.org/pdf/2309.10916v3 | 2309.10916v3 |
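
A hedged sketch of the Mahalanobis-distance detector adapted in the paper: fit class-conditional means with a tied covariance over learned representations and score a test input by its distance to the nearest class. The covariance regularizer and synthetic features are assumptions, not the paper's exact recipe.

```python
import numpy as np

def mahalanobis_scores(train_feats, train_labels, test_feats):
    """Score inputs by Mahalanobis distance to the nearest class mean.

    Fits class-conditional Gaussians with a shared (tied) covariance
    over learned representations; larger scores suggest the input lies
    off the training manifold, e.g., an adversarial example.
    """
    classes = np.unique(train_labels)
    means = {c: train_feats[train_labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate(
        [train_feats[train_labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    prec = np.linalg.inv(cov)
    dists = np.stack([
        np.einsum("ij,jk,ik->i", test_feats - means[c], prec,
                  test_feats - means[c])
        for c in classes])
    return dists.min(axis=0)  # distance to the closest class

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))
labels = rng.integers(0, 3, size=200)
print(mahalanobis_scores(feats, labels, rng.normal(size=(5, 8)) * 4).round(1))
```
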
Amplifying Pathological Detection in EEG Signaling Pathways through Cross-Dataset Transfer Learning | Pathology diagnosis based on EEG signals and decoding brain activity holds
immense importance in understanding neurological disorders. With the
advancement of artificial intelligence methods and machine learning techniques,
the potential for accurate data-driven diagnoses and effective treatments has
grown significantly. However, applying machine learning algorithms to
real-world datasets presents diverse challenges at multiple levels. The
scarcity of labelled data, especially in low-data regimes with limited
availability of real patient cohorts due to high costs of recruitment,
underscores the vital deployment of scaling and transfer learning techniques.
In this study, we explore a real-world pathology classification task to
highlight the effectiveness of data and model scaling and cross-dataset
knowledge transfer. As such, we observe varying performance improvements
through data scaling, indicating the need for careful evaluation and labelling.
Additionally, we identify the challenges of possible negative transfer and
emphasize the significance of some key components to overcome distribution
shifts and potential spurious correlations and achieve positive transfer. We
observe improved performance of the target model on the target (NMT)
dataset when leveraging knowledge from the source dataset (TUAB) and only a
small amount of labelled data is available. Our findings indicate a small and
generic model (e.g. ShallowNet) performs well on a single dataset; however, a
larger model (e.g. TCN) performs better on transfer and learning from a larger
and diverse dataset. | [
"Mohammad-Javad Darvishi-Bayazi",
"Mohammad Sajjad Ghaemi",
"Timothee Lesort",
"Md Rifat Arefin",
"Jocelyn Faubert",
"Irina Rish"
] | 2023-09-19 20:09:15 | http://arxiv.org/abs/2309.10910v1 | http://arxiv.org/pdf/2309.10910v1 | 2309.10910v1 |
Self-Augmentation Improves Zero-Shot Cross-Lingual Transfer | Zero-shot cross-lingual transfer is a central task in multilingual NLP,
allowing models trained in languages with abundant training resources to
generalize to other low-resource languages. Earlier efforts on this task use
parallel corpora, bilingual dictionaries, or other annotated alignment data to
improve cross-lingual transferability, which are typically expensive to obtain.
In this paper, we propose a simple yet effective method, SALT, to improve the
zero-shot cross-lingual transfer of the multilingual pretrained language models
without the help of such external data. By incorporating code-switching and
embedding mixup with self-augmentation, SALT effectively distills cross-lingual
knowledge from the multilingual PLM and enhances its transferability on
downstream tasks. Experimental results on XNLI and PAWS-X show that our method
is able to improve zero-shot cross-lingual transferability without external
data. Our code is available at https://github.com/luka-group/SALT. | [
"Fei Wang",
"Kuan-Hao Huang",
"Kai-Wei Chang",
"Muhao Chen"
] | 2023-09-19 19:30:56 | http://arxiv.org/abs/2309.10891v1 | http://arxiv.org/pdf/2309.10891v1 | 2309.10891v1 |
Crypto'Graph: Leveraging Privacy-Preserving Distributed Link Prediction for Robust Graph Learning | Graphs are a widely used data structure for collecting and analyzing
relational data. However, when the graph structure is distributed across
several parties, its analysis is particularly challenging. In particular, due
to the sensitivity of the data each party might want to keep their partial
knowledge of the graph private, while still willing to collaborate with the
other parties for tasks of mutual benefit, such as data curation or the removal
of poisoned data. To address this challenge, we propose Crypto'Graph, an
efficient protocol for privacy-preserving link prediction on distributed
graphs. More precisely, it allows parties partially sharing a graph with
distributed links to infer the likelihood of formation of new links in the
future. Through the use of cryptographic primitives, Crypto'Graph is able to
compute the likelihood of these new links on the joint network without
revealing the structure of the private individual graph of each party, even
though they know the number of nodes they have, since they share the same graph
but not the same links. Crypto'Graph improves on previous works by enabling the
computation of a certain number of similarity metrics without any additional
cost. The use of Crypto'Graph is illustrated for defense against graph
poisoning attacks, in which it is possible to identify potential adversarial
links without compromising the privacy of the graphs of individual parties. The
effectiveness of Crypto'Graph in mitigating graph poisoning attacks and
achieving high prediction accuracy on a graph neural network node
classification task is demonstrated through extensive experimentation on a
real-world dataset. | [
"Sofiane Azogagh",
"Zelma Aubin Birba",
"Sébastien Gambs",
"Marc-Olivier Killijian"
] | 2023-09-19 19:30:28 | http://arxiv.org/abs/2309.10890v1 | http://arxiv.org/pdf/2309.10890v1 | 2309.10890v1 |
DeepliteRT: Computer Vision at the Edge | The proliferation of edge devices has unlocked unprecedented opportunities
for deep learning model deployment in computer vision applications. However,
these complex models require considerable power, memory and compute resources
that are typically not available on edge platforms. Ultra low-bit quantization
presents an attractive solution to this problem by scaling down the model
weights and activations from 32-bit to less than 8-bit. We implement highly
optimized ultra low-bit convolution operators for ARM-based targets that
outperform existing methods by up to 4.34x. Our operator is implemented within
Deeplite Runtime (DeepliteRT), an end-to-end solution for the compilation,
tuning, and inference of ultra low-bit models on ARM devices. Compiler passes
in DeepliteRT automatically convert a fake-quantized model in full precision to
a compact ultra low-bit representation, easing the process of quantized model
deployment on commodity hardware. We analyze the performance of DeepliteRT on
classification and detection models against optimized 32-bit floating-point,
8-bit integer, and 2-bit baselines, achieving significant speedups of up to
2.20x, 2.33x and 2.17x, respectively. | [
"Saad Ashfaq",
"Alexander Hoffman",
"Saptarshi Mitra",
"Sudhakar Sah",
"MohammadHossein AskariHemmat",
"Ehsan Saboori"
] | 2023-09-19 18:58:38 | http://arxiv.org/abs/2309.10878v1 | http://arxiv.org/pdf/2309.10878v1 | 2309.10878v1 |
Dynamical Tests of a Deep-Learning Weather Prediction Model | Global deep-learning weather prediction models have recently been shown to
produce forecasts that rival those from physics-based models run at operational
centers. It is unclear whether these models have encoded atmospheric dynamics,
or simply pattern matching that produces the smallest forecast error. Answering
this question is crucial to establishing the utility of these models as tools
for basic science. Here we subject one such model, Pangu-weather, to a set of
four classical dynamical experiments that do not resemble the model training
data. Localized perturbations to the model output and the initial conditions
are added to steady time-averaged conditions, to assess the propagation speed
and structural evolution of signals away from the local source. Perturbing the
model physics by adding a steady tropical heat source results in a classical
Matsuno--Gill response near the heating, and planetary waves that radiate into
the extratropics. A localized disturbance on the winter-averaged North Pacific
jet stream produces realistic extratropical cyclones and fronts, including the
spontaneous emergence of polar lows. Perturbing the 500hPa height field alone
yields adjustment from a state of rest to one of wind--pressure balance over ~6
hours. Localized subtropical low pressure systems produce Atlantic hurricanes,
provided the initial amplitude exceeds about 5 hPa, and setting the initial
humidity to zero eliminates hurricane development. We conclude that the model
encodes realistic physics in all experiments, and suggest it can be used as a
tool for rapidly testing ideas before using expensive physics-based models. | [
"Gregory J. Hakim",
"Sanjit Masanam"
] | 2023-09-19 18:26:41 | http://arxiv.org/abs/2309.10867v1 | http://arxiv.org/pdf/2309.10867v1 | 2309.10867v1 |
Generative AI in the Construction Industry: Opportunities & Challenges | In the last decade, despite rapid advancements in artificial intelligence
(AI) transforming many industry practices, construction largely lags in
adoption. Recently, the emergence and rapid adoption of advanced large language
models (LLM) like OpenAI's GPT, Google's PaLM, and Meta's Llama have shown
great potential and sparked considerable global interest. However, the current
surge lacks a study investigating the opportunities and challenges of
implementing Generative AI (GenAI) in the construction sector, creating a
critical knowledge gap for researchers and practitioners. This underlines the
necessity to explore the prospects and complexities of GenAI integration.
Bridging this gap is fundamental to optimizing GenAI's early-stage adoption
within the construction sector. Given GenAI's unprecedented capabilities to
generate human-like content based on learning from existing content, we reflect
on two guiding questions: What will the future bring for GenAI in the
construction industry? What are the potential opportunities and challenges in
implementing GenAI in the construction industry? This study delves into
the perception reflected in the literature, analyzes the industry perception using
programming-based word cloud and frequency analysis, and integrates authors'
opinions to answer these questions. This paper recommends a conceptual GenAI
implementation framework, provides practical recommendations, summarizes future
research questions, and builds foundational literature to foster subsequent
research expansion in GenAI within the construction and its allied architecture
& engineering domains. | [
"Prashnna Ghimire",
"Kyungki Kim",
"Manoj Acharya"
] | 2023-09-19 18:20:49 | http://arxiv.org/abs/2310.04427v1 | http://arxiv.org/pdf/2310.04427v1 | 2310.04427v1 |
Assessing the capacity of a denoising diffusion probabilistic model to reproduce spatial context | Diffusion models have emerged as a popular family of deep generative models
(DGMs). In the literature, it has been claimed that one class of diffusion
models -- denoising diffusion probabilistic models (DDPMs) -- demonstrate
superior image synthesis performance as compared to generative adversarial
networks (GANs). To date, these claims have been evaluated using either
ensemble-based methods designed for natural images, or conventional measures of
image quality such as structural similarity. However, there remains an
important need to understand the extent to which DDPMs can reliably learn
medical imaging domain-relevant information, which is referred to as 'spatial
context' in this work. To address this, a systematic assessment of the ability
of DDPMs to learn spatial context relevant to medical imaging applications is
reported for the first time. A key aspect of the studies is the use of
stochastic context models (SCMs) to produce training data. In this way, the
ability of the DDPMs to reliably reproduce spatial context can be
quantitatively assessed by use of post-hoc image analyses. Error-rates in
DDPM-generated ensembles are reported, and compared to those corresponding to a
modern GAN. The studies reveal new and important insights regarding the
capacity of DDPMs to learn spatial context. Notably, the results demonstrate
that DDPMs hold significant capacity for generating contextually correct images
that are 'interpolated' between training samples, which may benefit
data-augmentation tasks in ways that GANs cannot. | [
"Rucha Deshpande",
"Muzaffer Özbey",
"Hua Li",
"Mark A. Anastasio",
"Frank J. Brooks"
] | 2023-09-19 17:58:35 | http://arxiv.org/abs/2309.10817v1 | http://arxiv.org/pdf/2309.10817v1 | 2309.10817v1 |
AI Foundation Models for Weather and Climate: Applications, Design, and Implementation | Machine learning and deep learning methods have been widely explored in
understanding the chaotic behavior of the atmosphere and furthering weather
forecasting. There has been increasing interest from technology companies,
government institutions, and meteorological agencies in building digital twins
of the Earth. Recent approaches using transformers, physics-informed machine
learning, and graph neural networks have demonstrated state-of-the-art
performance on relatively narrow spatiotemporal scales and specific tasks. With
the recent success of generative artificial intelligence (AI) using pre-trained
transformers for language modeling and vision with prompt engineering and
fine-tuning, we are now moving towards generalizable AI. In particular, we are
witnessing the rise of AI foundation models that can perform competitively on
multiple domain-specific downstream tasks. Despite this progress, we are still
in the nascent stages of a generalizable AI model for global Earth system
models, regional climate models, and mesoscale weather models. Here, we review
current state-of-the-art AI approaches, primarily from transformer and operator
learning literature in the context of meteorology. We provide our perspective
on criteria for success towards a family of foundation models for nowcasting
and forecasting weather and climate predictions. We also discuss how such
models can perform competitively on downstream tasks such as downscaling
(super-resolution), identifying conditions conducive to the occurrence of
wildfires, and predicting consequential meteorological phenomena across various
spatiotemporal scales such as hurricanes and atmospheric rivers. In particular,
we examine current AI methodologies and contend they have matured enough to
design and implement a weather foundation model. | [
"S. Karthik Mukkavilli",
"Daniel Salles Civitarese",
"Johannes Schmude",
"Johannes Jakubik",
"Anne Jones",
"Nam Nguyen",
"Christopher Phillips",
"Sujit Roy",
"Shraddha Singh",
"Campbell Watson",
"Raghu Ganti",
"Hendrik Hamann",
"Udaysankar Nair",
"Rahul Ramachandran",
"Kommy Weldemariam"
] | 2023-09-19 17:50:27 | http://arxiv.org/abs/2309.10808v2 | http://arxiv.org/pdf/2309.10808v2 | 2309.10808v2 |
Multi-Context Dual Hyper-Prior Neural Image Compression | Transform and entropy models are the two core components in deep image
compression neural networks. Most existing learning-based image compression
methods utilize convolutional-based transform, which lacks the ability to model
long-range dependencies, primarily due to the limited receptive field of the
convolution operation. To address this limitation, we propose a
Transformer-based nonlinear transform. This transform has the remarkable
ability to efficiently capture both local and global information from the input
image, leading to a more decorrelated latent representation. In addition, we
introduce a novel entropy model that incorporates two different hyperpriors to
model cross-channel and spatial dependencies of the latent representation. To
further improve the entropy model, we add a global context that leverages
distant relationships to predict the current latent more accurately. This
global context employs a causal attention mechanism to extract long-range
information in a content-dependent manner. Our experiments show that our
proposed framework performs better than the state-of-the-art methods in terms
of rate-distortion performance. | [
"Atefeh Khoshkhahtinat",
"Ali Zafari",
"Piyush M. Mehta",
"Mohammad Akyash",
"Hossein Kashiani",
"Nasser M. Nasrabadi"
] | 2023-09-19 17:44:44 | http://arxiv.org/abs/2309.10799v1 | http://arxiv.org/pdf/2309.10799v1 | 2309.10799v1 |
Guide Your Agent with Adaptive Multimodal Rewards | Developing an agent capable of adapting to unseen environments remains a
difficult challenge in imitation learning. In this work, we present Adaptive
Return-conditioned Policy (ARP), an efficient framework designed to enhance the
agent's generalization ability using natural language task descriptions and
pre-trained multimodal encoders. Our key idea is to calculate a similarity
between visual observations and natural language instructions in the
pre-trained multimodal embedding space (such as CLIP) and use it as a reward
signal. We then train a return-conditioned policy using expert demonstrations
labeled with multimodal rewards. Because the multimodal rewards provide
adaptive signals at each timestep, our ARP effectively mitigates the goal
misgeneralization. This results in superior generalization performances even
when faced with unseen text instructions, compared to existing text-conditioned
policies. To improve the quality of rewards, we also introduce a fine-tuning
method for pre-trained multimodal encoders, further enhancing the performance.
Video demonstrations and source code are available on the project website:
https://sites.google.com/view/2023arp. | [
"Changyeon Kim",
"Younggyo Seo",
"Hao Liu",
"Lisa Lee",
"Jinwoo Shin",
"Honglak Lee",
"Kimin Lee"
] | 2023-09-19 17:39:20 | http://arxiv.org/abs/2309.10790v1 | http://arxiv.org/pdf/2309.10790v1 | 2309.10790v1 |
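
Since the reward is a similarity in a pretrained multimodal embedding space, a minimal sketch follows, with random tensors standing in for CLIP image and text features; the encoders themselves are out of scope and the shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def multimodal_reward(image_emb: torch.Tensor,
                      text_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between observation and instruction embeddings.

    With per-timestep image embeddings and a task-description embedding
    from a pretrained encoder such as CLIP, the similarity acts as an
    adaptive per-timestep reward signal for the policy.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return (image_emb * text_emb).sum(dim=-1)

obs = torch.randn(50, 512)  # stand-in for CLIP image features over a trajectory
instr = torch.randn(512)    # stand-in for the CLIP text feature of the task
print(multimodal_reward(obs, instr).shape)  # torch.Size([50])
```
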
Context-Aware Neural Video Compression on Solar Dynamics Observatory | NASA's Solar Dynamics Observatory (SDO) mission collects large data volumes
of the Sun's daily activity. Data compression is crucial for space missions to
reduce data storage and video bandwidth requirements by eliminating
redundancies in the data. In this paper, we present a novel neural
Transformer-based video compression approach specifically designed for the SDO
images. Our primary objective is to efficiently exploit the temporal and
spatial redundancies inherent in solar images to obtain a high compression
ratio. Our proposed architecture benefits from a novel Transformer block called
Fused Local-aware Window (FLaWin), which incorporates window-based
self-attention modules and an efficient fused local-aware feed-forward (FLaFF)
network. This architectural design allows us to simultaneously capture
short-range and long-range information while facilitating the extraction of
rich and diverse contextual representations. Moreover, this design choice
results in reduced computational complexity. Experimental results demonstrate
the significant contribution of the FLaWin Transformer block to the compression
performance, outperforming conventional hand-engineered video codecs such as
H.264 and H.265 in terms of rate-distortion trade-off. | [
"Atefeh Khoshkhahtinat",
"Ali Zafari",
"Piyush M. Mehta",
"Nasser M. Nasrabadi",
"Barbara J. Thompson",
"Michael S. F. Kirk",
"Daniel da Silva"
] | 2023-09-19 17:33:12 | http://arxiv.org/abs/2309.10784v1 | http://arxiv.org/pdf/2309.10784v1 | 2309.10784v1 |
$O(k)$-Equivariant Dimensionality Reduction on Stiefel Manifolds | Many real-world datasets live on high-dimensional Stiefel and Grassmannian
manifolds, $V_k(\mathbb{R}^N)$ and $Gr(k, \mathbb{R}^N)$ respectively, and
benefit from projection onto lower-dimensional Stiefel (respectively,
Grassmannian) manifolds. In this work, we propose an algorithm called Principal
Stiefel Coordinates (PSC) to reduce data dimensionality from $
V_k(\mathbb{R}^N)$ to $V_k(\mathbb{R}^n)$ in an $O(k)$-equivariant manner ($k
\leq n \ll N$). We begin by observing that each element $\alpha \in
V_n(\mathbb{R}^N)$ defines an isometric embedding of $V_k(\mathbb{R}^n)$ into
$V_k(\mathbb{R}^N)$. Next, we optimize for such an embedding map that minimizes
data fit error by warm-starting with the output of principal component analysis
(PCA) and applying gradient descent. Then, we define a continuous and
$O(k)$-equivariant map $\pi_\alpha$ that acts as a "closest point operator"
to project the data onto the image of $V_k(\mathbb{R}^n)$ in
$V_k(\mathbb{R}^N)$ under the embedding determined by $\alpha$, while
minimizing distortion. Because this dimensionality reduction is
$O(k)$-equivariant, these results extend to Grassmannian manifolds as well.
Lastly, we show that the PCA output globally minimizes projection error in a
noiseless setting, but that our algorithm achieves a meaningfully different and
improved outcome when the data does not lie exactly on the image of a linearly
embedded lower-dimensional Stiefel manifold as above. Multiple numerical
experiments using synthetic and real-world data are performed. | [
"Andrew Lee",
"Harlin Lee",
"Jose A. Perea",
"Nikolas Schonsheck",
"Madeleine Weinstein"
] | 2023-09-19 17:21:12 | http://arxiv.org/abs/2309.10775v1 | http://arxiv.org/pdf/2309.10775v1 | 2309.10775v1 |
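
As a checkable fragment of the machinery above, here is the standard closest-point projection onto the Stiefel manifold via the polar factor of the thin SVD. This is the generic primitive behind operators like $\pi_\alpha$, not the full PSC algorithm.

```python
import numpy as np

def project_to_stiefel(A):
    """Closest point on the Stiefel manifold V_k(R^N) in Frobenius norm.

    For A of shape (N, k) with full column rank, the polar factor
    U @ Vt from the thin SVD A = U S Vt has orthonormal columns and
    minimizes ||A - Q||_F over all Q with Q^T Q = I.
    """
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

A = np.random.default_rng(3).normal(size=(100, 5))
Q = project_to_stiefel(A)
print(np.allclose(Q.T @ Q, np.eye(5)))  # True: columns are orthonormal
```
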
Semi-supervised Domain Adaptation in Graph Transfer Learning | As a specific case of graph transfer learning, unsupervised domain adaptation
on graphs aims for knowledge transfer from label-rich source graphs to
unlabeled target graphs. However, graphs with topology and attributes usually
have considerable cross-domain disparity and there are numerous real-world
scenarios where merely a subset of nodes are labeled in the source graph. This
imposes critical challenges on graph transfer learning due to serious domain
shifts and label scarcity. To address these challenges, we propose a method
named Semi-supervised Graph Domain Adaptation (SGDA). To deal with the domain
shift, we add adaptive shift parameters to each of the source nodes, which are
trained in an adversarial manner to align the cross-domain distributions of
node embedding, thus the node classifier trained on labeled source nodes can be
transferred to the target nodes. Moreover, to address the label scarcity, we
propose pseudo-labeling on unlabeled nodes, which improves classification on
the target graph via measuring the posterior influence of nodes based on their
relative position to the class centroids. Finally, extensive experiments on a
range of publicly accessible datasets validate the effectiveness of our
proposed SGDA in different experimental settings. | [
"Ziyue Qiao",
"Xiao Luo",
"Meng Xiao",
"Hao Dong",
"Yuanchun Zhou",
"Hui Xiong"
] | 2023-09-19 17:20:58 | http://arxiv.org/abs/2309.10773v1 | http://arxiv.org/pdf/2309.10773v1 | 2309.10773v1 |
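
A sketch of the centroid-based pseudo-labeling step, assuming node embeddings and class centroids are given; the margin-based confidence filter is an illustrative stand-in for the paper's posterior-influence measure.

```python
import numpy as np

def centroid_pseudo_labels(emb, centroids, keep_frac=0.5):
    """Assign pseudo-labels to unlabeled nodes by nearest class centroid.

    Keeps only the most confident fraction, ranked by how much closer a
    node is to its best centroid than to the runner-up.
    """
    d = np.linalg.norm(emb[:, None, :] - centroids[None, :, :], axis=-1)
    labels = d.argmin(axis=1)
    sorted_d = np.sort(d, axis=1)
    margin = sorted_d[:, 1] - sorted_d[:, 0]   # confidence proxy
    cutoff = np.quantile(margin, 1 - keep_frac)
    keep = margin >= cutoff
    return labels, keep

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
centroids = rng.normal(size=(4, 16))
labels, keep = centroid_pseudo_labels(emb, centroids)
print(labels[keep][:10], keep.mean())
```
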
Interactive Distillation of Large Single-Topic Corpora of Scientific Papers | Highly specific datasets of scientific literature are important for both
research and education. However, it is difficult to build such datasets at
scale. A common approach is to build these datasets reductively by applying
topic modeling on an established corpus and selecting specific topics. A more
robust but time-consuming approach is to build the dataset constructively in
which a subject matter expert (SME) handpicks documents. This method does not
scale and is prone to error as the dataset grows. Here we showcase a new tool,
based on machine learning, for constructively generating targeted datasets of
scientific literature. Given a small initial "core" corpus of papers, we build
a citation network of documents. At each step of the citation network, we
generate text embeddings and visualize the embeddings through dimensionality
reduction. Papers are kept in the dataset if they are "similar" to the core or
are otherwise pruned through human-in-the-loop selection. Additional insight
into the papers is gained through sub-topic modeling using SeNMFk. We
demonstrate our new tool for literature review by applying it to two different
fields in machine learning. | [
"Nicholas Solovyev",
"Ryan Barron",
"Manish Bhattarai",
"Maksim E. Eren",
"Kim O. Rasmussen",
"Boian S. Alexandrov"
] | 2023-09-19 17:18:36 | http://arxiv.org/abs/2309.10772v1 | http://arxiv.org/pdf/2309.10772v1 | 2309.10772v1 |
Improving Opioid Use Disorder Risk Modelling through Behavioral and Genetic Feature Integration | Opioids are an effective analgesic for acute and chronic pain, but also carry
a considerable risk of addiction leading to millions of opioid use disorder
(OUD) cases and tens of thousands of premature deaths in the United States
yearly. Estimating OUD risk prior to prescription could improve the efficacy of
treatment regimens, monitoring programs, and intervention strategies, but risk
estimation is typically based on self-reported data or questionnaires. We
develop an experimental design and computational methods that combine genetic
variants associated with OUD with behavioral features extracted from GPS and
Wi-Fi spatiotemporal coordinates to assess OUD risk. Since mobility and
genetic data do not exist for the same OUD cohort, we develop algorithms to (1)
generate mobility features from empirical distributions and (2) synthesize
mobility and genetic samples assuming a level of comorbidity and relative
risks. We show that integrating genetic and mobility modalities improves risk
modelling using classification accuracy, area under the precision-recall and
receiver operator characteristic curves, and $F_1$ score. Interpreting the
fitted models suggests that mobility features have more influence on OUD risk,
although the genetic contribution was significant, particularly in linear
models. While there exists concerns with respect to privacy, security, bias,
and generalizability that must be evaluated in clinical trials before being
implemented in practice, our framework provides preliminary evidence that
behavioral and genetic features may improve OUD risk estimation to assist with
personalized clinical decision-making. | [
"Sybille Légitime",
"Kaustubh Prabhu",
"Devin McConnell",
"Bing Wang",
"Dipak K. Dey",
"Derek Aguiar"
] | 2023-09-19 17:01:28 | http://arxiv.org/abs/2309.10837v1 | http://arxiv.org/pdf/2309.10837v1 | 2309.10837v1 |
SHOWMe: Benchmarking Object-agnostic Hand-Object 3D Reconstruction | Recent hand-object interaction datasets show limited real object variability
and rely on fitting the MANO parametric model to obtain groundtruth hand
shapes. To go beyond these limitations and spur further research, we introduce
the SHOWMe dataset which consists of 96 videos, annotated with real and
detailed hand-object 3D textured meshes. Following recent work, we consider a
rigid hand-object scenario, in which the pose of the hand with respect to the
object remains constant during the whole video sequence. This assumption allows
us to register sub-millimetre-precise groundtruth 3D scans to the image
sequences in SHOWMe. Although simpler, this hypothesis makes sense in terms of
applications where the required accuracy and level of detail are important, e.g.,
object hand-over in human-robot collaboration, object scanning, or manipulation
and contact point analysis. Importantly, the rigidity of the hand-object
systems allows us to tackle video-based 3D reconstruction of unknown hand-held
objects using a 2-stage pipeline consisting of a rigid registration step
followed by a multi-view reconstruction (MVR) part. We carefully evaluate a set
of non-trivial baselines for these two stages and show that it is possible to
achieve promising object-agnostic 3D hand-object reconstructions employing an
SfM toolbox or a hand pose estimator to recover the rigid transforms and
off-the-shelf MVR algorithms. However, these methods remain sensitive to the
initial camera pose estimates which might be imprecise due to lack of textures
on the objects or heavy occlusions of the hands, leaving room for improvements
in the reconstruction. Code and dataset are available at
https://europe.naverlabs.com/research/showme | [
"Anilkumar Swamy",
"Vincent Leroy",
"Philippe Weinzaepfel",
"Fabien Baradel",
"Salma Galaaoui",
"Romain Bregier",
"Matthieu Armando",
"Jean-Sebastien Franco",
"Gregory Rogez"
] | 2023-09-19 16:48:29 | http://arxiv.org/abs/2309.10748v1 | http://arxiv.org/pdf/2309.10748v1 | 2309.10748v1 |
Accelerating Diffusion-Based Text-to-Audio Generation with Consistency Distillation | Diffusion models power a vast majority of text-to-audio (TTA) generation
methods. Unfortunately, these models suffer from slow inference speed due to
iterative queries to the underlying denoising network, making them unsuitable for
scenarios with inference time or computational constraints. This work modifies
the recently proposed consistency distillation framework to train TTA models
that require only a single neural network query. In addition to incorporating
classifier-free guidance into the distillation process, we leverage the
availability of generated audio during distillation training to fine-tune the
consistency TTA model with novel loss functions in the audio space, such as the
CLAP score. Our objective and subjective evaluation results on the AudioCaps
dataset show that consistency models retain diffusion models' high generation
quality and diversity while reducing the number of queries by a factor of 400. | [
"Yatong Bai",
"Trung Dang",
"Dung Tran",
"Kazuhito Koishida",
"Somayeh Sojoudi"
] | 2023-09-19 16:36:33 | http://arxiv.org/abs/2309.10740v1 | http://arxiv.org/pdf/2309.10740v1 | 2309.10740v1 |
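This abstract combines consistency distillation with classifier-free guidance. Below is a minimal sketch of one such distillation step under common conventions: denoisers D(x, sigma, cond) that predict the clean sample and an ascending noise schedule `sigmas`. All names and the single Euler ODE step are illustrative assumptions, not the authors' implementation.

```python
# Sketch of one consistency-distillation training step with CFG folded into
# the teacher's probability-flow ODE step (assumptions as in the lead-in).
import torch

def cd_step(student, student_ema, teacher, x0, cond, sigmas, w=3.0):
    # x0: clean latents of shape (batch, dim); sigmas: ascending noise levels
    i = torch.randint(0, len(sigmas) - 1, (x0.shape[0],))
    s_hi, s_lo = sigmas[i + 1].view(-1, 1), sigmas[i].view(-1, 1)
    x_hi = x0 + s_hi * torch.randn_like(x0)              # diffuse to level i+1
    with torch.no_grad():
        d_c = teacher(x_hi, s_hi, cond)                  # conditional denoising
        d_u = teacher(x_hi, s_hi, None)                  # unconditional denoising
        d = d_u + w * (d_c - d_u)                        # classifier-free guidance
        x_lo = x_hi + (s_lo - s_hi) * (x_hi - d) / s_hi  # one Euler PF-ODE step
        target = student_ema(x_lo, s_lo, cond)           # self-consistency target
    return ((student(x_hi, s_hi, cond) - target) ** 2).mean()
```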
Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation | We consider the problem of learning a model from multiple heterogeneous
sources with the goal of performing well on a new target distribution. The goal
of the learner is to mix these data sources in a target-distribution-aware way and
simultaneously minimize the empirical risk on the mixed source. The literature
has made some tangible advancements in establishing a theory of learning on
mixture domains. However, there are still two unsolved problems: first, how to
estimate the optimal mixture of sources given a target domain; second, when
there are numerous target domains, how to solve empirical risk minimization
(ERM) for each target using possibly unique mixture of data sources in a
computationally efficient manner. In this paper we address both problems
efficiently and with guarantees. We cast the first problem, mixture weight
estimation, as a convex-nonconcave compositional minimax problem, and propose
an efficient stochastic algorithm with provable stationarity guarantees. Next,
for the second problem, we identify that for certain regimes, solving ERM for
each target domain individually can be avoided, and instead parameters for a
target optimal model can be viewed as a non-linear function on a space of the
mixture coefficients. Building upon this, we show that in the offline setting,
a GD-trained overparameterized neural network can provably learn such function
to predict the model of target domain instead of solving a designated ERM
problem. Finally, we also consider an online setting and propose a label
efficient online algorithm, which predicts parameters for new targets given an
arbitrary sequence of mixing coefficients, while enjoying regret guarantees. | [
"Yuyang Deng",
"Ilja Kuzborskij",
"Mehrdad Mahdavi"
] | 2023-09-19 16:29:34 | http://arxiv.org/abs/2309.10736v1 | http://arxiv.org/pdf/2309.10736v1 | 2309.10736v1 |
GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models | The remarkable capabilities and intricate nature of Artificial Intelligence
(AI) have dramatically escalated the imperative for specialized AI
accelerators. Nonetheless, designing these accelerators for various AI
workloads remains both labor- and time-intensive. While existing design
exploration and automation tools can partially alleviate the need for extensive
human involvement, they still demand substantial hardware expertise, posing a
barrier to non-experts and stifling AI accelerator development. Motivated by
the astonishing potential of large language models (LLMs) for generating
high-quality content in response to human language instructions, we embark on
this work to examine the possibility of harnessing LLMs to automate AI
accelerator design. Through this endeavor, we develop GPT4AIGChip, a framework
intended to democratize AI accelerator design by leveraging natural human
language instead of domain-specific languages. Specifically, we first perform
an in-depth investigation into LLMs' limitations and capabilities for AI
accelerator design, thus aiding our understanding of our current position and
garnering insights into LLM-powered automated AI accelerator design.
Furthermore, drawing inspiration from the above insights, we develop a
framework called GPT4AIGChip, which features an automated demo-augmented
prompt-generation pipeline utilizing in-context learning to guide LLMs towards
creating high-quality AI accelerator design. To our knowledge, this work is the
first to demonstrate an effective pipeline for LLM-powered automated AI
accelerator generation. Accordingly, we anticipate that our insights and
framework can serve as a catalyst for innovations in next-generation
LLM-powered design automation tools. | [
"Yonggan Fu",
"Yongan Zhang",
"Zhongzhi Yu",
"Sixu Li",
"Zhifan Ye",
"Chaojian Li",
"Cheng Wan",
"Yingyan Lin"
] | 2023-09-19 16:14:57 | http://arxiv.org/abs/2309.10730v1 | http://arxiv.org/pdf/2309.10730v1 | 2309.10730v1 |
PAMS: Platform for Artificial Market Simulations | This paper presents a new artificial market simulation platform, PAMS:
Platform for Artificial Market Simulations. PAMS is developed as a Python-based
simulator that is easily integrated with deep learning and enables various
simulations that users can easily modify. In this paper, we demonstrate the
effectiveness of PAMS through a study using agents that predict future prices
by deep learning. | [
"Masanori Hirano",
"Ryosuke Takata",
"Kiyoshi Izumi"
] | 2023-09-19 16:14:21 | http://arxiv.org/abs/2309.10729v1 | http://arxiv.org/pdf/2309.10729v1 | 2309.10729v1 |
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback | To solve complex tasks, large language models (LLMs) often require multiple
rounds of interactions with the user, sometimes assisted by external tools.
However, current evaluation protocols often emphasize benchmark performance
with single-turn exchanges, neglecting the nuanced interactions among the user,
LLMs, and external tools, while also underestimating the importance of natural
language feedback from users. These oversights contribute to discrepancies
between research benchmark evaluations and real-world use cases. We introduce
MINT, a benchmark that evaluates LLMs' ability to solve tasks with multi-turn
interactions by (1) using tools and (2) leveraging natural language feedback.
To ensure reproducibility, we provide an evaluation framework where LLMs can
access tools by executing Python code and receive users' natural language
feedback simulated by GPT-4. We repurpose a diverse set of established
evaluation datasets focusing on reasoning, coding, and decision-making and
carefully curate them into a compact subset for efficient evaluation. Our
analysis of 20 open- and closed-source LLMs offers intriguing findings. (a)
LLMs generally benefit from tools and language feedback, with performance gains
(absolute, same below) of 1-8% for each turn of tool use and 2-17% with natural
language feedback. (b) Better single-turn performance does not guarantee better
multi-turn performance. (c) Surprisingly, on the LLMs evaluated, supervised
instruction-finetuning (SIFT) and reinforcement learning from human feedback
(RLHF) generally hurt multi-turn capabilities. We expect MINT to help measure
progress and incentivize research in improving LLMs' capabilities in multi-turn
interactions, especially for open-source communities where multi-turn human
evaluation can be less accessible compared to commercial LLMs with a larger
user base. | [
"Xingyao Wang",
"Zihan Wang",
"Jiateng Liu",
"Yangyi Chen",
"Lifan Yuan",
"Hao Peng",
"Heng Ji"
] | 2023-09-19 15:25:42 | http://arxiv.org/abs/2309.10691v2 | http://arxiv.org/pdf/2309.10691v2 | 2309.10691v2 |
On the different regimes of Stochastic Gradient Descent | Modern deep networks are trained with stochastic gradient descent (SGD) whose
key parameters are the number of data considered at each step or batch size
$B$, and the step size or learning rate $\eta$. For small $B$ and large $\eta$,
SGD corresponds to a stochastic evolution of the parameters, whose noise
amplitude is governed by the `temperature' $T\equiv \eta/B$. Yet this
description is observed to break down for sufficiently large batches $B\geq
B^*$, or simplifies to gradient descent (GD) when the temperature is
sufficiently small. Understanding where these cross-overs take place remains a
central challenge. Here we resolve these questions for a teacher-student
perceptron classification model, and show empirically that our key predictions
still apply to deep networks. Specifically, we obtain a phase diagram in the
$B$-$\eta$ plane that separates three dynamical phases: $\textit{(i)}$ a
noise-dominated SGD governed by temperature, $\textit{(ii)}$ a
large-first-step-dominated SGD and $\textit{(iii)}$ GD. These different phases
also correspond to different regimes of generalization error. Remarkably, our
analysis reveals that the batch size $B^*$ separating regimes $\textit{(i)}$
and $\textit{(ii)}$ scales with the size $P$ of the training set, with an
exponent that characterizes the hardness of the classification problem. | [
"Antonio Sclocchi",
"Matthieu Wyart"
] | 2023-09-19 15:23:07 | http://arxiv.org/abs/2309.10688v2 | http://arxiv.org/pdf/2309.10688v2 | 2309.10688v2 |
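The phase diagram sketched in this abstract is driven by the temperature T = η/B and a crossover batch size B* that grows with the training-set size P. A toy classifier of the three regimes might look like the sketch below; the constants and thresholds are illustrative placeholders, not the paper's fitted values.

```python
# Toy regime classifier for SGD, following the abstract's qualitative picture
# (placeholder constants; the exponent alpha stands in for task hardness).
def sgd_regime(eta, B, P, alpha=0.5, c=1.0, T_gd=1e-4):
    T = eta / B                      # SGD noise 'temperature'
    B_star = c * P ** alpha          # crossover batch size, growing with P
    if T < T_gd:
        return "(iii) gradient descent"
    if B >= B_star:
        return "(ii) first-step-dominated SGD"
    return "(i) noise-dominated SGD (temperature-governed)"
```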
Oracle Complexity Reduction for Model-free LQR: A Stochastic Variance-Reduced Policy Gradient Approach | We investigate the problem of learning an $\epsilon$-approximate solution for
the discrete-time Linear Quadratic Regulator (LQR) problem via a Stochastic
Variance-Reduced Policy Gradient (SVRPG) approach. Whilst policy gradient
methods have proven to converge linearly to the optimal solution of the
model-free LQR problem, the substantial requirement for two-point cost queries
in gradient estimations may be intractable, particularly in applications where
obtaining cost function evaluations at two distinct control input
configurations is exceptionally costly. To this end, we propose an
oracle-efficient approach. Our method combines both one-point and two-point
estimations in a dual-loop variance-reduced algorithm. It achieves an
approximate optimal solution with only
$O\left(\log\left(1/\epsilon\right)^{\beta}\right)$ two-point cost information
for $\beta \in (0,1)$. | [
"Leonardo F. Toso",
"Han Wang",
"James Anderson"
] | 2023-09-19 15:03:18 | http://arxiv.org/abs/2309.10679v1 | http://arxiv.org/pdf/2309.10679v1 | 2309.10679v1 |
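The one-point versus two-point distinction above refers to standard zeroth-order gradient estimates: a two-point estimator queries the cost at two perturbed controllers per direction, a one-point estimator at only one. A generic sketch with smoothing-based scaling follows; these are the textbook estimators, not necessarily the paper's exact constructions.

```python
# Generic one-point and two-point zeroth-order gradient estimates for a
# controller K and smoothing radius r (illustrative scaling by K.size).
import numpy as np

def one_point_grad(cost, K, r, rng):
    u = rng.standard_normal(K.shape)
    u /= np.linalg.norm(u)
    return (K.size / r) * cost(K + r * u) * u        # single cost query

def two_point_grad(cost, K, r, rng):
    u = rng.standard_normal(K.shape)
    u /= np.linalg.norm(u)
    return (K.size / (2 * r)) * (cost(K + r * u) - cost(K - r * u)) * u
```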
Language Modeling Is Compression | It has long been established that predictive models can be transformed into
lossless compressors and vice versa. Incidentally, in recent years, the machine
learning community has focused on training increasingly large and powerful
self-supervised (language) models. Since these large language models exhibit
impressive predictive capabilities, they are well-positioned to be strong
compressors. In this work, we advocate for viewing the prediction problem
through the lens of compression and evaluate the compression capabilities of
large (foundation) models. We show that large language models are powerful
general-purpose predictors and that the compression viewpoint provides novel
insights into scaling laws, tokenization, and in-context learning. For example,
Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to
43.4% and LibriSpeech samples to 16.4% of their raw size, beating
domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively.
Finally, we show that the prediction-compression equivalence allows us to use
any compressor (like gzip) to build a conditional generative model. | [
"Grégoire Delétang",
"Anian Ruoss",
"Paul-Ambroise Duquenne",
"Elliot Catt",
"Tim Genewein",
"Christopher Mattern",
"Jordi Grau-Moya",
"Li Kevin Wenliang",
"Matthew Aitchison",
"Laurent Orseau",
"Marcus Hutter",
"Joel Veness"
] | 2023-09-19 14:50:38 | http://arxiv.org/abs/2309.10668v1 | http://arxiv.org/pdf/2309.10668v1 | 2309.10668v1 |
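The prediction-compression equivalence above rests on the fact that any predictive distribution yields a lossless code of about −log2 p(x) bits per symbol via arithmetic coding. The toy sketch below compares a Laplace-smoothed byte-bigram predictor against gzip (via zlib); the input file name is a placeholder.

```python
# Sketch: code length of a predictive model (-log2 p per symbol) vs. gzip.
import zlib
import math
from collections import Counter

text = open("sample.txt", "rb").read()  # placeholder input file
counts = Counter(zip(text, text[1:]))   # byte-bigram counts
ctx = Counter(text[:-1])                # context (first byte) counts

bits = 0.0
for a, b in zip(text, text[1:]):        # Laplace-smoothed bigram predictor
    p = (counts[(a, b)] + 1) / (ctx[a] + 256)
    bits += -math.log2(p)

model_ratio = bits / 8 / len(text)      # compressed bytes per original byte
gzip_ratio = len(zlib.compress(text, 9)) / len(text)
print(f"bigram predictor: {model_ratio:.3f}, gzip: {gzip_ratio:.3f}")
```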
Analysing race and sex bias in brain age prediction | Brain age prediction from MRI has become a popular imaging biomarker
associated with a wide range of neuropathologies. The datasets used for
training, however, are often skewed and imbalanced regarding demographics,
potentially making brain age prediction models susceptible to bias. We analyse
the commonly used ResNet-34 model by conducting a comprehensive subgroup
performance analysis and feature inspection. The model is trained on 1,215
T1-weighted MRI scans from Cam-CAN and IXI, and tested on UK Biobank
(n=42,786), split into six racial and biological sex subgroups. With the
objective of comparing the performance between subgroups, measured by the
absolute prediction error, we use a Kruskal-Wallis test followed by two
post-hoc Conover-Iman tests to inspect bias across race and biological sex. To
examine biases in the generated features, we use PCA for dimensionality
reduction and employ two-sample Kolmogorov-Smirnov tests to identify
distribution shifts among subgroups. Our results reveal statistically
significant differences in predictive performance between Black and White,
Black and Asian, and male and female subjects. Seven out of twelve pairwise
comparisons show statistically significant differences in the feature
distributions. Our findings call for further analysis of brain age prediction
models. | [
"Carolina Piçarra",
"Ben Glocker"
] | 2023-09-19 14:40:19 | http://arxiv.org/abs/2309.10835v1 | http://arxiv.org/pdf/2309.10835v1 | 2309.10835v1 |
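The statistical machinery in this abstract is standard and easy to reproduce. The sketch below runs a Kruskal-Wallis test across subgroup absolute errors and a two-sample Kolmogorov-Smirnov test on one subgroup pair, using simulated data in place of the UK Biobank errors; note that scipy does not ship the post-hoc Conover-Iman test (packages such as scikit-posthocs provide it).

```python
# Subgroup bias tests in the spirit of the abstract, on synthetic errors.
import numpy as np
from scipy.stats import kruskal, ks_2samp

rng = np.random.default_rng(0)
errors = {g: np.abs(rng.normal(mu, 3.0, 500))        # absolute prediction errors
          for g, mu in [("White", 4.0), ("Black", 5.1), ("Asian", 4.6)]}

H, p = kruskal(*errors.values())                     # any subgroup difference?
print(f"Kruskal-Wallis: H={H:.2f}, p={p:.4f}")

stat, p_ks = ks_2samp(errors["Black"], errors["White"])  # distribution shift
print(f"KS Black vs White: D={stat:.3f}, p={p_ks:.4f}")
```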
Implementing a new fully stepwise decomposition-based sampling technique for the hybrid water level forecasting model in real-world application | Various time variant non-stationary signals need to be pre-processed properly
in hydrological time series forecasting in real world, for example, predictions
of water level. Decomposition method is a good candidate and widely used in
such a pre-processing problem. However, decomposition methods with an
inappropriate sampling technique may introduce future data which is not
available in practical applications, and result in incorrect
decomposition-based forecasting models. In this work, a novel Fully Stepwise
Decomposition-Based (FSDB) sampling technique is carefully designed for the
decomposition-based forecasting model, strictly avoiding introducing future
information. This sampling technique with decomposition methods, such as
Variational Mode Decomposition (VMD) and Singular spectrum analysis (SSA), is
applied to predict water level time series in three different stations of
Guoyang and Chaohu basins in China. Results of VMD-based hybrid model using
FSDB sampling technique show that Nash-Sutcliffe Efficiency (NSE) coefficient
is increased by 6.4%, 28.8% and 7.0% in three stations respectively, compared
with those obtained from the currently most advanced sampling technique.
Meanwhile, for the series of SSA-based experiments, NSE is increased by 3.2%, 3.1%
and 1.1% respectively. We conclude that the newly developed FSDB sampling
technique can be used to enhance the performance of decomposition-based hybrid
model in real-world water level time series forecasting. | [
"Ziqian Zhang",
"Nana Bao",
"Xingting Yan",
"Aokai Zhu",
"Chenyang Li",
"Mingyu Liu"
] | 2023-09-19 14:40:13 | http://arxiv.org/abs/2309.10658v1 | http://arxiv.org/pdf/2309.10658v1 | 2309.10658v1 |
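The core of the fully stepwise idea can be sketched in a few lines: at every forecast origin t, the decomposition sees only the observed prefix of the series, so no future information leaks into the training samples. In the sketch below, `decompose` is a placeholder for VMD or SSA returning an array of modes.

```python
# Leakage-free stepwise sampling sketch: decompose only the observed prefix
# series[:t] at each forecast origin (naive pipelines decompose the full
# series once, implicitly using future data).
import numpy as np

def build_samples(series, window, decompose):
    X, y = [], []
    for t in range(window, len(series)):
        modes = decompose(series[:t])          # stepwise: past data only
        X.append(modes[:, -window:])           # last `window` steps per mode
        y.append(series[t])                    # next observation as target
    return np.stack(X), np.array(y)
```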
Learning Adaptive Safety for Multi-Agent Systems | Ensuring safety in dynamic multi-agent systems is challenging due to limited
information about the other agents. Control Barrier Functions (CBFs) are
showing promise for safety assurance but current methods make strong
assumptions about other agents and often rely on manual tuning to balance
safety, feasibility, and performance. In this work, we delve into the problem
of adaptive safe learning for multi-agent systems with CBF. We show how
emergent behavior can be profoundly influenced by the CBF configuration,
highlighting the necessity for a responsive and dynamic approach to CBF design.
We present ASRL, a novel adaptive safe RL framework, to fully automate the
optimization of policy and CBF coefficients, to enhance safety and long-term
performance through reinforcement learning. By directly interacting with the
other agents, ASRL learns to cope with diverse agent behaviours and maintains
the cost violations below a desired limit. We evaluate ASRL in a multi-robot
system and a competitive multi-agent racing scenario, against learning-based
and control-theoretic approaches. We empirically demonstrate the efficacy and
flexibility of ASRL, and assess generalization and scalability to
out-of-distribution scenarios. Code and supplementary material are public
online. | [
"Luigi Berducci",
"Shuo Yang",
"Rahul Mangharam",
"Radu Grosu"
] | 2023-09-19 14:39:39 | http://arxiv.org/abs/2309.10657v2 | http://arxiv.org/pdf/2309.10657v2 | 2309.10657v2 |
A spectrum of physics-informed Gaussian processes for regression in engineering | Despite the growing availability of sensing and data in general, we remain
unable to fully characterise many in-service engineering systems and structures
from a purely data-driven approach. The vast data and resources available to
capture human activity are unmatched in our engineered world, and, even in
cases where data could be referred to as ``big,'' they will rarely hold
information across operational windows or life spans. This paper pursues the
combination of machine learning technology and physics-based reasoning to
enhance our ability to make predictive models with limited data. By explicitly
linking the physics-based view of stochastic processes with a data-based
regression approach, a spectrum of possible Gaussian process models is
introduced that enable the incorporation of different levels of expert
knowledge of a system. Examples illustrate how these approaches can
significantly reduce reliance on data collection whilst also increasing the
interpretability of the model, another important consideration in this context. | [
"Elizabeth J Cross",
"Timothy J Rogers",
"Daniel J Pitchforth",
"Samuel J Gibson",
"Matthew R Jones"
] | 2023-09-19 14:39:03 | http://arxiv.org/abs/2309.10656v1 | http://arxiv.org/pdf/2309.10656v1 | 2309.10656v1 |
Training neural mapping schemes for satellite altimetry with simulation data | Satellite altimetry combined with data assimilation and optimal interpolation
schemes have deeply renewed our ability to monitor sea surface dynamics.
Recently, deep learning (DL) schemes have emerged as appealing solutions to
address space-time interpolation problems. The scarcity of real altimetry
datasets, in terms of space-time coverage of the sea surface, however, impedes
the training of state-of-the-art neural schemes on real-world case-studies.
Here, we leverage both simulations of ocean dynamics and satellite altimeters
to train simulation-based neural mapping schemes for the sea surface height and
demonstrate their performance for real altimetry datasets. We analyze further
how the ocean simulation dataset used during the training phase impacts this
performance. This experimental analysis covers both the resolution from
eddy-present configurations to eddy-rich ones, forced simulations vs.
reanalyses using data assimilation and tide-free vs. tide-resolving
simulations. Our benchmarking framework focuses on a Gulf Stream region for a
realistic 5-altimeter constellation using NEMO ocean simulations and 4DVarNet
mapping schemes. All simulation-based 4DVarNets outperform the operational
observation-driven and reanalysis products, namely DUACS and GLORYS. The more
realistic the ocean simulation dataset used during the training phase, the
better the mapping. The best 4DVarNet mapping was trained from an eddy-rich and
tide-free simulation datasets. It improves the resolved longitudinal scale from
151 kilometers for DUACS and 241 kilometers for GLORYS to 98 kilometers and
reduces the root mean squared error (RMSE) by 23% and 61%. These results open
research avenues for new synergies between ocean modelling and ocean
observation using learning-based approaches. | [
"Quentin Febvre",
"Julien Le Sommer",
"Clément Ubelmann",
"Ronan Fablet"
] | 2023-09-19 14:32:25 | http://arxiv.org/abs/2309.14350v1 | http://arxiv.org/pdf/2309.14350v1 | 2309.14350v1 |
Towards Energy-Aware Federated Traffic Prediction for Cellular Networks | Cellular traffic prediction is a crucial activity for optimizing networks in
fifth-generation (5G) networks and beyond, as accurate forecasting is essential
for intelligent network design, resource allocation and anomaly mitigation.
Although machine learning (ML) is a promising approach to effectively predict
network traffic, the centralization of massive data in a single data center
raises issues regarding confidentiality, privacy and data transfer demands. To
address these challenges, federated learning (FL) emerges as an appealing ML
training framework which offers highly accurate predictions through parallel
distributed computations. However, the environmental impact of these methods is
often overlooked, which calls into question their sustainability. In this
paper, we address the trade-off between accuracy and energy consumption in FL
by proposing a novel sustainability indicator that allows assessing the
feasibility of ML models. Then, we comprehensively evaluate state-of-the-art
deep learning (DL) architectures in a federated scenario using real-world
measurements from base station (BS) sites in the area of Barcelona, Spain. Our
findings indicate that larger ML models achieve marginally improved performance
but have a significant environmental impact in terms of carbon footprint, which
makes them impractical for real-world applications. | [
"Vasileios Perifanis",
"Nikolaos Pavlidis",
"Selim F. Yilmaz",
"Francesc Wilhelmi",
"Elia Guerra",
"Marco Miozzo",
"Pavlos S. Efraimidis",
"Paolo Dini",
"Remous-Aris Koutsiamanis"
] | 2023-09-19 14:28:09 | http://arxiv.org/abs/2309.10645v1 | http://arxiv.org/pdf/2309.10645v1 | 2309.10645v1 |
Geometric structure of Deep Learning networks and construction of global ${\mathcal L}^2$ minimizers | In this paper, we provide a geometric interpretation of the structure of Deep
Learning (DL) networks, characterized by $L$ hidden layers, a ramp activation
function, an ${\mathcal L}^2$ Schatten class (or Hilbert-Schmidt) cost
function, and input and output spaces ${\mathbb R}^Q$ with equal dimension
$Q\geq1$. The hidden layers are defined on spaces ${\mathbb R}^{Q}$, as well.
We apply our recent results on shallow neural networks to construct an explicit
family of minimizers for the global minimum of the cost function in the case
$L\geq Q$, which we show to be degenerate. In the context presented here, the
hidden layers of the DL network "curate" the training inputs by recursive
application of a truncation map that minimizes the noise to signal ratio of the
training inputs. Moreover, we determine a set of $2^Q-1$ distinct degenerate
local minima of the cost function. | [
"Thomas Chen",
"Patricia Muñoz Ewald"
] | 2023-09-19 14:20:55 | http://arxiv.org/abs/2309.10639v2 | http://arxiv.org/pdf/2309.10639v2 | 2309.10639v2 |
Sparser Random Networks Exist: Enforcing Communication-Efficient Federated Learning via Regularization | This work presents a new method for enhancing communication efficiency in
stochastic Federated Learning that trains over-parameterized random networks.
In this setting, a binary mask is optimized instead of the model weights, which
are kept fixed. The mask characterizes a sparse sub-network that is able to
generalize as well as a smaller target network. Importantly, sparse binary
masks are exchanged rather than the floating point weights in traditional
federated learning, reducing communication cost to at most 1 bit per parameter.
We show that previous state-of-the-art stochastic methods fail to find the
sparse networks that can reduce the communication and storage overhead using
consistent loss objectives. To address this, we propose adding a regularization
term to local objectives that encourages sparser solutions by eliminating
redundant features across sub-networks. Extensive experiments demonstrate
significant improvements in communication and memory efficiency of up to five
orders of magnitude compared to the literature, with minimal performance degradation in
validation accuracy in some instances. | [
"Mohamad Mestoukirdi",
"Omid Esrafilian",
"David Gesbert",
"Qianrui Li",
"Nicolas Gresset"
] | 2023-09-19 14:05:12 | http://arxiv.org/abs/2309.10834v1 | http://arxiv.org/pdf/2309.10834v1 | 2309.10834v1 |
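A minimal sketch of the mask-only training loop described above: weights stay frozen at their random initialization, clients optimize real-valued scores with a straight-through binarization plus a sparsity penalty, and only 1-bit masks travel. The L1-style regularizer and majority-vote aggregation are illustrative stand-ins for the paper's exact choices.

```python
# Mask-only federated learning sketch (assumptions as in the lead-in).
import numpy as np

rng = np.random.default_rng(0)
w_frozen = rng.standard_normal(1000)           # shared random weights (fixed)

def client_update(scores, grad_fn, lam=1e-3, lr=0.1, steps=10):
    for _ in range(steps):
        mask = (scores > 0).astype(float)      # straight-through binarization
        g = grad_fn(w_frozen * mask)           # gradient w.r.t. masked weights
        scores -= lr * (g * w_frozen + lam * np.sign(scores))  # + sparsity term
    return (scores > 0).astype(np.uint8)       # 1 bit per parameter on the wire

def server_aggregate(masks):                   # majority vote over client masks
    return (np.mean(masks, axis=0) > 0.5).astype(np.uint8)
```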
Large language models can accurately predict searcher preferences | Relevance labels, which indicate whether a search result is valuable to a
searcher, are key to evaluating and optimising search systems. The best way to
capture the true preferences of users is to ask them for their careful feedback
on which results would be useful, but this approach does not scale to produce a
large number of labels. Getting relevance labels at scale is usually done with
third-party labellers, who judge on behalf of the user, but there is a risk of
low-quality data if the labeller doesn't understand user needs. To improve
quality, one standard approach is to study real users through interviews, user
studies and direct feedback, find areas where labels are systematically
disagreeing with users, then educate labellers about user needs through judging
guidelines, training and monitoring. This paper introduces an alternate
approach for improving label quality. It takes careful feedback from real
users, which by definition is the highest-quality first-party gold data that
can be derived, and develops a large language model prompt that agrees with
that data.
We present ideas and observations from deploying language models for
large-scale relevance labelling at Bing, and illustrate with data from TREC. We
have found large language models can be effective, with accuracy as good as
human labellers and similar capability to pick the hardest queries, best runs,
and best groups. Systematic changes to the prompts make a difference in
accuracy, but so too do simple paraphrases. Measuring agreement with real
searchers requires high-quality ``gold'' labels, but with these we find that
models produce better labels than third-party workers, for a fraction of the
cost, and these labels let us train notably better rankers. | [
"Paul Thomas",
"Seth Spielman",
"Nick Craswell",
"Bhaskar Mitra"
] | 2023-09-19 13:55:39 | http://arxiv.org/abs/2309.10621v1 | http://arxiv.org/pdf/2309.10621v1 | 2309.10621v1 |
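The production prompt is not reproduced in the abstract, but a relevance-labelling prompt in the same spirit could look like the sketch below; the wording, label set, and `call_llm` hook are hypothetical.

```python
# Hypothetical relevance-labelling prompt; `call_llm` is a placeholder hook.
PROMPT = """You are a search quality rater. Given a query and a result,
judge how well the result satisfies the searcher's need.

Query: {query}
Result: {title}
{snippet}

Answer with a single label (Perfect, Good, Fair, or Bad) and one
sentence of justification."""

def label(query, title, snippet, call_llm):
    return call_llm(PROMPT.format(query=query, title=title, snippet=snippet))
```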
Source-free Active Domain Adaptation for Diabetic Retinopathy Grading Based on Ultra-wide-field Fundus Image | Domain adaptation (DA) has been widely applied in the diabetic retinopathy
(DR) grading of unannotated ultra-wide-field (UWF) fundus images, which can
transfer annotated knowledge from labeled color fundus images. However,
suffering from huge domain gaps and complex real-world scenarios, the DR
grading performance of most mainstream DA is far from that of clinical
diagnosis. To tackle this, we propose a novel source-free active domain
adaptation (SFADA) in this paper. Specifically, we focus on the DR grading problem
itself and propose to generate features of color fundus images with
continuously evolving relationships of DRs, actively select a few valuable UWF
fundus images for labeling with local representation matching, and adapt model
on UWF fundus images with DR lesion prototypes. Notably, the SFADA also takes
data privacy and computational efficiency into consideration. Extensive
experimental results demonstrate that our proposed SFADA achieves
state-of-the-art DR grading performance, increasing accuracy by 20.9% and
quadratic weighted kappa by 18.63% compared with baseline and reaching 85.36%
and 92.38% respectively. These investigations show that our approach holds
promise for real clinical practice. | [
"Jinye Ran",
"Guanghua Zhang",
"Ximei Zhang",
"Juan Xie",
"Fan Xia",
"Hao Zhang"
] | 2023-09-19 13:52:06 | http://arxiv.org/abs/2309.10619v1 | http://arxiv.org/pdf/2309.10619v1 | 2309.10619v1 |
A Dynamic Linear Bias Incorporation Scheme for Nonnegative Latent Factor Analysis | High-Dimensional and Incomplete (HDI) data is commonly encountered in big
data-related applications like social network services systems, which concern
the limited interactions among numerous nodes. Knowledge acquisition
from HDI data is a vital issue in the domain of data science due to their
embedded rich patterns like node behaviors, where the fundamental task is to
perform HDI data representation learning. Nonnegative Latent Factor Analysis
(NLFA) models have proven to possess the superiority to address this issue,
where a linear bias incorporation (LBI) scheme is important in preventing
training overshooting and fluctuation, as well as keeping the model from
premature convergence. However, existing LBI schemes are all static ones
where the linear biases are fixed, which significantly restricts the
scalability of the resultant NLFA model and results in loss of representation
learning ability to HDI data. Motivated by the above discoveries, this paper
innovatively presents the dynamic linear bias incorporation (DLBI) scheme. It
firstly extends the linear bias vectors into matrices, and then builds a binary
weight matrix to switch the active/inactive states of the linear biases. The
weight matrix's each entry switches between the binary states dynamically
corresponding to the linear bias value variation, thereby establishing the
dynamic linear biases for an NLFA model. Empirical studies on three HDI
datasets from real applications demonstrate that the proposed DLBI-based NLFA
model obtains notably higher representation accuracy than state-of-the-art
models do, as well as highly-competitive computational efficiency. | [
"Yurong Zhong",
"Zhe Xie",
"Weiling Li",
"Xin Luo"
] | 2023-09-19 13:48:26 | http://arxiv.org/abs/2309.10618v1 | http://arxiv.org/pdf/2309.10618v1 | 2309.10618v1 |
An Extendable Python Implementation of Robust Optimisation Monte Carlo | Performing inference in statistical models with an intractable likelihood is
challenging; therefore, most likelihood-free inference (LFI) methods encounter
accuracy and efficiency limitations. In this paper, we present the
implementation of the LFI method Robust Optimisation Monte Carlo (ROMC) in the
Python package ELFI. ROMC is a novel and efficient (highly-parallelizable) LFI
framework that provides accurate weighted samples from the posterior. Our
implementation can be used in two ways. First, a scientist may use it as an
out-of-the-box LFI algorithm; we provide an easy-to-use API harmonized with the
principles of ELFI, enabling effortless comparisons with the rest of the
methods included in the package. Additionally, we have carefully split ROMC
into isolated components for supporting extensibility. A researcher may
experiment with novel method(s) for solving part(s) of ROMC without
reimplementing everything from scratch. In both scenarios, the ROMC parts can
run in a fully-parallelized manner, exploiting all CPU cores. We also provide
helpful functionalities for (i) inspecting the inference process and (ii)
evaluating the obtained samples. Finally, we test the robustness of our
implementation on some typical LFI examples. | [
"Vasilis Gkolemis",
"Michael Gutmann",
"Henri Pesonen"
] | 2023-09-19 13:37:47 | http://arxiv.org/abs/2309.10612v1 | http://arxiv.org/pdf/2309.10612v1 | 2309.10612v1 |
Asteroids co-orbital motion classification based on Machine Learning | In this work, we explore how to classify asteroids in co-orbital motion with
a given planet using Machine Learning. We consider four different kinds of
motion in mean motion resonance with the planet, nominally Tadpole, Horseshoe
and Quasi-satellite, building 3 datasets defined as Real (taking the
ephemerides of real asteroids from the JPL Horizons system), Ideal and
Perturbed (both simulated, obtained by propagating initial conditions
considering two different dynamical systems) for training and testing the
Machine Learning algorithms in different conditions.
The time series of the variable theta (angle related to the resonance) are
studied with a data analysis pipeline defined ad hoc for the problem and
composed by: data creation and annotation, time series features extraction
thanks to the tsfresh package (potentially followed by selection and
standardization) and the application of Machine Learning algorithms for
Dimensionality Reduction and Classification. This approach, based on features
extracted from the time series, allows us to work with a smaller amount of data
than Deep Learning algorithms require, while also allowing us to rank the
importance of the features. Physical interpretability of the features is
another key point of this approach. In addition, we introduce the SHapley
Additive exPlanations (SHAP) technique for explainability.
Different training and test sets are used, in order to understand the power
and the limits of our approach. The results show that the algorithms are able
to correctly identify and classify the time series, with a high degree of
performance. | [
"Giulia Ciacci",
"Andrea Barucci",
"Sara Di Ruzza",
"Elisa Maria Alessi"
] | 2023-09-19 13:19:31 | http://arxiv.org/abs/2309.10603v1 | http://arxiv.org/pdf/2309.10603v1 | 2309.10603v1 |
Unsupervised Deep Cross-Language Entity Alignment | Cross-lingual entity alignment is the task of finding the same semantic
entities from different language knowledge graphs. In this paper, we propose a
simple and novel unsupervised method for cross-language entity alignment. We
utilize the deep learning multi-language encoder combined with a machine
translator to encode knowledge graph text, which reduces the reliance on label
data. Unlike traditional methods that only emphasize global or local alignment,
our method simultaneously considers both alignment strategies. We first view
the alignment task as a bipartite matching problem and then adopt the
re-exchanging idea to accomplish alignment. Compared with the traditional
bipartite matching algorithm that only gives one optimal solution, our
algorithm generates ranked matching results, which enables many potential
downstream tasks. Additionally, our method can adopt two different types of
optimization (minimal and maximal) in the bipartite matching process, which
provides more flexibility. Our evaluation shows that we scored Hits@1 rates of
0.966, 0.990, and 0.996 on the DBP15K dataset for the Chinese-, Japanese-, and
French-to-English alignment tasks. We outperformed the state-of-the-art method in
unsupervised and semi-supervised categories. Compared with the state-of-the-art
supervised method, our method outperforms it by 2.6% and 0.4% on the Ja-En and
Fr-En alignment tasks, while scoring marginally lower by 0.2% on the Zh-En
alignment task. | [
"Chuanyu Jiang",
"Yiming Qian",
"Lijun Chen",
"Yang Gu",
"Xia Xie"
] | 2023-09-19 13:12:48 | http://arxiv.org/abs/2309.10598v1 | http://arxiv.org/pdf/2309.10598v1 | 2309.10598v1 |
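The global-alignment view above (alignment as bipartite matching) maps directly onto a linear assignment problem. A minimal sketch over cosine similarities of entity embeddings, using scipy's assignment solver, follows; note it returns a single optimal matching, whereas the re-exchanging procedure described above additionally produces ranked results.

```python
# Bipartite-matching sketch for entity alignment over embedding similarities.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align(src_emb, tgt_emb):
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                          # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    return list(zip(rows, cols))
```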
Motif-Centric Representation Learning for Symbolic Music | Music motif, as a conceptual building block of composition, is crucial for
music structure analysis and automatic composition. While human listeners can
identify motifs easily, existing computational models fall short in
representing motifs and their developments. The reason is that the nature of
motifs is implicit, and the diversity of motif variations extends beyond simple
repetitions and modulations. In this study, we aim to learn the implicit
relationship between motifs and their variations via representation learning,
using the Siamese network architecture and a pretraining and fine-tuning
pipeline. A regularization-based method, VICReg, is adopted for pretraining,
while contrastive learning is used for fine-tuning. Experimental results on a
retrieval-based task show that these two methods complement each other,
yielding an improvement of 12.6% in the area under the precision-recall curve.
Lastly, we visualize the acquired motif representations, offering an intuitive
comprehension of the overall structure of a music piece. As far as we know,
this work marks a noteworthy step forward in computational modeling of music
motifs. We believe that this work lays the foundations for future applications
of motifs in automatic music composition and music information retrieval. | [
"Yuxuan Wu",
"Roger B. Dannenberg",
"Gus Xia"
] | 2023-09-19 13:09:03 | http://arxiv.org/abs/2309.10597v1 | http://arxiv.org/pdf/2309.10597v1 | 2309.10597v1 |
Decentralized Online Learning in Task Assignment Games for Mobile Crowdsensing | The problem of coordinated data collection is studied for a mobile
crowdsensing (MCS) system. A mobile crowdsensing platform (MCSP) sequentially
publishes sensing tasks to the available mobile units (MUs) that signal their
willingness to participate in a task by sending sensing offers back to the
MCSP. From the received offers, the MCSP decides the task assignment. A stable
task assignment must address two challenges: the MCSP's and MUs' conflicting
goals, and the uncertainty about the MUs' required efforts and preferences. To
overcome these challenges a novel decentralized approach combining matching
theory and online learning, called collision-avoidance multi-armed bandit with
strategic free sensing (CA-MAB-SFS), is proposed. The task assignment problem
is modeled as a matching game considering the MCSP's and MUs' individual goals
while the MUs learn their efforts online. Our innovative "free-sensing"
mechanism significantly improves the MU's learning process while reducing
collisions during task allocation. The stable regret of CA-MAB-SFS, i.e., the
loss of learning, is analytically shown to be bounded by a sublinear function,
ensuring the convergence to a stable optimal solution. Simulation results show
that CA-MAB-SFS increases the MUs' and the MCSP's satisfaction compared to
state-of-the-art methods while reducing the average task completion time by at
least 16%. | [
"Bernd Simon",
"Andrea Ortiz",
"Walid Saad",
"Anja Klein"
] | 2023-09-19 13:07:15 | http://arxiv.org/abs/2309.10594v1 | http://arxiv.org/pdf/2309.10594v1 | 2309.10594v1 |
Adversarial Attacks Against Uncertainty Quantification | Machine-learning models can be fooled by adversarial examples, i.e.,
carefully-crafted input perturbations that force models to output wrong
predictions. While uncertainty quantification has been recently proposed to
detect adversarial inputs, under the assumption that such attacks exhibit a
higher prediction uncertainty than pristine data, it has been shown that
adaptive attacks specifically aimed at reducing also the uncertainty estimate
can easily bypass this defense mechanism. In this work, we focus on a different
adversarial scenario in which the attacker is still interested in manipulating
the uncertainty estimate, but regardless of the correctness of the prediction;
in particular, the goal is to undermine the use of machine-learning models when
their outputs are consumed by a downstream module or by a human operator.
Following such direction, we: \textit{(i)} design a threat model for attacks
targeting uncertainty quantification; \textit{(ii)} devise different attack
strategies on conceptually different UQ techniques spanning both
classification and semantic segmentation problems; \textit{(iii)} conduct a
first complete and extensive analysis to compare the differences between some
of the most employed UQ approaches under attack. Our extensive experimental
analysis shows that our attacks are more effective in manipulating uncertainty
quantification measures than attacks that also aim to induce misclassifications. | [
"Emanuele Ledda",
"Daniele Angioni",
"Giorgio Piras",
"Giorgio Fumera",
"Battista Biggio",
"Fabio Roli"
] | 2023-09-19 12:54:09 | http://arxiv.org/abs/2309.10586v1 | http://arxiv.org/pdf/2309.10586v1 | 2309.10586v1 |
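A concrete instance of the threat model above is a PGD-style attack that pushes the predictive entropy of a softmax classifier down without any misclassification term. A minimal PyTorch sketch, with illustrative step sizes; the paper's attacks cover several conceptually different UQ techniques beyond this one.

```python
# Sketch of an uncertainty-targeting attack: PGD descent on predictive entropy.
import torch
import torch.nn.functional as F

def entropy_attack(model, x, eps=8 / 255, alpha=2 / 255, steps=10):
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        p = F.softmax(model(x_adv), dim=1)
        H = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()  # entropy
        grad, = torch.autograd.grad(H, x_adv)
        x_adv = x_adv - alpha * grad.sign()          # lower the uncertainty
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to L_inf ball
    return x_adv.detach().clamp(0, 1)
```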
PDRL: Multi-Agent based Reinforcement Learning for Predictive Monitoring | Reinforcement learning has been increasingly applied in monitoring
applications because of its ability to learn from previous experiences and to
make adaptive decisions. However, existing machine learning-based health
monitoring applications are mostly supervised learning algorithms, trained on
labels, and cannot make adaptive decisions in an uncertain, complex
environment. This study proposes a novel and generic system, predictive deep
reinforcement learning (PDRL) with multiple RL agents in a time series
forecasting environment. The proposed generic framework accommodates virtual
Deep Q Network (DQN) agents to monitor predicted future states of a complex
environment with a well-defined reward policy so that the agents learn existing
knowledge while maximizing their rewards. In the evaluation process of the
proposed framework, three DRL agents were deployed to monitor a subject's
future heart rate, respiration, and temperature predicted using a BiLSTM model.
With each iteration, the three agents were able to learn the associated
patterns and their cumulative rewards gradually increased. It outperformed the
baseline models for all three monitoring agents. The proposed PDRL framework is
able to achieve state-of-the-art performance in the time series forecasting
process. The proposed DRL agents and the deep learning model in the PDRL
framework can be customized to implement transfer learning in other forecasting
applications, such as traffic and weather, and to monitor their states. The
PDRL framework is able to learn the future states in traffic and weather
forecasting, and the cumulative rewards gradually increase over each
episode. | [
"Thanveer Shaik",
"Xiaohui Tao",
"Lin Li",
"Haoran Xie",
"U R Acharya",
"Raj Gururajan",
"Xujuan Zhou"
] | 2023-09-19 12:35:08 | http://arxiv.org/abs/2309.10576v2 | http://arxiv.org/pdf/2309.10576v2 | 2309.10576v2 |
Task Graph offloading via Deep Reinforcement Learning in Mobile Edge Computing | Various mobile applications that comprise dependent tasks are gaining
widespread popularity and are increasingly complex. These applications often
have low-latency requirements, resulting in a significant surge in demand for
computing resources. With the emergence of mobile edge computing (MEC),
offloading application tasks onto small-scale devices deployed at the edge of
the mobile network becomes the most significant issue for obtaining a
high-quality user experience. However, since the environment of MEC is dynamic,
most existing works focusing on task graph offloading, which rely heavily on
expert knowledge or accurate analytical models, fail to fully adapt to such
environmental changes, resulting in the reduction of user experience. This
paper investigates the task graph offloading in MEC, considering the
time-varying computation capabilities of edge computing devices. To adapt to
environmental changes, we model the task graph scheduling for computation
offloading as a Markov Decision Process (MDP). Then, we design a deep
reinforcement learning algorithm (SATA-DRL) to learn the task scheduling
strategy from the interaction with the environment, to improve user experience.
Extensive simulations validate that SATA-DRL is superior to existing strategies
in terms of reducing average makespan and deadline violation. | [
"Jiagang Liu",
"Yun Mi",
"Xinyu Zhang"
] | 2023-09-19 12:26:56 | http://arxiv.org/abs/2309.10569v3 | http://arxiv.org/pdf/2309.10569v3 | 2309.10569v3 |
Multimodal Modeling For Spoken Language Identification | Spoken language identification refers to the task of automatically predicting
the spoken language in a given utterance. Conventionally, it is modeled as a
speech-based language identification task. Prior techniques have been
constrained to a single modality; however, in the case of video data there is a
wealth of other metadata that may be beneficial for this task. In this work, we
propose MuSeLI, a Multimodal Spoken Language Identification method, which
delves into the use of various metadata sources to enhance language
identification. Our study reveals that metadata such as video title,
description and geographic location provide substantial information to identify
the spoken language of the multimedia recording. We conduct experiments using
two diverse public datasets of YouTube videos, and obtain state-of-the-art
results on the language identification task. We additionally conduct an
ablation study that describes the distinct contribution of each modality for
language recognition. | [
"Shikhar Bharadwaj",
"Min Ma",
"Shikhar Vashishth",
"Ankur Bapna",
"Sriram Ganapathy",
"Vera Axelrod",
"Siddharth Dalmia",
"Wei Han",
"Yu Zhang",
"Daan van Esch",
"Sandy Ritchie",
"Partha Talukdar",
"Jason Riesa"
] | 2023-09-19 12:21:39 | http://arxiv.org/abs/2309.10567v1 | http://arxiv.org/pdf/2309.10567v1 | 2309.10567v1 |
A Hierarchical Neural Framework for Classification and its Explanation in Large Unstructured Legal Documents | Automatic legal judgment prediction and its explanation suffer from the
problem of long case documents exceeding tens of thousands of words, in
general, and having a non-uniform structure. Predicting judgments from such
documents and extracting their explanation becomes a challenging task, more so
on documents with no structural annotation. We define this problem as "scarce
annotated legal documents" and explore their lack of structural information and
their long lengths with a deep-learning-based classification framework which we
call MESc; "Multi-stage Encoder-based Supervised with-clustering"; for judgment
prediction. We explore the adaptability of LLMs with multi-billion parameters
(GPT-Neo and GPT-J) to legal texts and their intra-domain (legal) transfer
learning capacity. Alongside this, we compare their performance and
adaptability with MESc and the impact of combining embeddings from their last
layers. For such hierarchical models, we also propose an explanation extraction
algorithm named ORSE; Occlusion sensitivity-based Relevant Sentence Extractor;
based on the input-occlusion sensitivity of the model, to explain the
predictions with the most relevant sentences from the document. We explore
these methods and test their effectiveness with extensive experiments and
ablation studies on legal documents from India, the European Union, and the
United States with the ILDC dataset and a subset of the LexGLUE dataset. MESc
achieves a minimum total performance gain of approximately 2 points over
previous state-of-the-art proposed methods, while ORSE applied on MESc achieves
a total average gain of 50% over the baseline explainability scores. | [
"Nishchal Prasad",
"Mohand Boughanem",
"Taoufik Dkaki"
] | 2023-09-19 12:18:28 | http://arxiv.org/abs/2309.10563v2 | http://arxiv.org/pdf/2309.10563v2 | 2309.10563v2 |
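Occlusion-sensitivity explanation in the spirit of ORSE can be sketched compactly: remove one sentence at a time and rank sentences by the resulting drop in the predicted class probability. In the sketch below, `predict_proba` is a hypothetical document scorer, not the paper's code.

```python
# Occlusion-based sentence relevance sketch (assumptions as in the lead-in).
def occlusion_relevance(sentences, predict_proba, label):
    base = predict_proba(" ".join(sentences))[label]
    scores = []
    for i in range(len(sentences)):
        reduced = " ".join(s for j, s in enumerate(sentences) if j != i)
        scores.append(base - predict_proba(reduced)[label])   # sensitivity
    order = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [(sentences[i], scores[i]) for i in order]         # ranked sentences
```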
Hybrid State Space-based Learning for Sequential Data Prediction with Joint Optimization | We investigate nonlinear prediction/regression in an online setting and
introduce a hybrid model that effectively mitigates, via a joint mechanism
through a state space formulation, the domain-specific feature engineering
issues of conventional nonlinear prediction models and achieves an
efficient mix of nonlinear and linear components. In particular, we use
recursive structures to extract features from raw sequential data and a
traditional linear time series model to deal with the intricacies of the
sequential data, e.g., seasonality and trends. The state-of-the-art ensemble or
hybrid models typically train the base models in a disjoint manner, which is
not only time consuming but also sub-optimal due to the separation of modeling
or independent training. In contrast, for the first time in the literature, we
jointly optimize an enhanced recurrent neural network (LSTM) for automatic
feature extraction from raw data and an ARMA-family time series model (SARIMAX)
for effectively addressing peculiarities associated with time series data. We
achieve this by introducing novel state space representations for the base
models, which are then combined to provide a full state space representation of
the hybrid or the ensemble. Hence, we are able to jointly optimize both models
in a single pass via particle filtering, for which we also provide the update
equations. The introduced architecture is generic so that one can use other
recurrent architectures, e.g., GRUs, traditional time series-specific models,
e.g., ETS or other optimization methods, e.g., EKF, UKF. Due to such novel
combination and joint optimization, we demonstrate significant improvements in
widely publicized real life competition datasets. We also openly share our code
for further research and replicability of our results. | [
"Mustafa E. Aydın",
"Arda Fazla",
"Suleyman S. Kozat"
] | 2023-09-19 12:00:28 | http://arxiv.org/abs/2309.10553v1 | http://arxiv.org/pdf/2309.10553v1 | 2309.10553v1 |
A Neighbourhood-Aware Differential Privacy Mechanism for Static Word Embeddings | We propose a Neighbourhood-Aware Differential Privacy (NADP) mechanism
considering the neighbourhood of a word in a pretrained static word embedding
space to determine the minimal amount of noise required to guarantee a
specified privacy level. We first construct a nearest neighbour graph over the
words using their embeddings, and factorise it into a set of connected
components (i.e. neighbourhoods). We then separately apply different levels of
Gaussian noise to the words in each neighbourhood, determined by the set of
words in that neighbourhood. Experiments show that our proposed NADP mechanism
consistently outperforms multiple previously proposed DP mechanisms such as
Laplacian, Gaussian, and Mahalanobis in multiple downstream tasks, while
guaranteeing higher levels of privacy. | [
"Danushka Bollegala",
"Shuichi Otake",
"Tomoya Machide",
"Ken-ichi Kawarabayashi"
] | 2023-09-19 11:58:08 | http://arxiv.org/abs/2309.10551v1 | http://arxiv.org/pdf/2309.10551v1 | 2309.10551v1 |
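The neighbourhood construction above is straightforward to prototype: build a kNN graph over the embeddings, take its connected components as neighbourhoods, and add Gaussian noise per component. The rule for setting the per-neighbourhood noise scale in this sketch is a placeholder, not the paper's calibrated privacy guarantee.

```python
# Neighbourhood-aware noising sketch: kNN graph -> connected components ->
# per-component Gaussian noise (placeholder noise-scale rule).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

def nadp(emb, k=5, base_sigma=0.1):
    g = kneighbors_graph(emb, k, mode="connectivity")
    n_comp, labels = connected_components(g, directed=False)
    out = emb.copy()
    rng = np.random.default_rng(0)
    for c in range(n_comp):
        idx = np.where(labels == c)[0]
        sigma = base_sigma * emb[idx].std()    # placeholder per-neighbourhood scale
        out[idx] += rng.normal(0.0, sigma, emb[idx].shape)
    return out
```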
Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey | The metaverse is a nascent concept that envisions a virtual universe, a
collaborative space where individuals can interact, create, and participate in
a wide range of activities. Privacy in the metaverse is a critical concern as
the concept evolves and immersive virtual experiences become more prevalent.
The metaverse privacy problem refers to the challenges and concerns surrounding
the privacy of personal information and data within Virtual Reality (VR)
environments as the concept of a shared VR space becomes more accessible.
The metaverse will harness advancements from various technologies such as
Artificial Intelligence (AI), Extended Reality (XR), Mixed Reality (MR), and
5G/6G-based communication to provide personalized and immersive services to its
users. Moreover, to enable more personalized experiences, the metaverse relies
on the collection of fine-grained user data that leads to various privacy
issues. Therefore, before the potential of the metaverse can be fully realized,
privacy concerns related to personal information and data within VR
environments must be addressed. This includes safeguarding users' control over
their data, ensuring the security of their personal information, and protecting
in-world actions and interactions from unauthorized sharing. In this paper, we
explore various privacy challenges that future metaverses are expected to face,
given their reliance on AI for tracking users, creating XR and MR experiences,
and facilitating interactions. Moreover, we thoroughly analyze technical
solutions such as differential privacy, Homomorphic Encryption (HE), and
Federated Learning (FL) and discuss related sociotechnical issues regarding
privacy. | [
"Mahdi Alkaeed",
"Adnan Qayyum",
"Junaid Qadir"
] | 2023-09-19 11:56:12 | http://arxiv.org/abs/2310.10665v1 | http://arxiv.org/pdf/2310.10665v1 | 2310.10665v1 |
Mean Absolute Directional Loss as a New Loss Function for Machine Learning Problems in Algorithmic Investment Strategies | This paper investigates the issue of an adequate loss function in the
optimization of machine learning models used in the forecasting of financial
time series for the purpose of algorithmic investment strategies (AIS)
construction. We propose the Mean Absolute Directional Loss (MADL) function,
solving important problems of classical forecast error functions in extracting
information from forecasts to create efficient buy/sell signals in algorithmic
investment strategies. Finally, based on the data from two different asset
classes (cryptocurrencies: Bitcoin and commodities: Crude Oil), we show that
the new loss function enables us to select better hyperparameters for the LSTM
model and obtain more efficient investment strategies, with regard to
risk-adjusted return metrics on the out-of-sample data. | [
"Jakub Michańków",
"Paweł Sakowski",
"Robert Ślepaczuk"
] | 2023-09-19 11:52:13 | http://arxiv.org/abs/2309.10546v1 | http://arxiv.org/pdf/2309.10546v1 | 2309.10546v1 |
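One formulation of MADL consistent with the description above rewards a forecast whose sign matches the realized return, weighted by the return's magnitude, so the mean is negative when directions mostly agree; consult the paper for the exact definition.

```python
# MADL sketch: -1 * sign(R_i * Rhat_i) * |R_i|, averaged over the sample
# (one common formulation; verify against the paper before use).
import numpy as np

def madl(returns, predicted):
    return np.mean(-np.sign(returns * predicted) * np.abs(returns))
```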
Model Leeching: An Extraction Attack Targeting LLMs | Model Leeching is a novel extraction attack targeting Large Language Models
(LLMs), capable of distilling task-specific knowledge from a target LLM into a
reduced parameter model. We demonstrate the effectiveness of our attack by
extracting task capability from ChatGPT-3.5-Turbo, achieving 73% Exact Match
(EM) similarity, and SQuAD EM and F1 accuracy scores of 75% and 87%,
respectively for only $50 in API cost. We further demonstrate the feasibility
of adversarial attack transferability from a model extracted via Model
Leeching to perform ML attack staging against a target LLM, resulting in
an 11% increase to attack success rate when applied to ChatGPT-3.5-Turbo. | [
"Lewis Birch",
"William Hackett",
"Stefan Trawicki",
"Neeraj Suri",
"Peter Garraghan"
] | 2023-09-19 11:45:29 | http://arxiv.org/abs/2309.10544v1 | http://arxiv.org/pdf/2309.10544v1 | 2309.10544v1 |
Love or Hate? Share or Split? Privacy-Preserving Training Using Split Learning and Homomorphic Encryption | Split learning (SL) is a new collaborative learning technique that allows
participants, e.g. a client and a server, to train machine learning models
without the client sharing raw data. In this setting, the client initially
applies its part of the machine learning model on the raw data to generate
activation maps and then sends them to the server to continue the training
process. Previous works in the field demonstrated that reconstructing
activation maps could result in privacy leakage of client data. In addition to
that, existing mitigation techniques that overcome the privacy leakage of SL
prove to be significantly worse in terms of accuracy. In this paper, we improve
upon previous works by constructing a protocol based on U-shaped SL that can
operate on homomorphically encrypted data. More precisely, in our approach, the
client applies homomorphic encryption on the activation maps before sending
them to the server, thus protecting user privacy. This is an important
improvement that reduces privacy leakage in comparison to other SL-based works.
Finally, our results show that, with the optimum set of parameters, training
with HE data in the U-shaped SL setting only reduces accuracy by 2.65% compared
to training on plaintext. In addition, raw training data privacy is preserved. | [
"Tanveer Khan",
"Khoa Nguyen",
"Antonis Michalas",
"Alexandros Bakas"
] | 2023-09-19 10:56:08 | http://arxiv.org/abs/2309.10517v1 | http://arxiv.org/pdf/2309.10517v1 | 2309.10517v1 |
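A minimal sketch of the encrypted hand-off in the entry above, assuming the TenSEAL library for CKKS homomorphic encryption (the paper's actual stack and parameters may differ): the client encrypts its activation map, the server computes on ciphertexts, and only the key holder decrypts.

```python
import numpy as np
import tenseal as ts

# Client-side CKKS context; parameter choices here are illustrative only.
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

activation_map = np.random.randn(64)              # output of the client's layers
enc_act = ts.ckks_vector(context, activation_map.tolist())

# Server-side: compute on the ciphertext without ever seeing the plaintext.
server_weights = np.random.randn(64)
enc_score = enc_act.dot(server_weights.tolist())

print(enc_score.decrypt())  # only the client (secret key holder) can do this
```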
Single-Image based unsupervised joint segmentation and denoising | In this work, we develop an unsupervised method for the joint segmentation
and denoising of a single image. To this end, we combine the advantages of a
variational segmentation method with the power of a self-supervised,
single-image based deep learning approach. One major strength of our method
lies in the fact, that in contrast to data-driven methods, where huge amounts
of labeled samples are necessary, our model can segment an image into multiple
meaningful regions without any training database. Further, we introduce a novel
energy functional in which denoising and segmentation are coupled in a way that
both tasks benefit from each other. The limitations of existing single-image
based variational segmentation methods, which are not capable of dealing with
high noise or generic texture, are tackled by this specific combination with
self-supervised image denoising. We propose a unified optimisation strategy and
show that, especially for very noisy images available in microscopy, our
proposed joint approach outperforms its sequential counterpart as well as
alternative methods focused purely on denoising or segmentation. Another
comparison is conducted with a supervised deep learning approach designed for
the same application, highlighting the good performance of our approach. | [
"Nadja Gruber",
"Johannes Schwab",
"Noémie Debroux",
"Nicolas Papadakis",
"Markus Haltmeier"
] | 2023-09-19 10:47:32 | http://arxiv.org/abs/2309.10511v1 | http://arxiv.org/pdf/2309.10511v1 | 2309.10511v1 |
Learning End-to-End Channel Coding with Diffusion Models | The training of neural encoders via deep learning necessitates a
differentiable channel model due to the backpropagation algorithm. This
requirement can be sidestepped by approximating either the channel distribution
or its gradient through pilot signals in real-world scenarios. The former
approach draws upon the latest advancements in image generation, utilizing
generative adversarial networks (GANs) or their enhanced variants to generate
channel distributions. In this paper, we address this channel approximation
challenge with diffusion models, which have demonstrated high sample quality in
image generation. We offer an end-to-end channel coding framework underpinned
by diffusion models and propose an efficient training algorithm. Our
simulations with various channel models establish that our diffusion models
learn the channel distribution accurately, thereby achieving near-optimal
end-to-end symbol error rates (SERs). We also note a significant advantage of
diffusion models: A robust generalization capability in high signal-to-noise
ratio regions, in contrast to GAN variants that suffer from an error floor.
Furthermore, we examine the trade-off between sample quality and sampling
speed when an accelerated sampling algorithm is deployed, and investigate the
effect of the noise scheduling on this trade-off. With an apt choice of noise
scheduling, sampling time can be significantly reduced with a minor increase in
SER. | [
"Muah Kim",
"Rick Fritschek",
"Rafael F. Schaefer"
] | 2023-09-19 10:35:54 | http://arxiv.org/abs/2309.10505v2 | http://arxiv.org/pdf/2309.10505v2 | 2309.10505v2 |
A Configurable Library for Generating and Manipulating Maze Datasets | Understanding how machine learning models respond to distributional shifts is
a key research challenge. Mazes serve as an excellent testbed due to varied
generation algorithms offering a nuanced platform to simulate both subtle and
pronounced distributional shifts. To enable systematic investigations of model
behavior on out-of-distribution data, we present $\texttt{maze-dataset}$, a
comprehensive library for generating, processing, and visualizing datasets
consisting of maze-solving tasks. With this library, researchers can easily
create datasets with extensive control over the generation algorithm used,
the parameters fed to the algorithm of choice, and the filters that generated
mazes must satisfy. Furthermore, it supports multiple output formats, including
rasterized and text-based, catering to convolutional neural networks and
autoregressive transformer models. These formats, along with tools for
visualizing and converting between them, ensure versatility and adaptability in
research applications. | [
"Michael Igorevich Ivanitskiy",
"Rusheb Shah",
"Alex F. Spies",
"Tilman Räuker",
"Dan Valentine",
"Can Rager",
"Lucia Quirke",
"Chris Mathwin",
"Guillaume Corlouer",
"Cecilia Diniz Behn",
"Samy Wu Fung"
] | 2023-09-19 10:20:11 | http://arxiv.org/abs/2309.10498v1 | http://arxiv.org/pdf/2309.10498v1 | 2309.10498v1 |
A comparative study of Grid and Natural sentences effects on Normal-to-Lombard conversion | Grid sentence is commonly used for studying the Lombard effect and
Normal-to-Lombard conversion. However, it's unclear if Normal-to-Lombard models
trained on grid sentences are sufficient for improving natural speech
intelligibility in real-world applications. This paper presents the recording
of a parallel Lombard corpus (called Lombard Chinese TIMIT, LCT) built by
extracting natural sentences from Chinese TIMIT. We then compare natural and
grid sentences in terms of the Lombard effect and Normal-to-Lombard conversion
using LCT and the Enhanced MAndarin Lombard Grid corpus (EMALG). Through a
parametric analysis of the Lombard effect, we find that as the noise level
increases, natural and grid sentences exhibit similar parameter changes, but
grid sentences show a greater increase in the alpha ratio. Following a
subjective intelligibility assessment across genders and
increase. Following a subjective intelligibility assessment across genders and
Signal-to-Noise Ratios, the StarGAN model trained on EMALG consistently
outperforms the model trained on LCT in terms of improving intelligibility.
This superior performance may be attributed to EMALG's larger alpha ratio
increase from normal to Lombard speech. | [
"Hongyang Chen",
"Yuhong Yang",
"Qingmu Liu",
"Baifeng Li",
"Weiping Tu",
"Song Lin"
] | 2023-09-19 09:54:36 | http://arxiv.org/abs/2309.10485v1 | http://arxiv.org/pdf/2309.10485v1 | 2309.10485v1 |
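Since the alpha ratio drives the comparison in the entry above, here is a sketch of one common definition — the ratio of spectral energy above versus below 1 kHz, in dB. The band edges used below (50 Hz-1 kHz vs. 1-5 kHz) are an assumption and may not match the paper's exact parameterization.

```python
import numpy as np

def alpha_ratio(signal: np.ndarray, sr: int) -> float:
    """Spectral balance in dB: energy in 1-5 kHz over energy in 50 Hz-1 kHz.
    Lombard speech typically shifts energy upward, raising this ratio."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    low = spectrum[(freqs >= 50) & (freqs < 1000)].sum()
    high = spectrum[(freqs >= 1000) & (freqs <= 5000)].sum()
    return float(10.0 * np.log10(high / low))

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
print(alpha_ratio(tone, sr))  # roughly -10.5 dB for this synthetic signal
```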
Nebula: Self-Attention for Dynamic Malware Analysis | Dynamic analysis enables detecting Windows malware by executing programs in a
controlled environment, and storing their actions in log reports. Previous work
has started training machine learning models on such reports to perform either
malware detection or malware classification. However, most of the approaches
(i) have only considered convolutional and long short-term memory networks,
(ii) have been built focusing only on APIs called at runtime, without
considering other relevant though heterogeneous sources of information like
network and file operations, and (iii) the code and pretrained models are
hardly available, hindering reproducibility of results in this research area.
In this work, we overcome these limitations by presenting Nebula, a versatile,
self-attention transformer-based neural architecture that can generalize across
different behavior representations and formats, combining heterogeneous
information from dynamic log reports. We show the efficacy of Nebula on three
distinct data collections from different dynamic analysis platforms, comparing
its performance with previous state-of-the-art models developed for malware
detection and classification tasks. We produce an extensive ablation study that
showcases how the components of Nebula influence its predictive performance,
while enabling it to outperform some competing approaches at very low false
positive rates. We conclude our work by inspecting the behavior of Nebula
through the application of explainability methods, which highlight that Nebula
correctly focuses more on portions of reports that contain malicious
activities. We release our code and models at github.com/dtrizna/nebula. | [
"Dmitrijs Trizna",
"Luca Demetrio",
"Battista Biggio",
"Fabio Roli"
] | 2023-09-19 09:24:36 | http://arxiv.org/abs/2310.10664v1 | http://arxiv.org/pdf/2310.10664v1 | 2310.10664v1 |
Ad-load Balancing via Off-policy Learning in a Content Marketplace | Ad-load balancing is a critical challenge in online advertising systems,
particularly in the context of social media platforms, where the goal is to
maximize user engagement and revenue while maintaining a satisfactory user
experience. This requires the optimization of conflicting objectives, such as
user satisfaction and ads revenue. Traditional approaches to ad-load balancing
rely on static allocation policies, which fail to adapt to changing user
preferences and contextual factors. In this paper, we present an approach that
leverages off-policy learning and evaluation from logged bandit feedback. We
start by presenting a motivating analysis of the ad-load balancing problem,
highlighting the conflicting objectives between user satisfaction and ads
revenue. We emphasize the nuances that arise due to user heterogeneity and the
dependence on the user's position within a session. Based on this analysis, we
define the problem as determining the optimal ad-load for a particular feed
fetch. To tackle this problem, we propose an off-policy learning framework that
leverages unbiased estimators such as Inverse Propensity Scoring (IPS) and
Doubly Robust (DR) to learn and estimate the policy values using offline
collected stochastic data. We present insights from online A/B experiments
deployed at scale across over 80 million users generating over 200 million
sessions, where we find statistically significant improvements in both user
satisfaction metrics and ads revenue for the platform. | [
"Hitesh Sagtani",
"Madan Jhawar",
"Rishabh Mehrotra",
"Olivier Jeunen"
] | 2023-09-19 09:17:07 | http://arxiv.org/abs/2309.11518v1 | http://arxiv.org/pdf/2309.11518v1 | 2309.11518v1 |
Coreset selection can accelerate quantum machine learning models with provable generalization | Quantum neural networks (QNNs) and quantum kernels stand as prominent figures
in the realm of quantum machine learning, poised to leverage the nascent
capabilities of near-term quantum computers to surmount classical machine
learning challenges. Nonetheless, the training efficiency challenge poses a
limitation on both QNNs and quantum kernels, curbing their efficacy when
applied to extensive datasets. To confront this concern, we present a unified
approach: coreset selection, aimed at expediting the training of QNNs and
quantum kernels by distilling a judicious subset from the original training
dataset. Furthermore, we analyze the generalization error bounds of QNNs and
quantum kernels when trained on such coresets, unveiling the comparable
performance with those training on the complete original dataset. Through
systematic numerical simulations, we illuminate the potential of coreset
selection in expediting tasks encompassing synthetic data classification,
identification of quantum correlations, and quantum compiling. Our work offers
a useful way to improve diverse quantum machine learning models with a
theoretical guarantee while reducing the training cost. | [
"Yiming Huang",
"Huiyuan Wang",
"Yuxuan Du",
"Xiao Yuan"
] | 2023-09-19 08:59:46 | http://arxiv.org/abs/2309.10441v1 | http://arxiv.org/pdf/2309.10441v1 | 2309.10441v1 |
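The abstract above does not spell out the selection rule, so as an illustration here is a generic k-center greedy coreset selector, a common classical baseline; the paper's actual criterion for QNNs and quantum kernels may differ.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, budget: int, seed: int = 0) -> np.ndarray:
    """Select `budget` indices so every point lies close to a selected point
    (farthest-point-first heuristic), a generic coreset baseline."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(features)))]
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(dists))  # point farthest from the current coreset
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return np.array(selected)

X = np.random.randn(500, 16)          # stand-in training features
coreset_idx = k_center_greedy(X, budget=50)
print(coreset_idx[:10])
```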
Graph Neural Networks for Dynamic Modeling of Roller Bearing | In the presented work, we propose to apply the framework of graph neural
networks (GNNs) to predict the dynamics of a rolling element bearing. This
approach offers generalizability and interpretability, having the potential for
scalable use in real-time operational digital twin systems for monitoring the
health state of rotating machines. By representing the bearing's components as
nodes in a graph, the GNN can effectively model the complex relationships and
interactions among them. We utilize a dynamic spring-mass-damper model of a
bearing to generate the training data for the GNN. In this model, discrete
masses represent bearing components such as rolling elements, inner raceways,
and outer raceways, while a Hertzian contact model is employed to calculate the
forces between these components.
We evaluate the learning and generalization capabilities of the proposed GNN
framework by testing different bearing configurations that deviate from the
training configurations. Through this approach, we demonstrate the
effectiveness of the GNN-based method in accurately predicting the dynamics of
rolling element bearings, highlighting its potential for real-time health
monitoring of rotating machinery. | [
"Vinay Sharma",
"Jens Ravesloot",
"Cees Taal",
"Olga Fink"
] | 2023-09-19 08:30:10 | http://arxiv.org/abs/2309.10418v1 | http://arxiv.org/pdf/2309.10418v1 | 2309.10418v1 |
A Variational Auto-Encoder Enabled Multi-Band Channel Prediction Scheme for Indoor Localization | Indoor localization is in increasing demand for various cutting-edge
technologies, such as virtual/augmented reality and smart homes. Traditional
model-based localization suffers from significant computational overhead, so
fingerprint localization is gaining attention, as it requires lower
computation cost once the fingerprint database is built. However, the accuracy
of indoor localization is limited by the complicated indoor environment, which
introduces multipath signal refraction. In this paper, we provide a scheme to
improve the accuracy of indoor fingerprint localization in the frequency
domain by predicting the channel state information (CSI) values of another
transmitting channel and splicing the multi-band information together to
obtain more precise localization results. We tested our proposed scheme on
COST 2100 simulation data and real-time orthogonal frequency division
multiplexing (OFDM) WiFi data collected from an office scenario. | [
"Ruihao Yuan",
"Kaixuan Huang",
"Pan Yang",
"Shunqing Zhang"
] | 2023-09-19 08:19:34 | http://arxiv.org/abs/2309.12200v1 | http://arxiv.org/pdf/2309.12200v1 | 2309.12200v1 |
Unsupervised Learning via Network-Aware Embeddings | Data clustering, the task of grouping observations according to their
similarity, is a key component of unsupervised learning -- with real world
applications in diverse fields such as biology, medicine, and social science.
Often in these fields the data comes with complex interdependencies between the
dimensions of analysis; for instance, the various characteristics and opinions
people hold live on a complex social network. Current clustering methods
are ill-suited to tackle this complexity: deep learning can approximate these
dependencies, but cannot take their explicit map as the input of the analysis. In
this paper, we aim at fixing this blind spot in the unsupervised learning
literature. We can create network-aware embeddings by estimating the network
distance between numeric node attributes via the generalized Euclidean
distance. Differently from all methods in the literature that we know of, we do
not cluster the nodes of the network, but rather its node attributes. In our
experiments we show that having these network embeddings is always beneficial
for the learning task; that our method scales to large networks; and that we
can actually provide actionable insights in applications in a variety of fields
such as marketing, economics, and political science. Our method is fully open
source and data and code are available to reproduce all results in the paper. | [
"Anne Sophie Riis Damstrup",
"Sofie Tosti Madsen",
"Michele Coscia"
] | 2023-09-19 08:17:48 | http://arxiv.org/abs/2309.10408v1 | http://arxiv.org/pdf/2309.10408v1 | 2309.10408v1 |
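The "generalized Euclidean distance" in the entry above can be sketched with NumPy and NetworkX: attribute differences between two vectors are weighted by the pseudoinverse of the graph Laplacian, so the network topology shapes the distance. Treat this as one reading of the abstract rather than the authors' exact implementation.

```python
import numpy as np
import networkx as nx

def network_distance(a: np.ndarray, b: np.ndarray, G: nx.Graph) -> float:
    """Generalized Euclidean distance between two node-attribute vectors,
    weighting the difference by the pseudoinverse of the graph Laplacian."""
    L = nx.laplacian_matrix(G, nodelist=sorted(G.nodes())).toarray().astype(float)
    L_pinv = np.linalg.pinv(L)
    diff = a - b
    return float(np.sqrt(diff @ L_pinv @ diff))

G = nx.karate_club_graph()
n = G.number_of_nodes()
rng = np.random.default_rng(0)
print(network_distance(rng.random(n), rng.random(n), G))
```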
Minimum width for universal approximation using ReLU networks on compact domain | The universal approximation property of width-bounded networks has been
studied as a dual of the classical universal approximation theorem for
depth-bounded ones. There were several attempts to characterize the minimum
width $w_{\min}$ enabling the universal approximation property; however, only a
few of them found the exact values. In this work, we show that the minimum
width for the universal approximation of $L^p$ functions from $[0,1]^{d_x}$ to
$\mathbb R^{d_y}$ is exactly $\max\{d_x,d_y,2\}$ if an activation function is
ReLU-like (e.g., ReLU, GELU, Softplus). Compared to the known result
$w_{\min}=\max\{d_x+1,d_y\}$ when the domain is ${\mathbb R^{d_x}}$, our result
first shows that approximation on a compact domain requires smaller width than
on ${\mathbb R^{d_x}}$. We next prove a lower bound on $w_{\min}$ for uniform
approximation using general activation functions including ReLU: $w_{\min}\ge
d_y+1$ if $d_x<d_y\le2d_x$. Together with our first result, this shows a
dichotomy between $L^p$ and uniform approximations for general activation
functions and input/output dimensions. | [
"Namjun Kim",
"Chanho Min",
"Sejun Park"
] | 2023-09-19 08:04:48 | http://arxiv.org/abs/2309.10402v1 | http://arxiv.org/pdf/2309.10402v1 | 2309.10402v1 |
PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training | Large Language Models (LLMs) are trained with a pre-defined context length,
restricting their use in scenarios requiring long inputs. Previous efforts for
adapting LLMs to a longer length usually requires fine-tuning with this target
length (Full-length fine-tuning), suffering intensive training cost. To
decouple train length from target length for efficient context window
extension, we propose Positional Skip-wisE (PoSE) training that smartly
simulates long inputs using a fixed context window. This is achieved by first
dividing the original context window into several chunks, then designing
distinct skipping bias terms to manipulate the position indices of each chunk.
These bias terms and the lengths of each chunk are altered for every training
example, allowing the model to adapt to all positions within target length.
Experimental results show that PoSE greatly reduces memory and time overhead
compared with Full-length fine-tuning, with minimal impact on performance.
Leveraging this advantage, we have successfully extended the LLaMA model to
128k tokens using a 2k training context window. Furthermore, we empirically
confirm that PoSE is compatible with all RoPE-based LLMs and position
interpolation strategies. Notably, our method can potentially support infinite
length, limited only by memory usage in inference. With ongoing progress for
efficient inference, we believe PoSE can further scale the context window
beyond 128k. | [
"Dawei Zhu",
"Nan Yang",
"Liang Wang",
"Yifan Song",
"Wenhao Wu",
"Furu Wei",
"Sujian Li"
] | 2023-09-19 08:03:38 | http://arxiv.org/abs/2309.10400v2 | http://arxiv.org/pdf/2309.10400v2 | 2309.10400v2 |
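A rough sketch of the position-index manipulation described in the PoSE entry: a fixed training window is split into chunks, and random skipping biases push their position ids out to the target length. The chunk-length and bias sampling below are simplified relative to the paper.

```python
import numpy as np

def pose_position_ids(train_len: int, target_len: int, n_chunks: int = 2,
                      rng=np.random.default_rng()) -> np.ndarray:
    """Position ids for a `train_len` window that reach up to `target_len`
    by splitting the window into chunks separated by random skips."""
    # Random chunk lengths that sum to train_len.
    cuts = np.sort(rng.choice(np.arange(1, train_len), n_chunks - 1, replace=False))
    lengths = np.diff(np.concatenate(([0], cuts, [train_len])))
    # Random split of the total skip budget across chunk boundaries.
    skips = rng.multinomial(target_len - train_len, np.ones(n_chunks) / n_chunks)
    ids, start = [], 0
    for length, skip in zip(lengths, skips):
        start += skip                      # skipping bias term for this chunk
        ids.extend(range(start, start + length))
        start += length
    return np.array(ids)

print(pose_position_ids(train_len=16, target_len=64, n_chunks=2))
```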
Differentiable Quantum Architecture Search for Quantum Reinforcement Learning | Differentiable quantum architecture search (DQAS) is a gradient-based
framework to design quantum circuits automatically in the NISQ era. It was
motivated by issues such as the low fidelity of quantum hardware, the low
flexibility of circuit architectures, high circuit design costs, the barren
plateau (BP) problem, and the periodicity of weights. It has been used to
address error mitigation, unitary decomposition, and quantum approximate
optimization problems on fixed datasets. Quantum reinforcement learning (QRL)
is a part of quantum machine learning and often involves varied data. QRL
usually uses a manually designed
circuit. However, a pre-defined circuit lacks the flexibility needed for different
tasks, and the circuit design based on various datasets could become
intractable in the case of a large circuit. The problem of whether DQAS can be
applied to quantum deep Q-learning with various datasets is still open. The
main target of this work is to discover the capability of DQAS to solve quantum
deep Q-learning problems. We apply a gradient-based framework DQAS on
reinforcement learning tasks and evaluate it in two different environments -
cart pole and frozen lake. Our framework contains input and output weights, progressive
search, and other new features. The experiments conclude that DQAS can design
quantum circuits automatically and efficiently. The evaluation results show
that it significantly outperforms the manually designed circuit.
Furthermore, the performance of the automatically created circuit depends on
whether the super-circuit learned well during the training process. This work
is the first to show that gradient-based quantum architecture search is
applicable to QRL tasks. | [
"Yize Sun",
"Yunpu Ma",
"Volker Tresp"
] | 2023-09-19 07:45:39 | http://arxiv.org/abs/2309.10392v2 | http://arxiv.org/pdf/2309.10392v2 | 2309.10392v2 |
Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks | Graph Neural Networks (GNNs) have become popular in Graph Representation
Learning (GRL). One fundamental application is few-shot node classification.
Most existing methods follow the meta learning paradigm, showing the ability of
fast generalization to few-shot tasks. However, recent works indicate that
graph contrastive learning combined with fine-tuning can significantly
outperform meta learning methods. Despite the empirical success, there is
limited understanding of the reasons behind it. In our study, we first identify
two crucial advantages of contrastive learning compared to meta learning,
including (1) the comprehensive utilization of graph nodes and (2) the power of
graph augmentations. To integrate the strength of both contrastive learning and
meta learning on the few-shot node classification tasks, we introduce a new
paradigm: Contrastive Few-Shot Node Classification (COLA). Specifically, COLA
employs graph augmentations to identify semantically similar nodes, which
enables the construction of meta-tasks without the need for label information.
Therefore, COLA can utilize all nodes to construct meta-tasks, further reducing
the risk of overfitting. Through extensive experiments, we validate the
essentiality of each component in our design and demonstrate that COLA achieves
new state-of-the-art on all tasks. | [
"Hao Liu",
"Jiarui Feng",
"Lecheng Kong",
"Dacheng Tao",
"Yixin Chen",
"Muhan Zhang"
] | 2023-09-19 07:24:10 | http://arxiv.org/abs/2309.10376v1 | http://arxiv.org/pdf/2309.10376v1 | 2309.10376v1 |
Geometric structure of shallow neural networks and constructive ${\mathcal L}^2$ cost minimization | In this paper, we provide a geometric interpretation of the structure of
shallow neural networks characterized by one hidden layer, a ramp activation
function, an ${\mathcal L}^2$ Schatten class (or Hilbert-Schmidt) cost
function, input space ${\mathbb R}^M$, output space ${\mathbb R}^Q$ with $Q\leq
M$, and training input sample size $N>QM$. We prove an upper bound on the
minimum of the cost function of order $O(\delta_P)$ where $\delta_P$ measures
the signal to noise ratio of training inputs. We obtain an approximate
optimizer using projections adapted to the averages $\overline{x_{0,j}}$ of
training input vectors belonging to the same output vector $y_j$,
$j=1,\dots,Q$. In the special case $M=Q$, we explicitly determine an exact
degenerate local minimum of the cost function; the sharp value differs from the
upper bound obtained for $Q\leq M$ by a relative error $O(\delta_P^2)$. The
proof of the upper bound yields a constructively trained network; we show that
it metrizes the $Q$-dimensional subspace in the input space ${\mathbb R}^M$
spanned by $\overline{x_{0,j}}$, $j=1,\dots,Q$. We comment on the
characterization of the global minimum of the cost function in the given
context. | [
"Thomas Chen",
"Patricia Muñoz Ewald"
] | 2023-09-19 07:12:41 | http://arxiv.org/abs/2309.10370v1 | http://arxiv.org/pdf/2309.10370v1 | 2309.10370v1 |
Toward efficient resource utilization at edge nodes in federated learning | Federated learning (FL) enables edge nodes to collaboratively contribute to
constructing a global model without sharing their data. This is accomplished by
devices computing local, private model updates that are then aggregated by a
server. However, computational resource constraints and network communication
can become a severe bottleneck for larger model sizes typical for deep learning
applications. Edge nodes tend to have limited hardware resources (RAM, CPU),
and network bandwidth and reliability at the edge are a concern for scaling
federated fleet applications. In this paper, we propose and evaluate a FL
strategy inspired by transfer learning in order to reduce resource utilization
on devices, as well as the load on the server and network in each global
training round. For each local model update, we randomly select layers to
train, freezing the remaining part of the model. In doing so, we can reduce
both server load and communication costs per round by excluding all untrained
layer weights from being transferred to the server. The goal of this study is
to empirically explore the potential trade-off between resource utilization on
devices and global model convergence under the proposed strategy. We implement
the approach using the federated learning framework FEDn. A number of
experiments were carried out over different datasets (CIFAR-10, CASA, and
IMDB), performing different tasks using different deep-learning model
architectures. Our results show that training the model partially can
accelerate the training process, efficiently utilize on-device resources, and
reduce data transmission by around 75% and 53% when we train 25% and 50%
of the model layers, respectively, without harming the resulting global model
accuracy. | [
"Sadi Alawadi",
"Addi Ait-Mlouk",
"Salman Toor",
"Andreas Hellander"
] | 2023-09-19 07:04:50 | http://arxiv.org/abs/2309.10367v1 | http://arxiv.org/pdf/2309.10367v1 | 2309.10367v1 |
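A minimal PyTorch sketch of the client-side strategy in the entry above: freeze a random subset of parameter tensors (the paper selects layers), train the rest, and ship only the trained tensors back to the server. The model and batch below are toy stand-ins.

```python
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 10))

def local_update(model: nn.Module, fraction: float = 0.25):
    """Train only a random `fraction` of parameter tensors and return just
    those tensors, mimicking the reduced client->server payload."""
    names = [n for n, _ in model.named_parameters()]
    trainable = set(random.sample(names, max(1, int(fraction * len(names)))))
    for n, p in model.named_parameters():
        p.requires_grad = n in trainable
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))  # stand-in local batch
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return {n: p.detach().clone() for n, p in model.named_parameters() if n in trainable}

payload = local_update(model)
print(sorted(payload))  # only the trained tensors are transmitted
```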
Testable Likelihoods for Beyond-the-Standard Model Fits | Studying potential BSM effects at the precision frontier requires accurate
transfer of information from low-energy measurements to high-energy BSM models.
We propose to use normalising flows to construct likelihood functions that
achieve this transfer. Likelihood functions constructed in this way provide the
means to generate additional samples and admit a ``trivial'' goodness-of-fit
test in form of a $\chi^2$ test statistic. Here, we study a particular form of
normalising flow, apply it to a multi-modal and non-Gaussian example, and
quantify the accuracy of the likelihood function and its test statistic. | [
"Anja Beck",
"Méril Reboud",
"Danny van Dyk"
] | 2023-09-19 07:03:41 | http://arxiv.org/abs/2309.10365v1 | http://arxiv.org/pdf/2309.10365v1 | 2309.10365v1 |
Improving CLIP Robustness with Knowledge Distillation and Self-Training | This paper examines the robustness of a multi-modal computer vision model,
CLIP (Contrastive Language-Image Pretraining), in the context of unsupervised
learning. The main objective is twofold: first, to evaluate the robustness of
CLIP, and second, to explore strategies for augmenting its robustness. To
achieve this, we introduce a novel approach named LP-CLIP. This technique
involves the distillation of CLIP features through the incorporation of a
linear probing layer positioned atop its encoding structure. This newly added
layer is trained utilizing pseudo-labels produced by CLIP, coupled with a
self-training strategy. The LP-CLIP technique offers a promising approach to
enhance the robustness of CLIP without the need for annotations. By leveraging
a simple linear probing layer, we aim to improve the model's ability to
withstand various uncertainties and challenges commonly encountered in
real-world scenarios. Importantly, our approach does not rely on annotated
data, which makes it particularly valuable in situations where labeled data
might be scarce or costly to obtain. Our proposed approach increases the
robustness of CLIP, achieving SOTA results compared to supervised techniques on
various datasets. | [
"Clement Laroudie",
"Andrei Bursuc",
"Mai Lan Ha",
"Gianni Franchi"
] | 2023-09-19 06:43:31 | http://arxiv.org/abs/2309.10361v1 | http://arxiv.org/pdf/2309.10361v1 | 2309.10361v1 |
Language Guided Adversarial Purification | Adversarial purification using generative models demonstrates strong
adversarial defense performance. These methods are classifier- and
attack-agnostic, making them versatile but often computationally intensive.
Recent strides in diffusion and score networks have improved image generation
and, by extension, adversarial purification. Another highly efficient class of
adversarial defense methods known as adversarial training requires specific
knowledge of attack vectors, forcing them to be trained extensively on
adversarial examples. To overcome these limitations, we introduce a new
framework, namely Language Guided Adversarial Purification (LGAP), utilizing
pre-trained diffusion models and caption generators to defend against
adversarial attacks. Given an input image, our method first generates a
caption, which is then used to guide the adversarial purification process
through a diffusion network. Our approach has been evaluated against strong
adversarial attacks, proving its effectiveness in enhancing adversarial
robustness. Our results indicate that LGAP outperforms most existing
adversarial defense techniques without requiring specialized network training.
This underscores the generalizability of models trained on large datasets,
highlighting a promising direction for further research. | [
"Himanshu Singh",
"A V Subramanyam"
] | 2023-09-19 06:17:18 | http://arxiv.org/abs/2309.10348v1 | http://arxiv.org/pdf/2309.10348v1 | 2309.10348v1 |
Explaining Agent Behavior with Large Language Models | Intelligent agents such as robots are increasingly deployed in real-world,
safety-critical settings. It is vital that these agents are able to explain the
reasoning behind their decisions to human counterparts; however, their behavior
is often produced by uninterpretable models such as deep neural networks. We
propose an approach to generate natural language explanations for an agent's
behavior based only on observations of states and actions, agnostic to the
underlying model representation. We show how a compact representation of the
agent's behavior can be learned and used to produce plausible explanations with
minimal hallucination while affording user interaction with a pre-trained large
language model. Through user studies and empirical experiments, we show that
our approach generates explanations as helpful as those generated by a human
domain expert while enabling beneficial interactions such as clarification and
counterfactual queries. | [
"Xijia Zhang",
"Yue Guo",
"Simon Stepputtis",
"Katia Sycara",
"Joseph Campbell"
] | 2023-09-19 06:13:24 | http://arxiv.org/abs/2309.10346v1 | http://arxiv.org/pdf/2309.10346v1 | 2309.10346v1 |
Weakly Supervised Reasoning by Neuro-Symbolic Approaches | Deep learning has largely improved the performance of various natural
language processing (NLP) tasks. However, most deep learning models are
black-box machinery, and lack explicit interpretation. In this chapter, we will
introduce our recent progress on neuro-symbolic approaches to NLP, which
combines different schools of AI, namely, symbolism and connectionism.
Generally, we will design a neural system with symbolic latent structures for
an NLP task, and apply reinforcement learning or its relaxation to perform
weakly supervised reasoning in the downstream task. Our framework has been
successfully applied to various tasks, including table query reasoning,
syntactic structure reasoning, information extraction reasoning, and rule
reasoning. For each application, we will introduce the background, our
approach, and experimental results. | [
"Xianggen Liu",
"Zhengdong Lu",
"Lili Mou"
] | 2023-09-19 06:10:51 | http://arxiv.org/abs/2309.13072v1 | http://arxiv.org/pdf/2309.13072v1 | 2309.13072v1 |
Striking a Balance: An Optimal Mechanism Design for Heterogenous Differentially Private Data Acquisition for Logistic Regression | We investigate the problem of performing logistic regression on data
collected from privacy-sensitive sellers. Since the data is private, sellers
must be incentivized through payments to provide their data. Thus, the goal is
to design a mechanism that optimizes a weighted combination of test loss,
seller privacy, and payment, i.e., strikes a balance between multiple
objectives of interest. We solve the problem by combining ideas from game
theory, statistical learning theory, and differential privacy. The buyer's
objective function can be highly non-convex. However, we show that, under
certain conditions on the problem parameters, the problem can be convexified by
using a change of variables. We also provide asymptotic results characterizing
the buyer's test error and payments when the number of sellers becomes large.
Finally, we demonstrate our ideas by applying them to a real healthcare data
set. | [
"Ameya Anjarlekar",
"Rasoul Etesami",
"R. Srikant"
] | 2023-09-19 05:51:13 | http://arxiv.org/abs/2309.10340v1 | http://arxiv.org/pdf/2309.10340v1 | 2309.10340v1 |
FedWOA: A Federated Learning Model that uses the Whale Optimization Algorithm for Renewable Energy Prediction | Privacy is important when dealing with sensitive personal information in
machine learning models, which require large data sets for training. In the
energy field, access to household prosumer energy data is crucial for energy
predictions to support energy grid management and the large-scale adoption of
renewables; however, citizens are often hesitant to grant access to cloud-based
machine learning models. Federated learning has been proposed as a solution to
these privacy challenges; however, existing approaches report issues in
generating the global prediction model due to data heterogeneity, variations
in generation patterns, and the high number of parameters, leading to even
lower prediction accuracy. This paper addresses these challenges by
introducing FedWOA, a novel federated learning model that employs the Whale
Optimization Algorithm to aggregate the global prediction model from the
weights of local LSTM neural network models trained on prosumer energy data.
The proposed solution identifies the optimal vector of weights in the search
spaces of the local models to construct the global shared model, which is then
transmitted to the local nodes to improve prediction quality at the prosumer
site, while K-Means is used to cluster prosumers with a similar scale of
energy data to handle non-IID data. The evaluation results on prosumer energy
data show that FedWOA can effectively enhance the accuracy of energy
prediction models by 25% for MSE and 16% for MAE compared to FedAvg, while
demonstrating good convergence and reduced loss. | [
"Viorica Chifu",
"Tudor Cioara",
"Cristian Anitiei",
"Cristina Pop",
"Ionut Anghel"
] | 2023-09-19 05:44:18 | http://arxiv.org/abs/2309.10337v1 | http://arxiv.org/pdf/2309.10337v1 | 2309.10337v1 |
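To illustrate the aggregation idea in the FedWOA entry, here is a compact Whale Optimization Algorithm over flat weight vectors (encircling, spiral, and search phases; vector-valued `A` with an `all()` check is a simplification of the usual scalar form). The fitness function is a toy stand-in; in the paper's setting it would score a candidate global model built from the local LSTM weights.

```python
import numpy as np

def woa_minimize(fitness, dim, n_whales=10, iters=50, seed=0):
    """Minimal Whale Optimization Algorithm searching a flat weight vector."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_whales, dim))
    best = min(X, key=fitness).copy()
    for t in range(iters):
        a = 2 - 2 * t / iters  # control parameter, decreases 2 -> 0
        for i in range(n_whales):
            r, p, l = rng.random(dim), rng.random(), rng.uniform(-1, 1)
            A, C = 2 * a * r - a, 2 * r
            if p < 0.5:
                # Encircle the best whale, or explore toward a random one.
                ref = best if np.all(np.abs(A) < 1) else X[rng.integers(n_whales)]
                X[i] = ref - A * np.abs(C * ref - X[i])
            else:
                # Spiral update around the best whale.
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best

# Toy fitness: distance to the mean of three "local model" weight vectors.
locals_ = np.stack([np.full(4, v) for v in (1.0, 2.0, 3.0)])
print(woa_minimize(lambda w: np.sum((w - locals_.mean(0)) ** 2), dim=4))
```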
Computational Approaches for App-to-App Retrieval and Design Consistency Check | Extracting semantic representations from mobile user interfaces (UI) and
using the representations for designers' decision-making processes have shown
the potential to be effective computational design support tools. Current
approaches rely on machine learning models trained on small-sized mobile UI
datasets to extract semantic vectors and use screenshot-to-screenshot
comparison to retrieve similar-looking UIs given query screenshots. However,
the usability of these methods is limited because they are often not
open-sourced and have complex training pipelines for practitioners to follow,
and are unable to perform screenshot set-to-set (i.e., app-to-app) retrieval.
To this end, we (1) employ visual models trained with large web-scale images
and test whether they could extract a UI representation in a zero-shot way and
outperform existing specialized models, and (2) use mathematically founded
methods to enable app-to-app retrieval and design consistency analysis. Our
experiments show that our methods not only improve upon previous retrieval
models but also enable multiple new applications. | [
"Seokhyeon Park",
"Wonjae Kim",
"Young-Ho Kim",
"Jinwook Seo"
] | 2023-09-19 05:21:22 | http://arxiv.org/abs/2309.10328v1 | http://arxiv.org/pdf/2309.10328v1 | 2309.10328v1 |
Investigating the Catastrophic Forgetting in Multimodal Large Language Models | Following the success of GPT4, there has been a surge in interest in
multimodal large language model (MLLM) research. This line of research focuses
on developing general-purpose LLMs through fine-tuning pre-trained LLMs and
vision models. However, catastrophic forgetting, a notorious phenomenon where
the fine-tuned model fails to retain similar performance compared to the
pre-trained model, still remains an inherent problem in multimodal LLMs (MLLM).
In this paper, we introduce EMT (Evaluating MulTimodality), a framework for
evaluating catastrophic forgetting in MLLMs by treating each MLLM as an image classifier.
We first apply EMT to evaluate several open-source fine-tuned MLLMs and we
discover that almost all evaluated MLLMs fail to retain the same performance
levels as their vision encoders on standard image classification tasks.
Moreover, we continue fine-tuning LLaVA, an MLLM, and utilize EMT to assess
performance throughout the fine-tuning. Interestingly, our results suggest that
early-stage fine-tuning on an image dataset improves performance across other
image datasets, by enhancing the alignment of text and visual features.
However, as fine-tuning proceeds, the MLLMs begin to hallucinate, resulting in
a significant loss of generalizability, even when the image encoder remains
frozen. Our results suggest that MLLMs have yet to demonstrate performance on
par with their vision models on standard image classification tasks and the
current MLLM fine-tuning procedure still has room for improvement. | [
"Yuexiang Zhai",
"Shengbang Tong",
"Xiao Li",
"Mu Cai",
"Qing Qu",
"Yong Jae Lee",
"Yi Ma"
] | 2023-09-19 04:51:13 | http://arxiv.org/abs/2309.10313v3 | http://arxiv.org/pdf/2309.10313v3 | 2309.10313v3 |
TensorCodec: Compact Lossy Compression of Tensors without Strong Data Assumptions | Many real-world datasets are represented as tensors, i.e., multi-dimensional
arrays of numerical values. Storing them without compression often requires
substantial space, which grows exponentially with the order. While many tensor
compression algorithms are available, most rely on strong data
assumptions regarding its order, sparsity, rank, and smoothness. In this work,
we propose TENSORCODEC, a lossy compression algorithm for general tensors that
do not necessarily adhere to strong input data assumptions. TENSORCODEC
incorporates three key ideas. The first idea is Neural Tensor-Train
Decomposition (NTTD) where we integrate a recurrent neural network into
Tensor-Train Decomposition to enhance its expressive power and alleviate the
limitations imposed by the low-rank assumption. Another idea is to fold the
input tensor into a higher-order tensor to reduce the space required by NTTD.
Finally, the mode indices of the input tensor are reordered to reveal patterns
that can be exploited by NTTD for improved approximation. Our analysis and
experiments on 8 real-world datasets demonstrate that TENSORCODEC is (a)
Concise: it gives up to 7.38x more compact compression than the best competitor
with similar reconstruction error, (b) Accurate: given the same budget for
compressed size, it yields up to 3.33x more accurate reconstruction than the
best competitor, (c) Scalable: its empirical compression time is linear in the
number of tensor entries, and it reconstructs each entry in logarithmic time.
Our code and datasets are available at https://github.com/kbrother/TensorCodec. | [
"Taehyung Kwon",
"Jihoon Ko",
"Jinhong Jung",
"Kijung Shin"
] | 2023-09-19 04:48:01 | http://arxiv.org/abs/2309.10310v2 | http://arxiv.org/pdf/2309.10310v2 | 2309.10310v2 |
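The folding idea from the TensorCodec entry can be shown with a reshape: turning a few large modes into many small ones shrinks the per-core parameter count of a (neural) Tensor-Train model. The rank and sizes below are illustrative, and the rank-1 boundary cores of a true TT decomposition are ignored for simplicity.

```python
import numpy as np

# Fold a 64 x 64 x 64 tensor into order 9 with mode size 4 (4**3 = 64).
T = np.random.rand(64, 64, 64)
T_folded = T.reshape((4,) * 9)  # same entries, higher order

# TT cores are roughly r x n_k x r, so many small modes beat few large ones.
rank = 8
params_orig = sum(rank * n * rank for n in T.shape)         # 3 large cores
params_fold = sum(rank * n * rank for n in T_folded.shape)  # 9 small cores
print(params_orig, params_fold)  # 12288 vs. 2304
```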
Decoupled Training: Return of Frustratingly Easy Multi-Domain Learning | Multi-domain learning (MDL) aims to train a model with minimal average risk
across multiple overlapping but non-identical domains. To tackle the challenges
of dataset bias and domain domination, numerous MDL approaches have been
proposed from the perspectives of seeking commonalities by aligning
distributions to reduce domain gap or reserving differences by implementing
domain-specific towers, gates, and even experts. MDL models are becoming more
and more complex with sophisticated network architectures or loss functions,
introducing extra parameters and enlarging computation costs. In this paper, we
propose a frustratingly easy and hyperparameter-free multi-domain learning
method named Decoupled Training(D-Train). D-Train is a tri-phase
general-to-specific training strategy that first pre-trains on all domains to
warm up a root model, then post-trains on each domain by splitting into multiple
heads, and finally fine-tunes the heads by fixing the backbone, enabling
decoupled training to achieve domain independence. Despite its extraordinary
simplicity and efficiency, D-Train performs remarkably well in extensive
evaluations of various datasets from standard benchmarks to applications of
satellite imagery and recommender systems. | [
"Ximei Wang",
"Junwei Pan",
"Xingzhuo Guo",
"Dapeng Liu",
"Jie Jiang"
] | 2023-09-19 04:06:41 | http://arxiv.org/abs/2309.10302v1 | http://arxiv.org/pdf/2309.10302v1 | 2309.10302v1 |
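A toy PyTorch sketch of the tri-phase schedule described in the D-Train entry; which parameter groups train in phase 2 follows one reading of the abstract, and the data and model shapes are placeholders.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
heads = nn.ModuleList([nn.Linear(64, 10) for _ in range(3)])  # one per domain

def step(params, batches):
    opt = torch.optim.SGD(params, lr=0.01)
    for x, y, d in batches:
        loss = nn.functional.cross_entropy(heads[d](backbone(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()

batches = [(torch.randn(8, 32), torch.randint(0, 10, (8,)), d) for d in range(3)]

# Phase 1: pre-train everything on all domains to warm up a root model.
step(list(backbone.parameters()) + list(heads.parameters()), batches)
# Phase 2: post-train per domain after splitting into multiple heads.
for d in range(3):
    step(list(backbone.parameters()) + list(heads[d].parameters()),
         [b for b in batches if b[2] == d])
# Phase 3: fix the backbone and fine-tune only the heads (decoupled training).
for p in backbone.parameters():
    p.requires_grad = False
for d in range(3):
    step(list(heads[d].parameters()), [b for b in batches if b[2] == d])
```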
Prominent Roles of Conditionally Invariant Components in Domain Adaptation: Theory and Algorithms | Domain adaptation (DA) is a statistical learning problem that arises when the
distribution of the source data used to train a model differs from that of the
target data used to evaluate the model. While many DA algorithms have
demonstrated considerable empirical success, blindly applying these algorithms
can often lead to worse performance on new datasets. To address this, it is
crucial to clarify the assumptions under which a DA algorithm has good target
performance. In this work, we focus on the assumption of the presence of
conditionally invariant components (CICs), which are relevant for prediction
and remain conditionally invariant across the source and target data. We
demonstrate that CICs, which can be estimated through conditional invariant
penalty (CIP), play three prominent roles in providing target risk guarantees
in DA. First, we propose a new algorithm based on CICs, importance-weighted
conditional invariant penalty (IW-CIP), which has target risk guarantees beyond
simple settings such as covariate shift and label shift. Second, we show that
CICs help identify large discrepancies between source and target risks of other
DA algorithms. Finally, we demonstrate that incorporating CICs into the domain
invariant projection (DIP) algorithm can address its failure scenario caused by
label-flipping features. We support our new algorithms and theoretical findings
via numerical experiments on synthetic data, MNIST, CelebA, and Camelyon17
datasets. | [
"Keru Wu",
"Yuansi Chen",
"Wooseok Ha",
"Bin Yu"
] | 2023-09-19 04:04:59 | http://arxiv.org/abs/2309.10301v1 | http://arxiv.org/pdf/2309.10301v1 | 2309.10301v1 |
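As an illustration of the conditional invariant penalty (CIP) in the entry above, here is a mean-matching PyTorch sketch that pulls class-conditional feature means together across labeled environments; the paper's penalty may use a richer distribution distance such as MMD.

```python
import torch

def cip_penalty(features_by_env, labels_by_env, n_classes: int) -> torch.Tensor:
    """Mean-matching form of a conditional invariance penalty: for each class,
    pull every environment's class-conditional feature mean toward the first
    environment's, encouraging conditionally invariant components."""
    penalty = torch.tensor(0.0)
    for y in range(n_classes):
        ref = features_by_env[0][labels_by_env[0] == y].mean(dim=0)
        for feats, labels in zip(features_by_env[1:], labels_by_env[1:]):
            penalty = penalty + ((feats[labels == y].mean(dim=0) - ref) ** 2).sum()
    return penalty

envs_x = [torch.randn(100, 8) for _ in range(3)]       # stand-in features
envs_y = [torch.randint(0, 2, (100,)) for _ in range(3)]
print(cip_penalty(envs_x, envs_y, n_classes=2))
```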