title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
A Survey on Quantum Machine Learning: Current Trends, Challenges, Opportunities, and the Road Ahead | Quantum Computing (QC) claims to improve the efficiency of solving complex
problems, compared to classical computing. When QC is applied to Machine
Learning (ML) applications, it forms a Quantum Machine Learning (QML) system.
After discussing the basic concepts of QC and its advantages over classical
computing, this paper reviews the key aspects of QML in a comprehensive manner.
We discuss different QML algorithms and their domain applicability, quantum
datasets, hardware technologies, software tools, simulators, and applications.
In this survey, we provide valuable information and resources for readers to
quickly get up to speed with current state-of-the-art techniques in the QML field. | [
"Kamila Zaman",
"Alberto Marchisio",
"Muhammad Abdullah Hanif",
"Muhammad Shafique"
] | 2023-10-16 11:52:54 | http://arxiv.org/abs/2310.10315v1 | http://arxiv.org/pdf/2310.10315v1 | 2310.10315v1 |
End-to-end Offline Reinforcement Learning for Glycemia Control | The development of closed-loop systems for glycemia control in type I
diabetes relies heavily on simulated patients. Improving the performance and
adaptability of these closed loops raises the risk of over-fitting the
simulator. This may have dire consequences, especially in unusual cases that
were not faithfully captured by the simulator, if captured at all. To address
this, we propose to use offline RL agents, trained on real patient data, to
perform glycemia control. To further improve performance, we propose an
end-to-end personalization pipeline, which leverages offline policy evaluation
methods to remove the need for a simulator altogether, while still enabling an
estimation
of clinically relevant metrics for diabetes. | [
"Tristan Beolet",
"Alice Adenis",
"Erik Huneker",
"Maxime Louis"
] | 2023-10-16 11:46:45 | http://arxiv.org/abs/2310.10312v1 | http://arxiv.org/pdf/2310.10312v1 | 2310.10312v1 |
Transparent Anomaly Detection via Concept-based Explanations | Advancements in deep learning techniques have given a boost to the
performance of anomaly detection. However, real-world and safety-critical
applications demand a level of transparency and reasoning beyond accuracy. The
task of anomaly detection (AD) focuses on finding whether a given sample
follows the learned distribution. Existing methods lack the ability to reason
with clear explanations for their outcomes. Hence, to overcome this challenge,
we propose Transparent Anomaly Detection Concept Explanations (ACE). ACE
is able to provide human-interpretable explanations in the form of concepts
along with the anomaly prediction. To the best of our knowledge, this is the
first paper to propose interpretable-by-design anomaly detection. In addition
to promoting transparency in AD, it allows for effective human-model
interaction. Our proposed model shows results that are higher than or
comparable to those of black-box uninterpretable models. We validate the
performance of ACE across three
realistic datasets - bird classification on CUB-200-2011, challenging
histopathology slide image classification on TIL-WSI-TCGA, and gender
classification on CelebA. We further demonstrate that our concept learning
paradigm can be seamlessly integrated with other classification-based AD
methods. | [
"Laya Rafiee Sevyeri",
"Ivaxi Sheth",
"Farhood Farahnak",
"Shirin Abbasinejad Enger"
] | 2023-10-16 11:46:26 | http://arxiv.org/abs/2310.10702v1 | http://arxiv.org/pdf/2310.10702v1 | 2310.10702v1 |
Time integration schemes based on neural networks for solving partial differential equations on coarse grids | The accuracy of solving partial differential equations (PDEs) on coarse grids
is greatly affected by the choice of discretization schemes. In this work, we
propose to learn time integration schemes based on neural networks which
satisfy three distinct sets of mathematical constraints, i.e., unconstrained,
semi-constrained with the root condition, and fully-constrained with both root
and consistency conditions. We focus on learning 3-step linear multistep
methods, which we subsequently apply to solve three model PDEs, i.e., the
one-dimensional heat equation, the one-dimensional wave equation, and the
one-dimensional Burgers' equation. The results show that the prediction error
of the learned fully-constrained scheme is close to that of the Runge-Kutta
method and Adams-Bashforth method. Compared to the traditional methods, the
learned unconstrained and semi-constrained schemes significantly reduce the
prediction error on coarse grids. On a grid that is 4 times coarser than the
reference grid, the mean square error shows a reduction of up to an order of
magnitude for some of the heat equation cases, and a substantial improvement in
phase prediction for the wave equation. On a 32 times coarser grid, the mean
square error for the Burgers' equation can be reduced by up to 35% to 40%. | [
"Xinxin Yan",
"Zhideng Zhou",
"Xiaohan Cheng",
"Xiaolei Yang"
] | 2023-10-16 11:43:08 | http://arxiv.org/abs/2310.10308v1 | http://arxiv.org/pdf/2310.10308v1 | 2310.10308v1 |
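The record above concerns learned 3-step linear multistep schemes under root and consistency constraints. As a minimal sketch of that setting (not the authors' learned scheme), the snippet below steps the 1D periodic heat equation with a generic explicit 3-step method; the Adams-Bashforth-3 coefficients, the grid size, and the crude history bootstrap are illustrative stand-ins.

```python
import numpy as np

# Method-of-lines RHS for the 1D heat equation u_t = u_xx on a periodic grid.
def heat_rhs(u, dx):
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

# Generic explicit 3-step linear multistep update:
#   u^{n+1} = a0*u^n + a1*u^{n-1} + a2*u^{n-2}
#             + dt*(b0*f^n + b1*f^{n-1} + b2*f^{n-2})
# Adams-Bashforth-3 coefficients stand in for learned (a, b); a learned scheme
# would output (a, b) subject to the same root/consistency constraints.
a = np.array([1.0, 0.0, 0.0])
b = np.array([23.0, -16.0, 5.0]) / 12.0
assert np.isclose(a.sum(), 1.0)   # one consistency condition on the coefficients

nx = 128
dx = 2.0 * np.pi / nx
x = dx * np.arange(nx)
dt = 0.1 * dx**2                  # within AB3's explicit stability limit

u_hist = [np.sin(x)] * 3          # crude bootstrap of the 3-step history
f_hist = [heat_rhs(u, dx) for u in u_hist]

for _ in range(500):
    u_new = sum(ai * ui for ai, ui in zip(a, u_hist)) \
          + dt * sum(bi * fi for bi, fi in zip(b, f_hist))
    u_hist = [u_new] + u_hist[:-1]
    f_hist = [heat_rhs(u_new, dx)] + f_hist[:-1]

# With u(0) = sin(x), the exact solution is exp(-t) * sin(x).
print(np.max(np.abs(u_hist[0] - np.exp(-500 * dt) * np.sin(x))))
```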
Forking Uncertainties: Reliable Prediction and Model Predictive Control with Sequence Models via Conformal Risk Control | In many real-world problems, predictions are leveraged to monitor and control
cyber-physical systems, demanding guarantees on the satisfaction of reliability
and safety requirements. However, predictions are inherently uncertain, and
managing prediction uncertainty presents significant challenges in environments
characterized by complex dynamics and forking trajectories. In this work, we
assume access to a pre-designed probabilistic implicit or explicit sequence
model, which may have been obtained using model-based or model-free methods. We
introduce probabilistic time series-conformal risk prediction (PTS-CRC), a
novel post-hoc calibration procedure that operates on the predictions produced
by any pre-designed probabilistic forecaster to yield reliable error bars. In
contrast to existing art, PTS-CRC produces predictive sets based on an ensemble
of multiple prototype trajectories sampled from the sequence model, supporting
the efficient representation of forking uncertainties. Furthermore, unlike the
state of the art, PTS-CRC can satisfy reliability definitions beyond coverage.
This property is leveraged to devise a novel model predictive control (MPC)
framework that addresses open-loop and closed-loop control problems under
general average constraints on the quality or safety of the control policy. We
experimentally validate the performance of PTS-CRC prediction and control by
studying a number of use cases in the context of wireless networking. Across
all the considered tasks, PTS-CRC predictors are shown to provide more
informative predictive sets, as well as safe control policies with larger
returns. | [
"Matteo Zecchin",
"Sangwoo Park",
"Osvaldo Simeone"
] | 2023-10-16 11:35:41 | http://arxiv.org/abs/2310.10299v1 | http://arxiv.org/pdf/2310.10299v1 | 2310.10299v1 |
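As a rough, self-contained illustration of the idea behind PTS-CRC, here is plain split-conformal calibration around a sampled prototype ensemble; it is hedged, not the paper's full risk-control procedure, and `sample_trajectories`/`make_example` are synthetic random-walk stand-ins for a real forecaster and dataset. The key feature shared with PTS-CRC is that the nonconformity score measures distance to the *nearest* sampled trajectory, so the calibrated set can follow forking futures.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 12  # forecast horizon

def sample_trajectories(context, k=8):
    # Stand-in for a pre-designed probabilistic forecaster: k sampled
    # futures branching off the last observed value (forking trajectories).
    return context[-1] + np.cumsum(rng.normal(0.0, 0.1, (k, H)), axis=1)

def score(y_true, prototypes):
    # Nonconformity = sup-distance to the NEAREST prototype, so the
    # calibrated set can track several forked futures at once.
    return min(np.max(np.abs(p - y_true)) for p in prototypes)

def make_example():
    ctx = np.cumsum(rng.normal(0.0, 0.1, 20))                 # observed series
    return ctx, ctx[-1] + np.cumsum(rng.normal(0.0, 0.1, H))  # true future

alpha, n = 0.1, 200
scores = [score(y, sample_trajectories(c))
          for c, y in (make_example() for _ in range(n))]
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Predictive set: all futures within sup-distance q of SOME prototype;
# marginally it contains the truth with probability >= 1 - alpha.
ctx_test, y_test = make_example()
covered = score(y_test, sample_trajectories(ctx_test)) <= q
print(q, covered)
```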
Mimicking the Maestro: Exploring the Efficacy of a Virtual AI Teacher in Fine Motor Skill Acquisition | Motor skills, especially fine motor skills like handwriting, play an
essential role in academic pursuits and everyday life. Traditional methods to
teach these skills, although effective, can be time-consuming and inconsistent.
With the rise of advanced technologies like robotics and artificial
intelligence, there is increasing interest in automating such teaching
processes using these technologies, via human-robot and human-computer
interactions. In this study, we examine the potential of a virtual AI teacher
in emulating the techniques of human educators for motor skill acquisition. We
introduce an AI teacher model that captures the distinct characteristics of
human instructors. Using a Reinforcement Learning environment tailored to mimic
teacher-learner interactions, we tested our AI model against four guiding
hypotheses, emphasizing improved learner performance, enhanced rate of skill
acquisition, and reduced variability in learning outcomes. Our findings,
validated on synthetic learners, revealed significant improvements across all
tested hypotheses. Notably, our model showcased robustness across different
learners and settings and demonstrated adaptability to handwriting. This
research underscores the potential of integrating Reinforcement Learning and
Imitation Learning models with robotics in revolutionizing the teaching of
critical motor skills. | [
"Hadar Mulian",
"Segev Shlomov",
"Lior Limonad"
] | 2023-10-16 11:11:43 | http://arxiv.org/abs/2310.10280v1 | http://arxiv.org/pdf/2310.10280v1 | 2310.10280v1 |
Prediction of Arabic Legal Rulings using Large Language Models | In the intricate field of legal studies, the analysis of court decisions is a
cornerstone for the effective functioning of the judicial system. The ability
to predict court outcomes helps judges during the decision-making process and
equips lawyers with invaluable insights, enhancing their strategic approaches
to cases. Despite its significance, the domain of Arabic court analysis remains
under-explored. This paper pioneers a comprehensive predictive analysis of
Arabic court decisions on a dataset of 10,813 commercial court real cases,
leveraging the advanced capabilities of the current state-of-the-art large
language models. Through a systematic exploration, we evaluate three prevalent
foundational models (LLaMA-7b, JAIS-13b, and GPT-3.5-turbo) and three training
paradigms: zero-shot, one-shot, and tailored fine-tuning. In addition, we assess
the benefit of summarizing and/or translating the original Arabic input texts.
This leads to a spectrum of 14 model variants, for which we offer a granular
performance assessment with a series of different metrics (human assessment,
GPT evaluation, ROUGE, and BLEU scores). We show that all variants of LLaMA
models yield limited performance, whereas GPT-3.5-based models outperform all
other models by a wide margin, surpassing the average score of the dedicated
Arabic-centric JAIS model by 50%. Furthermore, we show that all scores except
human evaluation are inconsistent and unreliable for assessing the performance
of large language models on court decision predictions. This study paves the
way for future research, bridging the gap between computational linguistics and
Arabic legal analytics. | [
"Adel Ammar",
"Anis Koubaa",
"Bilel Benjdira",
"Omar Najar",
"Serry Sibaee"
] | 2023-10-16 10:37:35 | http://arxiv.org/abs/2310.10260v1 | http://arxiv.org/pdf/2310.10260v1 | 2310.10260v1 |
Leveraging heterogeneous spillover effects in maximizing contextual bandit rewards | Recommender systems relying on contextual multi-armed bandits continuously
improve relevant item recommendations by taking into account the contextual
information. The objective of these bandit algorithms is to learn the best arm
(i.e., best item to recommend) for each user and thus maximize the cumulative
rewards from user engagement with the recommendations. However, current
approaches ignore potential spillover between interacting users, where the
action of one user can impact the actions and rewards of other users. Moreover,
spillover may vary for different people based on their preferences and the
closeness of ties to other users. This leads to heterogeneity in the spillover
effects, i.e., the extent to which the action of one user can impact the action
of another. Here, we propose a framework that allows contextual multi-armed
bandits to account for such heterogeneous spillovers when choosing the best arm
for each user. By experimenting on several real-world datasets using prominent
linear and non-linear contextual bandit algorithms, we observe that our
proposed method leads to significantly higher rewards than existing solutions
that ignore spillover. | [
"Ahmed Sayeed Faruk",
"Elena Zheleva"
] | 2023-10-16 10:34:41 | http://arxiv.org/abs/2310.10259v1 | http://arxiv.org/pdf/2310.10259v1 | 2310.10259v1 |
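One plausible way to let spillover enter a contextual bandit, sketched below under stated assumptions: augment each user's context with a tie-strength-weighted average of neighbours' contexts and run standard LinUCB on the concatenation. This is an illustrative simplification, not the paper's estimator; `ties`, `features`, and the toy reward model are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, d, n_arms, alpha = 20, 4, 3, 1.0
# Hypothetical tie-strength matrix between users.
ties = rng.random((n_users, n_users))
np.fill_diagonal(ties, 0.0)

A = [np.eye(2 * d) for _ in range(n_arms)]   # per-arm ridge design matrices
b = [np.zeros(2 * d) for _ in range(n_arms)]

def features(X, u):
    # Spillover-augmented context: the user's own context concatenated with
    # the tie-weighted mean of the neighbours' contexts.
    nbr = ties[u] @ X / ties[u].sum()
    return np.concatenate([X[u], nbr])

for t in range(2000):
    X = rng.normal(size=(n_users, d))        # fresh contexts each round
    u = t % n_users
    x = features(X, u)
    ucb = [x @ np.linalg.solve(A[k], b[k])
           + alpha * np.sqrt(x @ np.linalg.solve(A[k], x))
           for k in range(n_arms)]
    arm = int(np.argmax(ucb))
    reward = x[:d].sum() * (arm + 1) / n_arms + rng.normal(0.0, 0.1)  # toy model
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
```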
Leveraging Topological Maps in Deep Reinforcement Learning for Multi-Object Navigation | This work addresses the challenge of navigating expansive spaces with sparse
rewards through Reinforcement Learning (RL). Using topological maps, we elevate
elementary actions to object-oriented macro actions, enabling a simple Deep
Q-Network (DQN) agent to solve otherwise practically impossible environments. | [
"Simon Hakenes",
"Tobias Glasmachers"
] | 2023-10-16 10:19:45 | http://arxiv.org/abs/2310.10250v1 | http://arxiv.org/pdf/2310.10250v1 | 2310.10250v1 |
The Mixtures and the Neural Critics: On the Pointwise Mutual Information Profiles of Fine Distributions | Mutual information quantifies the dependence between two random variables and
remains invariant under diffeomorphisms. In this paper, we explore the
pointwise mutual information profile, an extension of mutual information that
maintains this invariance. We analytically describe the profiles of
multivariate normal distributions and introduce the family of fine
distributions, for which the profile can be accurately approximated using Monte
Carlo methods. We then show how fine distributions can be used to study the
limitations of existing mutual information estimators, investigate the behavior
of neural critics used in variational estimators, and understand the effect of
experimental outliers on mutual information estimation. Finally, we show how
fine distributions can be used to obtain model-based Bayesian estimates of
mutual information, suitable for problems with available domain expertise in
which uncertainty quantification is necessary. | [
"Paweł Czyż",
"Frederic Grabowski",
"Julia E. Vogt",
"Niko Beerenwinkel",
"Alexander Marx"
] | 2023-10-16 10:02:24 | http://arxiv.org/abs/2310.10240v1 | http://arxiv.org/pdf/2310.10240v1 | 2310.10240v1 |
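For the multivariate normal case mentioned in the abstract above, the pointwise mutual information profile is easy to approximate by Monte Carlo, since PMI(x, y) = log p(x, y) - log p(x) - log p(y) and its mean under the joint is the mutual information. A small sketch (SciPy assumed; the correlation value is arbitrary):

```python
import numpy as np
from scipy import stats

rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
joint = stats.multivariate_normal(mean=[0.0, 0.0], cov=cov)
marg = stats.norm(0.0, 1.0)

# Sample from the joint, then evaluate PMI at each sample.
xy = joint.rvs(size=100_000, random_state=0)
pmi = joint.logpdf(xy) - marg.logpdf(xy[:, 0]) - marg.logpdf(xy[:, 1])

# The profile is the distribution of `pmi`; its mean is the mutual
# information, known in closed form for a bivariate Gaussian.
mi_mc = pmi.mean()
mi_exact = -0.5 * np.log(1.0 - rho**2)
print(f"MC estimate {mi_mc:.4f} vs exact {mi_exact:.4f}")
# A histogram of `pmi` visualizes the profile the paper studies.
```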
Structural transfer learning of non-Gaussian DAG | Directed acyclic graph (DAG) has been widely employed to represent
directional relationships among a set of collected nodes. Yet, the available
data in one single study is often limited for accurate DAG reconstruction,
whereas heterogeneous data may be collected from multiple relevant studies. It
remains an open question how to pool the heterogeneous data together for better
DAG structure reconstruction in the target study. In this paper, we first
introduce a novel set of structural similarity measures for DAG and then
present a transfer DAG learning framework by effectively leveraging information
from auxiliary DAGs of different levels of similarities. Our theoretical
analysis shows substantial improvement in terms of DAG reconstruction in the
target study, even when no auxiliary DAG is overall similar to the target DAG,
which is in sharp contrast to most existing transfer learning methods. The
advantage of the proposed transfer DAG learning is also supported by extensive
numerical experiments on both synthetic data and multi-site brain functional
connectivity network data. | [
"Mingyang Ren",
"Xin He",
"Junhui Wang"
] | 2023-10-16 10:01:27 | http://arxiv.org/abs/2310.10239v1 | http://arxiv.org/pdf/2310.10239v1 | 2310.10239v1 |
SGOOD: Substructure-enhanced Graph-Level Out-of-Distribution Detection | Graph-level representation learning is important in a wide range of
applications. However, existing graph-level models are generally built on
the i.i.d. assumption for both training and testing graphs, which is not realistic
in an open world, where models can encounter out-of-distribution (OOD) testing
graphs that are from different distributions unknown during training. A
trustworthy model should not only produce accurate predictions for
in-distribution (ID) data, but also detect OOD graphs to avoid unreliable
prediction. In this paper, we present SGOOD, a novel graph-level OOD detection
framework. We find that substructure differences commonly exist between ID and
OOD graphs. Hence, SGOOD explicitly utilizes substructures to learn powerful
representations to achieve superior performance. Specifically, we build a super
graph of substructures for every graph, and design a two-level graph encoding
pipeline that works on both original graphs and super graphs to obtain
substructure-enhanced graph representations. To further distinguish ID and OOD
graphs, we develop three graph augmentation techniques that preserve
substructures and increase expressiveness. Extensive experiments against 10
competitors on numerous graph datasets demonstrate the superiority of SGOOD,
often surpassing existing methods by a significant margin. The code is
available at https://anonymous.4open.science/r/SGOOD-0958. | [
"Zhihao Ding",
"Jieming Shi"
] | 2023-10-16 09:51:24 | http://arxiv.org/abs/2310.10237v1 | http://arxiv.org/pdf/2310.10237v1 | 2310.10237v1 |
Generalizing Medical Image Representations via Quaternion Wavelet Networks | Neural network generalizability is becoming a broad research field due to the
increasing availability of datasets from different sources and for various
tasks. This issue is even more pronounced when processing medical data, where a
lack of methodological standards causes large variations in data provided by
different imaging centers or acquired with various devices and cofactors. To
overcome
these limitations, we introduce a novel, generalizable, data- and task-agnostic
framework able to extract salient features from medical images. The proposed
quaternion wavelet network (QUAVE) can be easily integrated with any
pre-existing medical image analysis or synthesis task, and it can be involved
with real, quaternion, or hypercomplex-valued models, generalizing their
adoption to single-channel data. QUAVE first extracts different sub-bands
through the quaternion wavelet transform, resulting in both
low-frequency/approximation bands and high-frequency/fine-grained features.
Then, it weighs the most representative set of sub-bands to be involved as
input to any other neural model for image processing, replacing standard data
samples. We conduct an extensive experimental evaluation comprising different
datasets, diverse image analysis, and synthesis tasks including reconstruction,
segmentation, and modality translation. We also evaluate QUAVE in combination
with both real and quaternion-valued models. Results demonstrate the
effectiveness and the generalizability of the proposed framework that improves
network performance while being flexible to be adopted in manifold scenarios. | [
"Luigi Sigillo",
"Eleonora Grassucci",
"Aurelio Uncini",
"Danilo Comminiello"
] | 2023-10-16 09:34:06 | http://arxiv.org/abs/2310.10224v1 | http://arxiv.org/pdf/2310.10224v1 | 2310.10224v1 |
GEVO-ML: Optimizing Machine Learning Code with Evolutionary Computation | Parallel accelerators, such as GPUs, are key enablers for large-scale Machine
Learning (ML) applications. However, ML model developers often lack detailed
knowledge of the underlying system architectures, while system programmers
usually do not have a high-level understanding of the ML model that runs on the
specific system. To mitigate this gap between two relevant aspects of domain
knowledge, this paper proposes GEVO-ML, a tool for automatically discovering
optimization opportunities and tuning the performance of ML kernels, where the
model and training/prediction processes are uniformly represented in a single
intermediate language, the Multiple-Layer Intermediate Representation (MLIR).
GEVO-ML uses multi-objective evolutionary search to find edits (mutations) to
MLIR code that ultimately runs on GPUs, improving performance on desired
criteria while retaining required functionality.
We demonstrate GEVO-ML on two different ML workloads for both model training
and prediction. GEVO-ML finds significant Pareto improvements for these models,
achieving 90.43% performance improvement when model accuracy is relaxed by 2%,
from 91.2% to 89.3%. For the training workloads, GEVO-ML finds a 4.88%
improvement in model accuracy, from 91% to 96%, without sacrificing training or
testing speed. Our analysis of key GEVO-ML mutations reveals diverse code
modifications which, while possibly foreign to human developers, achieve
effects similar to how human developers improve model design, for example by
changing learning rates or pruning non-essential layer parameters. | [
"Jhe-Yu Liou",
"Stephanie Forrest",
"Carole-Jean Wu"
] | 2023-10-16 09:24:20 | http://arxiv.org/abs/2310.10211v1 | http://arxiv.org/pdf/2310.10211v1 | 2310.10211v1 |
Self-supervised Fetal MRI 3D Reconstruction Based on Radiation Diffusion Generation Model | Although the use of multiple stacks can handle slice-to-volume motion
correction and artifact removal, several problems remain: 1)
The slice-to-volume method usually uses slices as input, which cannot solve the
problem of uniform intensity distribution and complementarity in regions of
different fetal MRI stacks; 2) The integrity of 3D space is not considered,
which adversely affects the discrimination and generation of globally
consistent information in fetal MRI; 3) Fetal MRI with severe motion artifacts
in the real world cannot achieve high-quality super-resolution reconstruction.
To address these issues, we propose a novel fetal brain MRI high-quality volume
reconstruction method, called the Radiation Diffusion Generation Model (RDGM).
It is a self-supervised generation method that incorporates the idea of
coordinate generation based on Neural Radiance Fields (NeRF) and
super-resolution generation based on diffusion models. To solve regional intensity
heterogeneity in different directions, we use a pre-trained transformer model
for slice registration, and then, a new regionally Consistent Implicit Neural
Representation (CINR) network sub-module is proposed. CINR can generate the
initial volume by combining a coordinate association map of two different
coordinate mapping spaces. To enhance volume global consistency and
discrimination, we introduce the Volume Diffusion Super-resolution Generation
(VDSG) mechanism. The global intensity discriminant generation from
volume-to-volume is carried out using the idea of diffusion generation, and
CINR becomes the deviation intensity generation network of the volume-to-volume
diffusion model. Finally, the experimental results on real-world fetal brain
MRI stacks demonstrate the state-of-the-art performance of our method. | [
"Junpeng Tan",
"Xin Zhang",
"Yao Lv",
"Xiangmin Xu",
"Gang Li"
] | 2023-10-16 09:22:00 | http://arxiv.org/abs/2310.10209v1 | http://arxiv.org/pdf/2310.10209v1 | 2310.10209v1 |
Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World | We introduce Bongard-OpenWorld, a new benchmark for evaluating real-world
few-shot reasoning for machine vision. It originates from the classical Bongard
Problems (BPs): Given two sets of images (positive and negative), the model
needs to identify the set that query images belong to by inducing the visual
concepts, which are exclusively depicted by images from the positive set. Our
benchmark inherits the few-shot concept induction of the original BPs while
adding two novel layers of challenge: 1) open-world free-form concepts, as
the visual concepts in Bongard-OpenWorld are unique compositions of terms from
an open vocabulary, ranging from object categories to abstract visual
attributes and commonsense factual knowledge; 2) real-world images, as opposed
to the synthetic diagrams used by many counterparts. In our exploration,
Bongard-OpenWorld already imposes a significant challenge to current few-shot
reasoning algorithms. We further investigate to what extent the recently
introduced Large Language Models (LLMs) and Vision-Language Models (VLMs) can
solve our task, by directly probing VLMs, and combining VLMs and LLMs in an
interactive reasoning scheme. We even designed a neuro-symbolic reasoning
approach that reconciles LLMs & VLMs with logical reasoning to emulate the
human problem-solving process for Bongard Problems. However, none of these
approaches manage to close the human-machine gap, as the best learner achieves
64% accuracy while human participants easily reach 91%. We hope
Bongard-OpenWorld can help us better understand the limitations of current
visual intelligence and facilitate future research on visual agents with
stronger few-shot visual reasoning capabilities. | [
"Rujie Wu",
"Xiaojian Ma",
"Qing Li",
"Wei Wang",
"Zhenliang Zhang",
"Song-Chun Zhu",
"Yizhou Wang"
] | 2023-10-16 09:19:18 | http://arxiv.org/abs/2310.10207v1 | http://arxiv.org/pdf/2310.10207v1 | 2310.10207v1 |
Interpretable Predictive Models to Understand Risk Factors for Maternal and Fetal Outcomes | Although most pregnancies result in a good outcome, complications are not
uncommon and can be associated with serious implications for mothers and
babies. Predictive modeling has the potential to improve outcomes through
better understanding of risk factors, heightened surveillance for high risk
patients, and more timely and appropriate interventions, thereby helping
obstetricians deliver better care. We identify and study the most important
risk factors for four types of pregnancy complications: (i) severe maternal
morbidity, (ii) shoulder dystocia, (iii) preterm preeclampsia, and (iv)
antepartum stillbirth. We use an Explainable Boosting Machine (EBM), a
high-accuracy glass-box learning method, for prediction and identification of
important risk factors. We undertake external validation and perform an
extensive robustness analysis of the EBM models. EBMs match the accuracy of
other black-box ML methods such as deep neural networks and random forests, and
outperform logistic regression, while being more interpretable. EBMs prove to
be robust. The interpretability of the EBM models reveals surprising insights
into the features contributing to risk (e.g. maternal height is the second most
important feature for shoulder dystocia) and may have potential for clinical
application in the prediction and prevention of serious complications in
pregnancy. | [
"Tomas M. Bosschieter",
"Zifei Xu",
"Hui Lan",
"Benjamin J. Lengerich",
"Harsha Nori",
"Ian Painter",
"Vivienne Souter",
"Rich Caruana"
] | 2023-10-16 09:17:10 | http://arxiv.org/abs/2310.10203v1 | http://arxiv.org/pdf/2310.10203v1 | 2310.10203v1 |
Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook | Temporal data, notably time series and spatio-temporal data, are prevalent in
real-world applications. They capture dynamic system measurements and are
produced in vast quantities by both physical and virtual sensors. Analyzing
these data types is vital to harnessing the rich information they encompass and
thus benefits a wide range of downstream tasks. Recent advances in large
language and other foundational models have spurred increased use of these
models in time series and spatio-temporal data mining. Such methodologies not
only enable enhanced pattern recognition and reasoning across diverse domains
but also lay the groundwork for artificial general intelligence capable of
comprehending and processing common temporal data. In this survey, we offer a
comprehensive and up-to-date review of large models tailored (or adapted) for
time series and spatio-temporal data, spanning four key facets: data types,
model categories, model scopes, and application areas/tasks. Our objective is
to equip practitioners with the knowledge to develop applications and further
research in this underexplored domain. We primarily categorize the existing
literature into two major clusters: large models for time series analysis
(LM4TS) and spatio-temporal data mining (LM4STD). On this basis, we further
classify research based on model scopes (i.e., general vs. domain-specific) and
application areas/tasks. We also provide a comprehensive collection of
pertinent resources, including datasets, model assets, and useful tools,
categorized by mainstream applications. This survey coalesces the latest
strides in large model-centric research on time series and spatio-temporal
data, underscoring the solid foundations, current advances, practical
applications, abundant resources, and future research opportunities. | [
"Ming Jin",
"Qingsong Wen",
"Yuxuan Liang",
"Chaoli Zhang",
"Siqiao Xue",
"Xue Wang",
"James Zhang",
"Yi Wang",
"Haifeng Chen",
"Xiaoli Li",
"Shirui Pan",
"Vincent S. Tseng",
"Yu Zheng",
"Lei Chen",
"Hui Xiong"
] | 2023-10-16 09:06:00 | http://arxiv.org/abs/2310.10196v2 | http://arxiv.org/pdf/2310.10196v2 | 2310.10196v2 |
AdaLomo: Low-memory Optimization with Adaptive Learning Rate | Large language models have achieved remarkable success, but their extensive
parameter size necessitates substantial memory for training, thereby setting a
high threshold. While the recently proposed low-memory optimization (LOMO)
reduces memory footprint, its optimization technique, akin to stochastic
gradient descent, is sensitive to hyper-parameters and exhibits suboptimal
convergence, failing to match the performance of the prevailing optimizer for
large language models, AdamW. Through empirical analysis of the Adam optimizer,
we found that, compared to momentum, the adaptive learning rate is more
critical for bridging the gap. Building on this insight, we introduce the
low-memory optimization with adaptive learning rate (AdaLomo), which offers an
adaptive learning rate for each parameter. To maintain memory efficiency, we
employ non-negative matrix factorization for the second-order moment estimation
in the optimizer state. Additionally, we suggest the use of a grouped update
normalization to stabilize convergence. Our experiments with instruction-tuning
and further pre-training demonstrate that AdaLomo achieves results on par with
AdamW, while significantly reducing memory requirements, thereby lowering the
hardware barrier to training large language models. | [
"Kai Lv",
"Hang Yan",
"Qipeng Guo",
"Haijun Lv",
"Xipeng Qiu"
] | 2023-10-16 09:04:28 | http://arxiv.org/abs/2310.10195v2 | http://arxiv.org/pdf/2310.10195v2 | 2310.10195v2 |
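A minimal NumPy sketch of the memory-saving idea the AdaLomo abstract names: keep a non-negative rank-1 factorization of the second moment (in the style of Adafactor) instead of a full Adam-style `v` tensor. The actual AdaLomo update, its fusion with LOMO's on-the-fly gradient handling, and the grouped update normalization are not reproduced here; `factored_update` is a hypothetical helper, and bias correction is omitted for brevity.

```python
import numpy as np

def factored_update(w, g, r, c, lr=1e-3, beta2=0.999, eps=1e-8):
    # EMA of row/column means of the squared gradient (non-negative factors).
    r = beta2 * r + (1.0 - beta2) * (g**2).mean(axis=1)
    c = beta2 * c + (1.0 - beta2) * (g**2).mean(axis=0)
    # Rank-1 reconstruction of the second moment: O(m + n) memory, not O(m*n).
    v = np.outer(r, c) / max(r.mean(), eps)
    w = w - lr * g / (np.sqrt(v) + eps)   # adaptive per-parameter step
    return w, r, c

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
r, c = np.zeros(64), np.zeros(32)
for _ in range(100):
    g = 2.0 * w                           # stand-in gradient of ||w||^2
    w, r, c = factored_update(w, g, r, c)
print(np.linalg.norm(w))                  # the norm shrinks over the updates
```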
An Interpretable Deep-Learning Framework for Predicting Hospital Readmissions From Electronic Health Records | With the increasing availability of patients' data, modern medicine is
shifting towards prospective healthcare. Electronic health records contain a
variety of information useful for clinical patient description and can be
exploited for the construction of predictive models, given that similar medical
histories will likely lead to similar progressions. One example is unplanned
hospital readmission prediction, an essential task for reducing hospital costs
and improving patient health. Despite predictive models showing very good
performances especially with deep-learning models, they are often criticized
for the poor interpretability of their results, a fundamental characteristic in
the medical field, where incorrect predictions might have serious consequences
for the patient health. In this paper we propose a novel, interpretable
deep-learning framework for predicting unplanned hospital readmissions,
supported by NLP findings on word embeddings and by neural-network models
(ConvLSTM) for better handling of temporal data. We validate our system on the two
predictive tasks of hospital readmission within 30 and 180 days, using
real-world data. In addition, we introduce and test a model-dependent technique
to make the representation of results easily interpretable by the medical
staff. Our solution achieves better performances compared to traditional models
based on machine learning, while providing at the same time more interpretable
results. | [
"Fabio Azzalini",
"Tommaso Dolci",
"Marco Vagaggini"
] | 2023-10-16 08:48:52 | http://arxiv.org/abs/2310.10187v1 | http://arxiv.org/pdf/2310.10187v1 | 2310.10187v1 |
Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition | In a practical dialogue system, users may input out-of-domain (OOD) queries.
The Generalized Intent Discovery (GID) task aims to discover OOD intents from
OOD queries and extend them to the in-domain (IND) classifier. However, GID
only considers one stage of OOD learning, and needs to utilize the data in all
previous stages for joint training, which limits its broad application in
practice. In this paper, we introduce a new task, Continual Generalized Intent
Discovery (CGID), which aims to continuously and automatically discover OOD
intents from dynamic OOD data streams and then incrementally add them to the
classifier with almost no previous data, thus moving towards dynamic intent
recognition in an open world. Next, we propose a method called Prototype-guided
Learning with Replay and Distillation (PLRD) for CGID, which bootstraps new
intent discovery through class prototypes and balances new and old intents
through data replay and feature distillation. Finally, we conduct detailed
experiments and analysis to verify the effectiveness of PLRD and understand the
key challenges of CGID for future research. | [
"Xiaoshuai Song",
"Yutao Mou",
"Keqing He",
"Yueyan Qiu",
"Pei Wang",
"Weiran Xu"
] | 2023-10-16 08:48:07 | http://arxiv.org/abs/2310.10184v1 | http://arxiv.org/pdf/2310.10184v1 | 2310.10184v1 |
Hypergraph Echo State Network | A hypergraph, as a generalization of a graph, records higher-order interactions
among nodes, yields a more flexible network model, and allows non-linear
features for a group of nodes. In this article, we propose a hypergraph echo
state network (HypergraphESN) as a generalization of graph echo state network
(GraphESN) designed for efficient processing of hypergraph-structured data,
derive convergence conditions for the algorithm, and discuss its versatility in
comparison to GraphESN. The numerical experiments on the binary classification
tasks demonstrate that HypergraphESN exhibits comparable or superior accuracy
performance to GraphESN for hypergraph-structured data, and accuracy increases
if more higher-order interactions in a network are identified. | [
"Justin Lien"
] | 2023-10-16 08:35:23 | http://arxiv.org/abs/2310.10177v1 | http://arxiv.org/pdf/2310.10177v1 | 2310.10177v1 |
Large Language Models Meet Open-World Intent Discovery and Recognition: An Evaluation of ChatGPT | The tasks of out-of-domain (OOD) intent discovery and generalized intent
discovery (GID) aim to extend a closed intent classifier to open-world intent
sets, which is crucial to task-oriented dialogue (TOD) systems. Previous
methods address them by fine-tuning discriminative models. Recently, although
some studies have been exploring the application of large language models
(LLMs), represented by ChatGPT, to various downstream tasks, it remains
unclear whether ChatGPT can discover and incrementally extend OOD intents. In
this paper, we comprehensively evaluate ChatGPT on OOD intent discovery and
GID, and then outline the strengths and weaknesses of ChatGPT. Overall, ChatGPT
exhibits consistent advantages under zero-shot settings, but is still at a
disadvantage compared to fine-tuned models. Digging deeper, through a series of
analytical experiments, we summarize and discuss the challenges faced by LLMs
including clustering, domain-specific understanding, and cross-domain
in-context learning scenarios. Finally, we provide empirical guidance for
future directions to address these challenges. | [
"Xiaoshuai Song",
"Keqing He",
"Pei Wang",
"Guanting Dong",
"Yutao Mou",
"Jingang Wang",
"Yunsen Xian",
"Xunliang Cai",
"Weiran Xu"
] | 2023-10-16 08:34:44 | http://arxiv.org/abs/2310.10176v1 | http://arxiv.org/pdf/2310.10176v1 | 2310.10176v1 |
On permutation symmetries in Bayesian neural network posteriors: a variational perspective | The elusive nature of gradient-based optimization in neural networks is tied
to their loss landscape geometry, which is poorly understood. However, recent
work has brought solid evidence that there is essentially no loss barrier
between the local solutions of gradient descent, once one accounts for
weight-permutations that leave the network's computation unchanged. This raises
questions for approximate inference in Bayesian neural networks (BNNs), where
we are interested in marginalizing over multiple points in the loss landscape.
In this work, we first extend the formalism of marginalized loss barrier and
solution interpolation to BNNs, before proposing a matching algorithm to search
for linearly connected solutions. This is achieved by aligning the
distributions of two independent approximate Bayesian solutions with respect to
permutation matrices. We build on the results of Ainsworth et al. (2023),
reframing the problem as a combinatorial optimization one, using an
approximation to the sum of bilinear assignment problem. We then experiment on
a variety of architectures and datasets, finding nearly zero marginalized loss
barriers for linearly connected solutions. | [
"Simone Rossi",
"Ankit Singh",
"Thomas Hannagan"
] | 2023-10-16 08:26:50 | http://arxiv.org/abs/2310.10171v1 | http://arxiv.org/pdf/2310.10171v1 | 2310.10171v1 |
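A toy version of the weight-matching step behind this line of work: for a single layer, the best hidden-unit permutation between two solutions can be found with the linear assignment problem. Matching point weights of two models is a deliberate simplification of the paper's distribution-level alignment of approximate Bayesian solutions; the layer sizes and noise scale below are arbitrary.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
W_a = rng.normal(size=(32, 16))                 # a layer of model A
perm_true = rng.permutation(32)
# Model B: a permuted, slightly perturbed copy of A's layer.
W_b = W_a[perm_true] + 0.01 * rng.normal(size=(32, 16))

# Maximize sum_i <W_b[i], W_a[pi(i)]> over permutations pi by solving a
# linear assignment problem on the negated inner-product matrix.
cost = -W_b @ W_a.T
rows, cols = linear_sum_assignment(cost)
print(np.mean(cols == perm_true))               # ~1.0: permutation recovered
```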
Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments | This paper aims to explore the potential of combining Deep Reinforcement
Learning (DRL) with Knowledge Distillation (KD) by distilling various DRL
algorithms and studying their distillation effects. By doing so, the
computational burden of deep models could be reduced while maintaining the
performance. The primary objective is to provide a benchmark for evaluating the
performance of different DRL algorithms that have been refined using KD
techniques. By distilling these algorithms, the goal is to develop efficient
and fast DRL models. This research is expected to provide valuable insights
that can facilitate further advancements in this promising direction. By
exploring the combination of DRL and KD, this work aims to promote the
development of models that require fewer GPU resources, learn more quickly, and
make faster decisions in complex environments. The results of this research
have the capacity to significantly advance the field of DRL and pave the way
for the future deployment of resource-efficient, decision-making intelligent
systems. | [
"Guanlin Meng"
] | 2023-10-16 08:26:45 | http://arxiv.org/abs/2310.10170v1 | http://arxiv.org/pdf/2310.10170v1 | 2310.10170v1 |
DemoNSF: A Multi-task Demonstration-based Generative Framework for Noisy Slot Filling Task | Recently, prompt-based generative frameworks have shown impressive
capabilities in sequence labeling tasks. However, in practical dialogue
scenarios, relying solely on simplistic templates and traditional corpora
presents a challenge for these methods in generalizing to unknown input
perturbations. To address this gap, we propose a multi-task demonstration-based
generative framework for noisy slot filling, named DemoNSF. Specifically, we
introduce three noisy auxiliary tasks, namely noisy recovery (NR), random mask
(RM), and hybrid discrimination (HD), to implicitly capture semantic structural
information of input perturbations at different granularities. In the
downstream main task, we design a noisy demonstration construction strategy for
the generative framework, which explicitly incorporates task-specific
information and the perturbed distribution during training and inference.
Experiments on two benchmarks demonstrate that DemoNSF outperforms all baseline
methods and achieves strong generalization. Further analysis provides empirical
guidance for the practical application of generative frameworks. Our code is
released at https://github.com/dongguanting/Demo-NSF. | [
"Guanting Dong",
"Tingfeng Hui",
"Zhuoma GongQue",
"Jinxu Zhao",
"Daichi Guo",
"Gang Zhao",
"Keqing He",
"Weiran Xu"
] | 2023-10-16 08:16:53 | http://arxiv.org/abs/2310.10169v1 | http://arxiv.org/pdf/2310.10169v1 | 2310.10169v1 |
The Road to On-board Change Detection: A Lightweight Patch-Level Change Detection Network via Exploring the Potential of Pruning and Pooling | Existing satellite remote sensing change detection (CD) methods often crop
original large-scale bi-temporal image pairs into small patch pairs and then
use pixel-level CD methods to fairly process all the patch pairs. However, due
to the sparsity of change in large-scale satellite remote sensing images,
existing pixel-level CD methods waste computational cost and memory
resources on large numbers of unchanged areas, which reduces the processing
efficiency of on-board platforms with extremely limited computation and memory
resources. To address this issue, we propose a lightweight patch-level CD
network (LPCDNet) to rapidly remove large numbers of unchanged patch pairs in
large-scale bi-temporal image pairs. This is helpful to accelerate the
subsequent pixel-level CD processing stage and reduce its memory costs. In our
LPCDNet, a sensitivity-guided channel pruning method is proposed to remove
unimportant channels and construct the lightweight backbone network on the
basis of the ResNet18 network. Then, the multi-layer feature compression (MLFC)
module is
designed to compress and fuse the multi-level feature information of
bi-temporal image patch. The output of MLFC module is fed into the
fully-connected decision network to generate the predicted binary label.
Finally, a weighted cross-entropy loss is utilized in the training process of
network to tackle the change/unchange class imbalance problem. Experiments on
two CD datasets demonstrate that our LPCDNet achieves more than 1000 frames per
second on an edge computation platform, i.e., NVIDIA Jetson AGX Orin, which is
more than 3 times that of the existing methods without noticeable CD
performance loss. In addition, our method reduces more than 60% memory costs of
the subsequent pixel-level CD processing stage. | [
"Lihui Xue",
"Zhihao Wang",
"Xueqian Wang",
"Gang Li"
] | 2023-10-16 08:11:41 | http://arxiv.org/abs/2310.10166v1 | http://arxiv.org/pdf/2310.10166v1 | 2310.10166v1 |
Adaptive Workload Distribution for Accuracy-aware DNN Inference on Collaborative Edge Platforms | DNN inference can be accelerated by distributing the workload among a cluster
of collaborative edge nodes. Heterogeneity among edge devices and
accuracy-performance trade-offs of DNN models present a complex exploration
space while catering to the inference performance requirements. In this work,
we propose adaptive workload distribution for DNN inference, jointly
considering node-level heterogeneity of edge devices, and application-specific
accuracy and performance requirements. Our proposed approach combinatorially
optimizes heterogeneity-aware workload partitioning and dynamic accuracy
configuration of DNN models to ensure performance and accuracy guarantees. We
tested our approach on an edge cluster of Odroid XU4, Raspberry Pi4, and Jetson
Nano boards and achieved an average gain of 41.52% in performance and 5.2% in
output accuracy as compared to state-of-the-art workload distribution
strategies. | [
"Zain Taufique",
"Antonio Miele",
"Pasi Liljeberg",
"Anil Kanduri"
] | 2023-10-16 07:55:30 | http://arxiv.org/abs/2310.10157v1 | http://arxiv.org/pdf/2310.10157v1 | 2310.10157v1 |
DNA: Denoised Neighborhood Aggregation for Fine-grained Category Discovery | Discovering fine-grained categories from coarsely labeled data is a practical
and challenging task, which can bridge the gap between the demand for
fine-grained analysis and the high annotation cost. Previous works mainly focus
on instance-level discrimination to learn low-level features, but ignore
semantic similarities between data, which may prevent these models from learning
compact cluster representations. In this paper, we propose Denoised
Neighborhood Aggregation (DNA), a self-supervised framework that encodes
semantic structures of data into the embedding space. Specifically, we retrieve
k-nearest neighbors of a query as its positive keys to capture semantic
similarities between data and then aggregate information from the neighbors to
learn compact cluster representations, which can make fine-grained categories
more separable. However, the retrieved neighbors can be noisy and contain
many false-positive keys, which can degrade the quality of learned embeddings.
To cope with this challenge, we propose three principles to filter out these
false neighbors for better representation learning. Furthermore, we
theoretically justify that the learning objective of our framework is
equivalent to a clustering loss, which can capture semantic similarities
between data to form compact fine-grained clusters. Extensive experiments on
three benchmark datasets show that our method can retrieve more accurate
neighbors (21.31% accuracy improvement) and outperform state-of-the-art models
by a large margin (average 9.96% improvement on three metrics). Our code and
data are available at https://github.com/Lackel/DNA. | [
"Wenbin An",
"Feng Tian",
"Wenkai Shi",
"Yan Chen",
"Qinghua Zheng",
"QianYing Wang",
"Ping Chen"
] | 2023-10-16 07:43:30 | http://arxiv.org/abs/2310.10151v1 | http://arxiv.org/pdf/2310.10151v1 | 2310.10151v1 |
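A hedged sketch of the neighborhood-aggregation idea above: retrieve k-nearest neighbors in embedding space, filter them, and average. A single mutual-nearest-neighbor rule stands in for the paper's three denoising principles, `aggregate` is a hypothetical helper, and embeddings are assumed L2-normalized.

```python
import numpy as np

def aggregate(E, k=5):
    # E: (n, d) L2-normalized embeddings.
    S = E @ E.T                                   # cosine similarity
    np.fill_diagonal(S, -np.inf)                  # exclude self from neighbors
    nn = np.argsort(-S, axis=1)[:, :k]            # k nearest neighbors per row
    out = np.empty_like(E)
    for i in range(len(E)):
        # Denoising rule (an assumption): keep only mutual nearest neighbors.
        mutual = [j for j in nn[i] if i in nn[j]]
        out[i] = E[[i] + mutual].mean(axis=0)     # aggregate query + neighbors
    return out / np.linalg.norm(out, axis=1, keepdims=True)

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 16))
E /= np.linalg.norm(E, axis=1, keepdims=True)
E_agg = aggregate(E)
print(E_agg.shape)
```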
An Empirical Study of Simplicial Representation Learning with Wasserstein Distance | In this paper, we delve into the problem of simplicial representation
learning utilizing the 1-Wasserstein distance on a tree structure (a.k.a.,
Tree-Wasserstein distance (TWD)), where TWD is defined as the L1 distance
between two tree-embedded vectors. Specifically, we consider a framework for
simplicial representation estimation employing a self-supervised learning
approach based on SimCLR with a negative TWD as a similarity measure. In
SimCLR, the cosine similarity with real-vector embeddings is often utilized;
however, it has not been well studied utilizing L1-based measures with
simplicial embeddings. A key challenge is that training the L1 distance is
numerically challenging and often yields unsatisfactory outcomes, and there are
numerous choices for probability models. Thus, this study empirically
investigates a strategy for optimizing self-supervised learning with TWD and
finds a stable training procedure. More specifically, we evaluate the
combination of two types of TWD (total variation and ClusterTree) and several
simplicial models including the softmax function, the ArcFace probability
model, and simplicial embedding. Moreover, we propose a simple yet effective
Jeffrey divergence-based regularization method to stabilize the optimization.
Through empirical experiments on STL10, CIFAR10, CIFAR100, and SVHN, we first
found that the simple combination of the softmax function and TWD can obtain
significantly lower results than the standard SimCLR (non-simplicial model and
cosine similarity). We found that the model performance depends on the
combination of TWD and the simplicial model, and the Jeffrey divergence
regularization usually helps model training. Finally, we inferred that the
appropriate choice of combination of TWD and simplicial models outperformed
cosine similarity based representation learning. | [
"Makoto Yamada",
"Yuki Takezawa",
"Guillaume Houry",
"Kira Michaela Dusterwald",
"Deborah Sulem",
"Han Zhao",
"Yao-Hung Hubert Tsai"
] | 2023-10-16 07:31:30 | http://arxiv.org/abs/2310.10143v1 | http://arxiv.org/pdf/2310.10143v1 | 2310.10143v1 |
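Once a tree is fixed, TWD itself is cheap: with Γ(e) the leaves below edge e, TWD(μ, ν) = Σ_e w_e |μ(Γ(e)) − ν(Γ(e))|, i.e. an L1 distance between tree-embedded vectors, and its negation can serve as the SimCLR similarity. A toy example below; the 4-leaf tree and its weights are an illustrative assumption (the paper evaluates total-variation and ClusterTree constructions).

```python
import numpy as np

# Tree: root -> {A, B}; A -> {leaf0, leaf1}; B -> {leaf2, leaf3}.
# Each edge is (weight, list of leaves in the subtree below it).
edges = [
    (1.0, [0, 1]),   # edge root-A
    (1.0, [2, 3]),   # edge root-B
    (0.5, [0]), (0.5, [1]), (0.5, [2]), (0.5, [3]),
]

def twd(mu, nu):
    # L1 distance between the cumulative subtree masses of mu and nu.
    return sum(w * abs(mu[idx].sum() - nu[idx].sum()) for w, idx in edges)

mu = np.array([0.5, 0.5, 0.0, 0.0])   # simplicial embeddings (sum to 1)
nu = np.array([0.0, 0.0, 0.5, 0.5])
print(twd(mu, nu))   # 3.0: all mass travels leaf-to-leaf across the root
```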
LoBaSS: Gauging Learnability in Supervised Fine-tuning Data | Supervised Fine-Tuning (SFT) serves as a crucial phase in aligning Large
Language Models (LLMs) to specific task prerequisites. The selection of
fine-tuning data profoundly influences the model's performance, whose principle
is traditionally grounded in data quality and distribution. In this paper, we
introduce a new dimension in SFT data selection: learnability. This new
dimension is motivated by the intuition that SFT unlocks capabilities acquired
by a LLM during the pretraining phase. Given that different pretrained models
have disparate capabilities, the SFT data appropriate for one may not suit
another. Thus, we introduce the term learnability to define the suitability of
data for effective learning by the model. We present the Loss Based SFT Data
Selection (LoBaSS) method, utilizing data learnability as the principal
criterion for the selection of SFT data. This method provides a nuanced approach,
allowing the alignment of data selection with inherent model capabilities,
ensuring optimal compatibility and learning efficiency. In experimental
comparisons involving 7B and 13B models, our LoBaSS method is able to surpass
full-data fine-tuning at merely 6% of the total training data. When employing
16.7% of the data, LoBaSS harmonizes the model's capabilities across
conversational and mathematical domains, proving its efficacy and adaptability. | [
"Haotian Zhou",
"Tingkai Liu",
"Qianli Ma",
"Jianbo Yuan",
"Pengfei Liu",
"Yang You",
"Hongxia Yang"
] | 2023-10-16 07:26:24 | http://arxiv.org/abs/2310.13008v1 | http://arxiv.org/pdf/2310.13008v1 | 2310.13008v1 |
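The abstract does not spell out LoBaSS's scoring rule, so the sketch below only illustrates the general recipe of loss-based selection under an assumed band-pass heuristic: score every candidate by its loss under the pretrained model, then keep a small budget of examples that are neither already mastered (near-zero loss) nor extreme outliers. The quantile thresholds are assumptions; the 6% budget echoes the abstract's figure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for per-example losses of the *pretrained* model on candidate
# SFT data (in practice: one forward pass per example).
pretrain_loss = rng.lognormal(mean=0.0, sigma=0.7, size=10_000)

# Assumed band-pass rule: drop examples the model already fits (low loss,
# little to learn) and extreme outliers (likely unlearnable or noisy).
lo, hi = np.quantile(pretrain_loss, [0.2, 0.9])
candidates = np.flatnonzero((pretrain_loss > lo) & (pretrain_loss < hi))

budget = int(0.06 * len(pretrain_loss))   # 6% of the data, as in the abstract
selected = rng.choice(candidates, size=budget, replace=False)
print(len(selected), "examples selected for SFT")
```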
CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization | Language agents have shown some ability to interact with an external
environment, e.g., a virtual world such as ScienceWorld, to perform complex
tasks, e.g., growing a plant, without the startup costs of reinforcement
learning. However, despite their zero-shot capabilities, these agents to date
do not continually improve over time beyond performance refinement on a
specific task. Here we present CLIN, the first language-based agent to achieve
this, so that it continually improves over multiple trials, including when both
the environment and task are varied, and without requiring parameter updates.
Our approach is to use a persistent, dynamic, textual memory centered on causal
abstractions (rather than general "helpful hints") that is regularly updated
after each trial so that the agent gradually learns useful knowledge for new
trials. In the ScienceWorld benchmark, CLIN is able to continually improve on
repeated trials on the same task and environment, outperforming
state-of-the-art reflective language agents like Reflexion by 23 absolute
points. CLIN can also transfer its learning to new environments (or new tasks),
improving its zero-shot performance by 4 points (13 for new tasks) and can
further improve performance there through continual memory updates, enhancing
performance by an additional 17 points (7 for new tasks). This suggests a new
architecture for agents built on frozen models that can still continually and
rapidly improve over time. | [
"Bodhisattwa Prasad Majumder",
"Bhavana Dalvi Mishra",
"Peter Jansen",
"Oyvind Tafjord",
"Niket Tandon",
"Li Zhang",
"Chris Callison-Burch",
"Peter Clark"
] | 2023-10-16 07:17:27 | http://arxiv.org/abs/2310.10134v1 | http://arxiv.org/pdf/2310.10134v1 | 2310.10134v1 |
A Non-monotonic Smooth Activation Function | Activation functions are crucial in deep learning models since they introduce
non-linearity into the networks, allowing them to learn from errors and make
adjustments, which is essential for learning complex patterns. The essential
purpose of activation functions is to transform unprocessed input signals into
significant output activations, promoting information transmission throughout
the neural network. In this study, we propose a new activation function called
Sqish, which is a non-monotonic and smooth function and an alternative to
existing ones. We show its superiority in classification, object detection, and
segmentation tasks, as well as in adversarial robustness experiments. We obtain
an 8.21% improvement over ReLU on the CIFAR100 dataset with the ShuffleNet V2
model under the FGSM adversarial attack. We also obtain a 5.87% improvement
over ReLU on image
classification on the CIFAR100 dataset with the ShuffleNet V2 model. | [
"Koushik Biswas",
"Meghana Karri",
"Ulaş Bağcı"
] | 2023-10-16 07:09:47 | http://arxiv.org/abs/2310.10126v1 | http://arxiv.org/pdf/2310.10126v1 | 2310.10126v1 |
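The abstract does not give Sqish's closed form, so as a stand-in the snippet below evaluates Mish, a different, published activation (x * tanh(softplus(x))) that is likewise smooth and non-monotonic: it dips below zero for moderately negative inputs instead of clamping to zero the way ReLU does.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)      # numerically stable log(1 + e^x)

def mish(x):
    # Mish (Misra, 2019): smooth everywhere, non-monotonic near x ~ -1.2,
    # shown here only to illustrate the class of activations Sqish belongs to.
    return x * np.tanh(softplus(x))

x = np.linspace(-5.0, 3.0, 9)
print(np.round(mish(x), 3))          # note the negative bump left of zero
```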
A Comprehensive Study of Privacy Risks in Curriculum Learning | Training a machine learning model with data following a meaningful order,
i.e., from easy to hard, has been proven to be effective in accelerating the
training process and achieving better model performance. The key enabling
technique is curriculum learning (CL), which has seen great success and has
been deployed in areas like image and text classification. Yet, how CL affects
the privacy of machine learning is unclear. Given that CL changes the way a
model memorizes the training data, its influence on data privacy needs to be
thoroughly evaluated. To fill this knowledge gap, we perform the first study
and leverage membership inference attack (MIA) and attribute inference attack
(AIA) as two vectors to quantify the privacy leakage caused by CL.
Our evaluation of nine real-world datasets with attack methods (NN-based,
metric-based, label-only MIA, and NN-based AIA) revealed new insights about CL.
First, MIA becomes slightly more effective when CL is applied, but the impact
is much more prominent for the subset of training samples ranked as difficult.
Second, a model trained under CL is less vulnerable under AIA, compared to MIA.
Third, the existing defense techniques like DP-SGD, MemGuard, and MixupMMD are
still effective under CL, though DP-SGD has a significant impact on target
model accuracy. Finally, based on our insights into CL, we propose a new MIA,
termed Diff-Cali, which exploits the difficulty scores for result calibration
and is demonstrated to be effective against all CL methods and the normal
training method. With this study, we hope to draw the community's attention to
the unintended privacy risks of emerging machine-learning techniques and
develop new attack benchmarks and defense solutions. | [
"Joann Qiongna Chen",
"Xinlei He",
"Zheng Li",
"Yang Zhang",
"Zhou Li"
] | 2023-10-16 07:06:38 | http://arxiv.org/abs/2310.10124v1 | http://arxiv.org/pdf/2310.10124v1 | 2310.10124v1 |
From Continuous Dynamics to Graph Neural Networks: Neural Diffusion and Beyond | Graph neural networks (GNNs) have demonstrated significant promise in
modelling relational data and have been widely applied in various fields of
interest. The key mechanism behind GNNs is the so-called message passing where
information is being iteratively aggregated to central nodes from their
neighbourhood. Such a scheme has been found to be intrinsically linked to a
physical process known as heat diffusion, where the propagation of GNNs
naturally corresponds to the evolution of heat density. Analogizing the process
of message passing to the heat dynamics allows to fundamentally understand the
power and pitfalls of GNNs and consequently informs better model design.
Recently, a plethora of works has emerged proposing GNNs inspired by
the continuous dynamics formulation, in an attempt to mitigate the known
limitations of GNNs, such as oversmoothing and oversquashing. In this survey,
we provide the first systematic and comprehensive review of studies that
leverage the continuous perspective of GNNs. To this end, we introduce
foundational ingredients for adapting continuous dynamics to GNNs, along with a
general framework for the design of graph neural dynamics. We then review and
categorize existing works based on their driven mechanisms and underlying
dynamics. We also summarize how the limitations of classic GNNs can be
addressed under the continuous framework. We conclude by identifying multiple
open research directions. | [
"Andi Han",
"Dai Shi",
"Lequan Lin",
"Junbin Gao"
] | 2023-10-16 06:57:24 | http://arxiv.org/abs/2310.10121v1 | http://arxiv.org/pdf/2310.10121v1 | 2310.10121v1 |
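The message-passing/heat-diffusion correspondence this survey builds on fits in a few lines: one explicit Euler step of dX/dt = -LX (L the graph Laplacian) is exactly a neighborhood-averaging update, and iterating it long enough exhibits the oversmoothing the survey discusses. A minimal sketch on a toy path graph:

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node path graph
L = np.diag(A.sum(axis=1)) - A              # combinatorial graph Laplacian

X = np.array([[1.0], [0.0], [0.0], [0.0]])  # one feature channel
tau = 0.25                                  # step size within stability limit
for _ in range(50):
    # One explicit Euler step of dX/dt = -L X: each node moves toward the
    # average of its neighbors -- message passing as heat diffusion.
    X = X - tau * (L @ X)
print(X.ravel())   # features flatten toward the mean 0.25 (oversmoothing)
```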
A proximal augmented Lagrangian based algorithm for federated learning with global and local convex conic constraints | This paper considers federated learning (FL) with constraints, where the
central server and all local clients collectively minimize a sum of convex
local objective functions subject to global and local convex conic constraints.
To train the model without moving local data from clients to the central
server, we propose an FL framework in which each local client performs multiple
updates using the local objective and local constraint, while the central
server handles the global constraint and performs aggregation based on the
updated local models. In particular, we develop a proximal augmented Lagrangian
(AL) based algorithm for FL with global and local convex conic constraints. The
subproblems arising in this algorithm are solved by an inexact alternating
direction method of multipliers (ADMM) in a federated fashion. Under a local
Lipschitz condition and mild assumptions, we establish the worst-case
complexity bounds of the proposed algorithm for finding an approximate KKT
solution. To the best of our knowledge, this work proposes the first algorithm
for FL with global and local constraints. Our numerical experiments demonstrate
the practical advantages of our algorithm in performing Neyman-Pearson
classification and enhancing model fairness in the context of FL. | [
"Chuan He",
"Le Peng",
"Ju Sun"
] | 2023-10-16 06:51:32 | http://arxiv.org/abs/2310.10117v1 | http://arxiv.org/pdf/2310.10117v1 | 2310.10117v1 |
Regret Analysis of the Posterior Sampling-based Learning Algorithm for Episodic POMDPs | Compared to Markov Decision Processes (MDPs), learning in Partially
Observable Markov Decision Processes (POMDPs) can be significantly harder due
to the difficulty of interpreting observations. In this paper, we consider
episodic learning problems in POMDPs with unknown transition and observation
models. We consider the Posterior Sampling-based Reinforcement Learning (PSRL)
algorithm for POMDPs and show that its Bayesian regret scales as the square
root of the number of episodes. In general, the regret scales exponentially
with the horizon length $H$, and we show that this is inevitable by providing a
lower bound. However, under the condition that the POMDP is undercomplete and
weakly revealing, we establish a polynomial Bayesian regret bound that improves
the regret bound by a factor of $\Omega(H^2\sqrt{SA})$ over the recent result
by arXiv:2204.08967. | [
"Dengwang Tang",
"Rahul Jain",
"Ashutosh Nayyar",
"Pierluigi Nuzzo"
] | 2023-10-16 06:41:13 | http://arxiv.org/abs/2310.10107v1 | http://arxiv.org/pdf/2310.10107v1 | 2310.10107v1 |
Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning | Navigation in unfamiliar environments presents a major challenge for robots:
while mapping and planning techniques can be used to build up a representation
of the world, quickly discovering a path to a desired goal in unfamiliar
settings with such methods often requires lengthy mapping and exploration.
Humans can rapidly navigate new environments, particularly indoor environments
that are laid out logically, by leveraging semantics -- e.g., a kitchen often
adjoins a living room, an exit sign indicates the way out, and so forth.
Language models can provide robots with such knowledge, but directly using
language models to instruct a robot how to reach some destination can also be
impractical: while language models might produce a narrative about how to reach
some goal, because they are not grounded in real-world observations, this
narrative might be arbitrarily wrong. Therefore, in this paper we study how the
``semantic guesswork'' produced by language models can be utilized as a guiding
heuristic for planning algorithms. Our method, Language Frontier Guide (LFG),
uses the language model to bias exploration of novel real-world environments by
incorporating the semantic knowledge stored in language models as a search
heuristic for planning with either topological or metric maps. We evaluate LFG
in challenging real-world environments and simulated benchmarks, outperforming
uninformed exploration and other ways of using language models. | [
"Dhruv Shah",
"Michael Equi",
"Blazej Osinski",
"Fei Xia",
"Brian Ichter",
"Sergey Levine"
] | 2023-10-16 06:21:06 | http://arxiv.org/abs/2310.10103v1 | http://arxiv.org/pdf/2310.10103v1 | 2310.10103v1 |
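The following is a hypothetical sketch of the scoring idea described in the abstract above, where `llm_semantic_score` is a made-up stand-in for a real LLM query and `alpha` is an assumed weighting: the language model only biases a distance-based frontier heuristic rather than dictating the plan.

```python
def llm_semantic_score(frontier_labels, goal: str) -> float:
    """Placeholder: a real system would ask an LLM how promising these observed
    objects are for reaching `goal`; here we fake it with a fixed prior."""
    prior = {"kitchen": 0.9, "living room": 0.6, "garage": 0.1}
    return max(prior.get(lbl, 0.3) for lbl in frontier_labels)

def select_frontier(frontiers, goal, alpha: float = 5.0):
    """Score = -distance + alpha * semantic score; semantics bias, not override, geometry."""
    def score(f):
        return -f["distance"] + alpha * llm_semantic_score(f["labels"], goal)
    return max(frontiers, key=score)

frontiers = [
    {"id": 0, "distance": 4.0, "labels": ["garage"]},
    {"id": 1, "distance": 6.0, "labels": ["kitchen"]},
]
print(select_frontier(frontiers, "find the fridge")["id"])  # picks the kitchen frontier
```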
KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training | This paper proposes a method for hiding the least-important samples during
the training of deep neural networks to increase efficiency, i.e., to reduce
the cost of training. Using information about the loss and prediction
confidence during training, we adaptively find samples to exclude in a given
epoch based on their contribution to the overall learning process, without
significantly degrading accuracy. We explore the convergence properties when
accounting for the reduction in the number of SGD updates. Empirical results on
various large-scale datasets and models used directly in image classification
and segmentation show that while the with-replacement importance sampling
algorithm performs poorly on large datasets, our method can reduce total
training time by up to 22%, impacting accuracy by only 0.4% compared to the
baseline. Code available at https://github.com/TruongThaoNguyen/kakurenbo | [
"Truong Thao Nguyen",
"Balazs Gerofi",
"Edgar Josafat Martinez-Noriega",
"François Trahay",
"Mohamed Wahib"
] | 2023-10-16 06:19:29 | http://arxiv.org/abs/2310.10102v1 | http://arxiv.org/pdf/2310.10102v1 | 2310.10102v1 |
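As a rough illustration of the sample-hiding idea above, the sketch below hides a fixed fraction of the lowest-loss samples per epoch; the paper's actual criterion is adaptive and also uses prediction confidence, so treat the fixed `hide_fraction` and loss-only rule as assumptions.

```python
import numpy as np

def select_visible_indices(per_sample_loss: np.ndarray, hide_fraction: float = 0.2):
    """Keep the hardest (1 - hide_fraction) of samples for the next epoch."""
    n_hide = int(len(per_sample_loss) * hide_fraction)
    order = np.argsort(per_sample_loss)  # ascending: easiest (lowest-loss) first
    return np.sort(order[n_hide:])       # drop the n_hide easiest samples

losses = np.array([0.05, 2.3, 0.9, 0.01, 1.4])
print(select_visible_indices(losses, hide_fraction=0.4))  # -> [1 2 4]
```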
Reusing Pretrained Models by Multi-linear Operators for Efficient Training | Training large models from scratch usually costs a substantial amount of
resources. Towards this problem, recent studies such as bert2BERT and LiGO have
reused small pretrained models to initialize a large model (termed the ``target
model''), leading to a considerable acceleration in training. Despite the
successes of these previous studies, they grew pretrained models by mapping
partial weights only, ignoring potential correlations across the entire model.
As we show in this paper, there are inter- and intra-interactions among the
weights of both the pretrained and the target models. As a result, the partial
mapping may not capture the complete information and lead to inadequate growth.
In this paper, we propose a method that linearly correlates each weight of the
target model to all the weights of the pretrained model to further enhance
acceleration ability. We utilize multi-linear operators to reduce computational
and spatial complexity, enabling acceptable resource requirements. Experiments
demonstrate that our method can save 76\% computational costs on DeiT-base
transferred from DeiT-small, which outperforms bert2BERT by +12.0\% and LiGO by
+20.7\%, respectively. | [
"Yu Pan",
"Ye Yuan",
"Yichun Yin",
"Zenglin Xu",
"Lifeng Shang",
"Xin Jiang",
"Qun Liu"
] | 2023-10-16 06:16:47 | http://arxiv.org/abs/2310.10699v1 | http://arxiv.org/pdf/2310.10699v1 | 2310.10699v1 |
PAC Learning Linear Thresholds from Label Proportions | Learning from label proportions (LLP) is a generalization of supervised
learning in which the training data is available as sets or bags of
feature-vectors (instances) along with the average instance-label of each bag.
The goal is to train a good instance classifier. While most previous works on
LLP have focused on training models on such training data, computational
learnability of LLP was only recently explored by [Saket'21, Saket'22] who
showed worst case intractability of properly learning linear threshold
functions (LTFs) from label proportions. However, their work did not rule out
efficient algorithms for this problem on natural distributions.
In this work we show that it is indeed possible to efficiently learn LTFs
using LTFs when given access to random bags of some label proportion in which
feature-vectors are, conditioned on their labels, independently sampled from a
Gaussian distribution $N(\mathbf{\mu}, \mathbf{\Sigma})$. Our work shows that a
certain matrix -- formed using covariances of the differences of
feature-vectors sampled from the bags with and without replacement --
necessarily has its principal component, after a transformation, in the
direction of the normal vector of the LTF. Our algorithm estimates the means
and covariance matrices using subgaussian concentration bounds which we show
can be applied to efficiently sample bags for approximating the normal
direction. Using this in conjunction with novel generalization error bounds in
the bag setting, we show that a low error hypothesis LTF can be identified. For
some special cases of the $N(\mathbf{0}, \mathbf{I})$ distribution we provide a
simpler mean estimation based algorithm. We include an experimental evaluation
of our learning algorithms along with a comparison with those of [Saket'21,
Saket'22] and random LTFs, demonstrating the effectiveness of our techniques. | [
"Anand Brahmbhatt",
"Rishi Saket",
"Aravindan Raghuveer"
] | 2023-10-16 05:59:34 | http://arxiv.org/abs/2310.10098v1 | http://arxiv.org/pdf/2310.10098v1 | 2310.10098v1 |
LLP-Bench: A Large Scale Tabular Benchmark for Learning from Label Proportions | In the task of Learning from Label Proportions (LLP), a model is trained on
groups (a.k.a bags) of instances and their corresponding label proportions to
predict labels for individual instances. LLP has been applied predominantly on
two types of datasets - image and tabular. In image LLP, bags of fixed size are
created by randomly sampling instances from an underlying dataset. Bags created
via this methodology are called random bags. Experimentation on Image LLP has
been mostly on random bags on CIFAR-* and MNIST datasets. Despite being a very
crucial task in privacy-sensitive applications, tabular LLP does not yet have an
open, large-scale LLP benchmark. One of the unique properties of tabular LLP is
the ability to create feature bags where all the instances in a bag have the
same value for a given feature. It has been shown in prior research that
feature bags are very common in practical, real-world applications [Chen et al.
'23, Saket et al. '22].
In this paper, we address the lack of an open, large-scale tabular benchmark.
First we propose LLP-Bench, a suite of 56 LLP datasets (52 feature bag and 4
random bag datasets) created from the Criteo CTR prediction dataset consisting
of 45 million instances. The 56 datasets represent diverse ways in which bags
can be constructed from underlying tabular data. To the best of our knowledge,
LLP-Bench is the first large-scale tabular LLP benchmark with extensive
diversity in constituent datasets. Second, we propose four metrics that
characterize and quantify the hardness of a LLP dataset. Using these four
metrics we present deep analysis of the 56 datasets in LLP-Bench. Finally we
present the performance of 9 SOTA and popular tabular LLP techniques on all the
56 datasets. To the best of our knowledge, our study consisting of more than
2500 experiments is the most extensive study of popular tabular LLP techniques
in literature. | [
"Anand Brahmbhatt",
"Mohith Pokala",
"Rishi Saket",
"Aravindan Raghuveer"
] | 2023-10-16 05:58:25 | http://arxiv.org/abs/2310.10096v1 | http://arxiv.org/pdf/2310.10096v1 | 2310.10096v1 |
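A minimal sketch of the feature-bag construction mentioned above: group a tabular dataset by one feature so that every instance in a bag shares its value, then retain only the per-bag label proportions. The column names are invented for illustration.

```python
import pandas as pd

df = pd.DataFrame({
    "ad_category": ["news", "news", "sports", "sports", "sports"],
    "user_age":    [23, 31, 45, 22, 37],
    "click":       [1, 0, 0, 1, 1],
})

# Feature bags keyed on `ad_category`: individual labels are discarded,
# only the bag size and average label (proportion) are kept.
bags = (
    df.groupby("ad_category")
      .agg(bag_size=("click", "size"), label_proportion=("click", "mean"))
      .reset_index()
)
print(bags)
```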
A Multi-Scale Spatial Transformer U-Net for Simultaneously Automatic Reorientation and Segmentation of 3D Nuclear Cardiac Images | Accurate reorientation and segmentation of the left ventricular (LV) is
essential for the quantitative analysis of myocardial perfusion imaging (MPI),
in which one critical step is to reorient the reconstructed transaxial nuclear
cardiac images into standard short-axis slices for subsequent image processing.
Small-scale LV myocardium (LV-MY) region detection and the diverse cardiac
structures of individual patients pose challenges to LV segmentation operation.
To mitigate these issues, we propose an end-to-end model, named multi-scale
spatial transformer UNet (MS-ST-UNet), that involves the multi-scale spatial
transformer network (MSSTN) and multi-scale UNet (MSUNet) modules to perform
simultaneous reorientation and segmentation of LV region from nuclear cardiac
images. The proposed method is trained and tested using two different nuclear
cardiac image modalities: 13N-ammonia PET and 99mTc-sestamibi SPECT. We use a
multi-scale strategy to generate and extract image features with different
scales. Our experimental results demonstrate that the proposed method
significantly improves the reorientation and segmentation performance. This
joint learning framework promotes mutual enhancement between reorientation and
segmentation tasks, leading to cutting-edge performance and an efficient image
processing workflow. The proposed end-to-end deep network has the potential to
reduce the burden of manual delineation for cardiac images, thereby providing
multimodal quantitative analysis assistance for physicists. | [
"Yangfan Ni",
"Duo Zhang",
"Gege Ma",
"Lijun Lu",
"Zhongke Huang",
"Wentao Zhu"
] | 2023-10-16 05:56:53 | http://arxiv.org/abs/2310.10095v1 | http://arxiv.org/pdf/2310.10095v1 | 2310.10095v1 |
Label Differential Privacy via Aggregation | In many real-world applications, in particular due to recent developments in
the privacy landscape, training data may be aggregated to preserve the privacy
of sensitive training labels. In the learning from label proportions (LLP)
framework, the dataset is partitioned into bags of feature-vectors which are
available only with the sum of the labels per bag. A further restriction, which
we call learning from bag aggregates (LBA) is where instead of individual
feature-vectors, only the (possibly weighted) sum of the feature-vectors per
bag is available. We study whether such aggregation techniques can provide
privacy guarantees under the notion of label differential privacy (label-DP)
previously studied in, e.g., [Chaudhuri-Hsu'11, Ghazi et al.'21, Esfandiari
et al.'22].
It is easily seen that naive LBA and LLP do not provide label-DP. Our main
result however, shows that weighted LBA using iid Gaussian weights with $m$
randomly sampled disjoint $k$-sized bags is in fact $(\varepsilon,
\delta)$-label-DP for any $\varepsilon > 0$ with $\delta \approx
\exp(-\Omega(\sqrt{k}))$ assuming a lower bound on the linear-mse regression
loss. Further, this preserves the optimum over linear mse-regressors of bounded
norm to within $(1 \pm o(1))$-factor w.p. $\approx 1 - \exp(-\Omega(m))$. We
emphasize that no additive label noise is required.
The analogous weighted-LLP does not however admit label-DP. Nevertheless, we
show that if additive $N(0, 1)$ noise can be added to any constant fraction of
the instance labels, then the noisy weighted-LLP admits similar label-DP
guarantees without assumptions on the dataset, while preserving the utility of
Lipschitz-bounded neural mse-regression tasks.
Our work is the first to demonstrate that label-DP can be achieved by
randomly weighted aggregation for regression tasks, using no or little additive
noise. | [
"Anand Brahmbhatt",
"Rishi Saket",
"Shreyas Havaldar",
"Anshul Nasery",
"Aravindan Raghuveer"
] | 2023-10-16 05:54:30 | http://arxiv.org/abs/2310.10092v2 | http://arxiv.org/pdf/2310.10092v2 | 2310.10092v2 |
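The sketch below illustrates, under our own simplifying assumptions, the weighted-LBA mechanism described above: disjoint bags of size k release iid Gaussian-weighted sums of their feature-vectors and labels, and the weighted aggregates still support approximate least-squares recovery of a linear regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_bag_aggregate(X: np.ndarray, y: np.ndarray, k: int):
    """Partition rows into disjoint bags of size k; release iid N(0,1)-weighted sums."""
    n = (len(y) // k) * k
    perm = rng.permutation(len(y))[:n]
    agg_X, agg_y = [], []
    for bag in perm.reshape(-1, k):
        w = rng.standard_normal(k)   # iid Gaussian weights per bag
        agg_X.append(w @ X[bag])     # weighted sum of feature-vectors
        agg_y.append(w @ y[bag])     # matching weighted label sum
    return np.array(agg_X), np.array(agg_y)

X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)
AX, Ay = weighted_bag_aggregate(X, y, k=10)
beta, *_ = np.linalg.lstsq(AX, Ay, rcond=None)  # regressor approximately recoverable
print(np.round(beta, 2))
```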
Orthogonal Uncertainty Representation of Data Manifold for Robust Long-Tailed Learning | In scenarios with long-tailed distributions, the model's ability to identify
tail classes is limited due to the under-representation of tail samples. Class
rebalancing, information augmentation, and other techniques have been proposed
to facilitate models to learn the potential distribution of tail classes. The
disadvantage is that these methods generally pursue models with balanced class
accuracy on the data manifold, while ignoring the ability of the model to
resist interference. By constructing a noisy data manifold, we found that the
robustness of models trained on unbalanced data exhibits a long-tail phenomenon.
That is, even if the class accuracy is balanced on the data domain, the model
still exhibits bias on the noisy data manifold. However, existing methods cannot
effectively mitigate the above phenomenon, which makes the model vulnerable in
long-tailed scenarios. In this work, we propose an Orthogonal Uncertainty
Representation (OUR) of feature embedding and an end-to-end training strategy
to improve the long-tail phenomenon of model robustness. As a general
enhancement tool, OUR has excellent compatibility with other methods and does
not require additional data generation, ensuring fast and efficient training.
Comprehensive evaluations on long-tailed datasets show that our method
significantly mitigates the long-tail phenomenon in robustness, bringing
consistent performance gains to other long-tailed learning methods. | [
"Yanbiao Ma",
"Licheng Jiao",
"Fang Liu",
"Shuyuan Yang",
"Xu Liu",
"Lingling Li"
] | 2023-10-16 05:50:34 | http://arxiv.org/abs/2310.10090v1 | http://arxiv.org/pdf/2310.10090v1 | 2310.10090v1 |
Over-the-Air Federated Learning and Optimization | Federated learning (FL), as an emerging distributed machine learning
paradigm, allows a mass of edge devices to collaboratively train a global model
while preserving privacy. In this tutorial, we focus on FL via over-the-air
computation (AirComp), which is proposed to reduce the communication overhead
for FL over wireless networks at the cost of compromising the learning
performance due to model aggregation error arising from channel fading and
noise. We first provide a comprehensive study on the convergence of
AirComp-based FedAvg (AirFedAvg) algorithms under both strongly convex and
non-convex settings with constant and diminishing learning rates in the
presence of data heterogeneity. Through convergence and asymptotic analysis, we
characterize the impact of aggregation error on the convergence bound and
provide insights for system design with convergence guarantees. Then we derive
convergence rates for AirFedAvg algorithms for strongly convex and non-convex
objectives. For different types of local updates that can be transmitted by
edge devices (i.e., local model, gradient, and model difference), we reveal
that transmitting the local model in AirFedAvg may cause divergence in the training
procedure. In addition, we consider more practical signal processing schemes to
improve the communication efficiency and further extend the convergence
analysis to different forms of model aggregation error caused by these signal
processing schemes. Extensive simulation results under different settings of
objective functions, transmitted local information, and communication schemes
verify the theoretical conclusions. | [
"Jingyang Zhu",
"Yuanming Shi",
"Yong Zhou",
"Chunxiao Jiang",
"Wei Chen",
"Khaled B. Letaief"
] | 2023-10-16 05:49:28 | http://arxiv.org/abs/2310.10089v1 | http://arxiv.org/pdf/2310.10089v1 | 2310.10089v1 |
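As a toy illustration of the aggregation error analyzed above, the sketch below superposes device updates under residual fading and receiver noise; the fading and SNR models are deliberately simplified assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)

def aircomp_aggregate(local_updates, snr_db: float = 20.0):
    """Server estimate of the average update from an over-the-air superposition."""
    d = local_updates[0].shape[0]
    rx = np.zeros(d)
    for u in local_updates:
        h = 1.0 + 0.1 * rng.standard_normal()  # residual fading after power control
        rx += h * u                             # signals superpose in the air
    noise_std = np.linalg.norm(rx) / np.sqrt(d) * 10 ** (-snr_db / 20)
    rx += noise_std * rng.standard_normal(d)    # additive receiver noise
    return rx / len(local_updates)

updates = [rng.standard_normal(8) for _ in range(5)]
ideal = np.mean(updates, axis=0)
print(np.linalg.norm(aircomp_aggregate(updates) - ideal))  # aggregation error
```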
PUCA: Patch-Unshuffle and Channel Attention for Enhanced Self-Supervised Image Denoising | Although supervised image denoising networks have shown remarkable
performance on synthesized noisy images, they often fail in practice due to the
difference between real and synthesized noise. Since clean-noisy image pairs
from the real world are extremely costly to gather, self-supervised learning,
which utilizes noisy input itself as a target, has been studied. To prevent a
self-supervised denoising model from learning identical mapping, each output
pixel should not be influenced by its corresponding input pixel; this
requirement is known as J-invariance. Blind-spot networks (BSNs) have been a
prevalent choice to ensure J-invariance in self-supervised image denoising.
However, constructing variations of BSNs by injecting additional operations
such as downsampling can expose blinded information, thereby violating
J-invariance. Consequently, only convolutions designed specifically for BSNs
have been allowed, limiting architectural flexibility. To overcome this
limitation, we propose PUCA, a novel J-invariant U-Net architecture, for
self-supervised denoising. PUCA leverages patch-unshuffle/shuffle to
dramatically expand receptive fields while maintaining J-invariance and dilated
attention blocks (DABs) for global context incorporation. Experimental results
demonstrate that PUCA achieves state-of-the-art performance, outperforming
existing methods in self-supervised image denoising. | [
"Hyemi Jang",
"Junsung Park",
"Dahuin Jung",
"Jaihyun Lew",
"Ho Bae",
"Sungroh Yoon"
] | 2023-10-16 05:42:49 | http://arxiv.org/abs/2310.10088v1 | http://arxiv.org/pdf/2310.10088v1 | 2310.10088v1 |
A simple uniformly optimal method without line search for convex optimization | Line search (or backtracking) procedures have been widely employed in
first-order methods for solving convex optimization problems, especially those
with unknown problem parameters (e.g., Lipschitz constant). In this paper, we
show that line search is superfluous in attaining the optimal rate of
convergence for solving a convex optimization problem whose parameters are not
given a priori. In particular, we present a novel accelerated gradient descent
type algorithm called auto-conditioned fast gradient method (AC-FGM) that can
achieve an optimal $\mathcal{O}(1/k^2)$ rate of convergence for smooth convex
optimization without requiring the estimate of a global Lipschitz constant or
the employment of line search procedures. We then extend AC-FGM to solve convex
optimization problems with H\"{o}lder continuous gradients and show that it
automatically achieves the optimal rates of convergence uniformly for all
problem classes with the desired accuracy of the solution as the only input.
Finally, we report some encouraging numerical results that demonstrate the
advantages of AC-FGM over the previously developed parameter-free methods for
convex optimization. | [
"Tianjiao Li",
"Guanghui Lan"
] | 2023-10-16 05:26:03 | http://arxiv.org/abs/2310.10082v1 | http://arxiv.org/pdf/2310.10082v1 | 2310.10082v1 |
SoTTA: Robust Test-Time Adaptation on Noisy Data Streams | Test-time adaptation (TTA) aims to address distributional shifts between
training and testing data using only unlabeled test data streams for continual
model adaptation. However, most TTA methods assume benign test streams, while
test samples could be unexpectedly diverse in the wild. For instance, an unseen
object or noise could appear in autonomous driving. This leads to a new threat
to existing TTA algorithms; we found that prior TTA algorithms suffer from
those noisy test samples as they blindly adapt to incoming samples. To address
this problem, we present Screening-out Test-Time Adaptation (SoTTA), a novel
TTA algorithm that is robust to noisy samples. The key enabler of SoTTA is
two-fold: (i) input-wise robustness via high-confidence uniform-class sampling
that effectively filters out the impact of noisy samples and (ii)
parameter-wise robustness via entropy-sharpness minimization that improves the
robustness of model parameters against large gradients from noisy samples. Our
evaluation with standard TTA benchmarks with various noisy scenarios shows that
our method outperforms state-of-the-art TTA methods under the presence of noisy
samples and achieves comparable accuracy to those methods without noisy
samples. The source code is available at https://github.com/taeckyung/SoTTA . | [
"Taesik Gong",
"Yewon Kim",
"Taeckyung Lee",
"Sorn Chottananurak",
"Sung-Ju Lee"
] | 2023-10-16 05:15:35 | http://arxiv.org/abs/2310.10074v1 | http://arxiv.org/pdf/2310.10074v1 | 2310.10074v1 |
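Below is a simplified sketch of the input-wise robustness component described above: retain only high-confidence samples, balanced across predicted classes, before adapting. The threshold and per-class budget are illustrative assumptions.

```python
import numpy as np

def select_for_adaptation(probs: np.ndarray, conf_thresh: float = 0.9,
                          per_class: int = 2):
    """Return indices of confident samples, at most `per_class` per predicted class."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    keep = []
    for c in np.unique(pred):
        idx = np.where((pred == c) & (conf >= conf_thresh))[0]
        keep.extend(idx[np.argsort(-conf[idx])][:per_class])  # most confident first
    return np.sort(np.array(keep, dtype=int))

probs = np.array([[0.97, 0.03], [0.55, 0.45], [0.08, 0.92], [0.99, 0.01]])
print(select_for_adaptation(probs))  # the ambiguous sample at index 1 is filtered out
```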
Learning Graph Filters for Spectral GNNs via Newton Interpolation | Spectral Graph Neural Networks (GNNs) are gaining attention because they can
surpass the limitations of message-passing GNNs by learning spectral filters
that capture essential frequency information in graph data through task
supervision. However, previous research suggests that the choice of filter
frequency is tied to the graph's homophily level, a connection that has not been
thoroughly explored in existing spectral GNNs. To address this gap, the study
conducts both theoretical and empirical analyses, revealing that low-frequency
filters have a positive correlation with homophily, while high-frequency
filters have a negative correlation. This leads to the introduction of a
shape-aware regularization technique applied to a Newton Interpolation-based
spectral filter, enabling the customization of polynomial spectral filters that
align with desired homophily levels. Extensive experiments demonstrate that the
resulting model, NewtonNet, successfully achieves the desired filter shapes and exhibits superior
performance on both homophilous and heterophilous datasets. | [
"Junjie Xu",
"Enyan Dai",
"Dongsheng Luo",
"Xiang Zhang",
"Suhang Wang"
] | 2023-10-16 04:57:30 | http://arxiv.org/abs/2310.10064v1 | http://arxiv.org/pdf/2310.10064v1 | 2310.10064v1 |
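To illustrate the Newton-interpolation idea above, the sketch below (ours, not the paper's code) fits a polynomial filter through chosen (frequency, response) pairs via divided differences and applies it to the normalized Laplacian without an eigendecomposition.

```python
import numpy as np

def divided_differences(x, y):
    """Newton coefficients c_0..c_k of the interpolating polynomial."""
    x, coef = np.asarray(x, float), np.array(y, dtype=float)
    for j in range(1, len(x)):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def apply_newton_filter(lap, signal, x, y):
    """Compute p(L) @ signal with p in Newton form, via repeated matvecs."""
    coef = divided_differences(x, y)
    out = coef[0] * signal
    basis = signal.copy()
    for j in range(1, len(coef)):
        basis = lap @ basis - x[j - 1] * basis  # multiply by (L - x_{j-1} I)
        out = out + coef[j] * basis
    return out

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
d = np.diag(1 / np.sqrt(A.sum(1)))
L = np.eye(3) - d @ A @ d
# Low-pass target on [0, 2]: response 1 at frequency 0, decaying at higher frequencies.
print(apply_newton_filter(L, np.array([1.0, 0.0, 0.0]), x=[0.0, 1.0, 2.0], y=[1.0, 0.5, 0.0]))
```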
Data Augmentation for Time-Series Classification: An Extensive Empirical Study and Comprehensive Survey | Data Augmentation (DA) has emerged as an indispensable strategy in Time
Series Classification (TSC), primarily due to its capacity to amplify training
samples, thereby bolstering model robustness, diversifying datasets, and
curtailing overfitting. However, the current landscape of DA in TSC is plagued
with fragmented literature reviews, nebulous methodological taxonomies,
inadequate evaluative measures, and a dearth of accessible, user-oriented
tools. In light of these challenges, this study embarks on an exhaustive
dissection of DA methodologies within the TSC realm. Our initial approach
involved an extensive literature review spanning a decade, revealing that
contemporary surveys scarcely capture the breadth of advancements in DA for
TSC, prompting us to meticulously analyze over 100 scholarly articles to
distill more than 60 unique DA techniques. This rigorous analysis precipitated
the formulation of a novel taxonomy, purpose-built for the intricacies of DA in
TSC, categorizing techniques into five principal echelons:
Transformation-Based, Pattern-Based, Generative, Decomposition-Based, and
Automated Data Augmentation. Our taxonomy promises to serve as a robust
navigational aid for scholars, offering clarity and direction in method
selection. Addressing the conspicuous absence of holistic evaluations for
prevalent DA techniques, we executed an all-encompassing empirical assessment,
wherein upwards of 15 DA strategies were subjected to scrutiny across 8 UCR
time-series datasets, employing ResNet and a multi-faceted evaluation paradigm
encompassing Accuracy, Method Ranking, and Residual Analysis, yielding a
benchmark accuracy of 88.94 ± 11.83%. Our investigation underscored the
inconsistent efficacies of DA techniques, with... | [
"Zijun Gao",
"Lingbo Li",
"Tianhua Xu"
] | 2023-10-16 04:49:51 | http://arxiv.org/abs/2310.10060v2 | http://arxiv.org/pdf/2310.10060v2 | 2310.10060v2 |
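Two transformation-based techniques from the survey's taxonomy, in minimal form: jittering and magnitude scaling. The noise scales are common illustrative defaults, not values recommended by the survey.

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add iid Gaussian noise to every time step."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply each channel by a random factor drawn around 1."""
    factors = rng.normal(1.0, sigma, size=(1, x.shape[-1]))
    return x * factors

series = np.sin(np.linspace(0, 6.28, 128)).reshape(-1, 1)  # (time, channels)
augmented = scale(jitter(series))
print(series.shape, augmented.shape)
```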
Flow Dynamics Correction for Action Recognition | Various research studies indicate that action recognition performance highly
depends on the types of motions being extracted and how accurately human
actions are represented. In this paper, we investigate different optical flows,
and features extracted from these optical flows, that capture both short-term
and long-term motion dynamics. We perform power normalization on the magnitude
component of optical flow for flow dynamics correction to boost subtle or
dampen sudden motions. We show that existing action recognition models which
rely on optical flow are able to get performance boosted with our corrected
optical flow. To further improve performance, we integrate our corrected flow
dynamics into popular models through a simple hallucination step by selecting
only the best performing optical flow features, and we show that by
'translating' the CNN feature maps into these optical flow features with
different scales of motions leads to the new state-of-the-art performance on
several benchmarks including HMDB-51, YUP++, fine-grained action recognition on
MPII Cooking Activities, and large-scale Charades. | [
"Lei Wang",
"Piotr Koniusz"
] | 2023-10-16 04:49:06 | http://arxiv.org/abs/2310.10059v1 | http://arxiv.org/pdf/2310.10059v1 | 2310.10059v1 |
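A rough sketch of the magnitude correction described above: power-normalize each flow vector's magnitude (an exponent below one boosts subtle motions and dampens sudden ones) while preserving its direction. The exact normalization used in the paper may differ.

```python
import numpy as np

def power_normalize_flow(flow: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """flow: (H, W, 2) optical flow; returns direction-preserving corrected flow."""
    mag = np.linalg.norm(flow, axis=-1, keepdims=True)
    corrected_mag = np.power(mag, gamma)  # compress the dynamic range of magnitudes
    unit = flow / np.maximum(mag, 1e-8)   # unit direction vectors
    return unit * corrected_mag

flow = np.random.randn(4, 4, 2) * 5.0
out = power_normalize_flow(flow)
print(np.linalg.norm(flow, axis=-1).max(), np.linalg.norm(out, axis=-1).max())
```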
Latent Conservative Objective Models for Data-Driven Crystal Structure Prediction | In computational chemistry, crystal structure prediction (CSP) is an
optimization problem that involves discovering the lowest energy stable crystal
structure for a given chemical formula. This problem is challenging as it
requires discovering globally optimal designs with the lowest energies on
complex manifolds. One approach to tackle this problem involves building
simulators based on density functional theory (DFT) followed by running search
in simulation, but these simulators are painfully slow. In this paper, we
present and study an alternative, data-driven approach to crystal structure
prediction: instead of directly searching for the most stable structures in
simulation, we train a surrogate model of the crystal formation energy from a
database of existing crystal structures, and then optimize this model with
respect to the parameters of the crystal structure. This surrogate model is
trained to be conservative so as to prevent exploitation of its errors by the
optimizer. To handle optimization in the non-Euclidean space of crystal
structures, we first utilize a state-of-the-art graph diffusion auto-encoder
(CD-VAE) to convert a crystal structure into a vector-based search space and
then optimize a conservative surrogate model of the crystal energy, trained on
top of this vector representation. We show that our approach, dubbed LCOMs
(latent conservative objective models), performs comparably to the best current
approaches in terms of success rate of structure prediction, while also
drastically reducing computational cost. | [
"Han Qi",
"Xinyang Geng",
"Stefano Rando",
"Iku Ohama",
"Aviral Kumar",
"Sergey Levine"
] | 2023-10-16 04:35:44 | http://arxiv.org/abs/2310.10056v1 | http://arxiv.org/pdf/2310.10056v1 | 2310.10056v1 |
NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models | Structured pruning methods have proven effective in reducing the model size
and accelerating inference speed in various network architectures such as
Transformers. Despite the versatility of encoder-decoder models in numerous NLP
tasks, the structured pruning methods on such models are relatively less
explored compared to encoder-only models. In this study, we investigate the
behavior of the structured pruning of the encoder-decoder models in the
decoupled pruning perspective of the encoder and decoder component,
respectively. Our findings highlight two insights: (1) the number of decoder
layers is the dominant factor of inference speed, and (2) low sparsity in the
pruned encoder network enhances generation quality. Motivated by these
findings, we propose a simple and effective framework, NASH, that narrows the
encoder and shortens the decoder networks of encoder-decoder models. Extensive
experiments on diverse generation and inference tasks validate the
effectiveness of our method in both speedup and output quality. | [
"Jongwoo Ko",
"Seungjoon Park",
"Yujin Kim",
"Sumyeong Ahn",
"Du-Seong Chang",
"Euijai Ahn",
"Se-Young Yun"
] | 2023-10-16 04:27:36 | http://arxiv.org/abs/2310.10054v1 | http://arxiv.org/pdf/2310.10054v1 | 2310.10054v1 |
Robust Collaborative Filtering to Popularity Distribution Shift | In leading collaborative filtering (CF) models, representations of users and
items are prone to learn popularity bias in the training data as shortcuts. The
popularity shortcut tricks are good for in-distribution (ID) performance but
poorly generalized to out-of-distribution (OOD) data, i.e., when popularity
distribution of test data shifts w.r.t. the training one. To close the gap,
debiasing strategies try to assess the shortcut degrees and remove them from
the representations. However, there exist two deficiencies: (1) when measuring
the shortcut degrees, most strategies only use statistical metrics on a single
aspect (i.e., item frequency on item and user frequency on user aspect),
failing to accommodate the compositional degree of a user-item pair; (2) when
mitigating shortcuts, many strategies assume that the test distribution is
known in advance. This results in low-quality debiased representations. Worse
still, these strategies achieve OOD generalizability with a sacrifice on ID
performance. In this work, we present a simple yet effective debiasing
strategy, PopGo, which quantifies and reduces the interaction-wise popularity
shortcut without any assumptions on the test data. It first learns a shortcut
model, which yields a shortcut degree of a user-item pair based on their
popularity representations. Then, it trains the CF model by adjusting the
predictions with the interaction-wise shortcut degrees. By examining PopGo from
both causal and information-theoretic perspectives, we can justify why it encourages
the CF model to capture the critical popularity-agnostic features while leaving
the spurious popularity-relevant patterns out. We use PopGo to debias two
high-performing CF models (MF, LightGCN) on four benchmark datasets. On both ID
and OOD test sets, PopGo achieves significant gains over the state-of-the-art
debiasing strategies (e.g., DICE, MACR). | [
"An Zhang",
"Wenchang Ma",
"Jingnan Zheng",
"Xiang Wang",
"Tat-seng Chua"
] | 2023-10-16 04:20:52 | http://arxiv.org/abs/2310.10696v1 | http://arxiv.org/pdf/2310.10696v1 | 2310.10696v1 |
FATE-LLM: An Industrial-Grade Federated Learning Framework for Large Language Models | Large Language Models (LLMs), such as ChatGPT, LLaMA, GLM, and PaLM, have
exhibited remarkable performances across various tasks in recent years.
However, LLMs face two main challenges in real-world applications. One
challenge is that training LLMs consumes vast computing resources, preventing
LLMs from being adopted by small and medium-sized enterprises with limited
computing resources. Another is that training LLMs requires a large amount of
high-quality data, which are often scattered among enterprises. To address
these challenges, we propose FATE-LLM, an industrial-grade federated learning
framework for large language models. FATE-LLM (1) facilitates federated
learning for large language models (coined FedLLM); (2) promotes efficient
training of FedLLM using parameter-efficient fine-tuning methods; (3) protects
the intellectual property of LLMs; (4) preserves data privacy during training
and inference through privacy-preserving mechanisms. We release the code of
FATE-LLM at https://github.com/FederatedAI/FATE-LLM to facilitate the research
of FedLLM and enable a broad range of industrial applications. | [
"Tao Fan",
"Yan Kang",
"Guoqiang Ma",
"Weijing Chen",
"Wenbin Wei",
"Lixin Fan",
"Qiang Yang"
] | 2023-10-16 04:17:13 | http://arxiv.org/abs/2310.10049v1 | http://arxiv.org/pdf/2310.10049v1 | 2310.10049v1 |
Symmetrical SyncMap for Imbalanced General Chunking Problems | Recently, SyncMap pioneered an approach to learn complex structures from
sequences as well as adapt to any changes in underlying structures. This is
achieved by using only nonlinear dynamical equations inspired by neuron group
behaviors, i.e., without loss functions. Here we propose Symmetrical SyncMap
that goes beyond the original work to show how to create dynamical equations
and attractor-repeller points which are stable over the long run, even dealing
with imbalanced continual general chunking problems (CGCPs). The main idea is
to apply equal updates from negative and positive feedback loops by symmetrical
activation. We then introduce the concept of memory window to allow for more
positive updates. Our algorithm surpasses or ties other unsupervised
state-of-the-art baselines in all 12 imbalanced CGCPs with various
difficulties, including dynamically changing ones. To verify its performance in
real-world scenarios, we conduct experiments on several well-studied structure
learning problems. The proposed method surpasses substantially other methods in
3 out of 4 scenarios, suggesting that symmetrical activation plays a critical
role in uncovering topological structures and even hierarchies encoded in
temporal data. | [
"Heng Zhang",
"Danilo Vasconcellos Vargas"
] | 2023-10-16 04:03:36 | http://arxiv.org/abs/2310.10045v1 | http://arxiv.org/pdf/2310.10045v1 | 2310.10045v1 |
TpopT: Efficient Trainable Template Optimization on Low-Dimensional Manifolds | In scientific and engineering scenarios, a recurring task is the detection of
low-dimensional families of signals or patterns. A classic family of
approaches, exemplified by template matching, aims to cover the search space
with a dense template bank. While simple and highly interpretable, it suffers
from poor computational efficiency due to unfavorable scaling in the signal
space dimensionality. In this work, we study TpopT (TemPlate OPTimization) as
an alternative scalable framework for detecting low-dimensional families of
signals which maintains high interpretability. We provide a theoretical
analysis of the convergence of Riemannian gradient descent for TpopT, and prove
that it has superior dimension scaling compared to covering. We also propose a
practical TpopT framework for nonparametric signal sets, which incorporates
techniques of embedding and kernel interpolation, and is further configurable
into a trainable network architecture by unrolled optimization. The proposed
trainable TpopT exhibits significantly improved efficiency-accuracy tradeoffs
for gravitational wave detection, where matched filtering is currently a method
of choice. We further illustrate the general applicability of this approach
with experiments on handwritten digit data. | [
"Jingkai Yan",
"Shiyu Wang",
"Xinyu Rain Wei",
"Jimmy Wang",
"Zsuzsanna Márka",
"Szabolcs Márka",
"John Wright"
] | 2023-10-16 03:51:13 | http://arxiv.org/abs/2310.10039v1 | http://arxiv.org/pdf/2310.10039v1 | 2310.10039v1 |
Unraveling Fundamental Properties of Power System Resilience Curves using Unsupervised Machine Learning | The standard model of infrastructure resilience, the resilience triangle, has
been the primary way of characterizing and quantifying infrastructure
resilience. However, the theoretical model merely provides a one-size-fits-all
framework for all infrastructure systems. Most of the existing studies examine
the characteristics of infrastructure resilience curves based on analytical
models constructed upon simulated system performance. Limited empirical studies
hindered our ability to fully understand and predict resilience characteristics
in infrastructure systems. To address this gap, this study examined over 200
resilience curves related to power outages in three major extreme weather
events. Using unsupervised machine learning, we examined different curve
archetypes, as well as the fundamental properties of each resilience curve
archetype. The results show two primary archetypes for power system resilience
curves: triangular and trapezoidal. Triangular curves characterize
resilience behavior based on 1. critical functionality threshold, 2. critical
functionality recovery rate, and 3. recovery pivot point. Trapezoidal
archetypes explain resilience curves based on 1. duration of sustained function
loss and 2. constant recovery rate. The longer the duration of sustained
function loss, the slower the constant rate of recovery. The findings of this
study provide novel perspectives enabling better understanding and prediction
of resilience performance of power system infrastructures. | [
"Bo Li",
"Ali Mostafavi"
] | 2023-10-16 03:16:21 | http://arxiv.org/abs/2310.10030v1 | http://arxiv.org/pdf/2310.10030v1 | 2310.10030v1 |
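The analysis pipeline above can be sketched as follows on synthetic data: represent each outage as a fixed-length functionality curve and cluster with k-means, using k = 2 to mirror the triangular/trapezoidal finding. This illustrates the method class only, not the study's code or data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 50)

def triangular(depth):   # immediate drop, then steady recovery
    return 1 - depth * np.maximum(0, 1 - t) + 0.02 * rng.standard_normal(50)

def trapezoidal(depth):  # sustained function loss, then constant-rate recovery
    curve = np.where(t < 0.5, 1 - depth, 1 - depth + 2 * depth * (t - 0.5))
    return curve + 0.02 * rng.standard_normal(50)

curves = np.array([triangular(rng.uniform(0.3, 0.8)) for _ in range(30)] +
                  [trapezoidal(rng.uniform(0.3, 0.8)) for _ in range(30)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
print(labels[:30].mean(), labels[30:].mean())  # the two archetypes largely separate
```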
Data-Driven Score-Based Models for Generating Stable Structures with Adaptive Crystal Cells | The discovery of new functional and stable materials is a big challenge due
to its complexity. This work aims at the generation of new crystal structures
with desired properties, such as chemical stability and specified chemical
composition, by using machine learning generative models. Compared to the
generation of molecules, crystal structures pose new difficulties arising from
the periodic nature of the crystal and from the specific symmetry constraints
related to the space group. In this work, score-based probabilistic models
based on annealed Langevin dynamics, which have shown excellent performance in
various applications, are adapted to the task of crystal generation. The
novelty of the presented approach resides in the fact that the lattice of the
crystal cell is not fixed. During the training of the model, the lattice is
learned from the available data, whereas during the sampling of a new chemical
structure, two denoising processes are used in parallel to generate the lattice
alongside the generation of the atomic positions. A multigraph crystal
representation is introduced that respects symmetry constraints, yielding
computational advantages and a better quality of the sampled structures. We
show that our model is capable of generating new candidate structures in any
chosen chemical system and crystal group without any additional training. To
illustrate the functionality of the proposed method, a comparison of our model
to other recent generative models, based on descriptor-based metrics, is
provided. | [
"Arsen Sultanov",
"Jean-Claude Crivello",
"Tabea Rebafka",
"Nataliya Sokolovska"
] | 2023-10-16 02:53:24 | http://arxiv.org/abs/2310.10695v1 | http://arxiv.org/pdf/2310.10695v1 | 2310.10695v1 |
Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance | We propose BOSS, an approach that automatically learns to solve new
long-horizon, complex, and meaningful tasks by growing a learned skill library
with minimal supervision. Prior work in reinforcement learning requires expert
supervision, in the form of demonstrations or rich reward functions, to learn
long-horizon tasks. Instead, our approach BOSS (BOotStrapping your own Skills)
learns to accomplish new tasks by performing "skill bootstrapping," where an
agent with a set of primitive skills interacts with the environment to practice
new skills without receiving reward feedback for tasks outside of the initial
skill set. This bootstrapping phase is guided by large language models (LLMs)
that inform the agent of meaningful skills to chain together. Through this
process, BOSS builds a wide range of complex and useful behaviors from a basic
set of primitive skills. We demonstrate through experiments in realistic
household environments that agents trained with our LLM-guided bootstrapping
procedure outperform those trained with naive bootstrapping as well as prior
unsupervised skill acquisition methods on zero-shot execution of unseen,
long-horizon tasks in new environments. Website at clvrai.com/boss. | [
"Jesse Zhang",
"Jiahui Zhang",
"Karl Pertsch",
"Ziyi Liu",
"Xiang Ren",
"Minsuk Chang",
"Shao-Hua Sun",
"Joseph J. Lim"
] | 2023-10-16 02:43:47 | http://arxiv.org/abs/2310.10021v2 | http://arxiv.org/pdf/2310.10021v2 | 2310.10021v2 |
Riemannian Residual Neural Networks | Recent methods in geometric deep learning have introduced various neural
networks to operate over data that lie on Riemannian manifolds. Such networks
are often necessary to learn well over graphs with a hierarchical structure or
to learn over manifold-valued data encountered in the natural sciences. These
networks are often inspired by and directly generalize standard Euclidean
neural networks. However, extending Euclidean networks is difficult and has
only been done for a select few manifolds. In this work, we examine the
residual neural network (ResNet) and show how to extend this construction to
general Riemannian manifolds in a geometrically principled manner. Originally
introduced to help solve the vanishing gradient problem, ResNets have become
ubiquitous in machine learning due to their beneficial learning properties,
excellent empirical results, and easy-to-incorporate nature when building
varied neural networks. We find that our Riemannian ResNets mirror these
desirable properties: when compared to existing manifold neural networks
designed to learn over hyperbolic space and the manifold of symmetric positive
definite matrices, we outperform both kinds of networks in terms of relevant
testing metrics and training dynamics. | [
"Isay Katsman",
"Eric Ming Chen",
"Sidhanth Holalkere",
"Anna Asch",
"Aaron Lou",
"Ser-Nam Lim",
"Christopher De Sa"
] | 2023-10-16 02:12:32 | http://arxiv.org/abs/2310.10013v1 | http://arxiv.org/pdf/2310.10013v1 | 2310.10013v1 |
Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models? | Diffusion models for text-to-image (T2I) synthesis, such as Stable Diffusion
(SD), have recently demonstrated exceptional capabilities for generating
high-quality content. However, this progress has raised several concerns about
potential misuse, particularly in creating copyrighted, prohibited, and
restricted content, or NSFW (not safe for work) images. While efforts have been
made to mitigate such problems, either by implementing a safety filter at the
evaluation stage or by fine-tuning models to eliminate undesirable concepts or
styles, the effectiveness of these safety measures in dealing with a wide range
of prompts remains largely unexplored. In this work, we aim to investigate
these safety mechanisms by proposing one novel concept retrieval algorithm for
evaluation. We introduce Ring-A-Bell, a model-agnostic red-teaming tool for T2I
diffusion models, where the whole evaluation can be prepared in advance without
prior knowledge of the target model. Specifically, Ring-A-Bell first performs
concept extraction to obtain holistic representations for sensitive and
inappropriate concepts. Subsequently, by leveraging the extracted concept,
Ring-A-Bell automatically identifies problematic prompts for diffusion models
with the corresponding generation of inappropriate content, allowing the user
to assess the reliability of deployed safety mechanisms. Finally, we
empirically validate our method by testing online services such as Midjourney
and various methods of concept removal. Our results show that Ring-A-Bell, by
manipulating safe prompting benchmarks, can transform prompts that were
originally regarded as safe to evade existing safety mechanisms, thus revealing
the defects of the so-called safety mechanisms which could practically lead to
the generation of harmful content. | [
"Yu-Lin Tsai",
"Chia-Yi Hsu",
"Chulin Xie",
"Chih-Hsun Lin",
"Jia-You Chen",
"Bo Li",
"Pin-Yu Chen",
"Chia-Mu Yu",
"Chun-Ying Huang"
] | 2023-10-16 02:11:20 | http://arxiv.org/abs/2310.10012v1 | http://arxiv.org/pdf/2310.10012v1 | 2310.10012v1 |
Towards Unified and Effective Domain Generalization | We propose $\textbf{UniDG}$, a novel and $\textbf{Uni}$fied framework for
$\textbf{D}$omain $\textbf{G}$eneralization that is capable of significantly
enhancing the out-of-distribution generalization performance of foundation
models regardless of their architectures. The core idea of UniDG is to finetune
models during the inference stage, which saves the cost of iterative training.
Specifically, we encourage models to learn the distribution of test data in an
unsupervised manner and impose a penalty on the update step of the model
parameters. The penalty term can effectively reduce the catastrophic forgetting
issue as we would like to maximally preserve the valuable knowledge in the
original model. Empirically, across 12 visual backbones, including CNN-, MLP-,
and Transformer-based models, ranging from 1.89M to 303M parameters, UniDG
shows an average accuracy improvement of +5.4% on DomainBed. These performance
results demonstrate the superiority and versatility of UniDG. The code is
publicly available at https://github.com/invictus717/UniDG | [
"Yiyuan Zhang",
"Kaixiong Gong",
"Xiaohan Ding",
"Kaipeng Zhang",
"Fangrui Lv",
"Kurt Keutzer",
"Xiangyu Yue"
] | 2023-10-16 02:05:03 | http://arxiv.org/abs/2310.10008v1 | http://arxiv.org/pdf/2310.10008v1 | 2310.10008v1 |
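A generic sketch of the recipe described above: minimize prediction entropy on an unlabeled test batch while penalizing parameter drift from the source model to limit forgetting. The learning rate and penalty weight are placeholder assumptions.

```python
import torch

def adapt_on_batch(model, x, source_params, lr=1e-4, lam=1.0):
    """One unsupervised adaptation step: entropy minimization + drift penalty."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    probs = model(x).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    drift = sum(((p - p0) ** 2).sum()
                for p, p0 in zip(model.parameters(), source_params))
    loss = entropy + lam * drift  # adapt to test data while staying near source weights
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = torch.nn.Linear(16, 4)
frozen = [p.detach().clone() for p in model.parameters()]
print(adapt_on_batch(model, torch.randn(32, 16), frozen))
```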
Implicit regularization via soft ascent-descent | As models grow larger and more complex, achieving better off-sample
generalization with minimal trial-and-error is critical to the reliability and
economy of machine learning workflows. As a proxy for the well-studied
heuristic of seeking "flat" local minima, gradient regularization is a natural
avenue, and first-order approximations such as Flooding and sharpness-aware
minimization (SAM) have received significant attention, but their performance
depends critically on hyperparameters (flood threshold and neighborhood radius,
respectively) that are non-trivial to specify in advance. In order to develop a
procedure which is more resilient to misspecified hyperparameters, with the
hard-threshold "ascent-descent" switching device used in Flooding as
motivation, we propose a softened, pointwise mechanism called SoftAD that
downweights points on the borderline, limits the effects of outliers, and
retains the ascent-descent effect. We contrast formal stationarity guarantees
with those for Flooding, and empirically demonstrate how SoftAD can realize
classification accuracy competitive with SAM and Flooding while maintaining a
much smaller loss generalization gap and model norm. Our empirical tests range
from simple binary classification on the plane to image classification using
neural networks with millions of parameters; the key trends are observed across
all datasets and models studied, and suggest a potential new approach to
implicit regularization. | [
"Matthew J. Holland",
"Kosuke Nakatani"
] | 2023-10-16 02:02:56 | http://arxiv.org/abs/2310.10006v1 | http://arxiv.org/pdf/2310.10006v1 | 2310.10006v1 |
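For context, the Flooding objective below is the published hard-threshold form; the soft variant is our own illustrative stand-in for the pointwise softening idea and should not be read as the paper's exact SoftAD objective.

```python
import torch

def flooding_loss(per_example_loss: torch.Tensor, b: float) -> torch.Tensor:
    """Hard ascent-descent: the gradient flips sign when the mean loss dips below b."""
    mean = per_example_loss.mean()
    return (mean - b).abs() + b

def soft_ascent_descent(per_example_loss: torch.Tensor, b: float,
                        temp: float = 1.0) -> torch.Tensor:
    """Illustrative pointwise softening: smooth weights in (-1, 1) around the
    threshold b replace the hard switch, limiting the pull of borderline points."""
    w = torch.tanh((per_example_loss - b) / temp)
    return (w * per_example_loss).mean()

losses = torch.tensor([0.1, 0.5, 2.0])
print(flooding_loss(losses, b=0.4).item(), soft_ascent_descent(losses, b=0.4).item())
```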
Conformal Contextual Robust Optimization | Data-driven approaches to predict-then-optimize decision-making problems seek
to mitigate the risk of uncertainty region misspecification in safety-critical
settings. Current approaches, however, suffer from considering overly
conservative uncertainty regions, often resulting in suboptimal decision-making.
To this end, we propose Conformal-Predict-Then-Optimize (CPO), a framework for
leveraging highly informative, nonconvex conformal prediction regions over
high-dimensional spaces based on conditional generative models, which have the
desired distribution-free coverage guarantees. Despite guaranteeing robustness,
such black-box optimization procedures alone inspire little confidence owing to
the lack of explanation of why a particular decision was found to be optimal.
We, therefore, augment CPO to additionally provide semantically meaningful
visual summaries of the uncertainty regions to give qualitative intuition for
the optimal decision. We highlight the CPO framework by demonstrating results
on a suite of simulation-based inference benchmark tasks and a vehicle routing
task based on probabilistic weather prediction. | [
"Yash Patel",
"Sahana Rayan",
"Ambuj Tewari"
] | 2023-10-16 01:58:27 | http://arxiv.org/abs/2310.10003v1 | http://arxiv.org/pdf/2310.10003v1 | 2310.10003v1 |
Outlier Detection Using Generative Models with Theoretical Performance Guarantees | This paper considers the problem of recovering signals modeled by generative
models from linear measurements contaminated with sparse outliers. We propose
an outlier detection approach for reconstructing the ground-truth signals
modeled by generative models under sparse outliers. We establish theoretical
recovery guarantees for reconstruction of signals using generative models in
the presence of outliers, giving lower bounds on the number of correctable
outliers. Our results are applicable to both linear generator neural networks
and the nonlinear generator neural networks with an arbitrary number of layers.
We propose an iterative alternating direction method of multipliers (ADMM)
algorithm for solving the outlier detection problem via $\ell_1$ norm
minimization, and a gradient descent algorithm for solving the outlier
detection problem via squared $\ell_1$ norm minimization. We conduct extensive
experiments using variational auto-encoder and deep convolutional generative
adversarial networks, and the experimental results show that the signals can be
successfully reconstructed under outliers using our approach. Our approach
outperforms the traditional Lasso and $\ell_2$ minimization approach. | [
"Jirong Yi",
"Jingchao Gao",
"Tianming Wang",
"Xiaodong Wu",
"Weiyu Xu"
] | 2023-10-16 01:25:34 | http://arxiv.org/abs/2310.09999v1 | http://arxiv.org/pdf/2310.09999v1 | 2310.09999v1 |
Forecaster: Towards Temporally Abstract Tree-Search Planning from Pixels | The ability to plan at many different levels of abstraction enables agents to
envision the long-term repercussions of their decisions and thus enables
sample-efficient learning. This becomes particularly beneficial in complex
environments with high-dimensional state spaces such as pixels, where the goal
is distant and the reward sparse. We introduce Forecaster, a deep hierarchical
reinforcement learning approach which plans over high-level goals leveraging a
temporally abstract world model. Forecaster learns an abstract model of its
environment by modelling the transition dynamics at an abstract level and
training a world model on such transitions. It then uses this world model to
choose optimal high-level goals through a tree-search planning procedure. It
additionally trains a low-level policy that learns to reach those goals. Our
method not only enables building world models with longer horizons, but also
planning with such models in downstream tasks. We empirically demonstrate
Forecaster's potential in both single-task learning and generalization to new
tasks in the AntMaze domain. | [
"Thomas Jiralerspong",
"Flemming Kondrup",
"Doina Precup",
"Khimya Khetarpal"
] | 2023-10-16 01:13:26 | http://arxiv.org/abs/2310.09997v1 | http://arxiv.org/pdf/2310.09997v1 | 2310.09997v1 |
Network Analysis of the iNaturalist Citizen Science Community | In recent years, citizen science has become an increasingly large part of the
scientific community. Its ability to crowdsource data and expertise from
thousands of citizen scientists makes it invaluable. Despite the field's
growing popularity, the interactions and structure of citizen science projects
are still poorly understood and under analyzed. We use the iNaturalist citizen
science platform as a case study to analyze the structure of citizen science
projects. We frame the data from iNaturalist as a bipartite network and use
visualizations as well as established network science techniques to gain
insights into the structure and interactions between users in citizen science
projects. Finally, we propose a novel benchmark for network science
research by using the iNaturalist data to create a network which has an unusual
structure relative to other common benchmark networks. We demonstrate using a
link prediction task that this network can be used to gain novel insights into
a variety of network science methods. | [
"Yu Lu Liu",
"Thomas Jiralerspong"
] | 2023-10-16 00:41:13 | http://arxiv.org/abs/2310.10693v1 | http://arxiv.org/pdf/2310.10693v1 | 2310.10693v1 |
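A toy sketch of the bipartite framing and link-prediction task mentioned above, with invented users, projects, and edges; the common-neighbour score is one standard baseline, not necessarily the one used in the paper.

```python
import networkx as nx

B = nx.Graph()
users = ["u1", "u2", "u3"]
projects = ["moth_survey", "city_birds"]
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(projects, bipartite=1)
B.add_edges_from([("u1", "moth_survey"), ("u2", "moth_survey"), ("u2", "city_birds")])

def common_neighbor_score(g, user, project):
    """Count length-2 paths from a user to a project via shared collaborators."""
    return sum(1 for mid in g[user] for peer in g[mid] if project in g[peer])

# Does u1 look likely to join city_birds? u1 shares moth_survey with u2, who is in it.
print(common_neighbor_score(B, "u1", "city_birds"))  # -> 1
```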
Applications of Machine Learning in Biopharmaceutical Process Development and Manufacturing: Current Trends, Challenges, and Opportunities | While machine learning (ML) has made significant contributions to the
biopharmaceutical field, its applications are still in the early stages in
terms of providing direct support for quality-by-design based development and
manufacturing of biopharmaceuticals, hindering the enormous potential for
bioprocess automation from development to manufacturing. However, the
adoption of ML-based models instead of conventional multivariate data analysis
methods is significantly increasing due to the accumulation of large-scale
production data. This trend is primarily driven by the real-time monitoring of
process variables and quality attributes of biopharmaceutical products through
the implementation of advanced process analytical technologies. Given the
complexity and multidimensionality of bioproduct design, bioprocess
development, and product manufacturing data, ML-based approaches are
increasingly being employed to achieve accurate, flexible, and high-performing
predictive models to address the problems of analytics, monitoring, and control
within the biopharma field. This paper aims to provide a comprehensive review
of the current applications of ML solutions in bioproduct design, monitoring,
control, and optimisation of upstream, downstream, and product formulation
processes. Finally, this paper thoroughly discusses the main challenges related
to the bioprocesses themselves, process data, and the use of machine learning
models in biopharmaceutical process development and manufacturing. Moreover, it
offers further insights into the adoption of innovative machine learning
methods and novel trends in the development of new digital biopharma solutions. | [
"Thanh Tung Khuat",
"Robert Bassett",
"Ellen Otte",
"Alistair Grevis-James",
"Bogdan Gabrys"
] | 2023-10-16 00:35:24 | http://arxiv.org/abs/2310.09991v1 | http://arxiv.org/pdf/2310.09991v1 | 2310.09991v1 |
Personalization of CTC-based End-to-End Speech Recognition Using Pronunciation-Driven Subword Tokenization | Recent advances in deep learning and automatic speech recognition have
improved the accuracy of end-to-end speech recognition systems, but recognition
of personal content such as contact names remains a challenge. In this work, we
describe our personalization solution for an end-to-end speech recognition
system based on connectionist temporal classification. Building on previous
work, we present a novel method for generating additional subword tokenizations
for personal entities from their pronunciations. We show that using this
technique in combination with two established techniques, contextual biasing
and wordpiece prior normalization, we are able to achieve personal named entity
accuracy on par with a competitive hybrid system. | [
"Zhihong Lei",
"Ernest Pusateri",
"Shiyi Han",
"Leo Liu",
"Mingbin Xu",
"Tim Ng",
"Ruchir Travadi",
"Youyuan Zhang",
"Mirko Hannemann",
"Man-Hung Siu",
"Zhen Huang"
] | 2023-10-16 00:06:32 | http://arxiv.org/abs/2310.09988v1 | http://arxiv.org/pdf/2310.09988v1 | 2310.09988v1 |
On Statistical Learning of Branch and Bound for Vehicle Routing Optimization | Recently, machine learning applied to the branch and bound algorithm has shown
promise in producing competent approximate solutions to NP-hard problems. In this
paper, we utilize and comprehensively compare the outcomes of three neural
networks--graph convolutional neural network (GCNN), GraphSAGE, and graph
attention network (GAT)--to solve the capacitated vehicle routing problem. We
train these neural networks to emulate the decision-making process of the
computationally expensive Strong Branching strategy. The neural networks are
trained on six instances with distinct topologies from the CVRPLIB and
evaluated on eight additional instances. Moreover, we reduced the problem of
finding the minimum number of vehicles required to serve a CVRP instance to a
bin-packing problem, which was addressed in a similar manner. Through rigorous experimentation, we
found that this approach can match or improve upon the performance of the
branch and bound algorithm with the Strong Branching strategy while requiring
significantly less computational time. The source code that corresponds to our
research findings and methodology is readily accessible and available for
reference at the following web address: https://isotlaboratory.github.io/ml4vrp | [
"Andrew Naguib",
"Waleed A. Yousef",
"Issa Traoré",
"Mohammad Mamun"
] | 2023-10-15 23:59:57 | http://arxiv.org/abs/2310.09986v2 | http://arxiv.org/pdf/2310.09986v2 | 2310.09986v2 |
Farzi Data: Autoregressive Data Distillation | We study data distillation for auto-regressive machine learning tasks, where
the input and output have a strict left-to-right causal structure. More
specifically, we propose Farzi, which summarizes an event sequence dataset into
a small number of synthetic sequences -- Farzi Data -- which are optimized to
maintain (if not improve) model performance compared to training on the full
dataset. Under the hood, Farzi conducts memory-efficient data distillation by
(i) deriving efficient reverse-mode differentiation of the Adam optimizer by
leveraging Hessian-Vector Products; and (ii) factorizing the high-dimensional
discrete event-space into a latent-space which provably promotes implicit
regularization. Empirically, for sequential recommendation and language
modeling tasks, we are able to achieve 98-120% of downstream full-data
performance when training state-of-the-art models on Farzi Data of size as
little as 0.1% of the original dataset. Notably, being able to train better
models with significantly less data sheds light on the design of future large
auto-regressive models, and opens up new opportunities to further scale up
model and data sizes. | [
"Noveen Sachdeva",
"Zexue He",
"Wang-Cheng Kang",
"Jianmo Ni",
"Derek Zhiyuan Cheng",
"Julian McAuley"
] | 2023-10-15 23:23:27 | http://arxiv.org/abs/2310.09983v1 | http://arxiv.org/pdf/2310.09983v1 | 2310.09983v1 |
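A minimal sketch of the general data-distillation loop that Farzi builds on. For brevity it differentiates through a single inner SGD step (the paper instead back-propagates through Adam using Hessian-vector products), and all shapes and hyperparameters are illustrative.

```python
# Toy data distillation: learn a small synthetic dataset whose one-step
# training result performs well on the real data.
import torch

torch.manual_seed(0)
X_real = torch.randn(256, 8)
y_real = X_real @ torch.randn(8, 1) + 0.1 * torch.randn(256, 1)

# Learnable synthetic ("distilled") dataset, far smaller than the real one.
X_syn = torch.randn(16, 8, requires_grad=True)
y_syn = torch.randn(16, 1, requires_grad=True)
outer_opt = torch.optim.Adam([X_syn, y_syn], lr=1e-2)

for step in range(500):
    w = torch.zeros(8, 1, requires_grad=True)         # fresh inner model
    inner_loss = ((X_syn @ w - y_syn) ** 2).mean()
    # create_graph=True keeps the inner update differentiable w.r.t. X_syn.
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_new = w - 0.1 * g                               # one inner SGD step
    outer_loss = ((X_real @ w_new - y_real) ** 2).mean()
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```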
Chinese Painting Style Transfer Using Deep Generative Models | Artistic style transfer aims to modify the style of the image while
preserving its content. Style transfer using deep learning models has been
widely studied since 2015, and most of the applications are focused on specific
artists like Van Gogh, Monet, Cezanne. There are few researches and
applications on traditional Chinese painting style transfer. In this paper, we
will study and leverage different state-of-the-art deep generative models for
Chinese painting style transfer and evaluate the performance both qualitatively
and quantitatively. In addition, we propose our own algorithm that combines
several style transfer models for our task. Specifically, we will transfer two
main types of traditional Chinese painting style, known as "Gong-bi" and
"Shui-mo" (to modern images like nature objects, portraits and landscapes. | [
"Weijian Ma",
"Yanyang Kong"
] | 2023-10-15 23:05:17 | http://arxiv.org/abs/2310.09978v2 | http://arxiv.org/pdf/2310.09978v2 | 2310.09978v2 |
AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents | We introduce AMAGO, an in-context Reinforcement Learning (RL) agent that uses
sequence models to tackle the challenges of generalization, long-term memory,
and meta-learning. Recent works have shown that off-policy learning can make
in-context RL with recurrent policies viable. Nonetheless, these approaches
require extensive tuning and limit scalability by creating key bottlenecks in
agents' memory capacity, planning horizon, and model size. AMAGO revisits and
redesigns the off-policy in-context approach to successfully train
long-sequence Transformers over entire rollouts in parallel with end-to-end RL.
Our agent is uniquely scalable and applicable to a wide range of problems. We
demonstrate its strong performance empirically in meta-RL and long-term memory
domains. AMAGO's focus on sparse rewards and off-policy data also allows
in-context learning to extend to goal-conditioned problems with challenging
exploration. When combined with a novel hindsight relabeling scheme, AMAGO can
solve a previously difficult category of open-world domains, where agents
complete many possible instructions in procedurally generated environments. We
evaluate our agent on three goal-conditioned domains and study how its
individual improvements connect to create a generalist policy. | [
"Jake Grigsby",
"Linxi Fan",
"Yuke Zhu"
] | 2023-10-15 22:20:39 | http://arxiv.org/abs/2310.09971v1 | http://arxiv.org/pdf/2310.09971v1 | 2310.09971v1 |
Specialized Deep Residual Policy Safe Reinforcement Learning-Based Controller for Complex and Continuous State-Action Spaces | Traditional controllers have limitations as they rely on prior knowledge
about the physics of the problem, require modeling of dynamics, and struggle to
adapt to abnormal situations. Deep reinforcement learning has the potential to
address these problems by learning optimal control policies through exploration
in an environment. For safety-critical environments, it is impractical to
explore randomly, and replacing conventional controllers with black-box models
is also undesirable. Moreover, exploration is expensive in continuous state and
action spaces unless the search space is constrained. To address these challenges, we
propose a specialized deep residual policy safe reinforcement learning with a
cycle of learning approach adapted for complex and continuous state-action
spaces. Residual policy learning allows learning a hybrid control architecture
where the reinforcement learning agent acts in synchronous collaboration with
the conventional controller. The cycle of learning initiates the policy through
the expert trajectory and guides the exploration around it. Further, the
specialization through the input-output hidden Markov model helps to optimize
policy that lies within the region of interest (such as abnormality), where the
reinforcement learning agent is required and is activated. The proposed
solution is validated on the Tennessee Eastman process control. | [
"Ammar N. Abbas",
"Georgios C. Chasparis",
"John D. Kelleher"
] | 2023-10-15 21:53:23 | http://arxiv.org/abs/2310.14788v1 | http://arxiv.org/pdf/2310.14788v1 | 2310.14788v1 |
Theoretical Evaluation of Asymmetric Shapley Values for Root-Cause Analysis | In this work, we examine Asymmetric Shapley Values (ASV), a variant of the
popular SHAP additive local explanation method. ASV proposes a way to improve
model explanations incorporating known causal relations between variables, and
is also considered as a way to test for unfair discrimination in model
predictions. Unexplored in previous literature, relaxing symmetry in Shapley
values can have counter-intuitive consequences for model explanation. To better
understand the method, we first show how local contributions correspond to
global contributions of variance reduction. Using variance, we demonstrate
multiple cases where ASV yields counter-intuitive attributions, arguably
producing incorrect results for root-cause analysis. Second, we identify
generalized additive models (GAM) as a restricted class for which ASV exhibits
desirable properties. We support our arguments by proving multiple theoretical
results about the method. Finally, we demonstrate the use of asymmetric
attributions on multiple real-world datasets, comparing the results with and
without restricted model families using gradient boosting and deep learning
models. | [
"Domokos M. Kelen",
"Mihály Petreczky",
"Péter Kersch",
"András A. Benczúr"
] | 2023-10-15 21:40:16 | http://arxiv.org/abs/2310.09961v1 | http://arxiv.org/pdf/2310.09961v1 | 2310.09961v1 |
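To make the symmetric/asymmetric contrast concrete, a toy sketch follows; the value function v and the causal ordering X0 -> X1 are hypothetical stand-ins for the paper's model-based value functions.

```python
# Symmetric Shapley values average marginal contributions over all feature
# permutations; ASV restricts to permutations consistent with a causal order.
from itertools import permutations

features = [0, 1]

def v(S):
    # Hypothetical value function (e.g., explained variance for subset S).
    # X1 is a noisy copy of X0: each alone explains a lot, together barely more.
    return {frozenset(): 0.0, frozenset({0}): 0.8,
            frozenset({1}): 0.7, frozenset({0, 1}): 0.85}[frozenset(S)]

def shapley(perms):
    phi = {f: 0.0 for f in features}
    for order in perms:
        S = set()
        for f in order:
            phi[f] += v(S | {f}) - v(S)
            S.add(f)
    return {f: val / len(perms) for f, val in phi.items()}

all_perms = list(permutations(features))
causal_perms = [p for p in all_perms if p.index(0) < p.index(1)]  # X0 first

print("symmetric :", shapley(all_perms))     # credit split between X0 and X1
print("asymmetric:", shapley(causal_perms))  # nearly all credit to the cause X0
```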
Seeking Next Layer Neurons' Attention for Error-Backpropagation-Like Training in a Multi-Agent Network Framework | Despite considerable theoretical progress in the training of neural networks
viewed as a multi-agent system of neurons, particularly concerning biological
plausibility and decentralized training, their applicability to real-world
problems remains limited due to scalability issues. In contrast,
error-backpropagation has demonstrated its effectiveness for training deep
networks in practice. In this study, we propose a local objective for neurons
that, when pursued by neurons individually, aligns them to exhibit similarities
to error-backpropagation in terms of efficiency and scalability during
training. For this purpose, we examine a neural network comprising
decentralized, self-interested neurons seeking to maximize their local
objective -- attention from subsequent layer neurons -- and identify the
optimal strategy for neurons. We also analyze the relationship between this
strategy and backpropagation, establishing conditions under which the derived
strategy is equivalent to error-backpropagation. Lastly, we demonstrate the
learning capacity of these multi-agent neural networks through experiments on
three datasets and showcase their superior performance relative to
error-backpropagation in a catastrophic forgetting benchmark. | [
"Arshia Soltani Moakhar",
"Mohammad Azizmalayeri",
"Hossein Mirzaei",
"Mohammad Taghi Manzuri",
"Mohammad Hossein Rohban"
] | 2023-10-15 21:07:09 | http://arxiv.org/abs/2310.09952v1 | http://arxiv.org/pdf/2310.09952v1 | 2310.09952v1 |
Chameleon: a Heterogeneous and Disaggregated Accelerator System for Retrieval-Augmented Language Models | A Retrieval-Augmented Language Model (RALM) augments a generative language
model by retrieving context-specific knowledge from an external database. This
strategy facilitates impressive text generation quality even with smaller
models, thus reducing orders of magnitude of computational demands. However,
RALMs introduce unique system design challenges due to (a) the diverse workload
characteristics between LM inference and retrieval and (b) the various system
requirements and bottlenecks for different RALM configurations such as model
sizes, database sizes, and retrieval frequencies. We propose Chameleon, a
heterogeneous accelerator system that integrates both LM and retrieval
accelerators in a disaggregated architecture. The heterogeneity ensures
efficient acceleration of both LM inference and retrieval, while the
accelerator disaggregation enables the system to independently scale both types
of accelerators to fulfill diverse RALM requirements. Our Chameleon prototype
implements retrieval accelerators on FPGAs and assigns LM inference to GPUs,
with a CPU server orchestrating these accelerators over the network. Compared
to CPU-based and CPU-GPU vector search systems, Chameleon achieves up to 23.72x
speedup and 26.2x energy efficiency. Evaluated on various RALMs, Chameleon
exhibits up to 2.16x reduction in latency and 3.18x speedup in throughput
compared to the hybrid CPU-GPU architecture. These promising results pave the
way for bringing accelerator heterogeneity and disaggregation into future RALM
systems. | [
"Wenqi Jiang",
"Marco Zeller",
"Roger Waleffe",
"Torsten Hoefler",
"Gustavo Alonso"
] | 2023-10-15 20:57:25 | http://arxiv.org/abs/2310.09949v1 | http://arxiv.org/pdf/2310.09949v1 | 2310.09949v1 |
UvA-MT's Participation in the WMT23 General Translation Shared Task | This paper describes the UvA-MT's submission to the WMT 2023 shared task on
general machine translation. We participate in the constrained track in two
directions: English <-> Hebrew. In this competition, we show that by using one
model to handle bidirectional tasks, as a minimal setting of Multilingual
Machine Translation (MMT), it is possible to achieve results comparable to
those of traditional bilingual translation in both directions. By including
effective strategies, like back-translation, re-parameterized embedding table,
and task-oriented fine-tuning, we obtained competitive final results in the
automatic evaluation for both English -> Hebrew and Hebrew -> English
directions. | [
"Di Wu",
"Shaomu Tan",
"David Stap",
"Ali Araabi",
"Christof Monz"
] | 2023-10-15 20:49:31 | http://arxiv.org/abs/2310.09946v1 | http://arxiv.org/pdf/2310.09946v1 | 2310.09946v1 |
Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers | Transformers have become a key architecture in speech processing, but our
understanding of how they build up representations of acoustic and linguistic
structure is limited. In this study, we address this gap by investigating how
measures of 'context-mixing' developed for text models can be adapted and
applied to models of spoken language. We identify a linguistic phenomenon that
is ideal for such a case study: homophony in French (e.g. livre vs livres),
where a speech recognition model has to attend to syntactic cues such as
determiners and pronouns in order to disambiguate spoken words with identical
pronunciations and transcribe them while respecting grammatical agreement. We
perform a series of controlled experiments and probing analyses on
Transformer-based speech models. Our findings reveal that representations in
encoder-only models effectively incorporate these cues to identify the correct
transcription, whereas encoders in encoder-decoder models mainly relegate the
task of capturing contextual dependencies to decoder modules. | [
"Hosein Mohebbi",
"Grzegorz Chrupała",
"Willem Zuidema",
"Afra Alishahi"
] | 2023-10-15 19:24:13 | http://arxiv.org/abs/2310.09925v1 | http://arxiv.org/pdf/2310.09925v1 | 2310.09925v1 |
Deep Reinforcement Learning with Explicit Context Representation | Reinforcement learning (RL) has shown an outstanding capability for solving
complex computational problems. However, most RL algorithms lack an explicit
method that would allow learning from contextual information. Humans use
context to identify patterns and relations among elements in the environment,
and to learn how to avoid wrong actions. On the other hand, what may seem
like an obviously wrong decision from a human perspective could take hundreds
of steps for an RL agent to learn to avoid. This paper proposes a framework for
discrete environments called Iota explicit context representation (IECR). The
framework involves representing each state using contextual key frames (CKFs),
which can then be used to extract a function that represents the affordances of
the state; in addition, two loss functions are introduced with respect to the
affordances of the state. The novelty of the IECR framework lies in its
capacity to extract contextual information from the environment and learn from
the CKFs' representation. We validate the framework by developing four new
algorithms that learn using context: Iota deep Q-network (IDQN), Iota double
deep Q-network (IDDQN), Iota dueling deep Q-network (IDuDQN), and Iota dueling
double deep Q-network (IDDDQN). Furthermore, we evaluate the framework and the
new algorithms in five discrete environments. We show that all the algorithms,
which use contextual information, converge in around 40,000 training steps of
the neural networks, significantly outperforming their state-of-the-art
equivalents. | [
"Francisco Munguia-Galeano",
"Ah-Hwee Tan",
"Ze Ji"
] | 2023-10-15 19:23:05 | http://arxiv.org/abs/2310.09924v1 | http://arxiv.org/pdf/2310.09924v1 | 2310.09924v1 |
BONES: Near-Optimal Neural-Enhanced Video Streaming | Accessing high-quality video content can be challenging due to insufficient
and unstable network bandwidth. Recent advances in neural enhancement have
shown promising results in improving the quality of degraded videos through
deep learning. Neural-Enhanced Streaming (NES) incorporates this new approach
into video streaming, allowing users to download low-quality video segments and
then enhance them to obtain high-quality content without violating the playback
of the video stream. We introduce BONES, an NES control algorithm that jointly
manages the network and computational resources to maximize the quality of
experience (QoE) of the user. BONES formulates NES as a Lyapunov optimization
problem and solves it in an online manner with near-optimal performance, making
it the first NES algorithm to provide a theoretical performance guarantee. Our
comprehensive experimental results indicate that BONES increases QoE by 4% to
13% over state-of-the-art algorithms, demonstrating its potential to enhance
the video streaming experience for users. Our code and data will be released to
the public. | [
"Lingdong Wang",
"Simran Singh",
"Jacob Chakareski",
"Mohammad Hajiesmaili",
"Ramesh K. Sitaraman"
] | 2023-10-15 19:08:18 | http://arxiv.org/abs/2310.09920v1 | http://arxiv.org/pdf/2310.09920v1 | 2310.09920v1 |
Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models | We propose the first unsupervised and learning-based method to identify
interpretable directions in the h-space of pre-trained diffusion models. Our
method is derived from an existing technique that operates on the GAN latent
space. In a nutshell, we employ a shift control module for pre-trained
diffusion models to manipulate a sample into a shifted version of itself,
followed by a reconstructor to reproduce both the type and the strength of the
manipulation. By jointly optimizing them, the model will spontaneously discover
disentangled and interpretable directions. To prevent the discovery of
meaningless and destructive directions, we employ a discriminator to maintain
the fidelity of the shifted sample. Due to the iterative generative process of
diffusion models, our training requires a substantial amount of GPU VRAM to
store numerous intermediate tensors for back-propagating gradient. To address
this issue, we first propose a general VRAM-efficient training algorithm based
on gradient checkpointing technique to back-propagate any gradient through the
whole generative process, with an acceptable VRAM footprint at some cost in
training efficiency. Compared with existing related works on diffusion models,
our method inherently identifies global and scalable directions, without
necessitating any other complicated procedures. Extensive experiments on
various datasets demonstrate the effectiveness of our method. | [
"Zijian Zhang",
"Luping Liu. Zhijie Lin",
"Yichen Zhu",
"Zhou Zhao"
] | 2023-10-15 18:44:30 | http://arxiv.org/abs/2310.09912v1 | http://arxiv.org/pdf/2310.09912v1 | 2310.09912v1 |
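A sketch of the VRAM-saving pattern described above, using PyTorch's gradient checkpointing so each generative step's activations are re-computed during the backward pass instead of stored; the denoising update and model are schematic stand-ins, not the paper's architecture.

```python
# Checkpoint every denoising step, then back-propagate a loss through the
# entire sampling trajectory to a learnable shift direction.
import torch
from torch.utils.checkpoint import checkpoint

eps_model = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 16))

def denoise_step(x, t_scale):
    return x - t_scale * eps_model(x)   # schematic DDIM-like update

x = torch.randn(4, 16)
shift = torch.zeros(16, requires_grad=True)   # direction being optimized
x = x + shift
for t in range(50):
    # Recompute this step's activations during backward instead of caching.
    x = checkpoint(denoise_step, x, 0.02, use_reentrant=False)

loss = x.pow(2).mean()
loss.backward()                        # gradient flows back through all steps
print(shift.grad.shape)
```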
Predictive Maintenance Model Based on Anomaly Detection in Induction Motors: A Machine Learning Approach Using Real-Time IoT Data | With the support of Internet of Things (IoT) devices, it is possible to
acquire data from degradation phenomena and design data-driven models to
perform anomaly detection in industrial equipment. This approach not only
identifies potential anomalies but can also serve as a first step toward
building predictive maintenance policies. In this work, we demonstrate a novel
anomaly detection system on induction motors used in pumps, compressors, fans,
and other industrial machines. This work evaluates a combination of
pre-processing techniques and machine learning (ML) models with a low
computational cost. We use a combination of pre-processing techniques such as
Fast Fourier Transform (FFT), Wavelet Transform (WT), and binning, which are
well-known approaches for extracting features from raw data. We also aim to
guarantee an optimal balance between multiple conflicting parameters, such as
anomaly detection rate, false positive rate, and inference speed of the
solution. To this end, multiobjective optimization and analysis are performed
on the evaluated models. Pareto-optimal solutions are presented to select which
models have the best results regarding classification metrics and computational
effort. Unlike most works in this field, which use publicly available
datasets to validate their models, we propose an end-to-end solution combining
low-cost and readily available IoT sensors. The approach is validated by
acquiring a custom dataset from induction motors. Also, we fuse vibration,
temperature, and noise data from these sensors as the input to the proposed ML
model. Therefore, we aim to propose a methodology general enough to be applied
in different industrial contexts in the future. | [
"Sergio F. Chevtchenko",
"Monalisa C. M. dos Santos",
"Diego M. Vieira",
"Ricardo L. Mota",
"Elisson Rocha",
"Bruna V. Cruz",
"Danilo Araújo",
"Ermeson Andrade"
] | 2023-10-15 18:43:45 | http://arxiv.org/abs/2310.14949v1 | http://arxiv.org/pdf/2310.14949v1 | 2310.14949v1 |
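A minimal sketch of the described pipeline shape: FFT magnitudes binned into coarse bands feeding a low-cost unsupervised detector. The signal model, 10 kHz sample rate, and IsolationForest choice are illustrative assumptions, not the authors' exact configuration.

```python
# FFT + binning feature extraction on simulated vibration signals, followed
# by an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
fs, n = 10_000, 4096
t = np.arange(n) / fs

def vibration(fault=False):
    x = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(n)
    if fault:
        x += 0.5 * np.sin(2 * np.pi * 137 * t)   # bearing-fault-like tone
    return x

def binned_fft(x, n_bins=32):
    mag = np.abs(np.fft.rfft(x))[1:]             # drop the DC component
    mag = mag[: (len(mag) // n_bins) * n_bins]
    return mag.reshape(n_bins, -1).mean(axis=1)  # average within each band

X_train = np.stack([binned_fft(vibration()) for _ in range(200)])
X_test = np.stack([binned_fft(vibration(fault=f))
                   for f in [False] * 5 + [True] * 5])

clf = IsolationForest(random_state=0).fit(X_train)
print(clf.predict(X_test))   # +1 = normal, -1 = anomaly
```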
Evaluation of feature selection performance for identification of best effective technical indicators on stock market price prediction | Due to the influence of many factors, including technical indicators on stock
market prediction, feature selection is important to choose the best
indicators. One of the feature selection methods that consider the performance
of models during feature selection is the wrapper feature selection method. The
aim of this research is to identify a combination of the best stock market
indicators through feature selection to predict the stock market price with the
least error. In order to evaluate the impact of wrapper feature selection
techniques on stock market prediction, in this paper SFS and SBS with 10
estimators and 123 technical indicators have been examined on the last 13 years
of Apple Company stock data. Also, using the proposed method, the data created
with a 3-day time window were converted into appropriate inputs for the
regression methods.
Based on the results observed: (1) Each wrapper feature selection method has
different results with different machine learning methods, and each method is
more correlated with a specific set of technical indicators of the stock
market. (2) The Ridge and LR estimators, both alone and with the two wrapper
feature selection methods, SFS and SBS, had the best results on all
assessment criteria for market forecasting. (3) The Ridge and LR methods have
the best stock market prediction results across R2, MSE, RMSE, MAE, and MAPE.
Also, the MLP regression method, along with Sequential Forward Selection
and the MSE, had the best performance. SVR regression, along with the SFS and
the MSE, has improved greatly compared to the SVR regression with all
indicators. (4) It was also observed that different features are selected by
different ML methods with different evaluation parameters. (5) Most ML methods
have used the Squeeze_pro, Percentage Price Oscillator, Thermo, Decay, Archer
On-Balance Volume, Bollinger Bands, Squeeze and Ichimoku indicator. | [
"Fatemeh Moodi",
"Amir Jahangard-Rafsanjani"
] | 2023-10-15 18:09:09 | http://arxiv.org/abs/2310.09903v3 | http://arxiv.org/pdf/2310.09903v3 | 2310.09903v3 |
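A sketch of the SFS/SBS wrapper setup with a Ridge estimator, mirroring the combination the abstract highlights; the synthetic regression data stands in for the Apple stock indicators.

```python
# Forward (SFS) and backward (SBS) wrapper feature selection around Ridge,
# scored by cross-validated MSE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=40, n_informative=8,
                       noise=5.0, random_state=0)   # 40 stand-in "indicators"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for direction in ("forward", "backward"):           # SFS vs SBS
    sfs = SequentialFeatureSelector(Ridge(), n_features_to_select=8,
                                    direction=direction,
                                    scoring="neg_mean_squared_error", cv=5)
    sfs.fit(X_tr, y_tr)
    model = Ridge().fit(sfs.transform(X_tr), y_tr)
    mse = mean_squared_error(y_te, model.predict(sfs.transform(X_te)))
    print(direction, "selected:", np.flatnonzero(sfs.get_support()),
          "MSE:", round(mse, 2))
```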
Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning | Transfer-based adversarial attacks raise a severe threat to real-world deep
learning systems since they do not require access to target models. Adversarial
training (AT), which is recognized as the strongest defense against white-box
attacks, has also guaranteed high robustness to (black-box) transfer-based
attacks. However, AT suffers from heavy computational overhead since it
optimizes the adversarial examples during the whole training process. In this
paper, we demonstrate that such heavy optimization is not necessary for AT
against transfer-based attacks. Instead, a one-shot adversarial augmentation
prior to training is sufficient, and we name this new defense paradigm
Data-centric Robust Learning (DRL). Our experimental results show that DRL
outperforms widely-used AT techniques (e.g., PGD-AT, TRADES, EAT, and FAT) in
terms of black-box robustness and even surpasses the top-1 defense on
RobustBench when combined with diverse data augmentations and loss
regularizations. We also identify other benefits of DRL, for instance, the
model generalization capability and robust fairness. | [
"Yulong Yang",
"Chenhao Lin",
"Xiang Ji",
"Qiwei Tian",
"Qian Li",
"Hongshan Yang",
"Zhibo Wang",
"Chao Shen"
] | 2023-10-15 17:20:42 | http://arxiv.org/abs/2310.09891v1 | http://arxiv.org/pdf/2310.09891v1 | 2310.09891v1 |
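A sketch of the one-shot, data-centric recipe the abstract describes: craft adversarial copies of the training set once, merge them in, then run plain standard training. FGSM against the initial model is used here for brevity, whereas the paper's actual augmentations are richer and more diverse.

```python
# One-shot adversarial augmentation followed by ordinary training; no
# per-step adversarial optimization as in standard adversarial training.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()
model = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))

def fgsm(model, X, y, eps=0.1):
    X = X.clone().requires_grad_(True)
    loss = F.cross_entropy(model(X), y)
    (grad,) = torch.autograd.grad(loss, X)
    return (X + eps * grad.sign()).detach()

X_adv = fgsm(model, X, y)                       # crafted once, before training
X_aug = torch.cat([X, X_adv]); y_aug = torch.cat([y, y])

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(100):                        # ordinary training afterwards
    opt.zero_grad()
    F.cross_entropy(model(X_aug), y_aug).backward()
    opt.step()
```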
Score-Based Methods for Discrete Optimization in Deep Learning | Discrete optimization problems often arise in deep learning tasks, despite
the fact that neural networks typically operate on continuous data. One class
of these problems involve objective functions which depend on neural networks,
but optimization variables which are discrete. Although the discrete
optimization literature provides efficient algorithms, they are still
impractical in these settings due to the high cost of an objective function
evaluation, which involves a neural network forward-pass. In particular, they
require $O(n)$ complexity per iteration, but real data such as point clouds
have values of $n$ in thousands or more. In this paper, we investigate a
score-based approximation framework to solve such problems. This framework uses
a score function as a proxy for the marginal gain of the objective, leveraging
embeddings of the discrete variables and speed of auto-differentiation
frameworks to compute backward-passes in parallel. We experimentally
demonstrate, in adversarial set classification tasks, that our method achieves
a superior trade-off in terms of speed and solution quality compared to
heuristic methods. | [
"Eric Lei",
"Arman Adibi",
"Hamed Hassani"
] | 2023-10-15 17:14:17 | http://arxiv.org/abs/2310.09890v1 | http://arxiv.org/pdf/2310.09890v1 | 2310.09890v1 |
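A sketch of the score-as-proxy idea: relax the selection mask, take one backward pass, and use the gradient entries as per-item scores instead of n separate forward passes. The gradient-based proxy and the stand-in network are assumptions for illustration, not the paper's exact score function.

```python
# Choose k of n items using one backward pass as a marginal-gain proxy,
# rather than n objective evaluations (one neural forward pass each).
import torch

torch.manual_seed(0)
n, k = 1000, 10
f = torch.nn.Sequential(torch.nn.Linear(1000, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 1))

mask = torch.zeros(n, requires_grad=True)   # relaxed selection variable
f(mask).sum().backward()
scores = mask.grad                          # proxy for per-item marginal gain
chosen = torch.topk(scores, k).indices

# One exact evaluation to check the selected subset.
hard = torch.zeros(n); hard[chosen] = 1.0
print(f(hard).item())
```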
Statistical inference using machine learning and classical techniques based on accumulated local effects (ALE) | Accumulated Local Effects (ALE) is a model-agnostic approach for global
explanations of the results of black-box machine learning (ML) algorithms.
There are at least three challenges with conducting statistical inference based
on ALE: ensuring the reliability of ALE analyses, especially in the context of
small datasets; intuitively characterizing a variable's overall effect in ML;
and making robust inferences from ML data analysis. In response, we introduce
innovative tools and techniques for statistical inference using ALE,
establishing bootstrapped confidence intervals tailored to dataset size and
introducing ALE effect size measures that intuitively indicate effects on both
the outcome variable scale and a normalized scale. Furthermore, we demonstrate
how to use these tools to draw reliable statistical inferences, reflecting the
flexible patterns ALE adeptly highlights, with implementations available in the
'ale' package in R. This work propels the discourse on ALE and its
applicability in ML and statistical analysis forward, offering practical
solutions to prevailing challenges in the field. | [
"Chitu Okoli"
] | 2023-10-15 16:17:21 | http://arxiv.org/abs/2310.09877v1 | http://arxiv.org/pdf/2310.09877v1 | 2310.09877v1 |
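A minimal first-order ALE computation with a percentile-bootstrap confidence band, following the standard ALE recipe (quantile bins, accumulated local differences, centering). Note the paper's implementation is the R 'ale' package; this Python version is purely illustrative and centers the curve with an unweighted mean.

```python
# First-order ALE for feature j, plus bootstrapped 95% confidence bands.
import numpy as np

def ale_1d(predict, X, j, n_bins=10):
    z = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(z, X[:, j], side="right") - 1, 0, n_bins - 1)
    eff = np.zeros(n_bins)
    for k in range(n_bins):
        rows = X[idx == k]
        if len(rows) == 0:
            continue
        lo, hi = rows.copy(), rows.copy()
        lo[:, j], hi[:, j] = z[k], z[k + 1]
        eff[k] = (predict(hi) - predict(lo)).mean()   # local effect in bin k
    ale = np.cumsum(eff)
    return z, ale - ale.mean()                        # center the curve

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
predict = lambda X: X[:, 0] ** 2 + X[:, 1]            # stand-in black box

curves = []
for _ in range(200):                                  # bootstrap resamples
    Xb = X[rng.integers(0, len(X), len(X))]
    curves.append(ale_1d(predict, Xb, j=0)[1])
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
print(np.round(lo, 2)); print(np.round(hi, 2))
```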
Empower Text-Attributed Graphs Learning with Large Language Models (LLMs) | Text-attributed graphs have recently garnered significant attention due to
their wide range of applications in web domains. Existing methodologies employ
word embedding models for acquiring text representations as node features,
which are subsequently fed into Graph Neural Networks (GNNs) for training.
Recently, the advent of Large Language Models (LLMs) has introduced their
powerful capabilities in information retrieval and text generation, which can
greatly enhance the text attributes of graph data. Furthermore, the acquisition
and labeling of extensive datasets are both costly and time-consuming
endeavors. Consequently, few-shot learning has emerged as a crucial problem in
the context of graph learning tasks. In order to tackle this challenge, we
propose a lightweight paradigm called ENG, which adopts a plug-and-play
approach to empower text-attributed graphs through node generation using LLMs.
Specifically, we utilize LLMs to extract semantic information from the labels
and generate samples that belong to these categories as exemplars.
Subsequently, we employ an edge predictor to capture the structural information
inherent in the raw dataset and integrate the newly generated samples into the
original graph. This approach harnesses LLMs for enhancing class-level
information and seamlessly introduces labeled nodes and edges without modifying
the raw dataset, thereby facilitating the node classification task in few-shot
scenarios. Extensive experiments demonstrate the outstanding performance of our
proposed paradigm, particularly in low-shot scenarios. For instance, in the
1-shot setting of the ogbn-arxiv dataset, ENG achieves a 76% improvement over
the baseline model. | [
"Jianxiang Yu",
"Yuxiang Ren",
"Chenghua Gong",
"Jiaqi Tan",
"Xiang Li",
"Xuecang Zhang"
] | 2023-10-15 16:04:28 | http://arxiv.org/abs/2310.09872v1 | http://arxiv.org/pdf/2310.09872v1 | 2310.09872v1 |
Federated Multi-Objective Learning | In recent years, multi-objective optimization (MOO) emerges as a foundational
problem underpinning many multi-agent multi-task learning applications.
However, existing algorithms in MOO literature remain limited to centralized
learning settings, which do not satisfy the distributed nature and data privacy
needs of such multi-agent multi-task learning applications. This motivates us
to propose a new federated multi-objective learning (FMOL) framework with
multiple clients distributively and collaboratively solving an MOO problem
while keeping their training data private. Notably, our FMOL framework allows a
different set of objective functions across different clients to support a wide
range of applications, which advances and generalizes the MOO formulation to
the federated learning paradigm for the first time. For this FMOL framework, we
propose two new federated multi-objective optimization (FMOO) algorithms called
federated multi-gradient descent averaging (FMGDA) and federated stochastic
multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates
to significantly reduce communication costs, while achieving the {\em same}
convergence rates as those of their algorithmic counterparts in the
single-objective federated learning. Our extensive experiments also corroborate
the efficacy of our proposed FMOO algorithms. | [
"Haibo Yang",
"Zhuqing Liu",
"Jia Liu",
"Chaosheng Dong",
"Michinari Momma"
] | 2023-10-15 15:45:51 | http://arxiv.org/abs/2310.09866v1 | http://arxiv.org/pdf/2310.09866v1 | 2310.09866v1 |
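A sketch of what an FMGDA-style server step could look like for two objectives: average each objective's client gradients, then combine the averages with the classic closed-form min-norm (MGDA) weight. Client-side local updates and step-size rules are elided, and the closed form is the standard two-task one, not necessarily the paper's exact update.

```python
# Server-side multi-gradient step: per-objective averaging across clients,
# then a min-norm convex combination as the common descent direction.
import numpy as np

rng = np.random.default_rng(0)
clients_obj1 = [rng.normal(size=5) for _ in range(4)]  # grads, objective 1
clients_obj2 = [rng.normal(size=5) for _ in range(4)]  # grads, objective 2

g1 = np.mean(clients_obj1, axis=0)
g2 = np.mean(clients_obj2, axis=0)

# Closed-form minimizer of ||lam*g1 + (1-lam)*g2|| over lam in [0, 1].
lam = np.clip((g2 - g1) @ g2 / np.sum((g1 - g2) ** 2), 0.0, 1.0)
d = lam * g1 + (1 - lam) * g2          # common descent direction
w = np.zeros(5)
w -= 0.1 * d                           # global model step
print("lambda =", round(float(lam), 3), "direction =", np.round(d, 3))
```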
Federated Reinforcement Learning for Resource Allocation in V2X Networks | Resource allocation significantly impacts the performance of
vehicle-to-everything (V2X) networks. Most existing algorithms for resource
allocation are based on optimization or machine learning (e.g., reinforcement
learning). In this paper, we explore resource allocation in a V2X network under
the framework of federated reinforcement learning (FRL). On one hand, the usage
of RL overcomes many challenges from the model-based optimization schemes. On
the other hand, federated learning (FL) enables agents to deal with a number of
practical issues, such as privacy, communication overhead, and exploration
efficiency. The framework of FRL is then implemented by the inexact alternative
direction method of multipliers (ADMM), where subproblems are solved
approximately using policy gradients and accelerated by an adaptive step size
calculated from their second moments. The developed algorithm, PASM, is proven
to be convergent under mild conditions and to perform well numerically
compared with some baseline methods for solving the resource allocation problem
in a V2X network. | [
"Kaidi Xu",
"Shenglong Zhou",
"Geoffrey Ye Li"
] | 2023-10-15 15:26:54 | http://arxiv.org/abs/2310.09858v1 | http://arxiv.org/pdf/2310.09858v1 | 2310.09858v1 |
MERTech: Instrument Playing Technique Detection Using Self-Supervised Pretrained Model With Multi-Task Finetuning | Instrument playing techniques (IPTs) constitute a pivotal component of
musical expression. However, the development of automatic IPT detection methods
suffers from limited labeled data and inherent class imbalance issues. In this
paper, we propose to apply a self-supervised learning model pre-trained on
large-scale unlabeled music data and finetune it on IPT detection tasks. This
approach addresses data scarcity and class imbalance challenges. Recognizing
the significance of pitch in capturing the nuances of IPTs and the importance
of onset in locating IPT events, we investigate multi-task finetuning with
pitch and onset detection as auxiliary tasks. Additionally, we apply a
post-processing approach for event-level prediction, where an IPT activation
initiates an event only if the onset output confirms an onset in that frame.
Our method outperforms prior approaches in both frame-level and event-level
metrics across multiple IPT benchmark datasets. Further experiments demonstrate
the efficacy of multi-task finetuning on each IPT class. | [
"Dichucheng Li",
"Yinghao Ma",
"Weixing Wei",
"Qiuqiang Kong",
"Yulun Wu",
"Mingjin Che",
"Fan Xia",
"Emmanouil Benetos",
"Wei Li"
] | 2023-10-15 15:00:00 | http://arxiv.org/abs/2310.09853v1 | http://arxiv.org/pdf/2310.09853v1 | 2310.09853v1 |
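The event-level post-processing rule is easy to state in code: a frame-level IPT activation opens an event only if the onset head also fires at that frame. The thresholds and the toy probability tracks below are illustrative.

```python
# Onset-gated event decoding from frame-level IPT and onset probabilities.
import numpy as np

def ipt_events(ipt_prob, onset_prob, thr=0.5, onset_thr=0.5):
    active = ipt_prob >= thr
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            if onset_prob[i] >= onset_thr:   # onset must confirm the start
                start = i
        elif not a and start is not None:
            events.append((start, i))        # (onset frame, offset frame)
            start = None
    if start is not None:
        events.append((start, len(active)))
    return events

ipt = np.array([0.1, 0.7, 0.8, 0.6, 0.2, 0.9, 0.9, 0.1])
onset = np.array([0.0, 0.9, 0.1, 0.0, 0.0, 0.2, 0.0, 0.0])
print(ipt_events(ipt, onset))   # second activation is dropped: no onset
```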
ACES: Generating Diverse Programming Puzzles with Autotelic Language Models and Semantic Descriptors | Finding and selecting new and interesting problems to solve is at the heart
of curiosity, science and innovation. We here study automated problem
generation in the context of the open-ended space of python programming
puzzles. Existing generative models often aim at modeling a reference
distribution without any explicit diversity optimization. Other methods
explicitly optimizing for diversity do so either in limited hand-coded
representation spaces or in uninterpretable learned embedding spaces that may
not align with human perceptions of interesting variations. With ACES
(Autotelic Code Exploration via Semantic descriptors), we introduce a new
autotelic generation method that leverages semantic descriptors produced by a
large language model (LLM) to directly optimize for interesting diversity, as
well as few-shot-based generation. Each puzzle is labeled along 10 dimensions,
each capturing a programming skill required to solve it. ACES generates and
pursues novel and feasible goals to explore that abstract semantic space,
slowly discovering a diversity of solvable programming puzzles in any given
run. Across a set of experiments, we show that ACES discovers a richer
diversity of puzzles than existing diversity-maximizing algorithms as measured
across a range of diversity metrics. We further study whether and in which
conditions this diversity can translate into the successful training of puzzle
solving models. | [
"Julien Pourcel",
"Cédric Colas",
"Pierre-Yves Oudeyer",
"Laetitia Teodorescu"
] | 2023-10-15 14:57:14 | http://arxiv.org/abs/2310.10692v2 | http://arxiv.org/pdf/2310.10692v2 | 2310.10692v2 |
Alpha Elimination: Using Deep Reinforcement Learning to Reduce Fill-In during Sparse Matrix Decomposition | A large number of computational and scientific methods commonly require
decomposing a sparse matrix into triangular factors as LU decomposition. A
common problem faced during this decomposition is that even though the given
matrix may be very sparse, the decomposition may lead to denser triangular
factors due to fill-in. Significant fill-in may lead to prohibitively large
computational costs and memory requirements during decomposition as well as
during the solve phase. To this end, several heuristic sparse matrix reordering
methods have been proposed to reduce fill-in before the decomposition. However,
finding an optimal reordering algorithm that leads to minimal fill-in during
such decomposition is known to be a NP-hard problem. A reinforcement learning
based approach is proposed for this problem. The sparse matrix reordering
problem is formulated as a single player game. More specifically, Monte-Carlo
tree search in combination with neural network is used as a decision making
algorithm to search for the best move in our game. The proposed method,
alphaElimination, is found to produce significantly fewer non-zeros in the LU
decomposition as compared to existing state-of-the-art heuristic algorithms
with little to no increase in overall running time of the algorithm. The code
for the project will be publicly available
here\footnote{\url{https://github.com/misterpawan/alphaEliminationPaper}}. | [
"Arpan Dasgupta",
"Pawan Kumar"
] | 2023-10-15 14:51:22 | http://arxiv.org/abs/2310.09852v1 | http://arxiv.org/pdf/2310.09852v1 | 2310.09852v1 |
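To see how much the ordering matters, a small sketch compares LU fill-in under SuperLU's built-in column orderings; these heuristics stand in for the learned reordering, and the random test matrix is an assumption.

```python
# Measure fill-in (extra non-zeros in L+U beyond A) for different orderings.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

A = sp.random(400, 400, density=0.02, random_state=0, format="csc")
A = A + sp.eye(400, format="csc") * 10    # make it comfortably invertible

for order in ("NATURAL", "COLAMD"):
    lu = splu(A, permc_spec=order)
    fill = (lu.L.nnz + lu.U.nnz) - A.nnz  # extra non-zeros created
    print(f"{order:8s} fill-in: {fill}")
```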
Enhancing ML model accuracy for Digital VLSI circuits using diffusion models: A study on synthetic data generation | Generative AI has seen remarkable growth over the past few years, with
diffusion models being state-of-the-art for image generation. This study
investigates the use of diffusion models to generate artificial data for
electronic circuits, enhancing the accuracy of subsequent
machine learning models in tasks such as performance assessment, design, and
testing when training data is usually known to be very limited. We utilize
simulations in the HSPICE design environment with 22nm CMOS technology nodes to
obtain representative real training data for our proposed diffusion model. Our
results demonstrate the close resemblance of the diffusion-generated synthetic
data to real data. We validate the quality of the generated data and demonstrate
that such data augmentation is certainly effective in the predictive analysis of VLSI
design for digital circuits. | [
"Prasha Srivastava",
"Pawan Kumar",
"Zia Abbas"
] | 2023-10-15 14:20:09 | http://arxiv.org/abs/2310.10691v1 | http://arxiv.org/pdf/2310.10691v1 | 2310.10691v1 |
XRMDN: A Recurrent Mixture Density Networks-based Architecture for Short-Term Probabilistic Demand Forecasting in Mobility-on-Demand Systems with High Volatility | In real Mobility-on-Demand (MoD) systems, demand is subject to high and
dynamic volatility, which is difficult to predict by conventional time-series
forecasting approaches. Most existing forecasting approaches yield the point
value as the prediction result, which ignores the uncertainty that exists in
the forecasting result. This will lead to the forecasting result severely
deviating from the true demand value due to the high volatility existing in
demand. To fill the gap, we propose an extended recurrent mixture density
network (XRMDN), which extends the weight and mean neural networks to recurrent
neural networks. The recurrent neurons for mean and variance can capture the
trend of the historical time-series data, which enables better forecasting
under dynamic and high volatility. We conduct comprehensive experiments on
one taxi trip record and one bike-sharing real MoD data set to validate the
performance of XRMDN. Specifically, we compare our model to three types of
benchmark models, including statistical, machine learning, and deep learning
models on three evaluation metrics. The validation results show that XRMDN
outperforms the three groups of benchmark models in terms of the evaluation
metrics. Most importantly, XRMDN substantially improves the forecasting
accuracy for demands with strong volatility. Last but not least, this
probabilistic demand forecasting model contributes not only to the demand
prediction in MoD systems but also to other optimization application problems,
especially optimization under uncertainty, in MoD applications. | [
"Xiaoming Li",
"Hubert Normandin-Taillon",
"Chun Wang",
"Xiao Huang"
] | 2023-10-15 14:18:42 | http://arxiv.org/abs/2310.09847v1 | http://arxiv.org/pdf/2310.09847v1 | 2310.09847v1 |
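A minimal Gaussian mixture density head and its negative log-likelihood, the building block that XRMDN extends with recurrent weight, mean, and variance networks; this static version is illustrative only.

```python
# Mixture density network head: mixture weights, means, and log-std-devs,
# trained by minimizing the Gaussian-mixture negative log-likelihood.
import math
import torch
import torch.nn.functional as F

class MDNHead(torch.nn.Module):
    def __init__(self, d_in, n_comp):
        super().__init__()
        self.pi = torch.nn.Linear(d_in, n_comp)         # mixture weights
        self.mu = torch.nn.Linear(d_in, n_comp)         # component means
        self.log_sigma = torch.nn.Linear(d_in, n_comp)  # log std-devs

    def nll(self, h, y):
        log_pi = F.log_softmax(self.pi(h), dim=-1)
        mu, log_sigma = self.mu(h), self.log_sigma(h)
        # Per-component Gaussian log-density, then logsumexp over components.
        log_prob = (-0.5 * ((y - mu) / log_sigma.exp()) ** 2
                    - log_sigma - 0.5 * math.log(2 * math.pi))
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

head = MDNHead(d_in=16, n_comp=3)
h = torch.randn(32, 16)   # stands in for an RNN hidden state per time step
y = torch.randn(32, 1)    # observed demand
print(head.nll(h, y))
```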
Explaining How a Neural Network Play the Go Game and Let People Learn | The AI model has surpassed human players in the game of Go, and it is widely
believed that the AI model has encoded new knowledge about the Go game beyond
human players. In this way, explaining the knowledge encoded by the AI model
and using it to teach human players represent a promising-yet-challenging issue
in explainable AI. To this end, mathematical supports are required to ensure
that human players can learn accurate and verifiable knowledge, rather than
specious intuitive analysis. Thus, in this paper, we extract interaction
primitives between stones encoded by the value network for the Go game, so as
to enable people to learn from the value network. Experiments show the
effectiveness of our method. | [
"Huilin Zhou",
"Huijie Tang",
"Mingjie Li",
"Hao Zhang",
"Zhenyu Liu",
"Quanshi Zhang"
] | 2023-10-15 13:57:50 | http://arxiv.org/abs/2310.09838v1 | http://arxiv.org/pdf/2310.09838v1 | 2310.09838v1 |
Secure and Robust Communications for Cislunar Space Networks | There is no doubt that the Moon has become the center of interest for
commercial and international actors. Over the past decade, the number of
planned long-term missions has increased dramatically. This makes the
establishment of cislunar space networks (CSNs) crucial to orchestrate
uninterrupted communications between the Moon and Earth. However, there are
numerous challenges, unknowns, and uncertainties associated with cislunar
communications that may pose various risks to lunar missions. In this study, we
aim to address these challenges for cislunar communications by proposing a
machine learning-based cislunar space domain awareness (SDA) capability that
enables robust and secure communications. To this end, we first propose a
detailed channel model for selected cislunar scenarios. Secondly, we propose
two types of interference that could model anomalies that occur in cislunar
space and are so far known only to a limited extent. Finally, we discuss our
cislunar SDA to work in conjunction with the spacecraft communication system.
Our proposed cislunar SDA, involving heuristic learning capabilities with
machine learning algorithms, detects interference models with over 96%
accuracy. The results demonstrate the promising performance of our cislunar SDA
approach for secure and robust cislunar communication. | [
"Selen Gecgel Cetin",
"Gunes Karabulut Kurt",
"Angeles Vazquez-Castro"
] | 2023-10-15 13:40:22 | http://arxiv.org/abs/2310.09835v1 | http://arxiv.org/pdf/2310.09835v1 | 2310.09835v1 |
MIR2: Towards Provably Robust Multi-Agent Reinforcement Learning by Mutual Information Regularization | Robust multi-agent reinforcement learning (MARL) necessitates resilience to
uncertain or worst-case actions by unknown allies. Existing max-min
optimization techniques in robust MARL seek to enhance resilience by training
agents against worst-case adversaries, but this becomes intractable as the
number of agents grows, leading to exponentially increasing worst-case
scenarios. Attempts to simplify this complexity often yield overly pessimistic
policies, inadequate robustness across scenarios and high computational
demands. Unlike these approaches, humans naturally learn adaptive and resilient
behaviors without the necessity of preparing for every conceivable worst-case
scenario. Motivated by this, we propose MIR2, which trains the policy in routine
scenarios and minimizes Mutual Information as Robust Regularization.
Theoretically, we frame robustness as an inference problem and prove that
minimizing mutual information between histories and actions implicitly
maximizes a lower bound on robustness under certain assumptions. Further
analysis reveals that our proposed approach prevents agents from overreacting
to others through an information bottleneck and aligns the policy with a robust
action prior. Empirically, our MIR2 displays even greater resilience against
worst-case adversaries than max-min optimization in StarCraft II, Multi-agent
Mujoco and rendezvous. Our superiority is consistent when deployed in
challenging real-world robot swarm control scenario. See code and demo videos
in Supplementary Materials. | [
"Simin Li",
"Ruixiao Xu",
"Jun Guo",
"Pu Feng",
"Jiakai Wang",
"Aishan Liu",
"Yaodong Yang",
"Xianglong Liu",
"Weifeng Lv"
] | 2023-10-15 13:35:51 | http://arxiv.org/abs/2310.09833v1 | http://arxiv.org/pdf/2310.09833v1 | 2310.09833v1 |