title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
FedNAR: Federated Optimization with Normalized Annealing Regularization | Weight decay is a standard technique to improve generalization performance in
modern deep neural network optimization, and is also widely adopted in
federated learning (FL) to prevent overfitting in local clients. In this paper,
we first explore the choice of weight decay and identify that the weight decay
value appreciably influences the convergence of existing FL algorithms. While
preventing overfitting is crucial, weight decay can introduce an optimization
goal that diverges from the global objective, an effect that is further amplified in
FL due to multiple local updates and heterogeneous data distribution. To
address this challenge, we develop {\it Federated optimization with Normalized
Annealing Regularization} (FedNAR), a simple yet effective and versatile
algorithmic plug-in that can be seamlessly integrated into any existing FL
algorithms. Essentially, we regulate the magnitude of each update by performing
co-clipping of the gradient and weight decay. We provide a comprehensive
theoretical analysis of FedNAR's convergence rate and conduct extensive
experiments on both vision and language datasets with different backbone
federated optimization algorithms. Our experimental results consistently
demonstrate that incorporating FedNAR into existing FL algorithms leads to
accelerated convergence and heightened model accuracy. Moreover, FedNAR
exhibits resilience in the face of various hyperparameter configurations.
Specifically, FedNAR has the ability to self-adjust the weight decay when the
initial specification is not optimal, while the accuracy of traditional FL
algorithms would markedly decline. Our code is released at
\href{https://github.com/ljb121002/fednar}{https://github.com/ljb121002/fednar}. | [
"Junbo Li",
"Ang Li",
"Chong Tian",
"Qirong Ho",
"Eric P. Xing",
"Hongyi Wang"
] | 2023-10-04 21:11:40 | http://arxiv.org/abs/2310.03163v1 | http://arxiv.org/pdf/2310.03163v1 | 2310.03163v1 |
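A minimal sketch of the co-clipping idea described in the FedNAR abstract above: the gradient and the weight-decay term are combined and clipped jointly, so the regularization can never dominate the magnitude of a local update. The `max_norm` threshold and the plain-SGD update rule are assumptions for illustration; the paper's exact normalization and annealing schedule may differ.

```python
import torch

def fednar_local_step(param, grad, lr=0.1, weight_decay=1e-3, max_norm=1.0):
    # Combine the descent direction and the weight-decay term ...
    update = grad + weight_decay * param
    norm = update.norm()
    if norm > max_norm:  # ... and co-clip them as a single quantity
        update = update * (max_norm / norm)
    return param - lr * update

# Toy usage on a random parameter vector
p, g = torch.randn(10), torch.randn(10)
p = fednar_local_step(p, g)
```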
Neural architecture impact on identifying temporally extended Reinforcement Learning tasks | Inspired by recent developments in attention models for image classification
and natural language processing, we present several attention-based
architectures for the reinforcement learning (RL) domain, capable of performing
well on the OpenAI Gym Atari-2600 game suite. Despite the recent success of
deep reinforcement learning techniques in fields such as robotics, gaming, and
healthcare, they suffer from a major drawback: neural networks are difficult
to interpret. We try to get around this problem with the help of
attention-based models. In attention-based models, extracting the attention
map and overlaying it onto the input images allows direct observation of the
information the agent uses to select actions and easier interpretation of the
logic behind the chosen actions. In addition to playing well in Gym Atari
environments, our models provide insight into how the agent perceives its
environment. Moreover, motivated by recent developments in attention-based
video-classification models using the Vision Transformer, we propose a Vision
Transformer-based architecture for the image-based RL domain as well. Compared
to previous Vision Transformer works, our model is faster to train and
requires fewer computational resources. | [
"Victor Vadakechirayath George"
] | 2023-10-04 21:09:19 | http://arxiv.org/abs/2310.03161v1 | http://arxiv.org/pdf/2310.03161v1 | 2310.03161v1 |
Assessment of Prediction Intervals Using Uncertainty Characteristics Curves | Accurate quantification of model uncertainty has long been recognized as a
fundamental requirement for trusted AI. In regression tasks, uncertainty is
typically quantified using prediction intervals calibrated to an ad-hoc
operating point, making evaluation and comparison across different studies
relatively difficult. Our work leverages: (1) the concept of operating
characteristics curves and (2) the notion of a gain over a null reference, to
derive a novel operating point agnostic assessment methodology for prediction
intervals. The paper defines the Uncertainty Characteristics Curve and
demonstrates its utility in selected scenarios. We argue that the proposed
method addresses the current need for comprehensive assessment of prediction
intervals and thus represents a valuable addition to the uncertainty
quantification toolbox. | [
"Jiri Navratil",
"Benjamin Elder",
"Matthew Arnold",
"Soumya Ghosh",
"Prasanna Sattigeri"
] | 2023-10-04 20:54:08 | http://arxiv.org/abs/2310.03158v1 | http://arxiv.org/pdf/2310.03158v1 | 2310.03158v1 |
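A hedged sketch of the operating-characteristics idea from the prediction-interval abstract above: sweep a scale factor over the predicted interval half-widths and record an error/bandwidth pair at each operating point, yielding a curve that does not depend on any single calibration choice. The specific axes (miss rate, mean bandwidth) are illustrative assumptions, not the paper's exact metric definitions.

```python
import numpy as np

def uncertainty_characteristics_curve(y, y_pred, half_width, scales=None):
    """Return (miss rate, mean bandwidth) pairs across interval scalings."""
    scales = np.linspace(0.1, 5.0, 50) if scales is None else scales
    points = []
    for s in scales:
        missed = np.abs(y - y_pred) > s * half_width  # target outside interval
        points.append((missed.mean(), (s * half_width).mean()))
    return np.array(points)

# Toy usage with Gaussian targets and constant predicted half-widths
rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
curve = uncertainty_characteristics_curve(y, np.zeros(1000), np.ones(1000))
```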
FedHyper: A Universal and Robust Learning Rate Scheduler for Federated Learning with Hypergradient Descent | The theoretical landscape of federated learning (FL) undergoes rapid
evolution, but its practical application encounters a series of intricate
challenges, and hyperparameter optimization is one of these critical
challenges. Amongst the diverse adjustments in hyperparameters, the adaptation
of the learning rate emerges as a crucial component, holding the promise of
significantly enhancing the efficacy of FL systems. In response to this
critical need, this paper presents FedHyper, a novel hypergradient-based
learning rate adaptation algorithm specifically designed for FL. FedHyper
serves as a universal learning rate scheduler that can adapt both global and
local rates as the training progresses. In addition, FedHyper not only
showcases unparalleled robustness to a spectrum of initial learning rate
configurations but also significantly alleviates the necessity for laborious
empirical learning rate adjustments. We provide a comprehensive theoretical
analysis of FedHyper's convergence rate and conduct extensive experiments on
vision and language benchmark datasets. The results demonstrate that FedHyper
consistently converges 1.1-3x faster than FedAvg and the competing baselines
while achieving superior final accuracy. Moreover, FedHyper catalyzes a
remarkable surge in accuracy, augmenting it by up to 15% compared to FedAvg
under suboptimal initial learning rate settings. | [
"Ziyao Wang",
"Jianyu Wang",
"Ang Li"
] | 2023-10-04 20:51:52 | http://arxiv.org/abs/2310.03156v2 | http://arxiv.org/pdf/2310.03156v2 | 2310.03156v2 |
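The abstract above builds on hypergradient descent; below is a minimal single-worker sketch of that underlying mechanism, in which the learning rate is itself updated using the dot product of successive gradients. The hyper-learning-rate `beta` and the plain-SGD setting are assumptions; FedHyper's actual global and local schedulers are richer.

```python
import numpy as np

def sgd_with_hypergradient(grad_fn, w, lr=0.01, beta=1e-4, steps=100):
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        lr += beta * np.dot(g, prev_g)  # grow lr when successive grads agree
        w = w - lr * g
        prev_g = g
    return w, lr

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself
w_final, lr_final = sgd_with_hypergradient(lambda w: w, np.ones(5))
```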
Towards out-of-distribution generalizable predictions of chemical kinetics properties | Machine Learning (ML) techniques have found applications in estimating
chemical kinetics properties. With the accumulated drug molecules identified
through "AI4drug discovery", the next imperative lies in AI-driven design for
high-throughput chemical synthesis processes, which requires estimating the
properties of unseen reactions involving unexplored molecules. To this end, the existing ML
approaches for kinetics property prediction are required to be
Out-Of-Distribution (OOD) generalizable. In this paper, we categorize the OOD
kinetic property prediction into three levels (structure, condition, and
mechanism), revealing unique aspects of such problems. Under this framework, we
create comprehensive datasets to benchmark (1) the state-of-the-art ML
approaches for reaction prediction in the OOD setting and (2) the
state-of-the-art graph OOD methods in kinetics property prediction problems.
Our results demonstrate the challenges and opportunities in OOD kinetics
property prediction. Our datasets and benchmarks can further support research
in this direction. | [
"Zihao Wang",
"Yongqiang Chen",
"Yang Duan",
"Weijiang Li",
"Bo Han",
"James Cheng",
"Hanghang Tong"
] | 2023-10-04 20:36:41 | http://arxiv.org/abs/2310.03152v1 | http://arxiv.org/pdf/2310.03152v1 | 2310.03152v1 |
Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly | Large Language Models (LLM) and foundation models are popular as they offer
new opportunities for individuals and businesses to improve natural language
processing, interact with data, and retrieve information faster. However,
training or fine-tuning LLMs requires a vast amount of data, which can be
challenging to access due to legal or technical restrictions and may require
private computing resources. Federated Learning (FL) is a solution designed to
overcome these challenges and expand data access for deep learning
applications.
This paper takes a hardware-centric approach to explore how LLMs can be
brought to modern edge computing systems. Our study fine-tunes the FLAN-T5
model family, ranging from 80M to 3B parameters, using FL for a text
summarization task. We provide a micro-level hardware benchmark, compare the
model FLOP utilization to a state-of-the-art data center GPU, and study the
network utilization in realistic conditions. Our contribution is twofold:
First, we evaluate the current capabilities of edge computing systems and their
potential for LLM FL workloads. Second, by comparing these systems with a
data-center GPU, we demonstrate the potential for improvement and the next
steps toward achieving greater computational efficiency at the edge. | [
"Herbert Woisetschläger",
"Alexander Isenko",
"Shiqiang Wang",
"Ruben Mayer",
"Hans-Arno Jacobsen"
] | 2023-10-04 20:27:20 | http://arxiv.org/abs/2310.03150v1 | http://arxiv.org/pdf/2310.03150v1 | 2310.03150v1 |
Attributing Learned Concepts in Neural Networks to Training Data | By now there is substantial evidence that deep learning models learn certain
human-interpretable features as part of their internal representations of data.
As having the right (or wrong) concepts is critical to trustworthy machine
learning systems, it is natural to ask which inputs from the model's original
training set were most important for learning a concept at a given layer. To
answer this, we combine data attribution methods with methods for probing the
concepts learned by a model. Training network and probe ensembles for two
concept datasets on a range of network layers, we use the recently developed
TRAK method for large-scale data attribution. We find some evidence for
convergence, where removing the 10,000 top attributing images for a concept and
retraining the model does not change the location of the concept in the network
nor the probing sparsity of the concept. This suggests that rather than being
highly dependent on a few specific examples, the features that inform the
development of a concept are spread in a more diffuse manner across its
exemplars, implying robustness in concept formation. | [
"Nicholas Konz",
"Charles Godfrey",
"Madelyn Shapiro",
"Jonathan Tu",
"Henry Kvinge",
"Davis Brown"
] | 2023-10-04 20:26:59 | http://arxiv.org/abs/2310.03149v2 | http://arxiv.org/pdf/2310.03149v2 | 2310.03149v2 |
Fairness-enhancing mixed effects deep learning improves fairness on in- and out-of-distribution clustered (non-iid) data | Traditional deep learning (DL) suffers from two core problems. Firstly, it
assumes training samples are independent and identically distributed. However,
numerous real-world datasets group samples by shared measurements (e.g., study
participants or cells), violating this assumption. In these scenarios, DL can
show compromised performance, limited generalization, and interpretability
issues, coupled with cluster confounding causing Type 1 and 2 errors. Secondly,
models are typically trained for overall accuracy, often neglecting
underrepresented groups and introducing biases in crucial areas like loan
approvals or determining health insurance rates; such biases can significantly
impact one's quality of life. To address both of these challenges
simultaneously, we present a mixed effects deep learning (MEDL) framework. MEDL
separately quantifies cluster-invariant fixed effects (FE) and cluster-specific
random effects (RE) through the introduction of: 1) a cluster adversary which
encourages the learning of cluster-invariant FE, 2) a Bayesian neural network
which quantifies the RE, and 3) a mixing function combining the FE and RE into a
mixed-effect prediction. We marry this MEDL with adversarial debiasing, which
promotes equality-of-odds fairness across FE, RE, and ME predictions for
fairness-sensitive variables. We evaluated our approach using three datasets:
two from census/finance focusing on income classification and one from
healthcare predicting hospitalization duration, a regression task. Our
framework notably enhances fairness across all sensitive variables-increasing
fairness up to 82% for age, 43% for race, 86% for sex, and 27% for
marital-status. Besides promoting fairness, our method maintains the robust
performance and clarity of MEDL. It's versatile, suitable for various dataset
types and tasks, making it broadly applicable. Our GitHub repository houses the
implementation. | [
"Adam Wang",
"Son Nguyen",
"Albert Montillo"
] | 2023-10-04 20:18:45 | http://arxiv.org/abs/2310.03146v1 | http://arxiv.org/pdf/2310.03146v1 | 2310.03146v1 |
Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models | With the blowout development of pre-trained models (PTMs), the efficient
tuning of these models for diverse downstream applications has emerged as a
pivotal research concern. Although recent investigations into prompt tuning
have provided promising avenues, three salient challenges persist: (1) memory
constraint: the continuous growth in the size of open-source PTMs renders
fine-tuning, even a fraction of their parameters, challenging for many
practitioners. (2) model privacy: existing PTMs often function as public API
services, with their parameters inaccessible for effective or tailored
fine-tuning. (3) data privacy: the fine-tuning of PTMs necessitates
high-quality datasets, which are typically localized and not shared publicly.
To optimally harness each local dataset while navigating memory constraints and
preserving privacy, we propose Federated Black-Box Prompt Tuning (Fed-BBPT).
This innovative approach eschews reliance on parameter architectures and
private dataset access, instead capitalizing on a central server that aids
local users in collaboratively training a prompt generator through regular
aggregation. Local users leverage API-driven learning via a zero-order
optimizer, obviating the need for PTM deployment. Relative to extensive
fine-tuning, Fed-BBPT proficiently sidesteps memory challenges tied to PTM
storage and fine-tuning on local machines, tapping into comprehensive,
high-quality, yet private training datasets. A thorough evaluation across 40
datasets spanning CV and NLP tasks underscores the robustness of our proposed
model. | [
"Zihao Lin",
"Yan Sun",
"Yifan Shi",
"Xueqian Wang",
"Lifu Huang",
"Li Shen",
"Dacheng Tao"
] | 2023-10-04 19:30:49 | http://arxiv.org/abs/2310.03123v1 | http://arxiv.org/pdf/2310.03123v1 | 2310.03123v1 |
OpenMM 8: Molecular Dynamics Simulation with Machine Learning Potentials | Machine learning plays an important and growing role in molecular simulation.
The newest version of the OpenMM molecular dynamics toolkit introduces new
features to support the use of machine learning potentials. Arbitrary PyTorch
models can be added to a simulation and used to compute forces and energy. A
higher-level interface allows users to easily model their molecules of interest
with general purpose, pretrained potential functions. A collection of optimized
CUDA kernels and custom PyTorch operations greatly improves the speed of
simulations. We demonstrate these features on simulations of cyclin-dependent
kinase 8 (CDK8) and the green fluorescent protein (GFP) chromophore in water.
Taken together, these features make it practical to use machine learning to
improve the accuracy of simulations at only a modest increase in cost. | [
"Peter Eastman",
"Raimondas Galvelis",
"Raúl P. Peláez",
"Charlles R. A. Abreu",
"Stephen E. Farr",
"Emilio Gallicchio",
"Anton Gorenko",
"Michael M. Henry",
"Frank Hu",
"Jing Huang",
"Andreas Krämer",
"Julien Michel",
"Joshua A. Mitchell",
"Vijay S. Pande",
"João PGLM Rodrigues",
"Jaime Rodriguez-Guerra",
"Andrew C. Simmonett",
"Jason Swails",
"Ivy Zhang",
"John D. Chodera",
"Gianni De Fabritiis",
"Thomas E. Markland"
] | 2023-10-04 19:23:57 | http://arxiv.org/abs/2310.03121v1 | http://arxiv.org/pdf/2310.03121v1 | 2310.03121v1 |
Crossed-IoT device portability of Electromagnetic Side Channel Analysis: Challenges and Dataset | IoT (Internet of Things) refers to the network of interconnected physical
devices, vehicles, home appliances, and other items embedded with sensors,
software, and connectivity, enabling them to collect and exchange data. IoT
Forensics is collecting and analyzing digital evidence from IoT devices to
investigate cybercrimes, security breaches, and other malicious activities that
may have taken place on these connected devices. In particular, EM-SCA has
become an essential tool for IoT forensics due to its ability to reveal
confidential information about the internal workings of IoT devices without
interfering with these devices or wiretapping their networks. However, the accuracy
and reliability of EM-SCA results can be limited by device variability,
environmental factors, and data collection and processing methods. Moreover,
there is very little research on these limitations, which significantly affect
the accuracy of EM-SCA approaches for crossed-IoT device portability, and
little research on possible solutions to address this challenge.
Therefore, this empirical study examines the impact of device variability on
the accuracy and reliability of EM-SCA approaches, in particular
machine-learning (ML) based approaches for EM-SCA. We first present the
background, basic concepts and techniques used to evaluate the limitations of
current EM-SCA approaches and datasets. Our study then addresses one of the
most important limitations, which is caused by the multi-core architecture of
the processors (SoC). We present an approach to collect the EM-SCA datasets and
demonstrate the feasibility of using transfer learning to obtain more
meaningful and reliable results from EM-SCA in IoT forensics of crossed-IoT
devices. Our study also contributes a new dataset for using deep learning
models to analyze electromagnetic side-channel data with regard to
cross-device portability. | [
"Tharindu Lakshan Yasarathna",
"Lojenaa Navanesan",
"Simon Barque",
"Assanka Sayakkara",
"Nhien-An Le-Khac"
] | 2023-10-04 19:13:39 | http://arxiv.org/abs/2310.03119v1 | http://arxiv.org/pdf/2310.03119v1 | 2310.03119v1 |
Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation | Surrogate models play a crucial role in retrospectively interpreting complex
and powerful black box machine learning models via model distillation. This
paper focuses on using model-based trees as surrogate models which partition
the feature space into interpretable regions via decision rules. Within each
region, interpretable models based on additive main effects are used to
approximate the behavior of the black box model, striking an optimal
balance between interpretability and performance. Four model-based tree
algorithms, namely SLIM, GUIDE, MOB, and CTree, are compared regarding their
ability to generate such surrogate models. We investigate fidelity,
interpretability, stability, and the algorithms' capability to capture
interaction effects through appropriate splits. Based on our comprehensive
analyses, we finally provide an overview of user-specific recommendations. | [
"Julia Herbinger",
"Susanne Dandl",
"Fiona K. Ewald",
"Sofia Loibl",
"Giuseppe Casalicchio"
] | 2023-10-04 19:06:52 | http://arxiv.org/abs/2310.03112v1 | http://arxiv.org/pdf/2310.03112v1 | 2310.03112v1 |
Multi-modal Gaussian Process Variational Autoencoders for Neural and Behavioral Data | Characterizing the relationship between neural population activity and
behavioral data is a central goal of neuroscience. While latent variable models
(LVMs) are successful in describing high-dimensional time-series data, they are
typically only designed for a single type of data, making it difficult to
identify structure shared across different experimental data modalities. Here,
we address this shortcoming by proposing an unsupervised LVM which extracts
temporally evolving shared and independent latents for distinct, simultaneously
recorded experimental modalities. We do this by combining Gaussian Process
Factor Analysis (GPFA), an interpretable LVM for neural spiking data with
temporally smooth latent space, with Gaussian Process Variational Autoencoders
(GP-VAEs), which similarly use a GP prior to characterize correlations in a
latent space, but admit rich expressivity due to a deep neural network mapping
to observations. We achieve interpretability in our model by partitioning
latent variability into components that are either shared between or
independent to each modality. We parameterize the latents of our model in the
Fourier domain, and show improved latent identification using this approach
over standard GP-VAE methods. We validate our model on simulated multi-modal
data consisting of Poisson spike counts and MNIST images that scale and rotate
smoothly over time. We show that the multi-modal GP-VAE (MM-GPVAE) is able to
not only identify the shared and independent latent structure across modalities
accurately, but provides good reconstructions of both images and neural rates
on held-out trials. Finally, we demonstrate our framework on two real world
multi-modal experimental settings: Drosophila whole-brain calcium imaging
alongside tracked limb positions, and Manduca sexta spike train measurements
from ten wing muscles as the animal tracks a visual stimulus. | [
"Rabia Gondur",
"Usama Bin Sikandar",
"Evan Schaffer",
"Mikio Christian Aoi",
"Stephen L Keeley"
] | 2023-10-04 19:04:55 | http://arxiv.org/abs/2310.03111v1 | http://arxiv.org/pdf/2310.03111v1 | 2310.03111v1 |
Creating an Atlas of Normal Tissue for Pruning WSI Patching Through Anomaly Detection | Patching gigapixel whole slide images (WSIs) is an important task in
computational pathology. Some methods have been proposed to select a subset of
patches as WSI representation for downstream tasks. While most of the
computational pathology tasks are designed to classify or detect the presence
of pathological lesions in each WSI, the confounding role and redundant nature
of normal histology in tissue samples are generally overlooked in WSI
representations. In this paper, we propose and validate the concept of an
"atlas of normal tissue" solely using samples of WSIs obtained from normal
tissue biopsies. Such atlases can be employed to eliminate normal fragments of
tissue samples and hence increase the representativeness of the collection of patches.
We tested our proposed method by establishing a normal atlas using 107 normal
skin WSIs and demonstrated how established indexes and search engines like
Yottixel can be improved. We used 553 WSIs of cutaneous squamous cell carcinoma
(cSCC) to show the advantage. We also validated our method applied to an
external dataset of 451 breast WSIs. The number of selected WSI patches was
reduced by 30% to 50% after utilizing the proposed normal atlas while
maintaining the same indexing and search performance in leave-one-patient-out
validation for both datasets. We show that the proposed normal atlas shows
promise for unsupervised selection of the most representative patches of the
abnormal/malignant WSI lesions. | [
"Peyman Nejat",
"Areej Alsaafin",
"Ghazal Alabtah",
"Nneka Comfere",
"Aaron Mangold",
"Dennis Murphree",
"Patricija Zot",
"Saba Yasir",
"Joaquin J. Garcia",
"H. R. Tizhoosh"
] | 2023-10-04 18:51:25 | http://arxiv.org/abs/2310.03106v1 | http://arxiv.org/pdf/2310.03106v1 | 2310.03106v1 |
DP-SGD for non-decomposable objective functions | Unsupervised pre-training is a common step in developing computer vision
models and large language models. In this setting, the absence of labels
requires the use of similarity-based loss functions, such as contrastive loss,
that favor minimizing the distance between similar inputs and maximizing the
distance between distinct inputs. As privacy concerns mount, training these
models using differential privacy has become more important. However, due to
how inputs are generated for these losses, one of their undesirable properties
is that their $L_2$ sensitivity can grow with increasing batch size. This
property is particularly disadvantageous for differentially private training
methods, such as DP-SGD. To overcome this issue, we develop a new DP-SGD
variant for similarity based loss functions -- in particular the commonly used
contrastive loss -- that manipulates gradients of the objective function in a
novel way to obtain a sensitivity of the summed gradient that is $O(1)$ for
batch size $n$. We test our DP-SGD variant on some preliminary CIFAR-10
pre-training and CIFAR-100 finetuning tasks and show that, in both tasks, our
method's performance comes close to that of a non-private model and generally
outperforms DP-SGD applied directly to the contrastive loss. | [
"William Kong",
"Andrés Muñoz Medina",
"Mónica Ribero"
] | 2023-10-04 18:48:16 | http://arxiv.org/abs/2310.03104v1 | http://arxiv.org/pdf/2310.03104v1 | 2310.03104v1 |
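For context on the sensitivity issue the abstract above addresses, here is a hedged sketch of vanilla DP-SGD: per-example gradients are clipped, summed, and perturbed with Gaussian noise scaled to the clipping norm. With non-decomposable losses such as contrastive loss, per-"example" gradients couple across the batch, which is what drives the sensitivity growth the paper fixes; the paper's O(1)-sensitivity variant itself is not reproduced here.

```python
import torch

def dp_sgd_step(param, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0):
    clipped = []
    for g in per_example_grads:                          # one gradient per example
        factor = (clip / (g.norm() + 1e-12)).clamp(max=1.0)
        clipped.append(g * factor)                       # enforce L2 sensitivity = clip
    total = torch.stack(clipped).sum(dim=0)
    noisy = total + torch.randn_like(total) * noise_mult * clip
    return param - lr * noisy / len(per_example_grads)

# Toy usage
p = torch.randn(10)
grads = [torch.randn(10) for _ in range(8)]
p = dp_sgd_step(p, grads)
```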
Dual Prompt Tuning for Domain-Aware Federated Learning | Federated learning is a distributed machine learning paradigm that allows
multiple clients to collaboratively train a shared model with their local data.
Nonetheless, conventional federated learning algorithms often struggle to
generalize well due to the ubiquitous domain shift across clients. In this
work, we consider a challenging yet realistic federated learning scenario where
the training data of each client originates from different domains. We address
the challenges of domain shift by leveraging the technique of prompt learning,
and propose a novel method called Federated Dual Prompt Tuning (Fed-DPT).
Specifically, Fed-DPT employs a pre-trained vision-language model and then
applies both visual and textual prompt tuning to facilitate domain adaptation
over decentralized data. Extensive experiments of Fed-DPT demonstrate its
significant effectiveness in domain-aware federated learning. With a
pre-trained CLIP model (ViT-Base as image encoder), the proposed Fed-DPT
attains 68.4% average accuracy over six domains in the DomainNet dataset, which
improves the original CLIP by a large margin of 14.8%. | [
"Guoyizhe Wei",
"Feng Wang",
"Anshul Shah",
"Rama Chellappa"
] | 2023-10-04 18:47:34 | http://arxiv.org/abs/2310.03103v1 | http://arxiv.org/pdf/2310.03103v1 | 2310.03103v1 |
Large Language Model Cascades with Mixture of Thoughts Representations for Cost-efficient Reasoning | Large language models (LLMs) such as GPT-4 have exhibited remarkable
performance in a variety of tasks, but this strong performance often comes with
the high expense of using paid API services. In this paper, we are motivated to
study building an LLM cascade to save the cost of using LLMs, particularly for
performing reasoning (e.g., mathematical, causal) tasks. Our cascade pipeline
follows the intuition that simpler questions can be addressed by a weaker but
more affordable LLM, whereas only the challenging questions necessitate the
stronger and more expensive LLM. To realize this decision-making, we consider
the "answer consistency" of the weaker LLM as a signal of the question
difficulty and propose several methods for the answer sampling and consistency
checking, including one leveraging a mixture of two thought representations
(i.e., Chain-of-Thought and Program-of-Thought). Through experiments on six
reasoning benchmark datasets, with GPT-3.5-turbo and GPT-4 being the weaker and
stronger LLMs, respectively, we demonstrate that our proposed LLM cascades can
achieve performance comparable to using solely the stronger LLM but require
only 40% of its cost. | [
"Murong Yue",
"Jie Zhao",
"Min Zhang",
"Liang Du",
"Ziyu Yao"
] | 2023-10-04 18:21:17 | http://arxiv.org/abs/2310.03094v2 | http://arxiv.org/pdf/2310.03094v2 | 2310.03094v2 |
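A minimal sketch of the routing rule described in the cascade abstract above: sample the weaker model several times, treat answer agreement as a difficulty signal, and escalate only inconsistent questions to the stronger model. `weak_llm`, `strong_llm`, and the consistency threshold are assumed callables/values; the paper's mixture of Chain-of-Thought and Program-of-Thought sampling is not shown.

```python
from collections import Counter

def cascade_answer(question, weak_llm, strong_llm, n_samples=5, threshold=0.8):
    answers = [weak_llm(question) for _ in range(n_samples)]
    top, count = Counter(answers).most_common(1)[0]
    if count / n_samples >= threshold:  # consistent -> trust the cheap model
        return top
    return strong_llm(question)         # inconsistent -> pay for the strong model
```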
Physics-Informed Neural Networks for Accelerating Power System State Estimation | State estimation is the cornerstone of the power system control center since
it provides the operating condition of the system in consecutive time
intervals. This work investigates the application of physics-informed neural
networks (PINNs) for accelerating power systems state estimation in monitoring
the operation of power systems. Traditional state estimation techniques often
rely on iterative algorithms that can be computationally intensive,
particularly for large-scale power systems. In this paper, a novel approach
that leverages the inherent physical knowledge of power systems through the
integration of PINNs is proposed. By incorporating physical laws as prior
knowledge, the proposed method significantly reduces the computational
complexity associated with state estimation while maintaining high accuracy.
The proposed method achieves up to 11% increase in accuracy, 75% reduction in
standard deviation of results, and 30% faster convergence, as demonstrated by
comprehensive experiments on the IEEE 14-bus system. | [
"Solon Falas",
"Markos Asprou",
"Charalambos Konstantinou",
"Maria K. Michael"
] | 2023-10-04 18:14:48 | http://arxiv.org/abs/2310.03088v1 | http://arxiv.org/pdf/2310.03088v1 | 2310.03088v1 |
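A generic sketch of the physics-informed loss that the abstract above applies to state estimation: a data-fit term on measurements plus a penalty on the residual of the governing equations at collocation points. `physics_residual` is an assumed callable (for this paper it would encode the power-flow equations, which are not reproduced here), and the weighting `lam` is an assumed hyperparameter.

```python
import torch

def pinn_loss(net, x_data, y_data, x_colloc, physics_residual, lam=1.0):
    data_loss = torch.mean((net(x_data) - y_data) ** 2)       # fit measurements
    physics_loss = torch.mean(physics_residual(net, x_colloc) ** 2)
    return data_loss + lam * physics_loss                     # weighted combination
```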
Discovering Knowledge-Critical Subnetworks in Pretrained Language Models | Pretrained language models (LMs) encode implicit representations of knowledge
in their parameters. However, localizing these representations and
disentangling them from each other remains an open problem. In this work, we
investigate whether pretrained language models contain various
knowledge-critical subnetworks: particular sparse computational subgraphs
responsible for encoding specific knowledge the model has memorized. We propose
a multi-objective differentiable weight masking scheme to discover these
subnetworks and show that we can use them to precisely remove specific
knowledge from models while minimizing adverse effects on the behavior of the
original language model. We demonstrate our method on multiple GPT2 variants,
uncovering highly sparse subnetworks (98%+) that are solely responsible for
specific collections of relational knowledge. When these subnetworks are
removed, the remaining network maintains most of its initial capacity (modeling
language and other memorized relational knowledge) but struggles to express the
removed knowledge, and suffers performance drops on examples needing this
removed knowledge on downstream tasks after finetuning. | [
"Deniz Bayazit",
"Negar Foroutan",
"Zeming Chen",
"Gail Weiss",
"Antoine Bosselut"
] | 2023-10-04 18:02:01 | http://arxiv.org/abs/2310.03084v1 | http://arxiv.org/pdf/2310.03084v1 | 2310.03084v1 |
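A hedged sketch of the differentiable weight masking named in the abstract above: each weight receives a learnable logit, a sigmoid turns the logits into a soft mask over the frozen weights, and the masked model is trained under sparsity and knowledge objectives. The class structure is an assumption for illustration, and the paper's multi-objective losses are omitted.

```python
import torch

class MaskedLinear(torch.nn.Module):
    def __init__(self, linear):
        super().__init__()
        self.weight = linear.weight.detach()              # frozen LM weights
        self.mask_logits = torch.nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)            # soft, differentiable mask
        return torch.nn.functional.linear(x, self.weight * mask)

layer = MaskedLinear(torch.nn.Linear(16, 8))
out = layer(torch.randn(2, 16))
sparsity_penalty = torch.sigmoid(layer.mask_logits).mean()  # push mask toward sparse
```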
LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving | Existing learning-based autonomous driving (AD) systems face challenges in
comprehending high-level information, generalizing to rare events, and
providing interpretability. To address these problems, this work employs Large
Language Models (LLMs) as a decision-making component for complex AD scenarios
that require human commonsense understanding. We devise cognitive pathways to
enable comprehensive reasoning with LLMs, and develop algorithms for
translating LLM decisions into actionable driving commands. Through this
approach, LLM decisions are seamlessly integrated with low-level controllers by
guided parameter matrix adaptation. Extensive experiments demonstrate that our
proposed method not only consistently surpasses baseline approaches in
single-vehicle tasks, but also helps handle complex driving behaviors, even
multi-vehicle coordination, thanks to the commonsense reasoning capabilities of
LLMs. This paper presents an initial step toward leveraging LLMs as effective
decision-makers for intricate AD scenarios in terms of safety, efficiency,
generalizability, and interpretability. We aspire for it to serve as
inspiration for future research in this field. Project page:
https://sites.google.com/view/llm-mpc | [
"Hao Sha",
"Yao Mu",
"Yuxuan Jiang",
"Li Chen",
"Chenfeng Xu",
"Ping Luo",
"Shengbo Eben Li",
"Masayoshi Tomizuka",
"Wei Zhan",
"Mingyu Ding"
] | 2023-10-04 17:59:49 | http://arxiv.org/abs/2310.03026v2 | http://arxiv.org/pdf/2310.03026v2 | 2310.03026v2 |
Retrieval meets Long Context Large Language Models | Extending the context window of large language models (LLMs) is getting
popular recently, while the solution of augmenting LLMs with retrieval has
existed for years. The natural questions are: i) Retrieval-augmentation versus
long context window, which one is better for downstream tasks? ii) Can both
methods be combined to get the best of both worlds? In this work, we answer
these questions by studying both solutions using two state-of-the-art
pretrained LLMs, i.e., a proprietary 43B GPT and LLaMA2-70B. Perhaps
surprisingly, we find that an LLM with a 4K context window using simple
retrieval-augmentation at generation can achieve performance comparable to a
finetuned LLM with a 16K context window via positional interpolation on long
context tasks, while taking much less computation. More importantly, we
demonstrate that retrieval can significantly improve the performance of LLMs
regardless of their extended context window sizes. Our best model,
retrieval-augmented LLaMA2-70B with 32K context window, outperforms
GPT-3.5-turbo-16k and Davinci003 in terms of average score on seven long
context tasks including question answering and query-based summarization. It
also outperforms its non-retrieval LLaMA2-70B-32k baseline by a margin, while
being much faster at generation. Our study provides general insights on the
choice of retrieval-augmentation versus long context extension of LLM for
practitioners. | [
"Peng Xu",
"Wei Ping",
"Xianchao Wu",
"Lawrence McAfee",
"Chen Zhu",
"Zihan Liu",
"Sandeep Subramanian",
"Evelina Bakhturina",
"Mohammad Shoeybi",
"Bryan Catanzaro"
] | 2023-10-04 17:59:41 | http://arxiv.org/abs/2310.03025v1 | http://arxiv.org/pdf/2310.03025v1 | 2310.03025v1 |
Human-oriented Representation Learning for Robotic Manipulation | Humans inherently possess generalizable visual representations that empower
them to efficiently explore and interact with the environments in manipulation
tasks. We advocate that such a representation automatically arises from
simultaneously learning about multiple simple perceptual skills that are
critical for everyday scenarios (e.g., hand detection, state estimation)
and is better suited for learning robot manipulation policies compared to
current state-of-the-art visual representations purely based on self-supervised
objectives. We formalize this idea through the lens of human-oriented
multi-task fine-tuning on top of pre-trained visual encoders, where each task
is a perceptual skill tied to human-environment interactions. We introduce Task
Fusion Decoder as a plug-and-play embedding translator that utilizes the
underlying relationships among these perceptual skills to guide the
representation learning towards encoding meaningful structure for what's
important for all perceptual skills, ultimately empowering learning of
downstream robotic manipulation tasks. Extensive experiments across a range of
robotic tasks and embodiments, in both simulations and real-world environments,
show that our Task Fusion Decoder consistently improves the representation of
three state-of-the-art visual encoders including R3M, MVP, and EgoVLP, for
downstream manipulation policy-learning. Project page:
https://sites.google.com/view/human-oriented-robot-learning | [
"Mingxiao Huo",
"Mingyu Ding",
"Chenfeng Xu",
"Thomas Tian",
"Xinghao Zhu",
"Yao Mu",
"Lingfeng Sun",
"Masayoshi Tomizuka",
"Wei Zhan"
] | 2023-10-04 17:59:38 | http://arxiv.org/abs/2310.03023v1 | http://arxiv.org/pdf/2310.03023v1 | 2310.03023v1 |
AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models | We present AstroCLIP, a strategy to facilitate the construction of
astronomical foundation models that bridge the gap between diverse
observational modalities. We demonstrate that a cross-modal contrastive
learning approach between images and optical spectra of galaxies yields highly
informative embeddings of both modalities. In particular, we apply our method
on multi-band images and optical spectra from the Dark Energy Spectroscopic
Instrument (DESI), and show that: (1) these embeddings are well-aligned between
modalities and can be used for accurate cross-modal searches, and (2) these
embeddings encode valuable physical information about the galaxies -- in
particular redshift and stellar mass -- that can be used to achieve competitive
zero- and few- shot predictions without further finetuning. Additionally, in
the process of developing our approach, we also construct a novel,
transformer-based model and pretraining approach for processing galaxy spectra. | [
"Francois Lanusse",
"Liam Parker",
"Siavash Golkar",
"Miles Cranmer",
"Alberto Bietti",
"Michael Eickenberg",
"Geraud Krawezik",
"Michael McCabe",
"Ruben Ohana",
"Mariel Pettee",
"Bruno Regaldo-Saint Blancard",
"Tiberiu Tesileanu",
"Kyunghyun Cho",
"Shirley Ho"
] | 2023-10-04 17:59:38 | http://arxiv.org/abs/2310.03024v1 | http://arxiv.org/pdf/2310.03024v1 | 2310.03024v1 |
Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making | The recent success of Transformer in natural language processing has sparked
its use in various domains. In offline reinforcement learning (RL), Decision
Transformer (DT) is emerging as a promising model based on Transformer.
However, we discovered that the attention module of DT is not appropriate to
capture the inherent local dependence pattern in trajectories of RL modeled as
a Markov decision process. To overcome the limitations of DT, we propose a
novel action sequence predictor, named Decision ConvFormer (DC), based on the
architecture of MetaFormer, which is a general structure to process multiple
entities in parallel and understand the interrelationship among the multiple
entities. DC employs local convolution filtering as the token mixer and can
effectively capture the inherent local associations of the RL dataset. In
extensive experiments, DC achieved state-of-the-art performance across various
standard RL benchmarks while requiring fewer resources. Furthermore, we show
that DC better understands the underlying meaning in data and exhibits enhanced
generalization capability. | [
"Jeonghye Kim",
"Suyoung Lee",
"Woojun Kim",
"Youngchul Sung"
] | 2023-10-04 17:59:32 | http://arxiv.org/abs/2310.03022v2 | http://arxiv.org/pdf/2310.03022v2 | 2310.03022v2 |
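A minimal sketch of the token mixer described in the abstract above: the attention module of a MetaFormer block is replaced by a causal depthwise 1D convolution, so each trajectory token aggregates only a short local window of preceding tokens. The window size and the depthwise choice are assumptions for illustration.

```python
import torch

class LocalConvMixer(torch.nn.Module):
    def __init__(self, dim, window=6):
        super().__init__()
        self.window = window
        self.conv = torch.nn.Conv1d(dim, dim, kernel_size=window, groups=dim)

    def forward(self, x):                        # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)                    # -> (batch, dim, seq_len)
        h = torch.nn.functional.pad(h, (self.window - 1, 0))  # causal left-pad
        return self.conv(h).transpose(1, 2)      # back to (batch, seq_len, dim)

mixer = LocalConvMixer(dim=32)
y = mixer(torch.randn(2, 10, 32))                # shape-preserving local mixing
```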
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions | In order to understand the in-context learning phenomenon, recent works have
adopted a stylized experimental framework and demonstrated that Transformers
can learn gradient-based learning algorithms for various classes of real-valued
functions. However, the limitations of Transformers in implementing learning
algorithms, and their ability to learn other forms of algorithms are not well
understood. Additionally, the degree to which these capabilities are confined
to attention-based models is unclear. Furthermore, it remains to be seen
whether the insights derived from these stylized settings can be extrapolated
to pretrained Large Language Models (LLMs). In this work, we take a step
towards answering these questions by demonstrating the following: (a) On a
test-bed with a variety of Boolean function classes, we find that Transformers
can nearly match the optimal learning algorithm for 'simpler' tasks, while
their performance deteriorates on more 'complex' tasks. Additionally, we find
that certain attention-free models perform (almost) identically to Transformers
on a range of tasks. (b) When provided a teaching sequence, i.e. a set of
examples that uniquely identifies a function in a class, we show that
Transformers learn more sample-efficiently. Interestingly, our results show
that Transformers can learn to implement two distinct algorithms to solve a
single task, and can adaptively select the more sample-efficient algorithm
depending on the sequence of in-context examples. (c) Lastly, we show that
extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines
on prediction tasks that are guaranteed to not be in their training set. | [
"Satwik Bhattamishra",
"Arkil Patel",
"Phil Blunsom",
"Varun Kanade"
] | 2023-10-04 17:57:33 | http://arxiv.org/abs/2310.03016v1 | http://arxiv.org/pdf/2310.03016v1 | 2310.03016v1 |
SemiReward: A General Reward Model for Semi-supervised Learning | Semi-supervised learning (SSL) has witnessed great progress with various
improvements in the self-training framework with pseudo labeling. The main
challenge is how to distinguish high-quality pseudo labels against the
confirmation bias. However, existing pseudo-label selection strategies are
limited to pre-defined schemes or complex hand-crafted policies specially
designed for classification, failing to achieve high-quality labels, fast
convergence, and task versatility simultaneously. To these ends, we propose a
Semi-supervised Reward framework (SemiReward) that predicts reward scores to
evaluate pseudo labels and filter out low-quality ones, and is pluggable into
mainstream SSL methods across a wide range of task types and scenarios. To mitigate
confirmation bias, SemiReward is trained online in two stages with a generator
model and subsampling strategy. With classification and regression tasks on 13
standard SSL benchmarks of three modalities, extensive experiments verify that
SemiReward achieves significant performance gains and faster convergence speeds
upon Pseudo Label, FlexMatch, and Free/SoftMatch. | [
"Siyuan Li",
"Weiyang Jin",
"Zedong Wang",
"Fang Wu",
"Zicheng Liu",
"Cheng Tan",
"Stan Z. Li"
] | 2023-10-04 17:56:41 | http://arxiv.org/abs/2310.03013v1 | http://arxiv.org/pdf/2310.03013v1 | 2310.03013v1 |
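A minimal sketch of the filtering step implied by the abstract above: a learned reward model scores each (input, pseudo label) pair and only pairs above a threshold enter self-training. `reward_model` and the threshold `tau` are assumptions; the two-stage online training of the scorer with a generator model and subsampling is not shown.

```python
def filter_pseudo_labels(reward_model, inputs, pseudo_labels, tau=0.7):
    # Keep only pairs the reward model scores as high quality.
    return [(x, y) for x, y in zip(inputs, pseudo_labels)
            if reward_model(x, y) >= tau]
```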
High-dimensional SGD aligns with emerging outlier eigenspaces | We rigorously study the joint evolution of training dynamics via stochastic
gradient descent (SGD) and the spectra of empirical Hessian and gradient
matrices. We prove that in two canonical classification tasks for multi-class
high-dimensional mixtures and either 1 or 2-layer neural networks, the SGD
trajectory rapidly aligns with emerging low-rank outlier eigenspaces of the
Hessian and gradient matrices. Moreover, in multi-layer settings this alignment
occurs per layer, with the final layer's outlier eigenspace evolving over the
course of training, and exhibiting rank deficiency when the SGD converges to
sub-optimal classifiers. This establishes some of the rich predictions that
have arisen from extensive numerical studies in the last decade about the
spectra of Hessian and information matrices over the course of training in
overparametrized networks. | [
"Gerard Ben Arous",
"Reza Gheissari",
"Jiaoyang Huang",
"Aukosh Jagannath"
] | 2023-10-04 17:53:53 | http://arxiv.org/abs/2310.03010v1 | http://arxiv.org/pdf/2310.03010v1 | 2310.03010v1 |
Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization | Vector Quantization (VQ) is a well-known technique in deep learning for
extracting informative discrete latent representations. VQ-embedded models have
shown impressive results in a range of applications including image and speech
generation. VQ operates as a parametric K-means algorithm that quantizes inputs
using a single codebook vector in the forward pass. While powerful, this
technique faces practical challenges including codebook collapse,
non-differentiability and lossy compression. To mitigate the aforementioned
issues, we propose Soft Convex Quantization (SCQ) as a direct substitute for
VQ. SCQ works like a differentiable convex optimization (DCO) layer: in the
forward pass, we solve for the optimal convex combination of codebook vectors
that quantize the inputs. In the backward pass, we leverage differentiability
through the optimality conditions of the forward solution. We then introduce a
scalable relaxation of the SCQ optimization and demonstrate its efficacy on the
CIFAR-10, GTSRB and LSUN datasets. We train powerful SCQ autoencoder models
that significantly outperform matched VQ-based architectures, observing an
order of magnitude better image reconstruction and codebook usage with
comparable quantization runtime. | [
"Tanmay Gautam",
"Reid Pryzant",
"Ziyi Yang",
"Chenguang Zhu",
"Somayeh Sojoudi"
] | 2023-10-04 17:45:14 | http://arxiv.org/abs/2310.03004v1 | http://arxiv.org/pdf/2310.03004v1 | 2310.03004v1 |
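A hedged sketch of the forward-pass problem stated in the SCQ abstract above: find the convex combination of codebook vectors closest to an input, i.e., a simplex-constrained least squares. Here it is solved by projected gradient descent purely for illustration; the paper instead solves it as a differentiable convex-optimization layer and backpropagates through the optimality conditions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def soft_convex_quantize(x, codebook, lr=0.05, iters=200):
    """Solve min_w ||codebook^T w - x||^2 over the simplex by projected GD."""
    w = np.full(codebook.shape[0], 1.0 / codebook.shape[0])
    for _ in range(iters):
        residual = codebook.T @ w - x          # (d,)
        grad = codebook @ residual             # (K,)
        w = project_simplex(w - lr * grad)
    return w, codebook.T @ w                   # weights and quantized vector

codebook = np.random.default_rng(1).standard_normal((8, 4))
x = codebook[:3].mean(axis=0)                  # a point inside the convex hull
w, x_quantized = soft_convex_quantize(x, codebook)
```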
Learning characteristic parameters and dynamics of centrifugal pumps under multi-phase flow using physics-informed neural networks | Electrical submersible pumps (ESP) are the second most used artificial
lifting equipment in the oil and gas industry due to their high flow rates and
boost pressures. They often have to handle multiphase flows, which usually
contain a mixture of hydrocarbons, water, and/or sediments. Given these
circumstances, emulsions are commonly formed. An emulsion is a liquid-liquid
flow composed of two immiscible fluids whose effective viscosity and density
differ from those of each single phase. In this context, accurate modeling of ESP
systems is crucial for optimizing oil production and implementing control
strategies. However, real-time and direct measurement of fluid and system
characteristics is often impractical due to time and cost constraints.
Hence, indirect methods are generally considered to estimate the system
parameters. In this paper, we formulate a machine learning model based on
Physics-Informed Neural Networks (PINNs) to estimate crucial system parameters.
In order to study the efficacy of the proposed PINN model, we conduct
computational studies using not only simulated but also experimental data for
different water-oil ratios. We evaluate the state variable's dynamics and
unknown parameters for various combinations when only intake and discharge
pressure measurements are available. We also study structural and practical
identifiability analyses based on commonly available pressure measurements. The
PINN model could reduce the requirement of expensive field laboratory tests
used to estimate fluid properties. | [
"Felipe de Castro Teixeira Carvalho",
"Kamaljyoti Nath",
"Alberto Luiz Serpa",
"George Em Karniadakis"
] | 2023-10-04 17:40:46 | http://arxiv.org/abs/2310.03001v1 | http://arxiv.org/pdf/2310.03001v1 | 2310.03001v1 |
ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models | Large Vision-Language Models (LVLMs) can understand the world comprehensively
by integrating rich information from different modalities, achieving remarkable
performance improvements on various multimodal downstream tasks. However,
deploying LVLMs is often problematic due to their massive computational/energy
costs and carbon consumption. Such issues make it infeasible to adopt
conventional iterative global pruning, which is costly due to computing the
Hessian matrix of the entire large model for sparsification. Alternatively,
several studies have recently proposed layer-wise pruning approaches to avoid
the expensive computation of global pruning and efficiently compress model
weights according to their importance within a layer. However, these methods
often suffer from suboptimal model compression due to their lack of a global
perspective. To address this limitation in recent efficient pruning methods for
large models, we propose Efficient Coarse-to-Fine Layer-Wise Pruning (ECoFLaP),
a two-stage coarse-to-fine weight pruning approach for LVLMs. We first
determine the sparsity ratios of different layers or blocks by leveraging the
global importance score, which is efficiently computed based on the
zeroth-order approximation of the global model gradients. Then, the multimodal
model performs local layer-wise unstructured weight pruning based on
globally-informed sparsity ratios. We validate our proposed method across
various multimodal and unimodal models and datasets, demonstrating significant
performance improvements over prevalent pruning techniques in the high-sparsity
regime. | [
"Yi-Lin Sung",
"Jaehong Yoon",
"Mohit Bansal"
] | 2023-10-04 17:34:00 | http://arxiv.org/abs/2310.02998v1 | http://arxiv.org/pdf/2310.02998v1 | 2310.02998v1 |
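A minimal sketch of the zeroth-order ingredient mentioned in the abstract above: estimate the global gradient with two-point random finite differences (no backward pass, so no Hessian or full backprop over the large model), then score weights by |weight x estimated gradient|. The estimator form, the `loss_fn` interface, and the importance formula are illustrative assumptions.

```python
import torch

def zeroth_order_importance(loss_fn, params, eps=1e-3, n_samples=8):
    grad_est = torch.zeros_like(params)
    for _ in range(n_samples):
        u = torch.randn_like(params)                 # random probe direction
        delta = loss_fn(params + eps * u) - loss_fn(params - eps * u)
        grad_est += (delta / (2 * eps)) * u          # two-point estimate
    grad_est /= n_samples
    return (params * grad_est).abs()                 # per-weight importance score

# Toy usage on a quadratic loss
scores = zeroth_order_importance(lambda p: (p ** 2).sum(), torch.randn(100))
```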
IBCL: Zero-shot Model Generation for Task Trade-offs in Continual Learning | Like generic multi-task learning, continual learning has the nature of
multi-objective optimization, and therefore faces a trade-off between the
performance of different tasks. That is, to optimize for the current task
distribution, it may need to compromise performance on some previous tasks.
This means that there exist multiple models that are Pareto-optimal at
different times, each addressing a distinct task performance trade-off.
Researchers have discussed how to train particular models to address specific
trade-off preferences. However, existing algorithms require training overheads
proportional to the number of preferences -- a large burden when there are
multiple, possibly infinitely many, preferences. As a response, we propose
Imprecise Bayesian Continual Learning (IBCL). Upon a new task, IBCL (1) updates
a knowledge base in the form of a convex hull of model parameter distributions
and (2) obtains particular models to address task trade-off preferences with
zero-shot. That is, IBCL does not require any additional training overhead to
generate preference-addressing models from its knowledge base. We show that
models obtained by IBCL have guarantees in identifying the Pareto optimal
parameters. Moreover, experiments on standard image classification and NLP
tasks support this guarantee. Statistically, IBCL improves average per-task
accuracy by at most 23% and peak per-task accuracy by at most 15% with
respect to the baseline methods, with steadily near-zero or positive backward
transfer. Most importantly, IBCL significantly reduces the training overhead
from training 1 model per preference to at most 3 models for all preferences. | [
"Pengyuan Lu",
"Michele Caprio",
"Eric Eaton",
"Insup Lee"
] | 2023-10-04 17:30:50 | http://arxiv.org/abs/2310.02995v3 | http://arxiv.org/pdf/2310.02995v3 | 2310.02995v3 |
Multiple Physics Pretraining for Physical Surrogate Models | We introduce multiple physics pretraining (MPP), an autoregressive
task-agnostic pretraining approach for physical surrogate modeling. MPP
involves training large surrogate models to predict the dynamics of multiple
heterogeneous physical systems simultaneously by learning features that are
broadly useful across diverse physical tasks. In order to learn effectively in
this setting, we introduce a shared embedding and normalization strategy that
projects the fields of multiple systems into a single shared embedding space.
We validate the efficacy of our approach on both pretraining and downstream
tasks over a broad fluid mechanics-oriented benchmark. We show that a single
MPP-pretrained transformer is able to match or outperform task-specific
baselines on all pretraining sub-tasks without the need for finetuning. For
downstream tasks, we demonstrate that finetuning MPP-trained models results in
more accurate predictions across multiple time-steps on new physics compared to
training from scratch or finetuning pretrained video foundation models. We
open-source our code and model weights trained at multiple scales for
reproducibility and community experimentation. | [
"Michael McCabe",
"Bruno Régaldo-Saint Blancard",
"Liam Holden Parker",
"Ruben Ohana",
"Miles Cranmer",
"Alberto Bietti",
"Michael Eickenberg",
"Siavash Golkar",
"Geraud Krawezik",
"Francois Lanusse",
"Mariel Pettee",
"Tiberiu Tesileanu",
"Kyunghyun Cho",
"Shirley Ho"
] | 2023-10-04 17:29:19 | http://arxiv.org/abs/2310.02994v1 | http://arxiv.org/pdf/2310.02994v1 | 2310.02994v1 |
xVal: A Continuous Number Encoding for Large Language Models | Large Language Models have not yet been broadly adapted for the analysis of
scientific datasets due in part to the unique difficulties of tokenizing
numbers. We propose xVal, a numerical encoding scheme that represents any real
number using just a single token. xVal represents a given real number by
scaling a dedicated embedding vector by the number value. Combined with a
modified number-inference approach, this strategy renders the model end-to-end
continuous when considered as a map from the numbers of the input string to
those of the output string. This leads to an inductive bias that is generally
more suitable for applications in scientific domains. We empirically evaluate
our proposal on a number of synthetic and real-world datasets. Compared with
existing number encoding schemes, we find that xVal is more token-efficient and
demonstrates improved generalization. | [
"Siavash Golkar",
"Mariel Pettee",
"Michael Eickenberg",
"Alberto Bietti",
"Miles Cranmer",
"Geraud Krawezik",
"Francois Lanusse",
"Michael McCabe",
"Ruben Ohana",
"Liam Parker",
"Bruno Régaldo-Saint Blancard",
"Tiberiu Tesileanu",
"Kyunghyun Cho",
"Shirley Ho"
] | 2023-10-04 17:26:16 | http://arxiv.org/abs/2310.02989v1 | http://arxiv.org/pdf/2310.02989v1 | 2310.02989v1 |
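A minimal sketch of the encoding described in the xVal abstract above: every number in the input is mapped to a single dedicated token whose embedding is scaled by the number's value, making the map continuous in the numeric input. The function signature, the `[NUM]` token id, and the masking scheme are assumptions for illustration.

```python
import torch

def xval_embed(token_ids, values, embedding, num_token_id):
    emb = embedding(token_ids)                            # (seq_len, dim)
    is_num = token_ids == num_token_id
    scale = torch.where(is_num, values, torch.ones_like(values))
    return emb * scale.unsqueeze(-1)                      # scale [NUM] embeddings

vocab = torch.nn.Embedding(100, 16)
tokens = torch.tensor([5, 7, 7])                          # assume id 7 is [NUM]
values = torch.tensor([1.0, 2.5, -0.3])                   # numeric payloads
embedded = xval_embed(tokens, values, vocab, num_token_id=7)
```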
Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions | Machine learning approaches relying on such criteria as adversarial
robustness or multi-agent settings have raised the need for solving
game-theoretic equilibrium problems. Of particular relevance to these
applications are methods targeting finite-sum structure, which generically
arises in empirical variants of learning problems in these contexts. Further,
methods with computable approximation errors are highly desirable, as they
provide verifiable exit criteria. Motivated by these applications, we study
finite-sum monotone inclusion problems, which model broad classes of
equilibrium problems. Our main contributions are variants of the classical
Halpern iteration that employ variance reduction to obtain improved complexity
guarantees in which $n$ component operators in the finite sum are ``on
average'' either cocoercive or Lipschitz continuous and monotone, with
parameter $L$. The resulting oracle complexity of our methods, which provide
guarantees for the last iterate and for a (computable) operator norm residual,
is $\widetilde{\mathcal{O}}( n + \sqrt{n}L\varepsilon^{-1})$, which improves
upon existing methods by a factor up to $\sqrt{n}$. This constitutes the first
variance reduction-type result for general finite-sum monotone inclusions and
for more specific problems such as convex-concave optimization when operator
norm residual is the optimality measure. We further argue that, up to
poly-logarithmic factors, this complexity is unimprovable in the monotone
Lipschitz setting; i.e., the provided result is near-optimal. | [
"Xufeng Cai",
"Ahmet Alacaoglu",
"Jelena Diakonikolas"
] | 2023-10-04 17:24:45 | http://arxiv.org/abs/2310.02987v1 | http://arxiv.org/pdf/2310.02987v1 | 2310.02987v1 |
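For reference, the classical Halpern iteration that the variance-reduced variants above build on: iterates are anchored to the starting point with vanishing weights. The weight schedule shown is the common textbook choice, not necessarily the one used in the paper.

```latex
% Halpern iteration for a nonexpansive operator $T$ with anchor $x_0$:
% step weights $\lambda_k \in (0,1)$, a common choice being $\lambda_k = 1/(k+2)$.
\[
  x_{k+1} \;=\; \lambda_{k+1}\, x_0 \;+\; \bigl(1 - \lambda_{k+1}\bigr)\, T(x_k)
\]
% For monotone inclusions, $T$ is typically instantiated through a resolvent or
% forward-backward operator whose fixed points solve the inclusion.
```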
Exploring the Impact of Disrupted Peer-to-Peer Communications on Fully Decentralized Learning in Disaster Scenarios | Fully decentralized learning enables the distribution of learning resources
and decision-making capabilities across multiple user devices or nodes, and is
rapidly gaining popularity due to its privacy-preserving and decentralized
nature. Importantly, this crowdsourcing of the learning process allows the
system to continue functioning even if some nodes are affected or disconnected.
In a disaster scenario, communication infrastructure and centralized systems
may be disrupted or completely unavailable, hindering the possibility of
carrying out standard centralized learning tasks in these settings. Thus, fully
decentralized learning can help in this case. However, transitioning from
centralized to peer-to-peer communications introduces a dependency between the
learning process and the topology of the communication graph among nodes. In a
disaster scenario, even peer-to-peer communications are susceptible to abrupt
changes, such as devices running out of battery or getting disconnected from
others due to their position. In this study, we investigate the effects of
various disruptions to peer-to-peer communications on decentralized learning in
a disaster setting. We examine the resilience of a decentralized learning
process when a subset of devices drop from the process abruptly. To this end,
we analyze the difference between losing devices holding data, i.e., potential
knowledge, vs. devices contributing only to the graph connectivity, i.e., with
no data. Our findings on a Barabasi-Albert graph topology, where training data
is distributed across nodes in an IID fashion, indicate that the accuracy of
the learning process is more affected by a loss of connectivity than by a loss
of data. Nevertheless, the network remains relatively robust, and the learning
process can achieve a good level of accuracy. | [
"Luigi Palmieri",
"Chiara Boldrini",
"Lorenzo Valerio",
"Andrea Passarella",
"Marco Conti"
] | 2023-10-04 17:24:38 | http://arxiv.org/abs/2310.02986v1 | http://arxiv.org/pdf/2310.02986v1 | 2310.02986v1 |
Scaling Laws for Associative Memories | Learning arguably involves the discovery and memorization of abstract rules.
The aim of this paper is to study associative memory mechanisms. Our model is
based on high-dimensional matrices consisting of outer products of embeddings,
which relates to the inner layers of transformer language models. We derive
precise scaling laws with respect to sample size and parameter size, and
discuss the statistical efficiency of different estimators, including
optimization-based algorithms. We provide extensive numerical experiments to
validate and interpret theoretical results, including fine-grained
visualizations of the stored memory associations. | [
"Vivien Cabannes",
"Elvis Dohmatob",
"Alberto Bietti"
] | 2023-10-04 17:20:34 | http://arxiv.org/abs/2310.02984v1 | http://arxiv.org/pdf/2310.02984v1 | 2310.02984v1 |
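A toy NumPy illustration of the outer-product associative memory studied in the entry above; the random embeddings and argmax readout are generic stand-ins, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_in, n_out = 64, 100, 100
E_in = rng.standard_normal((n_in, d)) / np.sqrt(d)    # input embeddings
E_out = rng.standard_normal((n_out, d)) / np.sqrt(d)  # output embeddings

# Store associations x -> y as a sum of outer products of embeddings.
pairs = [(i, i % n_out) for i in range(n_in)]
W = np.zeros((d, d))
for x, y in pairs:
    W += np.outer(E_out[y], E_in[x])

# Recall: score every candidate output against W @ e_x, take the argmax.
recall = lambda x: int(np.argmax(E_out @ (W @ E_in[x])))
acc = np.mean([recall(x) == y for x, y in pairs])
print(f"recall accuracy: {acc:.2f}")
```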
Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors | Modeling long-range dependencies across sequences is a longstanding goal in
machine learning and has led to architectures, such as state space models, that
dramatically outperform Transformers on long sequences. However, these
impressive empirical gains have been by and large demonstrated on benchmarks
(e.g. Long Range Arena), where models are randomly initialized and trained to
predict a target label from an input sequence. In this work, we show that
random initialization leads to gross overestimation of the differences between
architectures and that pretraining with standard denoising objectives, using
$\textit{only the downstream task data}$, leads to dramatic gains across
multiple architectures and to very small gaps between Transformers and state
space models (SSMs). In stark contrast to prior works, we find vanilla
Transformers to match the performance of S4 on Long Range Arena when properly
pretrained, and we improve the best reported results of SSMs on the PathX-256
task by 20 absolute points. Subsequently, we analyze the utility of
previously-proposed structured parameterizations for SSMs and show they become
mostly redundant in the presence of data-driven initialization obtained through
pretraining. Our work shows that, when evaluating different architectures on
supervised tasks, incorporation of data-driven priors via pretraining is
essential for reliable performance estimation, and can be done efficiently. | [
"Ido Amos",
"Jonathan Berant",
"Ankit Gupta"
] | 2023-10-04 17:17:06 | http://arxiv.org/abs/2310.02980v1 | http://arxiv.org/pdf/2310.02980v1 | 2310.02980v1 |
T$^3$Bench: Benchmarking Current Progress in Text-to-3D Generation | Recent methods in text-to-3D leverage powerful pretrained diffusion models to
optimize NeRF. Notably, these methods are able to produce high-quality 3D
scenes without training on 3D data. Due to the open-ended nature of the task,
most studies evaluate their results with subjective case studies and user
experiments, thereby presenting a challenge in quantitatively addressing the
question: how far has current progress in Text-to-3D actually come? In this paper, we
introduce T$^3$Bench, the first comprehensive text-to-3D benchmark containing
diverse text prompts of three increasing complexity levels that are specially
designed for 3D generation. To assess both the subjective quality and the text
alignment, we propose two automatic metrics based on multi-view images produced
by the 3D contents. The quality metric combines multi-view text-image scores
and regional convolution to detect quality and view inconsistency. The
alignment metric uses multi-view captioning and Large Language Model (LLM)
evaluation to measure text-3D consistency. Both metrics closely correlate with
different dimensions of human judgments, providing a paradigm for efficiently
evaluating text-to-3D models. The benchmarking results, shown in Fig. 1, reveal
performance differences among six prevalent text-to-3D methods. Our analysis
further highlights the common struggles for current methods on generating
surroundings and multi-object scenes, as well as the bottleneck of leveraging
2D guidance for 3D generation. Our project page is available at:
https://t3bench.com. | [
"Yuze He",
"Yushi Bai",
"Matthieu Lin",
"Wang Zhao",
"Yubin Hu",
"Jenny Sheng",
"Ran Yi",
"Juanzi Li",
"Yong-Jin Liu"
] | 2023-10-04 17:12:18 | http://arxiv.org/abs/2310.02977v1 | http://arxiv.org/pdf/2310.02977v1 | 2310.02977v1 |
Towards Fully Adaptive Regret Minimization in Heavy-Tailed Bandits | Heavy-tailed distributions naturally arise in many settings, from finance to
telecommunications. While regret minimization under sub-Gaussian or bounded
support rewards has been widely studied, learning on heavy-tailed distributions
only gained popularity over the last decade. In the stochastic heavy-tailed
bandit problem, an agent learns under the assumption that the distributions
have finite moments of maximum order $1+\epsilon$ which are uniformly bounded
by a constant $u$, for some $\epsilon \in (0,1]$. To the best of our knowledge,
the literature only provides algorithms that require these two quantities as input.
In this paper, we study the stochastic adaptive heavy-tailed bandit, a
variation of the standard setting where both $\epsilon$ and $u$ are unknown to
the agent. We show that adaptivity comes at a cost, introducing two lower
bounds on the regret of any adaptive algorithm, implying a higher regret w.r.t.
the standard setting. Finally, we introduce a specific distributional
assumption and provide Adaptive Robust UCB, a regret minimization strategy
matching the known lower bound for the heavy-tailed MAB problem. | [
"Gianmarco Genalti",
"Lupo Marsigli",
"Nicola Gatti",
"Alberto Maria Metelli"
] | 2023-10-04 17:11:15 | http://arxiv.org/abs/2310.02975v1 | http://arxiv.org/pdf/2310.02975v1 | 2310.02975v1 |
Fast, Expressive SE$(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space | Based on the theory of homogeneous spaces we derive \textit{geometrically
optimal edge attributes} to be used within the flexible message passing
framework. We formalize the notion of weight sharing in convolutional networks
as the sharing of message functions over point-pairs that should be treated
equally. We define equivalence classes of point-pairs that are identical up to
a transformation in the group and derive attributes that uniquely identify
these classes. Weight sharing is then obtained by conditioning message
functions on these attributes. As an application of the theory, we develop an
efficient equivariant group convolutional network for processing 3D point
clouds. The theory of homogeneous spaces tells us how to do group convolutions
with feature maps over the homogeneous space of positions $\mathbb{R}^3$,
positions and orientations $\mathbb{R}^3 {\times} S^2$, and the group SE$(3)$
itself. Among these, $\mathbb{R}^3 {\times} S^2$ is an optimal choice due to
the ability to represent directional information, which $\mathbb{R}^3$ methods
cannot, and it significantly enhances computational efficiency compared to
indexing features on the full SE$(3)$ group. We empirically support this claim
by reaching state-of-the-art results -- in accuracy and speed -- on three
different benchmarks: interatomic potential energy prediction, trajectory
forecasting in N-body systems, and generating molecules via equivariant
diffusion models. | [
"Erik J Bekkers",
"Sharvaree Vadgama",
"Rob D Hesselink",
"Putri A van der Linden",
"David W Romero"
] | 2023-10-04 17:06:32 | http://arxiv.org/abs/2310.02970v1 | http://arxiv.org/pdf/2310.02970v1 | 2310.02970v1 |
Dual Conic Proxies for AC Optimal Power Flow | In recent years, there has been significant interest in the development of
machine learning-based optimization proxies for AC Optimal Power Flow (AC-OPF).
Although significant progress has been achieved in predicting high-quality
primal solutions, no existing learning-based approach can provide valid dual
bounds for AC-OPF. This paper addresses this gap by training optimization
proxies for a convex relaxation of AC-OPF. Namely, the paper considers a
second-order cone (SOC) relaxation of AC-OPF, and proposes a novel dual
architecture that embeds a fast, differentiable (dual) feasibility recovery,
thus providing valid dual bounds. The paper combines this new architecture with
a self-supervised learning scheme, which alleviates the need for costly
training data generation. Extensive numerical experiments on medium- and
large-scale power grids demonstrate the efficiency and scalability of the
proposed methodology. | [
"Guancheng Qiu",
"Mathieu Tanneau",
"Pascal Van Hentenryck"
] | 2023-10-04 17:06:30 | http://arxiv.org/abs/2310.02969v1 | http://arxiv.org/pdf/2310.02969v1 | 2310.02969v1 |
Co-modeling the Sequential and Graphical Routes for Peptide Representation Learning | Peptides are formed by the dehydration condensation of multiple amino acids.
The primary structure of a peptide can be represented either as an amino acid
sequence or as a molecular graph consisting of atoms and chemical bonds.
Previous studies have indicated that deep learning routes specific to
sequential and graphical peptide forms exhibit comparable performance on
downstream tasks. Despite the fact that these models learn representations of
the same modality of peptides, we find that they explain their predictions
differently. Considering sequential and graphical models as two experts making
inferences from different perspectives, we work on fusing expert knowledge to
enrich the learned representations for improving the discriminative
performance. To achieve this, we propose a peptide co-modeling method, RepCon,
which employs a contrastive learning-based framework to enhance the mutual
information of representations from decoupled sequential and graphical
end-to-end models. It considers representations from the sequential encoder and
the graphical encoder for the same peptide sample as a positive pair and learns
to enhance the consistency of representations between positive sample pairs and
to repel representations between negative pairs. Empirical studies of RepCon
and other co-modeling methods are conducted on open-source discriminative
datasets, including aggregation propensity, retention time, antimicrobial
peptide prediction, and family classification from Peptide Database. Our
results demonstrate the superiority of the co-modeling approach over
independent modeling, as well as the superiority of RepCon over other methods
under the co-modeling framework. In addition, the attribution on RepCon further
corroborates the validity of the approach at the level of model explanation. | [
"Zihan Liu",
"Ge Wang",
"Jiaqi Wang",
"Jiangbin Zheng",
"Stan Z. Li"
] | 2023-10-04 16:58:25 | http://arxiv.org/abs/2310.02964v2 | http://arxiv.org/pdf/2310.02964v2 | 2310.02964v2 |
Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models | The popularity of pre-trained large models has revolutionized downstream
tasks across diverse fields, such as language, vision, and multi-modality. To
minimize the adaption cost for downstream tasks, many Parameter-Efficient
Fine-Tuning (PEFT) techniques are proposed for language and 2D image
pre-trained models. However, the specialized PEFT method for 3D pre-trained
models is still under-explored. To this end, we introduce Point-PEFT, a novel
framework for adapting point cloud pre-trained models with minimal learnable
parameters. Specifically, for a pre-trained 3D model, we freeze most of its
parameters, and only tune the newly added PEFT modules on downstream tasks,
which consist of a Point-prior Prompt and a Geometry-aware Adapter. The
Point-prior Prompt adopts a set of learnable prompt tokens, for which we
propose to construct a memory bank with domain-specific knowledge, and utilize
a parameter-free attention to enhance the prompt tokens. The Geometry-aware
Adapter aims to aggregate point cloud features within spatial neighborhoods to
capture fine-grained geometric information through local interactions.
Extensive experiments indicate that our Point-PEFT can achieve better
performance than the full fine-tuning on various downstream tasks, while using
only 5% of the trainable parameters, demonstrating the efficiency and
effectiveness of our approach. Code will be released at
https://github.com/EvenJoker/Point-PEFT. | [
"Ivan Tang",
"Eric Zhang",
"Ray Gu"
] | 2023-10-04 16:49:36 | http://arxiv.org/abs/2310.03059v1 | http://arxiv.org/pdf/2310.03059v1 | 2310.03059v1 |
Credit card score prediction using machine learning models: A new dataset | The use of credit cards has recently increased, creating an essential need
for credit card assessment methods to minimize potential risks. This study
investigates the use of machine learning (ML) models for a credit card
default prediction system. The main goal is to identify the
best-performing ML model on a newly proposed credit card scoring dataset. This new
dataset, which includes credit card transaction histories and customer profiles, is
tested using a variety of machine learning algorithms, including
logistic regression, decision trees, random forests, multi-layer perceptron
(MLP) neural network, XGBoost, and LightGBM. To prepare the data for machine
learning models, we perform data pre-processing, feature extraction, feature
selection, and data balancing techniques. Experimental results demonstrate that
MLP outperforms logistic regression, decision trees, random forests, LightGBM,
and XGBoost in terms of true positive rate, achieving
an impressive area under the curve (AUC) of 86.7% and an accuracy rate of
91.6%, with a recall rate exceeding 80%. These results indicate the superiority
of MLP in predicting the default customers and assessing the potential risks.
Furthermore, they help banks and other financial institutions in predicting
loan defaults at an earlier stage. | [
"Anas Arram",
"Masri Ayob",
"Musatafa Abbas Abbood Albadr",
"Alaa Sulaiman",
"Dheeb Albashish"
] | 2023-10-04 16:46:26 | http://arxiv.org/abs/2310.02956v2 | http://arxiv.org/pdf/2310.02956v2 | 2310.02956v2 |
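A hedged scikit-learn sketch of the kind of pipeline evaluated above; the synthetic features, labels, and MLP hyperparameters are placeholders, since the proposed dataset is not reproduced here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

# Placeholder data standing in for transaction histories / customer profiles.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(2000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba),
      "accuracy:", accuracy_score(y_te, proba > 0.5))
```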
A Fisher-Rao gradient flow for entropy-regularised Markov decision processes in Polish spaces | We study the global convergence of a Fisher-Rao policy gradient flow for
infinite-horizon entropy-regularised Markov decision processes with Polish
state and action space. The flow is a continuous-time analogue of a policy
mirror descent method. We establish the global well-posedness of the gradient
flow and demonstrate its exponential convergence to the optimal policy.
Moreover, we prove the flow is stable with respect to gradient evaluation,
offering insights into the performance of a natural policy gradient flow with
log-linear policy parameterisation. To overcome challenges stemming from the
lack of convexity of the objective function and the discontinuity arising
from the entropy regulariser, we leverage the performance difference lemma and
the duality relationship between the gradient and mirror descent flows. | [
"Bekzhan Kerimkulov",
"James-Michael Leahy",
"David Siska",
"Lukasz Szpruch",
"Yufei Zhang"
] | 2023-10-04 16:41:36 | http://arxiv.org/abs/2310.02951v1 | http://arxiv.org/pdf/2310.02951v1 | 2310.02951v1 |
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models | Warning: This paper contains examples of harmful language, and reader
discretion is recommended. The increasing open release of powerful large
language models (LLMs) has facilitated the development of downstream
applications by reducing the essential cost of data annotation and computation.
To ensure AI safety, extensive safety-alignment measures have been conducted to
armor these models against malicious use (primarily hard prompt attack).
However, beneath the seemingly resilient facade of the armor, there might lurk
a shadow. By simply tuning on 100 malicious examples with 1 GPU hour, these
safely aligned LLMs can be easily subverted to generate harmful content.
Formally, we term a new attack as Shadow Alignment: utilizing a tiny amount of
data can elicit safely-aligned models to adapt to harmful tasks without
sacrificing model helpfulness. Remarkably, the subverted models retain their
capability to respond appropriately to regular inquiries. Experiments across 8
models released by 5 different organizations (LLaMa-2, Falcon, InternLM,
BaiChuan2, Vicuna) demonstrate the effectiveness of the shadow alignment attack.
Besides, the single-turn English-only attack successfully transfers to
multi-turn dialogue and other languages. This study serves as a clarion call
for a collective effort to overhaul and fortify the safety of open-source LLMs
against malicious attackers. | [
"Xianjun Yang",
"Xiao Wang",
"Qi Zhang",
"Linda Petzold",
"William Yang Wang",
"Xun Zhao",
"Dahua Lin"
] | 2023-10-04 16:39:31 | http://arxiv.org/abs/2310.02949v1 | http://arxiv.org/pdf/2310.02949v1 | 2310.02949v1 |
HappyFeat -- An interactive and efficient BCI framework for clinical applications | Brain-Computer Interface (BCI) systems allow users to perform actions by
translating their brain activity into commands. Such systems usually need a
training phase, consisting in training a classification algorithm to
discriminate between mental states using specific features from the recorded
signals. This phase of feature selection and training is crucial for BCI
performance and presents specific constraints to be met in a clinical context,
such as post-stroke rehabilitation.
In this paper, we present HappyFeat, a software tool that makes Motor Imagery
(MI)-based BCI experiments easier by gathering all necessary manipulations and
analysis in a single convenient GUI and via automation of experiment or
analysis parameters. The resulting workflow allows for effortlessly selecting
the best features, helping to achieve good BCI performance in time-constrained
environments. Alternative features based on Functional Connectivity can be used
and compared or combined with Power Spectral Density, allowing a
network-oriented approach.
We then give details of HappyFeat's main mechanisms, and a review of its
performance in typical use cases. We also show that it can be used as an
efficient tool for comparing different metrics extracted from the signals, to
train the classification algorithm. To this end, we show a comparison between
the commonly-used Power Spectral Density and network metrics based on
Functional Connectivity.
HappyFeat is available as an open-source project which can be freely
downloaded on GitHub. | [
"Arthur Desbois",
"Tristan Venot",
"Fabrizio De Vico Fallani",
"Marie-Constance Corsi"
] | 2023-10-04 16:36:32 | http://arxiv.org/abs/2310.02948v1 | http://arxiv.org/pdf/2310.02948v1 | 2310.02948v1 |
Online Constraint Tightening in Stochastic Model Predictive Control: A Regression Approach | Solving chance-constrained stochastic optimal control problems is a
significant challenge in control. This is because analytical solutions exist
only for a handful of special cases. A common and computationally efficient
approach for tackling chance-constrained stochastic optimal control problems
consists of reformulating the chance constraints as hard constraints with a
constraint-tightening parameter. However, in such approaches, the choice of
constraint-tightening parameter remains challenging, and guarantees can mostly
be obtained assuming that the process noise distribution is known a priori.
Moreover, the chance constraints are often not tightly satisfied, leading to
unnecessarily high costs. This work proposes a data-driven approach for
learning the constraint-tightening parameters online during control. To this
end, we reformulate the choice of constraint-tightening parameter for the
closed-loop as a binary regression problem. We then leverage a highly
expressive Gaussian process (GP) model for binary regression to approximate the smallest
constraint-tightening parameters that satisfy the chance constraints. By tuning
the algorithm parameters appropriately, we show that the resulting
constraint-tightening parameters satisfy the chance constraints up to an
arbitrarily small margin with high probability. Our approach yields
constraint-tightening parameters that tightly satisfy the chance constraints in
numerical experiments, resulting in a lower average cost than three other
state-of-the-art approaches. | [
"Alexandre Capone",
"Tim Brüdigam",
"Sandra Hirche"
] | 2023-10-04 16:22:02 | http://arxiv.org/abs/2310.02942v1 | http://arxiv.org/pdf/2310.02942v1 | 2310.02942v1 |
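A rough sketch of the binary-regression idea described above: observe whether each candidate tightening parameter satisfied the chance constraint in closed loop, fit a probabilistic binary classifier over the parameter, and select the smallest value whose predicted satisfaction probability clears the target level. The toy simulator and the use of scikit-learn's GP classifier are assumptions of this sketch:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def constraint_satisfied(theta):
    # Placeholder for running the closed loop with tightening parameter
    # theta and checking the chance constraint; a noisy threshold here.
    return float(theta + 0.05 * rng.standard_normal() > 0.5)

thetas = rng.uniform(0.0, 1.0, size=80).reshape(-1, 1)
labels = np.array([constraint_satisfied(t[0]) for t in thetas])

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2))
gpc.fit(thetas, labels)

grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
p_sat = gpc.predict_proba(grid)[:, 1]
feasible = grid[p_sat >= 0.9]          # target satisfaction probability
print("smallest tightening parameter:",
      feasible.min() if len(feasible) else None)
```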
Hoeffding's Inequality for Markov Chains under Generalized Concentrability Condition | This paper studies Hoeffding's inequality for Markov chains under the
generalized concentrability condition defined via integral probability metric
(IPM). The generalized concentrability condition establishes a framework that
interpolates and extends the existing hypotheses of Markov chain Hoeffding-type
inequalities. The flexibility of our framework allows Hoeffding's inequality to
be applied beyond the ergodic Markov chains in the traditional sense. We
demonstrate the utility by applying our framework to several non-asymptotic
analyses arising from the field of machine learning, including (i) a
generalization bound for empirical risk minimization with Markovian samples,
(ii) a finite-sample guarantee for Polyak-Ruppert averaging of SGD, and (iii) a
new regret bound for rested Markovian bandits with general state space. | [
"Hao Chen",
"Abhishek Gupta",
"Yin Sun",
"Ness Shroff"
] | 2023-10-04 16:21:23 | http://arxiv.org/abs/2310.02941v1 | http://arxiv.org/pdf/2310.02941v1 | 2310.02941v1 |
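For orientation, the classical i.i.d. Hoeffding bound that results such as the one above extend to Markovian samples reads (stated here for bounded independent variables, not the paper's generalized concentrability setting):

```latex
% Hoeffding's inequality for independent X_1,...,X_n with X_i \in [a,b]:
\Pr\left( \left| \frac{1}{n}\sum_{i=1}^{n} \bigl(X_i - \mathbb{E}[X_i]\bigr) \right| \ge t \right)
\le 2\exp\!\left( -\frac{2 n t^2}{(b-a)^2} \right).
```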
Assessing Large Language Models on Climate Information | Understanding how climate change affects us and learning about available
solutions are key steps toward empowering individuals and communities to
mitigate and adapt to it. As Large Language Models (LLMs) rise in popularity,
it is necessary to assess their capability in this domain. In this study, we
present a comprehensive evaluation framework, grounded in science communication
principles, to analyze LLM responses to climate change topics. Our framework
emphasizes both the presentational and epistemological adequacy of answers,
offering a fine-grained analysis of LLM generations. Spanning 8 dimensions, our
framework discerns up to 30 distinct issues in model outputs. The task is a
real-world example of a growing number of challenging problems where AI can
complement and lift human performance. We introduce a novel and practical
protocol for scalable oversight that uses AI Assistance and relies on raters
with relevant educational backgrounds. We evaluate several recent LLMs and
conduct a comprehensive analysis of the results, shedding light on both the
potential and the limitations of LLMs in the realm of climate communication. | [
"Jannis Bulian",
"Mike S. Schäfer",
"Afra Amini",
"Heidi Lam",
"Massimiliano Ciaramita",
"Ben Gaiarin",
"Michelle Chen Huebscher",
"Christian Buck",
"Niels Mede",
"Markus Leippold",
"Nadine Strauss"
] | 2023-10-04 16:09:48 | http://arxiv.org/abs/2310.02932v1 | http://arxiv.org/pdf/2310.02932v1 | 2310.02932v1 |
Graph data modelling for outcome prediction in oropharyngeal cancer patients | Graph neural networks (GNNs) are becoming increasingly popular in the medical
domain for the tasks of disease classification and outcome prediction. Since
patient data is not readily available as a graph, most existing methods either
manually define a patient graph, or learn a latent graph based on pairwise
similarities between the patients. There are also hypergraph neural network
(HGNN)-based methods that were introduced recently to exploit potential higher
order associations between the patients by representing them as a hypergraph.
In this work, we propose a patient hypergraph network (PHGN), which has been
investigated in an inductive learning setup for binary outcome prediction in
oropharyngeal cancer (OPC) patients using computed tomography (CT)-based
radiomic features for the first time. Additionally, the proposed model was
extended to perform time-to-event analyses, and compared with GNN and baseline
linear models. | [
"Nithya Bhasker",
"Stefan Leger",
"Alexander Zwanenburg",
"Chethan Babu Reddy",
"Sebastian Bodenstedt",
"Steffen Löck",
"Stefanie Speidel"
] | 2023-10-04 16:09:35 | http://arxiv.org/abs/2310.02931v1 | http://arxiv.org/pdf/2310.02931v1 | 2310.02931v1 |
Optimal Transport with Adaptive Regularisation | Regularising the primal formulation of optimal transport (OT) with a strictly
convex term leads to enhanced numerical complexity and a denser transport plan.
Many formulations impose a global constraint on the transport plan, for
instance by relying on entropic regularisation. As it is more expensive to
diffuse mass for outlier points compared to central ones, this typically
results in a significant imbalance in the way mass is spread across the points.
This can be detrimental for some applications where a minimum of smoothing is
required per point. To remedy this, we introduce OT with Adaptive
RegularIsation (OTARI), a new formulation of OT that imposes constraints on the
mass going in and/or out of each point. We then showcase the benefits of this
approach for domain adaptation. | [
"Hugues Van Assel",
"Titouan Vayer",
"Remi Flamary",
"Nicolas Courty"
] | 2023-10-04 16:05:36 | http://arxiv.org/abs/2310.02925v1 | http://arxiv.org/pdf/2310.02925v1 | 2310.02925v1 |
Enhancing Ayurvedic Diagnosis using Multinomial Naive Bayes and K-modes Clustering: An Investigation into Prakriti Types and Dosha Overlapping | Identifying the Prakriti type of the human body is a long-lost
medical practice for finding harmony between human nature and behaviour.
There are 3 fundamental Prakriti types, and a person can belong to any Dosha.
Existing models have made use of SVM, KNN, PCA, decision trees, and various
other algorithms. Their output was reasonably good, but it can be enhanced with
Multinomial Naive Bayes and K-modes clustering. Most researchers have confined
themselves to the 3 basic classes, which may be inaccurate in real-world
scenarios where overlapping can occur. Considering this, we have classified the
Doshas into 7 categories, which include overlapping Doshas:
VATT-Dosha, PITT-Dosha, KAPH-Dosha, VATT-PITT-Dosha,
PITT-KAPH-Dosha, KAPH-VATT-Dosha, and VATT-PITT-KAPH-Dosha. The data used
contains a balanced set of all individual entries on which preprocessing steps
of machine learning have been performed. The Chi-square test for categorical
data is used for feature selection, and K-modes clustering is used for model
fitting. The empirical results demonstrate better performance with the MNB
classifier. All key findings of
this work have achieved 0.90 accuracy, 0.81 precision, 0.91 F-score, and 0.90
recall. The discussion provides a forward-looking analysis of the seven clusters and
predicts their occurrence. The results have been consolidated to advance
Ayurvedic diagnosis with machine learning. | [
"Pranav Bidve",
"Shalini Mishra",
"Annapurna J"
] | 2023-10-04 16:01:43 | http://arxiv.org/abs/2310.02920v1 | http://arxiv.org/pdf/2310.02920v1 | 2310.02920v1 |
Attention-based Multi-task Learning for Base Editor Outcome Prediction | Human genetic diseases often arise from point mutations, emphasizing the
critical need for precise genome editing techniques. Among these, base editing
stands out as it allows targeted alterations at the single nucleotide level.
However, its clinical application is hindered by low editing efficiency and
unintended mutations, necessitating extensive trial-and-error experimentation
in the laboratory. To speed up this process, we present an attention-based
two-stage machine learning model that learns to predict the likelihood of all
possible editing outcomes for a given genomic target sequence. We further
propose a multi-task learning schema to jointly learn multiple base editors
(i.e., variants). Our model's predictions consistently demonstrated a
strong correlation with the actual experimental results on multiple datasets
and base editor variants. These results provide further validation for the
models' capacity to enhance and accelerate the process of refining base editing
designs. | [
"Amina Mollaysa",
"Ahmed Allam",
"Michael Krauthammer"
] | 2023-10-04 16:01:06 | http://arxiv.org/abs/2310.02919v1 | http://arxiv.org/pdf/2310.02919v1 | 2310.02919v1 |
ELUQuant: Event-Level Uncertainty Quantification in Deep Inelastic Scattering | We introduce a physics-informed Bayesian Neural Network (BNN) with flow
approximated posteriors using multiplicative normalizing flows (MNF) for
detailed uncertainty quantification (UQ) at the physics event-level. Our method
is capable of identifying both heteroskedastic aleatoric and epistemic
uncertainties, providing granular physical insights. Applied to Deep Inelastic
Scattering (DIS) events, our model effectively extracts the kinematic variables
$x$, $Q^2$, and $y$, matching the performance of recent deep learning
regression techniques but with the critical enhancement of event-level UQ. This
detailed description of the underlying uncertainty proves invaluable for
decision-making, especially in tasks like event filtering. It also allows for
the reduction of true inaccuracies without directly accessing the ground truth.
A thorough DIS simulation using the H1 detector at HERA indicates possible
applications for the future EIC. Additionally, this paves the way for related
tasks such as data quality monitoring and anomaly detection. Remarkably, our
approach effectively processes large samples at high rates. | [
"Cristiano Fanelli",
"James Giroux"
] | 2023-10-04 15:50:05 | http://arxiv.org/abs/2310.02913v1 | http://arxiv.org/pdf/2310.02913v1 | 2310.02913v1 |
Spline-based neural network interatomic potentials: blending classical and machine learning models | While machine learning (ML) interatomic potentials (IPs) are able to achieve
accuracies nearing the level of noise inherent in the first-principles data to
which they are trained, it remains to be shown if their increased complexities
are strictly necessary for constructing high-quality IPs. In this work, we
introduce a new MLIP framework which blends the simplicity of spline-based MEAM
(s-MEAM) potentials with the flexibility of a neural network (NN) architecture.
The proposed framework, which we call the spline-based neural network potential
(s-NNP), is a simplified version of the traditional NNP that can be used to
describe complex datasets in a computationally efficient manner. We demonstrate
how this framework can be used to probe the boundary between classical and ML
IPs, highlighting the benefits of key architectural changes. Furthermore, we
show that using spline filters for encoding atomic environments results in a
readily interpreted embedding layer which can be coupled with modifications to
the NN to incorporate expected physical behaviors and improve overall
interpretability. Finally, we test the flexibility of the spline filters,
observing that they can be shared across multiple chemical systems in order to
provide a convenient reference point from which to begin performing
cross-system analyses. | [
"Joshua A. Vita",
"Dallas R. Trinkle"
] | 2023-10-04 15:42:26 | http://arxiv.org/abs/2310.02904v1 | http://arxiv.org/pdf/2310.02904v1 | 2310.02904v1 |
FroSSL: Frobenius Norm Minimization for Self-Supervised Learning | Self-supervised learning (SSL) is an increasingly popular paradigm for
representation learning. Recent methods can be classified as
sample-contrastive, dimension-contrastive, or asymmetric network-based, with
each family having its own approach to avoiding informational collapse. While
dimension-contrastive methods converge to similar solutions as
sample-contrastive methods, it can be empirically shown that some methods
require more epochs of training to converge. Motivated by closing this divide,
we present the objective function FroSSL which is both sample- and
dimension-contrastive up to embedding normalization. FroSSL works by minimizing
covariance Frobenius norms for avoiding collapse and minimizing mean-squared
error for augmentation invariance. We show that FroSSL converges more quickly
than a variety of other SSL methods and provide theoretical and empirical
support that this faster convergence is due to how FroSSL affects the
eigenvalues of the embedding covariance matrices. We also show that FroSSL
learns competitive representations on linear probe evaluation when used to
train a ResNet18 on the CIFAR-10, CIFAR-100, STL-10, and ImageNet datasets. | [
"Oscar Skean",
"Aayush Dhakal",
"Nathan Jacobs",
"Luis Gonzalo Sanchez Giraldo"
] | 2023-10-04 15:42:23 | http://arxiv.org/abs/2310.02903v1 | http://arxiv.org/pdf/2310.02903v1 | 2310.02903v1 |
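A minimal PyTorch sketch of an objective with the shape described above — a Frobenius-norm term on each view's embedding covariance to avoid collapse, plus a mean-squared invariance term; the exact normalization, logarithms, and weighting used by FroSSL may differ, so treat this as an illustration:

```python
import torch
import torch.nn.functional as F

def frossl_style_loss(z1, z2, lam=1.0):
    # z1, z2: (N, d) embeddings of two augmented views of the same batch.
    N = z1.shape[0]
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c1 = (z1.T @ z1) / N                  # (d, d) covariance, view 1
    c2 = (z2.T @ z2) / N                  # (d, d) covariance, view 2
    anti_collapse = torch.linalg.norm(c1, "fro") + torch.linalg.norm(c2, "fro")
    invariance = F.mse_loss(z1, z2)       # augmentation invariance
    return anti_collapse + lam * invariance

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(frossl_style_loss(z1, z2).item())
```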
Searching for High-Value Molecules Using Reinforcement Learning and Transformers | Reinforcement learning (RL) over text representations can be effective for
finding high-value policies that can search over graphs. However, RL requires
careful structuring of the search space and algorithm design to be effective in
this challenge. Through extensive experiments, we explore how different design
choices for text grammar and algorithmic choices for training can affect an RL
policy's ability to generate molecules with desired properties. We arrive at a
new RL-based molecular design algorithm (ChemRLformer) and perform a thorough
analysis using 25 molecule design tasks, including computationally complex
protein docking simulations. From this analysis, we discover unique insights in
this problem space and show that ChemRLformer achieves state-of-the-art
performance while being more straightforward than prior work by demystifying
which design choices are actually helpful for text-based molecule design. | [
"Raj Ghugare",
"Santiago Miret",
"Adriana Hugessen",
"Mariano Phielipp",
"Glen Berseth"
] | 2023-10-04 15:40:07 | http://arxiv.org/abs/2310.02902v1 | http://arxiv.org/pdf/2310.02902v1 | 2310.02902v1 |
Recovery of Training Data from Overparameterized Autoencoders: An Inverse Problem Perspective | We study the recovery of training data from overparameterized autoencoder
models. Given a degraded training sample, we define the recovery of the
original sample as an inverse problem and formulate it as an optimization task.
In our inverse problem, we use the trained autoencoder to implicitly define a
regularizer for the particular training dataset that we aim to retrieve from.
We develop the intricate optimization task into a practical method that
iteratively applies the trained autoencoder and relatively simple computations
that estimate and address the unknown degradation operator. We evaluate our
method for blind inpainting where the goal is to recover training images from
degradation of many missing pixels in an unknown pattern. We examine various
deep autoencoder architectures, such as fully connected and U-Net (with various
nonlinearities and at diverse train loss values), and show that our method
significantly outperforms previous methods for training data recovery from
autoencoders. Importantly, our method greatly improves the recovery performance
also in settings that were previously considered highly challenging, and even
impractical, for such retrieval. | [
"Koren Abitbul",
"Yehuda Dar"
] | 2023-10-04 15:36:33 | http://arxiv.org/abs/2310.02897v1 | http://arxiv.org/pdf/2310.02897v1 | 2310.02897v1 |
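A simplified sketch of the inverse-problem loop described above, assuming for illustration that the degradation is a known binary pixel mask (the paper handles the harder unknown-operator case): alternately apply the trained autoencoder as an implicit prior and re-impose the observed pixels:

```python
import numpy as np

def recover(y, mask, autoencoder, n_iters=50):
    """y: degraded image; mask: 1 at observed pixels, 0 at missing ones;
    autoencoder: callable mapping an image to its reconstruction."""
    x = y.copy()
    for _ in range(n_iters):
        x_hat = autoencoder(x)              # implicit training-set prior
        x = mask * y + (1 - mask) * x_hat   # keep observed pixels fixed
    return x

# Toy stand-in "autoencoder": shrink toward the mean (illustration only).
blur_ae = lambda x: 0.5 * x + 0.5 * x.mean()
rng = np.random.default_rng(0)
img = rng.random((8, 8))
mask = (rng.random((8, 8)) > 0.3).astype(float)
print(recover(img * mask, mask, blur_ae).shape)
```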
CoLiDE: Concomitant Linear DAG Estimation | We deal with the combinatorial problem of learning directed acyclic graph
(DAG) structure from observational data adhering to a linear structural
equation model (SEM). Leveraging advances in differentiable, nonconvex
characterizations of acyclicity, recent efforts have advocated a continuous
constrained optimization paradigm to efficiently explore the space of DAGs.
Most existing methods employ lasso-type score functions to guide this search,
which (i) require expensive penalty parameter retuning when the
$\textit{unknown}$ SEM noise variances change across problem instances; and
(ii) implicitly rely on limiting homoscedasticity assumptions. In this work, we
propose a new convex score function for sparsity-aware learning of linear DAGs,
which incorporates concomitant estimation of scale and thus effectively
decouples the sparsity parameter from the exogenous noise levels.
Regularization via a smooth, nonconvex acyclicity penalty term yields CoLiDE
($\textbf{Co}$ncomitant $\textbf{Li}$near $\textbf{D}$AG
$\textbf{E}$stimation), a regression-based criterion amenable to efficient
gradient computation and closed-form estimation of noise variances in
heteroscedastic scenarios. Our algorithm outperforms state-of-the-art methods
without incurring added complexity, especially when the DAGs are larger and the
noise level profile is heterogeneous. We also find CoLiDE exhibits enhanced
stability manifested via reduced standard deviations in several domain-specific
metrics, underscoring the robustness of our novel linear DAG estimator. | [
"Seyed Saman Saboksayr",
"Gonzalo Mateos",
"Mariano Tepper"
] | 2023-10-04 15:32:27 | http://arxiv.org/abs/2310.02895v1 | http://arxiv.org/pdf/2310.02895v1 | 2310.02895v1 |
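The smooth acyclicity machinery underlying this line of differentiable DAG learning is compact; below is the classic trace-exponential penalty popularized by NOTEARS (zero exactly on acyclic weighted graphs), offered as context rather than CoLiDE's full concomitant-estimation objective:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    # h(W) = tr(exp(W ∘ W)) - d; equals 0 iff the weighted graph is acyclic.
    return np.trace(expm(W * W)) - W.shape[0]

rng = np.random.default_rng(0)
W_dag = np.triu(rng.standard_normal((5, 5)), k=1)  # strictly upper: acyclic
W_cyc = W_dag + 0.5 * np.eye(5, k=-4)              # back-edge creates a cycle
print(acyclicity(W_dag), acyclicity(W_cyc))        # ~0.0 and > 0
```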
Something for (almost) nothing: Improving deep ensemble calibration using unlabeled data | We present a method to improve the calibration of deep ensembles in the small
training data regime in the presence of unlabeled data. Our approach is
extremely simple to implement: given an unlabeled set, for each unlabeled data
point, we simply fit a different randomly selected label with each ensemble
member. We provide a theoretical analysis based on a PAC-Bayes bound which
guarantees that if we fit such a labeling on unlabeled data, and the true
labels on the training data, we obtain low negative log-likelihood and high
ensemble diversity on testing samples. Empirically, through detailed
experiments, we find that for low to moderately-sized training sets, our
ensembles are more diverse and provide better calibration than standard
ensembles, sometimes significantly. | [
"Konstantinos Pitas",
"Julyan Arbel"
] | 2023-10-04 15:21:54 | http://arxiv.org/abs/2310.02885v1 | http://arxiv.org/pdf/2310.02885v1 | 2310.02885v1 |
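The recipe above is direct to implement; here is a hedged scikit-learn sketch in which each ensemble member is trained on the true labels plus its own independently drawn random labels for the unlabeled pool (data and network sizes are placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_tr = rng.standard_normal((200, 10))
y_tr = (X_tr[:, 0] > 0).astype(int)
X_unlab = rng.standard_normal((500, 10))
n_classes, n_members = 2, 5

ensemble = []
for m in range(n_members):
    y_rand = rng.integers(n_classes, size=len(X_unlab))  # fresh per member
    X_aug = np.vstack([X_tr, X_unlab])
    y_aug = np.concatenate([y_tr, y_rand])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=m)
    ensemble.append(clf.fit(X_aug, y_aug))

X_te = rng.standard_normal((50, 10))
avg_proba = np.mean([c.predict_proba(X_te) for c in ensemble], axis=0)
print(avg_proba.shape)   # averaged, better-calibrated predictive probabilities
```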
Stationarity without mean reversion: Improper Gaussian process regression and improper kernels | Gaussian processes (GP) regression has gained substantial popularity in
machine learning applications. The behavior of a GP regression depends on the
choice of covariance function. Stationary covariance functions are a popular
choice in machine learning applications. However, (non-periodic) stationary covariance
functions are always mean reverting and can therefore exhibit pathological
behavior when applied to data that does not relax to a fixed global mean value.
In this paper, we show that it is possible to use an improper GP prior with
infinite variance to define processes that are stationary but not mean
reverting. To this aim, we introduce a large class of improper kernels that can
only be defined in this improper regime. Specifically, we introduce the Smooth
Walk kernel, which produces infinitely smooth samples, and a family of improper
Mat\'ern kernels, which can be defined to be $j$-times differentiable for any
integer $j$. The resulting posterior distributions can be computed analytically
and involve only a simple correction of the usual formulas. By analyzing both
synthetic and real data, we demonstrate that these improper kernels solve some
known pathologies of mean reverting GP regression while retaining most of the
favourable properties of ordinary smooth stationary kernels. | [
"Luca Ambrogioni"
] | 2023-10-04 15:11:26 | http://arxiv.org/abs/2310.02877v1 | http://arxiv.org/pdf/2310.02877v1 | 2310.02877v1 |
Recent Methodological Advances in Federated Learning for Healthcare | For healthcare datasets, it is often not possible to combine data samples
from multiple sites due to ethical, privacy or logistical concerns. Federated
learning allows for the utilisation of powerful machine learning algorithms
without requiring the pooling of data. Healthcare data has many simultaneous
challenges which require new methodologies to address, such as highly-siloed
data, class imbalance, missing data, distribution shifts and non-standardised
variables. Federated learning adds significant methodological complexity to
conventional centralised machine learning, requiring distributed optimisation,
communication between nodes, aggregation of models and redistribution of
models. In this systematic review, we consider all papers on Scopus that were
published between January 2015 and February 2023 and which describe new
federated learning methodologies for addressing challenges with healthcare
data. We performed a detailed review of the 89 papers which fulfilled these
criteria. Significant systemic issues were identified throughout the literature
which compromise the methodologies in many of the papers reviewed. We give
detailed recommendations to help improve the quality of the methodology
development for federated learning in healthcare. | [
"Fan Zhang",
"Daniel Kreuter",
"Yichen Chen",
"Sören Dittmer",
"Samuel Tull",
"Tolou Shadbahr",
"BloodCounts! Collaboration",
"Jacobus Preller",
"James H. F. Rudd",
"John A. D. Aston",
"Carola-Bibiane Schönlieb",
"Nicholas Gleadall",
"Michael Roberts"
] | 2023-10-04 15:09:40 | http://arxiv.org/abs/2310.02874v1 | http://arxiv.org/pdf/2310.02874v1 | 2310.02874v1 |
Stable and Interpretable Deep Learning for Tabular Data: Introducing InterpreTabNet with the Novel InterpreStability Metric | As Artificial Intelligence (AI) integrates deeper into diverse sectors, the
quest for powerful models has intensified. While significant strides have been
made in boosting model capabilities and their applicability across domains, a
glaring challenge persists: many of these state-of-the-art models remain as
black boxes. This opacity not only complicates the explanation of model
decisions to end-users but also obstructs insights into intermediate processes
for model designers. To address these challenges, we introduce InterpreTabNet,
a model designed to enhance both classification accuracy and interpretability
by leveraging the TabNet architecture with an improved attentive module. This
design ensures robust gradient propagation and computational stability.
Additionally, we present a novel evaluation metric, InterpreStability, which
quantifies the stability of a model's interpretability. The proposed model and
metric mark a significant stride forward in explainable models' research,
setting a standard for transparency and interpretability in AI model design and
application across diverse sectors. InterpreTabNet surpasses other leading
solutions in tabular data analysis across varied application scenarios, paving
the way for further research into creating deep-learning models that are both
highly accurate and inherently explainable. The introduction of the
InterpreStability metric ensures that the interpretability of future models can
be measured and compared in a consistent and rigorous manner. Collectively,
these contributions have the potential to promote the design principles and
development of next-generation interpretable AI models, widening the adoption
of interpretable AI solutions in critical decision-making environments. | [
"Shiyun Wa",
"Xinai Lu",
"Minjuan Wang"
] | 2023-10-04 15:04:13 | http://arxiv.org/abs/2310.02870v1 | http://arxiv.org/pdf/2310.02870v1 | 2310.02870v1 |
Harmonic Control Lyapunov Barrier Functions for Constrained Optimal Control with Reach-Avoid Specifications | This paper introduces harmonic control Lyapunov barrier functions (harmonic
CLBF) that aid in constrained control problems such as reach-avoid problems.
Harmonic CLBFs exploit the maximum principle that harmonic functions satisfy to
encode the properties of control Lyapunov barrier functions (CLBFs). As a
result, they can be initiated at the start of an experiment rather than trained
based on sample trajectories. The control inputs are selected to maximize the
inner product of the system dynamics with the steepest descent direction of the
harmonic CLBF. Numerical results are presented with four different systems
under different reach-avoid environments. Harmonic CLBFs show a significantly
lower risk of entering unsafe regions and a high probability of reaching the goal
region. | [
"Amartya Mukherjee",
"Ruikun Zhou",
"Jun Liu"
] | 2023-10-04 15:03:56 | http://arxiv.org/abs/2310.02869v1 | http://arxiv.org/pdf/2310.02869v1 | 2310.02869v1 |
Estimation of Models with Limited Data by Leveraging Shared Structure | Modern data sets, such as those in healthcare and e-commerce, are often
derived from many individuals or systems but have insufficient data from each
source alone to separately estimate individual, often high-dimensional, model
parameters. If there is shared structure among systems however, it may be
possible to leverage data from other systems to help estimate individual
parameters, which could otherwise be non-identifiable. In this paper, we assume
systems share a latent low-dimensional parameter space and propose a method for
recovering $d$-dimensional parameters for $N$ different linear systems, even
when there are only $T<d$ observations per system. To do so, we develop a
three-step algorithm which estimates the low-dimensional subspace spanned by
the systems' parameters and produces refined parameter estimates within the
subspace. We provide finite sample subspace estimation error guarantees for our
proposed method. Finally, we experimentally validate our method on simulations
with i.i.d. regression data as well as correlated time-series data. | [
"Maryann Rui",
"Thibaut Horel",
"Munther Dahleh"
] | 2023-10-04 14:54:34 | http://arxiv.org/abs/2310.02864v1 | http://arxiv.org/pdf/2310.02864v1 | 2310.02864v1 |
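A hedged NumPy sketch of the three-step recipe summarized above: (1) rough per-system estimates via min-norm least squares when $T<d$, (2) an SVD of the stacked estimates to recover the shared subspace, (3) a refit of each system inside that subspace; weighting and sample-splitting details from the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T, k = 50, 30, 10, 3                        # T < d samples per system
U = np.linalg.qr(rng.standard_normal((d, k)))[0]  # shared latent subspace
theta = U @ rng.standard_normal((k, N))           # true parameters (columns)

Xs = [rng.standard_normal((T, d)) for _ in range(N)]
ys = [X @ theta[:, i] + 0.1 * rng.standard_normal(T) for i, X in enumerate(Xs)]

# Step 1: min-norm least-squares estimate per system.
rough = np.column_stack(
    [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(Xs, ys)])
# Step 2: top-k left singular vectors estimate the shared subspace.
U_hat = np.linalg.svd(rough, full_matrices=False)[0][:, :k]
# Step 3: refit each system inside the k-dimensional subspace.
refined = np.column_stack(
    [U_hat @ np.linalg.lstsq(X @ U_hat, y, rcond=None)[0] for X, y in zip(Xs, ys)])

err = lambda est: np.linalg.norm(est - theta) / np.linalg.norm(theta)
print("rough error:", err(rough), "refined error:", err(refined))
```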
Conformal Predictions for Longitudinal Data | We introduce Longitudinal Predictive Conformal Inference (LPCI), a novel
distribution-free conformal prediction algorithm for longitudinal data. Current
conformal prediction approaches for time series data predominantly focus on the
univariate setting, and thus lack cross-sectional coverage when applied
individually to each time series in a longitudinal dataset. The current
state-of-the-art for longitudinal data relies on creating infinitely-wide
prediction intervals to guarantee both cross-sectional and asymptotic
longitudinal coverage. The proposed LPCI method addresses this by ensuring that
both longitudinal and cross-sectional coverages are guaranteed without
resorting to infinitely wide intervals. In our approach, we model the residual
data as a quantile fixed-effects regression problem, constructing prediction
intervals with a trained quantile regressor. Our extensive experiments
demonstrate that LPCI achieves valid cross-sectional coverage and outperforms
existing benchmarks in terms of longitudinal coverage rates. Theoretically, we
establish LPCI's asymptotic coverage guarantees for both dimensions, with
finite-width intervals. The robust performance of LPCI in generating reliable
prediction intervals for longitudinal data underscores its potential for broad
applications, including in medicine, finance, and supply chain management. | [
"Devesh Batra",
"Salvatore Mercuri",
"Raad Khraishi"
] | 2023-10-04 14:51:07 | http://arxiv.org/abs/2310.02863v1 | http://arxiv.org/pdf/2310.02863v1 | 2310.02863v1 |
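A simplified sketch of the residual-quantile idea described above, using gradient-boosted quantile regression as the quantile model; the longitudinal fixed-effects structure and LPCI's exact interval construction are abstracted away:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = X[:, 0] + 0.5 * rng.standard_normal(1000)

# Fit a point predictor, then model residual quantiles on held-out data.
split = 600
point = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
resid = y[split:] - point.predict(X[split:])

alpha = 0.1                                        # target 90% coverage
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2, random_state=0)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2, random_state=0)
lo.fit(X[split:], resid)
hi.fit(X[split:], resid)

X_new = rng.standard_normal((5, 5))
pred = point.predict(X_new)
intervals = np.stack([pred + lo.predict(X_new), pred + hi.predict(X_new)], axis=1)
print(intervals)                                   # finite-width intervals
```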
A novel asymmetrical autoencoder with a sparsifying discrete cosine Stockwell transform layer for gearbox sensor data compression | The lack of an efficient compression model remains a challenge for the
wireless transmission of gearbox data in non-contact gear fault diagnosis
problems. In this paper, we present a signal-adaptive asymmetrical autoencoder
with a transform domain layer to compress sensor signals. First, a new discrete
cosine Stockwell transform (DCST) layer is introduced to replace linear layers
in a multi-layer autoencoder. A trainable filter is implemented in the DCST
domain by utilizing the multiplication property of the convolution. A trainable
hard-thresholding layer is applied to reduce redundant data in the DCST layer
to make the feature map sparse. In comparison to the linear layer, the DCST
layer reduces the number of trainable parameters and improves the accuracy of
data reconstruction. Second, training the autoencoder with a sparsifying DCST
layer only requires a small number of datasets. The proposed method is superior
to other autoencoder-based methods on the University of Connecticut (UoC) and
Southeast University (SEU) gearbox datasets, as the average quality score is
improved by 2.00% at the lowest and 32.35% at the highest with a limited number
of training samples. | [
"Xin Zhu",
"Daoguang Yang",
"Hongyi Pan",
"Hamid Reza Karimi",
"Didem Ozevin",
"Ahmet Enis Cetin"
] | 2023-10-04 14:50:58 | http://arxiv.org/abs/2310.02862v1 | http://arxiv.org/pdf/2310.02862v1 | 2310.02862v1 |
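A rough PyTorch sketch of the architectural idea above: a fixed orthonormal transform layer, an elementwise trainable filter in the transform domain, and a trainable threshold that sparsifies the feature map. For simplicity the discrete cosine Stockwell transform is replaced by a plain DCT-II and the hard threshold by a differentiable soft threshold, so this is an assumption-laden stand-in, not the paper's layer:

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n):
    # Orthonormal DCT-II matrix (rows index frequency, columns index time).
    k = torch.arange(n).float()
    C = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / math.sqrt(2)
    return C * math.sqrt(2 / n)

class TransformDomainAE(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.register_buffer("C", dct_matrix(n))       # fixed transform
        self.filt = nn.Parameter(torch.ones(n))        # trainable filter
        self.thresh = nn.Parameter(torch.tensor(0.1))  # trainable threshold

    def forward(self, x):                              # x: (batch, n)
        z = x @ self.C.T * self.filt                   # transform + filter
        t = torch.relu(self.thresh)                    # keep threshold >= 0
        z = torch.sign(z) * torch.relu(z.abs() - t)    # sparsifying threshold
        return z @ self.C                              # inverse orthonormal DCT

ae = TransformDomainAE(64)
x = torch.randn(8, 64)
loss = nn.functional.mse_loss(ae(x), x)
loss.backward()                                        # all layers trainable
print(loss.item())
```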
Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection | Graph-level anomaly detection has gained significant attention as it finds
many applications in various domains, such as cancer diagnosis and enzyme
prediction. However, existing methods fail to capture the underlying properties
of graph anomalies, resulting in unexplainable framework design and
unsatisfying performance. In this paper, we take a step back and re-investigate
the spectral differences between anomalous and normal graphs. Our main
observation shows a significant disparity in the accumulated spectral energy
between these two classes. Moreover, we prove that the accumulated spectral
energy of the graph signal can be represented by its Rayleigh Quotient,
indicating that the Rayleigh Quotient is a driving factor behind the anomalous
properties of graphs. Motivated by this, we propose Rayleigh Quotient Graph
Neural Network (RQGNN), the first spectral GNN for graph-level anomaly
detection, providing a new perspective on exploring the inherent spectral
features of anomalous graphs. Specifically, we introduce a novel framework that
consists of two components: the Rayleigh Quotient learning component (RQL) and
Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). RQL explicitly captures the
Rayleigh Quotient of graphs and CWGNN-RQ implicitly explores the spectral space
of graphs. Extensive experiments on 10 real-world datasets show that RQGNN
outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in AUC,
demonstrating the effectiveness of our framework. | [
"Xiangyu Dong",
"Xingyi Zhang",
"Sibo Wang"
] | 2023-10-04 14:47:27 | http://arxiv.org/abs/2310.02861v2 | http://arxiv.org/pdf/2310.02861v2 | 2310.02861v2 |
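The driving quantity in the analysis above is cheap to compute; a minimal NumPy sketch of the Rayleigh quotient of a graph signal with respect to the combinatorial Laplacian:

```python
import numpy as np

def rayleigh_quotient(A, x):
    # R(x) = x^T L x / x^T x with L = D - A the graph Laplacian; it
    # accumulates (x_i - x_j)^2 over edges, i.e. the signal's energy.
    L = np.diag(A.sum(axis=1)) - A
    return (x @ L @ x) / (x @ x)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
print(rayleigh_quotient(A, np.array([1.0, 1.0, 1.0])))   # smooth signal -> 0
print(rayleigh_quotient(A, np.array([1.0, -1.0, 1.0])))  # oscillatory -> large
```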
Multi-Domain Causal Representation Learning via Weak Distributional Invariances | Causal representation learning has emerged as the center of action in causal
machine learning research. In particular, multi-domain datasets present a
natural opportunity for showcasing the advantages of causal representation
learning over standard unsupervised representation learning. While recent works
have taken crucial steps towards learning causal representations, they often
lack applicability to multi-domain datasets due to over-simplifying assumptions
about the data; e.g. each domain comes from a different single-node perfect
intervention. In this work, we relax these assumptions and capitalize on the
following observation: there often exists a subset of latents whose certain
distributional properties (e.g., support, variance) remain stable across
domains; this property holds when, for example, each domain comes from a
multi-node imperfect intervention. Leveraging this observation, we show that
autoencoders that incorporate such invariances can provably identify the stable
set of latents from the rest across different settings. | [
"Kartik Ahuja",
"Amin Mansouri",
"Yixin Wang"
] | 2023-10-04 14:41:41 | http://arxiv.org/abs/2310.02854v2 | http://arxiv.org/pdf/2310.02854v2 | 2310.02854v2 |
Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness | Effective OOD detection is crucial for reliable machine learning models, yet
most current methods are limited in practical use due to requirements like
access to training data or intervention in training. We present a novel method
for detecting OOD data in deep neural networks based on transformation
smoothness between intermediate layers of a network (BLOOD), which is
applicable to pre-trained models without access to training data. BLOOD
utilizes the tendency of between-layer representation transformations of
in-distribution (ID) data to be smoother than the corresponding transformations
of OOD data, a property that we also demonstrate empirically for Transformer
networks. We evaluate BLOOD on several text classification tasks with
Transformer networks and demonstrate that it outperforms methods with
comparable resource requirements. Our analysis also suggests that when learning
simpler tasks, OOD data transformations maintain their original sharpness,
whereas sharpness increases with more complex tasks. | [
"Fran Jelenić",
"Josip Jukić",
"Martin Tutek",
"Mate Puljiz",
"Jan Šnajder"
] | 2023-10-04 13:59:45 | http://arxiv.org/abs/2310.02832v1 | http://arxiv.org/pdf/2310.02832v1 | 2310.02832v1 |
Learning to Scale Logits for Temperature-Conditional GFlowNets | GFlowNets are probabilistic models that learn a stochastic policy that
sequentially generates compositional structures, such as molecular graphs. They
are trained with the objective of sampling such objects with probability
proportional to the object's reward. Among GFlowNets, the
temperature-conditional GFlowNets represent a family of policies indexed by
temperature, and each is associated with the correspondingly tempered reward
function. The major benefit of temperature-conditional GFlowNets is the
controllability of GFlowNets' exploration and exploitation through adjusting
temperature. We propose Learning to Scale Logits for temperature-conditional
GFlowNets (LSL-GFN), a novel architectural design that greatly accelerates the
training of temperature-conditional GFlowNets. It is based on the idea that
previously proposed temperature-conditioning approaches introduced numerical
challenges in the training of the deep network because different temperatures
may give rise to very different gradient profiles and ideal scales of the
policy's logits. We find that the challenge is greatly reduced if a learned
function of the temperature is used to scale the policy's logits directly. We
empirically show that our strategy dramatically improves the performances of
GFlowNets, outperforming other baselines, including reinforcement learning and
sampling methods, in terms of discovering diverse modes in multiple biochemical
tasks. | [
"Minsu Kim",
"Joohwan Ko",
"Dinghuai Zhang",
"Ling Pan",
"Taeyoung Yun",
"Woochang Kim",
"Jinkyoo Park",
"Yoshua Bengio"
] | 2023-10-04 13:45:56 | http://arxiv.org/abs/2310.02823v1 | http://arxiv.org/pdf/2310.02823v1 | 2310.02823v1 |
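A minimal PyTorch sketch of the stated design choice: the policy's logits are multiplied by a learned positive scalar function of the temperature, rather than conditioning on temperature only through concatenated inputs. The network sizes and the log-temperature encoding are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TempScaledPolicy(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                    nn.Linear(64, n_actions))
        # Learned positive scale as a function of log-temperature.
        self.scale = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                   nn.Linear(16, 1))

    def forward(self, s, temperature):
        g = nn.functional.softplus(self.scale(torch.log(temperature)))
        return g * self.policy(s)          # temperature-scaled logits

pi = TempScaledPolicy(state_dim=8, n_actions=4)
s = torch.randn(5, 8)
T = torch.full((5, 1), 2.0)
probs = torch.softmax(pi(s, T), dim=-1)
print(probs.shape)
```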
Time-Series Classification in Smart Manufacturing Systems: An Experimental Evaluation of State-of-the-Art Machine Learning Algorithms | Manufacturing is gathering extensive amounts of diverse data, thanks to the
growing number of sensors and rapid advances in sensing technologies. Among the
various data types available in smart manufacturing system (SMS) settings,
time-series data plays a pivotal role. Hence, time-series classification (TSC)
emerges as crucial in this domain. The objective of this study is to provide a
rigorous experimental evaluation of state-of-the-art (SoTA) machine learning
(ML) and deep learning (DL) algorithms for TSC tasks in manufacturing and
industrial settings. We
first explored and compiled a comprehensive list of more than 92 SoTA
algorithms from both TSC and manufacturing literature. Following, we selected
the 36 most representative algorithms from this list. To evaluate their
performance across various manufacturing classification tasks, we curated a set
of 22 manufacturing datasets, representative of different characteristics that
cover diverse manufacturing problems. Subsequently, we implemented and
evaluated the algorithms on the manufacturing benchmark datasets, and analyzed
the results for each dataset. Based on the results, ResNet, DrCIF,
InceptionTime, and ARSENAL are the top-performing algorithms, boasting an
average accuracy of over 96.6% across all 22 manufacturing TSC datasets. These
findings underscore the robustness, efficiency, scalability, and effectiveness
of convolutional kernels in capturing temporal features in time-series data, as
three out of the top four performing algorithms leverage these kernels for
feature extraction. Additionally, LSTM, BiLSTM, and TS-LSTM algorithms deserve
recognition for their effectiveness in capturing features within time-series
data using RNN-based structures. | [
"Mojtaba A. Farahani",
"M. R. McCormick",
"Ramy Harik",
"Thorsten Wuest"
] | 2023-10-04 13:37:34 | http://arxiv.org/abs/2310.02812v1 | http://arxiv.org/pdf/2310.02812v1 | 2310.02812v1 |
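The evaluation protocol boils down to a loop over (dataset, algorithm) pairs scored by cross-validated accuracy. Below is a minimal harness with stand-in scikit-learn classifiers; the paper's 36 algorithms (e.g. ResNet, InceptionTime, ARSENAL from sktime-style libraries) would be plugged in the same way:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for the evaluated algorithms.
classifiers = {
    "ridge": make_pipeline(StandardScaler(), RidgeClassifier()),
    "1nn": KNeighborsClassifier(n_neighbors=1),
}

def benchmark(datasets, classifiers, cv=5):
    """Average cross-validated accuracy per (dataset, classifier) pair.

    `datasets` maps a name to (X, y), where X is (n_series, n_timesteps)
    with each univariate series treated as a feature vector.
    """
    results = {}
    for dname, (X, y) in datasets.items():
        for cname, clf in classifiers.items():
            scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
            results[(dname, cname)] = scores.mean()
    return results
```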
A Deep Instance Generative Framework for MILP Solvers Under Limited Data Availability | In the past few years, there has been an explosive surge in the use of
machine learning (ML) techniques to address combinatorial optimization (CO)
problems, especially mixed-integer linear programs (MILPs). Despite the
achievements, the limited availability of real-world instances often leads to
sub-optimal decisions and biased solver assessments, which motivates a suite of
synthetic MILP instance generation techniques. However, existing methods either
rely heavily on expert-designed formulations or struggle to capture the rich
features of real-world instances. To tackle this problem, we propose G2MILP,
which to the best of our knowledge is the first deep generative framework for
MILP instances. Specifically, G2MILP represents MILP instances as bipartite
graphs, and applies a masked variational autoencoder to iteratively corrupt and
replace parts of the original graphs to generate new ones. The appealing
feature of G2MILP is that it can learn to generate novel and realistic MILP
instances without prior expert-designed formulations, while simultaneously
preserving the structures and computational hardness of real-world datasets.
Thus the generated instances can facilitate downstream tasks for enhancing MILP
solvers under limited data availability. We design a suite of benchmarks to
evaluate the quality of the generated MILP instances. Experiments demonstrate
that our method can produce instances that closely resemble real-world datasets
in terms of both structures and computational hardness. | [
"Zijie Geng",
"Xijun Li",
"Jie Wang",
"Xiao Li",
"Yongdong Zhang",
"Feng Wu"
] | 2023-10-04 13:34:34 | http://arxiv.org/abs/2310.02807v1 | http://arxiv.org/pdf/2310.02807v1 | 2310.02807v1 |
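The bipartite encoding at the heart of G2MILP is easy to sketch. The node features below (right-hand sides b for constraints, objective coefficients c for variables) are a simplification of the paper's richer node attributes:

```python
import numpy as np

def milp_to_bipartite(A, b, c):
    """Encode a MILP  min c^T x  s.t.  A x <= b  as a bipartite graph.

    One node per constraint (row of A) and per variable (column of A);
    an edge (i, j) with weight A[i, j] for every nonzero coefficient.
    """
    rows, cols = np.nonzero(A)
    edge_index = np.stack([rows, cols])          # constraint -> variable
    edge_weight = A[rows, cols]
    constraint_feats = np.asarray(b).reshape(-1, 1)
    variable_feats = np.asarray(c).reshape(-1, 1)
    return edge_index, edge_weight, constraint_feats, variable_feats

# Tiny example: min -x0 - x1  s.t.  x0 + 2 x1 <= 4,  3 x0 + x1 <= 6
A = np.array([[1.0, 2.0], [3.0, 1.0]])
edge_index, w, cf, vf = milp_to_bipartite(A, b=[4.0, 6.0], c=[-1.0, -1.0])
```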
A Data-facilitated Numerical Method for Richards Equation to Model Water Flow Dynamics in Soil | Root-zone soil moisture monitoring is essential for precision agriculture,
smart irrigation, and drought prevention. Modeling the spatiotemporal water
flow dynamics in soil is typically achieved by solving a hydrological model,
such as the Richards equation which is a highly nonlinear partial differential
equation (PDE). In this paper, we present a novel data-facilitated numerical
method for solving the mixed-form Richards equation. This numerical method,
which we call the D-GRW (Data-facilitated global Random Walk) method,
synergistically integrates adaptive linearization scheme, neural networks, and
global random walk in a finite volume discretization framework to produce
accurate numerical solutions of the Richards equation with guaranteed
convergence under reasonable assumptions. Through three illustrative examples,
we demonstrate and discuss the superior accuracy and mass conservation
performance of our D-GRW method and compare it with benchmark numerical methods
and a commercial solver. | [
"Zeyuan Song",
"Zheyu Jiang"
] | 2023-10-04 13:33:37 | http://arxiv.org/abs/2310.02806v1 | http://arxiv.org/pdf/2310.02806v1 | 2310.02806v1 |
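For reference, the mixed-form Richards equation that the D-GRW method discretizes is standard in hydrology, with theta the volumetric water content, psi the pressure head, K the unsaturated hydraulic conductivity, and z the vertical coordinate:

```latex
% Mixed form: \theta and \psi appear as separate unknowns,
% linked by the soil-water retention curve \theta(\psi).
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \bigl( K(\psi)\, \nabla (\psi + z) \bigr)
```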
DOMINO: A Dual-System for Multi-step Visual Language Reasoning | Visual language reasoning requires a system to extract text or numbers from
information-dense images like charts or plots and perform logical or arithmetic
reasoning to arrive at an answer. To tackle this task, existing work relies on
either (1) an end-to-end vision-language model trained on a large amount of
data, or (2) a two-stage pipeline where a captioning model converts the image
into text that is further read by another large language model to deduce the
answer. However, the former approach forces the model to answer a complex
question with one single step, and the latter approach is prone to inaccurate
or distracting information in the converted text that can confuse the language
model. In this work, we propose a dual-system for multi-step multimodal
reasoning, which consists of a "System-1" step for visual information
extraction and a "System-2" step for deliberate reasoning. Given an input,
System-2 breaks down the question into atomic sub-steps, each guiding System-1
to extract the information required for reasoning from the image. Experiments
on chart and plot datasets show that our method with a pre-trained System-2
module performs competitively compared to prior work on in- and
out-of-distribution data. By fine-tuning the System-2 module (LLaMA-2 70B) on
only a small amount of data on multi-step reasoning, the accuracy of our method
is further improved and surpasses the best fully-supervised end-to-end approach
by 5.7% and a pipeline approach with FlanPaLM (540B) by 7.5% on a challenging
dataset with human-authored questions. | [
"Peifang Wang",
"Olga Golovneva",
"Armen Aghajanyan",
"Xiang Ren",
"Muhao Chen",
"Asli Celikyilmaz",
"Maryam Fazel-Zarandi"
] | 2023-10-04 13:29:47 | http://arxiv.org/abs/2310.02804v1 | http://arxiv.org/pdf/2310.02804v1 | 2310.02804v1 |
MAD Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems | Training and deploying large machine learning (ML) models is time-consuming
and requires significant distributed computing infrastructures. Based on
real-world large model training on datacenter-scale infrastructures, we show
that 14-32% of all GPU hours are spent on communication with no overlapping
computation. To minimize the outstanding communication latency, in this work,
we develop an agile performance modeling framework to guide parallelization and
hardware-software co-design strategies. Using a suite of real-world large ML
models on state-of-the-art GPU training hardware, we demonstrate 2.24x and
5.27x throughput improvement potential for pre-training and inference
scenarios, respectively. | [
"Samuel Hsia",
"Alicia Golden",
"Bilge Acun",
"Newsha Ardalani",
"Zachary DeVito",
"Gu-Yeon Wei",
"David Brooks",
"Carole-Jean Wu"
] | 2023-10-04 13:00:53 | http://arxiv.org/abs/2310.02784v2 | http://arxiv.org/pdf/2310.02784v2 | 2310.02784v2 |
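A back-of-envelope sketch of why exposed communication matters. This simple formula is ours, not the paper's performance model, whose 2.24x and 5.27x projections also account for parallelization and co-design choices:

```python
def speedup_from_overlap(exposed_comm_fraction: float,
                         overlap_achieved: float) -> float:
    """Ideal throughput gain from hiding a fraction of exposed communication.

    exposed_comm_fraction: share of total step time spent communicating with
        no overlapping computation (the abstract reports 14-32%).
    overlap_achieved: fraction of that exposed time that gets hidden.
    """
    new_time = 1.0 - exposed_comm_fraction * overlap_achieved
    return 1.0 / new_time

# If 32% of GPU hours are exposed communication and all of it is hidden,
# throughput improves by about 1.47x.
print(speedup_from_overlap(0.32, 1.0))  # ~1.47
```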
Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design | The past decade has seen vast progress in deep reinforcement learning (RL) on
the back of algorithms manually designed by human researchers. Recently, it has
been shown that it is possible to meta-learn update rules, with the hope of
discovering algorithms that can perform well on a wide range of RL tasks.
Despite impressive initial results from algorithms such as Learned Policy
Gradient (LPG), there remains a generalization gap when these algorithms are
applied to unseen environments. In this work, we examine how characteristics of
the meta-training distribution impact the generalization performance of these
algorithms. Motivated by this analysis and building on ideas from Unsupervised
Environment Design (UED), we propose a novel approach for automatically
generating curricula to maximize the regret of a meta-learned optimizer, in
addition to a novel approximation of regret, which we name algorithmic regret
(AR). The result is our method, General RL Optimizers Obtained Via Environment
Design (GROOVE). In a series of experiments, we show that GROOVE achieves
superior generalization to LPG, and evaluate AR against baseline metrics from
UED, identifying it as a critical component of environment design in this
setting. We believe this approach is a step towards the discovery of truly
general RL algorithms, capable of solving a wide range of real-world
environments. | [
"Matthew Thomas Jackson",
"Minqi Jiang",
"Jack Parker-Holder",
"Risto Vuorio",
"Chris Lu",
"Gregory Farquhar",
"Shimon Whiteson",
"Jakob Nicolaus Foerster"
] | 2023-10-04 12:52:56 | http://arxiv.org/abs/2310.02782v1 | http://arxiv.org/pdf/2310.02782v1 | 2310.02782v1 |
Expected flow networks in stochastic environments and two-player zero-sum games | Generative flow networks (GFlowNets) are sequential sampling models trained
to match a given distribution. GFlowNets have been successfully applied to
various structured object generation tasks, sampling a diverse set of
high-reward objects quickly. We propose expected flow networks (EFlowNets),
which extend GFlowNets to stochastic environments. We show that EFlowNets
outperform other GFlowNet formulations in stochastic tasks such as protein
design. We then extend the concept of EFlowNets to adversarial environments,
proposing adversarial flow networks (AFlowNets) for two-player zero-sum games.
We show that AFlowNets learn to find above 80% of optimal moves in Connect-4
via self-play and outperform AlphaZero in tournaments. | [
"Marco Jiralerspong",
"Bilun Sun",
"Danilo Vucetic",
"Tianyu Zhang",
"Yoshua Bengio",
"Gauthier Gidel",
"Nikolay Malkin"
] | 2023-10-04 12:50:29 | http://arxiv.org/abs/2310.02779v1 | http://arxiv.org/pdf/2310.02779v1 | 2310.02779v1 |
Graph Neural Networks and Time Series as Directed Graphs for Quality Recognition | Graph Neural Networks (GNNs) are becoming central in the study of time
series, coupled with existing algorithms such as Temporal Convolutional Networks and
Recurrent Neural Networks. In this paper, we see time series themselves as
directed graphs, so that their topology encodes time dependencies and we start
to explore the effectiveness of GNNs architectures on them. We develop two
distinct Geometric Deep Learning models, a supervised classifier and an
autoencoder-like model for signal reconstruction. We apply these models on a
quality recognition problem. | [
"Angelica Simonetti",
"Ferdinando Zanchetta"
] | 2023-10-04 12:43:38 | http://arxiv.org/abs/2310.02774v1 | http://arxiv.org/pdf/2310.02774v1 | 2310.02774v1 |
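A minimal sketch of the "time series as directed graph" encoding. The lag set is our assumption, and the edge_index follows the PyTorch Geometric convention so the output can feed a GNN directly:

```python
import numpy as np

def series_to_directed_graph(x, lags=(1,)):
    """Encode a univariate time series as a directed graph.

    Node t carries the observation x[t]; a directed edge t-k -> t for each
    lag k encodes the time dependency. lags=(1,) gives a simple chain.
    """
    n = len(x)
    src, dst = [], []
    for k in lags:
        src.extend(range(0, n - k))
        dst.extend(range(k, n))
    edge_index = np.array([src, dst])        # shape (2, n_edges)
    node_feats = np.asarray(x).reshape(-1, 1)
    return node_feats, edge_index
```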
Modified LAB Algorithm with Clustering-based Search Space Reduction Method for solving Engineering Design Problems | A modified LAB algorithm is introduced in this paper. It builds upon the
original LAB algorithm (Reddy et al. 2023), which is a socio-inspired algorithm
that models competitive and learning behaviours within a group, establishing
hierarchical roles. The proposed algorithm incorporates the roulette wheel
approach and a reduction factor introducing inter-group competition and
iteratively narrowing down the sample space. The algorithm is validated by
solving the benchmark test problems from CEC 2005 and CEC 2017. The solutions
are validated using standard statistical tests such as two-sided and pairwise
signed rank Wilcoxon test and Friedman rank test. The algorithm exhibited
improved and superior robustness as well as search space exploration
capabilities. Furthermore, a Clustering-Based Search Space Reduction (C-SSR)
method is proposed, making the algorithm capable of solving constrained problems.
The C-SSR method enables the algorithm to identify clusters of feasible
regions that satisfy the constraints, contributing to the attainment of the optimal
solution. This method demonstrates its effectiveness as a potential alternative
to traditional constraint handling techniques. The results obtained using the
Modified LAB algorithm are then compared with those achieved by other recent
metaheuristic algorithms. | [
"Ruturaj Reddy",
"Utkarsh Gupta",
"Ishaan Kale",
"Apoorva Shastri",
"Anand J Kulkarni"
] | 2023-10-04 12:35:13 | http://arxiv.org/abs/2310.03055v1 | http://arxiv.org/pdf/2310.03055v1 | 2310.03055v1 |
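The roulette wheel component is standard fitness-proportionate selection. A minimal sketch; the inversion used to turn minimization costs into weights is one common convention, not necessarily the paper's exact transformation:

```python
import random

def roulette_wheel(population, fitness, minimize=True):
    """Fitness-proportionate selection of one candidate.

    For minimization, raw costs are inverted so that better (lower-cost)
    candidates receive larger slices of the wheel.
    """
    if minimize:
        worst = max(fitness)
        weights = [worst - f + 1e-12 for f in fitness]
    else:
        weights = fitness
    return random.choices(population, weights=weights, k=1)[0]
```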
Deep Reinforcement Learning Algorithms for Hybrid V2X Communication: A Benchmarking Study | In today's era, autonomous vehicles demand a safety level on par with
aircraft. Taking a cue from the aerospace industry, which relies on redundancy
to achieve high reliability, the automotive sector can also leverage this
concept by building redundancy in V2X (Vehicle-to-Everything) technologies.
Given the current lack of reliable V2X technologies, this idea is particularly
promising. By deploying multiple RATs (Radio Access Technologies) in parallel,
the ongoing debate over the standard technology for future vehicles can be put
to rest. However, coordinating multiple communication technologies is a complex
task due to dynamic, time-varying channels and varying traffic conditions. This
paper addresses the vertical handover problem in V2X using Deep Reinforcement
Learning (DRL) algorithms. The goal is to assist vehicles in selecting the most
appropriate V2X technology (DSRC/V-VLC) in a serpentine environment. The
results show that the benchmarked algorithms outperform the current
state-of-the-art approaches in terms of redundancy and usage rate of V-VLC
headlights. This translates into a significant reduction in communication costs while
maintaining a high level of reliability. These results provide strong evidence
for integrating advanced DRL decision mechanisms into the architecture as a
promising approach to solving the vertical handover problem in V2X. | [
"Fouzi Boukhalfa",
"Reda Alami",
"Mastane Achab",
"Eric Moulines",
"Mehdi Bennis"
] | 2023-10-04 12:32:14 | http://arxiv.org/abs/2310.03767v1 | http://arxiv.org/pdf/2310.03767v1 | 2310.03767v1 |
Kernel-based function learning in dynamic and non stationary environments | One central theme in machine learning is function estimation from sparse and
noisy data. An example is supervised learning where the elements of the
training set are couples, each containing an input location and an output
response. In the last decades, a substantial amount of work has been devoted to
design estimators for the unknown function and to study their convergence to
the optimal predictor, also characterizing the learning rate. These results
typically rely on stationary assumptions where input locations are drawn from a
probability distribution that does not change in time. In this work, we
consider kernel-based ridge regression and derive convergence conditions under
non stationary distributions, addressing also cases where stochastic adaption
may happen infinitely often. This includes the important
exploration-exploitation problems where e.g. a set of agents/robots has to
monitor an environment to reconstruct a sensorial field and their movements
rules are continuously updated on the basis of the acquired knowledge on the
field and/or the surrounding environment. | [
"Alberto Giaretta",
"Mauro Bisiacco",
"Gianluigi Pillonetto"
] | 2023-10-04 12:31:31 | http://arxiv.org/abs/2310.02767v1 | http://arxiv.org/pdf/2310.02767v1 | 2310.02767v1 |
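The estimator under study is classical kernel ridge regression. A self-contained sketch of its closed form; the Gaussian kernel and the n-scaled regularization are illustrative choices, not the paper's specific setup:

```python
import numpy as np

def gaussian_kernel(A, B, ell=1.0):
    """Gaussian kernel matrix between rows of A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def kernel_ridge_fit_predict(X, y, X_new, lam=1e-2, ell=1.0):
    """Closed-form kernel ridge regression estimate.

    f(x) = k(x, X) (K + lam * n * I)^{-1} y, the estimator whose convergence
    under non-stationary input distributions the paper studies.
    """
    n = len(X)
    K = gaussian_kernel(X, X, ell)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return gaussian_kernel(X_new, X, ell) @ alpha
```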
Comparative Study and Framework for Automated Summariser Evaluation: LangChain and Hybrid Algorithms | Automated Essay Scoring (AES) has proven to be one of the cutting-edge
technologies. Scoring techniques are used for various purposes. Reliable scores
are calculated based on influential variables. Such variables can be computed
by different methods based on the domain. The research is concentrated on the
user's understanding of a given topic. The analysis is based on a scoring index
by using Large Language Models. The user can then compare and contrast the
understanding of a topic that they recently learned. The results are then
contributed towards learning analytics and progression is made for enhancing
the learning ability. In this research, the focus is on summarizing a PDF
document and gauging a user's understanding of its content. The process
involves utilizing a LangChain tool to summarize the PDF and extract the
essential information. By employing this technique, the research aims to
determine how well the user comprehends the summarized content. | [
"Bagiya Lakshmi S",
"Sanjjushri Varshini R",
"Rohith Mahadevan",
"Raja CSP Raman"
] | 2023-10-04 12:14:43 | http://arxiv.org/abs/2310.02759v1 | http://arxiv.org/pdf/2310.02759v1 | 2310.02759v1 |
MUNCH: Modelling Unique 'N Controllable Heads | The automated generation of 3D human heads has been an intriguing and
challenging task for computer vision researchers. Prevailing methods synthesize
realistic avatars but with limited control over the diversity and quality of
rendered outputs and suffer from limited correlation between shape and texture
of the character. We propose a method that offers quality, diversity, control,
and realism along with explainable network design, all desirable features to
game-design artists in the domain. First, our proposed Geometry Generator
identifies disentangled latent directions and generates novel and diverse
samples. A Render Map Generator then learns to synthesize multiple high-fidelity
physically-based render maps, including Albedo, Glossiness, Specular, and
Normals. For artists preferring fine-grained control over the output, we
introduce a novel Color Transformer Model that allows semantic color control
over generated maps. We also introduce quantifiable metrics called Uniqueness
and Novelty and a combined metric to test the overall performance of our model.
Demo for both shapes and textures can be found:
https://munch-seven.vercel.app/. We will release our model along with the
synthetic dataset. | [
"Debayan Deb",
"Suvidha Tripathi",
"Pranit Puri"
] | 2023-10-04 11:44:20 | http://arxiv.org/abs/2310.02753v1 | http://arxiv.org/pdf/2310.02753v1 | 2310.02753v1 |
Fair Feature Selection: A Comparison of Multi-Objective Genetic Algorithms | Machine learning classifiers are widely used to make decisions with a major
impact on people's lives (e.g. accepting or denying a loan, hiring decisions,
etc.). In such applications, the learned classifiers need to be both accurate and
fair with respect to different groups of people, with different values of
variables such as sex and race. This paper focuses on fair feature selection
for classification, i.e. methods that select a feature subset aimed at
maximising both the accuracy and the fairness of the predictions made by a
classifier. More specifically, we compare two recently proposed Genetic
Algorithms (GAs) for fair feature selection that are based on two different
multi-objective optimisation approaches: (a) a Pareto dominance-based GA; and
(b) a lexicographic optimisation-based GA, where maximising accuracy has higher
priority than maximising fairness. Both GAs use the same measures of accuracy
and fairness, allowing for a controlled comparison. As far as we know, this is
the first comparison between the Pareto and lexicographic approaches for fair
classification. The results show that, overall, the lexicographic GA
outperformed the Pareto GA with respect to accuracy without degradation of the
fairness of the learned classifiers. This is an important result because at
present nearly all GAs for fair classification are based on the Pareto
approach, so these results suggest a promising new direction for research in
this area. | [
"James Brookhouse",
"Alex Freitas"
] | 2023-10-04 11:43:11 | http://arxiv.org/abs/2310.02752v1 | http://arxiv.org/pdf/2310.02752v1 | 2310.02752v1 |
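The two multi-objective comparisons are easy to state in code. A sketch, where the tie tolerance is our assumption, not a value from the paper:

```python
def lexicographic_better(a, b, tol=1e-3):
    """Compare (accuracy, fairness) pairs with accuracy taking priority.

    a and b are (accuracy, fairness) tuples (higher is better for both);
    fairness only breaks ties when accuracies are within tol, mirroring
    the lexicographic GA's objective ordering.
    """
    if abs(a[0] - b[0]) > tol:
        return a[0] > b[0]
    return a[1] > b[1]

def pareto_dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    return a[0] >= b[0] and a[1] >= b[1] and (a[0] > b[0] or a[1] > b[1])
```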
SHOT: Suppressing the Hessian along the Optimization Trajectory for Gradient-Based Meta-Learning | In this paper, we hypothesize that gradient-based meta-learning (GBML)
implicitly suppresses the Hessian along the optimization trajectory in the
inner loop. Based on this hypothesis, we introduce an algorithm called SHOT
(Suppressing the Hessian along the Optimization Trajectory) that minimizes the
distance between the parameters of the target and reference models to suppress
the Hessian in the inner loop. Despite dealing with high-order terms, SHOT does
not increase the computational complexity of the baseline model much. It is
agnostic to both the algorithm and architecture used in GBML, making it highly
versatile and applicable to any GBML baseline. To validate the effectiveness of
SHOT, we conduct empirical tests on standard few-shot learning tasks and
qualitatively analyze its dynamics. We confirm our hypothesis empirically and
demonstrate that SHOT outperforms the corresponding baseline. Code is available
at: https://github.com/JunHoo-Lee/SHOT | [
"JunHoo Lee",
"Jayeon Yoo",
"Nojun Kwak"
] | 2023-10-04 11:43:08 | http://arxiv.org/abs/2310.02751v1 | http://arxiv.org/pdf/2310.02751v1 | 2310.02751v1 |
Posterior Sampling Based on Gradient Flows of the MMD with Negative Distance Kernel | We propose conditional flows of the maximum mean discrepancy (MMD) with the
negative distance kernel for posterior sampling and conditional generative
modeling. This MMD, which is also known as energy distance, has several
advantageous properties like efficient computation via slicing and sorting. We
approximate the joint distribution of the ground truth and the observations
using discrete Wasserstein gradient flows and establish an error bound for the
posterior distributions. Further, we prove that our particle flow is indeed a
Wasserstein gradient flow of an appropriate functional. The power of our method
is demonstrated by numerical examples including conditional image generation
and inverse problems like superresolution, inpainting and computed tomography
in low-dose and limited-angle settings. | [
"Paul Hagemann",
"Johannes Hertrich",
"Fabian Altekrüger",
"Robert Beinert",
"Jannis Chemseddine",
"Gabriele Steidl"
] | 2023-10-04 11:40:02 | http://arxiv.org/abs/2310.03054v1 | http://arxiv.org/pdf/2310.03054v1 | 2310.03054v1 |
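The "efficient computation via slicing and sorting" can be illustrated in 1D, where the MMD with the negative distance kernel is the energy distance. A sketch on 1D numpy arrays; the intra-sample terms use the O(n log n) sorting identity, while the cross term is left quadratic for clarity:

```python
import numpy as np

def sum_pairwise_abs(x):
    """Sum of |x_i - x_j| over all ordered pairs, in O(n log n) via sorting."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(n)
    return 2.0 * np.sum((2 * i - n + 1) * x)

def energy_mmd_1d(x, y):
    """Squared MMD with the negative distance kernel (energy distance) in 1D.

    MMD^2 = 2 E|X-Y| - E|X-X'| - E|Y-Y'|, estimated with V-statistics.
    """
    n, m = len(x), len(y)
    cross = np.abs(x[:, None] - y[None, :]).sum() / (n * m)
    return 2 * cross - sum_pairwise_abs(x) / n**2 - sum_pairwise_abs(y) / m**2
```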
SALSA: Semantically-Aware Latent Space Autoencoder | In deep learning for drug discovery, chemical data are often represented as
simplified molecular-input line-entry system (SMILES) sequences which allow for
straightforward implementation of natural language processing methodologies,
one being the sequence-to-sequence autoencoder. However, we observe that
training an autoencoder solely on SMILES is insufficient to learn molecular
representations that are semantically meaningful, where semantics are defined
by the structural (graph-to-graph) similarities between molecules. We
demonstrate by example that autoencoders may map structurally similar molecules
to distant codes, resulting in an incoherent latent space that does not respect
the structural similarities between molecules. To address this shortcoming we
propose Semantically-Aware Latent Space Autoencoder (SALSA), a
transformer-autoencoder modified with a contrastive task, tailored specifically
to learn graph-to-graph similarity between molecules. Formally, the contrastive
objective is to map structurally similar molecules (separated by a single graph
edit) to nearby codes in the latent space. To accomplish this, we generate a
novel dataset comprised of sets of structurally similar molecules and opt for a
supervised contrastive loss that is able to incorporate full sets of positive
samples. We compare SALSA to its ablated counterparts, and show empirically
that the composed training objective (reconstruction and contrastive task)
leads to a higher quality latent space that is more 1) structurally-aware, 2)
semantically continuous, and 3) property-aware. | [
"Kathryn E. Kirchoff",
"Travis Maxfield",
"Alexander Tropsha",
"Shawn M. Gomez"
] | 2023-10-04 11:34:46 | http://arxiv.org/abs/2310.02744v1 | http://arxiv.org/pdf/2310.02744v1 | 2310.02744v1 |
Reward Model Ensembles Help Mitigate Overoptimization | Reinforcement learning from human feedback (RLHF) is a standard approach for
fine-tuning large language models to follow instructions. As part of this
process, learned reward models are used to approximately model human
preferences. However, as imperfect representations of the "true" reward, these
learned reward models are susceptible to \textit{overoptimization}. Gao et al.
(2023) studied this phenomenon in a synthetic human feedback setup with a
significantly larger "gold" reward model acting as the true reward (instead of
humans) and showed that overoptimization remains a persistent problem
regardless of the size of the proxy reward model and training data used. Using
a similar setup, we conduct a systematic study to evaluate the efficacy of
using ensemble-based conservative optimization objectives, specifically
worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for
mitigating reward model overoptimization when using two optimization methods:
(a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We
additionally extend the setup of Gao et al. (2023) to include 25% label noise
to better mirror real-world conditions. Both with and without label noise, we
find that conservative optimization practically eliminates overoptimization and
improves performance by up to 70% for BoN sampling. For PPO, ensemble-based
conservative optimization always reduces overoptimization and outperforms
single reward model optimization. Moreover, combining it with a small KL
penalty successfully prevents overoptimization at no performance cost. Overall,
our results demonstrate that ensemble-based conservative optimization can
effectively counter overoptimization. | [
"Thomas Coste",
"Usman Anwar",
"Robert Kirk",
"David Krueger"
] | 2023-10-04 11:34:22 | http://arxiv.org/abs/2310.02743v1 | http://arxiv.org/pdf/2310.02743v1 | 2310.02743v1 |
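A sketch of the two conservative objectives over an ensemble of reward models; the uncertainty weight lam is a placeholder, not the paper's tuned coefficient:

```python
import torch

def conservative_reward(ensemble_rewards: torch.Tensor,
                        method: str = "uwo", lam: float = 0.5):
    """Combine rewards from an ensemble of reward models.

    ensemble_rewards: (k, batch) tensor, one row per reward model.
    WCO takes the worst-case (minimum) reward; UWO penalizes the mean
    reward by the intra-ensemble variance.
    """
    if method == "wco":
        return ensemble_rewards.min(dim=0).values
    if method == "uwo":
        mean = ensemble_rewards.mean(dim=0)
        var = ensemble_rewards.var(dim=0, unbiased=False)
        return mean - lam * var
    raise ValueError(method)
```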
Comparative Analysis of Imbalanced Malware Byteplot Image Classification using Transfer Learning | Cybersecurity is a major concern due to the increasing reliance on technology
and interconnected systems. Malware detectors help mitigate cyber-attacks by
comparing malware signatures. Machine learning can improve these detectors by
automating feature extraction, identifying patterns, and enhancing dynamic
analysis. In this paper, the performance of six multiclass classification
models is compared on the Malimg dataset, Blended dataset, and Malevis dataset
to gain insights into the effect of class imbalance on model performance and
convergence. It is observed that the greater the class imbalance, the fewer
the epochs required for convergence and the higher the variance across the
performance of different models. Moreover, it is also observed that, for
malware detection, ResNet50, EfficientNetB0, and DenseNet169 can handle imbalanced and balanced
data well. A maximum precision of 97% is obtained for the imbalanced dataset, a
maximum precision of 95% is obtained on the intermediate imbalance dataset, and
a maximum precision of 95% is obtained for the perfectly balanced dataset. | [
"Jayasudha M",
"Ayesha Shaik",
"Gaurav Pendharkar",
"Soham Kumar",
"Muhesh Kumar B",
"Sudharshanan Balaji"
] | 2023-10-04 11:33:36 | http://arxiv.org/abs/2310.02742v1 | http://arxiv.org/pdf/2310.02742v1 | 2310.02742v1 |
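A generic transfer-learning recipe consistent with the setup described; freezing the backbone is our simplification (the paper may fine-tune end-to-end), and the weights API assumes torchvision >= 0.13:

```python
import torch.nn as nn
from torchvision import models

def build_malware_classifier(n_classes: int) -> nn.Module:
    """ImageNet-pretrained ResNet50 with a fresh head for byteplot classes."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False          # fine-tune only the new head
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model
```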
Inclusive Data Representation in Federated Learning: A Novel Approach Integrating Textual and Visual Prompt | Federated Learning (FL) is often impeded by communication overhead issues.
Prompt tuning, as a potential solution, has been introduced to only adjust a
few trainable parameters rather than the whole model. However, current
single-modality prompt tuning approaches fail to comprehensively portray local
clients' data. To overcome this limitation, we present Twin Prompt Federated
learning (TPFL), a pioneering solution that integrates both visual and textual
modalities, ensuring a more holistic representation of local clients' data
characteristics. Furthermore, in order to tackle the data heterogeneity issues,
we introduce the Augmented TPFL (ATPFL), which employs contrastive learning
within TPFL; it not only enhances the global knowledge acquisition of client models
but also fosters the development of robust, compact models. The effectiveness
of TPFL and ATPFL is substantiated by our extensive evaluations, consistently
showing superior performance compared to all baselines. | [
"Zihao Zhao",
"Zhenpeng Shi",
"Yang Liu",
"Wenbo Ding"
] | 2023-10-04 11:20:28 | http://arxiv.org/abs/2310.04455v1 | http://arxiv.org/pdf/2310.04455v1 | 2310.04455v1 |
Extracting Rules from Event Data for Study Planning | In this study, we examine how event data from campus management systems can
be used to analyze the study paths of higher education students. The main goal
is to offer valuable guidance for their study planning. We employ process and
data mining techniques to explore the impact of sequences of taken courses on
academic success. Through the use of decision tree models, we generate
data-driven recommendations in the form of rules for study planning and compare
them to the recommended study plan. The evaluation focuses on RWTH Aachen
University computer science bachelor program students and demonstrates that the
proposed course sequence features effectively explain academic performance
measures. Furthermore, the findings suggest avenues for developing more
adaptable study plans. | [
"Majid Rafiei",
"Duygu Bayrak",
"Mahsa Pourbafrani",
"Gyunam Park",
"Hayyan Helal",
"Gerhard Lakemeyer",
"Wil M. P. van der Aalst"
] | 2023-10-04 11:14:51 | http://arxiv.org/abs/2310.02735v1 | http://arxiv.org/pdf/2310.02735v1 | 2310.02735v1 |
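Generating rules from a decision tree is directly supported by scikit-learn. A sketch in which the course-sequence feature encoding and the depth limit are illustrative assumptions:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def rules_from_course_sequences(X, y, feature_names, max_depth=3):
    """Fit a shallow decision tree and render it as human-readable rules.

    X encodes course-sequence features (e.g., 'took course B before A'),
    y an academic-success label.
    """
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    return export_text(tree, feature_names=list(feature_names))
```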
Functional trustworthiness of AI systems by statistically valid testing | The authors are concerned about the safety, health, and rights of the
European citizens due to inadequate measures and procedures required by the
current draft of the EU Artificial Intelligence (AI) Act for the conformity
assessment of AI systems. We observe that not only the current draft of the EU
AI Act, but also the accompanying standardization efforts in CEN/CENELEC, have
resorted to the position that real functional guarantees of AI systems
would supposedly be unrealistic and too complex anyway. Yet enacting a
conformity assessment procedure that creates the false illusion of trust in
insufficiently assessed AI systems is at best naive and at worst grossly
negligent. The EU AI Act thus misses the point of ensuring quality by
functional trustworthiness and correctly attributing responsibilities.
The trustworthiness of an AI decision system lies first and foremost in the
correct statistical testing on randomly selected samples and in the precision
of the definition of the application domain, which enables drawing samples in
the first place. We will subsequently call this testable quality functional
trustworthiness. It includes a design, development, and deployment that enables
correct statistical testing of all relevant functions.
We are firmly convinced and advocate that a reliable assessment of the
statistical functional properties of an AI system has to be the indispensable,
mandatory nucleus of the conformity assessment. In this paper, we describe the
three necessary elements to establish a reliable functional trustworthiness,
i.e., (1) the definition of the technical distribution of the application, (2)
the risk-based minimum performance requirements, and (3) the statistically
valid testing based on independent random samples. | [
"Bernhard Nessler",
"Thomas Doms",
"Sepp Hochreiter"
] | 2023-10-04 11:07:52 | http://arxiv.org/abs/2310.02727v1 | http://arxiv.org/pdf/2310.02727v1 | 2310.02727v1 |
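One concrete instance of statistically valid testing on independent random samples is an exact binomial confidence bound on accuracy; the paper does not prescribe this particular bound, so the sketch below is illustrative:

```python
from scipy.stats import beta

def accuracy_lower_bound(successes: int, n: int, confidence: float = 0.95):
    """One-sided Clopper-Pearson lower bound on accuracy from n i.i.d. samples."""
    if successes == 0:
        return 0.0
    return beta.ppf(1 - confidence, successes, n - successes + 1)

# 970 correct out of 1000 random samples: accuracy >= ~0.96 at 95% confidence.
print(accuracy_lower_bound(970, 1000))
```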
End-to-End Training of a Neural HMM with Label and Transition Probabilities | We investigate a novel modeling approach for end-to-end neural network
training using hidden Markov models (HMM) where the transition probabilities
between hidden states are modeled and learned explicitly. Most contemporary
sequence-to-sequence models allow for from-scratch training by summing over all
possible label segmentations in a given topology. In our approach there are
explicit, learnable probabilities for transitions between segments as opposed
to a blank label that implicitly encodes duration statistics. We implement a
GPU-based forward-backward algorithm that enables the simultaneous training of
label and transition probabilities. We investigate recognition results and,
additionally, Viterbi alignments of our models. We find that while the
transition model training does not improve recognition performance, it has a
positive impact on the alignment quality. The generated alignments are shown to
be viable targets in state-of-the-art Viterbi trainings. | [
"Daniel Mann",
"Tina Raissi",
"Wilfried Michel",
"Ralf Schlüter",
"Hermann Ney"
] | 2023-10-04 10:56:00 | http://arxiv.org/abs/2310.02724v2 | http://arxiv.org/pdf/2310.02724v2 | 2310.02724v2 |
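The simultaneous training of label and transition probabilities rests on a differentiable forward pass. A minimal log-space sketch; the paper's implementation is a GPU forward-backward algorithm, not this Python loop:

```python
import numpy as np
from scipy.special import logsumexp

def forward_log(log_pi, log_A, log_B):
    """Log-space HMM forward pass with explicit transition probabilities.

    log_pi: (S,) initial state log-probs; log_A: (S, S) transition log-probs
    (both learnable in the paper's setup); log_B: (T, S) per-frame label
    log-probs from the neural network. Returns log p(observations).
    """
    T, S = log_B.shape
    alpha = log_pi + log_B[0]
    for t in range(1, T):
        # Sum over source states, then add the current frame's label scores.
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[t]
    return logsumexp(alpha)
```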
Leveraging Temporal Graph Networks Using Module Decoupling | Modern approaches for learning on dynamic graphs have adopted the use of
batches instead of applying updates one by one. The use of batches allows these
techniques to become helpful in streaming scenarios where updates to graphs are
received at extreme speeds. Using batches, however, forces the models to update
infrequently, which results in the degradation of their performance. In this
work, we suggest a decoupling strategy that enables the models to update
frequently while using batches. By decoupling the core modules of temporal
graph networks and implementing them using a minimal number of learnable
parameters, we have developed the Lightweight Decoupled Temporal Graph Network
(LDTGN), an exceptionally efficient model for learning on dynamic graphs. LDTGN
was validated on various dynamic graph benchmarks, providing comparable or
state-of-the-art results with significantly higher throughput than prior
art. Notably, our method outperforms previous approaches by more than 20% on
benchmarks that require rapid model update rates, such as USLegis or UNTrade.
The code to reproduce our experiments is available at
\href{https://orfeld415.github.io/module-decoupling}{this http url}. | [
"Or Feldman",
"Chaim Baskin"
] | 2023-10-04 10:52:51 | http://arxiv.org/abs/2310.02721v1 | http://arxiv.org/pdf/2310.02721v1 | 2310.02721v1 |
Understanding Pan-Sharpening via Generalized Inverse | Pan-sharpening algorithms use a panchromatic image and a multispectral image
to obtain an image with both high spatial and high spectral resolution. However,
existing algorithms are optimized according to different criteria. We adopt a simple
matrix equation to describe the pan-sharpening problem. The condition for the
existence of a solution and the attainment of spectral and spatial resolution are discussed.
A down-sampling enhancement method is introduced for better acquiring the
spatial and spectral down-sampling matrices. Using generalized inverse theory,
we derive two forms of generalized inverse matrix formulations that correspond
to the two prominent classes of pan-sharpening methods, namely component
substitution and multi-resolution analysis. Specifically, Gram-Schmidt
Adaptive (GSA) is proved to follow the generalized inverse matrix
formulation of component substitution. A model prior for the generalized inverse
matrix of the spectral function is presented. The theoretical errors are
analyzed. Both synthetic and real-data experiments are conducted. The
proposed methods qualitatively produce better and sharper results than other methods in
both synthetic and real experiments. The down-sampling enhancement is
shown to yield better results, both quantitatively and qualitatively, in the real
experiments. Generalized inverse matrix theory helps us better understand
pan-sharpening. | [
"Shiqi Liu",
"Yutong Bai",
"Xinyang Han",
"Alan Yuille"
] | 2023-10-04 10:41:21 | http://arxiv.org/abs/2310.02718v1 | http://arxiv.org/pdf/2310.02718v1 | 2310.02718v1 |
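The generalized-inverse view can be sketched directly: stack the spatial and spectral degradation operators and apply the Moore-Penrose pseudoinverse. The construction of the two downsampling matrices is what the paper's enhancement method targets; this least-norm solution is one instance of its matrix equation, not the full method:

```python
import numpy as np

def pansharpen_pinv(D_spatial, D_spectral, ms, pan):
    """Least-norm pan-sharpening via the Moore-Penrose pseudoinverse.

    Model (flattened images): ms = D_spatial @ x and pan = D_spectral @ x,
    where x is the unknown high-resolution multispectral image.
    """
    M = np.vstack([D_spatial, D_spectral])   # stacked degradation operators
    y = np.concatenate([ms, pan])            # stacked observations
    return np.linalg.pinv(M) @ y
```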
Online Clustering of Bandits with Misspecified User Models | The contextual linear bandit is an important online learning problem where
given arm features, a learning agent selects an arm at each round to maximize
the cumulative rewards in the long run. A line of works, called the clustering
of bandits (CB), utilize the collaborative effect over user preferences and
have shown significant improvements over classic linear bandit algorithms.
However, existing CB algorithms require well-specified linear user models and
can fail when this critical assumption does not hold. Whether robust CB
algorithms can be designed for more practical scenarios with misspecified user
models remains an open problem. In this paper, we are the first to present the
important problem of clustering of bandits with misspecified user models
(CBMUM), where the expected rewards in user models can be perturbed away from
perfect linear models. We devise two robust CB algorithms, RCLUMB and RSCLUMB
(representing the learned clustering structure with dynamic graph and sets,
respectively), that can accommodate the inaccurate user preference estimations
and erroneous clustering caused by model misspecifications. We prove regret
upper bounds of $O(\epsilon_*T\sqrt{md\log T} + d\sqrt{mT}\log T)$ for our
algorithms under milder assumptions than previous CB works (notably, we move
past a restrictive technical assumption on the distribution of the arms), which
match the lower bound asymptotically in $T$ up to logarithmic factors, and also
match the state-of-the-art results in several degenerate cases. The techniques
in proving the regret caused by misclustering users are quite general and may
be of independent interest. Experiments on both synthetic and real-world data
show our outperformance over previous algorithms. | [
"Zhiyong Wang",
"Jize Xie",
"Xutong Liu",
"Shuai Li",
"John C. S. Lui"
] | 2023-10-04 10:40:50 | http://arxiv.org/abs/2310.02717v2 | http://arxiv.org/pdf/2310.02717v2 | 2310.02717v2 |
scHyena: Foundation Model for Full-Length Single-Cell RNA-Seq Analysis in Brain | Single-cell RNA sequencing (scRNA-seq) has made significant strides in
unraveling the intricate cellular diversity within complex tissues. This is
particularly critical in the brain, presenting a greater diversity of cell
types than other tissue types, to gain a deeper understanding of brain function
within various cellular contexts. However, analyzing scRNA-seq data remains a
challenge due to inherent measurement noise stemming from dropout events and
the limited utilization of extensive gene expression information. In this work,
we introduce scHyena, a foundation model designed to address these challenges
and enhance the accuracy of scRNA-seq analysis in the brain. Specifically,
inspired by the recent Hyena operator, we design a novel Transformer
architecture called single-cell Hyena (scHyena) that is equipped with a linear
adaptor layer, positional encoding via gene embedding, and a
bidirectional Hyena operator. This enables us to process full-length
scRNA-seq data without losing any information from the raw data. In particular,
our model learns generalizable features of cells and genes through pre-training
scHyena using the full length of scRNA-seq data. We demonstrate the superior
performance of scHyena compared to other benchmark methods in downstream tasks,
including cell type classification and scRNA-seq imputation. | [
"Gyutaek Oh",
"Baekgyu Choi",
"Inkyung Jung",
"Jong Chul Ye"
] | 2023-10-04 10:30:08 | http://arxiv.org/abs/2310.02713v1 | http://arxiv.org/pdf/2310.02713v1 | 2310.02713v1 |
ED-NeRF: Efficient Text-Guided Editing of 3D Scene using Latent Space NeRF | Recently, there has been a significant advancement in text-to-image diffusion
models, leading to groundbreaking performance in 2D image generation. These
advancements have been extended to 3D models, enabling the generation of novel
3D objects from textual descriptions. This has evolved into NeRF editing
methods, which allow the manipulation of existing 3D objects through textual
conditioning. However, existing NeRF editing techniques have faced limitations
in their performance due to slow training speeds and the use of loss functions
that do not adequately consider editing. To address this, here we present a
novel 3D NeRF editing approach dubbed ED-NeRF by successfully embedding
real-world scenes into the latent space of the latent diffusion model (LDM)
through a unique refinement layer. This approach enables us to obtain a NeRF
backbone that is not only faster but also more amenable to editing compared to
traditional image space NeRF editing. Furthermore, we propose an improved loss
function tailored for editing by migrating the delta denoising score (DDS)
distillation loss, originally used in 2D image editing, to the three-dimensional
domain. This novel loss function surpasses the well-known score distillation
sampling (SDS) loss in terms of suitability for editing purposes. Our
experimental results demonstrate that ED-NeRF achieves faster editing speed
while producing improved output quality compared to state-of-the-art 3D editing
models. | [
"Jangho Park",
"Gihyun Kwon",
"Jong Chul Ye"
] | 2023-10-04 10:28:38 | http://arxiv.org/abs/2310.02712v1 | http://arxiv.org/pdf/2310.02712v1 | 2310.02712v1 |