title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data | Backdoor attacks pose a serious security threat for training neural networks
as they surreptitiously introduce hidden functionalities into a model. Such
backdoors remain silent during inference on clean inputs, evading detection due
to inconspicuous behavior. However, once a specific trigger pattern appears in
the input data, the backdoor activates, causing the model to execute its
concealed function. Detecting such poisoned samples within vast datasets is
virtually impossible through manual inspection. To address this challenge, we
propose a novel approach that enables model training on potentially poisoned
datasets by utilizing the power of recent diffusion models. Specifically, we
create synthetic variations of all training samples, leveraging the inherent
resilience of diffusion models to potential trigger patterns in the data. By
combining this generative approach with knowledge distillation, we produce
student models that maintain their general performance on the task while
exhibiting robust resistance to backdoor triggers. | [
"Lukas Struppek",
"Martin B. Hentschel",
"Clifton Poth",
"Dominik Hintersdorf",
"Kristian Kersting"
] | 2023-10-10 07:25:06 | http://arxiv.org/abs/2310.06372v1 | http://arxiv.org/pdf/2310.06372v1 | 2310.06372v1 |
Partition-based differentially private synthetic data generation | Private synthetic data sharing is preferred because it preserves the distribution
and nuances of the original data, unlike summary statistics. State-of-the-art
methods adopt a select-measure-generate paradigm, but measuring large-domain
marginals still incurs substantial error, and allocating the privacy budget iteratively
remains difficult. To address these issues, our method employs a
partition-based approach that effectively reduces errors and improves the
quality of synthetic data, even with a limited privacy budget. Results from our
experiments demonstrate the superiority of our method over existing approaches.
The synthetic data produced using our approach exhibits improved quality and
utility, making it a preferable choice for private synthetic data sharing. | [
"Meifan Zhang",
"Dihang Deng",
"Lihua Yin"
] | 2023-10-10 07:23:37 | http://arxiv.org/abs/2310.06371v1 | http://arxiv.org/pdf/2310.06371v1 | 2310.06371v1 |
Geometrically Aligned Transfer Encoder for Inductive Transfer in Regression Tasks | Transfer learning is a crucial technique for handling a small amount of data
that is potentially related to other abundant data. However, most of the
existing methods are focused on classification tasks using images and language
datasets. Therefore, in order to expand the transfer learning scheme to
regression tasks, we propose a novel transfer technique based on differential
geometry, namely the Geometrically Aligned Transfer Encoder (GATE). In this
method, we interpret the latent vectors from the model as existing on a curved
Riemannian manifold. We find a proper diffeomorphism between pairs of tasks to
ensure that every arbitrary point maps to a locally flat coordinate in the
overlapping region, allowing the transfer of knowledge from the source to the
target data. This also serves as an effective regularizer for the model to
behave in extrapolation regions. In this article, we demonstrate that GATE
outperforms conventional methods and exhibits stable behavior in both the
latent space and extrapolation regions for various molecular graph datasets. | [
"Sung Moon Ko",
"Sumin Lee",
"Dae-Woong Jeong",
"Woohyung Lim",
"Sehui Han"
] | 2023-10-10 07:11:25 | http://arxiv.org/abs/2310.06369v1 | http://arxiv.org/pdf/2310.06369v1 | 2310.06369v1 |
DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening | Virtual screening, which identifies potential drugs from vast compound
databases to bind with a particular protein pocket, is a critical step in
AI-assisted drug discovery. Traditional docking methods are highly
time-consuming, and can only work with a restricted search library in real-life
applications. Recent supervised learning approaches using scoring functions for
binding-affinity prediction, although promising, have not yet surpassed docking
methods due to their strong dependency on limited data with reliable
binding-affinity labels. In this paper, we propose a novel contrastive learning
framework, DrugCLIP, by reformulating virtual screening as a dense retrieval
task and employing contrastive learning to align representations of binding
protein pockets and molecules from a large quantity of pairwise data without
explicit binding-affinity scores. We also introduce a biological-knowledge
inspired data augmentation strategy to learn better protein-molecule
representations. Extensive experiments show that DrugCLIP significantly
outperforms traditional docking and supervised learning methods on diverse
virtual screening benchmarks with highly reduced computation time, especially
in the zero-shot setting. | [
"Bowen Gao",
"Bo Qiang",
"Haichuan Tan",
"Minsi Ren",
"Yinjun Jia",
"Minsi Lu",
"Jingjing Liu",
"Weiying Ma",
"Yanyan Lan"
] | 2023-10-10 07:08:35 | http://arxiv.org/abs/2310.06367v1 | http://arxiv.org/pdf/2310.06367v1 | 2310.06367v1 |
Core-Intermediate-Peripheral Index: Factor Analysis of Neighborhood and Shortest Paths-based Centrality Metrics | We perform factor analysis on the raw data of the four major neighborhood and
shortest paths-based centrality metrics (Degree, Eigenvector, Betweenness, and
Closeness) and propose a novel quantitative measure called the
Core-Intermediate-Peripheral (CIP) Index to capture the extent to which a
node could play the role of a core node (nodes at the center of a network with
larger values for any centrality metric) vis-a-vis a peripheral node (nodes
that exist at the periphery of a network with lower values for any centrality
metric). We conduct factor analysis (varimax-based rotation of the
Eigenvectors) on the transpose matrix of the raw centrality metrics dataset,
with the node ids as features, under the hypothesis that there are two factors
(core and peripheral) that drive the values incurred by the nodes with respect
to the centrality metrics. We test our approach on a diverse suite of 12
complex real-world networks. | [
"Natarajan Meghanathan"
] | 2023-10-10 06:52:20 | http://arxiv.org/abs/2310.06358v1 | http://arxiv.org/pdf/2310.06358v1 | 2310.06358v1 |
Boosting Continuous Control with Consistency Policy | Due to its training stability and strong expressiveness, the diffusion model has
attracted considerable attention in offline reinforcement learning. However,
several challenges have also come with it: 1) The demand for a large number of
diffusion steps makes the diffusion-model-based methods time inefficient and
limits their applications in real-time control; 2) How to achieve policy
improvement with accurate guidance for diffusion model-based policy is still an
open problem. Inspired by the consistency model, we propose a novel
time-efficient method named Consistency Policy with Q-Learning (CPQL), which
derives actions from noise in a single step. By establishing a mapping from the
reverse diffusion trajectories to the desired policy, we simultaneously address
the issues of time efficiency and inaccurate guidance when updating diffusion
model-based policy with the learned Q-function. We demonstrate that CPQL can
achieve policy improvement with accurate guidance for offline reinforcement
learning, and can be seamlessly extended for online RL tasks. Experimental
results indicate that CPQL achieves new state-of-the-art performance on 11
offline and 21 online tasks, significantly improving inference speed by nearly
45 times compared to Diffusion-QL. We will release our code later. | [
"Yuhui Chen",
"Haoran Li",
"Dongbin Zhao"
] | 2023-10-10 06:26:05 | http://arxiv.org/abs/2310.06343v1 | http://arxiv.org/pdf/2310.06343v1 | 2310.06343v1 |
Federated Learning with Reduced Information Leakage and Computation | Federated learning (FL) is a distributed learning paradigm that allows
multiple decentralized clients to collaboratively learn a common model without
sharing local data. Although local data is not exposed directly, privacy
concerns nonetheless exist as clients' sensitive information can be inferred
from intermediate computations. Moreover, such information leakage accumulates
substantially over time as the same data is repeatedly used during the
iterative learning process. As a result, it can be particularly difficult to
balance the privacy-accuracy trade-off when designing privacy-preserving FL
algorithms. In this paper, we introduce Upcycled-FL, a novel federated learning
framework with first-order approximation applied at every even iteration. Under
this framework, half of the FL updates incur no information leakage and require
much less computation. We first conduct a theoretical analysis of the
convergence (rate) of Upcycled-FL, and then apply perturbation mechanisms to
preserve privacy. Experiments on real-world data show that Upcycled-FL
consistently outperforms existing methods over heterogeneous data, and
significantly improves the privacy-accuracy trade-off while reducing the
training time by 48% on average. | [
"Tongxin Yin",
"Xueru Zhang",
"Mohammad Mahdi Khalili",
"Mingyan Liu"
] | 2023-10-10 06:22:06 | http://arxiv.org/abs/2310.06341v1 | http://arxiv.org/pdf/2310.06341v1 | 2310.06341v1 |
Automatic nodule identification and differentiation in ultrasound videos to facilitate per-nodule examination | Ultrasound is a vital diagnostic technique in health screening, with the
advantages of being non-invasive, cost-effective, and radiation-free, and is
therefore widely applied in the diagnosis of nodules. However, it relies heavily on
the expertise and clinical experience of the sonographer. In ultrasound images,
a single nodule might present heterogeneous appearances in different
cross-sectional views which makes it hard to perform per-nodule examination.
Sonographers usually discriminate different nodules by examining the nodule
features and the surrounding structures like gland and duct, which is
cumbersome and time-consuming. To address this problem, we collected hundreds
of breast ultrasound videos and built a nodule reidentification system that
consists of two parts: an extractor based on the deep learning model that can
extract feature vectors from the input video clips and a real-time clustering
algorithm that automatically groups feature vectors by nodules. The system
obtains satisfactory results and exhibits the capability to differentiate
ultrasound videos. As far as we know, this is the first attempt to apply the
re-identification technique in the ultrasound field. | [
"Siyuan Jiang",
"Yan Ding",
"Yuling Wang",
"Lei Xu",
"Wenli Dai",
"Wanru Chang",
"Jianfeng Zhang",
"Jie Yu",
"Jianqiao Zhou",
"Chunquan Zhang",
"Ping Liang",
"Dexing Kong"
] | 2023-10-10 06:20:14 | http://arxiv.org/abs/2310.06339v1 | http://arxiv.org/pdf/2310.06339v1 | 2310.06339v1 |
Learning bounded-degree polytrees with known skeleton | We establish finite-sample guarantees for efficient proper learning of
bounded-degree polytrees, a rich class of high-dimensional probability
distributions and a subclass of Bayesian networks, a widely-studied type of
graphical model. Recently, Bhattacharyya et al. (2021) obtained finite-sample
guarantees for recovering tree-structured Bayesian networks, i.e., 1-polytrees.
We extend their results by providing an efficient algorithm which learns
$d$-polytrees in polynomial time and sample complexity for any bounded $d$ when
the underlying undirected graph (skeleton) is known. We complement our
algorithm with an information-theoretic sample complexity lower bound, showing
that the dependence on the dimension and target accuracy parameters is nearly
tight. | [
"Davin Choo",
"Joy Qiping Yang",
"Arnab Bhattacharyya",
"Clément L. Canonne"
] | 2023-10-10 06:03:51 | http://arxiv.org/abs/2310.06333v1 | http://arxiv.org/pdf/2310.06333v1 | 2310.06333v1 |
Exploit the antenna response consistency to define the alignment criteria for CSI data | Self-supervised learning (SSL) for WiFi-based human activity recognition
(HAR) holds great promise due to its ability to address the challenge of
insufficient labeled data. However, directly transplanting SSL algorithms,
especially contrastive learning, originally designed for other domains to CSI
data, often fails to achieve the expected performance. We attribute this issue
to the inappropriate alignment criteria, which disrupt the semantic distance
consistency between the feature space and the input space. To address this
challenge, we introduce \textbf{A}ntenna \textbf{R}esponse
\textbf{C}onsistency (ARC) as a solution to define proper alignment criteria.
ARC is designed to retain semantic information from the input space while
introducing robustness to real-world noise. We analyze ARC from the perspective
of CSI data structure, demonstrating that its optimal solution leads to a
direct mapping from input CSI data to action vectors in the feature map.
Furthermore, we provide extensive experimental evidence to validate the
effectiveness of ARC in improving the performance of self-supervised learning
for WiFi-based HAR. | [
"Ke Xu",
"Jiangtao Wang",
"Hongyuan Zhu",
"Dingchang Zheng"
] | 2023-10-10 05:54:00 | http://arxiv.org/abs/2310.06328v1 | http://arxiv.org/pdf/2310.06328v1 | 2310.06328v1 |
Predicting Three Types of Freezing of Gait Events Using Deep Learning Models | Freezing of gait is a Parkinson's Disease symptom that episodically afflicts
a patient with the inability to step or turn while walking. While medical
experts have discovered various triggers and alleviating actions for freezing
of gait, the underlying causes and prediction models are still being explored
today. Current freezing of gait prediction models that utilize machine learning
achieve high sensitivity and specificity in freezing of gait predictions based
on time-series data; however, these models lack specifications on the type of
freezing of gait events. We develop various deep learning models using the
transformer encoder architecture plus Bidirectional LSTM layers and different
feature sets to predict the three different types of freezing of gait events.
The best performing model achieves a score of 0.427 on testing data, which
would rank in the top 5 of Kaggle's Freezing of Gait prediction competition, hosted by
the Michael J. Fox Foundation. However, we also recognize overfitting in
training data that could be potentially improved through pseudo labelling on
additional data and model architecture simplification. | [
"Wen Tao Mo",
"Jonathan H. Chan"
] | 2023-10-10 05:35:02 | http://arxiv.org/abs/2310.06322v1 | http://arxiv.org/pdf/2310.06322v1 | 2310.06322v1 |
Transfer learning-based physics-informed convolutional neural network for simulating flow in porous media with time-varying controls | A physics-informed convolutional neural network is proposed to simulate two
phase flow in porous media with time-varying well controls. While most
PICNNs in the existing literature work on parameter-to-state mapping, our
proposed network parameterizes the solution with time-varying controls to
establish a control-to-state regression. Firstly, a finite volume scheme is
adopted to discretize the flow equations and formulate a loss function that respects
mass conservation laws. Neumann boundary conditions are seamlessly incorporated
into the semi-discretized equations so no additional loss term is needed. The
network architecture comprises two parallel U-Net structures, with network
inputs being well controls and outputs being the system states. To capture the
time-dependent relationship between inputs and outputs, the network is well
designed to mimic discretized state space equations. We train the network
progressively for every timestep, enabling it to simultaneously predict oil
pressure and water saturation at each timestep. After training the network for
one timestep, we leverage transfer learning techniques to expedite the training
process for subsequent timesteps. The proposed model is used to simulate
oil-water porous flow scenarios with varying reservoir gridblocks, and aspects
including computational efficiency and accuracy are compared against
corresponding numerical approaches. The results underscore the potential of
PICNN in effectively simulating systems with numerous grid blocks, as
computation time does not scale with model dimensionality. We assess the
temporal error using 10 different testing controls with variation in magnitude
and another 10 with a higher alternation frequency using the proposed control-to-state
architecture. Our observations suggest the need for a more robust and reliable
model when dealing with controls that exhibit significant variations in
magnitude or frequency. | [
"Jungang Chen",
"Eduardo Gildin",
"John E. Killough"
] | 2023-10-10 05:29:33 | http://arxiv.org/abs/2310.06319v1 | http://arxiv.org/pdf/2310.06319v1 | 2310.06319v1 |
Adversarial Masked Image Inpainting for Robust Detection of Mpox and Non-Mpox | Due to the lack of efficient mpox diagnostic technology, mpox cases continue
to increase. Recently, the great potential of deep learning models in detecting
mpox and non-mpox has been proven. However, existing models learn image
representations via image classification, which means they may be easily
susceptible to interference from real-world noise, require diverse non-mpox
images, and fail to detect abnormal input. These drawbacks make classification
models inapplicable in real-world settings. To address these challenges, we
propose "Mask, Inpainting, and Measure" (MIM). In MIM's pipeline, a generative
adversarial network only learns mpox image representations by inpainting the
masked mpox images. Then, MIM determines whether the input belongs to mpox by
measuring the similarity between the inpainted image and the original image.
The underlying intuition is that since MIM solely models mpox images, it
struggles to accurately inpaint non-mpox images in real-world settings. Without
utilizing any non-mpox images, MIM cleverly detects mpox and non-mpox and can
handle abnormal inputs. We used the recognized mpox dataset (MSLD) and images
of eighteen non-mpox skin diseases to verify the effectiveness and robustness
of MIM. Experimental results show that the average AUROC of MIM achieves
0.8237. In addition, we demonstrated the drawbacks of classification models and
buttressed the potential of MIM through clinical validation. Finally, we
developed an online smartphone app to provide free testing to the public in
affected areas. This work is the first to employ generative models to improve mpox
detection and provides new insights into binary decision-making tasks in
medical images. | [
"Yubiao Yue",
"Zhenzhang Li"
] | 2023-10-10 05:28:02 | http://arxiv.org/abs/2310.06318v1 | http://arxiv.org/pdf/2310.06318v1 | 2310.06318v1 |
Discovering Mixtures of Structural Causal Models from Time Series Data | In fields such as finance, climate science, and neuroscience, inferring
causal relationships from time series data poses a formidable challenge. While
contemporary techniques can handle nonlinear relationships between variables
and flexible noise distributions, they rely on the simplifying assumption that
data originates from the same underlying causal model. In this work, we relax
this assumption and perform causal discovery from time series data originating
from mixtures of different causal models. We infer both the underlying
structural causal models and the posterior probability for each sample
belonging to a specific mixture component. Our approach employs an end-to-end
training process that maximizes an evidence-lower bound for data likelihood.
Through extensive experimentation on both synthetic and real-world datasets, we
demonstrate that our method surpasses state-of-the-art benchmarks in causal
discovery tasks, particularly when the data emanates from diverse underlying
causal graphs. Theoretically, we prove the identifiability of such a model
under some mild assumptions. | [
"Sumanth Varambally",
"Yi-An Ma",
"Rose Yu"
] | 2023-10-10 05:13:10 | http://arxiv.org/abs/2310.06312v1 | http://arxiv.org/pdf/2310.06312v1 | 2310.06312v1 |
Ensemble Active Learning by Contextual Bandits for AI Incubation in Manufacturing | It is challenging but important to save annotation efforts in streaming data
acquisition to maintain data quality for supervised learning base learners. We
propose an ensemble active learning method to actively acquire samples for
annotation by contextual bandits, which will enforce the
exploration-exploitation balance and lead to improved AI modeling
performance. | [
"Yingyan Zeng",
"Xiaoyu Chen",
"Ran Jin"
] | 2023-10-10 04:44:35 | http://arxiv.org/abs/2310.06306v2 | http://arxiv.org/pdf/2310.06306v2 | 2310.06306v2 |
Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition | We investigate phase transitions in a Toy Model of Superposition (TMS) using
Singular Learning Theory (SLT). We derive a closed formula for the theoretical
loss and, in the case of two hidden dimensions, discover that regular $k$-gons
are critical points. We present supporting theory indicating that the local
learning coefficient (a geometric invariant) of these $k$-gons determines phase
transitions in the Bayesian posterior as a function of training sample size. We
then show empirically that the same $k$-gon critical points also determine the
behavior of SGD training. The picture that emerges adds evidence to the
conjecture that the SGD learning trajectory is subject to a sequential learning
mechanism. Specifically, we find that the learning process in TMS, be it
through SGD or Bayesian learning, can be characterized by a journey through
parameter space from regions of high loss and low complexity to regions of low
loss and high complexity. | [
"Zhongtian Chen",
"Edmund Lau",
"Jake Mendel",
"Susan Wei",
"Daniel Murfet"
] | 2023-10-10 04:26:04 | http://arxiv.org/abs/2310.06301v1 | http://arxiv.org/pdf/2310.06301v1 | 2310.06301v1 |
Gem5Pred: Predictive Approaches For Gem5 Simulation Time | Gem5, an open-source, flexible, and cost-effective simulator, is widely
recognized and utilized in both academia and industry for hardware
simulation. However, the typically time-consuming nature of simulating programs
on Gem5 underscores the need for a predictive model that can estimate
simulation time. As of now, no such dataset or model exists. In response to
this gap, this paper makes a novel contribution by introducing a unique dataset
specifically created for this purpose. We also conducted an analysis of the
effects of different instruction types on the simulation time in Gem5. After
this, we employ three distinct models leveraging CodeBERT to execute the
prediction task based on the developed dataset. Our best-performing regression model
achieves a Mean Absolute Error (MAE) of 0.546, while our top-performing
classification model records an Accuracy of 0.696. Our models establish a
foundation for future investigations on this topic, serving as benchmarks
against which subsequent models can be compared. We hope that our contribution
can stimulate further research in this field. The dataset we used is available
at https://github.com/XueyangLiOSU/Gem5Pred. | [
"Tian Yan",
"Xueyang Li",
"Sifat Ut Taki",
"Saeid Mehrdad"
] | 2023-10-10 04:05:26 | http://arxiv.org/abs/2310.06290v1 | http://arxiv.org/pdf/2310.06290v1 | 2310.06290v1 |
Better and Simpler Lower Bounds for Differentially Private Statistical Estimation | We provide improved lower bounds for two well-known high-dimensional private
estimation tasks. First, we prove that for estimating the covariance of a
Gaussian up to spectral error $\alpha$ with approximate differential privacy,
one needs $\tilde{\Omega}\left(\frac{d^{3/2}}{\alpha \varepsilon} +
\frac{d}{\alpha^2}\right)$ samples for any $\alpha \le O(1)$, which is tight up
to logarithmic factors. This improves over previous work which established this
for $\alpha \le O\left(\frac{1}{\sqrt{d}}\right)$, and is also simpler than
previous work. Next, we prove that for estimating the mean of a heavy-tailed
distribution with bounded $k$th moments with approximate differential privacy,
one needs $\tilde{\Omega}\left(\frac{d}{\alpha^{k/(k-1)} \varepsilon} +
\frac{d}{\alpha^2}\right)$ samples. This matches known upper bounds and
improves over the best known lower bound for this problem, which only holds for
pure differential privacy, or when $k = 2$. Our techniques follow the method of
fingerprinting and are generally quite simple. Our lower bound for heavy-tailed
estimation is based on a black-box reduction from privately estimating
identity-covariance Gaussians. Our lower bound for covariance estimation
utilizes a Bayesian approach to show that, under an Inverse Wishart prior
distribution for the covariance matrix, no private estimator can be accurate
even in expectation, without sufficiently many samples. | [
"Shyam Narayanan"
] | 2023-10-10 04:02:43 | http://arxiv.org/abs/2310.06289v1 | http://arxiv.org/pdf/2310.06289v1 | 2310.06289v1 |
Suppressing Overestimation in Q-Learning through Adversarial Behaviors | The goal of this paper is to propose a new Q-learning algorithm with a dummy
adversarial player, which is called dummy adversarial Q-learning (DAQ), that
can effectively regulate the overestimation bias in standard Q-learning. With
the dummy player, the learning can be formulated as a two-player zero-sum game.
The proposed DAQ unifies several Q-learning variations to control
overestimation biases, such as maxmin Q-learning and minmax Q-learning
(proposed in this paper) in a single framework. The proposed DAQ is a simple
but effective way to suppress the overestimation bias through dummy adversarial
behaviors and can be easily applied to off-the-shelf reinforcement learning
algorithms to improve their performance. A finite-time convergence of DAQ is
analyzed from an integrated perspective by adapting an adversarial Q-learning.
The performance of the suggested DAQ is empirically demonstrated under various
benchmark environments. | [
"HyeAnn Lee",
"Donghwan Lee"
] | 2023-10-10 03:46:32 | http://arxiv.org/abs/2310.06286v1 | http://arxiv.org/pdf/2310.06286v1 | 2310.06286v1 |
MuseChat: A Conversational Music Recommendation System for Videos | We introduce MuseChat, an innovative dialog-based music recommendation
system. This unique platform not only offers interactive user engagement but
also suggests music tailored for input videos, so that users can refine and
personalize their music selections. In contrast, previous systems predominantly
emphasized content compatibility, often overlooking the nuances of users'
individual preferences. For example, all the datasets only provide basic
music-video pairings or such pairings with textual music descriptions. To
address this gap, our research offers three contributions. First, we devise a
conversation-synthesis method that simulates a two-turn interaction between a
user and a recommendation system, which leverages pre-trained music tags and
artist information. In this interaction, users submit a video to the system,
which then suggests a suitable music piece with a rationale. Afterwards, users
communicate their musical preferences, and the system presents a refined music
recommendation with reasoning. Second, we introduce a multi-modal
recommendation engine that matches music either by aligning it with visual cues
from the video or by harmonizing visual information, feedback from previously
recommended music, and the user's textual input. Third, we bridge music
representations and textual data with a Large Language Model (Vicuna-7B). This
alignment equips MuseChat to deliver music recommendations and their underlying
reasoning in a manner resembling human communication. Our evaluations show that
MuseChat surpasses existing state-of-the-art models in music retrieval tasks
and pioneers the integration of the recommendation process within a natural
language framework. | [
"Zhikang Dong",
"Bin Chen",
"Xiulong Liu",
"Pawel Polak",
"Peng Zhang"
] | 2023-10-10 03:32:33 | http://arxiv.org/abs/2310.06282v2 | http://arxiv.org/pdf/2310.06282v2 | 2310.06282v2 |
BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models | In recent years, artificial intelligence (AI) and machine learning (ML) are
reshaping society's production methods and productivity, and also changing the
paradigm of scientific research. Among them, the AI language model represented
by ChatGPT has made great progress. Such large language models (LLMs) serve
people in the form of AI-generated content (AIGC) and are widely used in
consulting, healthcare, and education. However, it is difficult to guarantee
the authenticity and reliability of AIGC learning data. In addition, there are
also hidden dangers of privacy disclosure in distributed AI training. Moreover,
the content generated by LLMs is difficult to identify and trace, and it is
difficult to achieve cross-platform mutual recognition. The above information security
issues in the coming era of AI powered by LLMs will be infinitely amplified and
affect everyone's life. Therefore, we consider empowering LLMs using blockchain
technology with superior security features to propose a vision for trusted AI.
This paper mainly introduces the motivation and technical route of blockchain
for LLM (BC4LLM), including reliable learning corpus, secure training process,
and identifiable generated content. Meanwhile, this paper also reviews the
potential applications and future challenges, especially in the frontier
communication networks field, including network resource allocation, dynamic
spectrum sharing, and semantic communication. Based on the above work and the
prospect of combining blockchain and LLMs, this paper is expected to help the early
realization of trusted AI and provide guidance for the academic community. | [
"Haoxiang Luo",
"Jian Luo",
"Athanasios V. Vasilakos"
] | 2023-10-10 03:18:26 | http://arxiv.org/abs/2310.06278v1 | http://arxiv.org/pdf/2310.06278v1 | 2310.06278v1 |
Let Models Speak Ciphers: Multiagent Debate through Embeddings | Discussion and debate among Large Language Models (LLMs) have gained
considerable attention due to their potential to enhance the reasoning ability
of LLMs. Although natural language is an obvious choice for communication due
to LLM's language understanding capability, the token sampling step needed when
generating natural language poses a potential risk of information loss, as it
uses only one token to represent the model's belief across the entire
vocabulary. In this paper, we introduce a communication regime named CIPHER
(Communicative Inter-Model Protocol Through Embedding Representation) to
address this issue. Specifically, we remove the token sampling step from LLMs
and let them communicate their beliefs across the vocabulary through the
expectation of the raw transformer output embeddings. Remarkably, by deviating
from natural language, CIPHER offers an advantage of encoding a broader
spectrum of information without any modification to the model weights. While
the state-of-the-art LLM debate methods using natural language outperform
traditional inference by a margin of 1.5-8%, our experimental results show that
CIPHER debate further extends this lead by 1-3.5% across five reasoning tasks
and multiple open-source LLMs of varying sizes. This showcases the superiority
and robustness of embeddings as an alternative "language" for communication
among LLMs. | [
"Chau Pham",
"Boyi Liu",
"Yingxiang Yang",
"Zhengyu Chen",
"Tianyi Liu",
"Jianbo Yuan",
"Bryan A. Plummer",
"Zhaoran Wang",
"Hongxia Yang"
] | 2023-10-10 03:06:38 | http://arxiv.org/abs/2310.06272v1 | http://arxiv.org/pdf/2310.06272v1 | 2310.06272v1 |
Bi-Level Offline Policy Optimization with Limited Exploration | We study offline reinforcement learning (RL) which seeks to learn a good
policy based on a fixed, pre-collected dataset. A fundamental challenge behind
this task is the distributional shift due to the dataset lacking sufficient
exploration, especially under function approximation. To tackle this issue, we
propose a bi-level structured policy optimization algorithm that models a
hierarchical interaction between the policy (upper-level) and the value
function (lower-level). The lower level focuses on constructing a confidence
set of value estimates that maintain sufficiently small weighted average
Bellman errors, while controlling uncertainty arising from distribution
mismatch. Subsequently, at the upper level, the policy aims to maximize a
conservative value estimate from the confidence set formed at the lower level.
This novel formulation preserves the maximum flexibility of the implicitly
induced exploratory data distribution, enabling the power of model
extrapolation. In practice, it can be solved through a computationally
efficient, penalized adversarial estimation procedure. Our theoretical regret
guarantees do not rely on any data-coverage and completeness-type assumptions,
only requiring realizability. These guarantees also demonstrate that the
learned policy represents the "best effort" among all policies, as no other
policies can outperform it. We evaluate our model using a blend of synthetic,
benchmark, and real-world datasets for offline RL, showing that it performs
competitively with state-of-the-art methods. | [
"Wenzhuo Zhou"
] | 2023-10-10 02:45:50 | http://arxiv.org/abs/2310.06268v1 | http://arxiv.org/pdf/2310.06268v1 | 2310.06268v1 |
CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model | Code Large Language Models (Code LLMs) have gained significant attention in
the industry due to their wide applications in the full lifecycle of software
engineering. However, the effectiveness of existing models in understanding
non-English inputs for multi-lingual code-related tasks is still far from well
studied. This paper introduces CodeFuse-13B, an open-sourced pre-trained code
LLM. It is specifically designed for code-related tasks with both English and
Chinese prompts and supports over 40 programming languages. CodeFuse achieves
its effectiveness by utilizing a high-quality pre-training dataset that is
carefully filtered by program analyzers and optimized during the training
process. Extensive experiments are conducted using real-world usage scenarios,
the industry-standard benchmark HumanEval-x, and the specially designed
CodeFuseEval for Chinese prompts. To assess the effectiveness of CodeFuse, we
actively collected valuable human feedback from the AntGroup's software
development process where CodeFuse has been successfully deployed. The results
demonstrate that CodeFuse-13B achieves a HumanEval pass@1 score of 37.10%,
positioning it as one of the top multi-lingual code LLMs with similar parameter
sizes. In practical scenarios, such as code generation, code translation, code
comments, and testcase generation, CodeFuse performs better than other models
when confronted with Chinese prompts. | [
"Peng Di",
"Jianguo Li",
"Hang Yu",
"Wei Jiang",
"Wenting Cai",
"Yang Cao",
"Chaoyu Chen",
"Dajun Chen",
"Hongwei Chen",
"Liang Chen",
"Gang Fan",
"Jie Gong",
"Zi Gong",
"Wen Hu",
"Tingting Guo",
"Zhichao Lei",
"Ting Li",
"Zheng Li",
"Ming Liang",
"Cong Liao",
"Bingchang Liu",
"Jiachen Liu",
"Zhiwei Liu",
"Shaojun Lu",
"Min Shen",
"Guangpei Wang",
"Huan Wang",
"Zhi Wang",
"Zhaogui Xu",
"Jiawei Yang",
"Qing Ye",
"Gehao Zhang",
"Yu Zhang",
"Zelin Zhao",
"Xunjin Zheng",
"Hailian Zhou",
"Lifu Zhu",
"Xianying Zhu"
] | 2023-10-10 02:38:44 | http://arxiv.org/abs/2310.06266v1 | http://arxiv.org/pdf/2310.06266v1 | 2310.06266v1 |
Self-Discriminative Modeling for Anomalous Graph Detection | This paper studies the problem of detecting anomalous graphs using a machine
learning model trained on only normal graphs, which has many applications in
molecular, biological, and social network data analysis. We present a
self-discriminative modeling framework for anomalous graph detection. The key
idea, mathematically and numerically illustrated, is to learn a discriminator
(classifier) from the given normal graphs together with pseudo-anomalous graphs
generated by a model jointly trained, where we never use any true anomalous
graphs and we hope that the generated pseudo-anomalous graphs interpolate
between normal ones and (real) anomalous ones. Under the framework, we provide
three algorithms with different computational efficiencies and stabilities for
anomalous graph detection. The three algorithms are compared with several
state-of-the-art graph-level anomaly detection baselines on nine popular graph
datasets (four with small size and five with moderate size) and show
significant improvement in terms of AUC. The success of our algorithms stems
from the integration of the discriminative classifier and the well-posed
pseudo-anomalous graphs, which provide new insights for anomaly detection.
Moreover, we investigate our algorithms for large-scale imbalanced graph
datasets. Surprisingly, our algorithms, though fully unsupervised, are able to
significantly outperform supervised learning algorithms of anomalous graph
detection. The corresponding reason is also analyzed. | [
"Jinyu Cai",
"Yunhe Zhang",
"Jicong Fan"
] | 2023-10-10 02:08:09 | http://arxiv.org/abs/2310.06261v1 | http://arxiv.org/pdf/2310.06261v1 | 2310.06261v1 |
A Unified View on Solving Objective Mismatch in Model-Based Reinforcement Learning | Model-based Reinforcement Learning (MBRL) aims to make agents more
sample-efficient, adaptive, and explainable by learning an explicit model of
the environment. While the capabilities of MBRL agents have significantly
improved in recent years, how to best learn the model is still an unresolved
question. The majority of MBRL algorithms aim at training the model to make
accurate predictions about the environment and subsequently using the model to
determine the most rewarding actions. However, recent research has shown that
model predictive accuracy is often not correlated with action quality, tracing
the root cause to the \emph{objective mismatch} between accurate dynamics model
learning and policy optimization of rewards. A number of interrelated solution
categories to the objective mismatch problem have emerged as MBRL continues to
mature as a research area. In this work, we provide an in-depth survey of these
solution categories and propose a taxonomy to foster future research. | [
"Ran Wei",
"Nathan Lambert",
"Anthony McDonald",
"Alfredo Garcia",
"Roberto Calandra"
] | 2023-10-10 01:58:38 | http://arxiv.org/abs/2310.06253v1 | http://arxiv.org/pdf/2310.06253v1 | 2310.06253v1 |
Deep Learning: A Tutorial | Our goal is to provide a review of deep learning methods which provide
insight into structured high-dimensional data. Rather than using shallow
additive architectures common to most statistical models, deep learning uses
layers of semi-affine input transformations to provide a predictive rule.
Applying these layers of transformations leads to a set of attributes (or,
features) to which probabilistic statistical methods can be applied. Thus, the
best of both worlds can be achieved: scalable prediction rules fortified with
uncertainty quantification, where sparse regularization finds the features. | [
"Nick Polson",
"Vadim Sokolov"
] | 2023-10-10 01:55:22 | http://arxiv.org/abs/2310.06251v1 | http://arxiv.org/pdf/2310.06251v1 | 2310.06251v1 |
Sample-Efficient Multi-Agent RL: An Optimization Perspective | We study multi-agent reinforcement learning (MARL) for the general-sum Markov
Games (MGs) under the general function approximation. In order to find the
minimum assumption for sample-efficient learning, we introduce a novel
complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for
general-sum MGs. Using this measure, we propose the first unified algorithmic
framework that ensures sample efficiency in learning Nash Equilibrium, Coarse
Correlated Equilibrium, and Correlated Equilibrium for both model-based and
model-free MARL problems with low MADC. We also show that our algorithm
provides comparable sublinear regret to the existing works. Moreover, our
algorithm combines an equilibrium-solving oracle with a single objective
optimization subprocedure that solves for the regularized payoff of each
deterministic joint policy, which avoids solving constrained optimization
problems within data-dependent constraints (Jin et al. 2020; Wang et al. 2023)
or executing sampling procedures with complex multi-objective optimization
problems (Foster et al. 2023), thus being more amenable to empirical
implementation. | [
"Nuoya Xiong",
"Zhihan Liu",
"Zhaoran Wang",
"Zhuoran Yang"
] | 2023-10-10 01:39:04 | http://arxiv.org/abs/2310.06243v1 | http://arxiv.org/pdf/2310.06243v1 | 2310.06243v1 |
A Bayesian framework for discovering interpretable Lagrangian of dynamical systems from data | Learning and predicting the dynamics of physical systems requires a profound
understanding of the underlying physical laws. Recent works on learning
physical laws involve generalizing the equation discovery frameworks to the
discovery of Hamiltonian and Lagrangian of physical systems. While the existing
methods parameterize the Lagrangian using neural networks, we propose an
alternate framework for learning interpretable Lagrangian descriptions of
physical systems from limited data using the sparse Bayesian approach. Unlike
existing neural network-based approaches, the proposed approach (a) yields an
interpretable description of Lagrangian, (b) exploits Bayesian learning to
quantify the epistemic uncertainty due to limited data, (c) automates the
distillation of Hamiltonian from the learned Lagrangian using Legendre
transformation, and (d) provides ordinary (ODE) and partial differential
equation (PDE) based descriptions of the observed systems. Six different
examples involving both discrete and continuous systems illustrate the efficacy
of the proposed approach. | [
"Tapas Tripura",
"Souvik Chakraborty"
] | 2023-10-10 01:35:54 | http://arxiv.org/abs/2310.06241v1 | http://arxiv.org/pdf/2310.06241v1 | 2310.06241v1 |
Tackling Data Bias in MUSIC-AVQA: Crafting a Balanced Dataset for Unbiased Question-Answering | In recent years, there has been a growing emphasis on the intersection of
audio, vision, and text modalities, driving forward the advancements in
multimodal research. However, a strong bias in any one modality can lead
the model to neglect the others. Consequently, the model's ability to
effectively reason across these diverse modalities is compromised, impeding
further advancement. In this paper, we meticulously review each question type
from the original dataset, selecting those with pronounced answer biases. To
counter these biases, we gather complementary videos and questions, ensuring
that no answers have outstanding skewed distribution. In particular, for binary
questions, we strive to ensure that both answers are almost uniformly spread
within each question category. As a result, we construct a new dataset, named
MUSIC-AVQA v2.0, which is more challenging and we believe could better foster
the progress of AVQA task. Furthermore, we present a novel baseline model that
delves deeper into the audio-visual-text interrelation. On MUSIC-AVQA v2.0,
this model surpasses all the existing benchmarks, improving accuracy by 2% and
setting a new state-of-the-art performance. | [
"Xiulong Liu",
"Zhikang Dong",
"Peng Zhang"
] | 2023-10-10 01:22:41 | http://arxiv.org/abs/2310.06238v1 | http://arxiv.org/pdf/2310.06238v1 | 2310.06238v1 |
Differentially Private Multi-Site Treatment Effect Estimation | Patient privacy is a major barrier to healthcare AI. For confidentiality
reasons, most patient data remains in silo in separate hospitals, preventing
the design of data-driven healthcare AI systems that need large volumes of
patient data to make effective decisions. A solution to this is collective
learning across multiple sites through federated learning with differential
privacy. However, literature in this space typically focuses on differentially
private statistical estimation and machine learning, which is different from
the causal inference-related problems that arise in healthcare. In this work,
we take a fresh look at federated learning with a focus on causal inference;
specifically, we look at estimating the average treatment effect (ATE), an
important task in causal inference for healthcare applications, and provide a
federated analytics approach to enable ATE estimation across multiple sites
along with differential privacy (DP) guarantees at each site. The main
challenge comes from site heterogeneity -- different sites have different
sample sizes and privacy budgets. We address this through a class of per-site
estimation algorithms that report the ATE estimate and its variance as a
quality measure, and an aggregation algorithm on the server side that minimizes
the overall variance of the final ATE estimate. Our experiments on real and
synthetic data show that our method reliably aggregates private statistics
across sites and provides better privacy-utility tradeoff under site
heterogeneity than baselines. | [
"Tatsuki Koga",
"Kamalika Chaudhuri",
"David Page"
] | 2023-10-10 01:21:01 | http://arxiv.org/abs/2310.06237v1 | http://arxiv.org/pdf/2310.06237v1 | 2310.06237v1 |
Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing | The advent of high-capacity pre-trained models has revolutionized
problem-solving in computer vision, shifting the focus from training
task-specific models to adapting pre-trained models. Consequently, effectively
adapting large pre-trained models to downstream tasks in an efficient manner
has become a prominent research area. Existing solutions primarily concentrate
on designing lightweight adapters and their interaction with pre-trained
models, with the goal of minimizing the number of parameters requiring updates.
In this study, we propose a novel Adapter Re-Composing (ARC) strategy that
addresses efficient pre-trained model adaptation from a fresh perspective. Our
approach considers the reusability of adaptation parameters and introduces a
parameter-sharing scheme. Specifically, we leverage symmetric
down-/up-projections to construct bottleneck operations, which are shared
across layers. By learning low-dimensional re-scaling coefficients, we can
effectively re-compose layer-adaptive adapters. This parameter-sharing strategy
in adapter design allows us to significantly reduce the number of new
parameters while maintaining satisfactory performance, thereby offering a
promising approach to compress the adaptation cost. We conduct experiments on
24 downstream image classification tasks using various Vision Transformer
variants to evaluate our method. The results demonstrate that our approach
achieves compelling transfer learning performance with a reduced parameter
count. Our code is available at
\href{https://github.com/DavidYanAnDe/ARC}{https://github.com/DavidYanAnDe/ARC}. | [
"Wei Dong",
"Dawei Yan",
"Zhijun Lin",
"Peng Wang"
] | 2023-10-10 01:04:15 | http://arxiv.org/abs/2310.06234v1 | http://arxiv.org/pdf/2310.06234v1 | 2310.06234v1 |
Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers | To alleviate the bias generated by the $l_1$-norm in the low-rank tensor
completion problem, nonconvex surrogates/regularizers have been suggested to
replace the tensor nuclear norm, although both can achieve sparsity. However,
the thresholding functions of these nonconvex regularizers may not have
closed-form expressions and thus iterations are needed, which increases the
computational loads. To solve this issue, we devise a framework to generate
sparsity-inducing regularizers with closed-form thresholding functions. These
regularizers are applied to low-tubal-rank tensor completion, and efficient
algorithms based on the alternating direction method of multipliers are
developed. Furthermore, convergence of our methods is analyzed and it is proved
that the generated sequences are bounded and any limit point is a stationary
point. Experimental results using synthetic and real-world datasets show that
the proposed algorithms outperform the state-of-the-art methods in terms of
restoration performance. | [
"Zhi-Yong Wang",
"Hing Cheung So",
"Abdelhak M. Zoubir"
] | 2023-10-10 01:00:13 | http://arxiv.org/abs/2310.06233v1 | http://arxiv.org/pdf/2310.06233v1 | 2310.06233v1 |
Exploring adversarial attacks in federated learning for medical imaging | Federated learning offers a privacy-preserving framework for medical image
analysis but exposes the system to adversarial attacks. This paper aims to
evaluate the vulnerabilities of federated learning networks in medical image
analysis against such attacks. Employing domain-specific MRI tumor and
pathology imaging datasets, we assess the effectiveness of known threat
scenarios in a federated learning environment. Our tests reveal that
domain-specific configurations can increase the attacker's success rate
significantly. The findings emphasize the urgent need for effective defense
mechanisms and suggest a critical re-evaluation of current security protocols
in federated medical image analysis systems. | [
"Erfan Darzi",
"Florian Dubost",
"N. M. Sijtsema",
"P. M. A van Ooijen"
] | 2023-10-10 00:39:58 | http://arxiv.org/abs/2310.06227v1 | http://arxiv.org/pdf/2310.06227v1 | 2310.06227v1 |
GPT-4 as an Agronomist Assistant? Answering Agriculture Exams Using Large Language Models | Large language models (LLMs) have demonstrated remarkable capabilities in
natural language understanding across various domains, including healthcare and
finance. For some tasks, LLMs achieve similar or better performance than
trained human beings; therefore, it is reasonable to employ human exams (e.g.,
certification tests) to assess the performance of LLMs. We present a
comprehensive evaluation of popular LLMs, such as Llama 2 and GPT, on their
ability to answer agriculture-related questions. In our evaluation, we also
employ RAG (Retrieval-Augmented Generation) and ER (Ensemble Refinement)
techniques, which combine information retrieval, generation capabilities, and
prompting strategies to improve the LLMs' performance. To demonstrate the
capabilities of LLMs, we selected agriculture exams and benchmark datasets from
three of the largest agriculture producer countries: Brazil, India, and the
USA. Our analysis highlights GPT-4's ability to achieve a passing score on
exams to earn credits for renewing agronomist certifications, answering 93% of
the questions correctly and outperforming earlier general-purpose models, which
achieved 88% accuracy. In one of our experiments, GPT-4 obtained the highest
performance when compared to human subjects. This performance suggests that
GPT-4 could potentially pass major graduate education admission tests or
even earn credits for renewing agronomy certificates. We also explore the
models' capacity to address general agriculture-related questions and generate
crop management guidelines for Brazilian and Indian farmers, utilizing robust
datasets from the Brazilian Agency of Agriculture (Embrapa) and graduate
program exams from India. The results suggest that GPT-4, ER, and RAG can
contribute meaningfully to agricultural education, assessment, and crop
management practice, offering valuable insights to farmers and agricultural
professionals. | [
"Bruno Silva",
"Leonardo Nunes",
"Roberto Estevão",
"Vijay Aski",
"Ranveer Chandra"
] | 2023-10-10 00:39:04 | http://arxiv.org/abs/2310.06225v2 | http://arxiv.org/pdf/2310.06225v2 | 2310.06225v2 |
Detecting and Learning Out-of-Distribution Data in the Open world: Algorithm and Theory | This thesis makes considerable contributions to the realm of machine
learning, specifically in the context of open-world scenarios where systems
face previously unseen data and contexts. Traditional machine learning models
are usually trained and tested within a fixed and known set of classes, a
condition known as the closed-world setting. While this assumption works in
controlled environments, it falls short in real-world applications where new
classes or categories of data can emerge dynamically and unexpectedly. To
address this, our research investigates two intertwined steps essential for
open-world machine learning: Out-of-distribution (OOD) Detection and Open-world
Representation Learning (ORL). OOD detection focuses on identifying instances
from unknown classes that fall outside the model's training distribution. This
process reduces the risk of making overly confident, erroneous predictions
about unfamiliar inputs. Moving beyond OOD detection, ORL extends the
capabilities of the model to not only detect unknown instances but also learn
from and incorporate knowledge about these new classes. By delving into these
research problems of open-world learning, this thesis contributes both
algorithmic solutions and theoretical foundations, which pave the way for
building machine learning models that are not only performant but also reliable
in the face of the evolving complexities of the real world. | [
"Yiyou Sun"
] | 2023-10-10 00:25:21 | http://arxiv.org/abs/2310.06221v1 | http://arxiv.org/pdf/2310.06221v1 | 2310.06221v1 |
SUBP: Soft Uniform Block Pruning for 1xN Sparse CNNs Multithreading Acceleration | The study of sparsity in Convolutional Neural Networks (CNNs) has become
widespread to compress and accelerate models in environments with limited
resources. By constraining N consecutive weights along the output channel to be
group-wise non-zero, the recent network with 1$\times$N sparsity has received
tremendous popularity for its three outstanding advantages: 1) A large amount
of storage space saved by a \emph{Block Sparse Row} matrix format. 2) Excellent
performance at a high sparsity. 3) Significant speedups on CPUs with Advanced
Vector Extensions. Recent work requires selecting and fine-tuning 1$\times$N
sparse weights based on dense pre-trained weights, leading to problems such
as expensive training cost and memory access, sub-optimal model quality, as
well as unbalanced workload across threads (different sparsity across output
channels). To overcome them, this paper proposes a novel \emph{\textbf{S}oft
\textbf{U}niform \textbf{B}lock \textbf{P}runing} (SUBP) approach to train a
uniform 1$\times$N sparse structured network from scratch. Specifically, our
approach tends to repeatedly allow pruned blocks to regrow to the network based
on block angular redundancy and importance sampling in a uniform manner
throughout the training process. It not only makes the model less dependent on
pre-training, reduces the model redundancy and the risk of pruning the
important blocks permanently, but also achieves a balanced workload. Empirically,
on ImageNet, comprehensive experiments across various CNN architectures show
that our SUBP consistently outperforms existing 1$\times$N and structured
sparsity methods based on pre-trained models or training from scratch. Source
codes and models are available at \url{https://github.com/JingyangXiang/SUBP}. | [
"Jingyang Xiang",
"Siqi Li",
"Jun Chen",
"Shipeng Bai",
"Yukai Ma",
"Guang Dai",
"Yong Liu"
] | 2023-10-10 00:22:27 | http://arxiv.org/abs/2310.06218v1 | http://arxiv.org/pdf/2310.06218v1 | 2310.06218v1 |
Federated Multi-Level Optimization over Decentralized Networks | Multi-level optimization has gained increasing attention in recent years, as
it provides a powerful framework for solving complex optimization problems that
arise in many fields, such as meta-learning, multi-player games, reinforcement
learning, and nested composition optimization. In this paper, we study the
problem of distributed multi-level optimization over a network, where agents
can only communicate with their immediate neighbors. This setting is motivated
by the need for distributed optimization in large-scale systems, where
centralized optimization may not be practical or feasible. To address this
problem, we propose a novel gossip-based distributed multi-level optimization
algorithm that enables networked agents to solve optimization problems at
different levels in a single timescale and share information through network
propagation. Our algorithm achieves optimal sample complexity, scaling linearly
with the network size, and demonstrates state-of-the-art performance on various
applications, including hyper-parameter tuning, decentralized reinforcement
learning, and risk-averse optimization. | [
"Shuoguang Yang",
"Xuezhou Zhang",
"Mengdi Wang"
] | 2023-10-10 00:21:10 | http://arxiv.org/abs/2310.06217v1 | http://arxiv.org/pdf/2310.06217v1 | 2310.06217v1 |
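A minimal sketch of the gossip mechanism the entry above builds on, assuming a simple single-level least-squares objective and a fixed ring network; the paper's multi-level machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, d = 5, 3
theta_true = rng.standard_normal(d)

# Each agent holds its own local least-squares data.
A = [rng.standard_normal((20, d)) for _ in range(n_agents)]
b = [A_i @ theta_true + 0.1 * rng.standard_normal(20) for A_i in A]

# Doubly stochastic gossip matrix for a ring: each agent mixes with two neighbours.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, d))          # one parameter vector per agent
lr = 0.05
for _ in range(1000):
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) / 20 for i in range(n_agents)])
    x = W @ x - lr * grads           # gossip averaging + local gradient step

print(np.linalg.norm(x.mean(0) - theta_true))   # agents reach consensus close to theta_true
```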
GeoLLM: Extracting Geospatial Knowledge from Large Language Models | The application of machine learning (ML) in a range of geospatial tasks is
increasingly common but often relies on globally available covariates such as
satellite imagery that can either be expensive or lack predictive power. Here
we explore the question of whether the vast amounts of knowledge found in
Internet language corpora, now compressed within large language models (LLMs),
can be leveraged for geospatial prediction tasks. We first demonstrate that
LLMs embed remarkable spatial information about locations, but naively querying
LLMs using geographic coordinates alone is ineffective in predicting key
indicators like population density. We then present GeoLLM, a novel method that
can effectively extract geospatial knowledge from LLMs with auxiliary map data
from OpenStreetMap. We demonstrate the utility of our approach across multiple
tasks of central interest to the international community, including the
measurement of population density and economic livelihoods. Across these tasks,
our method demonstrates a 70% improvement in performance (measured using
Pearson's $r^2$) relative to baselines that use nearest neighbors or use
information directly from the prompt, and performance equal to or exceeding
satellite-based benchmarks in the literature. With GeoLLM, we observe that
GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting
that the performance of our method scales well with the size of the model and
its pretraining dataset. Our experiments reveal that LLMs are remarkably
sample-efficient, rich in geospatial information, and robust across the globe.
Crucially, GeoLLM shows promise in mitigating the limitations of existing
geospatial covariates and complementing them well. | [
"Rohin Manvi",
"Samar Khanna",
"Gengchen Mai",
"Marshall Burke",
"David Lobell",
"Stefano Ermon"
] | 2023-10-10 00:03:23 | http://arxiv.org/abs/2310.06213v1 | http://arxiv.org/pdf/2310.06213v1 | 2310.06213v1 |
Fair Classifiers that Abstain without Harm | In critical applications, it is vital for classifiers to defer
decision-making to humans. We propose a post-hoc method that makes existing
classifiers selectively abstain from predicting certain samples. Our abstaining
classifier is incentivized to maintain the original accuracy for each
sub-population (i.e. no harm) while achieving a set of group fairness
definitions to a user-specified degree. To this end, we design an Integer
Programming (IP) procedure that assigns abstention decisions for each training
sample to satisfy a set of constraints. To generalize the abstaining decisions
to test samples, we then train a surrogate model to learn the abstaining
decisions based on the IP solutions in an end-to-end manner. We analyze the
feasibility of the IP procedure to determine the possible abstention rate for
different levels of unfairness tolerance and accuracy constraint for achieving
no harm. To the best of our knowledge, this work is the first to identify the
theoretical relationships between the constraint parameters and the required
abstention rate. Our theoretical results are important since a high abstention
rate is often infeasible in practice due to a lack of human resources. Our
framework outperforms existing methods in terms of fairness disparity without
sacrificing accuracy at similar abstention rates. | [
"Tongxin Yin",
"Jean-François Ton",
"Ruocheng Guo",
"Yuanshun Yao",
"Mingyan Liu",
"Yang Liu"
] | 2023-10-09 23:07:28 | http://arxiv.org/abs/2310.06205v1 | http://arxiv.org/pdf/2310.06205v1 | 2310.06205v1 |
The Importance of Prompt Tuning for Automated Neuron Explanations | Recent advances have greatly increased the capabilities of large language
models (LLMs), but our understanding of the models and their safety has not
progressed as fast. In this paper we aim to understand LLMs deeper by studying
their individual neurons. We build upon previous work showing large language
models such as GPT-4 can be useful in explaining what each neuron in a language
model does. Specifically, we analyze the effect of the prompt used to generate
explanations and show that reformatting the explanation prompt in a more
natural way can significantly improve neuron explanation quality and greatly
reduce computational cost. We demonstrate the effects of our new prompts in
three different ways, incorporating both automated and human evaluations. | [
"Justin Lee",
"Tuomas Oikarinen",
"Arjun Chatha",
"Keng-Chi Chang",
"Yilan Chen",
"Tsui-Wei Weng"
] | 2023-10-09 23:02:07 | http://arxiv.org/abs/2310.06200v2 | http://arxiv.org/pdf/2310.06200v2 | 2310.06200v2 |
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization | Deep neural networks (DNNs) are vulnerable to adversarial attacks. It is
found empirically that adversarially robust generalization is crucial in
establishing defense algorithms against adversarial attacks. Therefore, it is
interesting to study the theoretical guarantee of robust generalization. This
paper focuses on norm-based complexity, based on a PAC-Bayes approach
(Neyshabur et al., 2017). The main challenge lies in extending the key
ingredient, which is a weight perturbation bound in standard settings, to the
robust settings. Existing attempts heavily rely on additional strong
assumptions, leading to loose bounds. In this paper, we address this issue and
provide a spectrally-normalized robust generalization bound for DNNs. Compared
to existing bounds, our bound offers two significant advantages: Firstly, it
does not depend on additional assumptions. Secondly, it is considerably
tighter, aligning with the bounds of standard generalization. Therefore, our
result provides a different perspective on understanding robust generalization:
The mismatch terms between standard and robust generalization bounds shown in
previous studies do not contribute to the poor robust generalization. Instead,
these disparities are due solely to mathematical issues. Finally, we extend the
main result to adversarial robustness against general non-$\ell_p$ attacks and
other neural network architectures. | [
"Jiancong Xiao",
"Ruoyu Sun",
"Zhi-quan Luo"
] | 2023-10-09 22:20:27 | http://arxiv.org/abs/2310.06182v1 | http://arxiv.org/pdf/2310.06182v1 | 2310.06182v1 |
Automatic Integration for Spatiotemporal Neural Point Processes | Learning continuous-time point processes is essential to many discrete event
forecasting tasks. However, integration poses a major challenge, particularly
for spatiotemporal point processes (STPPs), as it involves calculating the
likelihood through triple integrals over space and time. Existing methods for
integrating STPP either assume a parametric form of the intensity function,
which lacks flexibility, or approximate the intensity with Monte Carlo
sampling, which introduces numerical errors. Recent work by Omi et al. [2019]
proposes a dual network, or AutoInt, approach for efficient integration of a
flexible intensity function. However, the method only focuses on the 1D
temporal point process. In this paper, we introduce a novel paradigm: AutoSTPP
(Automatic Integration for Spatiotemporal Neural Point Processes) that extends
the AutoInt approach to 3D STPP. We show that direct extension of the previous
work overly constrains the intensity function, leading to poor performance. We
prove consistency of AutoSTPP and validate it on synthetic data and benchmark
real world datasets, showcasing its significant advantage in recovering complex
intensity functions from irregular spatiotemporal events, particularly when the
intensity is sharply localized. | [
"Zihao Zhou",
"Rose Yu"
] | 2023-10-09 22:07:48 | http://arxiv.org/abs/2310.06179v1 | http://arxiv.org/pdf/2310.06179v1 | 2310.06179v1 |
Look-Up mAI GeMM: Increasing AI GeMMs Performance by Nearly 2.5x via msGeMM | AI models are increasing in size, and recent advances in the community have
shown that, unlike HPC applications where double-precision datatypes are
required, lower-precision datatypes such as fp8 or int4 are sufficient to
deliver the same model quality for both training and inference. Following these trends,
GPU vendors such as NVIDIA and AMD have added hardware support for fp16, fp8
and int8 GeMM operations with an exceptional performance via Tensor Cores.
However, this paper proposes a new algorithm called msGeMM which shows that AI
models with low-precision datatypes can run with ~2.5x fewer multiplication and
add instructions. Efficient implementation of this algorithm requires special
CUDA cores with the ability to add elements from a small look-up table at the
rate of Tensor Cores. | [
"Saeed Maleki"
] | 2023-10-09 22:06:35 | http://arxiv.org/abs/2310.06178v1 | http://arxiv.org/pdf/2310.06178v1 | 2310.06178v1 |
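An illustrative numpy sketch of the look-up-table idea behind the entry above for int4 operands: all 16x16 possible products are precomputed once, so the GeMM itself needs only table reads and additions. This is a conceptual toy, not the msGeMM CUDA kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, N = 8, 16, 8

# Signed int4 operands in [-8, 7].
A = rng.integers(-8, 8, size=(M, K))
B = rng.integers(-8, 8, size=(K, N))

# Precompute all 16x16 possible int4 products once (the look-up table).
vals = np.arange(-8, 8)
table = np.outer(vals, vals)            # table[a + 8, b + 8] = a * b

# GeMM via table look-ups and additions only (no per-element multiplies).
products = table[A[:, :, None] + 8, B[None, :, :] + 8]   # shape (M, K, N)
C_lut = products.sum(axis=1)

assert np.array_equal(C_lut, A @ B)     # matches the ordinary GeMM result
```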
DockGame: Cooperative Games for Multimeric Rigid Protein Docking | Protein interactions and assembly formation are fundamental to most
biological processes. Predicting the assembly structure from constituent
proteins -- referred to as the protein docking task -- is thus a crucial step
in protein design applications. Most traditional and deep learning methods for
docking have focused mainly on binary docking, following either a search-based,
regression-based, or generative modeling paradigm. In this paper, we focus on
the less-studied multimeric (i.e., two or more proteins) docking problem. We
introduce DockGame, a novel game-theoretic framework for docking -- we view
protein docking as a cooperative game between proteins, where the final
assembly structure(s) constitute stable equilibria w.r.t. the underlying game
potential. Since we do not have access to the true potential, we consider two
approaches - i) learning a surrogate game potential guided by physics-based
energy functions and computing equilibria by simultaneous gradient updates, and
ii) sampling from the Gibbs distribution of the true potential by learning a
diffusion generative model over the action spaces (rotations and translations)
of all proteins. Empirically, on the Docking Benchmark 5.5 (DB5.5) dataset,
DockGame has much faster runtimes than traditional docking methods, can
generate multiple plausible assembly structures, and achieves comparable
performance to existing binary docking baselines, despite solving the harder
task of coordinating multiple protein chains. | [
"Vignesh Ram Somnath",
"Pier Giuseppe Sessa",
"Maria Rodriguez Martinez",
"Andreas Krause"
] | 2023-10-09 22:02:05 | http://arxiv.org/abs/2310.06177v1 | http://arxiv.org/pdf/2310.06177v1 | 2310.06177v1 |
Memory-Consistent Neural Networks for Imitation Learning | Imitation learning considerably simplifies policy synthesis compared to
alternative approaches by exploiting access to expert demonstrations. For such
imitation policies, errors away from the training samples are particularly
critical. Even rare slip-ups in the policy action outputs can compound quickly
over time, since they lead to unfamiliar future states where the policy is
still more likely to err, eventually causing task failures. We revisit simple
supervised ``behavior cloning'' for conveniently training the policy from
nothing more than pre-recorded demonstrations, but carefully design the model
class to counter the compounding error phenomenon. Our ``memory-consistent
neural network'' (MCNN) outputs are hard-constrained to stay within clearly
specified permissible regions anchored to prototypical ``memory'' training
samples. We provide a guaranteed upper bound for the sub-optimality gap induced
by MCNN policies. Using MCNNs on 9 imitation learning tasks, with MLP,
Transformer, and Diffusion backbones, spanning dexterous robotic manipulation
and driving, proprioceptive inputs and visual inputs, and varying sizes and
types of demonstration data, we find large and consistent gains in performance,
validating that MCNNs are better-suited than vanilla deep neural networks for
imitation learning applications. Website:
https://sites.google.com/view/mcnn-imitation | [
"Kaustubh Sridhar",
"Souradeep Dutta",
"Dinesh Jayaraman",
"James Weimer",
"Insup Lee"
] | 2023-10-09 21:49:48 | http://arxiv.org/abs/2310.06171v1 | http://arxiv.org/pdf/2310.06171v1 | 2310.06171v1 |
Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness | Neural networks (NNs) are known to exhibit simplicity bias where they tend to
prefer learning 'simple' features over more 'complex' ones, even when the
latter may be more informative. Simplicity bias can lead to the model making
biased predictions which have poor out-of-distribution (OOD) generalization. To
address this, we propose a framework that encourages the model to use a more
diverse set of features to make predictions. We first train a simple model, and
then regularize the conditional mutual information with respect to it to obtain
the final model. We demonstrate the effectiveness of this framework in various
problem settings and real-world applications, showing that it effectively
addresses simplicity bias and leads to more features being used, enhances OOD
generalization, and improves subgroup robustness and fairness. We complement
these results with theoretical analyses of the effect of the regularization and
its OOD generalization properties. | [
"Bhavya Vasudeva",
"Kameron Shahabi",
"Vatsal Sharan"
] | 2023-10-09 21:19:39 | http://arxiv.org/abs/2310.06161v1 | http://arxiv.org/pdf/2310.06161v1 | 2310.06161v1 |
Provably Accelerating Ill-Conditioned Low-rank Estimation via Scaled Gradient Descent, Even with Overparameterization | Many problems encountered in science and engineering can be formulated as
estimating a low-rank object (e.g., matrices and tensors) from incomplete, and
possibly corrupted, linear measurements. Through the lens of matrix and tensor
factorization, one of the most popular approaches is to employ simple iterative
algorithms such as gradient descent (GD) to recover the low-rank factors
directly, which allow for small memory and computation footprints. However, the
convergence rate of GD depends linearly, and sometimes even quadratically, on
the condition number of the low-rank object, and therefore, GD slows down
painstakingly when the problem is ill-conditioned. This chapter introduces a
new algorithmic approach, dubbed scaled gradient descent (ScaledGD), that
provably converges linearly at a constant rate independent of the condition
number of the low-rank object, while maintaining the low per-iteration cost of
gradient descent for a variety of tasks including sensing, robust principal
component analysis and completion. In addition, ScaledGD continues to admit
fast global convergence to the minimax-optimal solution, again almost
independent of the condition number, from a small random initialization when
the rank is over-specified in the presence of Gaussian noise. In total,
ScaledGD highlights the power of appropriate preconditioning in accelerating
nonconvex statistical estimation, where the iteration-varying preconditioners
promote desirable invariance properties of the trajectory with respect to the
symmetry in low-rank factorization without hurting generalization. | [
"Cong Ma",
"Xingyu Xu",
"Tian Tong",
"Yuejie Chi"
] | 2023-10-09 21:16:57 | http://arxiv.org/abs/2310.06159v1 | http://arxiv.org/pdf/2310.06159v1 | 2310.06159v1 |
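A small numpy sketch of the scaled (preconditioned) gradient updates described in the entry above, assuming the simplest full-observation, noiseless low-rank approximation setting; the sensing, robust PCA, and completion variants and the theoretical step-size choices from the chapter are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, kappa = 100, 5, 100.0

# Ill-conditioned rank-r ground truth with condition number kappa.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Y = U @ np.diag(np.linspace(1.0, kappa, r)) @ V.T

# Small random initialization of the low-rank factors.
L = 0.1 * rng.standard_normal((n, r))
R = 0.1 * rng.standard_normal((n, r))
eta = 0.5

for _ in range(300):
    G = L @ R.T - Y                                    # gradient of 0.5 * ||L R^T - Y||_F^2
    L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)   # scaled (preconditioned) updates
    R_new = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)
    L, R = L_new, R_new

# Relative recovery error; should be near zero despite the large condition number.
print(np.linalg.norm(L @ R.T - Y) / np.linalg.norm(Y))
```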
Manifold-augmented Eikonal Equations: Geodesic Distances and Flows on Differentiable Manifolds | Manifolds discovered by machine learning models provide a compact
representation of the underlying data. Geodesics on these manifolds define
locally length-minimising curves and provide a notion of distance, which are
key for reduced-order modelling, statistical inference, and interpolation. In
this work, we propose a model-based parameterisation for distance fields and
geodesic flows on manifolds, exploiting solutions of a manifold-augmented
Eikonal equation. We demonstrate how the geometry of the manifold impacts the
distance field, and exploit the geodesic flow to obtain globally
length-minimising curves directly. This work opens opportunities for statistics
and reduced-order modelling on differentiable manifolds. | [
"Daniel Kelshaw",
"Luca Magri"
] | 2023-10-09 21:11:13 | http://arxiv.org/abs/2310.06157v1 | http://arxiv.org/pdf/2310.06157v1 | 2310.06157v1 |
Latent Diffusion Model for DNA Sequence Generation | The harnessing of machine learning, especially deep generative models, has
opened up promising avenues in the field of synthetic DNA sequence generation.
Whilst Generative Adversarial Networks (GANs) have gained traction for this
application, they often face issues such as limited sample diversity and mode
collapse. On the other hand, Diffusion Models are a promising new class of
generative models that are not burdened with these problems, enabling them to
reach the state-of-the-art in domains such as image generation. In light of
this, we propose a novel latent diffusion model, DiscDiff, tailored for
discrete DNA sequence generation. By simply embedding discrete DNA sequences
into a continuous latent space using an autoencoder, we are able to leverage
the powerful generative abilities of continuous diffusion models for the
generation of discrete data. Additionally, we introduce Fr\'echet
Reconstruction Distance (FReD) as a new metric to measure the sample quality of
DNA sequence generations. Our DiscDiff model demonstrates an ability to
generate synthetic DNA sequences that align closely with real DNA in terms of
Motif Distribution, Latent Embedding Distribution (FReD), and Chromatin
Profiles. Additionally, we contribute a comprehensive cross-species dataset of
150K unique promoter-gene sequences from 15 species, enriching resources for
future generative modelling in genomics. We will make our code public upon
publication. | [
"Zehui Li",
"Yuhao Ni",
"Tim August B. Huygelen",
"Akashaditya Das",
"Guoxuan Xia",
"Guy-Bart Stan",
"Yiren Zhao"
] | 2023-10-09 20:58:52 | http://arxiv.org/abs/2310.06150v1 | http://arxiv.org/pdf/2310.06150v1 | 2310.06150v1 |
Understanding Transfer Learning and Gradient-Based Meta-Learning Techniques | Deep neural networks can yield good performance on various tasks but often
require large amounts of data to train them. Meta-learning received
considerable attention as one approach to improve the generalization of these
networks from a limited amount of data. Whilst meta-learning techniques have
been observed to be successful at this in various scenarios, recent results
suggest that when evaluated on tasks from a different data distribution than
the one used for training, a baseline that simply finetunes a pre-trained
network may be more effective than more complicated meta-learning techniques
such as MAML, which is one of the most popular meta-learning techniques. This
is surprising as the learning behaviour of MAML mimics that of finetuning: both
rely on re-using learned features. We investigate the observed performance
differences between finetuning, MAML, and another meta-learning technique
called Reptile, and show that MAML and Reptile specialize for fast adaptation
in low-data regimes of similar data distribution as the one used for training.
Our findings show that both the output layer and the noisy training conditions
induced by data scarcity play important roles in facilitating this
specialization for MAML. Lastly, we show that the pre-trained features as
obtained by the finetuning baseline are more diverse and discriminative than
those learned by MAML and Reptile. Due to this lack of diversity and
distribution specialization, MAML and Reptile may fail to generalize to
out-of-distribution tasks whereas finetuning can fall back on the diversity of
the learned features. | [
"Mike Huisman",
"Aske Plaat",
"Jan N. van Rijn"
] | 2023-10-09 20:51:49 | http://arxiv.org/abs/2310.06148v1 | http://arxiv.org/pdf/2310.06148v1 | 2310.06148v1 |
Reinforcement Learning in the Era of LLMs: What is Essential? What is needed? An RL Perspective on RLHF, Prompting, and Beyond | Recent advancements in Large Language Models (LLMs) have garnered wide
attention and led to successful products such as ChatGPT and GPT-4. Their
proficiency in adhering to instructions and delivering harmless, helpful, and
honest (3H) responses can largely be attributed to the technique of
Reinforcement Learning from Human Feedback (RLHF). In this paper, we aim to
link research in conventional RL to the RL techniques used in LLM research and
to demystify this technique by discussing why, when, and how RL excels.
Furthermore, we explore potential future avenues that could either benefit from
or contribute to RLHF research.
Highlighted Takeaways:
1. RLHF is Online Inverse RL with Offline Demonstration Data.
2. RLHF $>$ SFT because Imitation Learning (and Inverse RL) $>$ Behavior
Cloning (BC) by alleviating the problem of compounding error.
3. The RM step in RLHF generates a proxy of the expensive human feedback,
such an insight can be generalized to other LLM tasks such as prompting
evaluation and optimization where feedback is also expensive.
4. The policy learning in RLHF is more challenging than conventional problems
studied in IRL due to their high action dimensionality and feedback sparsity.
5. The main superiority of PPO over off-policy value-based methods is its
stability gained from (almost) on-policy data and conservative policy updates. | [
"Hao Sun"
] | 2023-10-09 20:49:42 | http://arxiv.org/abs/2310.06147v1 | http://arxiv.org/pdf/2310.06147v1 | 2310.06147v1 |
HydraViT: Adaptive Multi-Branch Transformer for Multi-Label Disease Classification from Chest X-ray Images | Chest X-ray is an essential diagnostic tool in the identification of chest
diseases given its high sensitivity to pathological abnormalities in the lungs.
However, image-driven diagnosis is still challenging due to heterogeneity in
size and location of pathology, as well as visual similarities and
co-occurrence of separate pathology. Since disease-related regions often occupy
a relatively small portion of diagnostic images, classification models based on
traditional convolutional neural networks (CNNs) are adversely affected given
their locality bias. While CNNs were previously augmented with attention maps
or spatial masks to guide focus on potentially critical regions, learning
localization guidance under heterogeneity in the spatial distribution of
pathology is challenging. To improve multi-label classification performance,
here we propose a novel method, HydraViT, that synergistically combines a
transformer backbone with a multi-branch output module with learned weighting.
The transformer backbone enhances sensitivity to long-range context in X-ray
images, while using the self-attention mechanism to adaptively focus on
task-critical regions. The multi-branch output module dedicates an independent
branch to each disease label to attain robust learning across separate disease
classes, along with an aggregated branch across labels to maintain sensitivity
to co-occurrence relationships among pathology. Experiments demonstrate that,
on average, HydraViT outperforms competing attention-guided methods by 1.2%,
region-guided methods by 1.4%, and semantic-guided methods by 1.0% in
multi-label classification performance. | [
"Şaban Öztürk",
"M. Yiğit Turalı",
"Tolga Çukur"
] | 2023-10-09 20:45:29 | http://arxiv.org/abs/2310.06143v1 | http://arxiv.org/pdf/2310.06143v1 | 2310.06143v1 |
On the Correlation between Random Variables and their Principal Components | The article attempts to find an algebraic formula describing the correlation
coefficients between random variables and the principal components representing
them. As a result of the analysis, starting from selected statistics relating
to individual random variables, the equivalents of these statistics relating to
a set of random variables were presented in the language of linear algebra,
using the concepts of vector and matrix. This made it possible, in subsequent
steps, to derive the expected formula. The formula found is identical to the
formula used in Factor Analysis to calculate factor loadings. The discussion
showed that it is possible to apply this formula to optimize the number of
principal components in Principal Component Analysis, as well as to optimize
the number of factors in Factor Analysis. | [
"Zenon Gniazdowski"
] | 2023-10-09 20:35:38 | http://arxiv.org/abs/2310.06139v1 | http://arxiv.org/pdf/2310.06139v1 | 2310.06139v1 |
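A quick numerical check of the relationship described in the entry above: the correlation between each standardized variable and each principal component equals the corresponding factor loading $v_{jk}\sqrt{\lambda_k}$. The data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))   # correlated variables
Z = (X - X.mean(0)) / X.std(0)                                     # standardize

# PCA via the eigendecomposition of the correlation matrix.
R = np.corrcoef(Z, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

PC = Z @ eigvec                                    # principal component scores

# Empirical correlations between each variable and each principal component ...
emp = np.array([[np.corrcoef(Z[:, j], PC[:, k])[0, 1] for k in range(4)]
                for j in range(4)])

# ... coincide with the factor-loading formula: loading_jk = v_jk * sqrt(lambda_k).
loadings = eigvec * np.sqrt(eigval)
print(np.allclose(emp, loadings))
```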
Layout Sequence Prediction From Noisy Mobile Modality | Trajectory prediction plays a vital role in understanding pedestrian movement
for applications such as autonomous driving and robotics. Current trajectory
prediction models depend on long, complete, and accurately observed sequences
from visual modalities. Nevertheless, real-world situations often involve
obstructed cameras, missed objects, or objects out of sight due to
environmental factors, leading to incomplete or noisy trajectories. To overcome
these limitations, we propose LTrajDiff, a novel approach that treats objects
obstructed or out of sight as equally important as those with fully visible
trajectories. LTrajDiff utilizes sensor data from mobile phones to surmount
out-of-sight constraints, albeit introducing new challenges such as modality
fusion, noisy data, and the absence of spatial layout and object size
information. We employ a denoising diffusion model to predict precise layout
sequences from noisy mobile data using a coarse-to-fine diffusion strategy,
incorporating the RMS, Siamese Masked Encoding Module, and MFM. Our model
predicts layout sequences by implicitly inferring object size and projection
status from a single reference timestamp or significantly obstructed sequences.
Achieving SOTA results in randomly obstructed experiments and extremely short
input experiments, our model illustrates the effectiveness of leveraging noisy
mobile data. In summary, our approach offers a promising solution to the
challenges faced by layout sequence and trajectory prediction models in
real-world settings, paving the way for utilizing sensor data from mobile
phones to accurately predict pedestrian bounding box trajectories. To the best
of our knowledge, this is the first work that addresses severely obstructed and
extremely short layout sequences by combining vision with noisy mobile
modality, making it the pioneering work in the field of layout sequence
trajectory prediction. | [
"Haichao Zhang",
"Yi Xu",
"Hongsheng Lu",
"Takayuki Shimizu",
"Yun Fu"
] | 2023-10-09 20:32:49 | http://arxiv.org/abs/2310.06138v1 | http://arxiv.org/pdf/2310.06138v1 | 2310.06138v1 |
Learning Layer-wise Equivariances Automatically using Gradients | Convolutions encode equivariance symmetries into neural networks leading to
better generalisation performance. However, symmetries provide fixed hard
constraints on the functions a network can represent, need to be specified in
advance, and can not be adapted. Our goal is to allow flexible symmetry
constraints that can automatically be learned from data using gradients.
Learning symmetry and associated weight connectivity structures from scratch is
difficult for two reasons. First, it requires efficient and flexible
parameterisations of layer-wise equivariances. Secondly, symmetries act as
constraints and are therefore not encouraged by training losses measuring data
fit. To overcome these challenges, we improve parameterisations of soft
equivariance and learn the amount of equivariance in layers by optimising the
marginal likelihood, estimated using differentiable Laplace approximations. The
objective balances data fit and model complexity enabling layer-wise symmetry
discovery in deep networks. We demonstrate the ability to automatically learn
layer-wise equivariances on image classification tasks, achieving equivalent or
improved performance over baselines with hard-coded symmetry. | [
"Tycho F. A. van der Ouderaa",
"Alexander Immer",
"Mark van der Wilk"
] | 2023-10-09 20:22:43 | http://arxiv.org/abs/2310.06131v1 | http://arxiv.org/pdf/2310.06131v1 | 2310.06131v1 |
On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments | Speech separation remains an important topic for multi-speaker technology
researchers. Convolution augmented transformers (conformers) have performed
well for many speech processing tasks but have been under-researched for speech
separation. Most recent state-of-the-art (SOTA) separation models have been
time-domain audio separation networks (TasNets). A number of successful models
have made use of dual-path (DP) networks which sequentially process local and
global information. Time domain conformers (TD-Conformers) are an analogue of
the DP approach in that they also process local and global context sequentially
but have a different time complexity function. It is shown that for realistic
shorter signal lengths, conformers are more efficient when controlling for
feature dimension. Subsampling layers are proposed to further improve
computational efficiency. The best TD-Conformer achieves 14.6 dB and 21.2 dB
SISDR improvement on the WHAMR and WSJ0-2Mix benchmarks, respectively. | [
"William Ravenscroft",
"Stefan Goetze",
"Thomas Hain"
] | 2023-10-09 20:02:11 | http://arxiv.org/abs/2310.06125v1 | http://arxiv.org/pdf/2310.06125v1 | 2310.06125v1 |
Factorized Tensor Networks for Multi-Task and Multi-Domain Learning | Multi-task and multi-domain learning methods seek to learn multiple
tasks/domains, jointly or one after another, using a single unified network.
The key challenge and opportunity is to exploit shared information across tasks
and domains to improve the efficiency of the unified network. The efficiency
can be in terms of accuracy, storage cost, computation, or sample complexity.
In this paper, we propose a factorized tensor network (FTN) that can achieve
accuracy comparable to independent single-task/domain networks with a small
number of additional parameters. FTN uses a frozen backbone network from a
source model and incrementally adds task/domain-specific low-rank tensor
factors to the shared frozen network. This approach can adapt to a large number
of target domains and tasks without catastrophic forgetting. Furthermore, FTN
requires a significantly smaller number of task-specific parameters compared to
existing methods. We performed experiments on widely used multi-domain and
multi-task datasets. We present experiments on convolution-based
architectures with different backbones and on transformer-based architectures. We
observed that FTN achieves similar accuracy as single-task/domain methods while
using only a fraction of additional parameters per task. | [
"Yash Garg",
"Nebiyou Yismaw",
"Rakib Hyder",
"Ashley Prater-Bennette",
"M. Salman Asif"
] | 2023-10-09 19:59:59 | http://arxiv.org/abs/2310.06124v1 | http://arxiv.org/pdf/2310.06124v1 | 2310.06124v1 |
Exploring Progress in Multivariate Time Series Forecasting: Comprehensive Benchmarking and Heterogeneity Analysis | Multivariate Time Series (MTS) widely exists in real-word complex systems,
such as traffic and energy systems, making their forecasting crucial for
understanding and influencing these systems. Recently, deep learning-based
approaches have gained much popularity for effectively modeling temporal and
spatial dependencies in MTS, specifically in Long-term Time Series Forecasting
(LTSF) and Spatial-Temporal Forecasting (STF). However, the fair benchmarking
issue and the choice of technical approaches have been hotly debated in related
work. Such controversies significantly hinder our understanding of progress in
this field. Thus, this paper aims to address these controversies to present
insights into advancements achieved. To resolve benchmarking issues, we
introduce BasicTS, a benchmark designed for fair comparisons in MTS
forecasting. BasicTS establishes a unified training pipeline and reasonable
evaluation settings, enabling an unbiased evaluation of over 30 popular MTS
forecasting models on more than 18 datasets. Furthermore, we highlight the
heterogeneity among MTS datasets and classify them based on temporal and
spatial characteristics. We further prove that neglecting heterogeneity is the
primary reason for generating controversies in technical approaches. Moreover,
based on the proposed BasicTS and rich heterogeneous MTS datasets, we conduct
an exhaustive and reproducible performance and efficiency comparison of popular
models, providing insights for researchers in selecting and designing MTS
forecasting models. | [
"Zezhi Shao",
"Fei Wang",
"Yongjun Xu",
"Wei Wei",
"Chengqing Yu",
"Zhao Zhang",
"Di Yao",
"Guangyin Jin",
"Xin Cao",
"Gao Cong",
"Christian S. Jensen",
"Xueqi Cheng"
] | 2023-10-09 19:52:22 | http://arxiv.org/abs/2310.06119v1 | http://arxiv.org/pdf/2310.06119v1 | 2310.06119v1 |
Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models | We present Step-Back Prompting, a simple prompting technique that enables
LLMs to perform abstraction in order to derive high-level concepts and first principles from
instances containing specific details. Using the concepts and principles to
guide the reasoning steps, LLMs significantly improve their abilities in
following a correct reasoning path towards the solution. We conduct experiments
of Step-Back Prompting with PaLM-2L models and observe substantial performance
gains on a wide range of challenging reasoning-intensive tasks including STEM,
Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting
improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%,
TimeQA by 27%, and MuSiQue by 7%. | [
"Huaixiu Steven Zheng",
"Swaroop Mishra",
"Xinyun Chen",
"Heng-Tze Cheng",
"Ed H. Chi",
"Quoc V Le",
"Denny Zhou"
] | 2023-10-09 19:48:55 | http://arxiv.org/abs/2310.06117v1 | http://arxiv.org/pdf/2310.06117v1 | 2310.06117v1 |
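A hedged sketch of what a two-stage Step-Back style prompt could look like. The exact prompt wording from the paper is not given in this entry, so the prompt text, function names, and the `llm` call below are illustrative assumptions.

```python
def abstraction_prompt(question: str) -> str:
    """First step: ask the model to 'step back' to a higher-level principle."""
    return (
        f"Here is a question: {question}\n"
        "Before answering, state the general principle or concept that this "
        "question is an instance of."
    )

def reasoning_prompt(question: str, principle: str) -> str:
    """Second step: ground the final answer in the retrieved principle."""
    return (
        f"Principle: {principle}\n"
        f"Using this principle, reason step by step and answer: {question}"
    )

# Usage (pseudo): principle = llm(abstraction_prompt(q))
#                 answer    = llm(reasoning_prompt(q, principle))
```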
When is Agnostic Reinforcement Learning Statistically Tractable? | We study the problem of agnostic PAC reinforcement learning (RL): given a
policy class $\Pi$, how many rounds of interaction with an unknown MDP (with a
potentially large state and action space) are required to learn an
$\epsilon$-suboptimal policy with respect to $\Pi$? Towards that end, we
introduce a new complexity measure, called the \emph{spanning capacity}, that
depends solely on the set $\Pi$ and is independent of the MDP dynamics. With a
generative model, we show that for any policy class $\Pi$, bounded spanning
capacity characterizes PAC learnability. However, for online RL, the situation
is more subtle. We show there exists a policy class $\Pi$ with a bounded
spanning capacity that requires a superpolynomial number of samples to learn.
This reveals a surprising separation for agnostic learnability between
generative access and online access models (as well as between
deterministic/stochastic MDPs under online access). On the positive side, we
identify an additional \emph{sunflower} structure, which in conjunction with
bounded spanning capacity enables statistically efficient online RL via a new
algorithm called POPLER, which takes inspiration from classical importance
sampling methods as well as techniques for reachable-state identification and
policy evaluation in reward-free exploration. | [
"Zeyu Jia",
"Gene Li",
"Alexander Rakhlin",
"Ayush Sekhari",
"Nathan Srebro"
] | 2023-10-09 19:40:54 | http://arxiv.org/abs/2310.06113v1 | http://arxiv.org/pdf/2310.06113v1 | 2310.06113v1 |
Theoretical Analysis of Robust Overfitting for Wide DNNs: An NTK Approach | Adversarial training (AT) is a canonical method for enhancing the robustness
of deep neural networks (DNNs). However, recent studies empirically
demonstrated that it suffers from robust overfitting, i.e., a long time AT can
be detrimental to the robustness of DNNs. This paper presents a theoretical
explanation of robust overfitting for DNNs. Specifically, we non-trivially
extend the neural tangent kernel (NTK) theory to AT and prove that an
adversarially trained wide DNN can be well approximated by a linearized DNN.
Moreover, for squared loss, closed-form AT dynamics for the linearized DNN can
be derived, which reveals a new AT degeneration phenomenon: a long-term AT will
cause a wide DNN to degenerate to one obtained without AT and thus lead to
robust overfitting. Based on our theoretical results, we further design a
method namely Adv-NTK, the first AT algorithm for infinite-width DNNs.
Experiments on real-world datasets show that Adv-NTK can help infinite-width
DNNs enhance comparable robustness to that of their finite-width counterparts,
which in turn justifies our theoretical findings. The code is available at
https://github.com/fshp971/adv-ntk. | [
"Shaopeng Fu",
"Di Wang"
] | 2023-10-09 19:40:25 | http://arxiv.org/abs/2310.06112v1 | http://arxiv.org/pdf/2310.06112v1 | 2310.06112v1 |
BYOC: Personalized Few-Shot Classification with Co-Authored Class Descriptions | Text classification is a well-studied and versatile building block for many
NLP applications. Yet, existing approaches require either large annotated
corpora to train a model with or, when using large language models as a base,
require carefully crafting the prompt as well as using a long context that can
fit many examples. As a result, it is not possible for end-users to build
classifiers for themselves. To address this issue, we propose a novel approach
to few-shot text classification using an LLM. Rather than few-shot examples,
the LLM is prompted with descriptions of the salient features of each class.
These descriptions are coauthored by the user and the LLM interactively: while
the user annotates each few-shot example, the LLM asks relevant questions that
the user answers. Examples, questions, and answers are summarized to form the
classification prompt. Our experiments show that our approach yields high
accuracy classifiers, within 82% of the performance of models trained with
significantly larger datasets while using only 1% of their training sets.
Additionally, in a study with 30 participants, we show that end-users are able
to build classifiers to suit their specific needs. The personalized classifiers
show an average accuracy of 90%, which is 15% higher than the state-of-the-art
approach. | [
"Arth Bohra",
"Govert Verkes",
"Artem Harutyunyan",
"Pascal Weinberger",
"Giovanni Campagna"
] | 2023-10-09 19:37:38 | http://arxiv.org/abs/2310.06111v1 | http://arxiv.org/pdf/2310.06111v1 | 2310.06111v1 |
Grokking as the Transition from Lazy to Rich Training Dynamics | We propose that the grokking phenomenon, where the train loss of a neural
network decreases much earlier than its test loss, can arise due to a neural
network transitioning from lazy training dynamics to a rich, feature learning
regime. To illustrate this mechanism, we study the simple setting of vanilla
gradient descent on a polynomial regression problem with a two layer neural
network which exhibits grokking without regularization in a way that cannot be
explained by existing theories. We identify sufficient statistics for the test
loss of such a network, and tracking these over training reveals that grokking
arises in this setting when the network first attempts to fit a kernel
regression solution with its initial features, followed by late-time feature
learning where a generalizing solution is identified after train loss is
already low. We find that the key determinants of grokking are the rate of
feature learning -- which can be controlled precisely by parameters that scale
the network output -- and the alignment of the initial features with the target
function $y(x)$. We argue this delayed generalization arises when (1) the top
eigenvectors of the initial neural tangent kernel and the task labels $y(x)$
are misaligned, but (2) the dataset size is large enough so that it is possible
for the network to generalize eventually, but not so large that train loss
perfectly tracks test loss at all epochs, and (3) the network begins training
in the lazy regime so does not learn features immediately. We conclude with
evidence that this transition from lazy (linear model) to rich training
(feature learning) can control grokking in more general settings, like on
MNIST, one-layer Transformers, and student-teacher networks. | [
"Tanishq Kumar",
"Blake Bordelon",
"Samuel J. Gershman",
"Cengiz Pehlevan"
] | 2023-10-09 19:33:21 | http://arxiv.org/abs/2310.06110v1 | http://arxiv.org/pdf/2310.06110v1 | 2310.06110v1 |
Quantifying Uncertainty in Deep Learning Classification with Noise in Discrete Inputs for Risk-Based Decision Making | The use of Deep Neural Network (DNN) models in risk-based decision-making has
attracted extensive attention with broad applications in medical, finance,
manufacturing, and quality control. To mitigate prediction-related risks in
decision making, prediction confidence or uncertainty should be assessed
alongside the overall performance of algorithms. Recent studies on Bayesian
deep learning help quantify prediction uncertainty arising from input noise
and model parameters. However, the normality assumption of input noise in these
models limits their applicability to problems involving categorical and
discrete feature variables in tabular datasets. In this paper, we propose a
mathematical framework to quantify prediction uncertainty for DNN models. The
prediction uncertainty arises from errors in predictors that follow some known
finite discrete distribution. We then conducted a case study using the
framework to predict treatment outcome for tuberculosis patients during their
course of treatment. The results demonstrate that, under a certain level of risk, we
can identify risk-sensitive cases, which are prone to misclassification due to
errors in predictors. Compared to the Monte Carlo dropout method, our proposed
framework is more aware of misclassification cases. Our proposed framework for
uncertainty quantification in deep learning can support risk-based decision
making in applications when discrete errors in predictors are present. | [
"Maryam Kheirandish",
"Shengfan Zhang",
"Donald G. Catanzaro",
"Valeriu Crudu"
] | 2023-10-09 19:26:24 | http://arxiv.org/abs/2310.06105v1 | http://arxiv.org/pdf/2310.06105v1 | 2310.06105v1 |
High Dimensional Causal Inference with Variational Backdoor Adjustment | Backdoor adjustment is a technique in causal inference for estimating
interventional quantities from purely observational data. For example, in
medical settings, backdoor adjustment can be used to control for confounding
and estimate the effectiveness of a treatment. However, high dimensional
treatments and confounders pose a series of potential pitfalls: tractability,
identifiability, optimization. In this work, we take a generative modeling
approach to backdoor adjustment for high dimensional treatments and
confounders. We cast backdoor adjustment as an optimization problem in
variational inference without reliance on proxy variables and hidden
confounders. Empirically, our method is able to estimate interventional
likelihood in a variety of high dimensional settings, including semi-synthetic
X-ray medical data. To the best of our knowledge, this is the first application
of backdoor adjustment in which all the relevant variables are high
dimensional. | [
"Daniel Israel",
"Aditya Grover",
"Guy Van den Broeck"
] | 2023-10-09 19:21:41 | http://arxiv.org/abs/2310.06100v1 | http://arxiv.org/pdf/2310.06100v1 | 2310.06100v1 |
Transformers and Large Language Models for Chemistry and Drug Discovery | Language modeling has seen impressive progress over the last years, mainly
prompted by the invention of the Transformer architecture, sparking a
revolution in many fields of machine learning, with breakthroughs in chemistry
and biology. In this chapter, we explore how analogies between chemical and
natural language have inspired the use of Transformers to tackle important
bottlenecks in the drug discovery process, such as retrosynthetic planning and
chemical space exploration. The revolution started with models able to perform
particular tasks with a single type of data, like linearised molecular graphs,
which then evolved to include other types of data, like spectra from analytical
instruments, synthesis actions, and human language. A new trend leverages
recent developments in large language models, giving rise to a wave of models
capable of solving generic tasks in chemistry, all facilitated by the
flexibility of natural language. As we continue to explore and harness these
capabilities, we can look forward to a future where machine learning plays an
even more integral role in accelerating scientific discovery. | [
"Andres M Bran",
"Philippe Schwaller"
] | 2023-10-09 18:40:04 | http://arxiv.org/abs/2310.06083v1 | http://arxiv.org/pdf/2310.06083v1 | 2310.06083v1 |
Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting | This work considers a rather general and broad class of Markov chains, Ito
chains, which look like the Euler-Maruyama discretization of some Stochastic
Differential Equation. The chain we study is a unified framework for
theoretical analysis. It comes with almost arbitrary isotropic and
state-dependent noise instead of the normal and state-independent noise assumed in most
related papers. Moreover, our chain's drift and diffusion coefficients can be
inexact to cover a wide range of applications such as Stochastic Gradient
Langevin Dynamics, sampling, Stochastic Gradient Descent, or Stochastic
Gradient Boosting. We prove an upper bound for $W_{2}$-distance between laws of
the Ito chain and the corresponding Stochastic Differential Equation. These
results improve or cover most of the known estimates. Moreover, for some
particular cases, our analysis is the first. | [
"Aleksei Ustimenko",
"Aleksandr Beznosikov"
] | 2023-10-09 18:38:56 | http://arxiv.org/abs/2310.06081v1 | http://arxiv.org/pdf/2310.06081v1 | 2310.06081v1 |
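For reference, a minimal Euler-Maruyama discretization of an SDE with state-dependent noise — the kind of "Ito chain" the entry above compares against its continuous-time limit. The drift and diffusion choices here are arbitrary examples, not ones from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 5.0, 1000
dt = T / n_steps

x = 1.0
path = [x]
for _ in range(n_steps):
    drift = -x                                  # drift coefficient b(x) = -x
    diffusion = 0.5 * np.sqrt(1.0 + x**2)       # state-dependent noise scale
    x = x + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    path.append(x)

print(len(path), path[-1])                      # discretized trajectory of the Ito chain
```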
Performative Time-Series Forecasting | Time-series forecasting is a critical challenge in various domains and has
witnessed substantial progress in recent years. Many real-life scenarios, such
as public health, economics, and social applications, involve feedback loops
where predictions can influence the predicted outcome, subsequently altering
the target variable's distribution. This phenomenon, known as performativity,
introduces the potential for 'self-negating' or 'self-fulfilling' predictions.
Despite extensive studies in classification problems across domains,
performativity remains largely unexplored in the context of time-series
forecasting from a machine-learning perspective.
In this paper, we formalize performative time-series forecasting (PeTS),
addressing the challenge of accurate predictions when performativity-induced
distribution shifts are possible. We propose a novel approach, Feature
Performative-Shifting (FPS), which leverages the concept of delayed response to
anticipate distribution shifts and subsequently predicts targets accordingly.
We provide theoretical insights suggesting that FPS can potentially lead to
reduced generalization error. We conduct comprehensive experiments using
multiple time-series models on COVID-19 and traffic forecasting tasks. The
results demonstrate that FPS consistently outperforms conventional time-series
forecasting methods, highlighting its efficacy in handling
performativity-induced challenges. | [
"Zhiyuan Zhao",
"Alexander Rodriguez",
"B. Aditya Prakash"
] | 2023-10-09 18:34:29 | http://arxiv.org/abs/2310.06077v1 | http://arxiv.org/pdf/2310.06077v1 | 2310.06077v1 |
Pain Forecasting using Self-supervised Learning and Patient Phenotyping: An attempt to prevent Opioid Addiction | Sickle Cell Disease (SCD) is a chronic genetic disorder characterized by
recurrent acute painful episodes. Opioids are often used to manage these
painful episodes; the extent of their use in managing pain in this disorder is
an issue of debate. The risk of addiction and side effects of these opioid
treatments can often lead to more pain episodes in the future. Hence, it is
crucial to forecast future patient pain trajectories to help patients manage
their SCD to improve their quality of life without compromising their
treatment. It is challenging to obtain many pain records to design forecasting
models since it is mainly recorded by patients' self-report. Therefore, it is
expensive and painful (due to the need for patient compliance) to solve pain
forecasting problems in a purely supervised manner. In light of this challenge,
we propose to solve the pain forecasting problem using self-supervised learning
methods. Also, clustering such time-series data is crucial for patient
phenotyping, anticipating patients' prognoses by identifying "similar"
patients, and designing treatment guidelines tailored to homogeneous patient
subgroups. Hence, we propose a self-supervised learning approach for clustering
time-series data, where each cluster comprises patients who share similar
future pain profiles. Experiments on five years of real-world datasets show
that our models achieve superior performance over state-of-the-art benchmarks
and identify meaningful clusters that can be translated into actionable
information for clinical decision-making. | [
"Swati Padhee",
"Tanvi Banerjee",
"Daniel M. Abrams",
"Nirmish Shah"
] | 2023-10-09 18:31:50 | http://arxiv.org/abs/2310.06075v1 | http://arxiv.org/pdf/2310.06075v1 | 2310.06075v1 |
Optimal Exploration is no harder than Thompson Sampling | Given a set of arms $\mathcal{Z}\subset \mathbb{R}^d$ and an unknown
parameter vector $\theta_\ast\in\mathbb{R}^d$, the pure exploration linear
bandit problem aims to return $\arg\max_{z\in \mathcal{Z}}
z^{\top}\theta_{\ast}$, with high probability through noisy measurements of
$x^{\top}\theta_{\ast}$ with $x\in \mathcal{X}\subset \mathbb{R}^d$. Existing
(asymptotically) optimal methods require either a) potentially costly
projections for each arm $z\in \mathcal{Z}$ or b) explicitly maintaining a
subset of $\mathcal{Z}$ under consideration at each time. This complexity is at
odds with the popular and simple Thompson Sampling algorithm for regret
minimization, which just requires access to a posterior sampling and argmax
oracle, and does not need to enumerate $\mathcal{Z}$ at any point.
Unfortunately, Thompson sampling is known to be sub-optimal for pure
exploration. In this work, we pose a natural question: is there an algorithm
that can explore optimally and only needs the same computational primitives as
Thompson Sampling? We answer the question in the affirmative. We provide an
algorithm that leverages only sampling and argmax oracles and achieves an
exponential convergence rate, with the exponent being the optimal among all
possible allocations asymptotically. In addition, we show that our algorithm
can be easily implemented and performs as well empirically as existing
asymptotically optimal methods. | [
"Zhaoqi Li",
"Kevin Jamieson",
"Lalit Jain"
] | 2023-10-09 18:21:39 | http://arxiv.org/abs/2310.06069v1 | http://arxiv.org/pdf/2310.06069v1 | 2310.06069v1 |
Early Warning via tipping-preserving latent stochastic dynamical system and meta label correcting | Early warning for epilepsy patients is crucial for their safety and
well-being, in terms of preventing or minimizing the severity of seizures.
Using patients' EEG data, we propose a meta-learning framework for
improving the prediction of early ictal signals. To better utilize the meta label
corrector method, we fuse information from both the real data and the
augmented data generated by the latent stochastic differential equation (SDE). Besides,
we also optimally select the latent dynamical system via the distribution of
transition times between the real data and data from the latent SDE. In this way,
the extracted tipping dynamical feature is also integrated into the meta
network to better label the noisy data. To validate our method, LSTM is
implemented as the baseline model. We conduct a series of experiments to
predict seizures over various long-term windows from 1-2 second input data and
find a surprising increase in prediction accuracy. | [
"Peng Zhang",
"Ting Gao",
"Jin Guo",
"Jinqiao Duan"
] | 2023-10-09 18:12:46 | http://arxiv.org/abs/2310.06059v1 | http://arxiv.org/pdf/2310.06059v1 | 2310.06059v1 |
Knowledge Distillation for Anomaly Detection | Unsupervised deep learning techniques are widely used to identify anomalous
behaviour. The performance of such methods is a product of the amount of
training data and the model size. However, the size is often a limiting factor
for the deployment on resource-constrained devices. We present a novel
procedure based on knowledge distillation for compressing an unsupervised
anomaly detection model into a supervised deployable one and we suggest a set
of techniques to improve the detection sensitivity. Compressed models perform
comparably to their larger counterparts while significantly reducing the size
and memory footprint. | [
"Adrian Alan Pol",
"Ekaterina Govorkova",
"Sonja Gronroos",
"Nadezda Chernyavskaya",
"Philip Harris",
"Maurizio Pierini",
"Isobel Ojalvo",
"Peter Elmer"
] | 2023-10-09 18:02:38 | http://arxiv.org/abs/2310.06047v1 | http://arxiv.org/pdf/2310.06047v1 | 2310.06047v1 |
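A compact sketch of the compression idea in the entry above: distill the scores of an unsupervised anomaly detector (teacher) into a small supervised regressor (student). The specific models used here (IsolationForest, a shallow decision tree) are stand-ins chosen for brevity, not the detectors from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_train = rng.standard_normal((2000, 10))              # mostly "normal" training data

# Teacher: large unsupervised anomaly detector producing continuous scores.
teacher = IsolationForest(n_estimators=500, random_state=0).fit(X_train)
teacher_scores = teacher.score_samples(X_train)        # higher score = more normal

# Student: small supervised model trained to mimic the teacher's scores.
student = DecisionTreeRegressor(max_depth=6, random_state=0)
student.fit(X_train, teacher_scores)

# Deployment: the compact student approximates the teacher's anomaly ranking.
X_test = np.vstack([rng.standard_normal((50, 10)),          # normal samples
                    rng.standard_normal((50, 10)) + 4.0])   # shifted anomalies
print(np.corrcoef(student.predict(X_test), teacher.score_samples(X_test))[0, 1])
```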
Generative ensemble deep learning severe weather prediction from a deterministic convection-allowing model | An ensemble post-processing method is developed for the probabilistic
prediction of severe weather (tornadoes, hail, and wind gusts) over the
conterminous United States (CONUS). The method combines conditional generative
adversarial networks (CGANs), a type of deep generative model, with a
convolutional neural network (CNN) to post-process convection-allowing model
(CAM) forecasts. The CGANs are designed to create synthetic ensemble members
from deterministic CAM forecasts, and their outputs are processed by the CNN to
estimate the probability of severe weather. The method is tested using
High-Resolution Rapid Refresh (HRRR) 1--24 hr forecasts as inputs and Storm
Prediction Center (SPC) severe weather reports as targets. The method produced
skillful predictions with up to 20% Brier Skill Score (BSS) increases compared
to other neural-network-based reference methods using a testing dataset of HRRR
forecasts in 2021. For the evaluation of uncertainty quantification, the method
is overconfident but produces meaningful ensemble spreads that can distinguish
good and bad forecasts. The quality of CGAN outputs is also evaluated. Results
show that the CGAN outputs behave similarly to a numerical ensemble; they
preserved the inter-variable correlations and the contribution of influential
predictors as in the original HRRR forecasts. This work provides a novel
approach to post-process CAM output using neural networks that can be applied
to severe weather prediction. | [
"Yingkai Sha",
"Ryan A. Sobash",
"David John Gagne II"
] | 2023-10-09 18:02:11 | http://arxiv.org/abs/2310.06045v1 | http://arxiv.org/pdf/2310.06045v1 | 2310.06045v1 |
DyST: Towards Dynamic Neural Scene Representations on Real-World Videos | Visual understanding of the world goes beyond the semantics and flat
structure of individual images. In this work, we aim to capture both the 3D
structure and dynamics of real-world scenes from monocular real-world videos.
Our Dynamic Scene Transformer (DyST) model leverages recent work in neural
scene representation to learn a latent decomposition of monocular real-world
videos into scene content, per-view scene dynamics, and camera pose. This
separation is achieved through a novel co-training scheme on monocular videos
and our new synthetic dataset DySO. DyST learns tangible latent representations
for dynamic scenes that enable view generation with separate control over the
camera and the content of the scene. | [
"Maximilian Seitzer",
"Sjoerd van Steenkiste",
"Thomas Kipf",
"Klaus Greff",
"Mehdi S. M. Sajjadi"
] | 2023-10-09 18:00:01 | http://arxiv.org/abs/2310.06020v1 | http://arxiv.org/pdf/2310.06020v1 | 2310.06020v1 |
Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions | We introduce Conformal Decision Theory, a framework for producing safe
autonomous decisions despite imperfect machine learning predictions. Examples
of such decisions are ubiquitous, from robot planning algorithms that rely on
pedestrian predictions, to calibrating autonomous manufacturing to exhibit high
throughput and low error, to the choice of trusting a nominal policy versus
switching to a safe backup policy at run-time. The decisions produced by our
algorithms are safe in the sense that they come with provable statistical
guarantees of having low risk without any assumptions on the world model
whatsoever; the observations need not be I.I.D. and can even be adversarial.
The theory extends results from conformal prediction to calibrate decisions
directly, without requiring the construction of prediction sets. Experiments
demonstrate the utility of our approach in robot motion planning around humans,
automated stock trading, and robot manufacturing. | [
"Jordan Lekeufack",
"Anastasios N. Angelopoulos",
"Andrea Bajcsy",
"Michael I. Jordan",
"Jitendra Malik"
] | 2023-10-09 17:59:30 | http://arxiv.org/abs/2310.05921v2 | http://arxiv.org/pdf/2310.05921v2 | 2310.05921v2 |
Divide-and-Conquer Dynamics in AI-Driven Disempowerment | AI companies are attempting to create AI systems that outperform humans at
most economically valuable work. Current AI models are already automating away
the livelihoods of some artists, actors, and writers. But there is infighting
between those who prioritize current harms and those who prioritize future
harms. We construct a
game-theoretic model of conflict to study the causes and consequences of this
disunity. Our model also helps explain why throughout history, stakeholders
sharing a common threat have found it advantageous to unite against it, and why
the common threat has in turn found it advantageous to divide and conquer.
Under realistic parameter assumptions, our model makes several predictions
that find preliminary corroboration in the historical-empirical record. First,
current victims of AI-driven disempowerment need the future victims to realize
that their interests are also under serious and imminent threat, so that future
victims are incentivized to support current victims in solidarity. Second, the
movement against AI-driven disempowerment can become more united, and thereby
more likely to prevail, if members believe that their efforts will be
successful as opposed to futile. Finally, the movement can better unite and
prevail if its members are less myopic. Myopic members prioritize their future
well-being less than their present well-being, and are thus disinclined to
solidarily support current victims today at personal cost, even if this is
necessary to counter the shared threat of AI-driven disempowerment. | [
"Peter S. Park",
"Max Tegmark"
] | 2023-10-09 17:59:26 | http://arxiv.org/abs/2310.06009v1 | http://arxiv.org/pdf/2310.06009v1 | 2310.06009v1 |
Grokking as Compression: A Nonlinear Complexity Perspective | We attribute grokking, the phenomenon where generalization is much delayed
after memorization, to compression. To do so, we define linear mapping number
(LMN) to measure network complexity, which is a generalized version of linear
region number for ReLU networks. LMN can nicely characterize neural network
compression before generalization. Although the $L_2$ norm has been a popular
choice for characterizing model complexity, we argue in favor of LMN for a
number of reasons: (1) LMN can be naturally interpreted as
information/computation, while $L_2$ cannot. (2) In the compression phase, LMN
has linear relations with test losses, while $L_2$ is correlated with test
losses in a complicated nonlinear way. (3) LMN also reveals an intriguing
phenomenon of the XOR network switching between two generalization solutions,
while $L_2$ does not. Besides explaining grokking, we argue that LMN is a
promising candidate as the neural network version of the Kolmogorov complexity
since it explicitly considers local or conditioned linear computations aligned
with the nature of modern artificial neural networks. | [
"Ziming Liu",
"Ziqian Zhong",
"Max Tegmark"
] | 2023-10-09 17:59:18 | http://arxiv.org/abs/2310.05918v1 | http://arxiv.org/pdf/2310.05918v1 | 2310.05918v1 |
FireAct: Toward Language Agent Fine-tuning | Recent efforts have augmented language models (LMs) with external tools or
environments, leading to the development of language agents that can reason and
act. However, most of these agents rely on few-shot prompting techniques with
off-the-shelf LMs. In this paper, we investigate and argue for the overlooked
direction of fine-tuning LMs to obtain language agents. Using a setup of
question answering (QA) with a Google search API, we explore a variety of base
LMs, prompting methods, fine-tuning data, and QA tasks, and find language
agents are consistently improved after fine-tuning their backbone LMs. For
example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4
leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct,
a novel approach to fine-tuning LMs with trajectories from multiple tasks and
prompting methods, and show having more diverse fine-tuning data can further
improve agents. Along with other findings regarding scaling effects,
robustness, generalization, efficiency and cost, our work establishes
comprehensive benefits of fine-tuning LMs for agents, and provides an initial
set of experimental designs, insights, as well as open questions toward
language agent fine-tuning. | [
"Baian Chen",
"Chang Shu",
"Ehsan Shareghi",
"Nigel Collier",
"Karthik Narasimhan",
"Shunyu Yao"
] | 2023-10-09 17:58:38 | http://arxiv.org/abs/2310.05915v1 | http://arxiv.org/pdf/2310.05915v1 | 2310.05915v1 |
NEFTune: Noisy Embeddings Improve Instruction Finetuning | We show that language model finetuning can be improved, sometimes
dramatically, with a simple augmentation. NEFTune adds noise to the embedding
vectors during training. Standard finetuning of LLaMA-2-7B using Alpaca
achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings.
NEFTune also improves over strong baselines on modern instruction datasets.
Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8%
improvement, and with OpenPlatypus an 8% improvement. Even powerful models
further refined with RLHF such as LLaMA-2-Chat benefit from additional training
with NEFTune. | [
"Neel Jain",
"Ping-yeh Chiang",
"Yuxin Wen",
"John Kirchenbauer",
"Hong-Min Chu",
"Gowthami Somepalli",
"Brian R. Bartoldson",
"Bhavya Kailkhura",
"Avi Schwarzschild",
"Aniruddha Saha",
"Micah Goldblum",
"Jonas Geiping",
"Tom Goldstein"
] | 2023-10-09 17:58:34 | http://arxiv.org/abs/2310.05914v2 | http://arxiv.org/pdf/2310.05914v2 | 2310.05914v2 |
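The noisy-embedding recipe summarized in the NEFTune abstract can be illustrated with a short PyTorch sketch. The uniform noise and the alpha/sqrt(L*d) scaling below follow the paper's public description but should be treated as assumptions; the function name and hyperparameters are illustrative, not the authors' released implementation.

```python
import torch

def neftune_forward(embed_layer, input_ids, noise_alpha=5.0, training=True):
    """Illustrative sketch: add scaled uniform noise to token embeddings
    during finetuning, as described in the NEFTune abstract above.
    The alpha / sqrt(L * d) scaling is an assumption taken from the paper's
    public description; check the released code for the exact recipe."""
    embeds = embed_layer(input_ids)              # (batch, L, d)
    if training:
        L, d = embeds.shape[1], embeds.shape[2]
        scale = noise_alpha / (L * d) ** 0.5
        noise = torch.zeros_like(embeds).uniform_(-1.0, 1.0) * scale
        embeds = embeds + noise
    return embeds

# Hypothetical usage with a toy embedding table:
embed = torch.nn.Embedding(1000, 64)
ids = torch.randint(0, 1000, (2, 16))
noisy = neftune_forward(embed, ids)
```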
SALMON: Self-Alignment with Principle-Following Reward Models | Supervised Fine-Tuning (SFT) on response demonstrations combined with
Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful
paradigm for aligning LLM-based AI agents. However, a significant limitation of
such an approach is its dependency on high-quality human annotations, making
its application to intricate tasks challenging due to difficulties in obtaining
consistent response demonstrations and in-distribution response preferences.
This paper presents a novel approach, namely SALMON (Self-ALignMent with
principle-fOllowiNg reward models), to align base language models with minimal
human supervision, using only a small set of human-defined principles, yet
achieving superior performance. Central to our approach is a
principle-following reward model. Trained on synthetic preference data, this
model can generate reward scores based on arbitrary human-defined principles.
By merely adjusting these principles during the RL training phase, we gain full
control over the preferences with the reward model, subsequently influencing
the behavior of the RL-trained policies, and eliminating the reliance on the
collection of online human preferences. Applying our method to the LLaMA-2-70b
base language model, we developed an AI assistant named Dromedary-2. With only
6 exemplars for in-context learning and 31 human-defined principles,
Dromedary-2 significantly surpasses the performance of several state-of-the-art
AI systems, including LLaMA-2-Chat-70b, on various benchmark datasets. We have
open-sourced the code and model weights to encourage further research into
aligning LLM-based AI agents with enhanced supervision efficiency, improved
controllability, and scalable oversight. | [
"Zhiqing Sun",
"Yikang Shen",
"Hongxin Zhang",
"Qinhong Zhou",
"Zhenfang Chen",
"David Cox",
"Yiming Yang",
"Chuang Gan"
] | 2023-10-09 17:56:53 | http://arxiv.org/abs/2310.05910v1 | http://arxiv.org/pdf/2310.05910v1 | 2310.05910v1 |
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models | The full potential of large pretrained models remains largely untapped in
control domains like robotics. This is mainly because of the scarcity of data
and the computational challenges associated with training or fine-tuning these
large models for such applications. Prior work mainly emphasizes effective
pretraining of large models for decision-making, with little exploration into
how to perform data-efficient continual adaptation of these models for new
tasks. Recognizing these constraints, we introduce TAIL (Task-specific Adapters
for Imitation Learning), a framework for efficient adaptation to new control
tasks. Inspired by recent advancements in parameter-efficient fine-tuning in
language domains, we explore efficient fine-tuning techniques -- e.g.,
Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA) -- in TAIL to
adapt large pretrained models for new tasks with limited demonstration data.
Our extensive experiments in large-scale language-conditioned manipulation
tasks comparing prevalent parameter-efficient fine-tuning techniques and
adaptation baselines suggest that TAIL with LoRA can achieve the best
post-adaptation performance with only 1% of the trainable parameters of full
fine-tuning, while avoiding catastrophic forgetting and preserving adaptation
plasticity in continual learning settings. | [
"Zuxin Liu",
"Jesse Zhang",
"Kavosh Asadi",
"Yao Liu",
"Ding Zhao",
"Shoham Sabach",
"Rasool Fakoor"
] | 2023-10-09 17:49:50 | http://arxiv.org/abs/2310.05905v1 | http://arxiv.org/pdf/2310.05905v1 | 2310.05905v1 |
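One of the parameter-efficient techniques the TAIL abstract lists, Low-Rank Adaptation (LoRA), can be sketched in a few lines. This is a generic LoRA wrapper, not the TAIL codebase; the rank and alpha values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a Low-Rank Adaptation (LoRA) wrapper, one of the
    parameter-efficient techniques mentioned in the TAIL abstract above.
    Only the low-rank factors A and B are trained; the frozen base weight
    is left untouched. Hyperparameters are illustrative."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weight
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))
```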
Learning to Decode the Surface Code with a Recurrent, Transformer-Based Neural Network | Quantum error-correction is a prerequisite for reliable quantum computation.
Towards this goal, we present a recurrent, transformer-based neural network
which learns to decode the surface code, the leading quantum error-correction
code. Our decoder outperforms state-of-the-art algorithmic decoders on
real-world data from Google's Sycamore quantum processor for distance 3 and 5
surface codes. On distances up to 11, the decoder maintains its advantage on
simulated data with realistic noise including cross-talk, leakage, and analog
readout signals, and sustains its accuracy far beyond the 25 cycles it was
trained on. Our work illustrates the ability of machine learning to go beyond
human-designed algorithms by learning from data directly, highlighting machine
learning as a strong contender for decoding in quantum computers. | [
"Johannes Bausch",
"Andrew W Senior",
"Francisco J H Heras",
"Thomas Edlich",
"Alex Davies",
"Michael Newman",
"Cody Jones",
"Kevin Satzinger",
"Murphy Yuezhen Niu",
"Sam Blackwell",
"George Holland",
"Dvir Kafri",
"Juan Atalaya",
"Craig Gidney",
"Demis Hassabis",
"Sergio Boixo",
"Hartmut Neven",
"Pushmeet Kohli"
] | 2023-10-09 17:41:37 | http://arxiv.org/abs/2310.05900v1 | http://arxiv.org/pdf/2310.05900v1 | 2310.05900v1 |
Lion Secretly Solves Constrained Optimization: As Lyapunov Predicts | Lion (Evolved Sign Momentum), a new optimizer discovered through program
search, has shown promising results in training large AI models. It performs
comparably or favorably to AdamW but with greater memory efficiency. As we can
expect from the results of a random search program, Lion incorporates elements
from several existing algorithms, including signed momentum, decoupled weight
decay, Polak, and Nesterov momentum, but does not fit into any existing
category of theoretically grounded optimizers. Thus, even though Lion appears
to perform well as a general-purpose optimizer for a wide range of tasks, its
theoretical basis remains uncertain. This lack of theoretical clarity limits
opportunities to further enhance and expand Lion's efficacy.
This work aims to demystify Lion. Based on both continuous-time and
discrete-time analysis, we demonstrate that Lion is a theoretically novel and
principled approach for minimizing a general loss function $f(x)$ while
enforcing a bound constraint $\|x\|_\infty \leq 1/\lambda$. Lion achieves this
through the incorporation of decoupled weight decay, where $\lambda$ represents
the weight decay coefficient. Our analysis is made possible by the development
of a new Lyapunov function for the Lion updates. It applies to a broader family
of Lion-$\kappa$ algorithms, where the $\text{sign}(\cdot)$ operator in Lion is
replaced by the subgradient of a convex function $\kappa$, leading to the
solution of a general composite optimization problem of $\min_x f(x) +
\kappa^*(x)$. Our findings provide valuable insights into the dynamics of Lion
and pave the way for further improvements and extensions of Lion-related
algorithms. | [
"Lizhang Chen",
"Bo Liu",
"Kaizhao Liang",
"Qiang Liu"
] | 2023-10-09 17:41:29 | http://arxiv.org/abs/2310.05898v2 | http://arxiv.org/pdf/2310.05898v2 | 2310.05898v2 |
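The abstract's description of Lion (the sign of an interpolated momentum plus decoupled weight decay with coefficient lambda) corresponds to the single-tensor update sketched below. The coefficients are common defaults and the flat-tensor form is a simplification for illustration, not a drop-in optimizer.

```python
import torch

def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.1):
    """Illustrative sketch of the Lion update: sign of an interpolated
    momentum plus decoupled weight decay, matching the abstract's claim
    that the weight decay coefficient induces the bound ||x||_inf <= 1/lambda.
    Coefficients are common defaults, not prescriptions."""
    update = torch.sign(beta1 * momentum + (1 - beta1) * grad)
    param.data.add_(update + wd * param.data, alpha=-lr)   # decoupled weight decay
    momentum.mul_(beta2).add_(grad, alpha=1 - beta2)
    return param, momentum

x = torch.randn(10)
m = torch.zeros_like(x)
g = torch.randn(10)
x, m = lion_step(x, g, m)
```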
A Generalization Bound of Deep Neural Networks for Dependent Data | Existing generalization bounds for deep neural networks require data to be
independent and identically distributed (iid). This assumption may not hold in
real-life applications such as evolutionary biology, infectious disease
epidemiology, and stock price prediction. This work establishes a
generalization bound of feed-forward neural networks for non-stationary
$\phi$-mixing data. | [
"Quan Huu Do",
"Binh T. Nguyen",
"Lam Si Tung Ho"
] | 2023-10-09 17:33:37 | http://arxiv.org/abs/2310.05892v1 | http://arxiv.org/pdf/2310.05892v1 | 2310.05892v1 |
Streaming Anchor Loss: Augmenting Supervision with Temporal Significance | Streaming neural network models for fast frame-wise responses to various
speech and sensory signals are widely adopted on resource-constrained
platforms. Hence, increasing the learning capacity of such streaming models
(i.e., by adding more parameters) to improve the predictive power may not be
viable for real-world tasks. In this work, we propose a new loss, Streaming
Anchor Loss (SAL), to better utilize the given learning capacity by encouraging
the model to learn more from essential frames. More specifically, our SAL and
its focal variations dynamically modulate the frame-wise cross entropy loss
based on the importance of the corresponding frames so that a higher loss
penalty is assigned for frames within the temporal proximity of semantically
critical events. Therefore, our loss ensures that the model training focuses on
predicting the relatively rare but task-relevant frames. Experimental results
with standard lightweight convolutional and recurrent streaming networks on
three different speech based detection tasks demonstrate that SAL enables the
model to learn the overall task more effectively with improved accuracy and
latency, without any additional data, model parameters, or architectural
changes. | [
"Utkarsh Oggy Sarawgi",
"John Berkowitz",
"Vineet Garg",
"Arnav Kundu",
"Minsik Cho",
"Sai Srujana Buddi",
"Saurabh Adya",
"Ahmed Tewfik"
] | 2023-10-09 17:28:35 | http://arxiv.org/abs/2310.05886v1 | http://arxiv.org/pdf/2310.05886v1 | 2310.05886v1 |
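A minimal sketch of the idea behind Streaming Anchor Loss: weight the frame-wise cross entropy by proximity to annotated critical frames. The Gaussian proximity weighting and the `anchor_frames` input are assumptions made for illustration; the paper's exact modulation may differ.

```python
import torch
import torch.nn.functional as F

def streaming_anchor_loss(logits, targets, anchor_frames, sigma=5.0):
    """Sketch of the idea in the abstract above: modulate frame-wise
    cross entropy so frames near semantically critical (anchor) events get
    a larger penalty. The Gaussian proximity weighting here is an
    illustrative assumption, not the paper's exact formulation.
    logits: (T, C), targets: (T,), anchor_frames: list of frame indices."""
    T = logits.shape[0]
    t = torch.arange(T, dtype=torch.float32)
    if len(anchor_frames) > 0:
        anchors = torch.tensor(anchor_frames, dtype=torch.float32)
        dist = (t[:, None] - anchors[None, :]).abs().min(dim=1).values
        weights = 1.0 + torch.exp(-(dist ** 2) / (2 * sigma ** 2))
    else:
        weights = torch.ones(T)
    ce = F.cross_entropy(logits, targets, reduction="none")   # per-frame loss (T,)
    return (weights * ce).mean()

loss = streaming_anchor_loss(torch.randn(100, 3), torch.randint(0, 3, (100,)), [20, 60])
```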
A Meta-Learning Perspective on Transformers for Causal Language Modeling | The Transformer architecture has become prominent in developing large causal
language models. However, mechanisms to explain its capabilities are not well
understood. Focused on the training process, here we establish a meta-learning
view of the Transformer architecture when trained for the causal language
modeling task, by explicating an inner optimization process that may happen
within the Transformer. Further, from within the inner optimization, we
discover and theoretically analyze a special characteristic of the norms of
learned token representations within Transformer-based causal language models.
Our analysis is supported by experiments conducted on pre-trained large
language models and real-world data. | [
"Xinbo Wu",
"Lav R. Varshney"
] | 2023-10-09 17:27:36 | http://arxiv.org/abs/2310.05884v1 | http://arxiv.org/pdf/2310.05884v1 | 2310.05884v1 |
A Machine Learning Approach to Predicting Single Event Upsets | A single event upset (SEU) is a critical soft error that occurs in
semiconductor devices on exposure to ionising particles from space
environments. SEUs cause bit flips in the memory component of semiconductors.
This creates a multitude of safety hazards as stored information becomes less
reliable. Currently, SEUs are only detected several hours after their
occurrence. CREMER, the model presented in this paper, predicts SEUs in advance
using machine learning. CREMER uses only positional data to predict SEU
occurrence, making it robust, inexpensive and scalable. Upon implementation,
the improved reliability of memory devices will create a digitally safer
environment onboard space vehicles. | [
"Archit Gupta",
"Chong Yock Eng",
"Deon Lim Meng Wee",
"Rashna Analia Ahmed",
"See Min Sim"
] | 2023-10-09 17:19:49 | http://arxiv.org/abs/2310.05878v1 | http://arxiv.org/pdf/2310.05878v1 | 2310.05878v1 |
Dynamic value alignment through preference aggregation of multiple objectives | The development of ethical AI systems is currently geared toward setting
objective functions that align with human objectives. However, finding such
functions remains a research challenge, while in RL, setting rewards by hand is
a fairly standard approach. We present a methodology for dynamic value
alignment, where the values that are to be aligned with are dynamically
changing, using a multiple-objective approach. We apply this approach to extend
Deep $Q$-Learning to accommodate multiple objectives and evaluate this method
on a simplified two-leg intersection controlled by a switching agent. Our
approach dynamically accommodates the preferences of drivers on the system and
achieves better overall performance across three metrics (speeds, stops, and
waits) while integrating objectives that have competing or conflicting actions. | [
"Marcin Korecki",
"Damian Dailisan",
"Cesare Carissimo"
] | 2023-10-09 17:07:26 | http://arxiv.org/abs/2310.05871v1 | http://arxiv.org/pdf/2310.05871v1 | 2310.05871v1 |
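A possible shape for the multi-objective extension of Deep Q-Learning described above is a shared body with one Q-head per objective, scalarized at action-selection time by dynamically supplied preference weights. The linear scalarization and network sizes below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiObjectiveQNet(nn.Module):
    """Sketch of a Q-network with one head per objective (e.g. speeds,
    stops, waits) whose outputs are aggregated with dynamically changing
    preference weights, in the spirit of the abstract above."""
    def __init__(self, obs_dim, n_actions, n_objectives):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(64, n_actions) for _ in range(n_objectives)])

    def forward(self, obs):
        h = self.body(obs)
        return torch.stack([head(h) for head in self.heads], dim=-1)  # (B, A, O)

    def act(self, obs, preference_weights):
        q = self.forward(obs)                                  # (B, A, O)
        scalarized = (q * preference_weights).sum(dim=-1)      # (B, A)
        return scalarized.argmax(dim=-1)

net = MultiObjectiveQNet(obs_dim=8, n_actions=4, n_objectives=3)
action = net.act(torch.randn(1, 8), torch.tensor([0.5, 0.3, 0.2]))
```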
HyperAttention: Long-context Attention in Near-Linear Time | We present an approximate attention mechanism named HyperAttention to address
the computational challenges posed by the growing complexity of long contexts
used in Large Language Models (LLMs). Recent work suggests that in the
worst-case scenario, quadratic time is necessary unless the entries of the
attention matrix are bounded or the matrix has low stable rank. We introduce
two parameters which measure: (1) the max column norm in the normalized
attention matrix, and (2) the ratio of row norms in the unnormalized attention
matrix after detecting and removing large entries. We use these fine-grained
parameters to capture the hardness of the problem. Despite previous lower
bounds, we are able to achieve a linear time sampling algorithm even when the
matrix has unbounded entries or a large stable rank, provided the above
parameters are small. HyperAttention features a modular design that easily
accommodates integration of other fast low-level implementations, particularly
FlashAttention. Empirically, employing Locality Sensitive Hashing (LSH) to
identify large entries, HyperAttention outperforms existing methods, giving
significant speed improvements compared to state-of-the-art solutions like
FlashAttention. We validate the empirical performance of HyperAttention on a
variety of different long-context length datasets. For example, HyperAttention
makes the inference time of ChatGLM2 50% faster on 32k context length while
perplexity increases from 5.6 to 6.3. On larger context length, e.g., 131k,
with causal masking, HyperAttention offers 5-fold speedup on a single attention
layer. | [
"Insu Han",
"Rajesh Jayaram",
"Amin Karbasi",
"Vahab Mirrokni",
"David P. Woodruff",
"Amir Zandieh"
] | 2023-10-09 17:05:25 | http://arxiv.org/abs/2310.05869v2 | http://arxiv.org/pdf/2310.05869v2 | 2310.05869v2 |
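The two hardness parameters named in the HyperAttention abstract can be made concrete with a small (quadratic-time) diagnostic; HyperAttention itself never materializes the full matrix, and the large-entry threshold below is a hypothetical choice made only for illustration.

```python
import torch

def hyperattention_hardness_params(Q, K, threshold=None):
    """Illustrative (quadratic-time) computation, for a small example only,
    of the two parameters named in the abstract above:
    (1) max column norm of the row-softmax-normalized attention matrix,
    (2) ratio of max to min row norm of the unnormalized matrix after
        zeroing out entries above a threshold (large-entry removal).
    The threshold rule is an assumption for illustration."""
    d = Q.shape[-1]
    A = Q @ K.T / d ** 0.5                       # unnormalized attention logits
    D = torch.softmax(A, dim=-1)                 # normalized attention matrix
    max_col_norm = D.norm(dim=0).max()
    if threshold is None:
        threshold = A.abs().quantile(0.99)       # hypothetical large-entry cutoff
    A_small = torch.where(A.abs() > threshold, torch.zeros_like(A), A)
    row_norms = A_small.norm(dim=1)
    row_ratio = row_norms.max() / row_norms.min().clamp_min(1e-12)
    return max_col_norm.item(), row_ratio.item()

print(hyperattention_hardness_params(torch.randn(64, 32), torch.randn(64, 32)))
```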
Bio-inspired computational memory model of the Hippocampus: an approach to a neuromorphic spike-based Content-Addressable Memory | The brain has computational capabilities that surpass those of modern
systems, being able to solve complex problems efficiently in a simple way.
Neuromorphic engineering aims to mimic biology in order to develop new systems
capable of incorporating such capabilities. Bio-inspired learning systems
continue to be a challenge that must be solved, and much work needs to be done
in this regard. Among all brain regions, the hippocampus stands out as an
autoassociative short-term memory with the capacity to learn and recall
memories from any fragment of them. These characteristics make the hippocampus
an ideal candidate for developing bio-inspired learning systems that, in
addition, resemble content-addressable memories. Therefore, in this work we
propose a bio-inspired spiking content-addressable memory model based on the
CA3 region of the hippocampus with the ability to learn, forget and recall
memories, both orthogonal and non-orthogonal, from any fragment of them. The
model was implemented on the SpiNNaker hardware platform using Spiking Neural
Networks. A set of experiments based on functional, stress and applicability
tests were performed to demonstrate its correct functioning. This work presents
the first hardware implementation of a fully-functional bio-inspired spiking
hippocampal content-addressable memory model, paving the way for the
development of future more complex neuromorphic systems. | [
"Daniel Casanueva-Morato",
"Alvaro Ayuso-Martinez",
"Juan P. Dominguez-Morales",
"Angel Jimenez-Fernandez",
"Gabriel Jimenez-Moreno"
] | 2023-10-09 17:05:23 | http://arxiv.org/abs/2310.05868v1 | http://arxiv.org/pdf/2310.05868v1 | 2310.05868v1 |
Generative quantum machine learning via denoising diffusion probabilistic models | Deep generative models are key-enabling technology to computer vision, text
generation and large language models. Denoising diffusion probabilistic models
(DDPMs) have recently gained much attention due to their ability to generate
diverse and high-quality samples in many computer vision tasks, as well as to
incorporate flexible model architectures and relatively simple training scheme.
Quantum generative models, empowered by entanglement and superposition, have
brought new insight to learning classical and quantum data. Inspired by the
classical counterpart, we propose the quantum denoising diffusion probabilistic
models (QuDDPM) to enable efficiently trainable generative learning of quantum
data. QuDDPM adopts sufficient layers of circuits to guarantee expressivity,
while introducing multiple intermediate training tasks as interpolation between
the target distribution and noise to avoid barren plateaus and guarantee
efficient training. We demonstrate QuDDPM's capability in learning a correlated
quantum noise model and the topological structure of a nontrivial
distribution of quantum data. | [
"Bingzhi Zhang",
"Peng Xu",
"Xiaohui Chen",
"Quntao Zhuang"
] | 2023-10-09 17:03:08 | http://arxiv.org/abs/2310.05866v1 | http://arxiv.org/pdf/2310.05866v1 | 2310.05866v1 |
Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models | An increasing number of vision-language tasks can be handled with little to
no training, i.e., in a zero and few-shot manner, by marrying large language
models (LLMs) to vision encoders, resulting in large vision-language models
(LVLMs). While this has huge upsides, such as not requiring training data or
custom architectures, how an input is presented to a LVLM can have a major
impact on zero-shot model performance. In particular, inputs phrased in an
underspecified way can result in incorrect answers due to factors like missing
visual information, complex implicit reasoning, or linguistic ambiguity.
Therefore, adding visually grounded information to the input as a preemptive
clarification should improve model performance by reducing underspecification,
e.g., by localizing objects and disambiguating references. Similarly, in the
VQA setting, changing the way questions are framed can make them easier for
models to answer. To this end, we present Rephrase, Augment and Reason
(RepARe), a gradient-free framework that extracts salient details about the
image using the underlying LVLM as a captioner and reasoner, in order to
propose modifications to the original question. We then use the LVLM's
confidence over a generated answer as an unsupervised scoring function to
select the rephrased question most likely to improve zero-shot performance.
Focusing on two visual question answering tasks, we show that RepARe can result
in a 3.85% (absolute) increase in zero-shot performance on VQAv2 and a 6.41
percentage-point increase on A-OKVQA. Additionally, we find that using gold answers for
oracle question candidate selection achieves a substantial gain in VQA accuracy
by up to 14.41%. Through extensive analysis, we demonstrate that outputs from
RepARe increase syntactic complexity, and effectively utilize vision-language
interaction and the frozen language model in LVLMs. | [
"Archiki Prasad",
"Elias Stengel-Eskin",
"Mohit Bansal"
] | 2023-10-09 16:57:57 | http://arxiv.org/abs/2310.05861v1 | http://arxiv.org/pdf/2310.05861v1 | 2310.05861v1 |
DSAC-T: Distributional Soft Actor-Critic with Three Refinements | Reinforcement learning (RL) has proven to be highly effective in tackling
complex decision-making and control tasks. However, prevalent model-free RL
methods often face severe performance degradation due to the well-known
overestimation issue. In response to this problem, we recently introduced an
off-policy RL algorithm, called distributional soft actor-critic (DSAC or
DSAC-v1), which can effectively improve the value estimation accuracy by
learning a continuous Gaussian value distribution. Nonetheless, standard DSAC
has its own shortcomings, including an occasionally unstable learning process
and the need for task-specific reward scaling, which may hinder its overall
performance and adaptability in some special tasks. This paper further
introduces three important refinements to standard DSAC in order to address
these shortcomings. These refinements consist of critic gradient adjusting,
twin value distribution learning, and variance-based target return clipping.
The modified RL algorithm is named as DSAC with three refinements (DSAC-T or
DSAC-v2), and its performance is systematically evaluated on a diverse set of
benchmark tasks. Without any task-specific hyperparameter tuning, DSAC-T
surpasses many mainstream model-free RL algorithms, including SAC, TD3,
DDPG, TRPO, and PPO, in all tested environments. Additionally, DSAC-T, unlike
its standard version, ensures a highly stable learning process and delivers
similar performance across varying reward scales. | [
"Jingliang Duan",
"Wenxuan Wang",
"Liming Xiao",
"Jiaxin Gao",
"Shengbo Eben Li"
] | 2023-10-09 16:52:48 | http://arxiv.org/abs/2310.05858v1 | http://arxiv.org/pdf/2310.05858v1 | 2310.05858v1 |
Improving Summarization with Human Edits | Recent work has shown the promise of learning with human feedback paradigms
to produce human-determined high-quality text. Existing works use human
feedback to train large language models (LLMs) in general domain abstractive
summarization and have obtained summary quality exceeding traditional
likelihood training. In this paper, we focus on a less explored form of human
feedback -- Human Edits. We propose Sequence Alignment (un)Likelihood Training
(SALT), a novel technique to use both the human-edited and model-generated data
together in the training loop. In addition, we demonstrate simulating Human
Edits with ground-truth summaries drawn from existing training data --
Imitation edits -- together with the model-generated summaries obtained after
training, to reduce the need for expensive human-edit data. In our experiments,
we extend human feedback exploration from general domain summarization to
medical domain summarization. Our results demonstrate the effectiveness of SALT
to improve the summary quality with Human and Imitation Edits. | [
"Zonghai Yao",
"Benjamin J Schloss",
"Sai P. Selvaraj"
] | 2023-10-09 16:52:07 | http://arxiv.org/abs/2310.05857v1 | http://arxiv.org/pdf/2310.05857v1 | 2310.05857v1 |
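The (un)likelihood idea in the SALT abstract can be sketched as a token-level loss in which tokens flagged by an alignment against the human edit are penalized with -log(1 - p) instead of the usual negative log-likelihood. The alignment step is omitted here and the equal weighting of the two terms is an assumption.

```python
import torch
import torch.nn.functional as F

def salt_token_loss(logits, target_ids, negative_mask):
    """Sketch in the spirit of the SALT abstract above. Tokens flagged by a
    sequence alignment against the human edit (negative_mask=True) receive
    an unlikelihood penalty -log(1 - p); the rest receive standard NLL.
    logits: (T, V), target_ids: (T,), negative_mask: bool (T,)."""
    log_probs = F.log_softmax(logits, dim=-1)                       # (T, V)
    tok_logp = log_probs.gather(1, target_ids[:, None]).squeeze(1)  # (T,)
    nll = -tok_logp
    unlikelihood = -torch.log1p(-tok_logp.exp().clamp(max=1 - 1e-6))
    return torch.where(negative_mask, unlikelihood, nll).mean()

T, V = 12, 100
loss = salt_token_loss(torch.randn(T, V), torch.randint(0, V, (T,)),
                       torch.rand(T) < 0.25)
```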
Robust Angular Synchronization via Directed Graph Neural Networks | The angular synchronization problem aims to accurately estimate (up to a
constant additive phase) a set of unknown angles $\theta_1, \dots,
\theta_n\in[0, 2\pi)$ from $m$ noisy measurements of their offsets
$\theta_i-\theta_j \;\mbox{mod} \; 2\pi.$ Applications include, for example,
sensor network localization, phase retrieval, and distributed clock
synchronization. An extension of the problem to the heterogeneous setting
(dubbed $k$-synchronization) is to estimate $k$ groups of angles
simultaneously, given noisy observations (with unknown group assignment) from
each group. Existing methods for angular synchronization usually perform poorly
in high-noise regimes, which are common in applications. In this paper, we
leverage neural networks for the angular synchronization problem, and its
heterogeneous extension, by proposing GNNSync, a theoretically-grounded
end-to-end trainable framework using directed graph neural networks. In
addition, new loss functions are devised to encode synchronization objectives.
Experimental results on extensive data sets demonstrate that GNNSync attains
competitive, and often superior, performance against a comprehensive set of
baselines for the angular synchronization problem and its extension, validating
the robustness of GNNSync even at high noise levels. | [
"Yixuan He",
"Gesine Reinert",
"David Wipf",
"Mihai Cucuringu"
] | 2023-10-09 16:37:19 | http://arxiv.org/abs/2310.05842v1 | http://arxiv.org/pdf/2310.05842v1 | 2310.05842v1 |
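For reference, the angular synchronization problem stated in the abstract has a classical spectral baseline (not GNNSync itself): form a Hermitian matrix from the noisy offsets and read the estimated angles off its top eigenvector.

```python
import numpy as np

def spectral_angular_sync(n, offsets):
    """Classical spectral baseline for the angular synchronization problem
    defined in the abstract above (not GNNSync): build a Hermitian matrix
    from noisy pairwise offsets theta_i - theta_j mod 2*pi and recover the
    angles, up to a global shift, from the leading eigenvector.
    offsets: dict {(i, j): delta}."""
    H = np.zeros((n, n), dtype=complex)
    for (i, j), delta in offsets.items():
        H[i, j] = np.exp(1j * delta)
        H[j, i] = np.exp(-1j * delta)
    w, V = np.linalg.eigh(H)
    v = V[:, -1]                                # eigenvector of largest eigenvalue
    return np.mod(np.angle(v), 2 * np.pi)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 20)
obs = {(i, j): theta[i] - theta[j] + 0.1 * rng.normal()
       for i in range(20) for j in range(i + 1, 20)}
est = spectral_angular_sync(20, obs)
```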
Predicting Accident Severity: An Analysis Of Factors Affecting Accident Severity Using Random Forest Model | Road accidents have significant economic and societal costs, with a small
number of severe accidents accounting for a large portion of these costs.
Predicting accident severity can help in the proactive approach to road safety
by identifying potential unsafe road conditions and taking well-informed
actions to reduce the number of severe accidents. This study investigates the
effectiveness of the Random Forest machine learning algorithm for predicting
the severity of an accident. The model is trained on a dataset of accident
records from a large metropolitan area and evaluated using various metrics.
Hyperparameters and feature selection are optimized to improve the model's
performance. The results show that the Random Forest model is an effective tool
for predicting accident severity with an accuracy of over 80%. The study also
identifies the top six most important variables in the model, which include
wind speed, pressure, humidity, visibility, clear conditions, and cloud cover.
The fitted model has an Area Under the Curve of 80%, a recall of 79.2%, a
precision of 97.1%, and an F1 score of 87.3%. These results suggest that the
proposed model performs well at explaining the target variable, which
is the accident severity class. Overall, the study provides evidence that the
Random Forest model is a viable and reliable tool for predicting accident
severity and can be used to help reduce the number of fatalities and injuries
due to road accidents in the United States. | [
"Adekunle Adefabi",
"Somtobe Olisah",
"Callistus Obunadike",
"Oluwatosin Oyetubo",
"Esther Taiwo",
"Edward Tella"
] | 2023-10-09 16:33:44 | http://arxiv.org/abs/2310.05840v1 | http://arxiv.org/pdf/2310.05840v1 | 2310.05840v1 |
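The modelling pipeline described in the abstract maps onto a standard scikit-learn workflow. The sketch below uses synthetic stand-in features, since the original accident dataset and its exact preprocessing are not given here; column names such as wind speed, pressure, and humidity would replace the generated features.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.datasets import make_classification

# Generic sketch of the modelling pipeline the abstract above describes,
# using synthetic stand-in features; the original accident records are not
# reproduced here.
X, y = make_classification(n_samples=2000, n_features=6, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, max_depth=None, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("recall:", recall_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("f1:", f1_score(y_te, pred))
print("feature importances:", clf.feature_importances_)
```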
A Bias-Variance-Covariance Decomposition of Kernel Scores for Generative Models | Generative models, like large language models, are becoming increasingly
relevant in our daily lives, yet a theoretical framework to assess their
generalization behavior and uncertainty does not exist. Particularly, the
problem of uncertainty estimation is commonly solved in an ad-hoc, task-dependent
manner. For example, natural language approaches cannot be transferred
to image generation. In this paper we introduce the first
bias-variance-covariance decomposition for kernel scores and their associated
entropy. We propose unbiased and consistent estimators for each quantity which
only require generated samples but not the underlying model itself. As an
application, we offer a generalization evaluation of diffusion models and
discover how mode collapse of minority groups is a contrary phenomenon to
overfitting. Further, we demonstrate that variance and predictive kernel
entropy are viable measures of uncertainty for image, audio, and language
generation. Specifically, our approach for uncertainty estimation is more
predictive of performance on CoQA and TriviaQA question answering datasets than
existing baselines and can also be applied to closed-source models. | [
"Sebastian G. Gruber",
"Florian Buettner"
] | 2023-10-09 16:22:11 | http://arxiv.org/abs/2310.05833v1 | http://arxiv.org/pdf/2310.05833v1 | 2310.05833v1 |
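The abstract's claim that the estimators require only generated samples can be illustrated with an unbiased sample-based kernel score; the Gaussian kernel and the lower-is-better sign convention used below are assumptions for this sketch, not the paper's exact definitions.

```python
import numpy as np

def kernel_score(samples, y, kernel):
    """Unbiased sample-based estimate of a kernel score, in the spirit of
    the abstract above: it needs only samples drawn from the generative
    model, not the model itself. Sign convention (lower is better) and the
    Gaussian kernel passed in below are illustrative assumptions."""
    n = len(samples)
    K = kernel(samples[:, None, :], samples[None, :, :])   # (n, n) Gram matrix
    off_diag = (K.sum() - np.trace(K)) / (n * (n - 1))     # unbiased E k(X, X')
    cross = kernel(samples, y[None, :]).mean()             # estimate of E k(X, y)
    return 0.5 * off_diag - cross

gauss = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1) / 2.0)
rng = np.random.default_rng(0)
gen = rng.normal(size=(200, 3))          # stand-in for model-generated samples
score = kernel_score(gen, rng.normal(size=3), gauss)
```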
Pre-trained Spatial Priors on Multichannel NMF for Music Source Separation | This paper presents a novel approach to sound source separation that
leverages spatial information obtained during the recording setup. Our method
trains a spatial mixing filter using solo passages to capture information about
the room impulse response and transducer response at each sensor location. This
pre-trained filter is then integrated into a multichannel non-negative matrix
factorization (MNMF) scheme to better capture the variances of different sound
sources. The recording setup used in our experiments is the typical setup for
orchestra recordings, with a main microphone and a close "cardioid" or
"supercardioid" microphone for each section of the orchestra. This makes the
proposed method applicable to many existing recordings. Experiments on
polyphonic ensembles demonstrate the effectiveness of the proposed framework in
separating individual sound sources, improving performance compared to
conventional MNMF methods. | [
"Pablo Cabanas-Molero",
"Antonio J. Munoz-Montoro",
"Julio Carabias-Orti",
"Pedro Vera-Candeas"
] | 2023-10-09 16:05:43 | http://arxiv.org/abs/2310.05821v1 | http://arxiv.org/pdf/2310.05821v1 | 2310.05821v1 |
Provably Convergent Data-Driven Convex-Nonconvex Regularization | An emerging new paradigm for solving inverse problems is via the use of deep
learning to learn a regularizer from data. This leads to high-quality results,
but often at the cost of provable guarantees. In this work, we show how
well-posedness and convergent regularization arise within the convex-nonconvex
(CNC) framework for inverse problems. We introduce a novel input weakly convex
neural network (IWCNN) construction to adapt the method of learned adversarial
regularization to the CNC framework. Empirically we show that our method
overcomes numerical issues of previous adversarial methods. | [
"Zakhar Shumaylov",
"Jeremy Budd",
"Subhadip Mukherjee",
"Carola-Bibiane Schönlieb"
] | 2023-10-09 15:52:59 | http://arxiv.org/abs/2310.05812v1 | http://arxiv.org/pdf/2310.05812v1 | 2310.05812v1 |