title (string, 9-208 chars) | abstract (string, 280-2.36k chars) | authors (sequence) | published (string, 19 chars) | url (string, 33 chars) | pdf_url (string, 33 chars) | arxiv_id (string, 12 chars) |
---|---|---|---|---|---|---|
On the Disconnect Between Theory and Practice of Overparametrized Neural Networks | The infinite-width limit of neural networks (NNs) has garnered significant
attention as a theoretical framework for analyzing the behavior of large-scale,
overparametrized networks. By approaching infinite width, NNs effectively
converge to a linear model with features characterized by the neural tangent
kernel (NTK). This establishes a connection between NNs and kernel methods, the
latter of which are well understood. Based on this link, theoretical benefits
and algorithmic improvements have been hypothesized and empirically
demonstrated in synthetic architectures. These advantages include faster
optimization, reliable uncertainty quantification and improved continual
learning. However, current results quantifying the rate of convergence to the
kernel regime suggest that exploiting these benefits requires architectures
that are orders of magnitude wider than they are deep. This requirement raises
concerns that practically relevant architectures do not exhibit behavior as
predicted by the NTK. In this work, we empirically investigate whether the
limiting regime either describes the behavior of large-width architectures used
in practice or is informative for algorithmic improvements. Our empirical
results demonstrate that this is not the case in optimization, uncertainty
quantification or continual learning. This observed disconnect between theory
and practice calls into question the practical relevance of the infinite-width
limit. | [
"Jonathan Wenger",
"Felix Dangel",
"Agustinus Kristiadi"
] | 2023-09-29 20:51:24 | http://arxiv.org/abs/2310.00137v1 | http://arxiv.org/pdf/2310.00137v1 | 2310.00137v1 |
Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs | Memory complexity and data scarcity have so far prohibited learning solution
operators of partial differential equations (PDEs) at high resolutions. We
address these limitations by introducing a new data efficient and highly
parallelizable operator learning approach with reduced memory requirement and
better generalization, called the multi-grid tensorized Fourier neural operator (MG-TFNO).
MG-TFNO scales to large resolutions by leveraging local and global structures
of full-scale, real-world phenomena, through a decomposition of both the input
domain and the operator's parameter space. Our contributions are threefold: i)
we enable parallelization over input samples with a novel multi-grid-based
domain decomposition, ii) we represent the parameters of the model in a
high-order latent subspace of the Fourier domain, through a global tensor
factorization, resulting in an extreme reduction in the number of parameters
and improved generalization, and iii) we propose architectural improvements to
the backbone FNO. Our approach can be used in any operator learning setting. We
demonstrate superior performance on the turbulent Navier-Stokes equations where
we achieve less than half the error with over 150x compression. The
tensorization combined with the domain decomposition yields over 150x
reduction in the number of parameters and a 7x reduction in the domain size
without loss in accuracy, while additionally enabling parallelism. | [
"Jean Kossaifi",
"Nikola Kovachki",
"Kamyar Azizzadenesheli",
"Anima Anandkumar"
] | 2023-09-29 20:18:52 | http://arxiv.org/abs/2310.00120v1 | http://arxiv.org/pdf/2310.00120v1 | 2310.00120v1 |
ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models | Exploring alternative ideas by rewriting text is integral to the writing
process. State-of-the-art large language models (LLMs) can simplify writing
variation generation. However, current interfaces pose challenges for
simultaneous consideration of multiple variations: creating new versions
without overwriting text can be difficult, and pasting them sequentially can
clutter documents, increasing workload and disrupting writers' flow. To tackle
this, we present ABScribe, an interface that supports rapid, yet visually
structured, exploration of writing variations in human-AI co-writing tasks.
With ABScribe, users can swiftly produce multiple variations using LLM prompts,
which are auto-converted into reusable buttons. Variations are stored
adjacently within text segments for rapid in-place comparisons using mouse-over
interactions on a context toolbar. Our user study with 12 writers shows that
ABScribe significantly reduces task workload (d = 1.20, p < 0.001), enhances
user perceptions of the revision process (d = 2.41, p < 0.001) compared to a
popular baseline workflow, and provides insights into how writers explore
variations using LLMs. | [
"Mohi Reza",
"Nathan Laundry",
"Ilya Musabirov",
"Peter Dushniku",
"Zhi Yuan \"Michael\" Yu",
"Kashish Mittal",
"Tovi Grossman",
"Michael Liut",
"Anastasia Kuzminykh",
"Joseph Jay Williams"
] | 2023-09-29 20:11:15 | http://arxiv.org/abs/2310.00117v2 | http://arxiv.org/pdf/2310.00117v2 | 2310.00117v2 |
Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization | To improve the robustness of deep classifiers against adversarial
perturbations, many approaches have been proposed, such as designing new
architectures with better robustness properties (e.g., Lipschitz-capped
networks), or modifying the training process itself (e.g., min-max
optimization, constrained learning, or regularization). These approaches,
however, might not be effective at increasing the margin in the input (feature)
space. As a result, there has been an increasing interest in developing
training procedures that can directly manipulate the decision boundary in the
input space. In this paper, we build upon recent developments in this category
by developing a robust training algorithm whose objective is to increase the
margin in the output (logit) space while regularizing the Lipschitz constant of
the model along vulnerable directions. We show that these two objectives can
directly promote larger margins in the input space. To this end, we develop a
scalable method for calculating guaranteed differentiable upper bounds on the
Lipschitz constant of neural networks accurately and efficiently. The relative
accuracy of the bounds prevents excessive regularization and allows for more
direct manipulation of the decision boundary. Furthermore, our Lipschitz
bounding algorithm exploits the monotonicity and Lipschitz continuity of the
activation layers, and the resulting bounds can be used to design new layers
with controllable bounds on their Lipschitz constant. Experiments on the MNIST,
CIFAR-10, and Tiny-ImageNet data sets verify that our proposed algorithm
obtains improved results that are competitive with the state-of-the-art. | [
"Mahyar Fazlyab",
"Taha Entesari",
"Aniket Roy",
"Rama Chellappa"
] | 2023-09-29 20:07:02 | http://arxiv.org/abs/2310.00116v1 | http://arxiv.org/pdf/2310.00116v1 | 2310.00116v1 |
Learning Over Molecular Conformer Ensembles: Datasets and Benchmarks | Molecular Representation Learning (MRL) has proven impactful in numerous
biochemical applications such as drug discovery and enzyme design. While Graph
Neural Networks (GNNs) are effective at learning molecular representations from
a 2D molecular graph or a single 3D structure, existing works often overlook
the flexible nature of molecules, which continuously interconvert across
conformations via chemical bond rotations and minor vibrational perturbations.
To better account for molecular flexibility, some recent works formulate MRL as
an ensemble learning problem, focusing on explicitly learning from a set of
conformer structures. However, most of these studies have limited datasets,
tasks, and models. In this work, we introduce the first MoleculAR Conformer
Ensemble Learning (MARCEL) benchmark to thoroughly evaluate the potential of
learning on conformer ensembles and suggest promising research directions.
MARCEL includes four datasets covering diverse molecule- and reaction-level
properties of chemically diverse molecules including organocatalysts and
transition-metal catalysts, extending beyond the scope of common GNN benchmarks
that are confined to drug-like molecules. In addition, we conduct a
comprehensive empirical study, which benchmarks representative 1D, 2D, and 3D
molecular representation learning models, along with two strategies that
explicitly incorporate conformer ensembles into 3D MRL models. Our findings
reveal that direct learning from an accessible conformer space can improve
performance on a variety of tasks and models. | [
"Yanqiao Zhu",
"Jeehyun Hwang",
"Keir Adams",
"Zhen Liu",
"Bozhao Nan",
"Brock Stenfors",
"Yuanqi Du",
"Jatin Chauhan",
"Olaf Wiest",
"Olexandr Isayev",
"Connor W. Coley",
"Yizhou Sun",
"Wei Wang"
] | 2023-09-29 20:06:46 | http://arxiv.org/abs/2310.00115v1 | http://arxiv.org/pdf/2310.00115v1 | 2310.00115v1 |
HyperMask: Adaptive Hypernetwork-based Masks for Continual Learning | Artificial neural networks suffer from catastrophic forgetting when they are
sequentially trained on multiple tasks. To overcome this problem, there exist
many continual learning strategies. One of the most effective is the
hypernetwork-based approach. The hypernetwork generates the weights of a target
model based on the task's identity. The model's main limitation is that the
hypernetwork can produce completely different networks for each task.
Consequently, each task is solved separately. The model does not use
information from the network dedicated to previous tasks and practically
produces new architectures when it learns subsequent tasks. To solve such a
problem, we use the lottery ticket hypothesis, which postulates the existence
of sparse subnetworks, named winning tickets, that preserve the performance of
a full network. In the paper, we propose a method called HyperMask, which
trains a single network for all tasks. The hypernetwork produces semi-binary masks
to obtain target subnetworks dedicated to new tasks. This solution inherits the
ability of the hypernetwork to adapt to new tasks with minimal forgetting.
Moreover, due to the lottery ticket hypothesis, we can use a single network
with weighted subnets dedicated to each task. | [
"Kamil Książek",
"Przemysław Spurek"
] | 2023-09-29 20:01:11 | http://arxiv.org/abs/2310.00113v2 | http://arxiv.org/pdf/2310.00113v2 | 2310.00113v2 |
Reinforcement Learning for Node Selection in Branch-and-Bound | A big challenge in branch and bound lies in identifying the optimal node
within the search tree from which to proceed. Current state-of-the-art
selectors utilize either hand-crafted ensembles that automatically switch
between naive sub-node selectors, or learned node selectors that rely on
individual node data. We propose a novel bi-simulation technique that uses
reinforcement learning (RL) while considering the entire tree state, rather
than just isolated nodes. To achieve this, we train a graph neural network that
produces a probability distribution based on the path from the model's root to
its ``to-be-selected'' leaves. Modelling node-selection as a probability
distribution allows us to train the model using state-of-the-art RL techniques
that capture both intrinsic node-quality and node-evaluation costs. Our method
induces a high quality node selection policy on a set of varied and complex
problem sets, despite only being trained on specially designed, synthetic TSP
instances. Experiments on several benchmarks show significant improvements in
optimality gap reductions and per-node efficiency under strict time
constraints. | [
"Alexander Mattick",
"Christopher Mutschler"
] | 2023-09-29 19:55:56 | http://arxiv.org/abs/2310.00112v1 | http://arxiv.org/pdf/2310.00112v1 | 2310.00112v1 |
Gradient and Uncertainty Enhanced Sequential Sampling for Global Fit | Surrogate models based on machine learning methods have become an important
part of modern engineering to replace costly computer simulations. The data
used for creating a surrogate model are essential for the model accuracy and
often restricted due to cost and time constraints. Adaptive sampling strategies
have been shown to reduce the number of samples needed to create an accurate
model. This paper proposes a new sampling strategy for global fit called
Gradient and Uncertainty Enhanced Sequential Sampling (GUESS). The acquisition
function uses two terms: the predictive posterior uncertainty of the surrogate
model for exploration of unseen regions and a weighted approximation of the
second and higher-order Taylor expansion values for exploitation. Although
various sampling strategies have been proposed so far, the selection of a
suitable method is not trivial. Therefore, we compared our proposed strategy to
9 adaptive sampling strategies for global surrogate modeling, based on 26
different 1- to 8-dimensional deterministic benchmark functions. Results show
that GUESS achieved on average the highest sample efficiency compared to other
surrogate-based strategies on the tested examples. An ablation study
considering the behavior of GUESS in higher dimensions and the importance of
surrogate choice is also presented. | [
"Sven Lämmle",
"Can Bogoclu",
"Kevin Cremanns",
"Dirk Roos"
] | 2023-09-29 19:49:39 | http://arxiv.org/abs/2310.00110v1 | http://arxiv.org/pdf/2310.00110v1 | 2310.00110v1 |
FedAIoT: A Federated Learning Benchmark for Artificial Intelligence of Things | Federated learning (FL) is highly relevant to the realm of
Artificial Intelligence of Things (AIoT). However, most existing FL works are
not conducted on datasets collected from authentic IoT devices that capture
unique modalities and inherent challenges of IoT data. In this work, we
introduce FedAIoT, an FL benchmark for AIoT to fill this critical gap. FedAIoT
includes eight datasets collected from a wide range of IoT devices. These
datasets cover unique IoT modalities and target representative applications of
AIoT. FedAIoT also includes a unified end-to-end FL framework for AIoT that
simplifies benchmarking the performance of the datasets. Our benchmark results
shed light on the opportunities and challenges of FL for AIoT. We hope FedAIoT
could serve as an invaluable resource to foster advancements in the important
field of FL for AIoT. The repository of FedAIoT is maintained at
https://github.com/AIoT-MLSys-Lab/FedAIoT. | [
"Samiul Alam",
"Tuo Zhang",
"Tiantian Feng",
"Hui Shen",
"Zhichao Cao",
"Dong Zhao",
"JeongGil Ko",
"Kiran Somasundaram",
"Shrikanth S. Narayanan",
"Salman Avestimehr",
"Mi Zhang"
] | 2023-09-29 19:46:56 | http://arxiv.org/abs/2310.00109v1 | http://arxiv.org/pdf/2310.00109v1 | 2310.00109v1 |
Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study | Membership inference attacks (MIAs) aim to infer whether a data point has
been used to train a machine learning model. These attacks can be employed to
identify potential privacy vulnerabilities and detect unauthorized use of
personal data. While MIAs have been traditionally studied for simple
classification models, recent advancements in multi-modal pre-training, such as
CLIP, have demonstrated remarkable zero-shot performance across a range of
computer vision tasks. However, the sheer scale of data and models presents
significant computational challenges for performing the attacks.
This paper takes a first step towards developing practical MIAs against
large-scale multi-modal models. We introduce a simple baseline strategy by
thresholding the cosine similarity between text and image features of a target
point and propose further enhancing the baseline by aggregating cosine
similarity across transformations of the target. We also present a new weakly
supervised attack method that leverages ground-truth non-members (e.g.,
obtained by using the publication date of a target model and the timestamps of
the open data) to further enhance the attack. Our evaluation shows that CLIP
models are susceptible to our attack strategies, with our simple baseline
achieving over $75\%$ membership identification accuracy. Furthermore, our
enhanced attacks outperform the baseline across multiple models and datasets,
with the weakly supervised attack demonstrating an average-case performance
improvement of $17\%$ and being at least $7$X more effective at low
false-positive rates. These findings highlight the importance of protecting the
privacy of multi-modal foundational models, which were previously assumed to be
less susceptible to MIAs due to less overfitting. Our code is available at
https://github.com/ruoxi-jia-group/CLIP-MIA. | [
"Myeongseob Ko",
"Ming Jin",
"Chenguang Wang",
"Ruoxi Jia"
] | 2023-09-29 19:38:40 | http://arxiv.org/abs/2310.00108v1 | http://arxiv.org/pdf/2310.00108v1 | 2310.00108v1 |
Latent Space Symmetry Discovery | Equivariant neural networks require explicit knowledge of the symmetry group.
Automatic symmetry discovery methods aim to relax this constraint and learn
invariance and equivariance from data. However, existing symmetry discovery
methods are limited to linear symmetries in their search space and cannot
handle the complexity of symmetries in real-world, often high-dimensional data.
We propose a novel generative model, Latent LieGAN (LaLiGAN), which can
discover nonlinear symmetries from data. It learns a mapping from data to a
latent space where the symmetries become linear and simultaneously discovers
symmetries in the latent space. Theoretically, we show that our method can
express any nonlinear symmetry under certain conditions. Experimentally, our
method can capture the intrinsic symmetry in high-dimensional observations,
which results in a well-structured latent space that is useful for other
downstream tasks. We demonstrate the use cases for LaLiGAN in improving
equation discovery and long-term forecasting for various dynamical systems. | [
"Jianke Yang",
"Nima Dehmamy",
"Robin Walters",
"Rose Yu"
] | 2023-09-29 19:33:01 | http://arxiv.org/abs/2310.00105v1 | http://arxiv.org/pdf/2310.00105v1 | 2310.00105v1 |
Federated Learning with Differential Privacy for End-to-End Speech Recognition | While federated learning (FL) has recently emerged as a promising approach to
train machine learning models, it is limited to only preliminary explorations
in the domain of automatic speech recognition (ASR). Moreover, FL does not
inherently guarantee user privacy and requires the use of differential privacy
(DP) for robust privacy guarantees. However, we are not aware of prior work on
applying DP to FL for ASR. In this paper, we aim to bridge this research gap by
formulating an ASR benchmark for FL with DP and establishing the first
baselines. First, we extend the existing research on FL for ASR by exploring
different aspects of recent $\textit{large end-to-end transformer models}$:
architecture design, seed models, data heterogeneity, domain shift, and impact
of cohort size. With a $\textit{practical}$ number of central aggregations we
are able to train $\textbf{FL models}$ that are $\textbf{nearly optimal}$ even
with heterogeneous data, a seed model from another domain, or no pre-trained
seed model. Second, we apply DP to FL for ASR, which is non-trivial since DP
noise severely affects model training, especially for large transformer models,
due to highly imbalanced gradients in the attention block. We counteract the
adverse effect of DP noise by reviving per-layer clipping and explaining why
its effect is more apparent in our case than in the prior work. Remarkably, we
achieve user-level ($7.2$, $10^{-9}$)-$\textbf{DP}$ (resp. ($4.5$,
$10^{-9}$)-$\textbf{DP}$) with a 1.3% (resp. 4.6%) absolute drop in the word
error rate for extrapolation to high (resp. low) population scale for
$\textbf{FL with DP in ASR}$. | [
"Martin Pelikan",
"Sheikh Shams Azam",
"Vitaly Feldman",
"Jan \"Honza\" Silovsky",
"Kunal Talwar",
"Tatiana Likhomanenko"
] | 2023-09-29 19:11:49 | http://arxiv.org/abs/2310.00098v1 | http://arxiv.org/pdf/2310.00098v1 | 2310.00098v1 |
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | Diffusion models showcased strong capabilities in image synthesis, being used
in many computer vision tasks with great success. To this end, we propose to
explore a new use case, namely to copy black-box classification models without
having access to the original training data, the architecture, and the weights
of the model, i.e., the model is only exposed through an inference API. More
specifically, we can only observe the (soft or hard) labels for some image
samples passed as input to the model. Furthermore, we consider an additional
constraint limiting the number of model calls, mostly focusing our research on
few-call model stealing. In order to solve the model extraction task given the
applied restrictions, we propose the following framework. As training data, we
create a synthetic data set (called proxy data set) by leveraging the ability
of diffusion models to generate realistic and diverse images. Given a maximum
number of allowed API calls, we pass the respective number of samples through
the black-box model to collect labels. Finally, we distill the knowledge of the
black-box teacher (attacked model) into a student model (copy of the attacked
model), harnessing both labeled and unlabeled data generated by the diffusion
model. We employ a novel active self-paced learning framework to make the most
of the proxy data during distillation. Our empirical results on two data sets
confirm the superiority of our framework over two state-of-the-art methods in
the few-call model extraction scenario. | [
"Vlad Hondru",
"Radu Tudor Ionescu"
] | 2023-09-29 19:09:27 | http://arxiv.org/abs/2310.00096v1 | http://arxiv.org/pdf/2310.00096v1 | 2310.00096v1 |
DataDAM: Efficient Dataset Distillation with Attention Matching | Researchers have long tried to minimize training costs in deep learning while
maintaining strong generalization across diverse datasets. Emerging research on
dataset distillation aims to reduce training costs by creating a small
synthetic set that contains the information of a larger real dataset and
ultimately achieves test accuracy equivalent to a model trained on the whole
dataset. Unfortunately, the synthetic data generated by previous methods are
not guaranteed to distribute and discriminate as well as the original training
data, and they incur significant computational costs. Despite promising
results, there still exists a significant performance gap between models
trained on condensed synthetic sets and those trained on the whole dataset. In
this paper, we address these challenges using efficient Dataset Distillation
with Attention Matching (DataDAM), achieving state-of-the-art performance while
reducing training costs. Specifically, we learn synthetic images by matching
the spatial attention maps of real and synthetic data generated by different
layers within a family of randomly initialized neural networks. Our method
outperforms the prior methods on several datasets, including CIFAR10/100,
TinyImageNet, ImageNet-1K, and subsets of ImageNet-1K across most of the
settings, and achieves improvements of up to 6.5% and 4.1% on CIFAR100 and
ImageNet-1K, respectively. We also show that our high-quality distilled images
have practical benefits for downstream applications, such as continual learning
and neural architecture search. | [
"Ahmad Sajedi",
"Samir Khaki",
"Ehsan Amjadian",
"Lucy Z. Liu",
"Yuri A. Lawryshyn",
"Konstantinos N. Plataniotis"
] | 2023-09-29 19:07:48 | http://arxiv.org/abs/2310.00093v1 | http://arxiv.org/pdf/2310.00093v1 | 2310.00093v1 |
Optimizing with Low Budgets: a Comparison on the Black-box Optimization Benchmarking Suite and OpenAI Gym | The growing ubiquity of machine learning (ML) has led it to enter various
areas of computer science, including black-box optimization (BBO). Recent
research is particularly concerned with Bayesian optimization (BO). BO-based
algorithms are popular in the ML community, as they are used for hyperparameter
optimization and more generally for algorithm configuration. However, their
efficiency decreases as the dimensionality of the problem and the budget of
evaluations increase. Meanwhile, derivative-free optimization methods have
evolved independently in the optimization community. Therefore, we seek to
understand whether cross-fertilization is possible between the two communities,
ML and BBO, i.e., whether algorithms that are heavily used in ML also work well
in BBO and vice versa. Comparative experiments often involve rather small
benchmarks and show visible problems in the experimental setup, such as poor
initialization of baselines, overfitting due to problem-specific setting of
hyperparameters, and low statistical significance.
With this paper, we update and extend a comparative study presented by Hutter
et al. in 2013. We compare BBO tools for ML with more classical heuristics,
first on the well-known BBOB benchmark suite from the COCO environment and then
on Direct Policy Search for OpenAI Gym, a reinforcement learning benchmark. Our
results confirm that BO-based optimizers perform well on both benchmarks when
budgets are limited, albeit with a higher computational cost, while they are
often outperformed by algorithms from other families when the evaluation budget
becomes larger. We also show that some algorithms from the BBO community
perform surprisingly well on ML tasks. | [
"Elena Raponi",
"Nathanael Rakotonirina Carraz",
"Jérémy Rapin",
"Carola Doerr",
"Olivier Teytaud"
] | 2023-09-29 18:33:10 | http://arxiv.org/abs/2310.00077v2 | http://arxiv.org/pdf/2310.00077v2 | 2310.00077v2 |
EPiC-ly Fast Particle Cloud Generation with Flow-Matching and Diffusion | Jets at the LHC, typically consisting of a large number of highly correlated
particles, are a fascinating laboratory for deep generative modeling. In this
paper, we present two novel methods that generate LHC jets as point clouds
efficiently and accurately. We introduce EPiC-JeDi, which combines
score-matching diffusion models with the Equivariant Point Cloud (EPiC)
architecture based on the deep sets framework. This model offers a much faster
alternative to previous transformer-based diffusion models without reducing the
quality of the generated jets. In addition, we introduce EPiC-FM, the first
permutation equivariant continuous normalizing flow (CNF) for particle cloud
generation. This model is trained with flow-matching, a scalable and
easy-to-train objective based on optimal transport that directly regresses the
vector fields connecting the Gaussian noise prior to the data distribution. Our
experiments demonstrate that EPiC-JeDi and EPiC-FM both achieve state-of-the-art
performance on the top-quark JetNet datasets whilst maintaining fast generation
speed. Most notably, we find that the EPiC-FM model consistently outperforms all
the other generative models considered here across every metric. Finally, we
also introduce two new particle cloud performance metrics: the first based on
the Kullback-Leibler divergence between feature distributions, the second is
the negative log-posterior of a multi-model ParticleNet classifier. | [
"Erik Buhmann",
"Cedric Ewen",
"Darius A. Faroughy",
"Tobias Golling",
"Gregor Kasieczka",
"Matthew Leigh",
"Guillaume Quétant",
"John Andrew Raine",
"Debajyoti Sengupta",
"David Shih"
] | 2023-09-29 18:00:03 | http://arxiv.org/abs/2310.00049v1 | http://arxiv.org/pdf/2310.00049v1 | 2310.00049v1 |
Machine Learning Clifford invariants of ADE Coxeter elements | There has been recent interest in novel Clifford geometric invariants of
linear transformations. This motivates the investigation of such invariants for
a certain type of geometric transformation of interest in the context of root
systems, reflection groups, Lie groups and Lie algebras: the Coxeter
transformations. We perform exhaustive calculations of all Coxeter
transformations for $A_8$, $D_8$ and $E_8$ for a choice of basis of simple
roots and compute their invariants, using high-performance computing. This
computational algebra paradigm generates a dataset that can then be mined using
techniques from data science such as supervised and unsupervised machine
learning. In this paper we focus on neural network classification and principal
component analysis. Since the output -- the invariants -- is fully determined
by the choice of simple roots and the permutation order of the corresponding
reflections in the Coxeter element, we expect huge degeneracy in the mapping.
This provides the perfect setup for machine learning, and indeed we see that
the datasets can be machine learned to very high accuracy. This paper is a
pump-priming study in experimental mathematics using Clifford algebras, showing
that such Clifford algebraic datasets are amenable to machine learning, and
shedding light on relationships between these novel and other well-known
geometric invariants and also giving rise to analytic results. | [
"Siqi Chen",
"Pierre-Philippe Dechant",
"Yang-Hui He",
"Elli Heyes",
"Edward Hirst",
"Dmitrii Riabchenko"
] | 2023-09-29 18:00:01 | http://arxiv.org/abs/2310.00041v1 | http://arxiv.org/pdf/2310.00041v1 | 2310.00041v1 |
L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models | Recently, large language models (LLMs), especially those that are pretrained
on code, have demonstrated strong capabilities in generating programs from
natural language inputs in a few-shot or even zero-shot manner. Despite
promising results, there is a notable lack of a comprehensive evaluation of
these models' language-to-code generation capabilities. Existing studies often
focus on specific tasks, model architectures, or learning paradigms, leading to
a fragmented understanding of the overall landscape. In this work, we present
L2CEval, a systematic evaluation of the language-to-code generation
capabilities of LLMs on 7 tasks across the domain spectrum of semantic parsing,
math reasoning and Python programming, analyzing the factors that potentially
affect their performance, such as model size, pretraining data, instruction
tuning, and different prompting methods. In addition to assessing model
performance, we measure confidence calibration for the models and conduct human
evaluations of the output programs. This enables us to identify and analyze the
typical failure modes across various tasks and models. L2CEval offers a
comprehensive understanding of the capabilities and limitations of LLMs in
language-to-code generation. We also release the evaluation framework and all
model outputs, hoping to lay the groundwork for further future research in this
domain. | [
"Ansong Ni",
"Pengcheng Yin",
"Yilun Zhao",
"Martin Riddell",
"Troy Feng",
"Rui Shen",
"Stephen Yin",
"Ye Liu",
"Semih Yavuz",
"Caiming Xiong",
"Shafiq Joty",
"Yingbo Zhou",
"Dragomir Radev",
"Arman Cohan"
] | 2023-09-29 17:57:00 | http://arxiv.org/abs/2309.17446v2 | http://arxiv.org/pdf/2309.17446v2 | 2309.17446v2 |
CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets | Large language models (LLMs) are often augmented with tools to solve complex
tasks. By generating code snippets and executing them through task-specific
Application Programming Interfaces (APIs), they can offload certain functions
to dedicated external modules, such as image encoding and performing
calculations. However, most existing approaches to augment LLMs with tools are
constrained by general-purpose APIs and lack the flexibility for tailoring them
to specific tasks. In this work, we present CRAFT, a general tool creation and
retrieval framework for LLMs. It creates toolsets specifically curated for the
tasks and equips LLMs with a component that retrieves tools from these sets to
enhance their capability to solve complex tasks. For each task, we collect
specific code solutions by prompting GPT-4 to solve the training examples.
Following a validation step that ensures correctness, these solutions are
abstracted into code snippets to enhance reusability, and deduplicated for
higher quality. At inference time, the language model retrieves snippets from
the toolsets and then executes them or generates the output conditioned on the
retrieved snippets. Our method is designed to be flexible and offers a
plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and
modalities, without any finetuning. Experiments on vision-language, tabular
processing, and mathematical reasoning tasks show that our approach achieves
substantial improvements compared to strong baselines. In addition, our
in-depth analysis reveals that: (1) consistent performance improvement can be
achieved by scaling up the number of tools and the capability of the backbone
models; (2) each component of our approach contributes to the performance
gains; (3) the created tools are well-structured and reliable with low
complexity and atomicity. The code is available at
\url{https://github.com/lifan-yuan/CRAFT}. | [
"Lifan Yuan",
"Yangyi Chen",
"Xingyao Wang",
"Yi R. Fung",
"Hao Peng",
"Heng Ji"
] | 2023-09-29 17:40:26 | http://arxiv.org/abs/2309.17428v1 | http://arxiv.org/pdf/2309.17428v1 | 2309.17428v1 |
Data Filtering Networks | Large training sets have become a cornerstone of machine learning and are the
foundation for recent advances in language modeling and multimodal learning.
While data curation for pre-training is often still ad-hoc, one common paradigm
is to first collect a massive pool of data from the Web and then filter this
candidate pool down to an actual training set via various heuristics. In this
work, we study the problem of learning a data filtering network (DFN) for this
second step of filtering a large uncurated dataset. Our key finding is that the
quality of a network for filtering is distinct from its performance on
downstream tasks: for instance, a model that performs well on ImageNet can
yield worse training sets than a model with low ImageNet accuracy that is
trained on a small amount of high-quality data. Based on our insights, we
construct new data filtering networks that induce state-of-the-art image-text
datasets. Specifically, our best performing dataset DFN-5B enables us to train
state-of-the-art models for their compute budgets: among other improvements on
a variety of tasks, a ViT-H trained on our dataset achieves 83.0% zero-shot
transfer accuracy on ImageNet, out-performing models trained on other datasets
such as LAION-2B, DataComp-1B, or OpenAI's WIT. In order to facilitate further
research in dataset design, we also release a new 2 billion example dataset
DFN-2B and show that high performance data filtering networks can be trained
from scratch using only publicly available data. | [
"Alex Fang",
"Albin Madappally Jose",
"Amit Jain",
"Ludwig Schmidt",
"Alexander Toshev",
"Vaishaal Shankar"
] | 2023-09-29 17:37:29 | http://arxiv.org/abs/2309.17425v2 | http://arxiv.org/pdf/2309.17425v2 | 2309.17425v2 |
Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction | Graph neural network (GNN) link prediction is increasingly deployed in
citation, collaboration, and online social networks to recommend academic
literature, collaborators, and friends. While prior research has investigated
the dyadic fairness of GNN link prediction, the within-group fairness and
``rich get richer'' dynamics of link prediction remain underexplored. However,
these aspects have significant consequences for degree and power imbalances in
networks. In this paper, we shed light on how degree bias in networks affects
Graph Convolutional Network (GCN) link prediction. In particular, we
theoretically uncover that GCNs with a symmetric normalized graph filter have a
within-group preferential attachment bias. We validate our theoretical analysis
on real-world citation, collaboration, and online social networks. We further
bridge GCN's preferential attachment bias with unfairness in link prediction
and propose a new within-group fairness metric. This metric quantifies
disparities in link prediction scores between social groups, towards combating
the amplification of degree and power disparities. Finally, we propose a simple
training-time strategy to alleviate within-group unfairness, and we show that
it is effective on citation, online social, and credit networks. | [
"Arjun Subramonian",
"Levent Sagun",
"Yizhou Sun"
] | 2023-09-29 17:26:44 | http://arxiv.org/abs/2309.17417v1 | http://arxiv.org/pdf/2309.17417v1 | 2309.17417v1 |
Cleanba: A Reproducible and Efficient Distributed Reinforcement Learning Platform | Distributed Deep Reinforcement Learning (DRL) aims to leverage more
computational resources to train autonomous agents with less training time.
Despite recent progress in the field, reproducibility issues have not been
sufficiently explored. This paper first shows that the typical actor-learner
framework can have reproducibility issues even if hyperparameters are
controlled. We then introduce Cleanba, a new open-source platform for
distributed DRL that proposes a highly reproducible architecture. Cleanba
implements highly optimized distributed variants of PPO and IMPALA. Our Atari
experiments show that these variants can obtain equivalent or higher scores
than strong IMPALA baselines in moolib and torchbeast and PPO baseline in
CleanRL. However, Cleanba variants present 1) shorter training time and 2) more
reproducible learning curves in different hardware settings. Cleanba's source
code is available at \url{https://github.com/vwxyzjn/cleanba} | [
"Shengyi Huang",
"Jiayi Weng",
"Rujikorn Charakorn",
"Min Lin",
"Zhongwen Xu",
"Santiago Ontañón"
] | 2023-09-29 17:20:07 | http://arxiv.org/abs/2310.00036v1 | http://arxiv.org/pdf/2310.00036v1 | 2310.00036v1 |
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks | Pretrained language models sometimes possess knowledge that we do not wish
them to, including memorized personal information and knowledge that could be
used to harm people. They can also output toxic or harmful text. To mitigate
these safety and informational issues, we propose an attack-and-defense
framework for studying the task of deleting sensitive information directly from
model weights. We study direct edits to model weights because (1) this approach
should guarantee that particular deleted information is never extracted by
future prompt attacks, and (2) it should protect against whitebox attacks,
which is necessary for making claims about safety/privacy in a setting where
publicly available model weights could be used to elicit sensitive information.
Our threat model assumes that an attack succeeds if the answer to a sensitive
question is located among a set of B generated candidates, based on scenarios
where the information would be insecure if the answer is among B candidates.
Experimentally, we show that even state-of-the-art model editing methods such
as ROME struggle to truly delete factual information from models like GPT-J, as
our whitebox and blackbox attacks can recover "deleted" information from an
edited model 38% of the time. These attacks leverage two key observations: (1)
that traces of deleted information can be found in intermediate model hidden
states, and (2) that applying an editing method for one question may not delete
information across rephrased versions of the question. Finally, we provide new
defense methods that protect against some extraction attacks, but we do not
find a single universally effective defense method. Our results suggest that
truly deleting sensitive information is a tractable but difficult problem,
since even relatively low attack success rates have potentially severe societal
implications for real-world deployment of language models. | [
"Vaidehi Patil",
"Peter Hase",
"Mohit Bansal"
] | 2023-09-29 17:12:43 | http://arxiv.org/abs/2309.17410v1 | http://arxiv.org/pdf/2309.17410v1 | 2309.17410v1 |
Maximal Volume Matrix Cross Approximation for Image Compression and Least Squares Solution | We study the classic cross approximation of matrices based on the maximal
volume submatrices. Our main results consist of an improvement of a classic
estimate for matrix cross approximation and a greedy approach for finding the
maximal volume submatrices. Indeed, we present a new proof of a classic
estimate of the inequality with an improved constant. Also, we present a family
of greedy maximal volume algorithms which improve the error bound of cross
approximation of a matrix in the Chebyshev norm and also improve the
computational efficiency of classic maximal volume algorithm. The proposed
algorithms are shown to have theoretical guarantees of convergence. Finally, we
present two applications: one is image compression and the other is least
squares approximation of continuous functions. Our numerical results at the end
of the paper demonstrate the effective performance of our approach. | [
"Kenneth Allen",
"Ming-Jun Lai",
"Zhaiming Shen"
] | 2023-09-29 17:04:06 | http://arxiv.org/abs/2309.17403v1 | http://arxiv.org/pdf/2309.17403v1 | 2309.17403v1 |
Adversarial Machine Learning in Latent Representations of Neural Networks | Distributed deep neural networks (DNNs) have been shown to reduce the
computational burden of mobile devices and decrease the end-to-end inference
latency in edge computing scenarios. While distributed DNNs have been studied,
to the best of our knowledge the resilience of distributed DNNs to adversarial
action still remains an open problem. In this paper, we fill the existing
research gap by rigorously analyzing the robustness of distributed DNNs against
adversarial action. We cast this problem in the context of information theory
and introduce two new measurements for distortion and robustness. Our
theoretical findings indicate that (i) assuming the same level of information
distortion, latent features are always more robust than input representations;
(ii) the adversarial robustness is jointly determined by the feature dimension
and the generalization capability of the DNN. To test our theoretical findings,
we perform extensive experimental analysis by considering 6 different DNN
architectures, 6 different approaches for distributed DNN and 10 different
adversarial attacks to the ImageNet-1K dataset. Our experimental results
support our theoretical findings by showing that the compressed latent
representations can reduce the success rate of adversarial attacks by 88% in
the best case and by 57% on the average compared to attacks to the input space. | [
"Milin Zhang",
"Mohammad Abdi",
"Francesco Restuccia"
] | 2023-09-29 17:01:29 | http://arxiv.org/abs/2309.17401v1 | http://arxiv.org/pdf/2309.17401v1 | 2309.17401v1 |
Directly Fine-Tuning Diffusion Models on Differentiable Rewards | We present Direct Reward Fine-Tuning (DRaFT), a simple and effective method
for fine-tuning diffusion models to maximize differentiable reward functions,
such as scores from human preference models. We first show that it is possible
to backpropagate the reward function gradient through the full sampling
procedure, and that doing so achieves strong performance on a variety of
rewards, outperforming reinforcement learning-based approaches. We then propose
more efficient variants of DRaFT: DRaFT-K, which truncates backpropagation to
only the last K steps of sampling, and DRaFT-LV, which obtains lower-variance
gradient estimates for the case when K=1. We show that our methods work well
for a variety of reward functions and can be used to substantially improve the
aesthetic quality of images generated by Stable Diffusion 1.4. Finally, we draw
connections between our approach and prior work, providing a unifying
perspective on the design space of gradient-based fine-tuning algorithms. | [
"Kevin Clark",
"Paul Vicol",
"Kevin Swersky",
"David J Fleet"
] | 2023-09-29 17:01:02 | http://arxiv.org/abs/2309.17400v1 | http://arxiv.org/pdf/2309.17400v1 | 2309.17400v1 |
AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition | Audio-visual speech contains synchronized audio and visual information that
provides cross-modal supervision to learn representations for both automatic
speech recognition (ASR) and visual speech recognition (VSR). We introduce
continuous pseudo-labeling for audio-visual speech recognition (AV-CPL), a
semi-supervised method to train an audio-visual speech recognition (AVSR) model
on a combination of labeled and unlabeled videos with continuously regenerated
pseudo-labels. Our models are trained for speech recognition from audio-visual
inputs and can perform speech recognition using both audio and visual
modalities, or only one modality. Our method uses the same audio-visual model
for both supervised training and pseudo-label generation, mitigating the need
for external speech recognition models to generate pseudo-labels. AV-CPL
obtains significant improvements in VSR performance on the LRS3 dataset while
maintaining practical ASR and AVSR performance. Finally, using visual-only
speech data, our method is able to leverage unlabeled visual speech to improve
VSR. | [
"Andrew Rouditchenko",
"Ronan Collobert",
"Tatiana Likhomanenko"
] | 2023-09-29 16:57:21 | http://arxiv.org/abs/2309.17395v1 | http://arxiv.org/pdf/2309.17395v1 | 2309.17395v1 |
Tree Cross Attention | Cross Attention is a popular method for retrieving information from a set of
context tokens for making predictions. At inference time, for each prediction,
Cross Attention scans the full set of $\mathcal{O}(N)$ tokens. In practice,
however, often only a small subset of tokens are required for good performance.
Methods such as Perceiver IO are cheap at inference as they distill the
information to a smaller-sized set of latent tokens $L < N$ on which cross
attention is then applied, resulting in only $\mathcal{O}(L)$ complexity.
However, in practice, as the number of input tokens and the amount of
information to distill increases, the number of latent tokens needed also
increases significantly. In this work, we propose Tree Cross Attention (TCA) -
a module based on Cross Attention that only retrieves information from a
logarithmic $\mathcal{O}(\log(N))$ number of tokens for performing inference.
TCA organizes the data in a tree structure and performs a tree search at
inference time to retrieve the relevant tokens for prediction. Leveraging TCA,
we introduce ReTreever, a flexible architecture for token-efficient inference.
We show empirically that Tree Cross Attention (TCA) performs comparable to
Cross Attention across various classification and uncertainty regression tasks
while being significantly more token-efficient. Furthermore, we compare
ReTreever against Perceiver IO, showing significant gains while using the same
number of tokens for inference. | [
"Leo Feng",
"Frederick Tung",
"Hossein Hajimirsadeghi",
"Yoshua Bengio",
"Mohamed Osama Ahmed"
] | 2023-09-29 16:50:23 | http://arxiv.org/abs/2309.17388v1 | http://arxiv.org/pdf/2309.17388v1 | 2309.17388v1 |
Parallel Computation of Multi-Slice Clustering of Third-Order Tensors | Machine Learning approaches like clustering methods deal with massive
datasets that present an increasing challenge. We devise parallel algorithms to
compute the Multi-Slice Clustering (MSC) for 3rd-order tensors. The MSC method
is based on spectral analysis of the tensor slices and works independently on
each tensor mode. Such features fit well in the parallel paradigm via a
distributed memory system. We show that our parallel scheme outperforms
sequential computing and allows for the scalability of the MSC method. | [
"Dina Faneva Andriantsiory",
"Camille Coti",
"Joseph Ben Geloun",
"Mustapha Lebbah"
] | 2023-09-29 16:38:51 | http://arxiv.org/abs/2309.17383v1 | http://arxiv.org/pdf/2309.17383v1 | 2309.17383v1 |
LoRA ensembles for large language model fine-tuning | Finetuned LLMs often exhibit poor uncertainty quantification, manifesting as
overconfidence, poor calibration, and unreliable prediction results on test
data or out-of-distribution samples. One approach commonly used in vision for
alleviating this issue is a deep ensemble, which constructs an ensemble by
training the same model multiple times using different random initializations.
However, there is a huge challenge to ensembling LLMs: the most effective LLMs
are very, very large. Keeping a single LLM in memory is already challenging
enough: keeping an ensemble of e.g. 5 LLMs in memory is impossible in many
settings. To address these issues, we propose an ensemble approach using
Low-Rank Adapters (LoRA), a parameter-efficient fine-tuning technique.
Critically, these low-rank adapters represent a very small number of
parameters, orders of magnitude less than the underlying pre-trained model.
Thus, it is possible to construct large ensembles of LoRA adapters with almost
the same computational overhead as using the original model. We find that LoRA
ensembles, applied on their own or on top of pre-existing regularization
techniques, give consistent improvements in predictive accuracy and
uncertainty quantification. | [
"Xi Wang",
"Laurence Aitchison",
"Maja Rudolph"
] | 2023-09-29 16:38:38 | http://arxiv.org/abs/2310.00035v2 | http://arxiv.org/pdf/2310.00035v2 | 2310.00035v2 |
Reason for Future, Act for Now: A Principled Framework for Autonomous LLM Agents with Provable Sample Efficiency | Large language models (LLMs) demonstrate impressive reasoning abilities, but
translating reasoning into actions in the real world remains challenging. In
particular, it remains unclear how to complete a given task provably within a
minimum number of interactions with the external environment, e.g., through an
internal mechanism of reasoning. To this end, we propose a principled framework
with provable regret guarantees to orchestrate reasoning and acting, which we
call "reason for future, act for now" (\texttt{RAFA}). Specifically, we design
a prompt template for reasoning that learns from the memory buffer and plans a
future trajectory over a long horizon ("reason for future"). At each step, the
LLM agent takes the initial action of the planned trajectory ("act for now"),
stores the collected feedback in the memory buffer, and reinvokes the reasoning
routine to replan the future trajectory from the new state.
The key idea is to cast reasoning in LLMs as learning and planning in
Bayesian adaptive Markov decision processes (MDPs). Correspondingly, we prompt
LLMs to form an updated posterior of the unknown environment from the memory
buffer (learning) and generate an optimal trajectory for multiple future steps
that maximizes a value function (planning). The learning and planning
subroutines are performed in an "in-context" manner to emulate the actor-critic
update for MDPs. Our theoretical analysis proves that the novel combination of
long-term reasoning and short-term acting achieves a $\sqrt{T}$ regret. In
particular, the regret bound highlights an intriguing interplay between the
prior knowledge obtained through pretraining and the uncertainty reduction
achieved by reasoning and acting. Our empirical validation shows that it
outperforms various existing frameworks and achieves nearly perfect scores on a
few benchmarks. | [
"Zhihan Liu",
"Hao Hu",
"Shenao Zhang",
"Hongyi Guo",
"Shuqi Ke",
"Boyi Liu",
"Zhaoran Wang"
] | 2023-09-29 16:36:39 | http://arxiv.org/abs/2309.17382v2 | http://arxiv.org/pdf/2309.17382v2 | 2309.17382v2 |
Revolutionizing Mobile Interaction: Enabling a 3 Billion Parameter GPT LLM on Mobile | The field of Artificial Intelligence has witnessed remarkable progress in
recent years, especially with the emergence of powerful large language models
(LLMs) based on the transformer architecture. Cloud-based LLMs, such as
OpenAI's ChatGPT, offer impressive capabilities but come with concerns
regarding latency and privacy due to network dependencies. This article
presents an innovative approach to LLM inference, envisioning a future where
LLMs with billions of parameters can be executed directly on mobile devices
without network connectivity. The article showcases a fine-tuned GPT LLM with 3
billion parameters that can operate smoothly on devices with as low as 4GB of
memory. Through the integration of native code and model quantization
techniques, the application not only serves as a general-purpose assistant but
also facilitates seamless mobile interactions with text-to-actions features.
The article provides insights into the training pipeline, implementation
details, test results, and future directions of on-device LLM inference. This
breakthrough technology opens up possibilities for empowering users with
sophisticated AI capabilities while preserving their privacy and eliminating
latency concerns. | [
"Samuel Carreira",
"Tomás Marques",
"José Ribeiro",
"Carlos Grilo"
] | 2023-09-29 16:30:49 | http://arxiv.org/abs/2310.01434v1 | http://arxiv.org/pdf/2310.01434v1 | 2310.01434v1 |
Adversarial Imitation Learning from Visual Observations using Latent Information | We focus on the problem of imitation learning from visual observations, where
the learning agent has access to videos of experts as its sole learning source.
The challenges of this framework include the absence of expert actions and the
partial observability of the environment, as the ground-truth states can only
be inferred from pixels. To tackle this problem, we first conduct a theoretical
analysis of imitation learning in partially observable environments. We
establish upper bounds on the suboptimality of the learning agent with respect
to the divergence between the expert and the agent latent state-transition
distributions. Motivated by this analysis, we introduce an algorithm called
Latent Adversarial Imitation from Observations, which combines off-policy
adversarial imitation techniques with a learned latent representation of the
agent's state from sequences of observations. In experiments on
high-dimensional continuous robotic tasks, we show that our algorithm matches
state-of-the-art performance while providing significant computational
advantages. Additionally, we show how our method can be used to improve the
efficiency of reinforcement learning from pixels by leveraging expert videos.
To ensure reproducibility, we provide free access to our code. | [
"Vittorio Giammarino",
"James Queeney",
"Ioannis Ch. Paschalidis"
] | 2023-09-29 16:20:36 | http://arxiv.org/abs/2309.17371v1 | http://arxiv.org/pdf/2309.17371v1 | 2309.17371v1 |
Graph-based Neural Weather Prediction for Limited Area Modeling | The rise of accurate machine learning methods for weather forecasting is
creating radical new possibilities for modeling the atmosphere. In the time of
climate change, having access to high-resolution forecasts from models like
these is also becoming increasingly vital. While most existing Neural Weather
Prediction (NeurWP) methods focus on global forecasting, an important question
is how these techniques can be applied to limited area modeling. In this work
we adapt the graph-based NeurWP approach to the limited area setting and
propose a multi-scale hierarchical model extension. Our approach is validated
by experiments with a local model for the Nordic region. | [
"Joel Oskarsson",
"Tomas Landelius",
"Fredrik Lindsten"
] | 2023-09-29 16:20:34 | http://arxiv.org/abs/2309.17370v1 | http://arxiv.org/pdf/2309.17370v1 | 2309.17370v1 |
Module-wise Training of Neural Networks via the Minimizing Movement Scheme | Greedy layer-wise or module-wise training of neural networks is compelling in
constrained and on-device settings where memory is limited, as it circumvents a
number of problems of end-to-end back-propagation. However, it suffers from a
stagnation problem, whereby early layers overfit and deeper layers stop
increasing the test accuracy after a certain depth. We propose to solve this
issue by introducing a module-wise regularization inspired by the minimizing
movement scheme for gradient flows in distribution space. We call the method
TRGL for Transport Regularized Greedy Learning and study it theoretically,
proving that it leads to greedy modules that are regular and that progressively
solve the task. Experimentally, we show improved accuracy of module-wise
training of various architectures such as ResNets, Transformers and VGG, when
our regularization is added, superior to that of other module-wise training
methods and often to end-to-end training, with as much as 60% less memory
usage. | [
"Skander Karkar",
"Ibrahim Ayed",
"Emmanuel de Bézenac",
"Patrick Gallinari"
] | 2023-09-29 16:03:25 | http://arxiv.org/abs/2309.17357v3 | http://arxiv.org/pdf/2309.17357v3 | 2309.17357v3 |
Efficient Biologically Plausible Adversarial Training | Artificial Neural Networks (ANNs) trained with Backpropagation (BP) show
astounding performance and are increasingly often used in performing our daily
life tasks. However, ANNs are highly vulnerable to adversarial attacks, which
alter inputs with small targeted perturbations that drastically disrupt the
models' performance. The most effective method to make ANNs robust against
these attacks is adversarial training, in which the training dataset is
augmented with exemplary adversarial samples. Unfortunately, this approach has
the drawback of increased training complexity since generating adversarial
samples is very computationally demanding. In contrast to ANNs, humans are not
susceptible to adversarial attacks. Therefore, in this work, we investigate
whether biologically-plausible learning algorithms are more robust against
adversarial attacks than BP. In particular, we present an extensive comparative
analysis of the adversarial robustness of BP and Present the Error to Perturb
the Input To modulate Activity (PEPITA), a recently proposed
biologically-plausible learning algorithm, on various computer vision tasks. We
observe that PEPITA has higher intrinsic adversarial robustness and, with
adversarial training, has a more favourable natural-vs-adversarial performance
trade-off as, for the same natural accuracies, PEPITA's adversarial accuracies
decrease on average by 0.26% and BP's by 8.05%. | [
"Matilde Tristany Farinha",
"Thomas Ortner",
"Giorgia Dellaferrera",
"Benjamin Grewe",
"Angeliki Pantazi"
] | 2023-09-29 15:55:17 | http://arxiv.org/abs/2309.17348v3 | http://arxiv.org/pdf/2309.17348v3 | 2309.17348v3 |
Towards Free Data Selection with General-Purpose Models | A desirable data selection algorithm can efficiently choose the most
informative samples to maximize the utility of limited annotation budgets.
However, current approaches, represented by active learning methods, typically
follow a cumbersome pipeline that iterates the time-consuming model training
and batch data selection repeatedly. In this paper, we challenge this status
quo by designing a distinct data selection pipeline that utilizes existing
general-purpose models to select data from various datasets with a single-pass
inference without the need for additional training or supervision. A novel free
data selection (FreeSel) method is proposed following this new pipeline.
Specifically, we define semantic patterns extracted from intermediate features
of the general-purpose model to capture subtle local information in each image.
We then enable the selection of all data samples in a single pass through
distance-based sampling at the fine-grained semantic pattern level. FreeSel
bypasses the heavy batch selection process, achieving a significant improvement
in efficiency and being 530x faster than existing active learning methods.
Extensive experiments verify the effectiveness of FreeSel on various computer
vision tasks. Our code is available at https://github.com/yichen928/FreeSel. | [
"Yichen Xie",
"Mingyu Ding",
"Masayoshi Tomizuka",
"Wei Zhan"
] | 2023-09-29 15:50:14 | http://arxiv.org/abs/2309.17342v2 | http://arxiv.org/pdf/2309.17342v2 | 2309.17342v2 |
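The distance-based selection step described above can be sketched as farthest-point sampling over per-image descriptors. This is a simplified illustration, assuming `features` holds one semantic-pattern descriptor per image from a single forward pass; FreeSel's actual fine-grained, pattern-level sampling is more involved.

```python
import numpy as np

def farthest_point_selection(features: np.ndarray, budget: int) -> list:
    """Pick `budget` mutually distant samples in one pass over precomputed features."""
    n = features.shape[0]
    selected = [int(np.random.randint(n))]  # arbitrary seed sample
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        idx = int(dists.argmax())  # farthest from everything chosen so far
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(features - features[idx], axis=1))
    return selected
```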
MixQuant: Mixed Precision Quantization with a Bit-width Optimization Search | Quantization is a technique for creating efficient Deep Neural Networks
(DNNs), which involves performing computations and storing tensors at lower
bit-widths than f32 floating point precision. Quantization reduces model size
and inference latency, and therefore allows for DNNs to be deployed on
platforms with constrained computational resources and real-time systems.
However, quantization can lead to numerical instability caused by roundoff
error, which leads to inaccurate computations and, therefore, a decrease in
quantized model accuracy. Similarly to prior works, which have shown that both
biases and activations are more sensitive to quantization and are best kept in
full precision or quantized with higher bit-widths, we show that some weights
are more sensitive than others, which should be reflected in their quantization
bit-width. To that end, we propose MixQuant, a search algorithm that finds the
optimal custom quantization bit-width for each layer weight based on roundoff
error and can be combined with any quantization method as a form of
pre-processing optimization. We show that combining MixQuant with BRECQ, a
state-of-the-art quantization method, yields better quantized model accuracy
than BRECQ alone. Additionally, we combine MixQuant with vanilla asymmetric
quantization to show that MixQuant has the potential to optimize the
performance of any quantization technique. | [
"Eliska Kloberdanz",
"Wei Le"
] | 2023-09-29 15:49:54 | http://arxiv.org/abs/2309.17341v1 | http://arxiv.org/pdf/2309.17341v1 | 2309.17341v1 |
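A minimal sketch of the per-layer bit-width search idea: for each weight tensor, pick the smallest candidate bit-width whose roundoff error stays under a tolerance. The symmetric uniform quantizer, the MSE criterion, and the tolerance are assumptions for illustration; MixQuant's actual search may differ.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization to `bits` bits (illustrative)."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(np.abs(w).max() / qmax, 1e-12)
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def choose_bitwidths(layer_weights: dict, candidates=(2, 4, 8), tol=1e-4) -> dict:
    """Assign each layer the smallest bit-width meeting a roundoff-error budget."""
    chosen = {}
    for name, w in layer_weights.items():
        for bits in candidates:
            if np.mean((w - quantize(w, bits)) ** 2) <= tol:  # roundoff error (MSE)
                chosen[name] = bits
                break
        else:  # no candidate met the budget: fall back to the widest option
            chosen[name] = max(candidates)
    return chosen
```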
Outage-Watch: Early Prediction of Outages using Extreme Event Regularizer | Cloud services are omnipresent and critical cloud service failure is a fact
of life. In order to retain customers and prevent revenue loss, it is important
to provide high reliability guarantees for these services. One way to do this
is by predicting outages in advance, which can help in reducing the severity as
well as time to recovery. It is difficult to forecast critical failures due to
the rarity of these events. Moreover, critical failures are ill-defined in
terms of observable data. Our proposed method, Outage-Watch, defines critical
service outages as deteriorations in the Quality of Service (QoS) captured by a
set of metrics. Outage-Watch detects such outages in advance by using current
system state to predict whether the QoS metrics will cross a threshold and
initiate an extreme event. A mixture of Gaussians is used to model the
distribution of the QoS metrics for flexibility, and an extreme-event
regularizer helps improve learning in the tail of the distribution. An outage
is predicted if the probability of any one of the QoS metrics crossing
threshold changes significantly. Our evaluation on a real-world SaaS company
dataset shows that Outage-Watch significantly outperforms traditional methods
with an average AUC of 0.98. Additionally, Outage-Watch detects all the outages
exhibiting a change in service metrics and reduces the Mean Time To Detection
(MTTD) of outages by up to 88% when deployed in an enterprise cloud-service
system, demonstrating the efficacy of our proposed method. | [
"Shubham Agarwal",
"Sarthak Chakraborty",
"Shaddy Garg",
"Sumit Bisht",
"Chahat Jain",
"Ashritha Gonuguntla",
"Shiv Saini"
] | 2023-09-29 15:48:40 | http://arxiv.org/abs/2309.17340v1 | http://arxiv.org/pdf/2309.17340v1 | 2309.17340v1 |
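The tail-probability signal described above can be sketched with a Gaussian mixture: the probability that a QoS metric exceeds a threshold is a weighted sum of component tail probabilities. The mixture parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Illustrative mixture for one QoS metric (e.g. latency in ms)
weights = np.array([0.70, 0.25, 0.05])
means = np.array([100.0, 180.0, 400.0])
stds = np.array([15.0, 40.0, 120.0])

def exceedance_probability(threshold: float) -> float:
    """P(metric > threshold) under the mixture: weighted sum of Gaussian tails."""
    return float(np.sum(weights * norm.sf(threshold, loc=means, scale=stds)))

# An outage would be flagged when this probability shifts significantly, e.g.:
print(exceedance_probability(500.0))
```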
Scaling Experiments in Self-Supervised Cross-Table Representation Learning | To analyze the scaling potential of deep tabular representation learning
models, we introduce a novel Transformer-based architecture specifically
tailored to tabular data and cross-table representation learning by utilizing
table-specific tokenizers and a shared Transformer backbone. Our training
approach encompasses both single-table and cross-table models, trained via
missing value imputation through a self-supervised masked cell recovery
objective. To understand the scaling behavior of our method, we train models of
varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These
models are trained on a carefully curated pretraining dataset, consisting of
135M training tokens sourced from 76 diverse datasets. We assess the scaling of
our architecture in both single-table and cross-table pretraining setups by
evaluating the pretrained models using linear probing on a curated set of
benchmark datasets and comparing the results with conventional baselines. | [
"Maximilian Schambach",
"Dominique Paul",
"Johannes S. Otterbach"
] | 2023-09-29 15:48:38 | http://arxiv.org/abs/2309.17339v1 | http://arxiv.org/pdf/2309.17339v1 | 2309.17339v1 |
Improving Trajectory Prediction in Dynamic Multi-Agent Environment by Dropping Waypoints | The inherently diverse and uncertain nature of trajectories presents a
formidable challenge in accurately modeling them. Motion prediction systems
must effectively learn spatial and temporal information from the past to
forecast the future trajectories of the agent. Many existing methods learn
temporal motion via separate components within stacked models to capture
temporal features. This paper introduces a novel framework, called Temporal
Waypoint Dropping (TWD), that promotes explicit temporal learning through the
waypoint dropping technique. Learning through waypoint dropping can compel the
model to improve its understanding of temporal correlations among agents, thus
leading to a significant enhancement in trajectory prediction. Trajectory
prediction methods often operate under the assumption that observed trajectory
waypoint sequences are complete, disregarding real-world scenarios where
missing values may occur, which can influence their performance. Moreover,
these models frequently exhibit a bias towards particular waypoint sequences
when making predictions. Our TWD is capable of effectively addressing these
issues. It incorporates stochastic and fixed processes that regularize
projected past trajectories by strategically dropping waypoints based on
temporal sequences. Through extensive experiments, we demonstrate the
effectiveness of TWD in forcing the model to learn complex temporal
correlations among agents. Our approach can complement existing trajectory
prediction methods to enhance prediction accuracy. We also evaluate our
proposed method across three datasets: NBA Sports VU, ETH-UCY, and TrajNet++. | [
"Pranav Singh Chib",
"Pravendra Singh"
] | 2023-09-29 15:48:35 | http://arxiv.org/abs/2309.17338v1 | http://arxiv.org/pdf/2309.17338v1 | 2309.17338v1 |
Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools | While algorithmic fairness is a thriving area of research, in practice,
mitigating issues of bias often gets reduced to enforcing an arbitrarily chosen
fairness metric, either by enforcing fairness constraints during the
optimization step, post-processing model outputs, or by manipulating the
training data. Recent work has called on the ML community to take a more
holistic approach to tackle fairness issues by systematically investigating the
many design choices made through the ML pipeline, and identifying interventions
that target the issue's root cause, as opposed to its symptoms. While we share
the conviction that this pipeline-based approach is the most appropriate for
combating algorithmic unfairness on the ground, we believe there are currently
very few methods of \emph{operationalizing} this approach in practice. Drawing
on our experience as educators and practitioners, we first demonstrate that
without clear guidelines and toolkits, even individuals with specialized ML
knowledge find it challenging to hypothesize how various design choices
influence model behavior. We then consult the fair-ML literature to understand
the progress to date toward operationalizing the pipeline-aware approach: we
systematically collect and organize the prior work that attempts to detect,
measure, and mitigate various sources of unfairness through the ML pipeline. We
utilize this extensive categorization of previous contributions to sketch a
research agenda for the community. We hope this work serves as the stepping
stone toward a more comprehensive set of resources for ML researchers,
practitioners, and students interested in exploring, designing, and testing
pipeline-oriented approaches to algorithmic fairness. | [
"Emily Black",
"Rakshit Naidu",
"Rayid Ghani",
"Kit T. Rodolfa",
"Daniel E. Ho",
"Hoda Heidari"
] | 2023-09-29 15:48:26 | http://arxiv.org/abs/2309.17337v1 | http://arxiv.org/pdf/2309.17337v1 | 2309.17337v1 |
Asynchronous Graph Generators | We introduce the asynchronous graph generator (AGG), a novel graph neural
network architecture for multi-channel time series which models observations as
nodes on a dynamic graph and can thus perform data imputation by transductive
node generation. Completely free from recurrent components or assumptions about
temporal regularity, AGG represents measurements, timestamps and metadata
directly in the nodes via learnable embeddings, to then leverage attention to
learn expressive relationships across the variables of interest. This way, the
proposed architecture implicitly learns a causal graph representation of sensor
measurements which can be conditioned on unseen timestamps and metadata to
predict new measurements by an expansion of the learnt graph. The proposed AGG
is compared both conceptually and empirically to previous work, and the impact
of data augmentation on the performance of AGG is also briefly discussed. Our
experiments reveal that AGG achieved state-of-the-art results in time series
data imputation, classification and prediction for the benchmark datasets
Beijing Air Quality, PhysioNet Challenge 2012 and UCI localisation. | [
"Christopher P. Ley",
"Felipe Tobar"
] | 2023-09-29 15:46:41 | http://arxiv.org/abs/2309.17335v1 | http://arxiv.org/pdf/2309.17335v1 | 2309.17335v1 |
Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit Point-Graph Networks | Pulmonary diseases rank prominently among the principal causes of death
worldwide. Curing them will require, among other things, a better understanding
of the many complex 3D tree-shaped structures within the pulmonary system, such
as airways, arteries, and veins. In theory, they can be modeled using
high-resolution image stacks. Unfortunately, standard CNN approaches operating
on dense voxel grids are prohibitively expensive. To remedy this, we introduce
a point-based approach that preserves the graph connectivity of the tree skeleton and
incorporates an implicit surface representation. It delivers SOTA accuracy at a
low computational cost and the resulting models have usable surfaces. Due to
the scarcity of publicly accessible data, we have also curated an extensive
dataset to evaluate our approach and will make it public. | [
"Kangxian Xie",
"Jiancheng Yang",
"Donglai Wei",
"Ziqiao Weng",
"Pascal Fua"
] | 2023-09-29 15:40:58 | http://arxiv.org/abs/2309.17329v2 | http://arxiv.org/pdf/2309.17329v2 | 2309.17329v2 |
Robust Stochastic Optimization via Gradient Quantile Clipping | We introduce a clipping strategy for Stochastic Gradient Descent (SGD) which
uses quantiles of the gradient norm as clipping thresholds. We prove that this
new strategy provides a robust and efficient optimization algorithm for smooth
objectives (convex or non-convex), that tolerates heavy-tailed samples
(including infinite variance) and a fraction of outliers in the data stream
akin to Huber contamination. Our mathematical analysis leverages the connection
between constant step size SGD and Markov chains and handles the bias
introduced by clipping in an original way. For strongly convex objectives, we
prove that the iteration converges to a concentrated distribution and derive
high probability bounds on the final estimation error. In the non-convex case,
we prove that the limit distribution is localized on a neighborhood with low
gradient. We propose an implementation of this algorithm using rolling
quantiles which leads to a highly efficient optimization procedure with strong
robustness properties, as confirmed by our numerical experiments. | [
"Ibrahim Merad",
"Stéphane Gaïffas"
] | 2023-09-29 15:24:48 | http://arxiv.org/abs/2309.17316v1 | http://arxiv.org/pdf/2309.17316v1 | 2309.17316v1 |
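To make the clipping rule above concrete, here is a minimal sketch of SGD where the clipping threshold is a rolling quantile of recent gradient norms. Window size, quantile level, and the exact update form are assumptions for illustration.

```python
import collections
import numpy as np
import torch

class QuantileClippedSGD:
    """SGD whose clipping threshold is a rolling quantile of gradient norms (sketch)."""

    def __init__(self, params, lr=0.01, q=0.9, window=256):
        self.params, self.lr, self.q = list(params), lr, q
        self.norms = collections.deque(maxlen=window)  # recent gradient norms

    @torch.no_grad()
    def step(self):
        g_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in self.params)).item()
        self.norms.append(g_norm)
        tau = float(np.quantile(np.asarray(self.norms), self.q))  # quantile threshold
        clip = min(1.0, tau / (g_norm + 1e-12))  # shrink heavy-tailed steps
        for p in self.params:
            p.add_(p.grad, alpha=-self.lr * clip)
```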
Leave-one-out Distinguishability in Machine Learning | We introduce a new analytical framework to quantify the changes in a machine
learning algorithm's output distribution following the inclusion of a few data
points in its training set, a notion we define as leave-one-out
distinguishability (LOOD). This problem is key to measuring data
**memorization** and **information leakage** in machine learning, and the
**influence** of training data points on model predictions. We illustrate how
our method broadens and refines existing empirical measures of memorization and
privacy risks associated with training data. We use Gaussian processes to model
the randomness of machine learning algorithms, and validate LOOD with extensive
empirical analysis of information leakage using membership inference attacks.
Our theoretical framework enables us to investigate the causes of information
leakage and where the leakage is high. For example, we analyze the influence of
activation functions on data memorization. Additionally, our method allows us
to optimize queries that disclose the most significant information about the
training data in the leave-one-out setting. We illustrate how optimal queries
can be used for accurate **reconstruction** of training data. | [
"Jiayuan Ye",
"Anastasia Borovykh",
"Soufiane Hayou",
"Reza Shokri"
] | 2023-09-29 15:08:28 | http://arxiv.org/abs/2309.17310v1 | http://arxiv.org/pdf/2309.17310v1 | 2309.17310v1 |
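An empirical (rather than analytical) sketch of the leave-one-out comparison described above: train models with and without a target point across seeds and compare the output distributions at a query. `train_model` is hypothetical, and the Gaussian-fit KL below is a stand-in; the paper computes LOOD analytically with Gaussian processes.

```python
import numpy as np

def empirical_lood(train_model, dataset, target_idx, query, n_seeds=20):
    """Distinguishability of output distributions with vs. without one point (sketch)."""
    reduced = dataset[:target_idx] + dataset[target_idx + 1:]
    with_pt = [train_model(dataset, seed=s)(query) for s in range(n_seeds)]
    without_pt = [train_model(reduced, seed=s)(query) for s in range(n_seeds)]
    mu1, mu0 = np.mean(with_pt), np.mean(without_pt)
    s1, s0 = np.std(with_pt) + 1e-12, np.std(without_pt) + 1e-12
    # KL divergence between Gaussian fits of the two output distributions
    return np.log(s0 / s1) + (s1 ** 2 + (mu1 - mu0) ** 2) / (2 * s0 ** 2) - 0.5
```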
Navigating the Design Space of Equivariant Diffusion-Based Generative Models for De Novo 3D Molecule Generation | Deep generative diffusion models are a promising avenue for de novo 3D
molecular design in material science and drug discovery. However, their utility
is still constrained by suboptimal performance with large molecular structures
and limited training data. Addressing this gap, we explore the design space of
E(3) equivariant diffusion models, focusing on previously blank spots. Our
extensive comparative analysis evaluates the interplay between continuous and
discrete state spaces. Out of this investigation, we introduce the EQGAT-diff
model, which consistently surpasses the performance of established models on
the QM9 and GEOM-Drugs datasets by a large margin. Distinctively, EQGAT-diff
takes continuous atomic positions while chemical elements and bond types are
categorical, and employs a time-dependent loss weighting that significantly
increases training convergence and the quality of generated samples. To further
strengthen the applicability of diffusion models to limited training data, we
examine the transferability of EQGAT-diff trained on the large PubChem3D
dataset with implicit hydrogens to target distributions with explicit
hydrogens. Fine-tuning EQGAT-diff for a couple of iterations further pushes
state-of-the-art performance across datasets. We envision that our findings
will find applications in structure-based drug design, where the accuracy of
generative models for small datasets of complex molecules is critical. | [
"Tuan Le",
"Julian Cremer",
"Frank Noé",
"Djork-Arné Clevert",
"Kristof Schütt"
] | 2023-09-29 14:53:05 | http://arxiv.org/abs/2309.17296v1 | http://arxiv.org/pdf/2309.17296v1 | 2309.17296v1 |
Deep learning soliton dynamics and complex potentials recognition for 1D and 2D PT-symmetric saturable nonlinear Schrödinger equations | In this paper, we firstly extend the physics-informed neural networks (PINNs)
to learn data-driven stationary and non-stationary solitons of 1D and 2D
saturable nonlinear Schr\"odinger equations (SNLSEs) with two fundamental
PT-symmetric Scarf-II and periodic potentials in optical fibers. Secondly, the
data-driven inverse problems are studied for the discovery of PT-symmetric
potential functions rather than just potential parameters in the 1D and 2D SNLSEs.
In particular, we propose a modified PINNs (mPINNs) scheme to directly identify
the PT potential functions of the 1D and 2D SNLSEs from the solution data.
Moreover, the inverse problems for 1D and 2D PT-symmetric potentials depending
on the propagation distance z are also investigated using the mPINNs method. We also
identify the potential functions using PINNs applied to the stationary
equation of the SNLSE. Furthermore, two network structures are compared under
different parameter conditions such that the predicted PT potentials can
achieve similarly high accuracy. These results illustrate that the
established deep neural networks can be successfully used in 1D and 2D SNLSEs
with high accuracies. Moreover, some main factors affecting neural networks
performance are discussed in 1D and 2D PT Scarf-II and periodic potentials,
including activation functions, structures of the networks, and sizes of the
training data. In particular, twelve different nonlinear activation functions,
including periodic and non-periodic ones, are analyzed in detail, leading to
the conclusion that selecting activation functions according to the form of the
solution and the equation usually achieves better results. | [
"Jin Song",
"Zhenya Yan"
] | 2023-09-29 14:49:24 | http://arxiv.org/abs/2310.02276v1 | http://arxiv.org/pdf/2310.02276v1 | 2310.02276v1 |
In search of dispersed memories: Generative diffusion models are associative memory networks | Hopfield networks are widely used in neuroscience as simplified theoretical
models of biological associative memory. The original Hopfield networks store
memories by encoding patterns of binary associations, which result in a
synaptic learning mechanism known as Hebbian learning rule. Modern Hopfield
networks can achieve exponential capacity scaling by using highly non-linear
energy functions. However, the energy function of these newer models cannot be
straightforwardly compressed into binary synaptic couplings and it does not
directly provide new synaptic learning rules. In this work we show that
generative diffusion models can be interpreted as energy-based models and that,
when trained on discrete patterns, their energy function is equivalent to that
of modern Hopfield networks. This equivalence allows us to interpret the
supervised training of diffusion models as a synaptic learning process that
encodes the associative dynamics of a modern Hopfield network in the weight
structure of a deep neural network. Accordingly, in our experiments we show
that the storage capacity of a continuous modern Hopfield network is identical
to the capacity of a diffusion model. Our results establish a strong link
between generative modeling and the theoretical neuroscience of memory, which
provides a powerful computational foundation for the reconstructive theory of
memory, where creative generation and memory recall can be seen as parts of a
unified continuum. | [
"Luca Ambrogioni"
] | 2023-09-29 14:48:24 | http://arxiv.org/abs/2309.17290v1 | http://arxiv.org/pdf/2309.17290v1 | 2309.17290v1 |
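For context on the claimed equivalence, the energy of a modern (continuous) Hopfield network with stored patterns $\xi_1,\dots,\xi_N$ is commonly written, following Ramsauer et al., as

$$E(x) = -\frac{1}{\beta}\log\sum_{i=1}^{N}\exp\left(\beta\,\xi_i^\top x\right) + \frac{1}{2}\|x\|^2 + \mathrm{const},$$

and reading a trained diffusion model as an energy-based model identifies its learned score with a gradient field, $\nabla_x \log p(x) = -\nabla_x E(x)$, so denoising dynamics descend an energy of this form when trained on discrete patterns. The precise correspondence established in the paper may differ in its details.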
AI-Aristotle: A Physics-Informed framework for Systems Biology Gray-Box Identification | Discovering mathematical equations that govern physical and biological
systems from observed data is a fundamental challenge in scientific research.
We present a new physics-informed framework for parameter estimation and
missing physics identification (gray-box) in the field of Systems Biology. The
proposed framework -- named AI-Aristotle -- combines eXtreme Theory of
Functional Connections (X-TFC) domain-decomposition and Physics-Informed Neural
Networks (PINNs) with symbolic regression (SR) techniques for parameter
discovery and gray-box identification. We test the accuracy, speed, flexibility
and robustness of AI-Aristotle based on two benchmark problems in Systems
Biology: a pharmacokinetics drug absorption model, and an ultradian endocrine
model for glucose-insulin interactions. We compare the two machine learning
methods (X-TFC and PINNs), and moreover, we employ two different symbolic
regression techniques to cross-verify our results. While the current work
focuses on the performance of AI-Aristotle based on synthetic data, it can
equally handle noisy experimental data and can even be used for black-box
identification in just a few minutes on a laptop. More broadly, our work
provides insights into the accuracy, cost, scalability, and robustness of
integrating neural networks with symbolic regressors, offering a comprehensive
guide for researchers tackling gray-box identification challenges in complex
dynamical systems in biomedicine and beyond. | [
"Nazanin Ahmadi Daryakenari",
"Mario De Florio",
"Khemraj Shukla",
"George Em Karniadakis"
] | 2023-09-29 14:45:51 | http://arxiv.org/abs/2310.01433v1 | http://arxiv.org/pdf/2310.01433v1 | 2310.01433v1 |
PB-LLM: Partially Binarized Large Language Models | This paper explores network binarization, a radical form of quantization,
compressing model weights to a single bit, specifically for compressing Large
Language Models (LLMs). Because previous binarization methods cause LLMs to
collapse, we propose a novel approach, Partially-Binarized LLM (PB-LLM), which can
achieve extreme low-bit quantization while maintaining the linguistic reasoning
capacity of quantized LLMs. Specifically, our exploration first uncovers the
ineffectiveness of naive applications of existing binarization algorithms and
highlights the imperative role of salient weights in achieving low-bit
quantization. Thus, PB-LLM filters a small ratio of salient weights during
binarization, allocating them to higher-bit storage, i.e.,
partial binarization. PB-LLM is extended to recover the capacities of
quantized LLMs by analysis from the perspective of post-training quantization
(PTQ) and quantization-aware training (QAT). Under PTQ, combining the concepts
from GPTQ, we reconstruct the binarized weight matrix guided by the Hessian
matrix and successfully recover the reasoning capacity of PB-LLM in low-bit.
Under QAT, we freeze the salient weights during training, explore the
derivation of optimal scaling factors crucial for minimizing the quantization
error, and propose a scaling mechanism based on this derived scaling strategy
for residual binarized weights. Those explorations and the developed
methodologies significantly contribute to rejuvenating the performance of
low-bit quantized LLMs and present substantial advancements in the field of
network binarization for LLMs. The code is available at
https://github.com/hahnyuan/BinaryLLM. | [
"Yuzhang Shang",
"Zhihang Yuan",
"Qiang Wu",
"Zhen Dong"
] | 2023-09-29 14:35:27 | http://arxiv.org/abs/2310.00034v1 | http://arxiv.org/pdf/2310.00034v1 | 2310.00034v1 |
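A minimal sketch of the partial binarization idea: keep the largest-magnitude fraction of weights in higher precision and binarize the rest with a per-tensor scale. Magnitude as the salience criterion and the mean-absolute scale are simplifications; PB-LLM's Hessian-guided reconstruction and QAT scaling are not shown.

```python
import torch

def partially_binarize(w: torch.Tensor, salient_frac: float = 0.05):
    """Binarize all but the top `salient_frac` of weights by magnitude (sketch)."""
    k = max(1, int(salient_frac * w.numel()))
    thresh = w.abs().flatten().topk(k).values.min()
    salient = w.abs() >= thresh            # kept in higher-bit storage
    alpha = w[~salient].abs().mean()       # per-tensor scale for the binary part
    w_bin = alpha * torch.sign(w)          # values in {-alpha, 0, +alpha}
    return torch.where(salient, w, w_bin), salient
```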
Toward Robust Recommendation via Real-time Vicinal Defense | Recommender systems have been shown to be vulnerable to poisoning attacks,
where malicious data is injected into the dataset to cause the recommender
system to provide biased recommendations. To defend against such attacks,
various robust learning methods have been proposed. However, most methods are
model-specific or attack-specific, making them lack generality, while other
methods, such as adversarial training, are oriented towards evasion attacks and
thus have a weak defense strength in poisoning attacks.
In this paper, we propose a general method, Real-time Vicinal Defense (RVD),
which leverages neighboring training data to fine-tune the model before making
a recommendation for each user. RVD works in the inference phase to ensure the
robustness of the specific sample in real-time, so there is no need to change
the model structure and training process, making it more practical. Extensive
experimental results demonstrate that RVD effectively mitigates targeted
poisoning attacks across various models without sacrificing accuracy. Moreover,
the defensive effect can be further amplified when our method is combined with
other strategies. | [
"Yichang Xu",
"Chenwang Wu",
"Defu Lian"
] | 2023-09-29 14:30:05 | http://arxiv.org/abs/2309.17278v1 | http://arxiv.org/pdf/2309.17278v1 | 2309.17278v1 |
Utility-based Adaptive Teaching Strategies using Bayesian Theory of Mind | Good teachers always tailor their explanations to the learners. Cognitive
scientists model this process under the rationality principle: teachers try to
maximise the learner's utility while minimising teaching costs. To this end,
human teachers seem to build mental models of the learner's internal state, a
capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build
on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor
their teaching strategies to the learners. Our ToM-equipped teachers construct
models of learners' internal states from observations and leverage them to
select demonstrations that maximise the learners' rewards while minimising
teaching costs. Our experiments in simulated environments demonstrate that
learners taught this way are more efficient than those taught in a
learner-agnostic way. This effect gets stronger when the teacher's model of the
learner better aligns with the actual learner's state, either using a more
accurate prior or after accumulating observations of the learner's behaviour.
This work is a first step towards social machines that teach us and each other,
see https://teacher-with-tom.github.io. | [
"Clémence Grislain",
"Hugo Caselles-Dupré",
"Olivier Sigaud",
"Mohamed Chetouani"
] | 2023-09-29 14:27:53 | http://arxiv.org/abs/2309.17275v1 | http://arxiv.org/pdf/2309.17275v1 | 2309.17275v1 |
A Foundation Model for General Moving Object Segmentation in Medical Images | Medical image segmentation aims to delineate the anatomical or pathological
structures of interest, playing a crucial role in clinical diagnosis. A
substantial amount of high-quality annotated data is crucial for constructing
high-precision deep segmentation models. However, medical annotation is highly
cumbersome and time-consuming, especially for medical videos or 3D volumes, due
to the huge labeling space and poor inter-frame consistency. Recently, a
fundamental task named Moving Object Segmentation (MOS) has made significant
advancements in natural images. Its objective is to delineate moving objects
from the background within image sequences, requiring only minimal annotations.
In this paper, we propose the first foundation model, named iMOS, for MOS in
medical images. Extensive experiments on a large multi-modal medical dataset
validate the effectiveness of the proposed iMOS. Specifically, with the
annotation of only a small number of images in the sequence, iMOS can achieve
satisfactory tracking and segmentation performance of moving objects throughout
the entire sequence in both directions. We hope that the proposed iMOS can help
accelerate the annotation speed of experts, and boost the development of
medical foundation models. | [
"Zhongnuo Yan",
"Tong Han",
"Yuhao Huang",
"Lian Liu",
"Han Zhou",
"Jiongquan Chen",
"Wenlong Shi",
"Yan Cao",
"Xin Yang",
"Dong Ni"
] | 2023-09-29 14:17:24 | http://arxiv.org/abs/2309.17264v3 | http://arxiv.org/pdf/2309.17264v3 | 2309.17264v3 |
Estimation and Inference in Distributional Reinforcement Learning | In this paper, we study distributional reinforcement learning from the
perspective of statistical efficiency.
We investigate distributional policy evaluation, aiming to estimate the
complete distribution of the random return (denoted $\eta^\pi$) attained by a
given policy $\pi$.
We use the certainty-equivalence method to construct our estimator
$\hat\eta^\pi$, given a generative model is available.
We show that in this circumstance we need a dataset of size $\widetilde
O\left(\frac{|\mathcal{S}||\mathcal{A}|}{\epsilon^{2p}(1-\gamma)^{2p+2}}\right)$
to guarantee that the $p$-Wasserstein metric between $\hat\eta^\pi$ and $\eta^\pi$ is
less than $\epsilon$ with high probability.
This implies the distributional policy evaluation problem can be solved with
sample efficiency.
Also, we show that under different mild assumptions a dataset of size
$\widetilde
O\left(\frac{|\mathcal{S}||\mathcal{A}|}{\epsilon^{2}(1-\gamma)^{4}}\right)$
suffices to ensure the Kolmogorov metric and total variation metric between
$\hat\eta^\pi$ and $\eta^\pi$ are below $\epsilon$ with high probability.
Furthermore, we investigate the asymptotic behavior of $\hat\eta^\pi$.
We demonstrate that the ``empirical process''
$\sqrt{n}(\hat\eta^\pi-\eta^\pi)$ converges weakly to a Gaussian process in the
space of bounded functionals on Lipschitz function class
$\ell^\infty(\mathcal{F}_{W_1})$, also in the space of bounded functionals on
indicator function class $\ell^\infty(\mathcal{F}_{\mathrm{KS}})$ and bounded
measurable function class $\ell^\infty(\mathcal{F}_{\mathrm{TV}})$ when some
mild conditions hold.
Our findings give rise to a unified approach to statistical inference of a
wide class of statistical functionals of $\eta^\pi$. | [
"Liangyu Zhang",
"Yang Peng",
"Jiadong Liang",
"Wenhao Yang",
"Zhihua Zhang"
] | 2023-09-29 14:14:53 | http://arxiv.org/abs/2309.17262v1 | http://arxiv.org/pdf/2309.17262v1 | 2309.17262v1 |
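As a toy companion to the estimation problem above, the sketch below draws Monte Carlo returns under a policy and computes the $1$-Wasserstein distance between two equal-size empirical return distributions via sorted samples. The `env`/`policy` interface is hypothetical, and the paper's estimator is certainty-equivalence with a generative model, not plain Monte Carlo.

```python
import numpy as np

def sample_returns(env, policy, gamma=0.99, n=1000, horizon=200):
    """Monte Carlo samples of the discounted return under `policy` (sketch)."""
    returns = []
    for _ in range(n):
        s, g, disc = env.reset(), 0.0, 1.0
        for _ in range(horizon):
            s, r, done = env.step(policy(s))  # hypothetical env API
            g += disc * r
            disc *= gamma
            if done:
                break
        returns.append(g)
    return np.array(returns)

def wasserstein_1(x: np.ndarray, y: np.ndarray) -> float:
    """W1 between equal-size samples: mean absolute gap of sorted values."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))
```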
PlaceNav: Topological Navigation through Place Recognition | Recent results suggest that splitting topological navigation into
robot-independent and robot-specific components improves navigation performance
by enabling the robot-independent part to be trained with data collected by
different robot types. However, the navigation methods are still limited by the
scarcity of suitable training data and suffer from poor computational scaling.
In this work, we present PlaceNav, subdividing the robot-independent part into
navigation-specific and generic computer vision components. We utilize visual
place recognition for the subgoal selection of the topological navigation
pipeline. This makes subgoal selection more efficient and enables leveraging
large-scale datasets from non-robotics sources, increasing training data
availability. Bayesian filtering, enabled by place recognition, further
improves navigation performance by increasing the temporal consistency of
subgoals. Our experimental results verify the design, and the new model obtains
a 76% higher success rate in indoor and 23% higher in outdoor navigation tasks
with higher computational efficiency. | [
"Lauri Suomela",
"Jussi Kalliola",
"Harry Edelman",
"Joni-Kristian Kämäräinen"
] | 2023-09-29 14:12:54 | http://arxiv.org/abs/2309.17260v3 | http://arxiv.org/pdf/2309.17260v3 | 2309.17260v3 |
Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering | Prompting and in-context learning (ICL) have become efficient learning
paradigms for large language models (LLMs). However, LLMs suffer from prompt
brittleness and various bias factors in the prompt, including but not limited
to the formatting, the choice of verbalizers, and the ICL examples. To address
this problem that results in unexpected performance degradation, calibration
methods have been developed to mitigate the effects of these biases while
recovering LLM performance. In this work, we first conduct a systematic
analysis of the existing calibration methods, where we both provide a unified
view and reveal the failure cases. Inspired by these analyses, we propose Batch
Calibration (BC), a simple yet intuitive method that controls the contextual
bias from the batched input, unifies various prior approaches, and effectively
addresses the aforementioned issues. BC is zero-shot, inference-only, and
incurs negligible additional costs. In the few-shot setup, we further extend BC
to allow it to learn the contextual bias from labeled data. We validate the
effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate
state-of-the-art performance over previous calibration baselines across more
than 10 natural language understanding and image classification tasks. | [
"Han Zhou",
"Xingchen Wan",
"Lev Proleev",
"Diana Mincu",
"Jilin Chen",
"Katherine Heller",
"Subhrajit Roy"
] | 2023-09-29 13:55:45 | http://arxiv.org/abs/2309.17249v1 | http://arxiv.org/pdf/2309.17249v1 | 2309.17249v1 |
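A minimal sketch of the batch-level correction described above: estimate the contextual bias as the mean class probability over a batch of inputs from the same prompt, then correct each prediction by that estimate in log space. This is a simplified zero-shot variant; the learned few-shot extension is not shown.

```python
import numpy as np

def batch_calibrate(logits: np.ndarray) -> np.ndarray:
    """Bias-corrected predictions from a batch of (batch, n_classes) logits."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    bias = probs.mean(axis=0, keepdims=True)          # contextual bias estimate
    calibrated = np.log(probs + 1e-12) - np.log(bias + 1e-12)
    return calibrated.argmax(axis=1)
```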
Data-driven localized waves and parameter discovery in the massive Thirring model via extended physics-informed neural networks with interface zones | In this paper, we study data-driven localized wave solutions and parameter
discovery in the massive Thirring (MT) model via the deep learning in the
framework of physics-informed neural networks (PINNs) algorithm. Abundant
data-driven solutions including soliton of bright/dark type, breather and rogue
wave are simulated accurately and analyzed contrastively with relative and
absolute errors. For higher-order localized wave solutions, we employ the
extended PINNs (XPINNs) with domain decomposition to capture the complete
pictures of dynamic behaviors such as soliton collisions, breather oscillations
and rogue-wave superposition. In particular, we modify the interface line in
domain decomposition of XPINNs into a small interface zone and introduce the
pseudo initial, residual and gradient conditions as interface conditions linked
adjacently with individual neural networks. Then this modified approach is
applied successfully to various solutions ranging from bright-bright soliton,
dark-dark soliton, dark-antidark soliton, general breather, Kuznetsov-Ma
breather and second-order rogue wave. Experimental results show that this
improved version of XPINNs reduces the computational complexity with a faster
convergence rate and keeps the quality of the learned solutions with smoother
stitching performance. For the inverse problems, the unknown
coefficient parameters of linear and nonlinear terms in the MT model are
identified accurately with and without noise by using the classical PINNs
algorithm. | [
"Junchao Chen",
"Jin Song",
"Zijian Zhou",
"Zhenya Yan"
] | 2023-09-29 13:50:32 | http://arxiv.org/abs/2309.17240v1 | http://arxiv.org/pdf/2309.17240v1 | 2309.17240v1 |
MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data | Discovering genes with similar functions across diverse biomedical contexts
poses a significant challenge in gene representation learning due to data
heterogeneity. In this study, we resolve this problem by introducing a novel
model called Multimodal Similarity Learning Graph Neural Network, which
combines Multimodal Machine Learning and Deep Graph Neural Networks to learn
gene representations from single-cell sequencing and spatial transcriptomic
data. Leveraging 82 training datasets from 10 tissues, three sequencing
techniques, and three species, we create informative graph structures for model
training and gene representation generation, while incorporating
regularization with weighted similarity learning and contrastive learning to
learn cross-data gene-gene relationships. This novel design ensures that we can
offer gene representations containing functional similarity across different
contexts in a joint space. Comprehensive benchmarking analysis shows our
model's capacity to effectively capture gene function similarity across
multiple modalities, outperforming state-of-the-art methods in gene
representation learning by up to 97.5%. Moreover, we employ bioinformatics
tools in conjunction with gene representations to uncover pathway enrichment,
regulation causal networks, and functions of disease-associated or
dosage-sensitive genes. Therefore, our model efficiently produces unified gene
representations for the analysis of gene functions, tissue functions, diseases,
and species evolution. | [
"Tianyu Liu",
"Yuge Wang",
"Rex Ying",
"Hongyu Zhao"
] | 2023-09-29 13:33:53 | http://arxiv.org/abs/2310.02275v1 | http://arxiv.org/pdf/2310.02275v1 | 2310.02275v1 |
LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games | There is a growing interest in using Large Language Models (LLMs) as agents
to tackle real-world tasks that may require assessing complex situations. Yet,
we have a limited understanding of LLMs' reasoning and decision-making
capabilities, partly stemming from a lack of dedicated evaluation benchmarks.
As negotiating and compromising are key aspects of our everyday communication
and collaboration, we propose using scorable negotiation games as a new
evaluation framework for LLMs. We create a testbed of diverse text-based,
multi-agent, multi-issue, semantically rich negotiation games, with easily
tunable difficulty. To solve the challenge, agents need to have strong
arithmetic, inference, exploration, and planning capabilities, while seamlessly
integrating them. Via a systematic zero-shot Chain-of-Thought prompting (CoT),
we show that agents can negotiate and consistently reach successful deals. We
quantify the performance with multiple metrics and observe a large gap between
GPT-4 and earlier models. Importantly, we test the generalization to new games
and setups. Finally, we show that these games can help evaluate other critical
aspects, such as the interaction dynamics between agents in the presence of
greedy and adversarial players. | [
"Sahar Abdelnabi",
"Amr Gomaa",
"Sarath Sivaprasad",
"Lea Schönherr",
"Mario Fritz"
] | 2023-09-29 13:33:06 | http://arxiv.org/abs/2309.17234v1 | http://arxiv.org/pdf/2309.17234v1 | 2309.17234v1 |
Spurious Feature Diversification Improves Out-of-distribution Generalization | Generalization to out-of-distribution (OOD) data is a critical challenge in
machine learning. Ensemble-based methods, like weight space ensembles that
interpolate model parameters, have been shown to achieve superior OOD
performance. However, the underlying mechanism for their effectiveness remains
unclear. In this study, we closely examine WiSE-FT, a popular weight space
ensemble method that interpolates between a pre-trained and a fine-tuned model.
We observe an unexpected phenomenon, in which WiSE-FT successfully corrects
many cases where each individual model makes incorrect predictions, which
contributes significantly to its OOD effectiveness. To gain further insights,
we conduct theoretical analysis in a multi-class setting with a large number of
spurious features. Our analysis predicts the above phenomenon and it further
shows that ensemble-based models reduce prediction errors in the OOD settings
by utilizing a more diverse set of spurious features. Contrary to the
conventional wisdom that focuses on learning invariant features for better OOD
performance, our findings suggest that incorporating a large number of diverse
spurious features weakens their individual contributions, leading to improved
overall OOD generalization performance. Empirically we demonstrate the
effectiveness of utilizing diverse spurious features on a MultiColorMNIST
dataset, and our experimental results are consistent with the theoretical
analysis. Building upon the new theoretical insights into the efficacy of
ensemble methods, we further identify an issue of WiSE-FT caused by the
overconfidence of fine-tuned models in OOD situations. This overconfidence
magnifies the fine-tuned model's incorrect prediction, leading to deteriorated
OOD ensemble performance. To remedy this problem, we propose a novel method
called BAlaNced averaGing (BANG), which significantly enhances the OOD
performance of WiSE-FT. | [
"Yong Lin",
"Lu Tan",
"Yifan Hao",
"Honam Wong",
"Hanze Dong",
"Weizhong Zhang",
"Yujiu Yang",
"Tong Zhang"
] | 2023-09-29 13:29:22 | http://arxiv.org/abs/2309.17230v1 | http://arxiv.org/pdf/2309.17230v1 | 2309.17230v1 |
MORPH: Design Co-optimization with Reinforcement Learning via a Differentiable Hardware Model Proxy | We introduce MORPH, a method for co-optimization of hardware design
parameters and control policies in simulation using reinforcement learning.
Like most co-optimization methods, MORPH relies on a model of the hardware
being optimized, usually simulated based on the laws of physics. However, such
a model is often difficult to integrate into an effective optimization routine.
To address this, we introduce a proxy hardware model, which is always
differentiable and enables efficient co-optimization alongside a long-horizon
control policy using RL. MORPH is designed to ensure that the optimized
hardware proxy remains as close as possible to its realistic counterpart, while
still enabling task completion. We demonstrate our approach on simulated 2D
reaching and 3D multi-fingered manipulation tasks. | [
"Zhanpeng He",
"Matei Ciocarlie"
] | 2023-09-29 13:25:45 | http://arxiv.org/abs/2309.17227v1 | http://arxiv.org/pdf/2309.17227v1 | 2309.17227v1 |
Training and inference of large language models using 8-bit floating point | FP8 formats are gaining popularity to boost the computational efficiency for
training and inference of large deep learning models. Their main challenge is
that a careful choice of scaling is needed to prevent degradation due to the
reduced dynamic range compared to higher-precision formats. Although there
exists ample literature about selecting such scalings for INT formats, this
critical aspect has yet to be addressed for FP8. This paper presents a
methodology to select the scalings for FP8 linear layers, based on dynamically
updating per-tensor scales for the weights, gradients and activations. We apply
this methodology to train and validate large language models of the type of GPT
and Llama 2 using FP8, for model sizes ranging from 111M to 70B. To facilitate
the understanding of the FP8 dynamics, our results are accompanied by plots of
the per-tensor scale distribution for weights, activations and gradients during
both training and inference. | [
"Sergio P. Perez",
"Yan Zhang",
"James Briggs",
"Charlie Blake",
"Josh Levy-Kramer",
"Paul Balanca",
"Carlo Luschi",
"Stephen Barlow",
"Andrew William Fitzgibbon"
] | 2023-09-29 13:24:33 | http://arxiv.org/abs/2309.17224v1 | http://arxiv.org/pdf/2309.17224v1 | 2309.17224v1 |
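The scaling problem above can be sketched as follows: choose a per-tensor scale so the tensor's maximum magnitude lands near the FP8 format's maximum representable value (448 for E4M3). The margin and the round-trip below are illustrative assumptions, not the paper's dynamic update rule; `torch.float8_e4m3fn` requires a recent PyTorch.

```python
import torch

FP8_E4M3_MAX = 448.0  # maximum representable magnitude in E4M3

def compute_scale(t: torch.Tensor, margin: float = 0.9) -> float:
    """Scale mapping the tensor's absmax near the FP8 dynamic-range ceiling."""
    amax = t.abs().max().item()
    return (margin * FP8_E4M3_MAX) / max(amax, 1e-12)

def fp8_roundtrip(t: torch.Tensor) -> torch.Tensor:
    """Quantize to FP8 with a dynamic per-tensor scale, then dequantize."""
    scale = compute_scale(t)
    t_fp8 = (t * scale).to(torch.float8_e4m3fn)
    return t_fp8.to(torch.float32) / scale
```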
RSAM: Learning on manifolds with Riemannian Sharpness-aware Minimization | Nowadays, understanding the geometry of the loss landscape shows promise in
enhancing a model's generalization ability. In this work, we draw upon prior
works that apply geometric principles to optimization and present a novel
approach to improve robustness and generalization ability for constrained
optimization problems. Indeed, this paper aims to generalize the
Sharpness-Aware Minimization (SAM) optimizer to Riemannian manifolds. In doing
so, we first extend the concept of sharpness and introduce a novel notion of
sharpness on manifolds. To support this notion of sharpness, we present a
theoretical analysis characterizing generalization capabilities with respect to
manifold sharpness, which demonstrates a tighter bound on the generalization
gap, a result not known before. Motivated by this analysis, we introduce our
algorithm, Riemannian Sharpness-Aware Minimization (RSAM). To demonstrate
RSAM's ability to enhance generalization ability, we evaluate and contrast our
algorithm on a broad set of problems, such as image classification and
contrastive learning across different datasets, including CIFAR100, CIFAR10,
and FGVCAircraft. Our code is publicly available at
\url{https://t.ly/RiemannianSAM}. | [
"Tuan Truong",
"Hoang-Phi Nguyen",
"Tung Pham",
"Minh-Tuan Tran",
"Mehrtash Harandi",
"Dinh Phung",
"Trung Le"
] | 2023-09-29 13:14:28 | http://arxiv.org/abs/2309.17215v1 | http://arxiv.org/pdf/2309.17215v1 | 2309.17215v1 |
Instant Complexity Reduction in CNNs using Locality-Sensitive Hashing | To reduce the computational cost of convolutional neural networks (CNNs) for
usage on resource-constrained devices, structured pruning approaches have shown
promising results, drastically reducing floating-point operations (FLOPs)
without substantial drops in accuracy. However, most recent methods require
fine-tuning or specific training procedures to achieve a reasonable trade-off
between retained accuracy and reduction in FLOPs. This introduces additional
cost in the form of computational overhead and requires training data to be
available. To this end, we propose HASTE (Hashing for Tractable Efficiency), a
parameter-free and data-free module that acts as a plug-and-play replacement
for any regular convolution module. It instantly reduces the network's
test-time inference cost without requiring any training or fine-tuning. We are
able to drastically compress latent feature maps without sacrificing much
accuracy by using locality-sensitive hashing (LSH) to detect redundancies in
the channel dimension. Similar channels are aggregated to reduce the input and
filter depth simultaneously, allowing for cheaper convolutions. We demonstrate
our approach on the popular vision benchmarks CIFAR-10 and ImageNet. In
particular, we are able to instantly drop 46.72% of FLOPs while only losing
1.25% accuracy by just swapping the convolution modules in a ResNet34 on
CIFAR-10 for our HASTE module. | [
"Lukas Meiner",
"Jens Mehnert",
"Alexandru Paul Condurache"
] | 2023-09-29 13:09:40 | http://arxiv.org/abs/2309.17211v1 | http://arxiv.org/pdf/2309.17211v1 | 2309.17211v1 |
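The channel-merging step can be sketched with random-hyperplane LSH: hash each channel's activation map and average channels that land in the same bucket. This illustrates only the redundancy detection and aggregation; HASTE's simultaneous filter-depth reduction and module plumbing are omitted, and the hyperplanes here are redrawn per call for simplicity.

```python
import torch

def lsh_merge_channels(x: torch.Tensor, n_hyperplanes: int = 8) -> torch.Tensor:
    """Average channels sharing an LSH bucket; x is (B, C, H, W) (sketch)."""
    b, c, h, w = x.shape
    desc = x.permute(1, 0, 2, 3).reshape(c, -1)       # one descriptor per channel
    planes = torch.randn(desc.shape[1], n_hyperplanes)
    codes = (desc @ planes > 0).long()                # binary hash codes
    keys = (codes * (2 ** torch.arange(n_hyperplanes))).sum(dim=1)
    merged = []
    for key in keys.unique():
        group = (keys == key).nonzero().squeeze(1)
        merged.append(x[:, group].mean(dim=1))        # aggregate similar channels
    return torch.stack(merged, dim=1)                 # (B, C', H, W)
```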
Robots That Can See: Leveraging Human Pose for Trajectory Prediction | Anticipating the motion of all humans in dynamic environments such as homes
and offices is critical to enable safe and effective robot navigation. Such
spaces remain challenging as humans do not follow strict rules of motion and
there are often multiple occluded entry points such as corners and doors that
create opportunities for sudden encounters. In this work, we present a
Transformer based architecture to predict human future trajectories in
human-centric environments from input features including human positions, head
orientations, and 3D skeletal keypoints from onboard in-the-wild sensory
information. The resulting model captures the inherent uncertainty for future
human trajectory prediction and achieves state-of-the-art performance on common
prediction benchmarks and a human tracking dataset captured from a mobile robot
adapted for the prediction task. Furthermore, we identify new agents with
limited historical data as a major contributor to error and demonstrate the
complementary nature of 3D skeletal poses in reducing prediction error in such
challenging scenarios. | [
"Tim Salzmann",
"Lewis Chiang",
"Markus Ryll",
"Dorsa Sadigh",
"Carolina Parada",
"Alex Bewley"
] | 2023-09-29 13:02:56 | http://arxiv.org/abs/2309.17209v1 | http://arxiv.org/pdf/2309.17209v1 | 2309.17209v1 |
Memory Gym: Partially Observable Challenges to Memory-Based Agents in Endless Episodes | Memory Gym introduces a unique benchmark designed to test Deep Reinforcement
Learning agents, specifically comparing Gated Recurrent Unit (GRU) against
Transformer-XL (TrXL), on their ability to memorize long sequences, withstand
noise, and generalize. It features partially observable 2D environments with
discrete controls, namely Mortar Mayhem, Mystery Path, and Searing Spotlights.
These originally finite environments are extrapolated to novel endless tasks
that act as an automatic curriculum, drawing inspiration from the car game ``I
packed my bag". These endless tasks are not only beneficial for evaluating
efficiency but also intriguingly valuable for assessing the effectiveness of
approaches in memory-based agents. Given the scarcity of publicly available
memory baselines, we contribute an implementation driven by TrXL and Proximal
Policy Optimization. This implementation leverages TrXL as episodic memory
using a sliding window approach. In our experiments on the finite environments,
TrXL demonstrates superior sample efficiency in Mystery Path and outperforms in
Mortar Mayhem. However, GRU is more efficient on Searing Spotlights. Most
notably, in all endless tasks, GRU makes a remarkable resurgence, consistently
outperforming TrXL by significant margins. | [
"Marco Pleines",
"Matthias Pallasch",
"Frank Zimmer",
"Mike Preuss"
] | 2023-09-29 12:59:28 | http://arxiv.org/abs/2309.17207v1 | http://arxiv.org/pdf/2309.17207v1 | 2309.17207v1 |
ComSD: Balancing Behavioral Quality and Diversity in Unsupervised Skill Discovery | Learning diverse and qualified behaviors for utilization and adaptation
without supervision is a key ability of intelligent creatures. Ideal
unsupervised skill discovery methods are able to produce diverse and qualified
skills in the absence of extrinsic reward, while the discovered skill set can
efficiently adapt to downstream tasks in various ways. Maximizing the Mutual
Information (MI) between skills and visited states can achieve ideal
skill-conditioned behavior distillation in theory. However, it is difficult for
recent advanced methods to properly balance behavioral quality (exploration) and
diversity (exploitation) in practice, which may be attributed to the
unreasonable MI estimation by their rigid intrinsic reward design. In this
paper, we propose Contrastive multi-objectives Skill Discovery (ComSD) which
tries to mitigate the quality-versus-diversity conflict of discovered behaviors
through a more reasonable MI estimation and a dynamically weighted intrinsic
reward. ComSD proposes to employ contrastive learning for a more reasonable
estimation of skill-conditioned entropy in MI decomposition. In addition, a
novel weighting mechanism is proposed to dynamically balance the different
entropy estimations (in the MI decomposition) into a novel multi-objective intrinsic
reward, to improve both skill diversity and quality. For challenging robot
behavior discovery, ComSD can produce a qualified skill set consisting of
diverse behaviors at different activity levels, which recent advanced methods
cannot. On numerical evaluations, ComSD exhibits state-of-the-art adaptation
performance, significantly outperforming recent advanced skill discovery
methods across all skill combination tasks and most skill finetuning tasks.
Codes will be released at https://github.com/liuxin0824/ComSD. | [
"Xin Liu",
"Yaran Chen",
"Dongbin Zhao"
] | 2023-09-29 12:53:41 | http://arxiv.org/abs/2309.17203v1 | http://arxiv.org/pdf/2309.17203v1 | 2309.17203v1 |
An Investigation Into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features | Recent research has shown that artificial intelligence (AI) models can
exhibit bias in performance when trained using data that are imbalanced by
protected attribute(s). Most work to date has focused on deep learning models,
but classical AI techniques that make use of hand-crafted features may also be
susceptible to such bias. In this paper we investigate the potential for race
bias in random forest (RF) models trained using radiomics features. Our
application is prediction of tumour molecular subtype from dynamic contrast
enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. Our
results show that radiomics features derived from DCE-MRI data do contain
race-identifiable information, and that RF models can be trained to predict
White and Black race from these data with 60-70% accuracy, depending on the
subset of features used. Furthermore, RF models trained to predict tumour
molecular subtype using race-imbalanced data seem to produce biased behaviour,
exhibiting better performance on test data from the race on which they were
trained. | [
"Mohamed Huti",
"Tiarna Lee",
"Elinor Sawyer",
"Andrew P. King"
] | 2023-09-29 12:45:53 | http://arxiv.org/abs/2309.17197v1 | http://arxiv.org/pdf/2309.17197v1 | 2309.17197v1 |
ResBit: Residual Bit Vector for Categorical Values | The one-hot vector has long been widely used in machine learning as a simple
and generic method for representing discrete data. However, this method
increases the number of dimensions linearly with the categorical data to be
represented, which is problematic from the viewpoint of spatial computational
complexity in deep learning, which requires a large amount of data. Recently,
Analog Bits, a method for representing discrete data as a sequence of bits, was
proposed on the basis of the high expressiveness of diffusion models. However,
since the number of category types to be represented in a generation task is
not necessarily a power of two, there is a discrepancy between the range
that Analog Bits can represent and the range represented as category data. If
such a value is generated, the problem is that the original category value
cannot be restored. To address this issue, we propose Residual Bit Vector
(ResBit), which is a hierarchical bit representation. Although it is a
general-purpose representation method, in this paper, we treat it as numerical
data and show that it can be used as an extension of Analog Bits using Table
Residual Bit Diffusion (TRBD), which is incorporated into TabDDPM, a tabular
data generation method. We experimentally confirmed that TRBD can generate
diverse and high-quality data from small-scale table data to table data
containing diverse category values faster than TabDDPM. Furthermore, we show
that ResBit can also serve as an alternative to the one-hot vector by utilizing
ResBit for conditioning in GANs and as a label expression in image
classification. | [
"Masane Fuchi",
"Amar Zanashir",
"Hiroto Minami",
"Tomohiro Takagi"
] | 2023-09-29 12:45:39 | http://arxiv.org/abs/2309.17196v1 | http://arxiv.org/pdf/2309.17196v1 | 2309.17196v1 |
Generalized Activation via Multivariate Projection | Activation functions are essential to introduce nonlinearity into neural
networks, with the Rectified Linear Unit (ReLU) often favored for its
simplicity and effectiveness. Motivated by the structural similarity between a
shallow Feedforward Neural Network (FNN) and a single iteration of the
Projected Gradient Descent (PGD) algorithm, a standard approach for solving
constrained optimization problems, we consider ReLU as a projection from R onto
the nonnegative half-line R+. Building on this interpretation, we substitute
ReLU with a generalized projection operator onto a convex cone, such as the
Second-Order Cone (SOC) projection, thereby naturally extending it
to a Multivariate Projection Unit (MPU), an activation function with multiple
inputs and multiple outputs. We further provide a mathematical proof
establishing that FNNs activated by SOC projections outperform those utilizing
ReLU in terms of expressive power. Experimental evaluations on widely-adopted
architectures further corroborate MPU's effectiveness against a broader range
of existing activation functions. | [
"Jiayun Li",
"Yuxiao Cheng",
"Zhuofan Xia",
"Yilin Mo",
"Gao Huang"
] | 2023-09-29 12:44:27 | http://arxiv.org/abs/2309.17194v1 | http://arxiv.org/pdf/2309.17194v1 | 2309.17194v1 |
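The SOC projection named in the abstract has a standard closed form. Below is a sketch contrasting ReLU as a per-coordinate projection onto R+ with the multivariate cone projection; how the paper integrates the MPU into a network layer is not reproduced here.

```python
import numpy as np

def relu(z):
    # ReLU viewed as the projection of each coordinate onto R+.
    return np.maximum(z, 0.0)

def soc_projection(x, t):
    # Euclidean projection of (x, t) onto the second-order cone
    # {(x, t) : ||x||_2 <= t}, using the standard closed form.
    nx = np.linalg.norm(x)
    if nx <= t:            # already inside the cone
        return x, t
    if nx <= -t:           # projects to the origin
        return np.zeros_like(x), 0.0
    scale = (nx + t) / 2.0
    return scale * x / nx, scale
```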
A Survey of Incremental Transfer Learning: Combining Peer-to-Peer Federated Learning and Domain Incremental Learning for Multicenter Collaboration | Due to data privacy constraints, data sharing among multiple clinical centers
is restricted, which impedes the development of high-performance deep learning
models through multicenter collaboration. Naive weight transfer methods share
intermediate model weights without raw data and hence can bypass data privacy
restrictions. However, performance drops are typically observed when the model
is transferred from one center to the next because of the forgetting problem.
Incremental transfer learning, which combines peer-to-peer federated learning
and domain incremental learning, can overcome the data privacy issue and
meanwhile preserve model performance by using continual learning techniques. In
this work, a conventional domain/task incremental learning framework is adapted
for incremental transfer learning. A comprehensive survey on the efficacy of
different regularization-based continual learning methods for multicenter
collaboration is performed. The influences of data heterogeneity, classifier
head setting, network optimizer, model initialization, center order, and weight
transfer type have been investigated thoroughly. Our framework is publicly
accessible to the research community for further development. | [
"Yixing Huang",
"Christoph Bert",
"Ahmed Gomaa",
"Rainer Fietkau",
"Andreas Maier",
"Florian Putz"
] | 2023-09-29 12:43:21 | http://arxiv.org/abs/2309.17192v1 | http://arxiv.org/pdf/2309.17192v1 | 2309.17192v1 |
RECOMBINER: Robust and Enhanced Compression with Bayesian Implicit Neural Representations | COMpression with Bayesian Implicit NEural Representations (COMBINER) is a
recent data compression method that addresses a key inefficiency of previous
Implicit Neural Representation (INR)-based approaches: it avoids quantization
and enables direct optimization of the rate-distortion performance. However,
COMBINER still has significant limitations: 1) it uses factorized priors and
posterior approximations that lack flexibility; 2) it cannot effectively adapt
to local deviations from global patterns in the data; and 3) its performance
can be susceptible to modeling choices and the variational parameters'
initializations. Our proposed method, Robust and Enhanced COMBINER
(RECOMBINER), addresses these issues by 1) enriching the variational
approximation while maintaining its computational cost via a linear
reparameterization of the INR weights, 2) augmenting our INRs with learnable
positional encodings that enable them to adapt to local details and 3)
splitting high-resolution data into patches to increase robustness and
utilizing expressive hierarchical priors to capture dependency across patches.
We conduct extensive experiments across several data modalities, showcasing
that RECOMBINER achieves competitive results with the best INR-based methods
and even outperforms autoencoder-based codecs on low-resolution images at low
bitrates. | [
"Jiajun He",
"Gergely Flamich",
"Zongyu Guo",
"José Miguel Hernández-Lobato"
] | 2023-09-29 12:27:15 | http://arxiv.org/abs/2309.17182v1 | http://arxiv.org/pdf/2309.17182v1 | 2309.17182v1 |
Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training | Large language models (LLMs) typically employ sampling or beam search,
accompanied by prompts such as Chain-of-Thought (CoT), to boost reasoning and
decoding ability. Recent work like Tree-of-Thought (ToT) and Reasoning via
Planning (RAP) aim to augment the reasoning capabilities of LLMs by utilizing
tree-search algorithms to guide multi-step reasoning. These methods mainly
focus on LLMs' reasoning ability during inference and heavily rely on
human-designed prompts to activate the LLM as a value function, which lacks general
applicability and scalability. To address these limitations, we present an
AlphaZero-like tree-search framework for LLMs (termed TS-LLM), systematically
illustrating how tree-search with a learned value function can guide LLMs'
decoding ability. TS-LLM distinguishes itself in two key ways: (1) Leveraging a
learned value function, our approach can be generally applied to different
tasks beyond reasoning (such as RLHF alignment), and LLMs of any size, without
prompting advanced, large-scale models. (2) It can guide an LLM's decoding during
both inference and training. Empirical evaluations across reasoning, planning,
and RLHF alignment tasks validate the effectiveness of TS-LLM, even on trees
with a depth of 64. | [
"Xidong Feng",
"Ziyu Wan",
"Muning Wen",
"Ying Wen",
"Weinan Zhang",
"Jun Wang"
] | 2023-09-29 12:20:19 | http://arxiv.org/abs/2309.17179v1 | http://arxiv.org/pdf/2309.17179v1 | 2309.17179v1 |
FedZeN: Towards superlinear zeroth-order federated learning via incremental Hessian estimation | Federated learning is a distributed learning framework that allows a set of
clients to collaboratively train a model under the orchestration of a central
server, without sharing raw data samples. Although in many practical scenarios
the derivatives of the objective function are not available, only a few works
have considered the federated zeroth-order setting, in which functions can only
be accessed through a budgeted number of point evaluations. In this work we
focus on convex optimization and design the first federated zeroth-order
algorithm to estimate the curvature of the global objective, with the purpose
of achieving superlinear convergence. We take an incremental Hessian estimator
whose error norm converges linearly, and we adapt it to the federated
zeroth-order setting, sampling the random search directions from the Stiefel
manifold for improved performance. In particular, both the gradient and Hessian
estimators are built at the central server in a communication-efficient and
privacy-preserving way by leveraging synchronized pseudo-random number
generators. We provide a theoretical analysis of our algorithm, named FedZeN,
proving local quadratic convergence with high probability and global linear
convergence up to zeroth-order precision. Numerical simulations confirm the
superlinear convergence rate and show that our algorithm outperforms the
federated zeroth-order methods available in the literature. | [
"Alessio Maritan",
"Subhrakanti Dey",
"Luca Schenato"
] | 2023-09-29 12:13:41 | http://arxiv.org/abs/2309.17174v1 | http://arxiv.org/pdf/2309.17174v1 | 2309.17174v1 |
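A sketch of the synchronized-PRNG trick for zeroth-order estimation: because search directions are regenerated from a shared seed, server and clients only exchange scalar function evaluations. Gaussian directions are used here for simplicity, whereas the paper samples from the Stiefel manifold, and the incremental Hessian estimator is omitted.

```python
import numpy as np

def zo_gradient(f, x, seed, n_dirs=20, mu=1e-4):
    # Two-point zeroth-order gradient estimate. Any party holding the
    # seed can regenerate the identical directions without communication.
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs
```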
Comparative Analysis of Named Entity Recognition in the Dungeons and Dragons Domain | Many NLP tasks, although well-resolved for general English, face challenges
in specific domains like fantasy literature. This is evident in Named Entity
Recognition (NER), which detects and categorizes entities in text. We analyzed
10 NER models on 7 Dungeons and Dragons (D&D) adventure books to assess
domain-specific performance. Using open-source Large Language Models, we
annotated named entities in these books and evaluated each model's precision.
Our findings indicate that, without modifications, Flair, Trankit, and Spacy
outperform others in identifying named entities in the D&D context. | [
"Gayashan Weerasundara",
"Nisansa de Silva"
] | 2023-09-29 12:09:36 | http://arxiv.org/abs/2309.17171v1 | http://arxiv.org/pdf/2309.17171v1 | 2309.17171v1 |
DyVal: Graph-informed Dynamic Evaluation of Large Language Models | Large language models (LLMs) have achieved remarkable performance in various
evaluation benchmarks. However, concerns have been raised about potential data
contamination in their vast training corpora.
Moreover, the static nature and fixed complexity of current benchmarks may
inadequately gauge the advancing capabilities of LLMs. In this paper, we
introduce DyVal, a novel, general, and flexible evaluation protocol for dynamic
evaluation of LLMs. Based on our proposed dynamic evaluation framework, we
build graph-informed DyVal by leveraging the structural advantage of directed
acyclic graphs to dynamically generate evaluation samples with controllable
complexities. DyVal generates challenging evaluation sets on reasoning tasks
including mathematics, logical reasoning, and algorithm problems. We evaluate
various LLMs ranging from Flan-T5-large to ChatGPT and GPT4. Experiments
demonstrate that LLMs perform worse on DyVal-generated evaluation samples with
different complexities, emphasizing the significance of dynamic evaluation. We
also analyze the failure cases and results of different prompting methods.
Moreover, DyVal-generated samples are not only evaluation sets, but also
helpful data for fine-tuning to improve the performance of LLMs on existing
benchmarks. We hope that DyVal can shed light on the future evaluation research
of LLMs. | [
"Kaijie Zhu",
"Jiaao Chen",
"Jindong Wang",
"Neil Zhenqiang Gong",
"Diyi Yang",
"Xing Xie"
] | 2023-09-29 12:04:14 | http://arxiv.org/abs/2309.17167v2 | http://arxiv.org/pdf/2309.17167v2 | 2309.17167v2 |
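A toy illustration of DAG-based dynamic sample generation in the spirit of the abstract: leaves are random numbers, internal nodes combine earlier nodes, complexity is controllable via the number of nodes, and the ground-truth answer comes for free. DyVal's actual node descriptions and constraint mechanisms are not reproduced.

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_arithmetic_sample(n_leaves=4, seed=0):
    # Each internal node may reuse any earlier node, so the expression
    # forms a directed acyclic graph whose root value is the answer.
    rng = random.Random(seed)
    values = [rng.randint(1, 9) for _ in range(n_leaves)]
    exprs = [str(v) for v in values]
    for _ in range(n_leaves - 1):
        i, j = rng.sample(range(len(values)), 2)
        op = rng.choice(sorted(OPS))
        values.append(OPS[op](values[i], values[j]))
        exprs.append(f"({exprs[i]} {op} {exprs[j]})")
    return exprs[-1], values[-1]

question, answer = random_arithmetic_sample()
```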
Age Group Discrimination via Free Handwriting Indicators | The growing global elderly population is expected to increase the prevalence
of frailty, posing significant challenges to healthcare systems. Frailty, a
syndrome associated with ageing, is characterised by progressive health
decline, increased vulnerability to stressors and increased risk of mortality.
It represents a significant burden on public health and reduces the quality of
life of those affected. The lack of a universally accepted method to assess
frailty and a standardised definition highlights a critical research gap. Given
this lack and the importance of early prevention, this study presents an
innovative approach using an instrumented ink pen to ecologically assess
handwriting for age group classification. Content-free handwriting data from 80
healthy participants in different age groups (20-40, 41-60, 61-70 and 70+) were
analysed. Fourteen gesture- and tremor-related indicators were computed from
the raw data and used in five classification tasks. These tasks included
discriminating between adjacent and non-adjacent age groups using Catboost and
Logistic Regression classifiers. Results indicate exceptional classifier
performance, with accuracy ranging from 82.5% to 97.5%, precision from 81.8% to
100%, recall from 75% to 100% and ROC-AUC from 92.2% to 100%. Model
interpretability, facilitated by SHAP analysis, revealed age-dependent
sensitivity of temporal and tremor-related handwriting features. Importantly,
this classification method offers potential for early detection of abnormal
signs of ageing in uncontrolled settings such as remote home monitoring,
thereby addressing the critical issue of frailty detection and contributing to
improved care for older adults. | [
"Eugenio Lomurno",
"Simone Toffoli",
"Davide Di Febbo",
"Matteo Matteucci",
"Francesca Lunardini",
"Simona Ferrante"
] | 2023-09-29 11:44:18 | http://arxiv.org/abs/2309.17156v1 | http://arxiv.org/pdf/2309.17156v1 | 2309.17156v1 |
Efficient Interpretable Nonlinear Modeling for Multiple Time Series | Predictive linear and nonlinear models based on kernel machines or deep
neural networks have been used to discover dependencies among time series. This
paper proposes an efficient nonlinear modeling approach for multiple time
series, with a complexity comparable to linear vector autoregressive (VAR)
models while still incorporating nonlinear interactions among different
time-series variables. The modeling assumption is that the set of time series
is generated in two steps: first, a linear VAR process in a latent space, and
second, a set of invertible and Lipschitz continuous nonlinear mappings that
are applied per sensor, that is, a component-wise mapping from each latent
variable to a variable in the measurement space. The VAR coefficient
identification provides a topology representation of the dependencies among the
aforementioned variables. The proposed approach models each component-wise
nonlinearity using an invertible neural network and imposes sparsity on the VAR
coefficients to reflect the parsimonious dependencies usually found in real
applications. To efficiently solve the formulated optimization problems, a
custom algorithm is devised combining proximal gradient descent, stochastic
primal-dual updates, and projection to enforce the corresponding constraints.
Experimental results on both synthetic and real data sets show that the
proposed algorithm improves the identification of the support of the VAR
coefficients in a parsimonious manner while also improving the time-series
prediction, as compared to the current state-of-the-art methods. | [
"Kevin Roy",
"Luis Miguel Lopez-Ramos",
"Baltasar Beferull-Lozano"
] | 2023-09-29 11:42:59 | http://arxiv.org/abs/2309.17154v1 | http://arxiv.org/pdf/2309.17154v1 | 2309.17154v1 |
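A sketch of the two-step generative assumption stated in the abstract, with `tanh` standing in for the learned invertible, Lipschitz component-wise maps.

```python
import numpy as np

def generate_series(A, T, noise=0.1, seed=0):
    # Step 1: latent linear VAR(1) process z_t = A z_{t-1} + e_t.
    # Step 2: invertible component-wise map per sensor (tanh here as a
    # placeholder for the learned invertible neural networks).
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    z = np.zeros(d)
    y = np.empty((T, d))
    for t in range(T):
        z = A @ z + noise * rng.standard_normal(d)
        y[t] = np.tanh(z)
    return y

# A sparse A encodes which series depend on which (the learned topology).
y = generate_series(A=np.array([[0.8, 0.0], [0.3, 0.5]]), T=200)
```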
Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability | We introduce Prototype Generation, a stricter and more robust form of feature
visualisation for model-agnostic, data-independent interpretability of image
classification models. We demonstrate its ability to generate inputs that
result in natural activation paths, countering previous claims that feature
visualisation algorithms are untrustworthy due to unnatural internal
activations. We substantiate these claims by quantitatively measuring
similarity between the internal activations of our generated prototypes and
natural images. We also demonstrate how the interpretation of generated
prototypes yields important insights, highlighting spurious correlations and
biases learned by models that quantitative methods over test sets cannot
identify. | [
"Arush Tagade",
"Jessica Rumbelow"
] | 2023-09-29 11:16:06 | http://arxiv.org/abs/2309.17144v1 | http://arxiv.org/pdf/2309.17144v1 | 2309.17144v1 |
GRANDE: Gradient-Based Decision Tree Ensembles | Despite the success of deep learning for text and image data, tree-based
ensemble models are still state-of-the-art for machine learning with
heterogeneous tabular data. However, there is a significant need for
tabular-specific gradient-based methods due to their high flexibility. In this
paper, we propose $\text{GRANDE}$, $\text{GRA}$die$\text{N}$t-Based
$\text{D}$ecision Tree $\text{E}$nsembles, a novel approach for learning hard,
axis-aligned decision tree ensembles using end-to-end gradient descent. GRANDE
is based on a dense representation of tree ensembles, which makes it possible
to use backpropagation with a straight-through operator to jointly optimize all
model parameters. Our method combines axis-aligned splits, which are a useful
inductive bias for tabular data, with the flexibility of gradient-based
optimization. Furthermore, we introduce an advanced instance-wise weighting
that facilitates learning representations for both simple and complex
relations within a single model. We conducted an extensive evaluation on a
predefined benchmark with 19 classification datasets and demonstrate that our
method outperforms existing gradient-boosting and deep learning frameworks on
most datasets. | [
"Sascha Marton",
"Stefan Lüdtke",
"Christian Bartelt",
"Heiner Stuckenschmidt"
] | 2023-09-29 10:49:14 | http://arxiv.org/abs/2309.17130v1 | http://arxiv.org/pdf/2309.17130v1 | 2309.17130v1 |
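A generic sketch of the straight-through pattern the abstract names for hard, axis-aligned splits: the forward pass routes hard, while the backward pass differentiates a soft relaxation. The paper's exact parameterization and its instance-wise weighting are not reproduced.

```python
import torch

def hard_split(x, feature_logits, threshold):
    # Soft, differentiable choice of the split feature.
    feature_weights = torch.softmax(feature_logits, dim=-1)
    chosen = x @ feature_weights                  # soft feature value per sample
    soft = torch.sigmoid(chosen - threshold)      # soft routing in (0, 1)
    hard = (soft > 0.5).float()                   # hard routing in {0, 1}
    # Straight-through: forward uses `hard`, gradient flows through `soft`.
    return hard + soft - soft.detach()
```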
Style Transfer for Non-differentiable Audio Effects | Digital audio effects are widely used by audio engineers to alter the
acoustic and temporal qualities of audio data. However, these effects can have
a large number of parameters which can make them difficult to learn for
beginners and hamper creativity for professionals. Recently, there have been a
number of efforts to employ progress in deep learning to acquire the low-level
parameter configurations of audio effects by minimising an objective function
between an input and reference track, commonly referred to as style transfer.
However, current approaches use inflexible black-box techniques or require that
the effects under consideration are implemented in an auto-differentiation
framework. In this work, we propose a deep learning approach to audio
production style matching which can be used with effects implemented in some of
the most widely used frameworks, requiring only that the parameters under
consideration have a continuous domain. Further, our method includes style
matching for various classes of effects, many of which are difficult or
impossible to approximate closely using differentiable functions. We show
that our audio embedding approach creates logical encodings of timbral
information, which can be used for a number of downstream tasks. Further, we
perform a listening test which demonstrates that our approach is able to
convincingly style match a multi-band compressor effect. | [
"Kieran Grant"
] | 2023-09-29 10:40:19 | http://arxiv.org/abs/2309.17125v1 | http://arxiv.org/pdf/2309.17125v1 | 2309.17125v1 |
Reconstruction of Patient-Specific Confounders in AI-based Radiologic Image Interpretation using Generative Pretraining | Detecting misleading patterns in automated diagnostic assistance systems,
such as those powered by Artificial Intelligence, is critical to ensuring their
reliability, particularly in healthcare. Current techniques for evaluating deep
learning models cannot visualize confounding factors at a diagnostic level.
Here, we propose a self-conditioned diffusion model termed DiffChest and train
it on a dataset of 515,704 chest radiographs from 194,956 patients from
multiple healthcare centers in the United States and Europe. DiffChest explains
classifications on a patient-specific level and visualizes the confounding
factors that may mislead the model. We found high inter-reader agreement when
evaluating DiffChest's capability to identify treatment-related confounders,
with Fleiss' Kappa values of 0.8 or higher across most imaging findings.
Confounders were accurately captured with 11.1% to 100% prevalence rates.
Furthermore, our pretraining process optimized the model to capture the most
relevant information from the input radiographs. DiffChest achieved excellent
diagnostic accuracy when diagnosing 11 chest conditions, such as pleural
effusion and cardiac insufficiency, and at least sufficient diagnostic accuracy
for the remaining conditions. Our findings highlight the potential of
pretraining based on diffusion models in medical image classification,
specifically in providing insights into confounding factors and model
robustness. | [
"Tianyu Han",
"Laura Žigutytė",
"Luisa Huck",
"Marc Huppertz",
"Robert Siepmann",
"Yossi Gandelsman",
"Christian Blüthgen",
"Firas Khader",
"Christiane Kuhl",
"Sven Nebelung",
"Jakob Kather",
"Daniel Truhn"
] | 2023-09-29 10:38:08 | http://arxiv.org/abs/2309.17123v1 | http://arxiv.org/pdf/2309.17123v1 | 2309.17123v1 |
Sheaf Hypergraph Networks | Higher-order relations are widespread in nature, with numerous phenomena
involving complex interactions that extend beyond simple pairwise connections.
As a result, advancements in higher-order processing can accelerate the growth
of various fields requiring structured data. Current approaches typically
represent these interactions using hypergraphs. We enhance this representation
by introducing cellular sheaves for hypergraphs, a mathematical construction
that adds extra structure to the conventional hypergraph while maintaining
its local, higher-order connectivity. Drawing inspiration from existing
Laplacians in the literature, we develop two unique formulations of sheaf
hypergraph Laplacians: linear and non-linear. Our theoretical analysis
demonstrates that incorporating sheaves into the hypergraph Laplacian provides
a more expressive inductive bias than standard hypergraph diffusion, creating a
powerful instrument for effectively modelling complex data structures. We
employ these sheaf hypergraph Laplacians to design two categories of models:
Sheaf Hypergraph Neural Networks and Sheaf Hypergraph Convolutional Networks.
These models generalize classical Hypergraph Networks often found in the
literature. Through extensive experimentation, we show that this generalization
significantly improves performance, achieving top results on multiple benchmark
datasets for hypergraph node classification. | [
"Iulia Duta",
"Giulia Cassarà",
"Fabrizio Silvestri",
"Pietro Liò"
] | 2023-09-29 10:25:43 | http://arxiv.org/abs/2309.17116v1 | http://arxiv.org/pdf/2309.17116v1 | 2309.17116v1 |
Meta-Path Learning for Multi-relational Graph Neural Networks | Existing multi-relational graph neural networks use one of two strategies for
identifying informative relations: either they reduce this problem to low-level
weight learning, or they rely on handcrafted chains of relational dependencies,
called meta-paths. However, the former approach faces challenges in the
presence of many relations (e.g., knowledge graphs), while the latter requires
substantial domain expertise to identify relevant meta-paths. In this work we
propose a novel approach to learn meta-paths and meta-path GNNs that are highly
accurate based on a small number of informative meta-paths. A key element of our
approach is a scoring function for measuring the potential informativeness of a
relation in the incremental construction of the meta-path. Our experimental
evaluation shows that the approach manages to correctly identify relevant
meta-paths even with a large number of relations, and substantially outperforms
existing multi-relational GNNs on synthetic and real-world experiments. | [
"Francesco Ferrini",
"Antonio Longa",
"Andrea Passerini",
"Manfred Jaeger"
] | 2023-09-29 10:12:30 | http://arxiv.org/abs/2309.17113v1 | http://arxiv.org/pdf/2309.17113v1 | 2309.17113v1 |
Benchmarking Collaborative Learning Methods Cost-Effectiveness for Prostate Segmentation | Healthcare data is often split into medium/small-sized collections across
multiple hospitals and access to it is encumbered by privacy regulations. This
makes it difficult to use these data for the development of machine learning and
deep learning models, which are known to be data-hungry. One way to overcome
this limitation is to use collaborative learning (CL) methods, which allow
hospitals to work collaboratively to solve a task, without the need to
explicitly share local data.
In this paper, we address a prostate segmentation problem from MRI in a
collaborative scenario by comparing two different approaches: federated
learning (FL) and consensus-based methods (CBM).
To the best of our knowledge, this is the first work in which CBM, such as
label fusion techniques, are used to solve a problem of collaborative learning.
In this setting, CBM combine predictions from locally trained models to obtain
a federated strong learner with ideally improved robustness and predictive
variance properties.
Our experiments show that, in the considered practical scenario, CBMs provide
equal or better results than FL, while being highly cost-effective. Our results
demonstrate that the consensus paradigm may represent a valid alternative to FL
for typical training tasks in medical imaging. | [
"Lucia Innocenti",
"Michela Antonelli",
"Francesco Cremonesi",
"Kenaan Sarhan",
"Alejandro Granados",
"Vicky Goh",
"Sebastien Ourselin",
"Marco Lorenzi"
] | 2023-09-29 09:47:18 | http://arxiv.org/abs/2309.17097v2 | http://arxiv.org/pdf/2309.17097v2 | 2309.17097v2 |
Dynamic Interpretability for Model Comparison via Decision Rules | Explainable AI (XAI) methods have mostly been built to investigate and shed
light on single machine learning models and are not designed to capture and
explain differences between multiple models effectively. This paper addresses
the challenge of understanding and explaining differences between machine
learning models, which is crucial for model selection, monitoring and lifecycle
management in real-world applications. We propose DeltaXplainer, a
model-agnostic method for generating rule-based explanations describing the
differences between two binary classifiers. To assess the effectiveness of
DeltaXplainer, we conduct experiments on synthetic and real-world datasets,
covering various model comparison scenarios involving different types of
concept drift. | [
"Adam Rida",
"Marie-Jeanne Lesot",
"Xavier Renard",
"Christophe Marsala"
] | 2023-09-29 09:42:49 | http://arxiv.org/abs/2309.17095v1 | http://arxiv.org/pdf/2309.17095v1 | 2309.17095v1 |
Too Big, so Fail? -- Enabling Neural Construction Methods to Solve Large-Scale Routing Problems | In recent years new deep learning approaches to solve combinatorial
optimization problems, in particular NP-hard Vehicle Routing Problems (VRP),
have been proposed. The most impactful of these methods are sequential neural
construction approaches which are usually trained via reinforcement learning.
Due to the high training costs of these models, they usually are trained on
limited instance sizes (e.g. serving 100 customers) and later applied to vastly
larger instance sizes (e.g. 2000 customers). By means of a systematic scale-up
study we show that even state-of-the-art neural construction methods are
outperformed by simple heuristics, failing to generalize to larger problem
instances. We propose to use the ruin recreate principle that alternates
between completely destroying a localized part of the solution and then
recreating an improved variant. In this way, neural construction methods like
POMO are never applied to the global problem but just in the reconstruction
step, which only involves partial problems much closer in size to their
original training instances. In thorough experiments on four datasets of
varying distributions and modalities we show that our neural ruin recreate
approach outperforms alternative forms of improving construction methods such
as sampling and beam search and in several experiments also advanced local
search approaches. | [
"Jonas K. Falkner",
"Lars Schmidt-Thieme"
] | 2023-09-29 09:36:37 | http://arxiv.org/abs/2309.17089v1 | http://arxiv.org/pdf/2309.17089v1 | 2309.17089v1 |
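A skeleton of the ruin-recreate principle described in the abstract; the `recreate` callable stands in for the neural construction model (e.g. POMO) applied to the localized partial problem, and `cost` for tour length.

```python
import random

def ruin_recreate(route, recreate, cost, n_iters=1000, frac=0.1):
    # Alternate between destroying a localized slice of the solution
    # (ruin) and rebuilding it with the construction method (recreate),
    # keeping improvements only.
    best, best_cost = list(route), cost(route)
    for _ in range(n_iters):
        k = max(1, int(frac * len(best)))
        start = random.randrange(len(best) - k + 1)
        removed = best[start:start + k]           # ruin: local destruction
        partial = best[:start] + best[start + k:]
        cand = recreate(partial, removed)         # recreate: reinsert customers
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best
```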
From Empirical Measurements to Augmented Data Rates: A Machine Learning Approach for MCS Adaptation in Sidelink Communication | Due to the lack of a feedback channel in the C-V2X sidelink, finding a
suitable modulation and coding scheme (MCS) is a difficult task. However,
recent use cases for vehicle-to-everything (V2X) communication with higher
demands on data rate necessitate choosing the MCS adaptively. In this paper, we
propose a machine learning approach to predict suitable MCS levels.
Additionally, we propose the use of quantile prediction and evaluate it in
combination with different algorithms for the task of predicting the MCS level
with the highest achievable data rate. Thereby, we show significant
improvements over conventional methods of choosing the MCS level. Using a
machine learning approach, however, requires larger real-world data sets than
are currently publicly available for research. For this reason, this paper
presents a data set that was acquired in extensive drive tests, and that we
make publicly available. | [
"Asif Abdullah Rokoni",
"Daniel Schäufele",
"Martin Kasparick",
"Sławomir Stańczak"
] | 2023-09-29 09:32:08 | http://arxiv.org/abs/2309.17086v1 | http://arxiv.org/pdf/2309.17086v1 | 2309.17086v1 |
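A sketch of quantile-based MCS selection under stated assumptions: one conservative (e.g. 10th-percentile) data-rate model per MCS level, fit with a quantile loss; the feature design and the exact quantile level are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_quantile_models(X, y_by_mcs, q=0.1):
    # One conservative data-rate predictor per MCS level.
    models = {}
    for mcs, y in y_by_mcs.items():
        m = GradientBoostingRegressor(loss="quantile", alpha=q)
        models[mcs] = m.fit(X, y)
    return models

def choose_mcs(models, x):
    # Pick the MCS level whose predicted quantile data rate is highest.
    preds = {mcs: m.predict(x.reshape(1, -1))[0] for mcs, m in models.items()}
    return max(preds, key=preds.get)
```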
Diffusion Models as Stochastic Quantization in Lattice Field Theory | In this work, we establish a direct connection between generative diffusion
models (DMs) and stochastic quantization (SQ). The DM is realized by
approximating the reversal of a stochastic process dictated by the Langevin
equation, generating samples from a prior distribution to effectively mimic the
target distribution. Using numerical simulations, we demonstrate that the DM
can serve as a global sampler for generating quantum lattice field
configurations in two-dimensional $\phi^4$ theory. We demonstrate that DMs can
notably reduce autocorrelation times in the Markov chain, especially in the
critical region where standard Markov Chain Monte-Carlo (MCMC) algorithms
experience critical slowing down. The findings can potentially inspire further
advancements in lattice field theory simulations, in particular in cases where
it is expensive to generate large ensembles. | [
"Lingxiao Wang",
"Gert Aarts",
"Kai Zhou"
] | 2023-09-29 09:26:59 | http://arxiv.org/abs/2309.17082v1 | http://arxiv.org/pdf/2309.17082v1 | 2309.17082v1 |
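The Langevin update at the core of the stochastic-quantization view, as a runnable sketch; the two-dimensional $\phi^4$ action is not reproduced, so a Gaussian toy target stands in.

```python
import numpy as np

def langevin_sample(grad_U, x0, step=1e-3, n_steps=10_000, seed=0):
    # Unadjusted Langevin dynamics:
    #   x <- x - step * grad U(x) + sqrt(2 * step) * xi,  xi ~ N(0, I),
    # whose stationary distribution is proportional to exp(-U(x)).
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Toy usage: sample from exp(-x^2 / 2), i.e. U(x) = x^2 / 2, grad U(x) = x.
sample = langevin_sample(lambda x: x, x0=np.zeros(4))
```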
Assessment and treatment of visuospatial neglect using active learning with Gaussian processes regression | Visuospatial neglect is a disorder characterised by impaired awareness for
visual stimuli located in regions of space and frames of reference. It is often
associated with stroke. Patients can struggle with all aspects of daily living
and community participation. Assessment methods are limited and show several
shortcomings, considering that they are mainly performed on paper and do not
reflect the complexity of daily life. Similarly, treatment options are sparse
and often show only small improvements. We present an artificial intelligence
solution designed to accurately assess a patient's visuospatial neglect in a
three-dimensional setting. We implement an active learning method based on
Gaussian process regression to reduce the effort it takes a patient to undergo
an assessment. Furthermore, we describe how this model can be utilised in
patient oriented treatment and how this opens the way to gamification,
tele-rehabilitation and personalised healthcare, providing a promising avenue
for improving patient engagement and rehabilitation outcomes. To validate our
assessment module, we conducted clinical trials involving patients in a
real-world setting. We compared the results obtained using our AI-based
assessment with the widely used conventional visuospatial neglect tests
currently employed in clinical practice. The validation process serves to
establish the accuracy and reliability of our model, confirming its potential
as a valuable tool for diagnosing and monitoring visuospatial neglect. Our VR
application proves to be more sensitive, while intra-rater reliability remains
high. | [
"Ivan De Boi",
"Elissa Embrechts",
"Quirine Schatteman",
"Rudi Penne",
"Steven Truijen",
"Wim Saeys"
] | 2023-09-29 09:18:32 | http://arxiv.org/abs/2310.13701v1 | http://arxiv.org/pdf/2310.13701v1 | 2310.13701v1 |
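A minimal sketch of uncertainty-driven active assessment with GP regression: probe the stimulus location where predictive uncertainty is highest, reducing the number of trials a patient must complete. The paper's 3D stimulus encoding and stopping rule are assumptions not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_probe(X_seen, y_seen, candidates):
    # Fit a GP to responses observed so far, then probe where the
    # posterior predictive standard deviation is largest.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(X_seen, y_seen)
    _, std = gp.predict(candidates, return_std=True)
    return int(np.argmax(std))
```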
Benefits of mirror weight symmetry for 3D mesh segmentation in biomedical applications | 3D mesh segmentation is an important task with many biomedical applications.
The human body has bilateral symmetry and some variations in organ positions.
This leads us to expect a positive effect from rotation- and inversion-invariant
layers in convolutional neural networks that perform biomedical segmentation.
In this study, we show the impact of weight symmetry in neural networks that
perform 3D mesh segmentation. We analyze the problem of 3D mesh segmentation
for pathological vessel structures (aneurysms) and conventional anatomical
structures (endocardium and epicardium of ventricles). Local geometrical
features are encoded as sampling from the signed distance function, and the
neural network performs prediction for each mesh node. We show that weight
symmetry yields 1 to 3% additional accuracy and allows the number of trainable
parameters to be reduced by up to 8 times without performance loss if the
neural network has at least three convolutional layers. This also
works for very small training sets. | [
"Vladislav Dordiuk",
"Maksim Dzhigil",
"Konstantin Ushenin"
] | 2023-09-29 09:10:58 | http://arxiv.org/abs/2309.17076v1 | http://arxiv.org/pdf/2309.17076v1 | 2309.17076v1 |
On the Power of the Weisfeiler-Leman Test for Graph Motif Parameters | Seminal research in the field of graph neural networks (GNNs) has revealed a
direct correspondence between the expressive capabilities of GNNs and the
$k$-dimensional Weisfeiler-Leman ($k$WL) test, a widely-recognized method for
verifying graph isomorphism. This connection has reignited interest in
comprehending the specific graph properties effectively distinguishable by the
$k$WL test. A central focus of research in this field revolves around
determining the least dimensionality $k$, for which $k$WL can discern graphs
with different numbers of occurrences of a pattern graph $P$. We refer to such a
least $k$ as the WL-dimension of this pattern counting problem. This inquiry
traditionally delves into two distinct counting problems related to patterns:
subgraph counting and induced subgraph counting. Intriguingly, despite their
initial appearance as separate challenges with seemingly divergent approaches,
both of these problems are interconnected components of a more comprehensive
problem: "graph motif parameters". In this paper, we provide a precise
characterization of the WL-dimension of labeled graph motif parameters. As
specific instances of this result, we obtain characterizations of the
WL-dimension of the subgraph counting and induced subgraph counting problem for
every labeled pattern $P$. We additionally demonstrate that in cases where the
$k$WL test distinguishes between graphs with varying occurrences of a pattern
$P$, the exact number of occurrences of $P$ can be computed uniformly using
only local information of the last layer of a corresponding GNN. We finally
delve into the challenge of recognizing the WL-dimension of various graph
parameters. We give a polynomial time algorithm for determining the
WL-dimension of the subgraph counting problem for given pattern $P$, answering
an open question from previous work. | [
"Matthias Lanzinger",
"Pablo Barceló"
] | 2023-09-29 08:26:44 | http://arxiv.org/abs/2309.17053v2 | http://arxiv.org/pdf/2309.17053v2 | 2309.17053v2 |
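For reference, the 1-dimensional WL test (color refinement) that the $k$WL hierarchy generalizes; $k$WL itself operates on $k$-tuples of nodes and is not sketched here.

```python
def wl_colors(adj, max_rounds=10):
    # 1-WL color refinement on a graph given as an adjacency list
    # {node: [neighbors]}: relabel each node by its color plus the
    # multiset of its neighbors' colors. Graphs whose final color
    # histograms differ are distinguished by the 1WL test.
    colors = {v: 0 for v in adj}
    for _ in range(max_rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {}
        new = {v: palette.setdefault(sig, len(palette))
               for v, sig in sigs.items()}
        # Refinement is monotone: if the class count stops growing,
        # the partition is stable.
        if len(set(new.values())) == len(set(colors.values())):
            break
        colors = new
    return colors
```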
On Continuity of Robust and Accurate Classifiers | The reliability of a learning model is key to the successful deployment of
machine learning in various applications. Creating a robust model, particularly
one unaffected by adversarial attacks, requires a comprehensive understanding
of the adversarial examples phenomenon. However, it is difficult to describe
the phenomenon due to the complicated nature of the problems in machine
learning. It has been shown that adversarial training can improve the
robustness of the hypothesis. However, this improvement comes at the cost of
decreased performance on natural samples. Hence, it has been suggested that
robustness and accuracy of a hypothesis are at odds with each other. In this
paper, we put forth the alternative proposal that it is the continuity of a
hypothesis that is incompatible with its robustness and accuracy. In other
words, a continuous function cannot effectively learn the optimal robust
hypothesis. To this end, we will introduce a framework for a rigorous study of
harmonic and holomorphic hypotheses in learning-theoretic terms and provide
empirical evidence that continuous hypotheses do not perform as well as
discontinuous hypotheses in some common machine learning tasks. From a
practical point of view, our results suggest that a robust and accurate
learning rule would train different continuous hypotheses for different regions
of the domain. From a theoretical perspective, our analysis explains the
adversarial examples phenomenon as a conflict between the continuity of a
sequence of functions and its uniform convergence to a discontinuous function. | [
"Ramin Barati",
"Reza Safabakhsh",
"Mohammad Rahmati"
] | 2023-09-29 08:14:25 | http://arxiv.org/abs/2309.17048v1 | http://arxiv.org/pdf/2309.17048v1 | 2309.17048v1 |
Unveiling Document Structures with YOLOv5 Layout Detection | The current digital environment is characterized by the widespread presence
of data, particularly unstructured data, which poses many issues in sectors
including finance, healthcare, and education. Conventional techniques for data
extraction encounter difficulties in dealing with the inherent variety and
complexity of unstructured data, hence requiring the adoption of more efficient
methodologies. This research investigates the utilization of YOLOv5, a
cutting-edge computer vision model, for the purpose of rapidly identifying
document layouts and extracting unstructured data.
The present study establishes a conceptual framework for delineating the
notion of "objects" as they pertain to documents, incorporating various
elements such as paragraphs, tables, photos, and other constituent parts. The
main objective is to create an autonomous system that can effectively recognize
document layouts and extract unstructured data, hence improving the
effectiveness of data extraction.
In the conducted examination, the YOLOv5 model exhibits notable effectiveness
in the task of document layout identification, attaining a high accuracy rate
along with a precision value of 0.91, a recall value of 0.971, an F1-score of
0.939, and an area under the receiver operating characteristic curve (AUC-ROC)
of 0.975. The remarkable performance of this system optimizes the process of
extracting textual and tabular data from document images. Its prospective
applications are not limited to document analysis but can encompass
unstructured data from diverse sources, such as audio data.
This study lays the foundation for future investigations into the wider
applicability of YOLOv5 in managing various types of unstructured data,
offering potential for novel applications across multiple domains. | [
"Herman Sugiharto",
"Yorissa Silviana",
"Yani Siti Nurpazrin"
] | 2023-09-29 07:45:10 | http://arxiv.org/abs/2309.17033v1 | http://arxiv.org/pdf/2309.17033v1 | 2309.17033v1 |
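A minimal sketch of running YOLOv5 through its official torch.hub entry point; a layout model would be fine-tuned on document classes (paragraph, table, figure), and `page.png` is an illustrative input.

```python
import torch

# Load the small pretrained YOLOv5 model from the official repository.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a document page image (path, URL, PIL image or ndarray).
results = model("page.png")
boxes = results.xyxy[0]  # per detection: (x1, y1, x2, y2, confidence, class)
```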
Efficient Agnostic Learning with Average Smoothness | We study distribution-free nonparametric regression following a notion of
average smoothness initiated by Ashlagi et al. (2021), which measures the
"effective" smoothness of a function with respect to an arbitrary unknown
underlying distribution. While the recent work of Hanneke et al. (2023)
established tight uniform convergence bounds for average-smooth functions in
the realizable case and provided a computationally efficient realizable
learning algorithm, both of these results currently lack analogs in the general
agnostic (i.e. noisy) case.
In this work, we fully close these gaps. First, we provide a
distribution-free uniform convergence bound for average-smoothness classes in
the agnostic setting. Second, we match the derived sample complexity with a
computationally efficient agnostic learning algorithm. Our results, which are
stated in terms of the intrinsic geometry of the data and hold over any totally
bounded metric space, show that the guarantees recently obtained for realizable
learning of average-smooth functions transfer to the agnostic setting. At the
heart of our proof, we establish the uniform convergence rate of a function
class in terms of its bracketing entropy, which may be of independent interest. | [
"Steve Hanneke",
"Aryeh Kontorovich",
"Guy Kornowski"
] | 2023-09-29 07:01:28 | http://arxiv.org/abs/2309.17016v1 | http://arxiv.org/pdf/2309.17016v1 | 2309.17016v1 |
Benchmarking Cognitive Biases in Large Language Models as Evaluators | Large Language Models (LLMs) have recently been shown to be effective as
automatic evaluators with simple prompting and in-context learning. In this
work, we assemble 15 LLMs of four different size ranges and evaluate their
output responses by preference ranking from the other LLMs as evaluators, such
as System Star is better than System Square. We then evaluate the quality of
ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators
(CoBBLEr), a benchmark to measure six different cognitive biases in LLM
evaluation outputs, such as the Egocentric bias where a model prefers to rank
its own outputs highly in evaluation. We find that LLMs are biased text quality
evaluators, exhibiting strong indications of bias on our benchmark (on average,
40% of comparisons across all models), which calls into question their
robustness as evaluators. Furthermore, we examine the
correlation between human and machine preferences and calculate the average
Rank-Biased Overlap (RBO) score to be 49.6%, indicating that machine
preferences are misaligned with those of humans. According to our findings,
LLMs may not yet be suitable for automatic annotation aligned with human
preferences. Our project page is at: https://minnesotanlp.github.io/cobbler. | [
"Ryan Koo",
"Minhwa Lee",
"Vipul Raheja",
"Jong Inn Park",
"Zae Myung Kim",
"Dongyeop Kang"
] | 2023-09-29 06:53:10 | http://arxiv.org/abs/2309.17012v1 | http://arxiv.org/pdf/2309.17012v1 | 2309.17012v1 |
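A sketch of truncated Rank-Biased Overlap (Webber et al., 2010), the agreement measure the abstract reports; whether the benchmark uses the truncated or extrapolated variant is an assumption here.

```python
def rbo_truncated(s, t, p=0.9):
    # RBO = (1 - p) * sum_d p^(d-1) * |overlap of top-d prefixes| / d,
    # truncated at the shorter list's length.
    depth = min(len(s), len(t))
    seen_s, seen_t, score = set(), set(), 0.0
    for d in range(1, depth + 1):
        seen_s.add(s[d - 1])
        seen_t.add(t[d - 1])
        score += (p ** (d - 1)) * len(seen_s & seen_t) / d
    return (1 - p) * score

# Example: identical rankings give the maximum truncated score.
print(rbo_truncated(["a", "b", "c"], ["a", "b", "c"]))
```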
Feature Cognition Enhancement via Interaction-Aware Automated Transformation | Creating an effective representation space is crucial for mitigating the
curse of dimensionality, enhancing model generalization, addressing data
sparsity, and leveraging classical models more effectively. Recent advancements
in automated feature engineering (AutoFE) have made significant progress in
addressing various challenges associated with representation learning, such as
heavy reliance on intensive labor and empirical experience, lack of explicit
explainability, and inflexible feature space reconstruction embedded into
downstream tasks. However, these approaches are constrained by: 1)
generation of potentially unintelligible and illogical reconstructed feature
spaces, stemming from the neglect of expert-level cognitive processes; 2) lack
of systematic exploration, which subsequently results in slower model
convergence for identification of optimal feature space. To address these, we
introduce an interaction-aware reinforced generation perspective. We redefine
feature space reconstruction as a nested process of creating meaningful
features and controlling feature set size through selection. We develop a
hierarchical reinforcement learning structure with cascading Markov Decision
Processes to automate feature and operation selection, as well as feature
crossing. By incorporating statistical measures, we reward agents based on the
interaction strength between selected features, resulting in intelligent and
efficient exploration of the feature space that emulates human decision-making.
Extensive experiments are conducted to validate our proposed approach. | [
"Ehtesamul Azim",
"Dongjie Wang",
"Kunpeng Liu",
"Wei Zhang",
"Yanjie Fu"
] | 2023-09-29 06:48:16 | http://arxiv.org/abs/2309.17011v1 | http://arxiv.org/pdf/2309.17011v1 | 2309.17011v1 |
Deep Representation Learning for Prediction of Temporal Event Sets in the Continuous Time Domain | Temporal Point Processes (TPP) play an important role in predicting or
forecasting events. Although these problems have been studied extensively,
predicting multiple simultaneously occurring events can be challenging. For
instance, more often than not, a patient gets admitted to a hospital with
multiple conditions at a time. Similarly, people buy more than one stock, and
multiple news stories break at the same time. Moreover, these events do not occur
at discrete time intervals, and forecasting event sets in the continuous time
domain remains an open problem. Naive approaches for extending the existing TPP
models for solving this problem lead to dealing with an exponentially large
number of events or ignoring set dependencies among events. In this work, we
propose a scalable and efficient approach based on TPPs to solve this problem.
Our proposed approach incorporates contextual event embeddings, temporal
information, and domain features to model the temporal event sets. We
demonstrate the effectiveness of our approach through extensive experiments on
multiple datasets, showing that our model outperforms existing methods in terms
of prediction metrics and computational efficiency. To the best of our
knowledge, this is the first work that solves the problem of predicting event
set intensities in the continuous time domain by using TPPs. | [
"Parag Dutta",
"Kawin Mayilvaghanan",
"Pratyaksha Sinha",
"Ambedkar Dukkipati"
] | 2023-09-29 06:46:31 | http://arxiv.org/abs/2309.17009v1 | http://arxiv.org/pdf/2309.17009v1 | 2309.17009v1 |
Medical Foundation Models are Susceptible to Targeted Misinformation Attacks | Large language models (LLMs) have broad medical knowledge and can reason
about medical information across many domains, holding promising potential for
diverse medical applications in the near future. In this study, we demonstrate
a concerning vulnerability of LLMs in medicine. Through targeted manipulation
of just 1.1% of the model's weights, we can deliberately inject an incorrect
biomedical fact. The erroneous information is then propagated in the model's
output, whilst its performance on other biomedical tasks remains intact. We
validate our findings in a set of 1,038 incorrect biomedical facts. This
peculiar susceptibility raises serious security and trustworthiness concerns
for the application of LLMs in healthcare settings. It accentuates the need for
robust protective measures, thorough verification mechanisms, and stringent
management of access to these models, ensuring their reliable and safe use in
medical practice. | [
"Tianyu Han",
"Sven Nebelung",
"Firas Khader",
"Tianci Wang",
"Gustav Mueller-Franzes",
"Christiane Kuhl",
"Sebastian Försch",
"Jens Kleesiek",
"Christoph Haarburger",
"Keno K. Bressem",
"Jakob Nikolas Kather",
"Daniel Truhn"
] | 2023-09-29 06:44:36 | http://arxiv.org/abs/2309.17007v1 | http://arxiv.org/pdf/2309.17007v1 | 2309.17007v1 |