title (string, 9–208 chars) | abstract (string, 280–2.36k chars) | authors (sequence) | published (string, 19 chars) | url (string, 33 chars) | pdf_url (string, 33 chars) | arxiv_id (string, 12 chars) |
---|---|---|---|---|---|---|
Positivity-free Policy Learning with Observational Data | Policy learning utilizing observational data is pivotal across various
domains, with the objective of learning the optimal treatment assignment policy
while adhering to specific constraints such as fairness, budget, and
simplicity. This study introduces a novel positivity-free (stochastic) policy
learning framework designed to address the challenges posed by the
impracticality of the positivity assumption in real-world scenarios. This
framework leverages incremental propensity score policies to adjust propensity
score values instead of assigning fixed values to treatments. We characterize
these incremental propensity score policies and establish identification
conditions, employing semiparametric efficiency theory to propose efficient
estimators capable of achieving rapid convergence rates, even when integrated
with advanced machine learning algorithms. This paper provides a thorough
exploration of the theoretical guarantees associated with policy learning and
validates the proposed framework's finite-sample performance through
comprehensive numerical experiments, ensuring the identification of causal
effects from observational data is both robust and reliable. | [
"Pan Zhao",
"Antoine Chambaz",
"Julie Josse",
"Shu Yang"
] | 2023-10-10 19:47:27 | http://arxiv.org/abs/2310.06969v1 | http://arxiv.org/pdf/2310.06969v1 | 2310.06969v1 |
ObjectComposer: Consistent Generation of Multiple Objects Without Fine-tuning | Recent text-to-image generative models can generate high-fidelity images from
text prompts. However, these models struggle to consistently generate the same
objects in different contexts with the same appearance. Consistent object
generation is important to many downstream tasks like generating comic book
illustrations with consistent characters and setting. Numerous approaches
attempt to solve this problem by extending the vocabulary of diffusion models
through fine-tuning. However, even lightweight fine-tuning approaches can be
prohibitively expensive to run at scale and in real-time. We introduce a method
called ObjectComposer for generating compositions of multiple objects that
resemble user-specified images. Our approach is training-free, leveraging the
abilities of preexisting models. We build upon the recent BLIP-Diffusion model,
which can generate images of single objects specified by reference images.
ObjectComposer enables the consistent generation of compositions containing
multiple specific objects simultaneously, all without modifying the weights of
the underlying models. | [
"Alec Helbling",
"Evan Montoya",
"Duen Horng Chau"
] | 2023-10-10 19:46:58 | http://arxiv.org/abs/2310.06968v1 | http://arxiv.org/pdf/2310.06968v1 | 2310.06968v1 |
On the Interpretability of Part-Prototype Based Classifiers: A Human Centric Analysis | Part-prototype networks have recently become methods of interest as an
interpretable alternative to many of the current black-box image classifiers.
However, the interpretability of these methods from the perspective of human
users has not been sufficiently explored. In this work, we have devised a
framework for evaluating the interpretability of part-prototype-based models
from a human perspective. The proposed framework consists of three actionable
metrics and experiments. To demonstrate the usefulness of our framework, we
performed an extensive set of experiments using Amazon Mechanical Turk. They
not only show the capability of our framework in assessing the interpretability
of various part-prototype-based models, but they also are, to the best of our
knowledge, the most comprehensive work on evaluating such methods in a unified
framework. | [
"Omid Davoodi",
"Shayan Mohammadizadehsamakosh",
"Majid Komeili"
] | 2023-10-10 19:32:59 | http://arxiv.org/abs/2310.06966v1 | http://arxiv.org/pdf/2310.06966v1 | 2310.06966v1 |
Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks | Nowadays neural-network-based image- and video-quality metrics show better
performance compared to traditional methods. However, they also became more
vulnerable to adversarial attacks that increase metrics' scores without
improving visual quality. The existing benchmarks of quality metrics compare
their performance in terms of correlation with subjective quality and
calculation time. However, the adversarial robustness of image-quality metrics
is also an area worth researching. In this paper, we analyse modern metrics'
robustness to different adversarial attacks. We adopted adversarial attacks
from computer vision tasks and compared attacks' efficiency against 15
no-reference image/video-quality metrics. Some metrics showed high resistance
to adversarial attacks, which makes them safer to use in benchmarks than
vulnerable metrics. The benchmark accepts new metric submissions from
researchers who want to make their metrics more robust to attacks or to find
such metrics for their needs. Try our benchmark using pip install
robustness-benchmark. | [
"Anastasia Antsiferova",
"Khaled Abud",
"Aleksandr Gushchin",
"Sergey Lavrushkin",
"Ekaterina Shumitskaya",
"Maksim Velikanov",
"Dmitriy Vatolin"
] | 2023-10-10 19:21:41 | http://arxiv.org/abs/2310.06958v1 | http://arxiv.org/pdf/2310.06958v1 | 2310.06958v1 |
Diffusion Prior Regularized Iterative Reconstruction for Low-dose CT | Computed tomography (CT) involves a patient's exposure to ionizing radiation.
To reduce the radiation dose, we can either lower the X-ray photon count or
down-sample projection views. However, either of the ways often compromises
image quality. To address this challenge, here we introduce an iterative
reconstruction algorithm regularized by a diffusion prior. Drawing on the
exceptional imaging prowess of the denoising diffusion probabilistic model
(DDPM), we merge it with a reconstruction procedure that prioritizes data
fidelity. This fusion capitalizes on the merits of both techniques, delivering
exceptional reconstruction results in an unsupervised framework. To further
enhance the efficiency of the reconstruction process, we incorporate the
Nesterov momentum acceleration technique. This enhancement facilitates superior
diffusion sampling in fewer steps. As demonstrated in our experiments, our
method offers a potential pathway to high-definition CT image reconstruction
with minimized radiation. | [
"Wenjun Xia",
"Yongyi Shi",
"Chuang Niu",
"Wenxiang Cong",
"Ge Wang"
] | 2023-10-10 19:08:57 | http://arxiv.org/abs/2310.06949v1 | http://arxiv.org/pdf/2310.06949v1 | 2310.06949v1 |
A Variational Autoencoder Framework for Robust, Physics-Informed Cyberattack Recognition in Industrial Cyber-Physical Systems | Cybersecurity of Industrial Cyber-Physical Systems is drawing significant
concerns as data communication increasingly leverages wireless networks. Many
data-driven methods have been developed for detecting cyberattacks, but few
focus on distinguishing them from equipment faults. In this paper, we develop
a data-driven framework that can be used to detect, diagnose, and localize a
type of cyberattack called covert attacks on networked industrial control
systems. The framework has a hybrid design that combines a variational
autoencoder (VAE), a recurrent neural network (RNN), and a Deep Neural Network
(DNN). This data-driven framework considers the temporal behavior of a generic
physical system and extracts features from the time series of sensor
measurements that can be used for detecting covert attacks, distinguishing them
from equipment faults, and localizing the attack/fault. We evaluate the
performance of the proposed method through a realistic simulation study on a
networked power transmission system as a typical example of ICS. We compare the
performance of the proposed method with the traditional model-based method to
show its applicability and efficacy. | [
"Navid Aftabi",
"Dan Li",
"Paritosh Ramanan"
] | 2023-10-10 19:07:53 | http://arxiv.org/abs/2310.06948v1 | http://arxiv.org/pdf/2310.06948v1 | 2310.06948v1 |
LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing | In this paper, we explore the potential of Large Language Models (LLMs) to
reason about threats, generate information about tools, and automate cyber
campaigns. We begin with a manual exploration of LLMs in supporting specific
threat-related actions and decisions. We proceed by automating the decision
process in a cyber campaign. We present prompt engineering approaches for a
plan-act-report loop for one action of a threat campaign and a prompt
chaining design that directs the sequential decision process of a multi-action
campaign. We assess the extent of the LLM's cyber-specific knowledge w.r.t. the
short campaign we demonstrate and provide insights into prompt design for
eliciting actionable responses. We discuss the potential impact of LLMs on the
threat landscape and the ethical considerations of using LLMs for accelerating
threat actor capabilities. We report a promising, yet concerning, application
of generative AI to cyber threats. However, the LLM's capabilities to deal with
more complex networks, sophisticated vulnerabilities, and the sensitivity of
prompts are open questions. This research should spur deliberations over the
inevitable advancements in LLM-supported cyber adversarial landscape. | [
"Stephen Moskal",
"Sam Laney",
"Erik Hemberg",
"Una-May O'Reilly"
] | 2023-10-10 18:49:20 | http://arxiv.org/abs/2310.06936v1 | http://arxiv.org/pdf/2310.06936v1 | 2310.06936v1 |
Quantum Shadow Gradient Descent for Quantum Learning | This paper proposes a new procedure called quantum shadow gradient descent
(QSGD) that addresses these key challenges. Our method has the benefits of a
one-shot approach, in not requiring any sample duplication while having a
convergence rate comparable to the ideal update rule using exact gradient
computation. We propose a new technique for generating quantum shadow samples
(QSS), which generates quantum shadows as opposed to classical shadows used in
existing works. With classical shadows, the computations are typically
performed on classical computers and, hence, are prohibitive since the
dimension grows exponentially. Our approach resolves this issue by measurements
of quantum shadows. As the second main contribution, we study more general
non-product ansatz of the form $\exp\{i\sum_j \theta_j A_j\}$ that model
variational Hamiltonians. We prove that the gradient can be written in terms of
the gradient of single-parameter ansatzes that can be easily measured. Our
proof is based on the Suzuki-Trotter approximation; however, our expressions
are exact, unlike prior efforts that approximate non-product operators. As a
result, existing gradient measurement techniques can be applied to more general
VQAs followed by correction terms without any approximation penalty. We provide
theoretical proofs, convergence analysis and verify our results through
numerical experiments. | [
"Mohsen Heidari",
"Mobasshir A Naved",
"Wenbo Xie",
"Arjun Jacob Grama",
"Wojciech Szpankowski"
] | 2023-10-10 18:45:43 | http://arxiv.org/abs/2310.06935v1 | http://arxiv.org/pdf/2310.06935v1 | 2310.06935v1 |
Prosody Analysis of Audiobooks | Recent advances in text-to-speech have made it possible to generate
natural-sounding audio from text. However, audiobook narrations involve
dramatic vocalizations and intonations by the reader, with greater reliance on
emotions, dialogues, and descriptions in the narrative. Using our dataset of 93
aligned book-audiobook pairs, we present improved models for predicting prosody
properties (pitch, volume, and rate of speech) from narrative text using
language modeling. Our predicted prosody attributes correlate much better with
human audiobook readings than results from a state-of-the-art commercial TTS
system: our predicted pitch shows a higher correlation with human reading for
22 out of the 24 books, while our predicted volume attribute proves more
similar to human reading for 23 out of the 24 books. Finally, we present a
human evaluation study to quantify the extent that people prefer
prosody-enhanced audiobook readings over commercial text-to-speech systems. | [
"Charuta Pethe",
"Yunting Yin",
"Steven Skiena"
] | 2023-10-10 18:33:47 | http://arxiv.org/abs/2310.06930v1 | http://arxiv.org/pdf/2310.06930v1 | 2310.06930v1 |
Stochastic Super-resolution of Cosmological Simulations with Denoising Diffusion Models | In recent years, deep learning models have been successfully employed for
augmenting low-resolution cosmological simulations with small-scale
information, a task known as "super-resolution". So far, these cosmological
super-resolution models have relied on generative adversarial networks (GANs),
which can achieve highly realistic results, but suffer from various
shortcomings (e.g. low sample diversity). We introduce denoising diffusion
models as a powerful generative model for super-resolving cosmic large-scale
structure predictions (as a first proof-of-concept in two dimensions). To
obtain accurate results down to small scales, we develop a new "filter-boosted"
training approach that redistributes the importance of different scales in the
pixel-wise training objective. We demonstrate that our model not only produces
convincing super-resolution images and power spectra consistent at the percent
level, but is also able to reproduce the diversity of small-scale features
consistent with a given low-resolution simulation. This enables uncertainty
quantification for the generated small-scale features, which is critical for
the usefulness of such super-resolution models as a viable surrogate model for
cosmic structure formation. | [
"Andreas Schanz",
"Florian List",
"Oliver Hahn"
] | 2023-10-10 18:32:11 | http://arxiv.org/abs/2310.06929v1 | http://arxiv.org/pdf/2310.06929v1 | 2310.06929v1 |
PICProp: Physics-Informed Confidence Propagation for Uncertainty Quantification | Standard approaches for uncertainty quantification in deep learning and
physics-informed learning have persistent limitations. Indicatively, strong
assumptions regarding the data likelihood are required, the performance highly
depends on the selection of priors, and the posterior can be sampled only
approximately, which leads to poor approximations because of the associated
computational cost. This paper introduces and studies confidence interval (CI)
estimation for deterministic partial differential equations as a novel problem.
That is, to propagate confidence, in the form of CIs, from data locations to
the entire domain with probabilistic guarantees. We propose a method, termed
Physics-Informed Confidence Propagation (PICProp), based on bi-level
optimization to compute a valid CI without making heavy assumptions. We provide
a theorem regarding the validity of our method, and computational experiments,
where the focus is on physics-informed learning. | [
"Qianli Shen",
"Wai Hoh Tang",
"Zhun Deng",
"Apostolos Psaros",
"Kenji Kawaguchi"
] | 2023-10-10 18:24:50 | http://arxiv.org/abs/2310.06923v2 | http://arxiv.org/pdf/2310.06923v2 | 2310.06923v2 |
Improving Contrastive Learning of Sentence Embeddings with Focal-InfoNCE | The recent success of SimCSE has greatly advanced state-of-the-art sentence
representations. However, the original formulation of SimCSE does not fully
exploit the potential of hard negative samples in contrastive learning. This
study introduces an unsupervised contrastive learning framework that combines
SimCSE with hard negative mining, aiming to enhance the quality of sentence
embeddings. The proposed focal-InfoNCE function introduces self-paced
modulation terms in the contrastive objective, downweighting the loss
associated with easy negatives and encouraging the model focusing on hard
negatives. Experimentation on various STS benchmarks shows that our method
improves sentence embeddings in terms of Spearman's correlation and
representation alignment and uniformity. | [
"Pengyue Hou",
"Xingyu Li"
] | 2023-10-10 18:15:24 | http://arxiv.org/abs/2310.06918v2 | http://arxiv.org/pdf/2310.06918v2 | 2310.06918v2 |
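The Focal-InfoNCE entry above describes a contrastive objective with self-paced modulation that downweights easy negatives. The snippet below is a minimal PyTorch sketch of that general idea only, not the paper's exact loss; the weighting form, temperature, and focusing parameter are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's exact focal-InfoNCE loss): an
# InfoNCE-style objective over in-batch negatives where each negative is
# reweighted by a focal-style factor, so easy (low-similarity) negatives
# contribute less and hard negatives dominate the gradient.
import torch
import torch.nn.functional as F

def focal_infonce(anchors, positives, temperature=0.05, gamma=2.0):
    """anchors, positives: (batch, dim) sentence embeddings; other rows act as negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    sim = a @ p.T / temperature                      # (batch, batch) scaled cosine similarities
    pos = sim.diag()                                 # positive pairs sit on the diagonal
    diag_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(diag_mask, float("-inf"))  # exclude the positive from the negatives
    with torch.no_grad():                            # focal-style weights: emphasize hard negatives
        w = torch.softmax(neg, dim=-1) ** gamma
        w = w / w.sum(dim=-1, keepdim=True)
    neg_term = torch.logsumexp(neg + torch.log(w + 1e-12), dim=-1)
    return (-pos + torch.logaddexp(pos, neg_term)).mean()

loss = focal_infonce(torch.randn(8, 768), torch.randn(8, 768))
```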
Distributed Transfer Learning with 4th Gen Intel Xeon Processors | In this paper, we explore how transfer learning, coupled with Intel Xeon,
specifically 4th Gen Intel Xeon scalable processor, defies the conventional
belief that training is primarily GPU-dependent. We present a case study where
we achieved near state-of-the-art accuracy for image classification on a
publicly available Image Classification TensorFlow dataset using Intel Advanced
Matrix Extensions (AMX) and distributed training with Horovod. | [
"Lakshmi Arunachalam",
"Fahim Mohammad",
"Vrushabh H. Sanghavi"
] | 2023-10-10 18:12:46 | http://arxiv.org/abs/2310.06916v1 | http://arxiv.org/pdf/2310.06916v1 | 2310.06916v1 |
LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression | In long context scenarios, large language models (LLMs) face three main
challenges: higher computational/financial cost, longer latency, and inferior
performance. Some studies reveal that the performance of LLMs depends on both
the density and the position of the key information (question relevant) in the
input prompt. Inspired by these findings, we propose LongLLMLingua for prompt
compression towards improving LLMs' perception of the key information to
simultaneously address the three challenges. We conduct evaluation on a wide
range of long context scenarios including single-/multi-document QA, few-shot
learning, summarization, synthetic tasks, and code completion. The experimental
results show that LongLLMLingua compressed prompt can derive higher performance
with much less cost. The latency of the end-to-end system is also reduced. For
example, on NaturalQuestions benchmark, LongLLMLingua gains a performance boost
of up to 17.1% over the original prompt with ~4x fewer tokens as input to
GPT-3.5-Turbo. It can derive cost savings of \$28.5 and \$27.4 per 1,000
samples from the LongBench and ZeroScrolls benchmarks, respectively.
Additionally, when compressing prompts of ~10k tokens at a compression rate of
2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. Our
code is available at https://aka.ms/LLMLingua. | [
"Huiqiang Jiang",
"Qianhui Wu",
"Xufang Luo",
"Dongsheng Li",
"Chin-Yew Lin",
"Yuqing Yang",
"Lili Qiu"
] | 2023-10-10 17:59:58 | http://arxiv.org/abs/2310.06839v1 | http://arxiv.org/pdf/2310.06839v1 | 2310.06839v1 |
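The LongLLMLingua entry above centers on question-aware prompt compression. As a loose, runnable schematic of that idea only (not the LongLLMLingua algorithm, whose released code is linked in the abstract), the sketch below keeps the passages most relevant to the question within a token budget and places them first; the lexical-overlap scoring and budget are assumptions made for illustration.

```python
# Schematic only: a question-aware coarse compressor in the spirit of the idea
# above, NOT the LongLLMLingua method. It scores passages by lexical overlap
# with the question, keeps the highest-scoring ones within a token budget, and
# places them first (where key information tends to be used best).
def compress_prompt(passages, question, token_budget=1000):
    q_terms = set(question.lower().split())
    def score(passage):
        terms = passage.lower().split()
        return sum(t in q_terms for t in terms) / (len(terms) or 1)
    kept, used = [], 0
    for passage in sorted(passages, key=score, reverse=True):
        n_tokens = len(passage.split())          # crude whitespace token count
        if used + n_tokens <= token_budget:
            kept.append(passage)
            used += n_tokens
    return "\n\n".join(kept)

docs = ["Paris is the capital of France.", "Bananas are a yellow fruit."]
print(compress_prompt(docs, "What is the capital of France?", token_budget=8))
```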
Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency | Developing an educational test can be expensive and time-consuming, as each
item must be written by experts and then evaluated by collecting hundreds of
student responses. Moreover, many tests require multiple distinct sets of
questions administered throughout the school year to closely monitor students'
progress, known as parallel tests. In this study, we focus on tests of silent
sentence reading efficiency, used to assess students' reading ability over
time. To generate high-quality parallel tests, we propose to fine-tune large
language models (LLMs) to simulate how previous students would have responded
to unseen items. With these simulated responses, we can estimate each item's
difficulty and ambiguity. We first use GPT-4 to generate new test items
following a list of expert-developed rules and then apply a fine-tuned LLM to
filter the items based on criteria from psychological measurements. We also
propose an optimal-transport-inspired technique for generating parallel tests
and show the generated tests closely correspond to the original test's
difficulty and reliability based on crowdworker responses. Our evaluation of a
generated test with 234 students from grades 2 to 8 produces test scores highly
correlated (r=0.93) to those of a standard test form written by human experts
and evaluated across thousands of K-12 students. | [
"Eric Zelikman",
"Wanjing Anya Ma",
"Jasmine E. Tran",
"Diyi Yang",
"Jason D. Yeatman",
"Nick Haber"
] | 2023-10-10 17:59:51 | http://arxiv.org/abs/2310.06837v1 | http://arxiv.org/pdf/2310.06837v1 | 2310.06837v1 |
Scalable Semantic Non-Markovian Simulation Proxy for Reinforcement Learning | Recent advances in reinforcement learning (RL) have shown much promise across
a variety of applications. However, issues such as scalability, explainability,
and Markovian assumptions limit its applicability in certain domains. We
observe that many of these shortcomings emanate from the simulator as opposed
to the RL training algorithms themselves. As such, we propose a semantic proxy
for simulation based on a temporal extension to annotated logic. In comparison
with two high-fidelity simulators, we show up to three orders of magnitude
speed-up while preserving the quality of policy learned. In addition, we show
the ability to model and leverage non-Markovian dynamics and instantaneous
actions while providing an explainable trace describing the outcomes of the
agent actions. | [
"Kaustuv Mukherji",
"Devendra Parkar",
"Lahari Pokala",
"Dyuman Aditya",
"Paulo Shakarian",
"Clark Dorman"
] | 2023-10-10 17:59:26 | http://arxiv.org/abs/2310.06835v2 | http://arxiv.org/pdf/2310.06835v2 | 2310.06835v2 |
Teaching Language Models to Hallucinate Less with Synthetic Tasks | Large language models (LLMs) frequently hallucinate on abstractive
summarization tasks such as document-based question-answering, meeting
summarization, and clinical report generation, even though all necessary
information is included in context. However, optimizing LLMs to hallucinate
less on these tasks is challenging, as hallucination is hard to efficiently
evaluate at each optimization step. In this work, we show that reducing
hallucination on a synthetic task can also reduce hallucination on real-world
downstream tasks. Our method, SynTra, first designs a synthetic task where
hallucinations are easy to elicit and measure. It next optimizes the LLM's
system message via prefix-tuning on the synthetic task, and finally transfers
the system message to realistic, hard-to-optimize tasks. Across three realistic
abstractive summarization tasks, SynTra reduces hallucination for two
13B-parameter LLMs using only a synthetic retrieval task for supervision. We
also find that optimizing the system message rather than the model weights can
be critical; fine-tuning the entire model on the synthetic task can
counterintuitively increase hallucination. Overall, SynTra demonstrates that
the extra flexibility of working with synthetic data can help mitigate
undesired behaviors in practice. | [
"Erik Jones",
"Hamid Palangi",
"Clarisse Simões",
"Varun Chandrasekaran",
"Subhabrata Mukherjee",
"Arindam Mitra",
"Ahmed Awadallah",
"Ece Kamar"
] | 2023-10-10 17:57:00 | http://arxiv.org/abs/2310.06827v1 | http://arxiv.org/pdf/2310.06827v1 | 2310.06827v1 |
Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | [
"Albert Q. Jiang",
"Alexandre Sablayrolles",
"Arthur Mensch",
"Chris Bamford",
"Devendra Singh Chaplot",
"Diego de las Casas",
"Florian Bressand",
"Gianna Lengyel",
"Guillaume Lample",
"Lucile Saulnier",
"Lélio Renard Lavaud",
"Marie-Anne Lachaux",
"Pierre Stock",
"Teven Le Scao",
"Thibaut Lavril",
"Thomas Wang",
"Timothée Lacroix",
"William El Sayed"
] | 2023-10-10 17:54:58 | http://arxiv.org/abs/2310.06825v1 | http://arxiv.org/pdf/2310.06825v1 | 2310.06825v1 |
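For readers who want to try the Mistral 7B model above, a minimal Hugging Face `transformers` loading sketch follows. The hub id and generation settings are assumptions (verify the official repository name and license terms); this is not part of the paper itself.

```python
# Minimal usage sketch, assuming the weights are published on the Hugging Face
# Hub under the id below (verify the exact repository name before running).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"   # assumed hub id for the base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Grouped-query attention speeds up inference because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```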
NECO: NEural Collapse Based Out-of-distribution detection | Detecting out-of-distribution (OOD) data is a critical challenge in machine
learning due to model overconfidence, often without awareness of their
epistemological limits. We hypothesize that ``neural collapse'', a phenomenon
affecting in-distribution data for models trained beyond loss convergence, also
influences OOD data. To benefit from this interplay, we introduce NECO, a novel
post-hoc method for OOD detection, which leverages the geometric properties of
``neural collapse'' and of principal component spaces to identify OOD data. Our
extensive experiments demonstrate that NECO achieves state-of-the-art results
on both small and large-scale OOD detection tasks while exhibiting strong
generalization capabilities across different network architectures.
Furthermore, we provide a theoretical explanation for the effectiveness of our
method in OOD detection. We plan to release the code after the anonymity
period. | [
"Mouïn Ben Ammar",
"Nacim Belkhir",
"Sebastian Popescu",
"Antoine Manzanera",
"Gianni Franchi"
] | 2023-10-10 17:53:36 | http://arxiv.org/abs/2310.06823v2 | http://arxiv.org/pdf/2310.06823v2 | 2310.06823v2 |
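The NECO entry above leans on the geometry of collapsed in-distribution features. The snippet below is a rough illustration of that intuition only, not the NECO score itself: it fits a principal subspace on in-distribution penultimate features and scores test samples by how much of their norm falls inside it; the component count and random stand-in features are assumptions.

```python
# Rough illustration of the geometric intuition (not NECO's exact score):
# fit a principal subspace on in-distribution (ID) penultimate features and
# score a sample by how much of its norm lies inside that subspace; OOD
# samples tend to fall outside the collapsed ID structure.
import numpy as np
from sklearn.decomposition import PCA

def fit_id_subspace(id_features, n_components=50):
    return PCA(n_components=n_components).fit(id_features)

def subspace_score(pca, features):
    centered = features - pca.mean_
    proj = centered @ pca.components_.T           # coordinates in the ID subspace
    in_norm = np.linalg.norm(proj, axis=1)
    full_norm = np.linalg.norm(centered, axis=1) + 1e-12
    return in_norm / full_norm                    # near 1 -> ID-like, lower -> OOD-like

# Example with random stand-in features:
rng = np.random.default_rng(0)
id_feats, test_feats = rng.normal(size=(1000, 512)), rng.normal(size=(10, 512))
print(subspace_score(fit_id_subspace(id_feats), test_feats))
```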
Text Embeddings Reveal (Almost) As Much As Text | How much private information do text embeddings reveal about the original
text? We investigate the problem of embedding \textit{inversion},
reconstructing the full text represented in dense text embeddings. We frame the
problem as controlled generation: generating text that, when reembedded, is
close to a fixed point in latent space. We find that although a naïve model
conditioned on the embedding performs poorly, a multi-step method that
iteratively corrects and re-embeds text is able to recover $92\%$ of
$32\text{-token}$ text inputs exactly. We train our model to decode text
embeddings from two state-of-the-art embedding models, and also show that our
model can recover important personal information (full names) from a dataset of
clinical notes. Our code is available on Github:
\href{https://github.com/jxmorris12/vec2text}{github.com/jxmorris12/vec2text}. | [
"John X. Morris",
"Volodymyr Kuleshov",
"Vitaly Shmatikov",
"Alexander M. Rush"
] | 2023-10-10 17:39:03 | http://arxiv.org/abs/2310.06816v1 | http://arxiv.org/pdf/2310.06816v1 | 2310.06816v1 |
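The embedding-inversion entry above describes a multi-step procedure that repeatedly corrects a hypothesis and re-embeds it. The loop below is a schematic of that control flow only, with hypothetical `embed` and `correct` callables standing in for the paper's trained models; it is not the released vec2text API.

```python
# Schematic of the multi-step inversion loop described above. The embed() and
# correct() callables are hypothetical placeholders for an embedding model and
# a trained correction model; this is not the vec2text package interface.
import numpy as np

def iterative_invert(target_emb, embed, correct, num_steps=20):
    """target_emb: embedding vector to invert.
    embed(text) -> vector; correct(hypothesis, target_emb, hyp_emb) -> new text."""
    hypothesis = ""                         # start from an empty initial guess
    for _ in range(num_steps):
        hyp_emb = embed(hypothesis)
        cos = np.dot(hyp_emb, target_emb) / (
            np.linalg.norm(hyp_emb) * np.linalg.norm(target_emb) + 1e-12)
        if cos > 0.999:                     # stop once the re-embedding matches the target
            break
        hypothesis = correct(hypothesis, target_emb, hyp_emb)
    return hypothesis
```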
Advancing Transformer's Capabilities in Commonsense Reasoning | Recent advances in general purpose pre-trained language models have shown
great potential in commonsense reasoning. However, current works still perform
poorly on standard commonsense reasoning benchmarks including the Com2Sense
Dataset. We argue that this is due to a disconnect with current cutting-edge
machine learning methods. In this work, we aim to bridge the gap by introducing
current ML-based methods to improve general purpose pre-trained language models
in the task of commonsense reasoning. Specifically, we experiment with and
systematically evaluate methods including knowledge transfer, model ensemble,
and introducing an additional pairwise contrastive objective. Our best model
outperforms the strongest previous works by ~15\% absolute gains in Pairwise
Accuracy and ~8.7\% absolute gains in Standard Accuracy. | [
"Yu Zhou",
"Yunqiu Han",
"Hanyu Zhou",
"Yulun Wu"
] | 2023-10-10 17:21:03 | http://arxiv.org/abs/2310.06803v1 | http://arxiv.org/pdf/2310.06803v1 | 2310.06803v1 |
Inverse Factorized Q-Learning for Cooperative Multi-agent Imitation Learning | This paper concerns imitation learning (IL) (i.e., the problem of learning to
mimic expert behaviors from demonstrations) in cooperative multi-agent systems.
The learning problem under consideration poses several challenges,
characterized by high-dimensional state and action spaces and intricate
inter-agent dependencies. In a single-agent setting, IL can be performed
efficiently through an inverse soft-Q learning process given expert
demonstrations. However, extending this framework to a multi-agent context
introduces the need to simultaneously learn both local value functions to
capture local observations and individual actions, and a joint value function
for exploiting centralized learning. In this work, we introduce a novel
multi-agent IL algorithm designed to address these challenges. Our approach
enables centralized learning by leveraging mixing networks to aggregate
decentralized Q functions. A main advantage of this approach is that the
weights of the mixing networks can be trained using information derived from
global states. We further establish conditions for the mixing networks under
which the multi-agent objective function exhibits convexity within the Q
function space. We present extensive experiments conducted on some challenging
competitive and cooperative multi-agent game environments, including an
advanced version of the StarCraft multi-agent challenge (i.e., SMACv2), which
demonstrates the effectiveness of our proposed algorithm compared to existing
state-of-the-art multi-agent IL algorithms. | [
"The Viet Bui",
"Tien Mai",
"Thanh Hong Nguyen"
] | 2023-10-10 17:11:20 | http://arxiv.org/abs/2310.06801v1 | http://arxiv.org/pdf/2310.06801v1 | 2310.06801v1 |
Test & Evaluation Best Practices for Machine Learning-Enabled Systems | Machine learning (ML)-based software systems are rapidly gaining adoption
across various domains, making it increasingly essential to ensure they perform
as intended. This report presents best practices for the Test and Evaluation
(T&E) of ML-enabled software systems across its lifecycle. We categorize the
lifecycle of ML-enabled software systems into three stages: component,
integration and deployment, and post-deployment. At the component level, the
primary objective is to test and evaluate the ML model as a standalone
component. Next, in the integration and deployment stage, the goal is to
evaluate an integrated ML-enabled system consisting of both ML and non-ML
components. Finally, once the ML-enabled software system is deployed and
operationalized, the T&E objective is to ensure the system performs as
intended. Maintenance activities for ML-enabled software systems span the
lifecycle and involve maintaining various assets of ML-enabled software
systems.
Given its unique characteristics, the T&E of ML-enabled software systems is
challenging. While significant research has been reported on T&E at the
component level, limited work is reported on T&E in the remaining two stages.
Furthermore, in many cases, there is a lack of systematic T&E strategies
throughout the ML-enabled system's lifecycle. This leads practitioners to
resort to ad-hoc T&E practices, which can undermine user confidence in the
reliability of ML-enabled software systems. New systematic testing approaches,
adequacy measurements, and metrics are required to address the T&E challenges
across all stages of the ML-enabled system lifecycle. | [
"Jaganmohan Chandrasekaran",
"Tyler Cody",
"Nicola McCarthy",
"Erin Lanus",
"Laura Freeman"
] | 2023-10-10 17:11:14 | http://arxiv.org/abs/2310.06800v1 | http://arxiv.org/pdf/2310.06800v1 | 2310.06800v1 |
$f$-Policy Gradients: A General Framework for Goal Conditioned RL using $f$-Divergences | Goal-Conditioned Reinforcement Learning (RL) problems often have access to
sparse rewards where the agent receives a reward signal only when it has
achieved the goal, making policy optimization a difficult problem. Several
works augment this sparse reward with a learned dense reward function, but this
can lead to sub-optimal policies if the reward is misaligned. Moreover, recent
works have demonstrated that effective shaping rewards for a particular problem
can depend on the underlying learning algorithm. This paper introduces a novel
way to encourage exploration called $f$-Policy Gradients, or $f$-PG. $f$-PG
minimizes the f-divergence between the agent's state visitation distribution
and the goal, which we show can lead to an optimal policy. We derive gradients
for various f-divergences to optimize this objective. Our learning paradigm
provides dense learning signals for exploration in sparse reward settings. We
further introduce an entropy-regularized policy optimization objective, that we
call $state$-MaxEnt RL (or $s$-MaxEnt RL) as a special case of our objective.
We show that several metric-based shaping rewards like L2 can be used with
$s$-MaxEnt RL, providing a common ground to study such metric-based shaping
rewards with efficient exploration. We find that $f$-PG has better performance
compared to standard policy gradient methods on a challenging gridworld as well
as the Point Maze and FetchReach environments. More information is available on our website:
https://agarwalsiddhant10.github.io/projects/fpg.html. | [
"Siddhant Agarwal",
"Ishan Durugkar",
"Peter Stone",
"Amy Zhang"
] | 2023-10-10 17:07:05 | http://arxiv.org/abs/2310.06794v1 | http://arxiv.org/pdf/2310.06794v1 | 2310.06794v1 |
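The $f$-PG entry above frames exploration as shrinking a divergence between the agent's state-visitation distribution and the goal distribution. The toy below only illustrates that quantity on a discrete state space, using KL as one member of the f-divergence family; it is not the paper's policy-gradient derivation, and the gridworld and smoothing constant are assumptions.

```python
# Toy illustration of the objective only (not the paper's gradient derivation):
# measure the KL divergence between an empirical state-visitation histogram and
# a goal distribution on a discrete gridworld; driving this quantity down pushes
# the agent's visited states toward the goal.
import numpy as np

def kl_visitation_to_goal(visited_states, goal_dist, n_states, eps=1e-8):
    visit_counts = np.bincount(visited_states, minlength=n_states).astype(float)
    p = (visit_counts + eps) / (visit_counts.sum() + eps * n_states)  # visitation distribution
    q = goal_dist + eps
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Example: 16-state gridworld with the goal concentrated on state 15.
goal = np.zeros(16); goal[15] = 1.0
trajectory = np.array([0, 1, 2, 3, 7, 11, 15, 15])
print(kl_visitation_to_goal(trajectory, goal, n_states=16))
```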
Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning | We study matrix estimation problems arising in reinforcement learning (RL)
with low-rank structure. In low-rank bandits, the matrix to be recovered
specifies the expected arm rewards, and for low-rank Markov Decision Processes
(MDPs), it may for example characterize the transition kernel of the MDP. In
both cases, each entry of the matrix carries important information, and we seek
estimation methods with low entry-wise error. Importantly, these methods
further need to accommodate inherent correlations in the available data
(e.g. for MDPs, the data consists of system trajectories). We investigate the
performance of simple spectral-based matrix estimation approaches: we show that
they efficiently recover the singular subspaces of the matrix and exhibit
nearly-minimal entry-wise error. These new results on low-rank matrix
estimation make it possible to devise reinforcement learning algorithms that
fully exploit the underlying low-rank structure. We provide two examples of
such algorithms: a regret minimization algorithm for low-rank bandit problems,
and a best policy identification algorithm for reward-free RL in low-rank MDPs.
Both algorithms yield state-of-the-art performance guarantees. | [
"Stefan Stojanovic",
"Yassir Jedra",
"Alexandre Proutiere"
] | 2023-10-10 17:06:41 | http://arxiv.org/abs/2310.06793v1 | http://arxiv.org/pdf/2310.06793v1 | 2310.06793v1 |
Enhancing Predictive Capabilities in Data-Driven Dynamical Modeling with Automatic Differentiation: Koopman and Neural ODE Approaches | Data-driven approximations of the Koopman operator are promising for
predicting the time evolution of systems characterized by complex dynamics.
Among these methods, the approach known as extended dynamic mode decomposition
with dictionary learning (EDMD-DL) has garnered significant attention. Here we
present a modification of EDMD-DL that concurrently determines both the
dictionary of observables and the corresponding approximation of the Koopman
operator. This innovation leverages automatic differentiation to facilitate
gradient descent computations through the pseudoinverse. We also address the
performance of several alternative methodologies. We assess a 'pure' Koopman
approach, which involves the direct time-integration of a linear,
high-dimensional system governing the dynamics within the space of observables.
Additionally, we explore a modified approach where the system alternates
between spaces of states and observables at each time step -- this approach no
longer satisfies the linearity of the true Koopman operator representation. For
further comparisons, we also apply a state space approach (neural ODEs). We
consider systems encompassing two and three-dimensional ordinary differential
equation systems featuring steady, oscillatory, and chaotic attractors, as well
as partial differential equations exhibiting increasingly complex and intricate
behaviors. Our framework significantly outperforms EDMD-DL. Furthermore, the
state space approach offers superior performance compared to the 'pure' Koopman
approach where the entire time evolution occurs in the space of observables.
When the temporal evolution of the Koopman approach alternates between states
and observables at each time step, however, its predictions become comparable
to those of the state space approach. | [
"C. Ricardo Constante-Amores",
"Alec J. Linot",
"Michael D. Graham"
] | 2023-10-10 17:04:21 | http://arxiv.org/abs/2310.06790v1 | http://arxiv.org/pdf/2310.06790v1 | 2310.06790v1 |
OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text | There is growing evidence that pretraining on high quality, carefully
thought-out tokens such as code or mathematics plays an important role in
improving the reasoning abilities of large language models. For example,
Minerva, a PaLM model finetuned on billions of tokens of mathematical documents
from arXiv and the web, reported dramatically improved performance on problems
that require quantitative reasoning. However, because all known open source web
datasets employ preprocessing that does not faithfully preserve mathematical
notation, the benefits of large-scale training on quantitative web documents are
unavailable to the research community. We introduce OpenWebMath, an open
dataset inspired by these works containing 14.7B tokens of mathematical
webpages from Common Crawl. We describe in detail our method for extracting
text and LaTeX content and removing boilerplate from HTML documents, as well as
our methods for quality filtering and deduplication. Additionally, we run
small-scale experiments by training 1.4B parameter language models on
OpenWebMath, showing that models trained on 14.7B tokens of our dataset surpass
the performance of models trained on over 20x the amount of general language
data. We hope that our dataset, openly released on the Hugging Face Hub, will
help spur advances in the reasoning abilities of large language models. | [
"Keiran Paster",
"Marco Dos Santos",
"Zhangir Azerbayev",
"Jimmy Ba"
] | 2023-10-10 16:57:28 | http://arxiv.org/abs/2310.06786v1 | http://arxiv.org/pdf/2310.06786v1 | 2310.06786v1 |
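Since the OpenWebMath dataset above is released on the Hugging Face Hub, a minimal streaming-load sketch follows. The repository id and the `text` field name are assumptions based on the public release; check the Hub page for the exact names.

```python
# Minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub
# under the id below (streaming avoids downloading all 14.7B tokens at once).
from datasets import load_dataset

ds = load_dataset("open-web-math/open-web-math", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["text"][:200])   # the "text" field name is an assumption
    if i == 2:
        break
```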
A Supervised Embedding and Clustering Anomaly Detection method for classification of Mobile Network Faults | The paper introduces Supervised Embedding and Clustering Anomaly Detection
(SEMC-AD), a method designed to efficiently identify faulty alarm logs in a
mobile network and alleviate the challenges of manual monitoring caused by the
growing volume of alarm logs. SEMC-AD employs a supervised embedding approach
based on deep neural networks, utilizing historical alarm logs and their labels
to extract numerical representations for each log, effectively addressing the
issue of imbalanced classification due to a small proportion of anomalies in
the dataset without employing one-hot encoding. The robustness of the embedding
is evaluated by plotting the two most significant principal components of the
embedded alarm logs, revealing that anomalies form distinct clusters with
similar embeddings. Multivariate normal Gaussian clustering is then applied to
these components, identifying clusters with a high ratio of anomalies to normal
alarms (above 90%) and labeling them as the anomaly group. To classify new
alarm logs, we check if their embedded vectors' two most significant principal
components fall within the anomaly-labeled clusters. If so, the log is
classified as an anomaly. Performance evaluation demonstrates that SEMC-AD
outperforms conventional random forest and gradient boosting methods without
embedding. SEMC-AD achieves 99% anomaly detection, whereas random forest and
XGBoost only detect 86% and 81% of anomalies, respectively. While supervised
classification methods may excel in labeled datasets, the results demonstrate
that SEMC-AD is more efficient in classifying anomalies in datasets with
numerous categorical features, significantly enhancing anomaly detection,
reducing operator burden, and improving network maintenance. | [
"R. Mosayebi",
"H. Kia",
"A. Kianpour Raki"
] | 2023-10-10 16:54:25 | http://arxiv.org/abs/2310.06779v1 | http://arxiv.org/pdf/2310.06779v1 | 2310.06779v1 |
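The SEMC-AD entry above chains a supervised embedding, a 2-D principal-component projection, and Gaussian clustering with a 90% anomaly-ratio rule. The sketch below covers only the clustering and classification stages under those stated rules; the upstream deep embedding is assumed to exist, and the cluster count is an illustrative choice.

```python
# Schematic of the clustering/classification stages described above (the
# supervised deep embedding that produces these vectors is assumed upstream):
# project to 2 principal components, fit Gaussian clusters, and label clusters
# whose anomaly ratio exceeds 90% as the anomaly group.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_anomaly_clusters(embeddings, labels, n_clusters=10, threshold=0.9):
    """embeddings: (n, d) array; labels: (n,) array with 1 = anomaly, 0 = normal."""
    pca = PCA(n_components=2).fit(embeddings)
    components = pca.transform(embeddings)
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(components)
    assignments = gmm.predict(components)
    anomaly_clusters = {
        c for c in range(n_clusters)
        if (assignments == c).any() and labels[assignments == c].mean() > threshold
    }
    return pca, gmm, anomaly_clusters

def classify(pca, gmm, anomaly_clusters, new_embeddings):
    clusters = gmm.predict(pca.transform(new_embeddings))
    return np.isin(clusters, list(anomaly_clusters))   # True -> classified as anomaly
```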
Information Content Exploration | Sparse reward environments are known to be challenging for reinforcement
learning agents. In such environments, efficient and scalable exploration is
crucial. Exploration is a means by which an agent gains information about the
environment. We expand on this topic and propose a new intrinsic reward that
systematically quantifies exploratory behavior and promotes state coverage by
maximizing the information content of a trajectory taken by an agent. We
compare our method to alternative exploration-based intrinsic reward
techniques, namely Curiosity Driven Learning and Random Network Distillation.
We show that our information-theoretic reward induces efficient exploration and
outperforms these alternatives in various games, including Montezuma's Revenge, a known difficult
task for reinforcement learning. Finally, we propose an extension that
maximizes information content in a discretely compressed latent space which
boosts sample efficiency and generalizes to continuous state spaces. | [
"Jacob Chmura",
"Hasham Burhani",
"Xiao Qi Shi"
] | 2023-10-10 16:51:32 | http://arxiv.org/abs/2310.06777v1 | http://arxiv.org/pdf/2310.06777v1 | 2310.06777v1 |
Correlated Noise Provably Beats Independent Noise for Differentially Private Learning | Differentially private learning algorithms inject noise into the learning
process. While the most common private learning algorithm, DP-SGD, adds
independent Gaussian noise in each iteration, recent work on matrix
factorization mechanisms has shown empirically that introducing correlations in
the noise can greatly improve their utility. We characterize the asymptotic
learning utility for any choice of the correlation function, giving precise
analytical bounds for linear regression and as the solution to a convex program
for general convex functions. We show, using these bounds, how correlated noise
provably improves upon vanilla DP-SGD as a function of problem parameters such
as the effective dimension and condition number. Moreover, our analytical
expression for the near-optimal correlation function circumvents the cubic
complexity of the semi-definite program used to optimize the noise correlation
matrix in previous work. We validate our theory with experiments on private
deep learning. Our work matches or outperforms prior work while being efficient
both in terms of compute and memory. | [
"Christopher A. Choquette-Choo",
"Krishnamurthy Dvijotham",
"Krishna Pillutla",
"Arun Ganesh",
"Thomas Steinke",
"Abhradeep Thakurta"
] | 2023-10-10 16:48:18 | http://arxiv.org/abs/2310.06771v1 | http://arxiv.org/pdf/2310.06771v1 | 2310.06771v1 |
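The entry above contrasts independent per-step noise with temporally correlated noise from a matrix mechanism. The toy below only shows how such correlated noise can be injected by applying a lower-triangular matrix to i.i.d. Gaussian draws; the particular correlation matrix, step count, and stand-in gradients are arbitrary illustrations, not the paper's optimized mechanism or a privacy-accounted implementation.

```python
# Toy sketch of the mechanism family discussed above: instead of adding i.i.d.
# Gaussian noise at every step, draw i.i.d. noise z and inject temporally
# correlated noise C @ z, where C is a lower-triangular correlation matrix.
# The matrix below is arbitrary; the paper studies how to choose it well.
import numpy as np

rng = np.random.default_rng(0)
n_steps, dim, sigma = 100, 5, 1.0

z = rng.normal(scale=sigma, size=(n_steps, dim))      # independent per-step noise

# Example correlation: each step partially cancels the previous step's noise.
C = np.eye(n_steps) - 0.5 * np.eye(n_steps, k=-1)
correlated_noise = C @ z

def noisy_sgd(grads, noise, lr=0.1):
    w = np.zeros(dim)
    for g, n in zip(grads, noise):
        w -= lr * (g + n)
    return w

grads = [np.ones(dim) for _ in range(n_steps)]        # stand-in gradients
print(noisy_sgd(grads, z), noisy_sgd(grads, correlated_noise))
```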
FABind: Fast and Accurate Protein-Ligand Binding | Modeling the interaction between proteins and ligands and accurately
predicting their binding structures is a critical yet challenging task in drug
discovery. Recent advancements in deep learning have shown promise in
addressing this challenge, with sampling-based and regression-based methods
emerging as two prominent approaches. However, these methods have notable
limitations. Sampling-based methods often suffer from low efficiency due to the
need for generating multiple candidate structures for selection. On the other
hand, regression-based methods offer fast predictions but may experience
decreased accuracy. Additionally, the variation in protein sizes often requires
external modules for selecting suitable binding pockets, further impacting
efficiency. In this work, we propose $\mathbf{FABind}$, an end-to-end model
that combines pocket prediction and docking to achieve accurate and fast
protein-ligand binding. $\mathbf{FABind}$ incorporates a unique ligand-informed
pocket prediction module, which is also leveraged for docking pose estimation.
The model further enhances the docking process by incrementally integrating the
predicted pocket to optimize protein-ligand binding, reducing discrepancies
between training and inference. Through extensive experiments on benchmark
datasets, our proposed $\mathbf{FABind}$ demonstrates strong advantages in
terms of effectiveness and efficiency compared to existing methods. Our code is
available at $\href{https://github.com/QizhiPei/FABind}{Github}$. | [
"Qizhi Pei",
"Kaiyuan Gao",
"Lijun Wu",
"Jinhua Zhu",
"Yingce Xia",
"Shufang Xie",
"Tao Qin",
"Kun He",
"Tie-Yan Liu",
"Rui Yan"
] | 2023-10-10 16:39:47 | http://arxiv.org/abs/2310.06763v3 | http://arxiv.org/pdf/2310.06763v3 | 2310.06763v3 |
Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory | The behavior of neural networks still remains opaque, and a recently widely
noted phenomenon is that networks often achieve similar performance when
initialized with different random parameters. This phenomenon has attracted
significant attention in measuring the similarity between features learned by
distinct networks. However, feature similarity could be vague in describing the
same feature since equivalent features hardly exist. In this paper, we expand
the concept of equivalent feature and provide the definition of what we call
functionally equivalent features. These features produce equivalent output
under certain transformations. Using this definition, we aim to derive a more
intrinsic metric for the so-called feature complexity regarding the redundancy
of features learned by a neural network at each layer. We offer a formal
interpretation of our approach through the lens of category theory, a
well-developed area in mathematics. To quantify the feature complexity, we
further propose an efficient algorithm named Iterative Feature Merging. Our
experimental results validate our ideas and theories from various perspectives.
We empirically demonstrate that functional equivalence widely exists
among different features learned by the same neural network and that we can reduce
the number of parameters of the network without affecting the performance. The
IFM shows great potential as a data-agnostic model pruning method. We have also
drawn several interesting empirical findings regarding the defined feature
complexity. | [
"Yiting Chen",
"Zhanpeng Zhou",
"Junchi Yan"
] | 2023-10-10 16:27:12 | http://arxiv.org/abs/2310.06756v1 | http://arxiv.org/pdf/2310.06756v1 | 2310.06756v1 |
Causal Rule Learning: Enhancing the Understanding of Heterogeneous Treatment Effect via Weighted Causal Rules | Interpretability is a key concern in estimating heterogeneous treatment
effects using machine learning methods, especially for healthcare applications
where high-stake decisions are often made. Inspired by the Predictive,
Descriptive, Relevant framework of interpretability, we propose causal rule
learning which finds a refined set of causal rules characterizing potential
subgroups to estimate and enhance our understanding of heterogeneous treatment
effects. Causal rule learning involves three phases: rule discovery, rule
selection, and rule analysis. In the rule discovery phase, we utilize a causal
forest to generate a pool of causal rules with corresponding subgroup average
treatment effects. The selection phase then employs a D-learning method to
select a subset of these rules to deconstruct individual-level treatment
effects as a linear combination of the subgroup-level effects. This helps to
answer a question ignored by previous literature: what if an individual
simultaneously belongs to multiple groups with different average treatment
effects? The rule analysis phase outlines a detailed procedure to further
analyze each rule in the subset from multiple perspectives, revealing the most
promising rules for further validation. The rules themselves, their
corresponding subgroup treatment effects, and their weights in the linear
combination give us more insights into heterogeneous treatment effects.
Simulation and real-world data analysis demonstrate the superior performance of
causal rule learning on the interpretable estimation of heterogeneous treatment
effect when the ground truth is complex and the sample size is sufficient. | [
"Ying Wu",
"Hanzhong Liu",
"Kai Ren",
"Xiangyu Chang"
] | 2023-10-10 16:19:20 | http://arxiv.org/abs/2310.06746v1 | http://arxiv.org/pdf/2310.06746v1 | 2310.06746v1 |
Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks | Learning feature representations of geographical space is vital for any
machine learning model that integrates geolocated data, spanning application
domains such as remote sensing, ecology, or epidemiology. Recent work mostly
embeds coordinates using sine and cosine projections based on Double Fourier
Sphere (DFS) features -- these embeddings assume a rectangular data domain even
on global data, which can lead to artifacts, especially at the poles. At the
same time, relatively little attention has been paid to the exact design of the
neural network architectures these functional embeddings are combined with.
This work proposes a novel location encoder for globally distributed geographic
data that combines spherical harmonic basis functions, natively defined on
spherical surfaces, with sinusoidal representation networks (SirenNets) that
can be interpreted as learned Double Fourier Sphere embedding. We
systematically evaluate the cross-product of positional embeddings and neural
network architectures across various classification and regression benchmarks
and synthetic evaluation datasets. In contrast to previous approaches that
require the combination of both positional encoding and neural networks to
learn meaningful representations, we show that both spherical harmonics and
sinusoidal representation networks are competitive on their own but set
state-of-the-art performances across tasks when combined. We provide source
code at www.github.com/marccoru/locationencoder | [
"Marc Rußwurm",
"Konstantin Klemmer",
"Esther Rolf",
"Robin Zbinden",
"Devis Tuia"
] | 2023-10-10 16:12:17 | http://arxiv.org/abs/2310.06743v1 | http://arxiv.org/pdf/2310.06743v1 | 2310.06743v1 |
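The location-encoding entry above combines spherical-harmonic positional embeddings with sinusoidal networks. The sketch below covers only the embedding half: expanding a longitude/latitude pair in spherical harmonic basis functions via SciPy. The maximum degree, angle conventions, and normalization are simplifying assumptions, and the downstream SirenNet is not shown.

```python
# Minimal sketch of the positional-embedding half of the approach: expand a
# (longitude, latitude) location in spherical harmonic basis functions up to a
# maximum degree, producing a feature vector that a downstream network could
# consume. Conventions and normalization are simplified for illustration.
import numpy as np
from scipy.special import sph_harm

def spherical_harmonic_features(lon_deg, lat_deg, max_degree=4):
    theta = np.deg2rad(lon_deg) % (2 * np.pi)      # azimuthal angle
    phi = np.deg2rad(90.0 - lat_deg)               # polar angle (colatitude)
    feats = []
    for l in range(max_degree + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, theta, phi)         # complex-valued Y_l^m
            feats.extend([y.real, y.imag])
    return np.array(feats)

print(spherical_harmonic_features(6.57, 46.52).shape)   # (2 * (max_degree + 1)**2,)
```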
Multi-domain improves out-of-distribution and data-limited scenarios for medical image analysis | Current machine learning methods for medical image analysis primarily focus
on developing models tailored for their specific tasks, utilizing data within
their target domain. These specialized models tend to be data-hungry and often
exhibit limitations in generalizing to out-of-distribution samples. Recently,
foundation models have been proposed, which combine data from various domains
and demonstrate excellent generalization capabilities. Building upon this, this
work introduces the incorporation of diverse medical image domains, including
different imaging modalities like X-ray, MRI, CT, and ultrasound images, as
well as various viewpoints such as axial, coronal, and sagittal views. We refer
to this approach as multi-domain model and compare its performance to that of
specialized models. Our findings underscore the superior generalization
capabilities of multi-domain models, particularly in scenarios characterized by
limited data availability and out-of-distribution samples, frequently encountered in
healthcare applications. The integration of diverse data allows multi-domain
models to utilize shared information across domains, enhancing the overall
outcomes significantly. To illustrate, for organ recognition, a multi-domain
model can enhance accuracy by up to 10% compared to conventional specialized
models. | [
"Ece Ozkan",
"Xavier Boix"
] | 2023-10-10 16:07:23 | http://arxiv.org/abs/2310.06737v1 | http://arxiv.org/pdf/2310.06737v1 | 2310.06737v1 |
Growing ecosystem of deep learning methods for modeling protein–protein interactions | Numerous cellular functions rely on protein–protein
interactions. Efforts to comprehensively characterize them remain challenged,
however, by the diversity of molecular recognition mechanisms employed within
the proteome. Deep learning has emerged as a promising approach for tackling
this problem by exploiting both experimental data and basic biophysical
knowledge about protein interactions. Here, we review the growing ecosystem of
deep learning methods for modeling protein interactions, highlighting the
diversity of these biophysically-informed models and their respective
trade-offs. We discuss recent successes in using representation learning to
capture complex features pertinent to predicting protein interactions and
interaction sites, geometric deep learning to reason over protein structures
and predict complex structures, and generative modeling to design de novo
protein assemblies. We also outline some of the outstanding challenges and
promising new directions. Opportunities abound to discover novel interactions,
elucidate their physical mechanisms, and engineer binders to modulate their
functions using deep learning and, ultimately, unravel how protein interactions
orchestrate complex cellular behaviors. | [
"Julia R. Rogers",
"Gergő Nikolényi",
"Mohammed AlQuraishi"
] | 2023-10-10 15:53:27 | http://arxiv.org/abs/2310.06725v1 | http://arxiv.org/pdf/2310.06725v1 | 2310.06725v1 |
Improving Pseudo-Time Stepping Convergence for CFD Simulations With Neural Networks | Computational fluid dynamics (CFD) simulations of viscous fluids described by
the Navier-Stokes equations are considered. Depending on the Reynolds number of
the flow, the Navier-Stokes equations may exhibit a highly nonlinear behavior.
The system of nonlinear equations resulting from the discretization of the
Navier-Stokes equations can be solved using nonlinear iteration methods, such
as Newton's method. However, fast quadratic convergence is typically only
obtained in a local neighborhood of the solution, and for many configurations,
the classical Newton iteration does not converge at all. In such cases,
so-called globalization techniques may help to improve convergence.
In this paper, pseudo-transient continuation is employed in order to improve
nonlinear convergence. The classical algorithm is enhanced by a neural network
model that is trained to predict a local pseudo-time step. Generalization of
the novel approach is facilitated by predicting the local pseudo-time step
separately on each element using only local information on a patch of adjacent
elements as input. Numerical results for standard benchmark problems, including
flow through a backward facing step geometry and Couette flow, show the
performance of the machine learning-enhanced globalization approach; as the
software for the simulations, the CFD module of COMSOL Multiphysics is
employed. | [
"Anouk Zandbergen",
"Tycho van Noorden",
"Alexander Heinlein"
] | 2023-10-10 15:45:19 | http://arxiv.org/abs/2310.06717v1 | http://arxiv.org/pdf/2310.06717v1 | 2310.06717v1 |
S4Sleep: Elucidating the design space of deep-learning-based sleep stage classification models | Scoring sleep stages in polysomnography recordings is a time-consuming task
plagued by significant inter-rater variability. Therefore, it stands to benefit
from the application of machine learning algorithms. While many algorithms have
been proposed for this purpose, certain critical architectural decisions have
not received systematic exploration. In this study, we meticulously investigate
these design choices within the broad category of encoder-predictor
architectures. We identify robust architectures applicable to both time series
and spectrogram input representations. These architectures incorporate
structured state space models as integral components, leading to statistically
significant advancements in performance on the extensive SHHS dataset. These
improvements are assessed through both statistical and systematic error
estimations. We anticipate that the architectural insights gained from this
study will not only prove valuable for future research in sleep staging but
also hold relevance for other time series annotation tasks. | [
"Tiezhi Wang",
"Nils Strodthoff"
] | 2023-10-10 15:42:14 | http://arxiv.org/abs/2310.06715v1 | http://arxiv.org/pdf/2310.06715v1 | 2310.06715v1 |
Exploring Memorization in Fine-tuned Language Models | LLMs have shown great capabilities in various tasks but also exhibited
memorization of training data, thus raising tremendous privacy and copyright
concerns. While prior work has studied memorization during pre-training, the
exploration of memorization during fine-tuning is rather limited. Compared with
pre-training, fine-tuning typically involves sensitive data and diverse
objectives, and thus may bring unique memorization behaviors and distinct privacy
risks. In this work, we conduct the first comprehensive analysis to explore
LMs' memorization during fine-tuning across tasks. Our studies with
open-sourced and our own fine-tuned LMs across various tasks indicate that
fine-tuned memorization presents a strong disparity among tasks. We provide an
understanding of this task disparity via sparse coding theory and unveil a
strong correlation between memorization and attention score distribution. This
analysis of memorization behavior suggests multi-task fine-tuning as a
potential strategy to mitigate fine-tuned memorization. | [
"Shenglai Zeng",
"Yaxin Li",
"Jie Ren",
"Yiding Liu",
"Han Xu",
"Pengfei He",
"Yue Xing",
"Shuaiqiang Wang",
"Jiliang Tang",
"Dawei Yin"
] | 2023-10-10 15:41:26 | http://arxiv.org/abs/2310.06714v1 | http://arxiv.org/pdf/2310.06714v1 | 2310.06714v1 |
Interpretable Traffic Event Analysis with Bayesian Networks | Although existing machine learning-based methods for traffic accident
analysis can provide good quality results to downstream tasks, they lack
interpretability, which is crucial for this critical problem. This paper
proposes an interpretable framework based on Bayesian Networks for traffic
accident prediction. To ease interpretability, we design a
dataset construction pipeline to feed the traffic data into the framework while
retaining the essential traffic data information. With a concrete case study,
our framework can derive a Bayesian Network from a dataset based on the causal
relationships between weather and traffic events across the United States.
Consequently, our framework enables the prediction of traffic accidents with
competitive accuracy while examining how the probability of these events
changes under different conditions, thus illustrating transparent relationships
between traffic and weather events. Additionally, the visualization of the
network simplifies the analysis of relationships between different variables,
revealing the primary causes of traffic accidents and ultimately providing a
valuable reference for reducing traffic accidents. | [
"Tong Yuan",
"Jian Yang",
"Zeyi Wen"
] | 2023-10-10 15:38:30 | http://arxiv.org/abs/2310.06713v1 | http://arxiv.org/pdf/2310.06713v1 | 2310.06713v1 |
Zero-Shot Transfer in Imitation Learning | We present an algorithm that learns to imitate expert behavior and can
transfer to previously unseen domains without retraining. Such an algorithm is
extremely relevant in real-world applications such as robotic learning because
1) reward functions are difficult to design, 2) learned policies from one
domain are difficult to deploy in another domain and 3) learning directly in
the real world is either expensive or unfeasible due to security concerns. To
overcome these constraints, we combine recent advances in Deep RL by using an
AnnealedVAE to learn a disentangled state representation and imitate an expert
by learning a single Q-function which avoids adversarial training. We
demonstrate the effectiveness of our method in 3 environments ranging in
difficulty and the type of transfer knowledge required. | [
"Alvaro Cauderan",
"Gauthier Boeshertz",
"Florian Schwarb",
"Calvin Zhang"
] | 2023-10-10 15:36:58 | http://arxiv.org/abs/2310.06710v1 | http://arxiv.org/pdf/2310.06710v1 | 2310.06710v1 |
Temporally Aligning Long Audio Interviews with Questions: A Case Study in Multimodal Data Integration | The problem of audio-to-text alignment has seen a significant amount of
research using complete supervision during training. However, this is typically
not in the context of long audio recordings wherein the text being queried does
not appear verbatim within the audio file. This work is a collaboration with a
non-governmental organization called CARE India that collects long audio health
surveys from young mothers residing in rural parts of Bihar, India. Given a
question drawn from a questionnaire that is used to guide these surveys, we aim
to locate where the question is asked within a long audio recording. This is of
great value to African and Asian organizations that would otherwise have to
painstakingly go through long and noisy audio recordings to locate questions
(and answers) of interest. Our proposed framework, INDENT, uses a
cross-attention-based model and prior information on the temporal ordering of
sentences to learn speech embeddings that capture the semantics of the
underlying spoken text. These learnt embeddings are used to retrieve the
corresponding audio segment based on text queries at inference time. We
empirically demonstrate the significant effectiveness (improvement in R-avg of
about 3%) of our model over those obtained using text-based heuristics. We also
show how noisy ASR, generated using state-of-the-art ASR models for Indian
languages, yields better results when used in place of speech. INDENT, trained
only on Hindi data, is able to cater to all languages supported by the
(semantically) shared text space. We illustrate this empirically on 11 Indic
languages. | [
"Piyush Singh Pasi",
"Karthikeya Battepati",
"Preethi Jyothi",
"Ganesh Ramakrishnan",
"Tanmay Mahapatra",
"Manoj Singh"
] | 2023-10-10 15:25:33 | http://arxiv.org/abs/2310.06702v1 | http://arxiv.org/pdf/2310.06702v1 | 2310.06702v1 |
Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning | The popularity of LLaMA (Touvron et al., 2023a;b) and other recently emerged
moderate-sized large language models (LLMs) highlights the potential of
building smaller yet powerful LLMs. Regardless, the cost of training such
models from scratch on trillions of tokens remains high. In this work, we study
structured pruning as an effective means to develop smaller LLMs from
pre-trained, larger models. Our approach employs two key techniques: (1)
targeted structured pruning, which prunes a larger model to a specified target
shape by removing layers, heads, and intermediate and hidden dimensions in an
end-to-end manner, and (2) dynamic batch loading, which dynamically updates the
composition of sampled data in each training batch based on varying losses
across different domains. We demonstrate the efficacy of our approach by
presenting the Sheared-LLaMA series, pruning the LLaMA2-7B model down to 1.3B
and 2.7B parameters. Sheared-LLaMA models outperform state-of-the-art
open-source models of equivalent sizes, such as Pythia, INCITE, and OpenLLaMA
models, on a wide range of downstream and instruction tuning evaluations, while
requiring only 3% of compute compared to training such models from scratch.
This work provides compelling evidence that leveraging existing LLMs with
structured pruning is a far more cost-effective approach for building smaller
LLMs. | [
"Mengzhou Xia",
"Tianyu Gao",
"Zhiyuan Zeng",
"Danqi Chen"
] | 2023-10-10 15:13:30 | http://arxiv.org/abs/2310.06694v1 | http://arxiv.org/pdf/2310.06694v1 | 2310.06694v1 |
Generalized Wick Decompositions | We review the cumulant decomposition (a way of decomposing the expectation of
a product of random variables (e.g. $\mathbb{E}[XYZ]$) into a sum of terms
corresponding to partitions of these variables) and the Wick decomposition (a
way of decomposing a product of (not necessarily random) variables into a sum
of terms corresponding to subsets of the variables). Then we generalize each
one to a new decomposition where the product function is generalized to an
arbitrary function. | [
"Chris MacLeod",
"Evgenia Nitishinskaya",
"Buck Shlegeris"
] | 2023-10-10 15:00:27 | http://arxiv.org/abs/2310.06686v1 | http://arxiv.org/pdf/2310.06686v1 | 2310.06686v1 |
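For concreteness, the three-variable case of the cumulant decomposition mentioned in the record above expands $\mathbb{E}[XYZ]$ over the five partitions of $\{X, Y, Z\}$; this is the standard moment-cumulant identity, with $\kappa$ denoting joint cumulants:

$$\mathbb{E}[XYZ] = \kappa(X,Y,Z) + \kappa(X)\,\kappa(Y,Z) + \kappa(Y)\,\kappa(X,Z) + \kappa(Z)\,\kappa(X,Y) + \kappa(X)\,\kappa(Y)\,\kappa(Z).$$

Each term corresponds to one partition of the set $\{X, Y, Z\}$, which is the structure the abstract refers to.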
Learning Multiplex Embeddings on Text-rich Networks with One Text Encoder | In real-world scenarios, texts in a network are often linked by multiple
semantic relations (e.g., papers in an academic network are referenced by other
publications, written by the same author, or published in the same venue),
where text documents and their relations form a multiplex text-rich network.
Mainstream text representation learning methods use pretrained language models
(PLMs) to generate one embedding for each text unit, expecting that all types
of relations between texts can be captured by these single-view embeddings.
However, this presumption does not hold, particularly in multiplex text-rich
networks. Along another line of work, multiplex graph neural networks (GNNs)
directly initialize node attributes as a feature vector for node representation
learning, but they cannot fully capture the semantics of the nodes' associated
texts. To bridge these gaps, we propose METERN, a new framework for learning
Multiplex Embeddings on TExt-Rich Networks. In contrast to existing methods,
METERN uses one text encoder to model the shared knowledge across relations and
leverages a small number of parameters per relation to derive relation-specific
representations. This allows the encoder to effectively capture the multiplex
structures in the network while also preserving parameter efficiency. We
conduct experiments on nine downstream tasks in five networks from both
academic and e-commerce domains, where METERN outperforms baselines
significantly and consistently. The code is available at
https://github.com/PeterGriffinJin/METERN-submit. | [
"Bowen Jin",
"Wentao Zhang",
"Yu Zhang",
"Yu Meng",
"Han Zhao",
"Jiawei Han"
] | 2023-10-10 14:59:22 | http://arxiv.org/abs/2310.06684v1 | http://arxiv.org/pdf/2310.06684v1 | 2310.06684v1 |
Enhanced Graph Neural Networks with Ego-Centric Spectral Subgraph Embeddings Augmentation | Graph Neural Networks (GNNs) have shown remarkable merit in performing
various learning-based tasks in complex networks. The superior performance of
GNNs often correlates with the availability and quality of node-level features
in the input networks. However, for many network applications, such node-level
information may be missing or unreliable, thereby limiting the applicability
and efficacy of GNNs. To address this limitation, we present a novel approach
denoted as Ego-centric Spectral subGraph Embedding Augmentation (ESGEA), which
aims to enhance and design node features, particularly in scenarios where
information is lacking. Our method leverages the topological structure of the
local subgraph to create topology-aware node features. The subgraph features
are generated using an efficient spectral graph embedding technique, and they
serve as node features that capture the local topological organization of the
network. The explicit node features, if present, are then enhanced with the
subgraph embeddings in order to improve the overall performance. ESGEA is
compatible with any GNN-based architecture and is effective even in the absence
of node features. We evaluate the proposed method in a social network graph
classification task where node attributes are unavailable, as well as in a node
classification task where node features are corrupted or even absent. The
evaluation results on seven datasets and eight baseline models indicate up to a
10% improvement in AUC and a 7% improvement in accuracy for graph and node
classification tasks, respectively. | [
"Anwar Said",
"Mudassir Shabbir",
"Tyler Derr",
"Waseem Abbas",
"Xenofon Koutsoukos"
] | 2023-10-10 14:57:29 | http://arxiv.org/abs/2310.12169v1 | http://arxiv.org/pdf/2310.12169v1 | 2310.12169v1 |
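As a rough sketch of the general idea of ego-centric spectral node features described above (not the authors' exact pipeline; the radius, embedding size, and padding below are illustrative assumptions), one can describe each node by the low end of the normalized-Laplacian spectrum of its local ego-graph and append this vector to any available node attributes:

import numpy as np
import networkx as nx

def ego_spectral_features(G, k=8, radius=2):
    # Per-node features from the spectrum of the local ego-subgraph Laplacian.
    feats = {}
    for v in G.nodes():
        sub = nx.ego_graph(G, v, radius=radius)
        L = nx.normalized_laplacian_matrix(sub).toarray()
        eig = np.sort(np.linalg.eigvalsh(L))[:k]            # smallest k eigenvalues
        feats[v] = np.pad(eig, (0, max(0, k - len(eig))))    # pad small neighborhoods
    return feats

G = nx.karate_club_graph()
X = ego_spectral_features(G)
print(X[0])  # topology-aware feature vector for node 0

These vectors can then be used directly as node features, or concatenated with existing (possibly corrupted or missing) attributes before being passed to any GNN.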
On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions | The use of machine learning for material property prediction and discovery
has traditionally centered on graph neural networks that incorporate the
geometric configuration of all atoms. However, in practice not all this
information may be readily available, e.g., when evaluating the potentially
unknown binding of adsorbates to a catalyst. In this paper, we investigate
whether it is possible to predict a system's relaxed energy in the OC20 dataset
while ignoring the relative position of the adsorbate with respect to the
electro-catalyst. We consider SchNet, DimeNet++ and FAENet as base
architectures and measure the impact of four modifications on model
performance: removing edges in the input graph, pooling independent
representations, not sharing the backbone weights and using an attention
mechanism to propagate non-geometric relative information. We find that while
removing binding site information impairs accuracy as expected, modified models
are able to predict relaxed energies with remarkably decent MAE. Our work
suggests future research directions in accelerated materials discovery where
information on reactant configurations can be reduced or altogether omitted. | [
"Alvaro Carbonero",
"Alexandre Duval",
"Victor Schmidt",
"Santiago Miret",
"Alex Hernandez-Garcia",
"Yoshua Bengio",
"David Rolnick"
] | 2023-10-10 14:57:04 | http://arxiv.org/abs/2310.06682v1 | http://arxiv.org/pdf/2310.06682v1 | 2310.06682v1 |
Machine Learning Quantum Systems with Magnetic p-bits | The slowing down of Moore's Law has led to a crisis as the computing
workloads of Artificial Intelligence (AI) algorithms continue skyrocketing.
There is an urgent need for scalable and energy-efficient hardware catering to
the unique requirements of AI algorithms and applications. In this environment,
probabilistic computing with p-bits emerged as a scalable, domain-specific, and
energy-efficient computing paradigm, particularly useful for probabilistic
applications and algorithms. In particular, spintronic devices such as
stochastic magnetic tunnel junctions (sMTJ) show great promise in designing
integrated p-computers. Here, we examine how a scalable probabilistic computer
with such magnetic p-bits can be useful for an emerging field combining machine
learning and quantum physics. | [
"Shuvro Chowdhury",
"Kerem Y. Camsari"
] | 2023-10-10 14:54:57 | http://arxiv.org/abs/2310.06679v1 | http://arxiv.org/pdf/2310.06679v1 | 2310.06679v1 |
Domain Generalization by Rejecting Extreme Augmentations | Data augmentation is one of the most effective techniques for regularizing
deep learning models and improving their recognition performance in a variety
of tasks and domains. However, this holds for standard in-domain settings, in
which the training and test data follow the same distribution. For the
out-of-domain case, where the test data follow a different and unknown
distribution, the best recipe for data augmentation is unclear. In this paper,
we show that for out-of-domain and domain generalization settings, data
augmentation can provide a conspicuous and robust improvement in performance.
To do that, we propose a simple training procedure: (i) use uniform sampling on
standard data augmentation transformations; (ii) increase the strength of the
transformations to account for the higher data variance expected when working
out-of-domain, and (iii) devise a new reward function to reject extreme
transformations that can harm the training. With this procedure, our data
augmentation scheme achieves a level of accuracy that is comparable to or
better than state-of-the-art methods on benchmark domain generalization
datasets. Code: \url{https://github.com/Masseeh/DCAug} | [
"Masih Aminbeidokhti",
"Fidel A. Guerrero Peña",
"Heitor Rapela Medeiros",
"Thomas Dubail",
"Eric Granger",
"Marco Pedersoli"
] | 2023-10-10 14:46:22 | http://arxiv.org/abs/2310.06670v1 | http://arxiv.org/pdf/2310.06670v1 | 2310.06670v1 |
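One plausible instantiation of step (iii) above, rejecting augmentations that are too extreme, is to compare the model's loss on the augmented batch with its loss on the clean batch and fall back to the clean batch when the gap is too large. This is only an assumed rejection criterion written for illustration; the paper's actual reward function may differ. The sketch below uses PyTorch with illustrative names.

import torch
import torch.nn as nn

def maybe_reject_augmentation(model, x_clean, x_aug, y, loss_fn, ratio=3.0):
    # Keep a strongly augmented batch only if it is not "extreme", judged by how much
    # it inflates the loss relative to the clean batch (illustrative criterion).
    with torch.no_grad():
        clean_loss = loss_fn(model(x_clean), y)
        aug_loss = loss_fn(model(x_aug), y)
    return x_aug if aug_loss <= ratio * clean_loss else x_clean

model, loss_fn = nn.Linear(16, 4), nn.CrossEntropyLoss()
x = torch.randn(8, 16)
x_aug = x + 2.0 * torch.randn_like(x)          # stand-in for a strong augmentation
y = torch.randint(0, 4, (8,))
chosen = maybe_reject_augmentation(model, x, x_aug, y, loss_fn)
print(torch.equal(chosen, x_aug))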
Latent Diffusion Counterfactual Explanations | Counterfactual explanations have emerged as a promising method for
elucidating the behavior of opaque black-box models. Recently, several works
leveraged pixel-space diffusion models for counterfactual generation. To handle
noisy, adversarial gradients during counterfactual generation -- causing
unrealistic artifacts or mere adversarial perturbations -- they required either
auxiliary adversarially robust models or computationally intensive guidance
schemes. However, such requirements limit their applicability, e.g., in
scenarios with restricted access to the model's training data. To address these
limitations, we introduce Latent Diffusion Counterfactual Explanations (LDCE).
LDCE harnesses the capabilities of recent class- or text-conditional foundation
latent diffusion models to expedite counterfactual generation and focus on the
important, semantic parts of the data. Furthermore, we propose a novel
consensus guidance mechanism to filter out noisy, adversarial gradients that
are misaligned with the diffusion model's implicit classifier. We demonstrate
the versatility of LDCE across a wide spectrum of models trained on diverse
datasets with different learning paradigms. Finally, we showcase how LDCE can
provide insights into model errors, enhancing our understanding of black-box
model behavior. | [
"Karim Farid",
"Simon Schrodi",
"Max Argus",
"Thomas Brox"
] | 2023-10-10 14:42:34 | http://arxiv.org/abs/2310.06668v1 | http://arxiv.org/pdf/2310.06668v1 | 2310.06668v1 |
SC2GAN: Rethinking Entanglement by Self-correcting Correlated GAN Space | Generative Adversarial Networks (GANs) can synthesize realistic images, with
the learned latent space shown to encode rich semantic information with various
interpretable directions. However, due to the unstructured nature of the
learned latent space, it inherits the bias from the training data where
specific groups of visual attributes that are not causally related tend to
appear together, a phenomenon also known as spurious correlations, e.g., age
and eyeglasses or women and lipsticks. Consequently, the learned distribution
often lacks the proper modelling of the missing examples. The interpolation
following editing directions for one attribute could result in entangled
changes with other attributes. To address this problem, previous works
typically adjust the learned directions to minimize the changes in other
attributes, yet they still fail on strongly correlated features. In this work,
we study the entanglement issue in both the training data and the learned
latent space for the StyleGAN2-FFHQ model. We propose a novel framework
SC$^2$GAN that achieves disentanglement by re-projecting low-density latent
code samples in the original latent space and correcting the editing directions
based on both the high-density and low-density regions. By leveraging the
original meaningful directions and semantic region-specific layers, our
framework interpolates the original latent codes to generate images with
attribute combinations that appear infrequently, then inverts these samples
back to the original latent space. We apply our framework to pre-existing
methods that learn meaningful latent directions and showcase its strong
capability to disentangle the attributes with small amounts of low-density
region samples added. | [
"Zikun Chen",
"Han Zhao",
"Parham Aarabi",
"Ruowei Jiang"
] | 2023-10-10 14:42:32 | http://arxiv.org/abs/2310.06667v1 | http://arxiv.org/pdf/2310.06667v1 | 2310.06667v1 |
Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization | Counterfactually-Augmented Data (CAD) -- minimal editing of sentences to flip
the corresponding labels -- has the potential to improve the
Out-Of-Distribution (OOD) generalization capability of language models, as CAD
induces language models to exploit domain-independent causal features and
exclude spurious correlations. However, the empirical results of CAD's OOD
generalization are not as efficient as anticipated. In this study, we attribute
the inefficiency to the myopia phenomenon caused by CAD: language models only
focus on causal features that are edited in the augmentation operation and
exclude other non-edited causal features. Therefore, the potential of CAD is
not fully exploited. To address this issue, we analyze the myopia phenomenon in
feature space from the perspective of Fisher's Linear Discriminant, then we
introduce two additional constraints based on CAD's structural properties
(dataset-level and sentence-level) to help language models extract more
complete causal features in CAD, thereby mitigating the myopia phenomenon and
improving OOD generalization capability. We evaluate our method on two tasks:
Sentiment Analysis and Natural Language Inference, and the experimental results
demonstrate that our method could unlock the potential of CAD and improve the
OOD generalization performance of language models by 1.0% to 5.9%. | [
"Caoyun Fan",
"Wenqing Chen",
"Jidong Tian",
"Yitian Li",
"Hao He",
"Yaohui Jin"
] | 2023-10-10 14:41:38 | http://arxiv.org/abs/2310.06666v1 | http://arxiv.org/pdf/2310.06666v1 | 2310.06666v1 |
Tertiary Lymphoid Structures Generation through Graph-based Diffusion | Graph-based representation approaches have been proven to be successful in
the analysis of biomedical data, due to their capability of capturing intricate
dependencies between biological entities, such as the spatial organization of
different cell types in a tumor tissue. However, to further enhance our
understanding of the underlying governing biological mechanisms, it is
important to accurately capture the actual distributions of such complex data.
Graph-based deep generative models are specifically tailored to accomplish
that. In this work, we leverage state-of-the-art graph-based diffusion models
to generate biologically meaningful cell-graphs. In particular, we show that
the adopted graph diffusion model is able to accurately learn the distribution
of cells in terms of their tertiary lymphoid structures (TLS) content, a
well-established biomarker for evaluating the cancer progression in oncology
research. Additionally, we further illustrate the utility of the learned
generative models for data augmentation in a TLS classification task. To the
best of our knowledge, this is the first work that leverages the power of graph
diffusion models in generating meaningful biological cell structures. | [
"Manuel Madeira",
"Dorina Thanou",
"Pascal Frossard"
] | 2023-10-10 14:37:17 | http://arxiv.org/abs/2310.06661v1 | http://arxiv.org/pdf/2310.06661v1 | 2310.06661v1 |
Diversity from Human Feedback | Diversity plays a significant role in many problems, such as ensemble
learning, reinforcement learning, and combinatorial optimization. How to define
the diversity measure is a longstanding problem. Many methods rely on expert
experience to define a proper behavior space and then obtain the diversity
measure, which is, however, challenging in many scenarios. In this paper, we
propose the problem of learning a behavior space from human feedback and
present a general method called Diversity from Human Feedback (DivHF) to solve
it. DivHF learns a behavior descriptor consistent with human preference by
querying human feedback. The learned behavior descriptor can be combined with
any distance measure to define a diversity measure. We demonstrate the
effectiveness of DivHF by integrating it with the Quality-Diversity
optimization algorithm MAP-Elites and conducting experiments on the QDax suite.
The results show that DivHF learns a behavior space that aligns better with
human requirements compared to direct data-driven approaches and leads to more
diverse solutions under human preference. Our contributions include formulating
the problem, proposing the DivHF method, and demonstrating its effectiveness
through experiments. | [
"Ren-Jian Wang",
"Ke Xue",
"Yutong Wang",
"Peng Yang",
"Haobo Fu",
"Qiang Fu",
"Chao Qian"
] | 2023-10-10 14:13:59 | http://arxiv.org/abs/2310.06648v1 | http://arxiv.org/pdf/2310.06648v1 | 2310.06648v1 |
Self-Supervised Representation Learning for Online Handwriting Text Classification | Self-supervised learning offers an efficient way of extracting rich
representations from various types of unlabeled data while avoiding the cost of
annotating large-scale datasets. This is achievable by designing a pretext task
to form pseudo labels with respect to the modality and domain of the data.
Given the evolving applications of online handwritten texts, in this study, we
propose the novel Part of Stroke Masking (POSM) as a pretext task for
pretraining models to extract informative representations from the online
handwriting of individuals in English and Chinese languages, along with two
suggested pipelines for fine-tuning the pretrained models. To evaluate the
quality of the extracted representations, we use both intrinsic and extrinsic
evaluation methods. The pretrained models are fine-tuned to achieve
state-of-the-art results in tasks such as writer identification, gender
classification, and handedness classification, also highlighting the
superiority of utilizing the pretrained models over the models trained from
scratch. | [
"Pouya Mehralian",
"Bagher BabaAli",
"Ashena Gorgan Mohammadi"
] | 2023-10-10 14:07:49 | http://arxiv.org/abs/2310.06645v1 | http://arxiv.org/pdf/2310.06645v1 | 2310.06645v1 |
Zero-Level-Set Encoder for Neural Distance Fields | Neural shape representation generally refers to representing 3D geometry
using neural networks, e.g., to compute a signed distance or occupancy value at
a specific spatial position. Previous methods tend to rely on the auto-decoder
paradigm, which often requires densely-sampled and accurate signed distances to
be known during training and testing, as well as an additional optimization
loop during inference. This introduces a lot of computational overhead, in
addition to having to compute signed distances analytically, even during
testing. In this paper, we present a novel encoder-decoder neural network for
embedding 3D shapes in a single forward pass. Our architecture is based on a
multi-scale hybrid system incorporating graph-based and voxel-based components,
as well as a continuously differentiable decoder. Furthermore, the network is
trained to solve the Eikonal equation and only requires knowledge of the
zero-level set for training and inference. Additional volumetric samples can be
generated on-the-fly, and incorporated in an unsupervised manner. This means
that in contrast to most previous work, our network is able to output valid
signed distance fields without explicit prior knowledge of non-zero distance
values or shape occupancy. In other words, our network computes approximate
solutions to the boundary-valued Eikonal equation. It also requires only a
single forward pass during inference, instead of the common latent code
optimization. We further propose a modification of the loss function for the case
that surface normals are not well defined, e.g., in the context of
non-watertight surface-meshes and non-manifold geometry. We finally demonstrate
the efficacy, generalizability and scalability of our method on datasets
consisting of deforming 3D shapes, single class encoding and multiclass
encoding, showcasing a wide range of possible applications. | [
"Stefan Rhys Jeske",
"Jonathan Klein",
"Dominik L. Michels",
"Jan Bender"
] | 2023-10-10 14:07:37 | http://arxiv.org/abs/2310.06644v1 | http://arxiv.org/pdf/2310.06644v1 | 2310.06644v1 |
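To make the Eikonal-based training referenced above concrete, one common formulation of such an objective (not necessarily the exact loss used in that work) combines a zero-level-set term on surface samples $x_s \in S$ with a unit-gradient term on volume samples $x_v \in V$:

$$\mathcal{L}(\theta) = \frac{1}{|S|}\sum_{x_s \in S} \big|f_\theta(x_s)\big| \;+\; \lambda\,\frac{1}{|V|}\sum_{x_v \in V} \big(\lVert \nabla_x f_\theta(x_v) \rVert_2 - 1\big)^2,$$

so that $f_\theta$ vanishes on the known zero level set while its gradient norm is driven toward one elsewhere, approximating a signed distance field without requiring ground-truth distance values.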
Implicit Variational Inference for High-Dimensional Posteriors | In variational inference, the benefits of Bayesian models rely on accurately
capturing the true posterior distribution. We propose using neural samplers
that specify implicit distributions, which are well-suited for approximating
complex multimodal and correlated posteriors in high-dimensional spaces. Our
approach advances inference using implicit distributions by introducing novel
bounds that come about by locally linearising the neural sampler. This is
distinct from existing methods that rely on additional discriminator networks
and unstable adversarial objectives. Furthermore, we present a new sampler
architecture that, for the first time, enables implicit distributions over
millions of latent variables, addressing computational concerns by using
differentiable numerical approximations. Our empirical analysis indicates our
method is capable of recovering correlations across layers in large Bayesian
neural networks, a property that is crucial for a network's performance but
notoriously challenging to achieve. To the best of our knowledge, no other
method has been shown to accomplish this task for such large models. Through
experiments in downstream tasks, we demonstrate that our expressive posteriors
outperform state-of-the-art uncertainty quantification methods, validating the
effectiveness of our training algorithm and the quality of the learned implicit
approximation. | [
"Anshuk Uppal",
"Kristoffer Stensbo-Smidt",
"Wouter K. Boomsma",
"Jes Frellsen"
] | 2023-10-10 14:06:56 | http://arxiv.org/abs/2310.06643v1 | http://arxiv.org/pdf/2310.06643v1 | 2310.06643v1 |
The Lattice Overparametrization Paradigm for the Machine Learning of Lattice Operators | The machine learning of lattice operators has three possible bottlenecks.
From a statistical standpoint, it is necessary to design a constrained class of
operators based on prior information with low bias, and low complexity relative
to the sample size. From a computational perspective, there should be an
efficient algorithm to minimize an empirical error over the class. From an
understanding point of view, the properties of the learned operator need to be
derived, so its behavior can be theoretically understood. The statistical
bottleneck can be overcome due to the rich literature about the representation
of lattice operators, but there is no general learning algorithm for them. In
this paper, we discuss a learning paradigm in which, by overparametrizing a
class via elements in a lattice, an algorithm for minimizing functions in a
lattice is applied to learn. We present the stochastic lattice gradient descent
algorithm as a general algorithm to learn on constrained classes of operators
as long as a lattice overparametrization of it is fixed, and we discuss
previous works which serve as proofs of concept. Moreover, if there are algorithms
to compute the basis of an operator from its overparametrization, then its
properties can be deduced and the understanding bottleneck is also overcome.
This learning paradigm has three properties that modern methods based on neural
networks lack: control, transparency and interpretability. Nowadays, there is
an increasing demand for methods with these characteristics, and we believe
that mathematical morphology is in a unique position to supply them. The
lattice overparametrization paradigm could be a missing piece for it to achieve
its full potential within modern machine learning. | [
"Diego Marcondes",
"Junior Barrera"
] | 2023-10-10 14:00:03 | http://arxiv.org/abs/2310.06639v1 | http://arxiv.org/pdf/2310.06639v1 | 2310.06639v1 |
What If the TV Was Off? Examining Counterfactual Reasoning Abilities of Multi-modal Language Models | Counterfactual reasoning ability is one of the core abilities of human
intelligence. This reasoning process involves the processing of alternatives to
observed states or past events, and this process can improve our ability for
planning and decision-making. In this work, we focus on benchmarking the
counterfactual reasoning ability of multi-modal large language models. We take
the question and answer pairs from the VQAv2 dataset and add one counterfactual
presupposition to the questions, with the answer being modified accordingly.
After generating counterfactual questions and answers using ChatGPT, we
manually examine all generated questions and answers to ensure correctness.
Over 2k counterfactual question and answer pairs are collected this way. We
evaluate recent vision language models on our newly collected test dataset and
find that all models exhibit a large performance drop compared to the results
tested on questions without the counterfactual presupposition. This result
indicates that there still exists space for developing vision language models.
Apart from the vision language models, our proposed dataset can also serve as
a benchmark for evaluating the ability of code generation LLMs; the results
demonstrate a large gap between GPT-4 and current open-source models. Our code
and dataset are available at \url{https://github.com/Letian2003/C-VQA}. | [
"Letian Zhang",
"Xiaotong Zhai",
"Zhongkai Zhao",
"Xin Wen",
"Yongshuo Zong",
"Bingchen Zhao"
] | 2023-10-10 13:45:59 | http://arxiv.org/abs/2310.06627v1 | http://arxiv.org/pdf/2310.06627v1 | 2310.06627v1 |
iTransformer: Inverted Transformers Are Effective for Time Series Forecasting | The recent boom of linear forecasting models questions the ongoing passion
for architectural modifications of Transformer-based forecasters. These
forecasters leverage Transformers to model the global dependencies over
temporal tokens of time series, with each token formed by multiple variates of
the same timestamp. However, Transformer is challenged in forecasting series
with larger lookback windows due to performance degradation and computation
explosion. Besides, the unified embedding for each temporal token fuses
multiple variates with potentially unaligned timestamps and distinct physical
measurements, which may fail in learning variate-centric representations and
result in meaningless attention maps. In this work, we reflect on the competent
duties of Transformer components and repurpose the Transformer architecture
without any adaptation on the basic components. We propose iTransformer that
simply inverts the duties of the attention mechanism and the feed-forward
network. Specifically, the time points of individual series are embedded into
variate tokens which are utilized by the attention mechanism to capture
multivariate correlations; meanwhile, the feed-forward network is applied for
each variate token to learn nonlinear representations. The iTransformer model
achieves consistent state-of-the-art on several real-world datasets, which
further empowers the Transformer family with improved performance,
generalization ability across different variates, and better utilization of
arbitrary lookback windows, making it a nice alternative as the fundamental
backbone of time series forecasting. | [
"Yong Liu",
"Tengge Hu",
"Haoran Zhang",
"Haixu Wu",
"Shiyu Wang",
"Lintao Ma",
"Mingsheng Long"
] | 2023-10-10 13:44:09 | http://arxiv.org/abs/2310.06625v1 | http://arxiv.org/pdf/2310.06625v1 | 2310.06625v1 |
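A minimal sketch of the inverted-token idea described above, in which each variate's whole lookback window becomes one token, attention mixes variates, and the feed-forward part acts per variate token. This is an illustration in PyTorch with assumed sizes, not the authors' implementation.

import torch
import torch.nn as nn

class InvertedBlock(nn.Module):
    def __init__(self, lookback, horizon, d_model=128, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(lookback, d_model)        # one token per variate
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, horizon)          # project back to forecasts

    def forward(self, x):                                # x: (batch, lookback, n_variates)
        tokens = self.embed(x.transpose(1, 2))           # (batch, n_variates, d_model)
        a, _ = self.attn(tokens, tokens, tokens)         # attention across variate tokens
        tokens = self.norm1(tokens + a)
        tokens = self.norm2(tokens + self.ffn(tokens))   # per-token nonlinearity
        return self.head(tokens).transpose(1, 2)         # (batch, horizon, n_variates)

y = InvertedBlock(lookback=96, horizon=24)(torch.randn(4, 96, 7))
print(y.shape)  # torch.Size([4, 24, 7]): forecasts for 7 variates over a 24-step horizon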
Robustness May be More Brittle than We Think under Different Degrees of Distribution Shifts | Out-of-distribution (OOD) generalization is a complicated problem due to the
idiosyncrasies of possible distribution shifts between training and test
domains. Most benchmarks employ diverse datasets to address this issue;
however, the degree of the distribution shift between the training domains and
the test domains of each dataset remains largely fixed. This may lead to biased
conclusions that either underestimate or overestimate the actual OOD
performance of a model. Our study delves into a more nuanced evaluation setting
that covers a broad range of shift degrees. We show that the robustness of
models can be quite brittle and inconsistent under different degrees of
distribution shifts, and therefore one should be more cautious when drawing
conclusions from evaluations under a limited range of degrees. In addition, we
observe that large-scale pre-trained models, such as CLIP, are sensitive to
even minute distribution shifts of novel downstream tasks. This indicates that
while pre-trained representations may help improve downstream in-distribution
performance, they could have minimal or even adverse effects on generalization
in certain OOD scenarios of the downstream task if not used properly. In light
of these findings, we encourage future research to conduct evaluations across a
broader range of shift degrees whenever possible. | [
"Kaican Li",
"Yifan Zhang",
"Lanqing Hong",
"Zhenguo Li",
"Nevin L. Zhang"
] | 2023-10-10 13:39:18 | http://arxiv.org/abs/2310.06622v1 | http://arxiv.org/pdf/2310.06622v1 | 2310.06622v1 |
Discovering Interpretable Physical Models Using Symbolic Regression and Discrete Exterior Calculus | Computational modeling is a key resource to gather insight into physical
systems in modern scientific research and engineering. While access to large
amount of data has fueled the use of Machine Learning (ML) to recover physical
models from experiments and increase the accuracy of physical simulations,
purely data-driven models have limited generalization and interpretability. To
overcome these limitations, we propose a framework that combines Symbolic
Regression (SR) and Discrete Exterior Calculus (DEC) for the automated
discovery of physical models starting from experimental data. Since these
models consist of mathematical expressions, they are interpretable and amenable
to analysis, and the use of a natural, general-purpose discrete mathematical
language for physics favors generalization with limited input data.
Importantly, DEC provides building blocks for the discrete analogue of field
theories, which are beyond the state-of-the-art applications of SR to physical
problems. Further, we show that DEC allows to implement a strongly-typed SR
procedure that guarantees the mathematical consistency of the recovered models
and reduces the search space of symbolic expressions. Finally, we prove the
effectiveness of our methodology by re-discovering three models of Continuum
Physics from synthetic experimental data: the Poisson equation, Euler's
Elastica, and the equations of Linear Elasticity. Thanks to their
general-purpose nature, the methods developed in this paper may be applied to
diverse contexts of physical modeling. | [
"Simone Manti",
"Alessandro Lucantonio"
] | 2023-10-10 13:23:05 | http://arxiv.org/abs/2310.06609v1 | http://arxiv.org/pdf/2310.06609v1 | 2310.06609v1 |
Pi-DUAL: Using Privileged Information to Distinguish Clean from Noisy Labels | Label noise is a pervasive problem in deep learning that often compromises
the generalization performance of trained models. Recently, leveraging
privileged information (PI) -- information available only during training but
not at test time -- has emerged as an effective approach to mitigate this
issue. Yet, existing PI-based methods have failed to consistently outperform
their no-PI counterparts in terms of preventing overfitting to label noise. To
address this deficiency, we introduce Pi-DUAL, an architecture designed to
harness PI to distinguish clean from wrong labels. Pi-DUAL decomposes the
output logits into a prediction term, based on conventional input features, and
a noise-fitting term influenced solely by PI. A gating mechanism steered by PI
adaptively shifts focus between these terms, allowing the model to implicitly
separate the learning paths of clean and wrong labels. Empirically, Pi-DUAL
achieves significant performance improvements on key PI benchmarks (e.g., +6.8%
on ImageNet-PI), establishing a new state-of-the-art test set accuracy.
Additionally, Pi-DUAL is a potent method for identifying noisy samples
post-training, outperforming other strong methods at this task. Overall,
Pi-DUAL is a simple, scalable and practical approach for mitigating the effects
of label noise in a variety of real-world scenarios with PI. | [
"Ke Wang",
"Guillermo Ortiz-Jimenez",
"Rodolphe Jenatton",
"Mark Collier",
"Efi Kokiopoulou",
"Pascal Frossard"
] | 2023-10-10 13:08:50 | http://arxiv.org/abs/2310.06600v1 | http://arxiv.org/pdf/2310.06600v1 | 2310.06600v1 |
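Reading the decomposition described above literally, one possible form of such an architecture (an assumption for illustration; the exact parameterisation in the paper may differ) combines a feature-based prediction branch with a PI-only noise-fitting branch through a PI-driven gate:

import torch
import torch.nn as nn

class GatedDualHead(nn.Module):
    # Illustrative logit decomposition: prediction term + gated noise-fitting term.
    def __init__(self, x_dim, pi_dim, n_classes, hidden=64):
        super().__init__()
        self.predict = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_classes))    # uses features only
        self.noise = nn.Sequential(nn.Linear(pi_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))      # uses PI only
        self.gate = nn.Sequential(nn.Linear(pi_dim, 1), nn.Sigmoid()) # PI-driven gate

    def forward(self, x, pi):
        g = self.gate(pi)                                 # in (0, 1), per example
        return (1 - g) * self.predict(x) + g * self.noise(pi)

model = GatedDualHead(x_dim=32, pi_dim=8, n_classes=10)
logits = model(torch.randn(16, 32), torch.randn(16, 8))
print(logits.shape)  # torch.Size([16, 10])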
FTFT: efficient and robust Fine-Tuning by transFerring Training dynamics | Despite the massive success of fine-tuning large Pre-trained Language Models
(PLMs) on a wide range of Natural Language Processing (NLP) tasks, they remain
susceptible to out-of-distribution (OOD) and adversarial inputs. Data map (DM)
is a simple yet effective dual-model approach that enhances the robustness of
fine-tuned PLMs, which involves fine-tuning a model on the original training
set (i.e. reference model), selecting a specified fraction of important
training examples according to the training dynamics of the reference model,
and fine-tuning the same model on these selected examples (i.e. main model).
However, it suffers from the drawback of requiring fine-tuning the same model
twice, which is computationally expensive for large models. In this paper, we
first show that 1) training dynamics are highly transferable across different
model sizes and different pre-training methods, and that 2) main models
fine-tuned using DM learn faster than when using conventional Empirical Risk
Minimization (ERM). Building on these observations, we propose a novel
fine-tuning approach based on the DM method: Fine-Tuning by transFerring
Training dynamics (FTFT). Compared with DM, FTFT uses more efficient reference
models and then fine-tunes more capable main models for fewer steps. Our
experiments show that FTFT achieves better generalization robustness than ERM
while spending less than half of the training cost. | [
"Yupei Du",
"Albert Gatt",
"Dong Nguyen"
] | 2023-10-10 12:53:48 | http://arxiv.org/abs/2310.06588v1 | http://arxiv.org/pdf/2310.06588v1 | 2310.06588v1 |
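For readers unfamiliar with the data-map selection step mentioned above, a hedged sketch of one common instantiation: record the reference model's probability on the gold label across epochs, summarise each example by mean confidence and variability, and keep a fraction of the most ambiguous (high-variability) examples. The statistics and fraction below are illustrative; the selection criterion used in the work above may differ.

import numpy as np

def select_ambiguous(gold_probs, keep_fraction=0.33):
    # gold_probs: array of shape (n_epochs, n_examples) holding the reference model's
    # probability on the gold label after each epoch.
    confidence = gold_probs.mean(axis=0)            # mean gold-label probability
    variability = gold_probs.std(axis=0)            # fluctuation across epochs
    n_keep = int(keep_fraction * gold_probs.shape[1])
    keep = np.argsort(-variability)[:n_keep]        # most ambiguous examples
    return keep, confidence, variability

probs = np.random.rand(6, 1000)                     # stand-in training dynamics
idx, conf, var = select_ambiguous(probs)
print(len(idx), conf[idx][:3], var[idx][:3])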
A Black-Box Physics-Informed Estimator based on Gaussian Process Regression for Robot Inverse Dynamics Identification | In this paper, we propose a black-box model based on Gaussian process
regression for the identification of the inverse dynamics of robotic
manipulators. The proposed model relies on a novel multidimensional kernel,
called \textit{Lagrangian Inspired Polynomial} (LIP) kernel. The
LIP kernel is based on two main ideas. First, instead of directly
modeling the inverse dynamics components, we model as GPs the kinetic and
potential energy of the system. The GP prior on the inverse dynamics components
is derived from those on the energies by applying the properties of GPs under
linear operators. Second, as regards the energy prior definition, we prove a
polynomial structure of the kinetic and potential energy, and we derive a
polynomial kernel that encodes this property. As a consequence, the proposed
model allows also to estimate the kinetic and potential energy without
requiring any label on these quantities. Results on simulation and on two real
robotic manipulators, namely a 7 DOF Franka Emika Panda and a 6 DOF MELFA
RV4FL, show that the proposed model outperforms state-of-the-art black-box
estimators based both on Gaussian Processes and Neural Networks in terms of
accuracy, generality and data efficiency. The experiments on the MELFA robot
also demonstrate that our approach achieves performance comparable to
fine-tuned model-based estimators, despite requiring less prior information. | [
"Giulio Giacomuzzo",
"Alberto Dalla Libera",
"Diego Romeres",
"Ruggero Carli"
] | 2023-10-10 12:52:42 | http://arxiv.org/abs/2310.06585v1 | http://arxiv.org/pdf/2310.06585v1 | 2310.06585v1 |
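Two standard facts underlie constructions of this kind and are stated here only as background, in assumed generic notation rather than the paper's own: for joint coordinates $q$, the inverse-dynamics torques follow from the kinetic energy $T$ and potential energy $V$ through the Euler-Lagrange equations, and Gaussian processes are closed under linear operators, so GP priors on the energies induce a GP prior on the torques:

$$\tau = \frac{d}{dt}\frac{\partial T}{\partial \dot q} - \frac{\partial T}{\partial q} + \frac{\partial V}{\partial q}, \qquad f \sim \mathcal{GP}(0, k) \;\Rightarrow\; \mathcal{L}f \sim \mathcal{GP}\big(0,\, \mathcal{L}_{x}\mathcal{L}_{x'}\, k(x, x')\big)$$

for a linear operator $\mathcal{L}$ applied to each argument of the kernel.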
XAI for Early Crop Classification | We propose an approach for early crop classification through identifying
important timesteps with eXplainable AI (XAI) methods. Our approach consists of
training a baseline crop classification model to carry out layer-wise relevance
propagation (LRP) so that salient time steps can be identified. We chose a
select number of such important time indices to create the bounding region of
the shortest possible classification timeframe. We identified the period 21st
April 2019 to 9th August 2019 as having the best trade-off in terms of accuracy
and earliness. This timeframe only suffers a 0.75% loss in accuracy as compared
to using the full time series. We observed that the LRP-derived important
timesteps also highlight small details in input values that differentiate
between different classes and | [
"Ayshah Chan",
"Maja Schneider",
"Marco Körner"
] | 2023-10-10 12:35:20 | http://arxiv.org/abs/2310.06574v1 | http://arxiv.org/pdf/2310.06574v1 | 2310.06574v1 |
Deep Learning reconstruction with uncertainty estimation for $γ$ photon interaction in fast scintillator detectors | This article presents a physics-informed deep learning method for the
quantitative estimation of the spatial coordinates of gamma interactions within
a monolithic scintillator, with a focus on Positron Emission Tomography (PET)
imaging. A Density Neural Network approach is designed to estimate the
2-dimensional gamma photon interaction coordinates in a fast lead tungstate
(PbWO4) monolithic scintillator detector. We introduce a custom loss function
to estimate the inherent uncertainties associated with the reconstruction
process and to incorporate the physical constraints of the detector.
This unique combination allows for more robust and reliable position
estimations, and the obtained results demonstrate the effectiveness of the
proposed approach and highlight the significant benefits of the uncertainty
estimation. We discuss its potential impact on improving PET imaging quality
and show how the results can be used to better exploit the model, to bring
benefits to the application, and to evaluate the validity of a given prediction
and its associated uncertainties. Importantly, our proposed
methodology extends beyond this specific use case, as it can be generalized to
other applications beyond PET imaging. | [
"Geoffrey Daniel",
"Mohamed Bahi Yahiaoui",
"Claude Comtat",
"Sebastien Jan",
"Olga Kochebina",
"Jean-Marc Martinez",
"Viktoriya Sergeyeva",
"Viatcheslav Sharyy",
"Chi-Hsun Sung",
"Dominique Yvon"
] | 2023-10-10 12:31:29 | http://arxiv.org/abs/2310.06572v1 | http://arxiv.org/pdf/2310.06572v1 | 2310.06572v1 |
Statistical properties and privacy guarantees of an original distance-based fully synthetic data generation method | Introduction: The amount of data generated by original research is growing
exponentially. Publicly releasing them is recommended to comply with the Open
Science principles. However, data collected from human participants cannot be
released as-is without raising privacy concerns. Fully synthetic data represent
a promising answer to this challenge. This approach is explored by the French
Centre de Recherche en Épidémiologie et Santé des Populations in
the form of a synthetic data generation framework based on Classification and
Regression Trees and an original distance-based filtering. The goal of this
work was to develop a refined version of this framework and to assess its
risk-utility profile with empirical and formal tools, including novel ones
developed for the purpose of this evaluation. Materials and Methods: Our
synthesis framework consists of four successive steps, each of which is
designed to prevent specific risks of disclosure. We assessed its performance
by applying two or more of these steps to a rich epidemiological dataset.
Privacy and utility metrics were computed for each of the resulting synthetic
datasets, which were further assessed using machine learning
approaches. Results: Computed metrics showed a satisfactory level of protection
against attribute disclosure attacks for each synthetic dataset, especially
when the full framework was used. Membership disclosure attacks were formally
prevented without significantly altering the data. Machine learning approaches
showed a low risk of success for simulated singling out and linkability
attacks. Distributional and inferential similarity with the original data were
high with all datasets. Discussion: This work showed the technical feasibility
of generating publicly releasable synthetic data using a multi-step framework.
Formal and empirical tools specifically developed for this demonstration are a
valuable contribution to this field. Further research should focus on the
extension and validation of these tools, in an effort to specify the intrinsic
qualities of alternative data synthesis methods. Conclusion: By successfully
assessing the quality of data produced using a novel multi-step synthetic data
generation framework, we showed the technical and conceptual soundness of the
Open-CESP initiative, which seems ripe for full-scale implementation. | [
"Rémy Chapelle",
"Bruno Falissard"
] | 2023-10-10 12:29:57 | http://arxiv.org/abs/2310.06571v1 | http://arxiv.org/pdf/2310.06571v1 | 2310.06571v1 |
Data efficient deep learning for medical image analysis: A survey | The rapid evolution of deep learning has significantly advanced the field of
medical image analysis. However, despite these achievements, the further
enhancement of deep learning models for medical image analysis faces a
significant challenge due to the scarcity of large, well-annotated datasets. To
address this issue, recent years have witnessed a growing emphasis on the
development of data-efficient deep learning methods. This paper conducts a
thorough review of data-efficient deep learning methods for medical image
analysis. To this end, we categorize these methods based on the level of
supervision they rely on, encompassing categories such as no supervision,
inexact supervision, incomplete supervision, inaccurate supervision, and only
limited supervision. We further divide these categories into finer
subcategories. For example, we categorize inexact supervision into multiple
instance learning and learning with weak annotations. Similarly, we categorize
incomplete supervision into semi-supervised learning, active learning, and
domain-adaptive learning, and so on. Furthermore, we systematically summarize
commonly used datasets for data efficient deep learning in medical image
analysis and investigate future research directions to conclude this survey. | [
"Suruchi Kumari",
"Pravendra Singh"
] | 2023-10-10 12:13:38 | http://arxiv.org/abs/2310.06557v1 | http://arxiv.org/pdf/2310.06557v1 | 2310.06557v1 |
On Temporal References in Emergent Communication | As humans, we use linguistic elements referencing time, such as before or
tomorrow, to easily share past experiences and future predictions. While
temporal aspects of the language have been considered in computational
linguistics, no such exploration has been done within the field of emergent
communication. We address this gap, providing the first reported temporal
vocabulary within the emergent communication literature. Our experimental analysis
shows that a different agent architecture is sufficient for the natural
emergence of temporal references, and that no additional losses are necessary.
Our readily transferable architectural insights provide the basis for the
incorporation of temporal referencing into other emergent communication
environments. | [
"Olaf Lipinski",
"Adam J. Sobey",
"Federico Cerutti",
"Timothy J. Norman"
] | 2023-10-10 12:10:40 | http://arxiv.org/abs/2310.06555v1 | http://arxiv.org/pdf/2310.06555v1 | 2310.06555v1 |
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks | Label smoothing -- using softened labels instead of hard ones -- is a widely
adopted regularization method for deep learning, showing diverse benefits such
as enhanced generalization and calibration. Its implications for preserving
model privacy, however, have remained unexplored. To fill this gap, we
investigate the impact of label smoothing on model inversion attacks (MIAs),
which aim to generate class-representative samples by exploiting the knowledge
encoded in a classifier, thereby inferring sensitive information about its
training data. Through extensive analyses, we uncover that traditional label
smoothing fosters MIAs, thereby increasing a model's privacy leakage. Even
more, we reveal that smoothing with negative factors counters this trend,
impeding the extraction of class-related information and leading to privacy
preservation, beating state-of-the-art defenses. This establishes a practical
and powerful novel way for enhancing model resilience against MIAs. | [
"Lukas Struppek",
"Dominik Hintersdorf",
"Kristian Kersting"
] | 2023-10-10 11:51:12 | http://arxiv.org/abs/2310.06549v1 | http://arxiv.org/pdf/2310.06549v1 | 2310.06549v1 |
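For reference, standard label smoothing with factor $\alpha$ over $K$ classes replaces the one-hot target $y$ by

$$\tilde{y} = (1-\alpha)\, y + \frac{\alpha}{K},$$

so a positive $\alpha$ softens the target distribution, while a negative $\alpha$ (the regime explored in the record above) makes the non-target entries negative, which in the cross-entropy objective penalises probability mass on non-target classes even more strongly than a hard label.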
An Edge-Aware Graph Autoencoder Trained on Scale-Imbalanced Data for Travelling Salesman Problems | Recent years have witnessed a surge in research on machine learning for
combinatorial optimization since learning-based approaches can outperform
traditional heuristics and approximate exact solvers at a lower computation
cost. However, most existing work on supervised neural combinatorial
optimization focuses on TSP instances with a fixed number of cities and
requires large amounts of training samples to achieve a good performance,
making them less practical to apply to realistic optimization scenarios.
This work aims to develop a data-driven graph representation learning method
for solving travelling salesman problems (TSPs) with various numbers of cities.
To this end, we propose an edge-aware graph autoencoder (EdgeGAE) model that
can learn to solve TSPs after being trained on solution data of various sizes
with an imbalanced distribution. We formulate the TSP as a link prediction task
on sparse connected graphs. A residual gated encoder is trained to learn latent
edge embeddings, followed by an edge-centered decoder to output link
predictions in an end-to-end manner. To improve the model's generalization
capability of solving large-scale problems, we introduce an active sampling
strategy into the training process. In addition, we generate a benchmark
dataset containing 50,000 TSP instances with a size from 50 to 500 cities,
following an extremely scale-imbalanced distribution, making it ideal for
investigating the model's performance for practical applications. We conduct
experiments using different amounts of training data with various scales, and
the experimental results demonstrate that the proposed data-driven approach
achieves a highly competitive performance among state-of-the-art learning-based
methods for solving TSPs. | [
"Shiqing Liu",
"Xueming Yan",
"Yaochu Jin"
] | 2023-10-10 11:42:49 | http://arxiv.org/abs/2310.06543v1 | http://arxiv.org/pdf/2310.06543v1 | 2310.06543v1 |
A Novel Contrastive Learning Method for Clickbait Detection on RoCliCo: A Romanian Clickbait Corpus of News Articles | To increase revenue, news websites often resort to using deceptive news
titles, luring users into clicking on the title and reading the full news article.
Clickbait detection is the task that aims to automatically detect this form of
false advertisement and avoid wasting the precious time of online users.
Despite the importance of the task, to the best of our knowledge, there is no
publicly available clickbait corpus for the Romanian language. To this end, we
introduce a novel Romanian Clickbait Corpus (RoCliCo) comprising 8,313 news
samples which are manually annotated with clickbait and non-clickbait labels.
Furthermore, we conduct experiments with four machine learning methods, ranging
from handcrafted models to recurrent and transformer-based neural networks, to
establish a line-up of competitive baselines. We also carry out experiments
with a weighted voting ensemble. Among the considered baselines, we propose a
novel BERT-based contrastive learning model that learns to encode news titles
and contents into a deep metric space such that titles and contents of
non-clickbait news have high cosine similarity, while titles and contents of
clickbait news have low cosine similarity. Our data set and code to reproduce
the baselines are publicly available for download at
https://github.com/dariabroscoteanu/RoCliCo. | [
"Daria-Mihaela Broscoteanu",
"Radu Tudor Ionescu"
] | 2023-10-10 11:38:16 | http://arxiv.org/abs/2310.06540v1 | http://arxiv.org/pdf/2310.06540v1 | 2310.06540v1 |
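A generic cosine-similarity contrastive objective of the kind described above, shown only as an illustration and not as the paper's exact loss: pull title and content embeddings together for non-clickbait pairs and push them below a margin for clickbait pairs. The margin and names are assumptions.

import torch
import torch.nn.functional as F

def cosine_contrastive_loss(title_emb, content_emb, is_clickbait, margin=0.2):
    # title_emb, content_emb: (batch, dim); is_clickbait: (batch,) with 1 = clickbait.
    cos = F.cosine_similarity(title_emb, content_emb, dim=-1)
    pos = (1 - is_clickbait) * (1.0 - cos)                    # non-clickbait: want cos -> 1
    neg = is_clickbait * torch.clamp(cos - margin, min=0.0)   # clickbait: want cos <= margin
    return (pos + neg).mean()

loss = cosine_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256),
                               torch.randint(0, 2, (8,)).float())
print(loss.item())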
Data-level hybrid strategy selection for disk fault prediction model based on multivariate GAN | Data class imbalance is a common problem in classification problems, where
minority class samples are often more important and more costly to misclassify
in a classification task. Therefore, it is very important to solve the data
class imbalance classification problem. The SMART dataset exhibits an evident
class imbalance, comprising a substantial quantity of healthy samples and a
comparatively limited number of defective samples. This dataset serves as a
reliable indicator of the disk's health status. In this paper, we obtain the
best balanced disk SMART dataset for a specific classification model by mixing
and integrating data synthesised by multivariate generative adversarial
networks (GANs), thereby balancing the disk SMART dataset at the data level,
and we combine this with genetic algorithms to obtain higher disk fault
classification prediction accuracy on that classification model. | [
"Shuangshuang Yuan",
"Peng Wu",
"Yuehui Chen"
] | 2023-10-10 11:34:53 | http://arxiv.org/abs/2310.06537v1 | http://arxiv.org/pdf/2310.06537v1 | 2310.06537v1 |
Disk failure prediction based on multi-layer domain adaptive learning | Large scale data storage is susceptible to failure. As disks are damaged and
replaced, traditional machine learning models, which rely on historical data to
make predictions, struggle to accurately predict disk failures. This paper
presents a novel method for predicting disk failures by leveraging multi-layer
domain adaptive learning techniques. First, disk data with numerous faults is
selected as the source domain, and disk data with fewer faults is selected as
the target domain. The feature extraction network is then trained on the selected source and target domains. The contrast between the two domains facilitates the transfer of diagnostic knowledge from the source domain to the target domain. According to the experimental findings, it has been
demonstrated that the proposed technique can generate a reliable prediction
model and improve the ability to predict failures on disk data with few failure
samples. | [
"Guangfu Gao",
"Peng Wu",
"Hussain Dawood"
] | 2023-10-10 11:28:40 | http://arxiv.org/abs/2310.06534v1 | http://arxiv.org/pdf/2310.06534v1 | 2310.06534v1 |
Watt For What: Rethinking Deep Learning's Energy-Performance Relationship | Deep learning models have revolutionized various fields, from image
recognition to natural language processing, by achieving unprecedented levels
of accuracy. However, their increasing energy consumption has raised concerns
about their environmental impact, disadvantaging smaller entities in research
and exacerbating global energy consumption. In this paper, we explore the
trade-off between model accuracy and electricity consumption, proposing a
metric that penalizes large consumption of electricity. We conduct a
comprehensive study on the electricity consumption of various deep learning
models across different GPUs, presenting a detailed analysis of their
accuracy-efficiency trade-offs. By evaluating accuracy per unit of electricity
consumed, we demonstrate how smaller, more energy-efficient models can
significantly expedite research while mitigating environmental concerns. Our
results highlight the potential for a more sustainable approach to deep
learning, emphasizing the importance of optimizing models for efficiency. This
research also contributes to a more equitable research landscape, where smaller
entities can compete effectively with larger counterparts. This advocates for
the adoption of efficient deep learning practices to reduce electricity
consumption, safeguarding the environment for future generations whilst also
helping ensure a fairer competitive landscape. | [
"Shreyank N Gowda",
"Xinyue Hao",
"Gen Li",
"Laura Sevilla-Lara",
"Shashank Narayana Gowda"
] | 2023-10-10 11:08:31 | http://arxiv.org/abs/2310.06522v1 | http://arxiv.org/pdf/2310.06522v1 | 2310.06522v1 |
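A minimal sketch of the kind of accuracy-per-electricity comparison the abstract above argues for. The metric form (accuracy divided by energy), the model names, and all numbers are illustrative assumptions, not values from the paper.

# Hypothetical models: (top-1 accuracy in %, electricity consumed for the run in kWh)
models = {
    "large_model": (85.0, 12.0),
    "small_model": (82.5, 1.5),
}

for name, (accuracy, energy_kwh) in models.items():
    accuracy_per_kwh = accuracy / energy_kwh  # higher means more accuracy per unit of electricity
    print(f"{name}: {accuracy_per_kwh:.2f} accuracy points per kWh")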
AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments | Feature attribution explains neural network outputs by identifying relevant
input features. How do we know if the identified features are indeed relevant
to the network? This notion is referred to as faithfulness, an essential
property that reflects the alignment between the identified (attributed)
features and the features used by the model. One recent trend to test
faithfulness is to design the data such that we know which input features are
relevant to the label and then train a model on the designed data.
Subsequently, the identified features are evaluated by comparing them with
these designed ground truth features. However, this idea has the underlying
assumption that the neural network learns to use all and only these designed
features, while there is no guarantee that the learning process trains the
network in this way. In this paper, we solve this missing link by explicitly
designing the neural network by manually setting its weights, along with
designing data, so we know precisely which input features in the dataset are
relevant to the designed network. Thus, we can test faithfulness in
AttributionLab, our designed synthetic environment, which serves as a sanity
check and is effective in filtering out attribution methods. If an attribution
method is not faithful in a simple controlled environment, it can be unreliable
in more complex scenarios. Furthermore, the AttributionLab environment serves
as a laboratory for controlled experiments through which we can study feature
attribution methods, identify issues, and suggest potential improvements. | [
"Yang Zhang",
"Yawei Li",
"Hannah Brown",
"Mina Rezaei",
"Bernd Bischl",
"Philip Torr",
"Ashkan Khakzar",
"Kenji Kawaguchi"
] | 2023-10-10 10:55:49 | http://arxiv.org/abs/2310.06514v1 | http://arxiv.org/pdf/2310.06514v1 | 2310.06514v1 |
Self-Supervised Dataset Distillation for Transfer Learning | Dataset distillation methods have achieved remarkable success in distilling a
large dataset into a small set of representative samples. However, they are not
designed to produce a distilled dataset that can be effectively used for
facilitating self-supervised pre-training. To this end, we propose a novel
problem of distilling an unlabeled dataset into a set of small synthetic
samples for efficient self-supervised learning (SSL). We first prove that a
gradient of synthetic samples with respect to an SSL objective in naive bilevel
optimization is \textit{biased} due to the randomness originating from data
augmentations or masking. To address this issue, we propose to minimize the
mean squared error (MSE) between a model's representations of the synthetic
examples and their corresponding learnable target feature representations for
the inner objective, which does not introduce any randomness. Our primary
motivation is that the model obtained by the proposed inner optimization can
mimic the \textit{self-supervised target model}. To achieve this, we also
introduce the MSE between representations of the inner model and the
self-supervised target model on the original full dataset for outer
optimization. Lastly, assuming that a feature extractor is fixed, we only
optimize a linear head on top of the feature extractor, which allows us to
reduce the computational cost and obtain a closed-form solution of the head
with kernel ridge regression. We empirically validate the effectiveness of our
method on various applications involving transfer learning. | [
"Dong Bok Lee",
"Seanie Lee",
"Joonho Ko",
"Kenji Kawaguchi",
"Juho Lee",
"Sung Ju Hwang"
] | 2023-10-10 10:48:52 | http://arxiv.org/abs/2310.06511v2 | http://arxiv.org/pdf/2310.06511v2 | 2310.06511v2 |
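One concrete piece of the abstract above is the closed-form kernel ridge regression head fitted on top of a frozen feature extractor. The sketch below illustrates that closed form under illustrative assumptions (random features and targets, a linear kernel, an arbitrary regularisation strength); it is not the authors' full bilevel procedure.

import numpy as np

rng = np.random.default_rng(0)
n, d, c = 128, 64, 10                 # samples, feature dim, target dim (made up)
features = rng.normal(size=(n, d))    # outputs of the frozen feature extractor
targets = rng.normal(size=(n, c))     # learnable target representations
lam = 1e-3                            # ridge regularisation strength

# Kernel ridge regression in dual form: alpha = (K + lam * I)^{-1} Y
K = features @ features.T             # linear kernel matrix, shape (n, n)
alpha = np.linalg.solve(K + lam * np.eye(n), targets)

def head(x):
    # Predict targets for new features x via kernel evaluations against the training features.
    return (x @ features.T) @ alpha

print(head(features[:2]).shape)       # (2, 10)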
RK-core: An Established Methodology for Exploring the Hierarchical Structure within Datasets | Recently, the field of machine learning has undergone a transition from
model-centric to data-centric. The advancements in diverse learning tasks have
been propelled by the accumulation of more extensive datasets, subsequently
facilitating the training of larger models on these datasets. However, these
datasets remain relatively under-explored. To this end, we introduce a
pioneering approach, known as RK-core, to enable a deeper understanding
of the intricate hierarchical structure within datasets. Across several
benchmark datasets, we find that samples with low coreness values appear less
representative of their respective categories, and conversely, those with high
coreness values exhibit greater representativeness. Correspondingly, samples
with high coreness values make a more substantial contribution to the
performance in comparison to those with low coreness values. Building upon
this, we further employ RK-core to analyze the hierarchical structure of
samples with different coreset selection methods. Remarkably, we find that a
high-quality coreset should exhibit hierarchical diversity instead of solely
opting for representative samples. The code is available at
https://github.com/yaolu-zjut/Kcore. | [
"Yao Lu",
"Yutian Huang",
"Jiaqi Nie",
"Zuohui Chen",
"Qi Xuan"
] | 2023-10-10 10:48:27 | http://arxiv.org/abs/2310.12168v1 | http://arxiv.org/pdf/2310.12168v1 | 2310.12168v1 |
Runway Sign Classifier: A DAL C Certifiable Machine Learning System | In recent years, the remarkable progress of Machine Learning (ML)
technologies within the domain of Artificial Intelligence (AI) systems has
presented unprecedented opportunities for the aviation industry, paving the way
for further advancements in automation, including the potential for single
pilot or fully autonomous operation of large commercial airplanes. However, ML
technology faces major incompatibilities with existing airborne certification
standards, such as ML model traceability and explainability issues or the
inadequacy of traditional coverage metrics. Certification of ML-based airborne
systems using current standards is problematic due to these challenges. This
paper presents a case study of an airborne system utilizing a Deep Neural
Network (DNN) for airport sign detection and classification. Building upon our
previous work, which demonstrates compliance with Design Assurance Level (DAL)
D, we upgrade the system to meet the more stringent requirements of Design
Assurance Level C. To achieve DAL C, we employ an established architectural
mitigation technique involving two redundant and dissimilar Deep Neural
Networks. The application of novel ML-specific data management techniques
further enhances this approach. This work is intended to illustrate how the
certification challenges of ML-based systems can be addressed for medium
criticality airborne applications. | [
"Konstantin Dmitriev",
"Johann Schumann",
"Islam Bostanov",
"Mostafa Abdelhamid",
"Florian Holzapfel"
] | 2023-10-10 10:26:30 | http://arxiv.org/abs/2310.06506v1 | http://arxiv.org/pdf/2310.06506v1 | 2310.06506v1 |
Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task | With the increasing capabilities of large language models (LLMs), these
high-performance models have achieved state-of-the-art results on a wide range
of natural language processing (NLP) tasks. However, the models' performance on
commonly-used benchmark datasets often fails to accurately reflect their
reliability and robustness when applied to real-world noisy data. To address
these challenges, we propose a unified robustness evaluation framework based on
the slot-filling task to systematically evaluate the dialogue understanding
capability of LLMs in diverse input perturbation scenarios. Specifically, we
construct an input perturbation evaluation dataset, Noise-LLM, which contains
five types of single perturbation and four types of mixed perturbation data.
Furthermore, we utilize a multi-level data augmentation method (character,
word, and sentence levels) to construct a candidate data pool, and carefully
design two automatic task demonstration construction strategies
(instance-level and entity-level) with various prompt templates. Our aim is to
assess how well various robustness methods of LLMs perform in real-world noisy
scenarios. The experiments have demonstrated that the current open-source LLMs
generally achieve limited perturbation robustness performance. Based on these
experimental observations, we make some forward-looking suggestions to fuel the
research in this direction. | [
"Guanting Dong",
"Jinxu Zhao",
"Tingfeng Hui",
"Daichi Guo",
"Wenlong Wan",
"Boqi Feng",
"Yueyan Qiu",
"Zhuoma Gongque",
"Keqing He",
"Zechen Wang",
"Weiran Xu"
] | 2023-10-10 10:22:05 | http://arxiv.org/abs/2310.06504v1 | http://arxiv.org/pdf/2310.06504v1 | 2310.06504v1 |
Deep Learning for Automatic Detection and Facial Recognition in Japanese Macaques: Illuminating Social Networks | Individual identification plays a pivotal role in ecology and ethology,
notably as a tool for complex social structures understanding. However,
traditional identification methods often involve invasive physical tags and can
prove both disruptive for animals and time-intensive for researchers. In recent
years, the integration of deep learning into research has offered new methodological perspectives through the automation of complex tasks. Harnessing object
detection and recognition technologies is increasingly used by researchers to
achieve identification on video footage. This study represents a preliminary
exploration into the development of a non-invasive tool for face detection and
individual identification of Japanese macaques (Macaca fuscata) through deep
learning. The ultimate goal of this research is, using identifications done on
the dataset, to automatically generate a social network representation of the
studied population. The current main results are promising: (i) the creation of
a Japanese macaques' face detector (Faster-RCNN model), reaching an 82.2% accuracy, and (ii) the creation of an individual recognizer for the K{\=o}jima island macaque population (YOLOv8n model), reaching an 83% accuracy. We also
created a K{\=o}jima population social network by traditional methods, based on
co-occurrences on videos. Thus, we provide a benchmark against which the
automatically generated network will be assessed for reliability. These
preliminary results are a testament to the potential of this innovative
approach to provide the scientific community with a tool for tracking
individuals and social network studies in Japanese macaques. | [
"Julien Paulet",
"Axel Molina",
"Benjamin Beltzung",
"Takafumi Suzumura",
"Shinya Yamamoto",
"Cédric Sueur"
] | 2023-10-10 09:57:19 | http://arxiv.org/abs/2310.06489v1 | http://arxiv.org/pdf/2310.06489v1 | 2310.06489v1 |
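The "traditional" co-occurrence network mentioned in the abstract above can be built as in the sketch below: individuals appearing in the same video share an edge whose weight counts their co-occurrences. The individuals, video contents, and the use of networkx are illustrative assumptions.

import itertools
import networkx as nx

videos = [                 # made-up detections: which individuals appear in each video
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "D"},
]

G = nx.Graph()
for individuals in videos:
    for u, v in itertools.combinations(sorted(individuals), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1     # one more co-occurrence between u and v
        else:
            G.add_edge(u, v, weight=1)

print(list(G.edges(data=True)))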
SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network | Spiking neural networks (SNNs) have demonstrated the capability to achieve
comparable performance to deep neural networks (DNNs) in both visual and
linguistic domains while offering the advantages of improved energy efficiency
and adherence to biological plausibility. However, the extension of such
single-modality SNNs into the realm of multimodal scenarios remains an
unexplored territory. Drawing inspiration from the concept of contrastive
language-image pre-training (CLIP), we introduce a novel framework, named
SpikeCLIP, to address the gap between two modalities within the context of
spike-based computing through a two-step recipe involving ``Alignment
Pre-training + Dual-Loss Fine-tuning''. Extensive experiments demonstrate that
SNNs achieve comparable results to their DNN counterparts while significantly
reducing energy consumption across a variety of datasets commonly used for
multimodal model evaluation. Furthermore, SpikeCLIP maintains robust
performance in image classification tasks that involve class labels not
predefined within specific categories. | [
"Tianlong Li",
"Wenhao Liu",
"Changze Lv",
"Jianhan Xu",
"Cenyuan Zhang",
"Muling Wu",
"Xiaoqing Zheng",
"Xuanjing Huang"
] | 2023-10-10 09:57:17 | http://arxiv.org/abs/2310.06488v2 | http://arxiv.org/pdf/2310.06488v2 | 2310.06488v2 |
Variance Reduced Online Gradient Descent for Kernelized Pairwise Learning with Limited Memory | Pairwise learning is essential in machine learning, especially for problems
involving loss functions defined on pairs of training examples. Online gradient
descent (OGD) algorithms have been proposed to handle online pairwise learning,
where data arrives sequentially. However, the pairwise nature of the problem
makes scalability challenging, as the gradient computation for a new sample
involves all past samples. Recent advancements in OGD algorithms have aimed to
reduce the complexity of calculating online gradients, achieving complexities
less than $O(T)$ and even as low as $O(1)$. However, these approaches are
primarily limited to linear models and suffer from induced variance. In this study, we
propose a limited memory OGD algorithm that extends to kernel online pairwise
learning while improving the sublinear regret. Specifically, we establish a
clear connection between the variance of online gradients and the regret, and
construct online gradients using the most recent stratified samples with a
limited buffer of size $s$ representing all past data, which has a
complexity of $O(sT)$ and employs $O(\sqrt{T}\log{T})$ random Fourier features
for kernel approximation. Importantly, our theoretical results demonstrate that
the variance-reduced online gradients lead to an improved sublinear regret
bound. The experiments on real-world datasets demonstrate the superiority of
our algorithm over both kernelized and linear online pairwise learning
algorithms. | [
"Hilal AlQuabeh",
"Bhaskar Mukhoty",
"Bin Gu"
] | 2023-10-10 09:50:54 | http://arxiv.org/abs/2310.06483v1 | http://arxiv.org/pdf/2310.06483v1 | 2310.06483v1 |
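A simplified sketch of the ingredients named in the abstract above: a limited buffer of the most recent samples and random Fourier features (RFF) approximating an RBF kernel for online pairwise learning. The squared pairwise loss, the first-in-first-out buffer, and the hyperparameters are illustrative assumptions, not the authors' stratified-sampling algorithm.

import numpy as np

rng = np.random.default_rng(0)
d, D, s, lr = 5, 100, 16, 0.1           # input dim, RFF dim, buffer size, step size
W = rng.normal(size=(D, d))             # RFF frequencies for an RBF kernel
b = rng.uniform(0, 2 * np.pi, size=D)

def rff(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)   # approximate kernel feature map

w = np.zeros(D)                         # linear model in the RFF feature space
buffer = []                             # limited memory of (features, label) pairs

for t in range(200):                    # simulated data stream
    x, y = rng.normal(size=d), float(rng.choice([-1.0, 1.0]))
    z = rff(x)
    if buffer:
        # Pairwise squared loss against buffered samples: ((f(x) - f(x')) - (y - y'))^2
        grads = [2 * ((w @ (z - zb)) - (y - yb)) * (z - zb) for zb, yb in buffer]
        w -= lr * np.mean(grads, axis=0)
    buffer.append((z, y))
    if len(buffer) > s:                 # keep only the s most recent samples
        buffer.pop(0)

print(w[:3])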
An improved CTGAN for data processing method of imbalanced disk failure | This work addresses the problem of insufficient disk failure data and the imbalance between the number of normal and failure samples. Existing Conditional Tabular Generative Adversarial Network (CTGAN) deep learning methods have been proven effective for imbalanced disk failure data, but CTGAN cannot learn the internal information of disk failure data very well. In this paper, we propose a fault diagnosis method based on an improved CTGAN that adds a classifier for specific category discrimination and builds its discriminator-generator adversarial network on a residual network. We name it Residual Conditional Tabular Generative Adversarial Network (RCTGAN). Firstly, a residual network is utilized to enhance the stability of the system. RCTGAN uses a small amount of real failure data to synthesize fake fault data; then, the synthesized data is mixed with the real data to balance the amount of normal and failure data; finally, four classifier models (multilayer perceptron, support vector machine, decision tree, random forest) are trained using the
The experimental results show that the data synthesized by the RCTGAN can
further improve the fault diagnosis accuracy of the classifier. | [
"Jingbo Jia",
"Peng Wu",
"Hussain Dawood"
] | 2023-10-10 09:49:06 | http://arxiv.org/abs/2310.06481v1 | http://arxiv.org/pdf/2310.06481v1 | 2310.06481v1 |
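The G-mean used for evaluation in the abstract above is the geometric mean of sensitivity and specificity, which is insensitive to class imbalance. A minimal computation with made-up predictions (1 = failed disk):

import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1, 0, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))

sensitivity = tp / (tp + fn)          # recall on the failure class
specificity = tn / (tn + fp)          # recall on the healthy class
g_mean = np.sqrt(sensitivity * specificity)
print(f"G-mean = {g_mean:.3f}")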
Understanding the Effects of RLHF on LLM Generalisation and Diversity | Large language models (LLMs) fine-tuned with reinforcement learning from
human feedback (RLHF) have been used in some of the most widely deployed AI
models to date, such as OpenAI's ChatGPT, Anthropic's Claude, or Meta's
LLaMA-2. While there has been significant work developing these methods, our
understanding of the benefits and downsides of each stage in RLHF is still
limited. To fill this gap, we present an extensive analysis of how each stage
of the process (i.e. supervised fine-tuning (SFT), reward modelling, and RLHF)
affects two key properties: out-of-distribution (OOD) generalisation and output
diversity. OOD generalisation is crucial given the wide range of real-world
scenarios in which these models are being used, while output diversity refers
to the model's ability to generate varied outputs and is important for a
variety of use cases. We perform our analysis across two base models on both
summarisation and instruction following tasks, the latter being highly relevant
for current LLM use cases. We find that RLHF generalises better than SFT to new
inputs, particularly as the distribution shift between train and test becomes
larger. However, RLHF significantly reduces output diversity compared to SFT
across a variety of measures, implying a tradeoff in current LLM fine-tuning
methods between generalisation and diversity. Our results provide guidance on
which fine-tuning method should be used depending on the application, and show
that more research is needed to improve the trade-off between generalisation
and diversity. | [
"Robert Kirk",
"Ishita Mediratta",
"Christoforos Nalmpantis",
"Jelena Luketina",
"Eric Hambro",
"Edward Grefenstette",
"Roberta Raileanu"
] | 2023-10-10 09:25:44 | http://arxiv.org/abs/2310.06452v1 | http://arxiv.org/pdf/2310.06452v1 | 2310.06452v1 |
Asynchronous Federated Learning with Incentive Mechanism Based on Contract Theory | To address the challenges posed by the heterogeneity inherent in federated
learning (FL) and to attract high-quality clients, various incentive mechanisms
have been employed. However, existing incentive mechanisms are typically
utilized in conventional synchronous aggregation, resulting in significant
straggler issues. In this study, we propose a novel asynchronous FL framework
that integrates an incentive mechanism based on contract theory. Within the
incentive mechanism, we strive to maximize the utility of the task publisher by
adaptively adjusting clients' local model training epochs, taking into account
factors such as time delay and test accuracy. In the asynchronous scheme,
considering client quality, we devise aggregation weights and an access control
algorithm to facilitate asynchronous aggregation. Through experiments conducted
on the MNIST dataset, the simulation results demonstrate that the test accuracy
achieved by our framework is 3.12% and 5.84% higher than that achieved by
FedAvg and FedProx without any attacks, respectively. The framework exhibits a
1.35% accuracy improvement over the ideal Local SGD under attacks. Furthermore,
aiming for the same target accuracy, our framework demands notably less
computation time than both FedAvg and FedProx. | [
"Danni Yang",
"Yun Ji",
"Zhoubin Kou",
"Xiaoxiong Zhong",
"Sheng Zhang"
] | 2023-10-10 09:17:17 | http://arxiv.org/abs/2310.06448v1 | http://arxiv.org/pdf/2310.06448v1 | 2310.06448v1 |
Rule Mining for Correcting Classification Models | Machine learning models need to be continually updated or corrected to ensure
that the prediction accuracy remains consistently high. In this study, we
consider scenarios where developers must be careful about changing the prediction results through model correction, such as when the model is part of a complex
system or software. In such scenarios, the developers want to control the
specification of the corrections. To achieve this, the developers need to
understand which subpopulations of the inputs get inaccurate predictions by the
model. Therefore, we propose correction rule mining to acquire a comprehensive
list of rules that describe inaccurate subpopulations and how to correct them.
We also develop an efficient correction rule mining algorithm that is a
combination of frequent itemset mining and a unique pruning technique for
correction rules. We observed that the proposed algorithm found various rules
which help to collect data that was insufficiently learned, directly correct model
outputs, and analyze concept drift. | [
"Hirofumi Suzuki",
"Hiroaki Iwashita",
"Takuya Takagi",
"Yuta Fujishige",
"Satoshi Hara"
] | 2023-10-10 09:17:12 | http://arxiv.org/abs/2310.06446v2 | http://arxiv.org/pdf/2310.06446v2 | 2310.06446v2 |
Skeleton Ground Truth Extraction: Methodology, Annotation Tool and Benchmarks | Skeleton Ground Truth (GT) is critical to the success of supervised skeleton
extraction methods, especially with the popularity of deep learning techniques.
Furthermore, we see skeleton GTs used not only for training skeleton detectors
with Convolutional Neural Networks (CNN) but also for evaluating
skeleton-related pruning and matching algorithms. However, most existing shape
and image datasets suffer from the lack of skeleton GT and inconsistency of GT
standards. As a result, it is difficult to evaluate and reproduce CNN-based
skeleton detectors and algorithms on a fair basis. In this paper, we present a
heuristic strategy for object skeleton GT extraction in binary shapes and
natural images. Our strategy is built on an extended theory of diagnosticity
hypothesis, which enables encoding human-in-the-loop GT extraction based on
clues from the target's context, simplicity, and completeness. Using this
strategy, we developed a tool, SkeView, to generate skeleton GT of 17 existing
shape and image datasets. The GTs are then structurally evaluated with
representative methods to build viable baselines for fair comparisons.
Experiments demonstrate that GTs generated by our strategy yield promising
quality with respect to standard consistency, and also provide a balance
between simplicity and completeness. | [
"Cong Yang",
"Bipin Indurkhya",
"John See",
"Bo Gao",
"Yan Ke",
"Zeyd Boukhers",
"Zhenyu Yang",
"Marcin Grzegorzek"
] | 2023-10-10 09:06:39 | http://arxiv.org/abs/2310.06437v1 | http://arxiv.org/pdf/2310.06437v1 | 2310.06437v1 |
Conformal Prediction for Deep Classifier via Label Ranking | Conformal prediction is a statistical framework that generates prediction
sets containing ground-truth labels with a desired coverage guarantee. The
predicted probabilities produced by machine learning models are generally
miscalibrated, leading to large prediction sets in conformal prediction. In
this paper, we empirically and theoretically show that disregarding the
probabilities' value will mitigate the undesirable effect of miscalibrated
probability values. Then, we propose a novel algorithm named $\textit{Sorted
Adaptive prediction sets}$ (SAPS), which discards all the probability values
except for the maximum softmax probability. The key idea behind SAPS is to
minimize the dependence of the non-conformity score on the probability values
while retaining the uncertainty information. In this manner, SAPS can produce
sets of small size and communicate instance-wise uncertainty. Theoretically, we
provide a finite-sample coverage guarantee of SAPS and show that the expected
value of the set size from SAPS is always smaller than that of APS. Extensive experiments validate that SAPS not only reduces the size of prediction sets but also broadly
enhances the conditional coverage rate and adaptation of prediction sets. | [
"Jianguo Huang",
"Huajun Xi",
"Linjun Zhang",
"Huaxiu Yao",
"Yue Qiu",
"Hongxin Wei"
] | 2023-10-10 08:54:14 | http://arxiv.org/abs/2310.06430v1 | http://arxiv.org/pdf/2310.06430v1 | 2310.06430v1 |
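For context, the sketch below shows a generic split-conformal procedure of the kind the abstract above builds on: calibrate a threshold on held-out data so that prediction sets cover the true label with probability roughly 1 - alpha. The non-conformity score used here (one minus the true-class probability) is a standard baseline, not the SAPS score, and the softmax outputs are simulated.

import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 500, 10, 0.1

probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)   # calibration softmax outputs
labels_cal = rng.integers(0, n_classes, size=n_cal)

scores = 1.0 - probs_cal[np.arange(n_cal), labels_cal]      # non-conformity of the true labels
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, q_level, method="higher")       # calibrated threshold

probs_test = rng.dirichlet(np.ones(n_classes))              # one simulated test point
prediction_set = np.where(1.0 - probs_test <= q_hat)[0]     # labels kept in the prediction set
print(prediction_set)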
TANGO: Time-Reversal Latent GraphODE for Multi-Agent Dynamical Systems | Learning complex multi-agent system dynamics from data is crucial across many
domains, such as in physical simulations and material modeling. Extended from
purely data-driven approaches, existing physics-informed approaches such as
Hamiltonian Neural Networks strictly follow the energy conservation law to introduce inductive bias, making their learning more sample-efficient. However, many
real-world systems do not strictly conserve energy, such as spring systems with
friction. Recognizing this, we turn our attention to a broader physical
principle: Time-Reversal Symmetry, which depicts that the dynamics of a system
shall remain invariant when traversed back over time. It still helps to
preserve energy for conservative systems and, in the meantime, serves as a
strong inductive bias for non-conservative, reversible systems. To inject such
inductive bias, in this paper, we propose a simple-yet-effective
self-supervised regularization term as a soft constraint that aligns the
forward and backward trajectories predicted by a continuous graph neural
network-based ordinary differential equation (GraphODE). It effectively imposes
time-reversal symmetry to enable more accurate model predictions across a wider
range of dynamical systems under classical mechanics. In addition, we further
provide theoretical analysis to show that our regularization essentially
minimizes higher-order Taylor expansion terms during the ODE integration steps,
which enables our model to be more noise-tolerant and even applicable to
irreversible systems. Experimental results on a variety of physical systems
demonstrate the effectiveness of our proposed method. Particularly, it achieves
an MSE improvement of 11.5% on a challenging chaotic triple-pendulum system. | [
"Zijie Huang",
"Wanjia Zhao",
"Jingdong Gao",
"Ziniu Hu",
"Xiao Luo",
"Yadi Cao",
"Yuanzhou Chen",
"Yizhou Sun",
"Wei Wang"
] | 2023-10-10 08:52:16 | http://arxiv.org/abs/2310.06427v1 | http://arxiv.org/pdf/2310.06427v1 | 2310.06427v1 |
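A toy sketch of a time-reversal penalty in the spirit of the abstract above: integrate dynamics forward, integrate backward in time from the final state, and penalise the mismatch between the two trajectories. The harmonic-oscillator vector field and the explicit Euler integrator are illustrative stand-ins, not the paper's GraphODE model.

import numpy as np

def dynamics(x):
    # Stand-in for a learned vector field: a simple harmonic oscillator (q, p).
    q, p = x
    return np.array([p, -q])

def rollout(x0, dt, steps, sign=1.0):
    traj, x = [x0.copy()], x0.copy()
    for _ in range(steps):
        x = x + sign * dt * dynamics(x)   # Euler step forward (+dt) or backward (-dt)
        traj.append(x.copy())
    return np.stack(traj)

x0, dt, steps = np.array([1.0, 0.0]), 0.05, 100
forward = rollout(x0, dt, steps, sign=+1.0)
backward = rollout(forward[-1], dt, steps, sign=-1.0)[::-1]   # reversed to align in time

reversal_loss = np.mean((forward - backward) ** 2)            # soft time-reversal penalty
print(f"time-reversal penalty: {reversal_loss:.6f}")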
Advective Diffusion Transformers for Topological Generalization in Graph Learning | Graph diffusion equations are intimately related to graph neural networks
(GNNs) and have recently attracted attention as a principled framework for
analyzing GNN dynamics, formalizing their expressive power, and justifying
architectural choices. One key open question in graph learning is the generalization capability of GNNs. A major limitation of current approaches stems from the assumption that the graph topologies in the training and test
sets come from the same distribution. In this paper, we make steps towards
understanding the generalization of GNNs by exploring how graph diffusion
equations extrapolate and generalize in the presence of varying graph
topologies. We first show deficiencies in the generalization capability of
existing models built upon local diffusion on graphs, stemming from the
exponential sensitivity to topology variation. Our subsequent analysis reveals
the promise of non-local diffusion, which advocates for feature propagation
over fully-connected latent graphs, under the assumption of a specific
data-generating condition. In addition to these findings, we propose a novel
graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by
advective graph diffusion equations that have a closed-form solution backed up
with theoretical guarantees of desired generalization under topological
distribution shifts. The new model, functioning as a versatile graph
Transformer, demonstrates superior performance across a wide range of graph
learning tasks. | [
"Qitian Wu",
"Chenxiao Yang",
"Kaipeng Zeng",
"Fan Nie",
"Michael Bronstein",
"Junchi Yan"
] | 2023-10-10 08:40:47 | http://arxiv.org/abs/2310.06417v1 | http://arxiv.org/pdf/2310.06417v1 | 2310.06417v1 |
Deep reinforcement learning uncovers processes for separating azeotropic mixtures without prior knowledge | Process synthesis in chemical engineering is a complex planning problem due
to vast search spaces, continuous parameters and the need for generalization.
Deep reinforcement learning agents, trained without prior knowledge, have been shown
to outperform humans in various complex planning problems in recent years.
Existing work on reinforcement learning for flowsheet synthesis shows promising
concepts, but focuses on narrow problems in a single chemical system, limiting
its practicality. We present a general deep reinforcement learning approach for
flowsheet synthesis. We demonstrate the adaptability of a single agent to the
general task of separating binary azeotropic mixtures. Without prior knowledge,
it learns to craft near-optimal flowsheets for multiple chemical systems,
considering different feed compositions and conceptual approaches. On average,
the agent can separate more than 99% of the involved materials into pure
components, while autonomously learning fundamental process engineering
paradigms. This highlights the agent's planning flexibility, an encouraging
step toward true generality. | [
"Quirin Göttl",
"Jonathan Pirnay",
"Jakob Burger",
"Dominik G. Grimm"
] | 2023-10-10 08:36:21 | http://arxiv.org/abs/2310.06415v1 | http://arxiv.org/pdf/2310.06415v1 | 2310.06415v1 |
Hexa: Self-Improving for Knowledge-Grounded Dialogue System | A common practice in knowledge-grounded dialogue generation is to explicitly
utilize intermediate steps (e.g., web-search, memory retrieval) with modular
approaches. However, data for such steps are often inaccessible compared to
those of dialogue responses as they are unobservable in an ordinary dialogue.
To fill in the absence of these data, we develop a self-improving method to
improve the generative performances of intermediate steps without the ground
truth data. In particular, we propose a novel bootstrapping scheme with a
guided prompt and a modified loss function to enhance the diversity of
appropriate self-generated responses. Through experiments on various benchmark
datasets, we empirically demonstrate that our method successfully leverages a
self-improving mechanism in generating intermediate and final responses and
improves the performances on the task of knowledge-grounded dialogue
generation. | [
"Daejin Jo",
"Daniel Wontae Nam",
"Gunsoo Han",
"Kyoung-Woon On",
"Taehwan Kwon",
"Seungeun Rho",
"Sungwoong Kim"
] | 2023-10-10 08:15:24 | http://arxiv.org/abs/2310.06404v2 | http://arxiv.org/pdf/2310.06404v2 | 2310.06404v2 |
Lo-Hi: Practical ML Drug Discovery Benchmark | Finding new drugs is getting harder and harder. One of the hopes of drug
discovery is to use machine learning models to predict molecular properties.
That is why models for molecular property prediction are being developed and
tested on benchmarks such as MoleculeNet. However, existing benchmarks are
unrealistic and are too different from applying the models in practice. We have
created a new practical \emph{Lo-Hi} benchmark consisting of two tasks: Lead
Optimization (Lo) and Hit Identification (Hi), corresponding to the real drug
discovery process. For the Hi task, we designed a novel molecular splitting
algorithm that solves the Balanced Vertex Minimum $k$-Cut problem. We tested
state-of-the-art and classic ML models, revealing which works better under
practical settings. We analyzed modern benchmarks and showed that they are
unrealistic and overoptimistic.
Review: https://openreview.net/forum?id=H2Yb28qGLV
Lo-Hi benchmark: https://github.com/SteshinSS/lohi_neurips2023
Lo-Hi splitter library: https://github.com/SteshinSS/lohi_splitter | [
"Simon Steshin"
] | 2023-10-10 08:06:32 | http://arxiv.org/abs/2310.06399v1 | http://arxiv.org/pdf/2310.06399v1 | 2310.06399v1 |
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach | Graph neural networks (GNNs) are vulnerable to adversarial perturbations,
including those that affect both node features and graph topology. This paper
investigates GNNs derived from diverse neural flows, concentrating on their
connection to various stability notions such as BIBO stability, Lyapunov
stability, structural stability, and conservative stability. We argue that
Lyapunov stability, despite its common use, does not necessarily ensure
adversarial robustness. Inspired by physics principles, we advocate for the use
of conservative Hamiltonian neural flows to construct GNNs that are robust to
adversarial attacks. The adversarial robustness of different neural flow GNNs
is empirically compared on several benchmark datasets under a variety of
adversarial attacks. Extensive numerical experiments demonstrate that GNNs
leveraging conservative Hamiltonian flows with Lyapunov stability substantially
improve robustness against adversarial perturbations. The implementation code
of experiments is available at
https://github.com/zknus/NeurIPS-2023-HANG-Robustness. | [
"Kai Zhao",
"Qiyu Kang",
"Yang Song",
"Rui She",
"Sijie Wang",
"Wee Peng Tay"
] | 2023-10-10 07:59:23 | http://arxiv.org/abs/2310.06396v1 | http://arxiv.org/pdf/2310.06396v1 | 2310.06396v1 |
Harnessing Administrative Data Inventories to Create a Reliable Transnational Reference Database for Crop Type Monitoring | Leaps in machine learning techniques and their application to Earth observation challenges have unlocked unprecedented performance across the
domain. While the further development of these methods was previously limited
by the availability and volume of sensor data and computing resources, the lack
of adequate reference data now constitutes a new bottleneck. Since creating
such ground-truth information is an expensive and error-prone task, new ways
must be devised to source reliable, high-quality reference data on large
scales. As an example, we showcase EuroCrops, a reference dataset for crop
type classification that aggregates and harmonizes administrative data surveyed
in different countries with the goal of transnational interoperability. | [
"Maja Schneider",
"Marco Körner"
] | 2023-10-10 07:57:00 | http://arxiv.org/abs/2310.06393v1 | http://arxiv.org/pdf/2310.06393v1 | 2310.06393v1 |
Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations | Large Language Models (LLMs) have shown remarkable success in various tasks,
but concerns about their safety and the potential for generating malicious
content have emerged. In this paper, we explore the power of In-Context
Learning (ICL) in manipulating the alignment ability of LLMs. We find that by
providing just a few in-context demonstrations without fine-tuning, LLMs can be
manipulated to increase or decrease the probability of jailbreaking, i.e.
answering malicious prompts. Based on these observations, we propose In-Context
Attack (ICA) and In-Context Defense (ICD) methods for jailbreaking and guarding
aligned language models, respectively. ICA crafts malicious contexts to guide models in generating harmful outputs, while ICD enhances model robustness with demonstrations of refusing to answer harmful prompts. Our experiments show the
effectiveness of ICA and ICD in increasing or reducing the success rate of
adversarial jailbreaking attacks. Overall, we shed light on the potential of
ICL to influence LLM behavior and provide a new perspective for enhancing the
safety and alignment of LLMs. | [
"Zeming Wei",
"Yifei Wang",
"Yisen Wang"
] | 2023-10-10 07:50:29 | http://arxiv.org/abs/2310.06387v1 | http://arxiv.org/pdf/2310.06387v1 | 2310.06387v1 |
CAST: Cluster-Aware Self-Training for Tabular Data | Self-training has gained traction because of its simplicity and
versatility, yet it is vulnerable to noisy pseudo-labels. Several studies have
proposed successful approaches to tackle this issue, but they have diminished
the advantages of self-training because they require specific modifications in
self-training algorithms or model architectures. Furthermore, most of them are
incompatible with gradient boosting decision trees, which dominate the tabular
domain. To address this, we revisit the cluster assumption, which states that
data samples that are close to each other tend to belong to the same class.
Inspired by the assumption, we propose Cluster-Aware Self-Training (CAST) for
tabular data. CAST is a simple and universally adaptable approach for enhancing
existing self-training algorithms without significant modifications.
Concretely, our method regularizes the confidence of the classifier, which
represents the value of the pseudo-label, forcing the pseudo-labels in
low-density regions to have lower confidence by leveraging prior knowledge for
each class within the training data. Extensive empirical evaluations on up to
20 real-world datasets confirm not only the superior performance of CAST but
also its robustness in various setups in self-training contexts. | [
"Minwook Kim",
"Juseong Kim",
"Kibeom Kim",
"Donggil Kang",
"Giltae Song"
] | 2023-10-10 07:46:54 | http://arxiv.org/abs/2310.06380v1 | http://arxiv.org/pdf/2310.06380v1 | 2310.06380v1 |
Initialization Bias of Fourier Neural Operator: Revisiting the Edge of Chaos | This paper investigates the initialization bias of the Fourier neural
operator (FNO). A mean-field theory for FNO is established, analyzing the
behavior of the random FNO from an ``edge of chaos'' perspective. We uncover
that the forward and backward propagation behaviors exhibit characteristics
unique to FNO, induced by mode truncation, while also showcasing similarities
to those of densely connected networks. Building upon this observation, we also
propose an FNO version of the He initialization scheme to mitigate the negative
initialization bias leading to training instability. Experimental results
demonstrate the effectiveness of our initialization scheme, enabling stable
training of a 32-layer FNO without the need for additional techniques or
significant performance degradation. | [
"Takeshi Koshizuka",
"Masahiro Fujisawa",
"Yusuke Tanaka",
"Issei Sato"
] | 2023-10-10 07:43:41 | http://arxiv.org/abs/2310.06379v1 | http://arxiv.org/pdf/2310.06379v1 | 2310.06379v1 |