| title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
Extreme Parkour with Legged Robots | Humans can perform parkour by traversing obstacles in a highly dynamic
fashion requiring precise eye-muscle coordination and movement. Getting robots
to do the same task requires overcoming similar challenges. Classically, this
is done by independently engineering perception, actuation, and control systems
to very low tolerances. This restricts them to tightly controlled settings such
as a predetermined obstacle course in labs. In contrast, humans are able to
learn parkour through practice without significantly changing their underlying
biology. In this paper, we take a similar approach to developing robot parkour
on a small low-cost robot with imprecise actuation and a single front-facing
depth camera for perception which is low-frequency, jittery, and prone to
artifacts. We show how a single neural net policy operating directly from a
camera image, trained in simulation with large-scale RL, can overcome imprecise
sensing and actuation to output highly precise control behavior end-to-end. We
show our robot can perform a high jump on obstacles 2x its height, long jump
across gaps 2x its length, do a handstand and run across tilted ramps, and
generalize to novel obstacle courses with different physical properties.
Parkour videos at https://extreme-parkour.github.io/ | [
"Xuxin Cheng",
"Kexin Shi",
"Ananye Agarwal",
"Deepak Pathak"
] | 2023-09-25 17:59:55 | http://arxiv.org/abs/2309.14341v1 | http://arxiv.org/pdf/2309.14341v1 | 2309.14341v1 |
UnitedHuman: Harnessing Multi-Source Data for High-Resolution Human Generation | Human generation has achieved significant progress. Nonetheless, existing
methods still struggle to synthesize specific regions such as faces and hands.
We argue that the main reason is rooted in the training data. A holistic human
dataset inevitably has insufficient and low-resolution information on local
parts. Therefore, we propose to use multi-source datasets with various
resolution images to jointly learn a high-resolution human generative model.
However, multi-source data inherently a) contains different parts that do not
spatially align into a coherent human, and b) comes with different scales. To
tackle these challenges, we propose an end-to-end framework, UnitedHuman, that
empowers continuous GAN with the ability to effectively utilize multi-source
data for high-resolution human generation. Specifically, 1) we design a
Multi-Source Spatial Transformer that spatially aligns multi-source images to
full-body space with a human parametric model. 2) Next, a continuous GAN is
proposed with global-structural guidance and CutMix consistency. Patches from
different datasets are then sampled and transformed to supervise the training
of this scale-invariant generative model. Extensive experiments demonstrate
that our model jointly learned from multi-source data achieves superior quality
to models learned from a holistic dataset. | [
"Jianglin Fu",
"Shikai Li",
"Yuming Jiang",
"Kwan-Yee Lin",
"Wayne Wu",
"Ziwei Liu"
] | 2023-09-25 17:58:46 | http://arxiv.org/abs/2309.14335v1 | http://arxiv.org/pdf/2309.14335v1 | 2309.14335v1 |
Tasks Makyth Models: Machine Learning Assisted Surrogates for Tipping Points | We present a machine learning (ML)-assisted framework bridging manifold
learning, neural networks, Gaussian processes, and Equation-Free multiscale
modeling, for (a) detecting tipping points in the emergent behavior of complex
systems, and (b) characterizing probabilities of rare events (here,
catastrophic shifts) near them. Our illustrative example is an event-driven,
stochastic agent-based model (ABM) describing the mimetic behavior of traders
in a simple financial market. Given high-dimensional spatiotemporal data --
generated by the stochastic ABM -- we construct reduced-order models for the
emergent dynamics at different scales: (a) mesoscopic Integro-Partial
Differential Equations (IPDEs); and (b) mean-field-type Stochastic Differential
Equations (SDEs) embedded in a low-dimensional latent space, targeted to the
neighborhood of the tipping point. We contrast the uses of the different models
and the effort involved in learning them. | [
"Gianluca Fabiani",
"Nikolaos Evangelou",
"Tianqi Cui",
"Juan M. Bello-Rivas",
"Cristina P. Martin-Linares",
"Constantinos Siettos",
"Ioannis G. Kevrekidis"
] | 2023-09-25 17:58:23 | http://arxiv.org/abs/2309.14334v1 | http://arxiv.org/pdf/2309.14334v1 | 2309.14334v1 |
pLMFPPred: a novel approach for accurate prediction of functional peptides integrating embedding from pre-trained protein language model and imbalanced learning | Functional peptides have the potential to treat a variety of diseases. Their
good therapeutic efficacy and low toxicity make them ideal therapeutic agents.
Artificial intelligence-based computational strategies can help quickly
identify new functional peptides from collections of protein sequences and
discover their different functions. Using protein language model-based
embeddings (ESM-2), we developed a tool called pLMFPPred (Protein Language
Model-based Functional Peptide Predictor) for predicting functional peptides
and identifying toxic peptides. We also introduced SMOTE-TOMEK data synthesis
sampling and Shapley value-based feature selection techniques to relieve data
imbalance issues and reduce computational costs. On a validated independent
test set, pLMFPPred achieved accuracy, area under the receiver operating
characteristic curve (AUROC), and F1-score values of 0.974, 0.99, and 0.974,
respectively. Comparative experiments show that pLMFPPred outperforms current
methods for predicting functional peptides in terms of accuracy, AUROC, and
F1-score, and represents a new computational approach to functional peptide
prediction. | [
"Zebin Ma",
"Yonglin Zou",
"Xiaobin Huang",
"Wenjin Yan",
"Hao Xu",
"Jiexin Yang",
"Ying Zhang",
"Jinqi Huang"
] | 2023-09-25 17:57:39 | http://arxiv.org/abs/2309.14404v1 | http://arxiv.org/pdf/2309.14404v1 | 2309.14404v1 |
LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference | The growth of Graph Convolution Network (GCN) model sizes has revolutionized
numerous applications, surpassing human performance in areas such as personal
healthcare and financial systems. The deployment of GCNs in the cloud raises
privacy concerns due to potential adversarial attacks on client data. To
address security concerns, Privacy-Preserving Machine Learning (PPML) using
Homomorphic Encryption (HE) secures sensitive client data. However, it
introduces substantial computational overhead in practical applications. To
tackle those challenges, we present LinGCN, a framework designed to reduce
multiplication depth and optimize the performance of HE based GCN inference.
LinGCN is structured around three key elements: (1) A differentiable structural
linearization algorithm, complemented by a parameterized discrete indicator
function, co-trained with model weights to meet the optimization goal. This
strategy promotes fine-grained node-level non-linear location selection,
resulting in a model with minimized multiplication depth. (2) A compact
node-wise polynomial replacement policy with a second-order trainable
activation function, steered towards superior convergence by a two-level
distillation approach from an all-ReLU based teacher model. (3) An enhanced HE
solution that enables finer-grained operator fusion for node-wise activation
functions, further reducing multiplication level consumption in HE-based
inference. Our experiments on the NTU-XVIEW skeleton joint dataset reveal that
LinGCN excels in latency, accuracy, and scalability for homomorphically
encrypted inference, outperforming solutions such as CryptoGCN. Remarkably,
LinGCN achieves a 14.2x latency speedup relative to CryptoGCN, while preserving
an inference accuracy of 75% and notably reducing multiplication depth. | [
"Hongwu Peng",
"Ran Ran",
"Yukui Luo",
"Jiahui Zhao",
"Shaoyi Huang",
"Kiran Thorat",
"Tong Geng",
"Chenghong Wang",
"Xiaolin Xu",
"Wujie Wen",
"Caiwen Ding"
] | 2023-09-25 17:56:54 | http://arxiv.org/abs/2309.14331v3 | http://arxiv.org/pdf/2309.14331v3 | 2309.14331v3 |
Noise-in, Bias-out: Balanced and Real-time MoCap Solving | Real-time optical Motion Capture (MoCap) systems have not benefited from the
advances in modern data-driven modeling. In this work we apply machine learning
to solve noisy unstructured marker estimates in real-time and deliver robust
marker-based MoCap even when using sparse affordable sensors. To achieve this
we focus on a number of challenges related to model training, namely the
sourcing of training data and their long-tailed distribution. Leveraging
representation learning we design a technique for imbalanced regression that
requires no additional data or labels and improves the performance of our model
in rare and challenging poses. By relying on a unified representation, we show
that training such a model is not bound to high-end MoCap training data
acquisition, and exploit the advances in marker-less MoCap to acquire the
necessary data. Finally, we take a step towards richer and affordable MoCap by
adapting a body model-based inverse kinematics solution to account for
measurement and inference uncertainty, further improving performance and
robustness. Project page: https://moverseai.github.io/noise-tail | [
"Georgios Albanis",
"Nikolaos Zioulis",
"Spyridon Thermos",
"Anargyros Chatzitofis",
"Kostas Kolomvatsos"
] | 2023-09-25 17:55:24 | http://arxiv.org/abs/2309.14330v1 | http://arxiv.org/pdf/2309.14330v1 | 2309.14330v1 |
A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing | In generative compressed sensing (GCS), we want to recover a signal
$\mathbf{x}^* \in \mathbb{R}^n$ from $m$ measurements ($m\ll n$) using a
generative prior $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$, where $G$ is typically
an $L$-Lipschitz continuous generative model and $\mathbb{B}_2^k(r)$ represents
the radius-$r$ $\ell_2$-ball in $\mathbb{R}^k$. Under nonlinear measurements,
most prior results are non-uniform, i.e., they hold with high probability for a
fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously. In this
paper, we build a unified framework to derive uniform recovery guarantees for
nonlinear GCS where the observation model is nonlinear and possibly
discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly
quantized observations and single index models as canonical examples.
Specifically, using a single realization of the sensing ensemble and
generalized Lasso, {\em all} $\mathbf{x}^*\in G(\mathbb{B}_2^k(r))$ can be
recovered up to an $\ell_2$-error at most $\epsilon$ using roughly
$\tilde{O}({k}/{\epsilon^2})$ samples, with omitted logarithmic factors
typically being dominated by $\log L$. Notably, this almost coincides with
existing non-uniform guarantees up to logarithmic factors, hence the uniformity
costs very little. As part of our technical contributions, we introduce the
Lipschitz approximation to handle discontinuous observation models. We also
develop a concentration inequality that produces tighter bounds for product
processes whose index sets have low metric entropy. Experimental results are
presented to corroborate our theory. | [
"Junren Chen",
"Jonathan Scarlett",
"Michael K. Ng",
"Zhaoqiang Liu"
] | 2023-09-25 17:54:19 | http://arxiv.org/abs/2310.03758v2 | http://arxiv.org/pdf/2310.03758v2 | 2310.03758v2 |
Futility and utility of a few ancillas for Pauli channel learning | In this paper we revisit one of the prototypical tasks for characterizing the
structure of noise in quantum devices, estimating the eigenvalues of an
$n$-qubit Pauli noise channel. Prior work (Chen et al., 2022) established
exponential lower bounds for this task for algorithms with limited quantum
memory. We first improve upon their lower bounds and show:
(1) Any algorithm without quantum memory must make $\Omega(2^n/\epsilon^2)$
measurements to estimate each eigenvalue within error $\epsilon$. This is tight
and implies the randomized benchmarking protocol is optimal, resolving an open
question of (Flammia and Wallman, 2020).
(2) Any algorithm with $\le k$ ancilla qubits of quantum memory must make
$\Omega(2^{(n-k)/3})$ queries to the unknown channel. Crucially, unlike in
(Chen et al., 2022), our bound holds even if arbitrary adaptive control and
channel concatenation are allowed.
In fact these lower bounds, like those of (Chen et al., 2022), hold even for
the easier hypothesis testing problem of determining whether the underlying
channel is completely depolarizing or has exactly one other nontrivial
eigenvalue. Surprisingly, we show that:
(3) With only $k=2$ ancilla qubits of quantum memory, there is an algorithm
that solves this hypothesis testing task with high probability using a single
measurement.
Note that (3) does not contradict (2) as the protocol concatenates
exponentially many queries to the channel before the measurement. This result
suggests a novel mechanism by which channel concatenation and $O(1)$ qubits of
quantum memory could work in tandem to yield striking speedups for quantum
process learning that are not possible for quantum state learning. | [
"Sitan Chen",
"Weiyuan Gong"
] | 2023-09-25 17:53:12 | http://arxiv.org/abs/2309.14326v1 | http://arxiv.org/pdf/2309.14326v1 | 2309.14326v1 |
Towards General-Purpose Text-Instruction-Guided Voice Conversion | This paper introduces a novel voice conversion (VC) model, guided by text
instructions such as "articulate slowly with a deep tone" or "speak in a
cheerful boyish voice". Unlike traditional methods that rely on reference
utterances to determine the attributes of the converted speech, our model adds
versatility and specificity to voice conversion. The proposed VC model is a
neural codec language model which processes a sequence of discrete codes,
resulting in the code sequence of converted speech. It utilizes text
instructions as style prompts to modify the prosody and emotional information
of the given speech. In contrast to previous approaches, which often rely on
employing separate encoders like prosody and content encoders to handle
different aspects of the source speech, our model handles various information
of speech in an end-to-end manner. Experiments have demonstrated the impressive
capabilities of our model in comprehending instructions and delivering
reasonable results. | [
"Chun-Yi Kuan",
"Chen An Li",
"Tsu-Yuan Hsu",
"Tse-Yang Lin",
"Ho-Lam Chung",
"Kai-Wei Chang",
"Shuo-yiin Chang",
"Hung-yi Lee"
] | 2023-09-25 17:52:09 | http://arxiv.org/abs/2309.14324v1 | http://arxiv.org/pdf/2309.14324v1 | 2309.14324v1 |
Physics of Language Models: Part 3.2, Knowledge Manipulation | Language models can store vast amounts of factual knowledge, but their
ability to use this knowledge for logical reasoning remains questionable. This
paper explores a language model's ability to manipulate its stored knowledge
during inference. We focus on four manipulation types: retrieval (e.g., "What
is person A's attribute X"), classification (e.g., "Is A's attribute X even or
odd?"), comparison (e.g., "Is A greater than B in attribute X?") and inverse
search (e.g., "Which person's attribute X equals T?").
We observe that pre-trained language models like GPT2/3/4 excel in knowledge
retrieval but struggle with simple classification or comparison tasks unless
Chain of Thoughts (CoTs) are employed during both training and inference. They
also perform poorly in inverse knowledge search, irrespective of the prompts.
Our primary contribution is a synthetic dataset for a controlled experiment
that confirms these inherent weaknesses: a language model cannot efficiently
manipulate knowledge from pre-training data, even when such knowledge is
perfectly stored and fully extractable in the models, and despite adequate
instruct fine-tuning. | [
"Zeyuan Allen-Zhu",
"Yuanzhi Li"
] | 2023-09-25 17:50:41 | http://arxiv.org/abs/2309.14402v1 | http://arxiv.org/pdf/2309.14402v1 | 2309.14402v1 |
Small-scale proxies for large-scale Transformer training instabilities | Teams that have trained large Transformer-based models have reported training
instabilities at large scale that did not appear when training with the same
hyperparameters at smaller scales. Although the causes of such instabilities
are of scientific interest, the amount of resources required to reproduce them
has made investigation difficult. In this work, we seek ways to reproduce and
study training stability and instability at smaller scales. First, we focus on
two sources of training instability described in previous work: the growth of
logits in attention layers (Dehghani et al., 2023) and divergence of the output
logits from the log probabilities (Chowdhery et al., 2022). By measuring the
relationship between learning rate and loss across scales, we show that these
instabilities also appear in small models when training at high learning rates,
and that mitigations previously employed at large scales are equally effective
in this regime. This prompts us to investigate the extent to which other known
optimizer and model interventions influence the sensitivity of the final loss
to changes in the learning rate. To this end, we study methods such as warm-up,
weight decay, and the $\mu$Param (Yang et al., 2022), and combine techniques to
train small models that achieve similar losses across orders of magnitude of
learning rate variation. Finally, to conclude our exploration we study two
cases where instabilities can be predicted before they emerge by examining the
scaling behavior of model activation and gradient norms. | [
"Mitchell Wortsman",
"Peter J. Liu",
"Lechao Xiao",
"Katie Everett",
"Alex Alemi",
"Ben Adlam",
"John D. Co-Reyes",
"Izzeddin Gur",
"Abhishek Kumar",
"Roman Novak",
"Jeffrey Pennington",
"Jascha Sohl-dickstein",
"Kelvin Xu",
"Jaehoon Lee",
"Justin Gilmer",
"Simon Kornblith"
] | 2023-09-25 17:48:51 | http://arxiv.org/abs/2309.14322v2 | http://arxiv.org/pdf/2309.14322v2 | 2309.14322v2 |
Human-Assisted Continual Robot Learning with Foundation Models | Large Language Models (LLMs) have been shown to act like planners that can
decompose high-level instructions into a sequence of executable instructions.
However, current LLM-based planners are only able to operate with a fixed set
of skills. We overcome this critical limitation and present a method for using
LLM-based planners to query new skills and teach robots these skills in a data
and time-efficient manner for rigid object manipulation. Our system can re-use
newly acquired skills for future tasks, demonstrating the potential of open
world and lifelong learning. We evaluate the proposed framework on multiple
tasks in simulation and the real world. Videos are available at:
https://sites.google.com/mit.edu/halp-robot-learning. | [
"Meenal Parakh",
"Alisha Fong",
"Anthony Simeonov",
"Abhishek Gupta",
"Tao Chen",
"Pulkit Agrawal"
] | 2023-09-25 17:45:55 | http://arxiv.org/abs/2309.14321v1 | http://arxiv.org/pdf/2309.14321v1 | 2309.14321v1 |
Physics of Language Models: Part 3.1, Knowledge Storage and Extraction | Large language models can store extensive world knowledge, often extractable
through question-answering (e.g., "What is Abraham Lincoln's birthday?").
However, it's unclear whether the model answers questions based on exposure to
exact/similar questions during training, or if it genuinely extracts knowledge
from the source (e.g., Wikipedia biographies).
In this paper, we conduct an in-depth study of this problem using a
controlled set of semi-synthetic biography data. We uncover a relationship
between the model's knowledge extraction ability and different diversity
measures of the training data. We conduct (nearly) linear probing, revealing a
strong correlation between this relationship and whether the model (nearly)
linearly encodes the knowledge attributes at the hidden embedding of the entity
names, or across the embeddings of other tokens in the training text. | [
"Zeyuan Allen Zhu",
"Yuanzhi Li"
] | 2023-09-25 17:37:20 | http://arxiv.org/abs/2309.14316v1 | http://arxiv.org/pdf/2309.14316v1 | 2309.14316v1 |
A post-selection algorithm for improving dynamic ensemble selection methods | Dynamic Ensemble Selection (DES) is a Multiple Classifier Systems (MCS)
approach that aims to select an ensemble for each query sample during the
selection phase. Even with the proposal of several DES approaches, no
particular DES technique is the best choice for different problems. Thus, we
hypothesize that selecting the best DES approach per query instance can lead to
better accuracy. To evaluate this idea, we introduce the Post-Selection Dynamic
Ensemble Selection (PS-DES) approach, a post-selection scheme that evaluates
ensembles selected by several DES techniques using different metrics.
Experimental results show that using accuracy as a metric to select the
ensembles, PS-DES performs better than individual DES techniques. PS-DES source
code is available in a GitHub repository. | [
"Paulo R. G. Cordeiro",
"George D. C. Cavalcanti",
"Rafael M. O. Cruz"
] | 2023-09-25 17:25:39 | http://arxiv.org/abs/2309.14307v2 | http://arxiv.org/pdf/2309.14307v2 | 2309.14307v2 |
Improved Algorithms for Stochastic Linear Bandits Using Tail Bounds for Martingale Mixtures | We present improved algorithms with worst-case regret guarantees for the
stochastic linear bandit problem. The widely used "optimism in the face of
uncertainty" principle reduces a stochastic bandit problem to the construction
of a confidence sequence for the unknown reward function. The performance of
the resulting bandit algorithm depends on the size of the confidence sequence,
with smaller confidence sets yielding better empirical performance and stronger
regret guarantees. In this work, we use a novel tail bound for adaptive
martingale mixtures to construct confidence sequences which are suitable for
stochastic bandits. These confidence sequences allow for efficient action
selection via convex programming. We prove that a linear bandit algorithm based
on our confidence sequences is guaranteed to achieve competitive worst-case
regret. We show that our confidence sequences are tighter than competitors,
both empirically and theoretically. Finally, we demonstrate that our tighter
confidence sequences give improved performance in several hyperparameter tuning
tasks. | [
"Hamish Flynn",
"David Reeb",
"Melih Kandemir",
"Jan Peters"
] | 2023-09-25 17:13:46 | http://arxiv.org/abs/2309.14298v2 | http://arxiv.org/pdf/2309.14298v2 | 2309.14298v2 |
Identifying the Risks of LM Agents with an LM-Emulated Sandbox | Recent advances in Language Model (LM) agents and tool use, exemplified by
applications like ChatGPT Plugins, enable a rich set of capabilities but also
amplify potential risks - such as leaking private data or causing financial
losses. Identifying these risks is labor-intensive, necessitating implementing
the tools, manually setting up the environment for each test scenario, and
finding risky cases. As tools and agents become more complex, the high cost of
testing these agents will make it increasingly difficult to find high-stakes,
long-tailed risks. To address these challenges, we introduce ToolEmu: a
framework that uses an LM to emulate tool execution and enables the testing of
LM agents against a diverse range of tools and scenarios, without manual
instantiation. Alongside the emulator, we develop an LM-based automatic safety
evaluator that examines agent failures and quantifies associated risks. We test
both the tool emulator and evaluator through human evaluation and find that
68.8% of failures identified with ToolEmu would be valid real-world agent
failures. Using our curated initial benchmark consisting of 36 high-stakes
tools and 144 test cases, we provide a quantitative risk analysis of current LM
agents and identify numerous failures with potentially severe outcomes.
Notably, even the safest LM agent exhibits such failures 23.9% of the time
according to our evaluator, underscoring the need to develop safer LM agents
for real-world deployment. | [
"Yangjun Ruan",
"Honghua Dong",
"Andrew Wang",
"Silviu Pitis",
"Yongchao Zhou",
"Jimmy Ba",
"Yann Dubois",
"Chris J. Maddison",
"Tatsunori Hashimoto"
] | 2023-09-25 17:08:02 | http://arxiv.org/abs/2309.15817v1 | http://arxiv.org/pdf/2309.15817v1 | 2309.15817v1 |
NAS-NeRF: Generative Neural Architecture Search for Neural Radiance Fields | Neural radiance fields (NeRFs) enable high-quality novel view synthesis, but
their high computational complexity limits deployability. While existing
neural-based solutions strive for efficiency, they use one-size-fits-all
architectures regardless of scene complexity. The same architecture may be
unnecessarily large for simple scenes but insufficient for complex ones. Thus,
there is a need to dynamically optimize the neural network component of NeRFs
to achieve a balance between computational complexity and specific targets for
synthesis quality. We introduce NAS-NeRF, a generative neural architecture
search strategy that generates compact, scene-specialized NeRF architectures by
balancing architecture complexity and target synthesis quality metrics. Our
method incorporates constraints on target metrics and budgets to guide the
search towards architectures tailored for each scene. Experiments on the
Blender synthetic dataset show the proposed NAS-NeRF can generate architectures
up to 5.74$\times$ smaller, with 4.19$\times$ fewer FLOPs, and 1.93$\times$
faster on a GPU than baseline NeRFs, without suffering a drop in SSIM.
Furthermore, we illustrate that NAS-NeRF can also achieve architectures up to
23$\times$ smaller, with 22$\times$ fewer FLOPs, and 4.7$\times$ faster than
baseline NeRFs with only a 5.3% average SSIM drop. Our source code is also made
publicly available at https://saeejithnair.github.io/NAS-NeRF. | [
"Saeejith Nair",
"Yuhao Chen",
"Mohammad Javad Shafiee",
"Alexander Wong"
] | 2023-09-25 17:04:30 | http://arxiv.org/abs/2309.14293v2 | http://arxiv.org/pdf/2309.14293v2 | 2309.14293v2 |
On the Non-Associativity of Analog Computations | The energy efficiency of analog forms of computing makes it one of the most
promising candidates to deploy resource-hungry machine learning tasks on
resource-constrained systems such as mobile or embedded devices. However, it is
well known that for analog computations the safety net of discretization is
missing; thus, all analog computations are exposed to a variety of
imperfections in the corresponding implementations. Examples include
non-linearities, saturation effects, and various forms of noise. In this work,
we observe that the ordering
of input operands of an analog operation also has an impact on the output
result, which essentially makes analog computations non-associative, even
though the underlying operation might be mathematically associative. We conduct
a simple test by creating a model of a real analog processor which captures
such ordering effects. With this model we assess the importance of ordering by
comparing the test accuracy of a neural network for keyword spotting, which is
trained either on an ordered model, on a non-ordered variant, or on real
hardware. The results prove the existence of ordering effects as well as their
high impact, as neglecting ordering results in substantial accuracy drops. | [
"Lisa Kuhn",
"Bernhard Klein",
"Holger Fröning"
] | 2023-09-25 17:04:09 | http://arxiv.org/abs/2309.14292v1 | http://arxiv.org/pdf/2309.14292v1 | 2309.14292v1 |
SINCERE: Supervised Information Noise-Contrastive Estimation REvisited | The information noise-contrastive estimation (InfoNCE) loss function provides
the basis of many self-supervised deep learning methods due to its strong
empirical results and theoretic motivation. Previous work suggests a supervised
contrastive (SupCon) loss to extend InfoNCE to learn from available class
labels. This SupCon loss has been widely used due to reports of good empirical
performance. However, in this work we suggest that the specific SupCon loss
formulated by prior work has questionable theoretic justification, because it
can encourage images from the same class to repel one another in the learned
embedding space. This problematic behavior gets worse as the number of inputs
sharing one class label increases. We propose the Supervised InfoNCE REvisited
(SINCERE) loss as a remedy. SINCERE is a theoretically justified solution for a
supervised extension of InfoNCE that never causes images from the same class to
repel one another. We further show that minimizing our new loss is equivalent
to maximizing a bound on the KL divergence between class conditional embedding
distributions. We compare SINCERE and SupCon losses in terms of learning
trajectories during pretraining and in ultimate linear classifier performance
after finetuning. Our proposed SINCERE loss better separates embeddings from
different classes during pretraining while delivering competitive accuracy. | [
"Patrick Feeney",
"Michael C. Hughes"
] | 2023-09-25 16:40:56 | http://arxiv.org/abs/2309.14277v1 | http://arxiv.org/pdf/2309.14277v1 | 2309.14277v1 |
Industrial Application of 6D Pose Estimation for Robotic Manipulation in Automotive Internal Logistics | Despite the advances in robotics, a large proportion of the parts handling
tasks in the automotive industry's internal logistics are not automated but
still performed by humans. A key component to competitively automate these
processes is a 6D pose estimation that can handle a large number of different
parts, is adaptable to new parts with little manual effort, and is sufficiently
accurate and robust with respect to industry requirements. In this context, the
question arises as to the current status quo with respect to these measures. To
address this we built a representative 6D pose estimation pipeline with
state-of-the-art components from economically scalable real to synthetic data
generation to pose estimators and evaluated it on automotive parts with regards
to a realistic sequencing process. We found that using the data generation
approaches, the performance of the trained 6D pose estimators are promising,
but do not meet industry requirements. We reveal that the reason for this is
the inability of the estimators to provide reliable uncertainties for their
poses, rather than an inability to provide sufficiently accurate poses. In
this context we further analyzed how RGB- and RGB-D-based approaches compare
against this background and show that they are differently vulnerable to the
domain gap induced by synthetic data. | [
"Philipp Quentin",
"Dino Knoll",
"Daniel Goehring"
] | 2023-09-25 16:23:49 | http://arxiv.org/abs/2309.14265v1 | http://arxiv.org/pdf/2309.14265v1 | 2309.14265v1 |
Enhancing Healthcare with EOG: A Novel Approach to Sleep Stage Classification | We introduce an innovative approach to automated sleep stage classification
using EOG signals, addressing the discomfort and impracticality associated with
EEG data acquisition. In addition, it is important to note that this approach
is untapped in the field, highlighting its potential for novel insights and
contributions. Our proposed SE-Resnet-Transformer model provides an accurate
classification of five distinct sleep stages from raw EOG signal. Extensive
validation on publicly available databases (SleepEDF-20, SleepEDF-78, and
SHHS) reveals noteworthy performance, with macro-F1 scores of 74.72, 70.63, and
69.26, respectively. Our model excels in identifying REM sleep, a crucial
aspect of sleep disorder investigations. We also provide insight into the
internal mechanisms of our model using techniques such as 1D-GradCAM and t-SNE
plots. Our method improves the accessibility of sleep stage classification
while decreasing the need for EEG modalities. This development will have
promising implications for healthcare and the incorporation of wearable
technology into sleep studies, thereby advancing the field's potential for
enhanced diagnostics and patient comfort. | [
"Suvadeep Maiti",
"Shivam Kumar Sharma",
"Raju S. Bapi"
] | 2023-09-25 16:23:39 | http://arxiv.org/abs/2310.03757v1 | http://arxiv.org/pdf/2310.03757v1 | 2310.03757v1 |
DECORAIT -- DECentralized Opt-in/out Registry for AI Training | We present DECORAIT; a decentralized registry through which content creators
may assert their right to opt in or out of AI training as well as receive
reward for their contributions. Generative AI (GenAI) enables images to be
synthesized using AI models trained on vast amounts of data scraped from public
sources. Model and content creators who may wish to share their work openly
without sanctioning its use for training are thus presented with a data
governance challenge. Further, establishing the provenance of GenAI training
data is important to creatives to ensure fair recognition and reward for
such use. We report a prototype of DECORAIT, which explores hierarchical
clustering and a combination of on/off-chain storage to create a scalable
decentralized registry to trace the provenance of GenAI training data in order
to determine training consent and reward creatives who contribute that data.
DECORAIT combines distributed ledger technology (DLT) with visual
fingerprinting, leveraging the emerging C2PA (Coalition for Content Provenance
and Authenticity) standard to create a secure, open registry through which
creatives may express consent and data ownership for GenAI. | [
"Kar Balan",
"Alex Black",
"Simon Jenni",
"Andrew Gilbert",
"Andy Parsons",
"John Collomosse"
] | 2023-09-25 16:19:35 | http://arxiv.org/abs/2309.14400v1 | http://arxiv.org/pdf/2309.14400v1 | 2309.14400v1 |
Rethinking Internet Communication Through LLMs: How Close Are We? | In this paper, we rethink the way that communication among users over the
Internet, one of the fundamental outcomes of the Internet evolution, takes
place. Instead of users communicating directly over the Internet, we explore an
architecture that enables users to communicate with (query) Large Language
Models (LLMs) that capture the cognition of users on the other end of the
communication channel. We present an architecture to achieve such LLM-based
communication and we perform a reality check to assess how close we are today
to realizing such a communication architecture from a technical point of view.
Finally, we discuss several research challenges and identify interesting
directions for future research. | [
"Sifat Ut Taki",
"Spyridon Mastorakis"
] | 2023-09-25 16:07:07 | http://arxiv.org/abs/2309.14247v1 | http://arxiv.org/pdf/2309.14247v1 | 2309.14247v1 |
Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning | Deployment in hazardous environments requires robots to understand the risks
associated with their actions and movements to prevent accidents. Despite their
importance, these risks are not explicitly modeled by currently deployed
locomotion controllers for legged robots. In this work, we propose a risk
sensitive locomotion training method employing distributional reinforcement
learning to consider safety explicitly. Instead of relying on a value
expectation, we estimate the complete value distribution to account for
uncertainty in the robot's interaction with the environment. The value
distribution is consumed by a risk metric to extract risk sensitive value
estimates. These are integrated into Proximal Policy Optimization (PPO) to
derive our method, Distributional Proximal Policy Optimization (DPPO). The risk
preference, ranging from risk-averse to risk-seeking, can be controlled by a
single parameter, which makes it possible to adjust the robot's behavior dynamically.
Importantly, our approach removes the need for additional reward function
tuning to achieve risk sensitivity. We show emergent risk sensitive locomotion
behavior in simulation and on the quadrupedal robot ANYmal. | [
"Lukas Schneider",
"Jonas Frey",
"Takahiro Miki",
"Marco Hutter"
] | 2023-09-25 16:05:32 | http://arxiv.org/abs/2309.14246v1 | http://arxiv.org/pdf/2309.14246v1 | 2309.14246v1 |
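The abstract's mechanism — a value distribution consumed by a risk metric, with one parameter spanning risk-averse to risk-seeking — can be illustrated with a toy tail-mean metric over quantile estimates of the return. This is a hedged sketch, not DPPO's actual risk metric; the function name and the tail-mean formulation are assumptions made for illustration.

```python
import math
from statistics import mean

def risk_value(quantiles, beta):
    """Toy risk-sensitive value estimate from quantile samples of the
    return distribution. beta < 0 averages the worst |beta|-fraction of
    quantiles (risk-averse), beta > 0 the best beta-fraction
    (risk-seeking), and beta == 0 gives the plain expectation."""
    q = sorted(quantiles)
    if beta == 0:
        return mean(q)
    k = max(1, math.ceil(abs(beta) * len(q)))
    return mean(q[:k]) if beta < 0 else mean(q[-k:])

# Eight quantile estimates of the return at some state (illustrative values)
q = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 4.0]
print(risk_value(q, 0.0))    # 0.75: ordinary expected value
print(risk_value(q, -0.25))  # -1.5: mean of the two worst quantiles
print(risk_value(q, 0.25))   # 3.0:  mean of the two best quantiles
```

A single parameter such as `beta` could then be swept at deployment time to trade safety against aggressiveness, matching the abstract's claim that no additional reward-function tuning is needed.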
Enhancing data efficiency in reinforcement learning: a novel imagination mechanism based on mesh information propagation | Reinforcement learning (RL) algorithms face the challenge of limited data
efficiency, particularly when dealing with high-dimensional state spaces and
large-scale problems. Most RL methods rely solely on state transition
information within the same episode when updating the agent's critic, which can
lead to low data efficiency and sub-optimal training times. Inspired
by human-like analogical reasoning abilities, we introduce a novel mesh
information propagation mechanism, termed the 'Imagination Mechanism (IM)',
designed to significantly enhance the data efficiency of RL algorithms.
Specifically, IM enables information generated by a single sample to be
effectively broadcasted to different states across episodes, instead of simply
transmitting in the same episode. This capability enhances the model's
comprehension of state interdependencies and facilitates more efficient
learning of limited sample information. To promote versatility, we extend the
IM to function as a plug-and-play module that can be seamlessly and fluidly
integrated into other widely adopted RL algorithms. Our experiments demonstrate
that IM consistently boosts four mainstream SOTA RL algorithms, namely SAC,
PPO, DDPG, and DQN, by a considerable margin, ultimately leading to superior
performance across various tasks. For access to our code and data,
please visit https://github.com/OuAzusaKou/imagination_mechanism | [
"Zihang Wang",
"Maowei Jiang"
] | 2023-09-25 16:03:08 | http://arxiv.org/abs/2309.14243v2 | http://arxiv.org/pdf/2309.14243v2 | 2309.14243v2 |
Seeing and hearing what has not been said; A multimodal client behavior classifier in Motivational Interviewing with interpretable fusion | Motivational Interviewing (MI) is an approach to therapy that emphasizes
collaboration and encourages behavioral change. To evaluate the quality of an
MI conversation, client utterances can be classified using the MISC code as
either change talk, sustain talk, or follow/neutral talk. The proportion of
change talk in an MI conversation is positively correlated with therapy
outcomes, making accurate classification of client utterances essential. In
this paper, we present a classifier that accurately distinguishes between the
three MISC classes (change talk, sustain talk, and follow/neutral talk)
leveraging multimodal features such as text, prosody, facial expressivity, and
body expressivity. To train our model, we perform annotations on the publicly
available AnnoMI dataset to collect multimodal information, including text,
audio, facial expressivity, and body expressivity. Furthermore, we identify the
most important modalities in the decision-making process, providing valuable
insights into the interplay of different modalities during an MI conversation. | [
"Lucie Galland",
"Catherine Pelachaud",
"Florian Pecune"
] | 2023-09-25 16:00:06 | http://arxiv.org/abs/2309.14398v2 | http://arxiv.org/pdf/2309.14398v2 | 2309.14398v2 |
Learning to Abstain From Uninformative Data | Learning and decision-making in domains with naturally high noise-to-signal
ratio, such as Finance or Healthcare, is often challenging, while the stakes
are very high. In this paper, we study the problem of learning and acting under
a general noisy generative process. In this problem, the data distribution has
a significant proportion of uninformative samples with high noise in the label,
while part of the data contains useful information represented by low label
noise. This dichotomy is present during both training and inference, which
requires the proper handling of uninformative data during both training and
testing. We propose a novel approach to learning under these conditions via a
loss inspired by the selective learning theory. By minimizing this loss, the
model is guaranteed to make a near-optimal decision by distinguishing
informative data from uninformative data and making predictions. We build upon
the strength of our theoretical guarantees by describing an iterative
algorithm, which jointly optimizes both a predictor and a selector, and
evaluates its empirical performance in a variety of settings. | [
"Yikai Zhang",
"Songzhu Zheng",
"Mina Dalirrooyfard",
"Pengxiang Wu",
"Anderson Schneider",
"Anant Raj",
"Yuriy Nevmyvaka",
"Chao Chen"
] | 2023-09-25 15:55:55 | http://arxiv.org/abs/2309.14240v1 | http://arxiv.org/pdf/2309.14240v1 | 2309.14240v1 |
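The selective-learning idea — a selector that decides when the predictor may answer, so that risk is measured only on accepted samples — can be sketched as follows. The confidence-threshold selector and the sample format are illustrative assumptions; the paper jointly optimizes a learned predictor and selector rather than thresholding a fixed score.

```python
def selective_risk(samples, threshold):
    """samples: list of (confidence, correct) pairs for a predictor.
    The selector accepts a sample when confidence >= threshold;
    risk is the error rate on accepted samples, and coverage is the
    fraction of samples accepted."""
    accepted = [(c, ok) for c, ok in samples if c >= threshold]
    if not accepted:
        return 0.0, 0.0
    risk = sum(1 for _, ok in accepted if not ok) / len(accepted)
    coverage = len(accepted) / len(samples)
    return risk, coverage

data = [(0.9, True), (0.8, True), (0.7, False), (0.4, False), (0.3, True)]
print(selective_risk(data, 0.75))  # (0.0, 0.4): abstains on noisy samples
print(selective_risk(data, 0.0))   # (0.4, 1.0): accepts everything
```

Raising the threshold trades coverage for lower risk, which is the dichotomy between informative and uninformative data that the loss in the paper is designed to exploit.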
Predicting environment effects on breast cancer by implementing machine learning | The biggest Breast cancer is increasingly a major factor in female
fatalities, overtaking heart disease. While genetic factors are important in
the growth of breast cancer, new research indicates that environmental factors
also play a substantial role in its occurrence and progression. The literature
on the various environmental factors that may affect breast cancer risk,
incidence, and outcomes is thoroughly reviewed in this study report. The study
starts by looking at how lifestyle decisions, such as eating habits, exercise
routines, and alcohol consumption, may affect hormonal imbalances and
inflammation, two important factors driving the development of breast cancer.
Additionally, it explores the part played by environmental contaminants such
as pesticides, endocrine-disrupting chemicals (EDCs), and industrial emissions,
all of which have been linked to a higher risk of developing breast cancer due
to their interference with hormone signaling and DNA damage. Machine learning
algorithms are used to make predictions: Logistic Regression, Random Forest,
KNN, SVC, and an Extra Trees classifier. Metrics including the confusion
matrix, correlation coefficient, F1-score, Precision, Recall, and ROC curve
were used to evaluate the models. The best accuracy among all the classifiers
is achieved by Random Forest, at 0.91, while Logistic Regression achieves a
ROC curve of 0.901. The accuracy of the machine learning algorithms utilized
in this research was good, indicating that these techniques could serve as
alternative forecasting techniques in breast cancer survival analysis, notably
in the Asia region.
"Muhammad Shoaib Farooq",
"Mehreen Ilyas"
] | 2023-09-25 15:54:03 | http://arxiv.org/abs/2309.14397v1 | http://arxiv.org/pdf/2309.14397v1 | 2309.14397v1 |
MoDem-V2: Visuo-Motor World Models for Real-World Robot Manipulation | Robotic systems that aspire to operate in uninstrumented real-world
environments must perceive the world directly via onboard sensing. Vision-based
learning systems aim to eliminate the need for environment instrumentation by
building an implicit understanding of the world based on raw pixels, but
navigating the contact-rich high-dimensional search space from solely sparse
visual reward signals significantly exacerbates the challenge of exploration.
The applicability of such systems is thus typically restricted to simulated or
heavily engineered environments since agent exploration in the real-world
without the guidance of explicit state estimation and dense rewards can lead to
unsafe behavior and safety faults that are catastrophic. In this study, we
isolate the root causes behind these limitations to develop a system, called
MoDem-V2, capable of learning contact-rich manipulation directly in the
uninstrumented real world. Building on the latest algorithmic advancements in
model-based reinforcement learning (MBRL), demo-bootstrapping, and effective
exploration, MoDem-V2 can acquire contact-rich dexterous manipulation skills
directly in the real world. We identify key ingredients for leveraging
demonstrations in model learning while respecting real-world safety
considerations -- exploration centering, agency handover, and actor-critic
ensembles. We empirically demonstrate the contribution of these ingredients in
four complex visuo-motor manipulation problems in both simulation and the real
world. To the best of our knowledge, our work presents the first successful
system for demonstration-augmented visual MBRL trained directly in the real
world. Visit https://sites.google.com/view/modem-v2 for videos and more
details. | [
"Patrick Lancaster",
"Nicklas Hansen",
"Aravind Rajeswaran",
"Vikash Kumar"
] | 2023-09-25 15:51:29 | http://arxiv.org/abs/2309.14236v1 | http://arxiv.org/pdf/2309.14236v1 | 2309.14236v1 |
Stackelberg Driver Model for Continual Policy Improvement in Scenario-Based Closed-Loop Autonomous Driving | The deployment of autonomous vehicles (AVs) has faced hurdles due to the
dominance of rare but critical corner cases within the long-tail distribution
of driving scenarios, which negatively affects their overall performance. To
address this challenge, adversarial generation methods have emerged as a class
of efficient approaches to synthesize safety-critical scenarios for AV testing.
However, these generated scenarios are often underutilized for AV training,
resulting in the potential for continual AV policy improvement remaining
untapped, along with a deficiency in the closed-loop design needed to achieve
it. Therefore, we tailor the Stackelberg Driver Model (SDM) to accurately
characterize the hierarchical nature of vehicle interaction dynamics,
facilitating iterative improvement by engaging background vehicles (BVs) and AV
in a sequential game-like interaction paradigm. With AV acting as the leader
and BVs as followers, this leader-follower modeling ensures that AV would
consistently refine its policy, always taking into account the additional
information that BVs play the best response to challenge AV. Extensive
experiments have shown that our algorithm exhibits superior performance
compared to several baselines especially in higher dimensional scenarios,
leading to substantial advancements in AV capabilities while continually
generating progressively challenging scenarios. Code is available at
https://github.com/BlueCat-de/SDM. | [
"Haoyi Niu",
"Qimao Chen",
"Yingyue Li",
"Jianming Hu"
] | 2023-09-25 15:47:07 | http://arxiv.org/abs/2309.14235v2 | http://arxiv.org/pdf/2309.14235v2 | 2309.14235v2 |
Urdu Poetry Generated by Using Deep Learning Techniques | This study provides Urdu poetry generated using different deep-learning
techniques and algorithms. The data was collected from the Rekhta website and
comprises 1341 text files with several couplets. The poetry was not
from any specific genre or poet. Instead, it was a collection of mixed Urdu
poems and Ghazals. Different deep learning techniques have been applied, such
as Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU).
Natural Language Processing (NLP) may be used in machine
learning to understand, analyze, and generate a language humans may use and
understand. Much work has been done on generating poetry for different
languages using different techniques. The collection and use of data were also
different for different researchers. The primary purpose of this project is to
provide a model that generates Urdu poems by using the data in full, not by
sampling it. Also, the model can generate poems in pure Urdu, not Roman Urdu, as
in the base paper. The results have shown good accuracy in the poems generated
by the model. | [
"Muhammad Shoaib Farooq",
"Ali Abbas"
] | 2023-09-25 15:44:24 | http://arxiv.org/abs/2309.14233v1 | http://arxiv.org/pdf/2309.14233v1 | 2309.14233v1 |
Guess & Sketch: Language Model Guided Transpilation | Maintaining legacy software requires many software and systems engineering
hours. Assembly code programs, which demand low-level control over the computer
machine state and have no variable names, are particularly difficult for humans
to analyze. Existing conventional program translators guarantee correctness,
but are hand-engineered for the source and target programming languages in
question. Learned transpilation, i.e. automatic translation of code, offers an
alternative to manual re-writing and engineering efforts. Automated symbolic
program translation approaches guarantee correctness but struggle to scale to
longer programs due to the exponentially large search space. Their rigid
rule-based systems also limit their expressivity, so they can only reason about
a reduced space of programs. Probabilistic neural language models (LMs) produce
plausible outputs for every input, but do so at the cost of guaranteed
correctness. In this work, we leverage the strengths of LMs and symbolic
solvers in a neurosymbolic approach to learned transpilation for assembly code.
Assembly code is an appropriate setting for a neurosymbolic approach, since
assembly code can be divided into shorter non-branching basic blocks amenable
to the use of symbolic methods. Guess & Sketch extracts alignment and
confidence information from features of the LM then passes it to a symbolic
solver to resolve semantic equivalence of the transpilation input and output.
We test Guess & Sketch on three different test sets of assembly transpilation
tasks, varying in difficulty, and show that it successfully transpiles 57.6%
more examples than GPT-4 and 39.6% more examples than an engineered transpiler.
We also share a training and evaluation dataset for this task. | [
"Celine Lee",
"Abdulrahman Mahmoud",
"Michal Kurek",
"Simone Campanoni",
"David Brooks",
"Stephen Chong",
"Gu-Yeon Wei",
"Alexander M. Rush"
] | 2023-09-25 15:42:18 | http://arxiv.org/abs/2309.14396v1 | http://arxiv.org/pdf/2309.14396v1 | 2309.14396v1 |
Implicit Sensing in Traffic Optimization: Advanced Deep Reinforcement Learning Techniques | A sudden roadblock on a highway due to reasons such as road maintenance,
accidents, or car repairs is a common situation we encounter almost daily.
Autonomous Vehicles (AVs) equipped with sensors that can acquire vehicle
dynamics such as speed, acceleration, and location can make intelligent
decisions to change lanes before reaching a roadblock. A number of literature
studies have examined car-following models and lane-changing models. However,
only a few studies proposed an integrated car-following and lane-changing
model, which has the potential to model practical driving maneuvers. Hence, in
this paper, we present an integrated car-following and lane-changing
decision-control system based on Deep Reinforcement Learning (DRL) to address
this issue. Specifically, we consider a scenario where sudden construction work
will be carried out along a highway. We model the scenario as a Markov Decision
Process (MDP) and employ the well-known DQN algorithm to train the RL agent to
make the appropriate decision accordingly (i.e., either stay in the same lane
or change lanes). To overcome the delay and computational requirement of DRL
algorithms, we adopt an MEC-assisted architecture where the RL agents are
trained on MEC servers. We utilize the highly reputable SUMO simulator and
OpenAI Gym to evaluate the performance of the proposed model under two
policies: the ε-greedy policy and the Boltzmann policy. The results
unequivocally demonstrate that the DQN agent trained using the
ε-greedy policy significantly outperforms the one trained with the
Boltzmann policy. | [
"Emanuel Figetakis",
"Yahuza Bello",
"Ahmed Refaey",
"Lei Lei",
"Medhat Moussa"
] | 2023-09-25 15:33:08 | http://arxiv.org/abs/2309.14395v1 | http://arxiv.org/pdf/2309.14395v1 | 2309.14395v1 |
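The two exploration policies compared in the evaluation differ only in how an action is drawn from the Q-values. A minimal sketch of both (the action set and Q-values are illustrative, not taken from the paper):

```python
import math, random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a uniformly random action,
    otherwise pick the greedy (highest-Q) action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def boltzmann(q_values, temperature, rng=random):
    """Sample an action with probability proportional to exp(Q/temperature);
    low temperature approaches greedy, high temperature approaches uniform."""
    m = max(q_values)  # subtract the max for numerical stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    return rng.choices(range(len(q_values)), weights=weights)[0]

q = [1.0, 2.5, 0.3]  # e.g., stay-in-lane / change-left / change-right (illustrative)
print(epsilon_greedy(q, epsilon=0.0))  # greedy choice: action 1
```

Both policies share the same trained Q-network; only this sampling step — and hence the exploration behavior during training — differs between the two evaluated agents.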
Multiple Noises in Diffusion Model for Semi-Supervised Multi-Domain Translation | Domain-to-domain translation involves generating a target domain sample given
a condition in the source domain. Most existing methods focus on fixed input
and output domains, i.e. they only work for specific configurations (i.e. for
two domains, either $D_1\rightarrow{}D_2$ or $D_2\rightarrow{}D_1$). This paper
proposes Multi-Domain Diffusion (MDD), a conditional diffusion framework for
multi-domain translation in a semi-supervised context. Unlike previous methods,
MDD does not require defining input and output domains, allowing translation
between any partition of domains within a set (such as $(D_1,
D_2)\rightarrow{}D_3$, $D_2\rightarrow{}(D_1, D_3)$, $D_3\rightarrow{}D_1$,
etc. for 3 domains), without the need to train separate models for each domain
configuration. The key idea behind MDD is to leverage the noise formulation of
diffusion models by incorporating one noise level per domain, which allows
missing domains to be modeled with noise in a natural way. This transforms the
training task from a simple reconstruction task to a domain translation task,
where the model relies on less noisy domains to reconstruct more noisy domains.
We present results on a multi-domain (with more than two domains) synthetic
image translation dataset with challenging semantic domain inversion. | [
"Tsiry Mayet",
"Simon Bernard",
"Clement Chatelain",
"Romain Herault"
] | 2023-09-25 15:31:16 | http://arxiv.org/abs/2309.14394v1 | http://arxiv.org/pdf/2309.14394v1 | 2309.14394v1 |
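The core trick — one diffusion noise level per domain, so a missing domain is simply a fully noised one — can be sketched with the standard DDPM forward process x_t = sqrt(ab_t)·x_0 + sqrt(1 − ab_t)·ε applied independently per domain. The linear schedule, toy dimensions, and domain names below are assumptions for illustration, not MDD's actual design.

```python
import math, random

def alpha_bar(t, T=1000):
    """Toy linear schedule: remaining signal fraction after t of T steps."""
    return max(0.0, 1.0 - t / T)

def noise_domain(x0, t, T=1000, rng=random):
    """DDPM-style forward process for one domain at its own noise step t."""
    ab = alpha_bar(t, T)
    return [math.sqrt(ab) * v + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
            for v in x0]

# One training example with three domains; D3 is unobserved, so it enters
# the model at the maximum noise level (pure Gaussian noise).
domains = {"D1": [0.2, -0.5], "D2": [1.0, 0.0], "D3": [0.7, 0.7]}
steps = {"D1": 50, "D2": 300, "D3": 1000}  # one noise level per domain
noisy = {d: noise_domain(x, steps[d]) for d, x in domains.items()}
```

Training then asks the denoiser to reconstruct the noisier domains from the less noisy ones, which is exactly how the reconstruction task becomes a domain-translation task.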
Accelerating Machine Learning Algorithms with Adaptive Sampling | The era of huge data necessitates highly efficient machine learning
algorithms. Many common machine learning algorithms, however, rely on
computationally intensive subroutines that are prohibitively expensive on large
datasets. Oftentimes, existing techniques subsample the data or use other
methods to improve computational efficiency, at the expense of incurring some
approximation error. This thesis demonstrates that it is often sufficient,
instead, to substitute computationally intensive subroutines with special
randomized counterparts that result in almost no degradation in
quality. | [
"Mo Tiwari"
] | 2023-09-25 15:25:59 | http://arxiv.org/abs/2309.14221v1 | http://arxiv.org/pdf/2309.14221v1 | 2309.14221v1 |
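The thesis's central claim — swap an exact, full-pass subroutine for a randomized one at almost no quality cost — can be illustrated with a generic sampled-mean estimator that stops once its standard error is small. This is a hedged, generic illustration of the idea, not one of the thesis's actual algorithms; the stopping rule and parameters are assumptions.

```python
import random, statistics

def approx_mean(data, tol=0.05, batch=100, rng=random):
    """Estimate mean(data) from samples drawn with replacement, stopping
    once roughly two standard errors fall below tol (capped at half as
    many draws as data points). A randomized stand-in for an exact
    full-pass computation."""
    sample = []
    while len(sample) < len(data) // 2:
        sample.extend(rng.choice(data) for _ in range(batch))
        if statistics.stdev(sample) / len(sample) ** 0.5 * 2 < tol:
            break
    return statistics.fmean(sample), len(sample)

random.seed(0)
data = [0.0, 1.0] * 5000          # true mean is 0.5
est, n_used = approx_mean(data)   # typically uses far fewer than 10000 draws
```

The estimate lands within the tolerance with high probability while touching only a small fraction of the data, which is the efficiency-versus-approximation trade-off the abstract describes.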
MemDA: Forecasting Urban Time Series with Memory-based Drift Adaptation | Urban time series data forecasting featuring significant contributions to
sustainable development is widely studied as an essential task of the smart
city. However, with the dramatic and rapid changes in the world environment,
the assumption that data are independent and identically distributed is
undermined by the subsequent changes in data distribution, known as concept
drift, leading to weak replicability and transferability of the model over
unseen data. To address the issue, previous approaches typically retrain the
model, forcing it to fit the most recent observed data. However, retraining is
problematic in that it leads to model lag, consumption of resources, and model
re-invalidation, leaving the drift problem poorly solved in realistic
scenarios. In this study, we propose a new urban time series prediction model
for the concept drift problem, which encodes the drift by considering the
periodicity in the data and makes on-the-fly adjustments to the model based on
the drift using a meta-dynamic network. Experiments on real-world datasets show
that our design significantly outperforms state-of-the-art methods and can be
well generalized to existing prediction backbones by reducing their sensitivity
to distribution changes. | [
"Zekun Cai",
"Renhe Jiang",
"Xinyu Yang",
"Zhaonan Wang",
"Diansheng Guo",
"Hiroki Kobayashi",
"Xuan Song",
"Ryosuke Shibasaki"
] | 2023-09-25 15:22:28 | http://arxiv.org/abs/2309.14216v1 | http://arxiv.org/pdf/2309.14216v1 | 2309.14216v1 |
Continual Driving Policy Optimization with Closed-Loop Individualized Curricula | The safety of autonomous vehicles (AV) has been a long-standing top concern,
stemming from the absence of rare and safety-critical scenarios in the
long-tail naturalistic driving distribution. To tackle this challenge, a surge
of research in scenario-based autonomous driving has emerged, with a focus on
generating high-risk driving scenarios and applying them to conduct
safety-critical testing of AV models. However, limited work has explored
the reuse of these extensive scenarios to iteratively improve AV models.
Moreover, it remains intractable and challenging to filter through gigantic
scenario libraries collected from other AV models with distinct behaviors,
attempting to extract transferable information for current AV improvement.
Therefore, we develop a continual driving policy optimization framework
featuring Closed-Loop Individualized Curricula (CLIC), which we factorize into
a set of standardized sub-modules for flexible implementation choices: AV
Evaluation, Scenario Selection, and AV Training. CLIC frames AV Evaluation as a
collision prediction task, where it estimates the chance of AV failures in
these scenarios at each iteration. Subsequently, by re-sampling from historical
scenarios based on these failure probabilities, CLIC tailors individualized
curricula for downstream training, aligning them with the evaluated capability
of AV. Accordingly, CLIC not only maximizes the utilization of the vast
pre-collected scenario library for closed-loop driving policy optimization but
also facilitates AV improvement by individualizing its training with more
challenging cases out of those poorly organized scenarios. Experimental results
clearly indicate that CLIC surpasses other curriculum-based training
strategies, showing substantial improvement in managing risky scenarios, while
still maintaining proficiency in handling simpler cases. | [
"Haoyi Niu",
"Yizhou Xu",
"Xingjian Jiang",
"Jianming Hu"
] | 2023-09-25 15:14:54 | http://arxiv.org/abs/2309.14209v1 | http://arxiv.org/pdf/2309.14209v1 | 2309.14209v1 |
Framework based on complex networks to model and mine patient pathways | The automatic discovery of a model to represent the history of encounters of
a group of patients with the healthcare system -- the so-called "pathway of
patients" -- is a new field of research that supports clinical and
organisational decisions to improve the quality and efficiency of the treatment
provided. The pathways of patients with chronic conditions tend to vary
significantly from one person to another, have repetitive tasks, and demand the
analysis of multiple perspectives (interventions, diagnoses, medical
specialities, among others) influencing the results. Therefore, modelling and
mining those pathways is still a challenging task. In this work, we propose a
framework comprising: (i) a pathway model based on a multi-aspect graph, (ii) a
novel dissimilarity measurement to compare pathways taking the elapsed time
into account, and (iii) a mining method based on traditional centrality
measures to discover the most relevant steps of the pathways. We evaluated the
framework using the study cases of pregnancy and diabetes, which revealed its
usefulness in finding clusters of similar pathways, representing them in an
easy-to-interpret way, and highlighting the most significant patterns according
to multiple perspectives. | [
"Caroline de Oliveira Costa Souza Rosa",
"Márcia Ito",
"Alex Borges Vieira",
"Klaus Wehmuth",
"Antônio Tadeu Azevedo Gomes"
] | 2023-09-25 15:11:52 | http://arxiv.org/abs/2309.14208v1 | http://arxiv.org/pdf/2309.14208v1 | 2309.14208v1 |
(Predictable) Performance Bias in Unsupervised Anomaly Detection | Background: With the ever-increasing amount of medical imaging data, the
demand for algorithms to assist clinicians has grown. Unsupervised anomaly
detection (UAD) models promise to aid in the crucial first step of disease
detection. While previous studies have thoroughly explored fairness in
supervised models in healthcare, for UAD, this has so far been unexplored.
Methods: In this study, we evaluated how dataset composition regarding
subgroups manifests in disparate performance of UAD models along multiple
protected variables on three large-scale publicly available chest X-ray
datasets. Our experiments were validated using two state-of-the-art UAD models
for medical images. Finally, we introduced a novel subgroup-AUROC (sAUROC)
metric, which aids in quantifying fairness in machine learning.
Findings: Our experiments revealed empirical "fairness laws" (similar to
"scaling laws" for Transformers) for training-dataset composition: Linear
relationships between anomaly detection performance within a subpopulation and
its representation in the training data. Our study further revealed performance
disparities, even in the case of balanced training data, and compound effects
that exacerbate the drop in performance for subjects associated with multiple
adversely affected groups.
Interpretation: Our study quantified the disparate performance of UAD models
against certain demographic subgroups. Importantly, we showed that this
unfairness cannot be mitigated by balanced representation alone. Instead, the
representation of some subgroups seems harder to learn by UAD models than that
of others. The empirical fairness laws discovered in our study make disparate
performance in UAD models easier to estimate and aid in determining the most
desirable dataset composition. | [
"Felix Meissen",
"Svenja Breuer",
"Moritz Knolle",
"Alena Buyx",
"Ruth Müller",
"Georgios Kaissis",
"Benedikt Wiestler",
"Daniel Rückert"
] | 2023-09-25 14:57:43 | http://arxiv.org/abs/2309.14198v1 | http://arxiv.org/pdf/2309.14198v1 | 2309.14198v1 |
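The abstract introduces the subgroup-AUROC (sAUROC) metric without defining it; one natural reading — AUROC computed within each demographic subgroup via the Mann-Whitney formulation — can be sketched as follows. The within-subgroup grouping and all names here are assumptions made for illustration, not the paper's exact definition.

```python
def auroc(pos_scores, neg_scores):
    """Mann-Whitney AUROC: probability that a random anomalous sample
    scores above a random normal one (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def subgroup_auroc(scores, labels, groups, group):
    """AUROC restricted to one subgroup's anomalous (label 1) and
    normal (label 0) samples."""
    pos = [s for s, y, g in zip(scores, labels, groups) if y == 1 and g == group]
    neg = [s for s, y, g in zip(scores, labels, groups) if y == 0 and g == group]
    return auroc(pos, neg)

scores = [0.9, 0.8, 0.2, 0.1, 0.4, 0.6]   # anomaly scores (illustrative)
labels = [1, 1, 0, 0, 1, 0]               # 1 = anomalous, 0 = normal
groups = ["F", "F", "F", "F", "M", "M"]   # protected-variable subgroup
print(subgroup_auroc(scores, labels, groups, "F"))  # 1.0
print(subgroup_auroc(scores, labels, groups, "M"))  # 0.0
```

Comparing such per-subgroup numbers against each subgroup's share of the training data is what exposes the linear "fairness laws" the study reports.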
Learning Restricted Boltzmann Machines with greedy quantum search | Restricted Boltzmann Machines (RBMs) are widely used probabilistic undirected
graphical models with visible and latent nodes, playing an important role in
statistics and machine learning. The task of structure learning for RBMs
involves inferring the underlying graph by using samples from the visible
nodes. Specifically, learning the two-hop neighbors of each visible node allows
for the inference of the graph structure. Prior research has addressed the
structure learning problem for specific classes of RBMs, namely ferromagnetic
and locally consistent RBMs. In this paper, we extend the scope to the quantum
computing domain and propose corresponding quantum algorithms for this problem.
Our study demonstrates that the proposed quantum algorithms yield a polynomial
speedup compared to the classical algorithms for learning the structure of
these two classes of RBMs. | [
"Liming Zhao",
"Aman Agrawal",
"Patrick Rebentrost"
] | 2023-09-25 14:56:30 | http://arxiv.org/abs/2309.14196v1 | http://arxiv.org/pdf/2309.14196v1 | 2309.14196v1 |
LLMCarbon: Modeling the end-to-end Carbon Footprint of Large Language Models | The carbon footprint associated with large language models (LLMs) is a
significant concern, encompassing emissions from their training, inference,
experimentation, and storage processes, including operational and embodied
carbon emissions. An essential aspect is accurately estimating the carbon
impact of emerging LLMs even before their training, which heavily relies on GPU
usage. Existing studies have reported the carbon footprint of LLM training, but
only one tool, mlco2, can predict the carbon footprint of new neural networks
prior to physical training. However, mlco2 has several serious limitations. It
cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs,
disregards critical architectural parameters, focuses solely on GPUs, and
cannot model embodied carbon footprints. Addressing these gaps, we introduce
\textit{LLMCarbon}, an end-to-end carbon footprint projection model designed
for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly
enhances the accuracy of carbon footprint estimations for various LLMs. | [
"Ahmad Faiz",
"Sotaro Kaneda",
"Ruhan Wang",
"Rita Osi",
"Parteek Sharma",
"Fan Chen",
"Lei Jiang"
] | 2023-09-25 14:50:04 | http://arxiv.org/abs/2309.14393v1 | http://arxiv.org/pdf/2309.14393v1 | 2309.14393v1 |
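The operational part of a training carbon footprint reduces to energy drawn times grid carbon intensity. A back-of-envelope sketch in that spirit (the function name and all parameter values are illustrative assumptions, not LLMCarbon's model; embodied carbon would be added on top of this operational term):

```python
# Operational training emissions: GPU energy, inflated by datacenter
# overhead (PUE), times the grid's carbon intensity. Illustrative only.
def training_co2_kg(gpu_count, hours, gpu_watts, pue, grid_kgco2_per_kwh):
    energy_kwh = gpu_count * hours * gpu_watts / 1000.0 * pue
    return energy_kwh * grid_kgco2_per_kwh

# Hypothetical run: 512 GPUs at 300 W for two weeks, PUE 1.1, 0.4 kgCO2/kWh.
co2 = training_co2_kg(gpu_count=512, hours=24 * 14, gpu_watts=300,
                      pue=1.1, grid_kgco2_per_kwh=0.4)
```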
Federated Learning Under Restricted User Availability | Federated Learning (FL) is a decentralized machine learning framework that
enables collaborative model training while respecting data privacy. In various
applications, non-uniform availability or participation of users is unavoidable
due to an adverse or stochastic environment, the latter often being
uncontrollable during learning. Here, we posit a generic user selection
mechanism implementing a possibly randomized, stationary selection policy,
suggestively termed as a Random Access Model (RAM). We propose a new
formulation of the FL problem which effectively captures and mitigates limited
participation of data originating from infrequent, or restricted users, at the
presence of a RAM. By employing the Conditional Value-at-Risk (CVaR) over the
(unknown) RAM distribution, we extend the expected loss FL objective to a
risk-aware objective, enabling the design of an efficient training algorithm
that is completely oblivious to the RAM, and with essentially identical
complexity as FedAvg. Our experiments on synthetic and benchmark datasets show
that the proposed approach achieves significantly improved performance as
compared with standard FL, under a variety of setups. | [
"Periklis Theodoropoulos",
"Konstantinos E. Nikolakakis",
"Dionysis Kalogerias"
] | 2023-09-25 14:40:27 | http://arxiv.org/abs/2309.14176v1 | http://arxiv.org/pdf/2309.14176v1 | 2309.14176v1 |
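The CVaR objective above replaces the mean over client losses with the mean of the worst alpha-fraction, which up-weights infrequent or restricted users. A minimal sketch of that reweighting (loss values are invented; this is not the paper's training algorithm):

```python
import numpy as np

def cvar(losses, alpha):
    """CVaR_alpha: mean of the worst alpha-fraction of losses (equivalently,
    the minimum over t of  t + E[(losses - t)_+] / alpha)."""
    srt = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * srt.size)))
    return float(srt[:k].mean())

# Two clients are rarely selected by the RAM and end up with large losses;
# CVaR concentrates the objective on exactly those clients.
client_losses = [0.2, 0.3, 0.25, 1.5, 1.2]
risk_neutral = float(np.mean(client_losses))   # standard expected-loss objective
risk_aware = cvar(client_losses, alpha=0.4)    # worst 40% of clients
```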
Designing and evaluating an online reinforcement learning agent for physical exercise recommendations in N-of-1 trials | Personalized adaptive interventions offer the opportunity to increase patient
benefits, however, there are challenges in their planning and implementation.
Once implemented, it is an important question whether personalized adaptive
interventions are indeed clinically more effective compared to a fixed gold
standard intervention. In this paper, we present an innovative N-of-1 trial
study design testing whether implementing a personalized intervention by an
online reinforcement learning agent is feasible and effective. Throughout, we
use a new study on physical exercise recommendations to reduce pain in
endometriosis for illustration. We describe the design of a contextual bandit
recommendation agent and evaluate the agent in simulation studies. The results
show that adaptive interventions add complexity to the design and
implementation process, but have the potential to improve patients' benefits
even if only a few observations are available. In order to quantify the expected
benefit, data from previous interventional studies is required. We expect our
approach to be transferable to other interventions and clinical applications. | [
"Dominik Meier",
"Ipek Ensari",
"Stefan Konigorski"
] | 2023-09-25 14:08:21 | http://arxiv.org/abs/2309.14156v1 | http://arxiv.org/pdf/2309.14156v1 | 2309.14156v1 |
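As a toy illustration of the agent's core mechanics, an epsilon-greedy bandit over exercise arms (the paper's agent is a contextual bandit, which would additionally condition value estimates on patient context; arm names and effect sizes here are invented):

```python
import random

random.seed(0)

arms = ["yoga", "walking", "stretching"]   # hypothetical exercise options
counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}            # running mean of observed benefit

def select(eps=0.1):
    """Explore with probability eps, otherwise exploit the best estimate."""
    if random.random() < eps:
        return random.choice(arms)
    return max(arms, key=lambda a: values[a])

def update(arm, reward):
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

# Simulated pain-reduction effects per arm (assumed, for illustration only).
true_effect = {"yoga": 0.6, "walking": 0.3, "stretching": 0.1}
for _ in range(2000):
    a = select()
    update(a, random.gauss(true_effect[a], 0.1))

best = max(arms, key=lambda a: values[a])
```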
Extragradient Type Methods for Riemannian Variational Inequality Problems | Riemannian convex optimization and minimax optimization have recently drawn
considerable attention. Their appeal lies in their capacity to adeptly manage
the non-convexity of the objective function as well as constraints inherent in
the feasible set in the Euclidean sense. In this work, we delve into monotone
Riemannian Variational Inequality Problems (RVIPs), which encompass both
Riemannian convex optimization and minimax optimization as particular cases. In
the context of Euclidean space, it is established that the last-iterates of
both the extragradient (EG) and past extragradient (PEG) methods converge to
the solution of monotone variational inequality problems at a rate of
$O\left(\frac{1}{\sqrt{T}}\right)$ (Cai et al., 2022). However, analogous
behavior on Riemannian manifolds remains an open question. To bridge this gap,
we introduce the Riemannian extragradient (REG) and Riemannian past
extragradient (RPEG) methods. We demonstrate that both exhibit
$O\left(\frac{1}{\sqrt{T}}\right)$ last-iterate convergence. Additionally, we
show that the average-iterate convergence of both REG and RPEG is
$O\left(\frac{1}{{T}}\right)$, aligning with observations in the Euclidean case
(Mokhtari et al., 2020). These results are enabled by judiciously addressing
the holonomy effect so that additional complications in Riemannian cases can be
reduced and the Euclidean proof inspired by the performance estimation problem
(PEP) technique or the sum-of-squares (SOS) technique can be applied again. | [
"Zihao Hu",
"Guanghui Wang",
"Xi Wang",
"Andre Wibisono",
"Jacob Abernethy",
"Molei Tao"
] | 2023-09-25 14:08:02 | http://arxiv.org/abs/2309.14155v1 | http://arxiv.org/pdf/2309.14155v1 | 2309.14155v1 |
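In the Euclidean case, the extragradient step evaluates the operator at an extrapolated point before updating. A sketch on the bilinear saddle problem min_x max_y xy, where plain gradient descent-ascent spirals outward while EG contracts to the solution (0, 0) (step size and iteration counts are arbitrary choices):

```python
# Monotone operator of min_x max_y x*y: F(x, y) = (df/dx, -df/dy) = (y, -x).
def F(x, y):
    return y, -x

def extragradient(x, y, eta=0.1, steps=2000):
    for _ in range(steps):
        gx, gy = F(x, y)
        xh, yh = x - eta * gx, y - eta * gy   # extrapolation step
        gx, gy = F(xh, yh)
        x, y = x - eta * gx, y - eta * gy     # update with midpoint operator
    return x, y

def gradient_descent_ascent(x, y, eta=0.1, steps=2000):
    for _ in range(steps):
        gx, gy = F(x, y)
        x, y = x - eta * gx, y - eta * gy
    return x, y

x_eg, y_eg = extragradient(1.0, 1.0)               # contracts toward (0, 0)
x_gda, y_gda = gradient_descent_ascent(1.0, 1.0)   # spirals outward
```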
One-Class Classification for Intrusion Detection on Vehicular Networks | Controller Area Network bus systems within vehicular networks are not
equipped with the tools necessary to ward off and protect themselves from
modern cyber-security threats. Work has been done on using machine learning
methods to detect and report these attacks, but common methods are not robust
towards unknown attacks. These methods usually rely on there being a sufficient
representation of attack data, which may not be available due to there either
not being enough data present to adequately represent its distribution or the
distribution itself is too diverse in nature for there to be a sufficient
representation of it. With the use of one-class classification methods, this
issue can be mitigated as only normal data is required to train a model for the
detection of anomalous instances. Research has been done on the efficacy of
these methods, most notably One-Class Support Vector Machine and Support Vector
Data Description, but many new extensions of these works have been proposed and
have yet to be tested for injection attacks in vehicular networks. In this
paper, we investigate the performance of various state-of-the-art one-class
classification methods for detecting injection attacks on Controller Area
Network bus traffic. We investigate the effectiveness of these techniques on
attacks launched on Controller Area Network buses from two different vehicles
during normal operation and while being attacked. We observe that the Subspace
Support Vector Data Description method outperformed all other tested methods
with a Gmean of about 85%. | [
"Jake Guidry",
"Fahad Sohrab",
"Raju Gottumukkala",
"Satya Katragadda",
"Moncef Gabbouj"
] | 2023-09-25 13:42:22 | http://arxiv.org/abs/2309.14134v1 | http://arxiv.org/pdf/2309.14134v1 | 2309.14134v1 |
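The one-class setting above can be illustrated with the simplest SVDD-style detector: fit a hypersphere around normal-traffic features and flag anything outside it. The feature vectors and threshold below are invented for illustration; the paper evaluates kernel and subspace variants such as Subspace SVDD:

```python
import math

def fit_center(points):
    """Center of a hypersphere enclosing the normal training data."""
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def distance(p, c):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, c)))

# Train on normal CAN-bus-like feature vectors only (hypothetical 2-D features).
normal = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9), (1.0, 1.2), (0.8, 1.0)]
center = fit_center(normal)
radius = max(distance(p, center) for p in normal)  # simplest boundary choice

def is_attack(p):
    return distance(p, center) > radius

# An injected message far from normal operation falls outside the sphere.
flagged = is_attack((5.0, 5.0))
```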
On the Relation between Internal Language Model and Sequence Discriminative Training for Neural Transducers | Internal language model (ILM) subtraction has been widely applied to improve
the performance of the RNN-Transducer with external language model (LM) fusion
for speech recognition. In this work, we show that sequence discriminative
training has a strong correlation with ILM subtraction from both theoretical
and empirical points of view. Theoretically, we derive that the global optimum
of maximum mutual information (MMI) training shares a similar formula as ILM
subtraction. Empirically, we show that ILM subtraction and sequence
discriminative training achieve similar performance across a wide range of
experiments on Librispeech, including both MMI and minimum Bayes risk (MBR)
criteria, as well as neural transducers and LMs of both full and limited
context. The benefit of ILM subtraction also becomes much smaller after
sequence discriminative training. We also provide an in-depth study to show
that sequence discriminative training has a minimal effect on the commonly used
zero-encoder ILM estimation, but a joint effect on both encoder and prediction
+ joint network for posterior probability reshaping including both ILM and
blank suppression. | [
"Zijian Yang",
"Wei Zhou",
"Ralf Schlüter",
"Hermann Ney"
] | 2023-09-25 13:35:28 | http://arxiv.org/abs/2309.14130v1 | http://arxiv.org/pdf/2309.14130v1 | 2309.14130v1 |
Driving behavior-guided battery health monitoring for electric vehicles using machine learning | An accurate estimation of the state of health (SOH) of batteries is critical
to ensuring the safe and reliable operation of electric vehicles (EVs).
Feature-based machine learning methods have exhibited enormous potential for
rapidly and precisely monitoring battery health status. However, simultaneously
using various health indicators (HIs) may weaken estimation performance due to
feature redundancy. Furthermore, ignoring real-world driving behaviors can lead
to inaccurate estimation results as some features are rarely accessible in
practical scenarios. To address these issues, we proposed a feature-based
machine learning pipeline for reliable battery health monitoring, enabled by
evaluating the acquisition probability of features under real-world driving
conditions. We first summarized and analyzed various individual HIs with
mechanism-related interpretations, which provide insightful guidance on how
these features relate to battery degradation modes. Moreover, all features were
carefully evaluated and screened based on estimation accuracy and correlation
analysis on three public battery degradation datasets. Finally, the
scenario-based feature fusion and acquisition probability-based practicality
evaluation method construct a useful tool for feature extraction with
consideration of driving behaviors. This work highlights the importance of
balancing the performance and practicality of HIs during the development of
feature-based battery health monitoring algorithms. | [
"Nanhua Jiang",
"Jiawei Zhang",
"Weiran Jiang",
"Yao Ren",
"Jing Lin",
"Edwin Khoo",
"Ziyou Song"
] | 2023-09-25 13:24:53 | http://arxiv.org/abs/2309.14125v1 | http://arxiv.org/pdf/2309.14125v1 | 2309.14125v1 |
Physics-Informed Solution of The Stationary Fokker-Plank Equation for a Class of Nonlinear Dynamical Systems: An Evaluation Study | The Fokker-Planck (FP) equation is a linear partial differential equation
which governs the temporal and spatial evolution of the probability density
function (PDF) associated with the response of stochastic dynamical systems. An
exact analytical solution of the FP equation is only available for a limited
subset of dynamical systems. Semi-analytical methods are available for larger,
yet still a small subset of systems, while traditional computational methods;
e.g. Finite Elements and Finite Difference require dividing the computational
domain into a grid of discrete points, which incurs significant computational
costs for high-dimensional systems. Physics-informed learning offers a
potentially powerful alternative to traditional computational schemes. To
evaluate its potential, we present a data-free, physics-informed neural network
(PINN) framework to solve the FP equation for a class of nonlinear stochastic
dynamical systems. In particular, through several examples concerning the
stochastic response of the Duffing, Van der Pol, and the Duffing-Van der Pol
oscillators, we assess the ability and accuracy of the PINN framework in $i)$
predicting the PDF under the combined effect of additive and multiplicative
noise, $ii)$ capturing P-bifurcations of the PDF, and $iii)$ effectively
treating high-dimensional systems. Through comparisons with Monte-Carlo
simulations and the available literature, we show that PINN can effectively
address all of the afore-described points. We also demonstrate that the
computational time associated with the PINN solution can be substantially
reduced by using transfer learning. | [
"Hussam Alhussein",
"Mohammed Khasawneh",
"Mohammed F. Daqaq"
] | 2023-09-25 13:17:34 | http://arxiv.org/abs/2309.16725v1 | http://arxiv.org/pdf/2309.16725v1 | 2309.16725v1 |
MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks | Predicting multiple real-world tasks in a single model often requires a
particularly diverse feature space. Multimodal (MM) models aim to extract the
synergistic predictive potential of multiple data types to create a shared
feature space with aligned semantic meaning across inputs of drastically
varying sizes (i.e. images, text, sound). Most current MM architectures fuse
these representations in parallel, which not only limits their interpretability
but also creates a dependency on modality availability. We present MultiModN, a
multimodal, modular network that fuses latent representations in a sequence of
any number, combination, or type of modality while providing granular real-time
predictive feedback on any number or combination of predictive tasks.
MultiModN's composable pipeline is interpretable-by-design, as well as innately
multi-task and robust to the fundamental issue of biased missingness. We
perform four experiments on several benchmark MM datasets across 10 real-world
tasks (predicting medical diagnoses, academic performance, and weather), and
show that MultiModN's sequential MM fusion does not compromise performance
compared with a baseline of parallel fusion. By simulating the challenging bias
of missing not-at-random (MNAR), this work shows that, contrary to MultiModN,
parallel fusion baselines erroneously learn MNAR and suffer catastrophic
failure when faced with different patterns of MNAR at inference. To the best of
our knowledge, this is the first inherently MNAR-resistant approach to MM
modeling. In conclusion, MultiModN provides granular insights, robustness, and
flexibility without compromising performance. | [
"Vinitra Swamy",
"Malika Satayeva",
"Jibril Frej",
"Thierry Bossy",
"Thijs Vogels",
"Martin Jaggi",
"Tanja Käser",
"Mary-Anne Hartley"
] | 2023-09-25 13:16:57 | http://arxiv.org/abs/2309.14118v1 | http://arxiv.org/pdf/2309.14118v1 | 2309.14118v1 |
HyperTrack: Neural Combinatorics for High Energy Physics | Combinatorial inverse problems in high energy physics span enormous
algorithmic challenges. This work presents a new deep learning driven
clustering algorithm that utilizes a space-time non-local trainable graph
constructor, a graph neural network, and a set transformer. The model is
trained with loss functions at the graph node, edge and object level, including
contrastive learning and meta-supervision. The algorithm can be applied to
problems such as charged particle tracking, calorimetry, pile-up
discrimination, jet physics, and beyond. We showcase the effectiveness of this
cutting-edge AI approach through particle tracking simulations. The code is
available online. | [
"Mikael Mieskolainen"
] | 2023-09-25 13:12:08 | http://arxiv.org/abs/2309.14113v1 | http://arxiv.org/pdf/2309.14113v1 | 2309.14113v1 |
Wav2vec-based Detection and Severity Level Classification of Dysarthria from Speech | Automatic detection and severity level classification of dysarthria directly
from acoustic speech signals can be used as a tool in medical diagnosis. In
this work, the pre-trained wav2vec 2.0 model is studied as a feature extractor
to build detection and severity level classification systems for dysarthric
speech. The experiments were carried out with the popularly used UA-speech
database. In the detection experiments, the results revealed that the best
performance was obtained using the embeddings from the first layer of the
wav2vec model that yielded an absolute improvement of 1.23% in accuracy
compared to the best performing baseline feature (spectrogram). In the studied
severity level classification task, the results revealed that the embeddings
from the final layer gave an absolute improvement of 10.62% in accuracy
compared to the best baseline features (mel-frequency cepstral coefficients). | [
"Farhad Javanmardi",
"Saska Tirronen",
"Manila Kodali",
"Sudarsana Reddy Kadiri",
"Paavo Alku"
] | 2023-09-25 13:00:33 | http://arxiv.org/abs/2309.14107v2 | http://arxiv.org/pdf/2309.14107v2 | 2309.14107v2 |
Affective Game Computing: A Survey | This paper surveys the current state of the art in affective computing
principles, methods and tools as applied to games. We review this emerging
field, namely affective game computing, through the lens of the four core
phases of the affective loop: game affect elicitation, game affect sensing,
game affect detection and game affect adaptation. In addition, we provide a
taxonomy of terms, methods and approaches used across the four phases of the
affective game loop and situate the field within this taxonomy. We continue
with a comprehensive review of available affect data collection methods with
regards to gaming interfaces, sensors, annotation protocols, and available
corpora. The paper concludes with a discussion on the current limitations of
affective game computing and our vision for the most promising future research
directions in the field. | [
"Georgios N. Yannakakis",
"David Melhart"
] | 2023-09-25 12:52:48 | http://arxiv.org/abs/2309.14104v1 | http://arxiv.org/pdf/2309.14104v1 | 2309.14104v1 |
Tracking Control for a Spherical Pendulum via Curriculum Reinforcement Learning | Reinforcement Learning (RL) allows learning non-trivial robot control laws
purely from data. However, many successful applications of RL have relied on
ad-hoc regularizations, such as hand-crafted curricula, to regularize the
learning performance. In this paper, we pair a recent algorithm for
automatically building curricula with RL on massively parallelized simulations
to learn a tracking controller for a spherical pendulum on a robotic arm via
RL. Through an improved optimization scheme that better respects the
non-Euclidean task structure, we allow the method to reliably generate
curricula of trajectories to be tracked, resulting in faster and more robust
learning compared to an RL baseline that does not exploit this form of
structured learning. The learned policy matches the performance of an optimal
control baseline on the real system, demonstrating the potential of curriculum
RL to jointly learn state estimation and control for non-linear tracking tasks. | [
"Pascal Klink",
"Florian Wolf",
"Kai Ploeger",
"Jan Peters",
"Joni Pajarinen"
] | 2023-09-25 12:48:47 | http://arxiv.org/abs/2309.14096v1 | http://arxiv.org/pdf/2309.14096v1 | 2309.14096v1 |
On the Benefit of Optimal Transport for Curriculum Reinforcement Learning | Curriculum reinforcement learning (CRL) allows solving complex tasks by
generating a tailored sequence of learning tasks, starting from easy ones and
subsequently increasing their difficulty. Although the potential of curricula
in RL has been clearly shown in various works, it is less clear how to generate
them for a given learning environment, resulting in various methods aiming to
automate this task. In this work, we focus on framing curricula as
interpolations between task distributions, which has previously been shown to
be a viable approach to CRL. Identifying key issues of existing methods, we
frame the generation of a curriculum as a constrained optimal transport problem
between task distributions. Benchmarks show that this way of curriculum
generation can improve upon existing CRL methods, yielding high performance in
various tasks with different characteristics. | [
"Pascal Klink",
"Carlo D'Eramo",
"Jan Peters",
"Joni Pajarinen"
] | 2023-09-25 12:31:37 | http://arxiv.org/abs/2309.14091v1 | http://arxiv.org/pdf/2309.14091v1 | 2309.14091v1 |
Convolutional autoencoder-based multimodal one-class classification | One-class classification refers to approaches of learning using data from a
single class only. In this paper, we propose a deep learning one-class
classification method suitable for multimodal data, which relies on two
convolutional autoencoders jointly trained to reconstruct the positive input
data while obtaining the data representations in the latent space as compact as
possible. During inference, the distance of the latent representation of an
input to the origin can be used as an anomaly score. Experimental results using
a multimodal macroinvertebrate image classification dataset show that the
proposed multimodal method yields better results as compared to the unimodal
approach. Furthermore, we study the effect of different input image sizes and
investigate how recently proposed feature diversity regularizers affect the
performance of our approach. We show that such regularizers improve
performance. | [
"Firas Laakom",
"Fahad Sohrab",
"Jenni Raitoharju",
"Alexandros Iosifidis",
"Moncef Gabbouj"
] | 2023-09-25 12:31:18 | http://arxiv.org/abs/2309.14090v1 | http://arxiv.org/pdf/2309.14090v1 | 2309.14090v1 |
BiSinger: Bilingual Singing Voice Synthesis | Although Singing Voice Synthesis (SVS) has made great strides with
Text-to-Speech (TTS) techniques, multilingual singing voice modeling remains
relatively unexplored. This paper presents BiSinger, a bilingual pop SVS system
for English and Chinese Mandarin. Current systems require separate models per
language and cannot accurately represent both Chinese and English, hindering
code-switch SVS. To address this gap, we design a shared representation between
Chinese and English singing voices, achieved by using the CMU dictionary with
mapping rules. We fuse monolingual singing datasets with open-source singing
voice conversion techniques to generate bilingual singing voices while also
exploring the potential use of bilingual speech data. Experiments affirm that
our language-independent representation and incorporation of related datasets
enable a single model with enhanced performance in English and code-switch SVS
while maintaining Chinese song performance. Audio samples are available at
https://bisinger-svs.github.io. | [
"Huali Zhou",
"Yueqian Lin",
"Yao Shi",
"Peng Sun",
"Ming Li"
] | 2023-09-25 12:31:05 | http://arxiv.org/abs/2309.14089v2 | http://arxiv.org/pdf/2309.14089v2 | 2309.14089v2 |
REPA: Client Clustering without Training and Data Labels for Improved Federated Learning in Non-IID Settings | Clustering clients into groups that exhibit relatively homogeneous data
distributions represents one of the major means of improving the performance of
federated learning (FL) in non-independent and identically distributed
(non-IID) data settings. Yet, the applicability of current state-of-the-art
approaches remains limited as these approaches cluster clients based on
information, such as the evolution of local model parameters, that is only
obtainable through actual on-client training. On the other hand, there is a
need to make FL models available to clients who are not able to perform the
training themselves, as they do not have the processing capabilities required
for training, or simply want to use the model without participating in the
training. Furthermore, the existing alternative approaches that avert the
training still require that individual clients have a sufficient amount of
labeled data upon which the clustering is based, essentially assuming that each
client is a data annotator. In this paper, we present REPA, an approach to
client clustering in non-IID FL settings that requires neither training nor
labeled data collection. REPA uses a novel supervised autoencoder-based method
to create embeddings that profile a client's underlying data-generating
processes without exposing the data to the server and without requiring local
training. Our experimental analysis over three different datasets demonstrates
that REPA delivers state-of-the-art model performance while expanding the
applicability of cluster-based FL to previously uncovered use cases. | [
"Boris Radovič",
"Veljko Pejović"
] | 2023-09-25 12:30:43 | http://arxiv.org/abs/2309.14088v1 | http://arxiv.org/pdf/2309.14088v1 | 2309.14088v1 |
Analysis and Detection of Pathological Voice using Glottal Source Features | Automatic detection of voice pathology enables objective assessment and
earlier intervention for the diagnosis. This study provides a systematic
analysis of glottal source features and investigates their effectiveness in
voice pathology detection. Glottal source features are extracted using glottal
flows estimated with the quasi-closed phase (QCP) glottal inverse filtering
method, using approximate glottal source signals computed with the zero
frequency filtering (ZFF) method, and using acoustic voice signals directly. In
addition, we propose to derive mel-frequency cepstral coefficients (MFCCs) from
the glottal source waveforms computed by QCP and ZFF to effectively capture the
variations in glottal source spectra of pathological voice. Experiments were
carried out using two databases, the Hospital Universitario Principe de
Asturias (HUPA) database and the Saarbrucken Voice Disorders (SVD) database.
Analysis of features revealed that the glottal source contains information that
discriminates normal and pathological voice. Pathology detection experiments
were carried out using support vector machine (SVM). From the detection
experiments it was observed that the performance achieved with the studied
glottal source features is comparable or better than that of conventional MFCCs
and perceptual linear prediction (PLP) features. The best detection performance
was achieved when the glottal source features were combined with the
conventional MFCCs and PLP features, which indicates the complementary nature
of the features. | [
"Sudarsana Reddy Kadiri",
"Paavo Alku"
] | 2023-09-25 12:14:25 | http://arxiv.org/abs/2309.14080v2 | http://arxiv.org/pdf/2309.14080v2 | 2309.14080v2 |
ODE-based Recurrent Model-free Reinforcement Learning for POMDPs | Neural ordinary differential equations (ODEs) are widely recognized as the
standard for modeling physical mechanisms, which help to perform approximate
inference in unknown physical or biological environments. In partially
observable (PO) environments, how to infer unseen information from raw
observations puzzled the agents. By using a recurrent policy with a compact
context, context-based reinforcement learning provides a flexible way to
extract unobservable information from historical transitions. To help the agent
extract more dynamics-related information, we present a novel ODE-based
recurrent model combined with a model-free reinforcement learning (RL) framework
to solve partially observable Markov decision processes (POMDPs). We
experimentally demonstrate the efficacy of our methods across various PO
continuous control and meta-RL tasks. Furthermore, our experiments illustrate
that our method is robust against irregular observations, owing to the ability
of ODEs to model irregularly-sampled time series. | [
"Xuanle Zhao",
"Duzhen Zhang",
"Liyuan Han",
"Tielin Zhang",
"Bo Xu"
] | 2023-09-25 12:13:56 | http://arxiv.org/abs/2309.14078v1 | http://arxiv.org/pdf/2309.14078v1 | 2309.14078v1 |
Maximum Likelihood Estimation of Latent Variable Structural Equation Models: A Neural Network Approach | We propose a graphical structure for structural equation models that is
stable under marginalization, under linearity and Gaussianity assumptions. We
show that computing the maximum likelihood estimation of this model is
equivalent to training a neural network. We implement a GPU-based algorithm
that computes the maximum likelihood estimation of these models. | [
"Mehrzad Saremi"
] | 2023-09-25 12:07:00 | http://arxiv.org/abs/2309.14073v2 | http://arxiv.org/pdf/2309.14073v2 | 2309.14073v2 |
Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion Models | Because diffusion models have shown impressive performances in a number of
tasks, such as image synthesis, there is a trend in recent works to prove (with
certain assumptions) that these models have strong approximation capabilities.
In this paper, we show that current diffusion models actually have an
expressive bottleneck in backward denoising and some assumption made by
existing theoretical guarantees is too strong. Based on this finding, we prove
that diffusion models have unbounded errors in both local and global denoising.
In light of our theoretical studies, we introduce soft mixture denoising (SMD),
an expressive and efficient model for backward denoising. SMD not only permits
diffusion models to well approximate any Gaussian mixture distributions in
theory, but also is simple and efficient for implementation. Our experiments on
multiple image datasets show that SMD significantly improves different types of
diffusion models (e.g., DDPM), especially in the situation of few backward
iterations. | [
"Yangming Li",
"Boris van Breugel",
"Mihaela van der Schaar"
] | 2023-09-25 12:03:32 | http://arxiv.org/abs/2309.14068v2 | http://arxiv.org/pdf/2309.14068v2 | 2309.14068v2 |
FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning | Exemplar-free class-incremental learning (CIL) poses several challenges since
it prohibits the rehearsal of data from previous tasks and thus suffers from
catastrophic forgetting. Recent approaches to incrementally learning the
classifier by freezing the feature extractor after the first task have gained
much attention. In this paper, we explore prototypical networks for CIL, which
generate new class prototypes using the frozen feature extractor and classify
the features based on the Euclidean distance to the prototypes. In an analysis
of the feature distributions of classes, we show that classification based on
Euclidean metrics is successful for jointly trained features. However, when
learning from non-stationary data, we observe that the Euclidean metric is
suboptimal and that feature distributions are heterogeneous. To address this
challenge, we revisit the anisotropic Mahalanobis distance for CIL. In
addition, we empirically show that modeling the feature covariance relations is
better than previous attempts at sampling features from normal distributions
and training a linear classifier. Unlike existing methods, our approach
generalizes to both many- and few-shot CIL settings, as well as to
domain-incremental settings. Interestingly, without updating the backbone
network, our method obtains state-of-the-art results on several standard
continual learning benchmarks. Code is available at
https://github.com/dipamgoswami/FeCAM. | [
"Dipam Goswami",
"Yuyang Liu",
"Bartłomiej Twardowski",
"Joost van de Weijer"
] | 2023-09-25 11:54:33 | http://arxiv.org/abs/2309.14062v1 | http://arxiv.org/pdf/2309.14062v1 | 2309.14062v1 |
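The contrast between Euclidean and Mahalanobis prototype classification shows up already on two synthetic classes with heterogeneous covariances (a sketch of the general idea only, not FeCAM's covariance shrinkage and tuning):

```python
import numpy as np

rng = np.random.default_rng(0)
class_a = rng.normal([0.0, 0.0], 0.1, size=(500, 2))  # tight cluster
class_b = rng.normal([4.0, 0.0], 2.0, size=(500, 2))  # broad cluster

def gaussian_stats(feats):
    """Class prototype (mean) and inverse covariance from frozen features."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

stats = {"a": gaussian_stats(class_a), "b": gaussian_stats(class_b)}

def mahalanobis_sq(x, mu, cov_inv):
    d = x - mu
    return float(d @ cov_inv @ d)

x = np.array([1.0, 0.0])
euclid_pick = min(stats, key=lambda c: float(np.linalg.norm(x - stats[c][0])))
maha_pick = min(stats, key=lambda c: mahalanobis_sq(x, *stats[c]))
# Euclidean picks the nearer mean ("a"); Mahalanobis discounts distance by
# each class's spread and assigns x to the broad class ("b").
```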
Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks | The increased attention to regulating the outputs of deep generative models,
driven by growing concerns about privacy and regulatory compliance, has
highlighted the need for effective control over these models. This necessity
arises from instances where generative models produce outputs containing
undesirable, offensive, or potentially harmful content. To tackle this
challenge, the concept of machine unlearning has emerged, aiming to forget
specific learned information or to erase the influence of undesired data
subsets from a trained model. The objective of this work is to prevent the
generation of outputs containing undesired features from a pre-trained GAN
where the underlying training data set is inaccessible. Our approach is
inspired by a crucial observation: the parameter space of GANs exhibits
meaningful directions that can be leveraged to suppress specific undesired
features. However, such directions usually result in the degradation of the
quality of generated samples. Our proposed method, known as
'Adapt-then-Unlearn,' excels at unlearning such undesirable features while also
maintaining the quality of generated samples. This method unfolds in two
stages: in the initial stage, we adapt the pre-trained GAN using negative
samples provided by the user, while in the subsequent stage, we focus on
unlearning the undesired feature. During the latter phase, we train the
pre-trained GAN using positive samples, incorporating a repulsion regularizer.
This regularizer encourages the model's parameters to be away from the
parameters associated with the adapted model from the first stage while also
maintaining the quality of generated samples. To the best of our knowledge, our
approach stands as the first method addressing unlearning in GANs. We validate the
effectiveness of our method through comprehensive experiments. | [
"Piyush Tiwary",
"Atri Guha",
"Subhodip Panda",
"Prathosh A. P"
] | 2023-09-25 11:36:20 | http://arxiv.org/abs/2309.14054v1 | http://arxiv.org/pdf/2309.14054v1 | 2309.14054v1 |
Revisiting LARS for Large Batch Training Generalization of Neural Networks | LARS and LAMB have emerged as prominent techniques in Large Batch Learning
(LBL), ensuring the stability of AI training. One of the primary challenges in
LBL is convergence stability, where the AI agent usually gets trapped in a
sharp minimizer. Addressing this challenge, a relatively recent technique,
known as warm-up, has been employed. However, warm-up lacks a strong
theoretical foundation, leaving the door open for further exploration of more
efficacious algorithms. In light of this situation, we conduct empirical
experiments to analyze the behaviors of the two most popular optimizers in the
LARS family: LARS and LAMB, with and without a warm-up strategy. Our analyses
provide a comprehensive understanding of LARS, LAMB, and the necessity of a warm-up
technique in LBL. Building upon these insights, we propose a novel algorithm
called Time Varying LARS (TVLARS), which facilitates robust training in the
initial phase without the need for warm-up. Experimental evaluation
demonstrates that TVLARS achieves competitive results with LARS and LAMB when
warm-up is utilized while surpassing their performance without the warm-up
technique. | [
"Khoi Do",
"Duong Nguyen",
"Hoa Nguyen",
"Long Tran-Thanh",
"Quoc-Viet Pham"
] | 2023-09-25 11:35:10 | http://arxiv.org/abs/2309.14053v1 | http://arxiv.org/pdf/2309.14053v1 | 2309.14053v1 |
Diversify and Conquer: Bandits and Diversity for an Enhanced E-commerce Homepage Experience | In the realm of e-commerce, popular platforms utilize widgets to recommend
advertisements and products to their users. However, the prevalence of mobile
device usage on these platforms introduces a unique challenge due to the
limited screen real estate available. Consequently, the positioning of relevant
widgets becomes pivotal in capturing and maintaining customer engagement. Given
the restricted screen size of mobile devices, widgets placed at the top of the
interface are more prominently displayed and thus attract greater user
attention. Conversely, widgets positioned further down the page require users
to scroll, resulting in reduced visibility and subsequent lower impression
rates. Therefore, it becomes imperative to place relevant widgets on top.
However, selecting which widgets to display is challenging, as widgets can be
heterogeneous and can be introduced to or removed from the platform at any
given time. In this work, we model the vertical widget reordering
as a contextual multi-arm bandit problem with delayed batch feedback. The
objective is to rank the vertical widgets in a personalized manner. We present
a two-stage ranking framework that combines contextual bandits with a diversity
layer to improve the overall ranking. We demonstrate its effectiveness through
offline and online A/B results, conducted on proprietary data from Myntra, a
major fashion e-commerce platform in India. | [
"Sangeet Jaiswal",
"Korah T Malayil",
"Saif Jawaid",
"Sreekanth Vempati"
] | 2023-09-25 11:22:19 | http://arxiv.org/abs/2309.14046v1 | http://arxiv.org/pdf/2309.14046v1 | 2309.14046v1 |
Unveiling Fairness Biases in Deep Learning-Based Brain MRI Reconstruction | Deep learning (DL) reconstruction particularly of MRI has led to improvements
in image fidelity and reduction of acquisition time. In neuroimaging, DL
methods can reconstruct high-quality images from undersampled data. However, it
is essential to consider fairness in DL algorithms, particularly in terms of
demographic characteristics. This study presents the first fairness analysis in
a DL-based brain MRI reconstruction model. The model utilises the U-Net
architecture for image reconstruction and explores the presence and sources of
unfairness by implementing baseline Empirical Risk Minimisation (ERM) and
rebalancing strategies. Model performance is evaluated using image
reconstruction metrics. Our findings reveal statistically significant
performance biases between the gender and age subgroups. Surprisingly, data
imbalance and training discrimination are not the main sources of bias. This
analysis provides insights into fairness in DL-based image reconstruction and
aims to improve equity in medical AI applications. | [
"Yuning Du",
"Yuyang Xue",
"Rohan Dharmakumar",
"Sotirios A. Tsaftaris"
] | 2023-09-25 11:07:25 | http://arxiv.org/abs/2309.14392v1 | http://arxiv.org/pdf/2309.14392v1 | 2309.14392v1 |
DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization | Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been
successfully applied to various Combinatorial Optimization Problems (COPs).
Traditionally, customizing ACO for a specific problem requires the expert
design of knowledge-driven heuristics. In this paper, we propose DeepACO, a
generic framework that leverages deep reinforcement learning to automate
heuristic designs. DeepACO serves to strengthen the heuristic measures of
existing ACO algorithms and dispense with laborious manual design in future ACO
applications. As a neural-enhanced meta-heuristic, DeepACO consistently
outperforms its ACO counterparts on eight COPs using a single neural model and
a single set of hyperparameters. As a Neural Combinatorial Optimization method,
DeepACO performs better than or on par with problem-specific methods on
canonical routing problems. Our code is publicly available at
https://github.com/henry-yeh/DeepACO. | [
"Haoran Ye",
"Jiarui Wang",
"Zhiguang Cao",
"Helan Liang",
"Yong Li"
] | 2023-09-25 10:56:38 | http://arxiv.org/abs/2309.14032v1 | http://arxiv.org/pdf/2309.14032v1 | 2309.14032v1 |
Diffeomorphic Transformations for Time Series Analysis: An Efficient Approach to Nonlinear Warping | The proliferation and ubiquity of temporal data across many disciplines has
sparked interest for similarity, classification and clustering methods
specifically designed to handle time series data. A core issue when dealing
with time series is determining their pairwise similarity, i.e., the degree to
which a given time series resembles another. Traditional distance measures such
as the Euclidean are not well-suited due to the time-dependent nature of the
data. Elastic metrics such as dynamic time warping (DTW) offer a promising
approach, but are limited by their computational complexity,
non-differentiability and sensitivity to noise and outliers. This thesis
proposes novel elastic alignment methods that use parametric and diffeomorphic
warping transformations as a means of overcoming the shortcomings of DTW-based
metrics. The proposed method is differentiable and invertible, well-suited for
deep learning architectures, robust to noise and outliers, computationally
efficient, and is expressive and flexible enough to capture complex patterns.
Furthermore, a closed-form solution was developed for the gradient of these
diffeomorphic transformations, which allows an efficient search in the
parameter space, leading to better solutions at convergence. Leveraging the
benefits of these closed-form diffeomorphic transformations, this thesis
proposes a suite of advancements that include: (a) an enhanced temporal
transformer network for time series alignment and averaging, (b) a
deep-learning based time series classification model to simultaneously align
and classify signals with high accuracy, (c) an incremental time series
clustering algorithm that is warping-invariant, scalable and can operate under
limited computational and time resources, and finally, (d) a normalizing flow
model that enhances the flexibility of affine transformations in coupling and
autoregressive layers. | [
"Iñigo Martinez"
] | 2023-09-25 10:51:47 | http://arxiv.org/abs/2309.14029v1 | http://arxiv.org/pdf/2309.14029v1 | 2309.14029v1 |
Hierarchical Imitation Learning for Stochastic Environments | Many applications of imitation learning require the agent to generate the
full distribution of behaviour observed in the training data. For example, to
evaluate the safety of autonomous vehicles in simulation, accurate and diverse
behaviour models of other road users are paramount. Existing methods that
improve this distributional realism typically rely on hierarchical policies.
These condition the policy on types such as goals or personas that give rise to
multi-modal behaviour. However, such methods are often inappropriate for
stochastic environments where the agent must also react to external factors:
because agent types are inferred from the observed future trajectory during
training, these environments require that the contributions of internal and
external factors to the agent behaviour are disentangled and only internal
factors, i.e., those under the agent's control, are encoded in the type.
Encoding future information about external factors leads to inappropriate agent
reactions during testing, when the future is unknown and types must be drawn
independently from the actual future. We formalize this challenge as
distribution shift in the conditional distribution of agent types under
environmental stochasticity. We propose Robust Type Conditioning (RTC), which
eliminates this shift with adversarial training under randomly sampled types.
Experiments on two domains, including the large-scale Waymo Open Motion
Dataset, show improved distributional realism while maintaining or improving
task performance compared to state-of-the-art baselines. | [
"Maximilian Igl",
"Punit Shah",
"Paul Mougin",
"Sirish Srinivasan",
"Tarun Gupta",
"Brandyn White",
"Kyriacos Shiarlis",
"Shimon Whiteson"
] | 2023-09-25 10:10:34 | http://arxiv.org/abs/2309.14003v1 | http://arxiv.org/pdf/2309.14003v1 | 2309.14003v1 |
Identification of Mixtures of Discrete Product Distributions in Near-Optimal Sample and Time Complexity | We consider the problem of identifying, from statistics, a distribution of
discrete random variables $X_1,\ldots,X_n$ that is a mixture of $k$ product
distributions. The best previous sample complexity for $n \in O(k)$ was
$(1/\zeta)^{O(k^2 \log k)}$ (under a mild separation assumption parameterized
by $\zeta$). The best known lower bound was $\exp(\Omega(k))$. It is known that
$n\geq 2k-1$ is necessary and sufficient for identification. We show, for any
$n\geq 2k-1$, how to achieve sample complexity and run-time complexity
$(1/\zeta)^{O(k)}$. We also extend the known lower bound of $e^{\Omega(k)}$ to
match our upper bound across a broad range of $\zeta$. Our results are obtained
by combining (a) a classic method for robust tensor decomposition and (b) a novel
way of bounding the condition number of key matrices called Hadamard
extensions, by studying their action only on flattened rank-1 tensors. | [
"Spencer L. Gordon",
"Erik Jahn",
"Bijan Mazaheri",
"Yuval Rabani",
"Leonard J. Schulman"
] | 2023-09-25 09:50:15 | http://arxiv.org/abs/2309.13993v1 | http://arxiv.org/pdf/2309.13993v1 | 2309.13993v1 |
A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective | Multi-view clustering (MVC) is a popular technique for improving clustering
performance using various data sources. However, existing methods primarily
focus on acquiring consistent information while often neglecting the issue of
redundancy across multiple views. This study presents a new approach called
Sufficient Multi-View Clustering (SUMVC) that examines the multi-view
clustering framework from an information-theoretic standpoint. Our proposed
method consists of two parts. Firstly, we develop a simple and reliable
multi-view clustering method SCMVC (simple consistent multi-view clustering)
that employs variational analysis to generate consistent information. Secondly,
we propose a sufficient representation lower bound to enhance consistent
information and minimise unnecessary information among views. The proposed
SUMVC method offers a promising solution to the problem of multi-view
clustering and provides a new perspective for analyzing multi-view data.
To verify the effectiveness of our model, we conducted a theoretical analysis
based on the Bayes Error Rate, and experiments on multiple multi-view datasets
demonstrate the superior performance of SUMVC. | [
"Chenhang Cui",
"Yazhou Ren",
"Jingyu Pu",
"Jiawei Li",
"Xiaorong Pu",
"Tianyi Wu",
"Yutao Shi",
"Lifang He"
] | 2023-09-25 09:41:11 | http://arxiv.org/abs/2309.13989v1 | http://arxiv.org/pdf/2309.13989v1 | 2309.13989v1 |
Physics-Driven ML-Based Modelling for Correcting Inverse Estimation | When deploying machine learning estimators in science and engineering (SAE)
domains, it is critical to avoid failed estimations that can have disastrous
consequences, e.g., in aero engine design. This work focuses on detecting and
correcting failed state estimations before adopting them in SAE inverse
problems, by utilizing simulations and performance metrics guided by physical
laws. We suggest flagging a machine learning estimation when its physical model
error exceeds a feasible threshold, and propose a novel approach, GEESE, to
correct it through optimization, aiming at delivering both low error and high
efficiency. The key designs of GEESE include (1) a hybrid surrogate error model
to provide fast error estimations to reduce simulation cost and to enable
gradient based backpropagation of error feedback, and (2) two generative models
to approximate the probability distributions of the candidate states for
simulating the exploitation and exploration behaviours. All three models are
constructed as neural networks. GEESE is tested on three real-world SAE inverse
problems and compared to a number of state-of-the-art optimization/search
approaches. Results show that it fails the least number of times in terms of
finding a feasible state correction, and requires physical evaluations less
frequently in general. | [
"Ruiyuan Kang",
"Tingting Mu",
"Panos Liatsis",
"Dimitrios C. Kyritsis"
] | 2023-09-25 09:37:19 | http://arxiv.org/abs/2309.13985v1 | http://arxiv.org/pdf/2309.13985v1 | 2309.13985v1 |
An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems | Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the
open-world assumption in service-oriented systems. Deep RL was successfully
applied to problems such as dynamic service composition, job scheduling, and
offloading, as well as service adaptation. While Deep RL offers many benefits,
understanding the decision-making of Deep RL is challenging because its learned
decision-making policy essentially appears as a black box. Yet, understanding
the decision-making of Deep RL is key to help service developers perform
debugging, support service providers to comply with relevant legal frameworks,
and facilitate service users to build trust. We introduce Chat4XAI to
facilitate the understanding of the decision-making of Deep RL by providing
natural-language explanations. Compared with visual explanations, the reported
benefits of natural-language explanations include better understandability for
non-technical users, increased user acceptance and trust, as well as more
efficient explanations. Chat4XAI leverages modern AI chatbot technology and
dedicated prompt engineering. Compared to earlier work on natural-language
explanations using classical software-based dialogue systems, using an AI
chatbot eliminates the need for eliciting and defining potential questions and
answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API
and evaluate the fidelity and stability of its explanations using an adaptive
service exemplar. | [
"Andreas Metzger",
"Jone Bartel",
"Jan Laufer"
] | 2023-09-25 09:05:36 | http://arxiv.org/abs/2309.14391v1 | http://arxiv.org/pdf/2309.14391v1 | 2309.14391v1 |
Newton Method-based Subspace Support Vector Data Description | In this paper, we present an adaptation of Newton's method for the
optimization of Subspace Support Vector Data Description (S-SVDD). The
objective of S-SVDD is to map the original data to a subspace optimized for
one-class classification, and the iterative optimization process of data
mapping and description in S-SVDD relies on gradient descent. However, gradient
descent only utilizes first-order information, which may lead to suboptimal
results. To address this limitation, we leverage Newton's method to enhance
data mapping and data description for an improved optimization of subspace
learning-based one-class classification. By incorporating this auxiliary
second-order information, Newton's method offers a more efficient strategy for subspace
learning in one-class classification as compared to gradient-based
optimization. The paper discusses the limitations of gradient descent and the
advantages of using Newton's method in subspace learning for one-class
classification tasks. We provide both linear and nonlinear formulations of
Newton's method-based optimization for S-SVDD. In our experiments, we explored
both the minimization and maximization strategies of the objective. The results
demonstrate that the proposed optimization strategy outperforms the
gradient-based S-SVDD in most cases. | [
"Fahad Sohrab",
"Firas Laakom",
"Moncef Gabbouj"
] | 2023-09-25 08:49:41 | http://arxiv.org/abs/2309.13960v1 | http://arxiv.org/pdf/2309.13960v1 | 2309.13960v1 |
Early Churn Prediction from Large Scale User-Product Interaction Time Series | User churn, characterized by customers ending their relationship with a
business, has profound economic consequences across various
Business-to-Customer scenarios. For numerous system-to-user actions, such as
promotional discounts and retention campaigns, predicting potential churners
stands as a primary objective. In volatile sectors like fantasy sports,
unpredictable factors such as international sports events can influence even
regular spending habits. Consequently, while transaction history and
user-product interaction are valuable in predicting churn, they demand deep
domain knowledge and intricate feature engineering. Additionally, feature
development for churn prediction systems can be resource-intensive,
particularly in production settings serving 200m+ users, where inference
pipelines largely focus on feature engineering. This paper conducts an
exhaustive study on predicting user churn using historical data. We aim to
create a model forecasting customer churn likelihood, facilitating businesses
in comprehending attrition trends and formulating effective retention plans.
Our approach treats churn prediction as multivariate time series
classification, demonstrating that combining user activity and deep neural
networks yields remarkable results for churn prediction in complex
business-to-customer contexts. | [
"Shamik Bhattacharjee",
"Utkarsh Thukral",
"Nilesh Patil"
] | 2023-09-25 08:44:32 | http://arxiv.org/abs/2309.14390v1 | http://arxiv.org/pdf/2309.14390v1 | 2309.14390v1 |
Beam Enumeration: Probabilistic Explainability For Sample Efficient Self-conditioned Molecular Design | Generative molecular design has moved from proof-of-concept to real-world
applicability, as marked by the surge in very recent papers reporting
experimental validation. Key challenges in explainability and sample efficiency
present opportunities to enhance generative design to directly optimize
expensive high-fidelity oracles and provide actionable insights to domain
experts. Here, we propose Beam Enumeration to exhaustively enumerate the most
probable sub-sequences from language-based molecular generative models and show
that molecular substructures can be extracted. When coupled with reinforcement
learning, extracted substructures become meaningful, providing a source of
explainability and improving sample efficiency through self-conditioned
generation. Beam Enumeration is generally applicable to any language-based
molecular generative model and notably further improves the performance of the
recently reported Augmented Memory algorithm, which achieved the new
state-of-the-art on the Practical Molecular Optimization benchmark for sample
efficiency. The combined algorithm generates more high-reward molecules, and
does so faster, given a fixed oracle budget. Beam Enumeration is the first method to
jointly address explainability and sample efficiency for molecular design. | [
"Jeff Guo",
"Philippe Schwaller"
] | 2023-09-25 08:43:13 | http://arxiv.org/abs/2309.13957v1 | http://arxiv.org/pdf/2309.13957v1 | 2309.13957v1 |
Deep Reinforcement Learning for the Heat Transfer Control of Pulsating Impinging Jets | This research study explores the applicability of Deep Reinforcement Learning
(DRL) for thermal control based on Computational Fluid Dynamics. To accomplish
that, the forced convection on a hot plate subjected to a pulsating cooling jet
with variable velocity has been investigated. We begin with evaluating the
efficiency and viability of a vanilla Deep Q-Network (DQN) method for thermal
control. Subsequently, a comprehensive comparison between different variants of
DRL is conducted. Soft Double and Duel DQN achieved better thermal control
performance among all the variants due to their efficient learning and action
prioritization capabilities. Results demonstrate that the soft Double DQN
outperforms the hard Double DQN. Moreover, soft Double and Duel can maintain
the temperature in the desired threshold for more than 98% of the control
cycle. These findings demonstrate the promising potential of DRL in effectively
addressing thermal control systems. | [
"Sajad Salavatidezfouli",
"Giovanni Stabile",
"Gianluigi Rozza"
] | 2023-09-25 08:41:50 | http://arxiv.org/abs/2309.13955v1 | http://arxiv.org/pdf/2309.13955v1 | 2309.13955v1 |
VidChapters-7M: Video Chapters at Scale | Segmenting long videos into chapters enables users to quickly navigate to the
information of their interest. This important topic has been understudied due
to the lack of publicly released datasets. To address this issue, we present
VidChapters-7M, a dataset of 817K user-chaptered videos including 7M chapters
in total. VidChapters-7M is automatically created from videos online in a
scalable manner by scraping user-annotated chapters and hence without any
additional manual annotation. We introduce the following three tasks based on
this data. First, the video chapter generation task consists of temporally
segmenting the video and generating a chapter title for each segment. To
further dissect the problem, we also define two variants of this task: video
chapter generation given ground-truth boundaries, which requires generating a
chapter title given an annotated video segment, and video chapter grounding,
which requires temporally localizing a chapter given its annotated title. We
benchmark both simple baselines and state-of-the-art video-language models for
these three tasks. We also show that pretraining on VidChapters-7M transfers
well to dense video captioning tasks in both zero-shot and finetuning settings,
largely improving the state of the art on the YouCook2 and ViTT benchmarks.
Finally, our experiments reveal that downstream performance scales well with
the size of the pretraining dataset. Our dataset, code, and models are publicly
available at https://antoyang.github.io/vidchapters.html. | [
"Antoine Yang",
"Arsha Nagrani",
"Ivan Laptev",
"Josef Sivic",
"Cordelia Schmid"
] | 2023-09-25 08:38:11 | http://arxiv.org/abs/2309.13952v1 | http://arxiv.org/pdf/2309.13952v1 | 2309.13952v1 |
Local and Global Trend Bayesian Exponential Smoothing Models | This paper describes a family of seasonal and non-seasonal time series models
that can be viewed as generalisations of additive and multiplicative
exponential smoothing models. Their development is motivated by fast-growing,
volatile time series, and facilitated by state-of-the-art Bayesian fitting
techniques. When applied to the M3 competition data set, they outperform the
best algorithms in the competition as well as other benchmarks, thus
achieving, to the best of our knowledge, the best results of univariate methods on this
dataset in the literature. | [
"Slawek Smyl",
"Christoph Bergmeir",
"Alexander Dokumentov",
"Erwin Wibowo",
"Daniel Schmidt"
] | 2023-09-25 08:31:50 | http://arxiv.org/abs/2309.13950v1 | http://arxiv.org/pdf/2309.13950v1 | 2309.13950v1 |
Characterising User Transfer Amid Industrial Resource Variation: A Bayesian Nonparametric Approach | In a multitude of industrial fields, a key objective entails optimising
resource management whilst satisfying user requirements. Resource management by
industrial practitioners can result in a passive transfer of user loads across
resource providers, a phenomenon whose accurate characterisation is both
challenging and crucial. This research reveals the existence of user clusters,
which capture macro-level user transfer patterns amid resource variation. We
then propose CLUSTER, an interpretable hierarchical Bayesian nonparametric
model capable of automating cluster identification, and thereby predicting user
transfer in response to resource variation. Furthermore, CLUSTER facilitates
uncertainty quantification for further reliable decision-making. Our method
enables privacy protection by functioning independently of personally
identifiable information. Experiments with simulated and real-world data from
the communications industry reveal a pronounced alignment between prediction
results and empirical observations across a spectrum of resource management
scenarios. This research establishes a solid groundwork for advancing resource
management strategy development. | [
"Dongxu Lei",
"Xiaotian Lin",
"Xinghu Yu",
"Zhan Li",
"Weichao Sun",
"Jianbin Qiu",
"Songlin Zhuang",
"Huijun Gao"
] | 2023-09-25 08:31:14 | http://arxiv.org/abs/2309.13949v1 | http://arxiv.org/pdf/2309.13949v1 | 2309.13949v1 |
Provable Training for Graph Contrastive Learning | Graph Contrastive Learning (GCL) has emerged as a popular training approach
for learning node embeddings from augmented graphs without labels. Despite the
key principle that maximizing the similarity between positive node pairs while
minimizing it between negative node pairs is well established, some fundamental
problems are still unclear. Considering the complex graph structure, are some
nodes consistently well-trained and following this principle even with
different graph augmentations? Or are there some nodes more likely to be
untrained across graph augmentations and violate the principle? How to
distinguish these nodes and further guide the training of GCL? To answer these
questions, we first present experimental evidence showing that the training of
GCL is indeed imbalanced across all nodes. To address this problem, we propose
the metric "node compactness", which is the lower bound of how a node follows
the GCL principle related to the range of augmentations. We further derive the
form of node compactness theoretically through bound propagation, which can be
integrated into binary cross-entropy as a regularization. To this end, we
propose the PrOvable Training (POT) for GCL, which regularizes the training of
GCL to encode node embeddings that follow the GCL principle better. Through
extensive experiments on various benchmarks, POT consistently improves the
existing GCL approaches, serving as a friendly plugin. | [
"Yue Yu",
"Xiao Wang",
"Mengmei Zhang",
"Nian Liu",
"Chuan Shi"
] | 2023-09-25 08:23:53 | http://arxiv.org/abs/2309.13944v1 | http://arxiv.org/pdf/2309.13944v1 | 2309.13944v1 |
Evaluating Classification Systems Against Soft Labels with Fuzzy Precision and Recall | Classification systems are normally trained by minimizing the cross-entropy
between system outputs and reference labels, which makes the Kullback-Leibler
divergence a natural choice for measuring how closely the system can follow the
data. Precision and recall provide another perspective for measuring the
performance of a classification system. Non-binary references can arise from
various sources, and it is often beneficial to use the soft labels for training
instead of the binarized data. However, the existing definitions for precision
and recall require binary reference labels, and binarizing the data can cause
erroneous interpretations. We present a novel method to calculate precision,
recall and F-score without quantizing the data. The proposed metrics extend the
well established metrics as the definitions coincide when used with binary
labels. To understand the behavior of the metrics we show simple example cases
and an evaluation of different sound event detection models trained on real
data with soft labels. | [
"Manu Harju",
"Annamaria Mesaros"
] | 2023-09-25 08:16:01 | http://arxiv.org/abs/2309.13938v1 | http://arxiv.org/pdf/2309.13938v1 | 2309.13938v1 |
SAMN: A Sample Attention Memory Network Combining SVM and NN in One Architecture | Support vector machine (SVM) and neural networks (NN) have strong
complementarity. SVM focuses on the inner operation among samples while NN
focuses on the operation among the features within samples. Thus, it is
promising and attractive to combine SVM and NN, as it may provide a more
powerful function than SVM or NN alone. However, current work on combining them
lacks true integration. To address this, we propose a sample attention memory
network (SAMN) that effectively combines SVM and NN by incorporating a sample
attention module, class prototypes, and a memory block into NN. SVM can be viewed
as a sample attention machine. It allows us to add a sample attention module to
NN to implement the main function of SVM. Class prototypes are representatives
of all classes, which can be viewed as alternatives to support vectors. The
memory block is used for the storage and update of class prototypes. Class
prototypes and memory block effectively reduce the computational cost of sample
attention and make SAMN suitable for multi-classification tasks. Extensive
experiments show that SAMN achieves better classification performance than
single SVM or single NN with similar parameter sizes, as well as the previous
best model for combining SVM and NN. The sample attention mechanism is a
flexible module that can be easily deepened and incorporated into neural
networks that require it. | [
"Qiaoling Yang",
"Linkai Luo",
"Haoyu Zhang",
"Hong Peng",
"Ziyang Chen"
] | 2023-09-25 08:01:05 | http://arxiv.org/abs/2309.13930v1 | http://arxiv.org/pdf/2309.13930v1 | 2309.13930v1 |
Pseudo Label Selection is a Decision Problem | Pseudo-Labeling is a simple and effective approach to semi-supervised
learning. It requires criteria that guide the selection of pseudo-labeled data.
The latter have been shown to crucially affect pseudo-labeling's generalization
performance. Several such criteria exist and were proven to work reasonably
well in practice. However, their performance often depends on the initial model
fit on labeled data. Early overfitting can be propagated to the final model by
choosing instances with overconfident but wrong predictions, often called
confirmation bias. In two recent works, we demonstrate that pseudo-label
selection (PLS) can be naturally embedded into decision theory. This paves the
way for BPLS, a Bayesian framework for PLS that mitigates the issue of
confirmation bias. At its heart is a novel selection criterion: an analytical
approximation of the posterior predictive of pseudo-samples and labeled data.
We derive this selection criterion by proving Bayes-optimality of this "pseudo
posterior predictive". We empirically assess BPLS for generalized linear,
non-parametric generalized additive models and Bayesian neural networks on
simulated and real-world data. When faced with data prone to overfitting and
thus a high chance of confirmation bias, BPLS outperforms traditional PLS
methods. The decision-theoretic embedding further allows us to render PLS more
robust towards the involved modeling assumptions. To achieve this goal, we
introduce a multi-objective utility function. We demonstrate that the latter
can be constructed to account for different sources of uncertainty and explore
three examples: model selection, accumulation of errors and covariate shift. | [
"Julian Rodemann"
] | 2023-09-25 07:48:02 | http://arxiv.org/abs/2309.13926v2 | http://arxiv.org/pdf/2309.13926v2 | 2309.13926v2 |
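For context on the entry above: the standard confidence-based selection criterion that, per the abstract, can propagate confirmation bias is a few lines of code. This is the baseline that BPLS replaces with a Bayesian pseudo posterior predictive (which cannot be reconstructed from the abstract alone); the threshold value here is an illustrative choice.

```python
def select_pseudo_labels(probs, threshold=0.95):
    """Standard confidence-based pseudo-label selection (PLS).

    probs: per-instance class-probability lists from the current model fit.
    Returns (index, pseudo_label) pairs for instances whose top predicted
    probability clears the threshold -- the overconfident-but-wrong picks
    this admits are the source of confirmation bias.
    """
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected
```

The selected pairs would then be appended to the labeled set before refitting; BPLS instead scores candidates by an approximation of the posterior predictive of pseudo-samples and labeled data.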
Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds | Policy-based algorithms equipped with deep neural networks have achieved
great success in solving high-dimensional policy optimization problems in
reinforcement learning. However, current analyses cannot explain why they are
resistant to the curse of dimensionality. In this work, we study the sample
complexity of the neural policy mirror descent (NPMD) algorithm with
convolutional neural networks (CNN) as function approximators. Motivated by the
empirical observation that many high-dimensional environments have state spaces
possessing low-dimensional structures, such as those taking images as states,
we consider the state space to be a $d$-dimensional manifold embedded in the
$D$-dimensional Euclidean space with intrinsic dimension $d\ll D$. We show that
in each iteration of NPMD, both the value function and the policy can be well
approximated by CNNs. The approximation errors are controlled by the size of
the networks, and the smoothness of the previous networks can be inherited. As
a result, by properly choosing the network size and hyperparameters, NPMD can
find an $\epsilon$-optimal policy with
$\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})$ samples in expectation, where
$\alpha\in(0,1]$ indicates the smoothness of environment. Compared to previous
work, our result exhibits that NPMD can leverage the low-dimensional structure
of state space to escape from the curse of dimensionality, providing an
explanation for the efficacy of deep policy-based algorithms. | [
"Zhenghao Xu",
"Xiang Ji",
"Minshuo Chen",
"Mengdi Wang",
"Tuo Zhao"
] | 2023-09-25 07:31:22 | http://arxiv.org/abs/2309.13915v1 | http://arxiv.org/pdf/2309.13915v1 | 2309.13915v1 |
Matrix Factorization in Tropical and Mixed Tropical-Linear Algebras | Matrix Factorization (MF) has found numerous applications in Machine Learning
and Data Mining, including collaborative filtering recommendation systems,
dimensionality reduction, data visualization, and community detection.
Motivated by the recent successes of tropical algebra and geometry in machine
learning, we investigate two problems involving matrix factorization over the
tropical algebra. For the first problem, Tropical Matrix Factorization (TMF),
which has been studied already in the literature, we propose an improved
algorithm that avoids many of the local optima. The second formulation
considers the approximate decomposition of a given matrix into the product of
three matrices where a usual matrix product is followed by a tropical product.
This formulation has a very interesting interpretation in terms of the learning
of the utility functions of multiple users. We also present numerical results
illustrating the effectiveness of the proposed algorithms, as well as an
application to recommendation systems with promising results. | [
"Ioannis Kordonis",
"Emmanouil Theodosis",
"George Retsinas",
"Petros Maragos"
] | 2023-09-25 07:29:59 | http://arxiv.org/abs/2309.13914v1 | http://arxiv.org/pdf/2309.13914v1 | 2309.13914v1 |
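For reference, the tropical (max-plus) matrix product underlying TMF, next to the usual product used in the paper's mixed linear-then-tropical decomposition. These are illustrative definitions only, not the proposed factorization algorithms.

```python
def trop_matmul(A, B):
    """Tropical (max-plus) product: (A (x) B)[i][j] = max_k (A[i][k] + B[k][j])."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matmul(A, B):
    """Usual matrix product, the linear half of the mixed decomposition."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```

The paper's second formulation approximates a given matrix as a usual product followed by a tropical one, i.e. something of the shape `trop_matmul(matmul(W, U), V)`.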
A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies | The main question this paper addresses is: What combination of a robot
controller and a learning method should be used, if the morphology of the
learning robot is not known in advance? Our interest is rooted in the context
of morphologically evolving modular robots, but the question is also relevant
in general, for system designers interested in widely applicable solutions. We
perform an experimental comparison of three controller-and-learner
combinations: one approach where controllers are based on modelling animal
locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary
algorithm, a completely different method using Reinforcement Learning (RL) with
a neural network controller architecture, and a combination `in-between' where
controllers are neural networks and the learner is an evolutionary algorithm.
We apply these three combinations to a test suite of modular robots and compare
their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based
and RL-based options are outperformed by the in-between combination that is
more robust and efficient than the other two setups. | [
"Jie Luo",
"Jakub Tomczak",
"Karine Miras",
"Agoston E. Eiben"
] | 2023-09-25 07:11:43 | http://arxiv.org/abs/2309.13908v1 | http://arxiv.org/pdf/2309.13908v1 | 2309.13908v1 |
Exploring Robot Morphology Spaces through Breadth-First Search and Random Query | Evolutionary robotics offers a powerful framework for designing and evolving
robot morphologies, particularly in the context of modular robots. However, the
role of query mechanisms during the genotype-to-phenotype mapping process has
been largely overlooked. This research addresses this gap by conducting a
comparative analysis of query mechanisms in the brain-body co-evolution of
modular robots. Using two different query mechanisms, Breadth-First Search
(BFS) and Random Query, within the context of evolving robot morphologies using
CPPNs and robot controllers using tensors, and testing them in two evolutionary
frameworks, Lamarckian and Darwinian systems, this study investigates their
influence on evolutionary outcomes and performance. The findings demonstrate
the impact of the two query mechanisms on the evolution and performance of
modular robot bodies, including morphological intelligence, diversity, and
morphological traits. This study suggests that BFS is both more effective and
efficient in producing highly performing robots. It also reveals that
initially, robot diversity was higher with BFS compared to Random Query, but in
the Lamarckian system, it declines faster, converging to superior designs,
while in the Darwinian system, BFS led to higher end-process diversity. | [
"Jie Luo"
] | 2023-09-25 06:46:19 | http://arxiv.org/abs/2309.14387v1 | http://arxiv.org/pdf/2309.14387v1 | 2309.14387v1 |
Follow-ups Also Matter: Improving Contextual Bandits via Post-serving Contexts | The standard contextual bandit problem assumes that all the relevant contexts are
observed before the algorithm chooses an arm. This modeling paradigm, while
useful, often falls short when dealing with problems in which valuable
additional context can be observed after arm selection. For example, content
recommendation platforms like YouTube, Instagram, and TikTok also observe valuable
follow-up information pertinent to the user's reward after recommendation
(e.g., how long the user stayed, what is the user's watch speed, etc.). To
improve online learning efficiency in these applications, we study a novel
contextual bandit problem with post-serving contexts and design a new
algorithm, poLinUCB, that achieves tight regret under standard assumptions.
Core to our technical proof is a robustified and generalized version of the
well-known Elliptical Potential Lemma (EPL), which can accommodate noise in
data. Such robustification is necessary for tackling our problem, and we
believe it could also be of general interest. Extensive empirical tests on both
synthetic and real-world datasets demonstrate the significant benefit of
utilizing post-serving contexts as well as the superior performance of our
algorithm over the state-of-the-art approaches. | [
"Chaoqi Wang",
"Ziyu Ye",
"Zhe Feng",
"Ashwinkumar Badanidiyuru",
"Haifeng Xu"
] | 2023-09-25 06:22:28 | http://arxiv.org/abs/2309.13896v1 | http://arxiv.org/pdf/2309.13896v1 | 2309.13896v1 |
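poLinUCB itself is not specified in the abstract; as context, here is a sketch of classical LinUCB, the pre-serving-context baseline it extends. The `alpha` value and the toy simulation are illustrative choices, and the post-serving extension (folding in follow-up context after the pull) is deliberately not reproduced.

```python
import numpy as np

def linucb_choose(x, A_list, b_list, alpha=0.5):
    """Classical LinUCB arm choice: ridge estimate plus exploration bonus.

    x: pre-serving context vector. poLinUCB (per the abstract) additionally
    exploits context observed only after arm selection; that is not shown here.
    """
    best, best_score = 0, -np.inf
    for a, (A, b) in enumerate(zip(A_list, b_list)):
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b
        score = x @ theta + alpha * np.sqrt(x @ A_inv @ x)
        if score > best_score:
            best, best_score = a, score
    return best

def linucb_update(A, b, x, reward):
    """Rank-one update of the chosen arm's sufficient statistics."""
    A += np.outer(x, x)
    b += reward * x
```

With two arms whose true rewards on a fixed context are 1.0 and 0.1, the chooser settles on the better arm after a handful of deterministic rounds.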
Graph Representation Learning Towards Patents Network Analysis | Patent analysis has recently been recognized as a powerful technique for
large companies worldwide to lend them insight into the age of competition
among various industries. This technique is considered a shortcut for
developing countries since it can significantly accelerate their technology
development. Therefore, as an inevitable process, patent analysis can be
utilized to monitor rival companies and diverse industries. This research
employed a graph representation learning approach to create, analyze, and find
similarities in the patent data registered in the Iranian Official Gazette. The
patent records were scrapped and wrangled through the Iranian Official Gazette
portal. Afterward, the key entities were extracted from the scrapped patents
dataset to create the Iranian patents graph from scratch based on novel natural
language processing and entity resolution techniques. Finally, thanks to the
utilization of novel graph algorithms and text mining methods, we identified
new areas of industry and research from Iranian patent data, which can be used
extensively to prevent duplicate patents, identify similar and connected
inventions, raise awareness of the legal entities supporting patents, and map
the researchers and linked stakeholders in a particular research field. | [
"Mohammad Heydari",
"Babak Teimourpour"
] | 2023-09-25 05:49:40 | http://arxiv.org/abs/2309.13888v1 | http://arxiv.org/pdf/2309.13888v1 | 2309.13888v1 |
Can Class-Priors Help Single-Positive Multi-Label Learning? | Single-positive multi-label learning (SPMLL) is a typical weakly supervised
multi-label learning problem, where each training example is annotated with
only one positive label. Existing SPMLL methods typically assign pseudo-labels
to unannotated labels with the assumption that prior probabilities of all
classes are identical. However, the class-prior of each category may differ
significantly in real-world scenarios, which makes the predictive model not
perform as well as expected due to the unrealistic assumption on real-world
application. To alleviate this issue, a novel framework, Class-pRiors Induced
Single-Positive multi-label learning, is proposed.
Specifically, a class-priors estimator is introduced, which could estimate the
class-priors that are theoretically guaranteed to converge to the ground-truth
class-priors. In addition, based on the estimated class-priors, an unbiased
risk estimator for classification is derived, and the corresponding risk
minimizer could be guaranteed to approximately converge to the optimal risk
minimizer on fully supervised data. Experimental results on ten MLL benchmark
datasets demonstrate the effectiveness and superiority of our method over
existing SPMLL approaches. | [
"Biao Liu",
"Jie Wang",
"Ning Xu",
"Xin Geng"
] | 2023-09-25 05:45:57 | http://arxiv.org/abs/2309.13886v1 | http://arxiv.org/pdf/2309.13886v1 | 2309.13886v1 |
TouchUp-G: Improving Feature Representation through Graph-Centric Finetuning | How can we enhance the node features acquired from Pretrained Models (PMs) to
better suit downstream graph learning tasks? Graph Neural Networks (GNNs) have
become the state-of-the-art approach for many high-impact, real-world graph
applications. For feature-rich graphs, a prevalent practice involves utilizing
a PM directly to generate features, without incorporating any domain adaptation
techniques. Nevertheless, this practice is suboptimal because the node features
extracted from PM are graph-agnostic and prevent GNNs from fully utilizing the
potential correlations between the graph structure and node features, leading
to a decline in GNNs performance. In this work, we seek to improve the node
features obtained from a PM for downstream graph tasks and introduce TOUCHUP-G,
which has several advantages. It is (a) General: applicable to any downstream
graph task, including link prediction which is often employed in recommender
systems; (b) Multi-modal: able to improve raw features of any modality (e.g.
images, texts, audio); (c) Principled: it is closely related to a novel metric,
feature homophily, which we propose to quantify the potential correlations
between the graph structure and node features and we show that TOUCHUP-G can
effectively shrink the discrepancy between the graph structure and node
features; (d) Effective: achieving state-of-the-art results on four real-world
datasets spanning different tasks and modalities. | [
"Jing Zhu",
"Xiang Song",
"Vassilis N. Ioannidis",
"Danai Koutra",
"Christos Faloutsos"
] | 2023-09-25 05:44:40 | http://arxiv.org/abs/2309.13885v1 | http://arxiv.org/pdf/2309.13885v1 | 2309.13885v1 |
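The abstract names the feature-homophily metric but does not define it. The proxy below — mean cosine similarity of endpoint features over edges — is an assumption for illustration only, not the paper's formula; it merely shows the kind of structure-feature correlation TOUCHUP-G is said to quantify.

```python
import math

def cosine(u, v):
    """Cosine similarity of two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def feature_homophily_proxy(edges, features):
    """Mean endpoint-feature similarity over edges (illustrative proxy only).

    edges: (u, v) node-id pairs; features: node-id -> feature vector.
    High values mean PM-extracted features agree with the graph structure.
    """
    sims = [cosine(features[u], features[v]) for u, v in edges]
    return sum(sims) / len(sims)
```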
Estimating Treatment Effects Under Heterogeneous Interference | Treatment effect estimation can assist in effective decision-making in
e-commerce, medicine, and education. One popular application of this estimation
lies in the prediction of the impact of a treatment (e.g., a promotion) on an
outcome (e.g., sales) of a particular unit (e.g., an item), known as the
individual treatment effect (ITE). In many online applications, the outcome of
a unit can be affected by the treatments of other units, as units are often
associated, which is referred to as interference. For example, on an online
shopping website, sales of an item will be influenced by an advertisement of
its co-purchased item. Prior studies have attempted to model interference to
estimate the ITE accurately, but they often assume a homogeneous interference,
i.e., relationships between units only have a single view. However, in
real-world applications, interference may be heterogeneous, with multi-view
relationships. For instance, the sale of an item is usually affected by the
treatment of its co-purchased and co-viewed items. We hypothesize that ITE
estimation will be inaccurate if this heterogeneous interference is not
properly modeled. Therefore, we propose a novel approach to model heterogeneous
interference by developing a new architecture to aggregate information from
diverse neighbors. Our proposed method contains graph neural networks that
aggregate same-view information, a mechanism that aggregates information from
different views, and attention mechanisms. In our experiments on multiple
datasets with heterogeneous interference, the proposed method significantly
outperforms existing methods for ITE estimation, confirming the importance of
modeling heterogeneous interference. | [
"Xiaofeng Lin",
"Guoxi Zhang",
"Xiaotian Lu",
"Han Bao",
"Koh Takeuchi",
"Hisashi Kashima"
] | 2023-09-25 05:44:17 | http://arxiv.org/abs/2309.13884v1 | http://arxiv.org/pdf/2309.13884v1 | 2309.13884v1 |
Diffusion Conditional Expectation Model for Efficient and Robust Target Speech Extraction | Target Speech Extraction (TSE) is a crucial task in speech processing that
focuses on isolating the clean speech of a specific speaker from complex
mixtures. While discriminative methods are commonly used for TSE, they can
introduce distortion in terms of speech perception quality. On the other hand,
generative approaches, particularly diffusion-based methods, can enhance speech
quality perceptually but suffer from slower inference speed. We propose an
efficient generative approach named Diffusion Conditional Expectation Model
(DCEM) for TSE. It can handle multi- and single-speaker scenarios in both noisy
and clean conditions. Additionally, we introduce Regenerate-DCEM (R-DCEM) that
can regenerate and optimize speech quality based on pre-processed speech from a
discriminative model. Our method outperforms conventional methods in terms of
both intrusive and non-intrusive metrics and demonstrates notable strengths in
inference efficiency and robustness to unseen tasks. Audio examples are
available online (https://vivian556123.github.io/dcem). | [
"Leying Zhang",
"Yao Qian",
"Linfeng Yu",
"Heming Wang",
"Xinkai Wang",
"Hemin Yang",
"Long Zhou",
"Shujie Liu",
"Yanmin Qian",
"Michael Zeng"
] | 2023-09-25 04:58:38 | http://arxiv.org/abs/2309.13874v1 | http://arxiv.org/pdf/2309.13874v1 | 2309.13874v1 |
Attention and Pooling based Sigmoid Colon Segmentation in 3D CT images | Segmentation of the sigmoid colon is a crucial aspect of treating
diverticulitis. It enables accurate identification and localisation of
inflammation, which in turn helps healthcare professionals make informed
decisions about the most appropriate treatment options. This research presents
a novel deep learning architecture for segmenting the sigmoid colon from
Computed Tomography (CT) images using a modified 3D U-Net architecture. Several
variations of the 3D U-Net model with modified hyper-parameters were examined
in this study. Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation
(csSE) were also used to improve the model performance. The networks were
trained using manually annotated sigmoid colon data. A five-fold cross-validation
procedure was used on a test dataset to evaluate the network's performance. As
indicated by the maximum Dice similarity coefficient (DSC) of 56.92+/-1.42%,
the application of PyP and csSE techniques improves segmentation precision. We
explored ensemble methods including averaging, weighted averaging, majority
voting, and max ensemble. The results show that average and majority voting
approaches with a threshold value of 0.5 and consistent weight distribution
among the top three models produced comparable and optimal results with DSC of
88.11+/-3.52%. The results indicate that the application of a modified 3D U-Net
architecture is effective for segmenting the sigmoid colon in CT images. In
addition, the study highlights the potential
benefits of integrating ensemble methods to improve segmentation precision. | [
"Md Akizur Rahman",
"Sonit Singh",
"Kuruparan Shanmugalingam",
"Sankaran Iyer",
"Alan Blair",
"Praveen Ravindran",
"Arcot Sowmya"
] | 2023-09-25 04:52:46 | http://arxiv.org/abs/2309.13872v1 | http://arxiv.org/pdf/2309.13872v1 | 2309.13872v1 |
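The ensembling the abstract reports — probability averaging and majority voting with a 0.5 threshold, scored by the Dice similarity coefficient — reduces to a few lines. This sketch works on flattened 1-D pixel lists rather than the authors' 3-D CT volumes.

```python
def average_ensemble(prob_maps, threshold=0.5):
    """Pixel-wise mean of per-model probabilities, thresholded to a binary mask."""
    n = len(prob_maps)
    return [1 if sum(p[i] for p in prob_maps) / n >= threshold else 0
            for i in range(len(prob_maps[0]))]

def majority_vote(masks):
    """Foreground where more than half of the binary masks agree."""
    n = len(masks)
    return [1 if sum(m[i] for m in masks) > n / 2 else 0
            for i in range(len(masks[0]))]

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = sum(p * g for p, g in zip(pred, gt))
    return 2.0 * inter / (sum(pred) + sum(gt))
```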
PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration | Document-level relation extraction (DocRE) aims to extract relations of all
entity pairs in a document. A key challenge in DocRE is the cost of annotating
such data which requires intensive human effort. Thus, we investigate the case
of DocRE in a low-resource setting, and we find that existing models trained on
low data overestimate the NA ("no relation") label, causing limited
performance. In this work, we approach the problem from a calibration
perspective and propose PRiSM, which learns to adapt logits based on relation
semantic information. We evaluate our method on three DocRE datasets and
demonstrate that integrating existing models with PRiSM improves performance by
as much as 26.38 F1 score, while the calibration error drops as much as 36
times when trained with about 3% of data. The code is publicly available at
https://github.com/brightjade/PRiSM. | [
"Minseok Choi",
"Hyesu Lim",
"Jaegul Choo"
] | 2023-09-25 04:42:39 | http://arxiv.org/abs/2309.13869v1 | http://arxiv.org/pdf/2309.13869v1 | 2309.13869v1 |
On Calibration of Modern Quantized Efficient Neural Networks | We explore calibration properties at various precisions for three
architectures: ShuffleNetv2, GhostNet-VGG, and MobileOne; and two datasets:
CIFAR-100 and PathMNIST. The quality of calibration is observed to track the
quantization quality; it is well-documented that performance worsens with lower
precision, and we observe a similar correlation with poorer calibration. This
becomes especially egregious in the 4-bit activation regime. GhostNet-VGG is shown
to be the most robust to overall performance drop at lower precision. We find
that temperature scaling can improve calibration error for quantized networks,
with some caveats. We hope that these preliminary insights can lead to more
opportunities for explainable and reliable EdgeML. | [
"Joey Kuang",
"Alexander Wong"
] | 2023-09-25 04:30:18 | http://arxiv.org/abs/2309.13866v2 | http://arxiv.org/pdf/2309.13866v2 | 2309.13866v2 |
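Temperature scaling, the post-hoc fix the abstract evaluates, divides logits by a scalar T fitted to minimize negative log-likelihood on held-out data. A minimal sketch, with a grid search standing in for the usual gradient-based fit:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens overconfident predictions."""
    z = [l / T for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def nll(logit_sets, labels, T):
    """Mean negative log-likelihood of the true labels at temperature T."""
    total = 0.0
    for logits, y in zip(logit_sets, labels):
        total -= math.log(softmax(logits, T)[y])
    return total / len(labels)

def fit_temperature(logit_sets, labels):
    """Pick the NLL-minimizing T from a coarse grid over (0, 10]."""
    grid = [0.25 * k for k in range(1, 41)]
    return min(grid, key=lambda T: nll(logit_sets, labels, T))
```

On an overconfident model (confident logits, 75% accuracy), the fitted temperature comes out well above 1, softening the probabilities toward the true accuracy.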
Fast-HuBERT: An Efficient Training Framework for Self-Supervised Speech Representation Learning | Recent years have witnessed significant advancements in self-supervised
learning (SSL) methods for speech-processing tasks. Various speech-based SSL
models have been developed and present promising performance on a range of
downstream tasks including speech recognition. However, existing speech-based
SSL models face a common dilemma in terms of computational cost, which might
hinder their potential application and in-depth academic research. To address
this issue, we first analyze the computational cost of different modules during
HuBERT pre-training and then introduce a stack of efficiency optimizations,
which is named Fast-HuBERT in this paper. The proposed Fast-HuBERT can be
trained in 1.1 days with 8 V100 GPUs on the Librispeech 960h benchmark, without
performance degradation, resulting in a 5.2x speedup, compared to the original
implementation. Moreover, we explore two well-studied techniques in the
Fast-HuBERT and demonstrate consistent improvements as reported in previous
work. | [
"Guanrou Yang",
"Ziyang Ma",
"Zhisheng Zheng",
"Yakun Song",
"Zhikang Niu",
"Xie Chen"
] | 2023-09-25 04:07:34 | http://arxiv.org/abs/2309.13860v2 | http://arxiv.org/pdf/2309.13860v2 | 2309.13860v2 |
Can neural networks count digit frequency? | In this research, we aim to compare the performance of different classical
machine learning models and neural networks in identifying the frequency of
occurrence of each digit in a given number. It has various applications in
machine learning and computer vision, e.g. for obtaining the frequency of a
target object in a visual scene. We considered this problem as a hybrid of
classification and regression tasks. We carefully create our own datasets to
observe systematic differences between different methods. We evaluate each of
the methods using different metrics across multiple datasets. The metrics of
performance used were the root mean squared error and mean absolute error for
regression evaluation, and accuracy for classification performance evaluation.
We observe that decision trees and random forests overfit to the dataset, due
to their inherent bias, and are not able to generalize well. We also observe
that the neural networks significantly outperform the classical machine
learning models in terms of both the regression and classification metrics for
both the 6-digit and 10-digit number datasets. Dataset and code are available
on GitHub. | [
"Padmaksh Khandelwal"
] | 2023-09-25 03:45:36 | http://arxiv.org/abs/2310.04431v1 | http://arxiv.org/pdf/2310.04431v1 | 2310.04431v1 |
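The target the compared models must learn — the frequency of each digit in a given number — has a direct ground-truth computation, which makes clear why the task can be cast as either a 10-output regression or a per-digit classification:

```python
def digit_frequency(n):
    """Return a length-10 vector: count of each digit 0-9 in |n|'s decimal form."""
    counts = [0] * 10
    for ch in str(abs(n)):
        counts[int(ch)] += 1
    return counts
```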
Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts | Top-K sparse softmax gating mixture of experts has been widely used for
scaling up massive deep-learning architectures without increasing the
computational cost. Despite its popularity in real-world applications, the
theoretical understanding of that gating function has remained an open problem.
The main challenge comes from the structure of the top-K sparse softmax gating
function, which partitions the input space into multiple regions with distinct
behaviors. By focusing on a Gaussian mixture of experts, we establish
theoretical results on the effects of the top-K sparse softmax gating function
on both density and parameter estimations. Our results hinge upon defining
novel loss functions among parameters to capture different behaviors of the
input regions. When the true number of experts $k_{\ast}$ is known, we
demonstrate that the convergence rates of density and parameter estimations are
both parametric on the sample size. However, when $k_{\ast}$ becomes unknown
and the true model is over-specified by a Gaussian mixture of $k$ experts where
$k > k_{\ast}$, our findings suggest that the number of experts selected from
the top-K sparse softmax gating function must exceed the total cardinality of a
certain number of Voronoi cells associated with the true parameters to
guarantee the convergence of the density estimation. Moreover, while the
density estimation rate remains parametric under this setting, the parameter
estimation rates become substantially slow due to an intrinsic interaction
between the softmax gating and expert functions. | [
"Huy Nguyen",
"Pedram Akbarian",
"Fanqi Yan",
"Nhat Ho"
] | 2023-09-25 03:28:01 | http://arxiv.org/abs/2309.13850v1 | http://arxiv.org/pdf/2309.13850v1 | 2309.13850v1 |
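The top-K sparse softmax gating function under study routes each input to K experts: a softmax over the K largest gating logits, with exact zeros elsewhere — the hard partition of the input space that drives the paper's analysis. A minimal sketch:

```python
import math

def topk_sparse_softmax(logits, k):
    """Softmax restricted to the k largest logits; all other gates are zero."""
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in idx)
    exps = {i: math.exp(logits[i] - m) for i in idx}
    s = sum(exps.values())
    return [exps[i] / s if i in exps else 0.0 for i in range(len(logits))]
```

With k equal to the number of experts this reduces to the ordinary softmax; smaller k keeps the output a valid distribution while activating only k experts per input.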