title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
Asca: less audio data is more insightful | Audio recognition in specialized areas such as birdsong and submarine
acoustics faces challenges in large-scale pre-training due to the limitations
in available samples imposed by sampling environments and specificity
requirements. While the Transformer model excels in audio recognition, its
dependence on vast amounts of data becomes restrictive in resource-limited
settings. Addressing this, we introduce the Audio Spectrogram Convolution
Attention (ASCA) based on CoAtNet, integrating a Transformer-convolution hybrid
architecture, novel network design, and attention techniques, further augmented
with data enhancement and regularization strategies. On the BirdCLEF2023 and
AudioSet (Balanced) datasets, ASCA achieved accuracies of 81.2% and 35.1%, respectively,
significantly outperforming competing methods. The unique structure of our
model enriches output, enabling generalization across various audio detection
tasks. Our code can be found at https://github.com/LeeCiang/ASCA. | [
"Xiang Li",
"Junhao Chen",
"Chao Li",
"Hongwu Lv"
] | 2023-09-23 13:24:06 | http://arxiv.org/abs/2309.13373v1 | http://arxiv.org/pdf/2309.13373v1 | 2309.13373v1 |
Limits of Actor-Critic Algorithms for Decision Tree Policies Learning in IBMDPs | Interpretability of AI models allows for user safety checks to build trust in
such AIs. In particular, Decision Trees (DTs) provide a global look at the
learned model and transparently reveal which features of the input are critical
for making a decision. However, interpretability is hindered if the DT is too
large. To learn compact trees, a recent Reinforcement Learning (RL) framework
has been proposed to explore the space of DTs using deep RL. This framework
augments a decision problem (e.g. a supervised classification task) with
additional actions that gather information about the features of an otherwise
hidden input. By appropriately penalizing these actions, the agent learns to
optimally trade-off size and performance of DTs. In practice, a reactive policy
for a partially observable Markov decision process (MDP) needs to be learned,
which is still an open problem. We show in this paper that deep RL can fail
even on simple toy tasks of this class. However, when the underlying decision
problem is a supervised classification task, we show that finding the optimal
tree can be cast as a fully observable Markov decision problem and be solved
efficiently, giving rise to a new family of algorithms for learning DTs that go
beyond the classical greedy maximization ones. | [
"Hecotr Kohler",
"Riad Akrour",
"Philippe Preux"
] | 2023-09-23 13:06:20 | http://arxiv.org/abs/2309.13365v2 | http://arxiv.org/pdf/2309.13365v2 | 2309.13365v2 |
MLPST: MLP is All You Need for Spatio-Temporal Prediction | Traffic prediction is a typical spatio-temporal data mining task and has
great significance to the public transportation system. Considering the demands of its large-scale application, we identify the key factors for an ideal spatio-temporal prediction method: it should be efficient, lightweight, and effective.
However, the current deep model-based spatio-temporal prediction solutions
generally have intricate architectures with cumbersome optimization, which can
hardly meet these expectations. To accomplish the above goals, we propose an
intuitive and novel framework, MLPST, a pure multi-layer perceptron
architecture for traffic prediction. Specifically, we first capture spatial
relationships from both local and global receptive fields. Then, temporal
dependencies in different intervals are comprehensively considered. Through
compact and swift MLP processing, MLPST can well capture the spatial and
temporal dependencies while requiring only linear computational complexity, as
well as model parameters that are more than an order of magnitude lower than
baselines. Extensive experiments validated the superior effectiveness and
efficiency of MLPST against advanced baselines, and among models with optimal
accuracy, MLPST achieves the best time and space efficiency. | [
"Zijian Zhang",
"Ze Huang",
"Zhiwei Hu",
"Xiangyu Zhao",
"Wanyu Wang",
"Zitao Liu",
"Junbo Zhang",
"S. Joe Qin",
"Hongwei Zhao"
] | 2023-09-23 12:58:16 | http://arxiv.org/abs/2309.13363v1 | http://arxiv.org/pdf/2309.13363v1 | 2309.13363v1 |
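The MLPST entry above describes capturing spatial and temporal dependencies with pure multi-layer perceptrons. The sketch below is an illustrative MLP-mixer-style block over a (batch, time, nodes, channels) traffic tensor, not the authors' actual architecture; all module names and sizes are assumptions.

```python
# Illustrative MLP-only spatio-temporal block (an assumption, not MLPST itself).
import torch
import torch.nn as nn


class AxisMLP(nn.Module):
    """Two-layer MLP applied along the last axis of its input."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)


class SpatioTemporalMLPBlock(nn.Module):
    """Mix information across nodes, time steps, and channels with MLPs only."""

    def __init__(self, num_nodes, num_steps, channels):
        super().__init__()
        self.spatial = AxisMLP(num_nodes)
        self.temporal = AxisMLP(num_steps)
        self.channel = AxisMLP(channels)

    def forward(self, x):  # x: (batch, time, nodes, channels)
        x = x + self.spatial(x.transpose(-1, -2)).transpose(-1, -2)       # across nodes
        x = x + self.temporal(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # across time
        x = x + self.channel(x)                                           # across channels
        return x


if __name__ == "__main__":
    block = SpatioTemporalMLPBlock(num_nodes=20, num_steps=12, channels=8)
    print(block(torch.randn(4, 12, 20, 8)).shape)  # torch.Size([4, 12, 20, 8])
```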
Machine Learning with Chaotic Strange Attractors | Machine learning studies need colossal power to process massive datasets and
train neural networks to reach high accuracies, which have become gradually
unsustainable. Limited by the von Neumann bottleneck, current computing
architectures and methods fuel this high power consumption. Here, we present an
analog computing method that harnesses chaotic nonlinear attractors to perform
machine learning tasks with low power consumption. Inspired by neuromorphic
computing, our model is a programmable, versatile, and generalized platform for
machine learning tasks. Our model provides exceptional performance in clustering
by utilizing chaotic attractors' nonlinear mapping and sensitivity to initial
conditions. When deployed as a simple analog device, it only requires
milliwatt-scale power levels while being on par with current machine learning
techniques. We demonstrate low errors and high accuracies with our model for
regression and classification-based learning tasks. | [
"Bahadır Utku Kesgin",
"Uğur Teğin"
] | 2023-09-23 12:54:38 | http://arxiv.org/abs/2309.13361v1 | http://arxiv.org/pdf/2309.13361v1 | 2309.13361v1 |
Lexical Squad@Multimodal Hate Speech Event Detection 2023: Multimodal Hate Speech Detection using Fused Ensemble Approach | With a surge in the usage of social media postings to express opinions,
emotions, and ideologies, there has been a significant shift towards the
calibration of social media as a rapid medium of conveying viewpoints and
outlooks over the globe. Concurrently, the emergence of a multitude of
conflicts between two entities has given rise to a stream of social media
content containing propaganda, hate speech, and inconsiderate views. Thus, the
issue of monitoring social media postings is rising swiftly, attracting major
attention from those willing to solve such problems. One such problem is Hate
Speech detection. To mitigate this problem, we present our novel ensemble
learning approach for detecting hate speech, by classifying text-embedded
images into two labels, namely "Hate Speech" and "No Hate Speech". We have
incorporated state-of-the-art models including InceptionV3, BERT, and XLNet. Our proposed ensemble model yielded promising results, with an accuracy of 75.21 and an F1 score of 74.96. We also present an empirical evaluation
of the text-embedded images to elaborate on how well the model was able to
predict and classify. We release our codebase here
(https://github.com/M0hammad-Kashif/MultiModalHateSpeech). | [
"Mohammad Kashif",
"Mohammad Zohair",
"Saquib Ali"
] | 2023-09-23 12:06:05 | http://arxiv.org/abs/2309.13354v1 | http://arxiv.org/pdf/2309.13354v1 | 2309.13354v1 |
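The entry above fuses InceptionV3, BERT, and XLNet into an ensemble for classifying text-embedded images. A minimal sketch of one common fusion scheme, weighted averaging of per-model class probabilities, follows; the weights and the assumption that each base model outputs class probabilities are illustrative and not taken from the paper.

```python
# Illustrative late-fusion ensemble over per-model class probabilities.
import numpy as np


def fuse_probabilities(prob_list, weights=None):
    """Weighted average of per-model class-probability matrices of shape (n, 2)."""
    probs = np.stack(prob_list)                          # (n_models, n_samples, 2)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    w = np.asarray(weights, dtype=float)[:, None, None]
    fused = (w * probs).sum(axis=0)
    return fused / fused.sum(axis=1, keepdims=True)      # renormalize per sample


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # stand-ins for InceptionV3 (image), BERT and XLNet (text) probability outputs
    inception_p = rng.dirichlet([1, 1], size=5)
    bert_p = rng.dirichlet([1, 1], size=5)
    xlnet_p = rng.dirichlet([1, 1], size=5)
    fused = fuse_probabilities([inception_p, bert_p, xlnet_p], weights=[0.3, 0.4, 0.3])
    print(fused.argmax(axis=1))  # 1 = "Hate Speech", 0 = "No Hate Speech"
```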
Accelerating Particle and Fluid Simulations with Differentiable Graph Networks for Solving Forward and Inverse Problems | We leverage physics-embedded differentiable graph network simulators (GNS) to
accelerate particulate and fluid simulations to solve forward and inverse
problems. GNS represents the domain as a graph with particles as nodes and
learned interactions as edges. Compared to modeling global dynamics, GNS
enables learning local interaction laws through edge messages, improving its
generalization to new environments. GNS achieves over 165x speedup for granular
flow prediction compared to parallel CPU numerical simulations. We propose a
novel hybrid GNS/Material Point Method (MPM) approach that accelerates forward simulations by interleaving MPM steps in GNS rollouts to satisfy conservation laws and reduce the error of the pure surrogate model, achieving a 24x speedup compared to pure numerical simulations. The differentiable GNS enables solving
inverse problems through automatic differentiation, identifying material
parameters that result in target runout distances. We demonstrate the ability
of GNS to solve inverse problems by iteratively updating the friction angle (a
material property) by computing the gradient of a loss function based on the
final and target runouts, thereby identifying the friction angle that best
matches the observed runout. The physics-embedded and differentiable simulators
open an exciting new paradigm for AI-accelerated design, control, and
optimization. | [
"Krishna Kumar",
"Yongjin Choi"
] | 2023-09-23 11:52:43 | http://arxiv.org/abs/2309.13348v1 | http://arxiv.org/pdf/2309.13348v1 | 2309.13348v1 |
LLMs as Counterfactual Explanation Modules: Can ChatGPT Explain Black-box Text Classifiers? | Large language models (LLMs) are increasingly being used for tasks beyond
text generation, including complex tasks such as data labeling, information
extraction, etc. With the recent surge in research efforts to comprehend the
full extent of LLM capabilities, in this work, we investigate the role of LLMs
as counterfactual explanation modules, to explain decisions of black-box text
classifiers. Inspired by causal thinking, we propose a pipeline for using LLMs
to generate post-hoc, model-agnostic counterfactual explanations in a
principled way via (i) leveraging the textual understanding capabilities of the
LLM to identify and extract latent features, and (ii) leveraging the
perturbation and generation capabilities of the same LLM to generate a
counterfactual explanation by perturbing input features derived from the
extracted latent features. We evaluate three variants of our framework, with
varying degrees of specificity, on a suite of state-of-the-art LLMs, including
ChatGPT and LLaMA 2. We evaluate the effectiveness and quality of the generated
counterfactual explanations, over a variety of text classification benchmarks.
Our results show varied performance of these models in different settings, with
a full two-step feature extraction based variant outperforming others in most
cases. Our pipeline can be used in automated explanation systems, potentially
reducing human effort. | [
"Amrita Bhattacharjee",
"Raha Moraffah",
"Joshua Garland",
"Huan Liu"
] | 2023-09-23 11:22:28 | http://arxiv.org/abs/2309.13340v1 | http://arxiv.org/pdf/2309.13340v1 | 2309.13340v1 |
Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic | Recent advancements in large language models have showcased their remarkable
generalizability across various domains. However, their reasoning abilities
still have significant room for improvement, especially when confronted with
scenarios requiring multi-step reasoning. Although large language models
possess extensive knowledge, their behavior, particularly in terms of
reasoning, often fails to effectively utilize this knowledge to establish a
coherent thinking paradigm. Generative language models sometimes show
hallucinations as their reasoning procedures are unconstrained by logical
principles. Aiming to improve the zero-shot chain-of-thought reasoning ability
of large language models, we propose Logical Chain-of-Thought (LogiCoT), a
neurosymbolic framework that leverages principles from symbolic logic to verify
and revise the reasoning processes accordingly. Experimental evaluations
conducted on language tasks in diverse domains, including arithmetic,
commonsense, symbolic, causal inference, and social problems, demonstrate the
efficacy of the enhanced reasoning paradigm by logic. | [
"Xufeng Zhao",
"Mengdi Li",
"Wenhao Lu",
"Cornelius Weber",
"Jae Hee Lee",
"Kun Chu",
"Stefan Wermter"
] | 2023-09-23 11:21:12 | http://arxiv.org/abs/2309.13339v1 | http://arxiv.org/pdf/2309.13339v1 | 2309.13339v1 |
On the Asymptotic Learning Curves of Kernel Ridge Regression under Power-law Decay | The widely observed 'benign overfitting phenomenon' in the neural network
literature raises the challenge to the 'bias-variance trade-off' doctrine in
the statistical learning theory. Since the generalization ability of the 'lazy
trained' over-parametrized neural network can be well approximated by that of
the neural tangent kernel regression, the curve of the excess risk (namely, the
learning curve) of kernel ridge regression has attracted increasing attention recently. However, most recent arguments on the learning curve are heuristic
and are based on the 'Gaussian design' assumption. In this paper, under mild
and more realistic assumptions, we rigorously provide a full characterization
of the learning curve: elaborating the effect and the interplay of the choice
of the regularization parameter, the source condition and the noise. In
particular, our results suggest that the 'benign overfitting phenomenon' exists
in very wide neural networks only when the noise level is small. | [
"Yicheng Li",
"Haobo Zhang",
"Qian Lin"
] | 2023-09-23 11:18:13 | http://arxiv.org/abs/2309.13337v1 | http://arxiv.org/pdf/2309.13337v1 | 2309.13337v1 |
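The entry above characterizes the learning curve of kernel ridge regression under power-law decay. As a reference point, the generic setup such analyses build on can be written as follows; the precise source and capacity conditions used in the paper may differ from this sketch.

```latex
% Generic KRR setup under power-law (capacity) and source conditions (a sketch,
% not the paper's exact assumptions).
\hat f_\lambda \;=\; \arg\min_{f\in\mathcal H}\ \frac1n\sum_{i=1}^n \bigl(y_i - f(x_i)\bigr)^2 + \lambda\,\|f\|_{\mathcal H}^2,
\qquad y_i = f^*(x_i) + \varepsilon_i,\quad \mathbb E[\varepsilon_i^2]=\sigma^2 .

% Power-law eigenvalue decay of the kernel and smoothness of the target:
\lambda_j \asymp j^{-\beta}\ (\beta>1), \qquad f^* \in [\mathcal H]^{s}\ (s>0).

% The resulting bias--variance form of the excess risk (the learning curve),
% whose balance in \lambda, n and \sigma^2 governs when overfitting is "benign":
\mathbb E\,\bigl\|\hat f_\lambda - f^*\bigr\|_{L^2}^2
\;\asymp\; \lambda^{\min(s,\,2)} \;+\; \frac{\sigma^2}{n}\,\lambda^{-1/\beta}.
```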
Predicting Temperature of Major Cities Using Machine Learning and Deep Learning | Climate change is among the issues that most concern world leaders because of its effects on agriculture, the environment, and everyday economies, so accurate temperature prediction is vital. The most effective and widely used approach to such forecasting is numerical weather prediction (NWP), a mathematical model that requires broad data from many sources to make predictions. This expensive, time- and labor-intensive work can be reduced by making such predictions with machine learning algorithms. Using the University of Dayton database of temperature changes in major cities, we apply time series analysis with LSTM to turn existing data into a tool for future prediction. The LSTM takes long-term data as well as any short-term exceptions or anomalies that may have occurred and captures the trend, seasonality, and stationarity of the data. Using models such as ARIMA, SARIMA, and Prophet together with RNN and LSTM, we filter out abnormalities, preprocess the data, compare it with previous trends, and predict future trends. Seasonality and stationarity also let us analyze year-over-year recurrence and remove the time dependence of the data, so that the predicted general changes can be seen. In this way we are able to predict the temperature of different cities at any future time from the available data and build an accurate prediction method. This document describes our methodology for making such predictions. | [
"Wasiou Jaharabi",
"MD Ibrahim Al Hossain",
"Rownak Tahmid",
"Md. Zuhayer Islam",
"T. M. Saad Rayhan"
] | 2023-09-23 10:23:00 | http://arxiv.org/abs/2309.13330v1 | http://arxiv.org/pdf/2309.13330v1 | 2309.13330v1 |
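The entry above frames city-temperature forecasting as time-series learning with LSTM alongside ARIMA, SARIMA, and Prophet. Below is a minimal, self-contained sketch of the LSTM part on a synthetic monthly series standing in for the University of Dayton data; the window length, model size, and training loop are illustrative assumptions, not the authors' configuration.

```python
# Illustrative LSTM next-value forecaster on a synthetic monthly temperature series.
import numpy as np
import torch
import torch.nn as nn


def make_windows(series, lookback=12):
    """Turn a 1-D series into (lookback -> next value) supervised pairs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))


class TempLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, lookback, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)


if __name__ == "__main__":
    t = np.arange(600)                 # 50 years of monthly readings (synthetic)
    series = 15 + 10 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(0).normal(0, 1, t.size)
    X, y = make_windows(series)
    model = TempLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(50):                # short illustrative training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    print(f"final training MSE: {loss.item():.3f}")
```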
An Interpretable Systematic Review of Machine Learning Models for Predictive Maintenance of Aircraft Engine | This paper presents an interpretable review of various machine learning and
deep learning models to predict the maintenance of aircraft engine to avoid any
kind of disaster. One of the advantages of the strategy is that it can work
with modest datasets. In this study, sensor data is utilized to predict
aircraft engine failure within a predetermined number of cycles using LSTM,
Bi-LSTM, RNN, Bi-RNN, GRU, Random Forest, KNN, Naive Bayes, and Gradient
Boosting. We explain how deep learning and machine learning can be used to
generate predictions in predictive maintenance using a straightforward scenario
with just one data source. We applied LIME to the models to help us understand why machine learning models did not perform as well as deep learning models. An
extensive analysis of the model's behavior is presented for several test data
to understand the black-box behavior of the models. Promising accuracies of 97.8%, 97.14%, and 96.42% are achieved by GRU, Bi-LSTM, and LSTM, respectively, which demonstrates the capability of the models to predict maintenance at an early
stage. | [
"Abdullah Al Hasib",
"Ashikur Rahman",
"Mahpara Khabir",
"Md. Tanvir Rouf Shawon"
] | 2023-09-23 08:54:10 | http://arxiv.org/abs/2309.13310v1 | http://arxiv.org/pdf/2309.13310v1 | 2309.13310v1 |
CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity | With distributed machine learning being a prominent technique for large-scale
machine learning tasks, communication complexity has become a major bottleneck
for speeding up training and scaling up the number of machines. In this paper, we propose a new technique named Common randOm REconstruction (CORE), which can be
used to compress the information transmitted between machines in order to
reduce communication complexity without other strict conditions. Especially,
our technique CORE projects the vector-valued information to a low-dimensional
one through common random vectors and reconstructs the information with the
same random noises after communication. We apply CORE to two distributed tasks,
respectively convex optimization on linear models and generic non-convex
optimization, and design new distributed algorithms, which achieve provably
lower communication complexities. For example, we show for linear models
CORE-based algorithm can encode the gradient vector to $\mathcal{O}(1)$-bits
(against $\mathcal{O}(d)$) without worsening the convergence rate, improving on the existing results. | [
"Pengyun Yue",
"Hanzhen Zhao",
"Cong Fang",
"Di He",
"Liwei Wang",
"Zhouchen Lin",
"Song-chun Zhu"
] | 2023-09-23 08:45:27 | http://arxiv.org/abs/2309.13307v1 | http://arxiv.org/pdf/2309.13307v1 | 2309.13307v1 |
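The CORE entry above compresses communicated vectors by projecting them onto common random vectors and reconstructing them with the same random noise after communication. The numpy sketch below illustrates that shared-seed projection/reconstruction idea in isolation; it is not the authors' algorithm and omits the optimization loop around it.

```python
# Illustrative common-random-projection compression with a shared seed.
import numpy as np


def compress(vec, k, seed):
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((k, vec.size)) / np.sqrt(k)   # common random vectors
    return R @ vec                                        # only k numbers are transmitted


def reconstruct(coeffs, d, seed):
    rng = np.random.default_rng(seed)                     # same seed => same random matrix
    R = rng.standard_normal((coeffs.size, d)) / np.sqrt(coeffs.size)
    return R.T @ coeffs                                   # unbiased reconstruction of the vector


if __name__ == "__main__":
    g = np.random.default_rng(1).standard_normal(10_000)  # e.g. a gradient vector
    seed, k = 42, 500
    g_hat = reconstruct(compress(g, k, seed), g.size, seed)
    # The estimate is unbiased; its relative error shrinks as k grows.
    print(np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```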
C$^2$VAE: Gaussian Copula-based VAE Differing Disentangled from Coupled Representations with Contrastive Posterior | We present a self-supervised variational autoencoder (VAE) to jointly learn
disentangled and dependent hidden factors and then enhance disentangled
representation learning by a self-supervised classifier to eliminate coupled
representations in a contrastive manner. To this end, a Contrastive Copula VAE
(C$^2$VAE) is introduced without relying on prior knowledge about the data in its probabilistic principle or strong modeling assumptions on the posterior in its neural architecture. C$^2$VAE simultaneously factorizes the
posterior (evidence lower bound, ELBO) with total correlation (TC)-driven
decomposition for learning factorized disentangled representations and extracts
the dependencies between hidden features by a neural Gaussian copula for copula
coupled representations. Then, a self-supervised contrastive classifier
differentiates the disentangled representations from the coupled
representations, where a contrastive loss regularizes this contrastive
classification together with the TC loss for eliminating entangled factors and
strengthening disentangled representations. C$^2$VAE demonstrates a strong
effect in enhancing disentangled representation learning. C$^2$VAE further
contributes to improved optimization addressing the TC-based VAE instability
and the trade-off between reconstruction and representation. | [
"Zhangkai Wu",
"Longbing Cao"
] | 2023-09-23 08:33:48 | http://arxiv.org/abs/2309.13303v1 | http://arxiv.org/pdf/2309.13303v1 | 2309.13303v1 |
Beyond Fairness: Age-Harmless Parkinson's Detection via Voice | Parkinson's disease (PD), a neurodegenerative disorder, often manifests as
speech and voice dysfunction. While utilizing voice data for PD detection has
great potential in clinical applications, the widely used deep learning models
currently have fairness issues regarding different ages of onset. These deep
models perform well for the elderly group (age $>$ 55) but are less accurate
for the young group (age $\leq$ 55). Our investigation shows that the discrepancy between the elderly and the young arises from 1) an imbalanced dataset and 2)
the milder symptoms often seen in early-onset patients. However, traditional
debiasing methods are impractical as they typically impair the prediction
accuracy for the majority group while minimizing the discrepancy. To address
this issue, we present a new debiasing method using GradCAM-based feature
masking combined with ensemble models, ensuring that neither fairness nor
accuracy is compromised. Specifically, the GradCAM-based feature masking
selectively obscures age-related features in the input voice data while
preserving essential information for PD detection. The ensemble models further
improve the prediction accuracy for the minority (young group). Our approach
effectively improves detection accuracy for early-onset patients without
sacrificing performance for the elderly group. Additionally, we propose a
two-step detection strategy for the young group, offering a practical risk
assessment for potential early-onset PD patients. | [
"Yicheng Wang",
"Xiaotian Han",
"Leisheng Yu",
"Na Zou"
] | 2023-09-23 07:23:44 | http://arxiv.org/abs/2309.13292v1 | http://arxiv.org/pdf/2309.13292v1 | 2309.13292v1 |
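The entry above obscures age-related features via GradCAM-based feature masking before Parkinson's detection. The sketch below substitutes plain input-gradient saliency for Grad-CAM and uses placeholder linear models, so it only illustrates the masking mechanics, not the paper's pipeline.

```python
# Simplified saliency-based feature masking (a stand-in for Grad-CAM masking).
import torch
import torch.nn as nn

age_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))   # age regressor (placeholder)
pd_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))    # PD classifier (placeholder)


def mask_age_features(spec, keep_fraction=0.8):
    spec = spec.clone().requires_grad_(True)
    age_model(spec.unsqueeze(0)).sum().backward()                 # saliency w.r.t. the input
    saliency = spec.grad.abs()
    threshold = torch.quantile(saliency.flatten(), keep_fraction)
    mask = (saliency <= threshold).float()                        # zero the most age-salient cells
    return spec.detach() * mask


if __name__ == "__main__":
    spectrogram = torch.randn(64, 64)                             # fake log-mel spectrogram
    masked = mask_age_features(spectrogram)
    logits = pd_model(masked.unsqueeze(0))
    print(logits.shape)                                           # torch.Size([1, 2])
```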
Domain-Guided Conditional Diffusion Model for Unsupervised Domain Adaptation | Limited transferability hinders the performance of deep learning models when
applied to new application scenarios. Recently, Unsupervised Domain Adaptation
(UDA) has achieved significant progress in addressing this issue via learning
domain-invariant features. However, the performance of existing UDA methods is
constrained by the large domain shift and limited target domain data. To
alleviate these issues, we propose DomAin-guided Conditional Diffusion Model
(DACDM) to generate high-fidelity and diverse samples for the target domain.
In the proposed DACDM, by introducing class information, the labels of
generated samples can be controlled, and a domain classifier is further
introduced in DACDM to guide the generated samples for the target domain. The
generated samples help existing UDA methods transfer from the source domain to
the target domain more easily, thus improving the transfer performance.
Extensive experiments on various benchmarks demonstrate that DACDM brings a
large improvement to the performance of existing UDA methods. | [
"Yulong Zhang",
"Shuhao Chen",
"Weisen Jiang",
"Yu Zhang",
"Jiangang Lu",
"James T. Kwok"
] | 2023-09-23 07:09:44 | http://arxiv.org/abs/2309.14360v1 | http://arxiv.org/pdf/2309.14360v1 | 2309.14360v1 |
Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework | We study high-confidence off-policy evaluation in the context of
infinite-horizon Markov decision processes, where the objective is to establish
a confidence interval (CI) for the target policy value using only offline data
pre-collected from unknown behavior policies. This task faces two primary
challenges: providing a comprehensive and rigorous error quantification in CI
estimation, and addressing the distributional shift that results from
discrepancies between the distribution induced by the target policy and the
offline data-generating process. Motivated by an innovative unified error
analysis, we jointly quantify the two sources of estimation errors: the
misspecification error on modeling marginalized importance weights and the
statistical uncertainty due to sampling, within a single interval. This unified
framework reveals a previously hidden tradeoff between the errors, which
undermines the tightness of the CI. Relying on a carefully designed
discriminator function, the proposed estimator achieves a dual purpose:
breaking the curse of the tradeoff to attain the tightest possible CI, and
adapting the CI to ensure robustness against distributional shifts. Our method
is applicable to time-dependent data without assuming any weak dependence
conditions via leveraging a local supermartingale/martingale structure.
Theoretically, we show that our algorithm is sample-efficient, error-robust,
and provably convergent even in non-linear function approximation settings. The
numerical performance of the proposed method is examined in synthetic datasets
and an OhioT1DM mobile health study. | [
"Wenzhuo Zhou",
"Yuhan Li",
"Ruoqing Zhu",
"Annie Qu"
] | 2023-09-23 06:35:44 | http://arxiv.org/abs/2309.13278v2 | http://arxiv.org/pdf/2309.13278v2 | 2309.13278v2 |
A Deep Learning Sequential Decoder for Transient High-Density Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer Learning | Hand gesture recognition (HGR) has gained significant attention due to the
increasing use of AI-powered human-computer interfaces that can interpret the
deep spatiotemporal dynamics of biosignals from the peripheral nervous system,
such as surface electromyography (sEMG). These interfaces have a range of
applications, including the control of extended reality, agile prosthetics, and
exoskeletons. However, the natural variability of sEMG among individuals has
led researchers to focus on subject-specific solutions. Deep learning methods,
which often have complex structures, are particularly data-hungry and can be
time-consuming to train, making them less practical for subject-specific
applications. In this paper, we propose and develop a generalizable, sequential
decoder of transient high-density sEMG (HD-sEMG) that achieves 73% average
accuracy on 65 gestures for partially-observed subjects through
subject-embedded transfer learning, leveraging pre-knowledge of HGR acquired
during pre-training. The use of transient HD-sEMG before gesture stabilization
allows us to predict gestures with the ultimate goal of counterbalancing system
control delays. The results show that the proposed generalized models
significantly outperform subject-specific approaches, especially when the
training data is limited, and there is a significant number of gesture classes.
By building on pre-knowledge and incorporating a multiplicative
subject-embedded structure, our method comparatively achieves more than 13%
average accuracy across partially observed subjects with minimal data
availability. This work highlights the potential of HD-sEMG and demonstrates
the benefits of modeling common patterns across users to reduce the need for
large amounts of data for new users, enhancing practicality. | [
"Golara Ahmadi Azar",
"Qin Hu",
"Melika Emami",
"Alyson Fletcher",
"Sundeep Rangan",
"S. Farokh Atashzar"
] | 2023-09-23 05:32:33 | http://arxiv.org/abs/2310.03752v1 | http://arxiv.org/pdf/2310.03752v1 | 2310.03752v1 |
Order-preserving Consistency Regularization for Domain Adaptation and Generalization | Deep learning models fail on cross-domain challenges if the model is
oversensitive to domain-specific attributes, e.g., lighting, background,
camera angle, etc. To alleviate this problem, data augmentation coupled with
consistency regularization are commonly adopted to make the model less
sensitive to domain-specific attributes. Consistency regularization enforces
the model to output the same representation or prediction for two views of one
image. These constraints, however, are either too strict or not
order-preserving for the classification probabilities. In this work, we propose
the Order-preserving Consistency Regularization (OCR) for cross-domain tasks.
The order-preserving property for the prediction makes the model robust to
task-irrelevant transformations. As a result, the model becomes less sensitive
to the domain-specific attributes. The comprehensive experiments show that our
method achieves clear advantages on five different cross-domain tasks. | [
"Mengmeng Jing",
"Xiantong Zhen",
"Jingjing Li",
"Cees Snoek"
] | 2023-09-23 04:45:42 | http://arxiv.org/abs/2309.13258v1 | http://arxiv.org/pdf/2309.13258v1 | 2309.13258v1 |
Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks | Pre-trained language models (PLMs) have demonstrated remarkable performance
as few-shot learners. However, their security risks under such settings are
largely unexplored. In this work, we conduct a pilot study showing that PLMs as
few-shot learners are highly vulnerable to backdoor attacks while existing
defenses are inadequate due to the unique challenges of few-shot scenarios. To
address such challenges, we advocate MDP, a novel lightweight, pluggable, and
effective defense for PLMs as few-shot learners. Specifically, MDP leverages
the gap between the masking-sensitivity of poisoned and clean samples: with
reference to the limited few-shot data as distributional anchors, it compares
the representations of given samples under varying masking and identifies
poisoned samples as ones with significant variations. We show analytically that
MDP creates an interesting dilemma for the attacker to choose between attack
effectiveness and detection evasiveness. The empirical evaluation using
benchmark datasets and representative attacks validates the efficacy of MDP. | [
"Zhaohan Xi",
"Tianyu Du",
"Changjiang Li",
"Ren Pang",
"Shouling Ji",
"Jinghui Chen",
"Fenglong Ma",
"Ting Wang"
] | 2023-09-23 04:41:55 | http://arxiv.org/abs/2309.13256v1 | http://arxiv.org/pdf/2309.13256v1 | 2309.13256v1 |
Zen: Near-Optimal Sparse Tensor Synchronization for Distributed DNN Training | Distributed training is the de facto standard to scale up the training of
Deep Neural Networks (DNNs) with multiple GPUs. The performance bottleneck of
distributed training lies in communications for gradient synchronization.
Recently, practitioners have observed sparsity in gradient tensors, suggesting
the potential to reduce the traffic volume in communication and improve
end-to-end training efficiency. Yet, the optimal communication scheme to fully
leverage sparsity is still missing. This paper aims to address this gap. We
first analyze the characteristics of sparse tensors in popular DNN models to
understand the fundamentals of sparsity. We then systematically explore the
design space of communication schemes for sparse tensors and find the optimal one. We also develop a gradient
synchronization system called Zen that approximately realizes it for sparse
tensors. We demonstrate that Zen can achieve up to 5.09x speedup in
communication time and up to 2.48x speedup in training throughput compared to
the state-of-the-art methods. | [
"Zhuang Wang",
"Zhaozhuo Xu",
"Anshumali Shrivastava",
"T. S. Eugene Ng"
] | 2023-09-23 04:32:48 | http://arxiv.org/abs/2309.13254v1 | http://arxiv.org/pdf/2309.13254v1 | 2309.13254v1 |
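The Zen entry above exploits gradient sparsity to cut synchronization traffic. The sketch below shows the baseline idea of transmitting only (index, value) pairs and scatter-adding them during aggregation; Zen's actual near-optimal communication scheme is more involved, so treat this purely as an illustration of the sparsity savings.

```python
# Illustrative sparse-gradient encoding and aggregation (not Zen's scheme).
import numpy as np


def encode_sparse(grad):
    idx = np.flatnonzero(grad)
    return idx.astype(np.int32), grad[idx].astype(np.float32)   # what gets transmitted


def aggregate(sparse_msgs, dim):
    total = np.zeros(dim, dtype=np.float32)
    for idx, vals in sparse_msgs:
        np.add.at(total, idx, vals)                             # scatter-add each worker's update
    return total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, workers = 100_000, 4
    grads = []
    for _ in range(workers):
        g = np.zeros(dim, dtype=np.float32)
        nz = rng.choice(dim, size=dim // 100, replace=False)    # ~1% non-zero entries
        g[nz] = rng.standard_normal(nz.size)
        grads.append(g)
    msgs = [encode_sparse(g) for g in grads]
    dense_sum = aggregate(msgs, dim)
    sent = sum(i.nbytes + v.nbytes for i, v in msgs)
    print(f"bytes sent: {sent} vs dense: {workers * dim * 4}")
```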
Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models | In recent years, explainable machine learning methods have been very
successful. Despite their success, most explainable machine learning methods
are applied to black-box models without any domain knowledge. By incorporating
domain knowledge, science-informed machine learning models have demonstrated
better generalization and interpretation. But do we obtain consistent
scientific explanations if we apply explainable machine learning methods to
science-informed machine learning models? This question is addressed in the
context of monotonic models that exhibit three different types of monotonicity.
To demonstrate monotonicity, we propose three axioms. Accordingly, this study
shows that when only individual monotonicity is involved, the baseline Shapley
value provides good explanations; however, when strong pairwise monotonicity is
involved, the Integrated gradients method provides reasonable explanations on
average. | [
"Dangxing Chen"
] | 2023-09-23 03:59:02 | http://arxiv.org/abs/2309.13246v1 | http://arxiv.org/pdf/2309.13246v1 | 2309.13246v1 |
Importance of negative sampling in weak label learning | Weak-label learning is a challenging task that requires learning from data
"bags" containing positive and negative instances, but only the bag labels are
known. The pool of negative instances is usually larger than that of positive instances, thus making the selection of the most informative negative instances
critical for performance. Such a selection strategy for negative instances from
each bag is an open problem that has not been well studied for weak-label
learning. In this paper, we study several sampling strategies that can measure
the usefulness of negative instances for weak-label learning and select them
accordingly. We test our method on CIFAR-10 and AudioSet datasets and show that
it improves the weak-label classification performance and reduces the
computational cost compared to random sampling methods. Our work reveals that
negative instances are not all equally irrelevant, and selecting them wisely
can benefit weak-label learning. | [
"Ankit Shah",
"Fuyu Tang",
"Zelin Ye",
"Rita Singh",
"Bhiksha Raj"
] | 2023-09-23 01:11:15 | http://arxiv.org/abs/2309.13227v1 | http://arxiv.org/pdf/2309.13227v1 | 2309.13227v1 |
Pick Planning Strategies for Large-Scale Package Manipulation | Automating warehouse operations can reduce logistics overhead costs,
ultimately driving down the final price for consumers, increasing the speed of
delivery, and enhancing the resiliency to market fluctuations.
This extended abstract showcases a large-scale package manipulation from
unstructured piles in Amazon Robotics' Robot Induction (Robin) fleet, which is
used for picking and singulating up to 6 million packages per day and so far
has manipulated over 2 billion packages. It describes the various heuristic
methods developed over time and their successor, which utilizes a pick success
predictor trained on real production data.
To the best of the authors' knowledge, this work is the first large-scale
deployment of learned pick quality estimation methods in a real production
system. | [
"Shuai Li",
"Azarakhsh Keipour",
"Kevin Jamieson",
"Nicolas Hudson",
"Sicong Zhao",
"Charles Swan",
"Kostas Bekris"
] | 2023-09-23 00:26:49 | http://arxiv.org/abs/2309.13224v2 | http://arxiv.org/pdf/2309.13224v2 | 2309.13224v2 |
Grad DFT: a software library for machine learning enhanced density functional theory | Density functional theory (DFT) stands as a cornerstone method in
computational quantum chemistry and materials science due to its remarkable
versatility and scalability. Yet, it suffers from limitations in accuracy,
particularly when dealing with strongly correlated systems. To address these
shortcomings, recent work has begun to explore how machine learning can expand
the capabilities of DFT; an endeavor with many open questions and technical
challenges. In this work, we present Grad DFT: a fully differentiable JAX-based
DFT library, enabling quick prototyping and experimentation with machine
learning-enhanced exchange-correlation energy functionals. Grad DFT employs a
pioneering parametrization of exchange-correlation functionals constructed
using a weighted sum of energy densities, where the weights are determined
using neural networks. Moreover, Grad DFT encompasses a comprehensive suite of
auxiliary functions, notably featuring a just-in-time compilable and fully
differentiable self-consistent iterative procedure. To support training and
benchmarking efforts, we additionally compile a curated dataset of experimental
dissociation energies of dimers, half of which contain transition metal atoms
characterized by strong electronic correlations. The software library is tested
against experimental results to study the generalization capabilities of a
neural functional across potential energy surfaces and atomic species, as well
as the effect of training data noise on the resulting model accuracy. | [
"Pablo A. M. Casares",
"Jack S. Baker",
"Matija Medvidovic",
"Roberto dos Reis",
"Juan Miguel Arrazola"
] | 2023-09-23 00:25:06 | http://arxiv.org/abs/2309.15127v1 | http://arxiv.org/pdf/2309.15127v1 | 2309.15127v1 |
COCO-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs | Counterfactual examples have proven to be valuable in the field of natural
language processing (NLP) for both evaluating and improving the robustness of
language models to spurious correlations in datasets. Despite their
demonstrated utility for NLP, multimodal counterfactual examples have been
relatively unexplored due to the difficulty of creating paired image-text data
with minimal counterfactual changes. To address this challenge, we introduce a
scalable framework for automatic generation of counterfactual examples using
text-to-image diffusion models. We use our framework to create
COCO-Counterfactuals, a multimodal counterfactual dataset of paired image and
text captions based on the MS-COCO dataset. We validate the quality of
COCO-Counterfactuals through human evaluations and show that existing
multimodal models are challenged by our counterfactual image-text pairs.
Additionally, we demonstrate the usefulness of COCO-Counterfactuals for
improving out-of-domain generalization of multimodal vision-language models via
training data augmentation. | [
"Tiep Le",
"Vasudev Lal",
"Phillip Howard"
] | 2023-09-23 00:16:47 | http://arxiv.org/abs/2309.14356v1 | http://arxiv.org/pdf/2309.14356v1 | 2309.14356v1 |
Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks | Despite the basic premise that next-generation wireless networks (e.g., 6G)
will be artificial intelligence (AI)-native, to date, most existing efforts
remain either qualitative or incremental extensions to existing ``AI for
wireless'' paradigms. Indeed, creating AI-native wireless networks faces
significant technical challenges due to the limitations of data-driven,
training-intensive AI. These limitations include the black-box nature of the AI
models, their curve-fitting nature, which can limit their ability to reason and
adapt, their reliance on large amounts of training data, and the energy
inefficiency of large neural networks. In response to these limitations, this
article presents a comprehensive, forward-looking vision that addresses these
shortcomings by introducing a novel framework for building AI-native wireless
networks; grounded in the emerging field of causal reasoning. Causal reasoning,
founded on causal discovery, causal representation learning, and causal
inference, can help build explainable, reasoning-aware, and sustainable
wireless networks. Towards fulfilling this vision, we first highlight several
wireless networking challenges that can be addressed by causal discovery and
representation, including ultra-reliable beamforming for terahertz (THz)
systems, near-accurate physical twin modeling for digital twins, training data
augmentation, and semantic communication. We showcase how incorporating causal
discovery can assist in achieving dynamic adaptability, resilience, and
cognition in addressing these challenges. Furthermore, we outline potential
frameworks that leverage causal inference to achieve the overarching objectives
of future-generation networks, including intent management, dynamic
adaptability, human-level cognition, reasoning, and the critical element of
time sensitivity. | [
"Christo Kurisummoottil Thomas",
"Christina Chaccour",
"Walid Saad",
"Merouane Debbah",
"Choong Seon Hong"
] | 2023-09-23 00:05:39 | http://arxiv.org/abs/2309.13223v1 | http://arxiv.org/pdf/2309.13223v1 | 2309.13223v1 |
Assessing the Impact of Personality on Affective States from Video Game Communication | Individual differences in personality determine our preferences, traits and
values, which should similarly hold for the way we express ourselves. With
current advancements and transformations of technology and society, text-based
communication has become ordinary and often even surpasses natural voice
conversations -- with distinct challenges and opportunities. In this
exploratory work, we investigate the impact of personality on how players of a team-based collaborative alternate reality game tend to express themselves
affectively. We collected chat logs from eleven players over two weeks, labeled
them according to their affective state, and assessed the connection between
them and the five-factor personality domains and facets. After applying
multi-linear regression, we found a series of reasonable correlations between
(combinations of) personality variables and expressed affect -- as increased
confusion could be predicted by lower self-competence (C1), personal annoyance
by vulnerability to stress (N6), and expressing anger occurred more often in
players that are prone to anxiety (N1), less humble and modest (A5), think less
carefully before they act (C6) and have higher neuroticism (N). Expanding the
data set, sample size and input modalities in subsequent work, we aim to
confirm these findings and reveal even more interesting connections that could
inform affective computing and games user research equally. | [
"Atieh Kashani",
"Johannes Pfau",
"Magy Seif El-Nasr"
] | 2023-09-22 23:24:37 | http://arxiv.org/abs/2309.13214v1 | http://arxiv.org/pdf/2309.13214v1 | 2309.13214v1 |
The LHCb ultra-fast simulation option, Lamarr: design and validation | Detailed detector simulation is the major consumer of CPU resources at LHCb,
having used more than 90% of the total computing budget during Run 2 of the
Large Hadron Collider at CERN. As data is collected by the upgraded LHCb
detector during Run 3 of the LHC, larger requests for simulated data samples
are necessary, and will far exceed the pledged resources of the experiment,
even with existing fast simulation options. An evolution of technologies and
techniques to produce simulated samples is mandatory to meet the upcoming needs
of analysis to interpret signal versus background and measure efficiencies. In
this context, we propose Lamarr, a Gaudi-based framework designed to offer the
fastest solution for the simulation of the LHCb detector. Lamarr consists of a
pipeline of modules parameterizing both the detector response and the
reconstruction algorithms of the LHCb experiment. Most of the parameterizations
are made of Deep Generative Models and Gradient Boosted Decision Trees trained
on simulated samples or alternatively, where possible, on real data. Embedding
Lamarr in the general LHCb Gauss Simulation framework allows combining its
execution with any of the available generators in a seamless way. Lamarr has
been validated by comparing key reconstructed quantities with Detailed
Simulation. Good agreement of the simulated distributions is obtained with
two-order-of-magnitude speed-up of the simulation phase. | [
"Lucio Anderlini",
"Matteo Barbetti",
"Simone Capelli",
"Gloria Corti",
"Adam Davis",
"Denis Derkach",
"Nikita Kazeev",
"Artem Maevskiy",
"Maurizio Martinelli",
"Sergei Mokonenko",
"Benedetto Gianluca Siddi",
"Zehua Xu"
] | 2023-09-22 23:21:27 | http://arxiv.org/abs/2309.13213v1 | http://arxiv.org/pdf/2309.13213v1 | 2309.13213v1 |
Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications | Robust quantification of predictive uncertainty is critical for understanding
factors that drive weather and climate outcomes. Ensembles provide predictive
uncertainty estimates and can be decomposed physically, but both physics and
machine learning ensembles are computationally expensive. Parametric deep
learning can estimate uncertainty with one model by predicting the parameters
of a probability distribution but does not account for epistemic uncertainty.
Evidential deep learning, a technique that extends parametric deep learning to
higher-order distributions, can account for both aleatoric and epistemic
uncertainty with one model. This study compares the uncertainty derived from
evidential neural networks to those obtained from ensembles. Through
applications of classification of winter precipitation type and regression of
surface layer fluxes, we show evidential deep learning models attaining
predictive accuracy rivaling standard methods, while robustly quantifying both
sources of uncertainty. We evaluate the uncertainty in terms of how well the
predictions are calibrated and how well the uncertainty correlates with
prediction error. Analyses of uncertainty in the context of the inputs reveal
sensitivities to underlying meteorological processes, facilitating
interpretation of the models. The conceptual simplicity, interpretability, and
computational efficiency of evidential neural networks make them highly
extensible, offering a promising approach for reliable and practical
uncertainty quantification in Earth system science modeling. In order to
encourage broader adoption of evidential deep learning in Earth System Science,
we have developed a new Python package, MILES-GUESS
(https://github.com/ai2es/miles-guess), that enables users to train and
evaluate both evidential and ensemble deep learning. | [
"John S. Schreck",
"David John Gagne II",
"Charlie Becker",
"William E. Chapman",
"Kim Elmore",
"Gabrielle Gantos",
"Eliot Kim",
"Dhamma Kimpara",
"Thomas Martin",
"Maria J. Molina",
"Vanessa M. Pryzbylo",
"Jacob Radford",
"Belen Saavedra",
"Justin Willson",
"Christopher Wirz"
] | 2023-09-22 23:04:51 | http://arxiv.org/abs/2309.13207v1 | http://arxiv.org/pdf/2309.13207v1 | 2309.13207v1 |
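The entry above uses evidential deep learning to obtain aleatoric and epistemic uncertainty from a single model. The sketch below shows a Normal-Inverse-Gamma evidential regression head in the style commonly used for deep evidential regression; it is not necessarily the parameterization used in the MILES-GUESS package.

```python
# Illustrative evidential regression head (Normal-Inverse-Gamma outputs).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.out = nn.Linear(in_features, 4)           # gamma, nu, alpha, beta

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.out(x).unbind(dim=-1)
        nu = F.softplus(log_nu) + 1e-6                  # > 0
        alpha = F.softplus(log_alpha) + 1.0 + 1e-6      # > 1 so the moments below exist
        beta = F.softplus(log_beta) + 1e-6              # > 0
        return gamma, nu, alpha, beta


def uncertainties(nu, alpha, beta):
    aleatoric = beta / (alpha - 1.0)                    # expected data noise, E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))             # variance of the mean, Var[mu]
    return aleatoric, epistemic


if __name__ == "__main__":
    head = EvidentialHead(in_features=16)
    gamma, nu, alpha, beta = head(torch.randn(8, 16))
    alea, epi = uncertainties(nu, alpha, beta)
    print(gamma.shape, alea.mean().item(), epi.mean().item())
```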
A Practical Survey on Zero-shot Prompt Design for In-context Learning | The remarkable advancements in large language models (LLMs) have brought
about significant improvements in Natural Language Processing(NLP) tasks. This
paper presents a comprehensive review of in-context learning techniques,
focusing on different types of prompts, including discrete, continuous,
few-shot, and zero-shot, and their impact on LLM performance. We explore
various approaches to prompt design, such as manual design, optimization
algorithms, and evaluation methods, to optimize LLM performance across diverse
tasks. Our review covers key research studies in prompt engineering, discussing
their methodologies and contributions to the field. We also delve into the
challenges faced in evaluating prompt performance, given the absence of a
single "best" prompt and the importance of considering multiple metrics. In
conclusion, the paper highlights the critical role of prompt design in
harnessing the full potential of LLMs and provides insights into the
combination of manual design, optimization techniques, and rigorous evaluation
for more effective and efficient use of LLMs in various NLP tasks. | [
"Yinheng Li"
] | 2023-09-22 23:00:34 | http://arxiv.org/abs/2309.13205v1 | http://arxiv.org/pdf/2309.13205v1 | 2309.13205v1 |
Federated Short-Term Load Forecasting with Personalization Layers for Heterogeneous Clients | The advent of smart meters has enabled pervasive collection of energy
consumption data for training short-term load forecasting (STLF) models. In
response to privacy concerns, federated learning (FL) has been proposed as a
privacy-preserving approach for training, but the quality of trained models
degrades as client data becomes heterogeneous. In this paper we alleviate this
drawback using personalization layers, wherein certain layers of an STLF model
in an FL framework are trained exclusively on the clients' own data. To that
end, we propose a personalized FL algorithm (PL-FL) enabling FL to handle
personalization layers. The PL-FL algorithm is implemented by using the Argonne
Privacy-Preserving Federated Learning package. We test the forecast performance
of models trained on the NREL ComStock dataset, which contains heterogeneous
energy consumption data of multiple commercial buildings. Superior performance
of models trained with PL-FL demonstrates that personalization layers enable
classical FL algorithms to handle clients with heterogeneous data. | [
"Shourya Bose",
"Kibaek Kim"
] | 2023-09-22 21:57:52 | http://arxiv.org/abs/2309.13194v1 | http://arxiv.org/pdf/2309.13194v1 | 2309.13194v1 |
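The entry above trains certain layers of an STLF model exclusively on each client's own data (personalization layers) within federated learning. The sketch below shows the core averaging rule, skipping the personalization layers during aggregation, with a toy model and hard-coded layer names as assumptions; it is not the Argonne APPFL implementation.

```python
# Illustrative federated averaging that leaves personalization layers client-local.
import copy
import torch
import torch.nn as nn


def make_stlf_model():
    # toy load-forecasting model; the last layer plays the role of the personalization layer
    return nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))


def average_shared(client_models, personalized=("2.weight", "2.bias")):
    states = [m.state_dict() for m in client_models]
    avg = copy.deepcopy(states[0])
    for name in avg:
        if name in personalized:
            continue                                    # personalization layers stay local
        avg[name] = torch.stack([s[name] for s in states]).mean(dim=0)
    for m, local in zip(client_models, states):
        merged = {k: (local[k] if k in personalized else avg[k]) for k in local}
        m.load_state_dict(merged)


if __name__ == "__main__":
    clients = [make_stlf_model() for _ in range(3)]
    # ... each client would train locally on its own smart-meter data here ...
    average_shared(clients)
    # shared first-layer weights are now identical; personalized last layers differ per client
    print(torch.allclose(clients[0][0].weight, clients[1][0].weight),
          torch.allclose(clients[0][2].weight, clients[1][2].weight))
```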
Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation | Fine-tuning is the most effective way of adapting pre-trained large language
models (LLMs) to downstream applications. With the fast growth of LLM-enabled
AI applications and the democratization of open-sourced LLMs, fine-tuning has become
possible for non-expert individuals, but intensively performed LLM fine-tuning
worldwide could result in significantly high energy consumption and carbon
footprint, which may bring large environmental impact. Mitigating such
environmental impact towards Green AI directly correlates to reducing the FLOPs
of fine-tuning, but existing techniques on efficient LLM fine-tuning can only
achieve limited reduction of such FLOPs, due to their ignorance of the
backpropagation cost in fine-tuning. To address this limitation, in this paper
we present GreenTrainer, a new LLM fine-tuning technique that adaptively
evaluates different tensors' backpropagation costs and contributions to the
fine-tuned model accuracy, to minimize the fine-tuning cost by selecting the
most appropriate set of tensors in training. Such selection in GreenTrainer is
made based on a given objective of FLOPs reduction, which can flexibly adapt to
the carbon footprint in energy supply and the need in Green AI. Experiment
results over multiple open-sourced LLM models and abstractive summarization
datasets show that, compared to fine-tuning the whole LLM model, GreenTrainer
can save up to 64% FLOPs in fine-tuning without any noticeable model accuracy
loss. Compared to the existing fine-tuning techniques such as LoRa,
GreenTrainer can achieve up to 4% improvement on model accuracy with on-par
FLOPs reduction. | [
"Kai Huang",
"Hanyun Yin",
"Heng Huang",
"Wei Gao"
] | 2023-09-22 21:55:18 | http://arxiv.org/abs/2309.13192v1 | http://arxiv.org/pdf/2309.13192v1 | 2309.13192v1 |
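The GreenTrainer entry above selects which tensors to update based on their backpropagation cost and contribution to model accuracy. The sketch below conveys only the budgeted-selection step, using parameter count as a crude stand-in for the paper's measured FLOPs and contribution scores.

```python
# Illustrative budgeted tensor selection for fine-tuning (a toy proxy, not GreenTrainer).
import torch
import torch.nn as nn


def select_trainable(model, budget_fraction=0.4):
    # proxy score: parameter count (stand-in for per-tensor backprop cost/benefit)
    params = list(model.named_parameters())
    total = sum(p.numel() for _, p in params)
    budget = budget_fraction * total
    spent = 0
    for name, p in sorted(params, key=lambda kv: kv[1].numel(), reverse=True):
        trainable = spent + p.numel() <= budget
        p.requires_grad_(trainable)
        if trainable:
            spent += p.numel()
    return [n for n, p in params if p.requires_grad]


if __name__ == "__main__":
    model = nn.Sequential(nn.Embedding(1000, 64), nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1000))
    chosen = select_trainable(model, budget_fraction=0.4)
    print(chosen)   # only these tensors will receive gradients during fine-tuning
```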
Spatial-frequency channels, shape bias, and adversarial robustness | What spatial frequency information do humans and neural networks use to
recognize objects? In neuroscience, critical band masking is an established
tool that can reveal the frequency-selective filters used for object
recognition. Critical band masking measures the sensitivity of recognition
performance to noise added at each spatial frequency. Existing critical band
masking studies show that humans recognize periodic patterns (gratings) and
letters by means of a spatial-frequency filter (or "channel") that has a
frequency bandwidth of one octave (doubling of frequency). Here, we introduce
critical band masking as a task for network-human comparison and test 14 humans
and 76 neural networks on 16-way ImageNet categorization in the presence of
narrowband noise. We find that humans recognize objects in natural images using
the same one-octave-wide channel that they use for letters and gratings, making
it a canonical feature of human object recognition. On the other hand, the
neural network channel, across various architectures and training strategies,
is 2-4 times as wide as the human channel. In other words, networks are
vulnerable to high and low frequency noise that does not affect human
performance. Adversarial and augmented-image training are commonly used to
increase network robustness and shape bias. Does this training align network
and human object recognition channels? Three network channel properties
(bandwidth, center frequency, peak noise sensitivity) correlate strongly with
shape bias (53% variance explained) and with robustness of
adversarially-trained networks (74% variance explained). Adversarial training
increases robustness but expands the channel bandwidth even further away from
the human bandwidth. Thus, critical band masking reveals that the network
channel is more than twice as wide as the human channel, and that adversarial
training only increases this difference. | [
"Ajay Subramanian",
"Elena Sizikova",
"Najib J. Majaj",
"Denis G. Pelli"
] | 2023-09-22 21:35:32 | http://arxiv.org/abs/2309.13190v1 | http://arxiv.org/pdf/2309.13190v1 | 2309.13190v1 |
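The entry above measures recognition under noise confined to narrow spatial-frequency bands (critical band masking). The sketch below generates one-octave-wide band-pass noise in the Fourier domain, the kind of narrowband perturbation such experiments add to images; the center frequencies and noise level are illustrative.

```python
# Illustrative one-octave narrowband spatial-frequency noise via FFT band-pass filtering.
import numpy as np


def octave_band_noise(shape, center_cpi, rms=0.1, seed=0):
    """Noise whose energy lies in [center/sqrt(2), center*sqrt(2)] cycles per image."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None] * h          # cycles per image, vertical
    fx = np.fft.fftfreq(w)[None, :] * w          # cycles per image, horizontal
    radius = np.hypot(fy, fx)
    band = (radius >= center_cpi / np.sqrt(2)) & (radius <= center_cpi * np.sqrt(2))
    noise = np.random.default_rng(seed).standard_normal(shape)
    filtered = np.fft.ifft2(np.fft.fft2(noise) * band).real
    return rms * filtered / (filtered.std() + 1e-12)


if __name__ == "__main__":
    image = np.zeros((128, 128))                 # stand-in for a grayscale ImageNet image
    for c in [4, 8, 16, 32]:                     # sweep center frequencies, one octave wide each
        noisy = image + octave_band_noise(image.shape, center_cpi=c)
        print(c, round(noisy.std(), 3))
```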
Masked Discriminators for Content-Consistent Unpaired Image-to-Image Translation | A common goal of unpaired image-to-image translation is to preserve content
consistency between source images and translated images while mimicking the
style of the target domain. Due to biases between the datasets of both domains,
many methods suffer from inconsistencies caused by the translation process.
Most approaches introduced to mitigate these inconsistencies do not constrain
the discriminator, leading to an even more ill-posed training setup. Moreover,
none of these approaches is designed for larger crop sizes. In this work, we
show that masking the inputs of a global discriminator for both domains with a
content-based mask is sufficient to reduce content inconsistencies
significantly. However, this strategy leads to artifacts that can be traced
back to the masking process. To reduce these artifacts, we introduce a local
discriminator that operates on pairs of small crops selected with a similarity
sampling strategy. Furthermore, we apply this sampling strategy to sample
global input crops from the source and target dataset. In addition, we propose
feature-attentive denormalization to selectively incorporate content-based
statistics into the generator stream. In our experiments, we show that our
method achieves state-of-the-art performance in photorealistic sim-to-real
translation and weather translation and also performs well in day-to-night
translation. Additionally, we propose the cKVD metric, which builds on the sKVD
metric and enables the examination of translation quality at the class or
category level. | [
"Bonifaz Stuhr",
"Jürgen Brauer",
"Bernhard Schick",
"Jordi Gonzàlez"
] | 2023-09-22 21:32:07 | http://arxiv.org/abs/2309.13188v1 | http://arxiv.org/pdf/2309.13188v1 | 2309.13188v1 |
Visualizing Topological Importance: A Class-Driven Approach | This paper presents the first approach to visualize the importance of
topological features that define classes of data. Topological features, with
their ability to abstract the fundamental structure of complex data, are an
integral component of visualization and analysis pipelines. However, not all topological features present in data are of equal importance. To date, the
default definition of feature importance is often assumed and fixed. This work
shows how proven explainable deep learning approaches can be adapted for use in
topological classification. In doing so, it provides the first technique that
illuminates what topological structures are important in each dataset in
regards to their class label. In particular, the approach uses a learned metric
classifier with a density estimator of the points of a persistence diagram as
input. This metric learns how to reweigh this density such that classification
accuracy is high. By extracting this weight, an importance field on persistent
point density can be created. This provides an intuitive representation of
persistence point importance that can be used to drive new visualizations. This
work provides two examples: Visualization on each diagram directly and, in the
case of sublevel set filtrations on images, directly on the images themselves.
This work highlights real-world examples of this approach visualizing the
important topological features in graph, 3D shape, and medical image data. | [
"Yu Qin",
"Brittany Terese Fasy",
"Carola Wenk",
"Brian Summa"
] | 2023-09-22 21:20:41 | http://arxiv.org/abs/2309.13185v1 | http://arxiv.org/pdf/2309.13185v1 | 2309.13185v1 |
Diagnosing and exploiting the computational demands of videos games for deep reinforcement learning | Humans learn by interacting with their environments and perceiving the
outcomes of their actions. A landmark in artificial intelligence has been the
development of deep reinforcement learning (dRL) algorithms capable of doing
the same in video games, on par with or better than humans. However, it remains
unclear whether the successes of dRL models reflect advances in visual
representation learning, the effectiveness of reinforcement learning algorithms
at discovering better policies, or both. To address this question, we introduce
the Learning Challenge Diagnosticator (LCD), a tool that separately measures
the perceptual and reinforcement learning demands of a task. We use LCD to
discover a novel taxonomy of challenges in the Procgen benchmark, and
demonstrate that these predictions are both highly reliable and can instruct
algorithmic development. More broadly, the LCD reveals multiple failure cases
that can occur when optimizing dRL algorithms over entire video game benchmarks
like Procgen, and provides a pathway towards more efficient progress. | [
"Lakshmi Narasimhan Govindarajan",
"Rex G Liu",
"Drew Linsley",
"Alekh Karkada Ashok",
"Max Reuter",
"Michael J Frank",
"Thomas Serre"
] | 2023-09-22 21:03:33 | http://arxiv.org/abs/2309.13181v1 | http://arxiv.org/pdf/2309.13181v1 | 2309.13181v1 |
Enhancing Multi-Objective Optimization through Machine Learning-Supported Multiphysics Simulation | Multiphysics simulations that involve multiple coupled physical phenomena
quickly become computationally expensive. This imposes challenges for
practitioners aiming to find optimal configurations for these problems
satisfying multiple objectives, as optimization algorithms often require
querying the simulation many times. This paper presents a methodological
framework for training, self-optimizing, and self-organizing surrogate models
to approximate and speed up Multiphysics simulations. We generate two
real-world tabular datasets, which we make publicly available, and show that
surrogate models can be trained on relatively small amounts of data to
approximate the underlying simulations accurately. We conduct extensive
experiments combining four machine learning and deep learning algorithms with
two optimization algorithms and a comprehensive evaluation strategy. Finally,
we evaluate the performance of our combined training and optimization pipeline
by verifying the generated Pareto-optimal results using the ground truth
simulations. We also employ explainable AI techniques to analyse our surrogates
and conduct a preselection strategy to determine the most relevant features in
our real-world examples. This approach lets us understand the underlying
problem and identify critical partial dependencies. | [
"Diego Botache",
"Jens Decke",
"Winfried Ripken",
"Abhinay Dornipati",
"Franz Götz-Hahn",
"Mohamed Ayeb",
"Bernhard Sick"
] | 2023-09-22 20:52:50 | http://arxiv.org/abs/2309.13179v1 | http://arxiv.org/pdf/2309.13179v1 | 2309.13179v1 |
Flow Factorized Representation Learning | A prominent goal of representation learning research is to achieve
representations which are factorized in a useful manner with respect to the
ground truth factors of variation. The fields of disentangled and equivariant
representation learning have approached this ideal from a range of
complimentary perspectives; however, to date, most approaches have proven to
either be ill-specified or insufficiently flexible to effectively separate all
realistic factors of interest in a learned latent space. In this work, we
propose an alternative viewpoint on such structured representation learning
which we call Flow Factorized Representation Learning, and demonstrate it to
learn both more efficient and more usefully structured representations than
existing frameworks. Specifically, we introduce a generative model which
specifies a distinct set of latent probability paths that define different
input transformations. Each latent flow is generated by the gradient field of a
learned potential following dynamic optimal transport. Our novel setup brings
new understandings to both \textit{disentanglement} and \textit{equivariance}.
We show that our model achieves higher likelihoods on standard representation
learning benchmarks while simultaneously being closer to approximately
equivariant models. Furthermore, we demonstrate that the transformations
learned by our model are flexibly composable and can also extrapolate to new
data, implying a degree of robustness and generalizability approaching the
ultimate goal of usefully factorized representation learning. | [
"Yue Song",
"T. Anderson Keller",
"Nicu Sebe",
"Max Welling"
] | 2023-09-22 20:15:37 | http://arxiv.org/abs/2309.13167v1 | http://arxiv.org/pdf/2309.13167v1 | 2309.13167v1 |
Invisible Watermarking for Audio Generation Diffusion Models | Diffusion models have gained prominence in the image domain for their
capabilities in data generation and transformation, achieving state-of-the-art
performance in various tasks in both image and audio domains. In the rapidly
evolving field of audio-based machine learning, safeguarding model integrity
and establishing data copyright are of paramount importance. This paper
presents the first watermarking technique applied to audio diffusion models
trained on mel-spectrograms. This offers a novel approach to the aforementioned
challenges. Our model excels not only in benign audio generation, but also
incorporates an invisible watermarking trigger mechanism for model
verification. This watermark trigger serves as a protective layer, enabling the
identification of model ownership and ensuring its integrity. Through extensive
experiments, we demonstrate that invisible watermark triggers can effectively
protect against unauthorized modifications while maintaining high utility in
benign audio generation tasks. | [
"Xirong Cao",
"Xiang Li",
"Divyesh Jadav",
"Yanzhao Wu",
"Zhehui Chen",
"Chen Zeng",
"Wenqi Wei"
] | 2023-09-22 20:10:46 | http://arxiv.org/abs/2309.13166v1 | http://arxiv.org/pdf/2309.13166v1 | 2309.13166v1 |
GAMIX-VAE: A VAE with Gaussian Mixture Based Posterior | Variational Autoencoders (VAEs) have become a cornerstone in generative
modeling and representation learning within machine learning. This paper
explores a nuanced aspect of VAEs, focusing on interpreting the Kullback
Leibler (KL) Divergence, a critical component within the Evidence Lower Bound
(ELBO) that governs the trade-off between reconstruction accuracy and
regularization. The KL Divergence enforces alignment between the latent
variable distributions and a prior, imposing structure on the overall latent
space but leaving individual variable distributions unconstrained. The proposed
method redefines the ELBO with a mixture of Gaussians for the posterior
probability, introduces a regularization term to prevent variance collapse, and
employs a PatchGAN discriminator to enhance texture realism. Implementation
details involve ResNetV2 architectures for both the Encoder and Decoder. The
experiments demonstrate the ability to generate realistic faces, offering a
promising solution for enhancing VAE based generative models. | [
"Mariano Rivera"
] | 2023-09-22 19:52:28 | http://arxiv.org/abs/2309.13160v1 | http://arxiv.org/pdf/2309.13160v1 | 2309.13160v1 |
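As a rough companion to the abstract above, the following sketch estimates the KL term by Monte Carlo when the posterior is a mixture of Gaussians, for which no closed form exists. The component count, latent dimension, and standard-normal prior are assumptions for illustration, not details taken from the paper.

```python
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

batch, n_components, latent_dim = 4, 3, 8      # assumed sizes

# Mixture-of-Gaussians posterior q(z|x): per-sample weights, means, and scales.
logits = torch.randn(batch, n_components)
means = torch.randn(batch, n_components, latent_dim)
scales = torch.rand(batch, n_components, latent_dim) + 0.1

posterior = MixtureSameFamily(
    Categorical(logits=logits),
    Independent(Normal(means, scales), 1),
)
prior = Independent(Normal(torch.zeros(latent_dim), torch.ones(latent_dim)), 1)

# KL(q || p) has no closed form for a mixture, so estimate it by sampling.
z = posterior.sample((256,))                                   # (256, batch, latent_dim)
kl_mc = (posterior.log_prob(z) - prior.log_prob(z)).mean(0)    # one estimate per batch item
print(kl_mc)
```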
Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations | In recent years, computer vision has made remarkable advancements in
autonomous driving and robotics. However, it has been observed that deep
learning-based visual perception models lack robustness when faced with camera
motion perturbations. The current certification process for assessing
robustness is costly and time-consuming due to the extensive number of image
projections required for Monte Carlo sampling in the 3D camera motion space. To
address these challenges, we present a novel, efficient, and practical
framework for certifying the robustness of 3D-2D projective transformations
against camera motion perturbations. Our approach leverages a smoothing
distribution over the 2D pixel space instead of in the 3D physical space,
eliminating the need for costly camera motion sampling and significantly
enhancing the efficiency of robustness certifications. With the pixel-wise
smoothed classifier, we are able to fully upper bound the projection errors
using a technique of uniform partitioning in camera motion space. Additionally,
we extend our certification framework to a more general scenario where only a
single-frame point cloud is required in the projection oracle. This is achieved
by deriving Lipschitz-based approximated partition intervals. Through extensive
experimentation, we validate the trade-off between effectiveness and efficiency
enabled by our proposed method. Remarkably, our approach achieves approximately
80% certified accuracy while utilizing only 30% of the projected image frames. | [
"Hanjiang Hu",
"Zuxin Liu",
"Linyi Li",
"Jiacheng Zhu",
"Ding Zhao"
] | 2023-09-22 19:15:49 | http://arxiv.org/abs/2309.13150v1 | http://arxiv.org/pdf/2309.13150v1 | 2309.13150v1 |
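The sketch below shows only the generic idea of smoothing a classifier over the 2D pixel space by Monte Carlo majority voting. The base classifier, noise level, and sample count are placeholders, and the paper's actual certification via uniform partitioning of the camera motion space is not reproduced.

```python
import numpy as np

def smoothed_predict(classifier, image, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of a base classifier over pixel-space Gaussian perturbations.

    `classifier` maps an image array to an integer label; it stands in for
    whatever perception model is being smoothed.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = image + sigma * rng.standard_normal(image.shape)
        label = classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get), votes

# Toy usage: a "classifier" that thresholds mean intensity.
toy_classifier = lambda img: int(img.mean() > 0.5)
image = np.full((32, 32), 0.6)
print(smoothed_predict(toy_classifier, image))
```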
Trading-off Mutual Information on Feature Aggregation for Face Recognition | Despite the advances in the field of Face Recognition (FR), the precision of
these methods is not yet sufficient. To improve the FR performance, this paper
proposes a technique to aggregate the outputs of two state-of-the-art (SOTA)
deep FR models, namely ArcFace and AdaFace. In our approach, we leverage the
transformer attention mechanism to exploit the relationship between different
parts of two feature maps. By doing so, we aim to enhance the overall
discriminative power of the FR system. One of the challenges in feature
aggregation is the effective modeling of both local and global dependencies.
Conventional transformers are known for their ability to capture long-range
dependencies, but they often struggle with modeling local dependencies
accurately. To address this limitation, we augment the self-attention mechanism
to capture both local and global dependencies effectively. This allows our
model to take advantage of the overlapping receptive fields present in
corresponding locations of the feature maps. However, fusing two feature maps
from different FR models might introduce redundancies to the face embedding.
Since these models often share identical backbone architectures, the resulting
feature maps may contain overlapping information, which can mislead the
training process. To overcome this problem, we leverage the principle of
Information Bottleneck to obtain a maximally informative facial representation.
This ensures that the aggregated features retain the most relevant and
discriminative information while minimizing redundant or misleading details. To
evaluate the effectiveness of our proposed method, we conducted experiments on
popular benchmarks and compared our results with state-of-the-art algorithms.
The consistent improvement we observed in these benchmarks demonstrates the
efficacy of our approach in enhancing FR performance. | [
"Mohammad Akyash",
"Ali Zafari",
"Nasser M. Nasrabadi"
] | 2023-09-22 18:48:38 | http://arxiv.org/abs/2309.13137v1 | http://arxiv.org/pdf/2309.13137v1 | 2309.13137v1 |
Forecasting Response to Treatment with Deep Learning and Pharmacokinetic Priors | Forecasting healthcare time series is crucial for early detection of adverse
outcomes and for patient monitoring. Forecasting, however, can be difficult in
practice due to noisy and intermittent data. The challenges are often
exacerbated by change points induced via extrinsic factors, such as the
administration of medication. We propose a novel encoder that informs deep
learning models of the pharmacokinetic effects of drugs to allow for accurate
forecasting of time series affected by treatment. We showcase the effectiveness
of our approach in a task to forecast blood glucose using both realistically
simulated and real-world data. Our pharmacokinetic encoder helps deep learning
models surpass baselines by approximately 11% on simulated data and 8% on
real-world data. The proposed approach can have multiple beneficial
applications in clinical practice, such as issuing early warnings about
unexpected treatment responses, or helping to characterize patient-specific
treatment effects in terms of drug absorption and elimination characteristics. | [
"Willa Potosnak",
"Cristian Challu",
"Kin G. Olivares",
"Artur Dubrawski"
] | 2023-09-22 18:43:41 | http://arxiv.org/abs/2309.13135v1 | http://arxiv.org/pdf/2309.13135v1 | 2309.13135v1 |
AntiBARTy Diffusion for Property Guided Antibody Design | Over the past decade, antibodies have steadily grown in therapeutic
importance thanks to their high specificity and low risk of adverse effects
compared to other drug modalities. While traditional antibody discovery is
primarily wet lab driven, the rapid improvement of ML-based generative modeling
has made in-silico approaches an increasingly viable route for discovery and
engineering. To this end, we train an antibody-specific language model,
AntiBARTy, based on BART (Bidirectional and Auto-Regressive Transformer) and
use its latent space to train a property-conditional diffusion model for guided
IgG de novo design. As a test case, we show that we can effectively generate
novel antibodies with improved in-silico solubility while maintaining antibody
validity and controlling sequence diversity. | [
"Jordan Venderley"
] | 2023-09-22 18:30:50 | http://arxiv.org/abs/2309.13129v1 | http://arxiv.org/pdf/2309.13129v1 | 2309.13129v1 |
Data is often loadable in short depth: Quantum circuits from tensor networks for finance, images, fluids, and proteins | Though there has been substantial progress in developing quantum algorithms
to study classical datasets, the cost of simply loading classical data is an
obstacle to quantum advantage. When the amplitude encoding is used, loading an
arbitrary classical vector requires up to exponential circuit depths with
respect to the number of qubits. Here, we address this ``input problem'' with
two contributions. First, we introduce a circuit compilation method based on
tensor network (TN) theory. Our method -- AMLET (Automatic Multi-layer Loader
Exploiting TNs) -- proceeds via careful construction of a specific TN topology
and can be tailored to arbitrary circuit depths. Second, we perform numerical
experiments on real-world classical data from four distinct areas: finance,
images, fluid mechanics, and proteins. To the best of our knowledge, this is
the broadest numerical analysis to date of loading classical data into a
quantum computer. Consistent with other recent work in this area, the required
circuit depths are often several orders of magnitude lower than the
exponentially-scaling general loading algorithm would require. Besides
introducing a more efficient loading algorithm, this work demonstrates that
many classical datasets are loadable in depths that are much shorter than
previously expected, which has positive implications for speeding up classical
workloads on quantum computers. | [
"Raghav Jumade",
"Nicolas PD Sawaya"
] | 2023-09-22 18:00:01 | http://arxiv.org/abs/2309.13108v1 | http://arxiv.org/pdf/2309.13108v1 | 2309.13108v1 |
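A small sketch tied to the abstract above, showing only the preprocessing assumed by amplitude encoding: an arbitrary classical vector is zero-padded to a power-of-two length and normalized, so that n qubits hold 2^n amplitudes. The tensor-network circuit compilation (AMLET) itself is not shown.

```python
import numpy as np

def amplitude_encode(vector):
    """Pad a classical vector to a power-of-two length and normalize it to unit
    norm, the state-vector form assumed by amplitude encoding."""
    n_qubits = int(np.ceil(np.log2(len(vector))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(vector)] = vector
    state = padded / np.linalg.norm(padded)
    return state, n_qubits

data = np.array([0.3, 1.2, -0.7, 0.5, 2.0])
state, n_qubits = amplitude_encode(data)
print(n_qubits, state.round(3))   # 3 qubits hold an 8-amplitude state
```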
MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation | We present MosaicFusion, a simple yet effective diffusion-based data
augmentation approach for large vocabulary instance segmentation. Our method is
training-free and does not rely on any label supervision. Two key designs
enable us to employ an off-the-shelf text-to-image diffusion model as a useful
dataset generator for object instances and mask annotations. First, we divide
an image canvas into several regions and perform a single round of diffusion
process to generate multiple instances simultaneously, conditioning on
different text prompts. Second, we obtain corresponding instance masks by
aggregating cross-attention maps associated with object prompts across layers
and diffusion time steps, followed by simple thresholding and edge-aware
refinement processing. Without bells and whistles, our MosaicFusion can produce
a significant amount of synthetic labeled data for both rare and novel
categories. Experimental results on the challenging LVIS long-tailed and
open-vocabulary benchmarks demonstrate that MosaicFusion can significantly
improve the performance of existing instance segmentation models, especially
for rare and novel categories. Code will be released at
https://github.com/Jiahao000/MosaicFusion. | [
"Jiahao Xie",
"Wei Li",
"Xiangtai Li",
"Ziwei Liu",
"Yew Soon Ong",
"Chen Change Loy"
] | 2023-09-22 17:59:42 | http://arxiv.org/abs/2309.13042v1 | http://arxiv.org/pdf/2309.13042v1 | 2309.13042v1 |
Robotic Offline RL from Internet Videos via Value-Function Pre-Training | Pre-training on Internet data has proven to be a key ingredient for broad
generalization in many modern ML systems. What would it take to enable such
capabilities in robotic reinforcement learning (RL)? Offline RL methods, which
learn from datasets of robot experience, offer one way to leverage prior data
into the robotic learning pipeline. However, these methods have a "type
mismatch" with video data (such as Ego4D), the largest prior datasets available
for robotics, since video offers observation-only experience without the action
or reward annotations needed for RL methods. In this paper, we develop a system
for leveraging large-scale human video datasets in robotic offline RL, based
entirely on learning value functions via temporal-difference learning. We show
that value learning on video datasets learns representations that are more
conducive to downstream robotic offline RL than other approaches for learning
from video data. Our system, called V-PTR, combines the benefits of
pre-training on video data with robotic offline RL approaches that train on
diverse robot data, resulting in value functions and policies for manipulation
tasks that perform better, act robustly, and generalize broadly. On several
manipulation tasks on a real WidowX robot, our framework produces policies that
greatly improve over prior methods. Our video and additional details can be
found at https://dibyaghosh.com/vptr/ | [
"Chethan Bhateja",
"Derek Guo",
"Dibya Ghosh",
"Anikait Singh",
"Manan Tomar",
"Quan Vuong",
"Yevgen Chebotar",
"Sergey Levine",
"Aviral Kumar"
] | 2023-09-22 17:59:14 | http://arxiv.org/abs/2309.13041v1 | http://arxiv.org/pdf/2309.13041v1 | 2309.13041v1 |
Memory-augmented conformer for improved end-to-end long-form ASR | Conformers have recently been proposed as a promising modelling approach for
automatic speech recognition (ASR), outperforming recurrent neural
network-based approaches and transformers. Nevertheless, in general, the
performance of these end-to-end models, especially attention-based models, is
particularly degraded in the case of long utterances. To address this
limitation, we propose adding a fully-differentiable memory-augmented neural
network between the encoder and decoder of a conformer. This external memory
can enrich the generalization for longer utterances since it allows the system
to store and retrieve more information recurrently. Notably, we explore the
neural Turing machine (NTM) that results in our proposed Conformer-NTM model
architecture for ASR. Experimental results using Librispeech train-clean-100
and train-960 sets show that the proposed system outperforms the baseline
conformer without memory for long utterances. | [
"Carlos Carvalho",
"Alberto Abad"
] | 2023-09-22 17:44:58 | http://arxiv.org/abs/2309.13029v1 | http://arxiv.org/pdf/2309.13029v1 | 2309.13029v1 |
OpportunityFinder: A Framework for Automated Causal Inference | We introduce OpportunityFinder, a code-less framework for performing a
variety of causal inference studies with panel data for non-expert users. In
its current state, OpportunityFinder only requires users to provide raw
observational data and a configuration file. A pipeline is then triggered that
inspects/processes data, chooses the suitable algorithm(s) to execute the
causal study. It returns the causal impact of the treatment on the configured
outcome, together with sensitivity and robustness results. Causal inference is
widely studied and used to estimate the downstream impact of individual's
interactions with products and features. It is common that these causal studies
are performed by scientists and/or economists periodically. Business
stakeholders are often bottlenecked on scientist or economist bandwidth to
conduct causal studies. We offer OpportunityFinder as a solution for commonly
performed causal studies with four key features: (1) easy to use for both
Business Analysts and Scientists, (2) abstraction of multiple algorithms under
a single I/O interface, (3) support for causal impact analysis under binary
treatment with panel data and (4) dynamic selection of algorithm based on scale
of data. | [
"Huy Nguyen",
"Prince Grover",
"Devashish Khatwani"
] | 2023-09-22 17:35:03 | http://arxiv.org/abs/2309.13103v1 | http://arxiv.org/pdf/2309.13103v1 | 2309.13103v1 |
Graph Neural Network for Stress Predictions in Stiffened Panels Under Uniform Loading | Machine learning (ML) and deep learning (DL) techniques have gained
significant attention as reduced-order models (ROMs) for computationally
expensive structural analysis methods, such as finite element analysis (FEA).
Graph neural network (GNN) is a particular type of neural network which
processes data that can be represented as graphs. This allows for efficient
representation of complex geometries that can change during conceptual design
of a structure or a product. In this study, we propose a novel graph embedding
technique for efficient representation of 3D stiffened panels by considering
separate plate domains as vertices. This approach is considered using Graph
Sampling and Aggregation (GraphSAGE) to predict stress distributions in
stiffened panels with varying geometries. A comparison with a
finite-element-vertex graph representation is conducted to demonstrate the
effectiveness of the proposed approach. A comprehensive parametric study is
performed to examine the effect of structural geometry on the prediction
performance. Our results demonstrate the immense potential of graph neural
networks with the proposed graph embedding method as robust reduced-order
models for 3D structures. | [
"Yuecheng Cai",
"Jasmin Jelovica"
] | 2023-09-22 17:34:20 | http://arxiv.org/abs/2309.13022v1 | http://arxiv.org/pdf/2309.13022v1 | 2309.13022v1 |
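A minimal sketch in the spirit of the abstract above: a two-layer GraphSAGE model (via torch_geometric, assumed installed) over a toy graph whose vertices are plate domains with simple geometric features. The feature choice and graph are illustrative assumptions, not the paper's embedding.

```python
import torch
from torch_geometric.nn import SAGEConv

class PanelSAGE(torch.nn.Module):
    """Two GraphSAGE layers mapping per-plate features to a per-plate stress estimate."""
    def __init__(self, in_dim=3, hidden=32):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, 1)

    def forward(self, x, edge_index):
        return self.conv2(torch.relu(self.conv1(x, edge_index)), edge_index)

# A stiffened panel as a graph: one vertex per plate domain (e.g. skin or stiffener web),
# features = [length, width, thickness]; edges join plates that share a boundary.
x = torch.tensor([[2.0, 1.0, 0.010],
                  [2.0, 0.1, 0.008],
                  [2.0, 0.1, 0.008]])
edge_index = torch.tensor([[0, 1, 0, 2],
                           [1, 0, 2, 0]])        # undirected: skin <-> each stiffener
print(PanelSAGE()(x, edge_index).shape)          # torch.Size([3, 1])
```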
A Hybrid Deep Learning-based Approach for Optimal Genotype by Environment Selection | Precise crop yield prediction is essential for improving agricultural
practices and ensuring crop resilience in varying climates. Integrating weather
data across the growing season, especially for different crop varieties, is
crucial for understanding their adaptability in the face of climate change. In
the MLCAS2021 Crop Yield Prediction Challenge, we utilized a dataset comprising
93,028 training records to forecast yields for 10,337 test records, covering
159 locations across 28 U.S. states and Canadian provinces over 13 years
(2003-2015). This dataset included details on 5,838 distinct genotypes and
daily weather data for a 214-day growing season, enabling comprehensive
analysis. As one of the winning teams, we developed two novel convolutional
neural network (CNN) architectures: the CNN-DNN model, combining CNN and
fully-connected networks, and the CNN-LSTM-DNN model, with an added LSTM layer
for weather variables. Leveraging the Generalized Ensemble Method (GEM), we
determined optimal model weights, resulting in superior performance compared to
baseline models. The GEM model achieved lower RMSE (5.55% to 39.88%), reduced
MAE (5.34% to 43.76%), and higher correlation coefficients (1.1% to 10.79%)
when evaluated on test data. We applied the CNN-DNN model to identify
top-performing genotypes for various locations and weather conditions, aiding
genotype selection based on weather variables. Our data-driven approach is
valuable for scenarios with limited testing years. Additionally, a feature
importance analysis using RMSE change highlighted the significance of location,
MG, year, and genotype, along with the importance of weather variables MDNI and
AP. | [
"Zahra Khalilzadeh",
"Motahareh Kashanian",
"Saeed Khaki",
"Lizhi Wang"
] | 2023-09-22 17:31:47 | http://arxiv.org/abs/2309.13021v1 | http://arxiv.org/pdf/2309.13021v1 | 2309.13021v1 |
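Related to the ensemble-weighting idea in the abstract above, this sketch scans the convex-combination weight of two base models' validation predictions and keeps the weight with the lowest RMSE. The predictions are synthetic stand-ins, and the actual GEM procedure may differ.

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def best_ensemble_weight(pred_a, pred_b, y_true, grid=101):
    """Scan w in [0, 1] for the blend w*pred_a + (1-w)*pred_b with lowest RMSE."""
    weights = np.linspace(0.0, 1.0, grid)
    scores = [rmse(y_true, w * pred_a + (1 - w) * pred_b) for w in weights]
    i = int(np.argmin(scores))
    return weights[i], scores[i]

# Synthetic stand-ins for two models' validation predictions.
rng = np.random.default_rng(0)
y = rng.normal(50, 10, 500)
pred_a = y + rng.normal(0, 4, 500)
pred_b = y + rng.normal(0, 6, 500)
print(best_ensemble_weight(pred_a, pred_b, y))
```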
Brain Age Revisited: Investigating the State vs. Trait Hypotheses of EEG-derived Brain-Age Dynamics with Deep Learning | The brain's biological age has been considered as a promising candidate for a
neurologically significant biomarker. However, recent results based on
longitudinal magnetic resonance imaging data have raised questions on its
interpretation. A central question is whether an increased biological age of
the brain is indicative of brain pathology and if changes in brain age
correlate with diagnosed pathology (state hypothesis). Alternatively, could the
discrepancy in brain age be a stable characteristic unique to each individual
(trait hypothesis)? To address this question, we present a comprehensive study
on brain aging based on clinical EEG, which is complementary to previous
MRI-based investigations. We apply a state-of-the-art Temporal Convolutional
Network (TCN) to the task of age regression. We train on recordings of the
Temple University Hospital EEG Corpus (TUEG) explicitly labeled as
non-pathological and evaluate on recordings of subjects with non-pathological
as well as pathological recordings, both with examinations at a single point in
time and repeated examinations over time. Therefore, we created four novel
subsets of TUEG that include subjects with multiple recordings: I) all labeled
non-pathological; II) all labeled pathological; III) at least one recording
labeled non-pathological followed by at least one recording labeled
pathological; IV) similar to III) but with opposing transition (first
pathological then non-pathological). The results show that our TCN reaches
state-of-the-art performance in age decoding with a mean absolute error of 6.6
years. Our extensive analyses demonstrate that the model significantly
underestimates the age of non-pathological and pathological subjects (-1 and -5
years, paired t-test, p <= 0.18 and p <= 0.0066). Furthermore, the brain age
gap biomarker is not indicative of pathological EEG. | [
"Lukas AW Gemein",
"Robin T Schirrmeister",
"Joschka Boedecker",
"Tonio Ball"
] | 2023-09-22 17:29:37 | http://arxiv.org/abs/2310.07029v1 | http://arxiv.org/pdf/2310.07029v1 | 2310.07029v1 |
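To make the reported paired comparison concrete, the sketch below runs a paired t-test on synthetic predicted versus chronological ages. All numbers (offset, noise, sample size) are invented and only illustrate the form of the analysis, not the paper's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic chronological ages and model predictions for the same subjects;
# the systematic -3-year offset stands in for an age-decoding bias.
chronological = rng.uniform(20, 80, 100)
predicted = chronological - 3.0 + rng.normal(0, 6, 100)

gap = predicted - chronological                    # the "brain age gap"
t_stat, p_value = stats.ttest_rel(predicted, chronological)
print(f"mean gap = {gap.mean():.2f} years, t = {t_stat:.2f}, p = {p_value:.3g}")
```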
Understanding Deep Gradient Leakage via Inversion Influence Functions | Deep Gradient Leakage (DGL) is a highly effective attack that recovers
private training images from gradient vectors. This attack casts significant
privacy challenges on distributed learning from clients with sensitive data,
where clients are required to share gradients. Defending against such attacks
requires but lacks an understanding of when and how privacy leakage happens,
mostly because of the black-box nature of deep networks. In this paper, we
propose a novel Inversion Influence Function (I$^2$F) that establishes a
closed-form connection between the recovered images and the private gradients
by implicitly solving the DGL problem. Compared to directly solving DGL, I$^2$F
is scalable for analyzing deep networks, requiring only oracle access to
gradients and Jacobian-vector products. We empirically demonstrate that I$^2$F
effectively approximates DGL across different model architectures,
datasets, attack implementations, and noise-based defenses. With this novel
tool, we provide insights into effective gradient perturbation directions, the
unfairness of privacy protection, and privacy-preferred model initialization.
Our codes are provided in
https://github.com/illidanlab/inversion-influence-function. | [
"Haobo Zhang",
"Junyuan Hong",
"Yuyang Deng",
"Mehrdad Mahdavi",
"Jiayu Zhou"
] | 2023-09-22 17:26:24 | http://arxiv.org/abs/2309.13016v1 | http://arxiv.org/pdf/2309.13016v1 | 2309.13016v1 |
Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design | Sparse training is one of the promising techniques to reduce the
computational cost of DNNs while retaining high accuracy. In particular, N:M
fine-grained structured sparsity, where only N out of consecutive M elements
can be nonzero, has attracted attention due to its hardware-friendly pattern
and capability of achieving a high sparse ratio. However, the potential to
accelerate N:M sparse DNN training has not been fully exploited, and there is a
lack of efficient hardware supporting N:M sparse training. To tackle these
challenges, this paper presents a computation-efficient training scheme for N:M
sparse DNNs using algorithm, architecture, and dataflow co-design. At the
algorithm level, a bidirectional weight pruning method, dubbed BDWP, is
proposed to leverage the N:M sparsity of weights during both forward and
backward passes of DNN training, which can significantly reduce the
computational cost while maintaining model accuracy. At the architecture level,
a sparse accelerator for DNN training, namely SAT, is developed to neatly
support both the regular dense operations and the computation-efficient N:M
sparse operations. At the dataflow level, multiple optimization methods,
including interleave mapping, pre-generation of N:M sparse weights, and offline
scheduling, are proposed to boost the computational efficiency of SAT. Finally,
the effectiveness of our training scheme is evaluated on a Xilinx VCU1525 FPGA
card using various DNN models and datasets. Experimental results show the SAT
accelerator with the BDWP sparse training method under 2:8 sparse ratio
achieves an average speedup of 1.75x over that with the dense training,
accompanied by a negligible accuracy loss of 0.56% on average. Furthermore, our
proposed training scheme significantly improves the training throughput by
2.97~25.22x and the energy efficiency by 1.36~3.58x over prior FPGA-based
accelerators. | [
"Chao Fang",
"Wei Sun",
"Aojun Zhou",
"Zhongfeng Wang"
] | 2023-09-22 17:26:19 | http://arxiv.org/abs/2309.13015v1 | http://arxiv.org/pdf/2309.13015v1 | 2309.13015v1 |
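The following sketch shows only the basic N:M structured-sparsity mask mentioned above (e.g., 2:8 keeps the two largest-magnitude weights in every group of eight consecutive elements); it is not the paper's bidirectional BDWP pruning or its hardware dataflow.

```python
import numpy as np

def n_m_sparsity_mask(weights, n=2, m=8):
    """Return a 0/1 mask keeping the n largest-magnitude entries in every
    group of m consecutive weights along the last axis."""
    w = np.asarray(weights, dtype=float)
    assert w.shape[-1] % m == 0, "last dimension must be divisible by m"
    groups = w.reshape(-1, m)
    mask = np.zeros_like(groups)
    top = np.argsort(np.abs(groups), axis=1)[:, -n:]   # indices of the n largest |w| per group
    np.put_along_axis(mask, top, 1.0, axis=1)
    return mask.reshape(w.shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))
mask = n_m_sparsity_mask(W, n=2, m=8)
print(mask.sum(axis=1))   # 4 nonzeros per row: 2 kept in each of the 2 groups of 8
```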
Importance of Smoothness Induced by Optimizers in FL4ASR: Towards Understanding Federated Learning for End-to-End ASR | In this paper, we start by training End-to-End Automatic Speech Recognition
(ASR) models using Federated Learning (FL) and examining the fundamental
considerations that can be pivotal in minimizing the performance gap in terms
of word error rate between models trained using FL versus their centralized
counterpart. Specifically, we study the effect of (i) adaptive optimizers, (ii)
loss characteristics via altering Connectionist Temporal Classification (CTC)
weight, (iii) model initialization through seed start, (iv) carrying over
modeling setup from experiences in centralized training to FL, e.g., pre-layer
or post-layer normalization, and (v) FL-specific hyperparameters, such as
number of local epochs, client sampling size, and learning rate scheduler,
specifically for ASR under heterogeneous data distribution. We shed light on
how some optimizers work better than others via inducing smoothness. We also
summarize the applicability of algorithms, trends, and propose best practices
from prior works in FL (in general) toward End-to-End ASR models. | [
"Sheikh Shams Azam",
"Tatiana Likhomanenko",
"Martin Pelikan",
"Jan \"Honza\" Silovsky"
] | 2023-09-22 17:23:01 | http://arxiv.org/abs/2309.13102v1 | http://arxiv.org/pdf/2309.13102v1 | 2309.13102v1 |
ReConcile: Round-Table Conference Improves Reasoning via Consensus among Diverse LLMs | Large Language Models (LLMs) still struggle with complex reasoning tasks.
Motivated by the society of minds (Minsky, 1988), we propose ReConcile, a
multi-model multi-agent framework designed as a round table conference among
diverse LLM agents to foster diverse thoughts and discussion for improved
consensus. ReConcile enhances the reasoning capabilities of LLMs by holding
multiple rounds of discussion, learning to convince other agents to improve
their answers, and employing a confidence-weighted voting mechanism. In each
round, ReConcile initiates discussion between agents via a 'discussion prompt'
that consists of (a) grouped answers and explanations generated by each agent
in the previous round, (b) their uncertainties, and (c) demonstrations of
answer-rectifying human explanations, used for convincing other agents. This
discussion prompt enables each agent to revise their responses in light of
insights from other agents. Once a consensus is reached and the discussion
ends, ReConcile determines the final answer by leveraging the confidence of
each agent in a weighted voting scheme. We implement ReConcile with ChatGPT,
Bard, and Claude2 as the three agents. Our experimental results on various
benchmarks demonstrate that ReConcile significantly enhances the reasoning
performance of the agents (both individually and as a team), surpassing prior
single-agent and multi-agent baselines by 7.7% and also outperforming GPT-4 on
some of these datasets. We also experiment with GPT-4 itself as one of the
agents in ReConcile and demonstrate that its initial performance also improves
by absolute 10.0% through discussion and feedback from other agents. Finally,
we also analyze the accuracy after every round and observe that ReConcile
achieves better and faster consensus between agents, compared to a multi-agent
debate baseline. Our code is available at: https://github.com/dinobby/ReConcile | [
"Justin Chih-Yao Chen",
"Swarnadeep Saha",
"Mohit Bansal"
] | 2023-09-22 17:12:45 | http://arxiv.org/abs/2309.13007v1 | http://arxiv.org/pdf/2309.13007v1 | 2309.13007v1 |
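A toy sketch of the confidence-weighted voting step described above; the agents' answers and confidences are hypothetical, and the discussion-prompt machinery is omitted.

```python
from collections import defaultdict

def confidence_weighted_vote(responses):
    """responses: list of (answer, confidence) pairs, one per agent.
    Returns the answer with the largest total confidence mass."""
    totals = defaultdict(float)
    for answer, confidence in responses:
        totals[answer] += confidence
    return max(totals, key=totals.get)

# Hypothetical final-round answers from three agents.
round_answers = [("B", 0.9), ("A", 0.6), ("B", 0.4)]
print(confidence_weighted_vote(round_answers))   # -> "B"
```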
Pursuing Counterfactual Fairness via Sequential Autoencoder Across Domains | Recognizing the prevalence of domain shift as a common challenge in machine
learning, various domain generalization (DG) techniques have been developed to
enhance the performance of machine learning systems when dealing with
out-of-distribution (OOD) data. Furthermore, in real-world scenarios, data
distributions can gradually change across a sequence of sequential domains.
While current methodologies primarily focus on improving model effectiveness
within these new domains, they often overlook fairness issues throughout the
learning process. In response, we introduce an innovative framework called
Counterfactual Fairness-Aware Domain Generalization with Sequential Autoencoder
(CDSAE). This approach effectively separates environmental information and
sensitive attributes from the embedded representation of classification
features. This concurrent separation not only greatly improves model
generalization across diverse and unfamiliar domains but also effectively
addresses challenges related to unfair classification. Our strategy is rooted
in the principles of causal inference to tackle these dual issues. To examine
the intricate relationship between semantic information, sensitive attributes,
and environmental cues, we systematically categorize exogenous uncertainty
factors into four latent variables: 1) semantic information influenced by
sensitive attributes, 2) semantic information unaffected by sensitive
attributes, 3) environmental cues influenced by sensitive attributes, and 4)
environmental cues unaffected by sensitive attributes. By incorporating
fairness regularization, we exclusively employ semantic information for
classification purposes. Empirical validation on synthetic and real-world
datasets substantiates the effectiveness of our approach, demonstrating
improved accuracy levels while ensuring the preservation of fairness in the
evolving landscape of continuous domains. | [
"Yujie Lin",
"Chen Zhao",
"Minglai Shao",
"Baoluo Meng",
"Xujiang Zhao",
"Haifeng Chen"
] | 2023-09-22 17:08:20 | http://arxiv.org/abs/2309.13005v1 | http://arxiv.org/pdf/2309.13005v1 | 2309.13005v1 |
Expressive variational quantum circuits provide inherent privacy in federated learning | Federated learning has emerged as a viable distributed solution to train
machine learning models without the actual need to share data with the central
aggregator. However, standard neural network-based federated learning models
have been shown to be susceptible to data leakage from the gradients shared
with the server. In this work, we introduce federated learning with variational
quantum circuit model built using expressive encoding maps coupled with
overparameterized ansätze. We show that expressive maps lead to inherent
privacy against gradient inversion attacks, while overparameterization ensures
model trainability. Our privacy framework centers on the complexity of solving
the system of high-degree multivariate Chebyshev polynomials generated by the
gradients of quantum circuit. We present compelling arguments highlighting the
inherent difficulty in solving these equations, both in exact and approximate
scenarios. Additionally, we delve into machine learning-based attack strategies
and establish a direct connection between overparameterization in the original
federated learning model and underparameterization in the attack model.
Furthermore, we provide numerical scaling arguments showcasing that
underparameterization of the expressive map in the attack model leads to the
loss landscape being swamped with exponentially many spurious local minima
points, thus making it extremely hard to realize a successful attack. This
provides a strong claim, for the first time, that the nature of quantum machine
learning models inherently helps prevent data leakage in federated learning. | [
"Niraj Kumar",
"Jamie Heredge",
"Changhao Li",
"Shaltiel Eloul",
"Shree Hari Sureshbabu",
"Marco Pistoia"
] | 2023-09-22 17:04:50 | http://arxiv.org/abs/2309.13002v2 | http://arxiv.org/pdf/2309.13002v2 | 2309.13002v2 |
Point Cloud Network: An Order of Magnitude Improvement in Linear Layer Parameter Count | This paper introduces the Point Cloud Network (PCN) architecture, a novel
implementation of linear layers in deep learning networks, and provides
empirical evidence to advocate for its preference over the Multilayer
Perceptron (MLP) in linear layers. We train several models, including the
original AlexNet, using both MLP and PCN architectures for direct comparison of
linear layers (Krizhevsky et al., 2012). The key results collected are model
parameter count and top-1 test accuracy over the CIFAR-10 and CIFAR-100
datasets (Krizhevsky, 2009). AlexNet-PCN16, our PCN equivalent to AlexNet,
achieves comparable efficacy (test accuracy) to the original architecture with
a 99.5% reduction of parameters in its linear layers. All training is done on
cloud RTX 4090 GPUs, leveraging pytorch for model construction and training.
Code is provided for anyone to reproduce the trials from this paper. | [
"Charles Hetterich"
] | 2023-09-22 16:56:40 | http://arxiv.org/abs/2309.12996v1 | http://arxiv.org/pdf/2309.12996v1 | 2309.12996v1 |
Deep learning probability flows and entropy production rates in active matter | Active matter systems, from self-propelled colloids to motile bacteria, are
characterized by the conversion of free energy into useful work at the
microscopic scale. These systems generically involve physics beyond the reach
of equilibrium statistical mechanics, and a persistent challenge has been to
understand the nature of their nonequilibrium states. The entropy production
rate and the magnitude of the steady-state probability current provide
quantitative ways to do so by measuring the breakdown of time-reversal symmetry
and the strength of nonequilibrium transport of measure. Yet, their efficient
computation has remained elusive, as they depend on the system's unknown and
high-dimensional probability density. Here, building upon recent advances in
generative modeling, we develop a deep learning framework that estimates the
score of this density. We show that the score, together with the microscopic
equations of motion, gives direct access to the entropy production rate, the
probability current, and their decomposition into local contributions from
individual particles, spatial regions, and degrees of freedom. To represent the
score, we introduce a novel, spatially-local transformer-based network
architecture that learns high-order interactions between particles while
respecting their underlying permutation symmetry. We demonstrate the broad
utility and scalability of the method by applying it to several
high-dimensional systems of interacting active particles undergoing
motility-induced phase separation (MIPS). We show that a single instance of our
network trained on a system of 4096 particles at one packing fraction can
generalize to other regions of the phase diagram, including systems with as
many as 32768 particles. We use this observation to quantify the spatial
structure of the departure from equilibrium in MIPS as a function of the number
of particles and the packing fraction. | [
"Nicholas M. Boffi",
"Eric Vanden-Eijnden"
] | 2023-09-22 16:44:18 | http://arxiv.org/abs/2309.12991v1 | http://arxiv.org/pdf/2309.12991v1 | 2309.12991v1 |
Higher-order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | Despite the recent successes of vanilla Graph Neural Networks (GNNs) on many
tasks, their foundation on pairwise interaction networks inherently limits
their capacity to discern latent higher-order interactions in complex systems.
To bridge this capability gap, we propose a novel approach exploiting the rich
mathematical theory of simplicial complexes (SCs) - a robust tool for modeling
higher-order interactions. Current SC-based GNNs are burdened by high
complexity and rigidity, and quantifying higher-order interaction strengths
remains challenging. Innovatively, we present a higher-order Flower-Petals (FP)
model, incorporating FP Laplacians into SCs. Further, we introduce a
Higher-order Graph Convolutional Network (HiGCN) grounded in FP Laplacians,
capable of discerning intrinsic features across varying topological scales. By
employing learnable graph filters, a parameter group within each FP Laplacian
domain, we can identify diverse patterns where the filters' weights serve as a
quantifiable measure of higher-order interaction strengths. The theoretical
underpinnings of HiGCN's advanced expressiveness are rigorously demonstrated.
Additionally, our empirical investigations reveal that the proposed model
accomplishes state-of-the-art (SOTA) performance on a range of graph tasks and
provides a scalable and flexible solution to explore higher-order interactions
in graphs. | [
"Yiming Huang",
"Yujie Zeng",
"Qiang Wu",
"Linyuan Lü"
] | 2023-09-22 16:11:17 | http://arxiv.org/abs/2309.12971v1 | http://arxiv.org/pdf/2309.12971v1 | 2309.12971v1 |
On Separate Normalization in Self-supervised Transformers | Self-supervised training methods for transformers have demonstrated
remarkable performance across various domains. Previous transformer-based
models, such as masked autoencoders (MAE), typically utilize a single
normalization layer for both the [CLS] symbol and the tokens. We propose in
this paper a simple modification that employs separate normalization layers for
the tokens and the [CLS] symbol to better capture their distinct
characteristics and enhance downstream task performance. Our method aims to
alleviate the potential negative effects of using the same normalization
statistics for both token types, which may not be optimally aligned with their
individual roles. We empirically show that by utilizing a separate
normalization layer, the [CLS] embeddings can better encode the global
contextual information and are distributed more uniformly in its anisotropic
space. When replacing the conventional normalization layer with the two
separate layers, we observe an average 2.7% performance improvement over the
image, natural language, and graph domains. | [
"Xiaohui Chen",
"Yinkai Wang",
"Yuanqi Du",
"Soha Hassoun",
"Li-Ping Liu"
] | 2023-09-22 15:30:53 | http://arxiv.org/abs/2309.12931v1 | http://arxiv.org/pdf/2309.12931v1 | 2309.12931v1 |
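A minimal PyTorch sketch of the modification described above: independent LayerNorms for the [CLS] token and the remaining tokens. The sequence length and embedding dimension are assumptions, and this is not the authors' full model.

```python
import torch
import torch.nn as nn

class SeparateNorm(nn.Module):
    """Normalize the [CLS] token and the remaining tokens with independent LayerNorms."""

    def __init__(self, dim):
        super().__init__()
        self.cls_norm = nn.LayerNorm(dim)
        self.token_norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, 1 + num_tokens, dim), [CLS] first
        cls_tok, tokens = x[:, :1], x[:, 1:]
        return torch.cat([self.cls_norm(cls_tok), self.token_norm(tokens)], dim=1)

x = torch.randn(2, 197, 768)                   # e.g. a ViT sequence: 1 [CLS] + 196 patches
print(SeparateNorm(768)(x).shape)              # torch.Size([2, 197, 768])
```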
BayesDLL: Bayesian Deep Learning Library | We release a new Bayesian neural network library for PyTorch for large-scale
deep networks. Our library implements mainstream approximate Bayesian inference
algorithms: variational inference, MC-dropout, stochastic-gradient MCMC, and
Laplace approximation. The main differences from other existing Bayesian neural
network libraries are as follows: 1) Our library can deal with very large-scale
deep networks including Vision Transformers (ViTs). 2) We need virtually zero
code modifications for users (e.g., the backbone network definition codes do
not need to be modified at all). 3) Our library also allows the pre-trained
model weights to serve as a prior mean, which is very useful for performing
Bayesian inference with the large-scale foundation models like ViTs that are
hard to optimise from scratch with the downstream data alone. Our code is
publicly available at: \url{https://github.com/SamsungLabs/BayesDLL}\footnote{A
mirror repository is also available at:
\url{https://github.com/minyoungkim21/BayesDLL}.}. | [
"Minyoung Kim",
"Timothy Hospedales"
] | 2023-09-22 15:27:54 | http://arxiv.org/abs/2309.12928v1 | http://arxiv.org/pdf/2309.12928v1 | 2309.12928v1 |
Topological Data Mapping of Online Hate Speech, Misinformation, and General Mental Health: A Large Language Model Based Study | The advent of social media has led to an increased concern over its potential
to propagate hate speech and misinformation, which, in addition to contributing
to prejudice and discrimination, has been suspected of playing a role in
increasing social violence and crimes in the United States. While literature
has shown the existence of an association between posting hate speech and
misinformation online and certain personality traits of posters, the general
relationship and relevance of online hate speech/misinformation in the context
of overall psychological wellbeing of posters remain elusive. One difficulty
lies in the lack of adequate data analytics tools capable of adequately
analyzing the massive amount of social media posts to uncover the underlying
hidden links. Recent progresses in machine learning and large language models
such as ChatGPT have made such an analysis possible. In this study, we
collected thousands of posts from carefully selected communities on the social
media site Reddit. We then utilized OpenAI's GPT3 to derive embeddings of these
posts, which are high-dimensional real-numbered vectors that presumably
represent the hidden semantics of posts. We then performed various
machine-learning classifications based on these embeddings in order to
understand the role of hate speech/misinformation in various communities.
Finally, a topological data analysis (TDA) was applied to the embeddings to
obtain a visual map connecting online hate speech, misinformation, various
psychiatric disorders, and general mental health. | [
"Andrew Alexander",
"Hongbin Wang"
] | 2023-09-22 15:10:36 | http://arxiv.org/abs/2309.13098v1 | http://arxiv.org/pdf/2309.13098v1 | 2309.13098v1 |
A matter of attitude: Focusing on positive and active gradients to boost saliency maps | Saliency maps have become one of the most widely used interpretability
techniques for convolutional neural networks (CNN) due to their simplicity and
the quality of the insights they provide. However, there are still some doubts
about whether these insights are a trustworthy representation of what CNNs use
to come up with their predictions. This paper explores how rescuing the sign of
the gradients from the saliency map can lead to a deeper understanding of
multi-class classification problems. Using both pretrained and trained from
scratch CNNs we unveil that considering the sign and the effect not only of the
correct class, but also the influence of the other classes, allows to better
identify the pixels of the image that the network is really focusing on.
Furthermore, how occluding or altering those pixels is expected to affect the
outcome also becomes clearer. | [
"Oscar Llorente",
"Jaime Boal",
"Eugenio F. Sánchez-Úbeda"
] | 2023-09-22 15:00:00 | http://arxiv.org/abs/2309.12913v1 | http://arxiv.org/pdf/2309.12913v1 | 2309.12913v1 |
PopBERT. Detecting populism and its host ideologies in the German Bundestag | The rise of populism concerns many political scientists and practitioners,
yet the detection of its underlying language remains fragmentary. This paper
aims to provide a reliable, valid, and scalable approach to measure populist
stances. For that purpose, we created an annotated dataset based on
parliamentary speeches of the German Bundestag (2013 to 2021). Following the
ideational definition of populism, we label moralizing references to the
virtuous people or the corrupt elite as core dimensions of populist language.
To identify, in addition, how the thin ideology of populism is thickened, we
annotate how populist statements are attached to left-wing or right-wing host
ideologies. We then train a transformer-based model (PopBERT) as a multilabel
classifier to detect and quantify each dimension. A battery of validation
checks reveals that the model has a strong predictive accuracy, provides high
qualitative face validity, matches party rankings of expert surveys, and
detects out-of-sample text snippets correctly. PopBERT enables dynamic analyses
of how German-speaking politicians and parties use populist language as a
strategic device. Furthermore, the annotator-level data may also be applied in
cross-domain applications or to develop related classifiers. | [
"L. Erhard",
"S. Hanke",
"U. Remer",
"A. Falenska",
"R. Heiberger"
] | 2023-09-22 14:48:02 | http://arxiv.org/abs/2309.14355v1 | http://arxiv.org/pdf/2309.14355v1 | 2309.14355v1 |
FairComp: Workshop on Fairness and Robustness in Machine Learning for Ubiquitous Computing | How can we ensure that Ubiquitous Computing (UbiComp) research outcomes are
both ethical and fair? While fairness in machine learning (ML) has gained
traction in recent years, fairness in UbiComp remains unexplored. This workshop
aims to discuss fairness in UbiComp research and its social, technical, and
legal implications. From a social perspective, we will examine the relationship
between fairness and UbiComp research and identify pathways to ensure that
ubiquitous technologies do not cause harm or infringe on individual rights.
From a technical perspective, we will initiate a discussion on data practices
to develop bias mitigation approaches tailored to UbiComp research. From a
legal perspective, we will examine how new policies shape our community's work
and future research. We aim to foster a vibrant community centered around the
topic of responsible UbiComp, while also charting a clear path for future
research endeavours in this field. | [
"Sofia Yfantidou",
"Dimitris Spathis",
"Marios Constantinides",
"Tong Xia",
"Niels van Berkel"
] | 2023-09-22 14:04:51 | http://arxiv.org/abs/2309.12877v1 | http://arxiv.org/pdf/2309.12877v1 | 2309.12877v1 |
AnglE-optimized Text Embeddings | High-quality text embedding is pivotal in improving semantic textual
similarity (STS) tasks, which are crucial components in Large Language Model
(LLM) applications. However, a common challenge existing text embedding models
face is the problem of vanishing gradients, primarily due to their reliance on
the cosine function in the optimization objective, which has saturation zones.
To address this issue, this paper proposes a novel angle-optimized text
embedding model called AnglE. The core idea of AnglE is to introduce angle
optimization in a complex space. This novel approach effectively mitigates the
adverse effects of the saturation zone in the cosine function, which can impede
gradient flow and hinder optimization. To set up a comprehensive STS
evaluation, we experimented on existing short-text STS datasets and a newly
collected long-text STS dataset from GitHub Issues. Furthermore, we examine
domain-specific STS scenarios with limited labeled data and explore how AnglE
works with LLM-annotated data. Extensive experiments were conducted on various
tasks including short-text STS, long-text STS, and domain-specific STS tasks.
The results show that AnglE outperforms the state-of-the-art (SOTA) STS models
that ignore the cosine saturation zone. These findings demonstrate the ability
of AnglE to generate high-quality text embeddings and the usefulness of angle
optimization in STS. | [
"Xianming Li",
"Jing Li"
] | 2023-09-22 13:52:42 | http://arxiv.org/abs/2309.12871v4 | http://arxiv.org/pdf/2309.12871v4 | 2309.12871v4 |
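The sketch below is a small autograd check of the saturation issue the abstract targets: the gradient of cosine similarity with respect to one embedding shrinks as the pair approaches alignment. It illustrates the motivation only and is not AnglE's angle-based objective.

```python
import math
import torch
import torch.nn.functional as F

def cosine_grad_norm(angle, dim=64):
    """Gradient norm of cosine similarity w.r.t. one of two unit embeddings
    separated by `angle` radians."""
    a = torch.zeros(dim)
    a[0] = 1.0
    b = torch.zeros(dim)
    b[0], b[1] = math.cos(angle), math.sin(angle)
    a.requires_grad_(True)
    cos = F.cosine_similarity(a.unsqueeze(0), b.unsqueeze(0)).squeeze()
    cos.backward()
    return float(a.grad.norm())

for deg in (5, 30, 60, 90):
    print(f"{deg:>2} degrees apart -> grad norm {cosine_grad_norm(math.radians(deg)):.3f}")
# The gradient norm collapses toward 0 as the embeddings approach perfect alignment.
```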
Associative Transformer Is A Sparse Representation Learner | Emerging from the monolithic pairwise attention mechanism in conventional
Transformer models, there is a growing interest in leveraging sparse
interactions that align more closely with biological principles. Approaches
including the Set Transformer and the Perceiver employ cross-attention
consolidated with a latent space that forms an attention bottleneck with
limited capacity. Building upon recent neuroscience studies of Global Workspace
Theory and associative memory, we propose the Associative Transformer (AiT).
AiT induces low-rank explicit memory that serves as both priors to guide
bottleneck attention in the shared workspace and attractors within associative
memory of a Hopfield network. Through joint end-to-end training, these priors
naturally develop module specialization, each contributing a distinct inductive
bias to form attention bottlenecks. A bottleneck can foster competition among
inputs for writing information into the memory. We show that AiT is a sparse
representation learner, learning distinct priors through the bottlenecks that
are complexity-invariant to input quantities and dimensions. AiT demonstrates
its superiority over methods such as the Set Transformer, Vision Transformer,
and Coordination in various vision tasks. | [
"Yuwei Sun",
"Hideya Ochiai",
"Zhirong Wu",
"Stephen Lin",
"Ryota Kanai"
] | 2023-09-22 13:37:10 | http://arxiv.org/abs/2309.12862v1 | http://arxiv.org/pdf/2309.12862v1 | 2309.12862v1 |
Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration | The robotic handling of compliant and deformable food raw materials,
characterized by high biological variation, complex geometrical 3D shapes, and
mechanical structures and texture, is currently in huge demand in the ocean
space, agricultural, and food industries. Many tasks in these industries are
performed manually by human operators who, due to the laborious and tedious
nature of their tasks, exhibit high variability in execution, with variable
outcomes. The introduction of robotic automation for most complex processing
tasks has been challenging due to current robot learning policies. A more
consistent learning policy involving skilled operators is desired. In this
paper, we address the problem of robot learning when presented with
inconsistent demonstrations. To this end, we propose a robust learning policy
based on Learning from Demonstration (LfD) for robotic grasping of food
compliant objects. The approach merges RGB-D images and tactile data
in order to estimate the necessary pose of the gripper, gripper finger
configuration and forces exerted on the object in order to achieve effective
robot handling. During LfD training, the gripper pose, finger configurations
and tactile values for the fingers, as well as RGB-D images are saved. We
present an LfD learning policy that automatically removes inconsistent
demonstrations, and estimates the teacher's intended policy. The performance of
our approach is validated and demonstrated for fragile and compliant food
objects with complex 3D shapes. The proposed approach has a vast range of
potential applications in the aforementioned industry sectors. | [
"Ekrem Misimi",
"Alexander Olofsson",
"Aleksander Eilertsen",
"Elling Ruud Øye",
"John Reidar Mathiassen"
] | 2023-09-22 13:30:26 | http://arxiv.org/abs/2309.12856v1 | http://arxiv.org/pdf/2309.12856v1 | 2309.12856v1 |
Cross-Modal Translation and Alignment for Survival Analysis | With the rapid advances in high-throughput sequencing technologies, the focus
of survival analysis has shifted from examining clinical indicators to
incorporating genomic profiles with pathological images. However, existing
methods either directly adopt a straightforward fusion of pathological features
and genomic profiles for survival prediction, or take genomic profiles as
guidance to integrate the features of pathological images. The former would
overlook intrinsic cross-modal correlations. The latter would discard
pathological information irrelevant to gene expression. To address these
issues, we present a Cross-Modal Translation and Alignment (CMTA) framework to
explore the intrinsic cross-modal correlations and transfer potential
complementary information. Specifically, we construct two parallel
encoder-decoder structures for multi-modal data to integrate intra-modal
information and generate cross-modal representation. Taking the generated
cross-modal representation to enhance and recalibrate intra-modal
representation can significantly improve its discrimination for comprehensive
survival analysis. To explore the intrinsic cross-modal correlations, we further
design a cross-modal attention module as the information bridge between
different modalities to perform cross-modal interactions and transfer
complementary information. Our extensive experiments on five public TCGA
datasets demonstrate that our proposed framework outperforms the
state-of-the-art methods. | [
"Fengtao Zhou",
"Hao Chen"
] | 2023-09-22 13:29:14 | http://arxiv.org/abs/2309.12855v1 | http://arxiv.org/pdf/2309.12855v1 | 2309.12855v1 |
DeepOPF-U: A Unified Deep Neural Network to Solve AC Optimal Power Flow in Multiple Networks | The traditional machine learning models to solve optimal power flow (OPF) are
mostly trained for a given power network and lack generalizability to today's
power networks with varying topologies and growing plug-and-play distributed
energy resources (DERs). In this paper, we propose DeepOPF-U, which uses one
unified deep neural network (DNN) to solve alternating-current (AC) OPF
problems in different power networks, including a set of power networks that is
successively expanding. Specifically, we design elastic input and output layers
for the vectors of given loads and OPF solutions with varying lengths in
different networks. The proposed method, using a single unified DNN, can deal
with different and growing numbers of buses, lines, loads, and DERs.
Simulations of IEEE 57/118/300-bus test systems and a network growing from 73
to 118 buses verify the improved performance of DeepOPF-U compared to existing
DNN-based solution methods. | [
"Heng Liang",
"Changhong Zhao"
] | 2023-09-22 13:22:15 | http://arxiv.org/abs/2309.12849v1 | http://arxiv.org/pdf/2309.12849v1 | 2309.12849v1 |
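A small sketch of one way to realize elastic input and output layers, assuming load vectors are zero-padded to the size of the largest supported network before a shared trunk and that only the active entries of the output are read out; this padding scheme is an assumption for illustration, not necessarily the paper's exact mechanism.

```python
import torch
import torch.nn as nn

MAX_BUSES = 300  # largest network the unified DNN must support

class ElasticOPFNet(nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(2 * MAX_BUSES, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * MAX_BUSES),
        )

    def forward(self, loads, n_active):
        # loads: (B, n_active) active/reactive loads of the current network
        padded = torch.zeros(loads.size(0), 2 * MAX_BUSES)
        padded[:, : loads.size(1)] = loads          # zero-pad to the maximum size
        out = self.trunk(padded)
        return out[:, :n_active]                    # read out only this network's entries

net = ElasticOPFNet()
loads_57bus = torch.randn(4, 2 * 57)                # a 57-bus test case
solution = net(loads_57bus, n_active=2 * 57)
print(solution.shape)  # torch.Size([4, 114])
```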
Multiple Independent DE Optimizations to Tackle Uncertainty and Variability in Demand in Inventory Management | To determine the effectiveness of a metaheuristic Differential Evolution
optimization strategy for inventory management (IM) under stochastic demand,
this empirical study undertakes a thorough investigation.
The primary objective is to discern the most effective strategy for minimizing
inventory costs within the context of uncertain demand patterns. Inventory
costs refer to the expenses associated with holding and managing inventory
within a business. The approach combines a continuous review of IM policies
with a Monte Carlo Simulation (MCS). To find the optimal solution, the study
focuses on meta-heuristic approaches and compares multiple algorithms. The
outcomes reveal that the Differential Evolution (DE) algorithm outperforms its
counterparts in optimizing IM. To fine-tune the parameters, the study employs
the Latin Hypercube Sampling (LHS) statistical method. To determine the final
solution, the study combines the outcomes of
multiple independent DE optimizations, each initiated with different random
initial conditions. This approach introduces a novel and promising dimension to
the field of inventory management, offering potential enhancements in
performance and cost efficiency, especially in the presence of stochastic
demand patterns. | [
"Sarit Maitra",
"Sukanya Kundu",
"Vivek Mishra"
] | 2023-09-22 13:15:02 | http://arxiv.org/abs/2309.13095v2 | http://arxiv.org/pdf/2309.13095v2 | 2309.13095v2 |
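A minimal sketch of combining multiple independent Differential Evolution runs, each seeded differently, and keeping the best outcome, using scipy's differential_evolution on a stand-in cost function; the toy cost surface and the number of runs are placeholders for the study's simulation-based inventory cost.

```python
import numpy as np
from scipy.optimize import differential_evolution

def inventory_cost(x):
    """Stand-in for the MCS-evaluated inventory cost; x = (reorder_point, order_qty)."""
    s, q = x
    return (s - 40.0) ** 2 + 0.5 * (q - 120.0) ** 2 + 5.0  # toy convex surrogate

bounds = [(0, 100), (1, 300)]
runs = []
for seed in range(5):  # independent DE optimizations with different initial conditions
    res = differential_evolution(inventory_cost, bounds, seed=seed, maxiter=200, tol=1e-7)
    runs.append(res)

best = min(runs, key=lambda r: r.fun)  # combine the runs by keeping the best solution
print(best.x, best.fun)
```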
Reward Function Design for Crowd Simulation via Reinforcement Learning | Crowd simulation is important for video-game design, since it makes it
possible to populate virtual worlds with autonomous avatars that navigate in a human-like
manner. Reinforcement learning has shown great potential in simulating virtual
crowds, but the design of the reward function is critical to achieving
effective and efficient results. In this work, we explore the design of reward
functions for reinforcement learning-based crowd simulation. We provide
theoretical insights on the validity of certain reward functions according to
their analytical properties, and evaluate them empirically using a range of
scenarios, using the energy efficiency as the metric. Our experiments show that
directly minimizing the energy usage is a viable strategy as long as it is
paired with an appropriately scaled guiding potential, and enable us to study
the impact of the different reward components on the behavior of the simulated
crowd. Our findings can inform the development of new crowd simulation
techniques, and contribute to the wider study of human-like navigation. | [
"Ariel Kwiatkowski",
"Vicky Kalogeiton",
"Julien Pettré",
"Marie-Paule Cani"
] | 2023-09-22 12:55:30 | http://arxiv.org/abs/2309.12841v1 | http://arxiv.org/pdf/2309.12841v1 | 2309.12841v1 |
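A minimal sketch of the kind of reward discussed above: a negative per-step energy term paired with a scaled guiding potential toward the goal; the energy coefficients and the scaling constant are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def crowd_reward(pos, next_pos, goal, speed, dt=0.1, potential_scale=2.0):
    """Negative energy usage plus a goal-directed guiding potential."""
    e_s, e_w = 2.23, 1.26                   # illustrative locomotion-energy coefficients
    energy = (e_s + e_w * speed ** 2) * dt  # energy spent during this step
    potential_gain = np.linalg.norm(pos - goal) - np.linalg.norm(next_pos - goal)
    return -energy + potential_scale * potential_gain

r = crowd_reward(np.array([0.0, 0.0]), np.array([0.1, 0.0]),
                 goal=np.array([5.0, 0.0]), speed=1.0)
print(r)  # sign depends on whether scaled progress toward the goal outweighs the energy cost
```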
AxOCS: Scaling FPGA-based Approximate Operators using Configuration Supersampling | The rising usage of AI and ML-based processing across application domains has
exacerbated the need for low-cost ML implementation, specifically for
resource-constrained embedded systems. To this end, approximate computing, an
approach that explores the power, performance, area (PPA), and behavioral
accuracy (BEHAV) trade-offs, has emerged as a possible solution for
implementing embedded machine learning. Due to the predominance of MAC
operations in ML, designing platform-specific approximate arithmetic operators
forms one of the major research problems in approximate computing. Recently,
there has been a rising usage of AI/ML-based design space exploration
techniques for implementing approximate operators. However, most of these
approaches are limited to using ML-based surrogate functions for predicting the
PPA and BEHAV impact of a set of related design decisions. While this approach
leverages the regression capabilities of ML methods, it does not exploit the
more advanced approaches in ML. To this end, we propose AxOCS, a methodology
for designing approximate arithmetic operators through ML-based supersampling.
Specifically, we present a method to leverage the correlation of PPA and BEHAV
metrics across operators of varying bit-widths for generating larger bit-width
operators. The proposed approach involves traversing the relatively smaller
design space of smaller bit-width operators and employing its associated
Design-PPA-BEHAV relationship to generate initial solutions for
metaheuristics-based optimization for larger operators. The experimental
evaluation of AxOCS for FPGA-optimized approximate operators shows that the
proposed approach significantly improves the quality (the resulting hypervolume
of the multi-objective optimization) of 8x8 signed approximate multipliers. | [
"Siva Satyendra Sahoo",
"Salim Ullah",
"Soumyo Bhattacharjee",
"Akash Kumar"
] | 2023-09-22 12:36:40 | http://arxiv.org/abs/2309.12830v1 | http://arxiv.org/pdf/2309.12830v1 | 2309.12830v1 |
Synthetic Boost: Leveraging Synthetic Data for Enhanced Vision-Language Segmentation in Echocardiography | Accurate segmentation is essential for echocardiography-based assessment of
cardiovascular diseases (CVDs). However, the variability among sonographers and
the inherent challenges of ultrasound images hinder precise segmentation. By
leveraging the joint representation of image and text modalities,
Vision-Language Segmentation Models (VLSMs) can incorporate rich contextual
information, potentially aiding in accurate and explainable segmentation.
However, the lack of readily available data in echocardiography hampers the
training of VLSMs. In this study, we explore using synthetic datasets from
Semantic Diffusion Models (SDMs) to enhance VLSMs for echocardiography
segmentation. We evaluate results for two popular VLSMs (CLIPSeg and CRIS)
using seven different kinds of language prompts derived from several
attributes, automatically extracted from echocardiography images, segmentation
masks, and their metadata. Our results show improved metrics and faster
convergence when pretraining VLSMs on SDM-generated synthetic images before
finetuning on real images. The code, configs, and prompts are available at
https://github.com/naamiinepal/synthetic-boost. | [
"Rabin Adhikari",
"Manish Dhakal",
"Safal Thapaliya",
"Kanchan Poudel",
"Prasiddha Bhandari",
"Bishesh Khanal"
] | 2023-09-22 12:36:30 | http://arxiv.org/abs/2309.12829v1 | http://arxiv.org/pdf/2309.12829v1 | 2309.12829v1 |
Doubly Robust Proximal Causal Learning for Continuous Treatments | Proximal causal learning is a promising framework for identifying the causal
effect under the existence of unmeasured confounders. Within this framework,
the doubly robust (DR) estimator was derived and has shown its effectiveness in
estimation, especially when the model assumption is violated. However, the
current form of the DR estimator is restricted to binary treatments, while the
treatment can be continuous in many real-world applications. The primary
obstacle to continuous treatments resides in the delta function present in the
original DR estimator, making it infeasible in causal effect estimation and
introducing a heavy computational burden in nuisance function estimation. To
address these challenges, we propose a kernel-based DR estimator that can well
handle continuous treatments. Equipped with its smoothness, we show that its
oracle form is a consistent approximation of the influence function. Further,
we propose a new approach to efficiently solve the nuisance functions. We then
provide a comprehensive convergence analysis in terms of the mean square error.
We demonstrate the utility of our estimator on synthetic datasets and
real-world applications. | [
"Yong Wu",
"Yanwei Fu",
"Shouyan Wang",
"Xinwei Sun"
] | 2023-09-22 12:18:53 | http://arxiv.org/abs/2309.12819v2 | http://arxiv.org/pdf/2309.12819v2 | 2309.12819v2 |
Improving Generalization in Game Agents with Data Augmentation in Imitation Learning | Imitation learning is an effective approach for training game-playing agents
and, consequently, for efficient game production. However, generalization - the
ability to perform well in related but unseen scenarios - is an essential
requirement that remains an unsolved challenge for game AI. Generalization is
difficult for imitation learning agents because it requires the algorithm to
take meaningful actions outside of the training distribution. In this paper we
propose a solution to this challenge. Inspired by the success of data
augmentation in supervised learning, we augment the training data so the
distribution of states and actions in the dataset better represents the real
state-action distribution. This study evaluates methods for combining and
applying data augmentations to observations, to improve generalization of
imitation learning agents. It also provides a performance benchmark of these
augmentations across several 3D environments. These results demonstrate that
data augmentation is a promising framework for improving generalization in
imitation learning agents. | [
"Derek Yadgaroff",
"Alessandro Sestini",
"Konrad Tollmar",
"Linus Gisslén"
] | 2023-09-22 12:08:53 | http://arxiv.org/abs/2309.12815v1 | http://arxiv.org/pdf/2309.12815v1 | 2309.12815v1 |
Deepfake audio as a data augmentation technique for training automatic speech to text transcription models | To train transcription models that produce robust results, a large and diverse
labeled dataset is required. Finding such data with the necessary
characteristics is a challenging task, especially for languages less popular
than English. Moreover, producing such data requires significant effort and
often money. Therefore, a strategy to mitigate this problem is the use of data
augmentation techniques. In this work, we propose a framework for
data augmentation based on deepfake audio. To validate the proposed framework,
experiments were conducted using existing deepfake and transcription models. A
voice cloner and a dataset of Indian-accented English speech were selected,
ensuring the presence of a single accent in the dataset. Subsequently, the
augmented data was used to train speech to text models in various scenarios. | [
"Alexandre R. Ferreira",
"Cláudio E. C. Campelo"
] | 2023-09-22 11:33:03 | http://arxiv.org/abs/2309.12802v1 | http://arxiv.org/pdf/2309.12802v1 | 2309.12802v1 |
An Intelligent Approach to Detecting Novel Fault Classes for Centrifugal Pumps Based on Deep CNNs and Unsupervised Methods | Despite the recent success in data-driven fault diagnosis of rotating
machines, there are still remaining challenges in this field. Among the issues
to be addressed is the lack of information about the variety of faults the system
may encounter in the field. In this paper, we assume partial knowledge of the
system faults and use the corresponding data to train a convolutional neural
network. A combination of t-SNE method and clustering techniques is then
employed to detect novel faults. Upon detection, the network is augmented using
the new data. Finally, a test setup is used to validate this two-stage
methodology on a centrifugal pump and experimental results show high accuracy
in detecting novel faults. | [
"Mahdi Abdollah Chalaki",
"Daniyal Maroufi",
"Mahdi Robati",
"Mohammad Javad Karimi",
"Ali Sadighi"
] | 2023-09-22 10:10:30 | http://arxiv.org/abs/2309.12765v1 | http://arxiv.org/pdf/2309.12765v1 | 2309.12765v1 |
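A minimal sketch of pairing t-SNE with a clustering step to flag potential novel-fault samples from CNN features, assuming penultimate-layer features for the known fault classes are available; the synthetic features, DBSCAN parameters, and the noise-based flagging rule are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# Stand-in CNN penultimate-layer features: known faults cluster tightly,
# while samples from an unseen fault fall elsewhere.
rng = np.random.default_rng(0)
known = rng.normal(0.0, 0.3, size=(200, 64))
novel = rng.normal(4.0, 0.3, size=(20, 64))
features = np.vstack([known, novel])

embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedded)

# Points labeled -1 (noise) or falling in small clusters are candidates for novel
# faults, which can then be labeled and used to augment the network.
candidates = np.where(labels == -1)[0]
print(len(candidates), "samples flagged as possible novel faults")
```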
Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where | While image data now enjoys the simple-but-effective self-supervised
learning scheme built upon masking and a self-reconstruction objective, thanks to
the introduction of the tokenization procedure and the vision transformer backbone,
convolutional neural networks, another important and widely adopted
architecture for image data, still find it difficult to leverage such a
straightforward and general masking operation to significantly benefit their
learning process, even though contrastive-learning techniques drive their
self-supervised learning. In this work, we aim to alleviate the burden of
including the masking operation into the contrastive-learning framework for
convolutional neural networks as an extra augmentation method. In addition to
the additive but unwanted edges (between masked and unmasked regions) as well
as other adverse effects caused by the masking operations for ConvNets, which
have been discussed by prior works, we particularly identify the potential
problem where for one view in a contrastive sample-pair the randomly-sampled
masking regions could be overly concentrated on important/salient objects thus
resulting in misleading contrastiveness to the other view. To this end, we
propose to explicitly take the saliency constraint into consideration in which
the masked regions are more evenly distributed among the foreground and
background for realizing the masking-based augmentation. Moreover, we introduce
hard negative samples by masking larger regions of salient patches in an input
image. Extensive experiments conducted on various datasets, contrastive
learning mechanisms, and downstream tasks well verify the efficacy as well as
the superior performance of our proposed method with respect to several
state-of-the-art baselines. | [
"Zhi-Yi Chin",
"Chieh-Ming Jiang",
"Ching-Chun Huang",
"Pin-Yu Chen",
"Wei-Chen Chiu"
] | 2023-09-22 09:58:38 | http://arxiv.org/abs/2309.12757v1 | http://arxiv.org/pdf/2309.12757v1 | 2309.12757v1 |
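A minimal sketch of the saliency constraint described above: sample the masked patches so that salient (foreground) and non-salient (background) regions are masked in roughly equal measure, given a precomputed saliency map; the patch grid, threshold, and 50/50 split are illustrative assumptions.

```python
import numpy as np

def saliency_balanced_mask(saliency, grid=14, mask_ratio=0.5, rng=None):
    """Return a (grid, grid) boolean mask whose masked patches are split
    evenly between salient and non-salient patches."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = saliency.shape
    patch = saliency.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    fg = np.argwhere(patch >= patch.mean())          # salient patches
    bg = np.argwhere(patch < patch.mean())           # background patches
    n_mask = int(mask_ratio * grid * grid)
    take_fg = min(n_mask // 2, len(fg))
    take_bg = min(n_mask - take_fg, len(bg))
    chosen = np.vstack([fg[rng.choice(len(fg), take_fg, replace=False)],
                        bg[rng.choice(len(bg), take_bg, replace=False)]])
    mask = np.zeros((grid, grid), dtype=bool)
    mask[chosen[:, 0], chosen[:, 1]] = True
    return mask

sal = np.random.rand(224, 224)                       # stand-in saliency map
m = saliency_balanced_mask(sal)
print(m.sum(), "of", m.size, "patches masked")
```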
Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks | The variety and complexity of relations in multimedia data lead to
Heterogeneous Information Networks (HINs). Capturing the semantics from such
networks requires approaches capable of utilizing the full richness of the
HINs. Existing methods for modeling HINs employ techniques originally designed
for graph neural networks, together with HIN decomposition analysis such as manually
predefined metapaths. In this paper, we introduce a novel prototype-enhanced
hypergraph learning approach for node classification in HINs. Using hypergraphs
instead of graphs, our method captures higher-order relationships among nodes
and extracts semantic information without relying on metapaths. Our method
leverages the power of prototypes to improve the robustness of the hypergraph
learning process and creates the potential to provide human-interpretable
insights into the underlying network structure. Extensive experiments on three
real-world HINs demonstrate the effectiveness of our method. | [
"Shuai Wang",
"Jiayi Shen",
"Athanasios Efthymiou",
"Stevan Rudinac",
"Monika Kackovic",
"Nachoem Wijnberg",
"Marcel Worring"
] | 2023-09-22 09:51:15 | http://arxiv.org/abs/2309.13092v1 | http://arxiv.org/pdf/2309.13092v1 | 2309.13092v1 |
Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation | Domain Adaptation (DA) is always challenged by the spurious correlation
between domain-invariant features (e.g., class identity) and domain-specific
features (e.g., environment) that does not generalize to the target domain.
Unfortunately, even when enriched with additional unsupervised target domains,
existing Unsupervised DA (UDA) methods still suffer from it. This is because
the source domain supervision only considers the target domain samples as
auxiliary data (e.g., by pseudo-labeling), yet the inherent distribution in the
target domain, where the valuable de-correlation clues hide, is
disregarded. We propose to make the U in UDA matter by giving equal status to
the two domains. Specifically, we learn an invariant classifier whose
prediction is simultaneously consistent with the labels in the source domain
and clusters in the target domain, hence the spurious correlation inconsistent
in the target domain is removed. We dub our approach "Invariant CONsistency
learning" (ICON). Extensive experiments show that ICON achieves the
state-of-the-art performance on the classic UDA benchmarks: Office-Home and
VisDA-2017, and outperforms all the conventional methods on the challenging
WILDS 2.0 benchmark. Code is available at https://github.com/yue-zhongqi/ICON. | [
"Zhongqi Yue",
"Hanwang Zhang",
"Qianru Sun"
] | 2023-09-22 09:43:32 | http://arxiv.org/abs/2309.12742v1 | http://arxiv.org/pdf/2309.12742v1 | 2309.12742v1 |
Optimal Dynamic Fees for Blockchain Resources | We develop a general and practical framework to address the problem of the
optimal design of dynamic fee mechanisms for multiple blockchain resources. Our
framework allows us to compute policies that optimally trade off adjusting
resource prices to handle persistent demand shifts against being robust to local
noise in the observed block demand. In the general case with more than one
resource, our optimal policies correctly handle cross-effects (complementarity
and substitutability) in resource demands. We also show how these cross-effects
can be used to inform resource design, i.e. combining resources into bundles
that have low demand-side cross-effects can yield simpler and more efficient
price-update rules. Our framework is also practical, we demonstrate how it can
be used to refine or inform the design of heuristic fee update rules such as
EIP-1559 or EIP-4844 with two case studies. We then estimate a uni-dimensional
version of our model using real market data from the Ethereum blockchain and
empirically compare the performance of our optimal policies to EIP-1559. | [
"Davide Crapis",
"Ciamac C. Moallemi",
"Shouqiao Wang"
] | 2023-09-22 09:34:33 | http://arxiv.org/abs/2309.12735v1 | http://arxiv.org/pdf/2309.12735v1 | 2309.12735v1 |
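For context on the heuristic rules the framework is used to refine, here is a minimal sketch of an EIP-1559-style base-fee update, where the fee moves proportionally to the deviation of observed block usage from a target; the denominator of 8 follows the EIP-1559 convention, while the starting fee and usage series are toy values.

```python
def update_base_fee(base_fee, gas_used, gas_target, max_change_denominator=8):
    """EIP-1559-style multiplicative update: the fee rises when blocks are
    fuller than the target and falls when they are emptier."""
    deviation = (gas_used - gas_target) / gas_target
    return base_fee * (1 + deviation / max_change_denominator)

fee = 100.0  # toy starting base fee (gwei)
for used in [15_000_000, 30_000_000, 30_000_000, 5_000_000]:
    fee = update_base_fee(fee, used, gas_target=15_000_000)
    print(round(fee, 2))
```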
H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps | Solving real-world complex tasks using reinforcement learning (RL) without
high-fidelity simulation environments or large amounts of offline data can be
quite challenging. Online RL agents trained in imperfect simulation
environments can suffer from severe sim-to-real issues. Offline RL approaches,
although they bypass the need for simulators, often pose demanding requirements on
the size and quality of the offline datasets. The recently emerged hybrid
offline-and-online RL provides an attractive framework that enables joint use
of limited offline data and imperfect simulator for transferable policy
learning. In this paper, we develop a new algorithm, called H2O+, which offers
great flexibility to bridge various choices of offline and online learning
methods, while also accounting for dynamics gaps between the real and
simulation environment. Through extensive simulation and real-world robotics
experiments, we demonstrate superior performance and flexibility over advanced
cross-domain online and offline RL algorithms. | [
"Haoyi Niu",
"Tianying Ji",
"Bingqi Liu",
"Haocheng Zhao",
"Xiangyu Zhu",
"Jianying Zheng",
"Pengfei Huang",
"Guyue Zhou",
"Jianming Hu",
"Xianyuan Zhan"
] | 2023-09-22 08:58:22 | http://arxiv.org/abs/2309.12716v1 | http://arxiv.org/pdf/2309.12716v1 | 2309.12716v1 |
Unsupervised Representations Improve Supervised Learning in Speech Emotion Recognition | Speech Emotion Recognition (SER) plays a pivotal role in enhancing
human-computer interaction by enabling a deeper understanding of emotional
states across a wide range of applications, contributing to more empathetic and
effective communication. This study proposes an innovative approach that
integrates self-supervised feature extraction with supervised classification
for emotion recognition from small audio segments. In the preprocessing step,
to eliminate the need for hand-crafted audio features, we employ a self-supervised
feature extractor based on the Wav2Vec model to capture acoustic features
from audio data. Then, the output feature maps of the preprocessing step are fed
to a custom designed Convolutional Neural Network (CNN)-based model to perform
emotion classification. Utilizing the ShEMO dataset as our testing ground, the
proposed method surpasses two baseline methods, i.e., a support vector machine
classifier and transfer learning of a pretrained CNN. Comparing the proposed
method to state-of-the-art methods on the SER task further indicates its
superiority. Our findings underscore the pivotal role of deep
unsupervised feature learning in elevating the landscape of SER, offering
enhanced emotional comprehension in the realm of human-computer interactions. | [
"Amirali Soltani Tehrani",
"Niloufar Faridani",
"Ramin Toosi"
] | 2023-09-22 08:54:06 | http://arxiv.org/abs/2309.12714v1 | http://arxiv.org/pdf/2309.12714v1 | 2309.12714v1 |
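A minimal sketch of the pipeline described above: extract self-supervised features with a pretrained Wav2Vec 2.0 model and feed them to a small CNN classifier; the torchaudio bundle, CNN shape, and number of emotion classes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchaudio

# 1) Frozen self-supervised feature extractor (Wav2Vec 2.0 base).
bundle = torchaudio.pipelines.WAV2VEC2_BASE
wav2vec = bundle.get_model().eval()

waveform = torch.randn(1, int(bundle.sample_rate))   # one second of stand-in audio
with torch.no_grad():
    features, _ = wav2vec.extract_features(waveform)
x = features[-1]                                     # (1, frames, 768), last layer

# 2) Small CNN classifier over the extracted feature map.
class EmotionCNN(nn.Module):
    def __init__(self, in_dim=768, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(128, n_classes)

    def forward(self, feats):                        # feats: (B, frames, in_dim)
        h = self.conv(feats.transpose(1, 2)).squeeze(-1)
        return self.fc(h)

logits = EmotionCNN()(x)
print(logits.shape)  # torch.Size([1, 6])
```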
Big model only for hard audios: Sample dependent Whisper model selection for efficient inferences | Recent progress in Automatic Speech Recognition (ASR) has been coupled with a
substantial increase in the model sizes, which may now contain billions of
parameters, leading to slow inferences even with adapted hardware. In this
context, several ASR models exist in various sizes, with different inference
costs leading to different performance levels. Based on the observation that
smaller models perform optimally on large parts of testing corpora, we propose
to train a decision module, that would allow, given an audio sample, to use the
smallest sufficient model leading to a good transcription. We apply our
approach to two Whisper models with different sizes. By keeping the decision
process computationally efficient, we build a decision module that allows
substantial computational savings with reduced performance drops. | [
"Hugo Malard",
"Salah Zaiem",
"Robin Algayres"
] | 2023-09-22 08:50:58 | http://arxiv.org/abs/2309.12712v1 | http://arxiv.org/pdf/2309.12712v1 | 2309.12712v1 |
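A minimal sketch of the sample-dependent routing idea, assuming a cheap per-sample featurizer and a stand-in linear decision rule that predicts whether the small model suffices; the features, the decision rule, and the choice of Whisper checkpoints are illustrative assumptions, not the authors' decision module.

```python
import numpy as np
import whisper

small = whisper.load_model("tiny")     # cheap model used whenever it is sufficient
large = whisper.load_model("medium")   # expensive fallback for hard audios

def cheap_features(audio):
    """Toy per-sample difficulty features (stand-in for a learned featurizer)."""
    return np.array([np.abs(audio).mean(), audio.std()])

def small_is_sufficient(feats, w=np.array([-3.0, -3.0]), b=1.0):
    """Stand-in linear decision module; in practice this would be trained."""
    return float(feats @ w + b) > 0

def transcribe(path):
    audio = whisper.load_audio(path)
    model = small if small_is_sufficient(cheap_features(audio)) else large
    return model.transcribe(audio)["text"]

# print(transcribe("sample.wav"))  # each audio is routed to the cheapest adequate model
```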
PointSSC: A Cooperative Vehicle-Infrastructure Point Cloud Benchmark for Semantic Scene Completion | Semantic Scene Completion (SSC) aims to jointly generate space occupancies
and semantic labels for complex 3D scenes. Most existing SSC models focus on
volumetric representations, which are memory-inefficient for large outdoor
spaces. Point clouds provide a lightweight alternative but existing benchmarks
lack outdoor point cloud scenes with semantic labels. To address this, we
introduce PointSSC, the first cooperative vehicle-infrastructure point cloud
benchmark for semantic scene completion. These scenes exhibit long-range
perception and minimal occlusion. We develop an automated annotation pipeline
leveraging Segment Anything to efficiently assign semantics. To benchmark
progress, we propose a LiDAR-based model with a Spatial-Aware Transformer for
global and local feature extraction and a Completion and Segmentation
Cooperative Module for joint completion and segmentation. PointSSC provides a
challenging testbed to drive advances in semantic point cloud completion for
real-world navigation. | [
"Yuxiang Yan",
"Boda Liu",
"Jianfei Ai",
"Qinbu Li",
"Ru Wan",
"Jian Pu"
] | 2023-09-22 08:39:16 | http://arxiv.org/abs/2309.12708v1 | http://arxiv.org/pdf/2309.12708v1 | 2309.12708v1 |
Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm | Noisy multi-label learning has garnered increasing attention due to the
challenges posed by collecting large-scale accurate labels, making noisy labels
a more practical alternative. Motivated by noisy multi-class learning, the
introduction of transition matrices can help model multi-label noise and enable
the development of statistically consistent algorithms for noisy multi-label
learning. However, estimating multi-label noise transition matrices remains a
challenging task, as most existing estimators in noisy multi-class learning
rely on anchor points and accurate fitting of noisy class posteriors, which is
hard to satisfy in noisy multi-label learning. In this paper, we address this
problem by first investigating the identifiability of class-dependent
transition matrices in noisy multi-label learning. Building upon the
identifiability results, we propose a novel estimator that leverages label
correlations without the need for anchor points or precise fitting of noisy
class posteriors. Specifically, we first estimate the occurrence probability of
two noisy labels to capture noisy label correlations. Subsequently, we employ
sample selection techniques to extract information implying clean label
correlations, which are then used to estimate the occurrence probability of one
noisy label when a certain clean label appears. By exploiting the mismatches in
label correlations implied by these occurrence probabilities, we demonstrate
that the transition matrix becomes identifiable and can be acquired by solving
a bilinear decomposition problem. Theoretically, we establish an estimation
error bound for our multi-label transition matrix estimator and derive a
generalization error bound for our statistically consistent algorithm.
Empirically, we validate the effectiveness of our estimator in estimating
multi-label noise transition matrices, leading to excellent classification
performance. | [
"Shikun Li",
"Xiaobo Xia",
"Hansong Zhang",
"Shiming Ge",
"Tongliang Liu"
] | 2023-09-22 08:35:38 | http://arxiv.org/abs/2309.12706v1 | http://arxiv.org/pdf/2309.12706v1 | 2309.12706v1 |
Discovering the Interpretability-Performance Pareto Front of Decision Trees with Dynamic Programming | Decision trees are known to be intrinsically interpretable as they can be
inspected and interpreted by humans. Furthermore, recent hardware advances have
rekindled an interest for optimal decision tree algorithms, that produce more
accurate trees than the usual greedy approaches. However, these optimal
algorithms return a single tree optimizing a hand-defined
interpretability-performance trade-off, obtained by specifying a maximum number
of decision nodes, giving no further insights about the quality of this
trade-off. In this paper, we propose a new Markov Decision Problem (MDP)
formulation for finding optimal decision trees. The main interest of this
formulation is that we can compute the optimal decision trees for several
interpretability-performance trade-offs by solving a single dynamic program,
letting the user choose a posteriori the tree that best suits their needs.
Empirically, we show that our method is competitive with state-of-the-art
algorithms in terms of accuracy and runtime while returning a whole set of
trees on the interpretability-performance Pareto front. | [
"Hector Kohler",
"Riad Akrour",
"Philippe Preux"
] | 2023-09-22 08:18:08 | http://arxiv.org/abs/2309.12701v1 | http://arxiv.org/pdf/2309.12701v1 | 2309.12701v1 |
Semantic similarity prediction is better than other semantic similarity measures | Semantic similarity between natural language texts is typically measured
either by looking at the overlap between subsequences (e.g., BLEU) or by using
embeddings (e.g., BERTScore, S-BERT). Within this paper, we argue that when we
are only interested in measuring the semantic similarity, it is better to
directly predict the similarity using a fine-tuned model for such a task. Using
a model fine-tuned on the STS-B task from the GLUE benchmark, we define the
STSScore approach and show that the resulting similarity is better aligned with
our expectations on a robust semantic similarity measure than other approaches. | [
"Steffen Herbold"
] | 2023-09-22 08:11:01 | http://arxiv.org/abs/2309.12697v1 | http://arxiv.org/pdf/2309.12697v1 | 2309.12697v1 |
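A minimal sketch of directly predicting semantic similarity with a model fine-tuned on STS-B, using a publicly available cross-encoder checkpoint as a stand-in for the fine-tuned model behind STSScore; the checkpoint name is an assumption, not necessarily the one used in the paper.

```python
from sentence_transformers import CrossEncoder

# A cross-encoder fine-tuned for STS-B regression; it outputs a similarity score directly.
model = CrossEncoder("cross-encoder/stsb-roberta-base")

pairs = [
    ("A man is playing a guitar.", "Someone is playing an instrument."),
    ("A man is playing a guitar.", "The stock market fell sharply today."),
]
scores = model.predict(pairs)
print(scores)  # the semantically closer pair receives the higher score
```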
Recurrent Temporal Revision Graph Networks | Temporal graphs offer more accurate modeling of many real-world scenarios
than static graphs. However, neighbor aggregation for temporal graphs, a critical
building block of graph networks, is currently a straightforward extension of its
static-graph counterpart. It can be computationally expensive when involving
all historical neighbors during such aggregation. In practice, typically only a
subset of the most recent neighbors are involved. However, such subsampling
leads to incomplete and biased neighbor information. To address this
limitation, we propose a novel framework for temporal neighbor aggregation that
uses the recurrent neural network with node-wise hidden states to integrate
information from all historical neighbors for each node to acquire the complete
neighbor information. We demonstrate the superior theoretical expressiveness of
the proposed framework as well as its state-of-the-art performance in
real-world applications. Notably, it achieves a significant +9.6% improvement
on averaged precision in a real-world Ecommerce dataset over existing methods
on 2-layer models. | [
"Yizhou Chen",
"Anxiang Zeng",
"Guangda Huzhang",
"Qingtao Yu",
"Kerui Zhang",
"Cao Yuanpeng",
"Kangle Wu",
"Han Yu",
"Zhiming Zhou"
] | 2023-09-22 08:09:55 | http://arxiv.org/abs/2309.12694v2 | http://arxiv.org/pdf/2309.12694v2 | 2309.12694v2 |
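A minimal sketch of node-wise recurrent aggregation: every node keeps a hidden state that is updated whenever a new interaction arrives, so no historical neighbor is dropped; the message construction, the GRU cell, and the detached state handling are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NodeWiseRecurrentAggregator(nn.Module):
    def __init__(self, num_nodes, feat_dim=32, hidden_dim=64):
        super().__init__()
        self.hidden = torch.zeros(num_nodes, hidden_dim)   # one hidden state per node
        self.cell = nn.GRUCell(input_size=feat_dim + hidden_dim, hidden_size=hidden_dim)

    def observe_edge(self, src, dst, edge_feat):
        """Fold a new interaction (src -> dst) into dst's hidden state."""
        msg = torch.cat([edge_feat, self.hidden[src]], dim=-1).unsqueeze(0)
        new_state = self.cell(msg, self.hidden[dst].unsqueeze(0)).squeeze(0)
        self.hidden[dst] = new_state.detach()              # state kept detached for brevity

    def node_embedding(self, node):
        return self.hidden[node]

agg = NodeWiseRecurrentAggregator(num_nodes=10)
agg.observe_edge(src=0, dst=3, edge_feat=torch.randn(32))
agg.observe_edge(src=7, dst=3, edge_feat=torch.randn(32))
print(agg.node_embedding(3).shape)  # torch.Size([64])
```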
AMPLIFY:Attention-based Mixup for Performance Improvement and Label Smoothing in Transformer | Mixup is an effective data augmentation method that generates new augmented
samples by aggregating linear combinations of different original samples.
However, if the original samples contain noise or aberrant features,
Mixup may propagate them to the augmented samples, leading to over-sensitivity
of the model to these outliers. To solve this problem, this paper proposes a
new Mixup method called AMPLIFY. This method uses the Attention mechanism of
Transformer itself to reduce the influence of noise and aberrant values in the
original samples on the prediction results, without increasing additional
trainable parameters, and the computational cost is very low, thereby avoiding
the problem of high resource consumption in common Mixup methods such as
Sentence Mixup. The experimental results show that, under a smaller
computational resource cost, AMPLIFY outperforms other Mixup methods in text
classification tasks on 7 benchmark datasets, providing new ideas and new ways
to further improve the performance of pre-trained models based on the Attention
mechanism, such as BERT, ALBERT, RoBERTa, and GPT. Our code can be obtained at
https://github.com/kiwi-lilo/AMPLIFY. | [
"Leixin Yang",
"Yaping Zhang",
"Haoyu Xiong",
"Yu Xiang"
] | 2023-09-22 08:02:45 | http://arxiv.org/abs/2309.12689v1 | http://arxiv.org/pdf/2309.12689v1 | 2309.12689v1 |
On Sparse Modern Hopfield Model | We introduce the sparse modern Hopfield model as a sparse extension of the
modern Hopfield model. Like its dense counterpart, the sparse modern Hopfield
model equips a memory-retrieval dynamics whose one-step approximation
corresponds to the sparse attention mechanism. Theoretically, our key
contribution is a principled derivation of a closed-form sparse Hopfield energy
using the convex conjugate of the sparse entropic regularizer. Building upon
this, we derive the sparse memory retrieval dynamics from the sparse energy
function and show its one-step approximation is equivalent to the
sparse-structured attention. Importantly, we provide a sparsity-dependent
memory retrieval error bound which is provably tighter than its dense analog.
The conditions for the benefits of sparsity to arise are therefore identified
and discussed. In addition, we show that the sparse modern Hopfield model
maintains the robust theoretical properties of its dense counterpart, including
rapid fixed point convergence and exponential memory capacity. Empirically, we
use both synthetic and real-world datasets to demonstrate that the sparse
Hopfield model outperforms its dense counterpart in many situations. | [
"Jerry Yao-Chieh Hu",
"Donglin Yang",
"Dennis Wu",
"Chenwei Xu",
"Bo-Yu Chen",
"Han Liu"
] | 2023-09-22 07:32:45 | http://arxiv.org/abs/2309.12673v1 | http://arxiv.org/pdf/2309.12673v1 | 2309.12673v1 |
How to Fine-tune the Model: Unified Model Shift and Model Bias Policy Optimization | Designing and deriving effective model-based reinforcement learning (MBRL)
algorithms with a performance improvement guarantee is challenging, mainly
attributed to the high coupling between model learning and policy optimization.
Many prior methods that rely on return discrepancy to guide model learning
ignore the impacts of model shift, which can lead to performance deterioration
due to excessive model updates. Other methods use performance difference bound
to explicitly consider model shift. However, these methods rely on a fixed
threshold to constrain model shift, resulting in a heavy dependence on the
threshold and a lack of adaptability during the training process. In this
paper, we theoretically derive an optimization objective that can unify model
shift and model bias and then formulate a fine-tuning process. This process
adaptively adjusts the model updates to get a performance improvement guarantee
while avoiding model overfitting. Based on these, we develop a straightforward
algorithm USB-PO (Unified model Shift and model Bias Policy Optimization).
Empirical results show that USB-PO achieves state-of-the-art performance on
several challenging benchmark tasks. | [
"Hai Zhang",
"Hang Yu",
"Junqiao Zhao",
"Di Zhang",
"ChangHuang",
"Hongtu Zhou",
"Xiao Zhang",
"Chen Ye"
] | 2023-09-22 07:27:32 | http://arxiv.org/abs/2309.12671v1 | http://arxiv.org/pdf/2309.12671v1 | 2309.12671v1 |
OneNet: Enhancing Time Series Forecasting Models under Concept Drift by Online Ensembling | Online updating of time series forecasting models aims to address the concept
drifting problem by efficiently updating forecasting models based on streaming
data. Many algorithms are designed for online time series forecasting, with
some exploiting cross-variable dependency while others assume independence
among variables. Given that every data assumption has its own pros and cons in
online time series modeling, we propose \textbf{On}line \textbf{e}nsembling
\textbf{Net}work (OneNet). It dynamically updates and combines two models, with
one focusing on modeling the dependency across the time dimension and the other
on cross-variate dependency. Our method incorporates a reinforcement
learning-based approach into the traditional online convex programming
framework, allowing for the linear combination of the two models with
dynamically adjusted weights. OneNet addresses the main shortcoming of
classical online learning methods that tend to be slow in adapting to the
concept drift. Empirical results show that OneNet reduces online forecasting
error by more than $\mathbf{50\%}$ compared to the State-Of-The-Art (SOTA)
method. The code is available at \url{https://github.com/yfzhang114/OneNet}. | [
"Yi-Fan Zhang",
"Qingsong Wen",
"Xue Wang",
"Weiqi Chen",
"Liang Sun",
"Zhang Zhang",
"Liang Wang",
"Rong Jin",
"Tieniu Tan"
] | 2023-09-22 06:59:14 | http://arxiv.org/abs/2309.12659v1 | http://arxiv.org/pdf/2309.12659v1 | 2309.12659v1 |
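A minimal sketch of combining two forecasters online with dynamically adjusted weights, using an exponentiated-gradient update as a simple stand-in for OneNet's reinforcement-learning-based combiner; the learning rate, loss, and toy models are illustrative.

```python
import numpy as np

def online_ensemble(stream, model_time, model_var, lr=0.5):
    """Combine two forecasters with multiplicative-weights updates over a data stream."""
    w = np.array([0.5, 0.5])
    for x, y in stream:
        preds = np.array([model_time(x), model_var(x)])
        yhat = w @ preds                       # weighted combination is the forecast
        losses = (preds - y) ** 2              # per-model squared error on this step
        w = w * np.exp(-lr * losses)           # exponentiated-gradient weight update
        w = w / w.sum()
        yield yhat, w.copy()

# Toy usage: one forecaster is consistently better, so its weight should grow.
stream = [(t, np.sin(0.1 * t)) for t in range(50)]
good = lambda t: np.sin(0.1 * t) + 0.01
bad = lambda t: np.sin(0.1 * t) + 0.5
for yhat, w in online_ensemble(stream, good, bad):
    pass
print(w)  # the weight concentrates on the better forecaster
```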
Neural Operator Variational Inference based on Regularized Stein Discrepancy for Deep Gaussian Processes | Deep Gaussian Process (DGP) models offer a powerful nonparametric approach
for Bayesian inference, but exact inference is typically intractable,
motivating the use of various approximations. However, existing approaches,
such as mean-field Gaussian assumptions, limit the expressiveness and efficacy
of DGP models, while stochastic approximation can be computationally expensive.
To tackle these challenges, we introduce Neural Operator Variational Inference
(NOVI) for Deep Gaussian Processes. NOVI uses a neural generator to obtain a
sampler and minimizes the Regularized Stein Discrepancy in L2 space between the
generated distribution and true posterior. We solve the minimax problem using
Monte Carlo estimation and subsampling stochastic optimization techniques. We
demonstrate that the bias introduced by our method can be controlled by
multiplying the Fisher divergence with a constant, which leads to robust error
control and ensures the stability and precision of the algorithm. Our
experiments on datasets ranging from hundreds to tens of thousands demonstrate
the effectiveness and the faster convergence rate of the proposed method. We
achieve a classification accuracy of 93.56 on the CIFAR10 dataset,
outperforming SOTA Gaussian process methods. Furthermore, our method guarantees
theoretically controlled prediction error for DGP models and demonstrates
remarkable performance on various datasets. We are optimistic that NOVI has the
potential to enhance the performance of deep Bayesian nonparametric models and
could have significant implications for various practical applications. | [
"Jian Xu",
"Shian Du",
"Junmei Yang",
"Qianli Ma",
"Delu Zeng"
] | 2023-09-22 06:56:35 | http://arxiv.org/abs/2309.12658v1 | http://arxiv.org/pdf/2309.12658v1 | 2309.12658v1 |
FP-PET: Large Model, Multiple Loss And Focused Practice | This study presents FP-PET, a comprehensive approach to medical image
segmentation with a focus on CT and PET images. Utilizing a dataset from the
AutoPet2023 Challenge, the research employs a variety of machine learning
models, including STUNet-large, SwinUNETR, and VNet, to achieve
state-of-the-art segmentation performance. The paper introduces an aggregated
score that combines multiple evaluation metrics such as Dice score, false
positive volume (FPV), and false negative volume (FNV) to provide a holistic
measure of model effectiveness. The study also discusses the computational
challenges and solutions related to model training, which was conducted on
high-performance GPUs. Preprocessing and postprocessing techniques, including
Gaussian weighting schemes and morphological operations, are explored to
further refine the segmentation output. The research offers valuable insights
into the challenges and solutions for advanced medical image segmentation. | [
"Yixin Chen",
"Ourui Fu",
"Wenrui Shao",
"Zhaoheng Xie"
] | 2023-09-22 06:44:28 | http://arxiv.org/abs/2309.12650v1 | http://arxiv.org/pdf/2309.12650v1 | 2309.12650v1 |
Are Deep Learning Classification Results Obtained on CT Scans Fair and Interpretable? | Following the great success of various deep learning methods in image and
object classification, the biomedical image processing community is also
overwhelmed with their applications to various automatic diagnosis cases.
Unfortunately, most of the deep learning-based classification attempts in the
literature solely focus on the aim of extreme accuracy scores, without
considering interpretability, or patient-wise separation of training and test
data. For example, most lung nodule classification papers using deep learning
randomly shuffle data and split it into training, validation, and test sets,
causing certain images from the CT scan of a person to be in the training set,
while other images of the exact same person to be in the validation or testing
image sets. This can result in reporting misleading accuracy rates and the
learning of irrelevant features, ultimately reducing the real-life usability of
these models. When the deep neural networks trained on the traditional, unfair
data shuffling method are challenged with new patient images, it is observed
that the trained models perform poorly. In contrast, deep neural networks
trained with strict patient-level separation maintain their accuracy rates even
when new patient images are tested. Heat-map visualizations of the activations
of the deep neural networks trained with strict patient-level separation
indicate a higher degree of focus on the relevant nodules. We argue that the
research question posed in the title has a positive answer only if the deep
neural networks are trained with images of patients that are strictly isolated
from the validation and testing patient sets. | [
"Mohamad M. A. Ashames",
"Ahmet Demir",
"Omer N. Gerek",
"Mehmet Fidan",
"M. Bilginer Gulmezoglu",
"Semih Ergin",
"Mehmet Koc",
"Atalay Barkana",
"Cuneyt Calisir"
] | 2023-09-22 05:57:25 | http://arxiv.org/abs/2309.12632v1 | http://arxiv.org/pdf/2309.12632v1 | 2309.12632v1 |
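A minimal sketch of the strict patient-level separation argued for above, using scikit-learn's GroupShuffleSplit so that every slice from a given patient lands on the same side of the split; the array shapes and label values are placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy data: 1000 CT slices from 100 patients (10 slices per patient).
X = np.random.rand(1000, 64, 64)            # slice "images"
y = np.random.randint(0, 2, size=1000)      # toy benign / malignant labels
patient_id = np.repeat(np.arange(100), 10)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_id))

# No patient appears on both sides of the split.
assert set(patient_id[train_idx]).isdisjoint(patient_id[test_idx])
print(len(train_idx), "training slices,", len(test_idx), "test slices")
```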
Sequential Action-Induced Invariant Representation for Reinforcement Learning | How to accurately learn task-relevant state representations from
high-dimensional observations with visual distractions is a realistic and
challenging problem in visual reinforcement learning. Recently, unsupervised
representation learning methods based on bisimulation metrics, contrast,
prediction, and reconstruction have shown the ability for task-relevant
information extraction. However, due to the lack of appropriate mechanisms for
the extraction of task information in the prediction, contrast, and
reconstruction-related approaches and the limitations of bisimulation-related
methods in domains with sparse rewards, it is still difficult for these methods
to be effectively extended to environments with distractions. To alleviate
these problems, in the paper, the action sequences, which contain
task-intensive signals, are incorporated into representation learning.
Specifically, we propose a Sequential Action-induced invariant Representation
(SAR) method, in which the encoder is optimized by an auxiliary learner to only
preserve the components that follow the control signals of sequential actions,
so the agent can be induced to learn representations that are robust against
distractions. We conduct extensive experiments on the DeepMind Control suite
tasks with distractions while achieving the best performance over strong
baselines. We also demonstrate the effectiveness of our method at disregarding
task-irrelevant information by deploying SAR to real-world CARLA-based
autonomous driving with natural distractions. Finally, we provide generalization
analyses based on the generalization decay and t-SNE
visualization. Code and demo videos are available at
https://github.com/DMU-XMU/SAR.git. | [
"Dayang Liang",
"Qihang Chen",
"Yunlong Liu"
] | 2023-09-22 05:31:55 | http://arxiv.org/abs/2309.12628v1 | http://arxiv.org/pdf/2309.12628v1 | 2309.12628v1 |