title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data | The impressive performance of large language models (LLMs) and their immense
potential for commercialization have given rise to serious concerns over the
intellectual property (IP) of their training data. In particular, the synthetic
texts generated by LLMs may infringe the IP of the data being used to train the
LLMs. To this end, it is imperative to be able to (a) identify the data
provider who contributed to the generation of a synthetic text by an LLM
(source attribution) and (b) verify whether the text data from a data provider
has been used to train an LLM (data provenance). In this paper, we show that
both problems can be solved by watermarking, i.e., by enabling an LLM to
generate synthetic texts with embedded watermarks that contain information
about their source(s). We identify the key properties of such watermarking
frameworks (e.g., source attribution accuracy, robustness against adversaries),
and propose a WAtermarking for Source Attribution (WASA) framework that
satisfies these key properties due to our algorithmic designs. Our WASA
framework enables an LLM to learn an accurate mapping from the texts of
different data providers to their corresponding unique watermarks, which sets
the foundation for effective source attribution (and hence data provenance).
Extensive empirical evaluations show that our WASA framework achieves effective
source attribution and data provenance. | [
"Jingtan Wang",
"Xinyang Lu",
"Zitong Zhao",
"Zhongxiang Dai",
"Chuan-Sheng Foo",
"See-Kiong Ng",
"Bryan Kian Hsiang Low"
] | 2023-10-01 12:02:57 | http://arxiv.org/abs/2310.00646v1 | http://arxiv.org/pdf/2310.00646v1 | 2310.00646v1 |
From Bandits Model to Deep Deterministic Policy Gradient, Reinforcement Learning with Contextual Information | The problem of how to take the right actions to make profits in a sequential
process remains difficult due to the fast dynamics and significant
uncertainty in many application scenarios. In such complicated
environments, reinforcement learning (RL), a reward-oriented strategy for
optimal control, has emerged as a potential technique to address this strategic
decision-making issue. However, reinforcement learning also has shortcomings,
such as excessive resource consumption and an inability to quickly obtain
optimal solutions, that make it poorly suited to many financial problems,
including quantitative trading markets. In this
study, we use two methods to overcome the issue with contextual information:
contextual Thompson sampling and reinforcement learning under supervision, which
can accelerate the iterations in search of the best answer. In order to
investigate strategic trading in quantitative markets, we merged the earlier
financial trading strategy known as constant proportion portfolio insurance
(CPPI) into deep deterministic policy gradient (DDPG). The experimental results
show that both methods can accelerate the progress of reinforcement learning to
obtain the optimal solution. | [
"Zhendong Shi",
"Xiaoli Wei",
"Ercan E. Kuruoglu"
] | 2023-10-01 11:25:20 | http://arxiv.org/abs/2310.00642v1 | http://arxiv.org/pdf/2310.00642v1 | 2310.00642v1 |
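The entry above folds the classic constant proportion portfolio insurance (CPPI) rule into DDPG. As background, here is a minimal sketch of the standard CPPI allocation rule alone (not the authors' DDPG integration); all parameter values and names below are illustrative assumptions.

```python
import numpy as np

def cppi_allocation(portfolio_value, floor, multiplier):
    """Classic CPPI rule: invest a fixed multiple of the cushion
    (value above the floor) in the risky asset, rest in the safe asset."""
    cushion = max(portfolio_value - floor, 0.0)
    risky = min(multiplier * cushion, portfolio_value)  # cap at total wealth
    safe = portfolio_value - risky
    return risky, safe

# Toy simulation: one path of risky returns with floor protection at 80.
rng = np.random.default_rng(0)
value, floor, m = 100.0, 80.0, 3.0
for risky_ret in rng.normal(0.0005, 0.01, size=250):  # ~1 trading year
    risky, safe = cppi_allocation(value, floor, m)
    value = risky * (1 + risky_ret) + safe  # safe asset earns 0 for simplicity
print(f"final value: {value:.2f} (floor {floor})")
```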
A primal-dual perspective for distributed TD-learning | The goal of this paper is to investigate distributed temporal difference (TD)
learning for a networked multi-agent Markov decision process. The proposed
approach is based on distributed optimization algorithms, which can be
interpreted as primal-dual ordinary differential equation (ODE) dynamics
subject to null-space constraints. Based on the exponential convergence
behavior of the primal-dual ODE dynamics subject to null-space constraints, we
examine the behavior of the final iterate in various distributed TD-learning
scenarios, considering both constant and diminishing step-sizes and
incorporating both i.i.d. and Markovian observation models. Unlike existing
methods, the proposed algorithm does not require the assumption that the
underlying communication network structure is characterized by a doubly
stochastic matrix. | [
"Han-Dong Lim",
"Donghwan Lee"
] | 2023-10-01 10:38:46 | http://arxiv.org/abs/2310.00638v1 | http://arxiv.org/pdf/2310.00638v1 | 2310.00638v1 |
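For context on the entry above, the sketch below shows the plain single-agent tabular TD(0) update that distributed primal-dual variants build on; the distributed ODE machinery itself is not reproduced, and the toy chain is an assumption.

```python
import numpy as np

def td0(P, r, gamma, alpha, steps=5000, seed=0):
    """Tabular TD(0) policy evaluation: V(s) += alpha * (r + gamma*V(s') - V(s)).
    P is the state-transition matrix of the Markov chain under a fixed policy."""
    rng = np.random.default_rng(seed)
    n = len(r)
    V = np.zeros(n)
    s = 0
    for _ in range(steps):
        s_next = rng.choice(n, p=P[s])
        td_error = r[s] + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error
        s = s_next
    return V

# Two-state toy chain.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
r = np.array([1.0, 0.0])
print(td0(P, r, gamma=0.9, alpha=0.05))
```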
A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks | Benefiting from the rapid development of deep learning, 2D and 3D computer
vision applications are deployed in many safety-critical systems, such as
autopilot and identity authentication. However, deep learning models are not
trustworthy enough because of their limited robustness against adversarial
attacks. Physically realizable adversarial attacks further pose fatal
threats to applications and human safety. Many papers have emerged to
investigate the robustness and safety of deep learning models against
adversarial attacks. To lead to trustworthy AI, we first construct a general
threat model from different perspectives and then comprehensively review the
latest progress of both 2D and 3D adversarial attacks. We extend the concept of
adversarial examples beyond imperceptible perturbations and collate over 170
papers to give an overview of deep learning model robustness against various
adversarial attacks. To the best of our knowledge, we are the first to
systematically investigate adversarial attacks for 3D models, a flourishing
field applied to many real-world applications. In addition, we examine physical
adversarial attacks that lead to safety violations. Last but not least, we
summarize present popular topics, give insights on challenges, and shed light
on future research on trustworthy AI. | [
"Yanjie Li",
"Bin Xie",
"Songtao Guo",
"Yuanyuan Yang",
"Bin Xiao"
] | 2023-10-01 10:16:33 | http://arxiv.org/abs/2310.00633v1 | http://arxiv.org/pdf/2310.00633v1 | 2310.00633v1 |
Intelligent Client Selection for Federated Learning using Cellular Automata | Federated Learning (FL) has emerged as a promising solution for
privacy-enhancement and latency minimization in various real-world
applications, such as transportation, communications, and healthcare. FL
endeavors to bring Machine Learning (ML) down to the edge by harnessing data
from millions of devices and IoT sensors, thus enabling rapid responses to
dynamic environments and yielding highly personalized results. However, the
increasing number of sensors across diverse applications poses challenges in
terms of communication and resource allocation, hindering the participation of
all devices in the federated process and prompting the need for effective FL
client selection. To address this issue, we propose Cellular Automaton-based
Client Selection (CA-CS), a novel client selection algorithm, which leverages
Cellular Automata (CA) as models to effectively capture spatio-temporal changes
in a fast-evolving environment. CA-CS considers the computational resources and
communication capacity of each participating client, while also accounting for
inter-client interactions between neighbors during the client selection
process, enabling intelligent client selection for online FL processes on data
streams that closely resemble real-world scenarios. In this paper, we present a
thorough evaluation of the proposed CA-CS algorithm using MNIST and CIFAR-10
datasets, while making a direct comparison against a uniformly random client
selection scheme. Our results demonstrate that CA-CS achieves comparable
accuracy to the random selection approach, while effectively avoiding
high-latency clients. | [
"Nikolaos Pavlidis",
"Vasileios Perifanis",
"Theodoros Panagiotis Chatzinikolaou",
"Georgios Ch. Sirakoulis",
"Pavlos S. Efraimidis"
] | 2023-10-01 09:40:40 | http://arxiv.org/abs/2310.00627v2 | http://arxiv.org/pdf/2310.00627v2 | 2310.00627v2 |
GNRK: Graph Neural Runge-Kutta method for solving partial differential equations | Neural networks have proven to be efficient surrogate models for tackling
partial differential equations (PDEs). However, their applicability is often
confined to specific PDEs under certain constraints, in contrast to classical
PDE solvers that rely on numerical differentiation. Striking a balance between
efficiency and versatility, this study introduces a novel approach called Graph
Neural Runge-Kutta (GNRK), which integrates graph neural network modules with a
recurrent structure inspired by the classical solvers. The GNRK operates on
graph structures, ensuring its resilience to changes in spatial and temporal
resolutions during domain discretization. Moreover, it demonstrates the
capability to address general PDEs, irrespective of initial conditions or PDE
coefficients. To assess its performance, we benchmark the GNRK against existing
neural network based PDE solvers using the 2-dimensional Burgers' equation,
revealing the GNRK's superiority in terms of model size and accuracy.
Additionally, this graph-based methodology offers a straightforward extension
for solving coupled differential equations, typically necessitating more
intricate models. | [
"Hoyun Choi",
"Sungyeop Lee",
"B. Kahng",
"Junghyo Jo"
] | 2023-10-01 08:52:46 | http://arxiv.org/abs/2310.00618v1 | http://arxiv.org/pdf/2310.00618v1 | 2310.00618v1 |
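The entry above describes a recurrent structure inspired by classical solvers. Below is the classical fourth-order Runge-Kutta step it alludes to; GNRK replaces the hand-written right-hand side f with a learned graph neural network, which is not shown here.

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = f(t, u).
    GNRK-style solvers reuse this recurrent stage structure, replacing
    f with a learned graph neural network module."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt * k1 / 2)
    k3 = f(t + dt / 2, u + dt * k2 / 2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Sanity check on du/dt = -u (exact solution: exp(-t)).
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda t, u: -u, u, t, dt)
    t += dt
print(u, np.exp(-1.0))  # should be close
```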
Understanding Adversarial Transferability in Federated Learning | We investigate the robustness and security issues from a novel and practical
setting: a group of malicious clients has influenced the model during training
by disguising their identities and acting as benign clients, revealing their
adversarial position only after training in order to conduct transferable
adversarial attacks with their data, which is usually a subset of the data the
FL system was trained on. Our aim is to offer a full understanding of the challenges the
FL system faces in this practical setting across a spectrum of configurations.
We notice that such an attack is possible, but the federated model is more
robust compared with its centralized counterpart when the accuracy on clean
images is comparable. Through our study, we hypothesize that the robustness stems from
two factors: the decentralized training on distributed data and the averaging
operation. We provide evidence from both the perspective of empirical
experiments and theoretical analysis. Our work has implications for
understanding the robustness of federated learning systems and poses a
practical question for federated learning applications. | [
"Yijiang Li",
"Ying Gao",
"Haohan Wang"
] | 2023-10-01 08:35:46 | http://arxiv.org/abs/2310.00616v1 | http://arxiv.org/pdf/2310.00616v1 | 2310.00616v1 |
Hierarchical Adaptation with Hypernetworks for Few-shot Molecular Property Prediction | Molecular property prediction (MPP) is important in biomedical applications,
which naturally suffers from a lack of labels, thus forming a few-shot learning
problem. State-of-the-art approaches are usually based on a gradient-based
meta-learning strategy, which ignores differences in model parameters and in
each molecule's learning difficulty. To address these problems, we propose a
novel hierarchical adaptation mechanism for few-shot MPP (HiMPP). The model
follows an encoder-predictor framework. First, to make the molecular
representation property-adaptive, we selectively adapt the encoder's parameters
by designing a hypernetwork to modulate node embeddings during message
propagation. Next, we perform molecule-level adaptation by designing another
hypernetwork, which assigns larger numbers of propagation steps to harder
molecules in the predictor. In this way, the molecular representation is
transformed by HiMPP hierarchically from the property level to the molecule
level. Extensive results show that HiMPP obtains
the state-of-the-art performance in few-shot MPP problems, and our proposed
hierarchical adaptation mechanism is rational and effective. | [
"Shiguang Wu",
"Yaqing Wang",
"Quanming Yao"
] | 2023-10-01 08:28:04 | http://arxiv.org/abs/2310.00614v1 | http://arxiv.org/pdf/2310.00614v1 | 2310.00614v1 |
Understanding AI Cognition: A Neural Module for Inference Inspired by Human Memory Mechanisms | How humans and machines make sense of current inputs for relation reasoning
and question-answering while putting the perceived information into context of
our past memories, has been a challenging conundrum in cognitive science and
artificial intelligence. Inspired by the human brain's memory system and cognitive
architectures, we propose a PMI framework that consists of perception, memory
and inference components. Notably, the memory module comprises working and
long-term memory, with the latter endowed with a higher-order structure to
retain more accumulated knowledge and experiences. Through a differentiable
competitive write access, current perceptions update working memory, which is
later merged with long-term memory via outer product associations, averting
memory overflow and minimizing information conflicts. In the inference module,
relevant information is retrieved from two separate memory origins and
associatively integrated to attain a more comprehensive and precise
interpretation of current perceptions. We exploratively apply our PMI to
improve prevailing Transformers and CNN models on question-answering tasks like
bAbI-20k and Sort-of-CLEVR datasets, as well as relation calculation and image
classification tasks, and in each case, our PMI enhancements consistently and
significantly outperform their original counterparts. Visualization analyses
reveal that memory consolidation, along with the interaction and integration of
information from diverse memory sources, substantially contributes to the model
effectiveness on inference tasks. | [
"Xiangyu Zeng",
"Jie Lin",
"Piao Hu",
"Ruizheng Huang",
"Zhicheng Zhang"
] | 2023-10-01 08:12:55 | http://arxiv.org/abs/2310.09297v1 | http://arxiv.org/pdf/2310.09297v1 | 2310.09297v1 |
On the Onset of Robust Overfitting in Adversarial Training | Adversarial Training (AT) is a widely-used algorithm for building robust
neural networks, but it suffers from the issue of robust overfitting, the
fundamental mechanism of which remains unclear. In this work, we consider
normal data and adversarial perturbation as separate factors, and identify that
the underlying causes of robust overfitting stem from the normal data through
factor ablation in AT. Furthermore, we explain the onset of robust overfitting
as a result of the model learning features that lack robust generalization,
which we refer to as non-effective features. Specifically, we provide a
detailed analysis of the generation of non-effective features and how they lead
to robust overfitting. Additionally, we explain various empirical behaviors
observed in robust overfitting and revisit different techniques to mitigate
robust overfitting from the perspective of non-effective features, providing a
comprehensive understanding of the robust overfitting phenomenon. This
understanding inspires us to propose two measures, attack strength and data
augmentation, to hinder the learning of non-effective features by the neural
network, thereby alleviating robust overfitting. Extensive experiments
conducted on benchmark datasets demonstrate the effectiveness of the proposed
methods in mitigating robust overfitting and enhancing adversarial robustness. | [
"Chaojian Yu",
"Xiaolong Shi",
"Jun Yu",
"Bo Han",
"Tongliang Liu"
] | 2023-10-01 07:57:03 | http://arxiv.org/abs/2310.00607v1 | http://arxiv.org/pdf/2310.00607v1 | 2310.00607v1 |
Path Structured Multimarginal Schrödinger Bridge for Probabilistic Learning of Hardware Resource Usage by Control Software | The solution of the path structured multimarginal Schrödinger bridge
problem (MSBP) is the most-likely measure-valued trajectory consistent with a
sequence of observed probability measures or distributional snapshots. We
leverage recent algorithmic advances in solving such structured MSBPs for
learning stochastic hardware resource usage by control software. The solution
enables predicting the time-varying distribution of hardware resource
availability at a desired time with guaranteed linear convergence. We
demonstrate the efficacy of our probabilistic learning approach in a model
predictive control software execution case study. The method exhibits rapid
convergence to an accurate prediction of hardware resource utilization of the
controller. The method can be broadly applied to any software to predict
cyber-physical context-dependent performance at an arbitrary time. | [
"Georgiy A. Bondar",
"Robert Gifford",
"Linh Thi Xuan Phan",
"Abhishek Halder"
] | 2023-10-01 07:35:12 | http://arxiv.org/abs/2310.00604v2 | http://arxiv.org/pdf/2310.00604v2 | 2310.00604v2 |
Quantum generative adversarial learning in photonics | Quantum Generative Adversarial Networks (QGANs), an intersection of quantum
computing and machine learning, have attracted widespread attention due to
their potential advantages over classical analogs. However, in the current era
of Noisy Intermediate-Scale Quantum (NISQ) computing, it is essential to
investigate whether QGANs can perform learning tasks on near-term quantum
devices usually affected by noise and even defects. In this Letter, using a
programmable silicon quantum photonic chip, we experimentally demonstrate the
QGAN model in photonics for the first time, and investigate the effects of
noise and defects on its performance. Our results show that QGANs can generate
high-quality quantum data with a fidelity higher than 90\%, even under
conditions where up to half of the generator's phase shifters are damaged, or
all of the generator and discriminator's phase shifters are subjected to phase
noise up to 0.04$\pi$. Our work sheds light on the feasibility of implementing
QGANs on NISQ-era quantum hardware. | [
"Yizhi Wang",
"Shichuan Xue",
"Yaxuan Wang",
"Yong Liu",
"Jiangfang Ding",
"Weixu Shi",
"Dongyang Wang",
"Yingwen Liu",
"Xiang Fu",
"Guangyao Huang",
"Anqi Huang",
"Mingtang Deng",
"Junjie Wu"
] | 2023-10-01 06:08:21 | http://arxiv.org/abs/2310.00585v1 | http://arxiv.org/pdf/2310.00585v1 | 2310.00585v1 |
GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length | The evolving sophistication and intricacies of Large Language Models (LLMs)
yield unprecedented advancements, yet they simultaneously demand considerable
computational resources and incur significant costs. To alleviate these
challenges, this paper introduces a novel, simple, and effective method named
``GrowLength'' to accelerate the pretraining process of LLMs. Our method
progressively increases the training length throughout the pretraining phase,
thereby mitigating computational costs and enhancing efficiency. For instance,
it begins with a sequence length of 128 and progressively extends to 4096. This
approach enables models to process a larger number of tokens within limited
time frames, potentially boosting their performance. In other words, the
efficiency gain derives from training with shorter sequences, which optimizes the
utilization of resources. Our extensive experiments with various
state-of-the-art LLMs have revealed that models trained using our method not
only converge more swiftly but also exhibit superior performance metrics
compared to those trained with existing methods. Furthermore, our method for
LLMs pretraining acceleration does not require any additional engineering
efforts, making it a practical solution in the realm of LLMs. | [
"Hongye Jin",
"Xiaotian Han",
"Jingfeng Yang",
"Zhimeng Jiang",
"Chia-Yuan Chang",
"Xia Hu"
] | 2023-10-01 05:25:24 | http://arxiv.org/abs/2310.00576v1 | http://arxiv.org/pdf/2310.00576v1 | 2310.00576v1 |
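The entry above grows the training sequence length from 128 to 4096 during pretraining. A minimal sketch of such a schedule follows; the stage boundaries and lengths are illustrative assumptions, not the paper's exact schedule.

```python
def growing_length_schedule(total_steps, stages=((0.25, 128), (0.25, 512),
                                                 (0.25, 2048), (0.25, 4096))):
    """Map a training step to a sequence length that grows over pretraining.
    Each stage is (fraction of total steps, sequence length)."""
    boundaries, acc = [], 0.0
    for frac, length in stages:
        acc += frac
        boundaries.append((acc * total_steps, length))

    def seq_len(step):
        for bound, length in boundaries:
            if step < bound:
                return length
        return boundaries[-1][1]

    return seq_len

seq_len = growing_length_schedule(100_000)
for step in (0, 30_000, 60_000, 99_999):
    print(step, seq_len(step))  # 128 -> 512 -> 2048 -> 4096
```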
SIMD Dataflow Co-optimization for Efficient Neural Networks Inferences on CPUs | We address the challenges associated with deploying neural networks on CPUs,
with a particular focus on minimizing inference time while maintaining
accuracy. Our novel approach is to use the dataflow (i.e., computation order)
of a neural network to explore data reuse opportunities using heuristic-guided
analysis and a code generation framework, which enables exploration of various
Single Instruction, Multiple Data (SIMD) implementations to achieve optimized
neural network execution. Our results demonstrate that the dataflow that keeps
outputs in SIMD registers while also maximizing both input and weight reuse
consistently yields the best performance for a wide variety of inference
workloads, achieving up to 3x speedup for 8-bit neural networks, and up to 4.8x
speedup for binary neural networks, respectively, over today's optimized
neural network implementations. | [
"Cyrus Zhou",
"Zack Hassman",
"Ruize Xu",
"Dhirpal Shah",
"Vaugnn Richard",
"Yanjing Li"
] | 2023-10-01 05:11:54 | http://arxiv.org/abs/2310.00574v2 | http://arxiv.org/pdf/2310.00574v2 | 2310.00574v2 |
Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion | Consistency Models (CM) (Song et al., 2023) accelerate score-based diffusion
model sampling at the cost of sample quality but lack a natural way to
trade-off quality for speed. To address this limitation, we propose Consistency
Trajectory Model (CTM), a generalization encompassing CM and score-based models
as special cases. CTM trains a single neural network that can -- in a single
forward pass -- output scores (i.e., gradients of log-density) and enables
unrestricted traversal between any initial and final time along the Probability
Flow Ordinary Differential Equation (ODE) in a diffusion process. CTM enables
the efficient combination of adversarial training and denoising score matching
loss to enhance performance and achieves new state-of-the-art FIDs for
single-step diffusion model sampling on CIFAR-10 (FID 1.73) and ImageNet at
64x64 resolution (FID 2.06). CTM also enables a new family of sampling schemes,
both deterministic and stochastic, involving long jumps along the ODE solution
trajectories. It consistently improves sample quality as computational budgets
increase, avoiding the degradation seen in CM. Furthermore, CTM's access to the
score accommodates all diffusion model inference techniques, including exact
likelihood computation. | [
"Dongjun Kim",
"Chieh-Hsin Lai",
"Wei-Hsiang Liao",
"Naoki Murata",
"Yuhta Takida",
"Toshimitsu Uesaka",
"Yutong He",
"Yuki Mitsufuji",
"Stefano Ermon"
] | 2023-10-01 05:07:17 | http://arxiv.org/abs/2310.02279v1 | http://arxiv.org/pdf/2310.02279v1 | 2310.02279v1 |
LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations | Machine learning models have undeniably achieved impressive performance
across a range of applications. However, their often perceived black-box
nature, and lack of transparency in decision-making, have raised concerns about
understanding their predictions. To tackle this challenge, researchers have
developed methods to provide explanations for machine learning models. In this
paper, we introduce LaPLACE-Explainer, designed to provide probabilistic
cause-and-effect explanations for any classifier operating on tabular data, in
a human-understandable manner. The LaPLACE-Explainer component leverages the
concept of a Markov blanket to establish statistical boundaries between
relevant and non-relevant features automatically. This approach results in the
automatic generation of optimal feature subsets, serving as explanations for
predictions. Importantly, this eliminates the need to predetermine a fixed
number N of top features as explanations, enhancing the flexibility and
adaptability of our methodology. Through the incorporation of conditional
probabilities, our approach offers probabilistic causal explanations and
outperforms LIME and SHAP (well-known model-agnostic explainers) in terms of
local accuracy and consistency of explained features. LaPLACE's soundness,
consistency, local accuracy, and adaptability are rigorously validated across
various classification models. Furthermore, we demonstrate the practical
utility of these explanations via experiments with both simulated and
real-world datasets. This encompasses addressing trust-related issues, such as
evaluating prediction reliability, facilitating model selection, enhancing
trustworthiness, and identifying fairness-related concerns within classifiers. | [
"Sein Minn"
] | 2023-10-01 04:09:59 | http://arxiv.org/abs/2310.00570v1 | http://arxiv.org/pdf/2310.00570v1 | 2310.00570v1 |
Quantum-Based Feature Selection for Multi-classification Problem in Complex Systems with Edge Computing | The complex systems with edge computing require a huge amount of
multi-feature data to extract appropriate insights for their decision making,
so it is important to find a feasible feature selection method to improve the
computational efficiency and save the resource consumption. In this paper, a
quantum-based feature selection algorithm for the multi-classification problem,
namely, QReliefF, is proposed, which can effectively reduce the complexity of
algorithm and improve its computational efficiency. First, all features of each
sample are encoded into a quantum state by performing the operations CMP and $R_y$,
and then the amplitude estimation is applied to calculate the similarity
between any two quantum states (i.e., two samples). According to the
similarities, the Grover-Long method is utilized to find the nearest k neighbor
samples, and then the weight vector is updated. After a certain number of
iterations through the above process, the desired features can be selected with
regard to the final weight vector and the threshold $\tau$. Compared with the
classical ReliefF algorithm, our algorithm reduces the complexity of similarity
calculation from $O(MN)$ to $O(M)$, the complexity of finding the nearest neighbor
from $O(M)$ to $O(\sqrt{M})$, and resource consumption from $O(MN)$ to $O(M \log N)$.
Meanwhile, compared with the quantum Relief algorithm, our algorithm is
superior in finding the nearest neighbor, reducing the complexity from $O(M)$ to
$O(\sqrt{M})$. Finally, in order to verify the feasibility of our algorithm, a
simulation experiment based on Rigetti with a simple example is performed. | [
"Wenjie Liu",
"Junxiu Chen",
"Yuxiang Wang",
"Peipei Gao",
"Zhibin Lei",
"Xu Ma"
] | 2023-10-01 03:57:13 | http://arxiv.org/abs/2310.01443v1 | http://arxiv.org/pdf/2310.01443v1 | 2310.01443v1 |
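For context on the entry above, the sketch below implements the simpler classical Relief weight update (for binary labels) that ReliefF generalizes and QReliefF accelerates; the quantum encoding, amplitude estimation, and Grover-Long search are not shown, and the toy data is an assumption.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Classical Relief feature weighting (binary labels): reward features
    that separate an instance from its nearest miss more than from its
    nearest hit."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)  # L1 distances to all samples
        dists[i] = np.inf                     # exclude the sample itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(diff, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(1)
X = rng.random((200, 5))
y = (X[:, 0] > 0.5).astype(int)  # only feature 0 is informative
print(relief(X, y).round(3))     # weight for feature 0 should dominate
```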
Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks | Recent works have shown that deep neural networks are vulnerable to
adversarial examples that find samples close to the original image but can make
the model misclassify. Even with access only to the model's output, an attacker
can employ black-box attacks to generate such adversarial examples. In this
work, we propose a simple and lightweight defense against black-box attacks by
adding random noise to hidden features at intermediate layers of the model at
inference time. Our theoretical analysis confirms that this method effectively
enhances the model's resilience against both score-based and decision-based
black-box attacks. Importantly, our defense does not necessitate adversarial
training and has minimal impact on accuracy, rendering it applicable to any
pre-trained model. Our analysis also reveals the significance of selectively
adding noise to different parts of the model based on the gradient of the
adversarial objective function, which can be varied during the attack. We
demonstrate the robustness of our defense against multiple black-box attacks
through extensive empirical experiments involving diverse models with various
architectures. | [
"Quang H. Nguyen",
"Yingjie Lao",
"Tung Pham",
"Kok-Seng Wong",
"Khoa D. Doan"
] | 2023-10-01 03:53:23 | http://arxiv.org/abs/2310.00567v1 | http://arxiv.org/pdf/2310.00567v1 | 2310.00567v1 |
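The entry above defends by adding random noise to hidden features at inference. A minimal PyTorch sketch of that idea follows, using forward hooks on fixed layers; the paper's gradient-guided choice of where to add noise is not reproduced, and the layers and noise scale here are assumptions.

```python
import torch
import torchvision.models as models

def add_feature_noise(module, inputs, output, sigma=0.05):
    """Forward hook: perturb intermediate features with Gaussian noise
    at inference time. Returning a tensor replaces the layer's output."""
    return output + sigma * torch.randn_like(output)

model = models.resnet18(weights=None).eval()
# Attach noise to a few intermediate layers; the layer choice and sigma are
# illustrative, not the paper's gradient-based selection.
for layer in (model.layer2, model.layer3):
    layer.register_forward_hook(add_feature_noise)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(model(x).argmax(dim=1))  # randomized prediction pipeline
```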
Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models | Credit and risk assessments are cornerstones of the financial landscape,
impacting both individual futures and broader societal constructs. Existing
credit scoring models often exhibit limitations stemming from knowledge myopia
and task isolation. In response, we formulate three hypotheses and undertake an
extensive case study to investigate LLMs' viability in credit assessment. Our
empirical investigations unveil LLMs' ability to overcome the limitations
inherent in conventional models. We introduce a novel benchmark curated for
credit assessment purposes, fine-tune a specialized Credit and Risk Assessment
Large Language Model (CALM), and rigorously examine the biases that LLMs may
harbor. Our findings underscore LLMs' potential in revolutionizing credit
assessment, showcasing their adaptability across diverse financial evaluations,
and emphasizing the critical importance of impartial decision-making in the
financial sector. Our datasets, models, and benchmarks are open-sourced for
other researchers. | [
"Duanyu Feng",
"Yongfu Dai",
"Jimin Huang",
"Yifang Zhang",
"Qianqian Xie",
"Weiguang Han",
"Alejandro Lopez-Lira",
"Hao Wang"
] | 2023-10-01 03:50:34 | http://arxiv.org/abs/2310.00566v1 | http://arxiv.org/pdf/2310.00566v1 | 2310.00566v1 |
Discrete Choice Multi-Armed Bandits | This paper establishes a connection between a category of discrete choice
models and the realms of online learning and multiarmed bandit algorithms. Our
contributions can be summarized in two key aspects. Firstly, we furnish
sublinear regret bounds for a comprehensive family of algorithms, encompassing
the Exp3 algorithm as a particular case. Secondly, we introduce a novel family
of adversarial multiarmed bandit algorithms, drawing inspiration from the
generalized nested logit models initially introduced by Wen (2001). These
algorithms offer users the flexibility to fine-tune the model extensively, as
they can be implemented efficiently due to their closed-form sampling
distribution probabilities. To demonstrate the practical implementation of our
algorithms, we present numerical experiments, focusing on the stochastic bandit
case. | [
"Emerson Melo",
"David Müller"
] | 2023-10-01 03:41:04 | http://arxiv.org/abs/2310.00562v1 | http://arxiv.org/pdf/2310.00562v1 | 2310.00562v1 |
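The entry above notes that the Exp3 algorithm arises as a special case. For reference, here is the textbook Exp3 update (exponential weights with importance-weighted reward estimates); the paper's generalized nested-logit family is not shown, and the reward data is synthetic.

```python
import numpy as np

def exp3(rewards, gamma=0.1, seed=0):
    """Exp3 for adversarial bandits. rewards is a (T, K) array in [0, 1]."""
    rng = np.random.default_rng(seed)
    T, K = rewards.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / K  # mix with uniform exploration
        arm = rng.choice(K, p=p)
        r = rewards[t, arm]
        total += r
        w[arm] *= np.exp(gamma * (r / p[arm]) / K)  # importance-weighted update
        w /= w.max()  # renormalize to avoid overflow; probabilities unchanged
    return total

rng = np.random.default_rng(1)
rewards = rng.uniform(0, 1, size=(10_000, 5))
print(exp3(rewards))
```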
Horizontal Class Backdoor to Deep Learning | All existing backdoor attacks to deep learning (DL) models belong to the
vertical class backdoor (VCB). That is, any sample from a class will activate
the implanted backdoor in the presence of the secret trigger, regardless of
source-class-agnostic or source-class-specific backdoor. Current trends of
existing defenses are overwhelmingly devised for VCB attacks especially the
source-class-agnostic backdoor, which essentially neglects other potential
simple but general backdoor types, thus giving false security implications. It
is thus urgent to discover unknown backdoor types.
This work reveals a new, simple, and general horizontal class backdoor (HCB)
attack. We show that the backdoor can be naturally bounded with innocuous
natural features that are common and pervasive in the real world. Note that an
innocuous feature (e.g., expression) is irrelevant to the main task of the
model (e.g., recognizing a person from one to another). The innocuous feature
spans across classes horizontally but is exhibited by partial samples per class
-- satisfying the horizontal class (HC) property. Only when the trigger is
concurrently presented with the HC innocuous feature, can the backdoor be
effectively activated. Extensive experiments on tasks of 1) MNIST, 2) facial
recognition, 3) traffic sign recognition, and 4) object detection demonstrate
that the HCB is highly efficient and effective, achieving high attack success
rates. We extensively evaluate the HCB's evasiveness against a chronological
series of 9 influential countermeasures: Fine-Pruning (RAID '18), STRIP (ACSAC
'19), Neural Cleanse (Oakland '19), ABS (CCS '19), Februus (ACSAC '20), MNTD
(Oakland '21), SCAn (USENIX SEC '21), MOTH (Oakland '22), and Beatrix (NDSS
'23), none of which succeeds even when the simplest trigger is used. | [
"Hua Ma",
"Shang Wang",
"Yansong Gao"
] | 2023-10-01 01:45:36 | http://arxiv.org/abs/2310.00542v1 | http://arxiv.org/pdf/2310.00542v1 | 2310.00542v1 |
Robust Nonparametric Hypothesis Testing to Understand Variability in Training Neural Networks | Training a deep neural network (DNN) often involves stochastic optimization,
which means each run will produce a different model. Several works suggest this
variability is negligible when models have the same performance, which in the
case of classification is test accuracy. However, models with similar test
accuracy may not be computing the same function. We propose a new measure of
closeness between classification models based on the output of the network
before thresholding. Our measure is based on a robust hypothesis-testing
framework and can be adapted to other quantities derived from trained models. | [
"Sinjini Banerjee",
"Reilly Cannon",
"Tim Marrinan",
"Tony Chiang",
"Anand D. Sarwate"
] | 2023-10-01 01:44:35 | http://arxiv.org/abs/2310.00541v1 | http://arxiv.org/pdf/2310.00541v1 | 2310.00541v1 |
Thompson Exploration with Best Challenger Rule in Best Arm Identification | This paper studies the fixed-confidence best arm identification (BAI) problem
in the bandit framework in the canonical single-parameter exponential models.
For this problem, many policies have been proposed, but most of them require
solving an optimization problem at every round and/or are forced to explore an
arm at least a certain number of times except those restricted to the Gaussian
model. To address these limitations, we propose a novel policy that combines
Thompson sampling with a computationally efficient approach known as the best
challenger rule. While Thompson sampling was originally considered for
maximizing the cumulative reward, we demonstrate that it can be used to
naturally explore arms in BAI without forcing it. We show that our policy is
asymptotically optimal for any two-armed bandit problem and achieves near-
optimality for general $K$-armed bandit problems for $K\geq 3$. Nevertheless,
in numerical experiments, our policy shows competitive performance compared to
asymptotically optimal policies in terms of sample complexity while requiring
a lower computation cost. In addition, we highlight the advantages of our policy
by comparing it to the concept of $\beta$-optimality, a relaxed notion of
asymptotic optimality commonly considered in the analysis of a class of
policies including the proposed one. | [
"Jongyeong Lee",
"Junya Honda",
"Masashi Sugiyama"
] | 2023-10-01 01:37:02 | http://arxiv.org/abs/2310.00539v1 | http://arxiv.org/pdf/2310.00539v1 | 2310.00539v1 |
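The entry above combines Thompson sampling with a best-challenger rule for best arm identification. The sketch below shows only the Thompson-sampling exploration part in a Gaussian bandit, with a naive fixed budget in place of the paper's stopping and challenger logic; all names and parameters are illustrative.

```python
import numpy as np

def thompson_bai(means, sigma=1.0, budget=20_000, seed=0):
    """Thompson-sampling exploration for Gaussian best arm identification:
    sample each arm's posterior mean, pull the argmax, and finally recommend
    the arm with the highest empirical mean."""
    rng = np.random.default_rng(seed)
    K = len(means)
    n = np.ones(K)  # pull counts (one initial pull per arm)
    s = np.array(means) + rng.normal(0, sigma, K)  # initial reward sums
    for _ in range(budget):
        theta = rng.normal(s / n, sigma / np.sqrt(n))  # posterior samples
        arm = int(np.argmax(theta))
        s[arm] += means[arm] + rng.normal(0, sigma)  # observe a noisy reward
        n[arm] += 1
    return int(np.argmax(s / n)), n

best, pulls = thompson_bai([0.2, 0.5, 0.45])
print("recommended arm:", best, "pulls per arm:", pulls)
```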
JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention | We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical
framework to understand the training procedure of multilayer Transformer
architectures. This is achieved by integrating out the self-attention layer in
Transformers, producing a modified dynamics of MLP layers only. JoMA removes
unrealistic assumptions in previous analysis (e.g., lack of residual
connection) and predicts that the attention first becomes sparse (to learn
salient tokens), then dense (to learn less salient tokens) in the presence of
nonlinear activations, while in the linear case, it is consistent with existing
works that show attention becomes sparse over time. We leverage JoMA to
qualitatively explain how tokens are combined to form hierarchies in
multilayer Transformers, when the input tokens are generated by a latent
hierarchical generative model. Experiments on models trained from real-world
datasets (Wikitext2/Wikitext103) and various pre-trained models (OPT, Pythia)
verify our theoretical findings. | [
"Yuandong Tian",
"Yiping Wang",
"Zhenyu Zhang",
"Beidi Chen",
"Simon Du"
] | 2023-10-01 01:21:35 | http://arxiv.org/abs/2310.00535v2 | http://arxiv.org/pdf/2310.00535v2 | 2310.00535v2 |
SELF: Language-Driven Self-Evolution for Large Language Model | Large Language Models (LLMs) have showcased remarkable versatility across
diverse domains. However, the pathway toward autonomous model development, a
cornerstone for achieving human-level learning and advancing autonomous AI,
remains largely uncharted. We introduce an innovative approach, termed "SELF"
(Self-Evolution with Language Feedback). This methodology empowers LLMs to
undergo continual self-evolution. Furthermore, SELF employs language-based
feedback as a versatile and comprehensive evaluative tool, pinpointing areas
for response refinement and bolstering the stability of self-evolutionary
training. Starting with meta-skill learning, SELF acquires foundational
meta-skills with a focus on self-feedback and self-refinement. These
meta-skills are critical, guiding the model's subsequent self-evolution through
a cycle of perpetual training with self-curated data, thereby enhancing its
intrinsic abilities. Given unlabeled instructions, SELF equips the model with
the capability to autonomously generate and interactively refine responses.
This synthesized training data is subsequently filtered and utilized for
iterative fine-tuning, enhancing the model's capabilities. Experimental results
on representative benchmarks substantiate that SELF can progressively advance
its inherent abilities without the requirement of human intervention, thereby
indicating a viable pathway for autonomous model evolution. Additionally, SELF
can employ online self-refinement strategy to produce responses of superior
quality. In essence, the SELF framework signifies a progressive step towards
autonomous LLM development, transforming the LLM from a mere passive recipient
of information into an active participant in its own evolution. | [
"Jianqiao Lu",
"Wanjun Zhong",
"Wenyong Huang",
"Yufei Wang",
"Fei Mi",
"Baojun Wang",
"Weichao Wang",
"Lifeng Shang",
"Qun Liu"
] | 2023-10-01 00:52:24 | http://arxiv.org/abs/2310.00533v2 | http://arxiv.org/pdf/2310.00533v2 | 2310.00533v2 |
Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference | Estimation and inference in statistics pose significant challenges when data
are collected adaptively. Even in linear models, the Ordinary Least Squares
(OLS) estimator may fail to exhibit asymptotic normality for single coordinate
estimation and have inflated error. This issue is highlighted by a recent
minimax lower bound, which shows that the error of estimating a single
coordinate can be enlarged by a multiple of $\sqrt{d}$ when data are allowed to
be arbitrarily adaptive, compared with the case when they are i.i.d. Our work
explores this striking difference in estimation performance between utilizing
i.i.d. and adaptive data. We investigate how the degree of adaptivity in data
collection impacts the performance of estimating a low-dimensional parameter
component in high-dimensional linear models. We identify conditions on the data
collection mechanism under which the estimation error for a low-dimensional
parameter component matches its counterpart in the i.i.d. setting, up to a
factor that depends on the degree of adaptivity. We show that OLS or OLS on
centered data can achieve this matching error. In addition, we propose a novel
estimator for single coordinate inference via solving a Two-stage Adaptive
Linear Estimating equation (TALE). Under a weaker form of adaptivity in data
collection, we establish an asymptotic normality property of the proposed
estimator. | [
"Licong Lin",
"Mufang Ying",
"Suvrojit Ghosh",
"Koulik Khamaru",
"Cun-Hui Zhang"
] | 2023-10-01 00:45:09 | http://arxiv.org/abs/2310.00532v1 | http://arxiv.org/pdf/2310.00532v1 | 2310.00532v1 |
Are Graph Neural Networks Optimal Approximation Algorithms? | In this work we design graph neural network architectures that can be used to
obtain optimal approximation algorithms for a large class of combinatorial
optimization problems using powerful algorithmic tools from semidefinite
programming (SDP). Concretely, we prove that polynomial-sized message passing
algorithms can represent the most powerful polynomial time algorithms for Max
Constraint Satisfaction Problems assuming the Unique Games Conjecture. We
leverage this result to construct efficient graph neural network architectures,
OptGNN, that obtain high-quality approximate solutions on landmark
combinatorial optimization problems such as Max Cut and maximum independent
set. Our approach achieves strong empirical results across a wide range of
real-world and synthetic datasets against both neural baselines and classical
algorithms. Finally, we take advantage of OptGNN's ability to capture convex
relaxations to design an algorithm for producing dual certificates of
optimality (bounds on the optimal solution) from the learned embeddings of
OptGNN. | [
"Morris Yau",
"Eric Lu",
"Nikolaos Karalias",
"Jessica Xu",
"Stefanie Jegelka"
] | 2023-10-01 00:12:31 | http://arxiv.org/abs/2310.00526v3 | http://arxiv.org/pdf/2310.00526v3 | 2310.00526v3 |
Enhancing Efficiency and Privacy in Memory-Based Malware Classification through Feature Selection | Malware poses a significant security risk to individuals, organizations, and
critical infrastructure by compromising systems and data. Leveraging memory
dumps that offer snapshots of computer memory can aid the analysis and
detection of malicious content, including malware. To improve the efficacy and
address privacy concerns in malware classification systems, feature selection
can play a critical role as it is capable of identifying the most relevant
features, thus, minimizing the amount of data fed to classifiers. In this
study, we employ three feature selection approaches to identify significant
features from memory content and use them with a diverse set of classifiers to
enhance the performance and privacy of the classification task. Comprehensive
experiments are conducted across three levels of malware classification tasks:
i) binary-level benign or malware classification, ii) malware type
classification (including Trojan horse, ransomware, and spyware), and iii)
malware family classification within each family (with varying numbers of
classes). Results demonstrate that the feature selection strategy,
incorporating mutual information and other methods, enhances classifier
performance for all tasks. Notably, selecting only 25\% and 50\% of input
features using Mutual Information and then employing the Random Forest
classifier yields the best results. Our findings reinforce the importance of
feature selection for malware classification and provide valuable insights for
identifying appropriate approaches. By advancing the effectiveness and privacy
of malware classification systems, this research contributes to safeguarding
against security threats posed by malicious software. | [
"Salim Sazzed",
"Sharif Ullah"
] | 2023-09-30 22:36:31 | http://arxiv.org/abs/2310.00516v2 | http://arxiv.org/pdf/2310.00516v2 | 2310.00516v2 |
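The entry above reports its best results when keeping 25% or 50% of features by mutual information and classifying with a random forest. A minimal scikit-learn sketch of that pipeline follows, on stand-in synthetic data rather than memory-dump features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; the study uses features extracted from memory dumps.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep the top 25% of features by mutual information, then classify.
selector = SelectPercentile(score_func=mutual_info_classif, percentile=25)
X_tr_sel = selector.fit_transform(X_tr, y_tr)
X_te_sel = selector.transform(X_te)

clf = RandomForestClassifier(random_state=0).fit(X_tr_sel, y_tr)
print("accuracy with 25% of features:",
      accuracy_score(y_te, clf.predict(X_te_sel)))
```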
Nonparametric active learning for cost-sensitive classification | Cost-sensitive learning is a common type of machine learning problem where
different errors of prediction incur different costs. In this paper, we design
a generic nonparametric active learning algorithm for cost-sensitive
classification. Based on the construction of confidence bounds for the expected
prediction cost functions of each label, our algorithm sequentially selects the
most informative vector points. Then it interacts with them by only querying
the costs of prediction that could be the smallest. We prove that our algorithm
attains optimal rate of convergence in terms of the number of interactions with
the feature vector space. Furthermore, in terms of a general version of
Tsybakov's noise assumption, the gain over the corresponding passive learning
is explicitly characterized by the probability mass of the decision boundary.
Additionally, we prove the near-optimality of obtained upper bounds by
providing matching (up to logarithmic factor) lower bounds. | [
"Boris Ndjia Njike",
"Xavier Siebert"
] | 2023-09-30 22:19:21 | http://arxiv.org/abs/2310.00511v1 | http://arxiv.org/pdf/2310.00511v1 | 2310.00511v1 |
Unveiling the Unborn: Advancing Fetal Health Classification through Machine Learning | Fetal health classification is a critical task in obstetrics, enabling early
identification and management of potential health problems. However, it remains
challenging due to data complexity and limited labeled samples. This research
paper presents a novel machine-learning approach for fetal health
classification, leveraging a LightGBM classifier trained on a comprehensive
dataset. The proposed model achieves an impressive accuracy of 98.31% on a test
set. Our findings demonstrate the potential of machine learning in enhancing
fetal health classification, offering a more objective and accurate assessment.
Notably, our approach combines various features, such as fetal heart rate,
uterine contractions, and maternal blood pressure, to provide a comprehensive
evaluation. This methodology holds promise for improving early detection and
treatment of fetal health issues, ensuring better outcomes for both mothers and
babies. Beyond the high accuracy achieved, the novelty of our approach lies in
its comprehensive feature selection and assessment methodology. By
incorporating multiple data points, our model offers a more holistic and
reliable evaluation compared to traditional methods. This research has
significant implications in the field of obstetrics, paving the way for
advancements in early detection and intervention of fetal health concerns.
Future work involves validating the model on a larger dataset and developing a
clinical application. Ultimately, we anticipate that our research will
revolutionize the assessment and management of fetal health, contributing to
improved healthcare outcomes for expectant mothers and their babies. | [
"Sujith K Mandala"
] | 2023-09-30 22:02:51 | http://arxiv.org/abs/2310.00505v1 | http://arxiv.org/pdf/2310.00505v1 | 2310.00505v1 |
Exploring SAM Ablations for Enhancing Medical Segmentation in Radiology and Pathology | Medical imaging plays a critical role in the diagnosis and treatment planning
of various medical conditions, with radiology and pathology heavily reliant on
precise image segmentation. The Segment Anything Model (SAM) has emerged as a
promising framework for addressing segmentation challenges across different
domains. In this white paper, we delve into SAM, breaking down its fundamental
components and uncovering the intricate interactions between them. We also
explore the fine-tuning of SAM and assess its profound impact on the accuracy
and reliability of segmentation results, focusing on applications in radiology
(specifically, brain tumor segmentation) and pathology (specifically, breast
cancer segmentation). Through a series of carefully designed experiments, we
analyze SAM's potential application in the field of medical imaging. We aim to
bridge the gap between advanced segmentation techniques and the demanding
requirements of healthcare, shedding light on SAM's transformative
capabilities. | [
"Amin Ranem",
"Niklas Babendererde",
"Moritz Fuchs",
"Anirban Mukhopadhyay"
] | 2023-09-30 21:58:12 | http://arxiv.org/abs/2310.00504v1 | http://arxiv.org/pdf/2310.00504v1 | 2310.00504v1 |
Automated Gait Generation For Walking, Soft Robotic Quadrupeds | Gait generation for soft robots is challenging due to the nonlinear dynamics
and high dimensional input spaces of soft actuators. Limitations in soft
robotic control and perception force researchers to hand-craft open loop
controllers for gait sequences, which is a non-trivial process. Moreover, short
soft actuator lifespans and natural variations in actuator behavior limit
machine learning techniques to settings that can be learned on the same time
scales as robot deployment. Lastly, simulation is not always possible, due to
heterogeneity and nonlinearity in soft robotic materials and their dynamics
change due to wear. We present a sample-efficient, simulation free, method for
self-generating soft robot gaits, using very minimal computation. This
technique is demonstrated on a motorized soft robotic quadruped that walks
using four legs constructed from 16 "handed shearing auxetic" (HSA) actuators.
To manage the dimension of the search space, gaits are composed of two
sequential sets of leg motions selected from 7 possible primitives. Pairs of
primitives are executed on one leg at a time; we then select the
best-performing pair to execute while moving on to subsequent legs. This method
-- which uses no simulation, sophisticated computation, or user input --
consistently generates good translation and rotation gaits in as little as 4
minutes of hardware experimentation, outperforming hand-crafted gaits. This is
the first demonstration of completely autonomous gait generation in a soft
robot. | [
"Jake Ketchum",
"Sophia Schiffer",
"Muchen Sun",
"Pranav Kaarthik",
"Ryan L. Truby",
"Todd D. Murphey"
] | 2023-09-30 21:31:30 | http://arxiv.org/abs/2310.00498v2 | http://arxiv.org/pdf/2310.00498v2 | 2310.00498v2 |
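The entry above selects gaits by testing pairs of 7 motion primitives one leg at a time and keeping the best-performing pair before moving to the next leg. A toy sketch of that greedy leg-by-leg search follows, with a stand-in evaluation function in place of hardware rollouts; all names are illustrative.

```python
import itertools
import random

PRIMITIVES = list(range(7))  # 7 abstract leg-motion primitives, as in the entry
N_LEGS = 4

def evaluate(gait):
    """Stand-in for a hardware rollout that returns measured displacement.
    On the real robot this would execute the gait and track the body."""
    random.seed(hash(tuple(map(tuple, gait))))  # deterministic toy score
    return random.random()

# Greedy leg-by-leg search: fix the best primitive pair for one leg,
# then move on to the next leg.
gait = [(0, 0)] * N_LEGS
for leg in range(N_LEGS):
    best_pair = max(
        itertools.product(PRIMITIVES, repeat=2),
        key=lambda pair: evaluate(gait[:leg] + [pair] + gait[leg + 1:]))
    gait[leg] = best_pair
print("selected gait:", gait, "score:", evaluate(gait))
```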
The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks | We introduce the Sparsity Roofline, a visual performance model for evaluating
sparsity in neural networks. The Sparsity Roofline jointly models network
accuracy, sparsity, and predicted inference speedup. Our approach does not
require implementing and benchmarking optimized kernels, and the predicted
speedup is equal to what would be measured when the corresponding dense and
sparse kernels are equally well-optimized. We achieve this through a novel
analytical model for predicting sparse network performance, and validate the
predicted speedup using several real-world computer vision architectures pruned
across a range of sparsity patterns and degrees. We demonstrate the utility and
ease-of-use of our model through two case studies: (1) we show how machine
learning researchers can predict the performance of unimplemented or
unoptimized block-structured sparsity patterns, and (2) we show how hardware
designers can predict the performance implications of new sparsity patterns and
sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps
performance experts identify sparsity regimes with the highest performance
potential. | [
"Cameron Shinn",
"Collin McCarthy",
"Saurav Muralidharan",
"Muhammad Osama",
"John D. Owens"
] | 2023-09-30 21:29:31 | http://arxiv.org/abs/2310.00496v1 | http://arxiv.org/pdf/2310.00496v1 | 2310.00496v1 |
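The entry above predicts sparse speedups without benchmarking kernels. The toy model below illustrates the general roofline reasoning involved (runtime bounded by compute or by memory traffic, with sparse formats paying index overhead); the hardware numbers, overhead factor, and scaling assumption are mine, not the paper's analytical model.

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    """Generic roofline: runtime is bounded by compute or by memory traffic."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

def predicted_sparse_speedup(dense_flops, dense_bytes, sparsity,
                             index_overhead=1.0,
                             peak_flops=100e12, peak_bw=1.5e12):
    """Toy estimate in the spirit of a sparsity roofline: FLOPs shrink with
    density, while sparse formats pay extra bytes for indices."""
    density = 1.0 - sparsity
    dense_t = roofline_time(dense_flops, dense_bytes, peak_flops, peak_bw)
    sparse_t = roofline_time(dense_flops * density,
                             dense_bytes * density * (1 + index_overhead),
                             peak_flops, peak_bw)
    return dense_t / sparse_t

for s in (0.5, 0.9, 0.95):
    print(f"sparsity {s:.0%}: predicted speedup "
          f"{predicted_sparse_speedup(2e9, 5e7, s):.2f}x")
```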
From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning | Large Language Models (LLMs) have achieved remarkable success, demonstrating
powerful instruction-following capabilities across diverse tasks. Instruction
fine-tuning is critical in enabling LLMs to align with user intentions and
effectively follow instructions. In this work, we investigate how instruction
fine-tuning modifies pre-trained models, focusing on two perspectives:
instruction recognition and knowledge evolution. To study the behavior shift of
LLMs, we employ a suite of local and global explanation methods, including a
gradient-based approach for input-output attribution and techniques for
interpreting patterns and concepts in self-attention and feed-forward layers.
Our findings reveal three significant impacts of instruction fine-tuning: 1) It
empowers LLMs to better recognize the instruction parts from user prompts,
thereby facilitating high-quality response generation and addressing the
``lost-in-the-middle'' issue observed in pre-trained models; 2) It aligns the
knowledge stored in feed-forward layers with user-oriented tasks, exhibiting
minimal shifts across linguistic levels. 3) It facilitates the learning of
word-word relations with instruction verbs through the self-attention
mechanism, particularly in the lower and middle layers, indicating enhanced
recognition of instruction words. These insights contribute to a deeper
understanding of the behavior shifts in LLMs after instruction fine-tuning and
lay the groundwork for future research aimed at interpreting and optimizing
LLMs for various applications. We will release our code and data soon. | [
"Xuansheng Wu",
"Wenlin Yao",
"Jianshu Chen",
"Xiaoman Pan",
"Xiaoyang Wang",
"Ninghao Liu",
"Dong Yu"
] | 2023-09-30 21:16:05 | http://arxiv.org/abs/2310.00492v1 | http://arxiv.org/pdf/2310.00492v1 | 2310.00492v1 |
Dynamic DAG Discovery for Interpretable Imitation Learning | Imitation learning, which learns agent policy by mimicking expert
demonstration, has shown promising results in many applications such as medical
treatment regimes and self-driving vehicles. However, it remains a difficult
task to interpret control policies learned by the agent. Difficulties mainly
come from two aspects: 1) agents in imitation learning are usually implemented
as deep neural networks, which are black-box models and lack interpretability;
2) the latent causal mechanism behind agents' decisions may vary along the
trajectory, rather than staying static throughout time steps. To increase
transparency and offer better interpretability of the neural agent, we propose
to expose its captured knowledge in the form of a directed acyclic causal
graph, with nodes being action and state variables and edges denoting the
causal relations behind predictions. Furthermore, we design this causal
discovery process to be state-dependent, enabling it to model the dynamics in
latent causal graphs. Concretely, we conduct causal discovery from the
perspective of Granger causality and propose a self-explainable imitation
learning framework. The proposed framework is composed of three
parts: a dynamic causal discovery module, a causality encoding module, and a
prediction module, and is trained in an end-to-end manner. After the model is
learned, we can obtain causal relations among states and action variables
behind its decisions, exposing policies learned by it. Experimental results on
both synthetic and real-world datasets demonstrate the effectiveness of the
proposed framework in learning the dynamic causal graphs for understanding the
decision-making of imitation learning meanwhile maintaining high prediction
accuracy. | [
"ianxiang Zhao",
"Wenchao Yu",
"Suhang Wang",
"Lu Wang",
"Xiang Zhang",
"Yuncong Chen",
"Yanchi Liu",
"Wei Cheng",
"Haifeng Chen"
] | 2023-09-30 20:59:42 | http://arxiv.org/abs/2310.00489v2 | http://arxiv.org/pdf/2310.00489v2 | 2310.00489v2 |
On Memorization and Privacy risks of Sharpness Aware Minimization | In many recent works, there is an increased focus on designing algorithms
that seek flatter optima for neural network loss optimization as there is
empirical evidence that it leads to better generalization performance in many
datasets. In this work, we dissect these performance gains through the lens of
data memorization in overparameterized models. We define a new metric that
helps us identify on which specific data points algorithms seeking flatter
optima do better than vanilla SGD. We find that the generalization
gains achieved by Sharpness Aware Minimization (SAM) are particularly
pronounced for atypical data points, which necessitate memorization. This
insight helps us unearth higher privacy risks associated with SAM, which we
verify through exhaustive empirical evaluations. Finally, we propose mitigation
strategies to achieve a more desirable accuracy vs privacy tradeoff. | [
"Young In Kim",
"Pratiksha Agrawal",
"Johannes O. Royset",
"Rajiv Khanna"
] | 2023-09-30 20:59:07 | http://arxiv.org/abs/2310.00488v1 | http://arxiv.org/pdf/2310.00488v1 | 2310.00488v1 |
It HAS to be Subjective: Human Annotator Simulation via Zero-shot Density Estimation | Human annotator simulation (HAS) serves as a cost-effective substitute for
human evaluation such as data annotation and system assessment. Human
perception and behaviour during human evaluation exhibit inherent variability
due to diverse cognitive processes and subjective interpretations, which should
be taken into account in modelling to better mimic the way people perceive and
interact with the world. This paper introduces a novel meta-learning framework
that treats HAS as a zero-shot density estimation problem, which incorporates
human variability and allows for the efficient generation of human-like
annotations for unlabelled test inputs. Under this framework, we propose two
new model classes, conditional integer flows and conditional softmax flows, to
account for ordinal and categorical annotations, respectively. The proposed
method is evaluated on three real-world human evaluation tasks and shows
superior capability and efficiency to predict the aggregated behaviours of
human annotators, match the distribution of human annotations, and simulate the
inter-annotator disagreements. | [
"Wen Wu",
"Wenlin Chen",
"Chao Zhang",
"Philip C. Woodland"
] | 2023-09-30 20:54:59 | http://arxiv.org/abs/2310.00486v1 | http://arxiv.org/pdf/2310.00486v1 | 2310.00486v1 |
Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions | Unit testing is a commonly-used approach in software engineering to test the
correctness and robustness of written code. Unit tests are tests designed to
test small components of a codebase in isolation, such as an individual
function or method. Although unit tests have historically been written by human
programmers, recent advancements in AI, particularly LLMs, have driven
corresponding advances in automatic unit test generation. In this study, we
explore the effect of different prompts on the quality of unit tests generated
by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the
Quixbugs dataset, and we focus on prompting due to the ease with which users
can make use of our findings and observations. We find that the quality of the
generated unit tests is not sensitive to changes in minor details in the
prompts provided. However, we observe that Code Interpreter is often able to
effectively identify and correct mistakes in code that it writes, suggesting
that providing it runnable code to check the correctness of its outputs would
be beneficial, even though we find that it is already often able to generate
correctly-formatted unit tests. Our findings suggest that, when prompting
models similar to Code Interpreter, it is important to include the basic
information necessary to generate unit tests, but minor details are not as
important. | [
"Vincent Li",
"Nick Doiron"
] | 2023-09-30 20:36:23 | http://arxiv.org/abs/2310.00483v1 | http://arxiv.org/pdf/2310.00483v1 | 2310.00483v1 |
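To make concrete what unit tests on Quixbugs-style functions look like, here is an illustrative pytest example of the kind the study evaluates. The `gcd` function is our own reference implementation of a classic Quixbugs task, not output from Code Interpreter.

```python
import pytest

def gcd(a, b):
    """Euclid's algorithm: greatest common divisor of two non-negative ints."""
    while b:
        a, b = b, a % b
    return a

@pytest.mark.parametrize("a,b,expected", [
    (35, 21, 7),    # ordinary case
    (13, 13, 13),   # equal inputs
    (1, 1, 1),      # smallest positive case
    (0, 5, 5),      # zero-operand edge case
])
def test_gcd(a, b, expected):
    assert gcd(a, b) == expected
```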
Generative Design of inorganic compounds using deep diffusion language models | Due to the vast chemical space, discovering materials with a specific
function is challenging. Chemical formulas must conform to a set of
exacting criteria such as charge neutrality, balanced electronegativity,
synthesizability, and mechanical stability. In response to this formidable
task, we introduce a deep learning-based generative model for material
composition and structure design by learning and exploiting explicit and
implicit chemical knowledge. Our pipeline first uses deep diffusion language
models as the generator of compositions and then applies a template-based
crystal structure prediction algorithm to predict their corresponding
structures, which is then followed by structure relaxation using a universal
graph neural network-based potential. The density functional theory (DFT)
calculations of the formation energies and energy-above-the-hull analysis are
used to validate new structures generated through our pipeline. Based on the
DFT calculation results, six new materials, including Ti2HfO5, TaNbP, YMoN2,
TaReO4, HfTiO2, and HfMnO2, with formation energy less than zero have been
found. Remarkably, among these, four materials, namely Ti2HfO5, TaNbP, YMoN2,
and TaReO4, exhibit an e-above-hull energy of less than 0.3 eV. These findings
have proved the effectiveness of our approach. | [
"Rongzhi Dong",
"Nihang Fu",
"dirisuriya M. D. Siriwardane",
"Jianjun Hu"
] | 2023-09-30 19:46:19 | http://arxiv.org/abs/2310.00475v1 | http://arxiv.org/pdf/2310.00475v1 | 2310.00475v1 |
Enhancing Mortality Prediction in Heart Failure Patients: Exploring Preprocessing Methods for Imbalanced Clinical Datasets | Heart failure (HF) is a critical condition in which the accurate prediction
of mortality plays a vital role in guiding patient management decisions.
However, clinical datasets used for mortality prediction in HF often suffer
from an imbalanced distribution of classes, posing significant challenges. In
this paper, we explore preprocessing methods for enhancing one-month mortality
prediction in HF patients. We present a comprehensive preprocessing framework
including scaling, outlier processing, and resampling as key techniques. We
also employed an aware encoding approach to effectively handle missing values
in clinical datasets. Our study utilizes a comprehensive dataset from the
Persian Registry Of cardio Vascular disease (PROVE) with a significant class
imbalance. By leveraging appropriate preprocessing techniques and Machine
Learning (ML) algorithms, we aim to improve mortality prediction performance
for HF patients. The results reveal an average enhancement of approximately
3.6% in F1 score and 2.7% in MCC for tree-based models, specifically Random
Forest (RF) and XGBoost (XGB). This demonstrates the efficiency of our
preprocessing approach in effectively handling Imbalanced Clinical Datasets
(ICD). Our findings hold promise in guiding healthcare professionals to make
informed decisions and improve patient outcomes in HF management. | [
"Hanif Kia",
"Mansour Vali",
"Hadi Sabahi"
] | 2023-09-30 18:31:15 | http://arxiv.org/abs/2310.00457v1 | http://arxiv.org/pdf/2310.00457v1 | 2310.00457v1 |
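A minimal sketch of the kind of preprocessing pipeline the abstract describes (scaling, outlier processing, resampling), using scikit-learn and imbalanced-learn. The concrete steps, IQR threshold, and the choice of SMOTE are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE

def clip_outliers_iqr(X, k=1.5):
    """Winsorize each feature to [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    return np.clip(X, q1 - k * iqr, q3 + k * iqr)

def preprocess(X, y):
    X = clip_outliers_iqr(X)                  # tame extreme clinical readings
    X = StandardScaler().fit_transform(X)     # zero mean, unit variance
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)  # rebalance classes
    return X_res, y_res
```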
Music- and Lyrics-driven Dance Synthesis | Lyrics often convey information about the songs that are beyond the auditory
dimension, enriching the semantic meaning of movements and musical themes. Such
insights are important in the dance choreography domain. However, most existing
dance synthesis methods mainly focus on music-to-dance generation, without
considering the semantic information. To complement it, we introduce JustLMD, a
new multimodal dataset of 3D dance motion with music and lyrics. To the best of
our knowledge, this is the first dataset with triplet information including
dance motion, music, and lyrics. Additionally, we showcase a cross-modal
diffusion-based network designed to generate 3D dance motion conditioned on
music and lyrics. The proposed JustLMD dataset encompasses 4.6 hours of 3D
dance motion in 1867 sequences, accompanied by musical tracks and their
corresponding English lyrics. | [
"Wenjie Yin",
"Qingyuan Yao",
"Yi Yu",
"Hang Yin",
"Danica Kragic",
"Mårten Björkman"
] | 2023-09-30 18:27:14 | http://arxiv.org/abs/2310.00455v1 | http://arxiv.org/pdf/2310.00455v1 | 2310.00455v1 |
On the Role of Neural Collapse in Meta Learning Models for Few-shot Learning | Meta-learning frameworks for few-shot learning aim to learn models that can
learn new skills or adapt to new environments rapidly with a few training
examples. This has led to developed models that generalize towards
new classes with just a few labelled samples. However, these networks are seen
as black-box models and understanding the representations learnt under
different learning scenarios is crucial. Neural collapse ($\mathcal{NC}$) is a
recently discovered phenomenon which exhibits unique properties as the network
proceeds towards zero loss. The input features collapse to their respective
class means, the class means form a Simplex equiangular tight frame (ETF) where
the class means are maximally distant and linearly separable, and the
classifier acts as a simple nearest neighbor classifier. While these phenomena
have been observed in simple classification networks, this study is the first
to explore and understand the properties of neural collapse in meta learning
frameworks for few-shot learning. We perform studies on the Omniglot dataset in
the few-shot setting and study the neural collapse phenomenon. We observe that
the learnt features indeed have the trend of neural collapse, especially as
model size grows, but do not necessarily showcase complete collapse as
measured by the $\mathcal{NC}$ properties. | [
"Saaketh Medepalli",
"Naren Doraiswamy"
] | 2023-09-30 18:02:51 | http://arxiv.org/abs/2310.00451v2 | http://arxiv.org/pdf/2310.00451v2 | 2310.00451v2 |
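For reference, the within-class collapse property ($\mathcal{NC}1$) discussed above is commonly quantified as follows; this is a sketch using the standard metric from the neural-collapse literature, with our own variable names.

```python
import numpy as np

def nc1_within_class_collapse(features, labels):
    """NC1 metric: trace(Sigma_W @ pinv(Sigma_B)) / C, where Sigma_W and
    Sigma_B are the within- and between-class covariances of last-layer
    features. Values near zero indicate collapse to class means."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    d, C = features.shape[1], len(classes)
    sigma_w = np.zeros((d, d))
    sigma_b = np.zeros((d, d))
    for c in classes:
        fc = features[labels == c]
        mu = fc.mean(axis=0)
        sigma_w += ((fc - mu).T @ (fc - mu)) / len(features)
        diff = (mu - global_mean)[:, None]
        sigma_b += (diff @ diff.T) / C
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / C
```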
Question-Answering Model for Schizophrenia Symptoms and Their Impact on Daily Life using Mental Health Forums Data | In recent years, there has been a strong emphasis on mining medical data using
machine learning techniques. A common problem is to obtain a noiseless set of
textual documents, with a relevant content for the research question, and
developing a Question Answering (QA) model for a specific medical field. The
purpose of this paper is to present a new methodology for building a medical
dataset and obtain a QA model for analysis of symptoms and impact on daily life
for a specific disease domain. The ``Mental Health'' forum was used, a forum
dedicated to people suffering from schizophrenia and different mental
disorders. Relevant posts of active users, who regularly participate, were
extracted, providing a new method of obtaining low-bias content without
privacy issues. Furthermore, it is shown how to pre-process the dataset to
convert it into a QA dataset. The Bidirectional Encoder Representations from
Transformers (BERT), DistilBERT, RoBERTa, and BioBERT models were fine-tuned
and evaluated via F1-Score, Exact Match, Precision and Recall. Accurate
empirical experiments demonstrated the effectiveness of the proposed method for
obtaining an accurate dataset for QA model implementation. By fine-tuning the
BioBERT QA model, we achieved an F1 score of 0.885, showing a considerable
improvement and outperforming the state-of-the-art model for mental disorders
domain. | [
"Christian Internò",
"Eloisa Ambrosini"
] | 2023-09-30 17:50:50 | http://arxiv.org/abs/2310.00448v1 | http://arxiv.org/pdf/2310.00448v1 | 2310.00448v1 |
The objective function equality property of infoGAN for two-layer network | Information Maximizing Generative Adversarial Network (infoGAN) can be
understood as a minimax problem involving two networks: discriminators and
generators with mutual information functions. The infoGAN incorporates various
components, including latent variables, mutual information, and objective
function. This research demonstrates that the two objective functions in
infoGAN become equivalent as the discriminator and generator sample sizes
approach infinity. This equivalence is established by considering the
disparity between the empirical and population versions of the objective
function. The bound on this difference is determined by the Rademacher
complexity of the discriminator and generator function class. Furthermore, the
utilization of a two-layer network for both the discriminator and generator,
featuring Lipschitz and non-decreasing activation functions, validates this
equality. | [
"Mahmud Hasan"
] | 2023-09-30 17:38:07 | http://arxiv.org/abs/2310.00443v1 | http://arxiv.org/pdf/2310.00443v1 | 2310.00443v1 |
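For readers who want to see the objective the abstract compares in its empirical and population versions, the standard infoGAN formulation (Chen et al., 2016) reads, in our notation:

$$\min_{G,Q}\max_{D}\; V_I(D,G,Q) \;=\; V(D,G) \;-\; \lambda\, L_I(G,Q),$$

with $V(D,G)=\mathbb{E}_{x\sim p_{\mathrm{data}}}[\log D(x)]+\mathbb{E}_{z,c}[\log(1-D(G(z,c)))]$ and $L_I(G,Q)$ a variational lower bound on the mutual information $I(c;G(z,c))$. The paper's result concerns the gap between the empirical and population versions of such objectives vanishing as sample sizes grow, at a rate controlled by the Rademacher complexity of the discriminator and generator classes.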
Human-Producible Adversarial Examples | Visual adversarial examples have so far been restricted to pixel-level image
manipulations in the digital world, or have required sophisticated equipment
such as 2D or 3D printers to be produced in the physical real world. We present
the first ever method of generating human-producible adversarial examples for
the real world that requires nothing more complicated than a marker pen. We
call them $\textbf{adversarial tags}$. First, building on top of differential
rendering, we demonstrate that it is possible to build potent adversarial
examples with just lines. We find that by drawing just $4$ lines we can disrupt
a YOLO-based model in $54.8\%$ of cases; increasing this to $9$ lines disrupts
$81.8\%$ of the cases tested. Next, we devise an improved method for line
placement to be invariant to human drawing error. We evaluate our system
thoroughly in both digital and analogue worlds and demonstrate that our tags
can be applied by untrained humans. We demonstrate the effectiveness of our
method for producing real-world adversarial examples by conducting a user study
where participants were asked to draw over printed images using digital
equivalents as guides. We further evaluate the effectiveness of both targeted
and untargeted attacks, and discuss various trade-offs and method limitations,
as well as the practical and ethical implications of our work. The source code
will be released publicly. | [
"David Khachaturov",
"Yue Gao",
"Ilia Shumailov",
"Robert Mullins",
"Ross Anderson",
"Kassem Fawaz"
] | 2023-09-30 17:22:02 | http://arxiv.org/abs/2310.00438v1 | http://arxiv.org/pdf/2310.00438v1 | 2310.00438v1 |
Consistent Aggregation of Objectives with Diverse Time Preferences Requires Non-Markovian Rewards | As the capabilities of artificial agents improve, they are being increasingly
deployed to service multiple diverse objectives and stakeholders. However, the
composition of these objectives is often performed ad hoc, with no clear
justification. This paper takes a normative approach to multi-objective agency:
from a set of intuitively appealing axioms, it is shown that Markovian
aggregation of Markovian reward functions is not possible when the time
preference (discount factor) for each objective may vary. It follows that
optimal multi-objective agents must admit rewards that are non-Markovian with
respect to the individual objectives. To this end, a practical non-Markovian
aggregation scheme is proposed, which overcomes the impossibility with only one
additional parameter for each objective. This work offers new insights into
sequential, multi-objective agency and intertemporal choice, and has practical
implications for the design of AI systems deployed to serve multiple
generations of principals with varying time preference. | [
"Silviu Pitis"
] | 2023-09-30 17:06:34 | http://arxiv.org/abs/2310.00435v1 | http://arxiv.org/pdf/2310.00435v1 | 2310.00435v1 |
ResolvNet: A Graph Convolutional Network with multi-scale Consistency | It is by now a well known fact in the graph learning community that the
presence of bottlenecks severely limits the ability of graph neural networks to
propagate information over long distances. What so far has not been appreciated
is that, counter-intuitively, the presence of strongly connected sub-graphs
may also severely restrict information flow in common architectures.
Motivated by this observation, we introduce the concept of multi-scale
consistency. At the node level this concept refers to the retention of a
connected propagation graph even if connectivity varies over a given graph. At
the graph-level, multi-scale consistency refers to the fact that distinct
graphs describing the same object at different resolutions should be assigned
similar feature vectors. As we show, both properties are not satisfied by
popular graph neural network architectures. To remedy these shortcomings, we
introduce ResolvNet, a flexible graph neural network based on the mathematical
concept of resolvents. We rigorously establish its multi-scale consistency
theoretically and verify it in extensive experiments on real-world data: here,
networks based on the ResolvNet architecture prove expressive, significantly
outperforming baselines on many tasks, both inside and outside the multi-scale setting. | [
"Christian Koke",
"Abhishek Saroha",
"Yuesong Shen",
"Marvin Eisenberger",
"Daniel Cremers"
] | 2023-09-30 16:46:45 | http://arxiv.org/abs/2310.00431v1 | http://arxiv.org/pdf/2310.00431v1 | 2310.00431v1 |
On the Stability of Iterative Retraining of Generative Models on their own Data | Deep generative models have made tremendous progress in modeling complex
data, often exhibiting generation quality that surpasses a typical human's
ability to discern the authenticity of samples. Undeniably, a key driver of
this success is the massive amount of web-scale data consumed by
these models. Due to these models' striking performance and ease of
availability, the web will inevitably be increasingly populated with synthetic
content. Such a fact directly implies that future iterations of generative
models must contend with the reality that their training is curated from both
clean data and artificially generated data from past models. In this paper, we
develop a framework to rigorously study how training generative
models on mixed datasets (of real and synthetic data) affects their stability. We
first prove the stability of iterative training under the condition that the
initial generative models approximate the data distribution well enough and the
proportion of clean training data (w.r.t. synthetic data) is large enough. We
empirically validate our theory on both synthetic and natural images by
iteratively training normalizing flows and state-of-the-art diffusion models on
CIFAR10 and FFHQ. | [
"Quentin Bertrand",
"Avishek Joey Bose",
"Alexandre Duplessis",
"Marco Jiralerspong",
"Gauthier Gidel"
] | 2023-09-30 16:41:04 | http://arxiv.org/abs/2310.00429v2 | http://arxiv.org/pdf/2310.00429v2 | 2310.00429v2 |
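A minimal sketch of the iterative retraining loop whose stability the paper studies. Here `train` and `sample` are placeholders for any generative-model API, and `clean_frac` is an illustrative knob corresponding to the paper's proportion of clean training data.

```python
def iterative_retraining(clean_data, train, sample, generations=5, clean_frac=0.8):
    model = train(clean_data)                         # generation 0: clean data only
    for _ in range(generations):
        n_clean = int(clean_frac * len(clean_data))
        n_synth = len(clean_data) - n_clean
        mixed = list(clean_data[:n_clean]) + list(sample(model, n_synth))
        model = train(mixed)                          # each generation sees the mixture
    return model
```

The paper's stability condition corresponds, roughly, to keeping `clean_frac` large enough and the initial model accurate enough that errors do not compound across generations.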
An Efficient Algorithm for Clustered Multi-Task Compressive Sensing | This paper considers clustered multi-task compressive sensing, a hierarchical
model that solves multiple compressive sensing tasks by finding clusters of
tasks that leverage shared information to mutually improve signal
reconstruction. The existing inference algorithm for this model is
computationally expensive and does not scale well in high dimensions. The main
bottleneck involves repeated matrix inversion and log-determinant computation
for multiple large covariance matrices. We propose a new algorithm that
substantially accelerates model inference by avoiding the need to explicitly
compute these covariance matrices. Our approach combines Monte Carlo sampling
with iterative linear solvers. Our experiments reveal that compared to the
existing baseline, our algorithm can be up to thousands of times faster and an
order of magnitude more memory-efficient. | [
"Alexander Lin",
"Demba Ba"
] | 2023-09-30 15:57:14 | http://arxiv.org/abs/2310.00420v1 | http://arxiv.org/pdf/2310.00420v1 | 2310.00420v1 |
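The core computational idea above, avoiding explicit covariance inversion by combining sampling with iterative linear solvers, can be sketched with a matrix-free conjugate-gradient solve. The covariance factorization and the names `Phi`, `alpha`, `tau` are illustrative assumptions, not the paper's notation.

```python
from scipy.sparse.linalg import LinearOperator, cg

def solve_covariance(Phi, alpha, tau, b):
    """Solve (Phi diag(alpha) Phi^T + tau*I) x = b without forming the matrix."""
    n = Phi.shape[0]
    matvec = lambda v: Phi @ (alpha * (Phi.T @ v)) + tau * v  # Sigma @ v in O(nd)
    Sigma = LinearOperator((n, n), matvec=matvec)
    x, info = cg(Sigma, b, atol=1e-8)
    assert info == 0, "CG did not converge"
    return x
```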
Linear Convergence of Pre-Conditioned PI Consensus Algorithm under Restricted Strong Convexity | This paper considers solving distributed convex optimization problems in
peer-to-peer multi-agent networks. The network is assumed to be synchronous and
connected. By using the proportional-integral (PI) control strategy, various
algorithms with fixed stepsize have been developed. The earliest among them is
the PI consensus algorithm. Using Lyapunov theory, we guarantee exponential
convergence of the PI consensus algorithm for restricted strongly convex
functions with rate-matching discretization, without requiring convexity of
individual local cost functions, for the first time. In order to accelerate the
PI consensus algorithm, we incorporate local pre-conditioning in the form of
constant positive definite matrices and numerically validate its efficiency
compared to the prominent distributed convex optimization algorithms. Unlike
classical pre-conditioning, where only the gradients are multiplied by a
pre-conditioner, the proposed pre-conditioning modifies both the gradients and
the consensus terms, thereby controlling the effect of the communication graph
between the agents on the PI consensus algorithm. | [
"Kushal Chakrabarti",
"Mayank Baranwal"
] | 2023-09-30 15:54:52 | http://arxiv.org/abs/2310.00419v1 | http://arxiv.org/pdf/2310.00419v1 | 2310.00419v1 |
Building Flexible, Scalable, and Machine Learning-ready Multimodal Oncology Datasets | The advancements in data acquisition, storage, and processing techniques have
resulted in the rapid growth of heterogeneous medical data. Integrating
radiological scans, histopathology images, and molecular information with
clinical data is essential for developing a holistic understanding of the
disease and optimizing treatment. The need for integrating data from multiple
sources is further pronounced in complex diseases such as cancer for enabling
precision medicine and personalized treatments. This work proposes Multimodal
Integration of Oncology Data System (MINDS) - a flexible, scalable, and
cost-effective metadata framework for efficiently fusing disparate data from
public sources such as the Cancer Research Data Commons (CRDC) into an
interconnected, patient-centric framework. MINDS offers an interface for
exploring relationships across data types and building cohorts for developing
large-scale multimodal machine learning models. By harmonizing multimodal data,
MINDS aims to potentially empower researchers with greater analytical ability
to uncover diagnostic and prognostic insights and enable evidence-based
personalized care. MINDS tracks granular end-to-end data provenance, ensuring
reproducibility and transparency. The cloud-native architecture of MINDS can
handle exponential data growth in a secure, cost-optimized manner while
ensuring substantial storage optimization, replication avoidance, and dynamic
access capabilities. Auto-scaling, access controls, and other mechanisms
guarantee pipelines' scalability and security. MINDS overcomes the limitations
of existing biomedical data silos via an interoperable metadata-driven approach
that represents a pivotal step toward the future of oncology data integration. | [
"Aakash Tripathi",
"Asim Waqas",
"Kavya Venkatesan",
"Yasin Yilmaz",
"Ghulam Rasool"
] | 2023-09-30 15:44:39 | http://arxiv.org/abs/2310.01438v1 | http://arxiv.org/pdf/2310.01438v1 | 2310.01438v1 |
Refutation of Shapley Values for XAI -- Additional Evidence | Recent work demonstrated the inadequacy of Shapley values for explainable
artificial intelligence (XAI). Although to disprove a theory a single
counterexample suffices, a possible criticism of earlier work is that the focus
was solely on Boolean classifiers. To address such possible criticism, this
paper demonstrates the inadequacy of Shapley values not only for families of
classifiers whose features are not Boolean, but also for families of classifiers
for which multiple classes can be picked. Furthermore, the paper shows that the features
changed in any minimal $l_0$ distance adversarial examples do not include
irrelevant features, thus offering further arguments regarding the inadequacy
of Shapley values for XAI. | [
"Xuanxiang Huang",
"Joao Marques-Silva"
] | 2023-09-30 15:44:06 | http://arxiv.org/abs/2310.00416v1 | http://arxiv.org/pdf/2310.00416v1 | 2310.00416v1 |
SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution | Existing digital sensors capture images at fixed spatial and spectral
resolutions (e.g., RGB, multispectral, and hyperspectral images), and each
combination requires bespoke machine learning models. Neural Implicit Functions
partially overcome the spatial resolution challenge by representing an image in
a resolution-independent way. However, they still operate at fixed, pre-defined
spectral resolutions. To address this challenge, we propose Spatial-Spectral
Implicit Function (SSIF), a neural implicit model that represents an image as a
function of both continuous pixel coordinates in the spatial domain and
continuous wavelengths in the spectral domain. We empirically demonstrate the
effectiveness of SSIF on two challenging spatio-spectral super-resolution
benchmarks. We observe that SSIF consistently outperforms state-of-the-art
baselines even when the baselines are allowed to train separate models at each
spectral resolution. We show that SSIF generalizes well to both unseen spatial
resolutions and spectral resolutions. Moreover, SSIF can generate
high-resolution images that improve the performance of downstream tasks (e.g.,
land use classification) by 1.7%-7%. | [
"Gengchen Mai",
"Ni Lao",
"Weiwei Sun",
"Yuchi Ma",
"Jiaming Song",
"Chenlin Meng",
"Hongxu Ma",
"Jinmeng Rao",
"Ziyuan Li",
"Stefano Ermon"
] | 2023-09-30 15:23:30 | http://arxiv.org/abs/2310.00413v1 | http://arxiv.org/pdf/2310.00413v1 | 2310.00413v1 |
Better Situational Graphs by Inferring High-level Semantic-Relational Concepts | Recent works on SLAM extend their pose graphs with higher-level semantic
concepts exploiting relationships between them, to provide, not only a richer
representation of the situation/environment but also to improve the accuracy of
its estimation. Concretely, our previous work, Situational Graphs (S-Graphs), a
pioneer in jointly leveraging semantic relationships in the factor optimization
process, relies on semantic entities such as wall surfaces and rooms, whose
relationship is mathematically defined. Nevertheless, extracting these
high-level concepts exclusively from the lower-level factor graph remains
a challenge, and it is currently done with ad-hoc algorithms, which limits the
capability to include new semantic-relational concepts. To overcome this
limitation, in this work, we propose a Graph Neural Network (GNN) for learning
high-level semantic-relational concepts that can be inferred from the low-level
factor graph. We have demonstrated that we can infer room entities and their
relationship to the mapped wall surfaces, more accurately and more
computationally efficiently than the baseline algorithm. Additionally, to
demonstrate the versatility of our method, we provide a new semantic concept,
i.e. wall, and its relationship with its wall surfaces. Our proposed method has
been integrated into S-Graphs+, and it has been validated in both simulated and
real datasets. A docker container with our software will be made available to
the scientific community. | [
"Jose Andres Millan-Romera",
"Hriday Bavle",
"Muhammad Shaheer",
"Martin R. Oswald",
"Holger Voos",
"Jose Luis Sanchez-Lopez"
] | 2023-09-30 14:54:31 | http://arxiv.org/abs/2310.00401v1 | http://arxiv.org/pdf/2310.00401v1 | 2310.00401v1 |
Order-Preserving GFlowNets | Generative Flow Networks (GFlowNets) have been introduced as a method to
sample a diverse set of candidates with probabilities proportional to a given
reward. However, GFlowNets can only be used with a predefined scalar reward,
which can be either computationally expensive or not directly accessible, in
the case of multi-objective optimization (MOO) tasks for example. Moreover, to
prioritize identifying high-reward candidates, the conventional practice is to
raise the reward to a higher exponent, the optimal choice of which may vary
across different environments. To address these issues, we propose
Order-Preserving GFlowNets (OP-GFNs), which sample with probabilities in
proportion to a learned reward function that is consistent with a provided
(partial) order on the candidates, thus eliminating the need for an explicit
formulation of the reward function. We theoretically prove that the training
process of OP-GFNs gradually sparsifies the learned reward landscape in
single-objective maximization tasks. The sparsification concentrates on
candidates of a higher hierarchy in the ordering, ensuring exploration at the
beginning and exploitation towards the end of the training. We demonstrate
OP-GFN's state-of-the-art performance in single-objective maximization (totally
ordered) and multi-objective Pareto front approximation (partially ordered)
tasks, including synthetic datasets, molecule generation, and neural
architecture search. | [
"Yihang Chen",
"Lukas Mauch"
] | 2023-09-30 14:06:53 | http://arxiv.org/abs/2310.00386v1 | http://arxiv.org/pdf/2310.00386v1 | 2310.00386v1 |
Mitigating the Effect of Incidental Correlations on Part-based Learning | Intelligent systems possess a crucial characteristic of breaking complicated
problems into smaller reusable components or parts and adjusting to new tasks
using these part representations. However, current part-learners encounter
difficulties in dealing with incidental correlations resulting from the limited
observations of objects that may appear only in specific arrangements or with
specific backgrounds. These incidental correlations may have a detrimental
impact on the generalization and interpretability of learned part
representations. This study asserts that part-based representations could be
more interpretable and generalize better with limited data, employing two
innovative regularization methods. The first regularization separates
the generative processes of foreground and background information via a unique
mixture-of-parts formulation. Structural constraints are imposed on the parts
using a weakly-supervised loss, guaranteeing that the mixture-of-parts for
foreground and background entails soft, object-agnostic masks. The second
regularization assumes the form of a distillation loss, ensuring the invariance
of the learned parts to the incidental background correlations. Furthermore, we
incorporate sparse and orthogonal constraints to facilitate learning
high-quality part representations. By reducing the impact of incidental
background correlations on the learned parts, we exhibit state-of-the-art
(SoTA) performance on few-shot learning tasks on benchmark datasets, including
MiniImagenet, TieredImageNet, and FC100. We also demonstrate that the
part-based representations acquired through our approach generalize better than
existing techniques, even under domain shifts of the background and common data
corruption on the ImageNet-9 dataset. The implementation is available on
GitHub: https://github.com/GauravBh1010tt/DPViT.git | [
"Gaurav Bhatt",
"Deepayan Das",
"Leonid Sigal",
"Vineeth N Balasubramanian"
] | 2023-09-30 13:44:48 | http://arxiv.org/abs/2310.00377v1 | http://arxiv.org/pdf/2310.00377v1 | 2310.00377v1 |
Deep Active Learning with Noisy Oracle in Object Detection | Obtaining annotations for complex computer vision tasks such as object
detection is an expensive and time-intense endeavor involving a large number of
human workers or expert opinions. Reducing the amount of annotations required
while maintaining algorithm performance is, therefore, desirable for machine
learning practitioners and has been successfully achieved by active learning
algorithms. However, it is not merely the amount of annotations which
influences model performance but also the annotation quality. In practice, the
oracles that are queried for new annotations frequently contain significant
amounts of noise. Therefore, cleansing procedures are oftentimes necessary to
review and correct given labels. This process is subject to the same budget as
the initial annotation itself since it requires human workers or even domain
experts. Here, we propose a composite active learning framework including a
label review module for deep object detection. We show that utilizing part of
the annotation budget to correct the noisy annotations partially in the active
dataset leads to early improvements in model performance, especially when
coupled with uncertainty-based query strategies. The precision of the label
error proposals has a significant influence on the measured effect of the label
review. In our experiments we achieve improvements of up to 4.5 mAP points of
object detection performance by incorporating label reviews at equal annotation
budget. | [
"Marius Schubert",
"Tobias Riedlinger",
"Karsten Kahl",
"Matthias Rottmann"
] | 2023-09-30 13:28:35 | http://arxiv.org/abs/2310.00372v1 | http://arxiv.org/pdf/2310.00372v1 | 2310.00372v1 |
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression | With the rapid development of computer vision, Vision Transformers (ViTs)
offer the tantalizing prospect of unified information processing across visual
and textual domains. But due to the lack of inherent inductive biases in ViTs,
they require an enormous amount of data for training. To make their applications
practical, we introduce an innovative ensemble-based distillation approach
distilling inductive bias from complementary lightweight teacher models. Prior
systems relied solely on convolution-based teaching. However, this method
incorporates an ensemble of light teachers with different architectural
tendencies, such as convolution and involution, to instruct the student
transformer jointly. Because of these unique inductive biases, the teachers can
accumulate a wide range of knowledge, even from readily identifiable stored
datasets, which leads to enhanced student performance. Our proposed framework
also involves precomputing and storing logits in advance, essentially the
unnormalized predictions of the model. This optimization can accelerate the
distillation process by eliminating the need for repeated forward passes during
knowledge distillation, significantly reducing the computational burden and
enhancing efficiency. | [
"Gousia Habib",
"Tausifa Jan Saleem",
"Brejesh Lall"
] | 2023-09-30 13:21:29 | http://arxiv.org/abs/2310.00369v2 | http://arxiv.org/pdf/2310.00369v2 | 2310.00369v2 |
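A sketch of a distillation loss with precomputed, ensembled teacher logits, in the spirit of what the abstract describes. The temperature, loss weighting, and simple averaging of the teachers' stored logits are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.5):
    # Average the precomputed logits of the convolution- and involution-based teachers.
    teacher_logits = torch.stack(teacher_logits_list).mean(dim=0)
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

Because the teacher logits are loaded from disk rather than recomputed, no teacher forward pass is needed during training, which is the efficiency gain the abstract highlights.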
Structural Adversarial Objectives for Self-Supervised Representation Learning | Within the framework of generative adversarial networks (GANs), we propose
objectives that task the discriminator for self-supervised representation
learning via additional structural modeling responsibilities. In combination
with an efficient smoothness regularizer imposed on the network, these
objectives guide the discriminator to learn to extract informative
representations, while maintaining a generator capable of sampling from the
domain. Specifically, our objectives encourage the discriminator to structure
features at two levels of granularity: aligning distribution characteristics,
such as mean and variance, at coarse scales, and grouping features into local
clusters at finer scales. Operating as a feature learner within the GAN
framework frees our self-supervised system from the reliance on hand-crafted
data augmentation schemes that are prevalent across contrastive representation
learning methods. Across CIFAR-10/100 and an ImageNet subset, experiments
demonstrate that equipping GANs with our self-supervised objectives suffices to
produce discriminators which, evaluated in terms of representation learning,
compete with networks trained by contrastive learning approaches. | [
"Xiao Zhang",
"Michael Maire"
] | 2023-09-30 12:27:53 | http://arxiv.org/abs/2310.00357v2 | http://arxiv.org/pdf/2310.00357v2 | 2310.00357v2 |
Visual Political Communication in a Polarized Society: A Longitudinal Study of Brazilian Presidential Elections on Instagram | In today's digital age, images have emerged as powerful tools for politicians
to engage with their voters on social media platforms. Visual content possesses
a unique emotional appeal that often leads to increased user engagement.
However, research on visual communication remains relatively limited,
particularly in the Global South. This study aims to bridge this gap by
employing a combination of computational methods and qualitative approach to
investigate the visual communication strategies employed in a dataset of 11,263
Instagram posts by 19 Brazilian presidential candidates in 2018 and 2022
national elections. Through two studies, we observed consistent patterns across
these candidates on their use of visual political communication. Notably, we
identify a prevalence of celebratory and positively toned images. They also
exhibit a strong sense of personalization, portraying candidates connected with
their voters on a more emotional level. Our research also uncovers unique
contextual nuances specific to the Brazilian political landscape. We note a
substantial presence of screenshots from news websites and other social media
platforms. Furthermore, text-edited images with portrayals emerge as a
prominent feature. In light of these results, we engage in a discussion
regarding the implications for the broader field of visual political
communication. This article serves as a testament to the pivotal role that
Instagram has played in shaping the narrative of two fiercely polarized
Brazilian elections, casting a revealing light on the ever-evolving dynamics of
visual political communication in the digital age. Finally, we propose avenues
for future research in the realm of visual political communication. | [
"Mathias-Felipe de-Lima-Santos",
"Isabella Gonçalves",
"Marcos G. Quiles",
"Lucia Mesquita",
"Wilson Ceron"
] | 2023-09-30 12:11:11 | http://arxiv.org/abs/2310.00349v1 | http://arxiv.org/pdf/2310.00349v1 | 2310.00349v1 |
Harmony World Models: Boosting Sample Efficiency for Model-based Reinforcement Learning | Model-based reinforcement learning (MBRL) holds the promise of
sample-efficient learning by utilizing a world model, which models how the
environment works and typically encompasses components for two tasks:
observation modeling and reward modeling. In this paper, through a dedicated
empirical investigation, we gain a deeper understanding of the role each task
plays in world models and uncover the overlooked potential of more efficient
MBRL by harmonizing the interference between observation and reward modeling.
Our key insight is that while prevalent approaches of explicit MBRL attempt to
restore abundant details of the environment through observation models, doing so is
difficult due to the environment's complexity and limited model capacity. On
the other hand, reward models, while dominating in implicit MBRL and adept at
learning task-centric dynamics, are inadequate for sample-efficient learning
without richer learning signals. Capitalizing on these insights and
discoveries, we propose a simple yet effective method, Harmony World Models
(HarmonyWM), that introduces a lightweight harmonizer to maintain a dynamic
equilibrium between the two tasks in world model learning. Our experiments on
three visual control domains show that the base MBRL method equipped with
HarmonyWM gains 10%-55% absolute performance boosts. | [
"Haoyu Ma",
"Jialong Wu",
"Ningya Feng",
"Jianmin Wang",
"Mingsheng Long"
] | 2023-09-30 11:38:13 | http://arxiv.org/abs/2310.00344v1 | http://arxiv.org/pdf/2310.00344v1 | 2310.00344v1 |
Deep Reinforcement Learning for Autonomous Vehicle Intersection Navigation | In this paper, we explore the challenges associated with navigating complex
T-intersections in dense traffic scenarios for autonomous vehicles (AVs).
Reinforcement learning algorithms have emerged as a promising approach to
address these challenges by enabling AVs to make safe and efficient decisions
in real-time. Here, we address the problem of efficiently and safely navigating
T-intersections using a lower-cost, single-agent approach based on the Twin
Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning
algorithm. We show that our TD3-based method, when trained and tested in the
CARLA simulation platform, demonstrates stable convergence and improved safety
performance in various traffic densities. Our results reveal that the proposed
approach enables the AV to effectively navigate T-intersections, outperforming
previous methods in terms of travel delays, collision minimization, and overall
cost. This study contributes to the growing body of knowledge on reinforcement
learning applications in autonomous driving and highlights the potential of
single-agent, cost-effective methods for addressing more complex driving
scenarios and advancing reinforcement learning algorithms in the future. | [
"Badr Ben Elallid",
"Hamza El Alaoui",
"Nabil Benamar"
] | 2023-09-30 10:54:02 | http://arxiv.org/abs/2310.08595v2 | http://arxiv.org/pdf/2310.08595v2 | 2310.08595v2 |
FedLPA: Personalized One-shot Federated Learning with Layer-Wise Posterior Aggregation | Efficiently aggregating trained neural networks from local clients into a
global model on a server is a widely researched topic in federated learning.
Recently, motivated by diminishing privacy concerns, mitigating potential
attacks, and reducing the overhead of communication, one-shot federated
learning (i.e., limiting client-server communication into a single round) has
gained popularity among researchers. However, one-shot aggregation
performance is highly sensitive to non-identical training data
distributions, which exhibit high statistical heterogeneity in some real-world
scenarios. To address this issue, we propose a novel one-shot aggregation
method with Layer-wise Posterior Aggregation, named FedLPA. FedLPA aggregates
local models to obtain a more accurate global model without requiring extra
auxiliary datasets or exposing any confidential local information, e.g., label
distributions. To effectively capture the statistics maintained in the biased
local datasets in the practical non-IID scenario, we efficiently infer the
posteriors of each layer in each local model using layer-wise Laplace
approximation and aggregate them to train the global parameters. Extensive
experimental results demonstrate that FedLPA significantly improves learning
performance over state-of-the-art methods across several metrics. | [
"Xiang Liu",
"Liangxi Liu",
"Feiyang Ye",
"Yunheng Shen",
"Xia Li",
"Linshan Jiang",
"Jialin Li"
] | 2023-09-30 10:51:27 | http://arxiv.org/abs/2310.00339v2 | http://arxiv.org/pdf/2310.00339v2 | 2310.00339v2 |
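FedLPA's exact aggregation rule is in the paper; as a hedged sketch of the general principle, fusing per-layer diagonal Gaussian (Laplace-approximated) posteriors across clients reduces to precision-weighted averaging:

```python
import numpy as np

def fuse_layer_posteriors(means, precisions):
    """Product of diagonal Gaussian posteriors N(m_i, P_i^{-1}) across clients:
    the fused mean is the precision-weighted average of client means.
    An illustrative sketch of layer-wise posterior fusion, not FedLPA's update."""
    P = np.sum(precisions, axis=0)                      # aggregate precision
    m = np.sum([p * m for p, m in zip(precisions, means)], axis=0) / P
    return m, P
```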
Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware | In recent years, hardware-accelerated neural networks have gained significant
attention for edge computing applications. Among various hardware options,
crossbar arrays, offer a promising avenue for efficient storage and
manipulation of neural network weights. However, the transition from trained
floating-point models to hardware-constrained analog architectures remains a
challenge. In this work, we combine a quantization technique specifically
designed for such architectures with a novel self-correcting mechanism. By
utilizing dual crossbar connections to represent both the positive and negative
parts of a single weight, we develop an algorithm to approximate a set of
multiplicative weights. These weights, along with their differences, aim to
represent the original network's weights with minimal loss in performance. We
implement the models using IBM's aihwkit and evaluate their efficacy over time.
Our results demonstrate that, when paired with an on-chip pulse generator, our
self-correcting neural network performs comparably to those trained with
analog-aware algorithms. | [
"Arseni Ivanov"
] | 2023-09-30 10:47:25 | http://arxiv.org/abs/2310.00337v1 | http://arxiv.org/pdf/2310.00337v1 | 2310.00337v1 |
DURENDAL: Graph deep learning framework for temporal heterogeneous networks | Temporal heterogeneous networks (THNs) are evolving networks that
characterize many real-world applications such as citation and events networks,
recommender systems, and knowledge graphs. Although different Graph Neural
Networks (GNNs) have been successfully applied to dynamic graphs, most of them
only support homogeneous graphs or suffer from model design heavily influenced
by specific THNs prediction tasks. Furthermore, there is a lack of temporal
heterogeneous networked data in current standard graph benchmark datasets.
Hence, in this work, we propose DURENDAL, a graph deep learning framework for
THNs. DURENDAL can help to easily repurpose any heterogeneous graph learning
model to evolving networks by combining design principles from snapshot-based
and multirelational message-passing graph learning models. We introduce two
different schemes to update embedding representations for THNs, discussing the
strengths and weaknesses of both strategies. We also extend the set of
benchmarks for THNs by introducing two novel high-resolution temporal
heterogeneous graph datasets derived from an emerging Web3 platform and a
well-established e-commerce website. Overall, we conducted the experimental
evaluation of the framework over four temporal heterogeneous network datasets
on future link prediction tasks in an evaluation setting that takes into
account the evolving nature of the data. Experiments show the prediction power
of DURENDAL compared to current solutions for evolving and dynamic graphs, and
the effectiveness of its model design. | [
"Manuel Dileo",
"Matteo Zignani",
"Sabrina Gaito"
] | 2023-09-30 10:46:01 | http://arxiv.org/abs/2310.00336v1 | http://arxiv.org/pdf/2310.00336v1 | 2310.00336v1 |
Anomaly Detection in Power Generation Plants with Generative Adversarial Networks | Anomaly detection is a critical task that involves the identification of data
points that deviate from a predefined pattern, useful for fraud detection and
related activities. Various techniques are employed for anomaly detection, but
recent research indicates that deep learning methods, with their ability to
discern intricate data patterns, are well-suited for this task. This study
explores the use of Generative Adversarial Networks (GANs) for anomaly
detection in power generation plants. The dataset used in this investigation
comprises fuel consumption records obtained from power generation plants
operated by a telecommunications company. The data was initially collected in
response to observed irregularities in the fuel consumption patterns of the
generating sets situated at the company's base stations. The dataset was
divided into anomalous and normal data points based on specific variables, with
64.88% classified as normal and 35.12% as anomalous. An analysis of feature
importance, employing the random forest classifier, revealed that Running Time
Per Day exhibited the highest relative importance. A GANs model was trained and
fine-tuned both with and without data augmentation, with the goal of increasing
the dataset size to enhance performance. The generator model consisted of five
dense layers using the tanh activation function, while the discriminator
comprised six dense layers, each integrated with a dropout layer to prevent
overfitting. Following data augmentation, the model achieved an accuracy rate
of 98.99%, compared to 66.45% before augmentation. This demonstrates that the
model nearly perfectly classified data points into normal and anomalous
categories, with the augmented data significantly enhancing the GANs'
performance in anomaly detection. Consequently, this study recommends the use
of GANs, particularly when using large datasets, for effective anomaly
detection. | [
"Marcellin Atemkeng",
"Toheeb Aduramomi Jimoh"
] | 2023-09-30 10:44:05 | http://arxiv.org/abs/2310.00335v1 | http://arxiv.org/pdf/2310.00335v1 | 2310.00335v1 |
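A Keras sketch matching the architecture described above: five tanh dense layers in the generator, and six dense layers each followed by dropout in the discriminator. The layer widths and dropout rate are our assumptions, since the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_generator(latent_dim, n_features):
    model = keras.Sequential()
    model.add(keras.Input(shape=(latent_dim,)))
    for width in [32, 64, 128, 64, n_features]:      # five dense layers, tanh
        model.add(layers.Dense(width, activation="tanh"))
    return model

def build_discriminator(n_features, rate=0.3):
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_features,)))
    widths = [128, 64, 32, 16, 8, 1]                 # six dense layers
    for i, width in enumerate(widths):
        last = i == len(widths) - 1
        model.add(layers.Dense(width, activation="sigmoid" if last else "relu"))
        model.add(layers.Dropout(rate))              # dropout after each dense layer
    return model
```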
MFL Data Preprocessing and CNN-based Oil Pipeline Defects Detection | Recently, the application of computer vision for anomaly detection has been
under attention in several industrial fields. An important example is oil
pipeline defect detection. Failure of one oil pipeline can interrupt the
operation of the entire transportation system or cause a far-reaching failure.
The automated defect detection could significantly decrease the inspection time
and the related costs. However, there is a gap in the related literature when
it comes to dealing with this task. The existing studies do not sufficiently
cover the research of the Magnetic Flux Leakage data and the preprocessing
techniques that allow overcoming the limitations set by the available data.
This work focuses on alleviating these issues. Moreover, in doing so, we
exploited recent convolutional neural network structures and proposed
robust approaches, aiming to achieve high performance on the relevant
metrics. The proposed approaches and their applicability were verified using
real-world data. | [
"Iurii Katser",
"Vyacheslav Kozitsin",
"Igor Mozolin"
] | 2023-09-30 10:37:12 | http://arxiv.org/abs/2310.00332v1 | http://arxiv.org/pdf/2310.00332v1 | 2310.00332v1 |
Memorization with neural nets: going beyond the worst case | In practice, deep neural networks are often able to easily interpolate their
training data. To understand this phenomenon, many works have aimed to quantify
the memorization capacity of a neural network architecture: the largest number
of points such that the architecture can interpolate any placement of these
points with any assignment of labels. For real-world data, however, one
intuitively expects the presence of a benign structure so that interpolation
already occurs at a smaller network size than suggested by memorization
capacity. In this paper, we investigate interpolation by adopting an
instance-specific viewpoint. We introduce a simple randomized algorithm that,
given a fixed finite dataset with two classes, with high probability constructs
an interpolating three-layer neural network in polynomial time. The required
number of parameters is linked to geometric properties of the two classes and
their mutual arrangement. As a result, we obtain guarantees that are
independent of the number of samples and hence move beyond worst-case
memorization capacity bounds. We illustrate the effectiveness of the algorithm
in non-pathological situations with extensive numerical experiments and link
the insights back to the theoretical results. | [
"Sjoerd Dirksen",
"Patrick Finke",
"Martin Genzel"
] | 2023-09-30 10:06:05 | http://arxiv.org/abs/2310.00327v2 | http://arxiv.org/pdf/2310.00327v2 | 2310.00327v2 |
Efficient Planning with Latent Diffusion | Temporal abstraction and efficient planning pose significant challenges in
offline reinforcement learning, particularly when dealing with domains that involve
temporally extended tasks and delayed sparse rewards. Existing methods
typically plan in the raw action space and can be inefficient and inflexible.
Latent action spaces offer a more flexible paradigm, capturing only possible
actions within the behavior policy support and decoupling the temporal
structure between planning and modeling. However, current latent-action-based
methods are limited to discrete spaces and require expensive planning. This
paper presents a unified framework for continuous latent action space
representation learning and planning by leveraging latent, score-based
diffusion models. We establish the theoretical equivalence between planning in
the latent action space and energy-guided sampling with a pretrained diffusion
model and incorporate a novel sequence-level exact sampling method. Our
proposed method, $\texttt{LatentDiffuser}$, demonstrates competitive
performance on low-dimensional locomotion control tasks and surpasses existing
methods in higher-dimensional tasks. | [
"Wenhao Li"
] | 2023-09-30 08:50:49 | http://arxiv.org/abs/2310.00311v1 | http://arxiv.org/pdf/2310.00311v1 | 2310.00311v1 |
A Hierarchical Approach to Environment Design with Generative Trajectory Modeling | Unsupervised Environment Design (UED) is a paradigm for training generally
capable agents to achieve good zero-shot transfer performance. This paradigm
hinges on automatically generating a curriculum of training environments.
Leading approaches for UED predominantly use randomly generated environment
instances to train the agent. While these methods exhibit good zero-shot
transfer performance, they often encounter challenges in effectively exploring
large design spaces or leveraging previously discovered underlying structures.
To address these challenges, we introduce a novel framework based on
Hierarchical MDP (Markov Decision Processes). Our approach includes an
upper-level teacher's MDP responsible for training a lower-level MDP student
agent, guided by the student's performance. To expedite the learning of the
upper-level MDP, we leverage recent advancements in generative modeling to
generate a synthetic experience dataset for training the teacher agent. Our
algorithm, called Synthetically-enhanced Hierarchical Environment Design
(SHED), significantly reduces the resource-intensive interactions between the
agent and the environment. To validate the effectiveness of SHED, we conduct
empirical experiments across various domains, with the goal of developing an
efficient and robust agent under limited training resources. Our results show
the manifold advantages of SHED and highlight its effectiveness as a potent
instrument for curriculum-based learning within the UED framework. This work
contributes to exploring the next generation of RL agents capable of adeptly
handling an ever-expanding range of complex tasks. | [
"Dexun Li",
"Pradeep Varakantham"
] | 2023-09-30 08:21:32 | http://arxiv.org/abs/2310.00301v1 | http://arxiv.org/pdf/2310.00301v1 | 2310.00301v1 |
Graph Neural Architecture Search with GPT-4 | Graph Neural Architecture Search (GNAS) has shown promising results in
automatically designing graph neural networks. However, GNAS still requires
intensive human labor with rich domain knowledge to design the search space and
search strategy. In this paper, we integrate GPT-4 into GNAS and propose a new
GPT-4 based Graph Neural Architecture Search method (GPT4GNAS for short). The
basic idea of our method is to design a new class of prompts for GPT-4 to guide
GPT-4 toward the generative task of graph neural architectures. The prompts
consist of descriptions of the search space, search strategy, and search
feedback of GNAS. By iteratively running GPT-4 with the prompts, GPT4GNAS
generates more accurate graph neural networks with fast convergence.
Experimental results show that embedding GPT-4 into GNAS outperforms the
state-of-the-art GNAS methods. | [
"Haishuai Wang",
"Yang Gao",
"Xin Zheng",
"Peng Zhang",
"Hongyang Chen",
"Jiajun Bu"
] | 2023-09-30 08:05:59 | http://arxiv.org/abs/2310.01436v1 | http://arxiv.org/pdf/2310.01436v1 | 2310.01436v1 |
Mathematical structure of perfect predictive reservoir computing for autoregressive type of time series data | Reservoir Computing (RC) is a type of recurrent neural network (RNN), and
there can be no doubt that RC will be more and more widely used for
building future prediction models for time-series data, with low training cost,
high speed and high computational power. However, research into the
mathematical structure of RC neural networks has only recently begun. Bollt
(2021) clarified the necessity of the autoregressive (AR) model for gaining
insight into the mathematical structure of RC neural networks, and indicated
that the Wold decomposition theorem is a milestone for understanding
them. Keeping this celebrated result in mind, in this paper, we clarify hidden
structures of input and recurrent weight matrices in RC neural networks, and
show that such structures attain perfect prediction for the AR type of time
series data. | [
"Tsuyoshi Yoneda"
] | 2023-09-30 07:46:47 | http://arxiv.org/abs/2310.00290v2 | http://arxiv.org/pdf/2310.00290v2 | 2310.00290v2 |
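To ground the RC–AR connection discussed above, here is a minimal echo state network fit to an AR(2) series via a ridge-regression readout. Reservoir size, spectral radius, and input scaling are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
# AR(2) data: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + noise
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 0.5 * y[t-1] - 0.3 * y[t-2] + 0.1 * rng.standard_normal()

N = 100                                    # reservoir size
W_in = 0.5 * rng.standard_normal(N)        # input weights
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((len(y) - 1, N))
x = np.zeros(N)
for t in range(len(y) - 1):
    x = np.tanh(W @ x + W_in * y[t])       # reservoir state update
    states[t] = x

# Ridge-regression readout predicting y_{t+1} from the reservoir state.
targets = y[1:]
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(N), states.T @ targets)
pred = states @ W_out
print("train MSE:", np.mean((pred - targets) ** 2))
```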
A Unified Framework for Generative Data Augmentation: A Comprehensive Survey | Generative data augmentation (GDA) has emerged as a promising technique to
alleviate data scarcity in machine learning applications. This thesis presents
a comprehensive survey and unified framework of the GDA landscape. We first
provide an overview of GDA, discussing its motivation, taxonomy, and key
distinctions from synthetic data generation. We then systematically analyze the
critical aspects of GDA - selection of generative models, techniques to utilize
them, data selection methodologies, validation approaches, and diverse
applications. Our proposed unified framework categorizes the extensive GDA
literature, revealing gaps such as the lack of universal benchmarks. The thesis
summarises promising research directions, including effective data selection,
theoretical development for applying large-scale models in GDA, and
establishing a benchmark for GDA. By laying a structured foundation, this
thesis aims to nurture more cohesive development and accelerate progress in the
vital arena of generative data augmentation. | [
"Yunhao Chen",
"Zihui Yan",
"Yunjie Zhu"
] | 2023-09-30 07:01:08 | http://arxiv.org/abs/2310.00277v1 | http://arxiv.org/pdf/2310.00277v1 | 2310.00277v1 |
SpatialRank: Urban Event Ranking with NDCG Optimization on Spatiotemporal Data | Urban event ranking aims at predicting the top-k most risky
locations of future events such as traffic accidents and crimes. This problem
is of fundamental importance to public safety and urban administration
especially when limited resources are available. The problem is, however,
challenging due to complex and dynamic spatio-temporal correlations between
locations, uneven distribution of urban events in space, and the difficulty to
correctly rank nearby locations with similar features. Prior works on event
forecasting mostly aim at accurately predicting the actual risk score or counts
of events for all the locations. Rankings obtained as such usually have low
quality due to prediction errors. Learning-to-rank methods directly optimize
measures such as Normalized Discounted Cumulative Gain (NDCG), but cannot
handle the spatiotemporal autocorrelation existing among locations. In this
paper, we bridge the gap by proposing a novel spatial event ranking approach
named SpatialRank. SpatialRank features adaptive graph convolution layers that
dynamically learn the spatiotemporal dependencies across locations from data.
In addition, the model optimizes through surrogates a hybrid NDCG loss with a
spatial component to better rank neighboring spatial locations. We design an
importance-sampling with a spatial filtering algorithm to effectively evaluate
the loss during training. Comprehensive experiments on three real-world
datasets demonstrate that SpatialRank can effectively identify the top riskiest
locations of crimes and traffic accidents and outperform state-of-the-art methods
in terms of NDCG by up to 12.7%. | [
"Bang An",
"Xun Zhou",
"Yongjian Zhong",
"Tianbao Yang"
] | 2023-09-30 06:20:21 | http://arxiv.org/abs/2310.00270v4 | http://arxiv.org/pdf/2310.00270v4 | 2310.00270v4 |
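Since the entry above optimizes NDCG through surrogates, a plain NDCG@k reference implementation may help fix ideas. This numpy sketch computes the metric itself, not the paper's surrogate loss; the function name and toy risk scores are illustrative.

```python
import numpy as np

def ndcg_at_k(true_scores, pred_scores, k):
    """NDCG@k: discounted gain of the predicted top-k ranking,
    normalized by the gain of the ideal (oracle) ranking."""
    true_scores = np.asarray(true_scores, dtype=float)
    top_pred = np.argsort(pred_scores)[::-1][:k]      # predicted top-k items
    top_ideal = np.sort(true_scores)[::-1][:k]        # oracle top-k gains
    discounts = 1.0 / np.log2(np.arange(2, k + 2))    # 1 / log2(rank + 1)
    dcg = np.sum(true_scores[top_pred] * discounts)
    idcg = np.sum(top_ideal * discounts)
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: event-risk scores for six locations
true_risk = [3, 0, 1, 2, 0, 5]
predicted = [2.5, 0.1, 0.3, 2.0, 0.0, 4.0]
print(ndcg_at_k(true_risk, predicted, k=3))  # 1.0 here: top-3 sets agree
```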
Unravel Anomalies: An End-to-end Seasonal-Trend Decomposition Approach for Time Series Anomaly Detection | Traditional Time-series Anomaly Detection (TAD) methods often struggle with
the composite nature of complex time-series data and a diverse array of
anomalies. We introduce TADNet, an end-to-end TAD model that leverages
Seasonal-Trend Decomposition to link various types of anomalies to specific
decomposition components, thereby simplifying the analysis of complex
time-series and enhancing detection performance. Our training methodology,
which includes pre-training on a synthetic dataset followed by fine-tuning,
strikes a balance between effective decomposition and precise anomaly
detection. Experimental validation on real-world datasets confirms TADNet's
state-of-the-art performance across a diverse range of anomalies. | [
"Zhenwei Zhang",
"Ruiqi Wang",
"Ran Ding",
"Yuantao Gu"
] | 2023-09-30 06:08:37 | http://arxiv.org/abs/2310.00268v1 | http://arxiv.org/pdf/2310.00268v1 | 2310.00268v1 |
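As a rough illustration of the decomposition-then-detect idea above (not TADNet itself), the following sketch uses statsmodels' STL to split a synthetic weekly series into trend, seasonality, and residual, then flags points with extreme residuals via a robust z-score. The period, threshold, and injected anomaly are all assumptions.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
t = np.arange(730)
series = 0.01 * t + 2.0 * np.sin(2 * np.pi * t / 7) + 0.3 * rng.standard_normal(730)
series[400] += 6.0  # inject a point anomaly

# Seasonal-trend decomposition; the residual isolates irregular behavior
resid = STL(series, period=7).fit().resid

# Flag residuals that deviate strongly under a robust z-score
med = np.median(resid)
mad = np.median(np.abs(resid - med)) + 1e-9
z = 0.6745 * (resid - med) / mad
print("anomalies at:", np.where(np.abs(z) > 5)[0])  # expect index 400
```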
On Sinkhorn's Algorithm and Choice Modeling | For a broad class of choice and ranking models based on Luce's choice axiom,
including the Bradley--Terry--Luce and Plackett--Luce models, we show that the
associated maximum likelihood estimation problems are equivalent to a classic
matrix balancing problem with target row and column sums. This perspective
opens doors between two seemingly unrelated research areas, and allows us to
unify existing algorithms in the choice modeling literature as special
instances or analogs of Sinkhorn's celebrated algorithm for matrix balancing.
We draw inspiration from these connections and resolve important open problems
on the study of Sinkhorn's algorithm. We first prove the global linear
convergence of Sinkhorn's algorithm for non-negative matrices whenever finite
solutions to the matrix balancing problem exist. We characterize this global
rate of convergence in terms of the algebraic connectivity of the bipartite
graph constructed from data. Next, we also derive the sharp asymptotic rate of
linear convergence, which generalizes a classic result of Knight (2008), but
with a more explicit analysis that exploits an intrinsic orthogonality
structure. To our knowledge, these are the first quantitative linear
convergence results for Sinkhorn's algorithm for general non-negative matrices
and positive marginals. The connections we establish in this paper between
matrix balancing and choice modeling could help motivate further transmission
of ideas and interesting results in both directions. | [
"Zhaonan Qu",
"Alfred Galichon",
"Johan Ugander"
] | 2023-09-30 05:20:23 | http://arxiv.org/abs/2310.00260v1 | http://arxiv.org/pdf/2310.00260v1 | 2310.00260v1 |
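A minimal numpy sketch of Sinkhorn's algorithm for the matrix balancing problem discussed above; the iteration count and tolerance are arbitrary choices. Rows and columns are alternately rescaled until the target marginals are met (assuming a feasible, non-negative input matrix).

```python
import numpy as np

def sinkhorn_balance(A, row_sums, col_sums, n_iter=500, tol=1e-10):
    """Alternately scale rows and columns of a non-negative matrix A
    so its row/column sums match the given targets (when feasible)."""
    A = A.astype(float).copy()
    for _ in range(n_iter):
        A *= (row_sums / A.sum(axis=1))[:, None]   # match row sums
        A *= (col_sums / A.sum(axis=0))[None, :]   # match column sums
        if np.max(np.abs(A.sum(axis=1) - row_sums)) < tol:
            break
    return A

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = sinkhorn_balance(A, row_sums=np.ones(2), col_sums=np.ones(2))
print(B.sum(axis=1), B.sum(axis=0))  # both ~ [1, 1]
```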
Learning State-Augmented Policies for Information Routing in Communication Networks | This paper examines the problem of information routing in a large-scale
communication network, which can be formulated as a constrained statistical
learning problem having access to only local information. We delineate a novel
State Augmentation (SA) strategy to maximize the aggregate information at
source nodes using graph neural network (GNN) architectures, by deploying graph
convolutions over the topological links of the communication network. The
proposed technique leverages only the local information available at each node
and efficiently routes desired information to the destination nodes. We
leverage an unsupervised learning procedure to convert the output of the GNN
architecture to optimal information routing strategies. In the experiments, we
evaluate our algorithms on real-time network topologies. Numerical simulations
depict the improved performance of the
proposed method in training a GNN parameterization as compared to baseline
algorithms. | [
"Sourajit Das",
"Navid NaderiAlizadeh",
"Alejandro Ribeiro"
] | 2023-09-30 04:34:25 | http://arxiv.org/abs/2310.00248v2 | http://arxiv.org/pdf/2310.00248v2 | 2310.00248v2 |
Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning | Federated learning (FL) offers privacy-preserving decentralized machine
learning, optimizing models at edge clients without sharing private data.
Simultaneously, foundation models (FMs) have gained traction in the artificial
intelligence (AI) community due to their exceptional performance across various
tasks. However, integrating FMs into FL presents challenges, primarily due to
their substantial size and intensive resource requirements. This is especially
true when considering the resource heterogeneity in edge FL systems. We present
an adaptive framework for Resource-aware Federated Foundation Models (RaFFM) to
address these challenges. RaFFM introduces specialized model compression
algorithms tailored for FL scenarios, such as salient parameter prioritization
and high-performance subnetwork extraction. These algorithms enable dynamic
scaling of given transformer-based FMs to fit heterogeneous resource
constraints at the network edge during both FL's optimization and deployment
stages. Experimental results demonstrate that RaFFM shows significant
superiority in resource utilization efficiency and uses fewer resources to
deploy FMs to FL. Despite the lower resource consumption, target models
optimized by RaFFM achieve performance on par with traditional FL methods
applied to full-sized FMs. This is evident across tasks in both natural
language processing and computer vision domains. | [
"Sixing Yu",
"J. Pablo Muñoz",
"Ali Jannesari"
] | 2023-09-30 04:31:53 | http://arxiv.org/abs/2310.00247v2 | http://arxiv.org/pdf/2310.00247v2 | 2310.00247v2 |
A hybrid quantum-classical conditional generative adversarial network algorithm for human-centered paradigm in cloud | As an emerging field that aims to bridge the gap between human activities and
computing systems, human-centered computing (HCC) in the cloud, edge, and fog
has had a huge impact on artificial intelligence algorithms. The quantum
generative adversarial network (QGAN) is considered one of the quantum machine
learning algorithms with great application prospects, but it should also be
adapted to the human-centered paradigm. The generation process of QGAN is
largely random, and the generated model does not conform to the human-centered
concept, making it ill-suited to real scenarios. In
order to solve these problems, a hybrid quantum-classical conditional
generative adversarial network (QCGAN) algorithm is proposed, which is a
knowledge-driven human-computer interaction computing mode that can be
implemented in the cloud. The generation process is stabilized and
human-computer interaction is realized by feeding artificial conditional
information into the generator and discriminator. The generator uses a
parameterized quantum circuit with an all-to-all connected topology, which
facilitates the tuning of network parameters during training. The
discriminator uses a classical neural network, which effectively avoids the
"input bottleneck" of quantum machine learning. Finally, the BAS training set
is selected to conduct experiments on a quantum cloud computing platform. The
results show that the
QCGAN algorithm can effectively converge to the Nash equilibrium point after
training and perform human-centered classification generation tasks. | [
"Wenjie Liu",
"Ying Zhang",
"Zhiliang Deng",
"Jiaojiao Zhao",
"Lian Tong"
] | 2023-09-30 04:31:23 | http://arxiv.org/abs/2310.00246v1 | http://arxiv.org/pdf/2310.00246v1 | 2310.00246v1 |
AdaptNet: Policy Adaptation for Physics-Based Character Control | Motivated by humans' ability to adapt skills in the learning of new ones,
this paper presents AdaptNet, an approach for modifying the latent space of
existing policies to allow new behaviors to be learned from similar tasks more
quickly than learning from scratch. Building on top of a given
reinforcement learning controller, AdaptNet uses a two-tier hierarchy that
augments the original state embedding to support modest changes in a behavior
and further modifies the policy network layers to make more substantive
changes. The technique is shown to be effective for adapting existing
physics-based controllers to a wide range of new styles for locomotion, new
task targets, changes in character morphology and extensive changes in
environment. Furthermore, it exhibits a significant increase in learning
efficiency, as indicated by greatly reduced training times when compared to
training from scratch or using other approaches that modify existing policies.
Code is available at https://motion-lab.github.io/AdaptNet. | [
"Pei Xu",
"Kaixiang Xie",
"Sheldon Andrews",
"Paul G. Kry",
"Michael Neff",
"Morgan McGuire",
"Ioannis Karamouzas",
"Victor Zordan"
] | 2023-09-30 03:19:51 | http://arxiv.org/abs/2310.00239v2 | http://arxiv.org/pdf/2310.00239v2 | 2310.00239v2 |
CausalImages: An R Package for Causal Inference with Earth Observation, Bio-medical, and Social Science Images | The causalimages R package enables causal inference with image and image
sequence data, providing new tools for integrating novel data sources like
satellite and bio-medical imagery into the study of cause and effect. One set
of functions enables image-based causal inference analyses. For example, one
key function decomposes treatment effect heterogeneity by images using an
interpretable Bayesian framework. This allows for determining which types of
images or image sequences are most responsive to interventions. A second
modeling function allows researchers to control for confounding using images.
The package also allows investigators to produce embeddings that serve as
vector summaries of the image or video content. Finally, infrastructural
functions are also provided, such as tools for writing large-scale image and
image sequence data as sequentialized byte strings for more rapid image
analysis. causalimages therefore opens new capabilities for causal inference in
R, letting researchers use informative imagery in substantive analyses in a
fast and accessible manner. | [
"Connor T. Jerzak",
"Adel Daoud"
] | 2023-09-30 02:52:49 | http://arxiv.org/abs/2310.00233v2 | http://arxiv.org/pdf/2310.00233v2 | 2310.00233v2 |
Combining Spatial and Temporal Abstraction in Planning for Better Generalization | Inspired by human conscious planning, we propose Skipper, a model-based
reinforcement learning agent that utilizes spatial and temporal abstractions to
generalize learned skills in novel situations. It automatically decomposes the
task at hand into smaller-scale, more manageable subtasks and hence enables
sparse decision-making and focuses its computation on the relevant parts of the
environment. This relies on the definition of a high-level proxy problem
represented as a directed graph, in which vertices and edges are learned
end-to-end using hindsight. Our theoretical analyses provide performance
guarantees under appropriate assumptions and establish where our approach is
expected to be helpful. Generalization-focused experiments validate Skipper's
significant advantage in zero-shot generalization, compared to existing
state-of-the-art hierarchical planning methods. | [
"Mingde Zhao",
"Safa Alver",
"Harm van Seijen",
"Romain Laroche",
"Doina Precup",
"Yoshua Bengio"
] | 2023-09-30 02:25:18 | http://arxiv.org/abs/2310.00229v1 | http://arxiv.org/pdf/2310.00229v1 | 2310.00229v1 |
Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis | Conditional generative models typically demand large annotated training sets
to achieve high-quality synthesis. As a result, there has been significant
interest in designing models that perform plug-and-play generation, i.e., to
use a predefined or pretrained model, which is not explicitly trained on the
generative task, to guide the generative process (e.g., using language).
However, such guidance is typically useful only towards synthesizing high-level
semantics rather than editing fine-grained details as in image-to-image
translation tasks. To this end, and capitalizing on the powerful fine-grained
generative control offered by the recent diffusion-based generative models, we
introduce Steered Diffusion, a generalized framework for photorealistic
zero-shot conditional image generation using a diffusion model trained for
unconditional generation. The key idea is to steer the image generation of the
diffusion model at inference time via designing a loss using a pre-trained
inverse model that characterizes the conditional task. This loss modulates the
sampling trajectory of the diffusion process. Our framework allows for easy
incorporation of multiple conditions during inference. We present experiments
using steered diffusion on several tasks including inpainting, colorization,
text-guided semantic editing, and image super-resolution. Our results
demonstrate clear qualitative and quantitative improvements over
state-of-the-art diffusion-based plug-and-play models while adding negligible
additional computational cost. | [
"Nithin Gopalakrishnan Nair",
"Anoop Cherian",
"Suhas Lohit",
"Ye Wang",
"Toshiaki Koike-Akino",
"Vishal M. Patel",
"Tim K. Marks"
] | 2023-09-30 02:03:22 | http://arxiv.org/abs/2310.00224v1 | http://arxiv.org/pdf/2310.00224v1 | 2310.00224v1 |
Beyond Random Noise: Insights on Anonymization Strategies from a Latent Bandit Study | This paper investigates the issue of privacy in a learning scenario where
users share knowledge for a recommendation task. Our study contributes to the
growing body of research on privacy-preserving machine learning and underscores
the need for tailored privacy techniques that address specific attack patterns
rather than relying on one-size-fits-all solutions. We use the latent bandit
setting to evaluate the trade-off between privacy and recommender performance
by employing various aggregation strategies, such as averaging, nearest
neighbor, and clustering combined with noise injection. More specifically, we
simulate a linkage attack scenario leveraging publicly available auxiliary
information acquired by the adversary. Our results on three open real-world
datasets reveal that adding noise using the Laplace mechanism to an individual
user's data record is a poor choice. It provides the highest regret for any
noise level, relative to de-anonymization probability and the ADS metric.
Instead, one should combine noise with appropriate aggregation strategies. For
example, using averages from clusters of different sizes provides flexibility
not achievable by varying the amount of noise alone. Generally, no single
aggregation strategy can consistently achieve the optimum regret for a given
desired level of privacy. | [
"Alexander Galozy",
"Sadi Alawadi",
"Victor Kebande",
"Sławomir Nowaczyk"
] | 2023-09-30 01:56:04 | http://arxiv.org/abs/2310.00221v1 | http://arxiv.org/pdf/2310.00221v1 | 2310.00221v1 |
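To illustrate the entry's central contrast, here is a toy numpy sketch of the Laplace mechanism applied per record versus after averaging; the sensitivity values, group sizes, and contiguous "clusters" are illustrative assumptions, not the paper's setup. Averaging over k users shrinks per-user sensitivity by 1/k, so the same privacy budget costs far less noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_noise(shape, sensitivity, epsilon):
    # Laplace mechanism: scale = sensitivity / epsilon for a query
    # with the given L1 sensitivity
    return rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=shape)

# Toy user records (rows = users, features in [0, 1], so sensitivity 1)
records = rng.uniform(size=(100, 8))

# Per-record noise: each user's record is perturbed individually
noisy_records = records + laplace_noise(records.shape, 1.0, epsilon=1.0)

# Aggregate-then-noise: averaging k users divides sensitivity by k,
# so the same epsilon needs far less noise per released value
k = 10
clusters = records.reshape(10, k, 8).mean(axis=1)  # contiguous toy clusters
noisy_clusters = clusters + laplace_noise(clusters.shape, 1.0 / k, epsilon=1.0)

print("per-record noise:", np.abs(noisy_records - records).mean())
print("per-cluster noise:", np.abs(noisy_clusters - clusters).mean())
```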
Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment | Large Language Models (LLMs) can acquire extensive world knowledge through
pre-training on large corpora. However, due to exposure to low-quality data,
LLMs may exhibit harmful behavior without aligning with human values. The
dominant approach for steering LLMs towards beneficial behavior involves
Reinforcement Learning with Human Feedback (RLHF), with Proximal Policy
Optimization (PPO) serving as the default RL optimizer. Despite its
effectiveness, PPO has limitations when optimizing rewards trained from
comparison-based loss. Primarily, PPO is not invariant to equivalent reward
functions containing identical preference information due to the need to
calibrate the reward scale. Additionally, PPO's necessity for token-wise
updates introduces complexity in both function approximation and algorithm
design compared to trajectory-wise optimization. This paper proposes a new
framework, reinforcement learning with relative feedback, and a novel
trajectory-wise policy gradient algorithm, Pairwise Proximal Policy
Optimization (P3O) that operates directly on comparative rewards. We show
theoretically that P3O is invariant to equivalent rewards and avoids the
complexity of PPO. Empirical evaluations demonstrate that P3O outperforms PPO
in the KL-Reward trade-off and can align with human preferences as well as or
better than prior methods. In summary, this work introduces a simpler yet
effective approach for aligning LLMs to human preferences through relative
feedback. | [
"Tianhao Wu",
"Banghua Zhu",
"Ruoyu Zhang",
"Zhaojin Wen",
"Kannan Ramchandran",
"Jiantao Jiao"
] | 2023-09-30 01:23:22 | http://arxiv.org/abs/2310.00212v3 | http://arxiv.org/pdf/2310.00212v3 | 2310.00212v3 |
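A rough, REINFORCE-style sketch of a trajectory-wise pairwise update in the spirit of relative feedback; this is an illustrative surrogate, not the paper's exact P3O estimator, and the toy "responses" and rewards are assumptions. Because only reward differences enter the loss, adding a constant to all rewards leaves the gradient unchanged.

```python
import torch

# Toy "policy" over four candidate responses; logits are its parameters
logits = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
rewards = torch.tensor([0.0, 1.0, 2.0, 5.0])  # comparative reward signal

for _ in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    a, b = dist.sample(), dist.sample()  # a pair of sampled responses
    # Pairwise surrogate: the reward *difference* weights the log-prob
    # difference, so shifting all rewards by a constant changes nothing
    loss = -(rewards[a] - rewards[b]) * (dist.log_prob(a) - dist.log_prob(b))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))  # mass concentrates on response 3
```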
Accelerating Non-IID Federated Learning via Heterogeneity-Guided Client Sampling | Statistical heterogeneity of data present at client devices in a federated
learning (FL) system renders the training of a global model in such systems
difficult. Particularly challenging are the settings where due to resource
constraints only a small fraction of clients can participate in any given round
of FL. Recent approaches to training a global model in FL systems with non-IID
data have focused on developing client selection methods that aim to sample
clients with more informative updates of the model. However, existing client
selection techniques either introduce significant computation overhead or
perform well only in the scenarios where clients have data with similar
heterogeneity profiles. In this paper, we propose HiCS-FL (Federated Learning
via Hierarchical Clustered Sampling), a novel client selection method in which
the server estimates statistical heterogeneity of a client's data using the
client's update of the network's output layer and relies on this information to
cluster and sample the clients. We analyze the ability of the proposed
techniques to compare heterogeneity of different datasets, and characterize
convergence of the training process that deploys the introduced client
selection method. Extensive experimental results demonstrate that in non-IID
settings HiCS-FL achieves faster convergence and lower training variance than
state-of-the-art FL client selection schemes. Notably, HiCS-FL drastically
reduces computation cost compared to existing selection schemes and is
adaptable to different heterogeneity scenarios. | [
"Huancheng Chen",
"Haris Vikalo"
] | 2023-09-30 00:29:30 | http://arxiv.org/abs/2310.00198v1 | http://arxiv.org/pdf/2310.00198v1 | 2310.00198v1 |
On the Equivalence of Graph Convolution and Mixup | This paper investigates the relationship between graph convolution and Mixup
techniques. Graph convolution in a graph neural network involves aggregating
features from neighboring samples to learn representative features for a
specific node or sample. On the other hand, Mixup is a data augmentation
technique that generates new examples by averaging features and one-hot labels
from multiple samples. One commonality between these techniques is their
utilization of information from multiple samples to derive feature
representations. This study explores whether a connection exists between
these two approaches. Our investigation reveals that, under two mild
conditions, graph convolution can be viewed as a specialized form of Mixup that
is applied during both the training and testing phases. The two conditions are:
1) \textit{Homophily Relabel} - assigning the target node's label to all its
neighbors, and 2) \textit{Test-Time Mixup} - applying Mixup to features at
test time. We establish this equivalence mathematically by demonstrating that graph
convolution networks (GCN) and simplified graph convolution (SGC) can be
expressed as a form of Mixup. We also empirically verify the equivalence by
training an MLP using the two conditions to achieve comparable performance. | [
"Xiaotian Han",
"Hanqing Zeng",
"Yu Chen",
"Shaoliang Nie",
"Jingzhou Liu",
"Kanika Narang",
"Zahra Shakeri",
"Karthik Abinav Sankararaman",
"Song Jiang",
"Madian Khabsa",
"Qifan Wang",
"Xia Hu"
] | 2023-09-29 23:09:54 | http://arxiv.org/abs/2310.00183v1 | http://arxiv.org/pdf/2310.00183v1 | 2310.00183v1 |
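For readers who want the two ingredients side by side, here is a small numpy sketch (the feature vectors are illustrative; this is not the paper's formal construction): standard Mixup of two samples, next to the neighborhood average that graph convolution performs.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Mixup: convex combination of two samples' features and one-hot labels
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.array([1.0, 0.0]), np.array([1, 0]),
                     np.array([0.0, 1.0]), np.array([0, 1]),
                     rng=np.random.default_rng(0))
print(x_mix, y_mix)

# Graph-convolution analogy: aggregating a neighborhood averages neighbor
# features -- a fixed-weight "mixup" over several samples at once
x_node = np.array([1.0, 0.0])
x_neighbors = np.array([[0.5, 0.5], [0.0, 1.0]])
print(np.vstack([x_node[None, :], x_neighbors]).mean(axis=0))
```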
MARL: Multi-scale Archetype Representation Learning for Urban Building Energy Modeling | Building archetypes, representative models of building stock, are crucial for
precise energy simulations in Urban Building Energy Modeling. The current
widely adopted building archetypes are developed on a nationwide scale,
potentially neglecting the impact of local buildings' geometric specificities.
We present Multi-scale Archetype Representation Learning (MARL), an approach
that leverages representation learning to extract geometric features from a
specific building stock. Built upon VQ-AE, MARL encodes building footprints and
purifies geometric information into latent vectors constrained by multiple
architectural downstream tasks. These tailored representations are proven
valuable for further clustering and building energy modeling. The advantages of
our algorithm are its adaptability to different building footprint sizes, the
ability to automatically generate archetypes across multi-scale
regions, and the preservation of geometric features across neighborhoods and
local ecologies. In our study spanning five regions in LA County, we show MARL
surpasses both conventional and VQ-AE extracted archetypes in performance.
Results demonstrate that geometric feature embeddings significantly improve the
accuracy and reliability of energy consumption estimates. Code, dataset and
trained models are publicly available:
https://github.com/ZixunHuang1997/MARL-BuildingEnergyEstimation | [
"Xinwei Zhuang",
"Zixun Huang",
"Wentao Zeng",
"Luisa Caldas"
] | 2023-09-29 22:56:19 | http://arxiv.org/abs/2310.00180v1 | http://arxiv.org/pdf/2310.00180v1 | 2310.00180v1 |
Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity | The traditional notion of "Junk DNA" has long been linked to non-coding
segments within the human genome, constituting roughly 98% of its composition.
However, recent research has unveiled the critical roles some of these
seemingly non-functional DNA sequences play in cellular processes.
Intriguingly, the weights within deep neural networks exhibit a remarkable
similarity to the redundancy observed in human genes. It was believed that
weights in gigantic models contained excessive redundancy, and could be removed
without compromising performance. This paper challenges this conventional
wisdom by presenting a compelling counter-argument. We employ sparsity as a
tool to isolate and quantify the nuanced significance of low-magnitude weights
in pre-trained large language models (LLMs). Our study demonstrates a strong
correlation between these weight magnitudes and the knowledge they encapsulate,
from a downstream task-centric angle. We raise the "Junk DNA Hypothesis", backed
by our in-depth investigation: while small-magnitude weights may appear
"useless" for simple tasks and suitable for pruning, they actually encode
crucial knowledge necessary for solving more difficult downstream tasks.
Removing these seemingly insignificant weights can lead to irreversible
knowledge forgetting and performance damage in difficult tasks. These findings
offer fresh insights into how LLMs encode knowledge in a task-sensitive manner,
point to future research directions in model pruning, and open avenues for
task-aware conditional computation during inference. | [
"Lu Yin",
"Shiwei Liu",
"Ajay Jaiswal",
"Souvik Kundu",
"Zhangyang Wang"
] | 2023-09-29 22:55:06 | http://arxiv.org/abs/2310.02277v1 | http://arxiv.org/pdf/2310.02277v1 | 2310.02277v1 |
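A minimal PyTorch sketch of the magnitude pruning the study builds on (the sparsity level and tensor size are arbitrary choices): the zeroed entries are exactly the low-magnitude "junk DNA" weights whose task-dependent importance the entry argues for.

```python
import torch

def magnitude_prune(weight, sparsity):
    """Zero out the smallest-magnitude fraction of a weight tensor."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

w = torch.randn(4, 4)
print(magnitude_prune(w, sparsity=0.5))
```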
A Neural-preconditioned Poisson Solver for Mixed Dirichlet and Neumann Boundary Conditions | We introduce a neural-preconditioned iterative solver for Poisson equations
with mixed boundary conditions. The Poisson equation is ubiquitous in
scientific computing: it governs a wide array of physical phenomena, arises as
a subproblem in many numerical algorithms, and serves as a model problem for
the broader class of elliptic PDEs. The most popular Poisson discretizations
yield large sparse linear systems. At high resolution, and for
performance-critical applications, iterative solvers can be advantageous for
these -- but only when paired with powerful preconditioners. The core of our
solver is a neural network trained to approximate the inverse of a discrete
structured-grid Laplace operator for a domain of arbitrary shape and with mixed
boundary conditions. The structure of this problem motivates a novel network
architecture that we demonstrate is highly effective as a preconditioner even
for boundary conditions outside the training set. We show that on challenging
test cases arising from an incompressible fluid simulation, our method
outperforms state-of-the-art solvers like algebraic multigrid as well as some
recent neural preconditioners. | [
"Kai Weixian Lan",
"Elias Gueidon",
"Ayano Kaneda",
"Julian Panetta",
"Joseph Teran"
] | 2023-09-29 22:49:47 | http://arxiv.org/abs/2310.00177v3 | http://arxiv.org/pdf/2310.00177v3 | 2310.00177v3 |
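To see where a learned preconditioner slots in, here is a generic preconditioned conjugate gradient sketch in numpy. The 1D Poisson matrix and Jacobi stand-in preconditioner are illustrative assumptions; in the paper, a neural network plays the role of the `precond` callable.

```python
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient; `precond` approximates A^{-1}
    (here a plain function, standing in for a learned preconditioner)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson (tridiagonal Laplacian) with a Jacobi stand-in preconditioner
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, precond=lambda r: r / np.diag(A))
print(np.linalg.norm(A @ x - b))
```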
Tight Bounds for Volumetric Spanners and Applications | Given a set of points of interest, a volumetric spanner is a subset of the
points using which all the points can be expressed using "small" coefficients
(measured in an appropriate norm). Formally, given a set of vectors $X = \{v_1,
v_2, \dots, v_n\}$, the goal is to find $T \subseteq [n]$ such that every $v
\in X$ can be expressed as $\sum_{i\in T} \alpha_i v_i$, with $\|\alpha\|$
being small. This notion, which has also been referred to as a well-conditioned
basis, has found several applications, including bandit linear optimization,
determinant maximization, and matrix low rank approximation. In this paper, we
give almost optimal bounds on the size of volumetric spanners for all $\ell_p$
norms, and show that they can be constructed using a simple local search
procedure. We then show the applications of our result to other tasks and in
particular the problem of finding coresets for the Minimum Volume Enclosing
Ellipsoid (MVEE) problem. | [
"Aditya Bhaskara",
"Sepideh Mahabadi",
"Ali Vakilian"
] | 2023-09-29 22:43:30 | http://arxiv.org/abs/2310.00175v1 | http://arxiv.org/pdf/2310.00175v1 | 2310.00175v1 |
ADMET property prediction through combinations of molecular fingerprints | While investigating methods to predict small molecule potencies, we found
random forests or support vector machines paired with extended-connectivity
fingerprints (ECFP) consistently outperformed recently developed methods. A
detailed investigation into regression algorithms and molecular fingerprints
revealed gradient-boosted decision trees, particularly CatBoost, in conjunction
with a combination of ECFP, Avalon, and ErG fingerprints, as well as 200
molecular properties, to be most effective. Incorporating a graph neural
network fingerprint further enhanced performance. We successfully validated our
model across 22 Therapeutics Data Commons ADMET benchmarks. Our findings
underscore the significance of richer molecular representations for accurate
property prediction. | [
"James H. Notwell",
"Michael W. Wood"
] | 2023-09-29 22:39:18 | http://arxiv.org/abs/2310.00174v1 | http://arxiv.org/pdf/2310.00174v1 | 2310.00174v1 |
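A minimal sketch of the fingerprint-plus-CatBoost recipe described above, assuming RDKit and CatBoost are installed; the SMILES strings and target values are made up, and only ECFP is shown rather than the full Avalon/ErG/property combination.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from catboost import CatBoostRegressor

def ecfp(smiles, radius=2, n_bits=2048):
    """Extended-connectivity (Morgan) fingerprint as a dense bit array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp)

# Tiny illustrative set: SMILES with made-up target values
smiles = ["CCO", "CCN", "c1ccccc1", "CC(=O)O"]
y = np.array([0.2, 0.4, 1.1, 0.3])
X = np.vstack([ecfp(s) for s in smiles])

model = CatBoostRegressor(iterations=50, verbose=0)
model.fit(X, y)
print(model.predict(X[:1]))
```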
Motif: Intrinsic Motivation from Artificial Intelligence Feedback | Exploring rich environments and evaluating one's actions without prior
knowledge is immensely challenging. In this paper, we propose Motif, a general
method to interface such prior knowledge from a Large Language Model (LLM) with
an agent. Motif is based on the idea of grounding LLMs for decision-making
without requiring them to interact with the environment: it elicits preferences
from an LLM over pairs of captions to construct an intrinsic reward, which is
then used to train agents with reinforcement learning. We evaluate Motif's
performance and behavior on the challenging, open-ended and
procedurally-generated NetHack game. Surprisingly, by only learning to maximize
its intrinsic reward, Motif achieves a higher game score than an algorithm
directly trained to maximize the score itself. When combining Motif's intrinsic
reward with the environment reward, our method significantly outperforms
existing approaches and makes progress on tasks where no advancements have ever
been made without demonstrations. Finally, we show that Motif mostly generates
intuitive human-aligned behaviors which can be steered easily through prompt
modifications, while scaling well with the LLM size and the amount of
information given in the prompt. | [
"Martin Klissarov",
"Pierluca D'Oro",
"Shagun Sodhani",
"Roberta Raileanu",
"Pierre-Luc Bacon",
"Pascal Vincent",
"Amy Zhang",
"Mikael Henaff"
] | 2023-09-29 22:10:01 | http://arxiv.org/abs/2310.00166v1 | http://arxiv.org/pdf/2310.00166v1 | 2310.00166v1 |
SCoRe: Submodular Combinatorial Representation Learning for Real-World Class-Imbalanced Settings | Representation Learning in real-world class-imbalanced settings has emerged
as a challenging task in the evolution of deep learning. Lack of diversity in
visual and structural features for rare classes restricts modern neural
networks to learn discriminative feature clusters. This manifests in the form
of large inter-class bias between rare object classes and elevated intra-class
variance among abundant classes in the dataset. Although deep metric learning
approaches have shown promise in this domain, significant improvements need to
be made to overcome the challenges associated with class imbalance in
mission-critical tasks like autonomous navigation and medical diagnostics. Set-based
combinatorial functions like Submodular Information Measures exhibit properties
that allow them to simultaneously model diversity and cooperation among feature
clusters. In this paper, we introduce the SCoRe (Submodular Combinatorial
Representation Learning) framework and propose a family of Submodular
Combinatorial Loss functions to overcome these pitfalls in contrastive
learning. We also show that existing contrastive learning approaches are either
submodular or can be re-formulated to create their submodular counterparts. We
conduct experiments on the newly introduced family of combinatorial objectives
on two image classification benchmarks - pathologically imbalanced CIFAR-10,
subsets of MedMNIST and a real-world road object detection benchmark - India
Driving Dataset (IDD). Our experiments clearly show that the newly introduced
objectives like Facility Location, Graph-Cut and Log Determinant outperform
state-of-the-art metric learners by up to 7.6% for the imbalanced
classification tasks and up to 19.4% for object detection tasks. | [
"Anay Majee",
"Suraj Kothawade",
"Krishnateja Killiamsetty",
"Rishabh Iyer"
] | 2023-09-29 22:09:07 | http://arxiv.org/abs/2310.00165v1 | http://arxiv.org/pdf/2310.00165v1 | 2310.00165v1 |
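As one concrete member of the submodular family named above, here is a greedy maximizer for the facility-location function in numpy. The RBF similarity and random feature matrix are illustrative assumptions; the paper uses such functions as losses, whereas this sketch only shows the set-function behavior.

```python
import numpy as np

def facility_location_greedy(sim, k):
    """Greedy maximizer of f(S) = sum_i max_{j in S} sim[i, j]:
    pick k representatives so every point has a similar selected one."""
    n = sim.shape[0]
    selected, best = [], np.zeros(n)  # best[i] = max similarity to S so far
    for _ in range(k):
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        j = int(np.argmax(gains))  # element with the largest marginal gain
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    return selected

rng = np.random.default_rng(0)
feats = rng.standard_normal((30, 8))
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
sim = np.exp(-d2 / d2.mean())  # RBF similarity (illustrative choice)
print(facility_location_greedy(sim, k=3))
```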
Detection-Oriented Image-Text Pretraining for Open-Vocabulary Detection | We present a new open-vocabulary detection approach based on
detection-oriented image-text pretraining to bridge the gap between image-level
pretraining and open-vocabulary object detection. At the pretraining phase, we
replace the commonly used classification architecture with the detector
architecture, which better serves the region-level recognition needs of
detection by enabling the detector heads to learn from noisy image-text pairs.
Using only standard contrastive loss and no pseudo-labeling, our approach is a
simple yet effective extension of the contrastive learning method to learn
emergent object-semantic cues. In addition, we propose a shifted-window
learning approach upon window attention to make the backbone representation
more robust, translation-invariant, and less biased by the window pattern. On
the popular LVIS open-vocabulary detection benchmark, our approach sets a new
state of the art of 40.4 mask AP$_r$ using the common ViT-L backbone,
significantly outperforming the best existing approach by +6.5 mask AP$_r$ at
system level. On the COCO benchmark, we achieve very competitive 40.8 novel AP
without pseudo labeling or weak supervision. In addition, we evaluate our
approach on the transfer detection setup, where ours outperforms the baseline
significantly. Visualization reveals emerging object locality from the
pretraining recipes compared to the baseline. Code and models will be publicly
released. | [
"Dahun Kim",
"Anelia Angelova",
"Weicheng Kuo"
] | 2023-09-29 21:56:37 | http://arxiv.org/abs/2310.00161v1 | http://arxiv.org/pdf/2310.00161v1 | 2310.00161v1 |
Feedback-guided Data Synthesis for Imbalanced Classification | Current status quo in machine learning is to use static datasets of real
images for training, which often come from long-tailed distributions. With the
recent advances in generative models, researchers have started augmenting these
static datasets with synthetic data, reporting moderate performance
improvements on classification tasks. We hypothesize that these performance
gains are limited by the lack of feedback from the classifier to the generative
model, which would promote the usefulness of the generated samples to improve
the classifier's performance. In this work, we introduce a framework for
augmenting static datasets with useful synthetic samples, which leverages
one-shot feedback from the classifier to drive the sampling of the generative
model. In order for the framework to be effective, we find that the samples
must be close to the support of the real data of the task at hand, and be
sufficiently diverse. We validate three feedback criteria on a long-tailed
dataset (ImageNet-LT) as well as a group-imbalanced dataset (NICO++). On
ImageNet-LT, we achieve state-of-the-art results, with over 4 percent
improvement on underrepresented classes while being twice as efficient in terms of
the number of generated synthetic samples. NICO++ also enjoys marked boosts of
over 5 percent in worst group accuracy. With these results, our framework paves
the path towards effectively leveraging state-of-the-art text-to-image models
as data sources that can be queried to improve downstream applications. | [
"Reyhane Askari Hemmat",
"Mohammad Pezeshki",
"Florian Bordes",
"Michal Drozdzal",
"Adriana Romero-Soriano"
] | 2023-09-29 21:47:57 | http://arxiv.org/abs/2310.00158v1 | http://arxiv.org/pdf/2310.00158v1 | 2310.00158v1 |
Primal-Dual Continual Learning: Stability and Plasticity through Lagrange Multipliers | Continual learning is inherently a constrained learning problem. The goal is
to learn a predictor under a \emph{no-forgetting} requirement. Although several
prior studies formulate it as such, they do not solve the constrained problem
explicitly. In this work, we show that it is both possible and beneficial to
undertake the constrained optimization problem directly. To do this, we
leverage recent results in constrained learning through Lagrangian duality. We
focus on memory-based methods, where a small subset of samples from previous
tasks can be stored in a replay buffer. In this setting, we analyze two
versions of the continual learning problem: a coarse approach with constraints
at the task level and a fine approach with constraints at the sample level. We
show that dual variables indicate the sensitivity of the optimal value with
respect to constraint perturbations. We then leverage this result to partition
the buffer in the coarse approach, allocating more resources to harder tasks,
and to populate the buffer in the fine approach, including only impactful
samples. We derive sub-optimality bounds, and empirically corroborate our
theoretical results in various continual learning benchmarks. We also discuss
the limitations of these methods with respect to the amount of memory available
and the number of constraints involved in the optimization problem. | [
"Juan Elenter",
"Navid NaderiAlizadeh",
"Tara Javidi",
"Alejandro Ribeiro"
] | 2023-09-29 21:23:27 | http://arxiv.org/abs/2310.00154v1 | http://arxiv.org/pdf/2310.00154v1 | 2310.00154v1 |
One for All: Towards Training One Graph Model for All Classification Tasks | Designing a single model that addresses multiple tasks has been a
long-standing objective in artificial intelligence. Recently, large language
models have demonstrated exceptional capability in integrating and solving
different tasks within the language domain. However, a unified model for
various tasks on graphs remains underexplored, primarily due to the challenges
unique to the graph learning domain. First, graph data from different areas
carry distinct attributes and follow different distributions. Such discrepancy
makes it hard to represent graphs in a single representation space. Second,
tasks on graphs diversify into node, link, and graph tasks, requiring distinct
embedding strategies. Finally, an appropriate graph prompting paradigm for
in-context learning is unclear. Striving to handle all the aforementioned
challenges, we propose One for All (OFA), the first general framework that can
use a single graph model to address the above challenges. Specifically, OFA
proposes text-attributed graphs to unify different graph data by describing
nodes and edges with natural language and uses language models to encode the
diverse and possibly cross-domain text attributes to feature vectors in the
same embedding space. Furthermore, OFA introduces the concept of
nodes-of-interest to standardize different tasks with a single task
representation. For in-context learning on graphs, OFA introduces a novel graph
prompting paradigm that appends prompting substructures to the input graph,
which enables it to address varied tasks without fine-tuning. We train the OFA
model using graph data from multiple domains (including citation networks,
molecular graphs, knowledge graphs, etc.) simultaneously and evaluate its
ability in supervised, few-shot, and zero-shot learning scenarios. OFA performs
well across different tasks, making it the first general-purpose graph
classification model across domains. | [
"Hao Liu",
"Jiarui Feng",
"Lecheng Kong",
"Ningyue Liang",
"Dacheng Tao",
"Yixin Chen",
"Muhan Zhang"
] | 2023-09-29 21:15:26 | http://arxiv.org/abs/2310.00149v1 | http://arxiv.org/pdf/2310.00149v1 | 2310.00149v1 |
Probabilistic Sampling-Enhanced Temporal-Spatial GCN: A Scalable Framework for Transaction Anomaly Detection in Ethereum Networks | The rapid evolution of the Ethereum network necessitates sophisticated
techniques to ensure its robustness against potential threats and to maintain
transparency. While Graph Neural Networks (GNNs) have pioneered anomaly
detection in such platforms, capturing the intricacies of both spatial and
temporal transactional patterns has remained a challenge. This study presents a
fusion of Graph Convolutional Networks (GCNs) with Temporal Random Walks (TRW)
enhanced by probabilistic sampling to bridge this gap. Our approach, unlike
traditional GCNs, leverages the strengths of TRW to discern complex temporal
sequences in Ethereum transactions, thereby providing a more nuanced
transaction anomaly detection mechanism. Preliminary evaluations demonstrate
that our TRW-GCN framework substantially advances the performance metrics over
conventional GCNs in detecting anomalies and transaction bursts. This research
not only underscores the potential of temporal cues in Ethereum transactional
data but also offers a scalable and effective methodology for ensuring the
security and transparency of decentralized platforms. By harnessing both
spatial relationships and time-based transactional sequences as node features,
our model introduces an additional layer of granularity, making the detection
process more robust and less prone to false positives. This work lays the
foundation for future research aimed at optimizing and enhancing the
transparency of blockchain technologies, and serves as a testament to the
significance of considering both time and space dimensions in the ever-evolving
landscape of decentralized platforms. | [
"Stefan Kambiz Behfar",
"Jon Crowcroft"
] | 2023-09-29 21:08:21 | http://arxiv.org/abs/2310.00144v1 | http://arxiv.org/pdf/2310.00144v1 | 2310.00144v1 |
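A toy sketch of the temporal-random-walk ingredient; the edge list and walk length are made-up assumptions, and the real model feeds such walks into a GCN. Walks may only traverse edges with non-decreasing timestamps, so they respect transaction order.

```python
import random

# Toy transaction graph: (sender, receiver, timestamp)
edges = [(0, 1, 1), (1, 3, 2), (1, 2, 3), (2, 3, 4), (3, 0, 5), (2, 0, 6)]

out = {}  # outgoing edges indexed by source node
for u, v, t in edges:
    out.setdefault(u, []).append((v, t))

def temporal_walk(start, length, rng):
    """Walk that only follows edges with non-decreasing timestamps,
    respecting the causal order of transactions."""
    walk, node, now = [start], start, float("-inf")
    for _ in range(length):
        candidates = [(v, t) for v, t in out.get(node, []) if t >= now]
        if not candidates:
            break
        node, now = rng.choice(candidates)
        walk.append(node)
    return walk

rng = random.Random(0)
print([temporal_walk(0, 4, rng) for _ in range(3)])
```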
GASS: Generalizing Audio Source Separation with Large-scale Data | Universal source separation aims at separating the audio sources of an
arbitrary mix, removing the constraint to operate on a specific domain like
speech or music. Yet, the potential of universal source separation remains
limited because most existing works focus on mixes with predominantly sound
events, and small training datasets also constrain supervised learning. Here,
we study a single general audio source separation (GASS) model trained to
separate speech, music, and sound events in a supervised fashion with a
large-scale dataset. We assess GASS models on a diverse set of tasks. Our
strong in-distribution results show the feasibility of GASS models, and the
competitive out-of-distribution performance in sound event and speech
separation shows its generalization abilities. Yet, it is challenging for GASS
models to generalize for separating out-of-distribution cinematic and music
content. We also fine-tune GASS models on each dataset and consistently
outperform the ones without pre-training. All fine-tuned models (except the
music separation one) obtain state-of-the-art results in their respective
benchmarks. | [
"Jordi Pons",
"Xiaoyu Liu",
"Santiago Pascual",
"Joan Serrà"
] | 2023-09-29 21:02:07 | http://arxiv.org/abs/2310.00140v1 | http://arxiv.org/pdf/2310.00140v1 | 2310.00140v1 |