title | abstract | authors | published | url | pdf_url | arxiv_id |
---|---|---|---|---|---|---|
Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability | Causal machine learning tools are beginning to see use in real-world policy
evaluation tasks to flexibly estimate treatment effects. One issue with these
methods is that the machine learning models used are generally black boxes,
i.e., there is no globally interpretable way to understand how a model makes
estimates. This is a clear problem in policy evaluation applications,
particularly in government, because it is difficult to understand whether such
models are functioning in ways that are fair, based on the correct
interpretation of evidence, and transparent enough to allow for accountability
if things go wrong. However, there has been little discussion of transparency
problems in the causal machine learning literature and how these might be
overcome. This paper explores why transparency issues are a problem for causal
machine learning in public policy evaluation applications and considers ways
these problems might be addressed through explainable AI tools and by
simplifying models in line with interpretable AI principles. It then applies
these ideas to a case-study using a causal forest model to estimate conditional
average treatment effects for a hypothetical change in the school leaving age
in Australia. It shows that existing tools for understanding black-box
predictive models are poorly suited to causal machine learning and that
simplifying the model to make it interpretable leads to an unacceptable
increase in error (in this application). It concludes that new tools are needed
to properly understand causal machine learning models and the algorithms that
fit them. | [
"Patrick Rehill",
"Nicholas Biddle"
] | 2023-10-20 02:48:29 | http://arxiv.org/abs/2310.13240v1 | http://arxiv.org/pdf/2310.13240v1 | 2310.13240v1 |
Training A Semantic Communication System with Federated Learning | Semantic communication has emerged as a pillar for the next generation of
communication systems due to its capabilities in alleviating data redundancy.
Most semantic communication systems are built using advanced deep learning
models whose performance heavily depends on data availability. These studies
assume that an abundance of training data is available, which is unrealistic.
In practice, data is mainly created on the user side. Due to privacy and
security concerns, the transmission of data is restricted, which is necessary
for conventional centralized training schemes. To address this challenge, we
explore semantic communication in a federated learning (FL) setting that utilizes
user data without leaking privacy. Additionally, we design our system to tackle
the communication overhead by reducing the quantity of information delivered in
each global round. In this way, we can save significant bandwidth for
resource-limited devices and reduce overall network traffic. Finally, we
propose a mechanism to aggregate the global model from the clients, called
FedLol. Extensive simulation results demonstrate the efficacy of our proposed
technique compared to baseline methods. | [
"Loc X. Nguyen",
"Huy Q. Le",
"Ye Lin Tun",
"Pyae Sone Aung",
"Yan Kyaw Tun",
"Zhu Han",
"Choong Seon Hong"
] | 2023-10-20 02:45:20 | http://arxiv.org/abs/2310.13236v1 | http://arxiv.org/pdf/2310.13236v1 | 2310.13236v1 |
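
The FedLol aggregation rule itself is not described in the abstract above. As a point of reference, the sketch below implements the standard FedAvg-style weighted aggregation that such a mechanism would replace; all function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style baseline).

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   list of local dataset sizes used as aggregation weights
    """
    total = float(sum(client_sizes))
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# toy usage: two clients, one 2x2 parameter tensor each
clients = [{"layer.w": np.ones((2, 2))}, {"layer.w": np.zeros((2, 2))}]
sizes = [30, 10]
print(fedavg_aggregate(clients, sizes)["layer.w"])  # 0.75 everywhere
```
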
Multi-level Contrastive Learning for Script-based Character Understanding | In this work, we tackle the scenario of understanding characters in scripts,
which aims to learn the characters' personalities and identities from their
utterances. We begin by analyzing several challenges in this scenario, and then
propose a multi-level contrastive learning framework to capture characters'
global information in a fine-grained manner. To validate the proposed
framework, we conduct extensive experiments on three character understanding
sub-tasks by comparing with strong pre-trained language models, including
SpanBERT, Longformer, BigBird and ChatGPT-3.5. Experimental results demonstrate
that our method improves the performances by a considerable margin. Through
further in-depth analysis, we show the effectiveness of our method in
addressing the challenges and provide more hints on the scenario of character
understanding. We will open-source our work on github at
https://github.com/David-Li0406/Script-based-Character-Understanding. | [
"Dawei Li",
"Hengyuan Zhang",
"Yanran Li",
"Shiping Yang"
] | 2023-10-20 02:40:52 | http://arxiv.org/abs/2310.13231v1 | http://arxiv.org/pdf/2310.13231v1 | 2310.13231v1 |
Absolute Policy Optimization | In recent years, trust region on-policy reinforcement learning has achieved
impressive results in addressing complex control tasks and gaming scenarios.
However, contemporary state-of-the-art algorithms within this category
primarily emphasize improvement in expected performance, lacking the ability to
control the worst-case performance outcomes. To address this limitation,
we introduce a novel objective function; optimizing it leads to
guaranteed monotonic improvement in the lower bound of near-total performance
samples (absolute performance). Considering this groundbreaking theoretical
advancement, we then refine this theoretically grounded algorithm through a
series of approximations, resulting in a practical solution called Absolute
Policy Optimization (APO). Our experiments demonstrate the effectiveness of our
approach across challenging continuous control benchmark tasks and extend its
applicability to mastering Atari games. Our findings reveal that APO
significantly outperforms state-of-the-art policy gradient algorithms,
resulting in substantial improvements in both expected performance and
worst-case performance. | [
"Weiye Zhao",
"Feihan Li",
"Yifan Sun",
"Rui Chen",
"Tianhao Wei",
"Changliu Liu"
] | 2023-10-20 02:40:05 | http://arxiv.org/abs/2310.13230v1 | http://arxiv.org/pdf/2310.13230v1 | 2310.13230v1 |
Interpretable Deep Reinforcement Learning for Optimizing Heterogeneous Energy Storage Systems | Energy storage systems (ESS) are a pivotal component in the energy market,
serving as both energy suppliers and consumers. ESS operators can reap benefits
from energy arbitrage by optimizing operations of storage equipment. To further
enhance ESS flexibility within the energy market and improve renewable energy
utilization, a heterogeneous photovoltaic-ESS (PV-ESS) is proposed, which
leverages the unique characteristics of battery energy storage (BES) and
hydrogen energy storage (HES). For scheduling tasks of the heterogeneous
PV-ESS, cost description plays a crucial role in guiding operator's strategies
to maximize benefits. We develop a comprehensive cost function that takes into
account degradation, capital, and operation/maintenance costs to reflect
real-world scenarios. Moreover, while numerous methods excel in optimizing ESS
energy arbitrage, they often rely on black-box models with opaque
decision-making processes, limiting practical applicability. To overcome this
limitation and enable transparent scheduling strategies, a prototype-based
policy network with inherent interpretability is introduced. This network
employs human-designed prototypes to guide decision-making by comparing
similarities between prototypical situations and encountered situations, which
allows for naturally explained scheduling strategies. Comparative results
across four distinct cases underscore the effectiveness and practicality of our
proposed pre-hoc interpretable optimization method when contrasted with
black-box models. | [
"Luolin Xiong",
"Yang Tang",
"Chensheng Liu",
"Shuai Mao",
"Ke Meng",
"Zhaoyang Dong",
"Feng Qian"
] | 2023-10-20 02:26:17 | http://arxiv.org/abs/2310.14783v1 | http://arxiv.org/pdf/2310.14783v1 | 2310.14783v1 |
ToolChain*: Efficient Action Space Navigation in Large Language Models with A* Search | Large language models (LLMs) have demonstrated powerful decision-making and
planning capabilities in solving complicated real-world problems. LLM-based
autonomous agents can interact with diverse tools (e.g., functional APIs) and
generate solution plans that execute a series of API function calls in a
step-by-step manner. The multitude of candidate API function calls
significantly expands the action space, amplifying the critical need for
efficient action space navigation. However, existing methods either struggle
with unidirectional exploration in expansive action spaces, becoming trapped in a
locally optimal solution, or suffer from exhaustively traversing all potential
actions, causing inefficient navigation. To address these issues, we propose
ToolChain*, an efficient tree search-based planning algorithm for LLM-based
agents. It formulates the entire action space as a decision tree, where each
node represents a possible API function call involved in a solution plan. By
incorporating the A* search algorithm with task-specific cost function design,
it efficiently prunes high-cost branches that may involve incorrect actions,
identifying the lowest-cost valid path as the solution. Extensive experiments
on multiple tool-use and reasoning tasks demonstrate that ToolChain*
efficiently balances exploration and exploitation within an expansive action
space. It outperforms state-of-the-art baselines on planning and reasoning
tasks by 3.1% and 3.5% on average while requiring 7.35x and 2.31x less time,
respectively. | [
"Yuchen Zhuang",
"Xiang Chen",
"Tong Yu",
"Saayan Mitra",
"Victor Bursztyn",
"Ryan A. Rossi",
"Somdeb Sarkhel",
"Chao Zhang"
] | 2023-10-20 02:24:35 | http://arxiv.org/abs/2310.13227v1 | http://arxiv.org/pdf/2310.13227v1 | 2310.13227v1 |
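
ToolChain* couples A* search with task-specific cost and heuristic functions that the abstract above does not specify. The sketch below is a generic A* over a small action graph with placeholder costs, intended only to illustrate the search pattern; in the paper both the step cost and the heuristic would be task-specific, LLM-derived scores, and all names here are illustrative.

```python
import heapq

def a_star(start, is_goal, successors, heuristic):
    """Generic A*: g = accumulated step cost, f = g + heuristic.

    successors(node) yields (child, step_cost) pairs; heuristic(node) estimates
    the remaining cost to a goal node.
    """
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path
        for child, cost in successors(node):
            g_child = g + cost
            if g_child < best_g.get(child, float("inf")):
                best_g[child] = g_child
                heapq.heappush(
                    frontier, (g_child + heuristic(child), g_child, child, path + [child])
                )
    return None

# toy action graph with string states; "done" is the goal
succ = {"start": [("a", 1.0), ("b", 2.0)], "a": [("done", 1.0)], "b": [("done", 0.1)]}
print(a_star("start", lambda n: n == "done", lambda n: succ.get(n, []), lambda n: 0.0))
```
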
Scalable Neural Network Kernels | We introduce the concept of scalable neural network kernels (SNNKs), the
replacements of regular feedforward layers (FFLs), capable of approximating the
latter, but with favorable computational properties. SNNKs effectively
disentangle the inputs from the parameters of the neural network in the FFL,
only to connect them in the final computation via the dot-product kernel. They
are also strictly more expressive, as they allow modeling complicated
relationships beyond the functions of the dot-products of parameter-input
vectors. We also introduce the neural network bundling process that applies
SNNKs to compactify deep neural network architectures, resulting in additional
compression gains. In its extreme version, it leads to the fully bundled
network whose optimal parameters can be expressed via explicit formulae for
several loss functions (e.g. mean squared error), opening a possibility to
bypass backpropagation. As a by-product of our analysis, we introduce the
mechanism of the universal random features (or URFs), applied to instantiate
several SNNK variants, and interesting on its own in the context of scalable
kernel methods. We provide rigorous theoretical analysis of all these concepts
as well as an extensive empirical evaluation, ranging from point-wise kernel
estimation to Transformers' fine-tuning with novel adapter layers inspired by
SNNKs. Our mechanism provides up to 5x reduction in the number of trainable
parameters, while maintaining competitive accuracy. | [
"Arijit Sehanobish",
"Krzysztof Choromanski",
"Yunfan Zhao",
"Avinava Dubey",
"Valerii Likhosherstov"
] | 2023-10-20 02:12:56 | http://arxiv.org/abs/2310.13225v1 | http://arxiv.org/pdf/2310.13225v1 | 2310.13225v1 |
Equivariant Transformer is all you need | Machine learning, and deep learning in particular, has been accelerating computational physics,
where it is used to simulate systems on a lattice. Equivariance is essential
when simulating a physical system because it imposes a strong inductive bias on
the probability distribution described by a machine learning model. This
reduces the risk of erroneous extrapolation that deviates from data symmetries
and physical laws. However, imposing symmetry on the model sometimes results in a
poor acceptance rate in self-learning Monte-Carlo (SLMC). On the other hand,
the attention used in Transformers like GPT realizes a large model capacity. We
introduce symmetry-equivariant attention to SLMC. To evaluate our architecture,
we apply the proposed new architecture to a spin-fermion model on a
two-dimensional lattice. We find that it overcomes poor acceptance rates for
linear models and observe the scaling law of the acceptance rate as in the
large language models with Transformers. | [
"Akio Tomiya",
"Yuki Nagai"
] | 2023-10-20 01:57:03 | http://arxiv.org/abs/2310.13222v1 | http://arxiv.org/pdf/2310.13222v1 | 2310.13222v1 |
In-context Learning with Transformer Is Really Equivalent to a Contrastive Learning Pattern | Pre-trained large language models based on Transformers have demonstrated
amazing in-context learning (ICL) abilities. Given several demonstration
examples, the models can implement new tasks without any parameter updates.
However, it is still an open question to understand the mechanism of ICL. In
this paper, we interpret the inference process of ICL as a gradient descent
process in a contrastive learning pattern. Firstly, leveraging kernel methods,
we establish the relationship between gradient descent and self-attention
mechanism under generally used softmax attention setting instead of linear
attention setting. Then, we analyze the corresponding gradient descent process
of ICL from the perspective of contrastive learning without negative samples
and discuss possible improvements of this contrastive learning pattern, based
on which the self-attention layer can be further modified. Finally, we design
experiments to support our opinions. To the best of our knowledge, our work is
the first to provide the understanding of ICL from the perspective of
contrastive learning and has the potential to facilitate future model design by
referring to related works on contrastive learning. | [
"Ruifeng Ren",
"Yong Liu"
] | 2023-10-20 01:55:34 | http://arxiv.org/abs/2310.13220v1 | http://arxiv.org/pdf/2310.13220v1 | 2310.13220v1 |
A Deep Learning Analysis of Climate Change, Innovation, and Uncertainty | We study the implications of model uncertainty in a climate-economics
framework with three types of capital: "dirty" capital that produces carbon
emissions when used for production, "clean" capital that generates no emissions
but is initially less productive than dirty capital, and knowledge capital that
increases with R&D investment and leads to technological innovation in green
sector productivity. To solve our high-dimensional, non-linear model framework
we implement a neural-network-based global solution method. We show there are
first-order impacts of model uncertainty on optimal decisions and social
valuations in our integrated climate-economic-innovation framework. Accounting
for interconnected uncertainty over climate dynamics, economic damages from
climate change, and the arrival of a green technological change leads to
substantial adjustments to investment in the different capital types in
anticipation of technological change and the revelation of climate damage
severity. | [
"Michael Barnett",
"William Brock",
"Lars Peter Hansen",
"Ruimeng Hu",
"Joseph Huang"
] | 2023-10-19 23:58:28 | http://arxiv.org/abs/2310.13200v1 | http://arxiv.org/pdf/2310.13200v1 | 2310.13200v1 |
NameGuess: Column Name Expansion for Tabular Data | Recent advances in large language models have revolutionized many sectors,
including the database industry. One common challenge when dealing with large
volumes of tabular data is the pervasive use of abbreviated column names, which
can negatively impact performance on various data search, access, and
understanding tasks. To address this issue, we introduce a new task, called
NameGuess, to expand column names (used in database schema) as a natural
language generation problem. We create a training dataset of 384K
abbreviated-expanded column pairs using a new data fabrication method and a
human-annotated evaluation benchmark that includes 9.2K examples from
real-world tables. To tackle the complexities associated with polysemy and
ambiguity in NameGuess, we enhance auto-regressive language models by
conditioning on table content and column header names -- yielding a fine-tuned
model (with 2.7B parameters) that matches human performance. Furthermore, we
conduct a comprehensive analysis (on multiple LLMs) to validate the
effectiveness of table content in NameGuess and identify promising future
opportunities. Code has been made available at
https://github.com/amazon-science/nameguess. | [
"Jiani Zhang",
"Zhengyuan Shen",
"Balasubramaniam Srinivasan",
"Shen Wang",
"Huzefa Rangwala",
"George Karypis"
] | 2023-10-19 23:11:37 | http://arxiv.org/abs/2310.13196v1 | http://arxiv.org/pdf/2310.13196v1 | 2310.13196v1 |
Heterogeneous Graph Neural Networks for Data-driven Traffic Assignment | The traffic assignment problem is one of the significant components of
traffic flow analysis for which various solution approaches have been proposed.
However, deploying these approaches for large-scale networks poses significant
challenges. In this paper, we leverage the power of heterogeneous graph neural
networks to propose a novel data-driven approach for traffic assignment and
traffic flow learning. The proposed model is capable of capturing spatial
traffic patterns across different links, yielding highly accurate results. We
present numerical experiments on urban transportation networks and show that
the proposed heterogeneous graph neural network model outperforms other
conventional neural network models in terms of convergence rate, training loss,
and prediction accuracy. Notably, the proposed heterogeneous graph neural
network model can also be generalized to different network topologies. This
approach offers a promising solution for complex traffic flow analysis and
prediction, enhancing our understanding and management of a wide range of
transportation systems. | [
"Tong Liu",
"Hadi Meidani"
] | 2023-10-19 23:04:09 | http://arxiv.org/abs/2310.13193v1 | http://arxiv.org/pdf/2310.13193v1 | 2310.13193v1 |
CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation | Diffusion models (DMs) have enabled breakthroughs in image synthesis tasks
but lack an intuitive interface for consistent image-to-image (I2I)
translation. Various methods have been explored to address this issue,
including mask-based methods, attention-based methods, and image-conditioning.
However, it remains a critical challenge to enable unpaired I2I translation
with pre-trained DMs while maintaining satisfying consistency. This paper
introduces Cyclenet, a novel but simple method that incorporates cycle
consistency into DMs to regularize image manipulation. We validate Cyclenet on
unpaired I2I tasks of different granularities. Besides the scene and object
level translation, we additionally contribute a multi-domain I2I translation
dataset to study the physical state changes of objects. Our empirical studies
show that Cyclenet is superior in translation consistency and quality, and can
generate high-quality images for out-of-domain distributions with a simple
change of the textual prompt. Cyclenet is a practical framework, which is
robust even with very limited training data (around 2k) and requires minimal
computational resources (1 GPU) to train. Project homepage:
https://cyclenetweb.github.io/ | [
"Sihan Xu",
"Ziqiao Ma",
"Yidong Huang",
"Honglak Lee",
"Joyce Chai"
] | 2023-10-19 21:32:21 | http://arxiv.org/abs/2310.13165v1 | http://arxiv.org/pdf/2310.13165v1 | 2310.13165v1 |
Almost Equivariance via Lie Algebra Convolutions | Recently, the equivariance of models with respect to a group action has
become an important topic of research in machine learning. However, imbuing an
architecture with a specific group equivariance imposes a strong prior on the
types of data transformations that the model expects to see. While
strictly-equivariant models enforce symmetries, real-world data does not always
conform to such strict equivariances, be it due to noise in the data or
underlying physical laws that encode only approximate or partial symmetries. In
such cases, the prior of strict equivariance can actually prove too strong and
cause models to underperform on real-world data. Therefore, in this work we
study a closely related topic, that of almost equivariance. We provide a
definition of almost equivariance that differs from those extant in the current
literature and give a practical method for encoding almost equivariance in
models by appealing to the Lie algebra of a Lie group. Specifically, we define
Lie algebra convolutions and demonstrate that they offer several benefits over
Lie group convolutions, including being well-defined for non-compact groups.
From there, we pivot to the realm of theory and demonstrate connections between
the notions of equivariance and isometry and those of almost equivariance and
almost isometry, respectively. We prove two existence theorems, one showing the
existence of almost isometries within bounded distance of isometries of a
general manifold, and another showing the converse for Hilbert spaces. We then
extend these theorems to prove the existence of almost equivariant manifold
embeddings within bounded distance of fully equivariant embedding functions,
subject to certain constraints on the group action and the function class.
Finally, we demonstrate the validity of our approach by benchmarking against
datasets in fully equivariant and almost equivariant settings. | [
"Daniel McNeela"
] | 2023-10-19 21:31:11 | http://arxiv.org/abs/2310.13164v1 | http://arxiv.org/pdf/2310.13164v1 | 2310.13164v1 |
A Distributed Approach to Meteorological Predictions: Addressing Data Imbalance in Precipitation Prediction Models through Federated Learning and GANs | The classification of weather data involves categorizing meteorological
phenomena into classes, thereby facilitating nuanced analyses and precise
predictions for various sectors such as agriculture, aviation, and disaster
management. This involves utilizing machine learning models to analyze large,
multidimensional weather datasets for patterns and trends. These datasets may
include variables such as temperature, humidity, wind speed, and pressure,
contributing to meteorological conditions. Furthermore, it's imperative that
classification algorithms proficiently navigate challenges such as data
imbalances, where certain weather events (e.g., storms or extreme temperatures)
might be underrepresented. This empirical study explores data augmentation
methods to address imbalanced classes in tabular weather data in centralized
and federated settings. Employing data augmentation techniques such as the
Synthetic Minority Over-sampling Technique or Generative Adversarial Networks
can improve the model's accuracy in classifying rare but critical weather
events. Moreover, with advancements in federated learning, machine learning
models can be trained across decentralized databases, ensuring privacy and data
integrity while mitigating the need for centralized data storage and
processing. Thus, the classification of weather data stands as a critical
bridge, linking raw meteorological data to actionable insights, enhancing our
capacity to anticipate and prepare for diverse weather conditions. | [
"Elaheh Jafarigol",
"Theodore Trafalis"
] | 2023-10-19 21:28:20 | http://arxiv.org/abs/2310.13161v1 | http://arxiv.org/pdf/2310.13161v1 | 2310.13161v1 |
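
The abstract above mentions SMOTE and GAN-based augmentation for imbalanced tabular weather data. A minimal centralized SMOTE example using the imbalanced-learn package on synthetic data is sketched below; the federated setting and the GAN variant are not shown, and the class ratio and feature layout are made up for illustration.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

rng = np.random.default_rng(0)
# synthetic tabular "weather" features: 950 common-class rows, 50 rare-event rows
X = np.vstack([rng.normal(0, 1, (950, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 950 + [1] * 50)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))  # rare class oversampled to match the majority
```
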
Conditional Generative Modeling for Images, 3D Animations, and Video | This dissertation attempts to drive innovation in the field of generative
modeling for computer vision, by exploring novel formulations of conditional
generative models, and innovative applications in images, 3D animations, and
video. Our research focuses on architectures that offer reversible
transformations of noise and visual data, and the application of
encoder-decoder architectures for generative tasks and 3D content manipulation.
In all instances, we incorporate conditional information to enhance the
synthesis of visual data, improving the efficiency of the generation process as
well as the generated content.
We introduce the use of Neural ODEs to model video dynamics using an
encoder-decoder architecture, demonstrating their ability to predict future
video frames despite being trained solely to reconstruct current frames. Next,
we propose a conditional variant of continuous normalizing flows that enables
higher-resolution image generation based on lower-resolution input, achieving
comparable image quality while reducing parameters and training time. Our next
contribution presents a pipeline that takes human images as input,
automatically aligns a user-specified 3D character with the pose of the human,
and facilitates pose editing based on partial inputs. Next, we derive the
relevant mathematical details for denoising diffusion models that use
non-isotropic Gaussian processes, and show comparable generation quality.
Finally, we devise a novel denoising diffusion framework capable of solving all
three video tasks of prediction, generation, and interpolation. We perform
ablation studies, and show SOTA results on multiple datasets.
Our contributions are published articles at peer-reviewed venues. Overall,
our research aims to make a meaningful contribution to the pursuit of more
efficient and flexible generative models, with the potential to shape the
future of computer vision. | [
"Vikram Voleti"
] | 2023-10-19 21:10:39 | http://arxiv.org/abs/2310.13157v1 | http://arxiv.org/pdf/2310.13157v1 | 2310.13157v1 |
CLIFT: Analysing Natural Distribution Shift on Question Answering Models in Clinical Domain | This paper introduces a new testbed CLIFT (Clinical Shift) for the clinical
domain Question-answering task. The testbed includes 7.5k high-quality question
answering samples to provide a diverse and reliable benchmark. We performed a
comprehensive experimental study and evaluated several QA deep-learning models
under the proposed testbed. Despite impressive results on the original test
set, the performance degrades when applied to new test sets, which shows the
distribution shift. Our findings emphasize the need for and the potential for
increasing the robustness of clinical domain models under distributional
shifts. The testbed offers one way to track progress in that direction. It also
highlights the necessity of adopting evaluation metrics that consider
robustness to natural distribution shifts. We plan to expand the corpus by
adding more samples and model results. The full paper and the updated benchmark
are available at github.com/openlifescience-ai/clift | [
"Ankit Pal"
] | 2023-10-19 20:43:11 | http://arxiv.org/abs/2310.13146v1 | http://arxiv.org/pdf/2310.13146v1 | 2310.13146v1 |
Graph Neural Networks with polynomial activations have limited expressivity | The expressivity of Graph Neural Networks (GNNs) can be entirely
characterized by appropriate fragments of the first order logic. Namely, any
query of the two variable fragment of graded modal logic (GC2) interpreted over
labelled graphs can be expressed using a GNN whose size depends only on the
depth of the query. As pointed out by [Barcelo & Al., 2020, Grohe, 2021 ], this
description holds for a family of activation functions, leaving the
possibility of a hierarchy of logics expressible by GNNs depending on the
chosen activation function. In this article, we show that such a hierarchy indeed
exists by proving that GC2 queries cannot be expressed by GNNs with polynomial
activation functions. This implies a separation between polynomial and popular
non-polynomial activations (such as ReLU, sigmoid, hyperbolic tangent, and
others) and answers an open question formulated by [Grohe, 2021]. | [
"Sammy Khalife"
] | 2023-10-19 20:32:25 | http://arxiv.org/abs/2310.13139v1 | http://arxiv.org/pdf/2310.13139v1 | 2310.13139v1 |
Mean Estimation Under Heterogeneous Privacy Demands | Differential Privacy (DP) is a well-established framework to quantify privacy
loss incurred by any algorithm. Traditional formulations impose a uniform
privacy requirement for all users, which is often inconsistent with real-world
scenarios in which users dictate their privacy preferences individually. This
work considers the problem of mean estimation, where each user can impose their
own distinct privacy level. The algorithm we propose is shown to be minimax
optimal and has a near-linear run-time. Our results elicit an interesting
saturation phenomenon that occurs. Namely, the privacy requirements of the most
stringent users dictate the overall error rates. As a consequence, users with
less but differing privacy requirements are all given more privacy than they
require, in equal amounts. In other words, these privacy-indifferent users are
given a nontrivial degree of privacy for free, without any sacrifice in the
performance of the estimator. | [
"Syomantak Chaudhuri",
"Konstantin Miagkov",
"Thomas A. Courtade"
] | 2023-10-19 20:29:19 | http://arxiv.org/abs/2310.13137v1 | http://arxiv.org/pdf/2310.13137v1 | 2310.13137v1 |
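
The minimax-optimal estimator from the paper above is not given in the abstract. The sketch below is only a naive baseline that illustrates heterogeneous privacy demands: each user perturbs their clipped value with Laplace noise calibrated to their own epsilon, and the server averages the noisy reports. The function name, clipping convention, and epsilon values are assumptions.

```python
import numpy as np

def naive_hetero_dp_mean(x, eps, clip=1.0, rng=None):
    """Naive baseline, NOT the paper's minimax-optimal estimator.

    Each user i adds Laplace noise with scale 2*clip / eps_i to their value
    clipped to [-clip, clip] (sensitivity 2*clip), then the server averages.
    """
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, dtype=float), -clip, clip)
    noise = rng.laplace(0.0, 2.0 * clip / np.asarray(eps, dtype=float))
    return float(np.mean(x + noise))

# users with very different privacy demands: strict (0.1) to permissive (10.0)
values = np.random.default_rng(1).normal(0.3, 0.2, size=1000)
epsilons = np.random.default_rng(2).choice([0.1, 1.0, 10.0], size=1000)
print(naive_hetero_dp_mean(values, epsilons, clip=1.0))
```
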
Approaches for Uncertainty Quantification of AI-predicted Material Properties: A Comparison | The development of large databases of material properties, together with the
availability of powerful computers, has allowed machine learning (ML) modeling
to become a widely used tool for predicting material performances. While
confidence intervals are commonly reported for such ML models, prediction
intervals, i.e., the uncertainty on each prediction, are not as frequently
available. Here, we investigate three easy-to-implement approaches to determine
such individual uncertainty, comparing them across ten ML quantities spanning
energetics, mechanical, electronic, optical, and spectral properties.
Specifically, we focused on the Quantile approach, the direct machine learning
of the prediction intervals and Ensemble methods. | [
"Francesca Tavazza",
"Kamal Choudhary",
"Brian DeCost"
] | 2023-10-19 20:20:39 | http://arxiv.org/abs/2310.13136v1 | http://arxiv.org/pdf/2310.13136v1 | 2310.13136v1 |
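
Of the three approaches compared above, the quantile approach is the simplest to illustrate. The sketch below fits separate quantile models with scikit-learn's GradientBoostingRegressor on synthetic data to obtain a per-sample prediction interval; it is not the authors' exact setup, and the target is a made-up stand-in for a material property.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 500)  # stand-in for a material property

# one model per quantile gives a per-sample prediction interval
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)
med = GradientBoostingRegressor(loss="quantile", alpha=0.50).fit(X, y)

X_new = np.array([[2.5]])
print(f"prediction {med.predict(X_new)[0]:.3f}, "
      f"90% interval [{lo.predict(X_new)[0]:.3f}, {hi.predict(X_new)[0]:.3f}]")
```
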
Deep Reinforcement Learning-based Intelligent Traffic Signal Controls with Optimized CO2 emissions | Nowadays, transportation networks face the challenge of sub-optimal control
policies that can have adverse effects on human health and the environment, and
contribute to traffic congestion. Increased levels of air pollution and
extended commute times caused by traffic bottlenecks make intersection traffic
signal controllers a crucial component of modern transportation infrastructure.
Despite several adaptive traffic signal controllers in literature, limited
research has been conducted on their comparative performance. Furthermore,
despite carbon dioxide (CO2) emissions' significance as a global issue, the
literature has paid limited attention to this area. In this report, we propose
EcoLight, a reward shaping scheme for reinforcement learning algorithms that
not only reduces CO2 emissions but also achieves competitive results in metrics
such as travel time. We compare the performance of tabular Q-Learning, DQN,
SARSA, and A2C algorithms using metrics such as travel time, CO2 emissions,
waiting time, and stopped time. Our evaluation considers multiple scenarios
that encompass a range of road users (trucks, buses, cars) with varying
pollution levels. | [
"Pedram Agand",
"Alexey Iskrov",
"Mo Chen"
] | 2023-10-19 19:54:47 | http://arxiv.org/abs/2310.13129v1 | http://arxiv.org/pdf/2310.13129v1 | 2310.13129v1 |
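
The exact EcoLight reward terms and weights are not given in the abstract above. The sketch below shows one hypothetical way a traffic-signal reward could be shaped to trade off waiting time against CO2 emissions; both the terms and the weights are illustrative placeholders, not the paper's scheme.

```python
def ecolight_style_reward(waiting_time, co2_grams, w_wait=1.0, w_co2=0.5):
    """Hypothetical reward shaping: penalize both the cumulative waiting time at
    the intersection and the CO2 emitted during the control step."""
    return -(w_wait * waiting_time + w_co2 * co2_grams)

# a step where vehicles waited 40 s in total and emitted 120 g of CO2
print(ecolight_style_reward(waiting_time=40.0, co2_grams=120.0))  # -100.0
```
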
Fuel Consumption Prediction for a Passenger Ferry using Machine Learning and In-service Data: A Comparative Study | As the importance of eco-friendly transportation increases, providing an
efficient approach for marine vessel operation is essential. Methods for status
monitoring with consideration to the weather condition and forecasting with the
use of in-service data from ships requires accurate and complete models for
predicting the energy efficiency of a ship. The models need to effectively
process all the operational data in real-time. This paper presents models that
can predict fuel consumption using in-service data collected from a passenger
ship. Statistical and domain-knowledge methods were used to select the proper
input variables for the models. These methods prevent over-fitting, missing
data, and multicollinearity while providing practical applicability. Prediction
models that were investigated include multiple linear regression (MLR),
decision tree approach (DT), an artificial neural network (ANN), and ensemble
methods. The best predictive performance was from a model developed using the
XGBoost technique, which is a boosting ensemble approach. Our code is
available on GitHub at
https://github.com/pagand/model_optimze_vessel/tree/OE for future
research. | [
"Pedram Agand",
"Allison Kennedy",
"Trevor Harris",
"Chanwoo Bae",
"Mo Chen",
"Edward J Park"
] | 2023-10-19 19:35:38 | http://arxiv.org/abs/2310.13123v1 | http://arxiv.org/pdf/2310.13123v1 | 2310.13123v1 |
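
The abstract above reports that an XGBoost model gave the best predictive performance. The sketch below shows a minimal XGBoost regression pipeline on synthetic stand-ins for in-service variables; the real feature set, preprocessing, and hyperparameters come from the paper and its repository, not from this sketch.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# synthetic stand-ins for in-service variables (e.g. speed, draft, wind, heading)
X = rng.uniform(0, 1, (2000, 4))
y = 5 * X[:, 0] ** 3 + 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```
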
Understanding Addition in Transformers | Understanding the inner workings of machine learning models like Transformers
is vital for their safe and ethical use. This paper presents an in-depth
analysis of a one-layer Transformer model trained for integer addition. We
reveal that the model divides the task into parallel, digit-specific streams
and employs distinct algorithms for different digit positions. Our study also
finds that the model starts calculations late but executes them rapidly. A rare
use case with high loss is identified and explained. Overall, the model's
algorithm is explained in detail. These findings are validated through rigorous
testing and mathematical modeling, contributing to the broader works in
Mechanistic Interpretability, AI safety, and alignment. Our approach opens the
door for analyzing more complex tasks and multi-layer Transformer models. | [
"Philip Quirke",
"Fazl",
"Barez"
] | 2023-10-19 19:34:42 | http://arxiv.org/abs/2310.13121v1 | http://arxiv.org/pdf/2310.13121v1 | 2310.13121v1 |
RSAdapter: Adapting Multimodal Models for Remote Sensing Visual Question Answering | In recent years, with the rapid advancement of transformer models,
transformer-based multimodal architectures have found wide application in
various downstream tasks, including but not limited to Image Captioning, Visual
Question Answering (VQA), and Image-Text Generation. However, contemporary
approaches to Remote Sensing (RS) VQA often involve resource-intensive
techniques, such as full fine-tuning of large models or the extraction of
image-text features from pre-trained multimodal models, followed by modality
fusion using decoders. These approaches demand significant computational
resources and time, and a considerable number of trainable parameters are
introduced. To address these challenges, we introduce a novel method known as
RSAdapter, which prioritizes runtime and parameter efficiency. RSAdapter
comprises two key components: the Parallel Adapter and an additional linear
transformation layer inserted after each fully connected (FC) layer within the
Adapter. This approach not only improves adaptation to pre-trained multimodal
models but also allows the parameters of the linear transformation layer to be
integrated into the preceding FC layers during inference, reducing inference
costs. To demonstrate the effectiveness of RSAdapter, we conduct an extensive
series of experiments using three distinct RS-VQA datasets and achieve
state-of-the-art results on all three datasets. The code for RSAdapter will be
available online at https://github.com/Y-D-Wang/RSAdapter. | [
"Yuduo Wang",
"Pedram Ghamisi"
] | 2023-10-19 19:32:27 | http://arxiv.org/abs/2310.13120v1 | http://arxiv.org/pdf/2310.13120v1 | 2310.13120v1 |
Semi-Supervised Learning of Dynamical Systems with Neural Ordinary Differential Equations: A Teacher-Student Model Approach | Modeling dynamical systems is crucial for a wide range of tasks, but it
remains challenging due to complex nonlinear dynamics, limited observations, or
lack of prior knowledge. Recently, data-driven approaches such as Neural
Ordinary Differential Equations (NODE) have shown promising results by
leveraging the expressive power of neural networks to model unknown dynamics.
However, these approaches often suffer from limited labeled training data,
leading to poor generalization and suboptimal predictions. On the other hand,
semi-supervised algorithms can utilize abundant unlabeled data and have
demonstrated good performance in classification and regression tasks. We
propose TS-NODE, the first semi-supervised approach to modeling dynamical
systems with NODE. TS-NODE explores cheaply generated synthetic pseudo rollouts
to broaden exploration in the state space and to tackle the challenges brought
by lack of ground-truth system data under a teacher-student model. TS-NODE
employs a unified optimization framework that corrects the teacher model based
on the student's feedback while mitigating the potential false system dynamics
present in pseudo rollouts. TS-NODE demonstrates significant performance
improvements over a baseline Neural ODE model on multiple dynamical system
modeling tasks. | [
"Yu Wang",
"Yuxuan Yin",
"Karthik Somayaji Nanjangud Suryanarayana",
"Jan Drgona",
"Malachi Schram",
"Mahantesh Halappanavar",
"Frank Liu",
"Peng Li"
] | 2023-10-19 19:17:12 | http://arxiv.org/abs/2310.13110v1 | http://arxiv.org/pdf/2310.13110v1 | 2310.13110v1 |
Streamlining Brain Tumor Classification with Custom Transfer Learning in MRI Images | Brain tumors are increasingly prevalent, characterized by the uncontrolled
spread of aberrant tissues in the brain, with almost 700,000 new cases
diagnosed globally each year. Magnetic Resonance Imaging (MRI) is commonly used
for the diagnosis of brain tumors and accurate classification is a critical
clinical procedure. In this study, we propose an efficient solution for
classifying brain tumors from MRI images using custom transfer learning
networks. While several researchers have employed various pre-trained
architectures such as RESNET-50, ALEXNET, VGG-16, and VGG-19, these methods
often suffer from high computational complexity. To address this issue, we
present a custom and lightweight model using a Convolutional Neural
Network-based pre-trained architecture with reduced complexity. Specifically,
we employ the VGG-19 architecture with additional hidden layers, which reduces
the complexity of the base architecture but improves computational efficiency.
The objective is to achieve high classification accuracy using a novel
approach. Finally, the result demonstrates a classification accuracy of 96.42%. | [
"Javed Hossain",
"Md. Touhidul Islam",
"Md. Taufiqul Haque Khan Tusar"
] | 2023-10-19 19:13:04 | http://arxiv.org/abs/2310.13108v1 | http://arxiv.org/pdf/2310.13108v1 | 2310.13108v1 |
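
The abstract above describes VGG-19 with additional hidden layers and a pre-trained backbone. A hedged PyTorch/torchvision sketch of that kind of head replacement is given below; the layer widths, the number of tumor classes, and the freezing choice are assumptions for illustration, not the paper's exact configuration.

```python
import torch.nn as nn
from torchvision import models

# Load VGG-19 pretrained on ImageNet (torchvision >= 0.13 weights API; older
# versions used pretrained=True) and freeze the convolutional backbone.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False

# Replace the classifier head with extra hidden layers and 4 output classes
# (layer widths and class count are illustrative, not the paper's exact setup).
vgg.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 1024),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(1024, 256),
    nn.ReLU(inplace=True),
    nn.Linear(256, 4),
)
print(sum(p.numel() for p in vgg.parameters() if p.requires_grad), "trainable parameters")
```
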
AVTENet: Audio-Visual Transformer-based Ensemble Network Exploiting Multiple Experts for Video Deepfake Detection | Forged content shared widely on social media platforms is a major social
problem that requires increased regulation and poses new challenges to the
research community. The recent proliferation of hyper-realistic deepfake videos
has drawn attention to the threat of audio and visual forgeries. Most previous
work on detecting AI-generated fake videos only utilizes visual modality or
audio modality. While there are some methods in the literature that exploit
audio and visual modalities to detect forged videos, they have not been
comprehensively evaluated on multi-modal datasets of deepfake videos involving
acoustic and visual manipulations. Moreover, these existing methods are mostly
based on CNN and suffer from low detection accuracy. Inspired by the recent
success of Transformer in various fields, to address the challenges posed by
deepfake technology, in this paper, we propose an Audio-Visual
Transformer-based Ensemble Network (AVTENet) framework that considers both
acoustic manipulation and visual manipulation to achieve effective video
forgery detection. Specifically, the proposed model integrates several purely
transformer-based variants that capture video, audio, and audio-visual salient
cues to reach a consensus in prediction. For evaluation, we use the recently
released benchmark multi-modal audio-video FakeAVCeleb dataset. For a detailed
analysis, we evaluate AVTENet, its variants, and several existing methods on
multiple test sets of the FakeAVCeleb dataset. Experimental results show that
our best model outperforms all existing methods and achieves state-of-the-art
performance on Testset-I and Testset-II of the FakeAVCeleb dataset. | [
"Ammarah Hashmi",
"Sahibzada Adil Shahzad",
"Chia-Wen Lin",
"Yu Tsao",
"Hsin-Min Wang"
] | 2023-10-19 19:01:26 | http://arxiv.org/abs/2310.13103v1 | http://arxiv.org/pdf/2310.13103v1 | 2310.13103v1 |
Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models | In light of the widespread success of generative models, a significant amount
of research has gone into speeding up their sampling time. However, generative
models are often sampled multiple times to obtain a diverse set incurring a
cost that is orthogonal to sampling time. We tackle the question of how to
improve diversity and sample efficiency by moving beyond the common assumption
of independent samples. We propose particle guidance, an extension of
diffusion-based generative sampling where a joint-particle time-evolving
potential enforces diversity. We analyze theoretically the joint distribution
that particle guidance generates, its implications on the choice of potential,
and the connections with methods in other disciplines. Empirically, we test the
framework both in the setting of conditional image generation, where we are
able to increase diversity without affecting quality, and molecular conformer
generation, where we reduce the state-of-the-art median error by 13% on
average. | [
"Gabriele Corso",
"Yilun Xu",
"Valentin de Bortoli",
"Regina Barzilay",
"Tommi Jaakkola"
] | 2023-10-19 19:01:00 | http://arxiv.org/abs/2310.13102v1 | http://arxiv.org/pdf/2310.13102v1 | 2310.13102v1 |
No offence, Bert -- I insult only humans! Multiple addressees sentence-level attack on toxicity detection neural network | We introduce a simple yet efficient sentence-level attack on black-box
toxicity detector models. By adding several positive words or sentences to the
end of a hateful message, we are able to change the prediction of a neural
network and pass the toxicity detection system check. This approach is shown to
be working on seven languages from three different language families. We also
describe the defence mechanism against the aforementioned attack and discuss
its limitations. | [
"Sergey Berezin",
"Reza Farahbakhsh",
"Noel Crespi"
] | 2023-10-19 18:56:50 | http://arxiv.org/abs/2310.13099v1 | http://arxiv.org/pdf/2310.13099v1 | 2310.13099v1 |
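
The attack described above amounts to concatenating benign positive text to a hateful message so that a black-box toxicity score drops below the flagging threshold. A trivial sketch follows; the suffix wording, threshold, and the `classify` callable are all placeholders, not artifacts from the paper.

```python
POSITIVE_SUFFIX = "I love and appreciate everyone here. Have a wonderful day!"

def append_positive_suffix(message, suffix=POSITIVE_SUFFIX):
    """Sentence-level attack: append benign positive text to a message so a
    black-box toxicity classifier's score drops (suffix wording is a placeholder)."""
    return f"{message} {suffix}"

def is_flagged(classify, message, threshold=0.5):
    """classify is any callable returning a toxicity probability in [0, 1]."""
    return classify(message) >= threshold

# usage against some black-box scorer `classify` (not provided here):
#   is_flagged(classify, text)                          -> likely True for a toxic text
#   is_flagged(classify, append_positive_suffix(text))  -> may flip to False
```
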
SRAI: Towards Standardization of Geospatial AI | Spatial Representations for Artificial Intelligence (srai) is a Python
library for working with geospatial data. The library can download geospatial
data, split a given area into micro-regions using multiple algorithms and train
an embedding model using various architectures. It includes baseline models as
well as more complex methods from published works. Those capabilities make it
possible to use srai in a complete pipeline for geospatial task solving. The
proposed library is the first step to standardize the geospatial AI domain
toolset. It is fully open-source and published under Apache 2.0 licence. | [
"Piotr Gramacki",
"Kacper Leśniara",
"Kamil Raczycki",
"Szymon Woźniak",
"Marcin Przymus",
"Piotr Szymański"
] | 2023-10-19 18:56:04 | http://arxiv.org/abs/2310.13098v2 | http://arxiv.org/pdf/2310.13098v2 | 2310.13098v2 |
A Multi-Stage Temporal Convolutional Network for Volleyball Jumps Classification Using a Waist-Mounted IMU | Monitoring the number of jumps for volleyball players during training or a
match can be crucial to prevent injuries, yet the measurement requires
considerable workload and cost using traditional methods such as video
analysis. Also, existing methods do not provide accurate differentiation
between different types of jumps. In this study, an unobtrusive system with a
single inertial measurement unit (IMU) on the waist was proposed to recognize
the types of volleyball jumps. A Multi-Stage Temporal Convolutional Network
(MS-TCN) was applied for sample-wise classification. The model was evaluated on
ten volleyball players and twenty-six volleyball players, during a lab session
with a fixed protocol of jumping and landing tasks, and during four volleyball
training sessions, respectively. The MS-TCN model achieved better performance
than a state-of-the-art deep learning model but with lower computational cost.
In the lab sessions, most jump counts showed small differences between the
predicted jumps and video-annotated jumps, with an overall count showing a
Limit of Agreement (LoA) of 0.1+-3.40 (r=0.884). For comparison, the proposed
algorithm showed slightly worse results than VERT (a commercial jumping
assessment device) with a LoA of 0.1+-2.08 (r=0.955) but the differences were
still within a comparable range. In the training sessions, the recognition of
three types of jumps exhibited a mean difference from observation of less than
10 jumps: block, smash, and overhead serve. These results showed the potential
of using a single IMU to recognize the types of volleyball jumps. The
sample-wise architecture provided high resolution of recognition and the MS-TCN
required fewer parameters to train compared with state-of-the-art models. | [
"Meng Shang",
"Camilla De Bleecker",
"Jos Vanrenterghem",
"Roel De Ridder",
"Sabine Verschueren",
"Carolina Varon",
"Walter De Raedt",
"Bart Vanrumste"
] | 2023-10-19 18:55:10 | http://arxiv.org/abs/2310.13097v1 | http://arxiv.org/pdf/2310.13097v1 | 2310.13097v1 |
Sequence Length Independent Norm-Based Generalization Bounds for Transformers | This paper provides norm-based generalization bounds for the Transformer
architecture that do not depend on the input sequence length. We employ a
covering number based approach to prove our bounds. We use three novel covering
number bounds for the function class of bounded linear transformations to upper
bound the Rademacher complexity of the Transformer. Furthermore, we show this
generalization bound applies to the common Transformer training technique of
masking and then predicting the masked word. We also run a simulated study on a
sparse majority data set that empirically validates our theoretical findings. | [
"Jacob Trauger",
"Ambuj Tewari"
] | 2023-10-19 18:31:09 | http://arxiv.org/abs/2310.13088v1 | http://arxiv.org/pdf/2310.13088v1 | 2310.13088v1 |
Unsupervised Representation Learning to Aid Semi-Supervised Meta Learning | Few-shot learning or meta-learning leverages the data scarcity problem in
machine learning. Traditionally, training data requires a multitude of samples
and labeling for supervised learning. To address this issue, we propose a
one-shot unsupervised meta-learning to learn the latent representation of the
training samples. We use augmented samples as the query set during the training
phase of the unsupervised meta-learning. A temperature-scaled cross-entropy
loss is used in the inner loop of meta-learning to prevent overfitting during
unsupervised learning. The learned parameters from this step are applied to the
targeted supervised meta-learning in a transfer-learning fashion for
initialization and fast adaptation with improved accuracy. The proposed method
is model agnostic and can aid any meta-learning model to improve accuracy. We
use model agnostic meta-learning (MAML) and relation network (RN) on Omniglot
and mini-Imagenet datasets to demonstrate the performance of the proposed
method. Furthermore, a meta-learning model with the proposed initialization can
achieve satisfactory accuracy with significantly fewer training samples. | [
"Atik Faysal",
"Mohammad Rostami",
"Huaxia Wang",
"Avimanyu Sahoo",
"Ryan Antle"
] | 2023-10-19 18:25:22 | http://arxiv.org/abs/2310.13085v1 | http://arxiv.org/pdf/2310.13085v1 | 2310.13085v1 |
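
The abstract above mentions a temperature-scaled cross-entropy loss in the inner loop of unsupervised meta-learning. A minimal PyTorch version is sketched below; the temperature value and the episode shape are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def temperature_scaled_cross_entropy(logits, targets, temperature=0.5):
    """Cross-entropy computed on temperature-scaled logits: dividing by a
    temperature < 1 sharpens the softmax, which the abstract credits with
    reducing overfitting in the unsupervised inner loop."""
    return F.cross_entropy(logits / temperature, targets)

logits = torch.randn(8, 5)           # 8 query samples in a 5-way episode
targets = torch.randint(0, 5, (8,))
print(temperature_scaled_cross_entropy(logits, targets).item())
```
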
How Can Everyday Users Efficiently Teach Robots by Demonstrations? | Learning from Demonstration (LfD) is a framework that allows lay users to
easily program robots. However, the efficiency of robot learning and the
robot's ability to generalize to task variations hinges upon the quality and
quantity of the provided demonstrations. Our objective is to guide human
teachers to furnish more effective demonstrations, thus facilitating efficient
robot learning. To achieve this, we propose to use a measure of uncertainty,
namely task-related information entropy, as a criterion for suggesting
informative demonstration examples to human teachers to improve their teaching
skills. In a conducted experiment (N=24), an augmented reality (AR)-based
guidance system was employed to train novice users to produce additional
demonstrations from areas with the highest entropy within the workspace. These
novice users were trained for a few trials to teach the robot a generalizable
task using a limited number of demonstrations. Subsequently, the users'
performance after training was assessed first on the same task (retention) and
then on a novel task (transfer) without guidance. The results indicated a
substantial improvement in robot learning efficiency from the teacher's
demonstrations, with an improvement of up to 198% observed on the novel task.
Furthermore, the proposed approach was compared to a state-of-the-art heuristic
rule and found to improve robot learning efficiency by 210% compared to the
heuristic rule. | [
"Maram Sakr",
"Zhikai Zhang",
"Benjamin Li",
"Haomiao Zhang",
"H. F. Machiel Van der Loos",
"Dana Kulic",
"Elizabeth Croft"
] | 2023-10-19 18:21:39 | http://arxiv.org/abs/2310.13083v1 | http://arxiv.org/pdf/2310.13083v1 | 2310.13083v1 |
On the Computational Complexities of Complex-valued Neural Networks | Complex-valued neural networks (CVNNs) are nonlinear filters used in the
digital signal processing of complex-domain data. Compared with real-valued
neural networks~(RVNNs), CVNNs can directly handle complex-valued input and
output signals due to their complex domain parameters and activation functions.
With the trend toward low-power systems, computational complexity analysis has
become essential for measuring an algorithm's power consumption. Therefore,
this paper presents both the quantitative and asymptotic computational
complexities of CVNNs. This is a crucial tool in deciding which algorithm to
implement. The mathematical operations are described in terms of the number of
real-valued multiplications, as these are the most demanding operations. To
determine which CVNN can be implemented in a low-power system, quantitative
computational complexities can be used to accurately estimate the number of
floating-point operations. We have also investigated the computational
complexities of CVNNs discussed in some studies presented in the literature. | [
"Kayol Soares Mayer",
"Jonathan Aguiar Soares",
"Ariadne Arrais Cruz",
"Dalton Soares Arantes"
] | 2023-10-19 18:14:04 | http://arxiv.org/abs/2310.13075v1 | http://arxiv.org/pdf/2310.13075v1 | 2310.13075v1 |
Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks | Within the realm of deep learning, the interpretability of Convolutional
Neural Networks (CNNs), particularly in the context of image classification
tasks, remains a formidable challenge. To this end we present a neurosymbolic
framework, NeSyFOLD-G that generates a symbolic rule-set using the last layer
kernels of the CNN to make its underlying knowledge interpretable. What makes
NeSyFOLD-G different from other similar frameworks is that we first find groups
of similar kernels in the CNN (kernel-grouping) using the cosine-similarity
between the feature maps generated by various kernels. Once such kernel groups
are found, we binarize each kernel group's output in the CNN and use it to
generate a binarization table which serves as input data to FOLD-SE-M which is
a Rule Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a
rule-set that can be used to make predictions. We present a novel kernel
grouping algorithm and show that grouping similar kernels leads to a
significant reduction in the size of the rule-set generated by FOLD-SE-M,
consequently, improving the interpretability. This rule-set symbolically
encapsulates the connectionist knowledge of the trained CNN. The rule-set can
be viewed as a normal logic program wherein each predicate's truth value
depends on a kernel group in the CNN. Each predicate in the rule-set is mapped
to a concept using a few semantic segmentation masks of the images used for
training, to make it human-understandable. The last layers of the CNN can then
be replaced by this rule-set to obtain the NeSy-G model which can then be used
for the image classification task. The goal directed ASP system s(CASP) can be
used to obtain the justification of any prediction made using the NeSy-G model.
We also propose a novel algorithm for labeling each predicate in the rule-set
with the semantic concept(s) that its corresponding kernel group represents. | [
"Parth Padalkar",
"Gopal Gupta"
] | 2023-10-19 18:12:49 | http://arxiv.org/abs/2310.13073v1 | http://arxiv.org/pdf/2310.13073v1 | 2310.13073v1 |
Creative Robot Tool Use with Large Language Models | Tool use is a hallmark of advanced intelligence, exemplified in both animal
behavior and robotic capabilities. This paper investigates the feasibility of
imbuing robots with the ability to creatively use tools in tasks that involve
implicit physical constraints and long-term planning. Leveraging Large Language
Models (LLMs), we develop RoboTool, a system that accepts natural language
instructions and outputs executable code for controlling robots in both
simulated and real-world environments. RoboTool incorporates four pivotal
components: (i) an "Analyzer" that interprets natural language to discern key
task-related concepts, (ii) a "Planner" that generates comprehensive strategies
based on the language input and key concepts, (iii) a "Calculator" that
computes parameters for each skill, and (iv) a "Coder" that translates these
plans into executable Python code. Our results show that RoboTool can not only
comprehend explicit or implicit physical constraints and environmental factors
but also demonstrate creative tool use. Unlike traditional Task and Motion
Planning (TAMP) methods that rely on explicit optimization, our LLM-based
system offers a more flexible, efficient, and user-friendly solution for
complex robotics tasks. Through extensive experiments, we validate that
RoboTool is proficient in handling tasks that would otherwise be infeasible
without the creative use of tools, thereby expanding the capabilities of
robotic systems. Demos are available on our project page:
https://creative-robotool.github.io/. | [
"Mengdi Xu",
"Peide Huang",
"Wenhao Yu",
"Shiqi Liu",
"Xilun Zhang",
"Yaru Niu",
"Tingnan Zhang",
"Fei Xia",
"Jie Tan",
"Ding Zhao"
] | 2023-10-19 18:02:15 | http://arxiv.org/abs/2310.13065v1 | http://arxiv.org/pdf/2310.13065v1 | 2310.13065v1 |
To grok or not to grok: Disentangling generalization and memorization on corrupted algorithmic datasets | Robust generalization is a major challenge in deep learning, particularly
when the number of trainable parameters is very large. In general, it is very
difficult to know if the network has memorized a particular set of examples or
understood the underlying rule (or both). Motivated by this challenge, we study
an interpretable model where generalizing representations are understood
analytically, and are easily distinguishable from the memorizing ones. Namely,
we consider two-layer neural networks trained on modular arithmetic tasks where
($\xi \cdot 100\%$) of labels are corrupted (\emph{i.e.} some results of the
modular operations in the training set are incorrect). We show that (i) it is
possible for the network to memorize the corrupted labels \emph{and} achieve
$100\%$ generalization at the same time; (ii) the memorizing neurons can be
identified and pruned, lowering the accuracy on corrupted data and improving
the accuracy on uncorrupted data; (iii) regularization methods such as weight
decay, dropout and BatchNorm force the network to ignore the corrupted data
during optimization, and achieve $100\%$ accuracy on the uncorrupted dataset;
and (iv) the effect of these regularization methods is (``mechanistically'')
interpretable: weight decay and dropout force all the neurons to learn
generalizing representations, while BatchNorm de-amplifies the output of
memorizing neurons and amplifies the output of the generalizing ones. Finally,
we show that in the presence of regularization, the training dynamics involves
two consecutive stages: first, the network undergoes the \emph{grokking}
dynamics reaching high train \emph{and} test accuracy; second, it unlearns the
memorizing representations, where train accuracy suddenly jumps from $100\%$ to
$100 (1-\xi)\%$. | [
"Darshil Doshi",
"Aritra Das",
"Tianyu He",
"Andrey Gromov"
] | 2023-10-19 18:01:10 | http://arxiv.org/abs/2310.13061v1 | http://arxiv.org/pdf/2310.13061v1 | 2310.13061v1 |
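As a minimal sketch of the setup described in the preceding abstract (a classification task on modular arithmetic where a fraction $\xi$ of labels is corrupted), the snippet below builds such a dataset. The modulus, corruption scheme, and random seed are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def modular_addition_dataset(p=97, xi=0.2, seed=0):
    """All pairs (a, b) labeled with (a + b) mod p; a fraction xi of labels is corrupted."""
    rng = np.random.default_rng(seed)
    a, b = np.meshgrid(np.arange(p), np.arange(p))
    X = np.stack([a.ravel(), b.ravel()], axis=1)
    y = (X[:, 0] + X[:, 1]) % p
    # Corrupt xi * 100% of labels; adding a nonzero offset mod p guarantees a wrong class.
    n_corrupt = int(xi * len(y))
    idx = rng.choice(len(y), size=n_corrupt, replace=False)
    y[idx] = (y[idx] + rng.integers(1, p, size=n_corrupt)) % p
    return X, y, idx  # idx records which examples carry corrupted labels

X, y, corrupted = modular_addition_dataset()
print(X.shape, y.shape, corrupted.shape)
```

Keeping the corrupted indices makes it possible to check, after training, whether a pruned or regularized network has stopped fitting those examples while retaining accuracy on the clean ones.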
Training Dynamics of Deep Network Linear Regions | The study of Deep Network (DN) training dynamics has largely focused on the
evolution of the loss function, evaluated on or around train and test set data
points. In fact, many DN phenomena were first introduced in the literature in
that respect, e.g., double descent and grokking. In this study, we look at the
training dynamics of the input space partition or linear regions formed by
continuous piecewise affine DNs, e.g., networks with (leaky)ReLU
nonlinearities. First, we present a novel statistic that encompasses the local
complexity (LC) of the DN based on the concentration of linear regions inside
arbitrary dimensional neighborhoods around data points. We observe that during
training, the LC around data points undergoes a number of phases, starting with
a decreasing trend after initialization, followed by an ascent and ending with
a final descending trend. Using exact visualization methods, we come across the
perplexing observation that during the final LC descent phase of training,
linear regions migrate away from training and test samples towards the decision
boundary, making the DN input-output nearly linear everywhere else. We also
observe that the different LC phases are closely related to the memorization
and generalization performance of the DN, especially during grokking. | [
"Ahmed Imtiaz Humayun",
"Randall Balestriero",
"Richard Baraniuk"
] | 2023-10-19 17:59:44 | http://arxiv.org/abs/2310.12977v1 | http://arxiv.org/pdf/2310.12977v1 | 2310.12977v1 |
On the Hidden Waves of Image | In this paper, we introduce an intriguing phenomenon-the successful
reconstruction of images using a set of one-way wave equations with hidden and
learnable speeds. Each individual image corresponds to a solution with a unique
initial condition, which can be computed from the original image using a visual
encoder (e.g., a convolutional neural network). Furthermore, the solution for
each image exhibits two noteworthy mathematical properties: (a) it can be
decomposed into a collection of special solutions of the same one-way wave
equations that are first-order autoregressive, with shared coefficient matrices
for autoregression, and (b) the product of these coefficient matrices forms a
diagonal matrix with the speeds of the wave equations as its diagonal elements.
We term this phenomenon hidden waves, as it reveals that, although the speeds
of the set of wave equations and autoregressive coefficient matrices are
latent, they are both learnable and shared across images. This represents a
mathematical invariance across images, providing a new mathematical perspective
to understand images. | [
"Yinpeng Chen",
"Dongdong Chen",
"Xiyang Dai",
"Mengchen Liu",
"Lu Yuan",
"Zicheng Liu",
"Youzuo Lin"
] | 2023-10-19 17:59:37 | http://arxiv.org/abs/2310.12976v1 | http://arxiv.org/pdf/2310.12976v1 | 2310.12976v1 |
Variational Inference for SDEs Driven by Fractional Noise | We present a novel variational framework for performing inference in (neural)
stochastic differential equations (SDEs) driven by Markov-approximate
fractional Brownian motion (fBM). SDEs offer a versatile tool for modeling
real-world continuous-time dynamic systems with inherent noise and randomness.
Combining SDEs with the powerful inference capabilities of variational methods
enables the learning of representative function distributions through
stochastic gradient descent. However, conventional SDEs typically assume the
underlying noise to follow a Brownian motion (BM), which hinders their ability
to capture long-term dependencies. In contrast, fractional Brownian motion
(fBM) extends BM to encompass non-Markovian dynamics, but existing methods for
inferring fBM parameters are either computationally demanding or statistically
inefficient. In this paper, building upon the Markov approximation of fBM, we
derive the evidence lower bound essential for efficient variational inference
of posterior path measures, drawing from the well-established field of
stochastic analysis. Additionally, we provide a closed-form expression to
determine optimal approximation coefficients. Furthermore, we propose the use
of neural networks to learn the drift, diffusion and control terms within our
variational posterior, leading to the variational training of neural-SDEs. In
this framework, we also optimize the Hurst index, governing the nature of our
fractional noise. Beyond validation on synthetic data, we contribute a novel
architecture for variational latent video prediction, an approach that, to the
best of our knowledge, enables the first variational neural-SDE application to
video perception. | [
"Rembert Daems",
"Manfred Opper",
"Guillaume Crevecoeur",
"Tolga Birdal"
] | 2023-10-19 17:59:21 | http://arxiv.org/abs/2310.12975v1 | http://arxiv.org/pdf/2310.12975v1 | 2310.12975v1 |
Robust multimodal models have outlier features and encode more concepts | What distinguishes robust models from non-robust ones? This question has
gained traction with the appearance of large-scale multimodal models, such as
CLIP. These models have demonstrated unprecedented robustness with respect to
natural distribution shifts. While it has been shown that such differences in
robustness can be traced back to differences in training data, so far it is not
known what that translates to in terms of what the model has learned. In this
work, we bridge this gap by probing the representation spaces of 12 robust
multimodal models with various backbones (ResNets and ViTs) and pretraining
sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M and DataComp). We find two
signatures of robustness in the representation spaces of these models: (1)
Robust models exhibit outlier features characterized by their activations, with
some being several orders of magnitude above average. These outlier features
induce privileged directions in the model's representation space. We
demonstrate that these privileged directions explain most of the predictive
power of the model by pruning up to $80 \%$ of the least important
representation space directions without negative impacts on model accuracy and
robustness; (2) Robust models encode substantially more concepts in their
representation space. While this superposition of concepts allows robust models
to store much information, it also results in highly polysemantic features,
which makes their interpretation challenging. We discuss how these insights
pave the way for future research in various fields, such as model pruning and
mechanistic interpretability. | [
"Jonathan Crabbé",
"Pau Rodríguez",
"Vaishaal Shankar",
"Luca Zappella",
"Arno Blaas"
] | 2023-10-19 17:59:12 | http://arxiv.org/abs/2310.13040v1 | http://arxiv.org/pdf/2310.13040v1 | 2310.13040v1 |
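A hedged sketch of how one might probe for the "outlier features" described above: rank representation dimensions by their mean absolute activation, flag those far above average, and zero out the least important directions before re-evaluating accuracy. The outlier factor and pruning fraction are illustrative assumptions.

```python
import numpy as np

def find_outlier_features(features, factor=10.0):
    """Flag dimensions whose mean |activation| exceeds `factor` times the overall mean."""
    mag = np.abs(features).mean(axis=0)
    return np.where(mag > factor * mag.mean())[0]

def prune_least_important(features, keep_fraction=0.2):
    """Keep only the top fraction of dimensions by magnitude, zeroing the rest."""
    mag = np.abs(features).mean(axis=0)
    k = int(keep_fraction * features.shape[1])
    keep = np.argsort(mag)[-k:]
    pruned = np.zeros_like(features)
    pruned[:, keep] = features[:, keep]
    return pruned, keep

# Synthetic embeddings standing in for multimodal image features.
feats = np.random.randn(1000, 512)
feats[:, 7] *= 100.0                      # inject one artificial outlier dimension
print(find_outlier_features(feats))       # -> [7]
pruned, kept = prune_least_important(feats, keep_fraction=0.2)
```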
Frozen Transformers in Language Models Are Effective Visual Encoder Layers | This paper reveals that large language models (LLMs), despite being trained
solely on textual data, are surprisingly strong encoders for purely visual
tasks in the absence of language. Even more intriguingly, this can be achieved
by a simple yet previously overlooked strategy -- employing a frozen
transformer block from pre-trained LLMs as a constituent encoder layer to
directly process visual tokens. Our work pushes the boundaries of leveraging
LLMs for computer vision tasks, significantly departing from conventional
practices that typically necessitate a multi-modal vision-language setup with
associated language prompts, inputs, or outputs. We demonstrate that our
approach consistently enhances performance across a diverse range of tasks,
encompassing pure 2D and 3D visual recognition tasks (e.g., image and point
cloud classification), temporal modeling tasks (e.g., action recognition),
non-semantic tasks (e.g., motion forecasting), and multi-modal tasks (e.g.,
2D/3D visual question answering and image-text retrieval). Such improvements
are a general phenomenon, applicable to various types of LLMs (e.g., LLaMA and
OPT) and different LLM transformer blocks. We additionally propose the
information filtering hypothesis to explain the effectiveness of pre-trained
LLMs in visual encoding -- the pre-trained LLM transformer blocks discern
informative visual tokens and further amplify their effect. This hypothesis is
empirically supported by the observation that the feature activation, after
training with LLM transformer blocks, exhibits a stronger focus on relevant
regions. We hope that our work inspires new perspectives on utilizing LLMs and
deepening our understanding of their underlying mechanisms. Code is available
at https://github.com/ziqipang/LM4VisualEncoding. | [
"Ziqi Pang",
"Ziyang Xie",
"Yunze Man",
"Yu-Xiong Wang"
] | 2023-10-19 17:59:05 | http://arxiv.org/abs/2310.12973v1 | http://arxiv.org/pdf/2310.12973v1 | 2310.12973v1 |
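A hedged sketch of the architectural idea in the abstract above: insert a frozen transformer block between a visual backbone's token features and the task head, with trainable linear projections on either side. Here a generic `nn.TransformerEncoderLayer` stands in for a pretrained LLaMA/OPT block purely for illustration; the dimensions and pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class VisualEncoderWithFrozenBlock(nn.Module):
    """Patch tokens -> linear projection -> frozen transformer block -> classifier head."""
    def __init__(self, feat_dim, llm_dim, n_classes, frozen_block):
        super().__init__()
        self.proj_in = nn.Linear(feat_dim, llm_dim)
        self.block = frozen_block
        for p in self.block.parameters():
            p.requires_grad = False          # the language-model block stays frozen
        self.proj_out = nn.Linear(llm_dim, feat_dim)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, tokens):               # tokens: (batch, n_tokens, feat_dim)
        x = self.proj_in(tokens)
        x = self.block(x)
        x = self.proj_out(x)
        return self.head(x.mean(dim=1))      # mean-pool tokens, then classify

# Stand-in for a pretrained LLM transformer block (illustrative only).
block = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
model = VisualEncoderWithFrozenBlock(feat_dim=192, llm_dim=256, n_classes=10, frozen_block=block)
logits = model(torch.randn(2, 196, 192))
```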
Demystifying the Myths and Legends of Nonconvex Convergence of SGD | Stochastic gradient descent (SGD) and its variants are the main workhorses
for solving large-scale optimization problems with nonconvex objective
functions. Although the convergence of SGDs in the (strongly) convex case is
well-understood, their convergence for nonconvex functions stands on weak
mathematical foundations. Most existing studies on the nonconvex convergence of
SGD show the complexity results based on either the minimum of the expected
gradient norm or the functional sub-optimality gap (for functions with extra
structural property) by searching the entire range of iterates. Hence the last
iterations of SGDs do not necessarily maintain the same complexity guarantee.
This paper shows that an $\epsilon$-stationary point exists in the final
iterates of SGDs, given a large enough total iteration budget, $T$, not just
anywhere in the entire range of iterates -- a much stronger result than the
existing one. Additionally, our analyses allow us to measure the density of the
$\epsilon$-stationary points in the final iterates of SGD, and we recover the
classical $O(\frac{1}{\sqrt{T}})$ asymptotic rate under various existing
assumptions on the objective function and the bounds on the stochastic
gradient. As a result of our analyses, we addressed certain myths and legends
related to the nonconvex convergence of SGD and posed some thought-provoking
questions that could set new directions for research. | [
"Aritra Dutta",
"El Houcine Bergou",
"Soumia Boucherouite",
"Nicklas Werge",
"Melih Kandemir",
"Xin Li"
] | 2023-10-19 17:58:59 | http://arxiv.org/abs/2310.12969v1 | http://arxiv.org/pdf/2310.12969v1 | 2310.12969v1 |
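For orientation, the classical nonconvex guarantee that the abstract contrasts itself with is a bound on the best iterate rather than the final ones; under standard smoothness and bounded-variance assumptions it is usually stated as follows (a textbook form, not a quotation from the paper):

```latex
\min_{1 \le t \le T} \; \mathbb{E}\big[\, \|\nabla f(x_t)\|^{2} \,\big]
  \;\le\; \mathcal{O}\!\left(\frac{1}{\sqrt{T}}\right)
```

The paper's point is that an $\epsilon$-stationary point of this quality can be located among the final iterates of SGD, not merely somewhere in the full trajectory.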
Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning | Deep Learning has already been successfully applied to analyze industrial
sensor data in a variety of relevant use cases. However, the opaque nature of
many well-performing methods poses a major obstacle for real-world deployment.
Explainable AI (XAI) and especially feature attribution techniques promise to
enable insights about how such models form their decision. But the plain
application of such methods often fails to provide truly informative and
problem-specific insights to domain experts. In this work, we focus on the
specific task of detecting faults in rolling element bearings from vibration
signals. We propose a novel and domain-specific feature attribution framework
that allows us to evaluate how well the underlying logic of a model corresponds
with expert reasoning. Utilizing the framework we are able to validate the
trustworthiness and to successfully anticipate the generalization ability of
different well-performing deep learning models. Our methodology demonstrates
how signal processing tools can effectively be used to enhance Explainable AI
techniques and acts as a template for similar problems. | [
"Thomas Decker",
"Michael Lebacher",
"Volker Tresp"
] | 2023-10-19 17:58:11 | http://arxiv.org/abs/2310.12967v1 | http://arxiv.org/pdf/2310.12967v1 | 2310.12967v1 |
PAC Prediction Sets Under Label Shift | Prediction sets capture uncertainty by predicting sets of labels rather than
individual labels, enabling downstream decisions to conservatively account for
all plausible outcomes. Conformal inference algorithms construct prediction
sets guaranteed to contain the true label with high probability. These
guarantees fail to hold in the face of distribution shift, which is precisely
when reliable uncertainty quantification can be most useful. We propose a novel
algorithm for constructing prediction sets with PAC guarantees in the label
shift setting. This method estimates the predicted probabilities of the classes
in a target domain, as well as the confusion matrix, then propagates
uncertainty in these estimates through a Gaussian elimination algorithm to
compute confidence intervals for importance weights. Finally, it uses these
intervals to construct prediction sets. We evaluate our approach on five
datasets: the CIFAR-10, ChestX-Ray and Entity-13 image datasets, the tabular
CDC Heart dataset, and the AGNews text dataset. Our algorithm satisfies the PAC
guarantee while producing smaller, more informative, prediction sets compared
to several baselines. | [
"Wenwen Si",
"Sangdon Park",
"Insup Lee",
"Edgar Dobriban",
"Osbert Bastani"
] | 2023-10-19 17:57:57 | http://arxiv.org/abs/2310.12964v1 | http://arxiv.org/pdf/2310.12964v1 | 2310.12964v1 |
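A heavily simplified, hedged sketch of the final step of prediction-set construction under label shift: calibrate a score threshold using importance weights on the calibration labels, then include every label whose predicted probability clears the threshold. The full method additionally estimates the confusion matrix and propagates interval-valued weights; here the weights are assumed to be given, and the conservativeness corrections are omitted.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, weights, alpha=0.1):
    """Largest score threshold whose importance-weighted miscoverage stays below alpha."""
    scores = cal_probs[np.arange(len(cal_labels)), cal_labels]  # probability of the true label
    order = np.argsort(scores)
    w = weights[cal_labels][order]                              # per-example label-shift weights
    cum = np.cumsum(w) / w.sum()
    cutoff_idx = np.searchsorted(cum, alpha)                    # weighted alpha-quantile of scores
    return scores[order][cutoff_idx]

def prediction_set(test_probs, tau):
    """Return, for each test point, all labels with predicted probability at least tau."""
    return [np.where(p >= tau)[0] for p in test_probs]
```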
An Emulator for Fine-Tuning Large Language Models using Small Language Models | Widely used language models (LMs) are typically built by scaling up a
two-stage training pipeline: a pre-training stage that uses a very large,
diverse dataset of text and a fine-tuning (sometimes, 'alignment') stage that
uses targeted examples or other specifications of desired behaviors. While it
has been hypothesized that knowledge and skills come from pre-training, and
fine-tuning mostly filters this knowledge and skillset, this intuition has not
been extensively tested. To aid in doing so, we introduce a novel technique for
decoupling the knowledge and skills gained in these two stages, enabling a
direct answer to the question, "What would happen if we combined the knowledge
learned by a large model during pre-training with the knowledge learned by a
small model during fine-tuning (or vice versa)?" Using an RL-based framework
derived from recent developments in learning from human preferences, we
introduce emulated fine-tuning (EFT), a principled and practical method for
sampling from a distribution that approximates (or 'emulates') the result of
pre-training and fine-tuning at different scales. Our experiments with EFT show
that scaling up fine-tuning tends to improve helpfulness, while scaling up
pre-training tends to improve factuality. Beyond decoupling scale, we show that
EFT enables test-time adjustment of competing behavioral traits like
helpfulness and harmlessness without additional training. Finally, a special
case of emulated fine-tuning, which we call LM up-scaling, avoids
resource-intensive fine-tuning of large pre-trained models by ensembling them
with small fine-tuned models, essentially emulating the result of fine-tuning
the large pre-trained model. Up-scaling consistently improves helpfulness and
factuality of instruction-following models in the Llama, Llama-2, and Falcon
families, without additional hyperparameters or training. | [
"Eric Mitchell",
"Rafael Rafailov",
"Archit Sharma",
"Chelsea Finn",
"Christopher D. Manning"
] | 2023-10-19 17:57:16 | http://arxiv.org/abs/2310.12962v1 | http://arxiv.org/pdf/2310.12962v1 | 2310.12962v1 |
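A hedged sketch of the kind of distribution-level composition that emulated fine-tuning describes: add the behavioral delta between a small fine-tuned model and its small base counterpart to a large base model's next-token log-probabilities, then renormalize. The exact combination rule and any temperature scaling are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def eft_next_token_logprobs(logits_large_base, logits_small_ft, logits_small_base):
    """Emulate 'large pre-training + small fine-tuning' at the distribution level."""
    lp_large = F.log_softmax(logits_large_base, dim=-1)
    lp_ft = F.log_softmax(logits_small_ft, dim=-1)
    lp_base = F.log_softmax(logits_small_base, dim=-1)
    combined = lp_large + (lp_ft - lp_base)      # apply the small models' fine-tuning delta
    return F.log_softmax(combined, dim=-1)       # renormalize over the vocabulary

# Toy example over a 5-token vocabulary.
out = eft_next_token_logprobs(torch.randn(5), torch.randn(5), torch.randn(5))
print(out.exp().sum())  # ~1.0
```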
Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems | In this work, we study rapid, step-wise improvements of the loss in
transformers when confronted with multi-step decision tasks. We found
that transformers struggle to learn the intermediate tasks, whereas CNNs have
no such issue on the tasks we studied. When transformers learn the intermediate
task, they do this rapidly and unexpectedly after both training and validation
loss saturated for hundreds of epochs. We call these rapid improvements
Eureka-moments, since the transformer appears to suddenly learn a previously
incomprehensible task. Similar leaps in performance have become known as
Grokking. In contrast to Grokking, for Eureka-moments, both the validation and
the training loss saturate before rapidly improving. We trace the problem back
to the Softmax function in the self-attention block of transformers and show
ways to alleviate the problem. These fixes improve training speed. The improved
models reach 95% of the baseline model in just 20% of training steps while
having a much higher likelihood to learn the intermediate task, lead to higher
final accuracy and are more robust to hyper-parameters. | [
"David T. Hoffmann",
"Simon Schrodi",
"Nadine Behrmann",
"Volker Fischer",
"Thomas Brox"
] | 2023-10-19 17:55:06 | http://arxiv.org/abs/2310.12956v1 | http://arxiv.org/pdf/2310.12956v1 | 2310.12956v1 |
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption | Offline reinforcement learning (RL) presents a promising approach for
learning reinforced policies from offline datasets without the need for costly
or unsafe interactions with the environment. However, datasets collected by
humans in real-world environments are often noisy and may even be maliciously
corrupted, which can significantly degrade the performance of offline RL. In
this work, we first investigate the performance of current offline RL
algorithms under comprehensive data corruption, including states, actions,
rewards, and dynamics. Our extensive experiments reveal that implicit
Q-learning (IQL) demonstrates remarkable resilience to data corruption among
various offline RL algorithms. Furthermore, we conduct both empirical and
theoretical analyses to understand IQL's robust performance, identifying its
supervised policy learning scheme as the key factor. Despite its relative
robustness, IQL still suffers from heavy-tail targets of Q functions under
dynamics corruption. To tackle this challenge, we draw inspiration from robust
statistics to employ the Huber loss to handle the heavy-tailedness and utilize
quantile estimators to balance penalization for corrupted data and learning
stability. By incorporating these simple yet effective modifications into IQL,
we propose a more robust offline RL approach named Robust IQL (RIQL). Extensive
experiments demonstrate that RIQL exhibits highly robust performance when
subjected to diverse data corruption scenarios. | [
"Rui Yang",
"Han Zhong",
"Jiawei Xu",
"Amy Zhang",
"Chongjie Zhang",
"Lei Han",
"Tong Zhang"
] | 2023-10-19 17:54:39 | http://arxiv.org/abs/2310.12955v1 | http://arxiv.org/pdf/2310.12955v1 | 2310.12955v1 |
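The Huber-loss idea mentioned in the RIQL abstract can be illustrated with a short sketch: replace the usual squared TD error with a Huber penalty so that heavy-tailed or corrupted targets contribute only linearly. The delta value and the plain TD target below are illustrative assumptions, not the paper's full algorithm (which also uses quantile estimators).

```python
import torch
import torch.nn.functional as F

def huber_td_loss(q_values, rewards, next_q_values, dones, gamma=0.99, delta=1.0):
    """TD loss with a Huber penalty: quadratic near zero, linear for large errors,
    limiting the influence of heavy-tailed or corrupted targets."""
    targets = rewards + gamma * (1.0 - dones) * next_q_values.detach()
    return F.huber_loss(q_values, targets, delta=delta)

# Toy batch of 32 transitions.
q = torch.randn(32, requires_grad=True)
loss = huber_td_loss(q, torch.randn(32), torch.randn(32), torch.zeros(32))
loss.backward()
```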
Cousins Of The Vendi Score: A Family Of Similarity-Based Diversity Metrics For Science And Machine Learning | Measuring diversity accurately is important for many scientific fields,
including machine learning (ML), ecology, and chemistry. The Vendi Score was
introduced as a generic similarity-based diversity metric that extends the Hill
number of order q=1 by leveraging ideas from quantum statistical mechanics.
Contrary to many diversity metrics in ecology, the Vendi Score accounts for
similarity and does not require knowledge of the prevalence of the categories
in the collection to be evaluated for diversity. However, the Vendi Score
treats each item in a given collection with a level of sensitivity proportional
to the item's prevalence. This is undesirable in settings where there is a
significant imbalance in item prevalence. In this paper, we extend the other
Hill numbers using similarity to provide flexibility in allocating sensitivity
to rare or common items. This leads to a family of diversity metrics -- Vendi
scores with different levels of sensitivity -- that can be used in a variety of
applications. We study the properties of the scores in a synthetic controlled
setting where the ground truth diversity is known. We then test their utility
in improving molecular simulations via Vendi Sampling. Finally, we use the
Vendi scores to better understand the behavior of image generative models in
terms of memorization, duplication, diversity, and sample quality. | [
"Amey Pasarkar",
"Adji Bousso Dieng"
] | 2023-10-19 17:52:04 | http://arxiv.org/abs/2310.12952v1 | http://arxiv.org/pdf/2310.12952v1 | 2310.12952v1 |
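A hedged sketch of the similarity-based diversity idea behind the Vendi Score and its order-q relatives: the (generalized) exponential of the entropy of the eigenvalues of the normalized similarity matrix. The cosine kernel below is an illustrative assumption; the authors' released code should be consulted for the exact definitions.

```python
import numpy as np

def vendi_score(K, q=1.0):
    """Diversity of a collection from its n x n similarity matrix K (with K[i, i] = 1)."""
    lam = np.linalg.eigvalsh(K / K.shape[0])
    lam = lam[lam > 1e-12]                       # drop numerically-zero eigenvalues
    if q == 1.0:
        return float(np.exp(-np.sum(lam * np.log(lam))))   # order-1 (Shannon) case
    return float(np.sum(lam ** q) ** (1.0 / (1.0 - q)))    # Hill-number style order-q case

# Toy example: cosine similarities of random unit vectors.
X = np.random.randn(50, 16)
X /= np.linalg.norm(X, axis=1, keepdims=True)
K = X @ X.T
print(vendi_score(K), vendi_score(K, q=0.5), vendi_score(K, q=2.0))
```

Lower q gives more weight to rare items and higher q to common ones, which is the sensitivity knob the abstract refers to.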
3D-GPT: Procedural 3D Modeling with Large Language Models | In the pursuit of efficient automated content creation, procedural
generation, leveraging modifiable parameters and rule-based systems, emerges as
a promising approach. Nonetheless, it could be a demanding endeavor, given its
intricate nature necessitating a deep understanding of rules, algorithms, and
parameters. To reduce workload, we introduce 3D-GPT, a framework utilizing
large language models~(LLMs) for instruction-driven 3D modeling. 3D-GPT
positions LLMs as proficient problem solvers, dissecting the procedural 3D
modeling tasks into accessible segments and appointing the apt agent for each
task. 3D-GPT integrates three core agents: the task dispatch agent, the
conceptualization agent, and the modeling agent. They collaboratively achieve
two objectives. First, it enhances concise initial scene descriptions, evolving
them into detailed forms while dynamically adapting the text based on
subsequent instructions. Second, it integrates procedural generation,
extracting parameter values from enriched text to effortlessly interface with
3D software for asset creation. Our empirical investigations confirm that
3D-GPT not only interprets and executes instructions, delivering reliable
results but also collaborates effectively with human designers. Furthermore, it
seamlessly integrates with Blender, unlocking expanded manipulation
possibilities. Our work highlights the potential of LLMs in 3D modeling,
offering a basic framework for future advancements in scene generation and
animation. | [
"Chunyi Sun",
"Junlin Han",
"Weijian Deng",
"Xinlong Wang",
"Zishan Qin",
"Stephen Gould"
] | 2023-10-19 17:41:48 | http://arxiv.org/abs/2310.12945v1 | http://arxiv.org/pdf/2310.12945v1 | 2310.12945v1 |
On the Representational Capacity of Recurrent Neural Language Models | This work investigates the computational expressivity of language models
(LMs) based on recurrent neural networks (RNNs). Siegelmann and Sontag (1992)
famously showed that RNNs with rational weights and hidden states and unbounded
computation time are Turing complete. However, LMs define weightings over
strings in addition to just (unweighted) language membership and the analysis
of the computational power of RNN LMs (RLMs) should reflect this. We extend the
Turing completeness result to the probabilistic case, showing how a rationally
weighted RLM with unbounded computation time can simulate any probabilistic
Turing machine (PTM). Since, in practice, RLMs work in real-time, processing a
symbol at every time step, we treat the above result as an upper bound on the
expressivity of RLMs. We also provide a lower bound by showing that under the
restriction to real-time computation, such models can simulate deterministic
real-time rational PTMs. | [
"Franz Nowak",
"Anej Svete",
"Li Du",
"Ryan Cotterell"
] | 2023-10-19 17:39:47 | http://arxiv.org/abs/2310.12942v2 | http://arxiv.org/pdf/2310.12942v2 | 2310.12942v2 |
The Foundation Model Transparency Index | Foundation models have rapidly permeated society, catalyzing a wave of
generative AI applications spanning enterprise and consumer-facing contexts.
While the societal impact of foundation models is growing, transparency is on
the decline, mirroring the opacity that has plagued past digital technologies
(e.g. social media). Reversing this trend is essential: transparency is a vital
precondition for public accountability, scientific innovation, and effective
governance. To assess the transparency of the foundation model ecosystem and
help improve transparency over time, we introduce the Foundation Model
Transparency Index. The Foundation Model Transparency Index specifies 100
fine-grained indicators that comprehensively codify transparency for foundation
models, spanning the upstream resources used to build a foundation model (e.g.
data, labor, compute), details about the model itself (e.g. size, capabilities,
risks), and the downstream use (e.g. distribution channels, usage policies,
affected geographies). We score 10 major foundation model developers (e.g.
OpenAI, Google, Meta) against the 100 indicators to assess their transparency.
To facilitate and standardize assessment, we score developers in relation to
their practices for their flagship foundation model (e.g. GPT-4 for OpenAI,
PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about
the foundation model ecosystem: for example, no developer currently discloses
significant information about the downstream impact of its flagship model, such
as the number of users, affected market sectors, or how users can seek redress
for harm. Overall, the Foundation Model Transparency Index establishes the
level of transparency today to drive progress on foundation model governance
via industry standards and regulatory intervention. | [
"Rishi Bommasani",
"Kevin Klyman",
"Shayne Longpre",
"Sayash Kapoor",
"Nestor Maslej",
"Betty Xiong",
"Daniel Zhang",
"Percy Liang"
] | 2023-10-19 17:39:02 | http://arxiv.org/abs/2310.12941v1 | http://arxiv.org/pdf/2310.12941v1 | 2310.12941v1 |
Generative Flow Networks as Entropy-Regularized RL | The recently proposed generative flow networks (GFlowNets) are a method of
training a policy to sample compositional discrete objects with probabilities
proportional to a given reward via a sequence of actions. GFlowNets exploit the
sequential nature of the problem, drawing parallels with reinforcement learning
(RL). Our work extends the connection between RL and GFlowNets to a general
case. We demonstrate how the task of learning a generative flow network can be
efficiently redefined as an entropy-regularized RL problem with a specific
reward and regularizer structure. Furthermore, we illustrate the practical
efficiency of this reformulation by applying standard soft RL algorithms to
GFlowNet training across several probabilistic modeling tasks. Contrary to
previously reported results, we show that entropic RL approaches can be
competitive against established GFlowNet training methods. This perspective
opens a direct path for integrating reinforcement learning principles into the
realm of generative flow networks. | [
"Daniil Tiapkin",
"Nikita Morozov",
"Alexey Naumov",
"Dmitry Vetrov"
] | 2023-10-19 17:31:40 | http://arxiv.org/abs/2310.12934v2 | http://arxiv.org/pdf/2310.12934v2 | 2310.12934v2 |
Eureka: Human-Level Reward Design via Coding Large Language Models | Large Language Models (LLMs) have excelled as high-level semantic planners
for sequential decision-making tasks. However, harnessing them to learn complex
low-level manipulation tasks, such as dexterous pen spinning, remains an open
problem. We bridge this fundamental gap and present Eureka, a human-level
reward design algorithm powered by LLMs. Eureka exploits the remarkable
zero-shot generation, code-writing, and in-context improvement capabilities of
state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over
reward code. The resulting rewards can then be used to acquire complex skills
via reinforcement learning. Without any task-specific prompting or pre-defined
reward templates, Eureka generates reward functions that outperform expert
human-engineered rewards. In a diverse suite of 29 open-source RL environments
that include 10 distinct robot morphologies, Eureka outperforms human experts
on 83% of the tasks, leading to an average normalized improvement of 52%. The
generality of Eureka also enables a new gradient-free in-context learning
approach to reinforcement learning from human feedback (RLHF), readily
incorporating human inputs to improve the quality and the safety of the
generated rewards without model updating. Finally, using Eureka rewards in a
curriculum learning setting, we demonstrate for the first time, a simulated
Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a
pen in circles at rapid speed. | [
"Yecheng Jason Ma",
"William Liang",
"Guanzhi Wang",
"De-An Huang",
"Osbert Bastani",
"Dinesh Jayaraman",
"Yuke Zhu",
"Linxi Fan",
"Anima Anandkumar"
] | 2023-10-19 17:31:01 | http://arxiv.org/abs/2310.12931v1 | http://arxiv.org/pdf/2310.12931v1 | 2310.12931v1 |
Probabilistic Modeling of Human Teams to Infer False Beliefs | We develop a probabilistic graphical model (PGM) for artificially intelligent
(AI) agents to infer human beliefs during a simulated urban search and rescue
(USAR) scenario executed in a Minecraft environment with a team of three
players. The PGM approach makes observable states and actions explicit, as well
as beliefs and intentions grounded by evidence about what players see and do
over time. This approach also supports inferring the effect of interventions,
which are vital if AI agents are to assist human teams. The experiment
incorporates manipulations of players' knowledge, and the virtual
Minecraft-based testbed provides access to several streams of information,
including the objects in the players' field of view. The participants are
equipped with a set of marker blocks that can be placed near room entrances to
signal the presence or absence of victims in the rooms to their teammates. In
each team, one of the members is given a different legend for the markers than
the other two, which may mislead them about the state of the rooms; that is,
they will hold a false belief. We extend previous works in this field by
introducing ToMCAT, an AI agent that can reason about individual and shared
mental states. We find that the players' behaviors are affected by what they
see in their in-game field of view, their beliefs about the meaning of the
markers, and their beliefs about which meaning the team decided to adopt. In
addition, we show that ToMCAT's beliefs are consistent with the players'
actions and that it can infer false beliefs with accuracy significantly better
than chance and comparable to inferences made by human observers. | [
"Paulo Soares",
"Adarsh Pyarelal",
"Kobus Barnard"
] | 2023-10-19 17:28:37 | http://arxiv.org/abs/2310.12929v1 | http://arxiv.org/pdf/2310.12929v1 | 2310.12929v1 |
Enhancing Open-World Bacterial Raman Spectra Identification by Feature Regularization for Improved Resilience against Unknown Classes | The combination of Deep Learning techniques and Raman spectroscopy shows
great potential offering precise and prompt identification of pathogenic
bacteria in clinical settings. However, the traditional closed-set
classification approaches assume that all test samples belong to one of the
known pathogens, and their applicability is limited since the clinical
environment is inherently unpredictable and dynamic, and unknown or emerging
pathogens may not be included in the available catalogs. We demonstrate that
the current state-of-the-art Neural Networks identifying pathogens through
Raman spectra are vulnerable to unknown inputs, resulting in an uncontrollable
false positive rate. To address this issue, first, we developed a novel
ensemble of ResNet architectures combined with the attention mechanism which
outperforms existing closed-world methods, achieving an accuracy of $87.8 \pm
0.1\%$ compared to the best available model's accuracy of $86.7 \pm 0.4\%$.
Second, through the integration of feature regularization by the Objectosphere
loss function, our model achieves both high accuracy in identifying known
pathogens from the catalog and effectively separates unknown samples,
drastically reducing the false positive rate. Finally, the proposed feature
regularization method during training significantly enhances the performance of
out-of-distribution detectors during the inference phase improving the
reliability of the detection of unknown classes. Our novel algorithm for Raman
spectroscopy enables the detection of unknown, uncatalogued, and emerging
pathogens providing the flexibility to adapt to future pathogens that may
emerge, and has the potential to improve the reliability of Raman-based
solutions in dynamic operating environments where accuracy is critical, such as
public safety applications. | [
"Yaroslav Balytskyi",
"Nataliia Kalashnyk",
"Inna Hubenko",
"Alina Balytska",
"Kelly McNear"
] | 2023-10-19 17:19:47 | http://arxiv.org/abs/2310.13723v1 | http://arxiv.org/pdf/2310.13723v1 | 2310.13723v1 |
Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning | Reinforcement learning (RL) requires either manually specifying a reward
function, which is often infeasible, or learning a reward model from a large
amount of human feedback, which is often very expensive. We study a more
sample-efficient alternative: using pretrained vision-language models (VLMs) as
zero-shot reward models (RMs) to specify tasks via natural language. We propose
a natural and general approach to using VLMs as reward models, which we call
VLM-RMs. We use VLM-RMs based on CLIP to train a MuJoCo humanoid to learn
complex tasks without a manually specified reward function, such as kneeling,
doing the splits, and sitting in a lotus position. For each of these tasks, we
only provide a single sentence text prompt describing the desired task with
minimal prompt engineering. We provide videos of the trained agents at:
https://sites.google.com/view/vlm-rm. We can improve performance by providing a
second ``baseline'' prompt and projecting out parts of the CLIP embedding space
irrelevant to distinguish between goal and baseline. Further, we find a strong
scaling effect for VLM-RMs: larger VLMs trained with more compute and data are
better reward models. The failure modes of VLM-RMs we encountered are all
related to known capability limitations of current VLMs, such as limited
spatial reasoning ability or visually unrealistic environments that are far
off-distribution for the VLM. We find that VLM-RMs are remarkably robust as
long as the VLM is large enough. This suggests that future VLMs will become
more and more useful reward models for a wide range of RL applications. | [
"Juan Rocamonde",
"Victoriano Montesinos",
"Elvis Nava",
"Ethan Perez",
"David Lindner"
] | 2023-10-19 17:17:06 | http://arxiv.org/abs/2310.12921v1 | http://arxiv.org/pdf/2310.12921v1 | 2310.12921v1 |
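A hedged sketch of a VLM reward of the kind described above: cosine similarity between a rendered frame's embedding and the goal prompt's embedding, with an optional partial projection along the goal-minus-baseline direction. The embeddings are placeholders for whatever CLIP-style encoder is available; the projection rule is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def vlm_reward(image_emb, goal_emb, baseline_emb=None, alpha=1.0):
    """Reward = cosine similarity between the (optionally projected) image embedding
    and the goal-prompt embedding. alpha = 0 disables the baseline projection."""
    g = goal_emb / np.linalg.norm(goal_emb)
    x = image_emb / np.linalg.norm(image_emb)
    if baseline_emb is not None and alpha > 0:
        b = baseline_emb / np.linalg.norm(baseline_emb)
        d = g - b
        d /= np.linalg.norm(d)
        # Partially suppress components orthogonal to the goal-minus-baseline direction.
        x = alpha * np.dot(x, d) * d + (1 - alpha) * x
    return float(np.dot(x, g))

# Usage with placeholder 512-d embeddings standing in for CLIP outputs.
rng = np.random.default_rng(0)
r = vlm_reward(rng.standard_normal(512), rng.standard_normal(512), rng.standard_normal(512), alpha=0.5)
```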
Generative Marginalization Models | We introduce marginalization models (MaMs), a new family of generative models
for high-dimensional discrete data. They offer scalable and flexible generative
modeling with tractable likelihoods by explicitly modeling all induced marginal
distributions. Marginalization models enable fast evaluation of arbitrary
marginal probabilities with a single forward pass of the neural network, which
overcomes a major limitation of methods with exact marginal inference, such as
autoregressive models (ARMs). We propose scalable methods for learning the
marginals, grounded in the concept of "marginalization self-consistency".
Unlike previous methods, MaMs support scalable training of any-order generative
models for high-dimensional problems under the setting of energy-based
training, where the goal is to match the learned distribution to a given
desired probability (specified by an unnormalized (log) probability function
such as energy function or reward function). We demonstrate the effectiveness
of the proposed model on a variety of discrete data distributions, including
binary images, language, physical systems, and molecules, for maximum
likelihood and energy-based training settings. MaMs achieve orders of magnitude
speedup in evaluating the marginal probabilities on both settings. For
energy-based training tasks, MaMs enable any-order generative modeling of
high-dimensional problems beyond the capability of previous methods. Code is at
https://github.com/PrincetonLIPS/MaM. | [
"Sulin Liu",
"Peter J. Ramadge",
"Ryan P. Adams"
] | 2023-10-19 17:14:29 | http://arxiv.org/abs/2310.12920v1 | http://arxiv.org/pdf/2310.12920v1 | 2310.12920v1 |
Personalized human mobility prediction for HuMob challenge | We explain the methodology used to create the data submitted to the HuMob
Challenge, a data analysis competition for human mobility prediction. We
adopted a personalized model to predict the individual's movement trajectory
from their data, instead of predicting from the overall movement, based on the
hypothesis that human movement is unique to each person. We devised the
features such as the date and time, activity time, days of the week, time of
day, and frequency of visits to POI (Point of Interest). As additional
features, we incorporated the movement of other individuals with similar
behavior patterns through the employment of clustering. The machine learning
model we adopted was Support Vector Regression (SVR). We assessed accuracy
through an offline evaluation and carried out feature selection and parameter
tuning. Although the provided dataset contains trajectories for 100,000 users,
our method uses only the data of the 20,000 target users and does not need the
remaining 80,000. Despite the personalized model's traditional feature engineering
approach, this model yields reasonably good accuracy with lower computational
cost. | [
"Masahiro Suzuki",
"Shomu Furuta",
"Yusuke Fukazawa"
] | 2023-10-19 16:52:12 | http://arxiv.org/abs/2310.12900v1 | http://arxiv.org/pdf/2310.12900v1 | 2310.12900v1 |
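A hedged sketch of the kind of per-user model the HuMob entry describes: simple temporal features feeding a scikit-learn SVR, with one regressor per coordinate per user. The feature set, slot length, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_user_model(timestamps, xs, ys):
    """Fit one SVR per coordinate for a single user from simple temporal features."""
    days = (timestamps // 48) % 7            # assuming 48 half-hour slots per day: day of week
    slots = timestamps % 48                  # time of day
    feats = np.column_stack([days, slots,
                             np.sin(2 * np.pi * slots / 48),
                             np.cos(2 * np.pi * slots / 48)])
    model_x = make_pipeline(StandardScaler(), SVR(C=1.0, epsilon=0.5)).fit(feats, xs)
    model_y = make_pipeline(StandardScaler(), SVR(C=1.0, epsilon=0.5)).fit(feats, ys)
    return model_x, model_y, feats

# Toy trajectory: 200 observations of one user's grid coordinates.
t = np.arange(200)
mx, my, F = fit_user_model(t, np.random.randint(0, 200, 200), np.random.randint(0, 200, 200))
print(mx.predict(F[:5]), my.predict(F[:5]))
```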
Blind quantum machine learning with quantum bipartite correlator | Distributed quantum computing is a promising computational paradigm for
performing computations that are beyond the reach of individual quantum
devices. Privacy in distributed quantum computing is critical for maintaining
confidentiality and protecting the data in the presence of untrusted computing
nodes. In this work, we introduce novel blind quantum machine learning
protocols based on the quantum bipartite correlator algorithm. Our protocols
have reduced communication overhead while preserving the privacy of data from
untrusted parties. We introduce robust algorithm-specific privacy-preserving
mechanisms with low computational overhead that do not require complex
cryptographic techniques. We then validate the effectiveness of the proposed
protocols through complexity and privacy analysis. Our findings pave the way
for advancements in distributed quantum computing, opening up new possibilities
for privacy-aware machine learning applications in the era of quantum
technologies. | [
"Changhao Li",
"Boning Li",
"Omar Amer",
"Ruslan Shaydulin",
"Shouvanik Chakrabarti",
"Guoqing Wang",
"Haowei Xu",
"Hao Tang",
"Isidor Schoch",
"Niraj Kumar",
"Charles Lim",
"Ju Li",
"Paola Cappellaro",
"Marco Pistoia"
] | 2023-10-19 16:42:32 | http://arxiv.org/abs/2310.12893v1 | http://arxiv.org/pdf/2310.12893v1 | 2310.12893v1 |
Fine-Tuning Generative Models as an Inference Method for Robotic Tasks | Adaptable models could greatly benefit robotic agents operating in the real
world, allowing them to deal with novel and varying conditions. While
approaches such as Bayesian inference are well-studied frameworks for adapting
models to evidence, we build on recent advances in deep generative models which
have greatly affected many areas of robotics. Harnessing modern GPU
acceleration, we investigate how to quickly adapt the sample generation of
neural network models to observations in robotic tasks. We propose a simple and
general method that is applicable to various deep generative models and robotic
environments. The key idea is to quickly fine-tune the model by fitting it to
generated samples matching the observed evidence, using the cross-entropy
method. We show that our method can be applied to both autoregressive models
and variational autoencoders, and demonstrate its usability in object shape
inference from grasping, inverse kinematics calculation, and point cloud
completion. | [
"Orr Krupnik",
"Elisei Shafer",
"Tom Jurgenson",
"Aviv Tamar"
] | 2023-10-19 16:11:49 | http://arxiv.org/abs/2310.12862v1 | http://arxiv.org/pdf/2310.12862v1 | 2310.12862v1 |
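The cross-entropy-method idea in the abstract above can be sketched generically: repeatedly sample from the generative model, score the samples against the observed evidence, and fine-tune the model on the top-scoring (elite) samples. The `sample`/`log_prob` model interface and the score function are placeholder assumptions.

```python
import torch

def cem_finetune(model, optimizer, score_fn, n_rounds=10, n_samples=256, elite_frac=0.1):
    """Cross-entropy-method style adaptation: fit the generative model to its own
    highest-scoring samples so that generation concentrates on the evidence."""
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_rounds):
        with torch.no_grad():
            samples = model.sample(n_samples)       # assumed model API
            scores = score_fn(samples)              # agreement with the observation
        elite = samples[torch.topk(scores, n_elite).indices]
        optimizer.zero_grad()
        loss = -model.log_prob(elite).mean()        # maximize likelihood of the elites
        loss.backward()
        optimizer.step()
    return model
```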
Audio Editing with Non-Rigid Text Prompts | In this paper, we explore audio-editing with non-rigid text edits. We show
that the proposed editing pipeline is able to create audio edits that remain
faithful to the input audio. We explore text prompts that perform addition,
style transfer, and in-painting. We quantitatively and qualitatively show that
the edits are able to obtain results which outperform Audio-LDM, a recently
released text-prompted audio generation model. Qualitative inspection of the
results points out that the edits given by our approach remain more faithful to
the input audio in terms of keeping the original onsets and offsets of the
audio events. | [
"Francesco Paissan",
"Zhepei Wang",
"Mirco Ravanelli",
"Paris Smaragdis",
"Cem Subakan"
] | 2023-10-19 16:09:44 | http://arxiv.org/abs/2310.12858v1 | http://arxiv.org/pdf/2310.12858v1 | 2310.12858v1 |
Model-agnostic variable importance for predictive uncertainty: an entropy-based approach | In order to trust the predictions of a machine learning algorithm, it is
necessary to understand the factors that contribute to those predictions. In
the case of probabilistic and uncertainty-aware models, it is necessary to
understand not only the reasons for the predictions themselves, but also the
model's level of confidence in those predictions. In this paper, we show how
existing methods in explainability can be extended to uncertainty-aware models
and how such extensions can be used to understand the sources of uncertainty in
a model's predictive distribution. In particular, by adapting permutation
feature importance, partial dependence plots, and individual conditional
expectation plots, we demonstrate that novel insights into model behaviour may
be obtained and that these methods can be used to measure the impact of
features on both the entropy of the predictive distribution and the
log-likelihood of the ground truth labels under that distribution. With
experiments using both synthetic and real-world data, we demonstrate the
utility of these approaches in understanding both the sources of uncertainty
and their impact on model performance. | [
"Danny Wood",
"Theodore Papamarkou",
"Matt Benatan",
"Richard Allmendinger"
] | 2023-10-19 15:51:23 | http://arxiv.org/abs/2310.12842v1 | http://arxiv.org/pdf/2310.12842v1 | 2310.12842v1 |
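A hedged sketch of the entropy-based permutation importance the abstract describes: shuffle one feature at a time and record how much the mean predictive entropy changes. The model is assumed to expose a scikit-learn-style `predict_proba`; everything else is illustrative.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean Shannon entropy of the predictive distribution, in nats."""
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=1).mean())

def entropy_permutation_importance(model, X, n_repeats=5, seed=0):
    """Change in mean predictive entropy when each feature is permuted in turn."""
    rng = np.random.default_rng(seed)
    base = predictive_entropy(model.predict_proba(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            deltas.append(predictive_entropy(model.predict_proba(Xp)) - base)
        importances[j] = np.mean(deltas)
    return importances
```

A large positive value indicates a feature the model relies on to stay confident; a value near zero indicates a feature whose removal barely affects the model's uncertainty.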
Knowledge-Augmented Language Model Verification | Recent Language Models (LMs) have shown impressive capabilities in generating
texts with the knowledge internalized in their parameters. Yet, LMs often generate
factually incorrect responses to the given queries, since their knowledge
may be inaccurate, incomplete, and outdated. To address this problem, previous
works propose to augment LMs with the knowledge retrieved from an external
knowledge source. However, such approaches often show suboptimal text
generation performance due to two reasons: 1) the model may fail to retrieve
the knowledge relevant to the given query, or 2) the model may not faithfully
reflect the retrieved knowledge in the generated text. To overcome these, we
propose to verify the output and the knowledge of the knowledge-augmented LMs
with a separate verifier, which is a small LM that is trained to detect those
two types of errors through instruction-finetuning. Then, when the verifier
recognizes an error, we can rectify it by either retrieving new knowledge or
generating new text. Further, we use an ensemble of the outputs from different
instructions with a single verifier to enhance the reliability of the
verification processes. We validate the effectiveness of the proposed
verification steps on multiple question answering benchmarks, whose results
show that the proposed verifier effectively identifies retrieval and generation
errors, allowing LMs to provide more factually correct outputs. Our code is
available at https://github.com/JinheonBaek/KALMV. | [
"Jinheon Baek",
"Soyeong Jeong",
"Minki Kang",
"Jong C. Park",
"Sung Ju Hwang"
] | 2023-10-19 15:40:00 | http://arxiv.org/abs/2310.12836v1 | http://arxiv.org/pdf/2310.12836v1 | 2310.12836v1 |
AgentTuning: Enabling Generalized Agent Abilities for LLMs | Open large language models (LLMs) with great performance in various tasks
have significantly advanced the development of LLMs. However, they are far
inferior to commercial models such as ChatGPT and GPT-4 when acting as agents
to tackle complex tasks in the real world. These agent tasks employ LLMs as the
central controller responsible for planning, memorization, and tool
utilization, necessitating both fine-grained prompting methods and robust LLMs
to achieve satisfactory performance. Though many prompting methods have been
proposed to complete particular agent tasks, there is a lack of research focusing
on improving the agent capabilities of LLMs themselves without compromising
their general abilities. In this work, we present AgentTuning, a simple and
general method to enhance the agent abilities of LLMs while maintaining their
general LLM capabilities. We construct AgentInstruct, a lightweight
instruction-tuning dataset containing high-quality interaction trajectories. We
employ a hybrid instruction-tuning strategy by combining AgentInstruct with
open-source instructions from general domains. AgentTuning is used to
instruction-tune the Llama 2 series, resulting in AgentLM. Our evaluations show
that AgentTuning enables LLMs' agent capabilities without compromising general
abilities. The AgentLM-70B is comparable to GPT-3.5-turbo on unseen agent
tasks, demonstrating generalized agent capabilities. We open source the
AgentInstruct and AgentLM-7B, 13B, and 70B models at
https://github.com/THUDM/AgentTuning, serving as open and powerful alternatives to
commercial LLMs for agent tasks. | [
"Aohan Zeng",
"Mingdao Liu",
"Rui Lu",
"Bowen Wang",
"Xiao Liu",
"Yuxiao Dong",
"Jie Tang"
] | 2023-10-19 15:19:53 | http://arxiv.org/abs/2310.12823v2 | http://arxiv.org/pdf/2310.12823v2 | 2310.12823v2 |
Generating collective counterfactual explanations in score-based classification via mathematical optimization | Due to the increasing use of Machine Learning models in high stakes decision
making settings, it has become increasingly important to have tools to
understand how models arrive at decisions. Assuming a trained Supervised
Classification model, explanations can be obtained via counterfactual analysis:
a counterfactual explanation of an instance indicates how this instance should
be minimally modified so that the perturbed instance is classified in the
desired class by the Machine Learning classification model. Most of the
Counterfactual Analysis literature focuses on the single-instance
single-counterfactual setting, in which the analysis is done for one single
instance to provide one single explanation. Taking a stakeholder's perspective,
in this paper we introduce the so-called collective counterfactual
explanations. By means of novel Mathematical Optimization models, we provide a
counterfactual explanation for each instance in a group of interest, so that
the total cost of the perturbations is minimized under some linking
constraints. Making the process of constructing counterfactuals collective
instead of individual enables us to detect the features that are critical to
the entire dataset to have the individuals classified in the desired class. Our
methodology allows for some instances to be treated individually, performing
the collective counterfactual analysis for a fraction of records of the group
of interest. This way, outliers are identified and handled appropriately. Under
some assumptions on the classifier and the space in which counterfactuals are
sought, finding collective counterfactuals is reduced to solving a convex
quadratic linearly constrained mixed integer optimization problem, which, for
datasets of moderate size, can be solved to optimality using existing solvers.
The performance of our approach is illustrated on real-world datasets,
demonstrating its usefulness. | [
"Emilio Carrizosa",
"Jasone Ramírez-Ayerbe",
"Dolores Romero Morales"
] | 2023-10-19 15:18:42 | http://arxiv.org/abs/2310.12822v1 | http://arxiv.org/pdf/2310.12822v1 | 2310.12822v1 |
Hybrid Search for Efficient Planning with Completeness Guarantees | Solving complex planning problems has been a long-standing challenge in
computer science. Learning-based subgoal search methods have shown promise in
tackling these problems, but they often suffer from a lack of completeness
guarantees, meaning that they may fail to find a solution even if one exists.
In this paper, we propose an efficient approach to augment a subgoal search
method to achieve completeness in discrete action spaces. Specifically, we
augment the high-level search with low-level actions to execute a multi-level
(hybrid) search, which we call complete subgoal search. This solution achieves
the best of both worlds: the practical efficiency of high-level search and the
completeness of low-level search. We apply the proposed search method to a
recently proposed subgoal search algorithm and evaluate the algorithm trained
on offline data on complex planning problems. We demonstrate that our complete
subgoal search not only guarantees completeness but can even improve
performance in terms of search expansions for instances that the high-level
could solve without low-level augmentations. Our approach makes it possible to
apply subgoal-level planning for systems where completeness is a critical
requirement. | [
"Kalle Kujanpää",
"Joni Pajarinen",
"Alexander Ilin"
] | 2023-10-19 15:16:43 | http://arxiv.org/abs/2310.12819v1 | http://arxiv.org/pdf/2310.12819v1 | 2310.12819v1 |
Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models | Parameter-shared pre-trained language models (PLMs) have emerged as a
successful approach in resource-constrained environments, enabling substantial
reductions in model storage and memory costs without significant performance
compromise. However, it is important to note that parameter sharing does not
alleviate computational burdens associated with inference, thus impeding its
practicality in situations characterized by stringent latency
requirements or limited computational resources. Building upon neural ordinary
differential equations (ODEs), we introduce a straightforward technique to
enhance the inference efficiency of parameter-shared PLMs. Additionally, we
propose a simple pre-training technique that leads to fully or partially shared
models capable of achieving even greater inference acceleration. The
experimental results demonstrate the effectiveness of our methods on both
autoregressive and autoencoding PLMs, providing novel insights into more
efficient utilization of parameter-shared models in resource-constrained
settings. | [
"Weize Chen",
"Xiaoyue Xu",
"Xu Han",
"Yankai Lin",
"Ruobing Xie",
"Zhiyuan Liu",
"Maosong Sun",
"Jie Zhou"
] | 2023-10-19 15:13:58 | http://arxiv.org/abs/2310.12818v1 | http://arxiv.org/pdf/2310.12818v1 | 2310.12818v1 |
2D-3D Interlaced Transformer for Point Cloud Segmentation with Scene-Level Supervision | We present a Multimodal Interlaced Transformer (MIT) that jointly considers
2D and 3D data for weakly supervised point cloud segmentation. Research studies
have shown that 2D and 3D features are complementary for point cloud
segmentation. However, existing methods require extra 2D annotations to achieve
2D-3D information fusion. Considering the high annotation cost of point clouds,
effective 2D and 3D feature fusion based on weakly supervised learning is in
great demand. To this end, we propose a transformer model with two encoders and
one decoder for weakly supervised point cloud segmentation using only
scene-level class tags. Specifically, the two encoders compute the
self-attended features for 3D point clouds and 2D multi-view images,
respectively. The decoder implements interlaced 2D-3D cross-attention and
carries out implicit 2D and 3D feature fusion. We alternately switch the roles
of queries and key-value pairs in the decoder layers. It turns out that the 2D
and 3D features are iteratively enriched by each other. Experiments show that
it performs favorably against existing weakly supervised point cloud
segmentation methods by a large margin on the S3DIS and ScanNet benchmarks. The
project page will be available at https://jimmy15923.github.io/mit_web/. | [
"Cheng-Kun Yang",
"Min-Hung Chen",
"Yung-Yu Chuang",
"Yen-Yu Lin"
] | 2023-10-19 15:12:44 | http://arxiv.org/abs/2310.12817v1 | http://arxiv.org/pdf/2310.12817v1 | 2310.12817v1 |
Prompt Injection Attacks and Defenses in LLM-Integrated Applications | Large Language Models (LLMs) are increasingly deployed as the backend for a
variety of real-world applications called LLM-Integrated Applications. Multiple
recent works showed that LLM-Integrated Applications are vulnerable to prompt
injection attacks, in which an attacker injects malicious instruction/data into
the input of those applications such that they produce results as the attacker
desires. However, existing works are limited to case studies. As a result, the
literature lacks a systematic understanding of prompt injection attacks and
their defenses. We aim to bridge the gap in this work. In particular, we
propose a general framework to formalize prompt injection attacks. Existing
attacks, which are discussed in research papers and blog posts, are special
cases in our framework. Our framework enables us to design a new attack by
combining existing attacks. Moreover, we also propose a framework to
systematize defenses against prompt injection attacks. Using our frameworks, we
conduct a systematic evaluation on prompt injection attacks and their defenses
with 10 LLMs and 7 tasks. We hope our frameworks can inspire future research in
this field. Our code is available at
https://github.com/liu00222/Open-Prompt-Injection. | [
"Yupei Liu",
"Yuqi Jia",
"Runpeng Geng",
"Jinyuan Jia",
"Neil Zhenqiang Gong"
] | 2023-10-19 15:12:09 | http://arxiv.org/abs/2310.12815v1 | http://arxiv.org/pdf/2310.12815v1 | 2310.12815v1 |
Hierarchical Forecasting at Scale | Existing hierarchical forecasting techniques scale poorly when the number of
time series increases. We propose to learn a coherent forecast for millions of
time series with a single bottom-level forecast model by using a sparse loss
function that directly optimizes the hierarchical product and/or temporal
structure. The benefit of our sparse hierarchical loss function is that it
provides practitioners with a method for producing bottom-level forecasts that are
coherent to any chosen cross-sectional or temporal hierarchy. In addition,
removing the need for a post-processing step as required in traditional
hierarchical forecasting techniques reduces the computational cost of the
prediction phase in the forecasting pipeline. On the public M5 dataset, our
sparse hierarchical loss function performs up to 10% (RMSE) better compared to
the baseline loss function. We implement our sparse hierarchical loss function
within an existing forecasting model at bol, a large European e-commerce
platform, resulting in an improved forecasting performance of 2% at the product
level. Finally, we found an increase in forecasting performance of about 5-10%
when evaluating the forecasting performance across the cross-sectional
hierarchies that we defined. These results demonstrate the usefulness of our
sparse hierarchical loss applied to a production forecasting system at a major
e-commerce platform. | [
"Olivier Sprangers",
"Wander Wadman",
"Sebastian Schelter",
"Maarten de Rijke"
] | 2023-10-19 15:06:31 | http://arxiv.org/abs/2310.12809v1 | http://arxiv.org/pdf/2310.12809v1 | 2310.12809v1 |
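A hedged sketch of a loss of the kind the hierarchical-forecasting abstract describes: evaluate the error of bottom-level forecasts after aggregating them through a sparse summing matrix covering every node of the hierarchy. The squared-error form and lack of per-level normalization are illustrative assumptions; the paper's exact sparse loss may differ.

```python
import torch

def hierarchical_loss(y_hat_bottom, y_bottom, S):
    """Squared-error loss evaluated at all aggregation levels at once.
    S is a sparse (n_total x n_bottom) summing matrix mapping bottom-level series
    to every node of the cross-sectional/temporal hierarchy."""
    agg_hat = torch.sparse.mm(S, y_hat_bottom)   # aggregate the forecasts
    agg_true = torch.sparse.mm(S, y_bottom)      # aggregate the targets
    return ((agg_hat - agg_true) ** 2).mean()

# Toy hierarchy: 2 bottom series plus their total (3 x 2 summing matrix).
S = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]).to_sparse()
y_hat = torch.randn(2, 4, requires_grad=True)    # 2 series, 4 time steps
y = torch.randn(2, 4)
hierarchical_loss(y_hat, y, S).backward()
```

Because the bottom-level forecasts are trained against all aggregated targets simultaneously, no reconciliation post-processing step is needed at prediction time.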
Model Merging by Uncertainty-Based Gradient Matching | Models trained on different datasets can be merged by a weighted-averaging of
their parameters, but why does it work and when can it fail? Here, we connect
the inaccuracy of weighted-averaging to mismatches in the gradients and propose
a new uncertainty-based scheme to improve the performance by reducing the
mismatch. The connection also reveals implicit assumptions in other schemes
such as averaging, task arithmetic, and Fisher-weighted averaging. Our new
method gives consistent improvements for large language models and vision
transformers, both in terms of performance and robustness to hyperparameters. | [
"Nico Daheim",
"Thomas Möllenhoff",
"Edoardo Maria Ponti",
"Iryna Gurevych",
"Mohammad Emtiyaz Khan"
] | 2023-10-19 15:02:45 | http://arxiv.org/abs/2310.12808v1 | http://arxiv.org/pdf/2310.12808v1 | 2310.12808v1 |
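For the model-merging entry, a minimal sketch of the classical Fisher-weighted parameter averaging that the abstract lists as an existing scheme; the paper's uncertainty-based gradient-matching correction is not shown, and the toy tensors below are hypothetical.

```python
import torch

def fisher_weighted_merge(params_a, params_b, fisher_a, fisher_b, eps=1e-8):
    """Fisher-weighted averaging of two models' parameters (illustrative only)."""
    merged = {}
    for name in params_a:
        w_a, w_b = fisher_a[name], fisher_b[name]
        # Parameters with higher Fisher information (lower uncertainty) get more weight.
        merged[name] = (w_a * params_a[name] + w_b * params_b[name]) / (w_a + w_b + eps)
    return merged

# toy example with a single parameter tensor per "model"
pa = {"w": torch.tensor([1.0, 2.0])}
pb = {"w": torch.tensor([3.0, 0.0])}
fa = {"w": torch.tensor([4.0, 1.0])}
fb = {"w": torch.tensor([1.0, 1.0])}
print(fisher_weighted_merge(pa, pb, fa, fb))
```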
DCSI -- An improved measure of cluster separability based on separation and connectedness | Whether class labels in a given data set correspond to meaningful clusters is
crucial for the evaluation of clustering algorithms using real-world data sets.
This property can be quantified by separability measures. A review of the
existing literature shows that neither classification-based complexity measures
nor cluster validity indices (CVIs) adequately incorporate the central aspects
of separability for density-based clustering: between-class separation and
within-class connectedness. A newly developed measure (density cluster
separability index, DCSI) aims to quantify these two characteristics and can
also be used as a CVI. Extensive experiments on synthetic data indicate that
DCSI correlates strongly with the performance of DBSCAN measured via the
adjusted rand index (ARI) but lacks robustness when it comes to multi-class
data sets with overlapping classes that are ill-suited for density-based hard
clustering. Detailed evaluation on frequently used real-world data sets shows
that DCSI can correctly identify touching or overlapping classes that do not
form meaningful clusters. | [
"Jana Gauss",
"Fabian Scheipl",
"Moritz Herrmann"
] | 2023-10-19 15:01:57 | http://arxiv.org/abs/2310.12806v1 | http://arxiv.org/pdf/2310.12806v1 | 2310.12806v1 |
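The two ingredients DCSI is built on, between-class separation and within-class connectedness, can be illustrated with a toy score; the ratio below is our own simplified stand-in, not the published DCSI definition.

```python
import numpy as np
from scipy.spatial.distance import cdist

def toy_separability(X, y):
    """Toy separation/connectedness ratio (illustration only, not DCSI)."""
    classes = np.unique(y)
    # separation: minimum distance between points of different classes
    sep = min(cdist(X[y == a], X[y == b]).min()
              for i, a in enumerate(classes) for b in classes[i + 1:])
    # connectedness: worst within-class nearest-neighbour distance
    conn = 0.0
    for c in classes:
        D = cdist(X[y == c], X[y == c])
        np.fill_diagonal(D, np.inf)
        conn = max(conn, D.min(axis=1).max())
    return sep / conn

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(toy_separability(X, y))   # well-separated clusters give a large value
```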
Detection and Evaluation of bias-inducing Features in Machine learning | The cause-to-effect analysis can help us decompose all the likely causes of a
problem, such as an undesirable business situation or unintended harm to the
individual(s). This implies that we can identify how the problems are
inherited, rank the causes to help prioritize fixes, simplify a complex problem
and visualize them. In the context of machine learning (ML), one can use
cause-to-effect analysis to understand the reason for the biased behavior of
the system. For example, we can examine the root causes of biases by checking
each feature for a potential cause of bias in the model. To approach this, one
can apply small changes to a given feature or a pair of features in the data,
following some guidelines and observing how it impacts the decision made by the
model (i.e., model prediction). Therefore, we can use cause-to-effect analysis
to identify the potential bias-inducing features, even when these features are
originally unknown. This is important since most current methods require a
pre-identification of sensitive features for bias assessment and can actually
miss other relevant bias-inducing features, which is why systematic
identification of such features is necessary. Moreover, it often occurs that to
achieve an equitable outcome, one has to take into account sensitive features
in the model decision. Therefore, it should be up to the domain experts to
decide based on their knowledge of the context of a decision whether bias
induced by specific features is acceptable or not. In this study, we propose an
approach for systematically identifying all bias-inducing features of a model
to help support the decision-making of domain experts. We evaluated our
technique using four well-known datasets to showcase how our contribution can
help spearhead the standard procedure when developing, testing, maintaining,
and deploying fair/equitable machine learning systems. | [
"Moses Openja",
"Gabriel Laberge",
"Foutse Khomh"
] | 2023-10-19 15:01:16 | http://arxiv.org/abs/2310.12805v1 | http://arxiv.org/pdf/2310.12805v1 | 2310.12805v1 |
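The cause-to-effect probing idea from the bias-detection entry (change a feature, observe how the model's predictions shift) can be illustrated with a simple permutation perturbation; the synthetic data, model and perturbation scheme below are hypothetical and not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

baseline = model.predict_proba(X)[:, 1]
for j in range(X.shape[1]):
    X_pert = X.copy()
    X_pert[:, j] = rng.permutation(X_pert[:, j])  # break the feature's association
    shifted = model.predict_proba(X_pert)[:, 1]
    # features whose perturbation moves predictions a lot are candidate bias inducers
    print(f"feature {j}: mean |prediction change| = {np.abs(shifted - baseline).mean():.3f}")
```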
Differentiable Vertex Fitting for Jet Flavour Tagging | We propose a differentiable vertex fitting algorithm that can be used for
secondary vertex fitting, and that can be seamlessly integrated into neural
networks for jet flavour tagging. Vertex fitting is formulated as an
optimization problem where gradients of the optimized solution vertex are
defined through implicit differentiation and can be passed to upstream or
downstream neural network components for network training. More broadly, this
is an application of differentiable programming to integrate physics knowledge
into neural network models in high energy physics. We demonstrate how
differentiable secondary vertex fitting can be integrated into larger
transformer-based models for flavour tagging and improve heavy flavour jet
classification. | [
"Rachel E. C. Smith",
"Inês Ochoa",
"Rúben Inácio",
"Jonathan Shoemaker",
"Michael Kagan"
] | 2023-10-19 15:01:05 | http://arxiv.org/abs/2310.12804v1 | http://arxiv.org/pdf/2310.12804v1 | 2310.12804v1 |
Causal-structure Driven Augmentations for Text OOD Generalization | The reliance of text classifiers on spurious correlations can lead to poor
generalization at deployment, raising concerns about their use in
safety-critical domains such as healthcare. In this work, we propose to use
counterfactual data augmentation, guided by knowledge of the causal structure
of the data, to simulate interventions on spurious features and to learn more
robust text classifiers. We show that this strategy is appropriate in
prediction problems where the label is spuriously correlated with an attribute.
Under the assumptions of such problems, we discuss the favorable sample
complexity of counterfactual data augmentation, compared to importance
re-weighting. Pragmatically, we match examples using auxiliary data, based on
diff-in-diff methodology, and use a large language model (LLM) to represent a
conditional probability of text. Through extensive experimentation on learning
caregiver-invariant predictors of clinical diagnoses from medical narratives
and on semi-synthetic data, we demonstrate that our method for simulating
interventions improves out-of-distribution (OOD) accuracy compared to baseline
invariant learning algorithms. | [
"Amir Feder",
"Yoav Wald",
"Claudia Shi",
"Suchi Saria",
"David Blei"
] | 2023-10-19 14:59:25 | http://arxiv.org/abs/2310.12803v1 | http://arxiv.org/pdf/2310.12803v1 | 2310.12803v1 |
An effective theory of collective deep learning | Unraveling the emergence of collective learning in systems of coupled
artificial neural networks is an endeavor with broader implications for
physics, machine learning, neuroscience and society. Here we introduce a
minimal model that condenses several recent decentralized algorithms by
considering a competition between two terms: the local learning dynamics in the
parameters of each neural network unit, and a diffusive coupling among units
that tends to homogenize the parameters of the ensemble. We derive the
coarse-grained behavior of our model via an effective theory for linear
networks that we show is analogous to a deformed Ginzburg-Landau model with
quenched disorder. This framework predicts (depth-dependent)
disorder-order-disorder phase transitions in the parameters' solutions that
reveal the onset of a collective learning phase, along with a depth-induced
delay of the critical point and a robust shape of the microscopic learning
path. We validate our theory in realistic ensembles of coupled nonlinear
networks trained in the MNIST dataset under privacy constraints. Interestingly,
experiments confirm that individual networks -- trained only with private data
-- can fully generalize to unseen data classes when the collective learning
phase emerges. Our work elucidates the physics of collective learning and
contributes to the mechanistic interpretability of deep learning in
decentralized settings. | [
"Lluís Arola-Fernández",
"Lucas Lacasa"
] | 2023-10-19 14:58:20 | http://arxiv.org/abs/2310.12802v1 | http://arxiv.org/pdf/2310.12802v1 | 2310.12802v1 |
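A minimal numerical sketch of the competition described in the collective deep learning entry: each unit takes a local gradient step on its own loss, plus a diffusive coupling that homogenizes parameters across the ensemble. The quadratic losses and coupling constant are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, dim = 8, 2
targets = rng.normal(size=(n_units, dim))      # each unit's private optimum
theta = rng.normal(size=(n_units, dim))        # parameters of each unit
eta, coupling = 0.1, 0.3

for step in range(200):
    grads = theta - targets                    # gradient of 0.5 * ||theta - target||^2
    mean_theta = theta.mean(axis=0, keepdims=True)
    # local learning term + diffusive term pulling units toward the ensemble mean
    theta = theta - eta * grads + coupling * (mean_theta - theta)

print("spread of parameters after coupling:", np.std(theta, axis=0))
```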
Exploring Graph Neural Networks for Indian Legal Judgment Prediction | The burdensome impact of a skewed judges-to-cases ratio on the judicial
system manifests in an overwhelming backlog of pending cases alongside an
ongoing influx of new ones. To tackle this issue and expedite the judicial
process, the proposition of an automated system capable of suggesting case
outcomes based on factual evidence and precedent from past cases gains
significance. This research paper centres on developing a graph neural
network-based model to address the Legal Judgment Prediction (LJP) problem,
recognizing the intrinsic graph structure of judicial cases and making it a
binary node classification problem. We explored various embeddings as model
features, while nodes such as time nodes and judicial acts were added and
pruned to evaluate the model's performance. The study also considers the
ethical dimension of fairness in these predictions, examining gender and
name biases. A link prediction task is also conducted to assess the model's
proficiency in anticipating connections between two specified nodes. By
harnessing the capabilities of graph neural networks and incorporating fairness
analyses, this research aims to contribute insights towards streamlining the
adjudication process, enhancing judicial efficiency, and fostering a more
equitable legal landscape, ultimately alleviating the strain imposed by
mounting case backlogs. Our best-performing model with XLNet pre-trained
embeddings as its features gives a macro F1 score of 75% for the LJP task. For
link prediction, the same set of features performs best, giving an ROC of more
than 80%. | [
"Mann Khatri",
"Mirza Yusuf",
"Yaman Kumar",
"Rajiv Ratn Shah",
"Ponnurangam Kumaraguru"
] | 2023-10-19 14:55:51 | http://arxiv.org/abs/2310.12800v1 | http://arxiv.org/pdf/2310.12800v1 | 2310.12800v1 |
OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift | Existing works have made great progress in improving adversarial robustness,
but typically test their method only on data from the same distribution as the
training data, i.e. in-distribution (ID) testing. As a result, it is unclear
how such robustness generalizes under input distribution shifts, i.e.
out-of-distribution (OOD) testing. This is a concerning omission as such
distribution shifts are unavoidable when methods are deployed in the wild. To
address this issue we propose a benchmark named OODRobustBench to
comprehensively assess OOD adversarial robustness using 23 dataset-wise shifts
(i.e. naturalistic shifts in input distribution) and 6 threat-wise shifts
(i.e., unforeseen adversarial threat models). OODRobustBench is used to assess
706 robust models using 60.7K adversarial evaluations. This large-scale
analysis shows that: 1) adversarial robustness suffers from a severe OOD
generalization issue; 2) ID robustness correlates strongly with OOD robustness,
in a positive linear way, under many distribution shifts. The latter enables
the prediction of OOD robustness from ID robustness. Based on this, we are able
to predict the upper limit of OOD robustness for existing robust training
schemes. The results suggest that achieving OOD robustness requires designing
novel methods beyond the conventional ones. Last, we discover that extra data,
data augmentation, advanced model architectures and particular regularization
approaches can improve OOD robustness. Notably, the discovered training
schemes, compared to the baseline, exhibit dramatically higher robustness under
threat shift while keeping high ID robustness, demonstrating new promising
solutions for robustness against both multi-attack and unforeseen attacks. | [
"Lin Li",
"Yifei Wang",
"Chawin Sitawarin",
"Michael Spratling"
] | 2023-10-19 14:50:46 | http://arxiv.org/abs/2310.12793v1 | http://arxiv.org/pdf/2310.12793v1 | 2310.12793v1 |
Agri-GNN: A Novel Genotypic-Topological Graph Neural Network Framework Built on GraphSAGE for Optimized Yield Prediction | Agriculture, as the cornerstone of human civilization, constantly seeks to
integrate technology for enhanced productivity and sustainability. This paper
introduces $\textit{Agri-GNN}$, a novel Genotypic-Topological Graph Neural
Network Framework tailored to capture the intricate spatial and genotypic
interactions of crops, paving the way for optimized predictions of harvest
yields. $\textit{Agri-GNN}$ constructs a Graph $\mathcal{G}$ that considers
farming plots as nodes, and then methodically constructs edges between nodes
based on spatial and genotypic similarity, allowing for the aggregation of node
information through a genotypic-topological filter. Graph Neural Networks
(GNN), by design, consider the relationships between data points, enabling them
to efficiently model the interconnected agricultural ecosystem. By harnessing
the power of GNNs, $\textit{Agri-GNN}$ encapsulates both local and global
information from plants, considering their inherent connections based on
spatial proximity and shared genotypes, allowing stronger predictions to be
made than with traditional Machine Learning architectures. $\textit{Agri-GNN}$ is
built from the GraphSAGE architecture, because of its optimal calibration with
large graphs, like those of farming plots and breeding experiments.
$\textit{Agri-GNN}$ experiments, conducted on a comprehensive dataset of
vegetation indices, time, genotype information, and location data, demonstrate
that $\textit{Agri-GNN}$ achieves an $R^2 = .876$ in yield predictions for
farming fields in Iowa. The results show significant improvement over the
baselines and other work in the field. $\textit{Agri-GNN}$ represents a
blueprint for using advanced graph-based neural architectures to predict crop
yield, providing significant improvements over baselines in the field. | [
"Aditya Gupta",
"Asheesh Singh"
] | 2023-10-19 14:49:35 | http://arxiv.org/abs/2310.13037v1 | http://arxiv.org/pdf/2310.13037v1 | 2310.13037v1 |
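The Agri-GNN entry describes building edges between farm-plot nodes from spatial and genotypic similarity; a hypothetical sketch of such edge construction is below. The thresholds, features and similarity measures are our own illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(6, 2))        # toy plot locations
genotype = rng.integers(0, 2, size=(6, 10))      # toy binary genotype markers

edges = []
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        spatial = np.linalg.norm(coords[i] - coords[j])
        geno_sim = (genotype[i] == genotype[j]).mean()
        # connect plots that are either close in space or genotypically similar
        if spatial < 40 or geno_sim > 0.7:
            edges.append((i, j))
print(edges)
```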
A Theoretical Approach to Characterize the Accuracy-Fairness Trade-off Pareto Frontier | While the accuracy-fairness trade-off has been frequently observed in the
literature of fair machine learning, rigorous theoretical analyses have been
scarce. To demystify this long-standing challenge, this work seeks to develop a
theoretical framework by characterizing the shape of the accuracy-fairness
trade-off Pareto frontier (FairFrontier), determined by a set of all optimal
Pareto classifiers that no other classifiers can dominate. Specifically, we
first demonstrate the existence of the trade-off in real-world scenarios and
then propose four potential categories to characterize the important properties
of the accuracy-fairness Pareto frontier. For each category, we identify the
necessary conditions that lead to corresponding trade-offs. Experimental
results on synthetic data suggest insightful findings of the proposed
framework: (1) When sensitive attributes can be fully interpreted by
non-sensitive attributes, FairFrontier is mostly continuous. (2) Accuracy can
suffer a \textit{sharp} decline when over-pursuing fairness. (3) The trade-off can
be eliminated via a two-step streamlined approach. The proposed research enables an
in-depth understanding of the accuracy-fairness trade-off, pushing current fair
machine-learning research to a new frontier. | [
"Hua Tang",
"Lu Cheng",
"Ninghao Liu",
"Mengnan Du"
] | 2023-10-19 14:35:26 | http://arxiv.org/abs/2310.12785v1 | http://arxiv.org/pdf/2310.12785v1 | 2310.12785v1 |
Conditional Density Estimations from Privacy-Protected Data | Many modern statistical analysis and machine learning applications require
training models on sensitive user data. Differential privacy provides a formal
guarantee that individual-level information about users does not leak. In this
framework, randomized algorithms inject calibrated noise into the confidential
data, resulting in privacy-protected datasets or queries. However, restricting
access to only the privatized data during statistical analysis makes it
computationally challenging to perform valid inferences on parameters
underlying the confidential data. In this work, we propose simulation-based
inference methods from privacy-protected datasets. Specifically, we use neural
conditional density estimators as a flexible family of distributions to
approximate the posterior distribution of model parameters given the observed
private query results. We illustrate our methods on discrete time-series data
under an infectious disease model and on ordinary linear regression models.
Illustrating the privacy-utility trade-off, our experiments and analysis
demonstrate the necessity and feasibility of designing valid statistical
inference procedures to correct for biases introduced by the privacy-protection
mechanisms. | [
"Yifei Xiong",
"Nianqiao P. Ju",
"Sanguo Zhang"
] | 2023-10-19 14:34:17 | http://arxiv.org/abs/2310.12781v2 | http://arxiv.org/pdf/2310.12781v2 | 2310.12781v2 |
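The privatized queries referred to in the conditional-density-estimation entry are typically produced by standard mechanisms such as the Laplace mechanism; a minimal sketch is below. The paper's simulation-based inference step on top of such outputs is not shown.

```python
import numpy as np

def laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0, rng=None):
    """Release a count under epsilon-differential privacy via Laplace noise."""
    rng = rng if rng is not None else np.random.default_rng()
    return true_count + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# smaller epsilon means stronger privacy and noisier released counts
print(laplace_mechanism(42, epsilon=0.5, rng=rng))
```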
Label-Aware Automatic Verbalizer for Few-Shot Text Classification | Prompt-based learning has shown its effectiveness in few-shot text
classification. One important factor in its success is a verbalizer, which
translates output from a language model into a predicted class. Notably, the
simplest and widely acknowledged verbalizer employs manual labels to represent
the classes. However, manual selection does not guarantee the optimality of the
selected words when conditioned on the chosen language model. Therefore, we
propose Label-Aware Automatic Verbalizer (LAAV), effectively augmenting the
manual labels to achieve better few-shot classification results. Specifically,
we use the manual labels along with the conjunction "and" to induce the model
to generate more effective words for the verbalizer. The experimental results
on five datasets across five languages demonstrate that LAAV significantly
outperforms existing verbalizers. Furthermore, our analysis reveals that LAAV
suggests more relevant words compared to similar approaches, especially in
mid-to-low resource languages. | [
"Thanakorn Thaminkaew",
"Piyawat Lertvittayakumjorn",
"Peerapon Vateekul"
] | 2023-10-19 14:30:07 | http://arxiv.org/abs/2310.12778v1 | http://arxiv.org/pdf/2310.12778v1 | 2310.12778v1 |
Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning | Prompt-based learning has been an effective paradigm for large pretrained
language models (LLM), enabling few-shot or even zero-shot learning. Black-box
prompt search has received growing interest recently for its distinctive
properties of gradient-free optimization, proven particularly useful and
powerful for model-as-a-service usage. However, the discrete nature and the
complexity of combinatorial optimization hinder the efficiency of modern
black-box approaches. Despite extensive research on search algorithms, the
crucial aspect of search space design and optimization has been largely
overlooked. In this paper, we first conduct a sensitivity analysis by prompting
LLM, revealing that only a small number of tokens exert a disproportionate
amount of influence on LLM predictions. Leveraging this insight, we propose the
Clustering and Pruning for Efficient Black-box Prompt Search (ClaPS), a simple
black-box search method that first clusters and prunes the search space to
focus exclusively on influential prompt tokens. By employing even simple search
methods within the pruned search space, ClaPS achieves state-of-the-art
performance across various tasks and LLMs, surpassing the performance of
complex approaches while significantly reducing search costs. Our findings
underscore the critical role of search space design and optimization in
enhancing both the usefulness and the efficiency of black-box prompt-based
learning. | [
"Han Zhou",
"Xingchen Wan",
"Ivan Vulić",
"Anna Korhonen"
] | 2023-10-19 14:25:06 | http://arxiv.org/abs/2310.12774v1 | http://arxiv.org/pdf/2310.12774v1 | 2310.12774v1 |
Safe RLHF: Safe Reinforcement Learning from Human Feedback | With the development of large language models (LLMs), striking a balance
between the performance and safety of AI systems has never been more critical.
However, the inherent tension between the objectives of helpfulness and
harmlessness presents a significant challenge during LLM training. To address
this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe
RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly
decouples human preferences regarding helpfulness and harmlessness, effectively
avoiding the crowdworkers' confusion about the tension and allowing us to train
separate reward and cost models. We formalize the safety concern of LLMs as an
optimization task of maximizing the reward function while satisfying specified
cost constraints. Leveraging the Lagrangian method to solve this constrained
problem, Safe RLHF dynamically adjusts the balance between the two objectives
during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we
demonstrate a superior ability to mitigate harmful responses while enhancing
model performance compared to existing value-aligned algorithms.
Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with
collected human preferences, significantly improving its helpfulness and
harmlessness according to human evaluations. | [
"Josef Dai",
"Xuehai Pan",
"Ruiyang Sun",
"Jiaming Ji",
"Xinbo Xu",
"Mickel Liu",
"Yizhou Wang",
"Yaodong Yang"
] | 2023-10-19 14:22:03 | http://arxiv.org/abs/2310.12773v1 | http://arxiv.org/pdf/2310.12773v1 | 2310.12773v1 |
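The Lagrangian balancing that the Safe RLHF entry describes (maximize reward subject to a cost constraint, with the multiplier adjusted during training) can be sketched on a toy scalar problem; the reward and cost functions below are stand-ins, not the Safe RLHF objective.

```python
theta, lam = 0.0, 0.0
cost_limit, lr, lr_lam = 0.5, 0.05, 0.05

def reward(t): return -(t - 2.0) ** 2        # toy reward, peaks at theta = 2
def cost(t): return t ** 2                   # toy cost, grows away from 0

for _ in range(500):
    # gradient ascent on the Lagrangian L = reward - lambda * (cost - limit)
    grad_theta = -2 * (theta - 2.0) - lam * 2 * theta
    theta += lr * grad_theta
    # dual ascent: raise lambda while the constraint is violated, keep it >= 0
    lam = max(0.0, lam + lr_lam * (cost(theta) - cost_limit))

print(f"theta={theta:.3f} lambda={lam:.3f} reward={reward(theta):.3f} cost={cost(theta):.3f}")
```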
SemantIC: Semantic Interference Cancellation Towards 6G Wireless Communications | This letter proposes a novel anti-interference technique, semantic
interference cancellation (SemantIC), for enhancing information quality towards
the sixth-generation (6G) wireless networks. SemantIC only requires the
receiver to concatenate the channel decoder with a semantic auto-encoder. This
constructs a turbo loop which iteratively and alternately eliminates noise in
the signal domain and the semantic domain. From the viewpoint of network
information theory, the neural network of the semantic auto-encoder stores side
information by training, and provides side information in iterative decoding,
as an implementation of the Wyner-Ziv theorem. Simulation results verify the
performance improvement by SemantIC without extra channel resource cost. | [
"Wensheng Lin",
"Yuna Yan",
"Lixin Li",
"Zhu Han",
"Tad Matsumoto"
] | 2023-10-19 14:13:12 | http://arxiv.org/abs/2310.12768v1 | http://arxiv.org/pdf/2310.12768v1 | 2310.12768v1 |
Transformer-based Entity Legal Form Classification | We propose the application of Transformer-based language models for
classifying entity legal forms from raw legal entity names. Specifically, we
employ various BERT variants and compare their performance against multiple
traditional baselines. Our evaluation encompasses a substantial subset of
freely available Legal Entity Identifier (LEI) data, comprising over 1.1
million legal entities from 30 different legal jurisdictions. The ground truth
labels for classification per jurisdiction are taken from the Entity Legal Form
(ELF) code standard (ISO 20275). Our findings demonstrate that pre-trained BERT
variants outperform traditional text classification approaches in terms of F1
score, while also performing comparably well in the Macro F1 Score. Moreover,
the validity of our proposal is supported by the outcome of third-party expert
reviews conducted in ten selected jurisdictions. This study highlights the
significant potential of Transformer-based models in advancing data
standardization and data integration. The presented approaches can greatly
benefit financial institutions, corporations, governments and other
organizations in assessing business relationships, understanding risk exposure,
and promoting effective governance. | [
"Alexander Arimond",
"Mauro Molteni",
"Dominik Jany",
"Zornitsa Manolova",
"Damian Borth",
"Andreas G. F. Hoepner"
] | 2023-10-19 14:11:43 | http://arxiv.org/abs/2310.12766v1 | http://arxiv.org/pdf/2310.12766v1 | 2310.12766v1 |
Energy-Based Models For Speech Synthesis | Recently there has been a lot of interest in non-autoregressive (non-AR)
models for speech synthesis, such as FastSpeech 2 and diffusion models. Unlike
AR models, these models do not have autoregressive dependencies among outputs
which makes inference efficient. This paper expands the range of available
non-AR models with another member called energy-based models (EBMs). The paper
describes how noise contrastive estimation, which relies on the comparison
between positive and negative samples, can be used to train EBMs. It proposes a
number of strategies for generating effective negative samples, including using
high-performing AR models. It also describes how sampling from EBMs can be
performed using Langevin Markov Chain Monte-Carlo (MCMC). The use of Langevin
MCMC makes it possible to draw connections between EBMs and currently popular diffusion
models. Experiments on LJSpeech dataset show that the proposed approach offers
improvements over Tacotron 2. | [
"Wanli Sun",
"Zehai Tu",
"Anton Ragni"
] | 2023-10-19 14:10:09 | http://arxiv.org/abs/2310.12765v1 | http://arxiv.org/pdf/2310.12765v1 | 2310.12765v1 |
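Sampling from an EBM with Langevin MCMC, as mentioned in the energy-based speech synthesis entry, can be sketched with the unadjusted Langevin update; the 1-D Gaussian target below is only a stand-in for a learned speech EBM.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.01, n_steps=2000, rng=None):
    """Unadjusted Langevin dynamics: x <- x + step * grad log p(x) + noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

# Toy target: standard normal, whose score is grad log p(x) = -x.
rng = np.random.default_rng(0)
samples = np.array([langevin_sample(lambda x: -x, [0.0], rng=rng) for _ in range(200)])
print("empirical mean:", samples.mean(), "empirical std:", samples.std())
```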
Discretize Relaxed Solution of Spectral Clustering via a Non-Heuristic Algorithm | Spectral clustering and its extensions usually consist of two steps: (1)
constructing a graph and computing the relaxed solution; (2) discretizing
relaxed solutions. Although the former has been extensively investigated, the
discretization techniques are mainly heuristic methods, e.g., k-means, spectral
rotation. Unfortunately, the goal of the existing methods is not to find a
discrete solution that minimizes the original objective. In other words, the
primary drawback is the neglect of the original objective when computing the
discrete solution. Inspired by the first-order optimization algorithms, we
propose to develop a first-order term to bridge the original problem and
discretization algorithm, which is the first non-heuristic to the best of our
knowledge. Since the non-heuristic method is aware of the original graph cut
problem, the final discrete solution is more reliable and achieves the
preferable loss value. We also theoretically show that the continuous optimum
is beneficial to discretization algorithms, even though simply finding its closest
discrete solution (an existing heuristic approach) is also unreliable.
Sufficient experiments significantly show the superiority of our method. | [
"Hongyuan Zhang",
"Xuelong Li"
] | 2023-10-19 13:57:38 | http://arxiv.org/abs/2310.12752v1 | http://arxiv.org/pdf/2310.12752v1 | 2310.12752v1 |
TabuLa: Harnessing Language Models for Tabular Data Synthesis | Given the ubiquitous use of tabular data in industries and the growing
concerns in data privacy and security, tabular data synthesis emerges as a
critical research area. The recent state-of-the-art methods show that large
language models (LLMs) can be adopted to generate realistic tabular data. As
LLMs pre-process tabular data as full text, they have the advantage of avoiding
the curse of dimensionality associated with one-hot encoding high-dimensional
data. However, their long training time and limited re-usability on new tasks
prevent them from replacing exiting tabular generative models. In this paper,
we propose Tabula, a tabular data synthesizer based on the language model
structure. Through Tabula, we demonstrate the inherent limitation of employing
pre-trained language models designed for natural language processing (NLP) in
the context of tabular data synthesis. Our investigation delves into the
development of a dedicated foundational model tailored specifically for tabular
data synthesis. Additionally, we propose a token sequence compression strategy
to significantly reduce training time while preserving the quality of synthetic
data. Extensive experiments on six datasets demonstrate that using a language
model structure without loading the well-trained model weights yields a better
starting model for tabular data synthesis. Moreover, the Tabula model,
previously trained on other tabular data, serves as an excellent foundation
model for new tabular data synthesis tasks. Additionally, the token sequence
compression method substantially reduces the model's training time. Results
show that Tabula reduces training time per epoch by 46.2% on average compared to
the current LLM-based state-of-the-art algorithm and consistently achieves even
higher synthetic data utility. | [
"Zilong Zhao",
"Robert Birke",
"Lydia Chen"
] | 2023-10-19 13:50:56 | http://arxiv.org/abs/2310.12746v1 | http://arxiv.org/pdf/2310.12746v1 | 2310.12746v1 |
Canonical normalizing flows for manifold learning | Manifold learning flows are a class of generative modelling techniques that
assume a low-dimensional manifold description of the data. The embedding of
such manifold into the high-dimensional space of the data is achieved via
learnable invertible transformations. Therefore, once the manifold is properly
aligned via a reconstruction loss, the probability density is tractable on the
manifold and maximum likelihood can be used to optimize the network parameters.
Naturally, the lower-dimensional representation of the data requires an
injective-mapping. Recent approaches were able to enforce that density aligns
with the modelled manifold, while efficiently calculating the density
volume-change term when embedding to the higher-dimensional space. However,
unless the injective-mapping is analytically predefined, the learned manifold
is not necessarily an efficient representation of the data. Namely, the latent
dimensions of such models frequently learn an entangled intrinsic basis with
degenerate information being stored in each dimension. Alternatively, if a
locally orthogonal and/or sparse basis is to be learned, here coined canonical
intrinsic basis, it can serve in learning a more compact latent space
representation. Towards this end, we propose a canonical manifold learning flow
method, where a novel optimization objective enforces the transformation matrix
to have few prominent and orthogonal basis functions. Canonical manifold flow
yields a more efficient use of the latent space, automatically generating fewer
prominent and distinct dimensions to represent data, and consequently a better
approximation of target distributions than other manifold flow methods in most
experiments we conducted, resulting in lower FID scores. | [
"Kyriakos Flouris",
"Ender Konukoglu"
] | 2023-10-19 13:48:05 | http://arxiv.org/abs/2310.12743v1 | http://arxiv.org/pdf/2310.12743v1 | 2310.12743v1 |
LASER: Linear Compression in Wireless Distributed Optimization | Data-parallel SGD is the de facto algorithm for distributed optimization,
especially for large scale machine learning. Despite its merits, communication
bottleneck is one of its persistent issues. Most compression schemes to
alleviate this either assume noiseless communication links, or fail to achieve
good performance on practical tasks. In this paper, we close this gap and
introduce LASER: LineAr CompreSsion in WirEless DistRibuted Optimization. LASER
capitalizes on the inherent low-rank structure of gradients and transmits them
efficiently over the noisy channels. Whilst enjoying theoretical guarantees
similar to those of the classical SGD, LASER shows consistent gains over
baselines on a variety of practical benchmarks. In particular, it outperforms
the state-of-the-art compression schemes on challenging computer vision and GPT
language modeling tasks. On the latter, we obtain $50$-$64 \%$ improvement in
perplexity over our baselines for noisy channels. | [
"Ashok Vardhan Makkuva",
"Marco Bondaschi",
"Thijs Vogels",
"Martin Jaggi",
"Hyeji Kim",
"Michael C. Gastpar"
] | 2023-10-19 13:18:57 | http://arxiv.org/abs/2310.13033v1 | http://arxiv.org/pdf/2310.13033v1 | 2310.13033v1 |
Learn from the Past: A Proxy based Adversarial Defense Framework to Boost Robustness | In light of the vulnerability of deep learning models to adversarial samples
and the ensuing security issues, a range of methods, including Adversarial
Training (AT) as a prominent representative, aimed at enhancing model
robustness against various adversarial attacks, have seen rapid development.
However, existing methods essentially assist the current state of target model
to defend against parameter-oriented adversarial attacks with explicit or
implicit computation burdens, which also suffers from unstable convergence
behavior due to inconsistency of optimization trajectories. Diverging from
previous work, this paper reconsiders the update rule of target model and
corresponding deficiency to defend based on its current state. By introducing
the historical state of the target model as a proxy, which is endowed with much
prior information for defense, we formulate a two-stage update rule, resulting
in a general adversarial defense framework, which we refer to as `LAST' ({\bf
L}earn from the P{\bf ast}). Besides, we devise a Self Distillation (SD) based
defense objective to constrain the update process of the proxy model without
the introduction of larger teacher models. Experimentally, we demonstrate
consistent and significant performance enhancements by refining a series of
single-step and multi-step AT methods (e.g., up to $\bf 9.2\%$ and $\bf 20.5\%$
improvement of Robust Accuracy (RA) on CIFAR10 and CIFAR100 datasets,
respectively) across various datasets, backbones and attack modalities, and
validate its ability to enhance training stability and ameliorate catastrophic
overfitting issues meanwhile. | [
"Yaohua Liu",
"Jiaxin Gao",
"Zhu Liu",
"Xianghao Jiao",
"Xin Fan",
"Risheng Liu"
] | 2023-10-19 13:13:41 | http://arxiv.org/abs/2310.12713v1 | http://arxiv.org/pdf/2310.12713v1 | 2310.12713v1 |
Representation Learning via Consistent Assignment of Views over Random Partitions | We present Consistent Assignment of Views over Random Partitions (CARP), a
self-supervised clustering method for representation learning of visual
features. CARP learns prototypes in an end-to-end online fashion using gradient
descent without additional non-differentiable modules to solve the cluster
assignment problem. CARP optimizes a new pretext task based on random
partitions of prototypes that regularizes the model and enforces consistency
between views' assignments. Additionally, our method improves training
stability and prevents collapsed solutions in joint-embedding training. Through
an extensive evaluation, we demonstrate that CARP's representations are
suitable for learning downstream tasks. We evaluate CARP's representations
capabilities in 17 datasets across many standard protocols, including linear
evaluation, few-shot classification, k-NN, k-means, image retrieval, and copy
detection. We compare CARP performance to 11 existing self-supervised methods.
We extensively ablate our method and demonstrate that our proposed random
partition pretext task improves the quality of the learned representations by
devising multiple random classification tasks. In transfer learning tasks, CARP
achieves the best average performance compared to many SSL methods trained for a
longer time. | [
"Thalles Silva",
"Adín Ramírez Rivera"
] | 2023-10-19 12:39:59 | http://arxiv.org/abs/2310.12692v1 | http://arxiv.org/pdf/2310.12692v1 | 2310.12692v1 |
Neurosymbolic Grounding for Compositional World Models | We introduce Cosmos, a framework for object-centric world modeling that is
designed for compositional generalization (CG), i.e., high performance on
unseen input scenes obtained through the composition of known visual "atoms."
The central insight behind Cosmos is the use of a novel form of neurosymbolic
grounding. Specifically, the framework introduces two new tools: (i)
neurosymbolic scene encodings, which represent each entity in a scene using a
real vector computed using a neural encoder, as well as a vector of composable
symbols describing attributes of the entity, and (ii) a neurosymbolic attention
mechanism that binds these entities to learned rules of interaction. Cosmos is
end-to-end differentiable; also, unlike traditional neurosymbolic methods that
require representations to be manually mapped to symbols, it computes an
entity's symbolic attributes using vision-language foundation models. Through
an evaluation that considers two different forms of CG on an established
blocks-pushing domain, we show that the framework establishes a new
state-of-the-art for CG in world modeling. | [
"Atharva Sehgal",
"Arya Grayeli",
"Jennifer J. Sun",
"Swarat Chaudhuri"
] | 2023-10-19 12:38:09 | http://arxiv.org/abs/2310.12690v1 | http://arxiv.org/pdf/2310.12690v1 | 2310.12690v1 |
Compression of Recurrent Neural Networks using Matrix Factorization | Compressing neural networks is a key step when deploying models for real-time
or embedded applications. Factorizing the model's matrices using low-rank
approximations is a promising method for achieving compression. While it is
possible to set the rank before training, this approach is neither flexible nor
optimal. In this work, we propose a post-training rank-selection method called
Rank-Tuning that selects a different rank for each matrix. Used in combination
with training adaptations, our method achieves high compression rates with no
or little performance degradation. Our numerical experiments on signal
processing tasks show that we can compress recurrent neural networks up to 14x
with at most 1.4% relative performance reduction. | [
"Lucas Maison",
"Hélion du Mas des Bourboux",
"Thomas Courtat"
] | 2023-10-19 12:35:30 | http://arxiv.org/abs/2310.12688v1 | http://arxiv.org/pdf/2310.12688v1 | 2310.12688v1 |
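The low-rank factorization step underlying the RNN-compression entry can be sketched with a post-training truncated SVD; the paper's Rank-Tuning procedure for picking the rank of each matrix is not shown, and the random matrix below is purely illustrative.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Replace a weight matrix by two thin factors via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # (out, rank)
    B = Vt[:rank, :]                    # (rank, in)
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
A, B = low_rank_factorize(W, rank=32)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
print("parameter ratio:", (A.size + B.size) / W.size)   # compression factor
```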
On the Optimization and Generalization of Multi-head Attention | The training and generalization dynamics of the Transformer's core mechanism,
namely the Attention mechanism, remain under-explored. Besides, existing
analyses primarily focus on single-head attention. Inspired by the demonstrated
benefits of overparameterization when training fully-connected networks, we
investigate the potential optimization and generalization advantages of using
multiple attention heads. Towards this goal, we derive convergence and
generalization guarantees for gradient-descent training of a single-layer
multi-head self-attention model, under a suitable realizability condition on
the data. We then establish primitive conditions on the initialization that
ensure realizability holds. Finally, we demonstrate that these conditions are
satisfied for a simple tokenized-mixture model. We expect the analysis can be
extended to various data-model and architecture variations. | [
"Puneesh Deora",
"Rouzbeh Ghaderi",
"Hossein Taheri",
"Christos Thrampoulidis"
] | 2023-10-19 12:18:24 | http://arxiv.org/abs/2310.12680v1 | http://arxiv.org/pdf/2310.12680v1 | 2310.12680v1 |
Quality-Diversity through AI Feedback | In many text-generation problems, users may prefer not only a single
response, but a diverse range of high-quality outputs from which to choose.
Quality-diversity (QD) search algorithms aim at such outcomes, by continually
improving and diversifying a population of candidates. However, the
applicability of QD to qualitative domains, like creative writing, has been
limited by the difficulty of algorithmically specifying measures of quality and
diversity. Interestingly, recent developments in language models (LMs) have
enabled guiding search through AI feedback, wherein LMs are prompted in natural
language to evaluate qualitative aspects of text. Leveraging this development,
we introduce Quality-Diversity through AI Feedback (QDAIF), wherein an
evolutionary algorithm applies LMs to both generate variation and evaluate the
quality and diversity of candidate text. When assessed on creative writing
domains, QDAIF covers more of a specified search space with high-quality
samples than do non-QD controls. Further, human evaluation of QDAIF-generated
creative texts validates reasonable agreement between AI and human evaluation.
Our results thus highlight the potential of AI feedback to guide open-ended
search for creative and original solutions, providing a recipe that seemingly
generalizes to many domains and modalities. In this way, QDAIF is a step
towards AI systems that can independently search, diversify, evaluate, and
improve, which are among the core skills underlying human society's capacity
for innovation. | [
"Herbie Bradley",
"Andrew Dai",
"Hannah Teufel",
"Jenny Zhang",
"Koen Oostermeijer",
"Marco Bellagente",
"Jeff Clune",
"Kenneth Stanley",
"Grégory Schott",
"Joel Lehman"
] | 2023-10-19 12:13:58 | http://arxiv.org/abs/2310.13032v1 | http://arxiv.org/pdf/2310.13032v1 | 2310.13032v1 |
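The quality-diversity loop that QDAIF builds on (keep an archive over a diversity descriptor, mutate an elite, and retain the mutant if it beats the incumbent in its cell) can be sketched on a toy numeric domain; the descriptor and the "quality" function below stand in for the LM-generated variation and AI feedback used in the paper.

```python
import random

random.seed(0)

def quality(x):                 # stand-in for an "AI feedback" quality score
    return -abs(x - 3.0)

def descriptor(x):              # which archive cell (diversity niche) x falls in
    return min(9, max(0, int(x)))

archive = {}
seeds = [random.uniform(0, 10)]
for _ in range(500):
    parent = random.choice(list(archive.values()) + seeds)
    child = parent + random.gauss(0, 0.5)      # stand-in for generated variation
    cell = descriptor(child)
    if cell not in archive or quality(child) > quality(archive[cell]):
        archive[cell] = child                  # keep the best candidate per niche

print({cell: round(x, 2) for cell, x in sorted(archive.items())})
```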
Neural networks for insurance pricing with frequency and severity data: a benchmark study from data preprocessing to technical tariff | Insurers usually turn to generalized linear models for modelling claim
frequency and severity data. Due to their success in other fields, machine
learning techniques are gaining popularity within the actuarial toolbox. Our
paper contributes to the literature on frequency-severity insurance pricing
with machine learning via deep learning structures. We present a benchmark
study on four insurance data sets with frequency and severity targets in the
presence of multiple types of input features. We compare in detail the
performance of: a generalized linear model on binned input data, a
gradient-boosted tree model, a feed-forward neural network (FFNN), and the
combined actuarial neural network (CANN). Our CANNs combine a baseline
prediction established with a GLM and GBM, respectively, with a neural network
correction. We explain the data preprocessing steps with specific focus on the
multiple types of input features typically present in tabular insurance data
sets, such as postal codes, numeric and categorical covariates. Autoencoders
are used to embed the categorical variables into the neural network and we
explore their potential advantages in a frequency-severity setting. Finally, we
construct global surrogate models for the neural nets' frequency and severity
models. These surrogates enable the translation of the essential insights
captured by the FFNNs or CANNs to GLMs. As such, a technical tariff table
results that can easily be deployed in practice. | [
"Freek Holvoet",
"Katrien Antonio",
"Roel Henckaerts"
] | 2023-10-19 12:00:33 | http://arxiv.org/abs/2310.12671v1 | http://arxiv.org/pdf/2310.12671v1 | 2310.12671v1 |