title | abstract | authors | published | url | pdf_url | arxiv_id
---|---|---|---|---|---|---|
PDFTriage: Question Answering over Long, Structured Documents | Large Language Models (LLMs) struggle with document question answering
(QA) when the document does not fit in their limited context window. To
overcome this issue, most existing works focus on retrieving the relevant
context from the document and representing it as plain
text. However, documents such as PDFs, web pages, and presentations are
naturally structured with different pages, tables, sections, and so on.
Representing such structured documents as plain text is incongruous with the
user's mental model of these documents with rich structure. When a system has
to query the document for context, this incongruity is brought to the fore, and
seemingly trivial questions can trip up the QA system. To bridge this
fundamental gap in handling structured documents, we propose an approach called
PDFTriage that enables models to retrieve the context based on either structure
or content. Our experiments demonstrate the effectiveness of the proposed
PDFTriage-augmented models across several classes of questions where existing
retrieval-augmented LLMs fail. To facilitate further research on this
fundamental problem, we release our benchmark dataset consisting of 900+
human-generated questions over 80 structured documents from 10 different
categories of question types for document QA. | [
"Jon Saad-Falcon",
"Joe Barrow",
"Alexa Siu",
"Ani Nenkova",
"Ryan A. Rossi",
"Franck Dernoncourt"
] | 2023-09-16 04:29:05 | http://arxiv.org/abs/2309.08872v1 | http://arxiv.org/pdf/2309.08872v1 | 2309.08872v1 |
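The abstract above describes letting the model retrieve context by document structure rather than flat text. Below is a minimal, hypothetical sketch of that idea as a function-dispatch pattern; the `Document` class and function names are illustrative assumptions, not PDFTriage's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    pages: list[str] = field(default_factory=list)          # page_index -> text
    sections: dict[str, str] = field(default_factory=dict)  # title -> text
    tables: dict[str, str] = field(default_factory=dict)    # table_id -> text

def fetch_pages(doc: Document, start: int, end: int) -> str:
    return "\n".join(doc.pages[start:end + 1])

def fetch_section(doc: Document, title: str) -> str:
    return doc.sections.get(title, "")

def fetch_table(doc: Document, table_id: str) -> str:
    return doc.tables.get(table_id, "")

# The LLM would emit one of these calls (e.g., as JSON) to "triage" context;
# the retrieved text is then placed in the prompt for answering.
TRIAGE_FUNCTIONS = {"fetch_pages": fetch_pages,
                    "fetch_section": fetch_section,
                    "fetch_table": fetch_table}
```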
Rethinking Learning Rate Tuning in the Era of Large Language Models | Large Language Models (LLMs) represent the recent success of deep learning in
achieving remarkable human-like predictive performance. It has become a
mainstream strategy to leverage fine-tuning to adapt LLMs for various
real-world applications due to the prohibitive expenses associated with LLM
training. The learning rate is one of the most important hyperparameters in LLM
fine-tuning with direct impacts on both fine-tuning efficiency and fine-tuned
LLM quality. Existing learning rate policies are primarily designed for
training traditional deep neural networks (DNNs), which may not work well for
LLM fine-tuning. We reassess the research challenges and opportunities of
learning rate tuning in the coming era of Large Language Models. This paper
makes three original contributions. First, we revisit existing learning rate
policies to analyze the critical challenges of learning rate tuning in the era
of LLMs. Second, we present LRBench++ to benchmark learning rate policies and
facilitate learning rate tuning for both traditional DNNs and LLMs. Third, our
experimental analysis with LRBench++ demonstrates the key differences between
LLM fine-tuning and traditional DNN training and validates our analysis. | [
"Hongpeng Jin",
"Wenqi Wei",
"Xuyu Wang",
"Wenbin Zhang",
"Yanzhao Wu"
] | 2023-09-16 03:37:00 | http://arxiv.org/abs/2309.08859v1 | http://arxiv.org/pdf/2309.08859v1 | 2309.08859v1 |
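For context on the learning rate policies the abstract refers to, here are minimal sketches of four standard schedules; these are common textbook policies, not LRBench++ itself.

```python
import math

def constant_lr(step, base_lr=1e-4):
    return base_lr

def step_decay(step, base_lr=1e-4, drop=0.1, every=1000):
    # multiply the rate by `drop` every `every` steps
    return base_lr * (drop ** (step // every))

def cosine_decay(step, total_steps, base_lr=1e-4, min_lr=1e-6):
    t = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

def linear_warmup(step, warmup_steps, base_lr=1e-4):
    return base_lr * min(1.0, step / max(1, warmup_steps))
```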
Intelligent machines work in unstructured environments by differential neural computing | Expecting intelligent machines to work efficiently in the real world requires
a new method for understanding unstructured information in unknown
environments with good accuracy, scalability, and generalization, as humans
do. Here, a memristive neural computing based method for differential
processing and learning of perceptual signals in intelligent machines is
presented. By extracting the main features of environmental information and
applying associated encoded stimuli to memristors, we obtain a human-like
ability to process unstructured environmental information, such as
amplification (>720%) and adaptation (<50%) of mechanical stimuli. The method
also exhibits good
scalability and generalization, validated in two typical applications of
intelligent machines: object grasping and autonomous driving. In the former, a
robot hand experimentally realizes safe and stable grasping, through learning
unknown object features (e.g., sharp corner and smooth surface) with a single
memristor in 1 ms. In the latter, the decision-making information of 10
unstructured environments in autonomous driving (e.g., overtaking cars,
pedestrians) is accurately (94%) extracted with a 40x25 memristor array. By
mimicking the intrinsic nature of human low-level perception mechanisms in
electronic memristive neural circuits, the proposed method is adaptable to
diverse sensing technologies, helping intelligent machines to generate smart
high-level decisions in the real world. | [
"Shengbo Wang",
"Shuo Gao",
"Chenyu Tang",
"Cong Li",
"Shurui Wang",
"Jiaqi Wang",
"Hubin Zhao",
"Guohua Hu",
"Arokia Nathan",
"Ravinder Dahiya",
"Luigi Occhipinti"
] | 2023-09-16 01:45:13 | http://arxiv.org/abs/2309.08835v2 | http://arxiv.org/pdf/2309.08835v2 | 2309.08835v2 |
Distributionally Robust Post-hoc Classifiers under Prior Shifts | The generalization ability of machine learning models degrades significantly
when the test distribution shifts away from the training distribution. We
investigate the problem of training models that are robust to shifts caused by
changes in the distribution of class-priors or group-priors. The presence of
skewed training priors can often lead to the models overfitting to spurious
features. Unlike existing methods, which optimize for either the worst or the
average performance over classes or groups, our work is motivated by the need
for finer control over the robustness properties of the model. We present an
extremely lightweight post-hoc approach that performs scaling adjustments to
predictions from a pre-trained model, with the goal of minimizing a
distributionally robust loss around a chosen target distribution. These
adjustments are computed by solving a constrained optimization problem on a
validation set and applied to the model during test time. Our constrained
optimization objective is inspired by a natural notion of robustness to
controlled distribution shifts. Our method comes with provable guarantees and
empirically makes a strong case for distributionally robust post-hoc classifiers.
An empirical implementation is available at
https://github.com/weijiaheng/Drops. | [
"Jiaheng Wei",
"Harikrishna Narasimhan",
"Ehsan Amid",
"Wen-Sheng Chu",
"Yang Liu",
"Abhishek Kumar"
] | 2023-09-16 00:54:57 | http://arxiv.org/abs/2309.08825v1 | http://arxiv.org/pdf/2309.08825v1 | 2309.08825v1 |
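As background for the scaling adjustments described above, here is a minimal sketch of post-hoc prior correction: rescaling a pre-trained model's class probabilities toward a chosen target prior. The paper's constrained DRO objective is more involved; this shows only the basic scaling step, with placeholder priors.

```python
import numpy as np

def adjust_priors(probs, train_prior, target_prior):
    """probs: (n, k) predicted class probabilities under the training prior."""
    w = np.asarray(target_prior) / np.asarray(train_prior)  # per-class scaling
    scaled = probs * w                                      # reweight classes
    return scaled / scaled.sum(axis=1, keepdims=True)       # renormalize

probs = np.array([[0.7, 0.3], [0.2, 0.8]])
print(adjust_priors(probs, train_prior=[0.9, 0.1], target_prior=[0.5, 0.5]))
```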
SHAPNN: Shapley Value Regularized Tabular Neural Network | We present SHAPNN, a novel deep tabular data modeling architecture designed
for supervised learning. Our approach leverages Shapley values, a
well-established technique for explaining black-box models. Our neural network
is trained using standard backward propagation optimization methods, and is
regularized with real-time estimated Shapley values. Our method offers several
advantages, including the ability to provide valid explanations with no
computational overhead for data instances and datasets. Additionally,
prediction with explanation serves as a regularizer, which improves the model's
performance. Moreover, the regularized prediction enhances the model's
capability for continual learning. We evaluate our method on various publicly
available datasets and compare it with state-of-the-art deep neural network
models, demonstrating the superior performance of SHAPNN in terms of AUROC,
transparency, and robustness to streaming data. | [
"Qisen Cheng",
"Shuhui Qu",
"Janghwan Lee"
] | 2023-09-15 22:45:05 | http://arxiv.org/abs/2309.08799v1 | http://arxiv.org/pdf/2309.08799v1 | 2309.08799v1 |
Fin-Fact: A Benchmark Dataset for Multimodal Financial Fact Checking and Explanation Generation | Fact-checking in the financial domain is underexplored, and there is a
shortage of quality datasets in this domain. In this paper, we propose Fin-Fact, a
benchmark dataset for multimodal fact-checking within the financial domain.
Notably, it includes professional fact-checker annotations and justifications,
providing expertise and credibility. With its multimodal nature encompassing
both textual and visual content, Fin-Fact provides complementary information
sources to enhance factuality analysis. Its primary objective is combating
misinformation in finance, fostering transparency, and building trust in
financial reporting and news dissemination. By offering insightful
explanations, Fin-Fact empowers users, including domain experts and end-users,
to understand the reasoning behind fact-checking decisions, validating claim
credibility, and fostering trust in the fact-checking process. The Fin-Fact
dataset, along with our experimental code, is available at
https://github.com/IIT-DM/Fin-Fact/. | [
"Aman Rangapur",
"Haoran Wang",
"Kai Shu"
] | 2023-09-15 22:24:00 | http://arxiv.org/abs/2309.08793v1 | http://arxiv.org/pdf/2309.08793v1 | 2309.08793v1 |
BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-inspired Materials | The study of biological materials and bio-inspired materials science is well
established; however, surprisingly little knowledge has been systematically
translated to engineering solutions. To accelerate discovery and guide
insights, an open-source autoregressive transformer large language model,
BioinspiredLLM, is reported. The model was finetuned with a corpus of over a
thousand peer-reviewed articles in the field of structural biological and
bio-inspired materials and can be prompted to actively and interactively recall
information, assist with research tasks, and function as an engine for
creativity. The model has proven by example that it is not only able to
accurately recall information about biological materials when queried but also
formulate biomaterials questions and answers that can evaluate its own
performance. BioinspiredLLM also has been shown to develop sound hypotheses
regarding biological materials design and remarkably so for materials that have
never been explicitly studied before. Lastly, the model showed impressive
promise in collaborating with other generative artificial intelligence models
in a workflow that can reshape the traditional materials design process. This
collaborative generative artificial intelligence method can stimulate and
enhance bio-inspired materials design workflows. Biological materials science
sits at a critical intersection of multiple scientific fields, and models like
BioinspiredLLM help to connect knowledge domains. | [
"Rachel K. Luu",
"Markus J. Buehler"
] | 2023-09-15 22:12:44 | http://arxiv.org/abs/2309.08788v1 | http://arxiv.org/pdf/2309.08788v1 | 2309.08788v1 |
Beyond Labels: Leveraging Deep Learning and LLMs for Content Metadata | Content metadata plays a very important role in movie recommender systems as
it provides valuable information about various aspects of a movie such as
genre, cast, plot synopsis, box office summary, etc. Analyzing the metadata can
help understand the user preferences to generate personalized recommendations
and item cold starting. In this talk, we will focus on one particular type of
metadata - \textit{genre} labels. Genre labels associated with a movie or a TV
series help categorize a collection of titles into different themes and
setting audience expectations accordingly. We present some of the
challenges associated with using genre label information and propose a new way
of examining the genre information that we call the \textit{Genre Spectrum}.
The Genre Spectrum helps capture the various nuanced genres in a title and our
offline and online experiments corroborate the effectiveness of the approach.
Furthermore, we also talk about applications of LLMs in augmenting content
metadata which could eventually be used to achieve effective organization of
recommendations in a user's 2-D home-grid. | [
"Saurabh Agrawal",
"John Trenkle",
"Jaya Kawale"
] | 2023-09-15 22:11:29 | http://arxiv.org/abs/2309.08787v1 | http://arxiv.org/pdf/2309.08787v1 | 2309.08787v1 |
Electroencephalogram Sensor Data Compression Using An Asymmetrical Sparse Autoencoder With A Discrete Cosine Transform Layer | Electroencephalogram (EEG) data compression is necessary for wireless
recording applications to reduce the amount of data that needs to be
transmitted. In this paper, an asymmetrical sparse autoencoder with a discrete
cosine transform (DCT) layer is proposed to compress EEG signals. The encoder
module of the autoencoder has a combination of a fully connected linear layer
and the DCT layer to reduce redundant data using hard-thresholding
nonlinearity. Furthermore, the DCT layer includes trainable hard-thresholding
parameters and scaling layers to emphasize or de-emphasize individual DCT
coefficients. Finally, the one-by-one convolutional layer generates the latent
space. The sparsity penalty-based cost function is employed to keep the feature
map as sparse as possible in the latent space. The latent space data is
transmitted to the receiver. The decoder module of the autoencoder is designed
using the inverse DCT and two fully connected linear layers to improve the
accuracy of data reconstruction. In comparison to other state-of-the-art
methods, the proposed method significantly improves the average quality score
in various data compression experiments. | [
"Xin Zhu",
"Hongyi Pan",
"Shuaiang Rong",
"Ahmet Enis Cetin"
] | 2023-09-15 21:55:56 | http://arxiv.org/abs/2309.12201v1 | http://arxiv.org/pdf/2309.12201v1 | 2309.12201v1 |
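The encoder described above combines a linear layer, a DCT with trainable hard-thresholding, and per-coefficient scaling. The following is a hedged sketch of those ideas in PyTorch; the layer sizes, the threshold initialization, and the masking-based thresholding are assumptions, not the authors' exact design.

```python
import torch

def dct_matrix(n):
    k = torch.arange(n).unsqueeze(1).float()
    i = torch.arange(n).unsqueeze(0).float()
    m = torch.cos(torch.pi * (2 * i + 1) * k / (2 * n)) * (2.0 / n) ** 0.5
    m[0] /= 2 ** 0.5  # orthonormal DCT-II
    return m

class DCTThresholdEncoder(torch.nn.Module):
    def __init__(self, n):
        super().__init__()
        self.register_buffer("dct", dct_matrix(n))
        self.linear = torch.nn.Linear(n, n)
        self.scale = torch.nn.Parameter(torch.ones(n))       # per-coefficient gain
        self.thresh = torch.nn.Parameter(torch.full((n,), 0.1))

    def forward(self, x):                 # x: (batch, n) EEG window
        c = self.linear(x) @ self.dct.T   # fully connected layer, then DCT
        c = c * self.scale                # emphasize/de-emphasize coefficients
        mask = (c.abs() > self.thresh.abs()).float()
        return c * mask                   # hard-threshold small coefficients

enc = DCTThresholdEncoder(64)
print(enc(torch.randn(2, 64)).shape)      # torch.Size([2, 64])
```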
Projected Task-Specific Layers for Multi-Task Reinforcement Learning | Multi-task reinforcement learning could enable robots to scale across a wide
variety of manipulation tasks in homes and workplaces. However, generalizing
from one task to another and mitigating negative task interference still
remains a challenge. Addressing this challenge by successfully sharing
information across tasks will depend on how well the structure underlying the
tasks is captured. In this work, we introduce our new architecture, Projected
Task-Specific Layers (PTSL), that leverages a common policy with dense
task-specific corrections through task-specific layers to better express shared
and variable task information. We then show that our model outperforms the
state of the art on the MT10 and MT50 benchmarks of Meta-World consisting of 10
and 50 goal-conditioned tasks for a Sawyer arm. | [
"Josselin Somerville Roberts",
"Julia Di"
] | 2023-09-15 21:42:06 | http://arxiv.org/abs/2309.08776v1 | http://arxiv.org/pdf/2309.08776v1 | 2309.08776v1 |
Long-term Neurological Sequelae in Post-COVID-19 Patients: A Machine Learning Approach to Predict Outcomes | The COVID-19 pandemic has brought to light a concerning aspect of long-term
neurological complications in post-recovery patients. This study investigated
such neurological sequelae in a cohort of 500
post-COVID-19 patients, encompassing individuals with varying illness severity.
The primary aim was to predict outcomes using a machine learning approach based
on diverse clinical data and neuroimaging parameters. The results revealed that
68% of the post-COVID-19 patients reported experiencing neurological symptoms,
with fatigue, headache, and anosmia being the most common manifestations.
Moreover, 22% of the patients exhibited more severe neurological complications,
including encephalopathy and stroke. The application of machine learning models
showed promising results in predicting long-term neurological outcomes.
Notably, the Random Forest model achieved an accuracy of 85%, sensitivity of
80%, and specificity of 90% in identifying patients at risk of developing
neurological sequelae. These findings underscore the importance of continuous
monitoring and follow-up care for post-COVID-19 patients, particularly in
relation to potential neurological complications. The integration of machine
learning-based outcome prediction offers a valuable tool for early intervention
and personalized treatment strategies, aiming to improve patient care and
clinical decision-making. In conclusion, this study sheds light on the
prevalence of long-term neurological complications in post-COVID-19 patients
and demonstrates the potential of machine learning in predicting outcomes,
thereby contributing to enhanced patient management and better health outcomes.
Further research and larger studies are warranted to validate and refine these
predictive models and to gain deeper insights into the underlying mechanisms of
post-COVID-19 neurological sequelae. | [
"Hayder A. Albaqer",
"Kadhum J. Al-Jibouri",
"John Martin",
"Fadhil G. Al-Amran",
"Salman Rawaf",
"Maitham G. Yousif"
] | 2023-09-15 21:36:43 | http://arxiv.org/abs/2309.09993v1 | http://arxiv.org/pdf/2309.09993v1 | 2309.09993v1 |
Enhance audio generation controllability through representation similarity regularization | This paper presents an innovative approach to enhance control over audio
generation by emphasizing the alignment between audio and text representations
during model training. In the context of language model-based audio generation,
the model leverages input from both textual and audio token representations to
predict subsequent audio tokens. However, the current configuration lacks
explicit regularization to ensure the alignment between the chosen text
representation and the language model's predictions. Our proposal involves the
incorporation of audio and text representation regularization, particularly
during the classifier-free guidance (CFG) phase, where the text condition is
excluded from cross attention during language model training. The aim of this
proposed representation regularization is to minimize discrepancies in audio
and text similarity compared to other samples within the same training batch.
Experimental results on both music and audio generation tasks demonstrate that
our proposed methods lead to improvements in objective metrics for both audio
and music generation, as well as improvements in human perceptual evaluations
of audio generation. | [
"Yangyang Shi",
"Gael Le Lan",
"Varun Nagaraja",
"Zhaoheng Ni",
"Xinhao Mei",
"Ernie Chang",
"Forrest Iandola",
"Yang Liu",
"Vikas Chandra"
] | 2023-09-15 21:32:20 | http://arxiv.org/abs/2309.08773v1 | http://arxiv.org/pdf/2309.08773v1 | 2309.08773v1 |
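The regularizer described above pushes matched audio and text representations to be more similar than other samples in the same batch. Here is a minimal InfoNCE-style sketch of that idea; the paper's exact loss and its placement in the CFG phase may differ.

```python
import torch
import torch.nn.functional as F

def similarity_regularizer(audio_emb, text_emb, temperature=0.1):
    a = F.normalize(audio_emb, dim=-1)          # (batch, d)
    t = F.normalize(text_emb, dim=-1)           # (batch, d)
    logits = a @ t.T / temperature              # pairwise cosine similarities
    targets = torch.arange(a.size(0))           # matched pairs on the diagonal
    return F.cross_entropy(logits, targets)

loss = similarity_regularizer(torch.randn(8, 128), torch.randn(8, 128))
```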
Mining Patents with Large Language Models Demonstrates Congruence of Functional Labels and Chemical Structures | Predicting chemical function from structure is a major goal of the chemical
sciences, from the discovery and repurposing of novel drugs to the creation of
new materials. Recently, new machine learning algorithms are opening up the
possibility of general predictive models spanning many different chemical
functions. Here, we consider the challenge of applying large language models to
chemical patents in order to consolidate and leverage the information about
chemical functionality captured by these resources. Chemical patents contain
vast knowledge on chemical function, but their usefulness as a dataset has
historically been neglected due to the impracticality of extracting
high-quality functional labels. Using a scalable ChatGPT-assisted patent
summarization and word-embedding label cleaning pipeline, we derive a Chemical
Function (CheF) dataset, containing 100K molecules and their patent-derived
functional labels. The functional labels were validated to be of high quality,
allowing us to detect a strong relationship between functional label and
chemical structural spaces. Further, we find that the co-occurrence graph of
the functional labels contains a robust semantic structure, which allowed us in
turn to examine functional relatedness among the compounds. We then trained a
model on the CheF dataset, allowing us to assign new functional labels to
compounds. Using this model, we were able to retrodict approved Hepatitis C
antivirals, uncover an antiviral mechanism undisclosed in the patent, and
identify plausible serotonin-related drugs. The CheF dataset and associated
model offer a promising new approach to predicting chemical functionality. | [
"Clayton W. Kosonocky",
"Claus O. Wilke",
"Edward M. Marcotte",
"Andrew D. Ellington"
] | 2023-09-15 21:08:41 | http://arxiv.org/abs/2309.08765v1 | http://arxiv.org/pdf/2309.08765v1 | 2309.08765v1 |
Circular Clustering with Polar Coordinate Reconstruction | There is a growing interest in characterizing circular data found in
biological systems. Such data are wide ranging and varied, from signal phase in
neural recordings to nucleotide sequences in round genomes. Traditional
clustering algorithms are often inadequate due to their limited ability to
distinguish differences in the periodic component. Current clustering schemes
that work in a polar coordinate system have limitations, such as being only
angle-focused or lacking generality. To overcome these limitations, we propose
a new analysis framework that utilizes projections onto a cylindrical
coordinate system to better represent objects in a polar coordinate system.
Using the mathematical properties of circular data, we show our approach always
finds the correct clustering result within the reconstructed dataset, given
sufficient periodic repetitions of the data. Our approach is generally
applicable and adaptable and can be incorporated into most state-of-the-art
clustering algorithms. We demonstrate on synthetic and real data that our
method generates more appropriate and consistent clustering results compared to
standard methods. In summary, our proposed analysis framework overcomes the
limitations of existing polar coordinate-based clustering methods and provides
a more accurate and efficient way to cluster circular data. | [
"Xiaoxiao Sun",
"Paul Sajda"
] | 2023-09-15 20:56:01 | http://arxiv.org/abs/2309.08757v1 | http://arxiv.org/pdf/2309.08757v1 | 2309.08757v1 |
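As a simple illustration of why coordinate choice matters for circular data, the baseline below embeds each angle on the unit circle before clustering, removing the wrap-around discontinuity at 0/2π. This is only a toy demonstration of the general idea, not the paper's reconstruction method, and the synthetic data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# two angular clusters, one of which straddles the 0/2*pi boundary
angles = np.concatenate([rng.normal(0.0, 0.2, 100) % (2 * np.pi),
                         rng.normal(np.pi, 0.2, 100)])
features = np.column_stack([np.cos(angles), np.sin(angles)])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
```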
Diverse Neural Audio Embeddings -- Bringing Features back ! | With the advent of modern AI architectures, a shift has happened towards
end-to-end architectures. This pivot has led to neural architectures being
trained without domain-specific biases/knowledge, optimized according to the
task. In this paper, we learn audio embeddings via diverse, domain-specific
feature representations. For audio classification over hundreds of categories
of sound, we learn robust separate embeddings for diverse audio properties
such as pitch, timbre, and neural representation, alongside an end-to-end
architecture. We observe that handcrafted embeddings, e.g., pitch- and
timbre-based ones, cannot beat a fully end-to-end representation on their
own, yet combining them with the end-to-end embedding significantly improves
performance.
This work would pave the way to bring some domain expertise with end-to-end
models to learn robust, diverse representations, surpassing the performance of
just training end-to-end models. | [
"Prateek Verma"
] | 2023-09-15 20:27:47 | http://arxiv.org/abs/2309.08751v1 | http://arxiv.org/pdf/2309.08751v1 | 2309.08751v1 |
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits | Off-policy evaluation and learning are concerned with assessing a given
policy and learning an optimal policy from offline data without direct
interaction with the environment. Often, the environment in which the data are
collected differs from the environment in which the learned policy is applied.
To account for the effect of different environments during learning and
execution, distributionally robust optimization (DRO) methods have been
developed that compute worst-case bounds on the policy values assuming that the
distribution of the new environment lies within an uncertainty set. Typically,
this uncertainty set is defined based on the KL divergence around the empirical
distribution computed from the logging dataset. However, the KL uncertainty set
fails to encompass distributions with varying support and lacks awareness of
the geometry of the distribution support. As a result, KL approaches fall short
in addressing practical environment mismatches and lead to over-fitting to
worst-case scenarios. To overcome these limitations, we propose a novel DRO
approach that employs the Wasserstein distance instead. While Wasserstein DRO
is generally computationally more expensive compared to KL DRO, we present a
regularized method and a practical (biased) stochastic gradient descent method
to optimize the policy efficiently. We also provide a theoretical analysis of
the finite sample complexity and iteration complexity for our proposed method.
We further validate our approach using a public dataset that was recorded in a
randomized stroke trial. | [
"Yi Shen",
"Pan Xu",
"Michael M. Zavlanos"
] | 2023-09-15 20:21:46 | http://arxiv.org/abs/2309.08748v2 | http://arxiv.org/pdf/2309.08748v2 | 2309.08748v2 |
AlbNER: A Corpus for Named Entity Recognition in Albanian | Scarcity of resources such as annotated text corpora for under-resourced
languages like Albanian is a serious impediment in computational linguistics
and natural language processing research. This paper presents AlbNER, a corpus
of 900 sentences with labeled named entities, collected from Albanian Wikipedia
articles. Preliminary results with BERT and RoBERTa variants fine-tuned and
tested with AlbNER data indicate that model size has a slight impact on NER
performance, whereas language transfer has a significant one. AlbNER corpus and
these obtained results should serve as baselines for future experiments. | [
"Erion Çano"
] | 2023-09-15 20:03:19 | http://arxiv.org/abs/2309.08741v1 | http://arxiv.org/pdf/2309.08741v1 | 2309.08741v1 |
Concept explainability for plant diseases classification | Plant diseases remain a considerable threat to food security and agricultural
sustainability. Rapid and early identification of these diseases has become a
significant concern motivating several studies to rely on the increasing global
digitalization and the recent advances in computer vision based on deep
learning. In fact, plant disease classification based on deep convolutional
neural networks has shown impressive performance. However, these methods have
yet to be adopted globally due to concerns regarding their robustness,
transparency, and the lack of explainability compared with their human expert
counterparts. Methods such as saliency-based approaches associating the network
output to perturbations of the input pixels have been proposed to give insights
into these algorithms. Still, they are not easily comprehensible and not
intuitive for human users and are threatened by bias. In this work, we deploy a
method called Testing with Concept Activation Vectors (TCAV) that shifts the
focus from pixels to user-defined concepts. To the best of our knowledge, our
paper is the first to employ this method in the field of plant disease
classification. Important concepts such as color, texture and disease related
concepts were analyzed. The results suggest that concept-based explanation
methods can significantly benefit automated plant disease identification. | [
"Jihen Amara",
"Birgitta König-Ries",
"Sheeba Samuel"
] | 2023-09-15 19:57:50 | http://arxiv.org/abs/2309.08739v1 | http://arxiv.org/pdf/2309.08739v1 | 2309.08739v1 |
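TCAV, the method the abstract deploys, is well documented: a concept activation vector (CAV) is the normal of a linear classifier separating concept examples from random examples in a layer's activation space, and the TCAV score is the fraction of class inputs whose directional derivative along the CAV is positive. Below is a compact sketch with placeholder arrays standing in for real activations and gradients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tcav_score(concept_acts, random_acts, class_gradients):
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    cav /= np.linalg.norm(cav)
    # class_gradients: gradients of the class logit w.r.t. layer activations
    return float(np.mean(class_gradients @ cav > 0))

score = tcav_score(np.random.randn(50, 16) + 1.0,   # concept examples
                   np.random.randn(50, 16),          # random examples
                   np.random.randn(30, 16))          # per-input gradients
```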
Experimental Assessment of a Forward-Collision Warning System Fusing Deep Learning and Decentralized Radio Sensing | This paper presents the idea of an automatic forward-collision warning system
based on a decentralized radio sensing (RS) approach. In this framework, a
vehicle in receiving mode employs a continuous waveform (CW) transmitted by a
second vehicle as a probe signal to detect oncoming vehicles and warn the
driver of a potential forward collision. Such a CW can easily be incorporated
as a pilot signal within the data frame of current multicarrier vehicular
communication systems. Detection of oncoming vehicles is performed by a deep
learning (DL) module that analyzes the features of the Doppler signature
imprinted on the CW probe signal by a rapidly approaching vehicle. This
decentralized CW RS approach was assessed experimentally using data collected
by a series of field trials conducted on a two-lane high-speed highway.
Detection performance was evaluated for two different DL models: a long
short-term memory network and a convolutional neural network. The obtained
results demonstrate the feasibility of the envisioned forward-collision warning
system based on the fusion of DL and decentralized CW RS. | [
"Jorge D. Cardenas",
"Omar Contreras-Ponce",
"Carlos A. Gutierrez",
"Ruth Aguilar-Ponce",
"Francisco R. Castillo-Soria",
"Cesar A. Azurdia-Meza"
] | 2023-09-15 19:55:10 | http://arxiv.org/abs/2309.08737v1 | http://arxiv.org/pdf/2309.08737v1 | 2309.08737v1 |
Pointing the Way: Refining Radar-Lidar Localization Using Learned ICP Weights | This paper presents a novel deep-learning-based approach to improve
localizing radar measurements against lidar maps. Although the state of the art
for localization is matching lidar data to lidar maps, radar has been
considered as a promising alternative, as it is potentially more resilient
against adverse weather such as precipitation and heavy fog. To make use of
existing high-quality lidar maps, while maintaining performance in adverse
weather, matching radar data to lidar maps is of interest. However, owing in
part to the unique artefacts present in radar measurements, radar-lidar
localization has struggled to achieve comparable performance to lidar-lidar
systems, preventing it from being viable for autonomous driving. This work
builds on an ICP-based radar-lidar localization system by including a learned
preprocessing step that weights radar points based on high-level scan
information. Combining a proven analytical approach with a learned weight
reduces localization errors in radar-lidar ICP results run on real-world
autonomous driving data by up to 54.94% in translation and 68.39% in rotation,
while maintaining interpretability and robustness. | [
"Daniil Lisus",
"Johann Laconte",
"Keenan Burnett",
"Timothy D. Barfoot"
] | 2023-09-15 19:37:58 | http://arxiv.org/abs/2309.08731v1 | http://arxiv.org/pdf/2309.08731v1 | 2309.08731v1 |
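For background on the weighted ICP results above, here is a minimal weighted point-cloud alignment step (weighted Kabsch), the core update inside an ICP loop. The paper learns the per-point weights from radar scan features; in this sketch they are simply given.

```python
import numpy as np

def weighted_alignment(P, Q, w):
    """Find R, t minimizing sum_i w_i * ||R @ P[i] + t - Q[i]||^2.
    P, Q: (N, 3) matched points; w: (N,) nonnegative weights."""
    w = w / w.sum()
    p_bar, q_bar = w @ P, w @ Q                  # weighted centroids
    Pc, Qc = P - p_bar, Q - q_bar
    H = Pc.T @ (w[:, None] * Qc)                 # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```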
Clustered Multi-Agent Linear Bandits | In this paper, we address a particular instance of the multi-agent linear
stochastic bandit problem, called clustered multi-agent linear bandits. In this
setting, we propose a novel algorithm leveraging an efficient collaboration
between the agents in order to accelerate the overall optimization problem. In
this contribution, a network controller is responsible for estimating the
underlying cluster structure of the network and optimizing the experiences
sharing among agents within the same groups. We provide a theoretical analysis
for both the regret minimization problem and the clustering quality. Through
empirical evaluation against state-of-the-art algorithms on both synthetic and
real data, we demonstrate the effectiveness of our approach: our algorithm
significantly improves regret minimization while managing to recover the true
underlying cluster partitioning. | [
"Hamza Cherkaoui",
"Merwan Barlier",
"Igor Colin"
] | 2023-09-15 19:01:42 | http://arxiv.org/abs/2309.08710v1 | http://arxiv.org/pdf/2309.08710v1 | 2309.08710v1 |
Price of Safety in Linear Best Arm Identification | We introduce the safe best-arm identification framework with linear feedback,
where the agent is subject to some stage-wise safety constraint that linearly
depends on an unknown parameter vector. The agent must take actions in a
conservative way so as to ensure that the safety constraint is not violated
with high probability at each round. Ways of leveraging the linear structure
for ensuring safety have been studied for regret minimization but, to the
best of our knowledge, not for best-arm identification. We propose a gap-based
algorithm that achieves meaningful sample complexity while ensuring the
stage-wise safety. We show that we pay an extra term in the sample complexity
due to the forced exploration phase incurred by the additional safety
constraint. Experimental illustrations are provided to justify the design of
our algorithm. | [
"Xuedong Shang",
"Igor Colin",
"Merwan Barlier",
"Hamza Cherkaoui"
] | 2023-09-15 19:01:21 | http://arxiv.org/abs/2309.08709v1 | http://arxiv.org/pdf/2309.08709v1 | 2309.08709v1 |
Wasserstein Distributionally Robust Control Barrier Function using Conditional Value-at-Risk with Differentiable Convex Programming | Control Barrier functions (CBFs) have attracted extensive attention for
designing safe controllers for their deployment in real-world safety-critical
systems. However, the perception of the surrounding environment is often
subject to stochasticity and further distributional shift from the nominal one.
In this paper, we present the distributionally robust CBF (DR-CBF) to achieve
resilience under distributional shift while keeping the advantages of CBF, such
as computational efficiency and forward invariance.
To achieve this goal, we first propose a single-level convex reformulation to
estimate the conditional value at risk (CVaR) of the safety constraints under
distributional shift measured by a Wasserstein metric, which is by nature
tri-level programming. Moreover, to construct a control barrier condition to
enforce the forward invariance of the CVaR, the technique of differentiable
convex programming is applied to enable differentiation through the
optimization layer of CVaR estimation. We also provide an approximate variant
of DR-CBF for higher-order systems. Simulation results are presented to
validate the chance-constrained safety guarantee under the distributional shift
in both first and second-order systems. | [
"Alaa Eddine Chriat",
"Chuangchuang Sun"
] | 2023-09-15 18:45:09 | http://arxiv.org/abs/2309.08700v1 | http://arxiv.org/pdf/2309.08700v1 | 2309.08700v1 |
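For reference, the single-level convex treatment of CVaR rests on the standard Rockafellar-Uryasev variational form, shown below for a random loss $X$ at confidence level $\alpha$; this is textbook background, not the paper's full Wasserstein reformulation.

```latex
% Rockafellar--Uryasev variational form of CVaR, alpha in (0,1):
\mathrm{CVaR}_{\alpha}(X) \;=\; \min_{t \in \mathbb{R}} \;
  t \;+\; \frac{1}{1-\alpha}\,\mathbb{E}\big[(X - t)_{+}\big]
```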
Modelling Irregularly Sampled Time Series Without Imputation | Modelling irregularly-sampled time series (ISTS) is challenging because of
missing values. Most existing methods focus on handling ISTS by converting
irregularly sampled data into regularly sampled data via imputation. These
models assume an underlying missing mechanism leading to unwanted bias and
sub-optimal performance. We present SLAN (Switch LSTM Aggregate Network), which
utilizes a pack of LSTMs to model ISTS without imputation, eliminating the
assumption of any underlying process. It dynamically adapts its architecture on
the fly based on the measured sensors. SLAN exploits the irregularity
information to capture each sensor's local summary explicitly and maintains a
global summary state throughout the observational period. We demonstrate the
efficacy of SLAN on publicly available datasets, namely, MIMIC-III, Physionet
2012 and Physionet 2019. The code is available at
https://github.com/Rohit102497/SLAN. | [
"Rohit Agarwal",
"Aman Sinha",
"Dilip K. Prasad",
"Marianne Clausel",
"Alexander Horsch",
"Mathieu Constant",
"Xavier Coubez"
] | 2023-09-15 18:43:41 | http://arxiv.org/abs/2309.08698v1 | http://arxiv.org/pdf/2309.08698v1 | 2309.08698v1 |
Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents | Resolving the scope of a negation within a sentence is a challenging NLP
task. The complexity of legal texts and the lack of annotated in-domain
negation corpora pose challenges for state-of-the-art (SotA) models when
performing negation scope resolution on multilingual legal data. Our
experiments demonstrate that models pre-trained without legal data underperform
in the task of negation scope resolution. Our experiments, using language
models exclusively fine-tuned on domains like literary texts and medical data,
yield inferior results compared to the outcomes documented in prior
cross-domain experiments. We release a new set of annotated court decisions in
German, French, and Italian and use it to improve negation scope resolution in
both zero-shot and multilingual settings. We achieve token-level F1-scores of
up to 86.7% in our zero-shot cross-lingual experiments, where the models are
trained on two languages of our legal datasets and evaluated on the third. Our
multilingual experiments, where the models were trained on all available
negation data and evaluated on our legal datasets, resulted in F1-scores of up
to 91.1%. | [
"Ramona Christen",
"Anastassia Shaitarova",
"Matthias Stürmer",
"Joel Niklaus"
] | 2023-09-15 18:38:06 | http://arxiv.org/abs/2309.08695v1 | http://arxiv.org/pdf/2309.08695v1 | 2309.08695v1 |
Evaluating the Impact of Local Differential Privacy on Utility Loss via Influence Functions | How to properly set the privacy parameter in differential privacy (DP) has
been an open question in DP research since it was first proposed in 2006. In
this work, we demonstrate the ability of influence functions to offer insight
into how a specific privacy parameter value will affect a model's test loss in
the randomized response-based local DP setting. Our proposed method allows a
data curator to select the privacy parameter best aligned with their allowed
privacy-utility trade-off without requiring heavy computation such as extensive
model retraining and data privatization. We consider multiple common
randomization scenarios, such as performing randomized response over the
features, and/or over the labels, as well as the more complex case of applying
a class-dependent label noise correction method to offset the noise incurred by
randomization. Further, we provide a detailed discussion over the computational
complexity of our proposed approach inclusive of an empirical analysis. Through
empirical evaluations we show that for both binary and multi-class settings,
influence functions are able to approximate the true change in test loss that
occurs when randomized response is applied over features and/or labels with
small mean absolute error, especially in cases where noise correction methods
are applied. | [
"Alycia N. Carey",
"Minh-Hao Van",
"Xintao Wu"
] | 2023-09-15 18:08:24 | http://arxiv.org/abs/2309.08678v1 | http://arxiv.org/pdf/2309.08678v1 | 2309.08678v1 |
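The local-DP mechanism the abstract analyzes is randomized response. Here is a minimal sketch of the binary case: each label is reported truthfully with probability e^ε / (1 + e^ε), which satisfies ε-local differential privacy. The label array is a placeholder.

```python
import numpy as np

def randomized_response(labels, eps, rng=np.random.default_rng()):
    """labels: array of 0/1 values; eps: privacy parameter."""
    p_true = np.exp(eps) / (1.0 + np.exp(eps))   # probability of truthful report
    keep = rng.random(len(labels)) < p_true
    return np.where(keep, labels, 1 - labels)    # otherwise flip the label

labels = np.array([0, 1, 1, 0, 1])
print(randomized_response(labels, eps=1.0))
```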
Sparse Autoencoders Find Highly Interpretable Features in Language Models | One of the roadblocks to a better understanding of neural networks' internals
is \textit{polysemanticity}, where neurons appear to activate in multiple,
semantically distinct contexts. Polysemanticity prevents us from identifying
concise, human-understandable explanations for what neural networks are doing
internally. One hypothesised cause of polysemanticity is
\textit{superposition}, where neural networks represent more features than they
have neurons by assigning features to an overcomplete set of directions in
activation space, rather than to individual neurons. Here, we attempt to
identify those directions, using sparse autoencoders to reconstruct the
internal activations of a language model. These autoencoders learn sets of
sparsely activating features that are more interpretable and monosemantic than
directions identified by alternative approaches, where interpretability is
measured by automated methods. Moreover, we show that with our learned set of
features, we can pinpoint the features that are causally responsible for
counterfactual behaviour on the indirect object identification task
\citep{wang2022interpretability} to a finer degree than previous
decompositions. This work indicates that it is possible to resolve
superposition in language models using a scalable, unsupervised method. Our
method may serve as a foundation for future mechanistic interpretability work,
which we hope will enable greater model transparency and steerability. | [
"Hoagy Cunningham",
"Aidan Ewart",
"Logan Riggs",
"Robert Huben",
"Lee Sharkey"
] | 2023-09-15 17:56:55 | http://arxiv.org/abs/2309.08600v3 | http://arxiv.org/pdf/2309.08600v3 | 2309.08600v3 |
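A minimal sparse autoencoder of the kind described above reconstructs a model's internal activations through an overcomplete ReLU bottleneck with an L1 sparsity penalty. The sizes and the penalty weight below are illustrative, not the paper's settings.

```python
import torch

class SparseAutoencoder(torch.nn.Module):
    def __init__(self, d_model=512, d_features=2048):
        super().__init__()
        self.encoder = torch.nn.Linear(d_model, d_features)
        self.decoder = torch.nn.Linear(d_features, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))    # sparse feature activations
        return self.decoder(f), f

sae = SparseAutoencoder()
acts = torch.randn(32, 512)                # stand-in for LLM activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
```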
Attention-Only Transformers and Implementing MLPs with Attention Heads | The transformer architecture is widely used in machine learning models and
consists of two alternating sublayers: attention heads and MLPs. We prove that
an MLP neuron can be implemented by a masked attention head with internal
dimension 1 so long as the MLP's activation function comes from a restricted
class including SiLU and close approximations of ReLU and GeLU. This allows one
to convert an MLP-and-attention transformer into an attention-only transformer
at the cost of greatly increasing the number of attention heads. We also prove
that attention heads can perform the components of an MLP (linear
transformations and activation functions) separately. Finally, we prove that
attention heads can encode arbitrary masking patterns in their weight matrices
to within arbitrarily small error. | [
"Robert Huben",
"Valerie Morris"
] | 2023-09-15 17:47:45 | http://arxiv.org/abs/2309.08593v1 | http://arxiv.org/pdf/2309.08593v1 | 2309.08593v1 |
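A key identity behind constructions of this kind is easy to check numerically: a two-way softmax over scores (x, 0) puts weight sigmoid(x) on the first entry, so an attention head that attends between a token and a "null" position and reads value x reproduces SiLU(x) = x * sigmoid(x). The snippet below verifies the identity; it is a simplified illustration, not the paper's full construction.

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def attention_silu(x):
    w = np.exp([x, 0.0])
    w /= w.sum()            # softmax over the two positions
    return w[0] * x         # value x at position 0, value 0 at the null slot

for x in [-2.0, -0.5, 0.0, 1.3, 4.0]:
    assert np.isclose(silu(x), attention_silu(x))
```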
Chain-of-Thought Reasoning is a Policy Improvement Operator | Large language models have astounded the world with fascinating new
capabilities. However, they currently lack the ability to teach themselves new
skills, relying instead on being trained on large amounts of human-generated
data. We introduce SECToR (Self-Education via Chain-of-Thought Reasoning), a
proof-of-concept demonstration that language models can successfully teach
themselves new skills using chain-of-thought reasoning. Inspired by previous
work in both reinforcement learning (Silver et al., 2017) and human cognition
(Kahneman, 2011), SECToR first uses chain-of-thought reasoning to slowly think
its way through problems. SECToR then fine-tunes the model to generate those
same answers, this time without using chain-of-thought reasoning. Language
models trained via SECToR autonomously learn to add up to 29-digit numbers
without any access to any ground truth examples beyond an initial supervised
fine-tuning phase consisting only of numbers with 6 or fewer digits. Our
central hypothesis is that chain-of-thought reasoning can act as a policy
improvement operator, analogously to how Monte-Carlo Tree Search is used in
AlphaZero. We hope that this research can lead to new directions in which
language models can learn to teach themselves without the need for human
demonstrations. | [
"Hugh Zhang",
"David C. Parkes"
] | 2023-09-15 17:44:17 | http://arxiv.org/abs/2309.08589v1 | http://arxiv.org/pdf/2309.08589v1 | 2309.08589v1 |
Compositional Foundation Models for Hierarchical Planning | To make effective decisions in novel environments with long-horizon goals, it
is crucial to engage in hierarchical reasoning across spatial and temporal
scales. This entails planning abstract subgoal sequences, visually reasoning
about the underlying plans, and executing actions in accordance with the
devised plan through visual-motor control. We propose Compositional Foundation
Models for Hierarchical Planning (HiP), a foundation model which leverages
multiple expert foundation models, trained individually on language, vision,
and action data, jointly to solve long-horizon tasks. We use a large
language model to construct symbolic plans that are grounded in the environment
through a large video diffusion model. Generated video plans are then grounded
to visual-motor control, through an inverse dynamics model that infers actions
from generated videos. To enable effective reasoning within this hierarchy, we
enforce consistency between the models via iterative refinement. We illustrate
the efficacy and adaptability of our approach in three different long-horizon
table-top manipulation tasks. | [
"Anurag Ajay",
"Seungwook Han",
"Yilun Du",
"Shuang Li",
"Abhi Gupta",
"Tommi Jaakkola",
"Josh Tenenbaum",
"Leslie Kaelbling",
"Akash Srivastava",
"Pulkit Agrawal"
] | 2023-09-15 17:44:05 | http://arxiv.org/abs/2309.08587v2 | http://arxiv.org/pdf/2309.08587v2 | 2309.08587v2 |
Replacing softmax with ReLU in Vision Transformers | Previous research observed accuracy degradation when replacing the attention
softmax with a point-wise activation such as ReLU. In the context of vision
transformers, we find that this degradation is mitigated when dividing by
sequence length. Our experiments training small to large vision transformers on
ImageNet-21k indicate that ReLU-attention can approach or match the performance
of softmax-attention in terms of scaling behavior as a function of compute. | [
"Mitchell Wortsman",
"Jaehoon Lee",
"Justin Gilmer",
"Simon Kornblith"
] | 2023-09-15 17:43:40 | http://arxiv.org/abs/2309.08586v2 | http://arxiv.org/pdf/2309.08586v2 | 2309.08586v2 |
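Below is a minimal sketch of the sequence-length-scaled ReLU attention the abstract describes, next to standard softmax attention for comparison; head dimensions and inputs are placeholders.

```python
import numpy as np

def softmax_attention(q, k, v):
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v

def relu_attention(q, k, v):
    L = k.shape[0]                              # sequence length
    s = q @ k.T / np.sqrt(q.shape[-1])
    return (np.maximum(s, 0.0) / L) @ v         # relu(scores) / seq_len

q = k = v = np.random.randn(6, 8)
print(softmax_attention(q, k, v).shape, relu_attention(q, k, v).shape)
```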
A Bayesian Approach to Robust Inverse Reinforcement Learning | We consider a Bayesian approach to offline model-based inverse reinforcement
learning (IRL). The proposed framework differs from existing offline
model-based IRL approaches by performing simultaneous estimation of the
expert's reward function and subjective model of environment dynamics. We make
use of a class of prior distributions which parameterizes how accurate the
expert's model of the environment is to develop efficient algorithms to
estimate the expert's reward and subjective dynamics in high-dimensional
settings. Our analysis reveals a novel insight that the estimated policy
exhibits robust performance when the expert is believed (a priori) to have a
highly accurate model of the environment. We verify this observation in the
MuJoCo environments and show that our algorithms outperform state-of-the-art
offline IRL algorithms. | [
"Ran Wei",
"Siliang Zeng",
"Chenliang Li",
"Alfredo Garcia",
"Anthony McDonald",
"Mingyi Hong"
] | 2023-09-15 17:37:09 | http://arxiv.org/abs/2309.08571v1 | http://arxiv.org/pdf/2309.08571v1 | 2309.08571v1 |
Neural Network Driven, Interactive Design for Nonlinear Optical Molecules Based on Group Contribution Method | A Lewis-mode group contribution method (LGC) -- multi-stage Bayesian neural
network (msBNN) -- evolutionary algorithm (EA) framework is reported for
rational design of D-Pi-A type organic small-molecule nonlinear optical
materials. By combining the msBNN with the corrected Lewis-mode group
contribution method (cLGC), different optical properties of molecules are
obtained accurately and efficiently - using only a small data set for
training. Moreover, by employing the EA model designed specifically for LGC,
structural search is readily achievable. The logical origins of the
framework's good performance are discussed in detail. Considering that such a
theory guided, machine learning framework combines chemical principles and
data-driven tools, most likely, it will be proven efficient to solve molecular
design related problems in wider fields. | [
"Jinming Fan",
"Chao Qian",
"Shaodong Zhou"
] | 2023-09-15 17:36:27 | http://arxiv.org/abs/2309.08570v1 | http://arxiv.org/pdf/2309.08570v1 | 2309.08570v1 |
Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach | Graph Neural Networks have achieved tremendous success in modeling complex
graph data in a variety of applications. However, there are limited studies
investigating privacy protection in GNNs. In this work, we propose a learning
framework that can provide node privacy at the user level, while incurring low
utility loss. We focus on a decentralized notion of Differential Privacy,
namely Local Differential Privacy, and apply randomization mechanisms to
perturb both feature and label data at the node level before the data is
collected by a central server for model training. Specifically, we investigate
the application of randomization mechanisms in high-dimensional feature
settings and propose an LDP protocol with strict privacy guarantees. Based on
frequency estimation in statistical analysis of randomized data, we develop
reconstruction methods to approximate features and labels from perturbed data.
We also formulate this learning framework to utilize frequency estimates of
graph clusters to supervise the training procedure at a sub-graph level.
Extensive experiments on real-world and semi-synthetic datasets demonstrate the
validity of our proposed model. | [
"Karuna Bhaila",
"Wen Huang",
"Yongkai Wu",
"Xintao Wu"
] | 2023-09-15 17:35:51 | http://arxiv.org/abs/2309.08569v1 | http://arxiv.org/pdf/2309.08569v1 | 2309.08569v1 |
Deep Reinforcement Learning for Efficient and Fair Allocation of Health Care Resources | Scarcity of health care resources could result in the unavoidable consequence
of rationing. For example, ventilators are often limited in supply, especially
during public health emergencies or in resource-constrained health care
settings, such as amid the pandemic of COVID-19. Currently, there is no
universally accepted standard for health care resource allocation protocols,
resulting in different governments prioritizing patients based on various
criteria and heuristic-based protocols. In this study, we investigate the use
of reinforcement learning for critical care resource allocation policy
optimization to fairly and effectively ration resources. We propose a
transformer-based deep Q-network to integrate the disease progression of
individual patients and the interaction effects among patients during the
critical care resource allocation. We aim to improve both fairness of
allocation and overall patient outcomes. Our experiments demonstrate that our
method significantly reduces excess deaths and achieves a more equitable
distribution under different levels of ventilator shortage, when compared to
existing severity-based and comorbidity-based methods in use by different
governments. Our source code is included in the supplement and will be released
on Github upon publication. | [
"Yikuan Li",
"Chengsheng Mao",
"Kaixuan Huang",
"Hanyin Wang",
"Zheng Yu",
"Mengdi Wang",
"Yuan Luo"
] | 2023-09-15 17:28:06 | http://arxiv.org/abs/2309.08560v1 | http://arxiv.org/pdf/2309.08560v1 | 2309.08560v1 |
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks | While numerous defense methods have been proposed to prohibit potential
poisoning attacks from untrusted data sources, most research works only defend
against specific attacks, which leaves many avenues for an adversary to
exploit. In this work, we propose an efficient and robust training approach to
defend against data poisoning attacks based on influence functions, named
Healthy Influential-Noise based Training. Using influence functions, we craft
healthy noise that helps to harden the classification model against poisoning
attacks without significantly affecting the generalization ability on test
data. In addition, our method can perform effectively when only a subset of the
training data is modified, instead of adding noise to all examples as has
been done in several previous works. We conduct comprehensive
evaluations over two image datasets with state-of-the-art poisoning attacks
under different realistic attack scenarios. Our empirical results show that
HINT can efficiently protect deep learning models against the effect of both
untargeted and targeted poisoning attacks. | [
"Minh-Hao Van",
"Alycia N. Carey",
"Xintao Wu"
] | 2023-09-15 17:12:19 | http://arxiv.org/abs/2309.08549v1 | http://arxiv.org/pdf/2309.08549v1 | 2309.08549v1 |
Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization | The pursuit of long-term autonomy mandates that robotic agents must
continuously adapt to their changing environments and learn to solve new tasks.
Continual learning seeks to overcome the challenge of catastrophic forgetting,
where learning to solve new tasks causes a model to forget previously learnt
information. Prior-based continual learning methods are appealing for robotic
applications as they are space efficient and typically do not increase in
computational complexity as the number of tasks grows. Despite these desirable
properties, prior-based approaches typically fail on important benchmarks and
consequently are limited in their potential applications compared to their
memory-based counterparts. We introduce Bayesian adaptive moment regularization
(BAdam), a novel prior-based method that better constrains parameter growth,
leading to lower catastrophic forgetting. Our method boasts a range of
desirable properties for robotic applications such as being lightweight and
task label-free, converging quickly, and offering calibrated uncertainty that
is important for safe real-world deployment. Results show that BAdam achieves
state-of-the-art performance for prior-based methods on challenging
single-headed class-incremental experiments such as Split MNIST and Split
FashionMNIST, and does so without relying on task labels or discrete task
boundaries. | [
"Jack Foster",
"Alexandra Brintrup"
] | 2023-09-15 17:10:51 | http://arxiv.org/abs/2309.08546v1 | http://arxiv.org/pdf/2309.08546v1 | 2309.08546v1 |
Efficient and robust Sensor Placement in Complex Environments | We address the problem of efficient and unobstructed surveillance or
communication in complex environments. On one hand, one wishes to use a minimal
number of sensors to cover the environment. On the other hand, it is often
important to consider solutions that are robust against sensor failure or
adversarial attacks. This paper addresses these challenges of designing minimal
sensor sets that achieve multi-coverage constraints -- every point in the
environment is covered by a prescribed number of sensors. We propose a greedy
algorithm to achieve the objective. Further, we explore deep learning
techniques to accelerate the evaluation of the objective function formulated in
the greedy algorithm. The training of the neural network reveals that the
geometric properties of the data significantly impact the network's
performance, particularly at the end stage. By taking into account these
properties, we discuss the differences in using greedy and $\epsilon$-greedy
algorithms to generate data and their impact on the robustness of the network. | [
"Lukas Taus",
"Yen-Hsi Richard Tsai"
] | 2023-09-15 17:10:19 | http://arxiv.org/abs/2309.08545v1 | http://arxiv.org/pdf/2309.08545v1 | 2309.08545v1 |
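Here is a minimal sketch of the greedy multi-coverage heuristic described above: repeatedly pick the candidate sensor that covers the most still-unmet coverage demand until every point is covered k times. Visibility sets are given as inputs; evaluating them in complex geometry is the expensive step the paper accelerates with a learned surrogate, and the tiny example is a placeholder.

```python
def greedy_multicover(candidates, points, covers, k):
    """covers[s]: set of points sensor s sees; require k-coverage of points."""
    need = {p: k for p in points}
    chosen, remaining = [], list(candidates)
    while any(need.values()):
        if not remaining:
            raise ValueError("remaining candidates cannot satisfy k-coverage")
        best = max(remaining, key=lambda s: sum(need[p] > 0 for p in covers[s]))
        remaining.remove(best)
        chosen.append(best)
        for p in covers[best]:
            need[p] = max(0, need[p] - 1)
    return chosen

covers = {"s1": {1, 2}, "s2": {2, 3}, "s3": {1, 3}}
print(greedy_multicover(["s1", "s2", "s3"], [1, 2, 3], covers, k=2))
```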
Towards Last-layer Retraining for Group Robustness with Fewer Annotations | Empirical risk minimization (ERM) of neural networks is prone to
over-reliance on spurious correlations and poor generalization on minority
groups. The recent deep feature reweighting (DFR) technique achieves
state-of-the-art group robustness via simple last-layer retraining, but it
requires held-out group and class annotations to construct a group-balanced
reweighting dataset. In this work, we examine this impractical requirement and
find that last-layer retraining can be surprisingly effective with no group
annotations (other than for model selection) and only a handful of class
annotations. We first show that last-layer retraining can greatly improve
worst-group accuracy even when the reweighting dataset has only a small
proportion of worst-group data. This implies a "free lunch" where holding out a
subset of training data to retrain the last layer can substantially outperform
ERM on the entire dataset with no additional data or annotations. To further
improve group robustness, we introduce a lightweight method called selective
last-layer finetuning (SELF), which constructs the reweighting dataset using
misclassifications or disagreements. Our empirical and theoretical results
present the first evidence that model disagreement upsamples worst-group data,
enabling SELF to nearly match DFR on four well-established benchmarks across
vision and language tasks with no group annotations and less than 3% of the
held-out class annotations. Our code is available at
https://github.com/tmlabonte/last-layer-retraining. | [
"Tyler LaBonte",
"Vidya Muthukumar",
"Abhishek Kumar"
] | 2023-09-15 16:52:29 | http://arxiv.org/abs/2309.08534v1 | http://arxiv.org/pdf/2309.08534v1 | 2309.08534v1 |
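Last-layer retraining, the technique both DFR and SELF build on, amounts to freezing the network, extracting penultimate-layer features, and refitting only the linear head on a held-out reweighting set (group-balanced in DFR; built from disagreements in SELF). The sketch below uses placeholder arrays standing in for the frozen model's embeddings of the held-out data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = np.random.randn(200, 64)      # frozen penultimate-layer features
labels = np.random.randint(0, 2, 200)    # held-out class annotations

head = LogisticRegression(max_iter=1000, C=1.0)
head.fit(features, labels)               # retrain only the last layer
print(head.score(features, labels))
```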
Scaling Laws for Sparsely-Connected Foundation Models | We explore the impact of parameter sparsity on the scaling behavior of
Transformers trained on massive datasets (i.e., "foundation models"), in both
vision and language domains. In this setting, we identify the first scaling law
describing the relationship between weight sparsity, number of non-zero
parameters, and amount of training data, which we validate empirically across
model and data scales; on ViT/JFT-4B and T5/C4. These results allow us to
characterize the "optimal sparsity", the sparsity level which yields the best
performance for a given effective model size and training budget. For a fixed
number of non-zero parameters, we identify that the optimal sparsity increases
with the amount of data used for training. We also extend our study to
different sparsity structures (such as the hardware-friendly n:m pattern) and
strategies (such as starting from a pretrained dense model). Our findings shed
light on the power and limitations of weight sparsity across various parameter
and computational settings, offering both theoretical understanding and
practical implications for leveraging sparsity towards computational efficiency
improvements. | [
"Elias Frantar",
"Carlos Riquelme",
"Neil Houlsby",
"Dan Alistarh",
"Utku Evci"
] | 2023-09-15 16:29:27 | http://arxiv.org/abs/2309.08520v1 | http://arxiv.org/pdf/2309.08520v1 | 2309.08520v1 |
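For the hardware-friendly n:m pattern mentioned above, a small sketch that keeps the n largest-magnitude weights in every group of m consecutive weights (e.g., 2:4); this illustrates the sparsity structure only, not the paper's training strategies or scaling-law fits.

```python
import numpy as np

def nm_sparsify(w, n=2, m=4):
    """Zero all but the n largest-magnitude entries in each group of m
    consecutive weights (e.g., the 2:4 pattern some accelerators support)."""
    flat = w.reshape(-1, m)                      # assumes w.size % m == 0
    # Indices of the (m - n) smallest-magnitude entries per group.
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (flat * mask).reshape(w.shape)

w = np.random.randn(8, 8)
ws = nm_sparsify(w)
print("non-zero fraction:", (ws != 0).mean())    # ~0.5 for the 2:4 pattern
```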
Generalised Probabilistic Diffusion Scale-Spaces | Probabilistic diffusion models excel at sampling new images from learned
distributions. Originally motivated by drift-diffusion concepts from physics,
they apply image perturbations such as noise and blur in a forward process that
results in a tractable probability distribution. A corresponding learned
reverse process generates images and can be conditioned on side information,
which leads to a wide variety of practical applications. Most current research
focuses on practice-oriented extensions. In contrast, the
theoretical background remains largely unexplored, in particular the relations
to drift-diffusion. In order to shed light on these connections to classical
image filtering, we propose a generalised scale-space theory for probabilistic
diffusion models. Moreover, we show conceptual and empirical connections to
diffusion and osmosis filters. | [
"Pascal Peter"
] | 2023-09-15 16:17:54 | http://arxiv.org/abs/2309.08511v1 | http://arxiv.org/pdf/2309.08511v1 | 2309.08511v1 |
Deep-learning-powered data analysis in plankton ecology | The implementation of deep learning algorithms has brought new perspectives
to plankton ecology. Emerging as an alternative approach to established
methods, deep learning offers objective schemes to investigate plankton
organisms in diverse environments. We provide an overview of
deep-learning-based methods including detection and classification of phyto-
and zooplankton images, foraging and swimming behaviour analysis, and finally
ecological modelling. Deep learning has the potential to speed up the analysis
and reduce human experimental bias, thus enabling data acquisition at
relevant temporal and spatial scales with improved reproducibility. We also
discuss shortcomings and show how deep learning architectures have evolved to
mitigate imprecise readouts. Finally, we suggest opportunities where deep
learning is particularly likely to catalyze plankton research. The examples are
accompanied by detailed tutorials and code samples that allow readers to apply
the methods described in this review to their own data. | [
"Harshith Bachimanchi",
"Matthew I. M. Pinder",
"Chloé Robert",
"Pierre De Wit",
"Jonathan Havenhand",
"Alexandra Kinnby",
"Daniel Midtvedt",
"Erik Selander",
"Giovanni Volpe"
] | 2023-09-15 16:04:11 | http://arxiv.org/abs/2309.08500v1 | http://arxiv.org/pdf/2309.08500v1 | 2309.08500v1 |
P-ROCKET: Pruning Random Convolution Kernels for Time Series Classification | In recent years, two time series classification models, ROCKET and
MINIROCKET, have attracted much attention for their low training cost and
state-of-the-art accuracy. Utilizing random 1-D convolutional kernels without
training, ROCKET and MINIROCKET can rapidly extract features from time series
data, allowing for the efficient fitting of linear classifiers. However, to
comprehensively capture useful features, a large number of random kernels are
required, which is incompatible with resource-constrained devices. Therefore, a
heuristic evolutionary algorithm named S-ROCKET is devised to recognize and
prune redundant kernels. Nevertheless, the inherent nature of evolutionary
algorithms renders the evaluation of kernels within S-ROCKET an unacceptable
time-consuming process. In this paper, diverging from S-ROCKET, which directly
evaluates random kernels despite their nonsignificant differences, we instead
remove kernels from a feature-selection perspective by eliminating the
associated connections in
the sequential classification layer. To this end, we start by formulating the
pruning challenge as a Group Elastic Net classification problem and employ the
ADMM method to arrive at a solution. Subsequently, we accelerate the
aforementioned time-consuming solving process by bifurcating the $l_{2,1}$ and
$l_2$ regularizations into two sequential stages and solve them separately,
which ultimately forms our core algorithm, named P-ROCKET. Stage 1 of P-ROCKET
employs group-wise regularization similarly to our initial ADMM-based
algorithm, but introduces dynamically varying penalties to greatly accelerate
the process. To mitigate overfitting, Stage 2 of P-ROCKET implements
element-wise regularization to refit a linear classifier, utilizing the
retained features. | [
"Shaowu Chen",
"Weize Sun",
"Lei Huang",
"Xiaopeng Li",
"Qingyuan Wang",
"Deepu John"
] | 2023-09-15 16:03:23 | http://arxiv.org/abs/2309.08499v1 | http://arxiv.org/pdf/2309.08499v1 | 2309.08499v1 |
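The Group Elastic Net step described above hinges on the proximal operator of the $l_{2,1}$ norm, i.e., group-wise soft-thresholding, which zeroes whole groups (and hence whole kernels). A sketch, assuming the rows of W are the per-kernel feature groups; the ADMM outer loop and P-ROCKET's dynamic penalties are not shown.

```python
import numpy as np

def prox_group_l21(W, lam):
    """Proximal operator of lam * sum_g ||W[g]||_2 (the l_{2,1} norm):
    shrink each row (group) of W toward zero by lam, zeroing the whole
    row when its norm falls below lam."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

W = np.random.randn(6, 3)
W_shrunk = prox_group_l21(W, lam=1.0)
print(int((np.linalg.norm(W_shrunk, axis=1) == 0).sum()), "groups pruned")
```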
Towards Last-layer Retraining for Group Robustness with Fewer Annotations was the previous record; the next record follows. | [
approach or the more recent end-to-end neural diarization (EEND), an additional
automatic speech recognition (ASR) model and an orchestration algorithm are
required to associate the speaker labels with recognized words. In this paper,
we propose Word-level End-to-End Neural Diarization (WEEND) with auxiliary
network, a multi-task learning algorithm that performs end-to-end ASR and
speaker diarization in the same neural architecture. That is, while speech is
being recognized, speaker labels are predicted simultaneously for each
recognized word. Experimental results demonstrate that WEEND outperforms the
turn-based diarization baseline system on all 2-speaker short-form scenarios
and has the capability to generalize to audio lengths of 5 minutes. Although
conversations with three or more speakers are harder, we find that with enough in-domain training
data, WEEND has the potential to deliver high quality diarized text. | [
"Yiling Huang",
"Weiran Wang",
"Guanlong Zhao",
"Hank Liao",
"Wei Xia",
"Quan Wang"
] | 2023-09-15 15:48:45 | http://arxiv.org/abs/2309.08489v1 | http://arxiv.org/pdf/2309.08489v1 | 2309.08489v1 |
On the limitations of data-driven weather forecasting models | As in many other areas of engineering and applied science, Machine Learning
(ML) is having a profound impact in the domain of Weather and Climate
Prediction. A very recent development in this area has been the emergence of
fully data-driven ML prediction models which routinely claim superior
performance to that of traditional physics-based models. In this work, we
examine some aspects of the forecasts produced by an exemplar of the current
generation of ML models, Pangu-Weather, with a focus on the fidelity and
physical consistency of those forecasts and how these characteristics relate to
perceived forecast performance. The main conclusion is that Pangu-Weather
forecasts, and by extension those of similar ML models, do not have the
fidelity and physical consistency of physics-based models and their advantage
in accuracy on traditional deterministic metrics of forecast skill can be
attributed, to a large extent, to these peculiarities. Similarly to other
current post-processing technologies, ML models appear to be able to add value
to standard NWP outputs for specific forecast applications and combined with
their extremely low computational cost during deployment, will likely provide
an additional, useful source of forecast information. | [
"Massimo Bonavita"
] | 2023-09-15 15:21:57 | http://arxiv.org/abs/2309.08473v1 | http://arxiv.org/pdf/2309.08473v1 | 2309.08473v1 |
Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks | In this research, we propose a novel approach for the quantification of
credit portfolio Value-at-Risk (VaR) sensitivity to asset correlations with the
use of synthetic financial correlation matrices generated with deep learning
models. In previous work, Generative Adversarial Networks (GANs) were employed
to demonstrate the generation of plausible correlation matrices that capture
the essential characteristics observed in empirical correlation matrices
estimated on asset returns. Instead of GANs, we employ Variational Autoencoders
(VAE) to achieve a more interpretable latent space representation. Through our
analysis, we reveal that the VAE latent space can be a useful tool to capture
the crucial factors impacting portfolio diversification, particularly in
relation to credit portfolio sensitivity to asset correlations changes. | [
"Sergio Caprioli",
"Emanuele Cagliero",
"Riccardo Crupi"
] | 2023-09-15 15:21:14 | http://arxiv.org/abs/2309.08652v1 | http://arxiv.org/pdf/2309.08652v1 | 2309.08652v1 |
Explaining Search Result Stances to Opinionated People | People use web search engines to find information before forming opinions,
which can lead to practical decisions with different levels of impact. The
cognitive effort of search can leave opinionated users vulnerable to cognitive
biases, e.g., the confirmation bias. In this paper, we investigate whether
stance labels and their explanations can help users consume more diverse search
results. We automatically classify and label search results on three topics
(i.e., intellectual property rights, school uniforms, and atheism) as against,
neutral, and in favor, and generate explanations for these labels. In a user
study (N =203), we then investigate whether search result stance bias (balanced
vs biased) and the level of explanation (plain text, label only, label and
explanation) influence the diversity of search results clicked. We find that
stance labels and explanations lead to more diverse search result
consumption. However, we do not find evidence for systematic opinion change
among users in this context. We believe these results can help designers of
search engines to make more informed design decisions. | [
"Z. Wu",
"T. Draws",
"F. Cau",
"F. Barile",
"A. Rieger",
"N. Tintarev"
] | 2023-09-15 15:08:24 | http://arxiv.org/abs/2309.08460v1 | http://arxiv.org/pdf/2309.08460v1 | 2309.08460v1 |
Mixture Encoder Supporting Continuous Speech Separation for Meeting Recognition | Many real-life applications of automatic speech recognition (ASR) require
processing of overlapped speech. A common method involves first separating the
speech into overlap-free streams and then performing ASR on the resulting
signals. Recently, the inclusion of a mixture encoder in the ASR model has been
proposed. This mixture encoder leverages the original overlapped speech to
mitigate the effect of artifacts introduced by the speech separation.
Previously, however, the method only addressed two-speaker scenarios. In this
work, we extend this approach to more natural meeting contexts featuring an
arbitrary number of speakers and dynamic overlaps. We evaluate the performance
using different speech separators, including the powerful TF-GridNet model. Our
experiments show state-of-the-art performance on the LibriCSS dataset and
highlight the advantages of the mixture encoder. Furthermore, they demonstrate
the strong separation performance of TF-GridNet, which largely closes the gap between
previous methods and oracle separation. | [
"Peter Vieting",
"Simon Berger",
"Thilo von Neumann",
"Christoph Boeddeker",
"Ralf Schlüter",
"Reinhold Haeb-Umbach"
] | 2023-09-15 14:57:28 | http://arxiv.org/abs/2309.08454v1 | http://arxiv.org/pdf/2309.08454v1 | 2309.08454v1 |
Toward responsible face datasets: modeling the distribution of a disentangled latent space for sampling face images from demographic groups | Recently, it has been exposed that some modern facial recognition systems
could discriminate against specific demographic groups and may lead to unfair
treatment with respect to various facial attributes such as gender and origin.
The main reason is the bias in the datasets used to train these models, namely
their unbalanced demographics. Unfortunately, collecting a large-scale
balanced dataset with
respect to various demographics is impracticable.
In this paper, we investigate as an alternative the generation of a balanced
and possibly bias-free synthetic dataset that could be used to train, to
regularize or to evaluate deep learning-based facial recognition models. We
propose to use a simple method for modeling and sampling a disentangled
projection of a StyleGAN latent space to generate any combination of
demographic groups (e.g. $hispanic-female$). Our experiments show that we can
effectively synthesize any combination of demographic groups, and the
resulting identities differ from those in the original training dataset. We
have also released the source code. | [
"Parsa Rahimi",
"Christophe Ecabert",
"Sebastien Marcel"
] | 2023-09-15 14:42:04 | http://arxiv.org/abs/2309.08442v1 | http://arxiv.org/pdf/2309.08442v1 | 2309.08442v1 |
MIML: Multiplex Image Machine Learning for High Precision Cell Classification via Mechanical Traits within Microfluidic Systems | Label-free cell classification is advantageous for supplying pristine cells
for further use or examination, yet existing techniques frequently fall short
in terms of specificity and speed. In this study, we address these limitations
through the development of a novel machine learning framework, Multiplex Image
Machine Learning (MIML). This architecture uniquely combines label-free cell
images with biomechanical property data, harnessing the vast, often
underutilized morphological information intrinsic to each cell. By integrating
both types of data, our model offers a more holistic understanding of the
cellular properties, utilizing morphological information typically discarded in
traditional machine learning models. This approach has led to a remarkable
98.3\% accuracy in cell classification, a substantial improvement over models
that only consider a single data type. MIML has been proven effective in
classifying white blood cells and tumor cells, with potential for broader
application due to its inherent flexibility and transfer learning capability.
It is particularly effective for cells with similar morphology but distinct
biomechanical properties. This innovative approach has significant implications
across various fields, from advancing disease diagnostics to understanding
cellular behavior. | [
"Khayrul Islam",
"Ratul Paul",
"Shen Wang",
"Yaling Liu"
] | 2023-09-15 14:23:51 | http://arxiv.org/abs/2309.08421v1 | http://arxiv.org/pdf/2309.08421v1 | 2309.08421v1 |
FedDCSR: Federated Cross-domain Sequential Recommendation via Disentangled Representation Learning | Cross-domain Sequential Recommendation (CSR) which leverages user sequence
data from multiple domains has received extensive attention in recent years.
However, the existing CSR methods require sharing original user data across
domains, which violates the General Data Protection Regulation (GDPR). Thus, it
is necessary to combine federated learning (FL) and CSR to fully utilize
knowledge from different domains while preserving data privacy. Nonetheless,
the sequence feature heterogeneity across different domains significantly
impacts the overall performance of FL. In this paper, we propose FedDCSR, a
novel federated cross-domain sequential recommendation framework via
disentangled representation learning. Specifically, to address the sequence
feature heterogeneity across domains, we introduce an approach called
inter-intra domain sequence representation disentanglement (SRD) to disentangle
the user sequence features into domain-shared and domain-exclusive features. In
addition, we design an intra domain contrastive infomax (CIM) strategy to learn
richer domain-exclusive features of users by performing data augmentation on
user sequences. Extensive experiments on three real-world scenarios demonstrate
that FedDCSR achieves significant improvements over existing baselines. | [
"Hongyu Zhang",
"Dongyi Zheng",
"Xu Yang",
"Jiyuan Feng",
"Qing Liao"
] | 2023-09-15 14:23:20 | http://arxiv.org/abs/2309.08420v3 | http://arxiv.org/pdf/2309.08420v3 | 2309.08420v3 |
A new method of modeling the multi-stage decision-making process of CRT using machine learning with uncertainty quantification | Aims. The purpose of this study is to create a multi-stage machine learning
model to predict cardiac resynchronization therapy (CRT) response for heart
failure (HF) patients. This model exploits uncertainty quantification to
recommend additional collection of single-photon emission computed tomography
myocardial perfusion imaging (SPECT MPI) variables if baseline clinical
variables and features from electrocardiogram (ECG) are not sufficient.
Methods. 218 patients who underwent rest-gated SPECT MPI were enrolled in this
study. CRT response was defined as an increase in left ventricular ejection
fraction (LVEF) > 5% at a 6 month follow-up. A multi-stage ML model was created
by combining two ensemble models. Results. The response rate for CRT was 55.5%
(n = 121); 61.0% of the patients were male (n = 133), with an average age of
62.0 years and an average LVEF of 27.7%. The multi-stage model performed
similarly to Ensemble 2 (which
utilized the additional SPECT data) with AUC of 0.75 vs. 0.77, accuracy of 0.71
vs. 0.69, sensitivity of 0.70 vs. 0.72, and specificity 0.72 vs. 0.65,
respectively. However, the multi-stage model only required SPECT MPI data for
52.7% of the patients across all folds. Conclusions. By using rule-based logic
stemming from uncertainty quantification, the multi-stage model was able to
reduce the need for additional SPECT MPI data acquisition without sacrificing
performance. | [
"Kristoffer Larsen",
"Chen Zhao",
"Joyce Keyak",
"Qiuying Sha",
"Diana Paez",
"Xinwei Zhang",
"Jiangang Zou",
"Amalia Peix",
"Weihua Zhou"
] | 2023-09-15 14:18:53 | http://arxiv.org/abs/2309.08415v2 | http://arxiv.org/pdf/2309.08415v2 | 2309.08415v2 |
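A schematic of the two-stage decision logic described above: predict from clinical/ECG features first, and acquire SPECT MPI features only for patients whose first-stage prediction is uncertain. The data, models, uncertainty proxy, and threshold below are placeholders, not the paper's calibrated rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
Xc = rng.normal(size=(300, 5))          # clinical + ECG features (toy)
Xs = rng.normal(size=(300, 4))          # SPECT MPI features (toy)
y = (Xc[:, 0] + Xs[:, 0] + rng.normal(size=300) > 0).astype(int)

stage1 = LogisticRegression().fit(Xc, y)
stage2 = LogisticRegression().fit(np.hstack([Xc, Xs]), y)

p1 = stage1.predict_proba(Xc)[:, 1]
uncertain = np.abs(p1 - 0.5) < 0.2      # crude uncertainty proxy
p = p1.copy()
# Acquire SPECT data only for uncertain patients and defer to stage 2.
p[uncertain] = stage2.predict_proba(
    np.hstack([Xc[uncertain], Xs[uncertain]]))[:, 1]
print(f"SPECT acquired for {uncertain.mean():.0%} of patients")
```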
Make Deep Networks Shallow Again | Deep neural networks have a good success record and are thus viewed as the
best architecture choice for complex applications. Their main shortcoming has
been, for a long time, the vanishing gradient which prevented the numerical
optimization algorithms from acceptable convergence. A breakthrough has been
achieved by the concept of residual connections -- an identity mapping parallel
to a conventional layer. This concept is applicable to stacks of layers of the
same dimension and substantially alleviates the vanishing gradient problem. A
stack of residual connection layers can be expressed as an expansion of terms
similar to the Taylor expansion. This expansion suggests the possibility of
truncating the higher-order terms and obtaining an architecture consisting of a
single broad layer composed of all initially stacked layers in parallel. In
other words, a sequential deep architecture is substituted by a parallel
shallow one. Prompted by this theory, we investigated the performance
capabilities of the parallel architecture in comparison to the sequential one.
The computer vision datasets MNIST and CIFAR10 were used to train both
architectures for a total of 6912 combinations of varying numbers of
convolutional layers, numbers of filters, kernel sizes, and other meta
parameters. Our findings demonstrate a surprising equivalence between the deep
(sequential) and shallow (parallel) architectures. Both layouts produced
similar results in terms of training and validation set loss. This discovery
implies that a wide, shallow architecture can potentially replace a deep
network without sacrificing performance. Such substitution has the potential to
simplify network architectures, improve optimization efficiency, and accelerate
the training process. | [
"Bernhard Bermeitinger",
"Tomas Hrycej",
"Siegfried Handschuh"
] | 2023-09-15 14:18:21 | http://arxiv.org/abs/2309.08414v1 | http://arxiv.org/pdf/2309.08414v1 | 2309.08414v1 |
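The truncation argued for above replaces the sequential residual recursion $x_{i+1} = x_i + f_i(x_i)$ with the first-order terms $x + \sum_i f_i(x)$, so all branches read the same input in parallel. A numpy sketch with toy tanh blocks, illustrative rather than the paper's convolutional setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_blocks = 16, 4
Ws = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_blocks)]
f = lambda W, x: np.tanh(x @ W)          # one residual branch

x = rng.normal(size=(1, d))

# Deep (sequential) residual stack: x_{i+1} = x_i + f_i(x_i).
deep = x
for W in Ws:
    deep = deep + f(W, deep)

# Shallow (parallel) truncation: all branches read the same input.
shallow = x + sum(f(W, x) for W in Ws)

print("relative gap:", np.linalg.norm(deep - shallow) / np.linalg.norm(deep))
```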
Constraint-Free Structure Learning with Smooth Acyclic Orientations | The structure learning problem consists of fitting data generated by a
Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this
context, differentiable approaches constrain or regularize the optimization
problem using a continuous relaxation of the acyclicity property. The
computational cost of evaluating graph acyclicity is cubic on the number of
nodes and significantly affects scalability. In this paper we introduce COSMO,
a constraint-free continuous optimization scheme for acyclic structure
learning. At the core of our method, we define a differentiable approximation
of an orientation matrix parameterized by a single priority vector. Differently
from previous work, our parameterization fits a smooth orientation matrix and
the resulting acyclic adjacency matrix without evaluating acyclicity at any
step. Despite the absence of explicit constraints, we prove that COSMO always
converges to an acyclic solution. In addition to being asymptotically faster,
our empirical analysis highlights how COSMO performance on graph reconstruction
compares favorably with competing structure learning methods. | [
"Riccardo Massidda",
"Francesco Landolfi",
"Martina Cinquini",
"Davide Bacciu"
] | 2023-09-15 14:08:09 | http://arxiv.org/abs/2309.08406v1 | http://arxiv.org/pdf/2309.08406v1 | 2309.08406v1 |
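One plausible realization of the priority-vector idea above (the exact parameterization is defined in the paper; this sigmoid form is an assumption) makes edge $i \to j$ active to the degree that priority $p_j$ exceeds $p_i$. Any hard threshold of such a matrix is acyclic, since a cycle would require a strictly increasing loop of priorities.

```python
import numpy as np

def smooth_orientation(p, temperature=0.1):
    """Sigmoid relaxation of an acyclic orientation: entry (i, j) is close
    to 1 when p[j] > p[i], so no hard-thresholded cycle can exist."""
    diff = p[None, :] - p[:, None]            # p_j - p_i
    A = 1.0 / (1.0 + np.exp(-diff / temperature))
    np.fill_diagonal(A, 0.0)
    return A

p = np.array([0.3, -1.2, 0.9, 0.0])           # one priority per node
print(np.round(smooth_orientation(p), 2))
```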
Neural Metamaterial Networks for Nonlinear Material Design | Nonlinear metamaterials with tailored mechanical properties have applications
in engineering, medicine, robotics, and beyond. While modeling their
macromechanical behavior is challenging in itself, finding structure parameters
that lead to ideal approximation of high-level performance goals is a
challenging task. In this work, we propose Neural Metamaterial Networks (NMN)
-- smooth neural representations that encode the nonlinear mechanics of entire
metamaterial families. Given structure parameters as input, NMN return
continuously differentiable strain energy density functions, thus guaranteeing
conservative forces by construction. Though trained on simulation data, NMN do
not inherit the discontinuities resulting from topological changes in finite
element meshes. They instead provide a smooth map from parameter to performance
space that is fully differentiable and thus well-suited for gradient-based
optimization. On this basis, we formulate inverse material design as a
nonlinear programming problem that leverages neural networks for both objective
functions and constraints. We use this approach to automatically design
materials with desired strain-stress curves, prescribed directional stiffness
and Poisson ratio profiles. We furthermore conduct ablation studies on network
nonlinearities and show the advantages of our approach compared to native-scale
optimization. | [
"Yue Li",
"Stelian Coros",
"Bernhard Thomaszewski"
] | 2023-09-15 13:50:43 | http://arxiv.org/abs/2309.10600v1 | http://arxiv.org/pdf/2309.10600v1 | 2309.10600v1 |
Optimizing Modular Robot Composition: A Lexicographic Genetic Algorithm Approach | Industrial robots are designed as general-purpose hardware, which limits
their ability to adapt to changing task requirements or environments. Modular
robots, on the other hand, offer flexibility and can be easily customized to
suit diverse needs. The morphology, i.e., the form and structure of a robot,
significantly impacts the primary performance metrics: acquisition cost, cycle
time, and energy efficiency. However, identifying an optimal module composition
for a specific task remains an open problem, presenting a substantial hurdle in
developing task-tailored modular robots. Previous approaches either lack
adequate exploration of the design space or the possibility to adapt to complex
tasks. We propose combining a genetic algorithm with a lexicographic evaluation
of solution candidates to overcome this problem and navigate search spaces
exceeding those in prior work by orders of magnitude in the number of possible
compositions. We demonstrate that our approach outperforms a state-of-the-art
baseline and is able to synthesize modular robots for industrial tasks in
cluttered environments. | [
"Jonathan Külz",
"Matthias Althoff"
] | 2023-09-15 13:50:21 | http://arxiv.org/abs/2309.08399v1 | http://arxiv.org/pdf/2309.08399v1 | 2309.08399v1 |
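The lexicographic evaluation described above compares candidates objective-by-objective in a fixed priority order, which Python tuple comparison implements directly; the metric names and values here are illustrative, not the paper's objectives or weights.

```python
# Lexicographic fitness: cycle time dominates, then cost, then energy.
# Python compares tuples element-by-element, which is exactly the
# lexicographic order used to rank solution candidates.
def fitness(robot):
    return (robot["cycle_time"], robot["cost"], robot["energy"])

population = [
    {"id": "A", "cycle_time": 2.1, "cost": 5000, "energy": 40.0},
    {"id": "B", "cycle_time": 2.1, "cost": 4200, "energy": 55.0},
    {"id": "C", "cycle_time": 2.4, "cost": 3000, "energy": 30.0},
]
best = min(population, key=fitness)   # B: ties on time are broken by cost
print(best["id"])
```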
Exploring Meta Information for Audio-based Zero-shot Bird Classification | Advances in passive acoustic monitoring and machine learning have led to the
procurement of vast datasets for computational bioacoustic research.
Nevertheless, data scarcity is still an issue for rare and underrepresented
species. This study investigates how meta-information can improve zero-shot
audio classification, utilising bird species as an example case study due to
the availability of rich and diverse metadata. We investigate three different
sources of metadata: textual bird sound descriptions encoded via (S)BERT,
functional traits (AVONET), and bird life-history (BLH) characteristics. As
audio features, we extract audio spectrogram transformer (AST) embeddings and
project them to the dimension of the auxiliary information by adopting a single
linear layer. Then, we employ the dot product as compatibility function and a
standard zero-shot learning ranking hinge loss to determine the correct class.
The best results are achieved by concatenating the AVONET and BLH features
attaining a mean F1-score of 0.233 over five different test sets with 8 to 10
classes. | [
"Alexander Gebhard",
"Andreas Triantafyllopoulos",
"Teresa Bez",
"Lukas Christ",
"Alexander Kathan",
"Björn W. Schuller"
] | 2023-09-15 13:50:16 | http://arxiv.org/abs/2309.08398v1 | http://arxiv.org/pdf/2309.08398v1 | 2309.08398v1 |
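A numpy sketch of the scoring and loss described above: a single linear layer projects the audio embedding to the metadata dimension, the dot product serves as the compatibility function, and a standard zero-shot ranking hinge loss pushes the true class above all others. The dimensions, margin, and random data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_audio, d_meta, n_classes = 768, 64, 10
W = rng.normal(scale=0.01, size=(d_audio, d_meta))   # single linear layer
meta = rng.normal(size=(n_classes, d_meta))          # per-class metadata

def scores(x):
    return (x @ W) @ meta.T          # dot-product compatibility

def hinge_loss(x, y, margin=1.0):
    s = scores(x)
    # Rank the true class above every other class by at least `margin`.
    viol = np.maximum(0.0, margin + s - s[y])
    viol[y] = 0.0
    return viol.sum()

x = rng.normal(size=d_audio)         # one audio (e.g. AST) embedding
print(hinge_loss(x, y=3))
```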
Learning by Self-Explaining | Artificial intelligence (AI) research has a long track record of drawing
inspirations from findings from biology, in particular human intelligence. In
contrast to current AI research that mainly treats explanations as a means for
model inspection, a somewhat neglected finding from human psychology is the
benefit of self-explaining in an agent's learning process. Motivated by this,
we introduce a novel learning paradigm, termed Learning by Self-Explaining
(LSX). The underlying idea is that a learning module (learner) performs a base
task, e.g. image classification, and provides explanations to its decisions. An
internal critic module next evaluates the quality of these explanations given
the original task. Finally, the learner is refined with the critic's feedback
and the loop is repeated as required. The intuition behind this is that an
explanation is considered "good" if the critic can perform the same task given
the respective explanation. Despite many implementation possibilities, the
structure of any LSX instantiation can be taxonomized based on four learning
modules which we identify as: Fit, Explain, Reflect and Revise. In our work, we
provide distinct instantiations of LSX for two different learner models, each
illustrating different choices for the various LSX components. We broadly
evaluate these on several datasets and show that Learning by Self-Explaining
not only boosts the generalization abilities of AI models, particularly in
small-data regimes, but also aids in mitigating the influence of confounding
factors, as well as leading to more task-specific and faithful model
explanations. Overall, our results provide experimental evidence of the
potential of self-explaining within the learning phase of an AI model. | [
"Wolfgang Stammer",
"Felix Friedrich",
"David Steinmann",
"Hikaru Shindo",
"Kristian Kersting"
] | 2023-09-15 13:41:57 | http://arxiv.org/abs/2309.08395v1 | http://arxiv.org/pdf/2309.08395v1 | 2309.08395v1 |
Multidimensional well-being of US households at a fine spatial scale using fused household surveys: fusionACS | Social science often relies on surveys of households and individuals. Dozens
of such surveys are regularly administered by the U.S. government. However,
they field independent, unconnected samples with specialized questions,
limiting research questions to those that can be answered by a single survey.
The fusionACS project seeks to integrate data from multiple U.S. household
surveys by statistically "fusing" variables from "donor" surveys onto American
Community Survey (ACS) microdata. This results in an integrated microdataset of
household attributes and well-being dimensions that can be analyzed to address
research questions in ways that are not currently possible. The presented data
comprise the fusion onto the ACS of select donor variables from the Residential
Energy Consumption Survey (RECS) of 2015, the National Household Transportation
Survey (NHTS) of 2017, the American Housing Survey (AHS) of 2019, and the
Consumer Expenditure Survey - Interview (CEI) for the years 2015-2019. The
underlying statistical techniques are included in an open-source $R$ package,
fusionModel, that provides generic tools for the creation, analysis, and
validation of fused microdata. | [
"Kevin Ummel",
"Miguel Poblete-Cazenave",
"Karthik Akkiraju",
"Nick Graetz",
"Hero Ashman",
"Cora Kingdon",
"Steven Herrera Tenorio",
"Aaryaman \"Sunny\" Singhal",
"Daniel Aldana Cohen",
"Narasimha D. Rao"
] | 2023-09-15 13:19:54 | http://arxiv.org/abs/2309.11512v1 | http://arxiv.org/pdf/2309.11512v1 | 2309.11512v1 |
A Unified View Between Tensor Hypergraph Neural Networks And Signal Denoising | Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising
(HyperGSD) are two fundamental topics in higher-order network modeling.
Understanding the connection between these two domains is particularly useful
for designing novel HyperGNNs from a HyperGSD perspective, and vice versa. In
particular, the tensor-hypergraph convolutional network (T-HGCN) has emerged as
a powerful architecture for preserving higher-order interactions on
hypergraphs, and this work shows an equivalence relation between a HyperGSD
problem and the T-HGCN. Inspired by this intriguing result, we further design a
tensor-hypergraph iterative network (T-HGIN) based on the HyperGSD problem,
which takes advantage of a multi-step updating scheme in every single layer.
Numerical experiments are conducted to show the promising applications of the
proposed T-HGIN approach. | [
"Fuli Wang",
"Karelia Pena-Pena",
"Wei Qian",
"Gonzalo R. Arce"
] | 2023-09-15 13:19:31 | http://arxiv.org/abs/2309.08385v1 | http://arxiv.org/pdf/2309.08385v1 | 2309.08385v1 |
Boosting Fair Classifier Generalization through Adaptive Priority Reweighing | With the increasing penetration of machine learning applications in critical
decision-making areas, calls for algorithmic fairness are more prominent.
Although there have been various modalities to improve algorithmic fairness
through learning with fairness constraints, their performance does not
generalize well in the test set. A performance-promising fair algorithm with
better generalizability is needed. This paper proposes a novel adaptive
reweighing method to eliminate the impact of the distribution shifts between
training and test data on model generalizability. Most previous reweighing
methods propose to assign a unified weight to each (sub)group. Instead, our
method granularly models the distance from the sample predictions to the
decision boundary. Our adaptive reweighing method prioritizes samples closer to
the decision boundary and assigns a higher weight to improve the
generalizability of fair classifiers. Extensive experiments are performed to
validate the generalizability of our adaptive priority reweighing method for
accuracy and fairness measures (i.e., equal opportunity, equalized odds, and
demographic parity) in tabular benchmarks. We also highlight the performance of
our method in improving the fairness of language and vision models. The code is
available at https://github.com/che2198/APW. | [
"Zhihao Hu",
"Yiran Xu",
"Mengnan Du",
"Jindong Gu",
"Xinmei Tian",
"Fengxiang He"
] | 2023-09-15 13:04:55 | http://arxiv.org/abs/2309.08375v2 | http://arxiv.org/pdf/2309.08375v2 | 2309.08375v2 |
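The core mechanism above, higher weights for samples near the decision boundary, can be sketched with a logistic model where $|p - 0.5|$ proxies the boundary distance; the exponential weighting schedule below is an illustrative assumption, not the paper's exact adaptive rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
for _ in range(5):                                     # reweigh-and-refit loop
    margin = np.abs(clf.predict_proba(X)[:, 1] - 0.5)  # boundary distance
    w = np.exp(-5.0 * margin)          # closer to boundary -> higher weight
    clf = LogisticRegression().fit(X, y, sample_weight=w)
print("final accuracy:", clf.score(X, y))
```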
Understanding the limitations of self-supervised learning for tabular anomaly detection | While self-supervised learning has improved anomaly detection in computer
vision and natural language processing, it is unclear whether tabular data can
benefit from it. This paper explores the limitations of self-supervision for
tabular anomaly detection. We conduct several experiments spanning various
pretext tasks on 26 benchmark datasets to understand why this is the case. Our
results confirm that representations derived from self-supervision do not improve
tabular anomaly detection performance compared to using the raw representations
of the data. We show this is due to neural networks introducing irrelevant
features, which reduces the effectiveness of anomaly detectors. However, we
demonstrate that using a subspace of the neural network's representation can
recover performance. | [
"Kimberly T. Mai",
"Toby Davies",
"Lewis D. Griffin"
] | 2023-09-15 13:04:11 | http://arxiv.org/abs/2309.08374v2 | http://arxiv.org/pdf/2309.08374v2 | 2309.08374v2 |
Continual Learning with Deep Streaming Regularized Discriminant Analysis | Continual learning is increasingly sought after in real world machine
learning applications, as it enables learning in a more human-like manner.
Conventional machine learning approaches fail to achieve this, as incrementally
updating the model with non-identically distributed data leads to catastrophic
forgetting, where existing representations are overwritten. Although
traditional continual learning methods have mostly focused on batch learning,
which involves learning from large collections of labeled data sequentially,
this approach is not well-suited for real-world applications where we would
like new data to be integrated directly. This necessitates a paradigm shift
towards streaming learning. In this paper, we propose a streaming version of
regularized discriminant analysis as a solution to this challenge. We combine
our algorithm with a convolutional neural network and demonstrate that it
outperforms both batch learning and existing streaming learning algorithms on
the ImageNet ILSVRC-2012 dataset. | [
"Joe Khawand",
"Peter Hanappe",
"David Colliaux"
] | 2023-09-15 12:25:42 | http://arxiv.org/abs/2309.08353v1 | http://arxiv.org/pdf/2309.08353v1 | 2309.08353v1 |
Convergence of ADAM with Constant Step Size in Non-Convex Settings: A Simple Proof | In neural network training, RMSProp and ADAM remain widely favoured
optimization algorithms. One of the keys to their performance lies in selecting
the correct step size, which can significantly influence their effectiveness.
It is worth noting that these algorithms' performance can vary considerably,
depending on the chosen step sizes. Additionally, questions about their
theoretical convergence properties continue to be a subject of interest. In
this paper, we theoretically analyze a constant step-size version of ADAM in the
non-convex setting. We give sufficient conditions on the step size to achieve
almost sure asymptotic convergence of the gradients to zero with minimal
assumptions. We also provide runtime bounds for deterministic ADAM to reach
approximate criticality when working with smooth, non-convex functions. | [
"Alokendu Mazumder",
"Bhartendu Kumar",
"Manan Tayal",
"Punit Rathore"
] | 2023-09-15 11:47:14 | http://arxiv.org/abs/2309.08339v2 | http://arxiv.org/pdf/2309.08339v2 | 2309.08339v2 |
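For reference, the constant step-size ADAM iteration analyzed above (with the usual bias correction and the common default $\beta$ values), run on a toy quadratic; the step size and horizon are illustrative.

```python
import numpy as np

def adam_constant(grad, x0, alpha=1e-2, b1=0.9, b2=0.999, eps=1e-8, T=2000):
    """Plain ADAM with a fixed step size alpha (no schedule), including
    the standard first/second-moment bias correction."""
    x, m, v = x0.copy(), np.zeros_like(x0), np.zeros_like(x0)
    for t in range(1, T + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        mhat, vhat = m / (1 - b1**t), v / (1 - b2**t)
        x -= alpha * mhat / (np.sqrt(vhat) + eps)
    return x

grad = lambda x: 2 * (x - 3.0)            # f(x) = ||x - 3||^2
print(adam_constant(grad, np.zeros(2)))   # converges close to [3, 3]
```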
Let's Predict Who Will Move to a New Job | Any company's human resources department faces the challenge of predicting
whether an applicant will search for a new job or stay with the company. In
this paper, we discuss how machine learning (ML) is used to predict who will
move to a new job. First, the data is pre-processed into a suitable format for
ML models. To deal with categorical features, data encoding is performed, and
several ML algorithms (MLAs) are applied, including Random Forest (RF),
Logistic Regression (LR), Decision Tree (DT), and eXtreme Gradient Boosting
(XGBoost). To improve the performance of the ML models, the synthetic minority
oversampling technique (SMOTE) is used to balance the data before retraining
them. Models are assessed
using decision-support metrics such as precision, recall, F1-score, and
accuracy. | [
"Rania Mkhinini Gahar",
"Adel Hidri",
"Minyar Sassi Hidri"
] | 2023-09-15 11:43:09 | http://arxiv.org/abs/2309.08333v1 | http://arxiv.org/pdf/2309.08333v1 | 2309.08333v1 |
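A compact sketch of the pipeline described above with synthetic stand-in data, using imbalanced-learn's SMOTE and a Random Forest (one of the listed MLAs); the dataset shape and parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for the (imbalanced) job-change dataset.
X, y = make_classification(n_samples=2000, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))  # precision/recall/F1
```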
Estimation of Counterfactual Interventions under Uncertainties | Counterfactual analysis is intuitively performed by humans on a daily basis,
e.g., "What should I have done differently to get the loan approved?". Such
counterfactual questions also steer the formulation of scientific hypotheses.
More formally it provides insights about potential improvements of a system by
inferring the effects of hypothetical interventions into a past observation of
the system's behaviour which plays a prominent role in a variety of industrial
applications. Due to the hypothetical nature of such analysis, counterfactual
distributions are inherently ambiguous. This ambiguity is particularly
challenging in continuous settings in which a continuum of explanations exists
for the same observation. In this paper, we address this problem by following a
hierarchical Bayesian approach which explicitly models such uncertainty. In
particular, we derive counterfactual distributions for a Bayesian Warped
Gaussian Process thereby allowing for non-Gaussian distributions and
non-additive noise. We illustrate the properties of our approach on a synthetic
and a semi-synthetic example and show its performance when used within an
algorithmic recourse downstream task. | [
"Juliane Weilbach",
"Sebastian Gerwinn",
"Melih Kandemir",
"Martin Fraenzle"
] | 2023-09-15 11:41:23 | http://arxiv.org/abs/2309.08332v1 | http://arxiv.org/pdf/2309.08332v1 | 2309.08332v1 |
Heteroskedastic conformal regression | Conformal prediction, and split conformal prediction as a specific
implementation, offer a distribution-free approach to estimating prediction
intervals with statistical guarantees. Recent work has shown that split
conformal prediction can produce state-of-the-art prediction intervals when
focusing on marginal coverage, i.e., on a calibration dataset the method
produces on average prediction intervals that contain the ground truth with a
predefined coverage level. However, such intervals are often not adaptive,
which can be problematic for regression problems with heteroskedastic noise.
This paper tries to shed new light on how adaptive prediction intervals can be
constructed using methods such as normalized and Mondrian conformal prediction.
We present theoretical and experimental results in which these methods are
investigated in a systematic way. | [
"Nicolas Dewolf",
"Bernard De Baets",
"Willem Waegeman"
] | 2023-09-15 11:10:46 | http://arxiv.org/abs/2309.08313v1 | http://arxiv.org/pdf/2309.08313v1 | 2309.08313v1 |
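A minimal sketch of normalized split conformal prediction, one of the adaptive variants discussed above: residuals are scaled by a learned difficulty estimate $\hat\sigma(x)$ so intervals widen where the noise is larger. The models, the toy heteroskedastic data, and fitting $\hat\sigma$ on training residuals are simplifying assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3000, 1))
y = X[:, 0] + (0.1 + 0.5 * np.abs(X[:, 0])) * rng.normal(size=3000)

X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

mu = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
# Difficulty model on training residuals (a common simplification;
# a separate split is cleaner).
sig = RandomForestRegressor(random_state=0).fit(
    X_tr, np.abs(y_tr - mu.predict(X_tr)) + 1e-3)

# Normalized nonconformity scores on the calibration set.
scores = np.abs(y_cal - mu.predict(X_cal)) / sig.predict(X_cal)
alpha, n = 0.1, len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

x_new = np.array([[2.0]])                 # interval widens where |x| is large
half = q * sig.predict(x_new)
print(mu.predict(x_new) - half, mu.predict(x_new) + half)
```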
A Real-Time Active Speaker Detection System Integrating an Audio-Visual Signal with a Spatial Querying Mechanism | We introduce a distinctive real-time, causal, neural network-based active
speaker detection system optimized for low-power edge computing. This system
drives a virtual cinematography module and is deployed on a commercial device.
The system uses data originating from a microphone array and a 360-degree
camera. Our network requires only 127 MFLOPs per participant, for a meeting
with 14 participants. Unlike previous work, we examine the error rate of our
network when the computational budget is exhausted, and find that it exhibits
graceful degradation, allowing the system to operate reasonably well even in
this case. Departing from conventional DOA estimation approaches, our network
learns to query the available acoustic data, considering the detected head
locations. We train and evaluate our algorithm on a realistic meetings dataset
featuring up to 14 participants in the same meeting, overlapped speech, and
other challenging scenarios. | [
"Ilya Gurvich",
"Ido Leichter",
"Dharmendar Reddy Palle",
"Yossi Asher",
"Alon Vinnikov",
"Igor Abramovski",
"Vishak Gopal",
"Ross Cutler",
"Eyal Krupka"
] | 2023-09-15 10:20:16 | http://arxiv.org/abs/2309.08295v1 | http://arxiv.org/pdf/2309.08295v1 | 2309.08295v1 |
Large Intestine 3D Shape Refinement Using Point Diffusion Models for Digital Phantom Generation | Accurate 3D modeling of human organs plays a crucial role in building
computational phantoms for virtual imaging trials. However, generating
anatomically plausible reconstructions of organ surfaces from computed
tomography scans remains challenging for many structures in the human body.
This challenge is particularly evident when dealing with the large intestine.
In this study, we leverage recent advancements in geometric deep learning and
denoising diffusion probabilistic models to refine the segmentation results of
the large intestine. We begin by representing the organ as point clouds sampled
from the surface of the 3D segmentation mask. Subsequently, we employ a
hierarchical variational autoencoder to obtain global and local latent
representations of the organ's shape. We train two conditional denoising
diffusion models in the hierarchical latent space to perform shape refinement.
To further enhance our method, we incorporate a state-of-the-art surface
reconstruction model, allowing us to generate smooth meshes from the obtained
complete point clouds. Experimental results demonstrate the effectiveness of
our approach in capturing both the global distribution of the organ's shape and
its fine details. Our complete refinement pipeline demonstrates remarkable
enhancements in surface representation compared to the initial segmentation,
reducing the Chamfer distance by 70%, the Hausdorff distance by 32%, and the
Earth Mover's distance by 6%. By combining geometric deep learning, denoising
diffusion models, and advanced surface reconstruction techniques, our proposed
method offers a promising solution for accurately modeling the large
intestine's surface and can easily be extended to other anatomical structures. | [
"Kaouther Mouheb",
"Mobina Ghojogh Nejad",
"Lavsen Dahal",
"Ehsan Samei",
"W. Paul Segars",
"Joseph Y. Lo"
] | 2023-09-15 10:10:48 | http://arxiv.org/abs/2309.08289v1 | http://arxiv.org/pdf/2309.08289v1 | 2309.08289v1 |
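The Chamfer distance used above to quantify the refinement can be computed in a few lines with scipy's cKDTree; this is the standard symmetric definition with mean aggregation, which may differ in normalization from the paper's exact variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer(P, Q):
    """Symmetric Chamfer distance between point clouds P (n,3) and Q (m,3):
    mean nearest-neighbour distance from P to Q plus from Q to P."""
    d_pq, _ = cKDTree(Q).query(P)
    d_qp, _ = cKDTree(P).query(Q)
    return d_pq.mean() + d_qp.mean()

rng = np.random.default_rng(0)
P = rng.normal(size=(500, 3))
Q = P + 0.01 * rng.normal(size=(500, 3))   # a slightly perturbed copy
print(chamfer(P, Q))
```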
Cure the headache of Transformers via Collinear Constrained Attention | As the rapid progression of practical applications based on Large Language
Models continues, the importance of extrapolating performance has grown
exponentially in the research domain. In our study, we identified an anomalous
behavior in Transformer models that had been previously overlooked, leading to
chaos around the closest tokens, which carry the most important information.
We've coined this discovery the "headache of Transformers". To address this at
its core, we introduced a novel self-attention structure named Collinear
Constrained Attention (CoCA). This structure can be seamlessly integrated with
existing extrapolation, interpolation methods, and other optimization
strategies designed for traditional Transformer models. We have achieved
excellent extrapolation performance, even for sequence lengths 16 to 24 times
longer during inference, without any fine-tuning of our model. We have also
enhanced CoCA's computational and spatial efficiency to ensure its
practicality. We plan to open-source CoCA shortly. In the meantime, we have made
our code available in the appendix for reproducing the experiments. | [
"Shiyi Zhu",
"Jing Ye",
"Wei Jiang",
"Qi Zhang",
"Yifan Wu",
"Jianguo Li"
] | 2023-09-15 09:36:51 | http://arxiv.org/abs/2309.08646v1 | http://arxiv.org/pdf/2309.08646v1 | 2309.08646v1 |
Sampling-Free Probabilistic Deep State-Space Models | Many real-world dynamical systems can be described as State-Space Models
(SSMs). In this formulation, each observation is emitted by a latent state,
which follows first-order Markovian dynamics. A Probabilistic Deep SSM
(ProDSSM) generalizes this framework to dynamical systems of unknown parametric
form, where the transition and emission models are described by neural networks
with uncertain weights. In this work, we propose the first deterministic
inference algorithm for models of this type. Our framework allows efficient
approximations for training and testing. We demonstrate in our experiments that
our new method can be employed for a variety of tasks and enjoys a superior
balance between predictive performance and computational budget. | [
"Andreas Look",
"Melih Kandemir",
"Barbara Rakitsch",
"Jan Peters"
] | 2023-09-15 09:06:23 | http://arxiv.org/abs/2309.08256v1 | http://arxiv.org/pdf/2309.08256v1 | 2309.08256v1 |
Cross-lingual Knowledge Distillation via Flow-based Voice Conversion for Robust Polyglot Text-To-Speech | In this work, we introduce a framework for cross-lingual speech synthesis,
which involves an upstream Voice Conversion (VC) model and a downstream
Text-To-Speech (TTS) model. The proposed framework consists of 4 stages. In the
first two stages, we use a VC model to convert utterances in the target locale
to the voice of the target speaker. In the third stage, the converted data is
combined with the linguistic features and durations from recordings in the
target language, which are then used to train a single-speaker acoustic model.
Finally, the last stage entails the training of a locale-independent vocoder.
Our evaluations show that the proposed paradigm outperforms state-of-the-art
approaches which are based on training a large multilingual TTS model. In
addition, our experiments demonstrate the robustness of our approach with
different model architectures, languages, speakers and amounts of data.
Moreover, our solution is especially beneficial in low-resource settings. | [
"Dariusz Piotrowski",
"Renard Korzeniowski",
"Alessio Falai",
"Sebastian Cygert",
"Kamil Pokora",
"Georgi Tinchev",
"Ziyao Zhang",
"Kayoko Yanagisawa"
] | 2023-09-15 09:03:14 | http://arxiv.org/abs/2309.08255v1 | http://arxiv.org/pdf/2309.08255v1 | 2309.08255v1 |
Quantitative and Qualitative Evaluation of Reinforcement Learning Policies for Autonomous Vehicles | Optimizing traffic dynamics in an evolving transportation landscape is
crucial, particularly in scenarios where autonomous vehicles (AVs) with varying
levels of autonomy coexist with human-driven cars. This paper presents a novel
approach to optimizing choices of AVs using Proximal Policy Optimization (PPO),
a reinforcement learning algorithm. We learned a policy to minimize traffic
jams (i.e., minimize the time to cross the scenario) and to minimize pollution
in a roundabout in Milan, Italy. Through empirical analysis, we demonstrate
that our approach can reduce time and pollution levels. Furthermore, we
qualitatively evaluate the learned policy using a cutting-edge cockpit to
assess its performance in near-real-world conditions. To gauge the practicality
and acceptability of the policy, we conducted evaluations with human
participants using the simulator, focusing on a range of metrics like traffic
smoothness and safety perception. In general, our findings show that
human-driven vehicles benefit from optimizing AVs dynamics. Also, participants
in the study highlighted that the scenario with 80\% AVs is perceived as safer
than the scenario with 20\%. The same result is obtained for traffic smoothness
perception. | [
"Laura Ferrarotti",
"Massimiliano Luca",
"Gabriele Santin",
"Giorgio Previati",
"Gianpiero Mastinu",
"Elena Campi",
"Lorenzo Uccello",
"Antonino Albanese",
"Praveen Zalaya",
"Alessandro Roccasalva",
"Bruno Lepri"
] | 2023-09-15 09:02:16 | http://arxiv.org/abs/2309.08254v1 | http://arxiv.org/pdf/2309.08254v1 | 2309.08254v1 |
Deep Nonnegative Matrix Factorization with Beta Divergences | Deep Nonnegative Matrix Factorization (deep NMF) has recently emerged as a
valuable technique for extracting multiple layers of features across different
scales. However, all existing deep NMF models and algorithms have primarily
centered their evaluation on the least squares error, which may not be the most
appropriate metric for assessing the quality of approximations on diverse
datasets. For instance, when dealing with data types such as audio signals and
documents, it is widely acknowledged that $\beta$-divergences offer a more
suitable alternative. In this paper, we develop new models and algorithms for
deep NMF using $\beta$-divergences. Subsequently, we apply these techniques to
the extraction of facial features, the identification of topics within document
collections, and the identification of materials within hyperspectral images. | [
"Valentin Leplat",
"Le Thi Khanh Hien",
"Akwum Onwunta",
"Nicolas Gillis"
] | 2023-09-15 08:46:53 | http://arxiv.org/abs/2309.08249v1 | http://arxiv.org/pdf/2309.08249v1 | 2309.08249v1 |
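For a single NMF layer, the classical multiplicative updates for the $\beta$-divergence, the building block that a deep NMF stacks layer-wise (the paper's deep models and algorithms go beyond this sketch): $\beta = 2$ gives least squares, $\beta = 1$ KL, and $\beta = 0$ Itakura-Saito.

```python
import numpy as np

def nmf_beta(V, r, beta=1.0, iters=200, seed=0):
    """Multiplicative-update NMF minimizing the beta-divergence
    D_beta(V || WH) for nonnegative V, alternating updates of H and W."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 30)))
W, H = nmf_beta(V, r=4, beta=1.0)       # beta=1: KL divergence
print("approximation shape:", (W @ H).shape)
```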
A Geometric Perspective on Autoencoders | This paper presents the geometric aspect of the autoencoder framework, which,
despite its importance, has received relatively little attention. Given a set of
high-dimensional data points that approximately lie on some lower-dimensional
manifold, an autoencoder learns the \textit{manifold} and its
\textit{coordinate chart}, simultaneously. This geometric perspective naturally
raises inquiries like "Does a finite set of data points correspond to a single
manifold?" or "Is there only one coordinate chart that can represent the
manifold?". The responses to these questions are negative, implying that there
are multiple solution autoencoders given a dataset. Consequently, they
sometimes produce incorrect manifolds with severely distorted latent space
representations. In this paper, we introduce recent geometric approaches that
address these issues. | [
"Yonghyeon Lee"
] | 2023-09-15 08:41:12 | http://arxiv.org/abs/2309.08247v2 | http://arxiv.org/pdf/2309.08247v2 | 2309.08247v2 |
A Real-time Faint Space Debris Detector With Learning-based LCM | With the development of aerospace technology, the increasing population of
space debris has posed a great threat to the safety of spacecraft. However, the
low intensity of reflected light and high angular velocity of space debris
impede the extraction. Besides, due to the limitations of the ground
observation methods, small space debris can hardly be detected, making it
necessary to enhance the spacecraft's capacity for space situational awareness
(SSA). Considering that traditional methods have some defects in low-SNR target
detection, such as low effectiveness and large time consumption, this paper
proposes a method for low-SNR streak extraction based on local contrast and
maximum likelihood estimation (MLE), which can efficiently detect space objects
with an SNR of 2.0. In the proposed algorithm, local contrast is applied for
crude classification, which returns connected components as preliminary
results; MLE is then performed to reconstruct the connected components
of targets via oriented growth, further improving the precision. The
algorithm has been verified with both simulated streaks and real star tracker
images, and the average centroid error of the proposed algorithm is close to
the state-of-the-art method like ODCC. At the same time, the algorithm in this
paper has significant advantages in efficiency compared with ODCC. In
conclusion, the proposed algorithm offers high speed and precision, making it
promising for the extraction of highly dynamic targets. | [
"Zherui Lu",
"Gangyi Wang",
"Xinguo Wei",
"Jian Li"
] | 2023-09-15 08:37:28 | http://arxiv.org/abs/2309.08244v1 | http://arxiv.org/pdf/2309.08244v1 | 2309.08244v1 |
Topological Node2vec: Enhanced Graph Embedding via Persistent Homology | Node2vec is a graph embedding method that learns a vector representation for
each node of a weighted graph while seeking to preserve relative proximity and
global structure. Numerical experiments suggest Node2vec struggles to recreate
the topology of the input graph. To resolve this we introduce a topological
loss term to be added to the training loss of Node2vec which tries to align the
persistence diagram (PD) of the resulting embedding as closely as possible to
that of the input graph. Following results in computational optimal transport,
we carefully adapt entropic regularization to PD metrics, allowing us to
measure the discrepancy between PDs in a differentiable way. Our modified loss
function can then be minimized through gradient descent to reconstruct both the
geometry and the topology of the input graph. We showcase the benefits of this
approach using demonstrative synthetic examples. | [
"Yasuaki Hiraoka",
"Yusuke Imoto",
"Killian Meehan",
"Théo Lacombe",
"Toshiaki Yachimura"
] | 2023-09-15 08:31:26 | http://arxiv.org/abs/2309.08241v1 | http://arxiv.org/pdf/2309.08241v1 | 2309.08241v1 |
Ensuring Topological Data-Structure Preservation under Autoencoder Compression due to Latent Space Regularization in Gauss--Legendre nodes | We formulate a data independent latent space regularisation constraint for
general unsupervised autoencoders. The regularisation rests on sampling the
autoencoder Jacobian at the Legendre nodes, the centrepiece of Gauss-Legendre
quadrature. Revisiting this classic construction enables us to prove that regularised
autoencoders ensure a one-to-one re-embedding of the initial data manifold to
its latent representation. Demonstrations show that previously proposed
regularisation strategies, such as contractive autoencoding, cause topological
defects even for simple examples, as do convolution-based (variational)
autoencoders. In contrast, topological preservation is ensured by standard
multilayer perceptron neural networks when they are regularised with our
approach. This observation extends from the classic FashionMNIST dataset to
real-world encoding problems for MRI brain
scans, suggesting that, across disciplines, reliable low-dimensional
representations of complex high-dimensional datasets can be delivered by
this regularisation technique. | [
"Chethan Krishnamurthy Ramanaik",
"Juan-Esteban Suarez Cardona",
"Anna Willmann",
"Pia Hanfeld",
"Nico Hoffmann",
"Michael Hecht"
] | 2023-09-15 07:58:26 | http://arxiv.org/abs/2309.08228v2 | http://arxiv.org/pdf/2309.08228v2 | 2309.08228v2 |
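A sketch of the sampling idea above: obtain Gauss-Legendre nodes with numpy and evaluate a decoder Jacobian at them as a quadrature-weighted regularization term. The toy decoder, finite-difference Jacobian, and the specific penalty form are placeholders; the paper derives the actual constraint.

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1] for a 1-D latent coordinate.
nodes, weights = np.polynomial.legendre.leggauss(8)

def decoder(z):                        # toy decoder R -> R^3 (placeholder)
    return np.stack([np.sin(z), np.cos(z), z**2], axis=-1)

def jacobian_fd(f, z, h=1e-5):         # finite-difference Jacobian columns
    return (f(z + h) - f(z - h)) / (2 * h)

# Quadrature of a Jacobian penalty over the latent interval, e.g. keeping
# the Jacobian norm away from zero to encourage a one-to-one embedding.
J = jacobian_fd(decoder, nodes)                      # shape (8, 3)
penalty = np.sum(weights * (1.0 / (np.linalg.norm(J, axis=1) ** 2 + 1e-8)))
print("quadrature-sampled regulariser:", penalty)
```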
VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime Inference | Lifelong learning, also referred to as continual learning, is the problem of
training an AI agent continuously while also preventing it from forgetting its
previously acquired knowledge. Most of the existing methods primarily focus on
lifelong learning within a static environment and lack the ability to mitigate
forgetting in a quickly-changing dynamic environment. Streaming lifelong
learning is a challenging setting of lifelong learning with the goal of
continuous learning in a dynamic non-stationary environment without forgetting.
We introduce a novel approach to lifelong learning, which is streaming,
requires a single pass over the data, can learn in a class-incremental manner,
and can be evaluated on-the-fly (anytime inference). To accomplish these, we
propose virtual gradients for continual representation learning to prevent
catastrophic forgetting and leverage an exponential-moving-average-based
semantic memory to further enhance performance. Extensive experiments on
diverse datasets demonstrate our method's efficacy and superior performance
over existing methods. | [
"Soumya Banerjee",
"Vinay K. Verma",
"Avideep Mukherjee",
"Deepak Gupta",
"Vinay P. Namboodiri",
"Piyush Rai"
] | 2023-09-15 07:54:49 | http://arxiv.org/abs/2309.08227v1 | http://arxiv.org/pdf/2309.08227v1 | 2309.08227v1 |
Model-based Deep Learning for High-Dimensional Periodic Structures | This work presents a deep learning surrogate model for the fast simulation of
high-dimensional frequency selective surfaces. We consider unit cells built as
multiple concatenated stacks of screens, whose design requires control over
many geometrical degrees of freedom. Thanks to the introduction of physical
insight into the model, it can produce accurate predictions of the
S-parameters of a given structure after training with a reduced dataset. The
proposed model is highly versatile and it can be used with any kind of
frequency selective surface, based on either perforations or patches of any
arbitrary geometry. Numerical examples are presented for the case of
frequency selective surfaces composed of screens with rectangular perforations,
showing an excellent agreement between the predicted performance and such
obtained with a full-wave simulator. | [
"Lucas Polo-López",
"Luc Le Magoarou",
"Romain Contreres",
"María García-Vigueras"
] | 2023-09-15 07:38:18 | http://arxiv.org/abs/2309.12223v1 | http://arxiv.org/pdf/2309.12223v1 | 2309.12223v1 |
Unified Risk Analysis for Weakly Supervised Learning | Among the flourishing research of weakly supervised learning (WSL), we
recognize the lack of a unified interpretation of the mechanism behind the
weakly supervised scenarios, let alone a systematic treatment of the risk
rewrite problem, a crucial step in the empirical risk minimization approach. In
this paper, we introduce a framework providing a comprehensive understanding
and a unified methodology for WSL. The formulation component of the framework,
leveraging a contamination perspective, provides a unified interpretation of
how weak supervision is formed and subsumes fifteen existing WSL settings. The
induced reduction graphs offer comprehensive connections among WSL settings. The
analysis component of the framework, viewed as a decontamination process,
provides a systematic method of conducting risk rewrite. In addition to the
conventional inverse matrix approach, we devise a novel strategy called
marginal chain aiming to decontaminate distributions. We justify the
feasibility of the proposed framework by recovering existing rewrites reported
in the literature. | [
"Chao-Kai Chiang",
"Masashi Sugiyama"
] | 2023-09-15 07:30:15 | http://arxiv.org/abs/2309.08216v1 | http://arxiv.org/pdf/2309.08216v1 | 2309.08216v1 |
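For the weakly supervised learning record above, a sketch of the conventional inverse-matrix risk rewrite the abstract contrasts with its marginal-chain strategy: with a known class-conditional noise transition matrix T, applying T^{-1} to the vector of per-class losses gives an unbiased estimate of the clean loss (the classic backward correction); names and numbers are illustrative.

```python
import numpy as np

def backward_corrected_loss(loss_per_class, T, noisy_label):
    """loss_per_class: (C,) loss of the prediction against each class label.
    T[i, j] = P(noisy label = j | clean label = i), assumed known and invertible."""
    corrected = np.linalg.solve(T, loss_per_class)  # T^{-1} applied to the loss vector
    return corrected[noisy_label]

T = np.array([[0.8, 0.2],
              [0.3, 0.7]])
losses = np.array([0.1, 2.3])        # e.g. cross-entropy against class 0 and class 1
print(backward_corrected_loss(losses, T, noisy_label=1))
```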
HM-Conformer: A Conformer-based audio deepfake detection system with hierarchical pooling and multi-level classification token aggregation methods | Audio deepfake detection (ADD) is the task of detecting spoofing attacks
generated by text-to-speech or voice conversion systems. Spoofing evidence,
which helps to distinguish between spoofed and bona-fide utterances, might
exist either locally or globally in the input features. To capture these, the
Conformer, which consists of Transformers and CNN, possesses a suitable
structure. However, since the Conformer was designed for sequence-to-sequence
tasks, its direct application to ADD tasks may be sub-optimal. To tackle this
limitation, we propose HM-Conformer by adopting two components: (1) a
hierarchical pooling method that progressively reduces the sequence length to
eliminate duplicated information, and (2) a multi-level classification token
aggregation method that utilizes classification tokens to gather information
from different blocks. Owing to these components, HM-Conformer can efficiently
detect spoofing evidence by processing various sequence lengths and aggregating
them. In experimental results on the ASVspoof 2021 Deepfake dataset,
HM-Conformer achieved a 15.71% EER, showing competitive performance compared to
recent systems. | [
"Hyun-seo Shin",
"Jungwoo Heo",
"Ju-ho Kim",
"Chan-yeong Lim",
"Wonbin Kim",
"Ha-Jin Yu"
] | 2023-09-15 07:18:30 | http://arxiv.org/abs/2309.08208v1 | http://arxiv.org/pdf/2309.08208v1 | 2309.08208v1 |
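For the HM-Conformer record above, a minimal sketch of hierarchical pooling: halve the token sequence between encoder blocks so later blocks see shorter, less redundant sequences. A plain Transformer layer stands in for the Conformer block, and the pooling placement and sizes are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicallyPooledEncoder(nn.Module):
    def __init__(self, dim=144, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(n_blocks)
        ])
        self.pool = nn.AvgPool1d(kernel_size=2, stride=2)

    def forward(self, x):                 # x: (batch, time, dim)
        for block in self.blocks:
            x = block(x)
            x = self.pool(x.transpose(1, 2)).transpose(1, 2)  # halve the time axis
        return x

enc = HierarchicallyPooledEncoder()
out = enc(torch.randn(2, 8, 144))         # time axis: 8 -> 4 -> 2 -> 1
```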
Gaussian Processes with Linear Multiple Kernel: Spectrum Design and Distributed Learning for Multi-Dimensional Data | Gaussian processes (GPs) have emerged as a prominent technique for machine
learning and signal processing. A key component in GP modeling is the choice of
kernel, and linear multiple kernels (LMKs) have become an attractive kernel
class due to their powerful modeling capacity and interpretability. This paper
focuses on the grid spectral mixture (GSM) kernel, an LMK that can approximate
arbitrary stationary kernels. Specifically, we propose a novel GSM kernel
formulation for multi-dimensional data that reduces the number of
hyper-parameters compared to existing formulations, while also retaining a
favorable optimization structure and approximation capability. In addition, to
make the large-scale hyper-parameter optimization in the GSM kernel tractable,
we first introduce the distributed SCA (DSCA) algorithm. Building on this, we
propose the doubly distributed SCA (D$^2$SCA) algorithm based on the
alternating direction method of multipliers (ADMM) framework, which allows us
to cooperatively learn the GSM kernel in the context of big data while
maintaining data privacy. Furthermore, we tackle the inherent communication
bandwidth restriction in distributed frameworks, by quantizing the
hyper-parameters in D$^2$SCA, resulting in the quantized doubly distributed SCA
(QD$^2$SCA) algorithm. Theoretical analysis establishes convergence guarantees
for the proposed algorithms, while experiments on diverse datasets demonstrate
the superior prediction performance and efficiency of our methods. | [
"Richard Cornelius Suwandi",
"Zhidi Lin",
"Feng Yin"
] | 2023-09-15 07:05:33 | http://arxiv.org/abs/2309.08201v1 | http://arxiv.org/pdf/2309.08201v1 | 2309.08201v1 |
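For the Gaussian-process record above, a hedged one-dimensional sketch of a grid spectral mixture kernel: the spectral means and widths sit on a fixed grid and only the non-negative weights would be learned. This follows the standard spectral-mixture form, not the paper's multi-dimensional formulation.

```python
import numpy as np

def gsm_kernel(x1, x2, weights, mus, sigmas):
    """k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 sigma_q^2) * cos(2 pi tau mu_q)."""
    tau = x1[:, None] - x2[None, :]          # pairwise lags
    k = np.zeros_like(tau)
    for w, mu, sig in zip(weights, mus, sigmas):
        k += w * np.exp(-2.0 * np.pi**2 * tau**2 * sig**2) \
               * np.cos(2.0 * np.pi * tau * mu)
    return k

x = np.linspace(0.0, 1.0, 5)
K = gsm_kernel(x, x, weights=[1.0, 0.5], mus=[0.0, 2.0], sigmas=[0.1, 0.3])
```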
An Explainable Deep-learning Model of Proton Auroras on Mars | Proton auroras are widely observed on the day side of Mars, identified as a
significant intensity enhancement in the hydrogen Ly alpha (121.6 nm) emission
between 120 and 150 km altitudes. Solar wind protons penetrating as energetic
neutral atoms into the Martian thermosphere are thought to be responsible for
these auroras. Understanding proton auroras is therefore important for
characterizing the solar wind interaction with the atmosphere of Mars. Recent
observations of spatially localized "patchy" proton auroras suggest a possible
direct deposition of protons into the atmosphere of Mars during unstable solar
wind conditions. Here, we develop a purely data-driven model of proton auroras
using Mars Atmosphere and Volatile EvolutioN (MAVEN) in situ observations and
limb scans of Ly alpha emissions between 2014 and 2022. We train an artificial
neural network that reproduces individual Ly alpha intensities with a Pearson
correlation of 0.95 along with a faithful reconstruction of the observed Ly
alpha emission altitude profiles. By performing a SHapley Additive exPlanations
(SHAP) analysis, we find that Solar Zenith Angle, seasonal CO2 atmosphere
variability, solar wind temperature, and density are the most important
features for the modelled proton auroras. We also demonstrate that our model
can serve as an inexpensive tool for simulating and characterizing Ly alpha
response under a variety of seasonal and upstream solar wind conditions. | [
"Dattaraj B. Dhuri",
"Dimitra Atri",
"Ahmed AlHantoobi"
] | 2023-09-15 06:53:13 | http://arxiv.org/abs/2309.08195v1 | http://arxiv.org/pdf/2309.08195v1 | 2309.08195v1 |
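For the proton-aurora record above, a hedged sketch of the SHAP-style feature attribution the abstract reports; the regressor, the synthetic data, and the four stand-in features are placeholders, not MAVEN observations.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 4))     # stand-ins: SZA, CO2 proxy, SW temperature, SW density
y = X @ np.array([2.0, 1.0, 0.5, 0.3]) + 0.1 * rng.standard_normal(200)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, y)
explainer = shap.KernelExplainer(model.predict, X[:50])   # background sample
shap_values = explainer.shap_values(X[:10])
print(np.abs(shap_values).mean(axis=0))   # mean |SHAP| as global feature importance
```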
A Precision-Scalable RISC-V DNN Processor with On-Device Learning Capability at the Extreme Edge | Extreme edge platforms, such as in-vehicle smart devices, require efficient
deployment of quantized deep neural networks (DNNs) to enable intelligent
applications with limited amounts of energy, memory, and computing resources.
However, many edge devices struggle to boost inference throughput of various
quantized DNNs due to the varying quantization levels, and these devices lack
floating-point (FP) support for on-device learning, which prevents them from
improving model accuracy while ensuring data privacy. To tackle the challenges
above, we propose a precision-scalable RISC-V DNN processor with on-device
learning capability. It facilitates diverse precision levels of fixed-point DNN
inference, spanning from 2-bit to 16-bit, and enhances on-device learning
through improved support with FP16 operations. Moreover, we employ multiple
methods such as FP16 multiplier reuse and multi-precision integer multiplier
reuse, along with balanced mapping of FPGA resources, to significantly improve
hardware resource utilization. Experimental results on the Xilinx ZCU102 FPGA
show that our processor significantly improves inference throughput by
1.6$\sim$14.6$\times$ and energy efficiency by 1.1$\sim$14.6$\times$ across
various DNNs, compared to the prior art, XpulpNN. Additionally, our processor
achieves a 16.5$\times$ higher FP throughput for on-device learning. | [
"Longwei Huang",
"Chao Fang",
"Qiong Li",
"Jun Lin",
"Zhongfeng Wang"
] | 2023-09-15 06:25:10 | http://arxiv.org/abs/2309.08186v1 | http://arxiv.org/pdf/2309.08186v1 | 2309.08186v1 |
Unveiling Invariances via Neural Network Pruning | Invariance describes transformations that do not alter data's underlying
semantics. Neural networks that preserve natural invariance capture good
inductive biases and achieve superior performance. Hence, modern networks are
handcrafted to handle well-known invariances (e.g., translations). We propose a
framework to learn novel network architectures that capture data-dependent
invariances via pruning. Our learned architectures consistently outperform
dense neural networks on both vision and tabular datasets in both efficiency
and effectiveness. We demonstrate our framework on multiple deep learning
models across 3 vision and 40 tabular datasets. | [
"Derek Xu",
"Yizhou Sun",
"Wei Wang"
] | 2023-09-15 05:38:33 | http://arxiv.org/abs/2309.08171v1 | http://arxiv.org/pdf/2309.08171v1 | 2309.08171v1 |
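For the pruning record above, a minimal sketch of discovering a sparse sub-architecture by magnitude pruning with PyTorch's built-in utilities; the abstract does not specify the paper's pruning criterion, so L1 unstructured pruning at 80% sparsity is an assumption.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Zero out the 80% smallest-magnitude weights in every linear layer.
for module in net.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# ... fine-tune the pruned network here, then make the masks permanent:
for module in net.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```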
To Predict or to Reject: Causal Effect Estimation with Uncertainty on Networked Data | Due to the imbalanced nature of networked observational data, the causal
effect predictions for some individuals can severely violate the
positivity/overlap assumption, rendering the estimations unreliable. Nevertheless,
this potential risk of individual-level treatment effect estimation on
networked data has been largely under-explored. To create a more trustworthy
causal effect estimator, we propose the uncertainty-aware graph deep kernel
learning (GraphDKL) framework with Lipschitz constraint to model the prediction
uncertainty with Gaussian process and identify unreliable estimations. To the
best of our knowledge, GraphDKL is the first framework to tackle the violation
of positivity assumption when performing causal effect estimation with graphs.
With extensive experiments, we demonstrate the superiority of our proposed
method in uncertainty-aware causal effect estimation on networked data. | [
"Hechuan Wen",
"Tong Chen",
"Li Kheng Chai",
"Shazia Sadiq",
"Kai Zheng",
"Hongzhi Yin"
] | 2023-09-15 05:25:43 | http://arxiv.org/abs/2309.08165v1 | http://arxiv.org/pdf/2309.08165v1 | 2309.08165v1 |
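For the causal-effect record above, a sketch of the generic predict-or-reject pattern: abstain whenever the Gaussian-process predictive standard deviation is too large. A plain sklearn GP on synthetic data stands in for the paper's graph deep kernel, and the 90% retention threshold is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = X.sum(axis=1) + 0.05 * rng.standard_normal(100)

gp = GaussianProcessRegressor().fit(X, y)
mean, std = gp.predict(rng.random((20, 3)), return_std=True)

threshold = np.quantile(std, 0.9)                        # reject the most uncertain 10%
predictions = np.where(std <= threshold, mean, np.nan)   # NaN marks "rejected"
```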
Uncovering Neural Scaling Laws in Molecular Representation Learning | Molecular Representation Learning (MRL) has emerged as a powerful tool for
drug and materials discovery in a variety of tasks such as virtual screening
and inverse design. While there has been a surge of interest in advancing
model-centric techniques, the influence of both data quantity and quality on
molecular representations is not yet clearly understood within this field. In
this paper, we delve into the neural scaling behaviors of MRL from a
data-centric viewpoint, examining four key dimensions: (1) data modalities, (2)
dataset splitting, (3) the role of pre-training, and (4) model capacity. Our
empirical studies confirm a consistent power-law relationship between data
volume and MRL performance across these dimensions. Additionally, through
detailed analysis, we identify potential avenues for improving learning
efficiency. To challenge these scaling laws, we adapt seven popular data
pruning strategies to molecular data and benchmark their performance. Our
findings underline the importance of data-centric MRL and highlight possible
directions for future research. | [
"Dingshuo Chen",
"Yanqiao Zhu",
"Jieyu Zhang",
"Yuanqi Du",
"Zhixun Li",
"Qiang Liu",
"Shu Wu",
"Liang Wang"
] | 2023-09-15 05:05:19 | http://arxiv.org/abs/2309.15123v2 | http://arxiv.org/pdf/2309.15123v2 | 2309.15123v2 |
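For the scaling-law record above, a short sketch of how a power-law relationship between data volume and error is typically checked: a power law is a straight line in log-log space, so fit a degree-1 polynomial there. The numbers below are synthetic, not the paper's results.

```python
import numpy as np

n = np.array([1e3, 3e3, 1e4, 3e4, 1e5])     # training-set sizes
err = 2.0 * n ** -0.25                       # synthetic errors obeying err = a * n^b
slope, intercept = np.polyfit(np.log(n), np.log(err), 1)
print(f"exponent b = {slope:.3f}, prefactor a = {np.exp(intercept):.3f}")
```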
AdSEE: Investigating the Impact of Image Style Editing on Advertisement Attractiveness | Online advertisements are important elements in e-commerce sites, social
media platforms, and search engines. With the increasing popularity of mobile
browsing, many online ads are displayed with visual information in the form of
a cover image in addition to text descriptions to grab the attention of users.
Various recent studies have focused on predicting the click rates of online
advertisements aware of visual features or composing optimal advertisement
elements to enhance visibility. In this paper, we propose Advertisement Style
Editing and Attractiveness Enhancement (AdSEE), which explores whether semantic
editing of ad images can affect or alter the popularity of online
advertisements. We introduce StyleGAN-based facial semantic editing and
inversion to ad images and train a click-rate predictor that maps GAN-based
face latent representations, in addition to traditional visual and textual
features, to click rates. Through a large collected dataset named QQ-AD,
containing 20,527 online ads, we perform extensive offline tests to study how
different semantic directions and their edit coefficients may impact click
rates. We further design a Genetic Advertisement Editor to efficiently search
for the optimal edit directions and intensity given an input ad cover image to
enhance its projected click rates. Online A/B tests performed over a period of
5 days have verified the increased click-through rates of AdSEE-edited samples
as compared to a control group of original ads, verifying the relation between
image styles and ad popularity. We open source the code for AdSEE research at
https://github.com/LiyaoJiang1998/adsee. | [
"Liyao Jiang",
"Chenglin Li",
"Haolan Chen",
"Xiaodong Gao",
"Xinwang Zhong",
"Yang Qiu",
"Shani Ye",
"Di Niu"
] | 2023-09-15 04:52:49 | http://arxiv.org/abs/2309.08159v1 | http://arxiv.org/pdf/2309.08159v1 | 2309.08159v1 |
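For the AdSEE record above, a tiny sketch of the latent semantic edit it builds on: move a face's GAN latent code along a semantic direction with an edit coefficient. The direction and coefficient here are placeholders, not learned quantities.

```python
import numpy as np

def edit_latent(w, direction, alpha):
    """w: (512,) StyleGAN-style latent; direction: semantic edit vector; alpha: strength."""
    return w + alpha * direction / np.linalg.norm(direction)

w = np.random.randn(512)
smile_direction = np.random.randn(512)    # placeholder for a learned semantic direction
w_edited = edit_latent(w, smile_direction, alpha=2.0)
```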
A Testbed for Automating and Analysing Mobile Devices and their Applications | The need for improved network situational awareness has been highlighted by
the growing complexity and severity of cyber-attacks. Mobile phones pose a
significant risk to network situational awareness due to their dynamic
behaviour and lack of visibility on a network. Machine learning techniques
enhance situational awareness by providing administrators insight into the
devices and activities which form their network. Developing machine learning
techniques for situational awareness requires a testbed to generate and label
network traffic. Current testbeds, however, are unable to automate the
generation and labelling of realistic network traffic. To address this, we
describe a testbed which automates applications on mobile devices to generate
and label realistic traffic. From this testbed, two labelled datasets of
network traffic have been created. We provide an analysis of the testbed
automation reliability and benchmark the datasets for the task of application
classification. | [
"Lachlan Simpson",
"Kyle Millar",
"Adriel Cheng",
"Hong Gunn Chew",
"Cheng-Chew Lim"
] | 2023-09-15 04:48:58 | http://arxiv.org/abs/2309.08158v1 | http://arxiv.org/pdf/2309.08158v1 | 2309.08158v1 |
Two-Step Knowledge Distillation for Tiny Speech Enhancement | Tiny, causal models are crucial for embedded audio machine learning
applications. Model compression can be achieved via distilling knowledge from a
large teacher into a smaller student model. In this work, we propose a novel
two-step approach for tiny speech enhancement model distillation. In contrast
to the standard approach of a weighted mixture of distillation and supervised
losses, we firstly pre-train the student using only the knowledge distillation
(KD) objective, after which we switch to a fully supervised training regime. We
also propose a novel fine-grained similarity-preserving KD loss, which aims to
match the student's intra-activation Gram matrices to that of the teacher. Our
method demonstrates broad improvements, but particularly shines in adverse
conditions including high compression and low signal to noise ratios (SNR),
yielding signal to distortion ratio gains of 0.9 dB and 1.1 dB, respectively,
at -5 dB input SNR and 63x compression compared to baseline. | [
"Rayan Daod Nathoo",
"Mikolaj Kegler",
"Marko Stamenovic"
] | 2023-09-15 04:19:38 | http://arxiv.org/abs/2309.08144v1 | http://arxiv.org/pdf/2309.08144v1 | 2309.08144v1 |
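For the speech-enhancement distillation record above, a sketch of a similarity-preserving KD loss in the spirit of the abstract: match row-normalised Gram matrices of student and teacher activations (cf. Tung & Mori, 2019). The paper's fine-grained intra-activation variant may differ in detail.

```python
import torch
import torch.nn.functional as F

def similarity_kd_loss(a_student, a_teacher):
    """a_*: (batch, features) activations from one layer; feature sizes may differ."""
    def normed_gram(a):
        g = a @ a.t()                        # (batch, batch) Gram matrix
        return F.normalize(g, p=2, dim=1)    # row-wise L2 normalisation
    return F.mse_loss(normed_gram(a_student), normed_gram(a_teacher))

loss = similarity_kd_loss(torch.randn(16, 64), torch.randn(16, 128))
```

Because both Gram matrices are batch-by-batch, the loss is well defined even when the student and teacher have different feature widths.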
PromptTTS++: Controlling Speaker Identity in Prompt-Based Text-to-Speech Using Natural Language Descriptions | We propose PromptTTS++, a prompt-based text-to-speech (TTS) synthesis system
that allows control over speaker identity using natural language descriptions.
To control speaker identity within the prompt-based TTS framework, we introduce
the concept of speaker prompt, which describes voice characteristics (e.g.,
gender-neutral, young, old, and muffled) designed to be approximately
independent of speaking style. Since there is no large-scale dataset containing
speaker prompts, we first construct a dataset based on the LibriTTS-R corpus
with manually annotated speaker prompts. We then employ a diffusion-based
acoustic model with mixture density networks to model diverse speaker factors
in the training data. Unlike previous studies that rely on style prompts
describing only a limited aspect of speaker individuality, such as pitch,
speaking speed, and energy, our method utilizes an additional speaker prompt to
effectively learn the mapping from natural language descriptions to the
acoustic features of diverse speakers. Our subjective evaluation results show
that the proposed method can better control speaker characteristics than the
methods without the speaker prompt. Audio samples are available at
https://reppy4620.github.io/demo.promptttspp/. | [
"Reo Shimizu",
"Ryuichi Yamamoto",
"Masaya Kawamura",
"Yuma Shirahata",
"Hironori Doi",
"Tatsuya Komatsu",
"Kentaro Tachibana"
] | 2023-09-15 04:11:37 | http://arxiv.org/abs/2309.08140v1 | http://arxiv.org/pdf/2309.08140v1 | 2309.08140v1 |
Audio Difference Learning for Audio Captioning | This study introduces a novel training paradigm, audio difference learning,
for improving audio captioning. The fundamental concept of the proposed
learning method is to create a feature representation space that preserves the
relationships between audio clips, enabling the generation of captions that detail
intricate audio information. This method employs a reference audio along with
the input audio, both of which are transformed into feature representations via
a shared encoder. Captions are then generated from these differential features
to describe their differences. Furthermore, a unique technique is proposed that
involves mixing the input audio with additional audio, and using the additional
audio as a reference. This results in the difference between the mixed audio
and the reference audio reverting to the original input audio. This allows
the original input's caption to be used as the caption for their difference,
eliminating the need for additional annotations for the differences. In the
experiments using the Clotho and ESC50 datasets, the proposed method
demonstrated an improvement in the SPIDEr score by 7% compared to conventional
methods. | [
"Tatsuya Komatsu",
"Yusuke Fujita",
"Kazuya Takeda",
"Tomoki Toda"
] | 2023-09-15 04:11:37 | http://arxiv.org/abs/2309.08141v1 | http://arxiv.org/pdf/2309.08141v1 | 2309.08141v1 |
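For the audio-difference record above, a sketch of the mixing trick as described: encode the mixture of the input and additional audio, take the difference against the encoding of the additional audio used as reference, and caption that difference with the original input's caption. The shared encoder is a placeholder, and representing the difference as a subtraction of encodings is a simplifying assumption.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16000, 256), nn.ReLU(), nn.Linear(256, 128))

input_audio = torch.randn(1, 16000)       # 1 s at 16 kHz, flattened for simplicity
extra_audio = torch.randn(1, 16000)

mixed = input_audio + extra_audio          # mix the input with additional audio
diff_feat = encoder(mixed) - encoder(extra_audio)
# diff_feat would be trained to produce input_audio's original caption,
# so no extra annotations of the differences are needed.
```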
Oobleck: Resilient Distributed Training of Large Models Using Pipeline Templates | Oobleck enables resilient distributed training of large DNN models with
guaranteed fault tolerance. It takes a planning-execution co-design approach,
where it first generates a set of heterogeneous pipeline templates and
instantiates at least $f+1$ logically equivalent pipeline replicas to tolerate
any $f$ simultaneous failures. During execution, it relies on
already-replicated model states across the replicas to provide fast recovery.
Oobleck provably guarantees that some combination of the initially created
pipeline templates can be used to cover all available resources after $f$ or
fewer simultaneous failures, thereby avoiding resource idling at all times.
Evaluation on large DNN models with billions of parameters shows that Oobleck
provides consistently high throughput, and it outperforms state-of-the-art
fault tolerance solutions like Bamboo and Varuna by up to $13.9\times$. | [
"Insu Jang",
"Zhenning Yang",
"Zhen Zhang",
"Xin Jin",
"Mosharaf Chowdhury"
] | 2023-09-15 03:27:02 | http://arxiv.org/abs/2309.08125v1 | http://arxiv.org/pdf/2309.08125v1 | 2309.08125v1 |
InvestLM: A Large Language Model for Investment using Financial Domain Instruction Tuning | We present a new financial domain large language model, InvestLM, tuned on
LLaMA-65B (Touvron et al., 2023), using a carefully curated instruction dataset
related to financial investment. Inspired by less-is-more-for-alignment (Zhou
et al., 2023), we manually curate a small yet diverse instruction dataset,
covering a wide range of financial related topics, from Chartered Financial
Analyst (CFA) exam questions to SEC filings to Stackexchange quantitative
finance discussions. InvestLM shows strong capabilities in understanding
financial text and provides helpful responses to investment related questions.
Financial experts, including hedge fund managers and research analysts, rate
InvestLM's response as comparable to those of state-of-the-art commercial
models (GPT-3.5, GPT-4 and Claude-2). Zero-shot evaluation on a set of
financial NLP benchmarks demonstrates strong generalizability. From a research
perspective, this work suggests that a high-quality domain specific LLM can be
tuned using a small set of carefully curated instructions on a well-trained
foundation model, which is consistent with the Superficial Alignment Hypothesis
(Zhou et al., 2023). From a practical perspective, this work develops a
state-of-the-art financial domain LLM with superior capability in understanding
financial texts and providing helpful investment advice, potentially enhancing
the work efficiency of financial professionals. We release the model parameters
to the research community. | [
"Yi Yang",
"Yixuan Tang",
"Kar Yan Tam"
] | 2023-09-15 02:59:31 | http://arxiv.org/abs/2309.13064v1 | http://arxiv.org/pdf/2309.13064v1 | 2309.13064v1 |
Fast and Accurate Deep Loop Closing and Relocalization for Reliable LiDAR SLAM | Loop closing and relocalization are crucial techniques to establish reliable
and robust long-term SLAM by addressing pose estimation drift and degeneration.
This article begins by formulating loop closing and relocalization within a
unified framework. Then, we propose a novel multi-head network LCR-Net to
tackle both tasks effectively. It exploits novel feature extraction and
pose-aware attention mechanism to precisely estimate similarities and 6-DoF
poses between pairs of LiDAR scans. In the end, we integrate our LCR-Net into a
SLAM system and achieve robust and accurate online LiDAR SLAM in outdoor
driving environments. We thoroughly evaluate our LCR-Net through three setups
derived from loop closing and relocalization, including candidate retrieval,
closed-loop point cloud registration, and continuous relocalization using
multiple datasets. The results demonstrate that LCR-Net excels in all three
tasks, surpassing the state-of-the-art methods and exhibiting a remarkable
generalization ability. Notably, our LCR-Net outperforms baseline methods
without using a time-consuming robust pose estimator, rendering it suitable for
online SLAM applications. To the best of our knowledge, the integration of LCR-Net
yields the first LiDAR SLAM with the capability of deep loop closing and
relocalization. The implementation of our methods will be made open-source. | [
"Chenghao Shi",
"Xieyuanli Chen",
"Junhao Xiao",
"Bin Dai",
"Huimin Lu"
] | 2023-09-15 00:59:31 | http://arxiv.org/abs/2309.08086v1 | http://arxiv.org/pdf/2309.08086v1 | 2309.08086v1 |
Supervised Stochastic Neighbor Embedding Using Contrastive Learning | The stochastic neighbor embedding (SNE) methods $t$-SNE and UMAP are two of
the most popular dimensionality reduction methods for data visualization.
Contrastive learning, especially self-supervised contrastive learning (SSCL),
has shown
great success in embedding features from unlabeled data. The conceptual
connection between SNE and SSCL has been exploited. In this work, within the
scope of preserving neighboring information of a dataset, we extend the
self-supervised contrastive approach to the fully-supervised setting, allowing
us to effectively leverage label information. Clusters of samples belonging to
the same class are pulled together in low-dimensional embedding space, while
simultaneously pushing apart clusters of samples from different classes. | [
"Yi Zhang"
] | 2023-09-15 00:26:21 | http://arxiv.org/abs/2309.08077v1 | http://arxiv.org/pdf/2309.08077v1 | 2309.08077v1 |
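For the supervised-SNE record above, a compact sketch of a supervised contrastive objective that pulls same-class embeddings together and pushes different classes apart (cf. Khosla et al., 2020); the paper's exact SNE-flavoured loss may differ.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(z, labels, temperature=0.1):
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                         # pairwise similarities
    pos_mask = labels[:, None].eq(labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                            # a sample is not its own positive
    sim = sim - torch.eye(len(z), device=z.device) * 1e9  # drop self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    return -((pos_mask * log_prob).sum(dim=1) / n_pos).mean()

loss = sup_con_loss(torch.randn(32, 16), torch.randint(0, 4, (32,)))
```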
A Stochastic Online Forecast-and-Optimize Framework for Real-Time Energy Dispatch in Virtual Power Plants under Uncertainty | Aggregating distributed energy resources in power systems significantly
increases uncertainties, in particular caused by the fluctuation of renewable
energy generation. This issue has made it necessary to widely exploit advanced
predictive control techniques under uncertainty to ensure long-term economics
and decarbonization. In this paper, we propose a real-time
uncertainty-aware energy dispatch framework, which is composed of two key
elements: (i) A hybrid forecast-and-optimize sequential task, integrating deep
learning-based forecasting and stochastic optimization, where these two stages
are connected by the uncertainty estimation at multiple temporal resolutions;
(ii) An efficient online data augmentation scheme, jointly involving model
pre-training and online fine-tuning stages. In this way, the proposed framework
is able to rapidly adapt to the real-time data distribution, to target
uncertainties caused by data drift, model discrepancy and environment
perturbations in the control process, and finally to realize an optimal and
robust dispatch solution. The proposed framework won the championship in
CityLearn Challenge 2022, which provided an influential opportunity to
investigate the potential of AI application in the energy domain. In addition,
comprehensive experiments are conducted to interpret its effectiveness in the
real-life scenario of smart building energy management. | [
"Wei Jiang",
"Zhongkai Yi",
"Li Wang",
"Hanwei Zhang",
"Jihai Zhang",
"Fangquan Lin",
"Cheng Yang"
] | 2023-09-15 00:04:00 | http://arxiv.org/abs/2309.08642v1 | http://arxiv.org/pdf/2309.08642v1 | 2309.08642v1 |
Morphologically-Aware Consensus Computation via Heuristics-based IterATive Optimization (MACCHIatO) | The extraction of consensus segmentations from several binary or
probabilistic masks is important to solve various tasks such as the analysis of
inter-rater variability or the fusion of several neural network outputs. One of
the most widely used methods to obtain such a consensus segmentation is the
STAPLE algorithm. In this paper, we first demonstrate that the output of that
algorithm is heavily impacted by the background size of images and the choice
of the prior. We then propose a new method to construct a binary or a
probabilistic consensus segmentation based on the Fr\'{e}chet means of
carefully chosen distances which makes it totally independent of the image
background size. We provide a heuristic approach to optimize this criterion
such that a voxel's class is fully determined by its voxel-wise distance to the
different masks, the connected component it belongs to and the group of raters
who segmented it. We extensively compared our method on several datasets with
the STAPLE method and the naive segmentation averaging method, showing that it
leads to binary consensus masks of intermediate size between Majority Voting
and STAPLE and to different posterior probabilities than Mask Averaging and
STAPLE methods. Our code is available at
https://gitlab.inria.fr/dhamzaou/jaccardmap . | [
"Dimitri Hamzaoui",
"Sarah Montagne",
"Raphaële Renard-Penna",
"Nicholas Ayache",
"Hervé Delingette"
] | 2023-09-14 23:28:58 | http://arxiv.org/abs/2309.08066v2 | http://arxiv.org/pdf/2309.08066v2 | 2309.08066v2 |
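For the consensus-segmentation record above, a baseline sketch: under voxel-wise squared L2 distance, the Fréchet mean of binary masks decouples per voxel and reduces to majority voting, which corresponds to the naive averaging baseline the abstract compares against; the paper's carefully chosen distances refine this.

```python
import numpy as np

def majority_consensus(masks):
    """masks: (n_raters, H, W) binary arrays -> (H, W) consensus mask.
    argmin over binary m of sum_r ||m - m_r||^2 decouples per voxel."""
    return (masks.mean(axis=0) > 0.5).astype(np.uint8)

masks = np.random.randint(0, 2, size=(5, 4, 4))
consensus = majority_consensus(masks)
```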
How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent | We analyze the generalization properties of two-layer neural networks in the
neural tangent kernel (NTK) regime, trained with gradient descent (GD). For
early stopped GD we derive fast rates of convergence that are known to be
minimax optimal in the framework of non-parametric regression in reproducing
kernel Hilbert spaces. Along the way, we precisely track the number of
hidden neurons required for generalization and improve over existing results.
We further show that the weights during training remain in a vicinity around
initialization, the radius being dependent on structural assumptions such as
degree of smoothness of the regression function and eigenvalue decay of the
integral operator associated to the NTK. | [
"Mike Nguyen",
"Nicole Mücke"
] | 2023-09-14 22:10:28 | http://arxiv.org/abs/2309.08044v1 | http://arxiv.org/pdf/2309.08044v1 | 2309.08044v1 |
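For the NTK record above, the standard definitions the abstract builds on, stated for reference only (the paper's refined rates and radius bounds are not reproduced):

```latex
% Width-m two-layer network and its neural tangent kernel:
\[
  f(x;\theta) = \frac{1}{\sqrt{m}} \sum_{j=1}^{m} a_j\, \sigma(w_j^\top x),
  \qquad
  \Theta(x, x') = \big\langle \nabla_\theta f(x;\theta),\, \nabla_\theta f(x';\theta) \big\rangle .
\]
% In the NTK regime, $\Theta$ stays approximately constant along the GD
% trajectory as $m \to \infty$, reducing training to kernel regression.
```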