instruction (1 unique value) | input (260–2.07k chars) | output (10 classes) |
---|---|---|
What field is the article from? | Title: FlowMur: A Stealthy and Practical Audio Backdoor Attack with Limited Knowledge
Abstract: Speech recognition systems driven by DNNs have revolutionized human-computer
interaction through voice interfaces, which significantly facilitate our daily
lives. However, the growing popularity of these systems also raises concerns
about their security, particularly regarding backdoor attacks. A backdoor
attack inserts one or more hidden backdoors into a DNN model during its
training process, such that it does not affect the model's performance on
benign inputs, but forces the model to produce an adversary-desired output if a
specific trigger is present in the model input. Despite the initial success of
current audio backdoor attacks, they suffer from the following limitations: (i)
Most of them require sufficient adversary knowledge, which limits their widespread
adoption. (ii) They are not stealthy enough and are thus easily detected by
humans. (iii) Most of them cannot attack live speech, reducing their
practicality. To address these problems, in this paper, we propose FlowMur, a
stealthy and practical audio backdoor attack that can be launched with limited
knowledge. FlowMur constructs an auxiliary dataset and a surrogate model to
augment adversary knowledge. To achieve dynamicity, it formulates trigger
generation as an optimization problem and optimizes the trigger over different
attachment positions. To enhance stealthiness, we propose an adaptive data
poisoning method according to Signal-to-Noise Ratio (SNR). Furthermore, ambient
noise is incorporated into the process of trigger generation and data poisoning
to make FlowMur robust to ambient noise and improve its practicality. Extensive
experiments conducted on two datasets demonstrate that FlowMur achieves high
attack performance in both digital and physical settings while remaining
resilient to state-of-the-art defenses. In particular, a human study confirms
that triggers generated by FlowMur are not easily detected by participants. | Cryptography and Security |
What field is the article from? | Title: A General Neural Causal Model for Interactive Recommendation
Abstract: Survivor bias in observational data leads the optimization of recommender
systems towards local optima. Currently, most solutions re-mine existing
human-system collaboration patterns to maximize longer-term satisfaction by
reinforcement learning. However, from the causal perspective, mitigating
survivor effects requires answering a counterfactual problem, which is
generally unidentifiable and inestimable. In this work, we propose a neural
causal model to achieve counterfactual inference. Specifically, we first build
a learnable structural causal model based on its available graphical
representations which qualitatively characterizes the preference transitions.
Mitigation of the survivor bias is achieved through counterfactual consistency.
To identify the consistency, we use the Gumbel-max function as structural
constraints. To estimate the consistency, we apply reinforcement optimization,
and use Gumbel-Softmax as a trade-off to get a differentiable function. Both
theoretical and empirical studies demonstrate the effectiveness of our
solution. | Machine Learning |
What field is the article from? | Title: An Investigation of Darwiche and Pearl's Postulates for Iterated Belief Update
Abstract: Belief revision and update, two significant types of belief change, both
focus on how an agent modifies her beliefs in the presence of new information. The
most striking difference between them is that the former studies the change of
beliefs in a static world while the latter concentrates on a
dynamically-changing world. The famous AGM and KM postulates were proposed to
capture rational belief revision and update, respectively. However, both of
them are too permissive to exclude some unreasonable changes in the iteration.
In response to this weakness, the DP postulates and its extensions for iterated
belief revision were presented. Furthermore, Rodrigues integrated these
postulates into belief update. Unfortunately, his approach does not meet the
basic requirement of iterated belief update. This paper aims to resolve
this problem with Rodrigues's approach. Firstly, we present a modification of the
original KM postulates based on belief states. Subsequently, we migrate several
well-known postulates for iterated belief revision to iterated belief update.
Moreover, we provide the exact semantic characterizations based on partial
preorders for each of the proposed postulates. Finally, we analyze the
compatibility between the above iterated postulates and the KM postulates for
belief update. | Artificial Intelligence |
What field is the article from? | Title: Natural Language Interfaces for Tabular Data Querying and Visualization: A Survey
Abstract: The emergence of natural language processing has revolutionized the way users
interact with tabular data, enabling a shift from traditional query languages
and manual plotting to more intuitive, language-based interfaces. The rise of
large language models (LLMs) such as ChatGPT and its successors has further
advanced this field, opening new avenues for natural language processing
techniques. This survey presents a comprehensive overview of natural language
interfaces for tabular data querying and visualization, which allow users to
interact with data using natural language queries. We introduce the fundamental
concepts and techniques underlying these interfaces with a particular emphasis
on semantic parsing, the key technology facilitating the translation from
natural language to SQL queries or data visualization commands. We then delve
into the recent advancements in Text-to-SQL and Text-to-Vis problems from the
perspectives of datasets, methodologies, metrics, and system designs. This
includes a deep dive into the influence of LLMs, highlighting their strengths,
limitations, and potential for future improvements. Through this survey, we aim
to provide a roadmap for researchers and practitioners interested in developing
and applying natural language interfaces for data interaction in the era of
large language models. | Computational Linguistics |
What field is the article from? | Title: Stock Movement and Volatility Prediction from Tweets, Macroeconomic Factors and Historical Prices
Abstract: Predicting stock market is vital for investors and policymakers, acting as a
barometer of the economic health. We leverage social media data, a potent
source of public sentiment, in tandem with macroeconomic indicators such as
government-compiled statistics, to refine stock market predictions. However,
prior research using tweet data for stock market prediction faces three
challenges. First, the quality of tweets varies widely. While many are filled
with noise and irrelevant details, only a few genuinely mirror the actual
market scenario. Second, solely focusing on the historical data of a particular
stock without considering its sector can lead to oversight. Stocks within the
same industry often exhibit correlated price behaviors. Lastly, simply
forecasting the direction of price movement without assessing its magnitude is
of limited value, as the extent of the rise or fall truly determines
profitability. In this paper, diverging from conventional methods, we
pioneer ECON, a framework with the following advantages: First, ECON has an
adept tweets filter that efficiently extracts and decodes the vast array of
tweet data. Second, ECON discerns multi-level relationships among stocks,
sectors, and macroeconomic factors through a self-aware mechanism in semantic
space. Third, ECON offers enhanced accuracy in predicting substantial stock
price fluctuations by capitalizing on stock price movement. We showcase the
state-of-the-art performance of our proposed model using a dataset,
specifically curated by us, for predicting stock market movements and
volatility. | Artificial Intelligence |
What field is the article from? | Title: Responsible Emergent Multi-Agent Behavior
Abstract: Responsible AI has risen to the forefront of the AI research community. As
neural network-based learning algorithms continue to permeate real-world
applications, the field of Responsible AI has played a large role in ensuring
that such systems maintain a high-level of human-compatibility. Despite this
progress, the state of the art in Responsible AI has ignored one crucial point:
human problems are multi-agent problems. Predominant approaches largely
consider the performance of a single AI system in isolation, but human problems
are, by their very nature, multi-agent. From driving in traffic to negotiating
economic policy, human problem-solving involves interaction and the interplay
of the actions and motives of multiple individuals.
This dissertation develops the study of responsible emergent multi-agent
behavior, illustrating how researchers and practitioners can better understand
and shape multi-agent learning with respect to three pillars of Responsible AI:
interpretability, fairness, and robustness. First, I investigate multi-agent
interpretability, presenting novel techniques for understanding emergent
multi-agent behavior at multiple levels of granularity. With respect to
low-level interpretability, I examine the extent to which implicit
communication emerges as an aid to coordination in multi-agent populations. I
introduce a novel curriculum-driven method for learning high-performing
policies in difficult, sparse reward environments and show through a measure of
position-based social influence that multi-agent teams that learn sophisticated
coordination strategies exchange significantly more information through
implicit signals than lesser-coordinated agents. Then, at a high-level, I study
concept-based interpretability in the context of multi-agent learning. I
propose a novel method for learning intrinsically interpretable, concept-based
policies and show that it enables... | Artificial Intelligence |
What field is the article from? | Title: Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations
Abstract: Machine learning is currently undergoing an explosion in capability,
popularity, and sophistication. However, one of the major barriers to
widespread acceptance of machine learning (ML) is trustworthiness: most ML
models operate as black boxes, their inner workings opaque and mysterious, and
it can be difficult to trust their conclusions without understanding how those
conclusions are reached. Explainability is therefore a key aspect of improving
trustworthiness: the ability to better understand, interpret, and anticipate
the behaviour of ML models. To this end, we propose SMILE, a new method that
builds on previous approaches by making use of statistical distance measures to
improve explainability while remaining applicable to a wide range of input data
domains. | Machine Learning |
What field is the article from? | Title: SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity
Abstract: To address the challenge of increasing network size, researchers have
developed sparse models through network pruning. However, maintaining model
accuracy while achieving significant speedups on general computing devices
remains an open problem. In this paper, we present a novel mobile inference
acceleration framework SparseByteNN, which leverages fine-grained kernel
sparsity to achieve real-time execution as well as high accuracy. Our framework
consists of two parts: (a) A fine-grained kernel sparsity schema with a
sparsity granularity between structured pruning and unstructured pruning. It
designs multiple sparse patterns for different operators. Combined with our
proposed whole network rearrangement strategy, the schema achieves a high
compression rate and high precision at the same time. (b) Inference engine
co-optimized with the sparse pattern. The conventional wisdom is that this
reduction in theoretical FLOPs does not translate into real-world efficiency
gains. We aim to correct this misconception by introducing a family of
efficient sparse kernels for ARM and WebAssembly. Equipped with our efficient
implementation of sparse primitives, we show that sparse versions of
MobileNet-v1 outperform strong dense baselines on the efficiency-accuracy
curve. Experimental results on Qualcomm 855 show that for 30% sparse
MobileNet-v1, SparseByteNN achieves 1.27x speedup over the dense version and
1.29x speedup over the state-of-the-art sparse inference engine MNN with a
slight accuracy drop of 0.224%. The source code of SparseByteNN will be
available at https://github.com/lswzjuer/SparseByteNN | Artificial Intelligence |
What field is the article from? | Title: A Survey on Detection of LLMs-Generated Content
Abstract: The burgeoning capabilities of advanced large language models (LLMs) such as
ChatGPT have led to an increase in synthetic content generation with
implications across a variety of sectors, including media, cybersecurity,
public discourse, and education. As such, the ability to detect LLMs-generated
content has become of paramount importance. We aim to provide a detailed
overview of existing detection strategies and benchmarks, scrutinizing their
differences and identifying key challenges and prospects in the field,
advocating for more adaptable and robust models to enhance detection accuracy.
We also posit the necessity for a multi-faceted approach to defend against
various attacks to counter the rapidly advancing capabilities of LLMs. To the
best of our knowledge, this work is the first comprehensive survey of
LLM-generated content detection. We hope it will provide a broad understanding of
the current landscape of LLMs-generated content detection, offering a guiding
reference for researchers and practitioners striving to uphold the integrity of
digital information in an era increasingly dominated by synthetic content. The
relevant papers are summarized and will be consistently updated at
https://github.com/Xianjun-Yang/Awesome_papers_on_LLMs_detection.git. | Computational Linguistics |
What field is the article from? | Title: Learning to Act without Actions
Abstract: Pre-training large models on vast amounts of web data has proven to be an
effective approach for obtaining powerful, general models in several domains,
including language and vision. However, this paradigm has not yet taken hold in
deep reinforcement learning (RL). This gap is due to the fact that the most
abundant form of embodied behavioral data on the web consists of videos, which
do not include the action labels required by existing methods for training
policies from offline data. We introduce Latent Action Policies from
Observation (LAPO), a method to infer latent actions and, consequently,
latent-action policies purely from action-free demonstrations. Our experiments
on challenging procedurally-generated environments show that LAPO can act as an
effective pre-training method to obtain RL policies that can then be rapidly
fine-tuned to expert-level performance. Our approach serves as a key stepping
stone to enabling the pre-training of powerful, generalist RL models on the
vast amounts of action-free demonstrations readily available on the web. | Machine Learning |
What field is the article from? | Title: GLaMM: Pixel Grounding Large Multimodal Model
Abstract: Large Multimodal Models (LMMs) extend Large Language Models to the vision
domain. Initial efforts towards LMMs used holistic images and text prompts to
generate ungrounded textual responses. Very recently, region-level LMMs have
been used to generate visually grounded responses. However, they are limited to
referring to only a single object category at a time, require users to specify the
regions in inputs, or cannot offer dense pixel-wise object grounding. In this
work, we present Grounding LMM (GLaMM), the first model that can generate
natural language responses seamlessly intertwined with corresponding object
segmentation masks. GLaMM not only grounds objects appearing in the
conversations but is flexible enough to accept both textual and optional visual
prompts (region of interest) as input. This empowers users to interact with the
model at various levels of granularity, both in textual and visual domains. Due
to the lack of standard benchmarks for the novel setting of generating visually
grounded detailed conversations, we introduce a comprehensive evaluation
protocol with our curated grounded conversations. Our proposed Grounded
Conversation Generation (GCG) task requires densely grounded concepts in
natural scenes at a large scale. To this end, we propose a densely annotated
Grounding-anything Dataset (GranD) using our proposed automated annotation
pipeline that encompasses 7.5M unique concepts grounded in a total of 810M
regions available with segmentation masks. Besides GCG, GLaMM also performs
effectively on several downstream tasks e.g., referring expression
segmentation, image and region-level captioning and vision-language
conversations. Project Page: https://mbzuai-oryx.github.io/groundingLMM. | Computer Vision |
What field is the article from? | Title: Peer attention enhances student learning
Abstract: Human visual attention is susceptible to social influences. In education,
peer effects impact student learning, but their precise role in modulating
attention remains unclear. Our experiment (N=311) demonstrates that displaying
peer visual attention regions when students watch online course videos enhances
their focus and engagement. However, students retain adaptability in following
peer attention cues. Overall, guided peer attention improves learning
experiences and outcomes. These findings elucidate how peer visual attention
shapes students' gaze patterns, deepening understanding of peer influence on
learning. They also offer insights into designing adaptive online learning
interventions leveraging peer attention modelling to optimize student
attentiveness and success. | Human-Computer Interaction |
What field is the article from? | Title: Plagiarism and AI Assistance Misuse in Web Programming: Unfair Benefits and Characteristics
Abstract: In programming education, plagiarism and misuse of artificial intelligence
(AI) assistance are emerging issues. However, not many relevant studies are
focused on web programming. We plan to develop automated tools to help
instructors identify both forms of misconduct. To fully understand the issues, we
conducted a controlled experiment to observe the unfair benefits and the
characteristics. We compared student performance in completing web programming
tasks independently, with a submission to plagiarize, and with the help of AI
assistance (ChatGPT). Our study shows that students who are involved in such
misconduct obtain comparable test marks with less completion time. Plagiarized
submissions are similar to the independent ones except in trivial aspects such
as color and identifier names. AI-assisted submissions are more complex, making
them less readable. Students believe AI assistance could be useful given proper
acknowledgment of its use, although they are not convinced of the readability and
correctness of the solutions. | Artificial Intelligence |
What field is the article from? | Title: Neural Collage Transfer: Artistic Reconstruction via Material Manipulation
Abstract: Collage is a creative art form that uses diverse material scraps as a base
unit to compose a single image. Although pixel-wise generation techniques can
reproduce a target image in collage style, it is not a suitable method due to
the solid stroke-by-stroke nature of the collage form. While some previous
works for stroke-based rendering produced decent sketches and paintings,
collages have received much less attention in research despite their popularity
as a style. In this paper, we propose a method for learning to make collages
via reinforcement learning without the need for demonstrations or collage
artwork data. We design the collage Markov Decision Process (MDP), which allows
the agent to handle various materials and propose a model-based soft
actor-critic to mitigate the agent's training burden derived from the
sophisticated dynamics of collage. Moreover, we devise additional techniques
such as active material selection and complexity-based multi-scale collage to
handle target images at any size and enhance the results' aesthetics by placing
relatively more scraps in areas of high complexity. Experimental results show
that the trained agent appropriately selected and pasted materials to
regenerate the target image into a collage and obtained a higher evaluation
score on content and style than pixel-wise generation methods. Code is
available at https://github.com/northadventure/CollageRL. | Computer Vision |
What field is the article from? | Title: Identifying Reasons for Bias: An Argumentation-Based Approach
Abstract: As algorithmic decision-making systems become more prevalent in society,
ensuring the fairness of these systems is becoming increasingly important.
Whilst there has been substantial research in building fair algorithmic
decision-making systems, the majority of these methods require access to the
training data, including personal characteristics, and are not transparent
regarding which individuals are classified unfairly. In this paper, we propose
a novel model-agnostic argumentation-based method to determine why an
individual is classified differently in comparison to similar individuals. Our
method uses a quantitative argumentation framework to represent attribute-value
pairs of an individual and of those similar to them, and uses a well-known
semantics to identify the attribute-value pairs in the individual contributing
most to their different classification. We evaluate our method on two datasets
commonly used in the fairness literature and illustrate its effectiveness in
the identification of bias. | Machine Learning |
What field is the article from? | Title: Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape
Abstract: The creation of lifelike speech-driven 3D facial animation requires a natural
and precise synchronization between audio input and facial expressions.
However, existing works still fail to render shapes with flexible head poses
and natural facial details (e.g., wrinkles). This limitation is mainly due to
two aspects: 1) Collecting training set with detailed 3D facial shapes is
highly expensive. This scarcity of detailed shape annotations hinders the
training of models with expressive facial animation. 2) Compared to mouth
movement, the head pose is much less correlated with speech content.
Consequently, concurrently modeling both mouth movement and head pose results in
a lack of controllability over facial movement. To address these challenges, we
introduce VividTalker, a new framework designed to facilitate speech-driven 3D
facial animation characterized by flexible head pose and natural facial
details. Specifically, we explicitly disentangle facial animation into head
pose and mouth movement and encode them separately into discrete latent spaces.
Then, these attributes are generated through an autoregressive process
leveraging a window-based Transformer architecture. To augment the richness of
3D facial animation, we construct a new 3D dataset with detailed shapes and
learn to synthesize facial details in line with speech content. Extensive
quantitative and qualitative experiments demonstrate that VividTalker
outperforms state-of-the-art methods, resulting in vivid and realistic
speech-driven 3D facial animation. | Computer Vision |
What field is the article from? | Title: Autonomous 3D Exploration in Large-Scale Environments with Dynamic Obstacles
Abstract: Exploration in dynamic and uncertain real-world environments is an open
problem in robotics and constitutes a foundational capability of autonomous
systems operating in most of the real world. While 3D exploration planning has
been extensively studied, the environments are assumed static or only reactive
collision avoidance is carried out. We propose a novel approach to not only
avoid dynamic obstacles but also include them in the plan itself, to exploit
the dynamic environment in the agent's favor. The proposed planner, Dynamic
Autonomous Exploration Planner (DAEP), extends AEP to explicitly plan with
respect to dynamic obstacles. To thoroughly evaluate exploration planners in
such settings we propose a new enhanced benchmark suite with several dynamic
environments, including large-scale outdoor environments. DAEP outperforms
state-of-the-art planners in dynamic and large-scale environments. DAEP is
shown to be more effective at both exploration and collision avoidance. | Robotics |
What field is the article from? | Title: A GAN Approach for Node Embedding in Heterogeneous Graphs Using Subgraph Sampling
Abstract: Our research addresses class imbalance issues in heterogeneous graphs using
graph neural networks (GNNs). We propose a novel method combining the strengths
of Generative Adversarial Networks (GANs) with GNNs, creating synthetic nodes
and edges that effectively balance the dataset. This approach directly targets
and rectifies imbalances at the data level. The proposed framework resolves
issues such as neglecting graph structures during data generation and creating
synthetic structures usable with GNN-based classifiers in downstream tasks. It
processes node and edge information concurrently, improving edge balance
through node augmentation and subgraph sampling. Additionally, our framework
integrates a threshold strategy, aiding in determining optimal edge thresholds
during training without time-consuming parameter adjustments. Experiments on
the Amazon and Yelp Review datasets highlight the effectiveness of the
framework we proposed, especially in minority node identification, where it
consistently outperforms baseline models across key performance metrics,
demonstrating its potential in the field. | Machine Learning |
What field is the article from? | Title: Moral Foundations of Large Language Models
Abstract: Moral foundations theory (MFT) is a psychological assessment tool that
decomposes human moral reasoning into five factors, including care/harm,
liberty/oppression, and sanctity/degradation (Graham et al., 2009). People vary
in the weight they place on these dimensions when making moral decisions, in
part due to their cultural upbringing and political ideology. As large language
models (LLMs) are trained on datasets collected from the internet, they may
reflect the biases that are present in such corpora. This paper uses MFT as a
lens to analyze whether popular LLMs have acquired a bias towards a particular
set of moral values. We analyze known LLMs and find they exhibit particular
moral foundations, and show how these relate to human moral foundations and
political affiliations. We also measure the consistency of these biases, or
whether they vary strongly depending on the context of how the model is
prompted. Finally, we show that we can adversarially select prompts that
encourage the model to exhibit a particular set of moral foundations, and that
this can affect the model's behavior on downstream tasks. These findings help
illustrate the potential risks and unintended consequences of LLMs assuming a
particular moral stance. | Artificial Intelligence |
What field is the article from? | Title: Decentralized Personalized Online Federated Learning
Abstract: Vanilla federated learning does not support learning in an online
environment, learning a personalized model on each client, and learning in a
decentralized setting. There are existing methods extending federated learning
in each of the three aspects. However, some important applications on
enterprise edge servers (e.g. online item recommendation at global scale)
involve the three aspects at the same time. Therefore, we propose a new
learning setting \textit{Decentralized Personalized Online Federated Learning}
that considers all the three aspects at the same time.
In this new setting for learning, the first technical challenge is how to
aggregate the shared model parameters from neighboring clients to obtain a
personalized local model with good performance on each client. We propose to
directly learn an aggregation by optimizing the performance of the local model
with respect to the aggregation weights. This not only improves personalization
of each local model but also helps the local model adapt to potential data
shift by intelligently incorporating the right amount of information from its
neighbors. The second challenge is how to select the neighbors for each client.
We propose a peer selection method based on the learned aggregation weights
enabling each client to select the most helpful neighbors and reduce
communication cost at the same time. We verify the effectiveness and robustness
of our proposed method on three real-world item recommendation datasets and one
air quality prediction dataset. | Machine Learning |
What field is the article from? | Title: Large Language Models for Autonomous Driving: Real-World Experiments
Abstract: Autonomous driving systems are increasingly popular in today's technological
landscape, where vehicles with partial automation have already been widely
available on the market, and the full automation era with ``driverless''
capabilities is on the horizon. However, accurately understanding humans'
commands, particularly for autonomous vehicles that have only passengers
instead of drivers, and achieving a high level of personalization remain
challenging tasks in the development of autonomous driving systems. In this
paper, we introduce a Large Language Model (LLM)-based framework Talk-to-Drive
(Talk2Drive) to process verbal commands from humans and make autonomous driving
decisions with contextual information, satisfying their personalized
preferences for safety, efficiency, and comfort. First, a speech recognition
module is developed for Talk2Drive to interpret verbal inputs from humans to
textual instructions, which are then sent to LLMs for reasoning. Then,
appropriate commands for the Electrical Control Unit (ECU) are generated,
achieving a 100\% success rate in executing codes. Real-world experiments show
that our framework can substantially reduce the takeover rate for a diverse
range of drivers by up to 90.1\%. To the best of our knowledge, Talk2Drive
marks the first instance of employing an LLM-based system in a real-world
autonomous driving environment. | Artificial Intelligence |
What field is the article from? | Title: The Rise of Creative Machines: Exploring the Impact of Generative AI
Abstract: This study looks at how generative artificial intelligence (AI) can
revolutionize marketing, product development, and research. It discusses the
latest developments in the field, easy-to-use resources, and moral and social
hazards. In addition to addressing mitigating techniques for issues like
prejudice and disinformation, the debate emphasizes the significance of
responsible development through continual stakeholder communication and ethical
principles. | Artificial Intelligence |
What field is the article from? | Title: NLQxform: A Language Model-based Question to SPARQL Transformer
Abstract: In recent years, scholarly data has grown dramatically in terms of both scale
and complexity. It has become increasingly challenging to retrieve information
from scholarly knowledge graphs that include large-scale heterogeneous
relationships, such as authorship, affiliation, and citation, between various
types of entities, e.g., scholars, papers, and organizations. As part of the
Scholarly QALD Challenge, this paper presents a question-answering (QA) system
called NLQxform, which provides an easy-to-use natural language interface to
facilitate accessing scholarly knowledge graphs. NLQxform allows users to
express their complex query intentions in natural language questions. A
transformer-based language model, i.e., BART, is employed to translate
questions into standard SPARQL queries, which can be evaluated to retrieve the
required information. According to the public leaderboard of the Scholarly QALD
Challenge at ISWC 2023 (Task 1: DBLP-QUAD - Knowledge Graph Question Answering
over DBLP), NLQxform achieved an F1 score of 0.85 and ranked first on the QA
task, demonstrating the competitiveness of the system. | Computational Linguistics |
What field is the article from? | Title: ABIGX: A Unified Framework for eXplainable Fault Detection and Classification
Abstract: For explainable fault detection and classification (FDC), this paper proposes
a unified framework, ABIGX (Adversarial fault reconstruction-Based Integrated
Gradient eXplanation). ABIGX is derived from the essentials of previous
successful fault diagnosis methods, contribution plots (CP) and
reconstruction-based contribution (RBC). It is the first explanation framework
that provides variable contributions for the general FDC models. The core part
of ABIGX is the adversarial fault reconstruction (AFR) method, which rethinks
the FR from the perspective of adversarial attack and generalizes to fault
classification models with a new fault index. For fault classification, we put
forward a new problem of fault class smearing, which intrinsically hinders the
correct explanation. We prove that ABIGX effectively mitigates this problem and
outperforms the existing gradient-based explanation methods. For fault
detection, we theoretically bridge ABIGX with conventional fault diagnosis
methods by proving that CP and RBC are the linear specifications of ABIGX. The
experiments evaluate the explanations of FDC by quantitative metrics and
intuitive illustrations, the results of which show the general superiority of
ABIGX to other advanced explanation methods. | Machine Learning |
What field is the article from? | Title: The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models
Abstract: Compressing large language models (LLMs), often consisting of billions of
parameters, provides faster inference, smaller memory footprints, and enables
local deployment. Two standard compression techniques are pruning and
quantization, with the former eliminating redundant connections in model layers
and the latter representing model parameters with fewer bits. The key tradeoff
is between the degree of compression and the impact on the quality of the
compressed model. Existing research on LLM compression primarily focuses on
performance in terms of general metrics like perplexity or downstream task
accuracy. More fine-grained metrics, such as those measuring parametric
knowledge, remain significantly underexplored. To help bridge this gap, we
present a comprehensive analysis across multiple model families (ENCODER,
ENCODER-DECODER, and DECODER) using the LAMA and LM-HARNESS benchmarks in order
to systematically quantify the effect of commonly employed compression
techniques on model performance. A particular focus is on tradeoffs involving
parametric knowledge, with the goal of providing practitioners with practical
insights to help make informed decisions on compression. We release our
codebase to enable further research. | Computational Linguistics |
What field is the article from? | Title: STOW: Discrete-Frame Segmentation and Tracking of Unseen Objects for Warehouse Picking Robots
Abstract: Segmentation and tracking of unseen object instances in discrete frames pose
a significant challenge in dynamic industrial robotic contexts, such as
distribution warehouses. Here, robots must handle object rearrangement,
including shifting, removal, and partial occlusion by new items, and track
these items after substantial temporal gaps. The task is further complicated
when robots encounter objects not learned in their training sets, which
requires the ability to segment and track previously unseen items. Considering
that continuous observation is often inaccessible in such settings, our task
involves working with a discrete set of frames separated by indefinite periods
during which substantial changes to the scene may occur. This task also
translates to domestic robotic applications, such as rearrangement of objects
on a table. To address these demanding challenges, we introduce new synthetic
and real-world datasets that replicate these industrial and household
scenarios. We also propose a novel paradigm for joint segmentation and tracking
in discrete frames along with a transformer module that facilitates efficient
inter-frame communication. The experiments we conduct show that our approach
significantly outperforms recent methods. For additional results and videos,
please visit \href{https://sites.google.com/view/stow-corl23}{website}. Code
and dataset will be released. | Robotics |
What field is the article from? | Title: Hierarchical Framework for Interpretable and Probabilistic Model-Based Safe Reinforcement Learning
Abstract: The difficulty of identifying the physical model of complex systems has led
to exploring methods that do not rely on such complex modeling of the systems.
Deep reinforcement learning has been the pioneer for solving this problem
without the need for relying on the physical model of complex systems by just
interacting with it. However, it uses a black-box learning approach that makes
it difficult to be applied within real-world and safety-critical systems
without providing explanations of the actions derived by the model.
Furthermore, an open research question in deep reinforcement learning is how to
focus the policy learning of critical decisions within a sparse domain. This
paper proposes a novel approach for the use of deep reinforcement learning in
safety-critical systems. It combines the advantages of probabilistic modeling
and reinforcement learning with the added benefits of interpretability and
works in collaboration and synchronization with conventional decision-making
strategies. The proposed approach, BC-SRLA, is activated in specific situations that are
identified autonomously through the fused information of probabilistic model
and reinforcement learning, such as abnormal conditions or when the system is
near-to-failure. Further, it is initialized with a baseline policy using policy
cloning to allow minimum interactions with the environment to address the
challenges associated with using RL in safety-critical industries. The
effectiveness of the BC-SRLA is demonstrated through a case study in
maintenance applied to turbofan engines, where it shows superior performance to
the prior art and other baselines. | Artificial Intelligence |
What field is the article from? | Title: DTL: Disentangled Transfer Learning for Visual Recognition
Abstract: As pre-trained models rapidly grow larger, the cost of fine-tuning on
downstream tasks steadily increases, too. To economically fine-tune these
models, parameter-efficient transfer learning (PETL) is proposed, which only
tunes a tiny subset of trainable parameters to efficiently learn quality
representations. However, current PETL methods face the dilemma that, during
training, the GPU memory footprint is not reduced as effectively as the number
of trainable parameters. PETL will likely fail, too, if full fine-tuning
encounters the out-of-GPU-memory issue. This phenomenon happens because
trainable parameters from these methods are generally entangled with the
backbone, such that a lot of intermediate states have to be stored in GPU
memory for gradient propagation. To alleviate this problem, we introduce
Disentangled Transfer Learning (DTL), which disentangles the trainable
parameters from the backbone using a lightweight Compact Side Network (CSN). By
progressively extracting task-specific information with a few low-rank linear
mappings and appropriately adding the information back to the backbone, CSN
effectively realizes knowledge transfer in various downstream tasks. We
conducted extensive experiments to validate the effectiveness of our method.
The proposed method not only reduces a large amount of GPU memory usage and
trainable parameters, but also outperforms existing PETL methods by a
significant margin in accuracy, achieving new state-of-the-art on several
standard benchmarks. | Computer Vision |
What field is the article from? | Title: LSA64: An Argentinian Sign Language Dataset
Abstract: Automatic sign language recognition is a research area that encompasses
human-computer interaction, computer vision and machine learning. Robust
automatic recognition of sign language could assist in the translation process
and the integration of hearing-impaired people, as well as the teaching of sign
language to the hearing population. Sign languages differ significantly in
different countries and even regions, and their syntax and semantics are
different as well from those of written languages. While the techniques for
automatic sign language recognition are mostly the same for different
languages, training a recognition system for a new language requires having an
entire dataset for that language. This paper presents a dataset of 64 signs
from the Argentinian Sign Language (LSA). The dataset, called LSA64, contains
3200 videos of 64 different LSA signs recorded by 10 subjects, and is a first
step towards building a comprehensive research-level dataset of Argentinian
signs, specifically tailored to sign language recognition or other machine
learning tasks. The subjects that performed the signs wore colored gloves to
ease the hand tracking and segmentation steps, allowing experiments on the
dataset to focus specifically on the recognition of signs. We also present a
pre-processed version of the dataset, from which we computed statistics of
movement, position and handshape of the signs. | Computer Vision |
What field is the article from? | Title: Nominality Score Conditioned Time Series Anomaly Detection by Point/Sequential Reconstruction
Abstract: Time series anomaly detection is challenging due to the complexity and
variety of patterns that can occur. One major difficulty arises from modeling
time-dependent relationships to find contextual anomalies while maintaining
detection accuracy for point anomalies. In this paper, we propose a framework
for unsupervised time series anomaly detection that utilizes point-based and
sequence-based reconstruction models. The point-based model attempts to
quantify point anomalies, and the sequence-based model attempts to quantify
both point and contextual anomalies. Under the formulation that the observed
time point is a two-stage deviated value from a nominal time point, we
introduce a nominality score calculated from the ratio of a combined value of
the reconstruction errors. We derive an induced anomaly score by further
integrating the nominality score and anomaly score, then theoretically prove
the superiority of the induced anomaly score over the original anomaly score
under certain conditions. Extensive studies conducted on several public
datasets show that the proposed framework outperforms most state-of-the-art
baselines for time series anomaly detection. | Machine Learning |
What field is the article from? | Title: Promoting Counterfactual Robustness through Diversity
Abstract: Counterfactual explanations shed light on the decisions of black-box models
by explaining how an input can be altered to obtain a favourable decision from
the model (e.g., when a loan application has been rejected). However, as noted
recently, counterfactual explainers may lack robustness in the sense that a
minor change in the input can cause a major change in the explanation. This can
cause confusion on the user side and open the door for adversarial attacks. In
this paper, we study some sources of non-robustness. While there are
fundamental reasons for why an explainer that returns a single counterfactual
cannot be robust in all instances, we show that some interesting robustness
guarantees can be given by reporting multiple rather than a single
counterfactual. Unfortunately, the number of counterfactuals that need to be
reported for the theoretical guarantees to hold can be prohibitively large. We
therefore propose an approximation algorithm that uses a diversity criterion to
select a feasible number of most relevant explanations and study its robustness
empirically. Our experiments indicate that our method improves the
state-of-the-art in generating robust explanations, while maintaining other
desirable properties and providing competitive computational performance. | Machine Learning |
What field is the article from? | Title: Differentiable Visual Computing for Inverse Problems and Machine Learning
Abstract: Originally designed for applications in computer graphics, visual computing
(VC) methods synthesize information about physical and virtual worlds, using
prescribed algorithms optimized for spatial computing. VC is used to analyze
geometry, physically simulate solids, fluids, and other media, and render the
world via optical techniques. These fine-tuned computations, which operate
explicitly on a given input, solve so-called forward problems at which VC excels. By
contrast, deep learning (DL) allows for the construction of general algorithmic
models, sidestepping the need for a purely first-principles-based approach to
problem solving. DL is powered by highly parameterized neural network
architectures -- universal function approximators -- and gradient-based search
algorithms which can efficiently search that large parameter space for optimal
models. This approach is predicated on neural network differentiability, the
requirement that analytic derivatives of a given problem's task metric can be
computed with respect to the neural network's parameters. Neural networks excel
when an explicit model is not known, and neural network training solves an
inverse problem in which a model is computed from data. | Machine Learning |
What field is the article from? | Title: PrivateLoRA For Efficient Privacy Preserving LLM
Abstract: End users face a choice between privacy and efficiency in current Large
Language Model (LLM) service paradigms. In cloud-based paradigms, users are
forced to compromise data locality for generation quality and processing speed.
Conversely, edge device paradigms maintain data locality but fail to deliver
satisfactory performance. In this work, we propose a novel LLM service paradigm
that distributes privacy-sensitive computation on edge devices and shared
computation in the cloud. Only activations are transmitted between the central
cloud and edge devices to ensure data locality. Our core innovation,
PrivateLoRA, addresses the challenging communication overhead by exploiting the
low rank of residual activations, achieving over 95% communication reduction.
Consequently, PrivateLoRA effectively maintains data locality and is extremely
resource efficient. Under standard 5G networks, PrivateLoRA achieves throughput
over 300% of device-only solutions for 7B models and over 80% of an A100 GPU
for 33B models. PrivateLoRA also provides tuning performance comparable to LoRA
for advanced personalization. Our approach democratizes access to
state-of-the-art generative AI for edge devices, paving the way for more
tailored LLM experiences for the general public. To our knowledge, our proposed
framework is the first efficient and privacy-preserving LLM solution in the
literature. | Artificial Intelligence |
What field is the article from? | Title: LLM as an Art Director (LaDi): Using LLMs to improve Text-to-Media Generators
Abstract: Recent advancements in text-to-image generation have revolutionized numerous
fields, including art and cinema, by automating the generation of high-quality,
context-aware images and video. However, the utility of these technologies is
often limited by the inadequacy of text prompts in guiding the generator to
produce artistically coherent and subject-relevant images. In this paper, we
describe the techniques that can be used to make Large Language Models (LLMs)
act as Art Directors that enhance image and video generation. We describe our
unified system for this called "LaDi". We explore how LaDi integrates multiple
techniques for augmenting the capabilities of text-to-image generators (T2Is)
and text-to-video generators (T2Vs), with a focus on constrained decoding,
intelligent prompting, fine-tuning, and retrieval. LaDi and these techniques
are being used today in apps and platforms developed by Plai Labs. | Computational Linguistics |
What field is the article from? | Title: Machine Learning For An Explainable Cost Prediction of Medical Insurance
Abstract: Predictive modeling in healthcare continues to be an active actuarial
research topic as more insurance companies aim to maximize the potential of
Machine Learning approaches to increase their productivity and efficiency. In
this paper, the authors deployed three regression-based ensemble ML models that
combine variations of decision trees (Extreme Gradient Boosting,
Gradient-Boosting Machine, and Random Forest) in predicting medical
insurance costs. Explainable Artificial Intelligence (XAI) methods, SHapley
Additive exPlanations (SHAP) and Individual Conditional Expectation (ICE)
plots, were deployed to
discover and explain the key determinant factors that influence medical
insurance premium prices in the dataset. The dataset used comprised 986 records
and is publicly available in the KAGGLE repository. The models were evaluated
using four performance evaluation metrics, including R-squared, Mean Absolute
Error, Root Mean Squared Error, and Mean Absolute Percentage Error. The results
show that all models produced impressive outcomes; however, the XGBoost model
achieved better overall performance, although it also expended more
computational resources, while the RF model recorded a lower prediction error
and consumed far fewer computing resources than the XGBoost model. Furthermore,
we compared the outcomes of both XAI methods in identifying the key determinant
features that influenced the PremiumPrices for each model. Whereas both XAI
methods produced similar outcomes, we found that the ICE plots showed the
interactions between variables in more detail than the SHAP analysis, which
seemed to be more high-level. It is the aim of the authors that the
contributions of this study will help policymakers, insurers, and potential
medical insurance buyers in their decision-making process for selecting the
right policies that meet their specific needs. | Machine Learning |
What field is the article from? | Title: Orca 2: Teaching Small Language Models How to Reason
Abstract: Orca 1 learns from rich signals, such as explanation traces, allowing it to
outperform conventional instruction-tuned models on benchmarks like BigBench
Hard and AGIEval. In Orca 2, we continue exploring how improved training
signals can enhance smaller LMs' reasoning abilities. Research on training
small LMs has often relied on imitation learning to replicate the output of
more capable models. We contend that excessive emphasis on imitation may
restrict the potential of smaller models. We seek to teach small LMs to employ
different solution strategies for different tasks, potentially different from
the one used by the larger model. For example, while larger models might
provide a direct answer to a complex task, smaller models may not have the same
capacity. In Orca 2, we teach the model various reasoning techniques
(step-by-step, recall then generate, recall-reason-generate, direct answer,
etc.). More crucially, we aim to help the model learn to determine the most
effective solution strategy for each task. We evaluate Orca 2 using a
comprehensive set of 15 diverse benchmarks (corresponding to approximately 100
tasks and over 36,000 unique prompts). Orca 2 significantly surpasses models of
similar size and attains performance levels similar or better to those of
models 5-10x larger, as assessed on complex tasks that test advanced reasoning
abilities in zero-shot settings. We make Orca 2 weights publicly available at
aka.ms/orca-lm to support research on the development, evaluation, and
alignment of smaller LMs. | Artificial Intelligence |
What field is the article from? | Title: MELA: Multilingual Evaluation of Linguistic Acceptability
Abstract: Recent benchmarks for Large Language Models (LLMs) have mostly focused on
application-driven tasks such as complex reasoning and code generation, and
this has led to a scarcity in purely linguistic evaluation of LLMs. Against
this background, we introduce Multilingual Evaluation of Linguistic
Acceptability -- MELA, the first multilingual benchmark on linguistic
acceptability with 48K samples covering 10 languages from a diverse set of
language families. We establish baselines of commonly used LLMs along with
supervised models, and conduct cross-lingual transfer and multi-task learning
experiments with XLM-R. In pursuit of multilingual interpretability, we analyze
the weights of fine-tuned XLM-R to explore the possibility of identifying
transfer difficulty between languages. Our results show that ChatGPT benefits
much from in-context examples but still lags behind fine-tuned XLM-R, while the
performance of GPT-4 is on par with fine-tuned XLM-R even in the zero-shot setting.
Cross-lingual and multi-task learning experiments show that unlike semantic
tasks, in-language training data is crucial in acceptability judgements.
Results in layerwise probing indicate that the upper layers of XLM-R become a
task-specific but language-agnostic region for multilingual acceptability
judgment. We also introduce the concept of conflicting weight, which could be a
potential indicator for the difficulty of cross-lingual transfer between
languages. Our data will be available at https://github.com/sjtu-compling/MELA. | Computational Linguistics |
What field is the article from? | Title: Music Recommendation on Spotify using Deep Learning
Abstract: Hosting about 50 million songs and 4 billion playlists, there is an enormous
amount of data generated at Spotify every single day - upwards of 600 gigabytes
of data (harvard.edu). Since the algorithms that Spotify uses in recommendation
systems are proprietary and confidential, code for big data analytics and
recommendation can only be speculated. However, it is widely theorized that
Spotify uses two main strategies to target users' playlists and personalized
mixes that are infamous for their retention - exploration and exploitation
(kaggle.com). This paper aims to perform appropriate filtering using a deep
learning approach for maximum user likeability. The architecture achieves
98.57% training accuracy and 80% validation accuracy. | Information Retrieval |
What field is the article from? | Title: Beyond Words: A Mathematical Framework for Interpreting Large Language Models
Abstract: Large language models (LLMs) are powerful AI tools that can generate and
comprehend natural language text and other complex information. However, the
field lacks a mathematical framework to systematically describe, compare and
improve LLMs. We propose Hex a framework that clarifies key terms and concepts
in LLM research, such as hallucinations, alignment, self-verification and
chain-of-thought reasoning. The Hex framework offers a precise and consistent
way to characterize LLMs, identify their strengths and weaknesses, and
integrate new findings. Using Hex, we differentiate chain-of-thought reasoning
from chain-of-thought prompting and establish the conditions under which they
are equivalent. This distinction clarifies the basic assumptions behind
chain-of-thought prompting and its implications for methods that use it, such
as self-verification and prompt programming.
Our goal is to provide a formal framework for LLMs that can help both
researchers and practitioners explore new possibilities for generative AI. We
do not claim to have a definitive solution, but rather a tool for opening up
new research avenues. We argue that our formal definitions and results are
crucial for advancing the discussion on how to build generative AI systems that
are safe, reliable, fair and robust, especially in domains like healthcare and
software engineering. | Machine Learning |
What field is the article from? | Title: Artificial intelligence and the limits of the humanities
Abstract: The complexity of cultures in the modern world is now beyond human
comprehension. Cognitive sciences cast doubts on the traditional explanations
based on mental models. The core subjects in humanities may lose their
importance. Humanities have to adapt to the digital age. New, interdisciplinary
branches of humanities emerge. Instant access to information will be replaced
by instant access to knowledge. Understanding the cognitive limitations of
humans and the opportunities opened by the development of artificial
intelligence and interdisciplinary research necessary to address global
challenges is the key to the revitalization of humanities. Artificial
intelligence will radically change humanities, from art to political sciences
and philosophy, making these disciplines attractive to students and enabling
them to go beyond current limitations. | Artificial Intelligence |
What field is the article from? | Title: Meta Prompting for AGI Systems
Abstract: This paper presents an in-depth exploration of Meta Prompting, a novel
technique that revolutionizes the way large language models (LLMs), multi-modal
foundation models, and AI systems approach problem-solving and data
interpretation. Meta Prompting, rooted in type theory and category theory,
prioritizes the structure and syntax of information, providing a unique
framework that transcends traditional content-focused methods. We delve into
the formal definitions of Meta Prompting, contrasting it with Few-Shot
Prompting, and highlight its applicability and superiority in various AI
applications.
Key to this exploration is the expansion of Meta Prompting into the realm of
complex reasoning. Here, we demonstrate how this technique adeptly breaks down
intricate problems into manageable sub-problems, facilitating a step-by-step,
detailed approach to problem-solving. This method proves especially
advantageous in terms of token efficiency and offering a fair comparison in
problem-solving scenarios, standing out against few-shot example approaches.
Furthermore, the paper breaks new ground by extending Meta Prompting into
multi-modal foundation model settings. This extension addresses the integration
of diverse data types, such as images, audio, and video, within the structured
framework of Meta Prompting, highlighting both the challenges and the vast
potential of this approach in handling complex, multi-faceted data (The code is
available at https://github.com/meta-prompting/meta-prompting). | Artificial Intelligence |
What field is the article from? | Title: Unleashing the Potential of Large Language Model: Zero-shot VQA for Flood Disaster Scenario
Abstract: Visual question answering (VQA) is a fundamental and essential AI task, and
VQA-based disaster scenario understanding is a hot research topic. For
instance, we can ask questions about a disaster image by the VQA model and the
answer can help identify whether anyone or anything is affected by the
disaster. However, previous VQA models for disaster damage assessment have some
shortcomings, such as limited candidate answer space, monotonous question
types, and limited answering capability of existing models. In this paper, we
propose a zero-shot VQA model named Zero-shot VQA for Flood Disaster Damage
Assessment (ZFDDA). It is a VQA model for damage assessment without
pre-training. Also, with flood disaster as the main research object, we build a
Freestyle Flood Disaster Image Question Answering dataset (FFD-IQA) to evaluate
our VQA model. This new dataset expands the question types to include
free-form, multiple-choice, and yes-no questions. At the same time, we expand
the size of the previous dataset to contain a total of 2,058 images and 22,422
question-meta ground truth pairs. Most importantly, our model uses
well-designed chain of thought (CoT) demonstrations to unlock the potential of
the large language model, allowing zero-shot VQA to show better performance in
disaster scenarios. The experimental results show that the accuracy in
answering complex questions is greatly improved with CoT prompts. Our study
provides a research basis for subsequent research of VQA for other disaster
scenarios. | Computer Vision |
What field is the article from? | Title: Forms of Understanding of XAI-Explanations
Abstract: Explainability has become an important topic in computer science and
artificial intelligence, leading to a subfield called Explainable Artificial
Intelligence (XAI). The goal of providing or seeking explanations is to achieve
(better) 'understanding' on the part of the explainee. However, what it means
to 'understand' is still not clearly defined, and the concept itself is rarely
the subject of scientific investigation. This conceptual article aims to
present a model of forms of understanding in the context of XAI and beyond.
From an interdisciplinary perspective bringing together computer science,
linguistics, sociology, and psychology, a definition of understanding and its
forms, assessment, and dynamics during the process of giving everyday
explanations are explored. Two types of understanding are considered as
possible outcomes of explanations, namely enabledness, 'knowing how' to do or
decide something, and comprehension, 'knowing that' -- both in different
degrees (from shallow to deep). Explanations regularly start with shallow
understanding in a specific domain and can lead to deep comprehension and
enabledness of the explanandum, which we see as a prerequisite for human users
to gain agency. In this process, the increase of comprehension and enabledness
are highly interdependent. Against the background of this systematization,
special challenges of understanding in XAI are discussed. | Artificial Intelligence |
What field is the article from? | Title: Predicting Agricultural Commodities Prices with Machine Learning: A Review of Current Research
Abstract: Agricultural price prediction is crucial for farmers, policymakers, and other
stakeholders in the agricultural sector. However, it is a challenging task due
to the complex and dynamic nature of agricultural markets. Machine learning
algorithms have the potential to revolutionize agricultural price prediction by
improving accuracy, real-time prediction, customization, and integration. This
paper reviews recent research on machine learning algorithms for agricultural
price prediction. We discuss the importance of agriculture in developing
countries and the problems associated with crop price falls. We then identify
the challenges of predicting agricultural prices and highlight how machine
learning algorithms can support better prediction. Next, we present a
comprehensive analysis of recent research, discussing the strengths and
weaknesses of various machine learning techniques. We conclude that machine
learning has the potential to revolutionize agricultural price prediction, but
further research is essential to address the limitations and challenges
associated with this approach. | Artificial Intelligence |
What field is the article from? | Title: Common (good) practices measuring trust in HRI
Abstract: Trust in robots is widely believed to be imperative for the adoption of
robots into people's daily lives. It is, therefore, understandable that the
literature of the last few decades focuses on measuring how much people trust
robots -- and more generally, any agent - to foster such trust in these
technologies. Researchers have been exploring how people trust robots in
different ways, such as measuring trust in human-robot interaction (HRI) based
on textual descriptions or images without any physical contact, as well as
during and after interacting with the technology. Nevertheless, trust is a complex
behaviour, and it is affected and depends on several factors, including those
related to the interacting agents (e.g. humans, robots, pets), itself (e.g.
capabilities, reliability), the context (e.g. task), and the environment (e.g.
public spaces vs private spaces vs working spaces). In general, most
roboticists agree that insufficient levels of trust lead to a risk of
disengagement while over-trust in technology can cause over-reliance and
inherit dangers, for example, in emergency situations. It is, therefore, very
important that the research community has access to reliable methods to measure
people's trust in robots and technology. In this position paper, we outline
current methods and their strengths, identify (some) weakly covered aspects and
discuss the potential for covering a more comprehensive amount of factors
influencing trust in HRI. | Robotics |
What field is the article from? | Title: KEN: Kernel Extensions using Natural Language
Abstract: The ability to modify and extend an operating system is an important feature
for improving a system's security, reliability, and performance. The extended
Berkeley Packet Filters (eBPF) ecosystem has emerged as the standard mechanism
for extending the Linux kernel and has recently been ported to Windows. eBPF
programs inject new logic into the kernel that the system will execute before
or after existing logic. While the eBPF ecosystem provides a flexible mechanism
for kernel extension, it is difficult for developers to write eBPF programs
today. An eBPF developer must have deep knowledge of the internals of the
operating system to determine where to place logic and cope with programming
limitations on the control flow and data accesses of their eBPF program
enforced by the eBPF verifier. This paper presents KEN, an alternative
framework that alleviates the difficulty of writing an eBPF program by allowing
Kernel Extensions to be written in Natural language. KEN uses recent advances
in large language models (LLMs) to synthesize an eBPF program given a user's
English language prompt. To ensure that LLM's output is semantically equivalent
to the user's prompt, KEN employs a combination of LLM-empowered program
comprehension, symbolic execution, and a series of feedback loops. KEN's key
novelty is the combination of these techniques. In particular, the system uses
symbolic execution in a novel structure that allows it to combine the results
of program synthesis and program comprehension and build on the recent success
that LLMs have shown for each of these tasks individually. To evaluate KEN, we
developed a new corpus of natural language prompts for eBPF programs. We show
that KEN produces correct eBPF programs for 80% of the prompts, which is an
improvement by a factor of 2.67 compared to an LLM-empowered program synthesis baseline. | Artificial Intelligence |
What field is the article from? | Title: Federated Knowledge Graph Completion via Latent Embedding Sharing and Tensor Factorization
Abstract: Knowledge graphs (KGs), which consist of triples, are inherently incomplete
and always require completion procedure to predict missing triples. In
real-world scenarios, KGs are distributed across clients, complicating
completion tasks due to privacy restrictions. Many frameworks have been
proposed to address the issue of federated knowledge graph completion. However,
the existing frameworks, including FedE, FedR, and FEKG, have certain
limitations. = FedE poses a risk of information leakage, FedR's optimization
efficacy diminishes when there is minimal overlap among relations, and FKGE
suffers from computational costs and mode collapse issues. To address these
issues, we propose a novel method, i.e., Federated Latent Embedding Sharing
Tensor factorization (FLEST), which is a novel approach using federated tensor
factorization for KG completion. FLEST decomposes the embedding matrix and
enables sharing of latent dictionary embeddings to lower privacy risks.
Empirical results demonstrate FLEST's effectiveness and efficiency, offering a
balanced solution between performance and privacy. FLEST expands the
application of federated tensor factorization in KG completion tasks. | Machine Learning |
What field is the article from? | Title: Multi-scale Diffusion Denoised Smoothing
Abstract: Along with recent diffusion models, randomized smoothing has become one of a
few tangible approaches that offers adversarial robustness to models at scale,
e.g., those of large pre-trained models. Specifically, one can perform
randomized smoothing on any classifier via a simple "denoise-and-classify"
pipeline, so-called denoised smoothing, given that an accurate denoiser is
available, such as a diffusion model. In this paper, we present scalable methods
to address the current trade-off between certified robustness and accuracy in
denoised smoothing. Our key idea is to "selectively" apply smoothing among
multiple noise scales, coined multi-scale smoothing, which can be efficiently
implemented with a single diffusion model. This approach also suggests a new
objective to compare the collective robustness of multi-scale smoothed
classifiers, and questions which representation of diffusion model would
maximize the objective. To address this, we propose to further fine-tune
diffusion model (a) to perform consistent denoising whenever the original image
is recoverable, but (b) to generate rather diverse outputs otherwise. Our
experiments show that the proposed multi-scale smoothing scheme combined with
diffusion fine-tuning enables strong certified robustness available with high
noise level while maintaining its accuracy close to non-smoothed classifiers. | Machine Learning |
What field is the article from? | Title: Multimodal Machine Unlearning
Abstract: Machine Unlearning is the process of removing specific training data samples
and their corresponding effects from an already trained model. It has
significant practical benefits, such as purging private, inaccurate, or
outdated information from trained models without the need for complete
re-training. Unlearning within a multimodal setting presents unique challenges
due to the intrinsic dependencies between different data modalities and the
expensive cost of training on large multimodal datasets and architectures.
Current approaches to machine unlearning have not fully addressed these
challenges. To bridge this gap, we introduce MMUL, a machine unlearning
approach specifically designed for multimodal data and models. MMUL formulates
the multimodal unlearning task by focusing on three key properties: (a):
modality decoupling, which effectively decouples the association between
individual unimodal data points within multimodal inputs marked for deletion,
rendering them as unrelated data points within the model's context, (b):
unimodal knowledge retention, which retains the unimodal representation
capability of the model post-unlearning, and (c): multimodal knowledge
retention, which retains the multimodal representation capability of the model
post-unlearning. MMUL is efficient to train and is not constrained by the
requirement of using a strongly convex loss. Experiments on two multimodal
models and four multimodal benchmark datasets, including vision-language and
graph-language datasets, show that MMUL outperforms existing baselines, gaining
an average improvement of +17.6 points against the best-performing unimodal
baseline in distinguishing between deleted and remaining data. In addition,
MMUL can largely maintain pre-existing knowledge of the original model post
unlearning, with a performance gap of only 0.3 points compared to retraining a
new model from scratch. | Artificial Intelligence |
What field is the article from? | Title: On the Fairness ROAD: Robust Optimization for Adversarial Debiasing
Abstract: In the field of algorithmic fairness, significant attention has been put on
group fairness criteria, such as Demographic Parity and Equalized Odds.
Nevertheless, these objectives, measured as global averages, have raised
concerns about persistent local disparities between sensitive groups. In this
work, we address the problem of local fairness, which ensures that the
predictor is unbiased not only in terms of expectations over the whole
population, but also within any subregion of the feature space, unknown at
training time. To enforce this objective, we introduce ROAD, a novel approach
that leverages the Distributionally Robust Optimization (DRO) framework within
a fair adversarial learning objective, where an adversary tries to infer the
sensitive attribute from the predictions. Using an instance-level re-weighting
strategy, ROAD is designed to prioritize inputs that are likely to be locally
unfair, i.e. where the adversary faces the least difficulty in reconstructing
the sensitive attribute. Numerical experiments demonstrate the effectiveness of
our method: it achieves Pareto dominance with respect to local fairness and
accuracy for a given global fairness level across three standard datasets, and
also enhances fairness generalization under distribution shift. | Machine Learning |
What field is the article from? | Title: LLVMs4Protest: Harnessing the Power of Large Language and Vision Models for Deciphering Protests in the News
Abstract: Large language and vision models have transformed how social movements
scholars identify protest and extract key protest attributes from multi-modal
data such as texts, images, and videos. This article documents how we
fine-tuned two large pretrained transformer models, including longformer and
swin-transformer v2, to infer potential protests in news articles using textual
and imagery data. First, the longformer model was fine-tuned using the Dynamics
of Collective Action (DoCA) Corpus. We matched the New York Times articles with
the DoCA database to obtain a training dataset for downstream tasks. Second,
the swin-transformer v2 model was trained on UCLA-protest imagery data. The
UCLA-protest project contains labeled imagery data with information such as
protest, violence, and sign. Both fine-tuned models will be available via
\url{https://github.com/Joshzyj/llvms4protest}. We release this short technical
report for social movement scholars who are interested in using LLVMs to infer
protests in textual and imagery data. | Computer Vision |
What field is the article from? | Title: The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
Abstract: The alignment tuning process of large language models (LLMs) typically
involves instruction learning through supervised fine-tuning (SFT) and
preference tuning via reinforcement learning from human feedback (RLHF). A
recent study, LIMA (Zhou et al. 2023), shows that using merely 1K examples for
SFT can achieve significant alignment performance as well, suggesting that the
effect of alignment tuning might be "superficial." This raises questions about
how exactly the alignment tuning transforms a base LLM.
We analyze the effect of alignment tuning by examining the token distribution
shift between base LLMs and their aligned counterpart. Our findings reveal that
base LLMs and their alignment-tuned versions perform nearly identically in
decoding on the majority of token positions. Most distribution shifts occur
with stylistic tokens. This direct evidence strongly supports the Superficial
Alignment Hypothesis suggested by LIMA.
Based on these findings, we rethink the alignment of LLMs by posing the
research question: how effectively can we align base LLMs without SFT or RLHF?
To address this, we introduce a simple, tuning-free alignment method, URIAL.
URIAL achieves effective alignment purely through in-context learning (ICL)
with base LLMs, requiring as few as three constant stylistic examples and a
system prompt. We conduct a fine-grained and interpretable evaluation on a
diverse set of examples, named JUST-EVAL-INSTRUCT. Results demonstrate that
base LLMs with URIAL can match or even surpass the performance of LLMs aligned
with SFT or SFT+RLHF. We show that the gap between tuning-free and tuning-based
alignment methods can be significantly reduced through strategic prompting and
ICL. Our findings on the superficial nature of alignment tuning and results
with URIAL suggest that deeper analysis and theoretical understanding of
alignment is crucial to future LLM research. | Computational Linguistics |
What field is the article from? | Title: NestE: Modeling Nested Relational Structures for Knowledge Graph Reasoning
Abstract: Reasoning with knowledge graphs (KGs) has primarily focused on triple-shaped
facts. Recent advancements have been explored to enhance the semantics of these
facts by incorporating more potent representations, such as hyper-relational
facts. However, these approaches are limited to \emph{atomic facts}, which
describe a single piece of information. This paper extends beyond \emph{atomic
facts} and delves into \emph{nested facts}, represented by quoted triples where
subjects and objects are triples themselves (e.g., ((\emph{BarackObama},
\emph{holds\_position}, \emph{President}), \emph{succeed\_by},
(\emph{DonaldTrump}, \emph{holds\_position}, \emph{President}))). These nested
facts enable the expression of complex semantics like \emph{situations} over
time and \emph{logical patterns} over entities and relations. In response, we
introduce NestE, a novel KG embedding approach that captures the semantics of
both atomic and nested factual knowledge. NestE represents each atomic fact as
a $1\times3$ matrix, and each nested relation is modeled as a $3\times3$ matrix
that rotates the $1\times3$ atomic fact matrix through matrix multiplication.
Each element of the matrix is represented as a complex number in the
generalized 4D hypercomplex space, including (spherical) quaternions,
hyperbolic quaternions, and split-quaternions. Through thorough analysis, we
demonstrate the embedding's efficacy in capturing diverse logical patterns over
nested facts, surpassing the confines of first-order logic-like expressions.
Our experimental results showcase NestE's significant performance gains over
current baselines in triple prediction and conditional link prediction. The
code and pre-trained models are openly available at
https://github.com/xiongbo010/NestE. | Artificial Intelligence |
What field is the article from? | Title: Causality is all you need
Abstract: In the fundamental statistics course, students are taught to remember the
well-known saying: "Correlation is not Causation". Till now, statistics (i.e.,
correlation) have developed various successful frameworks, such as Transformer
and Pre-training large-scale models, which have stacked multiple parallel
self-attention blocks to imitate a wide range of tasks. However, in the
causation community, how to build an integrated causal framework still remains
an untouched domain despite its excellent intervention capabilities. In this
paper, we propose the Causal Graph Routing (CGR) framework, an integrated
causal scheme relying entirely on the intervention mechanisms to reveal the
cause-effect forces hidden in data. Specifically, CGR is composed of a stack of
causal layers. Each layer includes a set of parallel deconfounding blocks from
different causal graphs. We combine these blocks via the concept of the
proposed sufficient cause, which allows the model to dynamically select the
suitable deconfounding methods in each layer. CGR is implemented as the stacked
networks, integrating no confounder, back-door adjustment, front-door
adjustment, and probability of sufficient cause. We evaluate this framework on
two classical tasks of CV and NLP. Experiments show CGR can surpass the current
state-of-the-art methods on both Visual Question Answer and Long Document
Classification tasks. In particular, CGR has great potential in building the
"causal" pre-training large-scale model that effectively generalizes to diverse
tasks. It will improve the machines' comprehension of causal relationships
within a broader semantic space. | Artificial Intelligence |
What field is the article from? | Title: Enhancing IoT Security via Automatic Network Traffic Analysis: The Transition from Machine Learning to Deep Learning
Abstract: This work provides a comparative analysis illustrating how Deep Learning (DL)
surpasses Machine Learning (ML) in addressing tasks within Internet of Things
(IoT), such as attack classification and device-type identification. Our
approach involves training and evaluating a DL model using a range of diverse
IoT-related datasets, allowing us to gain valuable insights into how adaptable
and practical these models can be when confronted with various IoT
configurations. We initially convert the unstructured network traffic data from
IoT networks, stored in PCAP files, into images by processing the packet data.
This conversion process adapts the data to meet the criteria of DL
classification methods. The experiments showcase the ability of DL to surpass
the constraints tied to manually engineered features, achieving superior
results in attack detection and maintaining comparable outcomes in device-type
identification. Additionally, a notable feature extraction time difference
becomes evident in the experiments: traditional methods require around 29
milliseconds per data packet, while DL accomplishes the same task in just 2.9
milliseconds. The significant time gap, DL's superior performance, and the
recognized limitations of manually engineered features, presents a compelling
call to action within the IoT community. This encourages us to shift from
exploring new IoT features for each dataset to addressing the challenges of
integrating DL into IoT, making it a more efficient solution for real-world IoT
scenarios. | Cryptography and Security |
What field is the article from? | Title: A Framework to Assess (Dis)agreement Among Diverse Rater Groups
Abstract: Recent advancements in conversational AI have created an urgent need for
safety guardrails that prevent users from being exposed to offensive and
dangerous content. Much of this work relies on human ratings and feedback, but
does not account for the fact that perceptions of offense and safety are
inherently subjective and that there may be systematic disagreements between
raters that align with their socio-demographic identities. Instead, current
machine learning approaches largely ignore rater subjectivity and use gold
standards that obscure disagreements (e.g., through majority voting). In order
to better understand the socio-cultural leanings of such tasks, we propose a
comprehensive disagreement analysis framework to measure systematic diversity
in perspectives among different rater subgroups. We then demonstrate its
utility by applying this framework to a dataset of human-chatbot conversations
rated by a demographically diverse pool of raters. Our analysis reveals
specific rater groups that have more diverse perspectives than the rest, and
informs demographic axes that are crucial to consider for safety annotations. | Computational Linguistics |
What field is the article from? | Title: Data-driven building energy efficiency prediction based on envelope heat losses using physics-informed neural networks
Abstract: The analytical prediction of building energy performance in residential
buildings based on the heat losses of its individual envelope components is a
challenging task. It is worth noting that this field is still in its infancy,
with relatively limited research conducted in this specific area to date,
especially when it comes to data-driven approaches. In this paper, we introduce
a novel physics-informed neural network model for addressing this problem.
Through the employment of unexposed datasets that encompass general building
information, audited characteristics, and heating energy consumption, we feed
the deep learning model with general building information, while the model's
output consists of the structural components and several thermal properties
that are in fact the basic elements of an energy performance certificate (EPC).
On top of this neural network, a function, based on physics equations,
calculates the energy consumption of the building based on heat losses and
enhances the loss function of the deep learning model. This methodology is
tested on a real case study for 256 buildings located in Riga, Latvia. Our
investigation comes up with promising results in terms of prediction accuracy,
paving the way for automated, and data-driven energy efficiency performance
prediction based on basic properties of the building, contrary to exhaustive
energy efficiency audits led by humans, which are the current status quo. | Machine Learning |
What field is the article from? | Title: Synthetic Data as Validation
Abstract: This study leverages synthetic data as a validation set to reduce overfitting
and ease the selection of the best model in AI development. While synthetic
data have been used for augmenting the training set, we find that synthetic
data can also significantly diversify the validation set, offering marked
advantages in domains like healthcare, where data are typically limited,
sensitive, and from out-domain sources (i.e., hospitals). In this study, we
illustrate the effectiveness of synthetic data for early cancer detection in
computed tomography (CT) volumes, where synthetic tumors are generated and
superimposed onto healthy organs, thereby creating an extensive dataset for
rigorous validation. Using synthetic data as validation can improve AI
robustness in both in-domain and out-domain test sets. Furthermore, we
establish a new continual learning framework that continuously trains AI models
on a stream of out-domain data with synthetic tumors. The AI model trained and
validated in dynamically expanding synthetic data can consistently outperform
models trained and validated exclusively on real-world data. Specifically, the
DSC score for liver tumor segmentation improves from 26.7% (95% CI:
22.6%-30.9%) to 34.5% (30.8%-38.2%) when evaluated on an in-domain dataset and
from 31.1% (26.0%-36.2%) to 35.4% (32.1%-38.7%) on an out-domain dataset.
Importantly, the performance gain is particularly significant in identifying
very tiny liver tumors (radius < 5mm) in CT volumes, with Sensitivity improving
from 33.1% to 55.4% on an in-domain dataset and 33.9% to 52.3% on an out-domain
dataset, justifying the efficacy in early detection of cancer. The application
of synthetic data, from both training and validation perspectives, underlines a
promising avenue to enhance AI robustness when dealing with data from varying
domains. | Computer Vision |
What field is the article from? | Title: Challenges of Large Language Models for Mental Health Counseling
Abstract: The global mental health crisis is looming with a rapid increase in mental
disorders, limited resources, and the social stigma of seeking treatment. As
the field of artificial intelligence (AI) has witnessed significant
advancements in recent years, large language models (LLMs) capable of
understanding and generating human-like text may be used in supporting or
providing psychological counseling. However, the application of LLMs in the
mental health domain raises concerns regarding the accuracy, effectiveness, and
reliability of the information provided. This paper investigates the major
challenges associated with the development of LLMs for psychological
counseling, including model hallucination, interpretability, bias, privacy, and
clinical effectiveness. We explore potential solutions to these challenges that
are practical and applicable to the current paradigm of AI. From our experience
in developing and deploying LLMs for mental health, AI holds a great promise
for improving mental health care, if we can carefully navigate and overcome
pitfalls of LLMs. | Computational Linguistics |
What field is the article from? | Title: A method for recovery of multidimensional time series based on the detection of behavioral patterns and the use of autoencoders
Abstract: This article presents a method for recovering missing values in
multidimensional time series. The method combines neural network technologies
and an algorithm for searching snippets (behavioral patterns of a time series).
It includes the stages of data preprocessing, recognition and reconstruction,
using convolutional and recurrent neural networks. Experiments have shown high
accuracy of recovery and the advantage of the method over SOTA methods. | Artificial Intelligence |
What field is the article from? | Title: Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents
Abstract: Proactive dialogues serve as a practical yet challenging dialogue problem in
the era of large language models (LLMs), where the dialogue policy planning is
the key to improving the proactivity of LLMs. Most existing studies enable the
dialogue policy planning of LLMs using various prompting schemes or iteratively
enhance this capability in handling the given case with verbal AI feedback.
However, these approaches are either bounded by the policy planning capability
of the frozen LLMs or hard to transfer to new cases. In this work, we
introduce a new dialogue policy planning paradigm to strategize LLMs for
proactive dialogue problems with a tunable language model plug-in as a
plug-and-play dialogue policy planner, named PPDPP. Specifically, we develop a
novel training framework to facilitate supervised fine-tuning over available
human-annotated data as well as reinforcement learning from goal-oriented AI
feedback with dynamic interaction data collected by the LLM-based self-play
simulation. In this manner, the LLM-powered dialogue agent can not only be
generalized to different cases after the training, but also be applicable to
different applications by just substituting the learned plug-in. In addition,
we propose to evaluate the policy planning capability of dialogue systems under
the interactive setting. Experimental results demonstrate that PPDPP
consistently and substantially outperforms existing approaches on three
different proactive dialogue applications, including negotiation, emotional
support, and tutoring dialogues. | Computational Linguistics |
What field is the article from? | Title: How to Configure Good In-Context Sequence for Visual Question Answering
Abstract: Inspired by the success of Large Language Models in dealing with new tasks
via In-Context Learning (ICL) in NLP, researchers have also developed Large
Vision-Language Models (LVLMs) with ICL capabilities. However, when
implementing ICL using these LVLMs, researchers usually resort to the simplest
way like random sampling to configure the in-context sequence, thus leading to
sub-optimal results. To enhance the ICL performance, in this study, we use
Visual Question Answering (VQA) as case study to explore diverse in-context
configurations to find the powerful ones. Additionally, through observing the
changes of the LVLM outputs by altering the in-context sequence, we gain
insights into the inner properties of LVLMs, improving our understanding of
them. Specifically, to explore in-context configurations, we design diverse
retrieval methods and employ different strategies to manipulate the retrieved
demonstrations. Through exhaustive experiments on three VQA datasets: VQAv2,
VizWiz, and OK-VQA, we uncover three important inner properties of the applied
LVLM and demonstrate which strategies can consistently improve the ICL VQA
performance. Our code is provided in:
https://github.com/GaryJiajia/OFv2_ICL_VQA. | Computer Vision |
What field is the article from? | Title: M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place
Abstract: With the advent of large language models and large-scale robotic datasets,
there has been tremendous progress in high-level decision-making for object
manipulation. These generic models are able to interpret complex tasks using
language commands, but they often have difficulties generalizing to
out-of-distribution objects due to the inability of low-level action
primitives. In contrast, existing task-specific models excel in low-level
manipulation of unknown objects, but only work for a single type of action. To
bridge this gap, we present M2T2, a single model that supplies different types
of low-level actions that work robustly on arbitrary objects in cluttered
scenes. M2T2 is a transformer model which reasons about contact points and
predicts valid gripper poses for different action modes given a raw point cloud
of the scene. Trained on a large-scale synthetic dataset with 128K scenes, M2T2
achieves zero-shot sim2real transfer on the real robot, outperforming the
baseline system with state-of-the-art task-specific models by about 19% in
overall performance and 37.5% in challenging scenes where the object needs to
be re-oriented for collision-free placement. M2T2 also achieves
state-of-the-art results on a subset of language conditioned tasks in RLBench.
Videos of robot experiments on unseen objects in both real world and simulation
are available on our project website https://m2-t2.github.io. | Robotics |
What field is the article from? | Title: AI Agent as Urban Planner: Steering Stakeholder Dynamics in Urban Planning via Consensus-based Multi-Agent Reinforcement Learning
Abstract: In urban planning, land use readjustment plays a pivotal role in aligning
land use configurations with the current demands for sustainable urban
development. However, present-day urban planning practices face two main
issues. Firstly, land use decisions are predominantly dependent on human
experts. Besides, while resident engagement in urban planning can promote urban
sustainability and livability, it is challenging to reconcile the diverse
interests of stakeholders. To address these challenges, we introduce a
Consensus-based Multi-Agent Reinforcement Learning framework for real-world
land use readjustment. This framework serves participatory urban planning,
allowing diverse intelligent agents as stakeholder representatives to vote for
preferred land use types. Within this framework, we propose a novel consensus
mechanism in reward design to optimize land utilization through collective
decision making. To abstract the structure of the complex urban system, the
geographic information of cities is transformed into a spatial graph structure
and then processed by graph neural networks. Comprehensive experiments on both
traditional top-down planning and participatory planning methods from
real-world communities indicate that our computational framework enhances
global benefits and accommodates diverse interests, leading to improved
satisfaction across different demographic groups. By integrating Multi-Agent
Reinforcement Learning, our framework ensures that participatory urban planning
decisions are more dynamic and adaptive to evolving community needs and
provides a robust platform for automating complex real-world urban planning
processes. | Artificial Intelligence |
What field is the article from? | Title: Data Acquisition: A New Frontier in Data-centric AI
Abstract: As Machine Learning (ML) systems continue to grow, the demand for relevant
and comprehensive datasets becomes imperative. There is limited study on the
challenges of data acquisition due to ad-hoc processes and lack of consistent
methodologies. We first present an investigation of current data marketplaces,
revealing a lack of platforms offering detailed information about datasets,
transparent pricing, and standardized data formats. With the objective of inciting
participation from the data-centric AI community, we then introduce the DAM
challenge, a benchmark to model the interaction between the data providers and
acquirers. The benchmark was released as a part of DataPerf. Our evaluation of
the submitted strategies underlines the need for effective data acquisition
strategies in ML. | Artificial Intelligence |
What field is the article from? | Title: SENetV2: Aggregated dense layer for channelwise and global representations
Abstract: Convolutional Neural Networks (CNNs) have revolutionized image classification
by extracting spatial features and enabling state-of-the-art accuracy in
vision-based tasks. The proposed squeeze-and-excitation network module gathers
channel-wise representations of the input. Multilayer perceptrons (MLPs) learn
global representations from the data and are used in most image classification
models to learn the extracted image features. In this paper, we introduce a
novel aggregated multilayer perceptron, a multi-branch dense layer, within the
Squeeze excitation residual module designed to surpass the performance of
existing architectures. Our approach leverages a combination of squeeze
excitation network module with dense layers. This fusion enhances the network's
ability to capture channel-wise patterns and have global knowledge, leading to
a better feature representation. This proposed model has a negligible increase
in parameters when compared to SENet. We conduct extensive experiments on
benchmark datasets to validate the model and compare them with established
architectures. Experimental results demonstrate a remarkable increase in the
classification accuracy of the proposed model. | Computer Vision |
What field is the article from? | Title: Survey on Foundation Models for Prognostics and Health Management in Industrial Cyber-Physical Systems
Abstract: Industrial Cyber-Physical Systems (ICPS) integrate the disciplines of
computer science, communication technology, and engineering, and have emerged
as integral components of contemporary manufacturing and industries. However,
ICPS encounters various challenges in long-term operation, including equipment
failures, performance degradation, and security threats. To achieve efficient
maintenance and management, prognostics and health management (PHM) finds
widespread application in ICPS for critical tasks, including failure
prediction, health monitoring, and maintenance decision-making. The emergence
of large-scale foundation models (LFMs) like BERT and GPT signifies a
significant advancement in AI technology, and ChatGPT stands as a remarkable
accomplishment within this research paradigm, harboring potential for General
Artificial Intelligence. Considering the ongoing enhancement in data
acquisition technology and data processing capability, LFMs are anticipated to
assume a crucial role in the PHM domain of ICPS. However, at present, a
consensus is lacking regarding the application of LFMs to PHM in ICPS,
necessitating systematic reviews and roadmaps to elucidate future directions.
To bridge this gap, this paper elucidates the key components and recent
advances in the underlying model. A comprehensive examination and comprehension
of the latest advances in grand modeling for PHM in ICPS can offer valuable
references for decision makers and researchers in the industrial field while
facilitating further enhancements in the reliability, availability, and safety
of ICPS. | Artificial Intelligence |
What field is the article from? | Title: Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models
Abstract: Identifying and resolving logic errors can be one of the most frustrating
challenges for novice programmers. Unlike syntax errors, for which a compiler
or interpreter can issue a message, logic errors can be subtle. In certain
conditions, buggy code may even exhibit correct behavior -- in other cases, the
issue might be about how a problem statement has been interpreted. Such errors
can be hard to spot when reading the code, and they can also at times be missed
by automated tests. There is great educational potential in automatically
detecting logic errors, especially when paired with suitable feedback for
novices. Large language models (LLMs) have recently demonstrated surprising
performance for a range of computing tasks, including generating and explaining
code. These capabilities are closely linked to code syntax, which aligns with
the next token prediction behavior of LLMs. On the other hand, logic errors
relate to the runtime performance of code and thus may not be as well suited to
analysis by LLMs. To explore this, we investigate the performance of two
popular LLMs, GPT-3 and GPT-4, for detecting and providing a novice-friendly
explanation of logic errors. We compare LLM performance with a large cohort of
introductory computing students $(n=964)$ solving the same error detection
task. Through a mixed-methods analysis of student and model responses, we
observe significant improvement in logic error identification between the
previous and current generation of LLMs, and find that both LLM generations
significantly outperform students. We outline how such models could be
integrated into computing education tools, and discuss their potential for
supporting students when learning programming. | Human-Computer Interaction |
What field is the article from? | Title: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World
Abstract: Reinforcement learning (RL) with dense rewards and imitation learning (IL)
with human-generated trajectories are the most widely used approaches for
training modern embodied agents. RL requires extensive reward shaping and
auxiliary losses and is often too slow and ineffective for long-horizon tasks.
While IL with human supervision is effective, collecting human trajectories at
scale is extremely expensive. In this work, we show that imitating
shortest-path planners in simulation produces agents that, given a language
instruction, can proficiently navigate, explore, and manipulate objects in both
simulation and in the real world using only RGB sensors (no depth map or GPS
coordinates). This surprising result is enabled by our end-to-end,
transformer-based, SPOC architecture, powerful visual encoders paired with
extensive image augmentation, and the dramatic scale and diversity of our
training data: millions of frames of shortest-path-expert trajectories
collected inside approximately 200,000 procedurally generated houses containing
40,000 unique 3D assets. Our models, data, training code, and newly proposed
10-task benchmarking suite CHORES will be open-sourced. | Robotics |
What field is the article from? | Title: Guarding Barlow Twins Against Overfitting with Mixed Samples
Abstract: Self-supervised Learning (SSL) aims to learn transferable feature
representations for downstream applications without relying on labeled data.
The Barlow Twins algorithm, renowned for its widespread adoption and
straightforward implementation compared to its counterparts like contrastive
learning methods, minimizes feature redundancy while maximizing invariance to
common corruptions. Optimizing for the above objective forces the network to
learn useful representations, while avoiding noisy or constant features,
resulting in improved downstream task performance with limited adaptation.
Despite Barlow Twins' proven effectiveness in pre-training, the underlying SSL
objective can inadvertently cause feature overfitting due to the lack of strong
interaction between the samples, unlike contrastive learning approaches.
From our experiments, we observe that optimizing for the Barlow Twins objective
doesn't necessarily guarantee sustained improvements in representation quality
beyond a certain pre-training phase, and can potentially degrade downstream
performance on some datasets. To address this challenge, we introduce Mixed
Barlow Twins, which aims to improve sample interaction during Barlow Twins
training via linearly interpolated samples. This results in an additional
regularization term to the original Barlow Twins objective, assuming linear
interpolation in the input space translates to linearly interpolated features
in the feature space. Pre-training with this regularization effectively
mitigates feature overfitting and further enhances the downstream performance
on CIFAR-10, CIFAR-100, TinyImageNet, STL-10, and ImageNet datasets. The code
and checkpoints are available at: https://github.com/wgcban/mix-bt.git | Computer Vision |
What field is the article from? | Title: Multi-Scale and Multi-Modal Contrastive Learning Network for Biomedical Time Series
Abstract: Multi-modal biomedical time series (MBTS) data offers a holistic view of the
physiological state, holding significant importance in various bio-medical
applications. Owing to inherent noise and distribution gaps across different
modalities, MBTS can be complex to model. Various deep learning models have
been developed to learn representations of MBTS but still fall short in
robustness because they ignore modal-to-modal variations. This paper
presents a multi-scale and multi-modal biomedical time series representation
learning (MBSL) network with contrastive learning to mitigate these variations.
Firstly, MBTS is grouped based on inter-modal distances; then each group with
minimum intra-modal variations can be effectively modeled by individual
encoders. Besides, to enhance the multi-scale feature extraction (encoder),
various patch lengths and mask ratios are designed to generate tokens with
semantic information at different scales and diverse contextual perspectives
respectively. Finally, cross-modal contrastive learning is proposed to maximize
consistency among inter-modal groups, maintaining useful information and
eliminating noises. Experiments against four bio-medical applications show that
MBSL outperforms state-of-the-art models by 33.9% mean average errors (MAE) in
respiration rate, by 13.8% MAE in exercise heart rate, by 1.41% accuracy in
human activity recognition, and by 1.14% F1-score in obstructive sleep
apnea-hypopnea syndrome. | Machine Learning |
What field is the article from? | Title: LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing
Abstract: LLaVA-Interactive is a research prototype for multimodal human-AI
interaction. The system can have multi-turn dialogues with human users by
taking multimodal user inputs and generating multimodal responses. Importantly,
LLaVA-Interactive goes beyond language prompt, where visual prompt is enabled
to align human intents in the interaction. The development of LLaVA-Interactive
is extremely cost-efficient as the system combines three multimodal skills of
pre-built AI models without additional model training: visual chat of LLaVA,
image segmentation from SEEM, as well as image generation and editing from
GLIGEN. A diverse set of application scenarios is presented to demonstrate the
promises of LLaVA-Interactive and to inspire future research in multimodal
interactive systems. | Computer Vision |
What field is the article from? | Title: Identifying Spurious Correlations using Counterfactual Alignment
Abstract: Models driven by spurious correlations often yield poor generalization
performance. We propose the counterfactual alignment method to detect and
explore spurious correlations of black box classifiers. Counterfactual images
generated with respect to one classifier can be input into other classifiers to
see if they also induce changes in the outputs of these classifiers. The
relationship between these responses can be quantified and used to identify
specific instances where a spurious correlation exists as well as compute
aggregate statistics over a dataset. Our work demonstrates the ability to
detect spurious correlations in face attribute classifiers. This is validated
by observing intuitive trends in a face attribute classifier as well as
fabricating spurious correlations and detecting their presence, both visually
and quantitatively. Further, utilizing the CF alignment method, we demonstrate
that we can rectify spurious correlations identified in classifiers. | Computer Vision |
What field is the article from? | Title: FOCAL: A Cost-Aware Video Dataset for Active Learning
Abstract: In this paper, we introduce the FOCAL (Ford-OLIVES Collaboration on Active
Learning) dataset which enables the study of the impact of annotation-cost
within a video active learning setting. Annotation-cost refers to the time it
takes an annotator to label and quality-assure a given video sequence. A
practical motivation for active learning research is to minimize
annotation-cost by selectively labeling informative samples that will maximize
performance within a given budget constraint. However, previous work in video
active learning lacks real-time annotation labels for accurately assessing cost
minimization and instead operates under the assumption that annotation-cost
scales linearly with the amount of data to annotate. This assumption does not
take into account a variety of real-world confounding factors that contribute
to a nonlinear cost such as the effect of an assistive labeling tool and the
variety of interactions within a scene such as occluded objects, weather, and
motion of objects. FOCAL addresses this discrepancy by providing real
annotation-cost labels for 126 video sequences across 69 unique city scenes
with a variety of weather, lighting, and seasonal conditions. We also introduce
a set of conformal active learning algorithms that take advantage of the
sequential structure of video data in order to achieve a better trade-off
between annotation-cost and performance while also reducing floating point
operations (FLOPS) overhead by at least 77.67%. We show how these approaches
better reflect how annotations on videos are done in practice through a
sequence selection framework. We further demonstrate the advantage of these
approaches by introducing two performance-cost metrics and show that the best
conformal active learning method is cheaper than the best traditional active
learning method by 113 hours. | Computer Vision |
What field is the article from? | Title: Conformal Prediction in Multi-User Settings: An Evaluation
Abstract: Typically, machine learning models are trained and evaluated without making
any distinction between users (e.g., using traditional hold-out and
cross-validation). However, this produces inaccurate performance metrics
estimates in multi-user settings. That is, situations where the data were
collected by multiple users with different characteristics (e.g., age, gender,
height, etc.) which is very common in user computer interaction and medical
applications. For these types of scenarios, model evaluation strategies that
provide better performance estimates have been proposed such as mixed,
user-independent, user-dependent, and user-adaptive models. Although those
strategies are better suited for multi-user systems, they are typically
assessed with respect to performance metrics that capture the overall behavior
of the models and do not provide any performance guarantees for individual
predictions, nor do they provide any feedback about the predictions' uncertainty.
In order to overcome those limitations, in this work we evaluated the conformal
prediction framework in several multi-user settings. Conformal prediction is a
model agnostic method that provides confidence guarantees on the predictions,
thus, increasing the trustworthiness and robustness of the models. We conducted
extensive experiments using different evaluation strategies and found
significant differences in terms of conformal performance measures. We also
proposed several visualizations based on matrices, graphs, and charts that
capture different aspects of the resulting prediction sets. | Machine Learning |
What field is the article from? | Title: BioLORD-2023: Semantic Textual Representations Fusing LLM and Clinical Knowledge Graph Insights
Abstract: In this study, we investigate the potential of Large Language Models to
complement biomedical knowledge graphs in the training of semantic models for
the biomedical and clinical domains. Drawing on the wealth of the UMLS
knowledge graph and harnessing cutting-edge Large Language Models, we propose a
new state-of-the-art approach for obtaining high-fidelity representations of
biomedical concepts and sentences, consisting of three steps: an improved
contrastive learning phase, a novel self-distillation phase, and a weight
averaging phase. Through rigorous evaluations via the extensive BioLORD testing
suite and diverse downstream tasks, we demonstrate consistent and substantial
performance improvements over the previous state of the art (e.g. +2pts on
MedSTS, +2.5pts on MedNLI-S, +6.1pts on EHR-Rel-B). Besides our new
state-of-the-art biomedical model for English, we also distill and release a
multilingual model compatible with 50+ languages and finetuned on 7 European
languages. Many clinical pipelines can benefit from our latest models. Our new
multilingual model enables a range of languages to benefit from our
advancements in biomedical semantic representation learning, opening a new
avenue for bioinformatics researchers around the world. As a result, we hope to
see BioLORD-2023 becoming a precious tool for future biomedical applications. | Computational Linguistics |
What field is the article from? | Title: Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in
various tasks. However, their suitability for domain-specific tasks is limited
due to their immense scale at deployment, susceptibility to misinformation, and
more importantly, high data annotation costs. We propose a novel Interactive
Multi-Fidelity Learning (IMFL) framework for the cost-effective development of
small domain-specific LMs under limited annotation budgets. Our approach
formulates the domain-specific fine-tuning process as a multi-fidelity learning
problem, focusing on identifying the optimal acquisition strategy that balances
between low-fidelity automatic LLM annotations and high-fidelity human
annotations to maximize model performance. We further propose an
exploration-exploitation query strategy that enhances annotation diversity and
informativeness, incorporating two innovative designs: 1) prompt retrieval that
selects in-context examples from human-annotated samples to improve LLM
annotation, and 2) variable batch size that controls the order for choosing
each fidelity to facilitate knowledge distillation, ultimately enhancing
annotation quality. Extensive experiments on financial and medical tasks
demonstrate that IMFL achieves superior performance compared with single
fidelity annotations. Given a limited budget of human annotation, IMFL
significantly outperforms the human annotation baselines in all four tasks and
achieves very close performance as human annotations on two of the tasks. These
promising results suggest that the high human annotation costs in
domain-specific tasks can be significantly reduced by employing IMFL, which
utilizes fewer human annotations, supplemented with cheaper and faster LLM
(e.g., GPT-3.5) annotations to achieve comparable performance. | Computational Linguistics |
What field is the article from? | Title: Generating High-Resolution Regional Precipitation Using Conditional Diffusion Model
Abstract: Climate downscaling is a crucial technique within climate research, serving
to project low-resolution (LR) climate data to higher resolutions (HR).
Previous research has demonstrated the effectiveness of deep learning for
downscaling tasks. However, most deep learning models for climate downscaling
may not perform optimally for high scaling factors (i.e., 4x, 8x) due to their
limited ability to capture the intricate details required for generating HR
climate data. Furthermore, climate data behaves differently from image data,
necessitating a nuanced approach when employing deep generative models. In
response to these challenges, this paper presents a deep generative model for
downscaling climate data, specifically precipitation on a regional scale. We
employ a denoising diffusion probabilistic model (DDPM) conditioned on multiple
LR climate variables. The proposed model is evaluated using precipitation data
from the Community Earth System Model (CESM) v1.2.2 simulation. Our results
demonstrate significant improvements over existing baselines, underscoring the
effectiveness of the conditional diffusion model in downscaling climate data. | Machine Learning |
What field is the article from? | Title: HADES: Fast Singularity Detection with Local Measure Comparison
Abstract: We introduce Hades, an unsupervised algorithm to detect singularities in
data. This algorithm employs a kernel goodness-of-fit test, and as a
consequence it is much faster and far more scaleable than the existing
topology-based alternatives. Using tools from differential geometry and optimal
transport theory, we prove that Hades correctly detects singularities with high
probability when the data sample lives on a transverse intersection of
equidimensional manifolds. In computational experiments, Hades recovers
singularities in synthetically generated data, branching points in road network
data, intersection rings in molecular conformation space, and anomalies in
image data. | Machine Learning |
What field is the article from? | Title: TrackDiffusion: Multi-object Tracking Data Generation via Diffusion Models
Abstract: Diffusion models have gained prominence in generating data for perception
tasks such as image classification and object detection. However, the potential
in generating high-quality tracking sequences, a crucial aspect in the field of
video perception, has not been fully investigated. To address this gap, we
propose TrackDiffusion, a novel architecture designed to generate continuous
video sequences from the tracklets. TrackDiffusion represents a significant
departure from the traditional layout-to-image (L2I) generation and copy-paste
synthesis focusing on static image elements like bounding boxes by empowering
image diffusion models to encompass dynamic and continuous tracking
trajectories, thereby capturing complex motion nuances and ensuring instance
consistency among video frames. For the first time, we demonstrate that the
generated video sequences can be utilized for training multi-object tracking
(MOT) systems, leading to significant improvement in tracker performance.
Experimental results show that our model significantly enhances instance
consistency in generated video sequences, leading to improved perceptual
metrics. Our approach achieves an improvement of 8.7 in TrackAP and 11.8 in
TrackAP$_{50}$ on the YTVIS dataset, underscoring its potential to redefine the
standards of video data generation for MOT tasks and beyond. | Computer Vision |
What field is the article from? | Title: Improving Minority Stress Detection with Emotions
Abstract: Psychological stress detection is an important task for mental healthcare
research, but there has been little prior work investigating the effectiveness
of psychological stress models on minority individuals, who are especially
vulnerable to poor mental health outcomes. In this work, we use the related
task of minority stress detection to evaluate the ability of psychological
stress models to understand the language of sexual and gender minorities. We
find that traditional psychological stress models underperform on minority
stress detection, and we propose using emotion-infused models to reduce that
performance disparity. We further demonstrate that multi-task psychological
stress models outperform the current state-of-the-art for minority stress
detection without directly training on minority stress data. We provide
explanatory analysis showing that minority communities have different
distributions of emotions than the general population and that emotion-infused
models improve the performance of stress models on underrepresented groups
because of their effectiveness in low-data environments, and we propose that
integrating emotions may benefit underrepresented groups in other mental health
detection tasks. | Computational Linguistics |
What field is the article from? | Title: Learning Decentralized Traffic Signal Controllers with Multi-Agent Graph Reinforcement Learning
Abstract: This paper considers optimal traffic signal control in smart cities, which
has been taken as a complex networked system control problem. Given the
interacting dynamics among traffic lights and road networks, attaining
controller adaptivity and scalability stands out as a primary challenge.
Capturing the spatial-temporal correlation among traffic lights under the
framework of Multi-Agent Reinforcement Learning (MARL) is a promising solution.
Nevertheless, existing MARL algorithms ignore effective information aggregation
which is fundamental for improving the learning capacity of decentralized
agents. In this paper, we design a new decentralized control architecture with
improved environmental observability to capture the spatial-temporal
correlation. Specifically, we first develop a topology-aware information
aggregation strategy to extract correlation-related information from
unstructured data gathered in the road network. Particularly, we transfer the
road network topology into a graph shift operator by forming a diffusion
process on the topology, which subsequently facilitates the construction of
graph signals. A diffusion convolution module is developed, forming a new MARL
algorithm, which endows agents with the capabilities of graph learning.
Extensive experiments based on both synthetic and real-world datasets verify
that our proposal outperforms existing decentralized algorithms. | Machine Learning |
What field is the article from? | Title: Instruct and Extract: Instruction Tuning for On-Demand Information Extraction
Abstract: Large language models with instruction-following capabilities open the door
to a wider group of users. However, when it comes to information extraction - a
classic task in natural language processing - most task-specific systems cannot
align well with long-tail ad hoc extraction use cases for non-expert users. To
address this, we propose a novel paradigm, termed On-Demand Information
Extraction, to fulfill the personalized demands of real-world users. Our task
aims to follow the instructions to extract the desired content from the
associated text and present it in a structured tabular format. The table
headers can either be user-specified or inferred contextually by the model. To
facilitate research in this emerging area, we present a benchmark named
InstructIE, inclusive of both automatically generated training data, as well as
the human-annotated test set. Building on InstructIE, we further develop an
On-Demand Information Extractor, ODIE. Comprehensive evaluations on our
benchmark reveal that ODIE substantially outperforms the existing open-source
models of similar size. Our code and dataset are released on
https://github.com/yzjiao/On-Demand-IE. | Computational Linguistics |
What field is the article from? | Title: Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering
Abstract: We address the task of evidence retrieval for long document question
answering, which involves locating relevant paragraphs within a document to
answer a question. We aim to assess the applicability of large language models
(LLMs) in the task of zero-shot long document evidence retrieval, owing to
their unprecedented performance across various NLP tasks. However, currently
the LLMs can consume limited context lengths as input, thus providing document
chunks as inputs might overlook the global context while missing out on
capturing the inter-segment dependencies. Moreover, directly feeding the large
input sets can incur significant computational costs, particularly when
processing the entire document (and potentially incurring monetary expenses
with enterprise APIs like OpenAI's GPT variants). To address these challenges,
we propose a suite of techniques that exploit the discourse structure commonly
found in documents. By utilizing this structure, we create a condensed
representation of the document, enabling a more comprehensive understanding and
analysis of relationships between different parts. We retain $99.6\%$ of the
best zero-shot approach's performance, while processing only $26\%$ of the
total tokens used by the best approach in the information-seeking evidence
retrieval setup. We also show how our approach can be combined with
\textit{self-ask} reasoning agent to achieve best zero-shot performance in
complex multi-hop question answering, just $\approx 4\%$ short of zero-shot
performance using gold evidence. | Computational Linguistics |
What field is the article from? | Title: Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets
Abstract: The evaluation of the fidelity of eXplainable Artificial Intelligence (XAI)
methods to their underlying models is a challenging task, primarily due to the
absence of a ground truth for explanations. However, assessing fidelity is a
necessary step for ensuring a correct XAI methodology. In this study, we
conduct a fair and objective comparison of the current state-of-the-art XAI
methods by introducing three novel image datasets with reliable ground truth
for explanations. The primary objective of this comparison is to identify
methods with low fidelity and eliminate them from further research, thereby
promoting the development of more trustworthy and effective XAI techniques. Our
results demonstrate that XAI methods based on the backpropagation of output
information to input yield higher accuracy and reliability compared to methods
relying on sensitivity analysis or Class Activation Maps (CAM). However, the
backpropagation method tends to generate more noisy saliency maps. These
findings have significant implications for the advancement of XAI methods,
enabling the elimination of erroneous explanations and fostering the
development of more robust and reliable XAI. | Computer Vision |
What field is the article from? | Title: The logic of NTQR evaluations of noisy AI agents: Complete postulates and logically consistent error correlations
Abstract: In his "ship of state" allegory (\textit{Republic}, Book VI, 488) Plato poses
a question -- how can a crew of sailors presumed to know little about the art
of navigation recognize the true pilot among them? The allegory argues that a
simple majority voting procedure cannot safely determine who is most qualified
to pilot a ship when the voting members are ignorant or biased. We formalize
Plato's concerns by considering the problem in AI safety of monitoring noisy AI
agents in unsupervised settings. An algorithm evaluating AI agents using
unlabeled data would be subject to the evaluation dilemma - how would we know
the evaluation algorithm was correct itself? This endless validation chain can
be avoided by considering purely algebraic functions of the observed responses.
We can construct complete postulates that can prove or disprove the logical
consistency of any grading algorithm. A complete set of postulates exists
whenever we are evaluating $N$ experts that took $T$ tests with $Q$ questions
with $R$ responses each. We discuss evaluating binary classifiers that have
taken a single test - the $(N,T=1,Q,R=2)$ tests. We show how some of the
postulates have been previously identified in the ML literature but not
recognized as such - the \textbf{agreement equations} of Platanios. The
complete postulates for pair correlated binary classifiers are considered and
we show how it allows for error correlations to be quickly calculated. An
algebraic evaluator based on the assumption that the ensemble is error
independent is compared with grading by majority voting on evaluations using
the \uciadult and \texttt{two-norm} datasets. Throughout, we demonstrate
how the formalism of logical consistency via algebraic postulates of evaluation
can help increase the safety of machines using AI algorithms. | Artificial Intelligence |
What field is the article from? | Title: Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations
Abstract: We introduce Llama Guard, an LLM-based input-output safeguard model geared
towards Human-AI conversation use cases. Our model incorporates a safety risk
taxonomy, a valuable tool for categorizing a specific set of safety risks found
in LLM prompts (i.e., prompt classification). This taxonomy is also
instrumental in classifying the responses generated by LLMs to these prompts, a
process we refer to as response classification. For the purpose of both prompt
and response classification, we have meticulously gathered a dataset of high
quality. Llama Guard, a Llama2-7b model that is instruction-tuned on our
collected dataset, albeit low in volume, demonstrates strong performance on
existing benchmarks such as the OpenAI Moderation Evaluation dataset and
ToxicChat, where its performance matches or exceeds that of currently available
content moderation tools. Llama Guard functions as a language model, carrying
out multi-class classification and generating binary decision scores.
Furthermore, the instruction fine-tuning of Llama Guard allows for the
customization of tasks and the adaptation of output formats. This feature
enhances the model's capabilities, such as enabling the adjustment of taxonomy
categories to align with specific use cases, and facilitating zero-shot or
few-shot prompting with diverse taxonomies at the input. We are making Llama
Guard model weights available and we encourage researchers to further develop
and adapt them to meet the evolving needs of the community for AI safety. | Computational Linguistics |
What field is the article from? | Title: Reinforcement Learning-Based Bionic Reflex Control for Anthropomorphic Robotic Grasping exploiting Domain Randomization
Abstract: Achieving human-level dexterity in robotic grasping remains a challenging
endeavor. Robotic hands frequently encounter slippage and deformation during
object manipulation, issues rarely encountered by humans due to their sensory
receptors, experiential learning, and motor memory. The emulation of the human
grasping reflex within robotic hands is referred to as the ``bionic reflex".
Past endeavors in the realm of bionic reflex control predominantly relied on
model-based and supervised learning approaches, necessitating human
intervention during thresholding and labeling tasks. In this study, we
introduce an innovative bionic reflex control pipeline, leveraging
reinforcement learning (RL); thereby eliminating the need for human
intervention during control design. Our proposed bionic reflex controller has
been designed and tested on an anthropomorphic hand, manipulating deformable
objects in the PyBullet physics simulator, incorporating domain randomization
(DR) for enhanced Sim2Real transferability. Our findings underscore the promise
of RL as a potent tool for advancing bionic reflex control within
anthropomorphic robotic hands. We anticipate that this autonomous, RL-based
bionic reflex controller will catalyze the development of dependable and highly
efficient robotic and prosthetic hands, revolutionizing human-robot interaction
and assistive technologies. | Robotics |
What field is the article from? | Title: What a Whole Slide Image Can Tell? Subtype-guided Masked Transformer for Pathological Image Captioning
Abstract: Pathological captioning of Whole Slide Images (WSIs), though is essential in
computer-aided pathological diagnosis, has rarely been studied due to the
limitations in datasets and model training efficacy. In this paper, we propose
a new paradigm Subtype-guided Masked Transformer (SGMT) for pathological
captioning based on Transformers, which treats a WSI as a sequence of sparse
patches and generates an overall caption sentence from the sequence. An
accompanying subtype prediction is introduced into SGMT to guide the training
process and enhance the captioning accuracy. We also present an Asymmetric
Masked Mechanism approach to tackle the large size constraint of pathological
image captioning, where the numbers of sequencing patches in SGMT are sampled
differently in the training and inferring phases, respectively. Experiments on
the PatchGastricADC22 dataset demonstrate that our approach effectively adapts
to the task with a transformer-based model and achieves superior performance
to traditional RNN-based methods. Our code will be made available for
further research and development. | Computer Vision |
What field is the article from? | Title: Advancing State of the Art in Language Modeling
Abstract: Generalization is arguably the most important goal of statistical language
modeling research. Publicly available benchmarks and papers published with an
open-source code have been critical to advancing the field. However, it is
often very difficult, and sometimes even impossible, to reproduce the results
fully as reported in publications. In this paper, we propose a simple framework
that should help advance the state of the art in language modeling in terms of
generalization. We propose to publish not just the code, but also probabilities
on dev and test sets with future publications so that one can easily add the
new model into an ensemble. This has crucial advantages: it is much easier to
determine whether a newly proposed model is actually complementary to the
current baseline. Therefore, instead of inventing new names for the old tricks,
the scientific community can advance faster. Finally, this approach promotes
diversity of ideas: one does not need to create an individual model that is the
new state of the art to attract attention; it will be sufficient to develop a
new model that learns patterns which other models do not. Thus, even a
suboptimal model can be found to have value. Remarkably, our approach has
yielded new state-of-the-art results across various language modeling
benchmarks up to 10%. | Computational Linguistics |
What field is the article from? | Title: Applying Large Language Models to Power Systems: Potential Security Threats
Abstract: Applying large language models (LLMs) to power systems presents a promising
avenue for enhancing decision-making and operational efficiency. However, this
action may also incur potential security threats, which have not been fully
recognized so far. To this end, this letter analyzes potential threats incurred
by applying LLMs to power systems, emphasizing the need for urgent research and
development of countermeasures. | Artificial Intelligence |
What field is the article from? | Title: OccWorld: Learning a 3D Occupancy World Model for Autonomous Driving
Abstract: Understanding how the 3D scene evolves is vital for making decisions in
autonomous driving. Most existing methods achieve this by predicting the
movements of object boxes, which cannot capture more fine-grained scene
information. In this paper, we explore a new framework of learning a world
model, OccWorld, in the 3D Occupancy space to simultaneously predict the
movement of the ego car and the evolution of the surrounding scenes. We propose
to learn a world model based on 3D occupancy rather than 3D bounding boxes and
segmentation maps for three reasons: 1) expressiveness. 3D occupancy can
describe the more fine-grained 3D structure of the scene; 2) efficiency. 3D
occupancy is more economical to obtain (e.g., from sparse LiDAR points). 3)
versatility. 3D occupancy can adapt to both vision and LiDAR. To facilitate the
modeling of the world evolution, we learn a reconstruction-based scene
tokenizer on the 3D occupancy to obtain discrete scene tokens to describe the
surrounding scenes. We then adopt a GPT-like spatial-temporal generative
transformer to generate subsequent scene and ego tokens to decode the future
occupancy and ego trajectory. Extensive experiments on the widely used nuScenes
benchmark demonstrate the ability of OccWorld to effectively model the
evolution of the driving scenes. OccWorld also produces competitive planning
results without using instance and map supervision. Code:
https://github.com/wzzheng/OccWorld. | Computer Vision |
What field is the article from? | Title: METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities
Abstract: Large-Language Models (LLMs) have shifted the paradigm of natural language
data processing. However, their black-boxed and probabilistic characteristics
can lead to potential risks in the quality of outputs in diverse LLM
applications. Recent studies have tested Quality Attributes (QAs), such as
robustness or fairness, of LLMs by generating adversarial input texts. However,
existing studies have limited their coverage of QAs and tasks in LLMs and are
difficult to extend. Additionally, these studies have only used one evaluation
metric, Attack Success Rate (ASR), to assess the effectiveness of their
approaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)
framework to address these issues by applying Metamorphic Testing (MT)
techniques. This approach facilitates the systematic testing of LLM qualities
by defining Metamorphic Relations (MRs), which serve as modularized evaluation
metrics. The METAL framework can automatically generate hundreds of MRs from
templates that cover various QAs and tasks. In addition, we introduced novel
metrics that integrate the ASR method into the semantic qualities of text to
assess the effectiveness of MRs accurately. Through the experiments conducted
with three prominent LLMs, we have confirmed that the METAL framework
effectively evaluates essential QAs on primary LLM tasks and reveals the
quality risks in LLMs. Moreover, the newly proposed metrics can guide the
optimal MRs for testing each task and suggest the most effective method for
generating MRs. | Software Engineering |
What field is the article from? | Title: Calibrated Adaptive Teacher for Domain Adaptive Intelligent Fault Diagnosis
Abstract: Intelligent Fault Diagnosis (IFD) based on deep learning has proven to be an
effective and flexible solution, attracting extensive research. Deep neural
networks can learn rich representations from vast amounts of representative
labeled data for various applications. In IFD, they achieve high classification
performance from signals in an end-to-end manner, without requiring extensive
domain knowledge. However, deep learning models usually only perform well on
the data distribution they have been trained on. When applied to a different
distribution, they may experience performance drops. This is also observed in
IFD, where assets are often operated in working conditions different from those
in which labeled data have been collected. Unsupervised domain adaptation (UDA)
deals with the scenario where labeled data are available in a source domain,
and only unlabeled data are available in a target domain, where domains may
correspond to operating conditions. Recent methods rely on training with
confident pseudo-labels for target samples. However, the confidence-based
selection of pseudo-labels is hindered by poorly calibrated confidence
estimates in the target domain, primarily due to over-confident predictions,
which limits the quality of pseudo-labels and leads to error accumulation. In
this paper, we propose a novel UDA method called Calibrated Adaptive Teacher
(CAT), where we propose to calibrate the predictions of the teacher network
throughout the self-training process, leveraging post-hoc calibration
techniques. We evaluate CAT on domain-adaptive IFD and perform extensive
experiments on the Paderborn benchmark for bearing fault diagnosis under
varying operating conditions. Our proposed method achieves state-of-the-art
performance on most transfer tasks. | Machine Learning |
What field is the article from? | Title: Exploring Popularity Bias in Session-based Recommendation
Abstract: Existing work has revealed that large-scale offline evaluation of recommender
systems for user-item interactions is prone to bias caused by the deployed
system itself, as a form of closed loop feedback. Many adopt the
\textit{propensity} concept to analyze or mitigate this empirical issue. In
this work, we extend the analysis to session-based setup and adapted propensity
calculation to the unique characteristics of session-based recommendation
tasks. Our experiments incorporate neural models and KNN-based models, and
cover both the music and the e-commerce domain. We study the distributions of
propensity and different stratification techniques on different datasets and
find that propensity-related traits are actually dataset-specific. We then
leverage the effect of stratification and achieve promising results compared to
the original models. | Information Retrieval |
What field is the article from? | Title: Explore, Select, Derive, and Recall: Augmenting LLM with Human-like Memory for Mobile Task Automation
Abstract: The advent of large language models (LLMs) has opened up new opportunities in
the field of mobile task automation. Their superior language understanding and
reasoning capabilities allow users to automate complex and repetitive tasks.
However, due to the inherent unreliability and high operational cost of LLMs,
their practical applicability is quite limited. To address these issues, this
paper introduces MemoDroid, an innovative LLM-based mobile task automator
enhanced with a unique app memory. MemoDroid emulates the cognitive process of
humans interacting with a mobile app -- explore, select, derive, and recall.
This approach allows for a more precise and efficient learning of a task's
procedure by breaking it down into smaller, modular components that can be
re-used, re-arranged, and adapted for various objectives. We implement
MemoDroid using online LLMs services (GPT-3.5 and GPT-4) and evaluate its
performance on 50 unique mobile tasks across 5 widely used mobile apps. The
results indicate that MemoDroid can adapt learned tasks to varying contexts
with 100% accuracy and reduces their latency and cost by 69.22% and 77.36%
compared to a GPT-4 powered baseline. | Human-Computer Interaction |
What field is the article from? | Title: DiffusionSat: A Generative Foundation Model for Satellite Imagery
Abstract: Diffusion models have achieved state-of-the-art results on many modalities
including images, speech, and video. However, existing models are not tailored
to support remote sensing data, which is widely used in important applications
including environmental monitoring and crop-yield prediction. Satellite images
are significantly different from natural images -- they can be multi-spectral,
irregularly sampled across time -- and existing diffusion models trained on
images from the Web do not support them. Furthermore, remote sensing data is
inherently spatio-temporal, requiring conditional generation tasks not
supported by traditional methods based on captions or images. In this paper, we
present DiffusionSat, to date the largest generative foundation model trained
on a collection of publicly available large, high-resolution remote sensing
datasets. As text-based captions are sparsely available for satellite images,
we incorporate the associated metadata such as geolocation as conditioning
information. Our method produces realistic samples and can be used to solve
multiple generative tasks including temporal generation, superresolution given
multi-spectral inputs and in-painting. Our method outperforms previous
state-of-the-art methods for satellite image generation and is the first
large-scale $\textit{generative}$ foundation model for satellite imagery. | Computer Vision |
What field is the article from? | Title: Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers
Abstract: Model extraction is a growing concern for the security of AI systems. For
deep neural network models, the architecture is the most important information
an adversary aims to recover. Being a sequence of repeated computation blocks,
neural network models deployed on edge-devices will generate distinctive
side-channel leakages. The latter can be exploited to extract critical
information when targeted platforms are physically accessible. By combining
theoretical knowledge about deep learning practices and analysis of a
widespread implementation library (ARM CMSIS-NN), our purpose is to answer this
critical question: how far can we extract architecture information by simply
examining an EM side-channel trace? For the first time, we propose an
extraction methodology for traditional MLP and CNN models running on a high-end
32-bit microcontroller (Cortex-M7) that relies only on simple pattern
recognition analysis. Despite a few challenging cases, we claim that, contrary to
parameters extraction, the complexity of the attack is relatively low and we
highlight the urgent need for practicable protections that could fit the strong
memory and latency requirements of such platforms. | Cryptography and Security |
What field is the article from? | Title: Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation
Abstract: Vision Transformer (ViT) has demonstrated promising performance in computer
vision tasks, comparable to state-of-the-art neural networks. Yet, this new
type of deep neural network architecture is vulnerable to adversarial attacks
limiting its capabilities in terms of robustness. This article presents a novel
contribution aimed at further improving the accuracy and robustness of ViT,
particularly in the face of adversarial attacks. We propose an augmentation
technique called `Dynamic Scanning Augmentation' that leverages dynamic input
sequences to adaptively focus on different patches, thereby maintaining
performance and robustness. Our detailed investigations reveal that this
adaptability to the input sequence induces significant changes in the attention
mechanism of ViT, even for the same image. We introduce four variations of
Dynamic Scanning Augmentation, outperforming ViT in terms of both robustness to
adversarial attacks and accuracy against natural images, with one variant
showing comparable results. By integrating our augmentation technique, we
observe a substantial increase in ViT's robustness, improving it from $17\%$ to
$92\%$ measured across different types of adversarial attacks. These findings,
together with other comprehensive tests, indicate that Dynamic Scanning
Augmentation enhances accuracy and robustness by promoting a more adaptive type
of attention. In conclusion, this work contributes to the ongoing research on
Vision Transformers by introducing Dynamic Scanning Augmentation as a technique
for improving the accuracy and robustness of ViT. The observed results
highlight the potential of this approach in advancing computer vision tasks and
merit further exploration in future studies. | Computer Vision |
What field is the article from? | Title: Vignat: Vulnerability identification by learning code semantics via graph attention networks
Abstract: Vulnerability identification is crucial to protect software systems from
attacks for cyber-security. However, huge projects have more than millions of
lines of code, and the complex dependencies make it hard to carry out
traditional static and dynamic methods. Furthermore, the semantic structure of
various types of vulnerabilities differs greatly and may occur simultaneously,
making general rule-based methods difficult to extend. In this paper, we
propose \textit{Vignat}, a novel attention-based framework for identifying
vulnerabilities by learning graph-level semantic representations of code. We
represent code with fine-grained code property graphs (CPGs) and use graph
attention networks (GATs) for vulnerability detection. The results show that
Vignat is able to achieve $57.38\%$ accuracy on reliable datasets derived from
popular C libraries. Furthermore, the interpretability of our GATs provides
valuable insights into vulnerability patterns. | Cryptography and Security |