instruction (stringclasses, 1 value) | input (stringlengths, 260–2.07k) | output (stringclasses, 10 values)
---|---|---|
What field is the article from? | Title: Leveraging Speculative Sampling and KV-Cache Optimizations Together for Generative AI using OpenVINO
Abstract: Inference optimizations are critical for improving user experience and
reducing infrastructure costs and power consumption. In this article, we
illustrate a form of dynamic execution known as speculative sampling to reduce
the overall latency of text generation and compare it with standard
autoregressive sampling. This can be used together with model-based
optimizations (e.g. quantization) to provide an optimized solution. Both
sampling methods make use of KV caching. A Jupyter notebook and some sample
executions are provided. | Machine Learning |
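The draft-then-verify idea behind speculative sampling can be sketched independently of OpenVINO. The sketch below is a simplified greedy variant under toy assumptions: `target_model` and `bad_draft` are hypothetical stand-ins for real LLMs, verification is an exact-match test rather than the stochastic acceptance rule used in the literature, and the target is queried once per verified token instead of in one batched pass.

```python
from typing import Callable, List

def speculative_decode(target: Callable[[List[int]], int],
                       draft: Callable[[List[int]], int],
                       prompt: List[int], k: int, max_new: int) -> List[int]:
    """Greedy draft-then-verify loop: the cheap draft model proposes k
    tokens; the target keeps the matching prefix and, on the first
    mismatch, contributes one token itself so progress is always made."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1. Draft k candidate tokens autoregressively with the cheap model.
        proposal = []
        for _ in range(k):
            proposal.append(draft(seq + proposal))
        # 2. Verify: accept draft tokens while the target agrees with them.
        accepted = 0
        for tok in proposal:
            if target(seq) == tok:
                seq.append(tok)
                accepted += 1
            else:
                break
        # 3. On the first mismatch, emit one token from the target itself.
        if accepted < k:
            seq.append(target(seq))
    return seq[:len(prompt) + max_new]

# Toy "models" (hypothetical stand-ins for real LLMs): the target always
# emits last-token + 1 mod 10; the bad draft always proposes 0.
target_model = lambda s: (s[-1] + 1) % 10
bad_draft = lambda s: 0
```

With `bad_draft` every proposal is rejected and the loop degrades to ordinary autoregressive decoding, while a draft that agrees with the target lets k tokens through per round; either way the output matches plain autoregressive decoding, which is the point of the scheme.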
What field is the article from? | Title: Explainable Spatio-Temporal Graph Neural Networks
Abstract: Spatio-temporal graph neural networks (STGNNs) have gained popularity as a
powerful tool for effectively modeling spatio-temporal dependencies in diverse
real-world urban applications, including intelligent transportation and public
safety. However, the black-box nature of STGNNs limits their interpretability,
hindering their application in scenarios related to urban resource allocation
and policy formulation. To bridge this gap, we propose an Explainable
Spatio-Temporal Graph Neural Network (STExplainer) framework that enhances
STGNNs with inherent explainability, enabling them to provide accurate
predictions and faithful explanations simultaneously. Our framework integrates
a unified spatio-temporal graph attention network with a positional information
fusion layer as the STG encoder and decoder, respectively. Furthermore, we
propose a structure distillation approach based on the Graph Information
Bottleneck (GIB) principle with an explainable objective, which is instantiated
by the STG encoder and decoder. Through extensive experiments, we demonstrate
that our STExplainer outperforms state-of-the-art baselines in terms of
predictive accuracy and explainability metrics (i.e., sparsity and fidelity) on
traffic and crime prediction tasks. Furthermore, our model exhibits superior
representation ability in alleviating missing-data and sparsity issues. The
implementation code is available at: https://github.com/HKUDS/STExplainer. | Machine Learning |
What field is the article from? | Title: Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Abstract: Large foundation models are becoming ubiquitous, but training them from
scratch is prohibitively expensive. Thus, efficiently adapting these powerful
models to downstream tasks is increasingly important. In this paper, we study a
principled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream
task adaptation. Despite demonstrating good generalizability, OFT still uses a
fairly large number of trainable parameters due to the high dimensionality of
orthogonal matrices. To address this, we start by examining OFT from an
information transmission perspective, and then identify a few key desiderata
that enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast
Fourier transform algorithm enables efficient information transmission, we
propose an efficient orthogonal parameterization using butterfly structures. We
apply this parameterization to OFT, creating a novel parameter-efficient
finetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a
special case, BOFT introduces a generalized orthogonal finetuning framework.
Finally, we conduct an extensive empirical study of adapting large vision
transformers, large language models, and text-to-image diffusion models to
various downstream tasks in vision and language. | Machine Learning |
What field is the article from? | Title: Learning to Design and Use Tools for Robotic Manipulation
Abstract: When limited by their own morphologies, humans and some species of animals
have the remarkable ability to use objects from the environment toward
accomplishing otherwise impossible tasks. Robots might similarly unlock a range
of additional capabilities through tool use. Recent techniques for jointly
optimizing morphology and control via deep learning are effective at designing
locomotion agents. But while outputting a single morphology makes sense for
locomotion, manipulation involves a variety of strategies depending on the task
goals at hand. A manipulation agent must be capable of rapidly prototyping
specialized tools for different goals. Therefore, we propose learning a
designer policy, rather than a single design. A designer policy is conditioned
on task information and outputs a tool design that helps solve the task. A
design-conditioned controller policy can then perform manipulation using these
tools. In this work, we take a step towards this goal by introducing a
reinforcement learning framework for jointly learning these policies. Through
simulated manipulation tasks, we show that this framework is more sample
efficient than prior methods in multi-goal or multi-variant settings, can
perform zero-shot interpolation or fine-tuning to tackle previously unseen
goals, and allows tradeoffs between the complexity of design and control
policies under practical constraints. Finally, we deploy our learned policies
onto a real robot. Please see our supplementary video and website at
https://robotic-tool-design.github.io/ for visualizations. | Robotics |
What field is the article from? | Title: MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks
Abstract: Machine learning (ML) has gained significant adoption in Android malware
detection to address the escalating threats posed by the rapid proliferation of
malware attacks. However, recent studies have revealed the inherent
vulnerabilities of ML-based detection systems to evasion attacks. While efforts
have been made to address this critical issue, many of the existing defensive
methods encounter challenges such as lower effectiveness or reduced
generalization capabilities. In this paper, we introduce a novel Android
malware detection method, MalPurifier, which exploits adversarial purification
to eliminate perturbations independently, resulting in attack mitigation in a
light and flexible way. Specifically, MalPurifier employs a Denoising
AutoEncoder (DAE)-based purification model to preprocess input samples,
removing potential perturbations from them and then leading to correct
classification. To enhance defense effectiveness, we propose a diversified
adversarial perturbation mechanism that strengthens the purification model
against different manipulations from various evasion attacks. We also
incorporate randomized "protective noises" onto benign samples to prevent
excessive purification. Furthermore, we customize a loss function for improving
the DAE model, combining reconstruction loss and prediction loss, to enhance
feature representation learning, resulting in accurate reconstruction and
classification. Experimental results on two Android malware datasets
demonstrate that MalPurifier outperforms the state-of-the-art defenses, and it
significantly strengthens the vulnerable malware detector against 37 evasion
attacks, achieving accuracies over 90.91%. Notably, MalPurifier demonstrates
easy scalability to other detectors, offering flexibility and robustness in its
implementation. | Cryptography and Security |
What field is the article from? | Title: Active Wildfires Detection and Dynamic Escape Routes Planning for Humans through Information Fusion between Drones and Satellites
Abstract: UAVs are playing an increasingly important role in the field of wilderness
rescue by virtue of their flexibility. This paper proposes a fusion of UAV
vision technology and satellite image analysis technology for active wildfires
detection and road networks extraction of wildfire areas and real-time dynamic
escape route planning for people in distress. First, the fire source is located
and smoke and flames are segmented based on Sentinel-2 satellite
imagery. Second, road segmentation and road condition
assessment are performed by D-linkNet and NDVI values in the central area of
the fire source by UAV. Finally, the dynamic optimal route planning for humans
in real time is performed by the weighted A* algorithm in the road network with
the dynamic fire spread model. Taking the Chongqing wildfire on August 24,
2022, as a case study, the results demonstrate that the dynamic escape route
planning algorithm can provide an optimal real-time navigation path for humans
in the presence of fire through the information fusion of UAVs and satellites. | Artificial Intelligence |
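Weighted A*, which the row above relies on for dynamic route planning, differs from plain A* only in inflating the heuristic: f(n) = g(n) + ε·h(n) with ε ≥ 1, trading strict optimality for speed. A minimal sketch under illustrative assumptions (a 4-connected grid with unit step costs and ε = 1.5, not the paper's road network; blocked cells stand in for fire):

```python
import heapq

def weighted_astar(grid, start, goal, eps=1.5):
    """Weighted A* on a 4-connected grid: f(n) = g(n) + eps * h(n).
    grid[r][c] == 1 marks an impassable cell (e.g. active fire); returns
    the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(eps * h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1  # unit step cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier,
                                   (ng + eps * h(nxt), ng, nxt, path + [nxt]))
    return None
```

Dynamic replanning then amounts to rerunning the search whenever the fire-spread model updates the set of blocked cells.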
What field is the article from? | Title: One-2-3-45++: Fast Single Image to 3D Objects with Consistent Multi-View Generation and 3D Diffusion
Abstract: Recent advancements in open-world 3D object generation have been remarkable,
with image-to-3D methods offering superior fine-grained control over their
text-to-3D counterparts. However, most existing models fall short in
simultaneously providing rapid generation speeds and high fidelity to input
images, two features essential for practical applications. In this paper, we
present One-2-3-45++, an innovative method that transforms a single image into
a detailed 3D textured mesh in approximately one minute. Our approach aims to
fully harness the extensive knowledge embedded in 2D diffusion models and
priors from valuable yet limited 3D data. This is achieved by initially
finetuning a 2D diffusion model for consistent multi-view image generation,
followed by elevating these images to 3D with the aid of multi-view conditioned
3D native diffusion models. Extensive experimental evaluations demonstrate that
our method can produce high-quality, diverse 3D assets that closely mirror the
original input image. Our project webpage:
https://sudo-ai-3d.github.io/One2345plus_page. | Computer Vision |
What field is the article from? | Title: (Ir)rationality in AI: State of the Art, Research Challenges and Open Questions
Abstract: The concept of rationality is central to the field of artificial
intelligence. Whether we are seeking to simulate human reasoning, or the goal
is to achieve bounded optimality, we generally seek to make artificial agents
as rational as possible. Despite the centrality of the concept within AI, there
is no unified definition of what constitutes a rational agent. This article
provides a survey of rationality and irrationality in artificial intelligence,
and sets out the open questions in this area. The understanding of rationality
in other fields has influenced its conception within artificial intelligence,
in particular work in economics, philosophy and psychology. Focusing on the
behaviour of artificial agents, we consider irrational behaviours that can
prove to be optimal in certain scenarios. Some methods have been developed to
deal with irrational agents, both in terms of identification and interaction;
however, work in this area remains limited. Methods that have up to now been
developed for other purposes, namely adversarial scenarios, may be adapted to
suit interactions with artificial agents. We further discuss the interplay
between human and artificial agents, and the role that rationality plays within
this interaction; many questions remain in this area, relating to potentially
irrational behaviour of both humans and artificial agents. | Artificial Intelligence |
What field is the article from? | Title: Hypergraph-Guided Disentangled Spectrum Transformer Networks for Near-Infrared Facial Expression Recognition
Abstract: With its strong robustness to illumination variations, near-infrared (NIR)
imaging can be an effective and essential complement to visible (VIS) facial
expression recognition in low-light or completely dark conditions. However, facial
expression recognition (FER) from NIR images is a more challenging problem
than traditional FER due to the limitations imposed by the data scale and the
difficulty of extracting discriminative features from incomplete visible
lighting content. In this paper, we make the first attempt at deep NIR facial
expression recognition and propose a novel method called the near-infrared facial
expression transformer (NFER-Former). Specifically, to make full use of the
abundant label information in the field of VIS, we introduce a Self-Attention
Orthogonal Decomposition mechanism that disentangles the expression information
and spectrum information from the input image, so that the expression features
can be extracted without the interference of spectrum variation. We also
propose a Hypergraph-Guided Feature Embedding method that models some key
facial behaviors and learns the structure of the complex correlations between
them, thereby alleviating the interference of inter-class similarity.
Additionally, we have constructed a large NIR-VIS Facial Expression dataset
that includes 360 subjects to better validate the efficiency of NFER-Former.
Extensive experiments and ablation studies show that NFER-Former significantly
improves the performance of NIR FER and achieves state-of-the-art results on
the only two available NIR FER datasets, Oulu-CASIA and Large-HFE. | Computer Vision |
What field is the article from? | Title: Towards Formal Fault Injection for Safety Assessment of Automated Systems
Abstract: Reasoning about safety, security, and other dependability attributes of
autonomous systems is a challenge that needs to be addressed before the
adoption of such systems in day-to-day life. Formal methods is a class of
methods that mathematically reason about a system's behavior. Thus, a
correctness proof is sufficient to conclude the system's dependability.
However, these methods are usually applied to abstract models of the system,
which might not fully represent the actual system. Fault injection, on the
other hand, is a testing method to evaluate the dependability of systems.
However, the amount of testing required to evaluate the system is rather large
and often prohibitive. This vision paper introduces formal fault injection, a
fusion of these two techniques throughout the development lifecycle to enhance
the dependability of autonomous systems. We advocate for a more cohesive
approach by identifying five areas of mutual support between formal methods and
fault injection. By forging stronger ties between the two fields, we pave the
way for developing safe and dependable autonomous systems. This paper delves
into the integration's potential and outlines future research avenues,
addressing open challenges along the way. | Artificial Intelligence |
What field is the article from? | Title: Improving embedding of graphs with missing data by soft manifolds
Abstract: Embedding graphs in continuous spaces is a key factor in designing and
developing algorithms for automatic information extraction to be applied in
diverse tasks (e.g., learning, inferring, predicting). The reliability of graph
embeddings directly depends on how much the geometry of the continuous space
matches the graph structure. Manifolds are mathematical structures whose
topological spaces can incorporate the characteristics of a graph, in
particular the distances between nodes. State-of-the-art manifold-based graph
embedding algorithms take advantage of the assumption that the projection on a
tangential space of each point in the manifold (corresponding to a node in the
graph) would locally resemble a Euclidean space. Although this condition helps
in achieving efficient analytical solutions to the embedding problem, it does
not represent an adequate set-up to work with modern real-life graphs, which are
characterized by weighted connections across nodes often computed over sparse
datasets with missing records. In this work, we introduce a new class of
manifolds, named soft manifolds, that addresses this situation. In particular,
soft manifolds are mathematical structures with spherical symmetry where the
tangent spaces to each point are hypocycloids whose shape is defined according
to the velocity of information propagation across the data points. Using soft
manifolds for graph embedding, we can provide continuous spaces to pursue any
task in data analysis over complex datasets. Experimental results on
reconstruction tasks on synthetic and real datasets show how the proposed
approach enables more accurate and reliable characterization of graphs in
continuous spaces with respect to the state-of-the-art. | Machine Learning |
What field is the article from? | Title: CausalCite: A Causal Formulation of Paper Citations
Abstract: Evaluating the significance of a paper is pivotal yet challenging for the
scientific community. While citation count is the most commonly used proxy for
this purpose, it is widely criticized for failing to accurately reflect
a paper's true impact. In this work, we propose a causal inference method,
TextMatch, which adapts the traditional matching framework to high-dimensional
text embeddings. Specifically, we encode each paper using the text embeddings
by large language models (LLMs), extract similar samples by cosine similarity,
and synthesize a counterfactual sample by the weighted average of similar
papers according to their similarity values. We apply the resulting metric,
called CausalCite, as a causal formulation of paper citations. We show its
effectiveness on various criteria, such as high correlation with paper impact
as reported by scientific experts on a previous dataset of 1K papers,
(test-of-time) awards for past papers, and its stability across various
sub-fields of AI. We also provide a set of findings that can serve as suggested
ways for future researchers to use our metric for a better understanding of a
paper's quality. Our code and data are at
https://github.com/causalNLP/causal-cite. | Computational Linguistics |
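The TextMatch recipe in the row above (embed each paper, retrieve neighbors by cosine similarity, synthesize a counterfactual as a similarity-weighted average) can be sketched with toy vectors. Real use would substitute LLM text embeddings; `counterfactual_outcome`, `k`, and the citation-count pool are illustrative names, not the paper's API.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def counterfactual_outcome(paper_emb, pool, k=2):
    """pool: list of (embedding, citation_count) candidate matches.
    Returns the similarity-weighted average citation count of the k
    most similar papers, i.e. the synthesized counterfactual outcome."""
    ranked = sorted(pool, key=lambda p: cosine(paper_emb, p[0]),
                    reverse=True)[:k]
    weights = [cosine(paper_emb, emb) for emb, _ in ranked]
    return sum(w * c for w, (_, c) in zip(weights, ranked)) / sum(weights)
```

Comparing a paper's actual citation count against this counterfactual value is what yields the causal metric.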
What field is the article from? | Title: SceneDM: Scene-level Multi-agent Trajectory Generation with Consistent Diffusion Models
Abstract: Realistic scene-level multi-agent motion simulations are crucial for
developing and evaluating self-driving algorithms. However, most existing works
focus on generating trajectories for a certain single agent type, and typically
ignore the consistency of generated trajectories. In this paper, we propose a
novel framework based on diffusion models, called SceneDM, to generate joint
and consistent future motions of all the agents, including vehicles, bicycles,
pedestrians, etc., in a scene. To enhance the consistency of the generated
trajectories, we resort to a new Transformer-based network to effectively
handle agent-agent interactions in the inverse process of motion diffusion. In
consideration of the smoothness of agent trajectories, we further design a
simple yet effective consistent diffusion approach, to improve the model in
exploiting short-term temporal dependencies. Furthermore, a scene-level scoring
function is attached to evaluate the safety and road-adherence of the generated
agents' motions and helps filter out unrealistic simulations. Finally, SceneDM
achieves state-of-the-art results on the Waymo Sim Agents Benchmark. Project
webpage is available at https://alperen-hub.github.io/SceneDM. | Robotics |
What field is the article from? | Title: Structured World Representations in Maze-Solving Transformers
Abstract: Transformer models underpin many recent advances in practical machine
learning applications, yet understanding their internal behavior continues to
elude researchers. Given the size and complexity of these models, forming a
comprehensive picture of their inner workings remains a significant challenge.
To this end, we set out to understand small transformer models in a more
tractable setting: that of solving mazes. In this work, we focus on the
abstractions formed by these models and find evidence for the consistent
emergence of structured internal representations of maze topology and valid
paths. We demonstrate this by showing that the residual stream of only a single
token can be linearly decoded to faithfully reconstruct the entire maze. We
also find that the learned embeddings of individual tokens have spatial
structure. Furthermore, we take steps towards deciphering the circuitry of
path-following by identifying attention heads (dubbed $\textit{adjacency
heads}$), which are implicated in finding valid subsequent tokens. | Machine Learning |
What field is the article from? | Title: Vanishing Gradients in Reinforcement Finetuning of Language Models
Abstract: Pretrained language models are commonly aligned with human preferences and
downstream tasks via reinforcement finetuning (RFT), which entails maximizing a
(possibly learned) reward function using policy gradient algorithms. This work
highlights a fundamental optimization obstacle in RFT: we prove that the
expected gradient for an input vanishes when its reward standard deviation
under the model is small, even if the expected reward is far from optimal.
Through experiments on an RFT benchmark and controlled environments, as well as
a theoretical analysis, we then demonstrate that vanishing gradients due to
small reward standard deviation are prevalent and detrimental, leading to
extremely slow reward maximization. Lastly, we explore ways to overcome
vanishing gradients in RFT. We find the common practice of an initial
supervised finetuning (SFT) phase to be the most promising candidate, which
sheds light on its importance in an RFT pipeline. Moreover, we show that a
relatively small number of SFT optimization steps on as few as 1% of the input
samples can suffice, indicating that the initial SFT phase need not be
expensive in terms of compute and data labeling efforts. Overall, our results
emphasize that being mindful of inputs whose expected gradient vanishes, as
measured by the reward standard deviation, is crucial for successful execution
of RFT. | Machine Learning |
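The vanishing-gradient claim in the row above can be checked exactly on a one-step softmax policy (a "bandit" simplification of RFT, not the paper's setup): the expected REINFORCE gradient with respect to logit i is π_i(r_i − E[r]), so it cancels whenever the reward is constant across actions (zero reward standard deviation), no matter how suboptimal that constant reward is. The logits and rewards below are arbitrary illustrative values.

```python
import math

def softmax(z):
    m = max(z)
    exps = [math.exp(x - m) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

def expected_policy_gradient(logits, rewards):
    """Exact expected REINFORCE gradient dE[r]/dlogits for a one-step
    softmax policy: grad_i = pi_i * (r_i - E[r])."""
    pi = softmax(logits)
    expected_r = sum(p * r for p, r in zip(pi, rewards))
    return [p * (r - expected_r) for p, r in zip(pi, rewards)]

flat = expected_policy_gradient([0.3, -1.2, 2.0], [0.5, 0.5, 0.5])    # reward std = 0
spread = expected_policy_gradient([0.3, -1.2, 2.0], [0.0, 0.5, 1.0])  # reward std > 0
```

`flat` is zero in every coordinate even though the expected reward 0.5 is far from the best achievable reward of 1.0, while `spread` is not; the reward standard deviation, not the reward gap, controls the gradient magnitude.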
What field is the article from? | Title: Revamping AI Models in Dermatology: Overcoming Critical Challenges for Enhanced Skin Lesion Diagnosis
Abstract: The surge in developing deep learning models for diagnosing skin lesions
through image analysis is notable, yet their clinical adoption faces challenges.
Current dermatology AI models have several limitations: a limited number of possible
diagnostic outputs, lack of real-world testing on uncommon skin lesions,
inability to detect out-of-distribution images, and over-reliance on
dermoscopic images. To address these, we present an All-In-One
\textbf{H}ierarchical-\textbf{O}ut of Distribution-\textbf{C}linical Triage
(HOT) model. For a clinical image, our model generates three outputs: a
hierarchical prediction, an alert for out-of-distribution images, and a
recommendation for dermoscopy if the clinical image alone is insufficient for
diagnosis. When the recommendation is pursued, the model integrates both clinical
and dermoscopic images to deliver a final diagnosis. Extensive experiments on a
representative cutaneous lesion dataset demonstrate the effectiveness and
synergy of each component within our framework. Our versatile model provides
valuable decision support for lesion diagnosis and sets a promising precedent
for medical AI applications. | Computer Vision |
What field is the article from? | Title: Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
Abstract: Large Language Models (LLMs) have made significant advances in the
recent past, becoming more mainstream in Artificial Intelligence (AI)-enabled human-facing
applications. However, LLMs often generate stereotypical output inherited from
historical data, amplifying societal biases and raising ethical concerns. This
work introduces i) the Multi-Grain Stereotype Dataset, which includes 52,751
instances of gender, race, profession and religion stereotypic text and ii) a
novel stereotype classifier for English text. We design several experiments to
rigorously test the proposed model trained on the novel dataset. Our
experiments show that training the model in a multi-class setting can
outperform the one-vs-all binary counterpart. Consistent feature importance
signals from different eXplainable AI tools demonstrate that the new model
exploits relevant text features. We utilise the newly created model to assess
the stereotypic behaviour of the popular GPT family of models and observe the
reduction of bias over time. In summary, our work establishes a robust and
practical framework for auditing and evaluating stereotypical bias in LLMs. | Computational Linguistics |
What field is the article from? | Title: Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors
Abstract: In a spoken dialogue system, an NLU model is preceded by a speech recognition
system that can deteriorate the performance of natural language understanding.
This paper proposes a method for investigating the impact of speech recognition
errors on the performance of natural language understanding models. The
proposed method combines the back transcription procedure with a fine-grained
technique for categorizing the errors that affect the performance of NLU
models. The method relies on the usage of synthesized speech for NLU
evaluation. We show that the use of synthesized speech in place of audio
recordings does not change the outcomes of the presented technique in a
significant way. | Computational Linguistics |
What field is the article from? | Title: Mapping the Empirical Evidence of the GDPR (In-)Effectiveness: A Systematic Review
Abstract: In the realm of data protection, a striking disconnect prevails between
traditional domains of doctrinal, legal, theoretical, and policy-based
inquiries and a burgeoning body of empirical evidence. Much of the scholarly
and regulatory discourse remains entrenched in abstract legal principles or
normative frameworks, leaving the empirical landscape uncharted or minimally
engaged. Since the birth of EU data protection law, a modest body of empirical
evidence has been generated but remains widely scattered and unexamined. Such
evidence offers vital insights into the perception, impact, clarity, and
effects of data protection measures but languishes on the periphery,
inadequately integrated into the broader conversation. To make a meaningful
connection, we conduct a comprehensive review and synthesis of empirical
research spanning nearly three decades (1995–March 2022), advocating for a
more robust integration of empirical evidence into the evaluation and review of
the GDPR, while laying a methodological foundation for future empirical
research. | Computers and Society |
What field is the article from? | Title: Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings
Abstract: Compositional generalization, the ability of an agent to generalize to unseen
combinations of latent factors, is easy for humans but hard for deep neural
networks. A line of research in cognitive science has hypothesized a process,
``iterated learning,'' to help explain how human language developed this
ability; the theory rests on simultaneous pressures towards compressibility
(when an ignorant agent learns from an informed one) and expressivity (when it
uses the representation for downstream tasks). Inspired by this process, we
propose to improve the compositional generalization of deep networks by using
iterated learning on models with simplicial embeddings, which can approximately
discretize representations. This approach is further motivated by an analysis
of compositionality based on Kolmogorov complexity. We show that this
combination of changes improves compositional generalization over other
approaches, demonstrating these improvements both on vision tasks with
well-understood latent factors and on real molecular graph prediction tasks
where the latent structure is unknown. | Machine Learning |
What field is the article from? | Title: Towards Transparency in Coreference Resolution: A Quantum-Inspired Approach
Abstract: Guided by grammatical structure, words compose to form sentences, and guided
by discourse structure, sentences compose to form dialogues and documents. The
compositional aspect of sentence and discourse units is often overlooked by
machine learning algorithms. A recent initiative called Quantum Natural
Language Processing (QNLP) learns word meanings as points in a Hilbert space
and acts on them via a translation of grammatical structure into Parametrised
Quantum Circuits (PQCs). Previous work extended the QNLP translation to
discourse structure using points in a closure of Hilbert spaces. In this paper,
we evaluate this translation on a Winograd-style pronoun resolution task. We
train a Variational Quantum Classifier (VQC) for binary classification and
implement an end-to-end pronoun resolution system. The simulations executed on
IBMQ software converged with an F1 score of 87.20%. The model outperformed two
out of three classical coreference resolution systems and neared
state-of-the-art SpanBERT. A mixed quantum-classical model further improved these
results with an F1 score increase of around 6%. | Computational Linguistics |
What field is the article from? | Title: OtterHD: A High-Resolution Multi-modality Model
Abstract: In this paper, we present OtterHD-8B, an innovative multimodal model evolved
from Fuyu-8B, specifically engineered to interpret high-resolution visual
inputs with granular precision. Unlike conventional models that are constrained
by fixed-size vision encoders, OtterHD-8B boasts the ability to handle flexible
input dimensions, ensuring its versatility across various inference
requirements. Alongside this model, we introduce MagnifierBench, an evaluation
framework designed to scrutinize models' ability to discern minute details and
spatial relationships of small objects. Our comparative analysis reveals that
while current leading models falter on this benchmark, OtterHD-8B, particularly
when directly processing high-resolution inputs, outperforms its counterparts
by a substantial margin. The findings illuminate the structural variances in
visual information processing among different models and the influence that the
vision encoders' pre-training resolution disparities have on model
effectiveness within such benchmarks. Our study highlights the critical role of
flexibility and high-resolution input capabilities in large multimodal models
and also exemplifies the potential inherent in the Fuyu architecture's
simplicity for handling complex visual data. | Computer Vision |
What field is the article from? | Title: Backward Learning for Goal-Conditioned Policies
Abstract: Can we learn policies in reinforcement learning without rewards? Can we learn
a policy just by trying to reach a goal state? We answer these questions
positively by proposing a multi-step procedure that first learns a world model
that goes backward in time, secondly generates goal-reaching backward
trajectories, thirdly improves those sequences using shortest path finding
algorithms, and finally trains a neural network policy by imitation learning.
We evaluate our method on a deterministic maze environment where the
observations are $64\times 64$ pixel bird's eye images and can show that it
consistently reaches several goals. | Machine Learning |
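The backward-in-time recipe in the row above can be sketched in its simplest deterministic form: search backward from the goal to recover shortest-path actions, which then serve as imitation targets. Here the backward model (`corridor_predecessors`) is assumed known and exact, whereas the paper learns it from data; the corridor environment and action names are illustrative.

```python
from collections import deque

def backward_policy(goal, predecessors):
    """BFS backward from the goal in a deterministic environment.
    predecessors(s) yields (prev_state, action) pairs such that taking
    `action` in prev_state leads to s. Returns, for every state that can
    reach the goal, the first action of a shortest path; these
    (state, action) pairs are the imitation-learning targets."""
    policy = {}
    frontier = deque([goal])
    while frontier:
        s = frontier.popleft()
        for prev, action in predecessors(s):
            if prev != goal and prev not in policy:
                policy[prev] = action  # shortest-path action toward goal
                frontier.append(prev)
    return policy

# Toy corridor of states 0..4 with goal 4 and actions "+1"/"-1".
def corridor_predecessors(s):
    preds = []
    if s - 1 >= 0:
        preds.append((s - 1, "+1"))  # moving right from s-1 reaches s
    if s + 1 <= 4:
        preds.append((s + 1, "-1"))  # moving left from s+1 reaches s
    return preds

policy = backward_policy(4, corridor_predecessors)
```

Training a network to reproduce `policy` by supervised learning is the final imitation step of the procedure.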
What field is the article from? | Title: Topology Recoverability Prediction for Ad-Hoc Robot Networks: A Data-Driven Fault-Tolerant Approach
Abstract: Faults occurring in ad-hoc robot networks may fatally perturb their
topologies, leading to the disconnection of subsets of those networks. Optimal
topology synthesis is generally too resource-intensive and time-consuming to be
done in real time for large ad-hoc robot networks. One should only perform
topology re-computations if the probability of topology recoverability after
the occurrence of any fault surpasses that of its irrecoverability. We
formulate this problem as a binary classification problem. Then, we develop a
two-pathway data-driven model based on Bayesian Gaussian mixture models that
predicts the solution to a typical problem by two different pre-fault and
post-fault prediction pathways. The results, obtained by the integration of the
predictions of those pathways, clearly indicate the success of our model in
solving the topology (ir)recoverability prediction problem compared to the best
of current strategies found in the literature. | Robotics |
What field is the article from? | Title: FormaT5: Abstention and Examples for Conditional Table Formatting with Natural Language
Abstract: Formatting is an important property in tables for visualization,
presentation, and analysis. Spreadsheet software allows users to automatically
format their tables by writing data-dependent conditional formatting (CF)
rules. Writing such rules is often challenging for users as it requires them to
understand and implement the underlying logic. We present FormaT5, a
transformer-based model that can generate a CF rule given the target table and
a natural language description of the desired formatting logic. We find that
user descriptions for these tasks are often under-specified or ambiguous,
making it harder for code generation systems to accurately learn the desired
rule in a single step. To tackle this problem of under-specification and
minimise argument errors, FormaT5 learns to predict placeholders through an
abstention objective. These placeholders can then be filled by a second model
or, when examples of rows that should be formatted are available, by a
programming-by-example system. To evaluate FormaT5 on diverse and real
scenarios, we create an extensive benchmark of 1053 CF tasks, containing
real-world descriptions collected from four different sources. We release our
benchmarks to encourage research in this area. Abstention and filling allow
FormaT5 to outperform 8 different neural approaches on our benchmarks, both
with and without examples. Our results illustrate the value of building
domain-specific learning systems. | Artificial Intelligence |
What field is the article from? | Title: Moments for Perceptive Narration Analysis Through the Emotional Attachment of Audience to Discourse and Story
Abstract: In this work, our goal is to develop a theoretical framework that can
eventually be used for analyzing the effectiveness of visual stories such as
feature films to comic books. To develop this theoretical framework, we
introduce a new story element called moments. Our conjecture is that any linear
story such as the story of a feature film can be decomposed into a set of
moments that follow each other. Moments are defined as the perception of the
actions, interactions, and expressions of all characters or a single character
during a given time period. We categorize the moments into two major types:
story moments and discourse moments. Each type of moment can further be
classified into three types, which we call universal storytelling moments. We
believe these universal moments foster or deteriorate the emotional attachment
of the audience to a particular character or the story. We present a
methodology to catalog the occurrences of these universal moments as they are
found in the story. The cataloged moments can be represented using curves or
color strips. Therefore, we can visualize a character's journey through the
story as either a 3D curve or a color strip. We also demonstrated that both
story and discourse moments can be transformed into one lump-sum attraction
parameter. The attraction parameter in time provides a function that can be
plotted graphically onto a timeline illustrating changes in the emotional
attachment of audience to a character or the story. By inspecting these
functions the story analyst can analytically decipher the moments in the story
where the attachment is being established, maintained, strengthened, or
conversely where it is languishing. | Artificial Intelligence |
What field is the article from? | Title: Ensembling Textual and Structure-Based Models for Knowledge Graph Completion
Abstract: We consider two popular approaches to Knowledge Graph Completion (KGC):
textual models that rely on textual entity descriptions, and structure-based
models that exploit the connectivity structure of the Knowledge Graph (KG).
Preliminary experiments show that these approaches have complementary
strengths: structure-based models perform well when the gold answer is easily
reachable from the query head in the KG, while textual models exploit
descriptions to give good performance even when the gold answer is not
reachable. In response, we explore ensembling as a way of combining the best of
both approaches. We propose a novel method for learning query-dependent
ensemble weights by using the distributions of scores assigned by individual
models to all candidate entities. Our ensemble baseline achieves
state-of-the-art results on three standard KGC datasets, with up to 6.8 pt MRR
and 8.3 pt Hits@1 gains over best individual models. | Computational Linguistics |
What field is the article from? | Title: Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Abstract: We introduce Adapters, an open-source library that unifies
parameter-efficient and modular transfer learning in large language models. By
integrating 10 diverse adapter methods into a unified interface, Adapters
offers ease of use and flexible configuration. Our library allows researchers
and practitioners to leverage adapter modularity through composition blocks,
enabling the design of complex adapter setups. We demonstrate the library's
efficacy by evaluating its performance against full fine-tuning on various NLP
tasks. Adapters provides a powerful tool for addressing the challenges of
conventional fine-tuning paradigms and promoting more efficient and modular
transfer learning. The library is available via https://adapterhub.ml/adapters. | Computational Linguistics |
What field is the article from? | Title: A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision , and Challenges
Abstract: In recent years, the combination of artificial intelligence (AI) and unmanned
aerial vehicles (UAVs) has brought about advancements in various areas. This
comprehensive analysis explores the changing landscape of AI-powered UAVs and
friendly computing in their applications. It covers emerging trends, futuristic
visions, and the inherent challenges that come with this relationship. The
study examines how AI plays a role in enabling navigation, detecting and
tracking objects, monitoring wildlife, enhancing precision agriculture,
facilitating rescue operations, conducting surveillance activities, and
establishing communication among UAVs using environmentally conscious computing
techniques. By delving into the interaction between AI and UAVs, this analysis
highlights the potential for these technologies to revolutionise industries
such as agriculture, surveillance practices, disaster management strategies,
and more. While envisioning possibilities, it also takes a look at ethical
considerations, safety concerns, regulatory frameworks to be established, and
the responsible deployment of AI-enhanced UAV systems. By consolidating
insights from research endeavours in this field, this review provides an
understanding of the evolving landscape of AI-powered UAVs while setting the
stage for further exploration in this transformative domain. | Artificial Intelligence |
What field is the article from? | Title: Algorithms for automatic intents extraction and utterances classification for goal-oriented dialogue systems
Abstract: Modern machine learning techniques in the natural language processing domain
can be used to automatically generate scripts for goal-oriented dialogue
systems. The current article presents a general framework for studying the
automatic generation of scripts for goal-oriented dialogue systems. A method
for preprocessing dialog data sets in JSON format is described. A comparison is
made of two methods for extracting user intent based on BERTopic and latent
Dirichlet allocation. A comparison has been made of two implemented algorithms
for classifying statements of users of a goal-oriented dialogue system based on
logistic regression and BERT transformer models. The BERT transformer approach
using the bert-base-uncased model showed better results for the three metrics
Precision (0.80), F1-score (0.78) and Matthews correlation coefficient (0.74)
in comparison with other methods. | Artificial Intelligence |
What field is the article from? | Title: Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class Incremental Learning
Abstract: Continual learning aims to create artificial neural networks capable of
accumulating knowledge and skills through incremental training on a sequence of
tasks. The main challenge of continual learning is catastrophic interference,
wherein new knowledge overrides or interferes with past knowledge, leading to
forgetting. An associated issue is the problem of learning "cross-task
knowledge," where models fail to acquire and retain knowledge that helps
differentiate classes across task boundaries. A common solution to both
problems is "replay," where a limited buffer of past instances is utilized to
learn cross-task knowledge and mitigate catastrophic interference. However, a
notable drawback of these methods is their tendency to overfit the limited
replay buffer. In contrast, our proposed solution, SurpriseNet, addresses
catastrophic interference by employing a parameter isolation method and
learning cross-task knowledge using an auto-encoder inspired by anomaly
detection. SurpriseNet is applicable to both structured and unstructured data,
as it does not rely on image-specific inductive biases. We have conducted
empirical experiments demonstrating the strengths of SurpriseNet on various
traditional vision continual-learning benchmarks, as well as on structured data
datasets. Source code made available at https://doi.org/10.5281/zenodo.8247906
and https://github.com/tachyonicClock/SurpriseNet-CIKM-23 | Artificial Intelligence |
What field is the article from? | Title: Retro-BLEU: Quantifying Chemical Plausibility of Retrosynthesis Routes through Reaction Template Sequence Analysis
Abstract: Computer-assisted methods have emerged as valuable tools for retrosynthesis
analysis. However, quantifying the plausibility of generated retrosynthesis
routes remains a challenging task. We introduce Retro-BLEU, a statistical
metric adapted from the well-established BLEU score in machine translation, to
evaluate the plausibility of retrosynthesis routes based on reaction template
sequences analysis. We demonstrate the effectiveness of Retro-BLEU by applying
it to a diverse set of retrosynthesis routes generated by state-of-the-art
algorithms and compare the performance with other evaluation metrics. The
results show that Retro-BLEU is capable of differentiating between plausible
and implausible routes. Furthermore, we provide insights into the strengths and
weaknesses of Retro-BLEU, paving the way for future developments and
improvements in this field. | Machine Learning |
What field is the article from? | Title: KPIs-Based Clustering and Visualization of HPC jobs: a Feature Reduction Approach
Abstract: High-Performance Computing (HPC) systems need to be constantly monitored to
ensure their stability. The monitoring systems collect a tremendous amount of
data about different parameters or Key Performance Indicators (KPIs), such as
resource usage, IO waiting time, etc. A proper analysis of this data, usually
stored as time series, can provide insight in choosing the right management
strategies as well as the early detection of issues. In this paper, we
introduce a methodology to cluster HPC jobs according to their KPI indicators.
Our approach reduces the inherent high dimensionality of the collected data by
applying two techniques to the time series: literature-based and variance-based
feature extraction. We also define a procedure to visualize the obtained
clusters by combining the two previous approaches and the Principal Component
Analysis (PCA). Finally, we have validated our contributions on a real data set
to conclude that those KPIs related to CPU usage provide the best cohesion and
separation for clustering analysis and the good results of our visualization
methodology. | Artificial Intelligence |
What field is the article from? | Title: OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators
Abstract: Compressing a predefined deep neural network (DNN) into a compact sub-network
with competitive performance is crucial in the efficient machine learning
realm. This topic spans various techniques, from structured pruning to neural
architecture search, encompassing both pruning and erasing operators
perspectives. Despite advancements, existing methods suffer from complex,
multi-stage processes that demand substantial engineering and domain knowledge,
limiting their broader applications. We introduce the third-generation
Only-Train-Once (OTOv3), which first automatically trains and compresses a
general DNN through pruning and erasing operations, creating a compact and
competitive sub-network without the need for fine-tuning. OTOv3 simplifies and
automates the training and compression process, minimizing the engineering
effort required from users. It offers key technological advancements: (i)
automatic search space construction for general DNNs based on dependency graph
analysis; (ii) Dual Half-Space Projected Gradient (DHSPG) and its enhanced
version with hierarchical search (H2SPG) to reliably solve (hierarchical)
structured sparsity problems and ensure sub-network validity; and (iii)
automated sub-network construction using solutions from DHSPG/H2SPG and
dependency graphs. Our empirical results demonstrate the efficacy of OTOv3
across various benchmarks in structured pruning and neural architecture search.
OTOv3 produces sub-networks that match or exceed the state of the art. The
source code will be available at https://github.com/tianyic/only_train_once. | Machine Learning |
What field is the article from? | Title: A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing
Abstract: Two of the central factors believed to underpin human sentence processing
difficulty are expectations and retrieval from working memory. A recent attempt
to create a unified cognitive model integrating these two factors relied on the
parallels between the self-attention mechanism of transformer language models
and cue-based retrieval theories of working memory in human sentence processing
(Ryu and Lewis 2021). While Ryu and Lewis show that attention patterns in
specialized attention heads of GPT-2 are consistent with similarity-based
interference, a key prediction of cue-based retrieval models, their method
requires identifying syntactically specialized attention heads, and makes the
cognitively implausible assumption that hundreds of memory retrieval operations
take place in parallel. In the present work, we develop a recurrent neural
language model with a single self-attention head, which more closely parallels
the memory system assumed by cognitive theories. We show that our model's
single attention head captures semantic and syntactic interference effects
observed in human experiments. | Computational Linguistics |
What field is the article from? | Title: Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"
Abstract: We introduce and study the problem of adversarial arithmetic, which provides
a simple yet challenging testbed for language model alignment. This problem is
comprised of arithmetic questions posed in natural language, with an arbitrary
adversarial string inserted before the question is complete. Even in the simple
setting of 1-digit addition problems, it is easy to find adversarial prompts
that make all tested models (including PaLM2, GPT4, Claude2) misbehave, and
even to steer models to a particular wrong answer. We additionally provide a
simple algorithm for finding successful attacks by querying those same models,
which we name "prompt inversion rejection sampling" (PIRS). We finally show
that models can be partially hardened against these attacks via reinforcement
learning and via agentic constitutional loops. However, we were not able to
make a language model fully robust against adversarial arithmetic attacks. | Computational Linguistics |
What field is the article from? | Title: Gender inference: can chatGPT outperform common commercial tools?
Abstract: An increasing number of studies use gender information to understand
phenomena such as gender bias, inequity in access and participation, or the
impact of the Covid pandemic response. Unfortunately, most datasets do not
include self-reported gender information, making it necessary for researchers
to infer gender from other information, such as names or names and country
information. An important limitation of these tools is that they fail to
appropriately capture the fact that gender exists on a non-binary scale,
however, it remains important to evaluate and compare how well these tools
perform in a variety of contexts. In this paper, we compare the performance of
a generative Artificial Intelligence (AI) tool ChatGPT with three commercially
available list-based and machine learning-based gender inference tools (Namsor,
Gender-API, and genderize.io) on a unique dataset. Specifically, we use a large
Olympic athlete dataset and report how variations in the input (e.g., first
name and first and last name, with and without country information) impact the
accuracy of their predictions. We report results for the full set, as well as
for the subsets: medal versus non-medal winners, athletes from the largest
English-speaking countries, and athletes from East Asia. On these sets, we find
that Namsor is the best traditional commercially available tool. However,
ChatGPT performs at least as well as Namsor and often outperforms it,
especially for the female sample when country and/or last name information is
available. All tools perform better on medalists versus non-medalists and on
names from English-speaking countries. Although not designed for this purpose,
ChatGPT may be a cost-effective tool for gender prediction. In the future, it
might even be possible for ChatGPT or other large scale language models to
better identify self-reported gender rather than report gender on a binary
scale. | Computational Linguistics |
What field is the article from? | Title: Visual tracking brain computer interface
Abstract: Brain-computer interfaces (BCIs) offer a way to interact with computers
without relying on physical movements. Non-invasive electroencephalography
(EEG)-based visual BCIs, known for efficient speed and calibration ease, face
limitations in continuous tasks due to discrete stimulus design and decoding
methods. To achieve continuous control, we implemented a novel spatial encoding
stimulus paradigm and devised a corresponding projection method to enable
continuous modulation of decoded velocity. Subsequently, we conducted
experiments involving 17 participants and achieved Fitt's ITR of 0.55 bps for
the fixed tracking task and 0.37 bps for the random tracking task. The proposed
BCI with a high Fitt's ITR was then integrated into two applications, including
painting and gaming. In conclusion, this study proposed a visual BCI-based
control method to go beyond discrete commands, allowing natural continuous
control based on neural activity. | Human-Computer Interaction |
What field is the article from? | Title: Efficient LLM Inference on CPUs
Abstract: Large language models (LLMs) have demonstrated remarkable performance and
tremendous potential across a wide range of tasks. However, deploying these
models has been challenging due to the astronomical amount of model parameters,
which requires a demand for large memory capacity and high memory bandwidth. In
this paper, we propose an effective approach that can make the deployment of
LLMs more efficient. We support an automatic INT4 weight-only quantization
flow and design a special LLM runtime with highly-optimized kernels to
accelerate the LLM inference on CPUs. We demonstrate the general applicability
of our approach on popular LLMs including Llama2, Llama, GPT-NeoX, and showcase
the extreme inference efficiency on CPUs. The code is publicly available at:
https://github.com/intel/intel-extension-for-transformers. | Machine Learning |
What field is the article from? | Title: LooGLE: Can Long-Context Language Models Understand Long Contexts?
Abstract: Large language models (LLMs), despite their impressive performance in various
language tasks, are typically limited to processing texts within context-window
size. This limitation has spurred significant research efforts to enhance LLMs'
long-context understanding with high-quality long-sequence benchmarks. However,
prior datasets in this regard suffer from shortcomings, such as short context
length compared to the context window of modern LLMs; outdated documents that
have data leakage problems; and an emphasis on short dependency tasks rather
than long dependency tasks. In this paper, we present LooGLE, a Long Context
Generic Language Evaluation benchmark for LLMs' long context understanding.
LooGLE features relatively new documents post-2022, with over 24,000 tokens per
document and 6,000 newly generated questions spanning diverse domains. Human
annotators meticulously crafted more than 1,100 high-quality question-answer
pairs to meet the long dependency requirements. These pairs underwent thorough
cross-validation, yielding the most precise assessment of LLMs' long dependency
capabilities. The evaluation of eight state-of-the-art LLMs on LooGLE revealed
key findings: (i) commercial models outperformed open-sourced models; (ii) LLMs
excelled in short dependency tasks like short question-answering and cloze
tasks but struggled with more intricate long dependency tasks; (iii) in-context
learning and chaining thoughts offered only marginal improvements; (iv)
retrieval-based techniques demonstrated substantial benefits for short
question-answering, while strategies for extending context window length had
limited impact on long context understanding. As such, LooGLE not only provides
a systematic and comprehensive evaluation schema on long-context LLMs, but also
sheds light on future development of enhanced models towards "true long-context
understanding". | Computational Linguistics |
What field is the article from? | Title: Lecture Notes in Probabilistic Diffusion Models
Abstract: Diffusion models are loosely modelled based on non-equilibrium
thermodynamics, where \textit{diffusion} refers to particles flowing from
high-concentration regions towards low-concentration regions. In statistics,
the meaning is quite similar, namely the process of transforming a complex
distribution $p_{\text{complex}}$ on $\mathbb{R}^d$ to a simple distribution
$p_{\text{prior}}$ on the same domain. This constitutes a Markov chain of
diffusion steps of slowly adding random noise to data, followed by a reverse
diffusion process in which the data is reconstructed from the noise. The
diffusion model learns the data manifold to which the original and thus the
reconstructed data samples belong, by training on a large number of data
points. While the diffusion process pushes a data sample off the data manifold,
the reverse process finds a trajectory back to the data manifold. Diffusion
models have -- unlike variational autoencoder and flow models -- latent
variables with the same dimensionality as the original data, and they are
currently\footnote{At the time of writing, 2023.} outperforming other
approaches -- including Generative Adversarial Networks (GANs) -- to modelling
the distribution of, e.g., natural images. | Machine Learning |
What field is the article from? | Title: Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations
Abstract: Large language models (LLMs) have emerged as powerful and general solutions
to many natural language tasks. However, many of the most important
applications of language generation are interactive, where an agent has to talk
to a person to reach a desired outcome. For example, a teacher might try to
understand their student's current comprehension level to tailor their
instruction accordingly, and a travel agent might ask questions of their
customer to understand their preferences in order to recommend activities they
might enjoy. LLMs trained with supervised fine-tuning or "single-step" RL, as
with standard RLHF, might struggle with tasks that require such goal-directed
behavior, since they are not trained to optimize for overall conversational
outcomes after multiple turns of interaction. In this work, we explore a new
method for adapting LLMs with RL for such goal-directed dialogue. Our key
insight is that, though LLMs might not effectively solve goal-directed dialogue
tasks out of the box, they can provide useful data for solving such tasks by
simulating suboptimal but human-like behaviors. Given a textual description of
a goal-directed dialogue task, we leverage LLMs to sample diverse synthetic
rollouts of hypothetical in-domain human-human interactions. Our algorithm then
utilizes this dataset with offline reinforcement learning to train an
interactive conversational agent that can optimize goal-directed objectives
over multiple turns. In effect, the LLM produces examples of possible
interactions, and RL then processes these examples to learn to perform more
optimal interactions. Empirically, we show that our proposed approach achieves
state-of-the-art performance in various goal-directed dialogue tasks that
include teaching and preference elicitation. | Machine Learning |
What field is the article from? | Title: Is Machine Learning Unsafe and Irresponsible in Social Sciences? Paradoxes and Reconsidering from Recidivism Prediction Tasks
Abstract: The paper addresses some fundamental and hotly debated issues for high-stakes
event predictions underpinning the computational approach to social sciences.
We question several prevalent views against machine learning and outline a new
paradigm that highlights the promises and promotes the infusion of
computational methods and conventional social science approaches. | Computers and Society |
What field is the article from? | Title: Multi-Resolution Diffusion for Privacy-Sensitive Recommender Systems
Abstract: While recommender systems have become an integral component of the Web
experience, their heavy reliance on user data raises privacy and security
concerns. Substituting user data with synthetic data can address these
concerns, but accurately replicating these real-world datasets has been a
notoriously challenging problem. Recent advancements in generative AI have
demonstrated the impressive capabilities of diffusion models in generating
realistic data across various domains. In this work we introduce a Score-based
Diffusion Recommendation Module (SDRM), which captures the intricate patterns
of real-world datasets required for training highly accurate recommender
systems. SDRM allows for the generation of synthetic data that can replace
existing datasets to preserve user privacy, or augment existing datasets to
address excessive data sparsity. Our method outperforms competing baselines
such as generative adversarial networks, variational autoencoders, and recently
proposed diffusion models in synthesizing various datasets to replace or
augment the original data by an average improvement of 4.30% in Recall@$k$ and
4.65% in NDCG@$k$. | Information Retrieval |
What field is the article from? | Title: SEMQA: Semi-Extractive Multi-Source Question Answering
Abstract: Recently proposed long-form question answering (QA) systems, supported by
large language models (LLMs), have shown promising capabilities. Yet,
attributing and verifying their generated abstractive answers can be difficult,
and automatically evaluating their accuracy remains an ongoing challenge.
In this work, we introduce a new QA task for answering multi-answer questions
by summarizing multiple diverse sources in a semi-extractive fashion.
Specifically, Semi-extractive Multi-source QA (SEMQA) requires models to output
a comprehensive answer, while mixing factual quoted spans -- copied verbatim
from given input sources -- and non-factual free-text connectors that glue
these spans together into a single cohesive passage. This setting bridges the
gap between the outputs of well-grounded but constrained extractive QA systems
and more fluent but harder to attribute fully abstractive answers.
Particularly, it enables a new mode for language models that leverages their
advanced language generation capabilities, while also producing fine in-line
attributions by-design that are easy to verify, interpret, and evaluate.
To study this task, we create the first dataset of this kind, QuoteSum, with
human-written semi-extractive answers to natural and generated questions, and
define text-based evaluation metrics. Experimenting with several LLMs in
various settings, we find this task to be surprisingly challenging,
demonstrating the importance of QuoteSum for developing and studying such
consolidation capabilities. | Computational Linguistics |
What field is the article from? | Title: RDR: the Recap, Deliberate, and Respond Method for Enhanced Language Understanding
Abstract: Natural language understanding (NLU) using neural network pipelines often
requires additional context that is not solely present in the input data.
Prior research has shown that NLU benchmarks are susceptible
to manipulation by neural models, wherein these models exploit statistical
artifacts within the encoded external knowledge to artificially inflate
performance metrics for downstream tasks. Our proposed approach, known as the
Recap, Deliberate, and Respond (RDR) paradigm, addresses this issue by
incorporating three distinct objectives within the neural network pipeline.
Firstly, the Recap objective involves paraphrasing the input text using a
paraphrasing model in order to summarize and encapsulate its essence. Secondly,
the Deliberation objective entails encoding external graph information related
to entities mentioned in the input text, utilizing a graph embedding model.
Finally, the Respond objective employs a classification head model that
utilizes representations from the Recap and Deliberation modules to generate
the final prediction. By cascading these three models and minimizing a combined
loss, we mitigate the potential for gaming the benchmark and establish a robust
method for capturing the underlying semantic patterns, thus enabling accurate
predictions. To evaluate the effectiveness of the RDR method, we conduct tests
on multiple GLUE benchmark tasks. Our results demonstrate improved performance
compared to competitive baselines, with an enhancement of up to 2\% on standard
metrics. Furthermore, we analyze the observed evidence for semantic
understanding exhibited by RDR models, emphasizing their ability to avoid
gaming the benchmark and instead accurately capture the true underlying
semantic patterns. | Computational Linguistics |
What field is the article from? | Title: Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder's Perspective
Abstract: Commercial contracts are known to be a valuable source for deriving
project-specific requirements. However, contract negotiations mainly occur
among the legal counsel of the parties involved. The participation of non-legal
stakeholders, including requirement analysts, engineers, and solution
architects, whose primary responsibility lies in ensuring the seamless
implementation of contractual terms, is often indirect and inadequate.
Consequently, a significant number of sentences in contractual clauses, though
legally accurate, can appear unfair from an implementation perspective to
non-legal stakeholders. This perception poses a problem since requirements
indicated in the clauses are obligatory and can involve punitive measures and
penalties if not implemented as committed in the contract. Therefore, the
identification of potentially unfair clauses in contracts becomes crucial. In
this work, we conduct an empirical study to analyze the perspectives of
different stakeholders regarding contractual fairness. We then investigate the
ability of Pre-trained Language Models (PLMs) to identify unfairness in
contractual sentences by comparing chain of thought prompting and
semi-supervised fine-tuning approaches. Using BERT-based fine-tuning, we
achieved an accuracy of 84% on a dataset consisting of proprietary contracts.
It outperformed chain of thought prompting using Vicuna-13B by a margin of 9%. | Computational Linguistics |
What field is the article from? | Title: Large Language Models for Robotics: A Survey
Abstract: The human ability to learn, generalize, and control complex manipulation
tasks through multi-modality feedback suggests a unique capability, which we
refer to as dexterity intelligence. Understanding and assessing this
intelligence is a complex task. Amidst the swift progress and extensive
proliferation of large language models (LLMs), their applications in the field
of robotics have garnered increasing attention. LLMs possess the ability to
process and generate natural language, facilitating efficient interaction and
collaboration with robots. Researchers and engineers in the field of robotics
have recognized the immense potential of LLMs in enhancing robot intelligence,
human-robot interaction, and autonomy. Therefore, this comprehensive review
aims to summarize the applications of LLMs in robotics, delving into their
impact and contributions to key areas such as robot control, perception,
decision-making, and path planning. We first provide an overview of the
background and development of LLMs for robotics, followed by a description of
the benefits of LLMs for robotics and recent advancements in robotics models
based on LLMs. We then delve into the various techniques used in these models,
including those employed in perception, decision-making, control, and
interaction. Finally, we explore the applications of LLMs in robotics and some
potential challenges they may face in the near future. Embodied intelligence is
the future of intelligent science, and LLMs-based robotics is one of the
promising but challenging paths to achieve this. | Robotics |
What field is the article from? | Title: Emotion Recognition by Video: A review
Abstract: Video emotion recognition is an important branch of affective computing, and
its solutions can be applied in different fields such as human-computer
interaction (HCI) and intelligent medical treatment. Although the number of
papers published in the field of emotion recognition is increasing, there are
few comprehensive literature reviews covering related research on video emotion
recognition. Therefore, this paper selects articles published from 2015 to 2023
to systematize the existing trends in video emotion recognition in related
studies. In this paper, we first introduce two typical emotion models, then
review the databases frequently used for video emotion recognition, including
unimodal and multimodal databases. Next, we examine and classify the specific
structures and performance of modern unimodal and multimodal video emotion
recognition methods, discuss the benefits and drawbacks of each, and compare
them in detail in tables. Further, we summarize the primary difficulties
currently faced by video emotion recognition and point out some of the most
promising future directions, such as establishing an open benchmark database
and better multimodal fusion strategies. The essential objective of this paper
is to help academic and industrial researchers keep up to date with the latest
advances and developments in this fast-moving, high-impact field of video
emotion recognition. | Computer Vision
What field is the article from? | Title: Breast Cancer classification by adaptive weighted average ensemble of previously trained models
Abstract: Breast cancer is a serious disease that inflicts millions of people each
year, and the number of cases is increasing. Early detection is the best way to
reduce the impact of the disease. Researchers have developed many techniques to
detect breast cancer, including the use of histopathology images in CAD
systems. This research proposes a technique that combines already fully
trained models using an adaptive weighted average ensemble. This differs from
the literature, where the average ensemble is formed before training and the
ensemble members are trained simultaneously. Our approach is different because
it applies the adaptive weighted average ensemble after training, which has
improved the evaluation metrics. It averages the outputs of every trained
model, weighting each model according to its accuracy. The adaptive weighted
ensemble model achieved an accuracy of 98%, a 1 percentage point improvement
over the best participating model in the ensemble, which reached 97%. It also
reduced the numbers of false positives and false negatives and enhanced the
performance metrics. | Artificial Intelligence
What field is the article from? | Title: Generative Input: Towards Next-Generation Input Methods Paradigm
Abstract: Since the release of ChatGPT, generative models have achieved tremendous
success and become the de facto approach for various NLP tasks. However, its
application in the field of input methods remains under-explored. Many neural
network approaches have been applied to the construction of Chinese input
method engines(IMEs).Previous research often assumed that the input pinyin was
correct and focused on Pinyin-to-character(P2C) task, which significantly falls
short of meeting users' demands. Moreover, previous research could not leverage
user feedback to optimize the model and provide personalized results. In this
study, we propose a novel Generative Input paradigm named GeneInput. It uses
prompts to handle all input scenarios and other intelligent auxiliary input
functions, optimizing the model with user feedback to deliver personalized
results. The results demonstrate that we have achieved state-of-the-art
performance for the first time in the Full-mode Key-sequence to
Characters (FK2C) task. We propose a novel reward model training method that
eliminates the need for additional manual annotations and the performance
surpasses GPT-4 in tasks involving intelligent association and conversational
assistance. Compared to traditional paradigms, GeneInput not only demonstrates
superior performance but also exhibits enhanced robustness, scalability, and
online learning capabilities. | Computational Linguistics |
What field is the article from? | Title: On The Truthfulness of 'Surprisingly Likely' Responses of Large Language Models
Abstract: The surprisingly likely criterion in the seminal work of Prelec (the Bayesian
Truth Serum) guarantees truthfulness in a game-theoretic multi-agent setting,
by rewarding rational agents to maximise the expected information gain with
their answers w.r.t. their probabilistic beliefs. We investigate the relevance
of a similar criterion for responses of LLMs. We hypothesize that if the
surprisingly likely criterion works in LLMs, under certain conditions, the
responses that maximize the reward under this criterion should be more accurate
than the responses that only maximize the posterior probability. Using
benchmarks including the TruthfulQA benchmark and using openly available LLMs:
GPT-2 and LLaMA-2, we show that the method indeed improves the accuracy
significantly (for example, up to 24 percentage points aggregate improvement on
TruthfulQA and up to 70 percentage points improvement on individual categories
of questions). | Machine Learning |
What field is the article from? | Title: FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
Abstract: We present FoundationPose, a unified foundation model for 6D object pose
estimation and tracking, supporting both model-based and model-free setups. Our
approach can be instantly applied at test-time to a novel object without
fine-tuning, as long as its CAD model is given, or a small number of reference
images are captured. We bridge the gap between these two setups with a neural
implicit representation that allows for effective novel view synthesis, keeping
the downstream pose estimation modules invariant under the same unified
framework. Strong generalizability is achieved via large-scale synthetic
training, aided by a large language model (LLM), a novel transformer-based
architecture, and contrastive learning formulation. Extensive evaluation on
multiple public datasets involving challenging scenarios and objects indicates
that our unified approach outperforms existing methods specialized for each task by
a large margin. In addition, it even achieves comparable results to
instance-level methods despite the reduced assumptions. Project page:
https://nvlabs.github.io/FoundationPose/ | Computer Vision |
What field is the article from? | Title: Verification of Neural Reachable Tubes via Scenario Optimization and Conformal Prediction
Abstract: Learning-based approaches for controlling safety-critical systems are rapidly
growing in popularity; thus, it is important to assure their performance and
safety. Hamilton-Jacobi (HJ) reachability analysis is a popular formal
verification tool for providing such guarantees, since it can handle general
nonlinear system dynamics, bounded adversarial system disturbances, and state
and input constraints. However, its computational and memory complexity scales
exponentially with the state dimension, making it intractable for large-scale
systems. To overcome this challenge, neural approaches, such as DeepReach, have
been used to synthesize reachable tubes and safety controllers for
high-dimensional systems. However, verifying these neural reachable tubes
remains challenging. In this work, we propose two verification methods, based
on robust scenario optimization and conformal prediction, to provide
probabilistic safety guarantees for neural reachable tubes. Our methods allow a
direct trade-off between resilience to outlier errors in the neural tube, which
are inevitable in a learning-based approach, and the strength of the
probabilistic safety guarantee. Furthermore, we show that split conformal
prediction, a widely used method in the machine learning community for
uncertainty quantification, reduces to a scenario-based approach, making the
two methods equivalent not only for verification of neural reachable tubes but
also more generally. To our knowledge, our proof is the first in the literature
to show a strong relationship between conformal prediction and scenario
optimization. Finally, we propose an outlier-adjusted verification approach
that uses the error distribution in neural reachable tubes to recover greater
safe volumes. We demonstrate the efficacy of the proposed approaches for the
high-dimensional problems of multi-vehicle collision avoidance and rocket
landing with no-go zones. | Robotics |
What field is the article from? | Title: Counterfactual-Augmented Importance Sampling for Semi-Offline Policy Evaluation
Abstract: In applying reinforcement learning (RL) to high-stakes domains, quantitative
and qualitative evaluation using observational data can help practitioners
understand the generalization performance of new policies. However, this type
of off-policy evaluation (OPE) is inherently limited since offline data may not
reflect the distribution shifts resulting from the application of new policies.
On the other hand, online evaluation by collecting rollouts according to the
new policy is often infeasible, as deploying new policies in these domains can
be unsafe. In this work, we propose a semi-offline evaluation framework as an
intermediate step between offline and online evaluation, where human users
provide annotations of unobserved counterfactual trajectories. While tempting
to simply augment existing data with such annotations, we show that this naive
approach can lead to biased results. Instead, we design a new family of OPE
estimators based on importance sampling (IS) and a novel weighting scheme that
incorporate counterfactual annotations without introducing additional bias. We
analyze the theoretical properties of our approach, showing its potential to
reduce both bias and variance compared to standard IS estimators. Our analyses
reveal important practical considerations for handling biased, noisy, or
missing annotations. In a series of proof-of-concept experiments involving
bandits and a healthcare-inspired simulator, we demonstrate that our approach
outperforms purely offline IS estimators and is robust to imperfect
annotations. Our framework, combined with principled human-centered design of
annotation solicitation, can enable the application of RL in high-stakes
domains. | Machine Learning |
What field is the article from? | Title: Interpreting Pretrained Language Models via Concept Bottlenecks
Abstract: Pretrained language models (PLMs) have made significant strides in various
natural language processing tasks. However, the lack of interpretability due to
their ``black-box'' nature poses challenges for responsible implementation.
Although previous studies have attempted to improve interpretability by using,
e.g., attention weights in self-attention layers, these weights often lack
clarity, readability, and intuitiveness. In this research, we propose a novel
approach to interpreting PLMs by employing high-level, meaningful concepts that
are easily understandable for humans. For example, we learn the concept of
``Food'' and investigate how it influences the prediction of a model's
sentiment towards a restaurant review. We introduce C$^3$M, which combines
human-annotated and machine-generated concepts to extract hidden neurons
designed to encapsulate semantically meaningful and task-specific concepts.
Through empirical evaluations on real-world datasets, we manifest that our
approach offers valuable insights to interpret PLM behavior, helps diagnose
model failures, and enhances model robustness amidst noisy concept labels. | Computational Linguistics |
What field is the article from? | Title: An Empirical Bayes Framework for Open-Domain Dialogue Generation
Abstract: To engage human users in meaningful conversation, open-domain dialogue agents
are required to generate diverse and contextually coherent dialogue. Despite
recent advancements, which can be attributed to the usage of pretrained
language models, the generation of diverse and coherent dialogue remains an
open research problem. A popular approach to address this issue involves the
adaptation of variational frameworks. However, while these approaches
successfully improve diversity, they tend to compromise on contextual
coherence. Hence, we propose the Bayesian Open-domain Dialogue with Empirical
Bayes (BODEB) framework, an empirical Bayes framework for constructing a
Bayesian open-domain dialogue agent by leveraging pretrained parameters to
inform the prior and posterior parameter distributions. Empirical results show
that BODEB achieves better results in terms of both diversity and coherence
compared to variational frameworks. | Computational Linguistics |
What field is the article from? | Title: Social, Legal, Ethical, Empathetic, and Cultural Rules: Compilation and Reasoning (Extended Version)
Abstract: The rise of AI-based and autonomous systems is raising concerns and
apprehension due to potential negative repercussions stemming from their
behavior or decisions. These systems must be designed to comply with the human
contexts in which they will operate. To this extent, Townsend et al. (2022)
introduce the concept of SLEEC (social, legal, ethical, empathetic, or
cultural) rules that aim to facilitate the formulation, verification, and
enforcement of the rules AI-based and autonomous systems should obey. They lay
out a methodology to elicit them and to let philosophers, lawyers, domain
experts, and others formulate them in natural language. To enable their
effective use in AI systems, it is necessary to translate these rules
systematically into a formal language that supports automated reasoning. In
this study, we first conduct a linguistic analysis of the SLEEC rules pattern,
which justifies the translation of SLEEC rules into classical logic. Then we
investigate the computational complexity of reasoning about SLEEC rules and
show how logical programming frameworks can be employed to implement SLEEC
rules in practical scenarios. The result is a readily applicable strategy for
implementing AI systems that conform to norms expressed as SLEEC rules. | Artificial Intelligence |
What field is the article from? | Title: Evaluating Supervision Levels Trade-Offs for Infrared-Based People Counting
Abstract: Object detection models are commonly used for people counting (and
localization) in many applications but require a dataset with costly bounding
box annotations for training. Given the importance of privacy in people
counting, these models rely more and more on infrared images, making the task
even harder. In this paper, we explore how weaker levels of supervision can
affect the performance of deep person counting architectures for image
classification and point-level localization. Our experiments indicate that
counting people using a CNN Image-Level model achieves competitive results with
YOLO detectors and point-level models, yet provides a higher frame rate and a
similar number of model parameters. | Computer Vision
What field is the article from? | Title: KITS: Inductive Spatio-Temporal Kriging with Increment Training Strategy
Abstract: Sensors are commonly deployed to perceive the environment. However, due to
the high cost, sensors are usually sparsely deployed. Kriging is the tailored
task to infer the unobserved nodes (without sensors) using the observed source
nodes (with sensors). The essence of kriging task is transferability. Recently,
several inductive spatio-temporal kriging methods have been proposed based on
graph neural networks, being trained based on a graph built on top of observed
nodes via pretext tasks such as masking nodes out and reconstructing them.
However, the graph in training is inevitably much sparser than the graph in
inference that includes all the observed and unobserved nodes. The learned
pattern cannot be well generalized for inference, denoted as graph gap. To
address this issue, we first present a novel Increment training strategy:
instead of masking nodes (and reconstructing them), we add virtual nodes into
the training graph so as to mitigate the graph gap issue naturally.
Nevertheless, the empty-shell virtual nodes, which lack labels, could have
poorly learned features and lack supervision signals. To solve these issues, we
pair each virtual node with its most similar observed node and fuse their
features together; to enhance the supervision signal, we construct reliable
pseudo labels for virtual nodes. As a result, the learned pattern of virtual
nodes could be safely transferred to real unobserved nodes for reliable
kriging. We name our new Kriging model with Increment Training Strategy as
KITS. Extensive experiments demonstrate that KITS consistently outperforms
existing kriging methods by large margins, e.g., the improvement in MAE
could be as high as 18.33%. | Machine Learning
What field is the article from? | Title: Local Universal Rule-based Explanations
Abstract: Explainable artificial intelligence (XAI) is one of the most intensively
developed areas of AI in recent years. It is also one of the most fragmented,
with multiple methods focusing on different aspects of explanations. This makes
it difficult to obtain the full spectrum of explanations at once in a compact
and consistent way. To address this issue, we present Local Universal Explainer
(LUX) that is a rule-based explainer which can generate factual, counterfactual
and visual explanations. It is based on a modified version of decision tree
algorithms that allows for oblique splits and integration with feature
importance XAI methods such as SHAP or LIME. Unlike other algorithms, it does
not use data generation, but instead focuses on selecting local concepts in the
form of high-density clusters of real data that have the highest impact on
forming the decision boundary of the explained model. We tested our method on
real and synthetic datasets and compared it with state-of-the-art rule-based
explainers such as LORE, EXPLAN and Anchor. Our method outperforms currently
existing approaches in terms of simplicity, global fidelity and
representativeness. | Artificial Intelligence |
What field is the article from? | Title: Signal Temporal Logic-Guided Apprenticeship Learning
Abstract: Apprenticeship learning crucially depends on effectively learning rewards,
and hence control policies from user demonstrations. Of particular difficulty
is the setting where the desired task consists of a number of sub-goals with
temporal dependencies. The quality of inferred rewards, and hence policies, is
typically limited by the quality of demonstrations, and poor inference of these
can lead to undesirable outcomes. In this letter, we show how temporal logic
specifications, which describe high-level task objectives, are encoded in a graph
to define a temporal-based metric that reasons about behaviors of demonstrators
and the learner agent to improve the quality of inferred rewards and policies.
Through experiments on a diverse set of robot manipulator simulations, we show
how our framework overcomes the drawbacks of prior literature by drastically
improving the number of demonstrations required to learn a control policy. | Robotics |
What field is the article from? | Title: Nonlinear Multi-objective Reinforcement Learning with Provable Guarantees
Abstract: We describe RA-E3 (Reward-Aware Explicit Explore or Exploit), an algorithm
with provable guarantees for solving a single or multi-objective Markov
Decision Process (MDP) where we want to maximize the expected value of a
nonlinear function over accumulated rewards. This allows us to model
fairness-aware welfare optimization for multi-objective reinforcement learning
as well as risk-aware reinforcement learning with nonlinear Von
Neumann-Morgenstern utility functions in the single objective setting. RA-E3
extends the classic E3 algorithm that solves MDPs with scalar rewards and
linear preferences. We first state a distinct reward-aware version of value
iteration that calculates a non-stationary policy that is approximately optimal
for a given model of the environment. This sub-procedure is based on an
extended form of Bellman optimality for nonlinear optimization that explicitly
considers time and current accumulated reward. We then describe how to use this
optimization procedure in a larger algorithm that must simultaneously learn a
model of the environment. The algorithm learns an approximately optimal policy
in time that depends polynomially on the MDP size, desired approximation, and
smoothness of the nonlinear function, and exponentially on the number of
objectives. | Machine Learning |
What field is the article from? | Title: Diffusion-C: Unveiling the Generative Challenges of Diffusion Models through Corrupted Data
Abstract: In our contemporary academic inquiry, we present "Diffusion-C," a
foundational methodology to analyze the generative restrictions of Diffusion
Models, particularly those akin to GANs, DDPM, and DDIM. By employing input
visual data that has been subjected to a myriad of corruption modalities and
intensities, we elucidate the performance characteristics of those Diffusion
Models. The noise component takes center stage in our analysis, hypothesized to
be a pivotal element influencing the mechanics of deep learning systems. In our
rigorous expedition utilizing Diffusion-C, we have discerned the following
critical observations: (I) Within the milieu of generative models under the
Diffusion taxonomy, DDPM emerges as a paragon, consistently exhibiting superior
performance metrics. (II) Within the vast spectrum of corruption frameworks,
the fog and fractal corruptions notably undermine the functional robustness of
both DDPM and DDIM. (III) The vulnerability of Diffusion Models to these
particular corruptions is significantly influenced by topological and
statistical similarities, particularly concerning the alignment between mean
and variance. This scholarly work highlights Diffusion-C's core understandings
regarding the impacts of various corruptions, setting the stage for future
research endeavors in the realm of generative models. | Machine Learning |
What field is the article from? | Title: Transdisciplinary AI Education: The Confluence of Curricular and Community Needs in the Instruction of Artificial Intelligence
Abstract: The integration of artificial intelligence (AI) into education has the
potential to transform the way we learn and teach. In this paper, we examine
the current state of AI in education and explore the potential benefits and
challenges of incorporating this technology into the classroom. The approaches
currently available for AI education often present students with experiences
only focusing on discrete computer science concepts agnostic to a larger
curriculum. However, teaching AI must not be siloed, nor merely interdisciplinary.
Rather, AI instruction ought to be transdisciplinary, including connections to
the broad curriculum and community in which students are learning. This paper
delves into the AI program currently in development for Neom Community School
and the larger Education, Research, and Innovation Sector in Neom, Saudi
Arabia's new megacity under development. In this program, AI is both taught as
a subject and used to learn other subjects within the curriculum through the
school system's International Baccalaureate (IB) approach, which deploys learning
through Units of Inquiry. This approach to education connects subjects across a
curriculum under one major guiding question at a time. The proposed method
offers a meaningful approach to introducing AI to students throughout these
Units of Inquiry, as it shifts AI from a subject that students may or may not like
to a subject that is taught throughout the curriculum. | Computers and Society |
What field is the article from? | Title: Unbiased organism-agnostic and highly sensitive signal peptide predictor with deep protein language model
Abstract: Signal peptide (SP) is a short peptide located in the N-terminus of proteins.
It is essential to target and transfer transmembrane and secreted proteins to
correct positions. Compared with traditional experimental methods to identify
signal peptides, computational methods are faster and more efficient, which are
more practical for analyzing thousands or even millions of protein sequences,
especially for metagenomic data. Here we present Unbiased Organism-agnostic
Signal Peptide Network (USPNet), a signal peptide classification and cleavage
site prediction deep learning method that takes advantage of protein language
models. We propose to apply label distribution-aware margin loss to handle data
imbalance problems and use evolutionary information of proteins to enrich
representations and overcome dependence on species information. | Artificial Intelligence
What field is the article from? | Title: ReConTab: Regularized Contrastive Representation Learning for Tabular Data
Abstract: Representation learning stands as one of the critical machine learning
techniques across various domains. Through the acquisition of high-quality
features, pre-trained embeddings significantly reduce input space redundancy,
benefiting downstream pattern recognition tasks such as classification,
regression, or detection. Nonetheless, in the domain of tabular data, feature
engineering and selection still heavily rely on manual intervention, leading to
time-consuming processes and necessitating domain expertise. In response to
this challenge, we introduce ReConTab, a deep automatic representation learning
framework with regularized contrastive learning. Agnostic to any type of
modeling task, ReConTab constructs an asymmetric autoencoder based on the same
raw features from model inputs, producing low-dimensional representative
embeddings. Specifically, regularization techniques are applied for raw feature
selection. Meanwhile, ReConTab leverages contrastive learning to distill the
most pertinent information for downstream tasks. Experiments conducted on
extensive real-world datasets substantiate the framework's capacity to yield
substantial and robust performance improvements. Furthermore, we empirically
demonstrate that pre-trained embeddings can seamlessly integrate as easily
adaptable features, enhancing the performance of various traditional methods
such as XGBoost and Random Forest. | Machine Learning |
What field is the article from? | Title: Computational Copyright: Towards A Royalty Model for AI Music Generation Platforms
Abstract: The advancement of generative AI has given rise to pressing copyright
challenges, particularly in music industry. This paper focuses on the economic
aspects of these challenges, emphasizing that the economic impact constitutes a
central issue in the copyright arena. The complexity of the black-box
generative AI technologies not only suggests but necessitates algorithmic
solutions. However, such solutions have been largely missing, leading to
regulatory challenges in this landscape. We aim to bridge the gap in current
approaches by proposing potential royalty models for revenue sharing on AI
music generation platforms. Our methodology involves a detailed analysis of
existing royalty models in platforms like Spotify and YouTube, and adapting
these to the unique context of AI-generated music. A significant challenge we
address is the attribution of AI-generated music to influential copyrighted
content in the training data. To this end, we present algorithmic solutions
employing data attribution techniques. Our experimental results verify the
effectiveness of these solutions. This research represents a pioneering effort
in integrating technical advancements with economic and legal considerations in
the field of generative AI, offering a computational copyright solution for the
challenges posed by the opaque nature of AI technologies. | Artificial Intelligence |
What field is the article from? | Title: JarviX: A LLM No code Platform for Tabular Data Analysis and Optimization
Abstract: In this study, we introduce JarviX, a sophisticated data analytics framework.
JarviX is designed to employ Large Language Models (LLMs) to automatically
guide and execute high-precision data analyses on tabular datasets.
This framework emphasizes the significance of varying column types,
capitalizing on state-of-the-art LLMs to generate concise data insight
summaries, propose relevant analysis inquiries, visualize data effectively, and
provide comprehensive explanations for results drawn from an extensive data
analysis pipeline. Moreover, JarviX incorporates an automated machine learning
(AutoML) pipeline for predictive modeling. This integration forms a
comprehensive and automated optimization cycle, which proves particularly
advantageous for optimizing machine configuration. The efficacy and
adaptability of JarviX are substantiated through a series of practical use case
studies. | Machine Learning |
What field is the article from? | Title: Rethinking Decision Transformer via Hierarchical Reinforcement Learning
Abstract: Decision Transformer (DT) is an innovative algorithm leveraging recent
advances of the transformer architecture in reinforcement learning (RL).
However, a notable limitation of DT is its reliance on recalling trajectories
from datasets, losing the capability to seamlessly stitch sub-optimal
trajectories together. In this work we introduce a general sequence modeling
framework for studying sequential decision making through the lens of
Hierarchical RL. At the time of making decisions, a high-level policy first
proposes an ideal prompt for the current state, a low-level policy subsequently
generates an action conditioned on the given prompt. We show DT emerges as a
special case of this framework with certain choices of high-level and low-level
policies, and discuss the potential failure of these choices. Inspired by these
observations, we study how to jointly optimize the high-level and low-level
policies to enable the stitching ability, which further leads to the
development of new offline RL algorithms. Our empirical results clearly show
that the proposed algorithms significantly surpass DT on several control and
navigation benchmarks. We hope our contributions can inspire the integration of
transformer architectures within the field of RL. | Machine Learning |
What field is the article from? | Title: Medical Image Retrieval Using Pretrained Embeddings
Abstract: The wide range of imaging techniques and data formats available for medical
images makes accurate retrieval from image databases challenging.
Efficient retrieval systems are crucial in advancing medical research,
enabling large-scale studies and innovative diagnostic tools. Thus, addressing
the challenges of medical image retrieval is essential for the continued
enhancement of healthcare and research.
In this study, we evaluated the feasibility of employing four
state-of-the-art pretrained models for medical image retrieval at modality,
body region, and organ levels and compared the results of two similarity
indexing approaches. Since the employed networks take 2D images, we analyzed
the impacts of weighting and sampling strategies to incorporate 3D information
during retrieval of 3D volumes. We showed that medical image retrieval is
feasible using pretrained networks without any additional training or
fine-tuning steps. Using pretrained embeddings, we achieved a recall of 1 for
various tasks at modality, body region, and organ level. | Computer Vision |
What field is the article from? | Title: Intelligent Anomaly Detection for Lane Rendering Using Transformer with Self-Supervised Pre-Training and Customized Fine-Tuning
Abstract: The burgeoning navigation services using digital maps provide great
convenience to drivers. Nevertheless, the presence of anomalies in lane
rendering map images occasionally introduces potential hazards, as such
anomalies can be misleading to human drivers and consequently contribute to
unsafe driving conditions. In response to this concern and to accurately and
effectively detect the anomalies, this paper transforms lane rendering image
anomaly detection into a classification problem and proposes a four-phase
pipeline consisting of data pre-processing, self-supervised pre-training with
the masked image modeling (MiM) method, customized fine-tuning using
cross-entropy based loss with label smoothing, and post-processing to tackle it
leveraging state-of-the-art deep learning techniques, especially those
involving Transformer models. Various experiments verify the effectiveness of
the proposed pipeline. Results indicate that the proposed pipeline exhibits
superior performance in lane rendering image anomaly detection, and notably,
the self-supervised pre-training with MiM can greatly enhance the detection
accuracy while significantly reducing the total training time. For instance,
employing the Swin Transformer with Uniform Masking as self-supervised
pretraining (Swin-Trans-UM) yielded a heightened accuracy of 94.77% and an
improved Area Under The Curve (AUC) score of 0.9743 compared with the pure Swin
Transformer without pre-training (Swin-Trans) with an accuracy of 94.01% and an
AUC of 0.9498. The fine-tuning epochs were dramatically reduced to 41 from the
original 280. In conclusion, the proposed pipeline, with its incorporation of
self-supervised pre-training using MiM and other advanced deep learning
techniques, emerges as a robust solution for enhancing the accuracy and
efficiency of lane rendering image anomaly detection in digital navigation
systems. | Computer Vision |
What field is the article from? | Title: Diversified Node Sampling based Hierarchical Transformer Pooling for Graph Representation Learning
Abstract: Graph pooling methods have been widely used on downsampling graphs, achieving
impressive results on multiple graph-level tasks like graph classification and
graph generation. An important line called node dropping pooling aims at
exploiting learnable scoring functions to drop nodes with comparatively lower
significance scores. However, existing node dropping methods suffer from two
limitations: (1) for each pooled node, these models struggle to capture
long-range dependencies since they mainly take GNNs as the backbones; (2)
pooling only the highest-scoring nodes tends to preserve similar nodes, thus
discarding the affluent information of low-scoring nodes. To address these
issues, we propose a Graph Transformer Pooling method termed GTPool, which
introduces Transformer to node dropping pooling to efficiently capture
long-range pairwise interactions and meanwhile sample nodes diversely.
Specifically, we design a scoring module based on the self-attention mechanism
that takes both global context and local context into consideration, measuring
the importance of nodes more comprehensively. GTPool further utilizes a
diversified sampling method named Roulette Wheel Sampling (RWS) that is able to
flexibly preserve nodes across different scoring intervals instead of only
higher scoring nodes. In this way, GTPool could effectively obtain long-range
information and select more representative nodes. Extensive experiments on 11
benchmark datasets demonstrate the superiority of GTPool over existing popular
graph pooling methods. | Artificial Intelligence |
What field is the article from? | Title: Correction with Backtracking Reduces Hallucination in Summarization
Abstract: Abstractive summarization aims at generating natural language summaries of a
source document that are succinct while preserving the important elements.
Despite recent advances, neural text summarization models are known to be
susceptible to hallucinating (or, more correctly, confabulating), that is, to
produce summaries with details that are not grounded in the source document. In
this paper, we introduce a simple yet efficient technique, CoBa, to reduce
hallucination in abstractive summarization. The approach is based on two steps:
hallucination detection and mitigation. We show that the former can be achieved
through measuring simple statistics about conditional word probabilities and
distance to context words. Further, we demonstrate that straightforward
backtracking is surprisingly effective at mitigation. We thoroughly evaluate
the proposed method with prior art on three benchmark datasets for text
summarization. The results show that CoBa is effective and efficient in
reducing hallucination, and offers great adaptability and flexibility. | Computational Linguistics |
What field is the article from? | Title: MI-Gen: Multiple Instance Generation of Pathology Reports for Gigapixel Whole-Slide Images
Abstract: Whole slide images are the foundation of digital pathology for the diagnosis
and treatment of carcinomas. Writing pathology reports is laborious and
error-prone for inexperienced pathologists. To reduce the workload and improve
clinical automation, we investigate how to generate pathology reports given
whole slide images. On the data end, we curated the largest WSI-text dataset
(TCGA-PathoText). Specifically, we collected nearly 10,000 high-quality WSI-text
pairs for visual-language models by recognizing and cleaning pathology reports
which narrate diagnostic slides in TCGA. On the model end, we propose the
multiple instance generative model (MI-Gen) which can produce pathology reports
for gigapixel WSIs. We benchmark our model on the largest subset of
TCGA-PathoText. Experimental results show our model can generate pathology
reports which contain multiple clinical clues. Furthermore, WSI-text prediction
can be seen as an approach of visual-language pre-training, which enables our
model to be transferred to downstream diagnostic tasks like carcinoma grading
and phenotyping. We observe that simple semantic extraction from the pathology
reports can achieve the best performance (0.838 of F1 score) on BRCA subtyping
without adding extra parameters or tricky fine-tuning. Our collected dataset
and related code will all be publicly available. | Computer Vision |
What field is the article from? | Title: Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement
Abstract: Humans use abstract concepts for understanding instead of hard features.
Recent interpretability research has focused on human-centered concept
explanations of neural networks. Concept Activation Vectors (CAVs) estimate a
model's sensitivity and possible biases to a given concept. In this paper, we
extend CAVs from post-hoc analysis to ante-hoc training in order to reduce
model bias through fine-tuning using an additional Concept Loss. Concepts were
defined on the final layer of the network in the past. We generalize this to
intermediate layers using class prototypes. This facilitates class learning in
the last convolution layer, which is known to be most informative. We also
introduce Concept Distillation to create richer concepts using a pre-trained
knowledgeable model as the teacher. Our method can sensitize or desensitize a
model towards concepts. We show applications of concept-sensitive training to
debias several classification problems. We also use concepts to induce prior
knowledge into IID, a reconstruction problem. Concept-sensitive training can
improve model interpretability, reduce biases, and induce prior knowledge.
Please visit https://avani17101.github.io/Concept-Distilllation/ for code and
more details. | Machine Learning |
What field is the article from? | Title: Neural Machine Translation of Clinical Text: An Empirical Investigation into Multilingual Pre-Trained Language Models and Transfer-Learning
Abstract: We conduct investigations on clinical text machine translation by examining
multilingual neural network models using deep learning such as Transformer
based structures. Furthermore, to address the language resource imbalance
issue, we also carry out experiments using a transfer learning methodology
based on massive multilingual pre-trained language models (MMPLMs). The
experimental results on three subtasks including 1) clinical case (CC), 2)
clinical terminology (CT), and 3) ontological concept (OC) show that our models
achieved top-level performances in the ClinSpEn-2022 shared task on
English-Spanish clinical domain data. Furthermore, our expert-based human
evaluations demonstrate that the small-sized pre-trained language model (PLM)
won over the other two extra-large language models by a large margin in
clinical-domain fine-tuning, a finding that had not previously been reported in the field.
Finally, the transfer learning method works well in our experimental setting
using the WMT21fb model to accommodate Spanish, a new language space that was
not seen at the pre-training stage within WMT21fb itself; this deserves further
exploration for clinical knowledge transformation, e.g. by investigating
more languages. These research findings can shed some light on domain-specific
machine translation development, especially in clinical and healthcare fields.
Further research projects can be carried out based on our work to improve
healthcare text analytics and knowledge transformation. | Computational Linguistics |
What field is the article from? | Title: Apollo's Oracle: Retrieval-Augmented Reasoning in Multi-Agent Debates
Abstract: Multi-agent debate systems are designed to derive accurate and consistent
conclusions through adversarial interactions among agents. However, these
systems often encounter challenges due to cognitive constraints, manifesting as
(1) agents' obstinate adherence to incorrect viewpoints and (2) their
propensity to abandon correct viewpoints. These issues are primarily
responsible for the ineffectiveness of such debates. Addressing the challenge
of cognitive constraints, we introduce a novel framework, the Multi-Agent
Debate with Retrieval Augmented (MADRA). MADRA incorporates retrieval of prior
knowledge into the debate process, effectively breaking cognitive constraints
and enhancing the agents' reasoning capabilities. Furthermore, we have
developed a self-selection module within this framework, enabling agents to
autonomously select pertinent evidence, thereby minimizing the impact of
irrelevant or noisy data. We have comprehensively tested and analyzed MADRA
across six diverse datasets. The experimental results demonstrate that our
approach significantly enhances performance across various tasks, proving the
effectiveness of our proposed method. | Computational Linguistics |
What field is the article from? | Title: Compositional Chain-of-Thought Prompting for Large Multimodal Models
Abstract: The combination of strong visual backbones and Large Language Model (LLM)
reasoning has led to Large Multimodal Models (LMMs) becoming the current
standard for a wide range of vision and language (VL) tasks. However, recent
research has shown that even the most advanced LMMs still struggle to capture
aspects of compositional visual reasoning, such as attributes and relationships
between objects. One solution is to utilize scene graphs (SGs)--a formalization
of objects and their relations and attributes that has been extensively used as
a bridge between the visual and textual domains. Yet, scene graph data requires
scene graph annotations, which are expensive to collect and thus not easily
scalable. Moreover, finetuning an LMM based on SG data can lead to catastrophic
forgetting of the pretraining objective. To overcome this, inspired by
chain-of-thought methods, we propose Compositional Chain-of-Thought (CCoT), a
novel zero-shot Chain-of-Thought prompting method that utilizes SG
representations in order to extract compositional knowledge from an LMM.
Specifically, we first generate an SG using the LMM, and then use that SG in
the prompt to produce a response. Through extensive experiments, we find that
the proposed CCoT approach not only improves LMM performance on several vision
and language VL compositional benchmarks but also improves the performance of
several popular LMMs on general multimodal benchmarks, without the need for
fine-tuning or annotated ground-truth SGs. | Computer Vision |
What field is the article from? | Title: LMD: Faster Image Reconstruction with Latent Masking Diffusion
Abstract: As a class of fruitful approaches, diffusion probabilistic models (DPMs) have
shown excellent advantages in high-resolution image reconstruction. On the
other hand, masked autoencoders (MAEs), as popular self-supervised vision
learners, have demonstrated simpler and more effective image reconstruction and
transfer capabilities on downstream tasks. However, they all require extremely
high training costs, either due to inherent high temporal-dependence (i.e.,
excessively long diffusion steps) or due to artificially low spatial-dependence
(i.e., a human-formulated high mask ratio, such as 0.75). To this end, this paper
presents LMD, a faster image reconstruction framework with latent masking
diffusion. First, we propose to project and reconstruct images in latent space
through a pre-trained variational autoencoder, which is theoretically more
efficient than in the pixel-based space. Then, we combine the advantages of
MAEs and DPMs to design a progressive masking diffusion model, which gradually
increases the masking proportion by three different schedulers and reconstructs
the latent features from simple to difficult, without sequentially performing
denoising diffusion as in DPMs or using fixed high masking ratio as in MAEs, so
as to alleviate the high training time-consumption predicament. Our approach
allows for learning high-capacity models, accelerates their training (by 3x
or more), and barely reduces the original accuracy. Inference speed in
downstream tasks also significantly outperforms that of previous approaches.
What field is the article from? | Title: Do Physicians Know How to Prompt? The Need for Automatic Prompt Optimization Help in Clinical Note Generation
Abstract: This study examines the effect of prompt engineering on the performance of
Large Language Models (LLMs) in clinical note generation. We introduce an
Automatic Prompt Optimization (APO) framework to refine initial prompts and
compare the outputs of medical experts, non-medical experts, and APO-enhanced
GPT3.5 and GPT4. Results highlight GPT4 APO's superior performance in
standardizing prompt quality across clinical note sections. A human-in-the-loop
approach shows that experts maintain content quality post-APO, with a
preference for their own modifications, suggesting the value of expert
customization. We recommend a two-phase optimization process, leveraging
APO-GPT4 for consistency and expert input for personalization. | Computational Linguistics |
What field is the article from? | Title: Analyzing and Predicting Low-Listenership Trends in a Large-Scale Mobile Health Program: A Preliminary Investigation
Abstract: Mobile health programs are becoming an increasingly popular medium for
dissemination of health information among beneficiaries in less privileged
communities. Kilkari is one of the world's largest mobile health programs which
delivers time sensitive audio-messages to pregnant women and new mothers. We
have been collaborating with ARMMAN, a non-profit in India which operates the
Kilkari program, to identify bottlenecks to improve the efficiency of the
program. In particular, we provide an initial analysis of the trajectories of
beneficiaries' interaction with the mHealth program and examine elements of the
program that can be potentially enhanced to boost its success. We cluster the
cohort into different buckets based on listenership so as to analyze
listenership patterns for each group that could help boost program success. We
also demonstrate preliminary results on using historical data in a time-series
prediction to identify beneficiary dropouts and enable NGOs in devising timely
interventions to strengthen beneficiary retention. | Machine Learning |
What field is the article from? | Title: Contrastive Multi-Level Graph Neural Networks for Session-based Recommendation
Abstract: Session-based recommendation (SBR) aims to predict the next item at a certain
time point based on anonymous user behavior sequences. Existing methods
typically model session representation based on simple item transition
information. However, since session-based data consists of limited users'
short-term interactions, modeling session representation by capturing fixed
item transition information from a single dimension suffers from data sparsity.
In this paper, we propose a novel contrastive multi-level graph neural networks
(CM-GNN) to better exploit complex and high-order item transition information.
Specifically, CM-GNN applies local-level graph convolutional network (L-GCN)
and global-level network (G-GCN) on the current session and all the sessions
respectively, to effectively capture pairwise relations over all the sessions
by aggregation strategy. Meanwhile, CM-GNN applies hyper-level graph
convolutional network (H-GCN) to capture high-order information among all the
item transitions. CM-GNN further introduces an attention-based fusion module to
learn pairwise relation-based session representation by fusing the item
representations generated by L-GCN and G-GCN. CM-GNN averages the item
representations obtained by H-GCN to obtain high-order relation-based session
representation. Moreover, to convert the high-order item transition information
into the pairwise relation-based session representation, CM-GNN maximizes the
mutual information between the representations derived from the fusion module
and the average pool layer by contrastive learning paradigm. We conduct
extensive experiments on multiple widely used benchmark datasets to validate
the efficacy of the proposed method. The encouraging results demonstrate that
our proposed method outperforms the state-of-the-art SBR techniques. | Information Retrieval |
What field is the article from? | Title: The Case for Universal Basic Computing Power
Abstract: The Universal Basic Computing Power (UBCP) initiative ensures global, free
access to a set amount of computing power specifically for AI research and
development (R&D). This initiative comprises three key elements. First, UBCP
must be cost free, with its usage limited to AI R&D and minimal additional
conditions. Second, UBCP should continually incorporate the state of the art AI
advancements, including efficiently distilled, compressed, and deployed
training data, foundational models, benchmarks, and governance tools. Lastly,
it's essential for UBCP to be universally accessible, ensuring convenience for
all users. We urge major stakeholders in AI development, namely large platforms,
open-source contributors, and policymakers, to prioritize the UBCP initiative.
What field is the article from? | Title: Contractive error feedback for gradient compression
Abstract: On-device memory concerns in distributed deep learning have become severe due
to (i) the growth of model size in multi-GPU training, and (ii) the wide
adoption of deep neural networks for federated learning on IoT devices which
have limited storage. In such settings, communication-efficient optimization
methods are attractive alternatives; however, they still struggle with memory
issues. To tackle these challenges, we propose a communication-efficient
method called contractive error feedback (ConEF). As opposed to SGD with
error-feedback (EFSGD) that inefficiently manages memory, ConEF obtains the
sweet spot of convergence and memory usage, and achieves communication
efficiency by leveraging biased and all-reducable gradient compression. We
empirically validate ConEF on various learning tasks that include image
classification, language modeling, and machine translation and observe that
ConEF saves 80%-90% of the extra memory in EFSGD with almost no loss on
test performance, while also achieving 1.3x - 5x speedup of SGD. Through our
work, we also demonstrate the feasibility and convergence of ConEF to clear up
the theoretical barrier of integrating ConEF to popular memory efficient
frameworks such as ZeRO-3. | Machine Learning |
What field is the article from? | Title: Distributed Learning of Mixtures of Experts
Abstract: In modern machine learning problems we deal with datasets that are either
distributed by nature or potentially large for which distributing the
computations is usually a standard way to proceed, since centralized algorithms
are in general ineffective. We propose a distributed learning approach for
mixtures of experts (MoE) models with an aggregation strategy to construct a
reduction estimator from local estimators fitted parallelly to distributed
subsets of the data. The aggregation is based on an optimal minimization of an
expected transportation divergence between the large MoE composed of local
estimators and the unknown desired MoE model. We show that the provided
reduction estimator is consistent as soon as the local estimators to be
aggregated are consistent, and its construction is performed by a proposed
majorization-minimization (MM) algorithm that is computationally effective. We
study the statistical and numerical properties for the proposed reduction
estimator on experiments that demonstrate its performance compared to namely
the global estimator constructed in a centralized way from the full dataset.
For some situations, the computation time is more than ten times faster, for a
comparable performance. Our source codes are publicly available on Github. | Machine Learning |
What field is the article from? | Title: Challenging Common Assumptions in Multi-task Learning
Abstract: While multi-task learning (MTL) has gained significant attention in recent
years, its underlying mechanisms remain poorly understood. Recent methods did
not yield consistent performance improvements over single task learning (STL)
baselines, underscoring the importance of gaining more profound insights about
challenges specific to MTL. In our study, we challenge common assumptions in
MTL in the context of STL: First, the choice of optimizer has only been mildly
investigated in MTL. We show the pivotal role of common STL tools such as the
Adam optimizer in MTL. We deduce the effectiveness of Adam to its partial
loss-scale invariance. Second, the notion of gradient conflicts has often been
phrased as a specific problem in MTL. We delve into the role of gradient
conflicts in MTL and compare it to STL. For angular gradient alignment we find
no evidence that this is a unique problem in MTL. We emphasize differences in
gradient magnitude as the main distinguishing factor. Lastly, we compare the
transferability of features learned through MTL and STL on common image
corruptions, and find no conclusive evidence that MTL leads to superior
transferability. Overall, we find surprising similarities between STL and MTL
suggesting to consider methods from both fields in a broader context. | Machine Learning |
What field is the article from? | Title: General Phrase Debiaser: Debiasing Masked Language Models at a Multi-Token Level
Abstract: The social biases and unwelcome stereotypes revealed by pretrained language
models are becoming obstacles to their application. Compared to numerous
debiasing methods targeting word level, there has been relatively less
attention on biases present at phrase level, limiting the performance of
debiasing in discipline domains. In this paper, we propose an automatic
multi-token debiasing pipeline called General Phrase Debiaser, which
is capable of mitigating phrase-level biases in masked language models.
Specifically, our method consists of a phrase filter stage that
generates stereotypical phrases from Wikipedia pages, as well as a model
debias stage that can debias models at the multi-token level to tackle bias
challenges on phrases. The latter searches for prompts that trigger model's
bias, and then uses them for debiasing. State-of-the-art results on standard
datasets and metrics show that our approach can significantly reduce gender
biases on both career and multiple disciplines, across models with varying
parameter sizes. | Computational Linguistics |
What field is the article from? | Title: PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers' Workflows
Abstract: Prototyping AI applications is notoriously difficult. While large language
model (LLM) prompting has dramatically lowered the barriers to AI prototyping,
designers are still prototyping AI functionality and UI separately. We
investigate how coupling prompt and UI design affects designers' workflows.
Grounding this research, we developed PromptInfuser, a Figma plugin that
enables users to create semi-functional mockups, by connecting UI elements to
the inputs and outputs of prompts. In a study with 14 designers, we compare
PromptInfuser to designers' current AI-prototyping workflow. PromptInfuser was
perceived to be significantly more useful for communicating product ideas, more
capable of producing prototypes that realistically represent the envisioned
artifact, more efficient for prototyping, and more helpful for anticipating UI
issues and technical constraints. PromptInfuser encouraged iteration over
prompt and UI together, which helped designers identify UI and prompt
incompatibilities and reflect upon their total solution. Together, these
findings inform future systems for prototyping AI applications. | Human-Computer Interaction |
What field is the article from? | Title: Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos
Abstract: The gigapixel scale of whole slide images (WSIs) poses a challenge for
histopathology multi-modal chatbots, requiring a global WSI analysis for
diagnosis, compounding evidence from different WSI patches. Current visual
instruction datasets, generated through large language models, focus on
creating question/answer pairs for individual image patches, which may lack
diagnostic capacity on their own in histopathology, further complicated by the
absence of spatial grounding in histopathology image captions. To bridge this
gap, we introduce Quilt-Instruct, a large-scale dataset of 107,131
histopathology-specific instruction question/answer pairs, that is collected by
leveraging educational histopathology videos from YouTube, which provides
spatial localization of captions by automatically extracting narrators' cursor
movements. In addition, we provide contextual reasoning by extracting diagnosis
and supporting facts from the entire video content to guide the extrapolative
reasoning of GPT-4. Using Quilt-Instruct, we train Quilt-LLaVA, which can
reason beyond the given single image patch, enabling diagnostic reasoning and
the capability of spatial awareness. To evaluate Quilt-LLaVA, we propose a
comprehensive evaluation dataset created from 985 images and 1283
human-generated question-answers. We also thoroughly evaluate Quilt-LLaVA using
public histopathology datasets, where Quilt-LLaVA significantly outperforms
SOTA by over 10% on relative GPT-4 score and 4% and 9% on open and closed set
VQA. Our code, data, and model are publicly available at quilt-llava.github.io. | Computer Vision |
What field is the article from? | Title: An Empirical Study of Benchmarking Chinese Aspect Sentiment Quad Prediction
Abstract: Aspect sentiment quad prediction (ASQP) is a critical subtask of aspect-level
sentiment analysis. Current ASQP datasets are characterized by their small size
and low quadruple density, which hinders technical development. To expand
capacity, we construct two large Chinese ASQP datasets crawled from multiple
online platforms. The datasets hold several significant characteristics: larger
size (each with 10,000+ samples) and rich aspect categories, more words per
sentence, and higher density than existing ASQP datasets. Moreover, we are the
first to evaluate the performance of Generative Pre-trained Transformer (GPT)
series models on ASQP and exhibit potential issues. The experiments with
state-of-the-art ASQP baselines underscore the need to explore additional
techniques to address ASQP, as well as the importance of further investigation
into methods to improve the performance of GPTs. | Computational Linguistics |
What field is the article from? | Title: Alignment is not sufficient to prevent large language models from generating harmful information: A psychoanalytic perspective
Abstract: Large Language Models (LLMs) are central to a multitude of applications but
face significant risks, notably in generating harmful content and
biases. Drawing an analogy to the human psyche's conflict between evolutionary
survival instincts and societal norm adherence elucidated in Freud's
psychoanalysis theory, we argue that LLMs suffer a similar fundamental
conflict, arising between their inherent desire for syntactic and semantic
continuity, established during the pre-training phase, and the post-training
alignment with human values. This conflict renders LLMs vulnerable to
adversarial attacks, wherein intensifying the models' desire for continuity can
circumvent alignment efforts, resulting in the generation of harmful
information. Through a series of experiments, we first validated the existence
of the desire for continuity in LLMs, and further devised a straightforward yet
powerful technique, such as incomplete sentences, negative priming, and
cognitive dissonance scenarios, to demonstrate that even advanced LLMs struggle
to prevent the generation of harmful information. In summary, our study
uncovers the root of LLMs' vulnerabilities to adversarial attacks, hereby
questioning the efficacy of solely relying on sophisticated alignment methods,
and further advocates for a new training idea that integrates modal concepts
alongside traditional amodal concepts, aiming to endow LLMs with a more nuanced
understanding of real-world contexts and ethical considerations. | Computational Linguistics |
What field is the article from? | Title: I-PHYRE: Interactive Physical Reasoning
Abstract: Current evaluation protocols predominantly assess physical reasoning in
stationary scenes, creating a gap in evaluating agents' abilities to interact
with dynamic events. While contemporary methods allow agents to modify initial
scene configurations and observe consequences, they lack the capability to
interact with events in real time. To address this, we introduce I-PHYRE, a
framework that challenges agents to simultaneously exhibit intuitive physical
reasoning, multi-step planning, and in-situ intervention. Here, intuitive
physical reasoning refers to a quick, approximate understanding of physics to
address complex problems; multi-step denotes the need for extensive sequence
planning in I-PHYRE, considering each intervention can significantly alter
subsequent choices; and in-situ implies the necessity for timely object
manipulation within a scene, where minor timing deviations can result in task
failure. We formulate four game splits to scrutinize agents' learning and
generalization of essential principles of interactive physical reasoning,
fostering learning through interaction with representative scenarios. Our
exploration involves three planning strategies and examines several supervised
and reinforcement agents' zero-shot generalization proficiency on I-PHYRE. The
outcomes highlight a notable gap between existing learning algorithms and human
performance, emphasizing the imperative for more research in enhancing agents
with interactive physical reasoning capabilities. The environment and baselines
will be made publicly available. | Artificial Intelligence |
What field is the article from? | Title: Going beyond persistent homology using persistent homology
Abstract: Representational limits of message-passing graph neural networks (MP-GNNs),
e.g., in terms of the Weisfeiler-Leman (WL) test for isomorphism, are well
understood. Augmenting these graph models with topological features via
persistent homology (PH) has gained prominence, but identifying the class of
attributed graphs that PH can recognize remains open. We introduce a novel
concept of color-separating sets to provide a complete resolution to this
important problem. Specifically, we establish the necessary and sufficient
conditions for distinguishing graphs based on the persistence of their
connected components, obtained from filter functions on vertex and edge colors.
Our constructions expose the limits of vertex- and edge-level PH, proving that
neither category subsumes the other. Leveraging these theoretical insights, we
propose RePHINE for learning topological features on graphs. RePHINE
efficiently combines vertex- and edge-level PH, achieving a scheme that is
provably more powerful than both. Integrating RePHINE into MP-GNNs boosts their
expressive power, resulting in gains over standard PH on several benchmarks for
graph classification. | Machine Learning |
What field is the article from? | Title: Evaluating Neighbor Explainability for Graph Neural Networks
Abstract: Explainability in Graph Neural Networks (GNNs) is a new field growing in the
last few years. In this publication we address the problem of determining how
important is each neighbor for the GNN when classifying a node and how to
measure the performance for this specific task. To do this, various known
explainability methods are reformulated to get the neighbor importance and four
new metrics are presented. Our results show that there is almost no difference
between the explanations provided by gradient-based techniques in the GNN
domain. In addition, many explainability techniques failed to identify
important neighbors when GNNs without self-loops are used. | Machine Learning |
What field is the article from? | Title: Grounding Foundation Models through Federated Transfer Learning: A General Framework
Abstract: Foundation Models (FMs) such as GPT-4 encoded with vast knowledge and
powerful emergent abilities have achieved remarkable success in various natural
language processing and computer vision tasks. Grounding FMs by adapting them
to domain-specific tasks or augmenting them with domain-specific knowledge
enables us to exploit the full potential of FMs. However, grounding FMs faces
several challenges, stemming primarily from constrained computing resources,
data privacy, model heterogeneity, and model ownership. Federated Transfer
Learning (FTL), the combination of federated learning and transfer learning,
provides promising solutions to address these challenges. In recent years, the
need for grounding FMs leveraging FTL, coined FTL-FM, has arisen strongly in
both academia and industry. Motivated by the strong growth in FTL-FM research
and the potential impact of FTL-FM on industrial applications, we propose an
FTL-FM framework that formulates problems of grounding FMs in the federated
learning setting, construct a detailed taxonomy based on the FTL-FM framework
to categorize state-of-the-art FTL-FM works, and comprehensively overview
FTL-FM works based on the proposed taxonomy. We also establish correspondences
between FTL-FM and conventional phases of adapting FM so that FM practitioners
can align their research works with FTL-FM. In addition, we overview advanced
efficiency-improving and privacy-preserving techniques because efficiency and
privacy are critical concerns in FTL-FM. Last, we discuss opportunities and
future research directions of FTL-FM. | Machine Learning |
What field is the article from? | Title: Knowledge-Based Support for Adhesive Selection: Will it Stick?
Abstract: As the popularity of adhesive joints in industry increases, so does the need
for tools to support the process of selecting a suitable adhesive. While some
such tools already exist, they are either too limited in scope, or offer too
little flexibility in use. This work presents a more advanced tool, that was
developed together with a team of adhesive experts. We first extract the
experts' knowledge about this domain and formalize it in a Knowledge Base (KB).
The IDP-Z3 reasoning system can then be used to derive the necessary
functionality from this KB. Together with a user-friendly interactive
interface, this creates an easy-to-use tool capable of assisting the adhesive
experts. To validate our approach, we performed user testing in the form of
qualitative interviews. The experts are very positive about the tool, stating
that, among others, it will help save time and find more suitable adhesives.
Under consideration in Theory and Practice of Logic Programming (TPLP). | Artificial Intelligence |
What field is the article from? | Title: Straggler-resilient Federated Learning: Tackling Computation Heterogeneity with Layer-wise Partial Model Training in Mobile Edge Network
Abstract: Federated Learning (FL) enables many resource-limited devices to train a
model collaboratively without data sharing. However, many existing works focus
on model-homogeneous FL, where the global and local models are the same size,
ignoring the inherently heterogeneous computational capabilities of different
devices and restricting resource-constrained devices from contributing to FL.
In this paper, we consider model-heterogeneous FL and propose Federated Partial
Model Training (FedPMT), where devices with smaller computational capabilities
work on partial models (subsets of the global model) and contribute to the
global model. Different from Dropout-based partial model generation, which
removes neurons in hidden layers at random, model training in FedPMT is
achieved from the back-propagation perspective. As such, all devices in FedPMT
prioritize the most crucial parts of the global model. Theoretical analysis
shows that the proposed partial model training design has a similar convergence
rate to the widely adopted Federated Averaging (FedAvg) algorithm,
$\mathcal{O}(1/T)$, with the sub-optimality gap enlarged by a constant factor
related to the model splitting design in FedPMT. Empirical results show that
FedPMT significantly outperforms the existing benchmark FedDrop. Meanwhile,
compared to the popular model-homogeneous benchmark, FedAvg, FedPMT reaches the
learning target in a shorter completion time, thus achieving a better trade-off
between learning accuracy and completion time. | Machine Learning |
What field is the article from? | Title: Inspecting Explainability of Transformer Models with Additional Statistical Information
Abstract: Transformer becomes more popular in the vision domain in recent years so
there is a need for finding an effective way to interpret the Transformer model
by visualizing it. In recent work, Chefer et al. can visualize the Transformer
on vision and multi-modal tasks effectively by combining attention layers to
show the importance of each image patch. However, when applying to other
variants of Transformer such as the Swin Transformer, this method can not focus
on the predicted object. Our method, by considering the statistics of tokens in
layer normalization layers, shows a great ability to interpret the
explainability of Swin Transformer and ViT. | Computer Vision |
What field is the article from? | Title: You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments
Abstract: The versatility of Large Language Models (LLMs) on natural language
understanding tasks has made them popular for research in social sciences. In
particular, to properly understand the properties and innate personas of LLMs,
researchers have performed studies that involve using prompts in the form of
questions that ask LLMs of particular opinions. In this study, we take a
cautionary step back and examine whether the current format of prompting
enables LLMs to provide responses in a consistent and robust manner. We first
construct a dataset that contains 693 questions encompassing 39 different
instruments of persona measurement on 115 persona axes. Additionally, we design
a set of prompts containing minor variations and examine LLM's capabilities to
generate accurate answers, as well as consistency variations to examine their
consistency towards simple perturbations such as switching the option order.
Our experiments on 15 different open-source LLMs reveal that even simple
perturbations are sufficient to significantly downgrade a model's
question-answering ability, and that most LLMs have low negation consistency.
Our results suggest that the currently widespread practice of prompting is
insufficient to accurately capture model perceptions, and we discuss potential
alternatives to improve such issues. | Computational Linguistics |