instruction | input | output
---|---|---|
What field is the article from? | Title: Evaluating Large Language Models for Health-related Queries with Presuppositions
Abstract: As corporations rush to integrate large language models (LLMs) into their
search offerings, it is critical that they provide factually accurate
information that is robust to any presuppositions that a user may express. In
this work, we introduce UPHILL, a dataset consisting of health-related queries
with varying degrees of presuppositions. Using UPHILL, we evaluate the factual
accuracy and consistency of InstructGPT, ChatGPT, and BingChat models. We find
that while model responses rarely disagree with true health claims (posed as
questions), they often fail to challenge false claims: responses from
InstructGPT agree with 32% of the false claims, ChatGPT 26% and BingChat 23%.
As we increase the extent of presupposition in input queries, the responses
from InstructGPT and ChatGPT agree with the claim considerably more often,
regardless of its veracity. Responses from BingChat, which rely on retrieved
webpages, are not as susceptible. Given the moderate factual accuracy and the
inability of models to consistently correct false assumptions, our work calls
for a careful assessment of current LLMs for use in high-stakes scenarios. | Computational Linguistics |
What field is the article from? | Title: Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias
Abstract: Large language models (LLMs) have demonstrated their potential in social
science research by emulating human perceptions and behaviors, a concept
referred to as algorithmic fidelity. This study assesses the algorithmic
fidelity and bias of LLMs by utilizing two nationally representative climate
change surveys. The LLMs were conditioned on demographics and/or psychological
covariates to simulate survey responses. The findings indicate that LLMs can
effectively capture presidential voting behaviors but encounter challenges in
accurately representing global warming perspectives when relevant covariates
are not included. GPT-4 exhibits improved performance when conditioned on both
demographics and covariates. However, disparities emerge in LLM estimations of
the views of certain groups, with LLMs tending to underestimate worry about
global warming among Black Americans. While highlighting the potential of LLMs
to aid social science research, these results underscore the importance of
meticulous conditioning, model selection, survey question format, and bias
assessment when employing LLMs for survey simulation. Further investigation
into prompt engineering and algorithm auditing is essential to harness the
power of LLMs while addressing their inherent limitations. | Artificial Intelligence |
What field is the article from? | Title: On the verification of Embeddings using Hybrid Markov Logic
Abstract: The standard approach to verify representations learned by Deep Neural
Networks is to use them in specific tasks such as classification or regression,
and measure their performance based on accuracy in such tasks. However, in many
cases, we would want to verify more complex properties of a learned
representation. To do this, we propose a framework based on a probabilistic
first-order language, namely, Hybrid Markov Logic Networks (HMLNs) where we
specify properties over embeddings mixed with symbolic domain knowledge. We
present an approach to learn parameters for the properties within this
framework. Further, we develop a verification method to test embeddings in this
framework by encoding this task as a Mixed Integer Linear Program for which we
can leverage existing state-of-the-art solvers. We illustrate verification in
Graph Neural Networks, Deep Knowledge Tracing and Intelligent Tutoring Systems
to demonstrate the generality of our approach. | Machine Learning |
What field is the article from? | Title: Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
Abstract: Contrastive Language-Image Pre-training (CLIP) plays an essential role in
extracting valuable content information from images across diverse tasks. It
aligns textual and visual modalities to comprehend the entire image, including
all the details, even those irrelevant to specific tasks. However, for a finer
understanding and controlled editing of images, it becomes crucial to focus on
specific regions of interest, which can be indicated as points, masks, or boxes
by humans or perception models. To fulfill the requirements, we introduce
Alpha-CLIP, an enhanced version of CLIP with an auxiliary alpha channel to
suggest attentive regions, fine-tuned on millions of constructed RGBA
region-text pairs. Alpha-CLIP not only preserves the visual recognition ability
of CLIP but also enables precise control over the emphasis of image contents.
It demonstrates effectiveness in various tasks, including but not limited to
open-world recognition, multimodal large language models, and conditional 2D /
3D generation. It has a strong potential to serve as a versatile tool for
image-related tasks. | Computer Vision |
What field is the article from? | Title: Heuristics-Driven Link-of-Analogy Prompting: Enhancing Large Language Models for Document-Level Event Argument Extraction
Abstract: In this study, we investigate in-context learning (ICL) in document-level
event argument extraction (EAE). The paper identifies key challenges in this
problem, including example selection, context length limitation, abundance of
event types, and the limitation of Chain-of-Thought (CoT) prompting in
non-reasoning tasks. To address these challenges, we introduce the
Heuristic-Driven Link-of-Analogy (HD-LoA) prompting method. Specifically, we
hypothesize and validate that LLMs learn task-specific heuristics from
demonstrations via ICL. Building upon this hypothesis, we introduce an explicit
heuristic-driven demonstration construction approach, which transforms the
haphazard example selection process into a methodical procedure that emphasizes
task heuristics. Additionally, inspired by the analogical reasoning of humans,
we propose the link-of-analogy prompting, which enables LLMs to process new
situations by drawing analogies to known situations, enhancing their
adaptability. Extensive experiments show that our method outperforms the
existing prompting methods and few-shot supervised learning methods, exhibiting
F1 score improvements of 4.53% and 9.38% on the document-level EAE dataset.
Furthermore, when applied to sentiment analysis and natural language inference
tasks, the HD-LoA prompting achieves accuracy gains of 2.87% and 2.63%,
indicating its effectiveness across different tasks. | Computational Linguistics |
What field is the article from? | Title: Combining Past, Present and Future: A Self-Supervised Approach for Class Incremental Learning
Abstract: Class Incremental Learning (CIL) aims to handle the scenario where data of
novel classes occur continuously and sequentially. The model should recognize
the sequential novel classes while alleviating the catastrophic forgetting. In
the self-supervised manner, it becomes more challenging to avoid the conflict
between the feature embedding spaces of novel classes and old ones without any
class labels. To address the problem, we propose a self-supervised CIL
framework CPPF, meaning Combining Past, Present and Future. In detail, CPPF
consists of a prototype clustering module (PC), an embedding space reserving
module (ESR) and a multi-teacher distillation module (MTD). 1) The PC and the
ESR modules reserve embedding space for subsequent phases at the prototype
level and the feature level respectively to prepare for knowledge learned in
the future. 2) The MTD module maintains the representations of the current
phase without the interference of past knowledge. One of the teacher networks
retains the representations of the past phases, and the other teacher network
distills relation information of the current phase to the student network.
Extensive experiments on CIFAR100 and ImageNet100 datasets demonstrate that our
proposed method boosts the performance of self-supervised class incremental
learning. We will release code in the near future. | Computer Vision |
What field is the article from? | Title: The voraus-AD Dataset for Anomaly Detection in Robot Applications
Abstract: During the operation of industrial robots, unusual events may endanger the
safety of humans and the quality of production. When collecting data to detect
such cases, it is not ensured that data from all potentially occurring errors
is included as unforeseeable events may happen over time. Therefore, anomaly
detection (AD) delivers a practical solution, using only normal data to learn
to detect unusual events. We introduce a dataset that allows training and
benchmarking of anomaly detection methods for robotic applications based on
machine data; the dataset will be made publicly available to the research community.
As a typical robot task the dataset includes a pick-and-place application which
involves movement, actions of the end effector and interactions with the
objects of the environment. Since several of the contained anomalies are not
task-specific but general, evaluations on our dataset are transferable to other
robotics applications as well. Additionally, we present MVT-Flow (multivariate
time-series flow) as a new baseline method for anomaly detection: It relies on
deep-learning-based density estimation with normalizing flows, tailored to the
data domain by taking its structure into account for the architecture. Our
evaluation shows that MVT-Flow outperforms baselines from previous work by a
large margin of 6.2% in area under the ROC curve. | Robotics |
What field is the article from? | Title: StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter
Abstract: Text-to-video (T2V) models have shown remarkable capabilities in generating
diverse videos. However, they struggle to produce user-desired stylized videos
due to (i) text's inherent clumsiness in expressing specific styles and (ii)
the generally degraded style fidelity. To address these challenges, we
introduce StyleCrafter, a generic method that enhances pre-trained T2V models
with a style control adapter, enabling video generation in any style by
providing a reference image. Considering the scarcity of stylized video
datasets, we propose to first train a style control adapter using style-rich
image datasets, then transfer the learned stylization ability to video
generation through a tailor-made finetuning paradigm. To promote content-style
disentanglement, we remove style descriptions from the text prompt and extract
style information solely from the reference image using a decoupling learning
strategy. Additionally, we design a scale-adaptive fusion module to balance the
influences of text-based content features and image-based style features, which
helps generalization across various text and style combinations. StyleCrafter
efficiently generates high-quality stylized videos that align with the content
of the texts and resemble the style of the reference images. Experiments
demonstrate that our approach is more flexible and efficient than existing
competitors. | Computer Vision |
What field is the article from? | Title: TRIALSCOPE: A Unifying Causal Framework for Scaling Real-World Evidence Generation with Biomedical Language Models
Abstract: The rapid digitization of real-world data offers an unprecedented opportunity
for optimizing healthcare delivery and accelerating biomedical discovery. In
practice, however, such data is most abundantly available in unstructured
forms, such as clinical notes in electronic medical records (EMRs), and it is
generally plagued by confounders. In this paper, we present TRIALSCOPE, a
unifying framework for distilling real-world evidence from population-level
observational data. TRIALSCOPE leverages biomedical language models to
structure clinical text at scale, employs advanced probabilistic modeling for
denoising and imputation, and incorporates state-of-the-art causal inference
techniques to combat common confounders. Using clinical trial specifications as
a generic representation, TRIALSCOPE provides a turn-key solution to generate and
reason with clinical hypotheses using observational data. In extensive
experiments and analyses on a large-scale real-world dataset with over one
million cancer patients from a large US healthcare network, we show that
TRIALSCOPE can produce high-quality structuring of real-world data and
generates comparable results to marquee cancer trials. In addition to
facilitating in-silico clinical trial design and optimization, TRIALSCOPE may
be used to empower synthetic controls, pragmatic trials, post-market
surveillance, as well as support fine-grained patient-like-me reasoning in
precision diagnosis and treatment. | Machine Learning |
What field is the article from? | Title: MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition
Abstract: With the advent of deep learning, progressively larger neural networks have
been designed to solve complex tasks. We take advantage of these capacity-rich
models to lower the cost of inference by exploiting computation in
superposition. To reduce the computational burden per input, we propose
Multiple-Input-Multiple-Output Neural Networks (MIMONets) capable of handling
many inputs at once. MIMONets augment various deep neural network architectures
with variable binding mechanisms to represent an arbitrary number of inputs in
a compositional data structure via fixed-width distributed representations.
Accordingly, MIMONets adapt nonlinear neural transformations to process the
data structure holistically, leading to a speedup nearly proportional to the
number of superposed input items in the data structure. After processing in
superposition, an unbinding mechanism recovers each transformed input of
interest. MIMONets also provide a dynamic trade-off between accuracy and
throughput by an instantaneous on-demand switching between a set of
accuracy-throughput operating points, yet within a single set of fixed
parameters. We apply the concept of MIMONets to both CNN and Transformer
architectures resulting in MIMOConv and MIMOFormer, respectively. Empirical
evaluations show that MIMOConv achieves about 2-4 x speedup at an accuracy
delta within [+0.68, -3.18]% compared to WideResNet CNNs on CIFAR10 and
CIFAR100. Similarly, MIMOFormer can handle 2-4 inputs at once while maintaining
a high average accuracy within a [-1.07, -3.43]% delta on the Long Range Arena
benchmark. Finally, we provide mathematical bounds on the interference between
superposition channels in MIMOFormer. Our code is available at
https://github.com/IBM/multiple-input-multiple-output-nets. | Machine Learning |
What field is the article from? | Title: Thermal Face Image Classification using Deep Learning Techniques
Abstract: Thermal images have various applications in security, medical and industrial
domains. This paper proposes a practical deep-learning approach for thermal
image classification. Accurate and efficient classification of thermal images
poses a significant challenge across various fields due to the complex image
content and the scarcity of annotated datasets. This work uses a convolutional
neural network (CNN) architecture, specifically ResNet-50 and VGGNet-19, to
extract features from thermal images. This work also applied a Kalman filter to
thermal input images for image denoising. The experimental results demonstrate
the effectiveness of the proposed approach in terms of accuracy and efficiency. | Computer Vision |
What field is the article from? | Title: Integration and Implementation Strategies for AI Algorithm Deployment with Smart Routing Rules and Workflow Management
Abstract: This paper reviews the challenges hindering the widespread adoption of
artificial intelligence (AI) solutions in the healthcare industry, focusing on
computer vision applications for medical imaging, and how interoperability and
enterprise-grade scalability can be used to address these challenges. The
complex nature of healthcare workflows, intricacies in managing large and
secure medical imaging data, and the absence of standardized frameworks for AI
development pose significant barriers and require a new paradigm to address
them.
The role of interoperability is examined in this paper as a crucial factor in
connecting disparate applications within healthcare workflows. Standards such
as DICOM, Health Level 7 (HL7), and Integrating the Healthcare Enterprise (IHE)
are highlighted as foundational for common imaging workflows. A specific focus
is placed on the role of DICOM gateways, with Smart Routing Rules and Workflow
Management leading transformational efforts in this area.
To drive enterprise scalability, new tools are needed. Project MONAI,
established in 2019, is introduced as an initiative aiming to redefine the
development of medical AI applications. The MONAI Deploy App SDK, a component
of Project MONAI, is identified as a key tool in simplifying the packaging and
deployment process, enabling repeatable, scalable, and standardized deployment
patterns for AI applications.
The abstract underscores the potential impact of successful AI adoption in
healthcare, offering physicians both life-saving and time-saving insights and
driving efficiencies in radiology department workflows. The collaborative
efforts between academia and industry are emphasized as essential for
advancing the adoption of healthcare AI solutions. | Artificial Intelligence |
What field is the article from? | Title: Extracting Self-Consistent Causal Insights from Users Feedback with LLMs and In-context Learning
Abstract: Microsoft Windows Feedback Hub is designed to receive customer feedback on a
wide variety of subjects including critical topics such as power and battery.
Feedback is one of the most effective ways to have a grasp of users' experience
with Windows and its ecosystem. However, the sheer volume of feedback received
by Feedback Hub makes it immensely challenging to diagnose the actual cause of
reported issues. To better understand and triage issues, we leverage Double
Machine Learning (DML) to associate users' feedback with telemetry signals. One
of the main challenges we face in the DML pipeline is the necessity of domain
knowledge for model design (e.g., causal graph), which sometimes is either not
available or hard to obtain. In this work, we take advantage of reasoning
capabilities in Large Language Models (LLMs) to generate a prior model that, to
some extent, compensates for the lack of domain knowledge and could be
used as a heuristic for measuring feedback informativeness. Our LLM-based
approach is able to extract previously known issues, uncover new bugs, and
identify sequences of events that lead to a bug, while minimizing out-of-domain
outputs. | Artificial Intelligence |
What field is the article from? | Title: The Development of LLMs for Embodied Navigation
Abstract: In recent years, the rapid advancement of Large Language Models (LLMs) such
as the Generative Pre-trained Transformer (GPT) has attracted increasing
attention due to their potential in a variety of practical applications. The
application of LLMs with Embodied Intelligence has emerged as a significant
area of focus. Among the myriad applications of LLMs, navigation tasks are
particularly noteworthy because they demand a deep understanding of the
environment and quick, accurate decision-making. LLMs can augment embodied
intelligence systems with sophisticated environmental perception and
decision-making support, leveraging their robust language and image-processing
capabilities. This article offers an exhaustive summary of the symbiosis
between LLMs and embodied intelligence with a focus on navigation. It reviews
state-of-the-art models, research methodologies, and assesses the advantages
and disadvantages of existing embodied navigation models and datasets. Finally,
the article elucidates the role of LLMs in embodied intelligence, based on
current research, and forecasts future directions in the field. A comprehensive
list of studies in this survey is available at
https://github.com/Rongtao-Xu/Awesome-LLM-EN | Artificial Intelligence |
What field is the article from? | Title: RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge
Abstract: LLMs and AI chatbots have improved people's efficiency in various fields.
However, the necessary knowledge for answering the question may be beyond the
models' knowledge boundaries. To mitigate this issue, many researchers try to
introduce external knowledge, such as knowledge graphs and Internet contents,
into LLMs for up-to-date information. However, the external information from
the Internet may include counterfactual information that will confuse the model
and lead to an incorrect response. Thus there is a pressing need for LLMs to
possess the ability to distinguish reliable information from external
knowledge. Therefore, to evaluate the ability of LLMs to discern the
reliability of external knowledge, we create a benchmark from existing
knowledge bases. Our benchmark consists of two tasks, Question Answering and
Text Generation, and for each task, we provide models with a context containing
counterfactual information. Evaluation results show that existing LLMs are
susceptible to interference from unreliable external knowledge with
counterfactual information, and simple intervention methods make limited
contributions to the alleviation of this issue. | Computational Linguistics |
What field is the article from? | Title: EduGym: An Environment Suite for Reinforcement Learning Education
Abstract: Due to the empirical success of reinforcement learning, an increasing number
of students study the subject. However, from our practical teaching experience,
we see students entering the field (bachelor, master and early PhD) often
struggle. On the one hand, textbooks and (online) lectures provide the
fundamentals, but students find it hard to translate between equations and
code. On the other hand, public codebases do provide practical examples, but
the implemented algorithms tend to be complex, and the underlying test
environments contain multiple reinforcement learning challenges at once.
Although this is realistic from a research perspective, it often hinders
educational conceptual understanding. To solve this issue, we introduce EduGym,
a set of educational reinforcement learning environments and associated
interactive notebooks tailored for education. Each EduGym environment is
specifically designed to illustrate a certain aspect/challenge of reinforcement
learning (e.g., exploration, partial observability, stochasticity, etc.), while
the associated interactive notebook explains the challenge and its possible
solution approaches, connecting equations and code in a single document. An
evaluation among RL students and researchers shows 86% of them think EduGym is
a useful tool for reinforcement learning education. All notebooks are available
from https://sites.google.com/view/edu-gym/home, while the full software
package can be installed from https://github.com/RLG-Leiden/edugym. | Machine Learning |
What field is the article from? | Title: The Ethics of Automating Legal Actors
Abstract: The introduction of large public legal datasets has brought about a
renaissance in legal NLP. Many of these datasets are comprised of legal
judgements - the product of judges deciding cases. This fact, together with the
way machine learning works, means that several legal NLP models are models of
judges. While some have argued for the automation of judges, in this position
piece, we argue that automating the role of the judge raises difficult ethical
challenges, in particular for common law legal systems. Our argument follows
from the social role of the judge in actively shaping the law, rather than
merely applying it. Since current NLP models come nowhere close to having the
facilities necessary for this task, they should not be used to automate judges.
Furthermore, even in the case the models could achieve human-level
capabilities, there would still be remaining ethical concerns inherent in the
automation of the legal process. | Computational Linguistics |
What field is the article from? | Title: War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars
Abstract: Can we avoid wars at the crossroads of history? This question has been
pursued by individuals, scholars, policymakers, and organizations throughout
human history. In this research, we attempt to answer the question based on the
recent advances of Artificial Intelligence (AI) and Large Language Models
(LLMs). We propose \textbf{WarAgent}, an LLM-powered multi-agent AI system, to
simulate the participating countries, their decisions, and the consequences, in
historical international conflicts, including World War I (WWI), World War II
(WWII), and the Warring States Period (WSP) in Ancient China. By
evaluating the simulation effectiveness, we examine the advancements and
limitations of cutting-edge AI systems' abilities in studying complex
collective human behaviors such as international conflicts under diverse
settings. In these simulations, the emergent interactions among agents also
offer a novel perspective for examining the triggers and conditions that lead
to war. Our findings offer data-driven and AI-augmented insights that can
redefine how we approach conflict resolution and peacekeeping strategies. The
implications stretch beyond historical analysis, offering a blueprint for using
AI to understand human history and possibly prevent future international
conflicts. Code and data are available at
\url{https://github.com/agiresearch/WarAgent}. | Artificial Intelligence |
What field is the article from? | Title: Sleep Deprivation in the Forward-Forward Algorithm
Abstract: This paper aims to explore the separation of the two forward passes in the
Forward-Forward algorithm from a biological perspective in the context of
sleep. We show the size of the gap between the sleep and awake phase influences
the learning capabilities of the algorithm and highlight the importance of
negative data in diminishing the devastating effects of sleep deprivation. | Artificial Intelligence |
What field is the article from? | Title: Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation
Abstract: Model hallucination has been a crucial interest of research in Natural
Language Generation (NLG). In this work, we propose sequence-level certainty as
a common theme over hallucination in NLG, and explore the correlation between
sequence-level certainty and the level of hallucination in model responses. We
categorize sequence-level certainty into two aspects: probabilistic certainty
and semantic certainty, and reveal through experiments on Knowledge-Grounded
Dialogue Generation (KGDG) task that both a higher level of probabilistic
certainty and a higher level of semantic certainty in model responses are
significantly correlated with a lower level of hallucination. What's more, we
provide theoretical proof and analysis to show that semantic certainty is a
good estimator of probabilistic certainty, and therefore has the potential as
an alternative to probability-based certainty estimation in black-box
scenarios. Based on the observation on the relationship between certainty and
hallucination, we further propose Certainty-based Response Ranking (CRR), a
decoding-time method for mitigating hallucination in NLG. Based on our
categorization of sequence-level certainty, we propose 2 types of CRR approach:
Probabilistic CRR (P-CRR) and Semantic CRR (S-CRR). P-CRR ranks individually
sampled model responses by the arithmetic mean log-probability of the
entire sequence. S-CRR approaches certainty estimation from meaning-space, and
ranks a number of model response candidates based on their semantic certainty
level, which is estimated by the entailment-based Agreement Score (AS). Through
extensive experiments across 3 KGDG datasets, 3 decoding methods, and on 4
different models, we validate the effectiveness of our 2 proposed CRR methods
to reduce model hallucination. | Computational Linguistics |
What field is the article from? | Title: Towards Explainable Strategy Templates using NLP Transformers
Abstract: This paper bridges the gap between mathematical heuristic strategies learned
from Deep Reinforcement Learning (DRL) in automated agent negotiation, and
comprehensible, natural language explanations. Our aim is to make these
strategies more accessible to non-experts. By leveraging traditional Natural
Language Processing (NLP) techniques and Large Language Models (LLMs) equipped
with Transformers, we outline how components of DRL strategies, composed within
strategy templates, can be transformed into user-friendly, human-like
English narratives. To achieve this, we present a top-level algorithm that
involves parsing mathematical expressions of strategy templates, semantically
interpreting variables and structures, generating rule-based primary
explanations, and utilizing a Generative Pre-trained Transformer (GPT) model to
refine and contextualize these explanations. Subsequent customization for
varied audiences and meticulous validation processes in an example illustrate
the applicability and potential of this approach. | Artificial Intelligence |
What field is the article from? | Title: Beyond Size: How Gradients Shape Pruning Decisions in Large Language Models
Abstract: Large Language Models (LLMs) with a billion or more parameters are prime
targets for network pruning, which aims to reduce a portion of the network
weights without compromising performance. Prior approaches such as Weights
Magnitude, SparseGPT, and Wanda, either concentrated solely on weights or
integrated weights with activations for sparsity. However, they overlooked the
informative gradients derived from pretrained large language models. In this
paper, we present a novel sparsity-centric pruning method for pretrained LLMs,
termed Gradient-based Language Model Pruner (GBLM-Pruner). GBLM-Pruner
leverages the first-order term of the Taylor expansion, operating in a
training-free manner by harnessing properly normalized gradients from a few
calibration samples to determine the importance pruning score, and
substantially outperforms competitive counterparts like SparseGPT and Wanda in
multiple benchmarks. Intriguingly, after incorporating gradients, the
unstructured pruning method tends to reveal some structural patterns
post-pruning, which mirrors the geometric interdependence inherent in the LLMs'
parameter structure. Additionally, GBLM-Pruner functions without any subsequent
retraining or weight updates, maintaining the same simplicity as its counterparts.
Extensive evaluations on LLaMA-1 and LLaMA-2 across various language benchmarks
and perplexity show that GBLM-Pruner surpasses magnitude pruning, Wanda
(weights+activations) and SparseGPT (weights+activations+weight update) by
significant margins. Our code and models are available at
https://github.com/RocktimJyotiDas/GBLM-Pruner. | Computational Linguistics |
What field is the article from? | Title: A Cross Attention Approach to Diagnostic Explainability using Clinical Practice Guidelines for Depression
Abstract: The lack of explainability using relevant clinical knowledge hinders the
adoption of Artificial Intelligence-powered analysis of unstructured clinical
dialogue. A wealth of relevant, untapped Mental Health (MH) data is available
in online communities, providing the opportunity to address the explainability
problem with substantial potential impact as a screening tool for both online
and offline applications. We develop a method to enhance attention in popular
transformer models and generate clinician-understandable explanations for
classification by incorporating external clinical knowledge. Inspired by how
clinicians rely on their expertise when interacting with patients, we leverage
relevant clinical knowledge to model patient inputs, providing meaningful
explanations for classification. This will save manual review time and engender
trust. We develop such a system in the context of MH using clinical practice
guidelines (CPG) for diagnosing depression, a mental health disorder of global
concern. We propose an application-specific language model called ProcesS
knowledge-infused cross ATtention (PSAT), which incorporates CPGs when
computing attention. Through rigorous evaluation on three expert-curated
datasets related to depression, we demonstrate application-relevant
explainability of PSAT. PSAT also surpasses the performance of nine baseline
models and can provide explanations where other baselines fall short. We
transform a CPG resource focused on depression, such as the Patient Health
Questionnaire (e.g. PHQ-9) and related questions, into a machine-readable
ontology using SNOMED-CT. With this resource, PSAT enhances the ability of
models like GPT-3.5 to generate application-relevant explanations. | Artificial Intelligence |
What field is the article from? | Title: Deep Unsupervised Domain Adaptation for Time Series Classification: a Benchmark
Abstract: Unsupervised Domain Adaptation (UDA) aims to harness labeled source data to
train models for unlabeled target data. Despite extensive research in domains
like computer vision and natural language processing, UDA remains underexplored
for time series data, which has widespread real-world applications ranging from
medicine and manufacturing to earth observation and human activity recognition.
Our paper addresses this gap by introducing a comprehensive benchmark for
evaluating UDA techniques for time series classification, with a focus on deep
learning methods. We provide seven new benchmark datasets covering various
domain shifts and temporal dynamics, facilitating fair and standardized UDA
method assessments with state-of-the-art neural network backbones (e.g.
Inception) for time series data. This benchmark offers insights into the
strengths and limitations of the evaluated approaches while preserving the
unsupervised nature of domain adaptation, making it directly applicable to
practical problems. Our paper serves as a vital resource for researchers and
practitioners, advancing domain adaptation solutions for time series data and
fostering innovation in this critical field. The implementation code of this
benchmark is available at https://github.com/EricssonResearch/UDA-4-TSC. | Machine Learning |
What field is the article from? | Title: DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding
Abstract: This work presents DocPedia, a novel large multimodal model (LMM) for
versatile OCR-free document understanding, capable of parsing images up to
2,560$\times$2,560 resolution. Unlike existing works that either struggle with
high-resolution documents or give up the large language model, thus leaving the
vision or language ability constrained, our DocPedia directly processes visual input in
the frequency domain rather than the pixel space. The unique characteristic
enables DocPedia to capture a greater amount of visual and textual information
using a limited number of visual tokens. To consistently enhance both
perception and comprehension abilities of our model, we develop a dual-stage
training strategy and enrich instructions/annotations of all training tasks
covering multiple document types. Extensive quantitative and qualitative
experiments conducted on various publicly available benchmarks confirm the
mutual benefits of jointly learning perception and comprehension tasks. The
results provide further evidence of the effectiveness and superior performance
of our DocPedia over other methods. | Computer Vision |
What field is the article from? | Title: Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects
Abstract: Policy learning in robot-assisted surgery (RAS) lacks data-efficient and
versatile methods that exhibit the desired motion quality for delicate surgical
interventions. To this end, we introduce Movement Primitive Diffusion (MPD), a
novel method for imitation learning (IL) in RAS that focuses on gentle
manipulation of deformable objects. The approach combines the versatility of
diffusion-based imitation learning (DIL) with the high-quality motion
generation capabilities of Probabilistic Dynamic Movement Primitives (ProDMPs).
This combination enables MPD to achieve gentle manipulation of deformable
objects, while maintaining data efficiency critical for RAS applications where
demonstration data is scarce. We evaluate MPD across various simulated tasks
and a real world robotic setup on both state and image observations. MPD
outperforms state-of-the-art DIL methods in success rate, motion quality, and
data efficiency. | Robotics |
What field is the article from? | Title: Comparing Generative Chatbots Based on Process Requirements
Abstract: Business processes are commonly represented by modelling languages, such as
Event-driven Process Chain (EPC), Yet Another Workflow Language (YAWL), and the
most popular standard notation for modelling business processes, the Business
Process Model and Notation (BPMN). Most recently, chatbots, programs that allow
users to interact with a machine using natural language, have been increasingly
used for business process execution support. A recent category of chatbots
worth mentioning is generative-based chatbots, powered by Large Language Models
(LLMs) such as OpenAI's Generative Pre-Trained Transformer (GPT) model and
Google's Pathways Language Model (PaLM), which are trained on billions of
parameters and support conversational intelligence. However, it is not clear
whether generative-based chatbots are able to understand and meet the
requirements of constructs such as those provided by BPMN for process execution
support. This paper presents a case study to compare the performance of
prominent generative models, GPT and PaLM, in the context of process execution
support. The research sheds light on the challenging problem of using
conversational approaches supported by generative chatbots as a means to
understand process-aware modelling notations and support users to execute their
tasks. | Computational Linguistics |
What field is the article from? | Title: SurreyAI 2023 Submission for the Quality Estimation Shared Task
Abstract: Quality Estimation (QE) systems are important in situations where it is
necessary to assess the quality of translations, but there is no reference
available. This paper describes the approach adopted by the SurreyAI team for
addressing the Sentence-Level Direct Assessment shared task in WMT23. The
proposed approach builds upon the TransQuest framework, exploring various
autoencoder pre-trained language models within the MonoTransQuest architecture
using single and ensemble settings. The autoencoder pre-trained language models
employed in the proposed systems are XLMV, InfoXLM-large, and XLMR-large. The
evaluation utilizes Spearman and Pearson correlation coefficients, assessing
the relationship between machine-predicted quality scores and human judgments
for 5 language pairs (English-Gujarati, English-Hindi, English-Marathi,
English-Tamil and English-Telugu). The MonoTQ-InfoXLM-large approach emerges as
a robust strategy, surpassing all other individual models proposed in this
study by significantly improving over the baseline for the majority of the
language pairs. | Computational Linguistics |
What field is the article from? | Title: AI Competitions and Benchmarks: towards impactful challenges with post-challenge papers, benchmarks and other dissemination actions
Abstract: Organising an AI challenge does not end with the final event. The
long-lasting impact also needs to be organised. This chapter covers the various
activities after the challenge is formally finished. The target audience of
different post-challenge activities is identified. The various outputs of the
challenge are listed with the means to collect them. The main part of the
chapter is a template for a typical post-challenge paper, including possible
graphs as well as advice on how to turn the challenge into a long-lasting
benchmark. | Machine Learning |
What field is the article from? | Title: Are we going MAD? Benchmarking Multi-Agent Debate between Language Models for Medical Q&A
Abstract: Recent advancements in large language models (LLMs) underscore their
potential for responding to medical inquiries. However, ensuring that
generative agents provide accurate and reliable answers remains an ongoing
challenge. In this context, multi-agent debate (MAD) has emerged as a prominent
strategy for enhancing the truthfulness of LLMs. In this work, we provide a
comprehensive benchmark of MAD strategies for medical Q&A, along with
open-source implementations. We explore the effective utilization of various
strategies including the trade-offs between cost, time, and accuracy. We build
upon these insights to provide a novel debate-prompting strategy based on agent
agreement that outperforms previously published strategies on medical Q&A
tasks. | Computational Linguistics |
What field is the article from? | Title: A Bag of Receptive Fields for Time Series Extrinsic Predictions
Abstract: High-dimensional time series data poses challenges due to its dynamic nature,
varying lengths, and presence of missing values. This kind of data requires
extensive preprocessing, limiting the applicability of existing Time Series
Classification and Time Series Extrinsic Regression techniques. For this
reason, we propose BORF, a Bag-Of-Receptive-Fields model, which incorporates
notions from time series convolution and 1D-SAX to handle univariate and
multivariate time series with varying lengths and missing values. We evaluate
BORF on Time Series Classification and Time Series Extrinsic Regression tasks
using the full UEA and UCR repositories, demonstrating its competitive
performance against state-of-the-art methods. Finally, we outline how this
representation can naturally provide saliency and feature-based explanations. | Machine Learning |
What field is the article from? | Title: Unmasking Bias and Inequities: A Systematic Review of Bias Detection and Mitigation in Healthcare Artificial Intelligence Using Electronic Health Records
Abstract: Objectives: Artificial intelligence (AI) applications utilizing electronic
health records (EHRs) have gained popularity, but they also introduce various
types of bias. This study aims to systematically review the literature that
addresses bias in AI research utilizing EHR data. Methods: A systematic review
was conducted following the Preferred Reporting Items for Systematic Reviews
and Meta-analyses (PRISMA) guideline. We retrieved articles published between
January 1, 2010, and October 31, 2022, from PubMed, Web of Science, and the
Institute of Electrical and Electronics Engineers. We defined six major types
of bias and summarized the existing approaches in bias handling. Results: Out
of the 252 retrieved articles, 20 met the inclusion criteria for the final
review. Five of the six bias types were covered in this review: eight studies
analyzed selection bias; six on implicit bias; five on confounding bias; four
on measurement bias; two on algorithmic bias. For bias handling approaches, ten
studies identified bias during model development, while seventeen presented
methods to mitigate the bias. Discussion: Bias may infiltrate the AI
application development process at various stages. Although this review
discusses methods for addressing bias at different development stages, there is
room for implementing additional effective approaches. Conclusion: Despite
growing attention to bias in healthcare AI, research using EHR data on this
topic is still limited. Detecting and mitigating AI bias with EHR data
continues to pose challenges. Further research is needed to establish a
standardized method that is generalizable and interpretable to detect, mitigate
and evaluate bias in medical AI. | Artificial Intelligence |
What field is the article from? | Title: Weakly Supervised Semantic Parsing with Execution-based Spurious Program Filtering
Abstract: The problem of spurious programs is a longstanding challenge when training a
semantic parser from weak supervision. To eliminate such programs that have
wrong semantics but correct denotation, existing methods focus on exploiting
similarities between examples based on domain-specific knowledge. In this
paper, we propose a domain-agnostic filtering mechanism based on program
execution results. Specifically, for each program obtained through the search
process, we first construct a representation that captures the program's
semantics as execution results under various inputs. Then, we run a majority
vote on these representations to identify and filter out programs with
significantly different semantics from the other programs. In particular, our
method is orthogonal to the program search process so that it can easily
augment any of the existing weakly supervised semantic parsing frameworks.
Empirical evaluations on the Natural Language Visual Reasoning and
WikiTableQuestions demonstrate that applying our method to the existing
semantic parsers induces significantly improved performances. | Computational Linguistics |
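The filtering idea in this abstract — represent each candidate program by its execution results on a set of inputs, then majority-vote over those representations — can be sketched in a few lines. This is a toy version; `filter_spurious`, the integer inputs, and the lambda "programs" are illustrative stand-ins for the paper's actual programs and executor:

```python
from collections import Counter

def filter_spurious(programs, inputs, execute):
    """Keep only programs whose execution-result signature matches
    the majority signature across all candidates."""
    sigs = {p: tuple(execute(p, x) for x in inputs) for p in programs}
    majority, _ = Counter(sigs.values()).most_common(1)[0]
    return [p for p in programs if sigs[p] == majority]

# Toy candidates: two compute x + 1 (same semantics, different form),
# one spurious program computes x * 2.
progs = [lambda x: x + 1, lambda x: 1 + x, lambda x: x * 2]
kept = filter_spurious(progs, inputs=[0, 1, 2], execute=lambda p, x: p(x))
```

Because the vote is over execution results rather than program text, the two syntactically different but semantically equivalent candidates agree, and the spurious one is filtered.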
What field is the article from? | Title: A Meta-Level Learning Algorithm for Sequential Hyper-Parameter Space Reduction in AutoML
Abstract: AutoML platforms have numerous options for the algorithms to try for each
step of the analysis, i.e., different possible algorithms for imputation,
transformations, feature selection, and modelling. Finding the optimal
combination of algorithms and hyper-parameter values is computationally
expensive, as the number of combinations to explore leads to an exponential
explosion of the space. In this paper, we present the Sequential
Hyper-parameter Space Reduction (SHSR) algorithm that reduces the space for an
AutoML tool with negligible drop in its predictive performance. SHSR is a
meta-level learning algorithm that analyzes past runs of an AutoML tool on
several datasets and learns which hyper-parameter values to filter out from
consideration on a new dataset to analyze. SHSR is evaluated on 284
classification and 375 regression problems, showing an approximate 30%
reduction in execution time with a performance drop of less than 0.1%. | Machine Learning |
What field is the article from? | Title: OASIS: Offsetting Active Reconstruction Attacks in Federated Learning
Abstract: Federated Learning (FL) has garnered significant attention for its potential
to protect user privacy while enhancing model training efficiency. However,
recent research has demonstrated that FL protocols can be easily compromised by
active reconstruction attacks executed by dishonest servers. These attacks
involve the malicious modification of global model parameters, allowing the
server to obtain a verbatim copy of users' private data by inverting their
gradient updates. Tackling this class of attack remains a crucial challenge due
to the strong threat model. In this paper, we propose OASIS, a defense
mechanism based on image augmentation that effectively counteracts active
reconstruction attacks while preserving model performance. We first uncover the
core principle of gradient inversion that enables these attacks and
theoretically identify the main conditions by which the defense can be robust
regardless of the attack strategies. We then construct OASIS with image
augmentation showing that it can undermine the attack principle. Comprehensive
evaluations demonstrate the efficacy of OASIS highlighting its feasibility as a
solution. | Cryptography and Security |
What field is the article from? | Title: Difference of Probability and Information Entropy for Skills Classification and Prediction in Student Learning
Abstract: The probability of an event is in the range of [0, 1]. In a sample space S,
the value of the probability determines whether an outcome is true or false. The
probability Pr(A) of an event that will never occur is 0, and the probability
Pr(B) of an event that will certainly occur is 1; both outcomes are thus
certain. Furthermore, the sum of probabilities Pr(E1) + Pr(E2) + ... +
Pr(En) of a finite set of events in a given sample space S is 1. Conversely, the
difference between the probabilities of two events that will certainly occur is 0.
Firstly, this paper discusses Bayes' theorem, then complement of probability
and the difference of probability for occurrences of learning-events, before
applying these in the prediction of learning objects in student learning. Given
the sum total of 1, to make recommendations for student learning, this paper
submits that the difference between argMaxPr(S) and the probability of
student performance quantifies the weight of learning objects for students.
Using a skill-set dataset, the computational procedure demonstrates: i) the
probability of skill-set events that have occurred and would lead to
higher-level learning; ii) the probability of events that have not occurred and
require subject-matter relearning; iii) the accuracy of a decision tree in
predicting student performance into class labels; and iv) the information
entropy of the skill-set data and its implications for student cognitive
performance and recommendation of learning [1]. | Artificial Intelligence |
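The abstract's central quantities — the difference between argMaxPr(S) and each skill's mastery probability, and the information entropy of the skill-set data — reduce to a few lines of standard probability code. A minimal sketch with hypothetical skill names and probabilities:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def learning_weights(skill_probs):
    """Weight of each learning object: max Pr(S) minus the skill's own
    mastery probability, per the abstract's recommendation rule."""
    best = max(skill_probs.values())
    return {skill: round(best - p, 4) for skill, p in skill_probs.items()}

skills = {"algebra": 0.9, "geometry": 0.6, "calculus": 0.3}
weights = learning_weights(skills)   # larger weight => more relearning needed
h = entropy([0.5, 0.25, 0.25])       # entropy of a skill distribution
```

A weight of 0 marks the best-mastered skill; the largest weight flags the subject most in need of relearning, which is the recommendation signal the abstract describes.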
What field is the article from? | Title: A novel transformer-based approach for soil temperature prediction
Abstract: Soil temperature is one of the most significant parameters that plays a
crucial role in glacier energy, dynamics of mass balance, processes of surface
hydrological, coaction of glacier-atmosphere, nutrient cycling, ecological
stability, the management of soil, water, and field crop. In this work, we
introduce a novel approach using transformer models for the purpose of
forecasting soil temperature. To the best of our knowledge, the
usage of transformer models in this work is the very first attempt to predict
soil temperature. Experiments are carried out using six different FLUXNET
stations by modeling them with five different transformer models, namely,
Vanilla Transformer, Informer, Autoformer, Reformer, and ETSformer. To
demonstrate the effectiveness of the proposed model, experiment results are
compared with both deep learning approaches and studies from the literature.
Experiment results show that the utilization of transformer models makes a significant
contribution to the literature, thence determining the new state-of-the-art. | Machine Learning |
What field is the article from? | Title: Deep Group Interest Modeling of Full Lifelong User Behaviors for CTR Prediction
Abstract: Extracting users' interests from their lifelong behavior sequence is crucial
for predicting Click-Through Rate (CTR). Most current methods employ a
two-stage process for efficiency: they first select historical behaviors
related to the candidate item and then deduce the user's interest from this
narrowed-down behavior sub-sequence. This two-stage paradigm, though effective,
leads to information loss. Solely using users' lifelong click behaviors doesn't
provide a complete picture of their interests, leading to suboptimal
performance. In our research, we introduce the Deep Group Interest Network
(DGIN), an end-to-end method to model the user's entire behavior history. This
includes all post-registration actions, such as clicks, cart additions,
purchases, and more, providing a nuanced user understanding. We start by
grouping the full range of behaviors using a relevant key (like item_id) to
enhance efficiency. This process reduces the behavior length significantly,
from O(10^4) to O(10^2). To mitigate the potential loss of information due to
grouping, we incorporate two categories of group attributes. Within each group,
we calculate statistical information on various heterogeneous behaviors (like
behavior counts) and employ self-attention mechanisms to highlight unique
behavior characteristics (like behavior type). Based on this reorganized
behavior data, the user's interests are derived using the Transformer
technique. Additionally, we identify a subset of behaviors that share the same
item_id with the candidate item from the lifelong behavior sequence. The
insights from this subset reveal the user's decision-making process related to
the candidate item, improving prediction accuracy. Our comprehensive
evaluation, both on industrial and public datasets, validates DGIN's efficacy
and efficiency. | Information Retrieval |
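The grouping step DGIN describes — collapse a lifelong behavior sequence by a key such as item_id and keep per-group statistics over heterogeneous behavior types — can be sketched as a plain group-by. The data shapes and names here are illustrative, not the paper's implementation:

```python
from collections import defaultdict

def group_behaviors(behaviors):
    """Group (item_id, action) events by item_id and count each behavior
    type, shrinking the sequence from the number of events to the
    number of distinct items."""
    groups = defaultdict(lambda: defaultdict(int))
    for item_id, action in behaviors:
        groups[item_id][action] += 1
    return {item: dict(stats) for item, stats in groups.items()}

seq = [(1, "click"), (2, "click"), (1, "cart"), (1, "click"), (2, "purchase")]
grouped = group_behaviors(seq)   # 5 events -> 2 groups
```

This is the same length reduction the abstract reports (O(10^4) events down to O(10^2) groups), with the per-group counts standing in for the statistical group attributes that compensate for the information lost by grouping.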
What field is the article from? | Title: chatGPT for generating questions and assessments based on accreditations
Abstract: This research aims to take advantage of artificial intelligence techniques to
produce student assessments that are compatible with the different academic
accreditations of the same program. The possibility of using generative
artificial intelligence technology was studied to produce a test compliant with
the National Center for Academic Accreditation of the Kingdom of Saudi Arabia
and the Accreditation Board for Engineering and Technology. A novel method was
introduced to map the verbs used to create the questions included in the tests.
The method makes it possible to use generative artificial intelligence
technology to produce questions that measure educational outcomes and to check
their validity. A questionnaire was distributed to verify that the use of
generative artificial intelligence to create exam questions is acceptable to
faculty members, and to ask about their acceptance of assistance in validating
questions they submit and amending them in accordance with academic
accreditations. The questionnaire was distributed to faculty members of
different majors in the universities of the Kingdom of Saudi Arabia. One
hundred twenty responses were obtained, with an eighty-five percent approval
rate for generating complete exam questions with generative artificial
intelligence, whereas ninety-eight percent was the
approval percentage for editing and improving already existed questions. | Computers and Society |
What field is the article from? | Title: Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation
Abstract: Transformers have been successfully applied in the field of video-based 3D
human pose estimation. However, the high computational costs of these video
pose transformers (VPTs) make them impractical on resource-constrained devices.
In this paper, we present a plug-and-play pruning-and-recovering framework,
called Hourglass Tokenizer (HoT), for efficient transformer-based 3D human pose
estimation from videos. Our HoT begins with pruning pose tokens of redundant
frames and ends with recovering full-length tokens, resulting in a few pose
tokens in the intermediate transformer blocks and thus improving the model
efficiency. To effectively achieve this, we propose a token pruning cluster
(TPC) that dynamically selects a few representative tokens with high semantic
diversity while eliminating the redundancy of video frames. In addition, we
develop a token recovering attention (TRA) to restore the detailed
spatio-temporal information based on the selected tokens, thereby expanding the
network output to the original full-length temporal resolution for fast
inference. Extensive experiments on two benchmark datasets (i.e., Human3.6M and
MPI-INF-3DHP) demonstrate that our method can achieve both high efficiency and
estimation accuracy compared to the original VPT models. For instance, applied
to MotionBERT and MixSTE on Human3.6M, our HoT can save nearly 50% FLOPs
without sacrificing accuracy and nearly 40% FLOPs with only 0.2% accuracy drop,
respectively. Our source code will be open-sourced. | Computer Vision |
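HoT's prune-then-recover pipeline — keep a few representative pose tokens in the intermediate blocks, then expand back to full temporal length — can be illustrated with a deliberately crude stand-in: keep every k-th token and recover by copying the nearest kept one. The real TPC/TRA modules use semantic clustering and attention; this sketch only shows the length bookkeeping:

```python
def prune_and_recover(tokens, keep_every=2):
    """Prune to every `keep_every`-th token, then recover the original
    length by repeating the nearest kept token."""
    kept = tokens[::keep_every]
    recovered = [kept[min(i // keep_every, len(kept) - 1)]
                 for i in range(len(tokens))]
    return kept, recovered

tokens = [10, 11, 12, 13, 14, 15]          # one token per video frame
kept, recovered = prune_and_recover(tokens)
```

The intermediate blocks would operate on `kept` (here half the length, hence the FLOPs saving), while the network output is restored to the full temporal resolution for inference.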
What field is the article from? | Title: The Disagreement Problem in Faithfulness Metrics
Abstract: The field of explainable artificial intelligence (XAI) aims to explain how
black-box machine learning models work. Much of the work centers around the
holy grail of providing post-hoc feature attributions to any model
architecture. While the pace of innovation around novel methods has slowed
down, the question remains of how to choose a method, and how to make it fit
for purpose. Recently, efforts around benchmarking XAI methods have suggested
metrics for that purpose -- but there are many choices. That bounty of choice
still leaves an end user unclear on how to proceed. This paper focuses on
comparing metrics with the aim of measuring faithfulness of local explanations
on tabular classification problems -- and shows that the current metrics don't
agree; leaving users unsure how to choose the most faithful explanations. | Machine Learning |
What field is the article from? | Title: Is Feedback All You Need? Leveraging Natural Language Feedback in Goal-Conditioned Reinforcement Learning
Abstract: Despite numerous successes, the field of reinforcement learning (RL) remains
far from matching the impressive generalisation power of human behaviour
learning. One possible way to help bridge this gap may be to provide RL agents with
richer, more human-like feedback expressed in natural language. To investigate
this idea, we first extend BabyAI to automatically generate language feedback
from the environment dynamics and goal condition success. Then, we modify the
Decision Transformer architecture to take advantage of this additional signal.
We find that training with language feedback either in place of or in addition
to the return-to-go or goal descriptions improves agents' generalisation
performance, and that agents can benefit from feedback even when this is only
available during training, but not at inference. | Computational Linguistics |
What field is the article from? | Title: INTERVENOR: Prompt the Coding Ability of Large Language Models with the Interactive Chain of Repairing
Abstract: This paper proposes INTERactiVE chaiN Of Repairing (INTERVENOR), which mimics
human code repairing behavior (iteratively judging, rethinking, and repairing)
and prompts the coding ability of Large Language Models (LLMs).
Specifically, INTERVENOR employs two LLM based agents, Code Learner and Code
Teacher, to play different roles in code repairing and work interactively to
repair the generated codes. The Code Learner is asked to generate and repair
code according to the instructions from the Code Teacher. The Code Teacher
rethinks the code errors according to the corresponding feedback from compilers
and iteratively generates the chain-of-repairing (CoR) to guide the code
repairing process for Code Learner. Our experiments show that INTERVENOR
outperforms the state-of-the-art methods and achieves about 13% and 4.5%
improvements over the GPT-3.5 model in code generation and code translation
tasks, respectively. Our further analyses show that CoR can illuminate the bug
reasons and solution plans via natural language. Thanks to the feedback of code
compilers, INTERVENOR can accurately identify the syntax errors and assertion
errors in the code and provide precise instructions to repair codes, making
LLMs achieve the plateau performance with only three repairing turns. All data
and codes are available at https://github.com/NEUIR/INTERVENOR | Software Engineering |
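The judge–rethink–repair loop INTERVENOR describes can be mimicked with Python's own compiler as the feedback source. Here `repair_fn` stands in for the Code Teacher/Code Learner pair and is a deliberately trivial repairer; the loop structure with compiler feedback and a turn budget, not the repair itself, is the point:

```python
def repair_loop(code, repair_fn, max_turns=3):
    """Compile the candidate code; on a SyntaxError, pass the error
    message to `repair_fn` and retry, up to `max_turns` turns."""
    for turn in range(1, max_turns + 1):
        try:
            compile(code, "<candidate>", "exec")
            return code, turn                    # compiles cleanly
        except SyntaxError as err:
            code = repair_fn(code, str(err))     # "chain of repairing" step
    return None, max_turns

# Toy repairer: append the closing parenthesis the error complains about.
fix = lambda code, err: code + ")"
repaired, turns = repair_loop("print('hello'", fix)
```

The `max_turns=3` budget mirrors the abstract's observation that LLMs reach plateau performance within three repairing turns.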
What field is the article from? | Title: Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks
Abstract: Growing applications of large language models (LLMs) trained by a third party
raise serious concerns about the security vulnerabilities of LLMs. It has been
demonstrated that malicious actors can covertly exploit these vulnerabilities
in LLMs through poisoning attacks aimed at generating undesirable outputs.
While poisoning attacks have received significant attention in the image domain
(e.g., object detection) and in classification tasks, their implications for
generative models, particularly in the realm of natural language generation
(NLG) tasks, remain poorly understood. To bridge this gap, we perform a
comprehensive exploration of various poisoning techniques to assess their
effectiveness across a range of generative tasks. Furthermore, we introduce a
range of metrics designed to quantify the success and stealthiness of poisoning
attacks specifically tailored to NLG tasks. Through extensive experiments on
multiple NLG tasks, LLMs and datasets, we show that it is possible to
successfully poison an LLM during the fine-tuning stage using as little as 1\%
of the total tuning data samples. Our paper presents the first systematic
approach to comprehend poisoning attacks targeting NLG tasks considering a wide
range of triggers and attack settings. We hope our findings will assist the AI
security community in devising appropriate defenses against such threats. | Cryptography and Security |
What field is the article from? | Title: Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?
Abstract: A distinction is often drawn between a model's ability to predict a label for
an evaluation sample that is directly memorised from highly similar training
samples versus an ability to predict the label via some method of
generalisation. In the context of using Language Models for question-answering,
discussion continues to occur as to the extent to which questions are answered
through memorisation. We consider this issue for questions that would ideally
be answered through reasoning over an associated context. We propose a method
of identifying evaluation samples for which it is very unlikely our model would
have memorised the answers. Our method is based on semantic similarity of input
tokens and label tokens between training and evaluation samples. We show that
our method offers advantages upon some prior approaches in that it is able to
surface evaluation-train pairs that have overlap in either contiguous or
discontiguous sequences of tokens. We use this method to identify unmemorisable
subsets of our evaluation datasets. We train two Language Models in a multitask
fashion whereby the second model differs from the first only in that it has two
additional datasets added to the training regime that are designed to impart
simple numerical reasoning strategies of a sort known to improve performance on
some of our evaluation datasets but not on others. We then show that there is
performance improvement between the two models on the unmemorisable subsets of
the evaluation datasets that were expected to benefit from the additional
training datasets. Specifically, performance on unmemorisable subsets of two of
our evaluation datasets, DROP and ROPES, significantly improves by 9.0% and
25.7% respectively while other evaluation datasets have no significant change
in performance. | Computational Linguistics |
What field is the article from? | Title: Advancing Post Hoc Case Based Explanation with Feature Highlighting
Abstract: Explainable AI (XAI) has been proposed as a valuable tool to assist in
downstream tasks involving human and AI collaboration. Perhaps the most
psychologically valid XAI techniques are case based approaches which display
'whole' exemplars to explain the predictions of black box AI systems. However,
for such post hoc XAI methods dealing with images, there has been no attempt to
improve their scope by using multiple clear feature 'parts' of the images to
explain the predictions while linking back to relevant cases in the training
data, thus allowing for more comprehensive explanations that are faithful to
the underlying model. Here, we address this gap by proposing two general
algorithms (latent and super pixel based) which can isolate multiple clear
feature parts in a test image, and then connect them to the explanatory cases
found in the training data, before testing their effectiveness in a carefully
designed user study. Results demonstrate that the proposed approach
appropriately calibrates a user's feelings of 'correctness' for ambiguous
classifications in real world data on the ImageNet dataset, an effect which
does not happen when just showing the explanation without feature highlighting. | Artificial Intelligence |
What field is the article from? | Title: TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models
Abstract: The Diffusion model, a prevalent framework for image generation, encounters
significant challenges in terms of broad applicability due to its extended
inference times and substantial memory requirements. Efficient Post-training
Quantization (PTQ) is pivotal for addressing these issues in traditional
models. Different from traditional models, diffusion models heavily depend on
the time-step $t$ to achieve satisfactory multi-round denoising. Usually, $t$
from the finite set $\{1, \ldots, T\}$ is encoded to a temporal feature by a
few modules totally irrespective of the sampling data. However, existing PTQ
methods do not optimize these modules separately. They adopt inappropriate
reconstruction targets and complex calibration methods, resulting in a severe
disturbance of the temporal feature and denoising trajectory, as well as a low
compression efficiency. To solve these, we propose a Temporal Feature
Maintenance Quantization (TFMQ) framework building upon a Temporal Information
Block which is just related to the time-step $t$ and unrelated to the sampling
data. Powered by the pioneering block design, we devise temporal information
aware reconstruction (TIAR) and finite set calibration (FSC) to align the
full-precision temporal features in a limited time. Equipped with the
framework, we can maintain the most temporal information and ensure the
end-to-end generation quality. Extensive experiments on various datasets and
diffusion models prove our state-of-the-art results. Remarkably, our
quantization approach, for the first time, achieves model performance nearly on
par with the full-precision model under 4-bit weight quantization.
Additionally, our method incurs almost no extra computational cost and
accelerates quantization time by $2.0 \times$ on LSUN-Bedrooms $256 \times 256$
compared to previous works. | Computer Vision |
What field is the article from? | Title: ChatGPT in the context of precision agriculture data analytics
Abstract: In this study we argue that integrating ChatGPT into the data processing
pipeline of automated sensors in precision agriculture has the potential to
bring several benefits and enhance various aspects of modern farming practices.
Policy makers often face a barrier when they need to be informed about the
situation in vast agricultural fields in order to reach decisions. They depend
on close collaboration between agricultural experts in the field, data
analysts, and technology providers; such interdisciplinary teams cannot always
be assembled on demand, nor can effective communication across these diverse
domains be established in real time. In this work we argue that the speech
recognition input modality of ChatGPT provides a more intuitive and natural way
for policy makers to interact with the database of the server of an
agricultural data processing system to which a large, dispersed network of
automated insect traps and sensors probes reports. The large language models
map the speech input to text, allowing the user to form their own
unconstrained verbal query and removing the barrier of having to learn and
adapt to a specific data analytics software. The output of the language model
can interact through Python code and Pandas with the entire database, visualize
the results and use speech synthesis to engage the user in an iterative and
refining discussion related to the data. We show three ways of how ChatGPT can
interact with the database of the remote server to which a dispersed network of
different modalities (optical counters, vibration recordings, pictures, and
video), report. We examine the potential and the validity of the response of
ChatGPT in analyzing and interpreting agricultural data, providing real-time
insights and recommendations to stakeholders. | Artificial Intelligence |
What field is the article from? | Title: QualEval: Qualitative Evaluation for Model Improvement
Abstract: Quantitative evaluation metrics have traditionally been pivotal in gauging
the advancements of artificial intelligence systems, including large language
models (LLMs). However, these metrics have inherent limitations. Given the
intricate nature of real-world tasks, a single scalar to quantify and compare
is insufficient to capture the fine-grained nuances of model behavior. Metrics
serve only as a way to compare and benchmark models, and do not yield
actionable diagnostics, thus making the model improvement process challenging.
Model developers find themselves amid extensive manual efforts involving
sifting through vast datasets and attempting hit-or-miss adjustments to
training data or setups. In this work, we address the shortcomings of
quantitative metrics by proposing QualEval, which augments quantitative scalar
metrics with automated qualitative evaluation as a vehicle for model
improvement. QualEval uses a powerful LLM reasoner and our novel flexible
linear programming solver to generate human-readable insights that when
applied, accelerate model improvement. The insights are backed by a
comprehensive dashboard with fine-grained visualizations and
human-interpretable analyses. We corroborate the faithfulness of QualEval by
demonstrating that leveraging its insights, for example, improves the absolute
performance of the Llama 2 model by up to 15 percentage points on a challenging
dialogue task (DialogSum) when compared to baselines. QualEval successfully
increases the pace of model development, thus in essence serving as a
data-scientist-in-a-box. Given the focus on critiquing and improving current
evaluation metrics, our method serves as a refreshingly new technique for both
model evaluation and improvement. | Machine Learning |
What field is the article from? | Title: diff History for Long-Context Language Agents
Abstract: Language Models (LMs) offer an exciting solution for general-purpose embodied
control. However, a key technical issue arises when using an LM-based
controller: environment observations must be converted to text, which coupled
with history, leads to prohibitively large textual prompts. As a result, prior
work in LM agents is limited to restricted domains with either small
observation size or minimal needs for interaction history. In this paper, we
introduce a simple and highly effective solution to these issues. We exploit
the fact that consecutive text observations have high similarity and propose to
compress them via the Unix diff command. We demonstrate our approach in
NetHack, a complex rogue-like video game, that requires long-horizon reasoning
for decision-making and is far from solved, particularly for neural agents.
Diff history offers an average of 4x increase in the length of the text-based
interaction history available to the LM. This observational compression along
with the benefits of abstraction yields a 7x improvement in game score on
held-out environment instances over state-of-the-art baselines. It also
outperforms prior agents that use visual observations by over 40%. | Artificial Intelligence |
What field is the article from? | Title: Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model
Abstract: Text-to-image generative models offer many innovative services but also raise
ethical concerns due to their potential to generate unethical images. Most
publicly available text-to-image models employ safety filters to prevent
unintended generation intents. In this work, we introduce the
Divide-and-Conquer Attack to circumvent the safety filters of state-of-the-art
text-to-image models. Our attack leverages LLMs as agents for text
transformation, creating adversarial prompts from sensitive ones. We have
developed effective helper prompts that enable LLMs to break down sensitive
drawing prompts into multiple harmless descriptions, allowing them to bypass
safety filters while still generating sensitive images. This means that the
latent harmful meaning only becomes apparent when all individual elements are
drawn together. Our evaluation demonstrates that our attack successfully
circumvents the closed-box safety filter of SOTA DALLE-3 integrated natively
into ChatGPT to generate unethical images. This approach, which essentially
uses LLM-generated adversarial prompts against GPT-4-assisted DALLE-3, is akin
to using one's own spear to breach their shield. It could have more severe
security implications than previous manual crafting or iterative model querying
methods, and we hope it stimulates more attention towards similar efforts. Our
code and data are available at:
https://github.com/researchcode001/Divide-and-Conquer-Attack | Artificial Intelligence |
What field is the article from? | Title: Dense Video Captioning: A Survey of Techniques, Datasets and Evaluation Protocols
Abstract: Untrimmed videos have interrelated events, dependencies, context, overlapping
events, object-object interactions, domain specificity, and other semantics
that are worth highlighting while describing a video in natural language. Owing
to such a vast diversity, a single sentence can only correctly describe a
portion of the video. Dense Video Captioning (DVC) aims at detecting and
describing different events in a given video. The term DVC originated in the
2017 ActivityNet challenge, after which considerable effort has been made to
address the challenge. Dense Video Captioning is divided into three sub-tasks:
(1) Video Feature Extraction (VFE), (2) Temporal Event Localization (TEL), and
(3) Dense Caption Generation (DCG). This review aims to discuss all the studies
that claim to perform DVC along with its sub-tasks and summarize their results.
We also discuss all the datasets that have been used for DVC. Lastly, we
highlight some emerging challenges and future trends in the field. | Computer Vision |
What field is the article from? | Title: Explainable Product Classification for Customs
Abstract: The task of assigning internationally accepted commodity codes (aka HS codes)
to traded goods is a critical function of customs offices. Like court decisions
made by judges, this task follows the doctrine of precedent and can be
nontrivial even for experienced officers. Together with the Korea Customs
Service (KCS), we propose a first-ever explainable decision supporting model
that suggests the most likely subheadings (i.e., the first six digits) of the
HS code. The model also provides reasoning for its suggestion in the form of a
document that is interpretable by customs officers. We evaluated the model
using 5,000 cases that recently received a classification request. The results
showed that the top-3 suggestions made by our model had an accuracy of 93.9%
when classifying 925 challenging subheadings. A user study with 32 customs
experts further confirmed that our algorithmic suggestions accompanied by
explainable reasonings, can substantially reduce the time and effort taken by
customs officers for classification reviews. | Artificial Intelligence |
What field is the article from? | Title: Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy
Abstract: Data pruning, which aims to downsize a large training set into a small
informative subset, is crucial for reducing the enormous computational costs of
modern deep learning. Though large-scale data collections invariably contain
annotation noise and numerous robust learning methods have been developed, data
pruning for the noise-robust learning scenario has received little attention.
With state-of-the-art Re-labeling methods that self-correct erroneous labels
while training, it is challenging to identify which subset induces the most
accurate re-labeling of erroneous labels in the entire training set. In this
paper, we formalize the problem of data pruning with re-labeling. We first show
that the likelihood of a training example being correctly re-labeled is
proportional to the prediction confidence of its neighborhood in the subset.
Therefore, we propose a novel data pruning algorithm, Prune4Rel, that finds a
subset maximizing the total neighborhood confidence of all training examples,
thereby maximizing the re-labeling accuracy and generalization performance.
Extensive experiments on four real and one synthetic noisy datasets show that
Prune4Rel outperforms the baselines with Re-labeling models by up to 9.1% as
well as those with a standard model by up to 21.6%. | Machine Learning |
What field is the article from? | Title: CAMRA: Copilot for AMR Annotation
Abstract: In this paper, we introduce CAMRA (Copilot for AMR Annotation), a
cutting-edge web-based tool designed for constructing Abstract Meaning
Representation (AMR) from natural language text. CAMRA offers a novel approach
to deep lexical semantics annotation such as AMR, treating AMR annotation akin
to coding in programming languages. Leveraging the familiarity of programming
paradigms, CAMRA encompasses all essential features of existing AMR editors,
including example lookup, while going a step further by integrating Propbank
roleset lookup as an autocomplete feature within the tool. Notably, CAMRA
incorporates AMR parser models as coding co-pilots, greatly enhancing the
efficiency and accuracy of AMR annotators. To demonstrate the tool's
capabilities, we provide a live demo accessible at: https://camra.colorado.edu | Computational Linguistics |
What field is the article from? | Title: The Rise of the AI Co-Pilot: Lessons for Design from Aviation and Beyond
Abstract: The fast pace of advances in AI promises to revolutionize various aspects of
knowledge work, extending its influence to daily life and professional fields
alike. We advocate for a paradigm where AI is seen as a collaborative co-pilot,
working under human guidance rather than as a mere tool. Drawing from relevant
research and literature in the disciplines of Human-Computer Interaction and
Human Factors Engineering, we highlight the criticality of maintaining human
oversight in AI interactions. Reflecting on lessons from aviation, we address
the dangers of over-relying on automation, such as diminished human vigilance
and skill erosion. Our paper proposes a design approach that emphasizes active
human engagement, control, and skill enhancement in the AI partnership, aiming
to foster a harmonious, effective, and empowering human-AI relationship. We
particularly call out the critical need to design AI interaction capabilities
and software applications to enable and celebrate the primacy of human agency.
This calls for designs for human-AI partnership that cede ultimate control and
responsibility to the human user as pilot, with the AI co-pilot acting in a
well-defined supporting role. | Human-Computer Interaction |
What field is the article from? | Title: Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models
Abstract: We propose the Data Contamination Quiz, a simple and effective approach to
detect data contamination in large language models (LLMs) and estimate the
amount of it. Specifically, we frame data contamination detection as a series
of multiple-choice questions. We devise a quiz format wherein three perturbed
versions of each dataset instance are created. These changes only include
word-level perturbations, replacing words with their contextual synonyms,
ensuring both the semantic and sentence structure remain exactly the same as
the original instance. Together with the original instance, these perturbed
versions constitute the choices in the quiz. Given that the only distinguishing
signal among these choices is the exact wording, an LLM, when tasked with
identifying the original instance from the choices, opts for the original if it
has memorized it in its pre-training phase--a trait intrinsic to LLMs. A
dataset partition is then marked as contaminated if the LLM's performance on
the quiz surpasses what random chance suggests. Our evaluation spans seven
datasets and their respective splits (train and test/validation) on two
state-of-the-art LLMs: GPT-4 and GPT-3.5. Although we lack access to the
pre-training data, our results suggest that our approach not only enhances the
detection of data contamination but also provides an accurate estimation of its
extent, even when the contamination signal is weak. | Computational Linguistics |
What field is the article from? | Title: Multi-criteria recommendation systems to foster online grocery
Abstract: With the exponential increase in information, it has become imperative to
design mechanisms that allow users to access what matters to them as quickly as
possible. With the development of information technology, the recommendation
system ($RS$) is the solution: an intelligent system. Various types of data
can be collected on items of interest to users and presented as
recommendations. $RS$ also plays a very important role in e-commerce. The
purpose of product recommendation is to suggest the most appropriate items for
a specific user. The major challenges when recommending
products are insufficient information about the products and the categories to
which they belong. In this paper, we transform the product data using two
methods of document representation: bag-of-words (BOW) and the neural
network-based document combination known as vector-based (Doc2Vec). We propose
three-criteria recommendation systems (product, package, and health) for each
document representation method to foster online grocery, which depends on
product characteristics such as (composition, packaging, nutrition table,
allergen, etc.). For our evaluation, we conducted a user and expert survey.
Finally, we have compared the performance of these three criteria for each
document representation method, discovering that the neural network-based
(Doc2Vec) performs better and completely alters the results. | Information Retrieval |
What field is the article from? | Title: Generalization to New Sequential Decision Making Tasks with In-Context Learning
Abstract: Training autonomous agents that can learn new tasks from only a handful of
demonstrations is a long-standing problem in machine learning. Recently,
transformers have been shown to learn new language or vision tasks without any
weight updates from only a few examples, also referred to as in-context
learning. However, the sequential decision making setting poses additional
challenges having a lower tolerance for errors since the environment's
stochasticity or the agent's actions can lead to unseen, and sometimes
unrecoverable, states. In this paper, we use an illustrative example to show
that naively applying transformers to sequential decision making problems does
not enable in-context learning of new tasks. We then demonstrate how training
on sequences of trajectories with certain distributional properties leads to
in-context learning of new sequential decision making tasks. We investigate
different design choices and find that larger model and dataset sizes, as well
as more task diversity, environment stochasticity, and trajectory burstiness,
all result in better in-context learning of new out-of-distribution tasks. By
training on large diverse offline datasets, our model is able to learn new
MiniHack and Procgen tasks without any weight updates from just a handful of
demonstrations. | Machine Learning |
What field is the article from? | Title: Math-Shepherd: A Label-Free Step-by-Step Verifier for LLMs in Mathematical Reasoning
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities across
a wide range of tasks. However, even the most advanced open-source LLMs, such
as the LLaMA family models, still face challenges when it comes to accurately
solving complex multi-step mathematical problems. In this paper, we present an
innovative process-oriented math verifier called Math-Shepherd, which
assigns a reward score to each step of the LLM's outputs on math problems. The
training of Math-Shepherd is achieved using automatically constructed
process-wise supervision data, breaking the bottleneck of heavy reliance on
manual annotation in existing work. With the guidance of Math-Shepherd, a
series of open-source LLMs demonstrate exceptional performance. Among them,
DeepSeek 67B [DeepSeek-llm] stands out by achieving accuracy rates of
93.3% on the GSM8K dataset and 48.1% on the MATH dataset, without external
enhancement such as tool usage. Our Math-Shepherd also outperforms the
self-consistency method and other existing verification models. We believe that
automatic process supervision holds significant potential for the future
evolution of LLMs. | Artificial Intelligence |
What field is the article from? | Title: Replay across Experiments: A Natural Extension of Off-Policy RL
Abstract: Replaying data is a principal mechanism underlying the stability and data
efficiency of off-policy reinforcement learning (RL). We present an effective
yet simple framework to extend the use of replays across multiple experiments,
minimally adapting the RL workflow for sizeable improvements in controller
performance and research iteration times. At its core, Replay Across
Experiments (RaE) involves reusing experience from previous experiments to
improve exploration and bootstrap learning while reducing required changes to a
minimum in comparison to prior work. We empirically show benefits across a
number of RL algorithms and challenging control domains spanning both
locomotion and manipulation, including hard exploration tasks from egocentric
vision. Through comprehensive ablations, we demonstrate robustness to the
quality and amount of data available and various hyperparameter choices.
Finally, we discuss how our approach can be applied more broadly across
research life cycles and can increase resilience by reloading data across
random seeds or hyperparameter variations. | Machine Learning |
What field is the article from? | Title: End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply Systems
Abstract: Reply suggestion systems represent a staple component of many instant
messaging and email systems. However, the requirement to produce sets of
replies, rather than individual replies, makes the task poorly suited for
out-of-the-box retrieval architectures, which only consider individual
message-reply similarity. As a result, these systems often rely on additional
post-processing modules to diversify the outputs. However, these approaches are
ultimately bottlenecked by the performance of the initial retriever, which in
practice struggles to present a sufficiently diverse range of options to the
downstream diversification module, leading to the suggestions being less
relevant to the user. In this paper, we consider a novel approach that
radically simplifies this pipeline through an autoregressive text-to-text
retrieval model, that learns the smart reply task end-to-end from a dataset of
(message, reply set) pairs obtained via bootstrapping. Empirical results show
this method consistently outperforms a range of state-of-the-art baselines
across three datasets, corresponding to a 5.1%-17.9% improvement in relevance,
and a 0.5%-63.1% improvement in diversity compared to the best baseline
approach. We make our code publicly available. | Computational Linguistics |
What field is the article from? | Title: NNG-Mix: Improving Semi-supervised Anomaly Detection with Pseudo-anomaly Generation
Abstract: Anomaly detection (AD) is essential in identifying rare and often critical
events in complex systems, finding applications in fields such as network
intrusion detection, financial fraud detection, and fault detection in
infrastructure and industrial systems. While AD is typically treated as an
unsupervised learning task due to the high cost of label annotation, it is more
practical to assume access to a small set of labeled anomaly samples from
domain experts, as is the case for semi-supervised anomaly detection.
Semi-supervised and supervised approaches can leverage such labeled data,
resulting in improved performance. In this paper, rather than proposing a new
semi-supervised or supervised approach for AD, we introduce a novel algorithm
for generating additional pseudo-anomalies on the basis of the limited labeled
anomalies and a large volume of unlabeled data. This serves as an augmentation
to facilitate the detection of new anomalies. Our proposed algorithm, named
Nearest Neighbor Gaussian Mixup (NNG-Mix), efficiently integrates information
from both labeled and unlabeled data to generate pseudo-anomalies. We compare
the performance of this novel algorithm with commonly applied augmentation
techniques, such as Mixup and Cutout. We evaluate NNG-Mix by training various
existing semi-supervised and supervised anomaly detection algorithms on the
original training data along with the generated pseudo-anomalies. Through
extensive experiments on 57 benchmark datasets in ADBench, reflecting different
data types, we demonstrate that NNG-Mix outperforms other data augmentation
methods. It yields significant performance improvements compared to the
baselines trained exclusively on the original training data. Notably, NNG-Mix
yields up to 16.4%, 8.8%, and 8.0% improvements on Classical, CV, and NLP
datasets in ADBench. Our source code will be available at
https://github.com/donghao51/NNG-Mix. | Machine Learning |
What field is the article from? | Title: Predicting Ground Reaction Force from Inertial Sensors
Abstract: The study of ground reaction forces (GRF) is used to characterize the
mechanical loading experienced by individuals in movements such as running,
which is clinically applicable to identify athletes at risk for stress-related
injuries. Our aim in this paper is to determine if data collected with inertial
measurement units (IMUs), that can be worn by athletes during outdoor runs, can
be used to predict GRF with sufficient accuracy to allow the analysis of its
derived biomechanical variables (e.g., contact time and loading rate).
In this paper, we consider lightweight approaches in contrast to
state-of-the-art prediction using LSTM neural networks. Specifically, we
compare use of LSTMs to k-Nearest Neighbors (KNN) regression as well as propose
a novel solution, SVD Embedding Regression (SER), using linear regression
between singular value decomposition embeddings of IMUs data (input) and GRF
data (output). We evaluate the accuracy of these techniques when using training
data collected from different athletes, from the same athlete, or both, and we
explore the use of acceleration and angular velocity data from sensors at
different locations (sacrum and shanks). Our results illustrate that simple
machine learning methods such as SER and KNN can be similarly accurate or more
accurate than LSTM neural networks, with much faster training times and
hyperparameter optimization; in particular, SER and KNN are more accurate when
personal training data are available, and KNN comes with the benefit of providing the
provenance of prediction. Notably, the use of personal data reduces prediction
errors of all methods for most biomechanical variables. | Machine Learning |
What field is the article from? | Title: Causality and Explainability for Trustworthy Integrated Pest Management
Abstract: Pesticides serve as a common tool in agricultural pest control but
significantly contribute to the climate crisis. To combat this, Integrated Pest
Management (IPM) stands as a climate-smart alternative. Despite its potential,
IPM faces low adoption rates due to farmers' skepticism about its
effectiveness. To address this challenge, we introduce an advanced data
analysis framework tailored to enhance IPM adoption. Our framework provides i)
robust pest population predictions across diverse environments with invariant
and causal learning, ii) interpretable pest presence predictions using
transparent models, iii) actionable advice through counterfactual explanations
for in-season IPM interventions, iv) field-specific treatment effect
estimations, and v) assessments of the effectiveness of our advice using causal
inference. By incorporating these features, our framework aims to alleviate
skepticism and encourage wider adoption of IPM practices among farmers. | Machine Learning |
What field is the article from? | Title: Multi-modal Latent Space Learning for Chain-of-Thought Reasoning in Language Models
Abstract: Chain-of-thought (CoT) reasoning has exhibited impressive performance in
language models for solving complex tasks and answering questions. However,
many real-world questions require multi-modal information, such as text and
images. Previous research on multi-modal CoT has primarily focused on
extracting fixed image features from off-the-shelf vision models and then
fusing them with text using attention mechanisms. This approach has limitations
because these vision models were not designed for complex reasoning tasks and
do not align well with language thoughts. To overcome this limitation, we
introduce a novel approach for multi-modal CoT reasoning that utilizes latent
space learning via diffusion processes to generate effective image features
that align with language thoughts. Our method fuses image features and text
representations at a deep level and improves the complex reasoning ability of
multi-modal CoT. We demonstrate the efficacy of our proposed method on
multi-modal ScienceQA and machine translation benchmarks, achieving
state-of-the-art performance on ScienceQA. Overall, our approach offers a more
robust and effective solution for multi-modal reasoning in language models,
enhancing their ability to tackle complex real-world problems. | Artificial Intelligence |
What field is the article from? | Title: Systematic AI Approach for AGI: Addressing Alignment, Energy, and AGI Grand Challenges
Abstract: AI faces a trifecta of grand challenges: the Energy Wall, the
Alignment Problem, and the Leap from Narrow AI to AGI. Contemporary AI
solutions consume unsustainable amounts of energy during model training and
daily operations. Making things worse, the amount of computation required to
train each new AI model has been doubling every 2 months since 2020, directly
translating into increases in energy consumption. The leap from AI to AGI
requires multiple functional subsystems operating in a balanced manner, which
requires a system architecture. However, the current approach to artificial
intelligence lacks system design, even though system characteristics play a
key role in the human brain, from the way it processes information to how it
makes decisions. Similarly, current alignment and AI ethics approaches largely
ignore system design, yet studies show that the brain's system architecture
plays a critical role in healthy moral decisions. In this paper, we argue that
system design is critically important in overcoming all three grand
challenges. We posit that system design is the missing piece in overcoming
them. We present a Systematic AI Approach for AGI that utilizes system design
principles while providing ways to overcome the energy wall and the alignment
challenges. | Artificial Intelligence |
What field is the article from? | Title: Internet of Federated Digital Twins (IoFDT): Connecting Twins Beyond Borders for Society 5.0
Abstract: The concept of digital twin (DT), which enables the creation of a
programmable, digital representation of physical systems, is expected to
revolutionize future industries and will lie at the heart of the vision of a
future smart society, namely, Society 5.0, in which high integration between
cyber (digital) and physical spaces is exploited to bring economic and societal
advancements. However, the success of such a DT-driven Society 5.0 requires a
synergistic convergence of artificial intelligence and networking technologies
into an integrated, programmable system that can coordinate networks of DTs to
effectively deliver diverse Society 5.0 services. Prior works remain restricted
to either qualitative studies, simple analyses, or software implementations of a
single DT, and thus, they cannot provide the highly synergistic integration of
digital and physical spaces as required by Society 5.0. In contrast, this paper
envisions a novel concept of an Internet of Federated Digital Twins (IoFDT)
that holistically integrates heterogeneous and physically separated DTs
representing different Society 5.0 services within a single framework and
system. For this concept of IoFDT, we first introduce a hierarchical
architecture that integrates federated DTs through horizontal and vertical
interactions, bridging the cyber and physical spaces to unlock new
possibilities. Then, we discuss the challenges of realizing IoFDT, highlighting
the intricacies across communication, computing, and AI-native networks while
also underscoring potential innovative solutions. Subsequently, we elaborate on
the importance of the implementation of a unified IoFDT platform that
integrates all technical components and orchestrates their interactions,
emphasizing the necessity of practical experimental platforms with a focus on
real-world applications in areas like smart mobility. | Artificial Intelligence |
What field is the article from? | Title: FedReverse: Multiparty Reversible Deep Neural Network Watermarking
Abstract: The proliferation of Deep Neural Networks (DNN) in commercial applications is
expanding rapidly. Simultaneously, the increasing complexity and cost of
training DNN models have intensified the urgency surrounding the protection of
intellectual property associated with these trained models. In this regard, DNN
watermarking has emerged as a crucial safeguarding technique. This paper
presents FedReverse, a novel multiparty reversible watermarking approach for
robust copyright protection while minimizing performance impact. Unlike
existing methods, FedReverse enables collaborative watermark embedding from
multiple parties after model training, ensuring individual copyright claims. In
addition, FedReverse is reversible, enabling complete watermark removal with
unanimous client consent. FedReverse demonstrates perfect covering, ensuring
that observations of watermarked content do not reveal any information about
the hidden watermark. Additionally, it showcases resistance against Known
Original Attacks (KOA), making it highly challenging for attackers to forge
watermarks or infer the key. This paper further evaluates FedReverse through
comprehensive simulations involving Multi-layer Perceptron (MLP) and
Convolutional Neural Networks (CNN) trained on the MNIST dataset. The
simulations demonstrate FedReverse's robustness, reversibility, and minimal
impact on model accuracy across varying embedding parameters and multiple
client scenarios. | Cryptography and Security |
What field is the article from? | Title: Learning Multi-graph Structure for Temporal Knowledge Graph Reasoning
Abstract: Temporal Knowledge Graph (TKG) reasoning that forecasts future events based
on historical snapshots distributed over timestamps is denoted as extrapolation
and has gained significant attention. Owing to its extreme versatility and
variation in spatial and temporal correlations, TKG reasoning presents a
challenging task, demanding efficient capture of concurrent structures and
evolutional interactions among facts. While existing methods have made strides
in this direction, they still fall short of harnessing the diverse forms of
intrinsic expressive semantics of TKGs, which encompass entity correlations
across multiple timestamps and periodicity of temporal information. This
limitation constrains their ability to thoroughly reflect historical
dependencies and future trends. In response to these drawbacks, this paper
proposes an innovative reasoning approach that focuses on Learning Multi-graph
Structure (LMS). Concretely, it comprises three distinct modules concentrating
on multiple aspects of graph structure knowledge within TKGs, including
concurrent and evolutional patterns along timestamps, query-specific
correlations across timestamps, and semantic dependencies of timestamps, which
capture TKG features from various perspectives. Besides, LMS incorporates an
adaptive gate for merging entity representations both along and across
timestamps effectively. Moreover, it integrates timestamp semantics into graph
attention calculations and time-aware decoders, in order to impose temporal
constraints on events and narrow down prediction scopes with historical
statistics. Extensive experimental results on five event-based benchmark
datasets demonstrate that LMS outperforms state-of-the-art extrapolation
models, indicating the superiority of modeling a multi-graph perspective for
TKG reasoning. | Artificial Intelligence |
What field is the article from? | Title: Context Shift Reduction for Offline Meta-Reinforcement Learning
Abstract: Offline meta-reinforcement learning (OMRL) utilizes pre-collected offline
datasets to enhance the agent's generalization ability on unseen tasks.
However, the context shift problem arises due to the distribution discrepancy
between the contexts used for training (from the behavior policy) and testing
(from the exploration policy). The context shift problem leads to incorrect
task inference and further deteriorates the generalization ability of the
meta-policy. Existing OMRL methods either overlook this problem or attempt to
mitigate it with additional information. In this paper, we propose a novel
approach called Context Shift Reduction for OMRL (CSRO) to address the context
shift problem with only offline datasets. The key insight of CSRO is to
minimize the influence of policy in context during both the meta-training and
meta-test phases. During meta-training, we design a max-min mutual information
representation learning mechanism to diminish the impact of the behavior policy
on task representation. In the meta-test phase, we introduce the non-prior
context collection strategy to reduce the effect of the exploration policy.
Experimental results demonstrate that CSRO significantly reduces the context
shift and improves the generalization ability, surpassing previous methods
across various challenging domains. | Machine Learning |
What field is the article from? | Title: Responsibility in Extensive Form Games
Abstract: Two different forms of responsibility, counterfactual and seeing-to-it, have
been extensively discussed in philosophy and AI in the context of a single
agent or multiple agents acting simultaneously. Although the generalisation of
counterfactual responsibility to a setting where multiple agents act in some
order is relatively straightforward, the same cannot be said about seeing-to-it
responsibility. Two versions of seeing-to-it modality applicable to such
settings have been proposed in the literature. Neither of them perfectly
captures the intuition of responsibility. This paper proposes a definition of
seeing-to-it responsibility for such settings that amalgamates the two
modalities.
This paper shows that the newly proposed notion of responsibility and
counterfactual responsibility are not definable through each other and studies
the responsibility gap for these two forms of responsibility. It shows that
although these two forms of responsibility are not enough to ascribe
responsibility in each possible situation, this gap does not exist if
higher-order responsibility is taken into account. | Artificial Intelligence |
What field is the article from? | Title: TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems
Abstract: Large Language Models (LLMs) have demonstrated proficiency in addressing
tasks that necessitate a combination of task planning and the usage of external
tools, such as APIs. However, real-world complex systems present three
prevalent challenges concerning task planning and tool usage: (1) The real
system usually has a vast array of APIs, so it is impossible to feed the
descriptions of all APIs to the prompt of LLMs as the token length is limited;
(2) the real system is designed for handling complex tasks, and the base LLMs
can hardly plan a correct sub-task order and API-calling order for such tasks;
(3) Similar semantics and functionalities among APIs in real systems create
challenges for both LLMs and even humans in distinguishing between them. In
response, this paper introduces a comprehensive framework aimed at enhancing
the Task Planning and Tool Usage (TPTU) abilities of LLM-based agents operating
within real-world systems. Our framework comprises three key components
designed to address these challenges: (1) the API Retriever selects the most
pertinent APIs for the user task among the extensive array available; (2) LLM
Finetuner tunes a base LLM so that the finetuned LLM can be more capable for
task planning and API calling; (3) the Demo Selector adaptively retrieves
different demonstrations related to hard-to-distinguish APIs, which is further
used for in-context learning to boost the final performance. We validate our
methods using a real-world commercial system as well as an open-sourced
academic dataset, and the outcomes clearly showcase the efficacy of each
individual component as well as the integrated framework. | Artificial Intelligence |
What field is the article from? | Title: PIE-NeRF: Physics-based Interactive Elastodynamics with NeRF
Abstract: We show that physics-based simulations can be seamlessly integrated with NeRF
to generate high-quality elastodynamics of real-world objects. Unlike existing
methods, we discretize nonlinear hyperelasticity in a meshless way, obviating
the necessity for intermediate auxiliary shape proxies like a tetrahedral mesh
or voxel grid. A quadratic generalized moving least square (Q-GMLS) is employed
to capture nonlinear dynamics and large deformation on the implicit model. Such
meshless integration enables versatile simulations of complex and codimensional
shapes. We adaptively place the least-square kernels according to the NeRF
density field to significantly reduce the complexity of the nonlinear
simulation. As a result, physically realistic animations can be conveniently
synthesized using our method for a wide range of hyperelastic materials at an
interactive rate. For more information, please visit our project page at
https://fytalon.github.io/pienerf/. | Computer Vision |
What field is the article from? | Title: Polynomial-based Self-Attention for Table Representation learning
Abstract: Structured data, which constitutes a significant portion of existing data
types, has been a long-standing research topic in the field of machine
learning. Various representation learning methods for tabular data have been
proposed, ranging from encoder-decoder structures to Transformers. Among these,
Transformer-based methods have achieved state-of-the-art performance not only
in tabular data but also in various other fields, including computer vision and
natural language processing. However, recent studies have revealed that
self-attention, a key component of Transformers, can lead to an oversmoothing
issue. We show that Transformers for tabular data also face this problem, and
to address the problem, we propose a novel matrix polynomial-based
self-attention layer as a substitute for the original self-attention layer,
which enhances model scalability. In our experiments with three representative
table learning models equipped with our proposed layer, we illustrate that the
layer effectively mitigates the oversmoothing problem and enhances the
representation performance of the existing methods, outperforming the
state-of-the-art table representation methods. | Artificial Intelligence |
What field is the article from? | Title: Make me an Offer: Forward and Reverse Auctioning Problems in the Tourism Industry
Abstract: Most tourist destinations are facing regular and consistent seasonality with
significant economic and social impacts. This phenomenon is more pronounced in
the post-covid era, where demand for travel has increased but unevenly among
different geographic areas. To counter these problems that both customers and
hoteliers are facing, we have developed two auctioning systems that allow
hoteliers of lower popularity tier areas or during low season periods to
auction their rooms in what we call a forward auction model, and also allows
customers to initiate a bidding process whereby hoteliers in an area may make
offers to the customer for their rooms, in what constitutes a reverse auction
model initiated by the customer, similar to the bidding concept of
priceline.com. We develop mathematical programming models that define
explicitly both types of auctions, and show that in each type, there are
significant benefits to be gained both on the side of the hotelier as well as
on the side of the customer. We discuss algorithmic techniques for the
approximate solution of these optimization problems, and present results using
exact optimization solvers to solve them to guaranteed optimality. These
techniques could be beneficial to both customers and hoteliers, reducing
seasonality during middle and low season and providing the customer with
attractive offers. | Artificial Intelligence |
What field is the article from? | Title: Optimizing the Passenger Flow for Airport Security Check
Abstract: Due to the security requirements of airports and flights, passengers
must undergo a strict security check before boarding. However, there are
frequent complaints about the huge amount of time wasted while waiting for the
security check. This paper presents a potential solution aimed at optimizing
gate setup procedures specifically tailored for Chicago O'Hare International
Airport. By referring to queueing theory and performing Monte Carlo
simulations, we propose an approach to significantly diminish the average
waiting time to a more manageable level. Additionally, our study meticulously
examines and identifies the influential factors contributing to this
optimization, providing a comprehensive understanding of their impact. | Artificial Intelligence |
What field is the article from? | Title: Modular Control Architecture for Safe Marine Navigation: Reinforcement Learning and Predictive Safety Filters
Abstract: Many autonomous systems face safety challenges, requiring robust closed-loop
control to handle physical limitations and safety constraints. Real-world
systems, like autonomous ships, encounter nonlinear dynamics and environmental
disturbances. Reinforcement learning is increasingly used to adapt to complex
scenarios, but standard frameworks ensuring safety and stability are lacking.
Predictive Safety Filters (PSF) offer a promising solution, ensuring constraint
satisfaction in learning-based control without explicit constraint handling.
This modular approach allows using arbitrary control policies, with the safety
filter optimizing proposed actions to meet physical and safety constraints. We
apply this approach to marine navigation, combining RL with PSF on a simulated
Cybership II model. The RL agent is trained on path following and collision
avoidance, while the PSF monitors and modifies control actions for safety.
Results demonstrate the PSF's effectiveness in maintaining safety without
hindering the RL agent's learning rate and performance, evaluated against a
standard RL agent without PSF. | Robotics |
What field is the article from? | Title: Federated Learning for 6G: Paradigms, Taxonomy, Recent Advances and Insights
Abstract: Artificial Intelligence (AI) is expected to play an instrumental role in the
next generation of wireless systems, such as sixth-generation (6G) mobile
network. However, massive data, energy consumption, training complexity, and
sensitive data protection in wireless systems are all crucial challenges that
must be addressed for training AI models and gathering intelligence and
knowledge from distributed devices. Federated Learning (FL) is a recent
framework that has emerged as a promising approach for multiple learning agents
to build accurate and robust machine learning models without sharing raw
data. By allowing mobile handsets and devices to collaboratively learn a global
model without explicit sharing of training data, FL exhibits high privacy and
efficient spectrum utilization. While many survey papers explore FL paradigms
and usability in 6G privacy, none of them has clearly
addressed how FL can be used to improve the protocol stack and wireless
operations. The main goal of this survey is to provide a comprehensive overview
on FL usability to enhance mobile services and enable smart ecosystems to
support novel use-cases. This paper examines the added-value of implementing FL
throughout all levels of the protocol stack. Furthermore, it presents important
FL applications, addresses hot topics, provides valuable insights and explicit
guidance for future research and developments. Our concluding remarks aim to
leverage the synergy between FL and future 6G, while highlighting FL's
potential to revolutionize wireless industry and sustain the development of
cutting-edge mobile services. | Machine Learning |
What field is the article from? | Title: Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders
Abstract: Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite
their success, ViTs lack inductive biases, which can make it difficult to train
them with limited data. To address this challenge, prior studies suggest
training ViTs with self-supervised learning (SSL) and fine-tuning sequentially.
However, we observe that jointly optimizing ViTs for the primary task and a
Self-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the
amount of training data is limited. We explore the appropriate SSL tasks that
can be optimized alongside the primary task, the training schemes for these
tasks, and the data scale at which they can be most effective. Our findings
reveal that SSAT is a powerful technique that enables ViTs to leverage the
unique characteristics of both the self-supervised and primary tasks, achieving
better performance than typical ViT pre-training with SSL and fine-tuning
sequentially. Our experiments, conducted on 10 datasets, demonstrate that SSAT
significantly improves ViT performance while reducing carbon footprint. We also
confirm the effectiveness of SSAT in the video domain for deepfake detection,
showcasing its generalizability. Our code is available at
https://github.com/dominickrei/Limited-data-vits. | Computer Vision |
What field is the article from? | Title: Automaton Distillation: Neuro-Symbolic Transfer Learning for Deep Reinforcement Learning
Abstract: Reinforcement learning (RL) is a powerful tool for finding optimal policies
in sequential decision processes. However, deep RL methods suffer from two
weaknesses: collecting the amount of agent experience required for practical RL
problems is prohibitively expensive, and the learned policies exhibit poor
generalization on tasks outside of the training distribution. To mitigate these
issues, we introduce automaton distillation, a form of neuro-symbolic transfer
learning in which Q-value estimates from a teacher are distilled into a
low-dimensional representation in the form of an automaton. We then propose two
methods for generating Q-value estimates: static transfer, which reasons over
an abstract Markov Decision Process constructed based on prior knowledge, and
dynamic transfer, where symbolic information is extracted from a teacher Deep
Q-Network (DQN). The resulting Q-value estimates from either method are used to
bootstrap learning in the target environment via a modified DQN loss function.
We list several failure modes of existing automaton-based transfer methods and
demonstrate that both static and dynamic automaton distillation decrease the
time required to find optimal policies for various decision tasks. | Machine Learning |
What field is the article from? | Title: The language of prompting: What linguistic properties make a prompt successful?
Abstract: The latest generation of LLMs can be prompted to achieve impressive zero-shot
or few-shot performance in many NLP tasks. However, since performance is highly
sensitive to the choice of prompts, considerable effort has been devoted to
crowd-sourcing prompts or designing methods for prompt optimisation. Yet, we
still lack a systematic understanding of how linguistic properties of prompts
correlate with task performance. In this work, we investigate how LLMs of
different sizes, pre-trained and instruction-tuned, perform on prompts that are
semantically equivalent, but vary in linguistic structure. We investigate both
grammatical properties such as mood, tense, aspect and modality, as well as
lexico-semantic variation through the use of synonyms. Our findings contradict
the common assumption that LLMs achieve optimal performance on lower perplexity
prompts that reflect language use in pretraining or instruction-tuning data.
Prompts transfer poorly between datasets or models, and performance cannot
generally be explained by perplexity, word frequency, ambiguity or prompt
length. Based on our results, we put forward a proposal for a more robust and
comprehensive evaluation standard for prompting research. | Computational Linguistics |
What field is the article from? | Title: Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations
Abstract: The significant progress of large language models (LLMs) provides a promising
opportunity to build human-like systems for various practical applications.
However, when applied to specific task domains, an LLM pre-trained on a
general-purpose corpus may exhibit a deficit or inadequacy in two types of
domain-specific knowledge. One is a comprehensive set of domain data that is
typically large-scale and continuously evolving. The other is specific working
patterns of this domain reflected in the data. The absence or inadequacy of
such knowledge impacts the performance of the LLM. In this paper, we propose a
general paradigm that augments LLMs with DOmain-specific KnowledgE to enhance
their performance on practical applications, namely DOKE. This paradigm relies
on a domain knowledge extractor, working in three steps: 1) preparing effective
knowledge for the task; 2) selecting the knowledge for each specific sample;
and 3) expressing the knowledge in an LLM-understandable way. Then, the
extracted knowledge is incorporated through prompts, without any computational
cost of model fine-tuning. We instantiate the general paradigm on a widespread
application, i.e. recommender systems, where critical item attributes and
collaborative filtering signals are incorporated. Experimental results
demonstrate that DOKE can substantially improve the performance of LLMs in
specific domains. | Information Retrieval |
What field is the article from? | Title: Understanding Parameter Saliency via Extreme Value Theory
Abstract: Deep neural networks have been increasingly deployed throughout
society in recent years. It is useful to identify which parameters trigger
misclassification in diagnosing undesirable model behaviors. The concept of
parameter saliency is proposed and used to diagnose convolutional neural
networks (CNNs) by ranking convolution filters that may have caused
misclassification on the basis of parameter saliency. It is also shown that
fine-tuning the top ranking salient filters efficiently corrects
misidentification on ImageNet. However, there is still a knowledge gap in terms
of understanding why parameter saliency ranking can find the filters inducing
misidentification. In this work, we attempt to bridge the gap by analyzing
parameter saliency ranking from a statistical viewpoint, namely, extreme value
theory. We first show that the existing work implicitly assumes that the
gradient norm computed for each filter follows a normal distribution. Then, we
clarify the relationship between parameter saliency and the score based on the
peaks-over-threshold (POT) method, which is often used to model extreme values.
Finally, we reformulate parameter saliency in terms of the POT method, where
this reformulation is regarded as statistical anomaly detection and does not
require the implicit assumptions of the existing parameter-saliency
formulation. Our experimental results demonstrate that our reformulation can
detect malicious filters as well. Furthermore, we show that the existing
parameter saliency method exhibits a bias against the depth of layers in deep
neural networks. In particular, this bias has the potential to inhibit the
discovery of filters that cause misidentification in situations where domain
shift occurs. In contrast, parameter saliency based on POT shows less of this
bias. | Computer Vision |
What field is the article from? | Title: To Tell The Truth: Language of Deception and Language Models
Abstract: Text-based misinformation permeates online discourses, yet evidence of
people's ability to discern truth from such deceptive textual content is
scarce. We analyze a novel TV game show data where conversations in a
high-stake environment between individuals with conflicting objectives result
in lies. We investigate the manifestation of potentially verifiable language
cues of deception in the presence of objective truth, a distinguishing feature
absent in previous text-based deception datasets. We show that there exists a
class of detectors (algorithms) that have similar truth detection performance
compared to human subjects, even when the former accesses only the language
cues while the latter engages in conversations with complete access to all
potential sources of cues (language and audio-visual). Our model, built on a
large language model, employs a bottleneck framework to learn discernible cues
to determine truth, an act of reasoning in which human subjects often perform
poorly, even with incentives. Our model detects novel but accurate language
cues in many cases where humans failed to detect deception, opening up the
possibility of humans collaborating with algorithms and ameliorating their
ability to detect the truth. | Computational Linguistics |
What field is the article from? | Title: Towards Generic Anomaly Detection and Understanding: Large-scale Visual-linguistic Model (GPT-4V) Takes the Lead
Abstract: Anomaly detection is a crucial task across different domains and data types.
However, existing anomaly detection models are often designed for specific
domains and modalities. This study explores the use of GPT-4V(ision), a
powerful visual-linguistic model, to address anomaly detection tasks in a
generic manner. We investigate the application of GPT-4V in multi-modality,
multi-domain anomaly detection tasks, including image, video, point cloud, and
time series data, across multiple application areas, such as industrial,
medical, logical, video, 3D anomaly detection, and localization tasks. To
enhance GPT-4V's performance, we incorporate different kinds of additional cues
such as class information, human expertise, and reference images as
prompts. Based on our experiments, GPT-4V proves to be highly effective in
detecting and explaining global and fine-grained semantic patterns in
zero/one-shot anomaly detection. This enables accurate differentiation between
normal and abnormal instances. Although we conducted extensive evaluations in
this study, there is still room for future evaluation to further exploit
GPT-4V's generic anomaly detection capacity from different aspects. These
include exploring quantitative metrics, expanding evaluation benchmarks,
incorporating multi-round interactions, and incorporating human feedback loops.
Nevertheless, GPT-4V exhibits promising performance in generic anomaly
detection and understanding, thus opening up a new avenue for anomaly
detection. | Computer Vision |
What field is the article from? | Title: Continual Learning with Low Rank Adaptation
Abstract: Recent work using pretrained transformers has shown impressive performance
when fine-tuned with data from the downstream problem of interest. However,
they struggle to retain that performance when the data characteristics change.
In this paper, we focus on continual learning, where a pre-trained transformer
is updated to perform well on new data, while retaining its performance on data
it was previously trained on. Earlier works have tackled this primarily through
methods inspired by prompt tuning. We question this choice, and investigate
the applicability of Low Rank Adaptation (LoRA) to continual learning. On a
range of domain-incremental learning benchmarks, our LoRA-based solution,
CoLoR, yields state-of-the-art performance, while still being as parameter
efficient as the prompt tuning based methods. | Machine Learning |
What field is the article from? | Title: SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control
Abstract: The field of generative image inpainting and object insertion has made
significant progress with the recent advent of latent diffusion models.
Utilizing a precise object mask can greatly enhance these applications.
However, due to the challenges users encounter in creating high-fidelity masks,
there is a tendency for these methods to rely on more coarse masks (e.g.,
bounding box) for these applications. This results in limited control and
compromised background content preservation. To overcome these limitations, we
introduce SmartMask, which allows any novice user to create detailed masks for
precise object insertion. Combined with a ControlNet-Inpaint model, our
experiments demonstrate that SmartMask achieves superior object insertion
quality, preserving the background content more effectively than previous
methods. Notably, unlike prior works, the proposed approach can also be used
even without user-mask guidance, which allows it to perform mask-free object
insertion at diverse positions and scales. Furthermore, we find that when used
iteratively with a novel instruction-tuning based planning model, SmartMask can
be used to design detailed layouts from scratch. As compared with user-scribble
based layout design, we observe that SmartMask allows for better quality
outputs with layout-to-image generation methods. Project page is available at
https://smartmask-gen.github.io | Computer Vision |
What field is the article from? | Title: Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts
Abstract: Existing work on jailbreak Multimodal Large Language Models (MLLMs) has
focused primarily on adversarial examples in model inputs, with less attention
to vulnerabilities in model APIs. To fill the research gap, we carry out the
following work: 1) We discover a system prompt leakage vulnerability in GPT-4V.
Through carefully designed dialogue, we successfully steal the internal system
prompts of GPT-4V. This finding indicates potential exploitable security risks
in MLLMs; 2) Based on the acquired system prompts, we propose a novel MLLM
jailbreaking attack method termed SASP (Self-Adversarial Attack via System
Prompt). By employing GPT-4 as a red teaming tool against itself, we aim to
search for potential jailbreak prompts leveraging stolen system prompts.
Furthermore, in pursuit of better performance, we also add human modification
based on GPT-4's analysis, which further improves the attack success rate to
98.7\%; 3) We evaluated the effect of modifying system prompts to defend
against jailbreaking attacks. Results show that appropriately designed system
prompts can significantly reduce jailbreak success rates. Overall, our work
provides new insights into enhancing MLLM security, demonstrating the important
role of system prompts in jailbreaking, which could be leveraged to greatly
facilitate jailbreak success rates while also holding the potential for
defending against jailbreaks. | Cryptography and Security |
What field is the article from? | Title: Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models
Abstract: Transformers are remarkably good at in-context learning (ICL) -- learning
from demonstrations without parameter updates -- but how they perform ICL
remains a mystery. Recent work suggests that Transformers may learn in-context
by internally running Gradient Descent, a first-order optimization method. In
this paper, we instead demonstrate that Transformers learn to implement
higher-order optimization methods to perform ICL. Focusing on in-context linear
regression, we show that Transformers learn to implement an algorithm very
similar to Iterative Newton's Method, a higher-order optimization method,
rather than Gradient Descent. Empirically, we show that predictions from
successive Transformer layers closely match different iterations of Newton's
Method linearly, with each middle layer roughly computing 3 iterations. In
contrast, exponentially more Gradient Descent steps are needed to match an
additional Transformers layer; this suggests that Transformers have a
comparable rate of convergence with higher-order methods such as Iterative
Newton, which are exponentially faster than Gradient Descent. We also show that
Transformers can learn in-context on ill-conditioned data, a setting where
Gradient Descent struggles but Iterative Newton succeeds. Finally, we show
theoretical results which support our empirical findings and have a close
correspondence with them: we prove that Transformers can implement $k$
iterations of Newton's method with $\mathcal{O}(k)$ layers. | Machine Learning |
What field is the article from? | Title: Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability
Abstract: Grasslands are known for their high biodiversity and ability to provide
multiple ecosystem services. Challenges in automating the identification of
indicator plants are key obstacles to large-scale grassland monitoring. These
challenges stem from the scarcity of extensive datasets, the distributional
shifts between generic and grassland-specific datasets, and the inherent
opacity of deep learning models. This paper delves into the latter two
challenges, with a specific focus on transfer learning and eXplainable
Artificial Intelligence (XAI) approaches to grassland monitoring, highlighting
the novelty of XAI in this domain. We analyze various transfer learning methods
to bridge the distributional gaps between generic and grassland-specific
datasets. Additionally, we showcase how explainable AI techniques can unveil
the model's domain adaptation capabilities, employing quantitative assessments
to evaluate the model's proficiency in accurately centering relevant input
features around the object of interest. This research contributes valuable
insights for enhancing model performance through transfer learning and
measuring domain adaptability with explainable AI, showing significant promise
for broader applications within the agricultural community. | Machine Learning |
What field is the article from? | Title: Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations
Abstract: In the growing world of artificial intelligence, federated learning is a
distributed learning framework enhanced to preserve the privacy of individuals'
data. Federated learning lays the groundwork for collaborative research in
areas where the data is sensitive. Federated learning has several implications
for real-world problems. In times of crisis, when real-time decision-making is
critical, federated learning allows multiple entities to work collectively
without sharing sensitive data. This distributed approach enables us to
leverage information from multiple sources and gain more diverse insights. This
paper is a systematic review of the literature on privacy-preserving machine
learning in the last few years based on the Preferred Reporting Items for
Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we have
presented an extensive review of supervised/unsupervised machine learning
algorithms, ensemble methods, meta-heuristic approaches, blockchain technology,
and reinforcement learning used in the framework of federated learning, in
addition to an overview of federated learning applications. This paper reviews
the literature on the components of federated learning and its applications in
the last few years. The main purpose of this work is to provide researchers and
practitioners with a comprehensive overview of federated learning from the
machine learning point of view. A discussion of some open problems and future
research directions in federated learning is also provided. | Machine Learning |
What field is the article from? | Title: DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding
Abstract: Social intelligence is essential for understanding and reasoning about human
expressions, intents and interactions. One representative benchmark for its
study is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice
questions on videos of complex social interactions. We define a comprehensive
methodology to study the soundness of Social-IQ, as the soundness of such
benchmark datasets is crucial to the investigation of the underlying research
problem. Our analysis reveals that Social-IQ contains substantial biases, which
can be exploited by a moderately strong language model to learn spurious
correlations to achieve perfect performance without being given the context or
even the question. We introduce DeSIQ, a new challenging dataset, constructed
by applying simple perturbations to Social-IQ. Our empirical analysis shows
DeSIQ significantly reduces the biases in the original Social-IQ dataset.
Furthermore, we examine and shed light on the effect of model size, model
style, learning settings, commonsense knowledge, and multi-modality on the new
benchmark performance. Our new dataset, observations and findings open up
important research questions for the study of social intelligence. | Computational Linguistics |
What field is the article from? | Title: MASP: Scalable GNN-based Planning for Multi-Agent Navigation
Abstract: We investigate the problem of decentralized multi-agent navigation tasks,
where multiple agents need to reach initially unassigned targets in a limited
time. Classical planning-based methods suffer from expensive computation
overhead at each step and offer limited expressiveness for complex cooperation
strategies. In contrast, reinforcement learning (RL) has recently become a
popular paradigm for addressing this issue. However, RL struggles with low data
efficiency and cooperation when directly exploring (nearly) optimal policies in
the large search space, especially with an increased agent number (e.g., 10+
agents) or in complex environments (e.g., 3D simulators). In this paper, we
propose Multi-Agent Scalable GNN-based Planner (MASP), a goal-conditioned
hierarchical planner for navigation tasks with a substantial number of agents.
MASP adopts a hierarchical framework to divide a large search space into
multiple smaller spaces, thereby reducing the space complexity and accelerating
training convergence. We also leverage graph neural networks (GNN) to model the
interaction between agents and goals, improving goal achievement. Besides, to
enhance generalization capabilities in scenarios with unseen team sizes, we
divide agents into multiple groups, each with a previously trained number of
agents. The results demonstrate that MASP outperforms classical planning-based
competitors and RL baselines, achieving a nearly 100% success rate with minimal
training data in both multi-agent particle environments (MPE) with 50 agents
and a quadrotor 3-dimensional environment (OmniDrones) with 20 agents.
Furthermore, the learned policy showcases zero-shot generalization across
unseen team sizes. | Machine Learning |
What field is the article from? | Title: Adinkra Symbol Recognition using Classical Machine Learning and Deep Learning
Abstract: Artificial intelligence (AI) has emerged as a transformative influence,
engendering paradigm shifts in global societies, spanning academia and
industry. However, in light of these rapid advances, addressing the
underrepresentation of black communities and African countries in AI is
crucial. Boosting enthusiasm for AI can be effectively accomplished by
showcasing straightforward applications around tasks like identifying and
categorizing traditional symbols, such as Adinkra symbols, or familiar objects
within the community. In this research endeavor, we dived into classical
machine learning and harnessed the power of deep learning models to tackle the
intricate task of classifying and recognizing Adinkra symbols. The idea led to
a newly constructed ADINKRA dataset comprising 174,338 images meticulously
organized into 62 distinct classes, each representing a singular and emblematic
symbol. We constructed a CNN model for classification and recognition using six
convolutional layers, three fully connected (FC) layers, and optional dropout
regularization. The model is a simpler and smaller version of VGG, with fewer
layers, smaller channel sizes, and a fixed kernel size. Additionally, we tap
into the transfer learning capabilities provided by pre-trained models like VGG
and ResNet. These models assist us in both classifying images and extracting
features that can be used with classical machine learning models. We assess the
model's performance by measuring its accuracy and convergence rate and
visualizing the areas that significantly influence its predictions. These
evaluations serve as a foundational benchmark for future assessments of the
ADINKRA dataset. We hope this application exemplar inspires ideas on the
various uses of AI in organizing our traditional and modern lives. | Computer Vision |
What field is the article from? | Title: PromptBench: A Unified Library for Evaluation of Large Language Models
Abstract: The evaluation of large language models (LLMs) is crucial to assess their
performance and mitigate potential security risks. In this paper, we introduce
PromptBench, a unified library to evaluate LLMs. It consists of several key
components that are easily used and extended by researchers: prompt
construction, prompt engineering, dataset and model loading, adversarial prompt
attack, dynamic evaluation protocols, and analysis tools. PromptBench is
designed to be an open, general, and flexible codebase for research purposes
that can facilitate original study in creating new benchmarks, deploying
downstream applications, and designing new evaluation protocols. The code is
available at: https://github.com/microsoft/promptbench and will be continuously
supported. | Artificial Intelligence |
What field is the article from? | Title: A Survey on Knowledge Editing of Neural Networks
Abstract: Deep neural networks are becoming increasingly pervasive in academia and
industry, matching and surpassing human performance on a wide variety of fields
and related tasks. However, just as humans, even the largest artificial neural
networks make mistakes, and once-correct predictions can become invalid as the
world progresses in time. Augmenting datasets with samples that account for
mistakes or up-to-date information has become a common workaround in practical
applications. However, the well-known phenomenon of catastrophic forgetting
poses a challenge in achieving precise changes in the implicitly memorized
knowledge of neural network parameters, often requiring a full model
re-training to achieve desired behaviors. That is expensive, unreliable, and
incompatible with the current trend of large self-supervised pre-training,
making it necessary to find more efficient and effective methods for adapting
neural network models to changing data. To address this need, knowledge editing
is emerging as a novel area of research that aims to enable reliable,
data-efficient, and fast changes to a pre-trained target model, without
affecting model behaviors on previously learned tasks. In this survey, we
provide a brief review of this recent artificial intelligence field of
research. We first introduce the problem of editing neural networks, formalize
it in a common framework and differentiate it from more notorious branches of
research such as continuous learning. Next, we provide a review of the most
relevant knowledge editing approaches and datasets proposed so far, grouping
works under four different families: regularization techniques, meta-learning,
direct model editing, and architectural strategies. Finally, we outline some
intersections with other fields of research and potential directions for future
works. | Machine Learning |
What field is the article from? | Title: ZeST-NeRF: Using temporal aggregation for Zero-Shot Temporal NeRFs
Abstract: In the field of media production, video editing techniques play a pivotal
role. Recent approaches have had great success at performing novel view image
synthesis of static scenes. But adding temporal information adds an extra layer
of complexity. Previous models have focused on implicitly representing static
and dynamic scenes using NeRF. These models achieve impressive results but are
costly at training and inference time. They overfit an MLP to describe the
scene implicitly as a function of position. This paper proposes ZeST-NeRF, a
new approach that can produce temporal NeRFs for new scenes without retraining.
We can accurately reconstruct novel views using multi-view synthesis techniques
and scene flow-field estimation, trained only with unrelated scenes. We
demonstrate how existing state-of-the-art approaches from a range of fields
cannot adequately solve this new task and demonstrate the efficacy of our
solution. The resulting network improves quantitatively by 15% and produces
significantly better visual results. | Computer Vision |
What field is the article from? | Title: Mutual Enhancement of Large and Small Language Models with Cross-Silo Knowledge Transfer
Abstract: While large language models (LLMs) are empowered with broad knowledge, their
task-specific performance is often suboptimal. It necessitates fine-tuning LLMs
with task-specific data, but such data may be inaccessible due to privacy
concerns. In this paper, we propose a novel approach to enhance LLMs with
smaller language models (SLMs) that are trained on clients using their private
task-specific data. To enable mutual enhancement between LLMs and SLMs, we
propose CrossLM, where the SLMs promote the LLM to generate task-specific
high-quality data, and both the LLM and SLMs are enhanced with the generated
data. We evaluate CrossLM using publicly accessible language models across a
range of benchmark tasks. The results demonstrate that CrossLM significantly
enhances the task-specific performance of SLMs on clients and the LLM on the
cloud server simultaneously while preserving the LLM's generalization
capability. | Artificial Intelligence |
What field is the article from? | Title: Meta-learning of semi-supervised learning from tasks with heterogeneous attribute spaces
Abstract: We propose a meta-learning method for semi-supervised learning that learns
from multiple tasks with heterogeneous attribute spaces. The existing
semi-supervised meta-learning methods assume that all tasks share the same
attribute space, which prevents us from learning with a wide variety of tasks.
With the proposed method, the expected test performance on tasks with a small
amount of labeled data is improved with unlabeled data as well as data in
various tasks, where the attribute spaces are different among tasks. The
proposed method embeds labeled and unlabeled data simultaneously in a
task-specific space using a neural network, and the unlabeled data's labels are
estimated by adapting classification or regression models in the embedding
space. For the neural network, we develop variable-feature self-attention
layers, which enable us to find embeddings of data with different attribute
spaces with a single neural network by considering interactions among examples,
attributes, and labels. Our experiments on classification and regression
datasets with heterogeneous attribute spaces demonstrate that our proposed
method outperforms the existing meta-learning and semi-supervised learning
methods. | Machine Learning |