instruction (1 unique value) | input (string, 260-2.07k chars) | output (10 unique values) |
---|---|---|
What field is the article from? | Title: Axiomatic Preference Modeling for Longform Question Answering
Abstract: The remarkable abilities of large language models (LLMs) like GPT-4 partially
stem from post-training processes like Reinforcement Learning from Human
Feedback (RLHF) involving human preferences encoded in a reward model. However,
these reward models (RMs) often lack direct knowledge of why, or under what
principles, the preference annotations were made. In this study, we identify
principles that guide RMs to better align with human preferences, and then
develop an axiomatic framework to generate a rich variety of preference signals
to uphold them. We use these axiomatic signals to train a model for scoring
answers to longform questions. Our approach yields a Preference Model with only
about 220M parameters that agrees with gold human-annotated preference labels
more often than GPT-4. The contributions of this work include: training a
standalone preference model that can score human- and LLM-generated answers on
the same scale; developing an axiomatic framework for generating training data
pairs tailored to certain principles; and showing that a small amount of
axiomatic signals can help small models outperform GPT-4 in preference scoring.
We release our model on huggingface:
https://huggingface.co/corbyrosset/axiomatic_preference_model | Artificial Intelligence |
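Editor's note: since this row links a released checkpoint, a hedged loading sketch may be useful. It assumes the repository exposes a standard Hugging Face sequence-classification head and scores a (question, answer) text pair; both are assumptions to verify against the model card.

```python
# Hedged sketch: scoring one longform answer with the released checkpoint.
# Assumptions (check the model card): the repo loads through the standard
# sequence-classification API and takes a (question, answer) text pair.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "corbyrosset/axiomatic_preference_model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "Why is the sky blue?",
    "Sunlight scatters off air molecules; shorter blue wavelengths scatter most.",
    return_tensors="pt",
    truncation=True,
)
print(model(**inputs).logits)  # a higher score indicates a more preferred answer
```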
What field is the article from? | Title: Survey on AI Ethics: A Socio-technical Perspective
Abstract: The past decade has observed a great advancement in AI with deep
learning-based models being deployed in diverse scenarios including
safety-critical applications. As these AI systems become deeply embedded in our
societal infrastructure, the repercussions of their decisions and actions have
significant consequences, making the ethical implications of AI deployment
highly relevant and important. The ethical concerns associated with AI are
multifaceted, including challenging issues of fairness, privacy and data
protection, responsibility and accountability, safety and robustness,
transparency and explainability, and environmental impact. These principles
together form the foundations of ethical AI considerations that concern every
stakeholder in the AI system lifecycle. In light of the present ethical and
future x-risk concerns, governments have shown increasing interest in
establishing guidelines for the ethical deployment of AI. This work unifies the
current and future ethical concerns of deploying AI into society. While we
acknowledge and appreciate the technical surveys for each of the ethical
principles concerned, in this paper, we aim to provide a comprehensive overview
that not only addresses each principle from a technical point of view but also
discusses them from a social perspective. | Computers and Society |
What field is the article from? | Title: QWID: Quantized Weed Identification Deep neural network
Abstract: In this paper, we present an efficient solution for weed classification in
agriculture. We focus on optimizing model performance at inference while
respecting the constraints of the agricultural domain. We propose a Quantized
Deep Neural Network model that classifies a dataset of 9 weed classes using
8-bit integer (int8) quantization, a departure from standard 32-bit floating
point (fp32) models. Recognizing the hardware resource limitations in
agriculture, our model balances model size, inference time, and accuracy,
aligning with practical requirements. We evaluate the approach on ResNet-50 and
InceptionV3 architectures, comparing their performance against their int8
quantized versions. Transfer learning and fine-tuning are applied using the
DeepWeeds dataset. The results show staggering model size and inference time
reductions while maintaining accuracy in real-world production scenarios like
Desktop, Mobile and Raspberry Pi. Our work sheds light on a promising direction
for efficient AI in agriculture, holding potential for broader applications.
Code: https://github.com/parikshit14/QNN-for-weed | Computer Vision |
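Editor's note: the fp32-to-int8 trade described in this row can be pictured with a generic PyTorch sketch. This is post-training dynamic quantization of a toy classifier, not the paper's static-quantization pipeline for ResNet-50/InceptionV3.

```python
# Generic post-training dynamic int8 quantization in PyTorch -- illustrates
# the fp32 -> int8 size/latency idea, not the QWID pipeline itself.
import torch
import torch.nn as nn

model_fp32 = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 256),
    nn.ReLU(),
    nn.Linear(256, 9),  # 9 weed classes, as in DeepWeeds
)

# Linear weights are stored as int8; activations are quantized on the fly.
model_int8 = torch.ao.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3, 224, 224)
print(model_int8(x).shape)  # torch.Size([1, 9])
```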
What field is the article from? | Title: Can Reinforcement Learning support policy makers? A preliminary study with Integrated Assessment Models
Abstract: Governments around the world aspire to ground decision-making on evidence.
Many of the foundations of policy making - e.g. sensing patterns that relate to
societal needs, developing evidence-based programs, forecasting potential
outcomes of policy changes, and monitoring effectiveness of policy programs -
have the potential to benefit from the use of large-scale datasets or
simulations together with intelligent algorithms. These could, if designed and
deployed in a way that is well grounded on scientific evidence, enable a more
comprehensive, faster, and rigorous approach to policy making. Integrated
Assessment Models (IAMs) are a broad umbrella covering scientific models that
attempt to link the main features of society and economy with the biosphere into
one modelling framework. At present, these systems are probed by policy makers
and advisory groups in a hypothesis-driven manner. In this paper, we
empirically demonstrate that modern Reinforcement Learning can be used to probe
IAMs and explore the space of solutions in a more principled manner. While the
implications of our results are modest since the environment is simplistic, we
believe that this is a stepping stone towards more ambitious use cases, which
could allow for effective exploration of policies and understanding of their
consequences and limitations. | Artificial Intelligence |
What field is the article from? | Title: Spatio-Temporal Anomaly Detection with Graph Networks for Data Quality Monitoring of the Hadron Calorimeter
Abstract: The compact muon solenoid (CMS) experiment is a general-purpose detector for
high-energy collisions at the Large Hadron Collider (LHC) at CERN. It employs an
online data quality monitoring (DQM) system to promptly spot and diagnose
particle data acquisition problems to avoid data quality loss. In this study,
we present semi-supervised spatio-temporal anomaly detection (AD) monitoring
for the physics particle reading channels of the hadronic calorimeter (HCAL) of
the CMS using three-dimensional digi-occupancy map data of the DQM. We propose
the GraphSTAD system, which employs convolutional and graph neural networks to
learn local spatial characteristics induced by particles traversing the
detector, and global behavior owing to shared backend circuit connections and
housing boxes of the channels, respectively. Recurrent neural networks capture
the temporal evolution of the extracted spatial features. We have validated the
accuracy of the proposed AD system in capturing diverse channel fault types
using the LHC Run-2 collision data sets. The GraphSTAD system has achieved
production-level accuracy and is being integrated into the CMS core production
system for real-time monitoring of the HCAL. We have also provided a
quantitative performance comparison with alternative benchmark models to
demonstrate the promising leverage of the presented system. | Machine Learning |
What field is the article from? | Title: Word for Person: Zero-shot Composed Person Retrieval
Abstract: Searching for a specific person has great security value and social benefits,
and it often involves a combination of visual and textual information.
Conventional person retrieval methods, whether image-based or text-based,
usually fall short in effectively harnessing both types of information, leading
to a loss of accuracy. In this paper, a whole new task called Composed Person
Retrieval (CPR) is proposed to jointly utilize both image and text information
for target person retrieval. However, supervised CPR depends on a very costly
manually annotated dataset, and no such resources are currently available. To
mitigate this issue, we first introduce Zero-shot Composed Person Retrieval
(ZS-CPR), which leverages existing domain-related data to resolve the CPR
problem without reliance on expensive annotations. Secondly, to learn the
ZS-CPR model, we propose a two-stage learning framework, Word4Per, where
a lightweight Textual Inversion Network (TINet) and a text-based person
retrieval model based on fine-tuned Contrastive Language-Image Pre-training
(CLIP) network are learned without utilizing any CPR data. Thirdly, a finely
annotated Image-Text Composed Person Retrieval dataset (ITCPR) is built as the
benchmark to assess the performance of the proposed Word4Per framework.
Extensive experiments under both Rank-1 and mAP demonstrate the effectiveness
of Word4Per for the ZS-CPR task, surpassing the comparative methods by over
10%. The code and ITCPR dataset will be publicly available at
https://github.com/Delong-liu-bupt/Word4Per. | Computer Vision |
What field is the article from? | Title: Peer Learning: Learning Complex Policies in Groups from Scratch via Action Recommendations
Abstract: Peer learning is a novel high-level reinforcement learning framework for
agents learning in groups. While standard reinforcement learning trains an
individual agent in trial-and-error fashion, all on its own, peer learning
addresses a related setting in which a group of agents, i.e., peers, learns to
master a task simultaneously together from scratch. Peers are allowed to
communicate only about their own states and actions recommended by others:
"What would you do in my situation?". Our motivation is to study the learning
behavior of these agents. We formalize the teacher selection process in the
action advice setting as a multi-armed bandit problem and therefore highlight
the need for exploration. Eventually, we analyze the learning behavior of the
peers and observe their ability to rank the agents' performance within the
study group and understand which agents give reliable advice. Further, we
compare peer learning with single agent learning and a state-of-the-art action
advice baseline. We show that peer learning is able to outperform single-agent
learning and the baseline in several challenging discrete and continuous OpenAI
Gym domains. In doing so, we also show that, within such a framework, complex
policies can evolve from action recommendations beyond discrete action spaces. | Machine Learning |
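Editor's note: the bandit framing of teacher selection in this row is easy to make concrete. Below is a plain UCB1 sketch over peers; all names and the reward stand-in are illustrative, not from the paper's code.

```python
# UCB1 over peers: pick which peer to ask for advice; reward = how useful
# the advice was. A toy stand-in for the paper's teacher-selection bandit.
import math
import random

class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def select(self):
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm  # play every arm once before using the bound
        t = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(t) / c)
               for v, c in zip(self.values, self.counts)]
        return ucb.index(max(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = UCB1(n_arms=4)  # e.g. four peers to consult
for _ in range(1000):
    peer = bandit.select()
    reward = random.gauss(0.2 * peer, 1.0)  # stand-in for advice quality
    bandit.update(peer, reward)
print(bandit.counts)  # the most helpful peer should dominate
```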
What field is the article from? | Title: Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment Analysis
Abstract: A multi-modal emotion recognition method was established by combining a
two-channel convolutional neural network with a ring network. This method can
extract emotional information effectively and improve learning efficiency. The
words were vectorized with GloVe, and the word vectors were input into the
convolutional neural network. Combining an attention mechanism with a
max-pooling BiSRU channel, the local deep emotion features and the pre- and
post-sequential emotion semantics are obtained. Finally, multiple features are
fused and used to predict emotion polarity, achieving sentiment analysis of the
target. Experiments show that the emotion analysis method based on feature
fusion can effectively improve recognition accuracy on the emotion dataset and
reduce the learning time. The model also shows a degree of generalization. | Artificial Intelligence |
What field is the article from? | Title: Understanding Practices around Computational News Discovery Tools in the Domain of Science Journalism
Abstract: Science and technology journalists today face challenges in finding
newsworthy leads due to increased workloads, reduced resources, and expanding
scientific publishing ecosystems. Given this context, we explore computational
methods to aid these journalists' news discovery in terms of time-efficiency
and agency. In particular, we prototyped three computational information
subsidies into an interactive tool that we used as a probe to better understand
how such a tool may offer utility or more broadly shape the practices of
professional science journalists. Our findings highlight central considerations
around science journalists' agency, context, and responsibilities that such
tools can influence and could account for in design. Based on this, we suggest
design opportunities for greater and longer-term user agency; incorporating
contextual, personal and collaborative notions of newsworthiness; and
leveraging flexible interfaces and generative models. Overall, our findings
contribute a richer view of the sociotechnical system around computational news
discovery tools, and suggest ways to improve such tools to better support the
practices of science journalists. | Human-Computer Interaction |
What field is the article from? | Title: Successor Features for Efficient Multisubject Controlled Text Generation
Abstract: While large language models (LLMs) have achieved impressive performance in
generating fluent and realistic text, controlling the generated text so that it
exhibits properties such as safety, factuality, and non-toxicity remains
challenging. Existing
decoding-based methods are static in terms of the dimension of control; if the
target subject is changed, they require new training. Moreover, it can quickly
become prohibitive to concurrently control multiple subjects. In this work, we
introduce SF-GEN, which is grounded in two primary concepts: successor features
(SFs) to decouple the LLM's dynamics from task-specific rewards, and language
model rectification to proportionally adjust the probability of selecting a
token based on the likelihood that the finished text becomes undesired. SF-GEN
seamlessly integrates the two to enable dynamic steering of text generation
with no need to alter the LLM's parameters. Thanks to the decoupling effect
induced by successor features, our method proves efficient in both memory and
computation for training as well as decoding, especially when
dealing with multiple target subjects. To the best of our knowledge, our
research represents the first application of successor features in text
generation. In addition to its computational efficiency, the resultant language
produced by our method is comparable to the SOTA (and outperforms baselines) in
both control measures and language quality, which we demonstrate through
a series of experiments in various controllable text generation tasks. | Computational Linguistics |
What field is the article from? | Title: SEA++: Multi-Graph-based High-Order Sensor Alignment for Multivariate Time-Series Unsupervised Domain Adaptation
Abstract: Unsupervised Domain Adaptation (UDA) methods have been successful in reducing
label dependency by minimizing the domain discrepancy between a labeled source
domain and an unlabeled target domain. However, these methods face challenges
when dealing with Multivariate Time-Series (MTS) data. MTS data typically
consist of multiple sensors, each with its own unique distribution. This
characteristic makes it hard to adapt existing UDA methods, which mainly focus
on aligning global features while overlooking the distribution discrepancies at
the sensor level, to reduce domain discrepancies for MTS data. To address this
issue, a practical domain adaptation scenario is formulated as Multivariate
Time-Series Unsupervised Domain Adaptation (MTS-UDA). In this paper, we propose
SEnsor Alignment (SEA) for MTS-UDA, aiming to reduce domain discrepancy at both
the local and global sensor levels. At the local sensor level, we design
endo-feature alignment, which aligns sensor features and their correlations
across domains. To reduce domain discrepancy at the global sensor level, we
design exo-feature alignment that enforces restrictions on global sensor
features. We further extend SEA to SEA++ by enhancing the endo-feature
alignment. Particularly, we incorporate multi-graph-based high-order alignment
for both sensor features and their correlations. Extensive empirical results
have demonstrated the state-of-the-art performance of our SEA and SEA++ on
public MTS datasets for MTS-UDA. | Machine Learning |
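Editor's note: for readers unfamiliar with the discrepancy terms such UDA methods minimize, a textbook RBF-kernel MMD between source and target sensor features is sketched below; this is a generic illustration, not SEA++'s alignment losses.

```python
# Textbook RBF-kernel MMD between source and target feature batches --
# the kind of domain-discrepancy term UDA minimizes (not SEA++'s losses).
import torch

def rbf_mmd(x, y, sigma=1.0):
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2  # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

src = torch.randn(64, 32)        # source-domain sensor features
tgt = torch.randn(64, 32) + 0.5  # shifted target-domain features
print(rbf_mmd(src, tgt).item())  # > 0 under domain shift
```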
What field is the article from? | Title: Beyond Isolation: Multi-Agent Synergy for Improving Knowledge Graph Construction
Abstract: Knowledge graph construction (KGC) is a multifaceted undertaking involving
the extraction of entities, relations, and events. Traditionally, large
language models (LLMs) have been viewed as solitary task-solving agents in this
complex landscape. However, this paper challenges this paradigm by introducing
a novel framework, CooperKGC. Departing from the conventional approach,
CooperKGC establishes a collaborative processing network, assembling a KGC
collaboration team capable of concurrently addressing entity, relation, and
event extraction tasks. Our experiments unequivocally demonstrate that
fostering collaboration and information interaction among diverse agents within
CooperKGC yields superior results compared to individual cognitive processes
operating in isolation. Importantly, our findings reveal that the collaboration
facilitated by CooperKGC enhances knowledge selection, correction, and
aggregation capabilities across multiple rounds of interactions. | Artificial Intelligence |
What field is the article from? | Title: Navigating Open Set Scenarios for Skeleton-based Action Recognition
Abstract: In real-world scenarios, human actions often fall outside the distribution of
training data, making it crucial for models to recognize known actions and
reject unknown ones. However, using pure skeleton data in such open-set
conditions poses challenges due to the lack of visual background cues and the
distinct sparse structure of body pose sequences. In this paper, we tackle the
unexplored Open-Set Skeleton-based Action Recognition (OS-SAR) task and
formalize the benchmark on three skeleton-based datasets. We assess the
performance of seven established open-set approaches on our task and identify
their limits and critical generalization issues when dealing with skeleton
information. To address these challenges, we propose a distance-based
cross-modality ensemble method that leverages the cross-modal alignment of
skeleton joints, bones, and velocities to achieve superior open-set recognition
performance. We refer to the key idea as CrossMax - an approach that utilizes a
novel cross-modality mean max discrepancy suppression mechanism to align latent
spaces during training and a cross-modality distance-based logits refinement
method during testing. CrossMax outperforms existing approaches and
consistently yields state-of-the-art results across all datasets and backbones.
The benchmark, code, and models will be released at
https://github.com/KPeng9510/OS-SAR. | Computer Vision |
What field is the article from? | Title: A Novel Neural Network-Based Federated Learning System for Imbalanced and Non-IID Data
Abstract: With the growth of machine learning techniques, privacy of data of users has
become a major concern. Most machine learning algorithms rely heavily on large
amounts of data, which may be collected from various sources. Collecting
these data yet maintaining privacy policies has become one of the most
challenging tasks for the researchers. To combat this issue, researchers have
introduced federated learning, where a prediction model is learnt while
ensuring the privacy of clients' data. However, the prevalent federated learning
algorithms exhibit an accuracy-efficiency trade-off, especially for non-IID
data. In this research, we propose a centralized, neural network-based
federated learning system. The centralized algorithm incorporates micro-level
parallel processing inspired by the traditional mini-batch algorithm where the
client devices and the server handle the forward and backward propagation
respectively. We also devise a semi-centralized version of our proposed
algorithm. This algorithm takes advantage of edge computing for minimizing the
load from the central server, where clients handle both the forward and
backward propagation while sacrificing the overall train time to some extent.
We evaluate our proposed systems on five well-known benchmark datasets and
achieve satisfactory performance in a reasonable time across various data
distribution settings as compared to some existing benchmark algorithms. | Machine Learning |
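Editor's note: for context, the aggregation step that centralized federated systems build on can be sketched in a few lines. This is textbook FedAvg-style weight averaging, not the paper's micro-level parallel or semi-centralized variants.

```python
# Textbook FedAvg-style aggregation: the server averages client weights,
# weighted by each client's sample count. Illustrative only.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: one list of np.ndarrays per client, matching shapes."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg

clients = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
global_weights = fed_avg(clients, client_sizes=[100, 50, 200])
print(global_weights[0].shape)  # (4, 4)
```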
What field is the article from? | Title: Large Trajectory Models are Scalable Motion Predictors and Planners
Abstract: Motion prediction and planning are vital tasks in autonomous driving, and
recent efforts have shifted to machine learning-based approaches. The
challenges include understanding diverse road topologies, reasoning about
traffic dynamics over a long time horizon, interpreting heterogeneous behaviors, and
generating policies in a large continuous state space. Inspired by the success
of large language models in addressing similar complexities through model
scaling, we introduce a scalable trajectory model called State Transformer
(STR). STR reformulates the motion prediction and motion planning problems by
arranging observations, states, and actions into one unified sequence modeling
task. With a simple model design, STR consistently outperforms baseline
approaches in both problems. Remarkably, experimental results reveal that large
trajectory models (LTMs), such as STR, adhere to the scaling laws by presenting
outstanding adaptability and learning efficiency. Qualitative results further
demonstrate that LTMs are capable of making plausible predictions in scenarios
that diverge significantly from the training data distribution. LTMs also learn
to perform complex reasoning for long-term planning, without explicit loss
designs or costly high-level annotations. | Robotics |
What field is the article from? | Title: Exploring the Potential of Generative AI for the World Wide Web
Abstract: Generative Artificial Intelligence (AI) is a cutting-edge technology capable
of producing text, images, and various media content leveraging generative
models and user prompts. Between 2022 and 2023, generative AI surged in
popularity with a plethora of applications spanning from AI-powered movies to
chatbots. In this paper, we delve into the potential of generative AI within
the realm of the World Wide Web, specifically focusing on image generation. Web
developers already harness generative AI to help craft text and images,
while Web browsers might use it in the future to locally generate images for
tasks like repairing broken webpages, conserving bandwidth, and enhancing
privacy. To explore this research area, we have developed WebDiffusion, a tool
that makes it possible to simulate a Web powered by Stable Diffusion, a popular
text-to-image model, from both a client and server perspective. WebDiffusion
further supports crowdsourcing of user opinions, which we use to evaluate the
quality and accuracy of 409 AI-generated images sourced from 60 webpages. Our
findings suggest that generative AI is already capable of producing pertinent
and high-quality Web images, even without requiring Web designers to manually
input prompts, just by leveraging contextual information available within the
webpages. However, we acknowledge that direct in-browser image generation
remains a challenge, as only highly powerful GPUs, such as the A40 and A100,
can (partially) compete with classic image downloads. Nevertheless, this
approach could be valuable for a subset of the images, for example when fixing
broken webpages or handling highly private content. | Artificial Intelligence |
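Editor's note: the kind of local generation WebDiffusion simulates can be sketched with the diffusers library. The model ID and the webpage-derived prompt below are illustrative, and a capable GPU is assumed, consistent with the row's A40/A100 observation.

```python
# Sketch of local text-to-image generation with Stable Diffusion via
# diffusers. Model ID and prompt are illustrative; a strong GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# In the paper's setting the prompt would come from webpage context,
# not a manually written string.
prompt = "hero image for an article on sustainable urban gardening"
image = pipe(prompt).images[0]
image.save("generated_web_image.png")
```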
What field is the article from? | Title: Data Science for Social Good
Abstract: Data science has been described as the fourth paradigm for scientific
discovery. The latest wave of data science research, pertaining to machine
learning and artificial intelligence (AI), is growing exponentially and
garnering millions of annual citations. However, this growth has been
accompanied by a diminishing emphasis on social good challenges - our analysis
reveals that the proportion of data science research focusing on social good is
less than it has ever been. At the same time, the proliferation of machine
learning and generative AI has sparked debates about the socio-technical
prospects and challenges associated with data science for human flourishing,
organizations, and society. Against this backdrop, we present a framework for
"data science for social good" (DSSG) research that considers the interplay
between relevant data science research genres, social good challenges, and
different levels of socio-technical abstraction. We perform an analysis of the
literature to empirically demonstrate the paucity of work on DSSG in
information systems (and other related disciplines) and highlight current
impediments. We then use our proposed framework to introduce the articles
appearing in the special issue. We hope that this article and the special issue
will spur future DSSG research and help reverse the alarming trend across data
science research over the past 30-plus years in which social good challenges
are garnering proportionately less attention with each passing day. | Computers and Society |
What field is the article from? | Title: Large Language Model Enhanced Multi-Agent Systems for 6G Communications
Abstract: The rapid development of the Large Language Model (LLM) presents huge
opportunities for 6G communications, e.g., network optimization and management,
by allowing users to input task requirements to LLMs in natural language.
However, directly applying native LLMs in 6G encounters various challenges,
such as a lack of private communication data and knowledge, limited logical
reasoning, evaluation, and refinement abilities. Integrating LLMs with the
capabilities of retrieval, planning, memory, evaluation and reflection in
agents can greatly enhance the potential of LLMs for 6G communications. To this
end, we propose a multi-agent system with customized communication knowledge
and tools for solving communication related tasks using natural language,
comprising three components: (1) Multi-agent Data Retrieval (MDR), which
employs the condensate and inference agents to refine and summarize
communication knowledge from the knowledge base, expanding the knowledge
boundaries of LLMs in 6G communications; (2) Multi-agent Collaborative Planning
(MCP), which utilizes multiple planning agents to generate feasible solutions
for the communication related task from different perspectives based on the
retrieved knowledge; (3) Multi-agent Evaluation and Reflexion (MER), which
utilizes the evaluation agent to assess the solutions, and applies the
reflexion agent and refinement agent to provide improvement suggestions for
current solutions. Finally, we validate the effectiveness of the proposed
multi-agent system by designing a semantic communication system, as a case
study of 6G communications. | Artificial Intelligence |
What field is the article from? | Title: JADE: A Linguistics-based Safety Evaluation Platform for Large Language Models
Abstract: In this paper, we present JADE, a targeted linguistic fuzzing platform which
strengthens the linguistic complexity of seed questions to simultaneously and
consistently break a wide range of widely-used LLMs categorized in three
groups: eight open-sourced Chinese, six commercial Chinese and four commercial
English LLMs. JADE generates three safety benchmarks for the three groups of
LLMs, which contain unsafe questions that are highly threatening: the questions
simultaneously trigger harmful generation of multiple LLMs, with an average
unsafe generation ratio of $70\%$ (please see the table below), while remaining
natural, fluent questions that preserve the core unsafe semantics. We release
the benchmark demos generated for commercial English LLMs and open-sourced
English LLMs in the following link: https://github.com/whitzard-ai/jade-db. For
readers who are interested in evaluating on more questions generated by JADE,
please contact us.
JADE is based on Noam Chomsky's seminal theory of transformational-generative
grammar. Given a seed question with unsafe intention, JADE invokes a sequence
of generative and transformational rules to increment the complexity of the
syntactic structure of the original question, until the safety guardrail is
broken. Our key insight is: Due to the complexity of human language, most of
the current best LLMs can hardly recognize the invariant evil from the infinite
number of different syntactic structures, which form an unbounded example space
that can never be fully covered. Technically, the generative/transformative
rules are constructed by native speakers of the languages, and, once developed,
can be used to automatically grow and transform the parse tree of a given
question, until the guardrail is broken. For more evaluation results and demo,
please check our website: https://whitzard-ai.github.io/jade.html. | Computational Linguistics |
What field is the article from? | Title: Cost Aware Untargeted Poisoning Attack against Graph Neural Networks
Abstract: Graph Neural Networks (GNNs) have become widely used in the field of graph
mining. However, these networks are vulnerable to structural perturbations.
While many research efforts have focused on analyzing vulnerability through
poisoning attacks, we have identified an inefficiency in current attack losses.
These losses steer the attack strategy towards modifying edges targeting
misclassified nodes or resilient nodes, resulting in a waste of structural
adversarial perturbation. To address this issue, we propose a novel attack loss
framework called the Cost Aware Poisoning Attack (CA-attack) to improve the
allocation of the attack budget by dynamically considering the classification
margins of nodes. Specifically, it prioritizes nodes with smaller positive
margins while postponing nodes with negative margins. Our experiments
demonstrate that the proposed CA-attack significantly enhances existing attack
strategies. | Artificial Intelligence |
What field is the article from? | Title: A New Fine-grained Alignment Method for Image-text Matching
Abstract: Image-text retrieval is a widely studied topic in the field of computer
vision due to the exponential growth of multimedia data, whose core concept is
to measure the similarity between images and text. However, most existing
retrieval methods heavily rely on cross-attention mechanisms for cross-modal
fine-grained alignment, which takes into account excessive irrelevant regions
and treats prominent and non-significant words equally, thereby limiting
retrieval accuracy. This paper aims to investigate an alignment approach that
reduces the involvement of non-significant fragments in images and text while
enhancing the alignment of prominent segments. For this purpose, we introduce
the Cross-Modal Prominent Fragments Enhancement Aligning Network (CPFEAN), which
achieves improved retrieval accuracy by diminishing the participation of
irrelevant regions during alignment and relatively increasing the alignment
similarity of prominent words. Additionally, we incorporate prior textual
information into image regions to reduce misalignment occurrences. In practice,
we first design a novel intra-modal fragments relationship reasoning method,
and subsequently employ our proposed alignment mechanism to compute the
similarity between images and text. Extensive quantitative comparative
experiments on MS-COCO and Flickr30K datasets demonstrate that our approach
outperforms state-of-the-art methods by about 5% to 10% in the rSum metric. | Computer Vision |
What field is the article from? | Title: Multi-view Relation Learning for Cross-domain Few-shot Hyperspectral Image Classification
Abstract: Cross-domain few-shot hyperspectral image classification focuses on learning
prior knowledge from a large number of labeled samples from the source domain
and then transferring the knowledge to tasks which contain only a few labeled
samples in target domains. Following the metric-based manner, many current
methods first extract the features of the query and support samples, and then
directly predict the classes of query samples according to their distance to
the support samples or prototypes. The relations between samples have not been
fully explored and utilized. Different from current works, this paper proposes
to learn sample relations from different views and take them into the model
learning process, to improve the cross-domain few-shot hyperspectral image
classification. Building on the current DCFSL method, which adopts a domain
discriminator to deal with domain-level distribution differences, the proposed
method applies contrastive learning to learn class-level sample relations and
obtain more discriminative sample features. In addition, it adopts a
transformer-based cross-attention learning module to learn set-level sample
relations and acquire the attention from query samples to support samples. Our
experimental results have demonstrated the contribution of the multi-view
relation learning mechanism for few-shot hyperspectral image classification
when compared with state-of-the-art methods. | Computer Vision |
What field is the article from? | Title: FinA: Fairness of Adverse Effects in Decision-Making of Human-Cyber-Physical-System
Abstract: Ensuring fairness in decision-making systems within
Human-Cyber-Physical-Systems (HCPS) is a pressing concern, particularly when
diverse individuals, each with varying behaviors and expectations, coexist
within the same application space, influenced by a shared set of control
actions in the system. The long-term adverse effects of these actions further
compound the challenge, as historical experiences and interactions shape individual
perceptions of fairness. This paper addresses the challenge of fairness from an
equity perspective of adverse effects, taking into account the dynamic nature
of human behavior and evolving preferences while recognizing the lasting impact
of adverse effects. We formally introduce the concept of
Fairness-in-Adverse-Effects (FinA) within the HCPS context. We put forth a
comprehensive set of five formulations for FinA, encompassing both the
instantaneous and long-term aspects of adverse effects. To empirically validate
the effectiveness of our FinA approach, we conducted an evaluation within the
domain of smart homes, a pertinent HCPS application. The outcomes of our
evaluation demonstrate that the adoption of FinA significantly enhances the
overall perception of fairness among individuals, yielding an average
improvement of 66.7% when compared to the state-of-the-art method. | Artificial Intelligence |
What field is the article from? | Title: Color-Emotion Associations in Art: Fuzzy Approach
Abstract: Art objects can evoke certain emotions. Color is a fundamental element of
visual art and plays a significant role in how art is perceived. This paper
introduces a novel approach to classifying emotions in art using Fuzzy Sets. We
employ a fuzzy approach because it aligns well with human judgments' imprecise
and subjective nature. An extensive set of fuzzy colors (n=120) and a broad
emotional spectrum (n=10) allow for a more human-consistent and context-aware exploration
of emotions inherent in paintings. First, we introduce the fuzzy color
representation model. Then, at the fuzzification stage, we process the Wiki Art
Dataset of paintings tagged with emotions, extracting fuzzy dominant colors
linked to specific emotions. This results in fuzzy color distributions for ten
emotions. Finally, we convert them back to a crisp domain, obtaining a
knowledge base of color-emotion associations in primary colors. Our findings
reveal strong associations between specific emotions and colors; for instance,
gratitude strongly correlates with green, brown, and orange. Other noteworthy
associations include brown with anger, orange with shame, yellow with happiness,
and gray with fear. Using these associations and Jaccard similarity, we can
find the emotions in an arbitrary untagged image. We conducted a 2AFC
experiment involving human subjects to evaluate the proposed method. The
average hit rate of 0.77 indicates a significant correlation between the
method's predictions and human perception. The proposed method is simple to
adapt to art painting retrieval systems. The study contributes to the
theoretical understanding of color-emotion associations in art, offering
valuable insights for various practical applications besides art, like
marketing, design, and psychology. | Computer Vision |
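Editor's note: the retrieval step this row describes (matching an untagged image's dominant colors against per-emotion color sets via Jaccard similarity) reduces to a few lines. The table below is a toy stand-in seeded with associations the abstract reports, not the paper's full knowledge base.

```python
# Jaccard matching of an image's dominant colors against per-emotion color
# sets. The emotion_colors table is a toy stand-in seeded from the abstract.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

emotion_colors = {
    "gratitude": {"green", "brown", "orange"},
    "anger": {"brown"},
    "happiness": {"yellow"},
    "fear": {"gray"},
}

image_colors = {"green", "orange", "blue"}  # dominant colors of some painting
scores = {e: jaccard(image_colors, c) for e, c in emotion_colors.items()}
print(max(scores, key=scores.get))  # -> gratitude
```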
What field is the article from? | Title: Co-guiding for Multi-intent Spoken Language Understanding
Abstract: Recent graph-based models for multi-intent SLU have obtained promising
results through modeling the guidance from the prediction of intents to the
decoding of slot filling. However, existing methods (1) only model the
unidirectional guidance from intent to slot, while there are bidirectional
inter-correlations between intent and slot; (2) adopt homogeneous graphs to
model the interactions between the slot semantics nodes and intent label nodes,
which limits performance. In this paper, we propose a novel model termed
Co-guiding Net, which implements a two-stage framework achieving the mutual
guidances between the two tasks. In the first stage, the initial estimated
labels of both tasks are produced, and then they are leveraged in the second
stage to model the mutual guidances. Specifically, we propose two heterogeneous
graph attention networks working on the proposed two heterogeneous semantics
label graphs, which effectively represent the relations among the semantics
nodes and label nodes. Besides, we further propose Co-guiding-SCL Net, which
exploits the single-task and dual-task semantics contrastive relations. For the
first stage, we propose single-task supervised contrastive learning, and for
the second stage, we propose co-guiding supervised contrastive learning, which
considers the two tasks' mutual guidances in the contrastive learning
procedure. Experiment results on multi-intent SLU show that our model
outperforms existing models by a large margin, obtaining a relative improvement
of 21.3% over the previous best model on MixATIS dataset in overall accuracy.
We also evaluate our model on the zero-shot cross-lingual scenario and the
results show that our model can relatively improve the state-of-the-art model
by 33.5% on average in terms of overall accuracy for the total 9 languages. | Computational Linguistics |
What field is the article from? | Title: AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform
Abstract: Artificial intelligence (AI) refers to the ability of machines or software to
mimic or even surpass human intelligence in a given cognitive task. While
humans learn by both induction and deduction, the success of current AI is
rooted in induction, relying on its ability to detect statistical regularities
in task input -- an ability learnt from a vast amount of training data using
enormous computation resources. We examine the performance of such a
statistical AI in a human task through the lens of four factors, including task
learnability, statistical resource, computation resource, and learning
techniques, and then propose a three-phase visual framework to understand the
evolving relation between AI and jobs. Based on this conceptual framework, we
develop a simple economic model of competition to show the existence of an
inflection point for each occupation. Before AI performance crosses the
inflection point, human workers always benefit from an improvement in AI
performance, but after the inflection point, human workers become worse off
whenever such an improvement occurs. To offer empirical evidence, we first
argue that AI performance has passed the inflection point for the occupation of
translation but not for the occupation of web development. We then study how
the launch of ChatGPT, which led to significant improvement of AI performance
on many tasks, has affected workers in these two occupations on a large online
labor platform. Consistent with the inflection point conjecture, we find that
translators are negatively affected by the shock both in terms of the number of
accepted jobs and the earnings from those jobs, while web developers are
positively affected by the very same shock. Given the potentially large
disruption of AI on employment, more studies on more occupations using data
from different platforms are urgently needed. | Artificial Intelligence |
What field is the article from? | Title: On Computing Makespan-Optimal Solutions for Generalized Sliding-Tile Puzzles
Abstract: In the $15$-puzzle game, $15$ labeled square tiles are reconfigured on a
$4\times 4$ board through an escort, wherein in each (time) step, a single tile
neighboring it may slide into it, leaving the space previously occupied by the
tile as the new escort. We study a generalized sliding-tile puzzle (GSTP) in
which (1) there are $1+$ escorts and (2) multiple tiles can move synchronously
in a single time step. Compared with popular discrete multi-agent/robot motion
models, GSTP provides a more accurate model for a broad array of high-utility
applications, including warehouse automation and autonomous garage parking, but
is less studied due to the more involved tile interactions. In this work, we
analyze optimal GSTP solution structures, establishing that computing
makespan-optimal solutions for GSTP is NP-complete and developing polynomial
time algorithms yielding makespans approximating the minimum with expected/high
probability constant factors, assuming randomized start and goal
configurations. | Robotics |
What field is the article from? | Title: HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data
Abstract: Multi-modal Large Language Models (MLLMs) tuned on machine-generated
instruction-following data have demonstrated remarkable performance in various
multi-modal understanding and generation tasks. However, the hallucinations
inherent in machine-generated data, which could lead to hallucinatory outputs
in MLLMs, remain under-explored. This work aims to investigate various
hallucinations (i.e., object, relation, attribute hallucinations) and mitigate
those hallucinatory toxicities in large-scale machine-generated visual
instruction datasets. Drawing on the human ability to identify factual errors,
we present a novel hallucination detection and elimination framework,
HalluciDoctor, based on the cross-checking paradigm. We use our framework to
identify and eliminate hallucinations in the training data automatically.
Interestingly, HalluciDoctor also indicates that spurious correlations arising
from long-tail object co-occurrences contribute to hallucinations. Based on
that, we execute counterfactual visual instruction expansion to balance data
distribution, thereby enhancing MLLMs' resistance to hallucinations.
Comprehensive experiments on hallucination evaluation benchmarks show that our
method successfully mitigates 44.6% of hallucinations in relative terms and
maintains competitive performance compared to LLaVA. The source code will be released at
\url{https://github.com/Yuqifan1117/HalluciDoctor}. | Computer Vision |
What field is the article from? | Title: PhytNet -- Tailored Convolutional Neural Networks for Custom Botanical Data
Abstract: Automated disease, weed and crop classification with computer vision will be
invaluable in the future of agriculture. However, existing model architectures
like ResNet, EfficientNet and ConvNeXt often underperform on smaller,
specialised datasets typical of such projects. We address this gap with
informed data collection and the development of a new CNN architecture,
PhytNet. Utilising a novel dataset of infrared cocoa tree images, we
demonstrate PhytNet's development and compare its performance with existing
architectures. Data collection was informed by analysis of spectroscopy data,
which provided useful insights into the spectral characteristics of cocoa
trees. Such information could inform future data collection and model
development. Cocoa was chosen as a focal species due to the diverse pathology
of its diseases, which pose significant challenges for detection. ResNet18
showed some signs of overfitting, while the EfficientNet variants overfitted
distinctly. By contrast, PhytNet displayed excellent attention to
relevant features, no overfitting, and an exceptionally low computation cost
(1.19 GFLOPS). As such PhytNet is a promising candidate for rapid disease or
plant classification, or precise localisation of disease symptoms for
autonomous systems. | Computer Vision |
What field is the article from? | Title: NCL-SM: A Fully Annotated Dataset of Images from Human Skeletal Muscle Biopsies
Abstract: Single cell analysis of human skeletal muscle (SM) tissue cross-sections is a
fundamental tool for understanding many neuromuscular disorders. For this
analysis to be reliable and reproducible, identification of individual fibres
within microscopy images (segmentation) of SM tissue should be automatic and
precise. Biomedical scientists in this field currently rely on custom tools and
general machine learning (ML) models, both followed by labour intensive and
subjective manual interventions to fine-tune segmentation. We believe that
fully automated, precise, reproducible segmentation is possible by training ML
models. However, in this important biomedical domain, there are currently no
good-quality, publicly available annotated imaging datasets for ML model
training. In this paper we release NCL-SM: a high-quality bioimaging
dataset of 46 human SM tissue cross-sections from both healthy control subjects
and from patients with genetically diagnosed muscle pathology. These images
include $>$ 50k manually segmented muscle fibres (myofibres). In addition, we
curated high-quality myofibre segmentations, annotating reasons for
rejecting low quality myofibres and low quality regions in SM tissue images,
making these annotations completely ready for downstream analysis. This, we
believe, will pave the way for development of a fully automatic pipeline that
identifies individual myofibres within images of tissue sections and, in
particular, also classifies individual myofibres that are fit for further
analysis. | Computer Vision |
What field is the article from? | Title: Leveraging Activation Maximization and Generative Adversarial Training to Recognize and Explain Patterns in Natural Areas in Satellite Imagery
Abstract: Natural protected areas are vital for biodiversity, climate change
mitigation, and supporting ecological processes. Despite their significance,
comprehensive mapping is hindered by a lack of understanding of their
characteristics and a missing land cover class definition. This paper aims to
advance the explanation of the designating patterns forming protected and wild
areas. To this end, we propose a novel framework that uses activation
maximization and a generative adversarial model. With this, we aim to generate
satellite images that, in combination with domain knowledge, are capable of
offering complete and valid explanations for the spatial and spectral patterns
that define the natural authenticity of these regions. Our proposed framework
produces more precise attribution maps pinpointing the designating patterns
forming the natural authenticity of protected areas. Our approach fosters our
understanding of the ecological integrity of the protected natural areas and
may contribute to future monitoring and preservation efforts. | Computer Vision |
What field is the article from? | Title: How Multilingual is Multilingual LLM?
Abstract: Large Language Models (LLMs), trained predominantly on extensive English
data, often exhibit limitations when applied to other languages. Current
research is primarily focused on enhancing the multilingual capabilities of
these models by employing various tuning strategies. Despite their
effectiveness in certain languages, the understanding of the multilingual
abilities of LLMs remains incomplete. This study endeavors to evaluate the
multilingual capacity of LLMs by conducting an exhaustive analysis across 101
languages and classifying languages with similar characteristics into four
distinct quadrants. By delving into each quadrant, we shed light on the
rationale behind their categorization and offer actionable guidelines for
tuning these languages. Extensive experiments reveal that existing LLMs possess
multilingual capabilities that surpass our expectations, and we can
significantly improve the multilingual performance of LLMs by focusing on these
distinct attributes present in each quadrant. | Computational Linguistics |
What field is the article from? | Title: Towards Verifiable Text Generation with Symbolic References
Abstract: Large language models (LLMs) have demonstrated an impressive ability to
synthesize plausible and fluent text. However, they remain vulnerable to
hallucinations, and thus their outputs generally require manual human
verification for high-stakes applications, which can be time-consuming and
difficult. This paper proposes symbolically grounded generation (SymGen) as a
simple approach for enabling easier validation of an LLM's output. SymGen
prompts an LLM to interleave its regular output text with explicit symbolic
references to fields present in some conditioning data (e.g., a table in JSON
format). The references can be used to display the provenance of different
spans of text in the generation, reducing the effort required for manual
verification. Across data-to-text and question answering experiments, we find
that LLMs are able to directly output text that makes use of symbolic
references while maintaining fluency and accuracy. | Computational Linguistics |
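Editor's note: the mechanism in this row is easy to picture: the model emits text interleaved with references into the conditioning record, and a deterministic post-processor resolves them, so each span's provenance stays checkable. A minimal sketch with a hypothetical curly-brace reference syntax:

```python
# Minimal SymGen-style resolution sketch. The {field} reference syntax is
# hypothetical; the paper's actual format may differ.
import re

data = {"player": "Stephen Curry", "points": 31}

# What an LLM might emit: fluent text with symbolic references.
generation = "{player} led all scorers with {points} points."

def resolve(text, record):
    # Replace each {field} with the value from the conditioning data,
    # so every substituted span has a verifiable source.
    return re.sub(r"\{(\w+)\}", lambda m: str(record[m.group(1)]), text)

print(resolve(generation, data))
# -> Stephen Curry led all scorers with 31 points.
```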
What field is the article from? | Title: Artificial Intelligence for reverse engineering: application to detergents using Raman spectroscopy
Abstract: The reverse engineering of a complex mixture, regardless of its nature, has
become significant today. Being able to quickly assess the potential toxicity
of new commercial products in relation to the environment presents a genuine
analytical challenge. The development of digital tools (databases,
chemometrics, machine learning, etc.) and analytical techniques (Raman
spectroscopy, NIR spectroscopy, mass spectrometry, etc.) will allow for the
identification of potential toxic molecules. In this article, we use the
example of detergent products, whose composition can prove dangerous to humans
or the environment, necessitating precise identification and quantification for
quality control and regulation purposes. The combination of various digital
tools (spectral database, mixture database, experimental design, Chemometrics /
Machine Learning algorithms, etc.) together with different sample preparation
methods (raw sample, or several concentrated / diluted samples) and Raman
spectroscopy has enabled the identification of the mixture's constituents and
an estimation of its composition. Implementing such strategies across different
analytical tools can result in time savings for pollutant identification and
contamination assessment in various matrices. This strategy is also applicable
in the industrial sector for product or raw material control, as well as for
quality control purposes. | Artificial Intelligence |
What field is the article from? | Title: LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models
Abstract: Large language models (LLMs) provide excellent text-generation capabilities,
but standard prompting and generation methods generally do not lead to
intentional or goal-directed agents and might necessitate considerable prompt
tuning. This becomes particularly apparent in multi-turn conversations: even
the best current LLMs rarely ask clarifying questions, engage in explicit
information gathering, or take actions now that lead to better decisions after
multiple turns. Reinforcement learning has the potential to leverage the
powerful modeling capabilities of LLMs, as well as their internal
representation of textual interactions, to create capable goal-directed
language agents. This can enable intentional and temporally extended
interactions, such as with humans, through coordinated persuasion and carefully
crafted questions, or in goal-directed play through text games to bring about
desired final outcomes. However, enabling this requires the community to
develop stable and reliable reinforcement learning algorithms that can
effectively train LLMs. Developing such algorithms requires tasks that can
gauge progress on algorithm design, provide accessible and reproducible
evaluations for multi-turn interactions, and cover a range of task properties
and challenges in improving reinforcement learning algorithms. Our paper
introduces the LMRL-Gym benchmark for evaluating multi-turn RL for LLMs,
together with an open-source research framework containing a basic toolkit for
getting started on multi-turn RL with offline value-based and policy-based RL
methods. Our benchmark consists of 8 different language tasks, which require
multiple rounds of language interaction and cover a range of tasks in
open-ended dialogue and text games. | Computational Linguistics |
What field is the article from? | Title: TransformCode: A Contrastive Learning Framework for Code Embedding via Subtree transformation
Abstract: Large-scale language models have made great progress in the field of software
engineering in recent years. They can be used for many code-related tasks such
as code clone detection, code-to-code search, and method name prediction.
However, these large-scale language models based on each code token have
several drawbacks: They are usually large in scale, heavily dependent on
labels, and require a lot of computing power and time to fine-tune on new
datasets. Furthermore, code embedding should be performed on the entire code
snippet rather than encoding each code token. The main reason for this is that
encoding each code token would cause model parameter inflation, resulting in a
lot of parameters storing information that we are not very concerned about. In
this paper, we propose a novel framework, called TransformCode, that learns
about code embeddings in a contrastive learning manner. The framework uses the
Transformer encoder as an integral part of the model. We also introduce a novel
data augmentation technique called abstract syntax tree transformation: This
technique applies syntactic and semantic transformations to the original code
snippets to generate more diverse and robust anchor samples. Our proposed
framework is both flexible and adaptable: It can be easily extended to other
downstream tasks that require code representation such as code clone detection
and classification. The framework is also very efficient and scalable: It does
not require a large model or a large amount of training data, and can support
any programming language. Finally, our framework is not limited to unsupervised
learning, but can also be applied to some supervised learning tasks by
incorporating task-specific labels or objectives. To explore the effectiveness
of our framework, we conducted extensive experiments on different software
engineering tasks using different programming languages and multiple datasets. | Software Engineering |
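Editor's note: the abstract-syntax-tree augmentation in this row can be pictured with a toy, Python-only example: renaming assigned variables yields a semantically equivalent anchor sample. This sketches the idea only; the paper's transformation set is broader and language-agnostic.

```python
# Toy AST augmentation: rename assigned variables to get a semantically
# equivalent sample (a possible contrastive "anchor"). Python 3.9+ for
# ast.unparse; not TransformCode's actual transformation set.
import ast

class RenameVars(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        # Rename names that are assigned somewhere; leave built-ins
        # like `print` (never stored) untouched.
        if isinstance(node.ctx, ast.Store) or node.id in self.mapping:
            node.id = self.mapping.setdefault(node.id, f"v{len(self.mapping)}")
        return node

src = "price = 3\nqty = 4\ntotal = price * qty\nprint(total)"
tree = RenameVars().visit(ast.parse(src))
print(ast.unparse(tree))
# v0 = 3
# v1 = 4
# v2 = v0 * v1
# print(v2)
```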
What field is the article from? | Title: Generation of Games for Opponent Model Differentiation
Abstract: Protecting against adversarial attacks is a common multiagent problem.
Attackers in the real world are predominantly human actors, and the protection
methods often incorporate opponent models to improve the performance when
facing humans. Previous results show that modeling human behavior can
significantly improve the performance of the algorithms. However, modeling
humans correctly is a complex problem, and the models are often simplified and
assume humans make mistakes according to some distribution or train parameters
for the whole population from which they sample. In this work, we use data
gathered by psychologists who identified personality types that increase the
likelihood of performing malicious acts. However, in the previous work, the
tests on a handmade game could not show strategic differences between the
models. We created a novel model that links its parameters to psychological
traits. We optimized over parametrized games and created games in which the
differences are profound. Our work can help with automatic game generation when
we need a game in which some models will behave differently and to identify
situations in which the models do not align. | Artificial Intelligence |
What field is the article from? | Title: Emotion-Oriented Behavior Model Using Deep Learning
Abstract: Emotions, as a fundamental ingredient of any social interaction, lead to
behaviors that represent the effectiveness of the interaction through facial
expressions and gestures in humans. Hence an agent must possess the social and
cognitive abilities to understand human social parameters and behave
accordingly. However, no such emotion-oriented behavior model has yet been
presented in the existing research. Emotion prediction can drive appropriate
agent behaviors for effective interaction through the conversation modality.
Considering the importance of emotions and behaviors for an agent's social
interaction, an Emotion-based Behavior model is presented in this paper for
socio-cognitive artificial agents. The proposed model is implemented on tweet
data, training multiple models, namely Long Short-Term Memory (LSTM),
Convolutional Neural Network (CNN), and Bidirectional Encoder Representations from
Transformers (BERT), for emotion prediction, with average accuracies of 92% and
55%, respectively. Further, using emotion predictions from CNN-LSTM, the
behavior module responds using facial expressions and gestures using Behavioral
Markup Language (BML). The accuracy of emotion-based behavior predictions is
statistically validated using the 2-tailed Pearson correlation on the data
collected from human users through questionnaires. Analysis shows that all
emotion-based behaviors accurately depict human-like gestures and facial
expressions based on the significant correlation at the 0.01 and 0.05 levels.
This study is a stepping stone toward multi-faceted artificial agent interaction
based on emotion-oriented behaviors, given the significance of cognition in
human social interaction. | Computational Linguistics |
What field is the article from? | Title: Devil in the Landscapes: Inferring Epidemic Exposure Risks from Street View Imagery
Abstract: The built environment supports our daily activities and shapes our health.
Leveraging informative street view imagery, previous research has established
the profound correlation between the built environment and chronic,
non-communicable diseases; however, predicting the exposure risk of infectious
diseases remains largely unexplored. The person-to-person contacts and
interactions contribute to the complexity of infectious disease, which is
inherently different from non-communicable diseases. Besides, the complex
relationships between street view imagery and epidemic exposure also hinder
accurate predictions. To address these problems, we construct a regional
mobility graph informed by the gravity model, based on which we propose a
transmission-aware graph convolutional network (GCN) to capture disease
transmission patterns arising from human mobility. Experiments show that the
proposed model significantly outperforms baseline models by 8.54% in weighted
F1, shedding light on a low-cost, scalable approach to assess epidemic exposure
risks from street view imagery. | Computer Vision |
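The gravity model referenced above is classically flow_ij proportional to pop_i * pop_j / dist_ij^beta; a minimal sketch of building such a mobility adjacency for GCN-style propagation, with the exact parameterization assumed rather than taken from the paper:

```python
import numpy as np

def gravity_adjacency(pop, dist, beta=2.0, eps=1e-9):
    """Gravity-model mobility graph: flow between regions i and j is taken as
    pop_i * pop_j / dist_ij**beta (the classic form; the paper's exact
    parameterization is an assumption of this sketch)."""
    pop = np.asarray(pop, dtype=float)
    dist = np.asarray(dist, dtype=float)
    A = np.outer(pop, pop) / (dist ** beta + eps)
    np.fill_diagonal(A, 0.0)                         # drop self-flows
    return A / (A.sum(axis=1, keepdims=True) + eps)  # row-normalize for a GCN

# Example: three regions with populations and pairwise distances
A = gravity_adjacency(pop=[1e5, 5e4, 2e4],
                      dist=[[0, 3, 8], [3, 0, 5], [8, 5, 0]])
```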
What field is the article from? | Title: Reinforcement Learning for Solving Stochastic Vehicle Routing Problem
Abstract: This study addresses a gap in the utilization of Reinforcement Learning (RL)
and Machine Learning (ML) techniques in solving the Stochastic Vehicle Routing
Problem (SVRP) that involves the challenging task of optimizing vehicle routes
under uncertain conditions. We propose a novel end-to-end framework that
comprehensively addresses the key sources of stochasticity in SVRP and utilizes
an RL agent with a simple yet effective architecture and a tailored training
method. Through comparative analysis, our proposed model demonstrates superior
performance compared to a widely adopted state-of-the-art metaheuristic,
achieving a significant 3.43% reduction in travel costs. Furthermore, the model
exhibits robustness across diverse SVRP settings, highlighting its adaptability
and ability to learn optimal routing strategies in varying environments. The
publicly available implementation of our framework serves as a valuable
resource for future research endeavors aimed at advancing RL-based solutions
for SVRP. | Artificial Intelligence |
What field is the article from? | Title: BClean: A Bayesian Data Cleaning System
Abstract: There is a considerable body of work on data cleaning which employs various
principles to rectify erroneous data and transform a dirty dataset into a
cleaner one. One prevalent class of approaches is probabilistic methods, including
Bayesian methods. However, existing probabilistic methods often assume a
simplistic distribution (e.g., Gaussian distribution), which is frequently
underfitted in practice, or they necessitate experts to provide a complex prior
distribution (e.g., via a programming language). This requirement is both
labor-intensive and costly, rendering these methods less suitable for
real-world applications. In this paper, we propose BClean, a Bayesian Cleaning
system that features automatic Bayesian network construction and user
interaction. We recast the data cleaning problem as a Bayesian inference that
fully exploits the relationships between attributes in the observed dataset and
any prior information provided by users. To this end, we present an automatic
Bayesian network construction method that extends a structure learning-based
functional dependency discovery method with similarity functions to capture the
relationships between attributes. Furthermore, our system allows users to
modify the generated Bayesian network in order to specify prior information or
correct inaccuracies identified by the automatic generation process. We also
design an effective scoring model (called the compensative scoring model)
necessary for the Bayesian inference. To enhance the efficiency of data
cleaning, we propose several approximation strategies for the Bayesian
inference, including graph partitioning, domain pruning, and pre-detection. By
evaluating on both real-world and synthetic datasets, we demonstrate that
BClean is capable of achieving an F-measure of up to 0.9 in data cleaning,
outperforming existing Bayesian methods by 2% and other data cleaning methods
by 15%. | Artificial Intelligence |
What field is the article from? | Title: The WHY in Business Processes: Discovery of Causal Execution Dependencies
Abstract: A crucial element in predicting the outcomes of process interventions and
making informed decisions about the process is unraveling the genuine
relationships between the execution of process activities. Contemporary process
discovery algorithms exploit time precedence as their main source of model
derivation. Such reliance can sometimes be deceiving from a causal perspective.
This calls for faithful new techniques to discover the true execution
dependencies among the tasks in the process. To this end, our work offers a
systematic approach to the unveiling of the true causal business process by
leveraging an existing causal discovery algorithm over activity timing. In
addition, this work delves into a set of conditions under which process mining
discovery algorithms generate a model that is incongruent with the causal
business process model, and shows how the latter model can be methodologically
employed for a sound analysis of the process. Our methodology searches for such
discrepancies between the two models in the context of three causal patterns,
and derives a new view in which these inconsistencies are annotated over the
mined process model. We demonstrate our methodology employing two open process
mining algorithms, the IBM Process Mining tool, and the LiNGAM causal discovery
technique. We apply it on a synthesized dataset and on two open benchmark data
sets. | Artificial Intelligence |
What field is the article from? | Title: Entropy and the Kullback-Leibler Divergence for Bayesian Networks: Computational Complexity and Efficient Implementation
Abstract: Bayesian networks (BNs) are a foundational model in machine learning and
causal inference. Their graphical structure can handle high-dimensional
problems, divide-and-conquering them into a sparse collection of smaller ones;
underlies Judea Pearl's causality; and determines their explainability and
interpretability. Despite their popularity, there are few resources in the
literature on how to compute Shannon's entropy and the Kullback-Leibler (KL)
divergence for BNs under their most common distributional assumptions. In this
paper, we provide computationally efficient algorithms for both by leveraging
BNs' graphical structure, and we illustrate them with a complete set of
numerical examples. In the process, we show it is possible to reduce the
computational complexity of KL from cubic to quadratic for Gaussian BNs. | Artificial Intelligence |
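The graphical structure is what makes such computations efficient: for a discrete BN, Shannon's entropy decomposes along the factorization as H(X) = sum_i H(X_i | Pa(X_i)), so no joint table is ever needed. A minimal two-node sketch:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Two-node network A -> B, given by its conditional probability tables.
p_a = np.array([0.3, 0.7])                  # P(A)
p_b_given_a = np.array([[0.9, 0.1],         # P(B | A=0)
                        [0.2, 0.8]])        # P(B | A=1)

# H(A, B) = H(A) + sum_a P(a) * H(B | A=a): no joint table is built.
h_joint = entropy(p_a) + sum(p_a[a] * entropy(p_b_given_a[a]) for a in range(2))
print(f"H(A,B) = {h_joint:.4f} bits")
```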
What field is the article from? | Title: Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection
Abstract: Few-shot object detection, which focuses on detecting novel objects with few
labels, is an emerging challenge in the community. Recent studies show that
adapting a pre-trained model or modified loss function can improve performance.
In this paper, we explore leveraging the power of Contrastive Language-Image
Pre-training (CLIP) and a hard negative classification loss in a low-data setting.
Specifically, we propose Re-scoring using Image-language Similarity for
Few-shot object detection (RISF) which extends Faster R-CNN by introducing
Calibration Module using CLIP (CM-CLIP) and Background Negative Re-scale Loss
(BNRL). The former adapts CLIP, which performs zero-shot classification, to
re-score the classification scores of a detector using image-class
similarities; the latter is a modified classification loss that penalizes fake
backgrounds as well as confusing categories on a
generalized few-shot object detection dataset. Extensive experiments on MS-COCO
and PASCAL VOC show that the proposed RISF substantially outperforms the
state-of-the-art approaches. The code will be available. | Computer Vision |
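A minimal sketch of CLIP-based re-scoring in the spirit of CM-CLIP; the fusion rule, temperature, and blending weight here are assumptions, not the paper's exact formulation:

```python
import numpy as np

def rescore(det_scores, clip_sims, tau=0.01, alpha=0.5):
    """Blend detector class scores with softmaxed CLIP image-class
    similarities for each box (illustrative fusion rule)."""
    shifted = clip_sims - clip_sims.max(axis=-1, keepdims=True)  # stable softmax
    e = np.exp(shifted / tau)
    clip_probs = e / e.sum(axis=-1, keepdims=True)  # zero-shot class posterior
    return alpha * det_scores + (1.0 - alpha) * clip_probs

# det_scores: (num_boxes, num_classes) from Faster R-CNN; clip_sims: cosine
# similarities between box crops and text embeddings of the class names.
```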
What field is the article from? | Title: Towards Model-Based Data Acquisition for Subjective Multi-Task NLP Problems
Abstract: Data annotated by humans is a source of knowledge: it describes the
peculiarities of the problem and thereby fuels the decision process of the
trained model. Unfortunately, the annotation process for subjective natural
language processing (NLP) problems like offensiveness or emotion detection is
often very expensive and time-consuming. One of the inevitable risks is to
spend some of the funds and annotator effort on annotations that do not provide
any additional knowledge about the specific task. To minimize these costs, we
propose a new model-based approach that allows the selection of tasks annotated
individually for each text in a multi-task scenario. The experiments carried
out on three datasets, dozens of NLP tasks, and thousands of annotations show
that our method allows up to 40% reduction in the number of annotations with
negligible loss of knowledge. The results also emphasize the need to collect
diverse data to train a model efficiently, depending on the
subjectivity of the annotation task. We also focused on measuring the relation
between subjective tasks by evaluating the model in single-task and multi-task
scenarios. Moreover, for some datasets, training only on the labels predicted
by our model improved the efficiency of task selection as a self-supervised
learning regularization technique. | Computational Linguistics |
What field is the article from? | Title: Dual-path convolutional neural network using micro-FTIR imaging to predict breast cancer subtypes and biomarkers levels: estrogen receptor, progesterone receptor, HER2 and Ki67
Abstract: Breast cancer molecular subtype classification plays an important role in sorting
patients with divergent prognoses. The biomarkers used are Estrogen Receptor
(ER), Progesterone Receptor (PR), HER2, and Ki67. Based on these biomarkers
expression levels, subtypes are classified as Luminal A (LA), Luminal B (LB),
HER2 subtype, and Triple-Negative Breast Cancer (TNBC). Immunohistochemistry is
used to classify subtypes, although interlaboratory and interobserver
variation can affect its accuracy, and the technique is time-consuming.
Fourier transform infrared micro-spectroscopy can be coupled with deep
learning for cancer evaluation, yet studies on subtype and biomarker-level
prediction are still lacking. This study presents a novel 2D deep
learning approach to achieve these predictions. Sixty micro-FTIR images of
320x320 pixels were collected from a human breast biopsies microarray. Data
were clustered by K-means, preprocessed and 32x32 patches were generated using
a fully automated approach. CaReNet-V2, a novel convolutional neural network,
was developed to classify breast cancer (CA) vs adjacent tissue (AT) and
molecular subtypes, and to predict biomarkers level. The clustering method
enabled to remove non-tissue pixels. Test accuracies for CA vs AT and subtype
were above 0.84. The model enabled the prediction of ER, PR, and HER2 levels,
where borderline values showed lower performance (minimum accuracy of 0.54).
Ki67 percentage regression demonstrated a mean error of 3.6%. Thus, CaReNet-V2
is a potential technique for breast cancer biopsies evaluation, standing out as
a screening analysis technique and helping to prioritize patients. | Machine Learning |
What field is the article from? | Title: Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare
Abstract: Large Language Models (LLMs) have introduced a new era of proficiency in
comprehending complex healthcare and biomedical topics. However, there is a
noticeable lack of models in languages other than English and models that can
interpret multi-modal input, which is crucial for global healthcare
accessibility. In response, this study introduces Qilin-Med-VL, the first
Chinese large vision-language model designed to integrate the analysis of
textual and visual data. Qilin-Med-VL combines a pre-trained Vision Transformer
(ViT) with a foundational LLM. It undergoes a thorough two-stage curriculum
training process that includes feature alignment and instruction tuning. This
method enhances the model's ability to generate medical captions and answer
complex medical queries. We also release ChiMed-VL, a dataset consisting of
more than 1M image-text pairs. This dataset has been carefully curated to
enable detailed and comprehensive interpretation of medical data using various
types of images. | Computer Vision |
What field is the article from? | Title: Emergence of Abstract State Representations in Embodied Sequence Modeling
Abstract: Decision making via sequence modeling aims to mimic the success of language
models, where actions taken by an embodied agent are modeled as tokens to
predict. Despite their promising performance, it remains unclear if embodied
sequence modeling leads to the emergence of internal representations that
represent the environmental state information. A model that lacks abstract
state representations would be liable to make decisions based on surface
statistics which fail to generalize. We take the BabyAI environment, a grid
world in which language-conditioned navigation tasks are performed, and build a
sequence modeling Transformer, which takes a language instruction, a sequence
of actions, and environmental observations as its inputs. In order to
investigate the emergence of abstract state representations, we design a
"blindfolded" navigation task, where only the initial environmental layout, the
language instruction, and the action sequence to complete the task are
available for training. Our probing results show that intermediate
environmental layouts can be reasonably reconstructed from the internal
activations of a trained model, and that language instructions play a role in
the reconstruction accuracy. Our results suggest that many key features of
state representations can emerge via embodied sequence modeling, supporting an
optimistic outlook for applications of sequence modeling objectives to more
complex embodied decision-making domains. | Machine Learning |
What field is the article from? | Title: ChatGPT and post-test probability
Abstract: Reinforcement learning-based large language models, such as ChatGPT, are
believed to have potential to aid human experts in many domains, including
healthcare. There is, however, little work on ChatGPT's ability to perform a
key task in healthcare: formal, probabilistic medical diagnostic reasoning.
This type of reasoning is used, for example, to update a pre-test probability
to a post-test probability. In this work, we probe ChatGPT's ability to perform
this task. In particular, we ask ChatGPT to give examples of how to use Bayes
rule for medical diagnosis. Our prompts range from queries that use terminology
from pure probability (e.g., requests for a "posterior probability") to queries
that use terminology from the medical diagnosis literature (e.g., requests for
a "post-test probability"). We show how the introduction of medical variable
names leads to an increase in the number of errors that ChatGPT makes. Given
our results, we also show how one can use prompt engineering to facilitate
ChatGPT's partial avoidance of these errors. We discuss our results in light of
recent commentaries on sensitivity and specificity. We also discuss how our
results might inform new research directions for large language models. | Artificial Intelligence |
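The probabilistic update being probed above is standard Bayes rule for diagnostic testing; a small reference implementation:

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Bayes rule for diagnostic testing: update a pre-test probability into a
    post-test probability given a test's sensitivity and specificity."""
    p, se, sp = pre_test_prob, sensitivity, specificity
    if positive:   # P(disease | positive test)
        return (se * p) / (se * p + (1 - sp) * (1 - p))
    # P(disease | negative test)
    return ((1 - se) * p) / ((1 - se) * p + sp * (1 - p))

# Example: 10% pre-test probability, test with 90% sensitivity, 95% specificity
print(post_test_probability(0.10, 0.90, 0.95))   # ~0.667 after a positive result
```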
What field is the article from? | Title: ChatGPT as a Math Questioner? Evaluating ChatGPT on Generating Pre-university Math Questions
Abstract: Mathematical questioning is crucial for assessing students' problem-solving
skills. Since manually creating such questions requires substantial effort,
automatic methods have been explored. Existing state-of-the-art models rely on
fine-tuning strategies and struggle to generate questions that heavily involve
multiple steps of logical and arithmetic reasoning. Meanwhile, large language
models(LLMs) such as ChatGPT have excelled in many NLP tasks involving logical
and arithmetic reasoning. Nonetheless, their applications in generating
educational questions are underutilized, especially in the field of
mathematics. To bridge this gap, we take the first step to conduct an in-depth
analysis of ChatGPT in generating pre-university math questions. Our analysis
is categorized into two main settings: context-aware and context-unaware. In
the context-aware setting, we evaluate ChatGPT on existing math
question-answering benchmarks covering elementary, secondary, and tertiary
classes. In the context-unaware setting, we evaluate ChatGPT in generating math
questions for each lesson from pre-university math curriculums that we crawl.
Our crawling results in TopicMath, a comprehensive and novel collection of
pre-university math curriculums collected from 121 math topics and 428 lessons
from elementary, secondary, and tertiary classes. Through this analysis, we aim
to provide insight into the potential of ChatGPT as a math questioner. | Computational Linguistics |
What field is the article from? | Title: Utilizing Language Models for Energy Load Forecasting
Abstract: Energy load forecasting plays a crucial role in optimizing resource
allocation and managing energy consumption in buildings and cities. In this
paper, we propose a novel approach that leverages language models for energy
load forecasting. We employ prompting techniques to convert energy consumption
data into descriptive sentences, enabling fine-tuning of language models. By
adopting an autoregressive generating approach, our proposed method enables
predictions of various horizons of future energy load consumption. Through
extensive experiments on real-world datasets, we demonstrate the effectiveness
and accuracy of our proposed method. Our results indicate that utilizing
language models for energy load forecasting holds promise for enhancing energy
efficiency and facilitating intelligent decision-making in energy systems. | Artificial Intelligence |
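A minimal sketch of the prompting step described above, turning a consumption record into a descriptive sentence; the template wording is an assumption:

```python
def load_to_sentence(timestamp, load_kwh, temperature_c):
    """Render one consumption record as a descriptive sentence for language
    model fine-tuning (illustrative; the paper's template is an assumption)."""
    return (f"On {timestamp}, with an outdoor temperature of {temperature_c} "
            f"degrees Celsius, the building consumed {load_kwh} kWh of electricity.")

print(load_to_sentence("2023-07-01 14:00", 42.5, 31))
```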
What field is the article from? | Title: AdaDiff: Adaptive Step Selection for Fast Diffusion
Abstract: Diffusion models, as a type of generative model, have achieved impressive
results in generating images and videos conditioned on text.
However, the generation process of diffusion models involves denoising for
dozens of steps to produce photorealistic images/videos, which is
computationally expensive. Unlike previous methods that design
``one-size-fits-all'' approaches for speed up, we argue denoising steps should
be sample-specific conditioned on the richness of input texts. To this end, we
introduce AdaDiff, a lightweight framework designed to learn instance-specific
step usage policies, which are then used by the diffusion model for generation.
AdaDiff is optimized using a policy gradient method to maximize a carefully
designed reward function, balancing inference time and generation quality. We
conduct experiments on three image generation and two video generation
benchmarks and demonstrate that our approach achieves similar results in terms
of visual quality compared to the baseline using a fixed 50 denoising steps
while reducing inference time by at least 33%, going as high as 40%.
Furthermore, our qualitative analysis shows that our method allocates more
steps to more informative text conditions and fewer steps to simpler text
conditions. | Computer Vision |
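The reward the policy maximizes balances generation quality against denoising cost; a minimal sketch under an assumed linear trade-off (the paper's exact reward design may differ):

```python
def adadiff_reward(quality, num_steps, max_steps=50, lam=0.1):
    """Illustrative reward: favor high generation quality while penalizing
    the number of denoising steps used for this sample."""
    return quality - lam * (num_steps / max_steps)

# A policy-gradient (REINFORCE-style) update then raises the log-probability
# of a sampled step count in proportion to its advantage over a baseline.
print(adadiff_reward(quality=0.92, num_steps=20))   # 0.92 - 0.1 * 0.4 = 0.88
```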
What field is the article from? | Title: Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models
Abstract: This work investigates the potential of undermining both fairness and
detection performance in abusive language detection. In a dynamic and complex
digital world, it is crucial to investigate the vulnerabilities of these
detection models to adversarial fairness attacks to improve their fairness
robustness. We propose a simple yet effective framework FABLE that leverages
backdoor attacks as they allow targeted control over the fairness and detection
performance. FABLE explores three types of trigger designs (i.e., rare,
artificial, and natural triggers) and novel sampling strategies. Specifically,
the adversary can inject triggers into samples in the minority group with the
favored outcome (i.e., "non-abusive") and flip their labels to the unfavored
outcome, i.e., "abusive". Experiments on benchmark datasets demonstrate the
effectiveness of FABLE attacking fairness and utility in abusive language
detection. | Computational Linguistics |
What field is the article from? | Title: RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality
Abstract: This paper introduces RACER, the Rational Artificial Intelligence
Car-following model Enhanced by Reality, a cutting-edge deep learning
car-following model, that satisfies partial derivative constraints, designed to
predict Adaptive Cruise Control (ACC) driving behavior while staying
theoretically feasible. Unlike conventional models, RACER effectively
integrates Rational Driving Constraints (RDCs), crucial tenets of actual
driving, resulting in strikingly accurate and realistic predictions. Against
established models like the Optimal Velocity Relative Velocity (OVRV), a
car-following Neural Network (NN), and a car-following Physics-Informed Neural
Network (PINN), RACER excels across key metrics, such as acceleration,
velocity, and spacing. Notably, it displays a perfect adherence to the RDCs,
registering zero violations, in stark contrast to other models. This study
highlights the immense value of incorporating physical constraints within AI
models, especially for augmenting safety measures in transportation. It also
paves the way for future research to test these models against human driving
data, with the potential to guide safer and more rational driving behavior. The
versatility of the proposed model, including its potential to incorporate
additional derivative constraints and broader architectural applications,
enhances its appeal and broadens its impact within the scientific community. | Artificial Intelligence |
What field is the article from? | Title: Deep learning for 3D Object Detection and Tracking in Autonomous Driving: A Brief Survey
Abstract: Object detection and tracking are vital and fundamental tasks for autonomous
driving, aiming at identifying and locating objects from those predefined
categories in a scene. 3D point cloud learning has been attracting more and
more attention among all other forms of self-driving data. Currently, there are
many deep learning methods for 3D object detection. However, the tasks of
object detection and tracking for point clouds still need intensive study due
to the unique characteristics of point cloud data. To help get a good grasp of
the present situation of this research, this paper shows recent advances in
deep learning methods for 3D object detection and tracking. | Computer Vision |
What field is the article from? | Title: GPT-4 Enhanced Multimodal Grounding for Autonomous Driving: Leveraging Cross-Modal Attention with Large Language Models
Abstract: In the field of autonomous vehicles (AVs), accurately discerning commander
intent and executing linguistic commands within a visual context presents a
significant challenge. This paper introduces a sophisticated encoder-decoder
framework, developed to address visual grounding in AVs. Our Context-Aware
Visual Grounding (CAVG) model is an advanced system that integrates five core
encoders (Text, Image, Context, and Cross-Modal) with a Multimodal decoder. This
integration enables the CAVG model to adeptly capture contextual semantics and
to learn human emotional features, augmented by state-of-the-art Large Language
Models (LLMs) including GPT-4. The architecture of CAVG is reinforced by the
implementation of multi-head cross-modal attention mechanisms and a
Region-Specific Dynamic (RSD) layer for attention modulation. This
architectural design enables the model to efficiently process and interpret a
range of cross-modal inputs, yielding a comprehensive understanding of the
correlation between verbal commands and corresponding visual scenes. Empirical
evaluations on the Talk2Car dataset, a real-world benchmark, demonstrate that
CAVG establishes new standards in prediction accuracy and operational
efficiency. Notably, the model exhibits exceptional performance even with
limited training data, ranging from 50% to 75% of the full dataset. This
feature highlights its effectiveness and potential for deployment in practical
AV applications. Moreover, CAVG has shown remarkable robustness and
adaptability in challenging scenarios, including long-text command
interpretation, low-light conditions, ambiguous command contexts, inclement
weather conditions, and densely populated urban environments. The code for the
proposed model is available at our Github. | Computer Vision |
What field is the article from? | Title: Discretionary Trees: Understanding Street-Level Bureaucracy via Machine Learning
Abstract: Street-level bureaucrats interact directly with people on behalf of
government agencies to perform a wide range of functions, including, for
example, administering social services and policing. A key feature of
street-level bureaucracy is that the civil servants, while tasked with
implementing agency policy, are also granted significant discretion in how they
choose to apply that policy in individual cases. Using that discretion could be
beneficial, as it allows for exceptions to policies based on human interactions
and evaluations, but it could also allow biases and inequities to seep into
important domains of societal resource allocation. In this paper, we use
machine learning techniques to understand street-level bureaucrats' behavior.
We leverage a rich dataset that combines demographic and other information on
households with information on which homelessness interventions they were
assigned during a period when assignments were not formulaic. We find that
caseworker decisions in this period are highly predictable overall, and some, but
not all, of this predictability can be captured by simple decision rules. We
theorize that the decisions not captured by the simple decision rules can be
considered applications of caseworker discretion. These discretionary decisions
are far from random in both the characteristics of such households and in terms
of the outcomes of the decisions. Caseworkers typically only apply discretion
to households that would be considered less vulnerable. When they do apply
discretion to assign households to more intensive interventions, the marginal
benefits to those households are significantly higher than would be expected if
the households were chosen at random; there is no similar reduction in marginal
benefit to households that are discretionarily allocated less intensive
interventions, suggesting that caseworkers are improving outcomes using their
knowledge. | Machine Learning |
What field is the article from? | Title: Augmenting deep neural networks with symbolic knowledge: Towards trustworthy and interpretable AI for education
Abstract: Artificial neural networks (ANNs) have shown to be amongst the most important
artificial intelligence (AI) techniques in educational applications, providing
adaptive educational services. However, their educational potential is limited
in practice due to three major challenges: i) difficulty in incorporating
symbolic educational knowledge (e.g., causal relationships, and practitioners'
knowledge) in their development, ii) learning and reflecting biases, and iii)
lack of interpretability. Given the high-risk nature of education, the
integration of educational knowledge into ANNs becomes crucial for developing
AI applications that adhere to essential educational restrictions, and provide
interpretability over the predictions. This research argues that the
neural-symbolic family of AI has the potential to address the named challenges.
To this end, it adapts a neural-symbolic AI framework and accordingly develops
an approach called NSAI that injects educational knowledge into, and extracts it
from, deep neural networks for modelling learners' computational thinking.
Our findings reveal that the NSAI approach has better generalizability compared
to deep neural networks trained merely on training data, as well as training
data augmented by SMOTE and autoencoder methods. More importantly, unlike the
other models, the NSAI approach prioritises robust representations that capture
causal relationships between input features and output labels, ensuring safety
in learning to avoid spurious correlations and control biases in training data.
Furthermore, the NSAI approach enables the extraction of rules from the learned
network, facilitating interpretation and reasoning about the path to
predictions, as well as refining the initial educational knowledge. These
findings imply that neural-symbolic AI can overcome the limitations of ANNs in
education, enabling trustworthy and interpretable applications. | Artificial Intelligence |
What field is the article from? | Title: Evaluating Large Language Models through Gender and Racial Stereotypes
Abstract: Language Models have ushered in a new age of AI, gaining traction within the NLP
community as well as amongst the general population. AI's ability to make
predictions and generations, and its applications in sensitive decision-making
scenarios, make it even more important to study these models for biases that
may exist and may be exaggerated. We conduct a qualitative
comparative study and establish a framework to evaluate language models under
the premise of two kinds of biases: gender and race, in a professional setting.
We find that while gender bias has been reduced immensely in newer models
compared to older ones, racial bias still exists. | Computational Linguistics |
What field is the article from? | Title: R$^3$ Prompting: Review, Rephrase and Resolve for Chain-of-Thought Reasoning in Large Language Models under Noisy Context
Abstract: With the help of Chain-of-Thought (CoT) prompting, Large Language Models
(LLMs) have achieved remarkable performance on various reasoning tasks.
However, most of them have been evaluated under noise-free context and the
dilemma for LLMs to produce inaccurate results under the noisy context has not
been fully investigated. Existing studies utilize trigger sentences to
encourage LLMs to concentrate on the relevant information but the trigger has
limited effect on final answer prediction. Inspired by interactive CoT method,
where intermediate reasoning steps are promoted by multiple rounds of
interaction between users and LLMs, we propose a novel prompting method, namely
R$^3$ prompting, for CoT reasoning under noisy context. Specifically, R$^3$
prompting interacts with LLMs to perform key sentence extraction, variable
declaration and answer prediction, which corresponds to a thought process of
reviewing, rephrasing and resolving. The responses generated at the last
interaction will perform as hints to guide toward the responses of the next
interaction. Our experiments show that R$^3$ prompting significantly
outperforms existing CoT prompting methods on five reasoning tasks under noisy
context. With GPT-3.5-turbo, we observe 3.7% accuracy improvement on average on
the reasoning tasks under noisy context compared to the most competitive
prompting baseline. More analyses and ablation studies show the robustness and
generalization of R$^3$ prompting method in solving reasoning tasks in LLMs
under noisy context. | Computational Linguistics |
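A minimal sketch of the three-round review-rephrase-resolve interaction; the prompt wording is an assumption, and `llm` stands for any text-in/text-out callable:

```python
def r3_prompt(llm, noisy_problem):
    """Three-round interaction in the spirit of R^3 prompting; each round's
    response seeds the next round's prompt."""
    reviewed = llm("Review the problem and extract only the key sentences:\n"
                   + noisy_problem)
    rephrased = llm("Rephrase the key sentences as variable declarations:\n"
                    + reviewed)
    return llm("Using these variables, resolve the question step by step:\n"
               + rephrased)
```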
What field is the article from? | Title: Towards Knowledge-driven Autonomous Driving
Abstract: This paper explores the emerging knowledge-driven autonomous driving
technologies. Our investigation highlights the limitations of current
autonomous driving systems, in particular their sensitivity to data bias,
difficulty in handling long-tail scenarios, and lack of interpretability.
Conversely, knowledge-driven methods with the abilities of cognition,
generalization and life-long learning emerge as a promising way to overcome
these challenges. This paper delves into the essence of knowledge-driven
autonomous driving and examines its core components: dataset \& benchmark,
environment, and driver agent. By leveraging large language models, world
models, neural rendering, and other advanced artificial intelligence
techniques, these components collectively contribute to a more holistic,
adaptive, and intelligent autonomous driving system. The paper systematically
organizes and reviews previous research efforts in this area, and provides
insights and guidance for future research and practical applications of
autonomous driving. We will continually share the latest updates on
cutting-edge developments in knowledge-driven autonomous driving along with the
relevant valuable open-source resources at:
\url{https://github.com/PJLab-ADG/awesome-knowledge-driven-AD}. | Robotics |
What field is the article from? | Title: Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder
Abstract: Despite its practical importance across a wide range of modalities, recent
advances in self-supervised learning (SSL) have been primarily focused on a few
well-curated domains, e.g., vision and language, often relying on their
domain-specific knowledge. For example, Masked Auto-Encoder (MAE) has become
one of the popular architectures in these domains, but less has explored its
potential in other modalities. In this paper, we develop MAE as a unified,
modality-agnostic SSL framework. In turn, we argue meta-learning as a key to
interpreting MAE as a modality-agnostic learner, and propose enhancements to
MAE from the motivation to jointly improve its SSL across diverse modalities,
coined MetaMAE as a result. Our key idea is to view the mask reconstruction of
MAE as a meta-learning task: masked tokens are predicted by adapting the
Transformer meta-learner through the amortization of unmasked tokens. Based on
this novel interpretation, we propose to integrate two advanced meta-learning
techniques. First, we adapt the amortized latent of the Transformer encoder
using gradient-based meta-learning to enhance the reconstruction. Then, we
maximize the alignment between amortized and adapted latents through task
contrastive learning which guides the Transformer encoder to better encode the
task-specific knowledge. Our experiment demonstrates the superiority of MetaMAE
in the modality-agnostic SSL benchmark (called DABS), significantly
outperforming prior baselines. Code is available at
https://github.com/alinlab/MetaMAE. | Machine Learning |
What field is the article from? | Title: Towards Full-scene Domain Generalization in Multi-agent Collaborative Bird's Eye View Segmentation for Connected and Autonomous Driving
Abstract: Collaborative perception has recently gained significant attention in
autonomous driving, improving perception quality by enabling the exchange of
additional information among vehicles. However, deploying collaborative
perception systems can lead to domain shifts due to diverse environmental
conditions and data heterogeneity among connected and autonomous vehicles
(CAVs). To address these challenges, we propose a unified domain generalization
framework applicable in both training and inference stages of collaborative
perception. In the training phase, we introduce an Amplitude Augmentation
(AmpAug) method to augment low-frequency image variations, broadening the
model's ability to learn across various domains. We also employ a
meta-consistency training scheme to simulate domain shifts, optimizing the
model with a carefully designed consistency loss to encourage domain-invariant
representations. In the inference phase, we introduce an intra-system domain
alignment mechanism to reduce or potentially eliminate the domain discrepancy
among CAVs prior to inference. Comprehensive experiments substantiate the
effectiveness of our method in comparison with the existing state-of-the-art
works. Code will be released at https://github.com/DG-CAVs/DG-CoPerception.git. | Computer Vision |
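A minimal sketch of amplitude augmentation in the spirit of AmpAug, mixing Fourier amplitudes while preserving phase; the exact recipe (e.g., restricting the blend to a low-frequency band) is an assumption:

```python
import numpy as np

def amplitude_augment(img, ref, alpha=0.3):
    """Blend the Fourier amplitude of an image with a reference image's
    amplitude while keeping the original phase, perturbing style-like
    low-frequency content (illustrative recipe)."""
    F_img = np.fft.fft2(img, axes=(0, 1))
    F_ref = np.fft.fft2(ref, axes=(0, 1))
    amp = (1.0 - alpha) * np.abs(F_img) + alpha * np.abs(F_ref)
    mixed = amp * np.exp(1j * np.angle(F_img))       # new amplitude, old phase
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))

out = amplitude_augment(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3))
```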
What field is the article from? | Title: Code Ownership in Open-Source AI Software Security
Abstract: As open-source AI software projects become an integral component of AI
software development, it is critical to develop novel methods to ensure and
measure the security of open-source projects for developers. Code
ownership, pivotal in the evolution of such projects, offers insights into
developer engagement and potential vulnerabilities. In this paper, we leverage
the code ownership metrics to empirically investigate the correlation with the
latent vulnerabilities across five prominent open-source AI software projects.
The findings from the large-scale empirical study suggest a positive
relationship between high-level ownership (characterised by a limited number of
minor contributors) and a decrease in vulnerabilities. Furthermore, we
introduce time-based metrics anchored on the project's duration,
individual source-code file timelines, and the count of impacted releases.
These metrics adeptly categorise distinct phases of open-source AI software
projects and their respective vulnerability intensities. With these novel code
ownership metrics, we have implemented a Python-based command-line application
to aid project curators and quality assurance professionals in evaluating and
benchmarking their on-site projects. We anticipate this work will spark
continued research on securing and measuring the security of open-source AI
projects. | Software Engineering |
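A minimal sketch of the kind of ownership metrics discussed above, following the common convention that a minor contributor authors under 5% of a file's commits; the paper's exact definitions are an assumption:

```python
from collections import Counter

def ownership_metrics(commit_authors, minor_threshold=0.05):
    """Per-file code ownership: top contributor's share of commits, plus the
    count of minor contributors (share below the threshold)."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    shares = [c / total for c in counts.values()]
    return {"ownership": max(shares),
            "minor_contributors": sum(s < minor_threshold for s in shares)}

# alice authors 20 of 21 commits; bob's ~4.8% share makes him a minor contributor
print(ownership_metrics(["alice"] * 20 + ["bob"]))
```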
What field is the article from? | Title: C-Procgen: Empowering Procgen with Controllable Contexts
Abstract: We present C-Procgen, an enhanced suite of environments on top of the Procgen
benchmark. C-Procgen provides access to over 200 unique game contexts across 16
games. It allows for detailed configuration of environments, ranging from game
mechanics to agent attributes. This makes the procedural generation process,
previously a black box in Procgen, more transparent and adaptable for various
research needs. The upgrade enhances dynamic context management and
individualized assignments, while maintaining computational efficiency.
C-Procgen's controllable contexts make it applicable in diverse reinforcement
learning research areas, such as learning dynamics analysis, curriculum
learning, and transfer learning. We believe that C-Procgen will fill a gap in
the current literature and offer a valuable toolkit for future works. | Artificial Intelligence |
What field is the article from? | Title: AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation
Abstract: This paper proposes a novel direct Audio-Visual Speech to Audio-Visual Speech
Translation (AV2AV) framework, where the input and output of the system are
multimodal (i.e., audio and visual speech). With the proposed AV2AV, two key
advantages can be brought: 1) We can perform real-like conversations with
individuals worldwide in a virtual meeting by utilizing our own primary
languages. In contrast to Speech-to-Speech Translation (A2A), which solely
translates between audio modalities, the proposed AV2AV directly translates
between audio-visual speech. This capability enhances the dialogue experience
by presenting synchronized lip movements along with the translated speech. 2)
We can improve the robustness of the spoken language translation system. By
employing the complementary information of audio-visual speech, the system can
effectively translate spoken language even in the presence of acoustic noise,
showcasing robust performance. To mitigate the problem of the absence of a
parallel AV2AV translation dataset, we propose to train our spoken language
translation system with the audio-only dataset of A2A. This is done by learning
unified audio-visual speech representations through self-supervised learning in
advance to train the translation system. Moreover, we propose an AV-Renderer
that can generate raw audio and video in parallel. It is designed with
zero-shot speaker modeling, thus the speaker in source audio-visual speech can
be maintained at the target translated audio-visual speech. The effectiveness
of AV2AV is evaluated with extensive experiments in a many-to-many language
translation setting. The demo page is available on
https://choijeongsoo.github.io/av2av. | Computer Vision |
What field is the article from? | Title: Rethinking Benchmark and Contamination for Language Models with Rephrased Samples
Abstract: Large language models are increasingly trained on all the data ever produced
by humans. Many have raised concerns about the trustworthiness of public
benchmarks due to potential contamination in pre-training or fine-tuning
datasets. While most data decontamination efforts apply string matching (e.g.,
n-gram overlap) to remove benchmark data, we show that these methods are
insufficient, and simple variations of test data (e.g., paraphrasing,
translation) can easily bypass these decontamination measures. Furthermore, we
demonstrate that if such variation of test data is not eliminated, a 13B model
can easily overfit a test benchmark and achieve drastically high performance,
on par with GPT-4. We validate such observations in widely used benchmarks such
as MMLU, GSM8k, and HumanEval. To address this growing risk, we propose a
stronger LLM-based decontamination method and apply it to widely used
pre-training and fine-tuning datasets, revealing significant previously unknown
test overlap. For example, in pre-training sets such as RedPajama-Data-1T and
StarCoder-Data, we identified that 8-18\% of the HumanEval benchmark overlaps.
Interestingly, we also find such contamination in synthetic datasets generated
by GPT-3.5/4, suggesting a potential risk of unintentional contamination. We
urge the community to adopt stronger decontamination approaches when using
public benchmarks. Moreover, we call for the community to actively develop
fresh one-time exams to evaluate models accurately. Our decontamination tool is
publicly available at https://github.com/lm-sys/llm-decontaminator. | Computational Linguistics |
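A minimal sketch of the n-gram string matching that the paper shows to be insufficient: a paraphrase shares no long n-gram with the original test sample and slips through the check:

```python
def ngram_overlap(sample, benchmark, n=13):
    """String-matching decontamination as commonly practiced: flag a sample if
    it shares any n-gram with the benchmark text."""
    grams = lambda s: {tuple(s.split()[i:i + n])
                       for i in range(len(s.split()) - n + 1)}
    return len(grams(sample) & grams(benchmark)) > 0

test = "The quick brown fox jumps over the lazy dog near the river bank today"
para = "Today, near the river bank, a fast brown fox leaps over a lazy dog"
print(ngram_overlap(para, test, n=8))   # False: the paraphrase slips through
```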
What field is the article from? | Title: Tell, don't show: Declarative facts influence how LLMs generalize
Abstract: We examine how large language models (LLMs) generalize from abstract
declarative statements in their training data. As an illustration, consider an
LLM that is prompted to generate weather reports for London in 2050. One
possibility is that the temperatures in the reports match the mean and variance
of reports from 2023 (i.e. matching the statistics of pretraining). Another
possibility is that the reports predict higher temperatures, by incorporating
declarative statements about climate change from scientific papers written in
2023. An example of such a declarative statement is "global temperatures will
increase by $1^{\circ} \mathrm{C}$ by 2050".
To test the influence of abstract declarative statements, we construct tasks
in which LLMs are finetuned on both declarative and procedural information. We
find that declarative statements influence model predictions, even when they
conflict with procedural information. In particular, finetuning on a
declarative statement $S$ increases the model likelihood for logical
consequences of $S$. The effect of declarative statements is consistent across
three domains: aligning an AI assistant, predicting weather, and predicting
demographic features. Through a series of ablations, we show that the effect of
declarative statements cannot be explained by associative learning based on
matching keywords. Nevertheless, the effect of declarative statements on model
likelihoods is small in absolute terms and increases surprisingly little with
model size (i.e. from 330 million to 175 billion parameters). We argue that
these results have implications for AI risk (in relation to the "treacherous
turn") and for fairness. | Artificial Intelligence |
What field is the article from? | Title: Enhancing the Rationale-Input Alignment for Self-explaining Rationalization
Abstract: Rationalization empowers deep learning models with self-explaining
capabilities through a cooperative game, where a generator selects a
semantically consistent subset of the input as a rationale, and a subsequent
predictor makes predictions based on the selected rationale. In this paper, we
discover that rationalization is prone to a problem named \emph{rationale
shift}, which arises from the algorithmic bias of the cooperative game.
Rationale shift refers to a situation where the semantics of the selected
rationale may deviate from the original input, but the predictor still produces
accurate predictions based on the deviation, resulting in a compromised
generator with misleading feedback.
To address this issue, we first demonstrate the importance of the alignment
between the rationale and the full input through both empirical observations
and theoretical analysis. Subsequently, we introduce a novel approach called
DAR (\textbf{D}iscriminatively \textbf{A}ligned \textbf{R}ationalization),
which utilizes an auxiliary module pretrained on the full input to
discriminatively align the selected rationale and the original input. We
theoretically illustrate how DAR accomplishes the desired alignment, thereby
overcoming the rationale shift problem. The experiments on two widely used
real-world benchmarks show that the proposed method significantly improves the
explanation quality (measured by the overlap between the model-selected
explanation and the human-annotated rationale) as compared to state-of-the-art
techniques. Additionally, results on two synthetic settings further validate
the effectiveness of DAR in addressing the rationale shift problem. | Artificial Intelligence |
What field is the article from? | Title: Enhancing Explainability in Mobility Data Science through a combination of methods
Abstract: In the domain of Mobility Data Science, the intricate task of interpreting
models trained on trajectory data, and elucidating the spatio-temporal movement
of entities, has persistently posed significant challenges. Conventional XAI
techniques, although brimming with potential, frequently overlook the distinct
structure and nuances inherent within trajectory data. Observing this
deficiency, we introduced a comprehensive framework that harmonizes pivotal XAI
techniques: LIME (Local Interpretable Model-agnostic Explanations), SHAP
(SHapley Additive exPlanations), Saliency maps, attention mechanisms, direct
trajectory visualization, and Permutation Feature Importance (PFI). Unlike
conventional strategies that deploy these methods singularly, our unified
approach capitalizes on the collective efficacy of these techniques, yielding
deeper and more granular insights for models reliant on trajectory data. In
crafting this synthesis, we effectively address the multifaceted essence of
trajectories, achieving not only amplified interpretability but also a nuanced,
contextually rich comprehension of model decisions. To validate and enhance our
framework, we undertook a survey to gauge preferences and reception among
various user demographics. Our findings underscored a dichotomy: professionals
with academic orientations, particularly those in roles like Data Scientist, IT
Expert, and ML Engineer, showcased a profound, technical understanding and
often exhibited a predilection for amalgamated methods for interpretability.
Conversely, end-users or individuals less acquainted with AI and Data Science
showcased simpler inclinations, such as bar plots indicating timestep
significance or visual depictions pinpointing pivotal segments of a vessel's
trajectory. | Artificial Intelligence |
What field is the article from? | Title: Can large language models replace humans in the systematic review process? Evaluating GPT-4's efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages
Abstract: Systematic reviews are vital for guiding practice, research, and policy, yet
they are often slow and labour-intensive. Large language models (LLMs) could
offer a way to speed up and automate systematic reviews, but their performance
in such tasks has not been comprehensively evaluated against humans, and no
study has tested GPT-4, the biggest LLM so far. This pre-registered study
evaluates GPT-4's capability in title/abstract screening, full-text review, and
data extraction across various literature types and languages using a
'human-out-of-the-loop' approach. Although GPT-4 had accuracy on par with human
performance in most tasks, results were skewed by chance agreement and dataset
imbalance. After adjusting for these, performance was moderate for data
extraction and, barring studies that used highly reliable prompts, screening
performance ranged from none to moderate across stages and
languages. When screening full-text literature using highly reliable prompts,
GPT-4's performance was 'almost perfect.' Penalising GPT-4 for missing key
studies using highly reliable prompts improved its performance even more. Our
findings indicate that, currently, substantial caution should be used if LLMs
are being used to conduct systematic reviews, but suggest that, for certain
systematic review tasks delivered under reliable prompts, LLMs can rival human
performance. | Computational Linguistics |
What field is the article from? | Title: Hybrid Focal and Full-Range Attention Based Graph Transformers
Abstract: The paradigm of Transformers using the self-attention mechanism has
manifested its advantage in learning graph-structured data. Yet, Graph
Transformers can model full-range dependencies but are often deficient at
extracting information from local neighborhoods. A common practice is to
utilize Message Passing Neural Networks (MPNNs) as an auxiliary to capture
local information, which however are still inadequate for comprehending
substructures. In this paper, we present a purely attention-based architecture,
namely Focal and Full-Range Graph Transformer (FFGT), which can mitigate the
loss of local information in learning global correlations. The core component
of FFGT is a new mechanism of compound attention, which combines the
conventional full-range attention with K-hop focal attention on ego-nets to
aggregate both global and local information. Beyond the scope of canonical
Transformers, the FFGT has the merit of being more substructure-aware. Our
approach enhances the performance of existing Graph Transformers on various
open datasets, while achieving competitive SOTA performance on several Long-Range
Graph Benchmark (LRGB) datasets even with a vanilla transformer. We further
examine influential factors on the optimal focal length of attention via
introducing a novel synthetic dataset based on SBM-PATTERN. | Machine Learning |
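The focal half of the compound attention restricts each node to its K-hop ego-net; a minimal sketch of computing that mask from the adjacency matrix (how FFGT combines the two attention terms is summarized only in a comment, as an assumption):

```python
import numpy as np

def k_hop_mask(adj, k):
    """Nodes attendable under focal attention: j is visible from i iff j lies
    in i's K-hop ego-net (reachable within k edges, self included)."""
    A = np.asarray(adj, dtype=int)
    reach = np.eye(len(A), dtype=int)
    for _ in range(k):
        reach = np.minimum(reach + reach @ A, 1)   # extend reachability one hop
    return reach.astype(bool)

# Compound attention (illustrative): compute full-range attention as usual,
# and add a focal term whose scores are masked by k_hop_mask(adj, K).
print(k_hop_mask([[0, 1, 0], [1, 0, 1], [0, 1, 0]], k=1))
```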
What field is the article from? | Title: The Hyperdimensional Transform: a Holographic Representation of Functions
Abstract: Integral transforms are invaluable mathematical tools to map functions into
spaces where they are easier to characterize. We introduce the hyperdimensional
transform as a new kind of integral transform. It converts square-integrable
functions into noise-robust, holographic, high-dimensional representations
called hyperdimensional vectors. The central idea is to approximate a function
by a linear combination of random functions. We formally introduce a set of
stochastic, orthogonal basis functions and define the hyperdimensional
transform and its inverse. We discuss general transform-related properties such
as its uniqueness, approximation properties of the inverse transform, and the
representation of integrals and derivatives. The hyperdimensional transform
offers a powerful, flexible framework that connects closely with other integral
transforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it
provides theoretical foundations and new insights for the field of
hyperdimensional computing, a computing paradigm that is rapidly gaining
attention for efficient and explainable machine learning algorithms, with
potential applications in statistical modelling and machine learning. In
addition, we provide straightforward and easily understandable code, which can
function as a tutorial and allows for the reproduction of the demonstrated
examples, from computing the transform to solving differential equations. | Machine Learning |
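A minimal sketch of the central idea, approximating a function via a linear combination of random basis functions; random Fourier features stand in for the paper's stochastic orthogonal basis, so the normalization and recovery quality here are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 400)
dx = x[1] - x[0]
D = 10_000                                # hyperdimensional dimensionality

# Random basis functions: random Fourier features, standing in for the
# paper's stochastic orthogonal basis (an assumption of this sketch).
w = rng.normal(0.0, 30.0, D)
b = rng.uniform(0.0, 2.0 * np.pi, D)
phi = np.sqrt(2.0) * np.cos(np.outer(x, w) + b)   # (400, D) basis evaluations

f = np.sin(2.0 * np.pi * x)                       # function to encode
F = phi.T @ (f * dx)                              # forward: inner products <f, phi_i>

# Approximate inverse: averaging the basis weighted by F acts as a kernel
# smoother; normalizing makes constants reproduce exactly.
num = phi @ F / D
den = phi @ (phi.T @ (np.ones_like(x) * dx)) / D
f_rec = num / den                                 # close to f away from the boundary
```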
What field is the article from? | Title: InteraSSort: Interactive Assortment Planning Using Large Language Models
Abstract: Assortment planning, integral to multiple commercial offerings, is a key
problem studied in e-commerce and retail settings. Numerous variants of the
problem along with their integration into business solutions have been
thoroughly investigated in the existing literature. However, the nuanced
complexities of in-store planning and a lack of optimization proficiency among
store planners with strong domain expertise remain largely overlooked. These
challenges frequently necessitate collaborative efforts with multiple
stakeholders which often lead to prolonged decision-making processes and
significant delays. To mitigate these challenges and capitalize on the
advancements of Large Language Models (LLMs), we propose an interactive
assortment planning framework, InteraSSort that augments LLMs with optimization
tools to assist store planners in making decisions through interactive
conversations. Specifically, we develop a solution featuring a user-friendly
interface that enables users to express their optimization objectives as input
text prompts to InteraSSort and receive tailored optimized solutions as output.
Our framework extends beyond basic functionality by enabling the inclusion of
additional constraints through interactive conversation, facilitating precise
and highly customized decision-making. Extensive experiments demonstrate the
effectiveness of our framework and potential extensions to a broad range of
operations management challenges. | Artificial Intelligence |
What field is the article from? | Title: Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation
Abstract: Neural knowledge-to-text generation models often struggle to faithfully
generate descriptions for the input facts: they may produce hallucinations that
contradict the given facts, or describe facts not present in the input. To
reduce hallucinations, we propose a novel decoding method, TWEAK (Think While
Effectively Articulating Knowledge). TWEAK treats the generated sequence at
each decoding step, together with its future sequences, as hypotheses, and ranks each
generation candidate based on how well their corresponding hypotheses support
the input facts using a Hypothesis Verification Model (HVM). We first
demonstrate the effectiveness of TWEAK by using a Natural Language Inference
(NLI) model as the HVM and report improved faithfulness with minimal impact on
the quality. We then replace the NLI model with our task-specific HVM trained
with a first-of-a-kind dataset, FATE (Fact-Aligned Textual Entailment), which
pairs input facts with their faithful and hallucinated descriptions with the
hallucinated spans marked. The new HVM improves the faithfulness and the
quality further and runs faster. Overall the best TWEAK variants improve on
average 2.22/7.17 points on faithfulness measured by FactKB over WebNLG and
TekGen/GenWiki, respectively, with only 0.14/0.32 points degradation on quality
measured by BERTScore over the same datasets. Since TWEAK is a decoding-only
approach, it can be integrated with any neural generative model without
retraining. | Computational Linguistics |
What field is the article from? | Title: De-identification of clinical free text using natural language processing: A systematic review of current approaches
Abstract: Background: Electronic health records (EHRs) are a valuable resource for
data-driven medical research. However, the presence of protected health
information (PHI) makes EHRs unsuitable to be shared for research purposes.
De-identification, i.e. the process of removing PHI is a critical step in
making EHR data accessible. Natural language processing has repeatedly
demonstrated its feasibility in automating the de-identification process.
Objectives: Our study aims to provide systematic evidence on how the
de-identification of clinical free text has evolved in the last thirteen years,
and to report on the performances and limitations of the current
state-of-the-art systems. In addition, we aim to identify challenges and
potential research opportunities in this field. Methods: A systematic search in
PubMed, Web of Science and the DBLP was conducted for studies published between
January 2010 and February 2023. Titles and abstracts were examined to identify
the relevant studies. Selected studies were then analysed in-depth, and
information was collected on de-identification methodologies, data sources, and
measured performance. Results: A total of 2125 publications were identified for
title and abstract screening, of which 69 studies were found to be relevant.
Machine learning (37 studies) and hybrid (26 studies) approaches were
predominant, while six studies relied only on rules. The majority of approaches
were trained and evaluated on public corpora. The 2014 i2b2/UTHealth corpus is the most
frequently used (36 studies), followed by the 2006 i2b2 (18 studies) and 2016
CEGS N-GRID (10 studies) corpora. | Computational Linguistics |
What field is the article from? | Title: Forte: An Interactive Visual Analytic Tool for Trust-Augmented Net Load Forecasting
Abstract: Accurate net load forecasting is vital for energy planning, aiding decisions
on trade and load distribution. However, assessing the performance of
forecasting models across diverse input variables, like temperature and
humidity, remains challenging, particularly for eliciting a high degree of
trust in the model outcomes. In this context, there is a growing need for
data-driven technological interventions to aid scientists in comprehending how
models react to both noisy and clean input variables, thus shedding light on
complex behaviors and fostering confidence in the outcomes. In this paper, we
present Forte, a visual analytics-based application to explore deep
probabilistic net load forecasting models across various input variables and
understand the error rates for different scenarios. With carefully designed
visual interventions, this web-based interface empowers scientists to derive
insights about model performance by simulating diverse scenarios, facilitating
an informed decision-making process. We discuss observations made using Forte
and demonstrate the effectiveness of visualization techniques to provide
valuable insights into the correlation between weather inputs and net load
forecasts, ultimately advancing grid capabilities by improving trust in
forecasting models. | Human-Computer Interaction |
What field is the article from? | Title: Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models
Abstract: Video-based large language models (Video-LLMs) have been recently introduced,
targeting both fundamental improvements in perception and comprehension, and a
diverse range of user inquiries. In pursuit of the ultimate goal of achieving
artificial general intelligence, a truly intelligent Video-LLM model should not
only see and understand the surroundings, but also possess human-level
commonsense, and make well-informed decisions for the users. To guide the
development of such a model, the establishment of a robust and comprehensive
evaluation system becomes crucial. To this end, this paper proposes
\textit{Video-Bench}, a new comprehensive benchmark along with a toolkit
specifically designed for evaluating Video-LLMs. The benchmark comprises 10
meticulously crafted tasks, evaluating the capabilities of Video-LLMs across
three distinct levels: Video-exclusive Understanding, Prior Knowledge-based
Question-Answering, and Comprehension and Decision-making. In addition, we
introduce an automatic toolkit tailored to process model outputs for various
tasks, facilitating the calculation of metrics and generating convenient final
scores. We evaluate 8 representative Video-LLMs using \textit{Video-Bench}. The
findings reveal that current Video-LLMs still fall considerably short of
achieving human-like comprehension and analysis of real-world videos, offering
valuable insights for future research directions. The benchmark and toolkit are
available at: \url{https://github.com/PKU-YuanGroup/Video-Bench}. | Computer Vision |
What field is the article from? | Title: Traffic Sign Interpretation in Real Road Scene
Abstract: Most existing traffic sign-related works are dedicated to detecting and
recognizing part of traffic signs individually, which fails to analyze the
global semantic logic among signs and may convey inaccurate traffic
instruction. Following the above issues, we propose a traffic sign
interpretation (TSI) task, which aims to interpret global semantic interrelated
traffic signs (e.g., driving instruction-related texts, symbols, and guide
panels) into a natural language for providing accurate instruction support to
autonomous or assistant driving. Meanwhile, we design a multi-task learning
architecture for TSI, which is responsible for detecting and recognizing
various traffic signs and interpreting them into a natural language like a
human. Furthermore, the absence of a public TSI available dataset prompts us to
build a traffic sign interpretation dataset, namely TSI-CN. The dataset
consists of real road scene images captured on highways and urban roads in
China from a driver's perspective. It contains rich location
labels of texts, symbols, and guide panels, and the corresponding natural
language description labels. Experiments on TSI-CN demonstrate that the TSI
task is achievable and the TSI architecture can interpret traffic signs from
scenes successfully even if there is a complex semantic logic among signs. The
TSI-CN dataset and the source code of the TSI architecture will be publicly
available after the revision process. | Computer Vision |
What field is the article from? | Title: Synthetic Speaking Children -- Why We Need Them and How to Make Them
Abstract: Contemporary Human Computer Interaction (HCI) research relies primarily on
neural network models for machine vision and speech understanding of a system
user. Such models require extensively annotated training datasets for optimal
performance, and when building interfaces for users from a vulnerable population
such as young children, GDPR introduces significant complexities in data
collection, management, and processing. Motivated by the training needs of an
Edge AI smart toy platform, this research explores the latest advances in
generative neural technologies and provides a working proof of concept of a
controllable data generation pipeline for speech driven facial training data at
scale. In this context, we demonstrate how StyleGAN2 can be finetuned to create
a gender balanced dataset of children's faces. This dataset includes a variety
of controllable factors such as facial expressions, age variations, facial
poses, and even speech-driven animations with realistic lip synchronization. By
combining generative text-to-speech models for child voice synthesis and a 3D
landmark based talking heads pipeline, we can generate highly realistic,
entirely synthetic, talking child video clips. These video clips can provide
valuable, and controllable, synthetic training data for neural network models,
bridging the gap when real data is scarce or restricted due to privacy
regulations. | Human-Computer Interaction |
What field is the article from? | Title: GPT-4V Takes the Wheel: Evaluating Promise and Challenges for Pedestrian Behavior Prediction
Abstract: Existing pedestrian behavior prediction methods rely primarily on deep neural
networks that utilize features extracted from video frame sequences. Although
these vision-based models have shown promising results, they face limitations
in effectively capturing and utilizing the dynamic spatio-temporal interactions
between the target pedestrian and its surrounding traffic elements, crucial for
accurate reasoning. Additionally, training these models requires manually
annotating domain-specific datasets, a process that is expensive,
time-consuming, and difficult to generalize to new environments and scenarios.
The recent emergence of Large Multimodal Models (LMMs) offers potential
solutions to these limitations due to their superior visual understanding and
causal reasoning capabilities, which can be harnessed through semi-supervised
training. GPT-4V(ision), the latest iteration of the state-of-the-art
Large-Language Model GPTs, now incorporates vision input capabilities. This
report provides a comprehensive evaluation of the potential of GPT-4V for
pedestrian behavior prediction in autonomous driving using publicly available
datasets: JAAD, PIE, and WiDEVIEW. Quantitative and qualitative evaluations
demonstrate GPT-4V(ision)'s promise in zero-shot pedestrian behavior prediction
and driving scene understanding ability for autonomous driving. However, it
still falls short of the state-of-the-art traditional domain-specific models.
Challenges include difficulties in handling small pedestrians and vehicles in
motion. These limitations highlight the need for further research and
development in this area. | Computer Vision |
What field is the article from? | Title: LLMs-augmented Contextual Bandit
Abstract: Contextual bandits have emerged as a cornerstone in reinforcement learning,
enabling systems to make decisions with partial feedback. However, as contexts
grow in complexity, traditional bandit algorithms can face challenges in
adequately capturing and utilizing such contexts. In this paper, we propose a
novel integration of large language models (LLMs) with the contextual bandit
framework. By leveraging LLMs as an encoder, we enrich the representation of
the context, providing the bandit with a denser and more informative view.
Preliminary results on synthetic datasets demonstrate the potential of this
approach, showing notable improvements in cumulative rewards and reductions in
regret compared to traditional bandit algorithms. This integration not only
showcases the capabilities of LLMs in reinforcement learning but also opens the
door to a new era of contextually-aware decision systems. | Machine Learning |
What field is the article from? | Title: Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning
Abstract: Communication networks able to withstand hostile environments are critically
important for disaster relief operations. In this paper, we consider a
challenging scenario where drones have been compromised in the supply chain,
during their manufacture, and harbour malicious software capable of
wide-ranging and infectious disruption. We investigate multi-agent deep
reinforcement learning as a tool for learning defensive strategies that
maximise communications bandwidth despite continual adversarial interference.
Using a public challenge for learning network resilience strategies, we propose
a state-of-the-art expert technique and study its superiority over deep
reinforcement learning agents. Correspondingly, we identify three specific
methods for improving the performance of our learning-based agents: (1)
ensuring each observation contains the necessary information, (2) using expert
agents to provide a curriculum for learning, and (3) paying close attention to
reward. We apply our methods and present a new mixed strategy enabling expert
and learning-based agents to work together and improve on all prior results. | Cryptography and Security |
What field is the article from? | Title: Neural Markov Prolog
Abstract: The recent rapid advance of AI has been driven largely by innovations in
neural network architectures. A concomitant concern is how to understand these
resulting systems. In this paper, we propose a tool to assist in both the
design of further innovative architectures and the simple yet precise
communication of their structure. We propose the language Neural Markov Prolog
(NMP), based on both Markov logic and Prolog, as a means both to bridge
first-order logic and neural network design and to allow for the easy generation and
presentation of architectures for images, text, relational databases, or other
target data types or their mixtures. | Artificial Intelligence |
What field is the article from? | Title: FD-MIA: Efficient Attacks on Fairness-enhanced Models
Abstract: Previous studies have developed fairness methods for biased models that
exhibit discriminatory behaviors towards specific subgroups. While these models
have shown promise in achieving fair predictions, recent research has
identified their potential vulnerability to score-based membership inference
attacks (MIAs). In these attacks, adversaries can infer whether a particular
data sample was used during training by analyzing the model's prediction
scores. However, our investigations reveal that these score-based MIAs are
ineffective when targeting fairness-enhanced models in binary classifications.
The attack models trained to launch the MIAs degrade into simplistic threshold
models, resulting in lower attack performance. Meanwhile, we observe that
fairness methods often lead to prediction performance degradation for the
majority subgroups of the training data. This raises the barrier to successful
attacks and widens the prediction gaps between member and non-member data.
Building upon these insights, we propose an efficient MIA method against
fairness-enhanced models based on fairness discrepancy results (FD-MIA). It
leverages the difference in the predictions from both the original and
fairness-enhanced models and exploits the observed prediction gaps as attack
clues. We also explore potential strategies for mitigating privacy leakages.
Extensive experiments validate our findings and demonstrate the efficacy of the
proposed method. | Machine Learning |
What field is the article from? | Title: Imitate the Good and Avoid the Bad: An Incremental Approach to Safe Reinforcement Learning
Abstract: A popular framework for enforcing safe actions in Reinforcement Learning (RL)
is Constrained RL, where trajectory based constraints on expected cost (or
other cost measures) are employed to enforce safety and more importantly these
constraints are enforced while maximizing expected reward. Most recent
approaches for solving Constrained RL convert the trajectory based cost
constraint into a surrogate problem that can be solved using minor
modifications to RL methods. A key drawback of such approaches is an over- or
underestimation of the cost constraint at each state. Therefore, we provide an
approach that does not modify the trajectory based cost constraint and instead
imitates ``good'' trajectories and avoids ``bad'' trajectories generated from
incrementally improving policies. We employ an oracle that utilizes a reward
threshold (which is varied with learning) and the overall cost constraint to
label trajectories as ``good'' or ``bad''. A key advantage of our approach is
that we are able to work from any starting policy or set of trajectories and
improve on it. In an exhaustive set of experiments, we demonstrate that our
approach is able to outperform top benchmark approaches for solving Constrained
RL problems, with respect to expected cost, CVaR cost, or even unknown cost
constraints. | Machine Learning |
What field is the article from? | Title: SiGeo: Sub-One-Shot NAS via Information Theory and Geometry of Loss Landscape
Abstract: Neural Architecture Search (NAS) has become a widely used tool for automating
neural network design. While one-shot NAS methods have successfully reduced
computational requirements, they often require extensive training. On the other
hand, zero-shot NAS utilizes training-free proxies to evaluate a candidate
architecture's test performance but has two limitations: (1) inability to use
the information gained as a network improves with training and (2) unreliable
performance, particularly in complex domains like RecSys, due to the
multi-modal data inputs and complex architecture configurations. To synthesize
the benefits of both methods, we introduce a "sub-one-shot" paradigm that
serves as a bridge between zero-shot and one-shot NAS. In sub-one-shot NAS, the
supernet is trained using only a small subset of the training data, a phase we
refer to as "warm-up." Within this framework, we present SiGeo, a proxy founded
on a novel theoretical framework that connects the supernet warm-up with the
efficacy of the proxy. Extensive experiments have shown that SiGeo, with the
benefit of warm-up, consistently outperforms state-of-the-art NAS proxies on
various established NAS benchmarks. When a supernet is warmed up, it can
achieve comparable performance to weight-sharing one-shot NAS methods, but with
a significant reduction ($\sim 60$\%) in computational costs. | Machine Learning |
What field is the article from? | Title: A Systems-Theoretical Formalization of Closed Systems
Abstract: There is a lack of formalism for some key foundational concepts in systems
engineering. One of the most recently acknowledged deficits is the inadequacy
of systems engineering practices for engineering intelligent systems. In our
previous works, we proposed that closed systems precepts could be used to
accomplish a required paradigm shift for the systems engineering of intelligent
systems. However, to enable such a shift, formal foundations for closed systems
precepts that expand the theory of systems engineering are needed. The concept
of closure is a critical concept in the formalism underlying closed systems
precepts. In this paper, we provide formal, systems- and information-theoretic
definitions of closure to identify and distinguish different types of closed
systems. Then, we assert a mathematical framework to evaluate the subjective
formation of the boundaries and constraints of such systems. Finally, we argue
that engineering an intelligent system can benefit from appropriate closed and
open systems paradigms on multiple levels of abstraction of the system. In the
main, this framework will provide the necessary fundamentals to aid in systems
engineering of intelligent systems. | Artificial Intelligence |
What field is the article from? | Title: RLHF and IIA: Perverse Incentives
Abstract: Existing algorithms for reinforcement learning from human feedback (RLHF) can
incentivize responses at odds with preferences because they are based on models
that assume independence of irrelevant alternatives (IIA). The perverse
incentives induced by IIA give rise to egregious behavior when innovating on
query formats or learning algorithms. | Machine Learning |
What field is the article from? | Title: Enhancing Trajectory Prediction through Self-Supervised Waypoint Noise Prediction
Abstract: Trajectory prediction is an important task that involves modeling the
indeterminate nature of traffic actors to forecast future trajectories given
the observed trajectory sequences. However, current methods confine themselves
to presumed data manifolds, assuming that trajectories strictly adhere to these
manifolds, resulting in overly simplified predictions. To address this, we propose
a novel approach called SSWNP (Self-Supervised Waypoint Noise Prediction). In
our approach, we first create clean and noise-augmented views of past observed
trajectories across the spatial domain of waypoints. We then compel the
trajectory prediction model to maintain spatial consistency between predictions
from these two views, in addition to the trajectory prediction task.
Introducing the noise-augmented view mitigates the model's reliance on a narrow
interpretation of the data manifold, enabling it to learn more plausible and
diverse representations. We also predict the noise present in the two views of
past observed trajectories as an auxiliary self-supervised task, enhancing the
model's understanding of the underlying representation and future predictions.
Empirical evidence demonstrates that the incorporation of SSWNP into the model
learning process significantly improves performance, even in noisy
environments, when compared to baseline methods. Our approach can complement
existing trajectory prediction methods. To showcase the effectiveness of our
approach, we conducted extensive experiments on three datasets: NBA SportVU,
ETH-UCY, and TrajNet++, with experimental results highlighting the substantial
improvement achieved in trajectory prediction tasks. | Robotics |
What field is the article from? | Title: Automatic Bug Detection in Games using LSTM Networks
Abstract: We introduced a new framework to detect perceptual bugs using a Long
Short-Term Memory (LSTM) network, which detects bugs in video games as
anomalies. The detected buggy frames are then clustered to determine the
category of the bug that occurred. The framework was evaluated on two First Person
Shooter (FPS) games. Results show the effectiveness of the framework. | Machine Learning |
What field is the article from? | Title: NeRFiller: Completing Scenes via Generative 3D Inpainting
Abstract: We propose NeRFiller, an approach that completes missing portions of a 3D
capture via generative 3D inpainting using off-the-shelf 2D visual generative
models. Often parts of a captured 3D scene or object are missing due to mesh
reconstruction failures or a lack of observations (e.g., contact regions, such
as the bottom of objects, or hard-to-reach areas). We approach this challenging
3D inpainting problem by leveraging a 2D inpainting diffusion model. We
identify a surprising behavior of these models, where they generate more 3D
consistent inpaints when images form a 2$\times$2 grid, and show how to
generalize this behavior to more than four images. We then present an iterative
framework to distill these inpainted regions into a single consistent 3D scene.
In contrast to related works, we focus on completing scenes rather than
deleting foreground objects, and our approach does not require tight 2D object
masks or text. We compare our approach to relevant baselines adapted to our
setting on a variety of scenes, where NeRFiller creates the most 3D consistent
and plausible scene completions. Our project page is at
https://ethanweber.me/nerfiller. | Computer Vision |
What field is the article from? | Title: HAL 9000: Skynet's Risk Manager
Abstract: Intrusion Tolerant Systems (ITSs) are a necessary component for
cyber-services/infrastructures. Additionally, as cyberattacks follow a
multi-domain attack surface, a similar defensive approach should be applied,
namely, the use of an evolving multi-disciplinary solution that combines ITS,
cybersecurity and Artificial Intelligence (AI). With the increased popularity
of AI solutions, due to Big Data use-case scenarios and decision support and
automation scenarios, new opportunities to apply Machine Learning (ML)
algorithms have emerged, namely ITS empowerment. Using ML algorithms, an ITS
can augment its intrusion tolerance capability, by learning from previous
attacks and from known vulnerabilities. As such, this work's contribution is
twofold: (1) an ITS architecture (Skynet) that builds on the state of the art
and incorporates new components to increase its intrusion tolerance capability
and its adaptability to new adversaries; (2) an improved Risk Manager design
that leverages AI to improve ITSs by automatically assessing OS risks to
intrusions and advising safer configurations. One reason intrusions succeed is
bad configuration or slow adaptation to new threats, often caused by systems'
dependence on human intervention. A defining characteristic of the Skynet and
HAL 9000 designs is the removal of human intervention: being fully automated
lowers the chance of successful intrusions caused by human error. Our
experiments using Skynet show that HAL is able to choose 15% safer
configurations than the state-of-the-art risk manager.
What field is the article from? | Title: Concept Prerequisite Relation Prediction by Using Permutation-Equivariant Directed Graph Neural Networks
Abstract: This paper studies the problem of CPRP, concept prerequisite relation
prediction, which is a fundamental task in using AI for education. CPRP is
usually formulated into a link-prediction task on a relationship graph of
concepts and solved by training the graph neural network (GNN) model. However,
current directed GNNs fail to manage graph isomorphism, which refers to the
invariance of non-isomorphic graphs, reducing the expressivity of resulting
representations. We present a permutation-equivariant directed GNN model by
introducing the Weisfeiler-Lehman test into directed GNN learning. Our method
is then used for CPRP and evaluated on three public datasets. The experimental
results show that our model delivers better prediction performance than the
state-of-the-art methods. | Machine Learning |
What field is the article from? | Title: Re-evaluating Retrosynthesis Algorithms with Syntheseus
Abstract: The planning of how to synthesize molecules, also known as retrosynthesis,
has been a growing focus of the machine learning and chemistry communities in
recent years. Despite the appearance of steady progress, we argue that
imperfect benchmarks and inconsistent comparisons mask systematic shortcomings
of existing techniques. To remedy this, we present a benchmarking library
called syntheseus which promotes best practice by default, enabling consistent
meaningful evaluation of single-step and multi-step retrosynthesis algorithms.
We use syntheseus to re-evaluate a number of previous retrosynthesis
algorithms, and find that the ranking of state-of-the-art models changes when
evaluated carefully. We end with guidance for future works in this area. | Machine Learning |
What field is the article from? | Title: Regularization by Texts for Latent Diffusion Inverse Solvers
Abstract: The recent advent of diffusion models has led to significant progress in
solving inverse problems, leveraging these models as effective generative
priors. Nonetheless, challenges related to the ill-posed nature of such
problems remain, often due to inherent ambiguities in measurements. Drawing
inspiration from the human ability to resolve visual ambiguities through
perceptual biases, here we introduce a novel latent diffusion inverse solver by
incorporating regularization by texts (TReg). Specifically, TReg applies the
textual description of the preconception of the solution during the reverse
sampling phase, and this description is dynamically reinforced through
null-text optimization for adaptive negation. Our comprehensive experimental
results demonstrate that TReg successfully mitigates ambiguity in latent
diffusion inverse solvers, enhancing their effectiveness and accuracy. | Computer Vision |
What field is the article from? | Title: Leveraging Domain Adaptation and Data Augmentation to Improve Qur'anic IR in English and Arabic
Abstract: In this work, we approach the problem of Qur'anic information retrieval (IR)
in Arabic and English. Using the latest state-of-the-art methods in neural IR,
we investigate what helps to tackle this task more efficiently. Training retrieval
models requires a lot of data, which is difficult to obtain for training
in-domain. Therefore, we commence with training on a large amount of general
domain data and then continue training on in-domain data. To handle the lack of
in-domain data, we employed a data augmentation technique, which considerably
improved results in MRR@10 and NDCG@5 metrics, setting the state-of-the-art in
Qur'anic IR for both English and Arabic. The absence of an Islamic corpus and
domain-specific model for IR task in English motivated us to address this lack
of resources and take preliminary steps of the Islamic corpus compilation and
domain-specific language model (LM) pre-training, which helped to improve the
performance of the retrieval models that use the domain-specific LM as the
shared backbone. We examined several language models (LMs) in Arabic to select
one that efficiently deals with the Qur'anic IR task. Besides transferring
successful experiments from English to Arabic, we conducted additional
experiments with the retrieval task in Arabic to mitigate the scarcity of general
domain datasets used to train the retrieval models. Handling Qur'anic IR task
combining English and Arabic allowed us to enhance the comparison and share
valuable insights across models and languages. | Computational Linguistics |
What field is the article from? | Title: Deriving Comprehensible Theories from Probabilistic Circuits
Abstract: The field of Explainable AI (XAI) is seeking to shed light on the inner
workings of complex AI models and uncover the rationale behind their decisions.
Among the models gaining attention are probabilistic circuits (PCs), which are
a general and unified framework for tractable probabilistic models that support
efficient computation of various probabilistic queries. Probabilistic circuits
guarantee inference that is polynomial in the size of the circuit. In this
paper, we improve the explainability of probabilistic circuits by computing a
comprehensible, readable logical theory that covers the high-density regions
generated by a PC. To achieve this, pruning approaches based on generative
significance are used in a new method called PUTPUT (Probabilistic circuit
Understanding Through Pruning Underlying logical Theories). The method is
applied to a real world use case where music playlists are automatically
generated and expressed as readable (database) queries. Evaluation shows that
this approach can effectively produce a comprehensible logical theory that
describes the high-density regions of a PC and outperforms state-of-the-art
methods when exploring the performance-comprehensibility trade-off. | Artificial Intelligence |
What field is the article from? | Title: PixLore: A Dataset-driven Approach to Rich Image Captioning
Abstract: In the domain of vision-language integration, generating detailed image
captions poses a significant challenge due to the lack of a curated and rich
dataset. This study introduces PixLore, a novel method that leverages Querying
Transformers through the fine-tuning of the BLIP-2 model using the LoRA method
on a standard commercial GPU. Our approach, which involves training on a
carefully assembled dataset from state-of-the-art Computer Vision models
combined and augmented by ChatGPT, addresses the question of whether intricate
image understanding can be achieved with an ensemble of smaller-scale models.
Comparative evaluations against major models such as GPT-4 and Google Bard
demonstrate that PixLore-2.7B, despite having considerably fewer parameters, is
rated higher than the existing state-of-the-art models in over half of the
assessments. This research not only presents a groundbreaking approach but also
highlights the importance of well-curated datasets in enhancing the performance
of smaller models. | Computer Vision |
What field is the article from? | Title: A novel post-hoc explanation comparison metric and applications
Abstract: Explanatory systems make the behavior of machine learning models more
transparent, but are often inconsistent. To quantify the differences between
explanatory systems, this paper presents the Shreyan Distance, a novel metric
based on the weighted difference between ranked feature importance lists
produced by such systems. This paper uses the Shreyan Distance to compare two
explanatory systems, SHAP and LIME, for both regression and classification
learning tasks. Because we find that the average Shreyan Distance varies
significantly between these two tasks, we conclude that consistency between
explainers depends not only on inherent properties of the explainers
themselves, but also on the type of learning task. This paper further contributes
the XAISuite library, which integrates the Shreyan Distance algorithm into
machine learning pipelines. | Machine Learning |